Mirror of https://github.com/Crosstalk-Solutions/project-nomad.git, synced 2026-03-28 11:39:26 +01:00.
Ollama can silently run on the CPU even when the host has an NVIDIA GPU, dropping throughput from roughly 167 tok/s to about 3 tok/s. This happens when Ollama was installed before the GPU toolkit, or when the container was recreated without the proper DeviceRequests; until now, users had no indication that it was happening.

This change adds a GPU health check to the system info API response that detects when the host has an NVIDIA runtime but `nvidia-smi` fails inside the Ollama container. A warning banner on the System Information and AI Settings pages offers a one-click "Reinstall AI Assistant" button that force-reinstalls Ollama with GPU passthrough.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
| File |
|---|
| benchmark_service.ts |
| chat_service.ts |
| collection_manifest_service.ts |
| collection_update_service.ts |
| docker_service.ts |
| docs_service.ts |
| download_service.ts |
| map_service.ts |
| ollama_service.ts |
| queue_service.ts |
| rag_service.ts |
| system_service.ts |
| system_update_service.ts |
| zim_extraction_service.ts |
| zim_service.ts |
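The detection described in the commit message could be sketched as below. This is a hedged illustration, not the project's actual code: the function names (`evaluateGpuHealth`, `probeOllamaGpu`) and the `ollama` container name are assumptions.

```typescript
// Hypothetical sketch of the GPU health check: warn only when the host
// advertises an NVIDIA container runtime yet `nvidia-smi` fails (or is
// absent) inside the Ollama container -- the silent CPU-fallback case.
import { execFile } from "node:child_process";

type GpuHealth =
  | { status: "ok" }            // container sees the GPU
  | { status: "no-gpu" }        // host has no NVIDIA runtime; nothing to warn about
  | { status: "cpu-fallback" }; // host has the runtime, container cannot see the GPU

// Pure decision logic, kept separate from the Docker probe so it is
// easy to test without a running container.
export function evaluateGpuHealth(
  hostHasNvidiaRuntime: boolean,
  nvidiaSmiExitCode: number | null, // null = nvidia-smi not found in container
): GpuHealth {
  if (!hostHasNvidiaRuntime) return { status: "no-gpu" };
  if (nvidiaSmiExitCode === 0) return { status: "ok" };
  return { status: "cpu-fallback" };
}

// Probe sketch: run `nvidia-smi` inside the (assumed) "ollama" container.
export function probeOllamaGpu(hostHasNvidiaRuntime: boolean): Promise<GpuHealth> {
  return new Promise((resolve) => {
    execFile("docker", ["exec", "ollama", "nvidia-smi"], (err) => {
      const exitCode =
        err === null ? 0 : typeof err.code === "number" ? err.code : null;
      resolve(evaluateGpuHealth(hostHasNvidiaRuntime, exitCode));
    });
  });
}
```

Splitting the pure decision from the `docker exec` probe means the warning logic can be unit-tested, and the same result object can drive both the system info API field and the UI banner.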