project-nomad/admin/app
Chris Sherwood 650ae407f3 feat(GPU): warn when GPU passthrough not working and offer one-click fix
Ollama can silently run on CPU even when the host has an NVIDIA GPU,
resulting in ~3 tok/s instead of ~167 tok/s. This happens when Ollama
was installed before the GPU toolkit, or when the container was
recreated without proper DeviceRequests. Users had zero indication.
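The "proper DeviceRequests" the container needs are the Docker Engine API equivalent of `docker run --gpus all`. A minimal sketch of that HostConfig fragment (the `gpuHostConfig` name is illustrative, not from this repo; the field shapes follow the Docker Engine API's `DeviceRequest` object):

```typescript
// Hypothetical sketch: the HostConfig fragment a GPU-enabled container
// recreation needs. Field names match the Docker Engine API's
// DeviceRequest object; equivalent to `docker run --gpus all`.
const gpuHostConfig = {
  DeviceRequests: [
    {
      Driver: "nvidia",
      Count: -1,               // -1 means "all available GPUs"
      Capabilities: [["gpu"]], // outer array = OR, inner array = AND
    },
  ],
};
```

If a container is recreated without this fragment in its HostConfig, the NVIDIA runtime is never attached and Ollama silently falls back to CPU.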

Adds a GPU health check to the system info API response that detects
when the host has an NVIDIA runtime but nvidia-smi fails inside the
Ollama container. Shows a warning banner on the System Information
and AI Settings pages with a one-click "Reinstall AI Assistant"
button that force-reinstalls Ollama with GPU passthrough.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 14:08:09 -07:00
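The health check described in the commit could be sketched roughly as below. This is a hedged illustration, not the project's actual code: the function names (`hostHasNvidiaRuntime`, `ollamaCanSeeGpu`, `shouldWarnGpuPassthrough`) and the shelling-out via `execSync` are assumptions; only the warn condition itself (host runtime present, in-container `nvidia-smi` failing) comes from the commit message.

```typescript
// Sketch of the GPU passthrough health check described in the commit.
// All identifiers are hypothetical; the real service layer may use a
// Docker client library instead of shelling out.

import { execSync } from "node:child_process";

// True when `docker info` reports an nvidia runtime on the host.
function hostHasNvidiaRuntime(): boolean {
  try {
    const out = execSync('docker info --format "{{json .Runtimes}}"').toString();
    return out.includes("nvidia");
  } catch {
    return false;
  }
}

// True when nvidia-smi succeeds inside the Ollama container,
// i.e. GPU passthrough is actually working.
function ollamaCanSeeGpu(containerName: string = "ollama"): boolean {
  try {
    execSync(`docker exec ${containerName} nvidia-smi`, { stdio: "ignore" });
    return true;
  } catch {
    return false;
  }
}

// Pure decision: warn only when the host could offer a GPU
// but the container cannot use it.
function shouldWarnGpuPassthrough(
  hostHasRuntime: boolean,
  containerSeesGpu: boolean
): boolean {
  return hostHasRuntime && !containerSeesGpu;
}
```

Keeping the decision a pure function of the two probe results makes it easy to surface in the system info API response and to drive the warning banner on the frontend.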
controllers fix(AI): allow force refresh of models list 2026-03-11 14:08:09 -07:00
exceptions fix(Docs): documentation renderer fixes 2025-12-23 16:00:33 -08:00
jobs feat(RAG): display embedding queue and improve progress tracking 2026-03-04 20:05:14 -08:00
middleware feat: background job overhaul with bullmq 2025-12-06 23:59:01 -08:00
models feat(AI Assistant): custom name option for AI Assistant 2026-03-04 20:05:14 -08:00
services feat(GPU): warn when GPU passthrough not working and offer one-click fix 2026-03-11 14:08:09 -07:00
utils feat: zim content embedding 2026-02-08 13:20:10 -08:00
validators fix(AI): allow force refresh of models list 2026-03-11 14:08:09 -07:00