project-nomad/admin/inertia
Chris Sherwood 650ae407f3 feat(GPU): warn when GPU passthrough not working and offer one-click fix
Ollama can silently run on CPU even when the host has an NVIDIA GPU,
resulting in ~3 tok/s instead of ~167 tok/s. This happens when Ollama
was installed before the GPU toolkit, or when the container was
recreated without proper DeviceRequests. Users had no indication that
anything was wrong.
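For context, GPU passthrough on the Docker Engine API hinges on the
`HostConfig.DeviceRequests` field mentioned above. A minimal sketch of
what a correct create payload looks like (the `DeviceRequests` shape is
the real Docker Engine API; the surrounding options and the
`createOptions` name are illustrative assumptions, not this project's
actual code):

```typescript
// Illustrative container-create options with NVIDIA GPU passthrough.
// Without the DeviceRequests entry, the container falls back to CPU.
const createOptions = {
  Image: "ollama/ollama",
  HostConfig: {
    DeviceRequests: [
      {
        Driver: "nvidia",
        Count: -1, // -1 requests all available GPUs
        Capabilities: [["gpu"]],
      },
    ],
  },
};
```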

Adds a GPU health check to the system info API response that detects
when the host has an NVIDIA runtime but nvidia-smi fails inside the
Ollama container. Shows a warning banner on the System Information
and AI Settings pages with a one-click "Reinstall AI Assistant"
button that force-reinstalls Ollama with GPU passthrough.
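The detection logic described above can be sketched as a pure decision
function (a hedged TypeScript sketch; `checkGpuHealth` and the
input/output shapes are assumptions for illustration, not the actual
system info API code):

```typescript
// Inputs gathered elsewhere: whether the host's Docker daemon reports an
// NVIDIA runtime, and the exit code of `nvidia-smi` run inside the
// Ollama container (null if the exec itself failed, e.g. binary missing).
interface GpuHealthInput {
  hostHasNvidiaRuntime: boolean;
  nvidiaSmiExitCode: number | null;
}

interface GpuHealth {
  gpuPassthroughBroken: boolean;
  message?: string;
}

function checkGpuHealth(input: GpuHealthInput): GpuHealth {
  // Only warn when the host clearly has an NVIDIA runtime but the
  // container cannot see the GPU (nvidia-smi missing or failing).
  const broken =
    input.hostHasNvidiaRuntime &&
    (input.nvidiaSmiExitCode === null || input.nvidiaSmiExitCode !== 0);
  if (!broken) {
    return { gpuPassthroughBroken: false };
  }
  return {
    gpuPassthroughBroken: true,
    message:
      "NVIDIA runtime detected on host, but the Ollama container " +
      "cannot access the GPU. Reinstall the AI Assistant to restore " +
      "GPU passthrough.",
  };
}
```

The asymmetric check matters: a host with no NVIDIA runtime should
never see the warning, even though `nvidia-smi` also fails there.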

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 14:08:09 -07:00
app feat: [wip] native AI chat interface 2026-01-31 20:39:49 -08:00
components fix(AI): allow force refresh of models list 2026-03-11 14:08:09 -07:00
context feat: container controls & convenience scripts 2025-08-08 15:07:32 -07:00
css feat: alert and button styles redesign 2025-11-30 23:32:16 -08:00
hooks feat(RAG): display embedding queue and improve progress tracking 2026-03-04 20:05:14 -08:00
layouts feat(AI Assistant): custom name option for AI Assistant 2026-03-04 20:05:14 -08:00
lib fix(AI): allow force refresh of models list 2026-03-11 14:08:09 -07:00
pages feat(GPU): warn when GPU passthrough not working and offer one-click fix 2026-03-11 14:08:09 -07:00
providers feat: improve global error reporting with user notifications 2026-02-04 22:58:21 -08:00
tsconfig.json fix(Docs): documentation renderer fixes 2025-12-23 16:00:33 -08:00