project-nomad/admin/app
Chris Sherwood c16cfc3a93 fix(GPU): detect NVIDIA GPUs via Docker API instead of lspci
The previous lspci-based GPU detection fails inside Docker containers
because lspci isn't available, causing Ollama to always run CPU-only
even when a GPU + NVIDIA Container Toolkit are present on the host.

Replace it with a Docker API runtime check (docker.info() -> Runtimes) as
the primary detection method. This works from inside any container via the
mounted Docker socket and confirms both GPU presence and toolkit
installation. Keep lspci as a fallback for host-based installs and AMD GPUs.

Also add Docker-based GPU detection to the benchmark hardware info: exec
nvidia-smi inside the Ollama container to get the actual GPU model name
instead of showing "Not detected".
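The benchmark-side lookup could look roughly like this. It is a sketch under assumptions: the container name "ollama", the helper names, and shelling out to the docker CLI (rather than the Docker API exec endpoint) are all illustrative.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Parse `nvidia-smi --query-gpu=name --format=csv,noheader` output:
// one GPU name per line; take the first non-empty line.
function parseGpuName(stdout: string): string | null {
  const first = stdout
    .split("\n")
    .map((line) => line.trim())
    .find((line) => line.length > 0);
  return first ?? null;
}

// Exec nvidia-smi inside the running Ollama container and return the GPU
// model, or null when the exec fails (no GPU, no toolkit, or the
// container isn't running) so the caller can report "Not detected".
async function gpuModelFromContainer(container = "ollama"): Promise<string | null> {
  try {
    const { stdout } = await execFileAsync("docker", [
      "exec", container,
      "nvidia-smi", "--query-gpu=name", "--format=csv,noheader",
    ]);
    return parseGpuName(stdout);
  } catch {
    return null;
  }
}
```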

Tested on nomad3 (Intel Core Ultra 9 285HX + RTX 5060): AI performance
went from 12.7 tok/s (CPU) to 281.4 tok/s (GPU) — a 22x improvement.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 15:18:52 -08:00
controllers feat: zim content embedding 2026-02-08 13:20:10 -08:00
exceptions fix(Docs): documentation renderer fixes 2025-12-23 16:00:33 -08:00
jobs feat: zim content embedding 2026-02-08 13:20:10 -08:00
middleware feat: background job overhaul with bullmq 2025-12-06 23:59:01 -08:00
models fix: rework content tier system to dynamically determine install status 2026-02-04 22:58:21 -08:00
services fix(GPU): detect NVIDIA GPUs via Docker API instead of lspci 2026-02-08 15:18:52 -08:00
utils feat: zim content embedding 2026-02-08 13:20:10 -08:00
validators feat: cron job for system update checks 2026-02-06 15:40:30 -08:00