Mirror of https://github.com/Crosstalk-Solutions/project-nomad.git, synced 2026-03-28 11:39:26 +01:00
The previous lspci-based GPU detection fails inside Docker containers because lspci isn't available, causing Ollama to always run CPU-only even when a GPU and the NVIDIA Container Toolkit are present on the host.

Replace it with a Docker API runtime check (docker.info() -> Runtimes) as the primary detection method. This works from inside any container via the mounted Docker socket and confirms both GPU presence and toolkit installation. Keep lspci as a fallback for host-based installs and AMD GPUs.

Also add Docker-based GPU detection to the benchmark hardware info: exec nvidia-smi inside the Ollama container to get the actual GPU model name instead of showing "Not detected".

Tested on nomad3 (Intel Core Ultra 9 285HX + RTX 5060): AI performance went from 12.7 tok/s (CPU) to 281.4 tok/s (GPU), a 22x improvement.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
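The detection approach described above can be sketched as follows. This is a minimal illustration using the Docker SDK for Python (`docker` on PyPI), not the project's actual code: the function names and the `ollama` container name are assumptions. The daemon's `info()` dict lists installed runtimes under the `Runtimes` key, and the NVIDIA Container Toolkit registers itself there as `nvidia`.

```python
from typing import Optional


def has_nvidia_runtime(info: dict) -> bool:
    """True if the Docker daemon reports the 'nvidia' runtime, which
    confirms the NVIDIA Container Toolkit is installed on the host."""
    return "nvidia" in (info.get("Runtimes") or {})


def parse_gpu_name(exit_code: int, output: bytes) -> Optional[str]:
    """Extract the GPU model from the output of
    `nvidia-smi --query-gpu=name --format=csv,noheader`;
    returns None if the command failed or printed nothing."""
    if exit_code != 0:
        return None
    name = output.decode(errors="replace").strip()
    return name or None


# Usage with the docker SDK, assuming a mounted Docker socket and a
# container named "ollama" (both assumptions for this sketch):
#
#   import docker
#   client = docker.from_env()
#   print(has_nvidia_runtime(client.info()))
#   result = client.containers.get("ollama").exec_run(
#       ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"])
#   print(parse_gpu_name(result.exit_code, result.output))
```

Keeping the parsing separate from the Docker calls makes the logic easy to test without a daemon; the live calls against the socket are the only untestable part.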
Files:

- about.md
- faq.md
- getting-started.md
- home.md
- release-notes.md
- use-cases.md