mirror of
https://github.com/Crosstalk-Solutions/project-nomad.git
synced 2026-03-28 03:29:25 +01:00
The previous lspci-based GPU detection fails inside Docker containers because lspci isn't available, causing Ollama to always run CPU-only even when a GPU and the NVIDIA Container Toolkit are present on the host.

Replace it with a Docker API runtime check (docker.info() -> Runtimes) as the primary detection method. This works from inside any container via the mounted Docker socket and confirms both GPU presence and toolkit installation. Keep lspci as a fallback for host-based installs and AMD.

Also add Docker-based GPU detection to the benchmark hardware info: exec nvidia-smi inside the Ollama container to get the actual GPU model name instead of showing "Not detected".

Tested on nomad3 (Intel Core Ultra 9 285HX + RTX 5060): AI performance went from 12.7 tok/s (CPU) to 281.4 tok/s (GPU), a 22x improvement.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
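A minimal sketch of the runtime check described above. The actual service code may differ; the `hasNvidiaRuntime` helper and `DockerInfo` shape here are illustrative. The key fact it relies on is that the NVIDIA Container Toolkit registers an OCI runtime named `nvidia`, which shows up under `Runtimes` in the Docker `/info` response (e.g. via dockerode's `docker.info()` over the mounted socket).

```typescript
// Relevant slice of the Docker `/info` response, as returned by
// dockerode's docker.info() or a raw GET /info on the Docker socket.
interface DockerInfo {
  Runtimes?: Record<string, unknown>;
}

// Primary GPU detection: the NVIDIA Container Toolkit registers an OCI
// runtime named "nvidia", so its presence confirms both that the toolkit
// is installed and that GPU-enabled containers can be started.
function hasNvidiaRuntime(info: DockerInfo): boolean {
  return Boolean(info.Runtimes && "nvidia" in info.Runtimes);
}

// Example: a host with the toolkit installed reports runtimes like this.
const withToolkit: DockerInfo = {
  Runtimes: { nvidia: { path: "nvidia-container-runtime" }, runc: {} },
};
console.log(hasNvidiaRuntime(withToolkit)); // true
console.log(hasNvidiaRuntime({ Runtimes: { runc: {} } })); // false
```

If the check throws (no socket mounted, daemon unreachable), the caller can fall back to the existing lspci path for host-based installs.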
| File |
|---|
| benchmark_service.ts |
| chat_service.ts |
| docker_service.ts |
| docs_service.ts |
| download_service.ts |
| map_service.ts |
| ollama_service.ts |
| queue_service.ts |
| rag_service.ts |
| system_service.ts |
| system_update_service.ts |
| zim_extraction_service.ts |
| zim_service.ts |