Backend returned { error: message } on 400 but frontend expected { message }.
catchInternal swallowed Axios errors and returned undefined, causing a
generic 'An internal error occurred' message instead of the real reason
(already installed, already in progress, not found).
- Fix 400 response shape to { success: false, message } in controller
- Replace catchInternal with direct error handling in installService,
affectService, and forceReinstallService API methods
- Extract error.response.data.message from Axios errors so callers
see the actual server message
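A minimal sketch of the direct handling now used by those methods (the
endpoint path is illustrative):

    import axios from 'axios';

    async function installService(name: string): Promise<void> {
      try {
        await axios.post(`/api/services/${name}/install`);
      } catch (err) {
        // Surface the server's real reason (already installed, already in
        // progress, not found) instead of a generic fallback.
        if (axios.isAxiosError(err) && err.response?.data?.message) {
          throw new Error(err.response.data.message);
        }
        throw err;
      }
    }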
The log2 normalization formula `50 * (1 + log2(ratio))` produces negative
values (clamped to 0) whenever the measured value is less than half the
reference. For example, a CPU scoring 1993 events/sec against a 5000
reference gives ratio=0.4, log2(0.4)=-1.32, score=-16 -> 0%.
Fix by dividing the log2 term by 3 to widen the usable range. This preserves the
50% score at the reference value while allowing below-average hardware
to receive proportional non-zero scores (e.g., 28% for the CPU above).
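A sketch of the adjusted formula (normalizeScore is an illustrative name):

    function normalizeScore(measured: number, reference: number): number {
      // Old: 50 * (1 + log2(ratio)) hits 0 for anything below half the reference.
      // New: dividing the log2 term by 3 spreads 0..100 across 1/8x..8x of it
      // (score reaches 0 at ratio 1/8 and 100 at ratio 8).
      const ratio = measured / reference;
      return Math.min(100, Math.max(0, 50 * (1 + Math.log2(ratio) / 3)));
    }

    normalizeScore(5000, 5000); // 50 (unchanged at the reference)
    normalizeScore(1993, 5000); // ~28 (was clamped to 0 before)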
Also adds debug logging for CPU sysbench output parsing to aid future
diagnosis of parsing issues.
Fixes #415
Model downloads that fail (e.g., when Ollama is too old for a model)
were silently retrying 40 times with no UI feedback. Now errors are
broadcast via SSE and shown in the Active Model Downloads section.
Version mismatch errors use UnrecoverableError to fail immediately
instead of retrying. Stale failed jobs are cleared on retry so users
aren't permanently blocked.
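A sketch of the worker-side change, assuming BullMQ; pullModel,
isVersionMismatch, and broadcastSSE are illustrative stand-ins:

    import { Worker, UnrecoverableError } from 'bullmq';
    import IORedis from 'ioredis';

    const connection = new IORedis({ maxRetriesPerRequest: null });

    const worker = new Worker('model-downloads', async (job) => {
      const result = await pullModel(job.data.model);
      if (isVersionMismatch(result)) {
        // Retrying cannot fix an Ollama that is too old for the model.
        throw new UnrecoverableError(`Ollama too old for ${job.data.model}`);
      }
    }, { connection });

    worker.on('failed', (job, err) => {
      // Broadcast so the Active Model Downloads section can show the error.
      broadcastSSE('download-failed', { jobId: job?.id, message: err.message });
    });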
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
GPU detection results were only applied at container creation time and
never persisted. If live detection failed transiently (Docker daemon
hiccup, runtime temporarily unavailable), Ollama would silently fall
back to CPU-only mode with no way to recover short of force-reinstall.
Now _detectGPUType() persists successful detections to the KV store
(gpu.type = 'nvidia' | 'amd') and uses the saved value as a fallback
when live detection returns nothing. This ensures GPU config survives
across container recreations regardless of transient detection failures.
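Sketch of the fallback logic (detectLive and the kv wrapper are
illustrative names):

    async function _detectGPUType(): Promise<'nvidia' | 'amd' | null> {
      const live = await detectLive();   // existing live probe
      if (live) {
        await kv.set('gpu.type', live);  // persist the successful detection
        return live;
      }
      // Transient failure: fall back to the last known-good value.
      return (await kv.get('gpu.type')) ?? null;
    }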
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Failed download jobs persist in BullMQ forever with no way to clear
them, leaving stale error notifications in Content Explorer and Easy
Setup. Adds a dismiss button (X) on failed download cards that removes
the job from the queue via a new DELETE endpoint.
- Backend: DELETE /api/downloads/jobs/:jobId endpoint
- Frontend: X button on failed download cards with immediate refresh
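Sketch of the endpoint, assuming an Express app and the existing BullMQ
queue:

    app.delete('/api/downloads/jobs/:jobId', async (req, res) => {
      const job = await downloadQueue.getJob(req.params.jobId);
      if (!job) {
        return res.status(404).json({ success: false, message: 'Job not found' });
      }
      await job.remove(); // drops the failed job and its stale notification
      res.json({ success: true });
    });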
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three bugs caused downloads to hang, disappear, or leave stuck spinners:
1. Wikipedia downloads that failed never updated the DB status from 'downloading',
leaving the spinner stuck forever. Now the worker's failed handler marks them as failed.
2. No stall detection on streaming downloads - if data stopped flowing mid-download,
the job hung indefinitely. Added a 5-minute stall timer that triggers retry.
3. Failed jobs were invisible to users since only waiting/active/delayed states were
queried. Now failed jobs appear with error indicators in the download list.
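A sketch of the stall detection from item 2 and the listing change from
item 3 (stream and queue names are illustrative):

    const STALL_MS = 5 * 60 * 1000;
    const controller = new AbortController();
    let stallTimer: NodeJS.Timeout;

    function armStallTimer() {
      clearTimeout(stallTimer);
      stallTimer = setTimeout(
        () => controller.abort(new Error('Download stalled')), STALL_MS);
    }

    armStallTimer();
    stream.on('data', armStallTimer); // any chunk resets the clock

    // Item 3: include failed jobs when listing downloads.
    const jobs = await queue.getJobs(['waiting', 'active', 'delayed', 'failed']);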
Closes #364, closes #216
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When Ollama isn't installed, every ZIM download dispatches embedding jobs
that fail and retry 30x with 60s backoff. With many ZIM files downloading
in parallel, this exhausts Redis connections with EPIPE/ECONNRESET errors.
Two changes:
1. Don't dispatch embedding jobs when Ollama isn't installed (belt)
2. Use BullMQ UnrecoverableError for "not installed" so jobs fail
immediately without retrying (suspenders)
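Sketch of both layers (isOllamaInstalled and the queue names are
illustrative):

    import { UnrecoverableError } from 'bullmq';

    // Belt: don't enqueue at all when Ollama is absent.
    async function maybeDispatchEmbeddings(zimId: string): Promise<void> {
      if (!(await isOllamaInstalled())) return;
      await embeddingQueue.add('embed', { zimId });
    }

    // Suspenders: inside the worker processor.
    if (!(await isOllamaInstalled())) {
      throw new UnrecoverableError('Ollama is not installed'); // no retries
    }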
Closes #351
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a "Debug Info" link to the footer and settings sidebar that opens a
modal with non-sensitive system information (version, OS, hardware, GPU,
installed services, internet status, update availability). Users can copy
the formatted text and paste it into GitHub issues.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds a new settings page with a Ko-fi donation link, a Rogue Support
banner, and community contribution options (GitHub, Discord).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Rotate the HMAC secret used for signing benchmark submissions to the
community leaderboard. The previous secret was compromised (hardcoded
in open-source code and used to submit a fake leaderboard entry).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
NOMAD is a LAN appliance — blocking RFC1918 private ranges (10.x,
172.16-31.x, 192.168.x) would prevent users from downloading content
from local network mirrors. Narrowed to only block loopback (localhost,
127.x, 0.0.0.0, ::1) and link-local (169.254.x, fe80::) addresses.
Restored require_tld: false for LAN hostnames without TLDs.
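A sketch of the narrowed check; the real helper is assertNotPrivateUrl()
and may differ in shape:

    function assertNotLoopbackOrLinkLocal(rawUrl: string): void {
      const host = new URL(rawUrl).hostname.replace(/^\[|\]$/g, ''); // strip IPv6 brackets
      const blocked = [
        /^localhost$/i, /^127\./, /^0\.0\.0\.0$/, /^::1$/, // loopback
        /^169\.254\./, /^fe80:/i,                          // link-local
      ];
      if (blocked.some((re) => re.test(host))) {
        throw new Error(`Blocked loopback/link-local address: ${host}`);
      }
      // RFC1918 ranges (10.x, 172.16-31.x, 192.168.x) remain allowed so
      // LAN mirrors keep working.
    }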
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Fixes 4 high-severity findings from a comprehensive security audit:
1. Path traversal on ZIM file delete — resolve()+startsWith() containment
2. Path traversal on Map file delete — same pattern
3. Path traversal on docs read — same pattern (already used in rag_service)
4. SSRF on download endpoints — block private/internal IPs, require TLD
Also adds assertNotPrivateUrl() to content update endpoints.
Full audit report attached as admin/docs/security-audit-v1.md.
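The containment pattern behind items 1-3, roughly:

    import path from 'path';

    function resolveInside(baseDir: string, userPath: string): string {
      const resolved = path.resolve(baseDir, userPath);
      // The trailing separator stops /data/zim-evil from passing a /data/zim check.
      if (!resolved.startsWith(path.resolve(baseDir) + path.sep)) {
        throw new Error('Path escapes the allowed directory');
      }
      return resolved;
    }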
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Ollama can silently run on CPU even when the host has an NVIDIA GPU,
resulting in ~3 tok/s instead of ~167 tok/s. This happens when Ollama
was installed before the GPU toolkit, or when the container was
recreated without proper DeviceRequests. Users had zero indication.
Adds a GPU health check to the system info API response that detects
when the host has an NVIDIA runtime but nvidia-smi fails inside the
Ollama container. Shows a warning banner on the System Information
and AI Settings pages with a one-click "Reinstall AI Assistant"
button that force-reinstalls Ollama with GPU passthrough.
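Roughly how the check decides to warn, using dockerode; waiting for the
exec to complete is elided:

    import Docker from 'dockerode';

    async function gpuMisconfigured(docker: Docker, ollama: Docker.Container): Promise<boolean> {
      const info = await docker.info();
      const hostHasNvidia = 'nvidia' in (info.Runtimes ?? {});
      if (!hostHasNvidia) return false;
      try {
        const exec = await ollama.exec({ Cmd: ['nvidia-smi'], AttachStdout: true, AttachStderr: true });
        await exec.start({});
        const { ExitCode } = await exec.inspect();
        return ExitCode !== 0; // runtime on host, but GPU unusable in-container
      } catch {
        return true; // exec failure counts as misconfigured
      }
    }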
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The previous lspci-based GPU detection fails inside Docker containers
because lspci isn't available, causing Ollama to always run CPU-only
even when a GPU + NVIDIA Container Toolkit are present on the host.
Replace with Docker API runtime check (docker.info() -> Runtimes) as
primary detection method. This works from inside any container via the
mounted Docker socket and confirms both GPU presence and toolkit
installation. Keep lspci as fallback for host-based installs and AMD.
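Sketch of the primary detection, assuming dockerode and the mounted
socket; callers fall back to lspci on null:

    import Docker from 'dockerode';

    const docker = new Docker({ socketPath: '/var/run/docker.sock' });

    async function detectGPURuntime(): Promise<'nvidia' | null> {
      const info = await docker.info();
      // The NVIDIA Container Toolkit registers an 'nvidia' runtime with
      // dockerd, so its presence confirms both GPU and toolkit install,
      // and the check works from inside any container.
      return info.Runtimes && 'nvidia' in info.Runtimes ? 'nvidia' : null;
    }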
Also add Docker-based GPU detection to benchmark hardware info — exec
nvidia-smi inside the Ollama container to get the actual GPU model name
instead of showing "Not detected".
Tested on nomad3 (Intel Core Ultra 9 285HX + RTX 5060): AI performance
went from 12.7 tok/s (CPU) to 281.4 tok/s (GPU) — a 22x improvement.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Inside Docker, systeminformation reports the container's Alpine Linux
distro, container ID as hostname, and no GPU. This enriches the System
Information page with actual host details via the Docker API:
- Distribution and kernel version from docker.info()
- Real hostname from docker.info().Name
- GPU model and VRAM via nvidia-smi inside the Ollama container
- Graphics card in System Details (Model, Vendor, VRAM)
- Friendly uptime display (days/hours/minutes instead of minutes only)
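The uptime formatting is straightforward (illustrative helper):

    function friendlyUptime(totalSeconds: number): string {
      const d = Math.floor(totalSeconds / 86400);
      const h = Math.floor((totalSeconds % 86400) / 3600);
      const m = Math.floor((totalSeconds % 3600) / 60);
      const parts: string[] = [];
      if (d) parts.push(`${d}d`);
      if (h) parts.push(`${h}h`);
      parts.push(`${m}m`);
      return parts.join(' '); // e.g. 93784 -> "1d 2h 3m"
    }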
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Items actively downloading now appear at the top of the download list
instead of the bottom. Sorts by progress percentage descending so the
item furthest along is always first, and queued items (0%) are last.
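The sort itself is a one-liner (field name assumed):

    // Progress descending: furthest-along first, queued (0%) last.
    downloads.sort((a, b) => (b.progress ?? 0) - (a.progress ?? 0));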
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add custom Markdoc renderers for images, links, paragraphs, code blocks,
inline code, and horizontal rules. Restyle existing heading, table, and
list components to match the desert tactical color palette. Add 8
screenshots to docs with polished image presentation (rounded corners,
shadow, captions). Constrain content width for readability.
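Roughly how the renderers are wired up via Markdoc's node overrides;
DocImage/DocLink stand in for the styled components:

    import React from 'react';
    import Markdoc from '@markdoc/markdoc';

    const config = {
      nodes: {
        image: { render: 'DocImage', attributes: { src: { type: String }, alt: { type: String } } },
        link: { render: 'DocLink', attributes: { href: { type: String } } },
      },
    };

    const content = Markdoc.transform(Markdoc.parse(source), config);
    const tree = Markdoc.renderers.react(content, React, {
      components: { DocImage, DocLink },
    });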
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update all 6 documentation files and docs_service.ts:
- home.md: Add AI Chat, Knowledge Base, and Benchmark sections;
replace Open WebUI references with built-in AI Chat links;
expand Quick Links table with new features
- getting-started.md: Update Easy Setup steps to match current
wizard (Capabilities/Maps/Content/Review); replace Open WebUI
section with AI Assistant and Knowledge Base sections; add
Wikipedia Selector and System Benchmark docs; update GPU specs
- faq.md: Add AI, Knowledge Base, Benchmark, and curated tier
FAQ entries; add troubleshooting for AI Chat, Knowledge Base
uploads, and benchmark submission; update all references from
Open WebUI to built-in AI Chat; add Discord community link
- use-cases.md: Add Knowledge Base mentions across Emergency Prep,
Homeschooling, Remote Work, Privacy, and Academic Research use
cases; add "Upload Relevant Documents" setup step; update
privacy section to emphasize built-in AI
- about.md: Fix "ultime" typo, add project evolution paragraph,
add community links section
- release-notes.md: Add all versions from v1.11.0 through v1.23.0
with accurate dates and changes from git history; consolidate
patch versions; update Support section with Discord link
- docs_service.ts: Replace alphabetical sidebar sort with custom
ordering (Home > Getting Started > Use Cases > FAQ > About >
Release Notes)
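Sketch of the custom ordering (slugs assumed):

    const SIDEBAR_ORDER = ['home', 'getting-started', 'use-cases', 'faq', 'about', 'release-notes'];
    // indexOf puts unknown slugs first (-1); known pages follow the fixed order.
    docs.sort((a, b) => SIDEBAR_ORDER.indexOf(a.slug) - SIDEBAR_ORDER.indexOf(b.slug));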
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>