Compare commits

...

48 Commits

Author SHA1 Message Date
cosmistack-bot
cb47e1e4e3 chore(release): 1.32.0-rc.1 [skip ci] 2026-05-05 10:37:12 -07:00
chriscrosstalk
1ad898bc8b
fix(UI): four fixes for the System Update page (#827)
Closes #826.

1. Heading and subtext now read from `versionInfo` state (which the
   Check Again mutation already populates) instead of the server-rendered
   `props.system`. Previously the card kept showing "System Up to Date /
   Your system is running the latest version!" alongside the new
   `Latest Version` row + Start Update button after a successful recheck.
   Status icon also switched to `versionInfo` for consistency.

2. The pulling-state heading rendered the lowercase status enum
   (`pulling`, `pulled`, ...) and relied on a Tailwind `capitalize` class
   for the visible glyph. Screen readers and other accessible-name
   consumers got the lowercase value with no transform applied. Replaced
   with a `STAGE_LABELS` map so visual + accessible names match.

3. The sidecar (install/sidecar-updater/update-watcher.sh) writes
   `complete` for ~5s, then resets the status file to `idle`. The SPA
   could miss that window across the admin container restart, leaving
   the page parked on its last observed progress percentage indefinitely
   while the upgrade was actually finished on disk. A `seenAdvancedStageRef`
   now records whether the session ever observed an advanced stage; a
   later poll seeing `idle` is treated as the missed completion, and the
   page reloads as advertised in step 3 of the on-screen process. Reset
   on each Start Update.

4. Toggling Enable Early Access now triggers a recheck on success, so
   the eligible-version list updates immediately instead of requiring a
   manual Check Again click.

Single file touched: admin/inertia/pages/settings/update.tsx.
Typecheck (tsc --noEmit) passes; static UI changes verified in source.
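
A hedged sketch of items 2 and 3 (a reconstruction for illustration, not the actual source; stage names and labels are assumptions):

  // Visible heading and accessible name both read from the same map now.
  const STAGE_LABELS: Record<string, string> = {
    pulling: 'Pulling update',
    pulled: 'Update downloaded',
    complete: 'Update complete',
  }

  // Inside the page component:
  const seenAdvancedStageRef = useRef(false)

  const handleStatusPoll = (stage: string) => {
    if (stage !== 'idle') {
      seenAdvancedStageRef.current = true // this session observed an advanced stage
      return
    }
    if (seenAdvancedStageRef.current) {
      // The sidecar's ~5s 'complete' window was missed across the admin container
      // restart; treat the later 'idle' as the completion signal and reload.
      window.location.reload()
    }
  }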

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:33:18 -07:00
Jake Turner
0fdf31c2e4
docs: update release notes 2026-05-04 19:30:06 +00:00
Jake Turner
0ddcfe9011
fix(System): self-heal stale updateAvailable flag after sidecar-driven update (#825) 2026-05-04 11:54:56 -07:00
chriscrosstalk
a7dbee55c4
feat(Content): custom ZIM library sources with pre-seeded mirrors (#593)
* feat(content): add custom ZIM library sources with pre-seeded mirrors

Users reported slow download speeds from the default Kiwix CDN. This adds
the ability to browse and download ZIM files from alternative Kiwix mirrors
or self-hosted repositories, all through the GUI.

- Add "Custom Libraries" button next to "Browse the Kiwix Library"
- Source dropdown to switch between Default (Kiwix) and custom libraries
- Browsable directory structure with breadcrumb navigation
- 5 pre-seeded official Kiwix mirrors (US, DE, DK, UK, Global CDN)
- Built-in mirrors protected from deletion
- Downloads use existing pipeline (progress, cancel, Kiwix restart)
- Source selection persists across page loads via localStorage
- Scrollable directory browser (600px max) with sticky header
- SSRF protection on all custom library URLs

Closes #576

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(content): recognize Wikipedia downloads from mirror sources

When Wikipedia is downloaded via a custom mirror instead of the default
Kiwix server, the completion callback now matches by filename instead
of exact URL. This ensures the Wikipedia selector correctly shows
"Installed" status and triggers old-version cleanup regardless of
which mirror was used.

Also handles the case where no Wikipedia selection exists yet (file
downloaded before visiting the selector), creating the record
automatically.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(ZIM): use cheerio for custom mirror directory parsing

* fix(ZIM): use URL constructor for more robust joining
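
A hedged sketch of the two parsing fixes above (the selector and return shape are assumptions, not the actual implementation):

  import * as cheerio from 'cheerio'

  function parseDirectoryListing(html: string, baseUrl: string) {
    const $ = cheerio.load(html)
    const entries: { name: string; url: string; isDirectory: boolean }[] = []
    $('a[href]').each((_i, el) => {
      const href = $(el).attr('href') ?? ''
      if (href === '../' || href.startsWith('?') || href.startsWith('#')) return // skip parent/sort links
      entries.push({
        name: $(el).text().trim(),
        // new URL() handles relative hrefs and missing/duplicate slashes robustly
        url: new URL(href, baseUrl).toString(),
        isDirectory: href.endsWith('/'),
      })
    })
    return entries
  }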

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Jake Turner <jturner@cosmistack.com>
2026-05-04 11:30:59 -07:00
dependabot[bot]
d66eaa3d42 build(deps): bump picomatch in /admin
Bumps [picomatch](https://github.com/micromatch/picomatch) and [picomatch](https://github.com/micromatch/picomatch). These dependencies needed to be updated together.

Updates `picomatch` from 4.0.3 to 4.0.4
- [Release notes](https://github.com/micromatch/picomatch/releases)
- [Changelog](https://github.com/micromatch/picomatch/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/picomatch/compare/4.0.3...4.0.4)

Updates `picomatch` from 2.3.1 to 2.3.2
- [Release notes](https://github.com/micromatch/picomatch/releases)
- [Changelog](https://github.com/micromatch/picomatch/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/picomatch/compare/2.3.1...2.3.2)

---
updated-dependencies:
- dependency-name: picomatch
  dependency-version: 4.0.4
  dependency-type: indirect
- dependency-name: picomatch
  dependency-version: 2.3.2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-05-04 10:50:13 -07:00
Jake Turner
d81b66bb14
chore(deps): pin all deps to exact versions 2026-05-04 17:45:18 +00:00
Chris Sherwood
8ef2c69f56 docs: link to new WSL2 install guide from README and FAQ
The /install/wsl2 community-supported install guide is now live on
projectnomad.us. Update the README install section and FAQ to point
Windows users at it instead of deflecting WSL2 questions to "see the
Debian-only answer."

Doesn't change the official-support stance — bare-metal Debian-based
Linux remains the supported configuration. Just removes the dead-end
deflection so Windows users have a real path forward.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 10:26:05 -07:00
Kenneth Brewer
a2f3a84446
feat(maps): show map coordinates on mouse move (#786)
* feat: Updated the map to show coordinates as the user moves the cursor over the map. Changed the cursor to a crosshair to make it easier to place map markers.

* Moved the scale unit control to its own component file for easier maintenance. Enhanced the coordinate display so it does not show when the cursor is over the on-screen controls or the navigation bar. Added a toggle to turn off the coordinate display if the user doesn't wish to see it. Intentionally kept the coordinate display when hovering over a map marker so the marker's coordinates can be estimated. In the future I intend to show a marker's coordinates when it is clicked, so that behavior may change.
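
A minimal sketch of the mousemove wiring (assuming MapLibre GL JS, which the pmtiles map stack commonly uses; hook and setter names are illustrative):

  import { useEffect } from 'react'
  import type { Map, MapMouseEvent } from 'maplibre-gl'

  function useCursorCoordinates(map: Map | null, setCoords: (c: { lng: number; lat: number }) => void) {
    useEffect(() => {
      if (!map) return
      const onMove = (e: MapMouseEvent) => setCoords({ lng: e.lngLat.lng, lat: e.lngLat.lat })
      map.on('mousemove', onMove)
      map.getCanvas().style.cursor = 'crosshair' // easier marker placement
      return () => {
        map.off('mousemove', onMove)
      }
    }, [map])
  }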
---------

Co-authored-by: Kenneth Brewer <kennethbrewer3@protonmail.com>
2026-05-03 14:11:03 -07:00
chriscrosstalk
822b94629c
fix(UI): Country Picker UX polish + auto-refresh stored files (#817)
Three UX issues from manual testing of #780 on NOMAD3.

1. Slider was unusable for multi-step zoom changes
   `setLoading(true)` fired immediately on every selection or maxzoom change,
   which disabled the slider until the request returned. Even with the 400ms
   debounce delaying the network call, the UI was locked the whole time.
   User couldn't drag through zoom levels to find the right one.

   Fix: bump debounce to 1500ms, move `setLoading(true)` inside the setTimeout
   so it only flips after the debounce expires. Slider stays interactive
   throughout the wait. Slider `disabled` now only ties to `downloading`
   (active extract dispatch), not `loading` (preflight in flight). The
   existing requestId stale-safe pattern handles concurrent changes.

2. Newly-downloaded maps didn't show in Stored Map Files until manual refresh
   `props.maps.regionFiles` is rendered server-side and passed through Inertia
   props; without a partial reload it stayed stale until the user navigated
   away and back.

   Fix: watch `useDownloads({ filetype: 'map' })` count via a ref. When the
   count drops (a download finished), trigger `router.reload({ only: ['maps'] })`
   to refresh just the maps prop. Existing pattern from elsewhere in the
   codebase.

3. Country picker didn't surface already-downloaded countries
   When a user re-opened "Choose Countries" after downloading UK, UK appeared
   unchecked with no indication it was already on disk.

   Fix: pass installed pmtiles filenames into the modal as a prop; parse with
   regex `^([a-z]{2})_[\w-]+_z\d+\.pmtiles$` to extract country codes from
   single-country extracts (matching MapService.buildRegionSlug's iso2 lowercase
   slug pattern). Render an "Installed" badge on those countries with a tooltip
   explaining they're re-selectable for redownload at a different zoom.

   Group / custom multi-country extracts don't reverse-map cleanly from
   filename and are skipped here. Could be a follow-up if useful.

Files:
  admin/inertia/components/CountryPickerModal.tsx
    - SINGLE_COUNTRY_FILENAME_RE: iso2 + flexible date + zoom
    - installedFilenames prop with default []
    - installedCountrySet derivation via useMemo
    - "Installed" badge rendering on country list rows
    - Debounce: 400ms -> 1500ms; setLoading inside setTimeout
    - Slider disabled: only on `downloading`
  admin/inertia/pages/settings/maps.tsx
    - import useEffect/useRef
    - destructure activeMapDownloads from useDownloads
    - useEffect on download count drop -> router.reload({ only: ['maps'] })
    - pass installedFilenames to CountryPickerModal

All three fixes tested end-to-end on NOMAD3.
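
A rough sketch of fixes 1 and 2, assuming the hook and helper names described above (React/Inertia imports omitted; fetchExtractPreflight is an assumed wrapper around the preflight endpoint):

  // Fix 1: keep the slider interactive while the preflight debounce is pending
  const requestIdRef = useRef(0)
  useEffect(() => {
    const requestId = ++requestIdRef.current
    const timer = setTimeout(async () => {
      setLoading(true) // only flips after the 1500ms debounce expires
      const estimate = await fetchExtractPreflight(selection, maxzoom)
      if (requestId === requestIdRef.current) {
        setEstimate(estimate) // stale responses are dropped
        setLoading(false)
      }
    }, 1500)
    return () => clearTimeout(timer)
  }, [selection, maxzoom])

  // Fix 2: partial-reload the maps prop when an active map download finishes
  const { activeMapDownloads } = useDownloads({ filetype: 'map' })
  const prevCountRef = useRef(activeMapDownloads.length)
  useEffect(() => {
    if (activeMapDownloads.length < prevCountRef.current) {
      router.reload({ only: ['maps'] })
    }
    prevCountRef.current = activeMapDownloads.length
  }, [activeMapDownloads.length])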
2026-05-03 14:06:56 -07:00
0xGlitch
27cd803090
feat(Maps): regional map downloads via go-pmtiles extract (#780)
* feat(maps): add regional map downloads via go-pmtiles extract

* address Copilot review feedback on PR #780

- auto-refresh preflight on selection/maxzoom change with 400ms debounce and
  requestId stale-safety so the confirm button no longer requires a two-step
  "Estimate Size" -> "Start Download" dance
- safeUpdateProgress helper replaces fire-and-forget updateProgress().catch()
  pattern so cancelled-job errors (code -1) can't surface as unhandled rejections
- gate world basemap source on worldBasemapReady - when ensureWorldBasemap()
  fails we already delete world.pmtiles, so emitting the source was producing
  404s on every tile request
- verify go-pmtiles binary SHA256 at image build time; upstream doesn't ship a
  checksums file so per-arch hashes are pinned as build args with a regenerate
  note when bumping PMTILES_VERSION
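
A possible shape for the safeUpdateProgress helper mentioned above (a sketch; the error-code check is inferred from the description):

  import type { Job } from 'bullmq'

  async function safeUpdateProgress(job: Job, percent: number): Promise<void> {
    try {
      await job.updateProgress(percent)
    } catch (err: any) {
      // A cancelled/removed job surfaces as error code -1; swallow it so the
      // fire-and-forget call cannot become an unhandled rejection.
      if (err?.code === -1) return
      throw err
    }
  }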
2026-05-03 13:47:53 -07:00
Chris Sherwood
360e7a0af4 feat(content-updates): show size, surface downloads in Active Downloads
Content Updates had three UX problems that compounded:

1. No size column, so users had to guess how big an update would be before
   clicking Update All. Upstream /api/v1/resources/check-updates doesn't
   return size, so CollectionUpdateService now enriches each update with
   a Content-Length HEAD request in parallel (5s timeout, non-fatal on
   failure — the row just renders an em-dash).

2. Small ZIM updates (1-8 MB) never appeared in Active Downloads. Two
   causes, both fixed: handleApply / handleApplyAll didn't invalidate the
   download-jobs query after dispatching, and useDownloads idled at 30s
   between polls — enough for a fast job to dispatch, download, and get
   cleaned up by removeOnComplete before the next refetch.

3. applyUpdate didn't forward title / totalBytes to RunDownloadJob, so
   any update that did briefly surface in Active Downloads had no label
   and no byte-count progress, just a filename and a percentage. It now
   passes both (matching zim_service's dispatch pattern).

Also parallelized applyAllUpdates so dispatching five updates doesn't
serialize five sequential BullMQ round-trips.
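
A hedged sketch of the size enrichment (axios usage and field names here are assumptions):

  import axios from 'axios'

  interface CollectionUpdate {
    download_url: string
    size_bytes?: number
  }

  async function enrichWithSizes(updates: CollectionUpdate[]): Promise<CollectionUpdate[]> {
    return Promise.all(
      updates.map(async (update) => {
        try {
          const res = await axios.head(update.download_url, { timeout: 5000 })
          const size = Number(res.headers['content-length'])
          return Number.isFinite(size) && size > 0 ? { ...update, size_bytes: size } : update
        } catch {
          return update // non-fatal: the row just renders an em-dash
        }
      })
    )
  }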

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 13:17:07 -07:00
cuyua9
bb1834a364
fix(UI): wire map file delete confirmation to API (#732)
Co-authored-by: cuyua9 <cuyua9@users.noreply.github.com>
2026-05-03 12:49:06 -07:00
Kenneth Brewer
0836d84bb2
docs: added notes field info to the map pin API reference (#803) 2026-04-28 21:55:11 -07:00
chriscrosstalk
5924056502
feat(AI): improved AMD GPU acceleration for Ollama via ROCm + HSA override (#804)
* feat(AI): re-enable AMD GPU acceleration for Ollama via ROCm + HSA override

Re-enables AMD GPU support that was disabled in 77f1868 pending validation
of the ROCm image and device discovery. Validation done 2026-04-28 on a
Minisforum UM890 Pro (Ryzen 9 PRO 8945HS + Radeon 780M iGPU) — Ollama
correctly offloaded all model layers to the iGPU when the container was
started with /dev/kfd + /dev/dri passthrough and HSA_OVERRIDE_GFX_VERSION=11.0.0.
On llama3.2:1b, GPU inference ran at 51.83 tok/s vs 33.16 tok/s on CPU
(same hardware, same prompt) — a 1.56x speedup confirmed by Ollama logs
showing "load_tensors: offloaded 17/17 layers to GPU".

Changes
-------

docker_service.ts
- Restore _discoverAMDDevices() (simplified — pass /dev/dri as a directory
  entry, mirroring `docker run --device /dev/dri` behavior, instead of the
  prior brittle hardcoded card0/renderD128 fallback that broke on systems
  where the AMD GPU enumerates as card1+).
- Restore the AMD branch in _createContainer():
  - Switches Ollama image to ollama/ollama:rocm
  - Mounts /dev/kfd + /dev/dri via Devices
  - Sets HSA_OVERRIDE_GFX_VERSION=11.0.0 (required for unsupported-but-RDNA3
    iGPUs like gfx1103; harmless on supported discrete cards)
  - KV opt-out via ai.amdGpuAcceleration (default on)
- Mirror the AMD branch in updateContainer():
  - Lifted GPU detection above docker.pull() so AMD updates pull :rocm
    rather than the standard :targetVersion tag (per-version ROCm tags
    aren't always published)
  - Replaces stale HSA_OVERRIDE in the inspect-captured env on update,
    so containers built before this PR pick up the current value

system_service.ts
- New getOllamaInferenceComputeFromLogs() — parses Ollama startup log line
  "msg=\"inference compute\" ... library=CUDA|ROCm ..." which Ollama emits
  for both NVIDIA and AMD. Catches silent CPU fallback (e.g. NVML death
  after update, or HSA_OVERRIDE failure) that the prior nvidia-smi exec
  probe couldn't detect.
- gpuHealth refactored to use log parsing as the primary probe for both
  vendors, with nvidia-smi exec retained as the NVIDIA-only secondary
  path for hardware enrichment when log parsing has no startup line yet.
- AMD path uses gpu.type KV value (persisted by DockerService._detectGPUType)
  + ai.amdGpuAcceleration opt-out to determine hasRocmRuntime.

types/system.ts
- GpuHealthStatus extended additively: hasRocmRuntime + optional gpuVendor.

types/kv_store.ts
- New ai.amdGpuAcceleration boolean (default-on).

settings/models.tsx, settings/system.tsx
- passthrough_failed banner copy now reads vendor from gpuHealth.gpuVendor
  ("an AMD GPU" vs "an NVIDIA GPU"). Same Fix button hits the same
  force-reinstall endpoint, which now configures AMD correctly.

install_nomad.sh
- AMD detection in verify_gpu_setup() upgraded from a strict-positive
  "ROCm not currently available" message to "ROCm acceleration will be
  configured automatically." Also tightens the lspci match to display
  controller classes (avoids false positives from AMD CPU host bridges,
  matching the same fix already in DockerService._detectGPUType).
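
A rough sketch of the log-line probe described under system_service.ts above (the regex and return shape are assumptions):

  type InferenceCompute = { library: 'CUDA' | 'ROCm' | 'cpu' } | null

  function parseInferenceComputeFromLogs(logs: string): InferenceCompute {
    // Ollama emits a startup line like: msg="inference compute" ... library=ROCm ...
    const match = logs.match(/msg="inference compute".*?library=(\w+)/)
    if (!match) return null // no startup line seen yet
    const library = match[1]
    if (library === 'CUDA' || library === 'ROCm') return { library }
    return { library: 'cpu' } // anything else indicates silent CPU fallback
  }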

Auto-remediation
----------------

Issue #755 proposes auto-remediation when gpuHealth.status flips to
passthrough_failed (today the user has to click "Fix: Reinstall AI
Assistant"). When that PR lands, AMD coverage falls out for free since
this PR uses the same passthrough_failed status code via the shared
gpuHealth machinery — #755's guard will need to flip from
hasNvidiaRuntime === true to (hasNvidiaRuntime || hasRocmRuntime).

Closes #124 (AMD GPU support).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(AI): detect AMD GPU presence inside admin container via marker file

The admin container doesn't have lspci installed, and AMD GPUs don't register
a Docker runtime the way NVIDIA does — so DockerService._detectGPUType() and
SystemService.gpuHealth had no way to know an AMD GPU was present.

The previous implementation fell through to lspci, which silently failed inside
the admin container, leaving gpu.type unset and gpuHealth stuck at 'no_gpu'
even on systems with an AMD GPU. (NVIDIA worked because Docker registers the
nvidia runtime, which is reachable via dockerInfo.Runtimes from any container.)

Discovered while testing the AMD acceleration patch on a Minisforum UM890 Pro:
the AMD branch in _createContainer() never fired because _detectGPUType()
returned 'none' even on a host with a working /dev/kfd.

Fix
---

install_nomad.sh writes the host-detected GPU type ('nvidia' | 'amd') to a
marker file in the storage volume the admin container already bind-mounts:

  /opt/project-nomad/storage/.nomad-gpu-type  →  /app/storage/.nomad-gpu-type

DockerService._detectGPUType() reads the marker as a secondary probe (after
the Docker runtime check) — covers AMD detection from inside the container
without requiring lspci or a /dev bind mount.

SystemService falls back to the marker file when KV gpu.type is empty so the
System page reflects AMD presence even before the user installs AI Assistant
for the first time. (Without this, the page would say 'no_gpu' until Ollama
was installed, even on hosts with an AMD GPU detected at install time.)
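
A minimal sketch of the marker-file probe (the path comes from the message above; the function name is illustrative):

  import { readFile } from 'node:fs/promises'

  const GPU_TYPE_MARKER = '/app/storage/.nomad-gpu-type'

  async function readGpuTypeMarker(): Promise<'nvidia' | 'amd' | null> {
    try {
      const value = (await readFile(GPU_TYPE_MARKER, 'utf8')).trim().toLowerCase()
      return value === 'nvidia' || value === 'amd' ? value : null
    } catch {
      return null // no marker file: fall through to the other probes
    }
  }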

Verified on NOMAD6 (UM890 Pro, Ubuntu 24.04, 780M iGPU): with the marker file
in place and admin restarted, the patch's AMD branch fires correctly on Force
Reinstall AI Assistant. Resulting nomad_ollama runs ollama/ollama:rocm with
/dev/kfd + /dev/dri passthrough and HSA_OVERRIDE_GFX_VERSION=11.0.0; Ollama
logs show 'library=ROCm compute=gfx1100 ... type=iGPU'. NOMAD's in-product
benchmark on the same hardware climbed from 33.8 tok/s (CPU) to 57.3 tok/s
(GPU) — a 1.69x speedup, with TTFT dropping from 148ms to 66ms.

Migration for existing AMD installs
-----------------------------------

Users on an existing NOMAD install with an AMD GPU have no marker file (the
install script only writes it during a fresh install). Two paths get them on the GPU:
  1. Re-run install_nomad.sh — writes the marker, no other side effects
  2. Manually: echo amd | sudo tee /opt/project-nomad/storage/.nomad-gpu-type

Either then triggers AMD detection on the next AI Assistant install/reinstall.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(AI): pull ollama/ollama:rocm separately when AMD branch overrides image

The pull-if-missing logic in _createContainer ran against service.container_image
(the DB-pinned tag, e.g. ollama/ollama:0.18.2). The AMD branch then overrode
finalImage to ollama/ollama:rocm — but if that image wasn't already local, the
container creation step failed with "no such image: ollama/ollama:rocm".

Caught while validating on NOMAD2 (Ryzen AI 9 HX 370 + Radeon 890M / RDNA 3.5):
the prior end-to-end test on NOMAD6 had silently passed because the rocm image
was already pulled there from an earlier sidecar test, masking the bug.

Fix: inside the AMD branch, after setting finalImage to ollama/ollama:rocm,
run a parallel _checkImageExists + docker.pull dance for the new tag.

Also confirmed via this validation: the same HSA_OVERRIDE_GFX_VERSION=11.0.0
override works on the 890M (gfx1150 / RDNA 3.5) — Ollama logs report
'library=ROCm compute=gfx1100 description="AMD Radeon 890M Graphics"' and
inference runs at 51.68 tok/s (matching the existing X1 Pro published tile
of 51.7 tok/s on the same hardware class). RDNA 3 (780M, gfx1103) and RDNA
3.5 (890M, gfx1150) both use the same override successfully.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* build(Dockerfile): include pciutils for lspci gpu detection fallback

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Jake Turner <jturner@cosmistack.com>
2026-04-28 21:53:56 -07:00
Ryanba
322087c1b7
fix(UI): improve global map banner display logic (#702) 2026-04-28 20:55:25 -07:00
Henry Estela
cc789c1863
fix(RAG): add start button in kb modal and ensure restart policy exists (#700)
Adds a check to RAG health to make sure nomad_qdrant is online. If it isn't,
the user is blocked from clicking any buttons in the KB modal until they
click the Start Qdrant button and let the container start.

There is a new file, qdrant_restart_policy_provider.ts, which tries to
ensure that the restart policy always exists for the nomad_qdrant
container, even though the policy should have been set when the
container was created.
2026-04-27 22:26:46 -07:00
Kenneth Brewer
fe57d59868
docs: add map markers to API reference (#783)
Co-authored-by: Kenneth Brewer <kennethbrewer3@protonmail.com>
2026-04-27 22:21:06 -07:00
John Scherer
269c7ce695
fix(API): accept notes, marker_type, and position on markers endpoints (#770)
The VineJS validators in createMarker and updateMarker silently
dropped fields not in their schema. The MapMarker model and DB
include notes and marker_type, and GET responses return them, but
POST and PATCH would not persist them.

updateMarker additionally did not accept latitude/longitude, so
markers could not be repositioned via the API after creation.

- Add notes and marker_type to both validators and model assignments.
- Add latitude/longitude to the update validator.
- Add coordinate range validation on both endpoints.

Closes #768
2026-04-27 22:11:19 -07:00
chriscrosstalk
b194dfa136
fix(RAG): pass num_ctx and truncate to Ollama embed call (#763)
Some Ollama installs ship nomic-embed-text:v1.5 with the embedding
model's default num_ctx=2048, which the RAG chunker (sized for ~1500
tokens of estimated content with ratio=2 chars/token) can exceed on
dense PDFs. The result is `400 the input length exceeds the context
length` from /api/embed, which then hits the OpenAI-compatible
fallback (which also errors), and surfaces as a BadRequestError.

Pass options.num_ctx=8192 (nomic-embed-text v1.5's RoPE-extrapolated
max) and truncate=true (silent truncation safety net) on every
embed call so we don't depend on the local modelfile defaults.
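
A hedged sketch of the embed request (fields follow Ollama's /api/embed API; the surrounding client code is assumed):

  import axios from 'axios'

  async function embedChunks(ollamaBaseUrl: string, chunks: string[]): Promise<number[][]> {
    const res = await axios.post(`${ollamaBaseUrl}/api/embed`, {
      model: 'nomic-embed-text:v1.5',
      input: chunks,
      truncate: true, // safety net: silently truncate over-length inputs
      options: { num_ctx: 8192 }, // v1.5's RoPE-extrapolated max, not the local modelfile default
    })
    return res.data.embeddings
  }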

Reported on #756 by @NC4WD; same root cause as #369 and #670 which
were closed without an actual fix.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 21:43:10 -07:00
chriscrosstalk
00b4b26224
fix(API): skip compression for Server-Sent Events (#798)
* fix(stream): skip compression for Server-Sent Events

The global compression middleware (added in v1.31.0-rc.2) buffers
response writes to determine encoding, which collapses per-token
streaming into a single block delivered after generation completes.
This broke the AI chat streaming UX from v1.31.0-rc.2 onward — text
no longer appears progressively as the model generates it, only at
the end.

Adds a filter to compression() that returns false when the response
Content-Type is text/event-stream. Other responses still go through
the default compression filter (compressible types are still
compressed; e.g. text/html via Brotli).
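
A minimal sketch of the filter (the compression package's filter option; the exact wiring into the Adonis middleware is assumed):

  import compression from 'compression'

  const compress = compression({
    filter: (req: any, res: any) => {
      const contentType = String(res.getHeader('Content-Type') ?? '')
      if (contentType.includes('text/event-stream')) return false // never buffer SSE
      return compression.filter(req, res) // default behavior for everything else
    },
  })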

Reproduced on NOMAD3 v1.31.1: before fix, all SSE chunks for a 1B
model arrive within 10ms of each other after the model finishes.
After fix, tokens arrive at ~150ms intervals as they're generated
on a 12B model, with no Content-Encoding header on the SSE response.

Verified on the same host that /home still returns
Content-Encoding: br for HTML responses.

Closes #781. Reported and bisected by @toasterking
(works in v1.31.0-rc.1, broken from v1.31.0-rc.2 onward).

* fix(stream): use any for filter params to match existing as-any pattern

The compression library types its filter as (req: Request, res: Response)
expecting Express types, but AdonisJS passes raw IncomingMessage/ServerResponse
which is why the surrounding middleware uses `as any` casts at the call site.
The IncomingMessage/ServerResponse types I added are runtime-correct but
fail tsc against the library's declared types.

Drop the typed import in favor of `any` parameters, which matches how the
existing `compress(request.request as any, response.response as any, ...)`
call resolves the same mismatch.
2026-04-27 19:00:31 -07:00
chriscrosstalk
3bacd14dbd
feat(content-manager): add sortable file size column (#698)
Closes #685

Content Manager now surfaces the on-disk size of each ZIM file alongside
title/summary, and lets users sort the list by Size or Title. Defaults to
Size descending so the largest files are visible first.

- ZimService.list() now stats each file and returns size_bytes
- Content Manager table adds a formatted Size column (via formatBytes)
- Sortable headers for Title and Size with asc/desc toggle
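
Roughly, on the listing side (a sketch; the entry shape is an assumption):

  import { stat } from 'node:fs/promises'

  async function withSizes(files: { path: string; title: string }[]) {
    const entries = await Promise.all(
      files.map(async (f) => ({ ...f, size_bytes: (await stat(f.path)).size }))
    )
    return entries.sort((a, b) => b.size_bytes - a.size_bytes) // default sort: Size descending
  }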
2026-04-27 18:49:51 -07:00
chriscrosstalk
b168001450
fix(install): warn loudly on non-x86_64 architectures before pulling images (#797)
Detects the host architecture early in the preflight sequence. On any
architecture other than x86_64/amd64, prints a 5-line warning that
NOMAD officially supports x86_64 only, points at PR #419, and sleeps
10 seconds before continuing. Ctrl+C aborts cleanly before any Docker
work happens.

Preserves the community/hacker path: ARM64 users running with QEMU
binfmt_misc emulation can still let the install proceed. The change
just stops the silent 2.7GB amd64 pull on architectures where it
will not work, which leaves partial images and /opt/project-nomad/
debris that confuse first-time users.

Reported in #782.
2026-04-27 18:39:28 -07:00
chriscrosstalk
90946ecf5a
docs: require linked issue for non-trivial PRs (#799)
Tightens the existing "open an issue first" guidance: non-trivial PRs
must reference a corresponding issue and may be closed without one.
Adds an explicit carveout for trivial fixes (typos, doc clarifications,
small one-line bugs) so drive-by improvements still flow through.
2026-04-27 18:37:57 -07:00
chriscrosstalk
08d14473d2
build: write version.json from VERSION build-arg (#754)
The Dockerfile copied root package.json to /app/version.json, which
SystemService.getAppVersion() reads on every render of the app version in
the UI. semantic-release only reliably commits that bump back on the main
branch; on the rc branch it does not, so v1.31.1-rc.1 and v1.31.1-rc.2
both shipped with a version.json still reading 1.31.0. Result: a user who
upgrades to rc.2 sees "1.31.0" in the UI and a persistent "update to
v1.31.1-rc.2 available" prompt.

The build workflow already passes VERSION as a build-arg (used today only
for the OCI image label). Generating version.json from that arg at build
time makes the image tag the single source of truth and eliminates the
drift, regardless of what the committed-back package.json says.

Dev builds (no VERSION override) write "dev", which matches the existing
NODE_ENV=development short-circuit in getAppVersion().

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 20:12:36 -07:00
chriscrosstalk
9c98d8225b
fix(rag): repair ZIM embedding pipeline (sync filter, batch gate, DOM walk) (#745)
Three bugs in the RAG embedding pipeline, diagnosed and patched by @sbruschke
against v1.31.0 with working before/after chunk counts. All three are
root-cause contributors to #388.

1. scanAndSyncStorage queued every file under /storage/zim/ for embedding,
   including Kiwix's generated kiwix-library.xml. EmbedFileJob rejected it
   with "Unsupported file type" and the default 30-attempt retry policy
   kept it looping on every sync, flooding nomad_admin logs. Now gated on
   determineFileType(filePath) !== 'unknown'.

2. hasMoreBatches compared zimChunks.length (section-level chunk count
   under the 'structured' strategy) against ZIM_BATCH_SIZE (an article
   limit). Because articles emit multiple sections, the two are never
   equal for real archives and processing silently stopped after the
   first 50 articles. Now gated on articlesInBatch >= ZIM_BATCH_SIZE.

3. extractStructuredContent walked only direct children of <body>, so any
   ZIM that wraps content in a container div (Devdocs, Wikipedia,
   FreeCodeCamp, React docs, etc.) produced zero sections and silently
   embedded zero chunks while reporting success. Now walks the full DOM
   via $('body').find('h2, h3, h4, p, ul, ol, dl, table'), with a
   whole-body text fallback when the selector walk yields nothing.
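
A condensed sketch of the three gates (determineFileType, ZIM_BATCH_SIZE, and the selector are named in the descriptions above; everything else is assumed):

  import * as cheerio from 'cheerio'

  // 1. Only queue files EmbedFileJob can actually handle
  function shouldQueueForEmbedding(filePath: string): boolean {
    return determineFileType(filePath) !== 'unknown' // skips kiwix-library.xml etc.
  }

  // 2. Gate batching on articles consumed, not section-level chunk count
  function hasMoreBatches(articlesInBatch: number): boolean {
    return articlesInBatch >= ZIM_BATCH_SIZE
  }

  // 3. Walk the full DOM instead of only <body>'s direct children
  function extractSections(html: string) {
    const $ = cheerio.load(html)
    const sections = $('body').find('h2, h3, h4, p, ul, ol, dl, table').toArray()
    return sections.length > 0 ? sections : $('body').toArray() // whole-body text fallback
  }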

Before/after chunk counts confirmed by @sbruschke on v1.31.0:
  devdocs_en_git   0 -> 916
  devdocs_en_react 0 -> 481
  devdocs_en_node  0 -> 423
  libretexts_en_eng 1 -> 35 (climbing)
Wikipedia resumed progressing normally through its 6M articles.

Closes #718
Closes #719
Closes #720
Closes #388

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 16:23:25 -07:00
chriscrosstalk
d22c0b202c
fix(ZIM): accumulate across Kiwix pages to prevent empty Content Explorer (#746)
When many ZIMs are already installed locally, a single Kiwix catalog page
(12 items) could return 12 already-installed items, which zim_service
would fully filter out client-side. The endpoint returned items: [] with
has_more: true, and the frontend's infinite-scroll guard
(flatData.length > 0) blocked fetchNextPage — leaving the user with
"No records found" despite plenty of uninstalled ZIMs available.

Backend now accumulates across up to 5 Kiwix fetches (60 items each)
until it has enough post-filter results to return, dedupes by entry id,
advances currentStart by actual entries returned (not requested), and
returns a next_start cursor. The frontend consumes that cursor instead
of computing Kiwix offsets locally, and the flatData.length > 0 guard is
removed so the existing on-mount effect drives bounded auto-fetch when
a short page lands.
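
Sketched roughly (field names follow the description; the Kiwix fetch helper, entry type, and installed-filter are assumptions):

  // Inside the list endpoint, after parsing `start` and `pageSize` from the request:
  type ZimEntry = { id: string; title?: string }

  const items: ZimEntry[] = []
  const seen = new Set<string>()
  let currentStart = start
  let totalResults = 0

  for (let i = 0; i < 5 && items.length < pageSize; i++) {
    const page = await fetchKiwixCatalogPage(currentStart, 60) // assumed helper (60 items per fetch)
    totalResults = page.totalResults
    currentStart += page.entries.length // advance by entries actually returned, not requested
    for (const entry of page.entries) {
      if (seen.has(entry.id) || isInstalled(entry)) continue // dedupe + drop installed ZIMs
      seen.add(entry.id)
      items.push(entry)
    }
    if (page.entries.length === 0) break
  }

  return { items, next_start: currentStart, has_more: currentStart < totalResults }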

The pre-existing has_more off-by-one (compared totalResults against the
input start rather than the post-fetch position) is fixed implicitly.

Diagnosis credit: @johno10661.

Closes #731

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 16:05:27 -07:00
chriscrosstalk
36b7613f85
fix(AI): stop local nomad_ollama container when remote Ollama is configured (#744)
When users set a remote Ollama URL via AI Settings, the local nomad_ollama
container continued running and competed with the remote host for port 11434
and GPU access. Now configureRemote stops the local container on set and
restores it on clear (if still present). Container and its models volume are
preserved so the local install can be re-enabled later.

Closes #662

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 16:02:49 -07:00
0xGlitch
644170ed6b
fix(UI): gate NAS Storage label on network filesystem type (#749)
Closes #743
2026-04-20 15:46:46 -07:00
chriscrosstalk
776d099c4a
fix(qdrant): disable anonymous telemetry by default (#747)
Qdrant's upstream default enables anonymous telemetry to telemetry.qdrant.io,
which doesn't match NOMAD's offline-first "zero telemetry" posture. Adding
QDRANT__TELEMETRY_DISABLED=true to the container environment turns it off for
fresh installs and reinstalls.

Existing installs keep their current telemetry-enabled container until the
Qdrant service is force-reinstalled via the Knowledge Base panel or the next
container recreation — Docker bakes Env into containers at create time, so
env changes require a new container.

Closes #742

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 15:25:58 -07:00
chriscrosstalk
c4aa23a9b6
docs: add Community Add-Ons page with field manuals + W3Schools packs (#753)
Introduces a dedicated page listing third-party ZIM content packs built
by the community. Launches with the two current add-ons (jrsphoto field
manuals, kennethbrewer W3Schools) and explains how to install a ZIM pack
and where to submit a new one for inclusion.

- New doc at admin/docs/community-add-ons.md
- Wired into DocsService DOC_ORDER (slot 4) and TITLE_OVERRIDES so the
  hyphen in "Add-Ons" is preserved in the sidebar
- README gets a link under Community & Resources

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 14:57:53 -07:00
Jake Turner
6e4795f0d8
docs: update release notes 2026-04-17 22:02:45 +00:00
dependabot[bot]
dcd9f4b238
build(deps): bump lodash from 4.17.23 to 4.18.1 in /admin (#643)
Bumps [lodash](https://github.com/lodash/lodash) from 4.17.23 to 4.18.1.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.23...4.18.1)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.18.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-17 14:48:19 -07:00
dependabot[bot]
4497e36100
build(deps-dev): bump vite from 6.4.1 to 6.4.2 in /admin (#677)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 6.4.1 to 6.4.2.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v6.4.2/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v6.4.2/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 6.4.2
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-17 14:47:31 -07:00
dependabot[bot]
1aa26011b1
build(deps): bump @adonisjs/http-server from 7.8.0 to 7.8.1 in /admin (#724)
Bumps [@adonisjs/http-server](https://github.com/adonisjs/http-server) from 7.8.0 to 7.8.1.
- [Release notes](https://github.com/adonisjs/http-server/releases)
- [Commits](https://github.com/adonisjs/http-server/compare/v7.8.0...v7.8.1)

---
updated-dependencies:
- dependency-name: "@adonisjs/http-server"
  dependency-version: 7.8.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-17 14:47:06 -07:00
dependabot[bot]
38dfb19f18
build(deps): bump protobufjs from 7.5.4 to 7.5.5 in /admin (#737)
Bumps [protobufjs](https://github.com/protobufjs/protobuf.js) from 7.5.4 to 7.5.5.
- [Release notes](https://github.com/protobufjs/protobuf.js/releases)
- [Changelog](https://github.com/protobufjs/protobuf.js/blob/master/CHANGELOG.md)
- [Commits](https://github.com/protobufjs/protobuf.js/compare/protobufjs-v7.5.4...protobufjs-v7.5.5)

---
updated-dependencies:
- dependency-name: protobufjs
  dependency-version: 7.5.5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-17 14:46:24 -07:00
dependabot[bot]
5ee4e1187c
build(deps): bump protocol-buffers-schema from 3.6.0 to 3.6.1 in /admin (#736)
Bumps [protocol-buffers-schema](https://github.com/mafintosh/protocol-buffers-schema) from 3.6.0 to 3.6.1.
- [Commits](https://github.com/mafintosh/protocol-buffers-schema/compare/v3.6.0...v3.6.1)

---
updated-dependencies:
- dependency-name: protocol-buffers-schema
  dependency-version: 3.6.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-17 14:46:03 -07:00
dependabot[bot]
2075a62b60
build(deps): bump axios from 1.13.5 to 1.15.0 in /admin (#708)
Bumps [axios](https://github.com/axios/axios) from 1.13.5 to 1.15.0.
- [Release notes](https://github.com/axios/axios/releases)
- [Changelog](https://github.com/axios/axios/blob/v1.x/CHANGELOG.md)
- [Commits](https://github.com/axios/axios/compare/v1.13.5...v1.15.0)

---
updated-dependencies:
- dependency-name: axios
  dependency-version: 1.15.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-17 14:45:40 -07:00
dependabot[bot]
d8ee6f5ceb
build(deps): bump follow-redirects from 1.15.11 to 1.16.0 in /admin (#729)
Bumps [follow-redirects](https://github.com/follow-redirects/follow-redirects) from 1.15.11 to 1.16.0.
- [Release notes](https://github.com/follow-redirects/follow-redirects/releases)
- [Commits](https://github.com/follow-redirects/follow-redirects/compare/v1.15.11...v1.16.0)

---
updated-dependencies:
- dependency-name: follow-redirects
  dependency-version: 1.16.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-17 14:45:03 -07:00
chriscrosstalk
53d143bb22
fix(AI): allow cancelling in-progress model downloads and ensure consistent progress UI (#701)
Adds a cancel button to in-progress Ollama model downloads and unifies
the Active Model Downloads card layout with the Active Downloads card
used for ZIMs, maps, and pmtiles (byte counts, progress bar, live speed,
status indicator).

Closes #676.
2026-04-17 14:43:41 -07:00
Luís Miguel
0d5b6f7927
fix(security): SSRF validation for map downloads and error sanitization (CWE-918, CWE-209) (#552)
* fix(security): add SSRF validation to map download URLs from manifest
* fix(security): sanitize verbose error in rag controller scan endpoint
* fix(security): sanitize verbose errors in benchmark controller
* fix(security): sanitize verbose error in system controller version fetch
* fix(security): sanitize verbose errors in chats controller (6 instances)
* fix(security): sanitize verbose errors in docker service (6 instances)
* fix(security): sanitize verbose error in system update service
* fix(security): sanitize verbose errors in collection update service
---------
Co-authored-by: Jake Turner <52841588+jakeaturner@users.noreply.github.com>
2026-04-17 14:12:02 -07:00
Jake Turner
f1dd184f4d fix(Downloads): remove duplicate error listener and improve Range request stability 2026-04-17 14:01:27 -07:00
Aaron Bird
b5d4804d57 fix(downloads): stage downloads to .tmp to prevent Kiwix loading partial files
Downloads are now written to `filepath + '.tmp'` and atomically renamed
to the final path only on successful completion. Kiwix globs for `*.zim`
and ZimService filters `.endsWith('.zim')`, so `.tmp` files are invisible
to both during download. The same staging applies to `.pmtiles` map files.
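
A minimal sketch of the staging approach, assuming a Node stream download (helper name illustrative):

  import { createWriteStream } from 'node:fs'
  import { rename } from 'node:fs/promises'
  import { pipeline } from 'node:stream/promises'
  import type { Readable } from 'node:stream'

  async function downloadStaged(body: Readable, filepath: string): Promise<void> {
    const tmpPath = `${filepath}.tmp` // invisible to Kiwix's *.zim glob and ZimService's .endsWith('.zim')
    await pipeline(body, createWriteStream(tmpPath))
    await rename(tmpPath, filepath) // atomic on the same filesystem; only runs on success
  }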

Ref #372

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 14:01:27 -07:00
Ben Gauger
898c4441b9 fix(disk-display): show NAS Storage label in fsSize fallback path
Co-Authored-By: Ben Smith <bravosierra99@gmail.com>
2026-04-17 12:15:43 -07:00
Ben Gauger
b365130e76 fix(disk-collector): fix storage reporting for NFS mounts
Co-Authored-By: Ben Smith <bravosierra99@gmail.com>
2026-04-17 12:15:43 -07:00
Jake Turner
10e8957b78
fix: prevent ZIM corrupt file crash and deduplicate Ollama download logs (#741)
Corrupted ZIM files cause a native C++ abort (ZimFileFormatError) that
bypasses JS try/catch and kills the process. Add magic number validation
before passing files to @openzim/libzim so invalid files are skipped
gracefully. Also deduplicate Ollama download progress broadcasts — both
within a single stream (skip unchanged percentages) and across concurrent
callers (share one download promise per model).

Co-authored-by: aegisman <aegis@manicode.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 11:54:04 -07:00
Henry Estela
10ba8000cf
fix(AI): qwen2.5 loading on every chat message (#649)
Use the currently loaded model for chat title generation and query rewrite.
2026-04-17 11:37:44 -07:00
Henry Estela
462afae4ec
fix(AI): add null check to model name (#645)
When the OpenAI-compatible fallback (/v1/models) is used, models are mapped as { name: m.id, size: 0 } with no details field. Accessing model.details.parameter_size throws `TypeError: Cannot read properties of undefined`, which crashes the React render and causes the entire page to go blank.
2026-04-17 11:34:31 -07:00
74 changed files with 5021 additions and 950 deletions

View File

@ -30,7 +30,14 @@ We are committed to providing a welcoming environment for everyone. Disrespectfu
## Before You Start
**Open an issue first.** Before writing any code, please [open an issue](../../issues/new) to discuss your proposed change. This helps avoid duplicate work and ensures your contribution aligns with the project's direction.
**Open an issue first.** Before writing any code for a non-trivial change, you must [open an issue](../../issues/new) to discuss your proposed change. This helps avoid duplicate work and ensures your contribution aligns with the project's direction. **Pull requests submitted without a corresponding issue may be closed at the maintainers' discretion.**
**Trivial fixes are exempt** and may be submitted directly as a PR. Examples:
- Typo and grammar corrections
- Documentation clarifications
- Small one-line bug fixes with an obvious cause
If you're not sure whether your change qualifies as trivial, open an issue first.
When opening an issue:
- Use a clear, descriptive title
@ -149,7 +156,7 @@ This project uses [Semantic Versioning](https://semver.org/). Versions are manag
2. Open a pull request against the `dev` branch of this repository
3. In the PR description:
- Summarize what your changes do and why
- Reference the related issue (e.g., `Closes #123`)
- Reference the related issue (e.g., `Closes #123`) — required for non-trivial changes
- Note any relevant testing steps or environment details
4. Be responsive to feedback — maintainers may request changes. Pull requests with no activity for an extended period may be closed.

View File

@ -1,7 +1,14 @@
FROM node:22-slim AS base
# Install bash & curl for entrypoint script compatibility, graphicsmagick for pdf2pic, and vips-dev & build-base for sharp
RUN apt-get update && apt-get install -y bash curl graphicsmagick libvips-dev build-essential
RUN apt-get update && apt-get install -y \
bash \
curl \
graphicsmagick \
libvips-dev \
build-essential \
pciutils \
&& rm -rf /var/lib/apt/lists/*
# All deps stage
FROM base AS deps
@ -27,6 +34,31 @@ FROM base
ARG VERSION=dev
ARG BUILD_DATE
ARG VCS_REF
ARG TARGETARCH
# go-pmtiles (regional map extracts). Pinned so the CLI's stdout format stays
# in sync with parseDryRunOutput().
ARG PMTILES_VERSION=1.30.2
# Upstream releases don't ship a checksums file, so pin per-arch SHA256 here.
# When bumping PMTILES_VERSION, regenerate these with:
# curl -fsSL <release-url> | sha256sum
ARG PMTILES_SHA256_AMD64=2cd3aa18868297fc88425038f794efdc0995e0275f4ca16fa496dd79e245a40c
ARG PMTILES_SHA256_ARM64=804cdf071834e1156af554c1a26cc42b56b9cde5a2db9c6e3653d16fb846d5fa
RUN set -eux; \
case "${TARGETARCH:-amd64}" in \
amd64) PMTILES_ARCH=x86_64; PMTILES_SHA256="${PMTILES_SHA256_AMD64}" ;; \
arm64) PMTILES_ARCH=arm64; PMTILES_SHA256="${PMTILES_SHA256_ARM64}" ;; \
*) echo "Unsupported TARGETARCH: ${TARGETARCH}" >&2; exit 1 ;; \
esac; \
TARBALL="go-pmtiles_${PMTILES_VERSION}_Linux_${PMTILES_ARCH}.tar.gz"; \
cd /tmp; \
curl -fsSL -o "$TARBALL" \
"https://github.com/protomaps/go-pmtiles/releases/download/v${PMTILES_VERSION}/${TARBALL}"; \
echo "${PMTILES_SHA256} ${TARBALL}" | sha256sum -c -; \
tar -xzf "$TARBALL" -C /usr/local/bin pmtiles; \
rm -f "$TARBALL"; \
chmod +x /usr/local/bin/pmtiles; \
/usr/local/bin/pmtiles version
# Labels
LABEL org.opencontainers.image.title="Project N.O.M.A.D" \
@ -43,8 +75,10 @@ ENV NODE_ENV=production
WORKDIR /app
COPY --from=production-deps /app/node_modules /app/node_modules
COPY --from=build /app/build /app
# Copy root package.json for version info
COPY package.json /app/version.json
# Generate version.json from the VERSION build-arg so the image tag is the
# single source of truth (previously copied root package.json, which drifted
# from the tag when semantic-release did not commit the bump back).
RUN echo "{\"version\":\"${VERSION}\"}" > /app/version.json
# Copy docs and README for access within the container
COPY admin/docs /app/docs

FAQ.md
View File

@ -20,7 +20,9 @@ Long answer: Custom storage paths, mount points, and external drives (like iSCSI
## Can I run NOMAD on MAC, WSL2, or a non-Debian-based Distro?
See [Why does NOMAD require a Debian-based OS?](#why-does-nomad-require-a-debian-based-os)
**WSL2 on Windows** is community-supported via the [WSL2 install guide](https://www.projectnomad.us/install/wsl2) — covers two install paths (native Docker and Docker Desktop) with all known gotchas documented and empirical performance numbers comparing WSL2 to bare-metal.
**macOS and other non-Debian Linux distros** aren't officially supported. See [Why does NOMAD require a Debian-based OS?](#why-does-nomad-require-a-debian-based-os) for details.
## Why does NOMAD require a Debian-based OS?
@ -28,7 +30,7 @@ Project N.O.M.A.D. is currently designed to run on Debian-based Linux distributi
Support for other operating systems will come in the future, but because our development resources are limited as a free and open-source project, we needed to prioritize our efforts and focus on a narrower set of supported platforms for the initial release. We chose Debian-based Linux as our starting point because it's widely used, easy to spin up, and provides a stable environment for running Docker containers.
Community members have provided guides for running N.O.M.A.D. on other platforms (e.g. WSL2, Mac, etc.) in our Discord community and [Github Discussions](https://github.com/Crosstalk-Solutions/project-nomad/discussions), so if you're interested in running N.O.M.A.D. on a non-Debian-based system, we recommend checking there for any available resources or guides. However, keep in mind that if you choose to run N.O.M.A.D. on a non-Debian-based system, you may encounter issues that we won't be able to provide support for, and you may need to have a higher level of technical expertise to troubleshoot and resolve any problems that arise.
For Windows users, the [WSL2 install guide](https://www.projectnomad.us/install/wsl2) provides a community-supported path. Community members have also published guides for other platforms (e.g. macOS) in our Discord community and [Github Discussions](https://github.com/Crosstalk-Solutions/project-nomad/discussions), so if you're interested in running N.O.M.A.D. on a non-Debian-based system, we recommend checking there for any available resources or guides. However, keep in mind that if you choose to run N.O.M.A.D. on a non-Debian-based system, you may encounter issues that we won't be able to provide support for, and you may need to have a higher level of technical expertise to troubleshoot and resolve any problems that arise.
## Can I run NOMAD on a Raspberry Pi or other ARM-based device?
Project N.O.M.A.D. is currently designed to run on x86-64 architecture, and we have not yet tested or optimized it for ARM-based devices like the Raspberry Pi (and have not published any official images for ARM architecture).

View File

@ -32,7 +32,7 @@ sudo bash install_nomad.sh
Project N.O.M.A.D. is now installed on your device! Open a browser and navigate to `http://localhost:8080` (or `http://DEVICE_IP:8080`) to start exploring!
For a complete step-by-step walkthrough (including Ubuntu installation), see the [Installation Guide](https://www.projectnomad.us/install).
For a complete step-by-step walkthrough (including Ubuntu installation), see the [Installation Guide](https://www.projectnomad.us/install). For Windows users, see the [WSL2 install guide](https://www.projectnomad.us/install/wsl2) — community-supported path covering native Docker and Docker Desktop install routes.
### Advanced Installation
For more control over the installation process, copy and paste the [Docker Compose template](https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/install/management_compose.yaml) into a `docker-compose.yml` file and customize it to your liking (be sure to replace any placeholders with your actual values). Then, run `docker compose up -d` to start the Command Center and its dependencies. Note: this method is recommended for advanced users only, as it requires familiarity with Docker and manual configuration before starting.
@ -124,6 +124,7 @@ Contributions are welcome and appreciated! Please see [CONTRIBUTING.md](CONTRIBU
- **Benchmark Leaderboard:** [benchmark.projectnomad.us](https://benchmark.projectnomad.us) - See how your hardware stacks up against other NOMAD builds
- **Troubleshooting Guide:** [TROUBLESHOOTING.md](TROUBLESHOOTING.md) - Find solutions to common issues
- **FAQ:** [FAQ.md](FAQ.md) - Find answers to frequently asked questions
- **Community Add-Ons:** [admin/docs/community-add-ons.md](admin/docs/community-add-ons.md) - Third-party content packs built by the community
## License

View File

@ -55,6 +55,8 @@ export default defineConfig({
() => import('@adonisjs/transmit/transmit_provider'),
() => import('#providers/map_static_provider'),
() => import('#providers/kiwix_migration_provider'),
() => import('#providers/qdrant_restart_policy_provider'),
() => import('#providers/version_check_provider'),
],
/*
@ -106,6 +108,10 @@ export default defineConfig({
pattern: 'resources/views/**/*.edge',
reloadServer: false,
},
{
pattern: 'resources/geodata/**/*.geojson',
reloadServer: false,
},
{
pattern: 'public/**',
reloadServer: false,

View File

@ -5,6 +5,7 @@ import { runBenchmarkValidator, submitBenchmarkValidator } from '#validators/ben
import { RunBenchmarkJob } from '#jobs/run_benchmark_job'
import type { BenchmarkType } from '../../types/benchmark.js'
import { randomUUID } from 'node:crypto'
import logger from '@adonisjs/core/services/logger'
@inject()
export default class BenchmarkController {
@ -52,9 +53,10 @@ export default class BenchmarkController {
result,
})
} catch (error) {
logger.error({ err: error }, '[BenchmarkController] Benchmark run failed')
return response.status(500).send({
success: false,
error: error.message,
error: 'An internal error occurred while running the benchmark.',
})
}
}
@ -181,9 +183,10 @@ export default class BenchmarkController {
} catch (error) {
// Pass through the status code from the service if available, otherwise default to 400
const statusCode = (error as any).statusCode || 400
logger.error({ err: error }, '[BenchmarkController] Benchmark submit failed')
return response.status(statusCode).send({
success: false,
error: error.message,
error: 'Failed to submit benchmark results.',
})
}
}

View File

@ -5,6 +5,7 @@ import { createSessionSchema, updateSessionSchema, addMessageSchema } from '#val
import KVStore from '#models/kv_store'
import { SystemService } from '#services/system_service'
import { SERVICE_NAMES } from '../../constants/service_names.js'
import logger from '@adonisjs/core/services/logger'
@inject()
export default class ChatsController {
@ -45,8 +46,9 @@ export default class ChatsController {
const session = await this.chatService.createSession(data.title, data.model)
return response.status(201).json(session)
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to create session')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to create session',
error: 'Failed to create session',
})
}
}
@ -56,8 +58,9 @@ export default class ChatsController {
const suggestions = await this.chatService.getChatSuggestions()
return response.status(200).json({ suggestions })
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to get suggestions')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to get suggestions',
error: 'Failed to get suggestions',
})
}
}
@ -69,8 +72,9 @@ export default class ChatsController {
const session = await this.chatService.updateSession(sessionId, data)
return session
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to update session')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to update session',
error: 'Failed to update session',
})
}
}
@ -81,8 +85,9 @@ export default class ChatsController {
await this.chatService.deleteSession(sessionId)
return response.status(204)
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to delete session')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to delete session',
error: 'Failed to delete session',
})
}
}
@ -94,8 +99,9 @@ export default class ChatsController {
const message = await this.chatService.addMessage(sessionId, data.role, data.content)
return response.status(201).json(message)
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to add message')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to add message',
error: 'Failed to add message',
})
}
}
@ -105,8 +111,9 @@ export default class ChatsController {
const result = await this.chatService.deleteAllSessions()
return response.status(200).json(result)
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to delete all sessions')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to delete all sessions',
error: 'Failed to delete all sessions',
})
}
}

View File

@ -4,6 +4,8 @@ import {
assertNotPrivateUrl,
downloadCollectionValidator,
filenameParamValidator,
mapExtractPreflightValidator,
mapExtractValidator,
remoteDownloadValidator,
remoteDownloadValidatorOptional,
} from '#validators/common'
@ -87,6 +89,28 @@ export default class MapsController {
}
}
async listCountries({}: HttpContext) {
return { countries: await this.mapService.listCountries() }
}
async listCountryGroups({}: HttpContext) {
return { groups: await this.mapService.listCountryGroups() }
}
async extractPreflight({ request }: HttpContext) {
const payload = await request.validateUsing(mapExtractPreflightValidator)
return await this.mapService.extractPreflight(payload)
}
async extractRegion({ request }: HttpContext) {
const payload = await request.validateUsing(mapExtractValidator)
const result = await this.mapService.extractRegion(payload)
return {
message: 'Extract started successfully',
...result,
}
}
async styles({ request, response }: HttpContext) {
// Automatically ensure base assets are present before generating styles
const baseAssetsExist = await this.mapService.ensureBaseAssets()
@ -137,9 +161,11 @@ export default class MapsController {
vine.compile(
vine.object({
name: vine.string().trim().minLength(1).maxLength(255),
longitude: vine.number(),
latitude: vine.number(),
longitude: vine.number().min(-180).max(180),
latitude: vine.number().min(-90).max(90),
color: vine.string().trim().maxLength(20).optional(),
notes: vine.string().trim().nullable().optional(),
marker_type: vine.string().trim().maxLength(20).optional(),
})
)
)
@ -148,6 +174,8 @@ export default class MapsController {
longitude: payload.longitude,
latitude: payload.latitude,
color: payload.color ?? 'orange',
notes: payload.notes ?? null,
marker_type: payload.marker_type ?? 'pin',
})
return marker
}
@ -163,11 +191,19 @@ export default class MapsController {
vine.object({
name: vine.string().trim().minLength(1).maxLength(255).optional(),
color: vine.string().trim().maxLength(20).optional(),
longitude: vine.number().min(-180).max(180).optional(),
latitude: vine.number().min(-90).max(90).optional(),
notes: vine.string().trim().nullable().optional(),
marker_type: vine.string().trim().maxLength(20).optional(),
})
)
)
if (payload.name !== undefined) marker.name = payload.name
if (payload.color !== undefined) marker.color = payload.color
if (payload.longitude !== undefined) marker.longitude = payload.longitude
if (payload.latitude !== undefined) marker.latitude = payload.latitude
if (payload.notes !== undefined) marker.notes = payload.notes
if (payload.marker_type !== undefined) marker.marker_type = payload.marker_type
await marker.save()
return marker
}

View File

@ -8,7 +8,7 @@ import { modelNameSchema } from '#validators/download'
import { chatSchema, getAvailableModelsSchema } from '#validators/ollama'
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
import { DEFAULT_QUERY_REWRITE_MODEL, RAG_CONTEXT_LIMITS, SYSTEM_PROMPTS } from '../../constants/ollama.js'
import { RAG_CONTEXT_LIMITS, SYSTEM_PROMPTS } from '../../constants/ollama.js'
import { SERVICE_NAMES } from '../../constants/service_names.js'
import logger from '@adonisjs/core/services/logger'
type Message = { role: 'system' | 'user' | 'assistant'; content: string }
@ -59,7 +59,7 @@ export default class OllamaController {
// Query rewriting for better RAG retrieval with manageable context
// Will return user's latest message if no rewriting is needed
const rewrittenQuery = await this.rewriteQueryWithContext(reqData.messages)
const rewrittenQuery = await this.rewriteQueryWithContext(reqData.messages, reqData.model)
logger.debug(`[OllamaController] Rewritten query for RAG: "${rewrittenQuery}"`)
if (rewrittenQuery) {
@ -157,7 +157,7 @@ export default class OllamaController {
await this.chatService.addMessage(sessionId, 'assistant', fullContent)
const messageCount = await this.chatService.getMessageCount(sessionId)
if (messageCount <= 2 && userContent) {
this.chatService.generateTitle(sessionId, userContent, fullContent).catch((err) => {
this.chatService.generateTitle(sessionId, userContent, fullContent, reqData.model).catch((err) => {
logger.error(`[OllamaController] Title generation failed: ${err instanceof Error ? err.message : err}`)
})
}
@ -172,7 +172,7 @@ export default class OllamaController {
await this.chatService.addMessage(sessionId, 'assistant', result.message.content)
const messageCount = await this.chatService.getMessageCount(sessionId)
if (messageCount <= 2 && userContent) {
this.chatService.generateTitle(sessionId, userContent, result.message.content).catch((err) => {
this.chatService.generateTitle(sessionId, userContent, result.message.content, reqData.model).catch((err) => {
logger.error(`[OllamaController] Title generation failed: ${err instanceof Error ? err.message : err}`)
})
}
@ -212,13 +212,21 @@ export default class OllamaController {
return response.status(404).send({ success: false, message: 'Ollama service record not found.' })
}
// Clear path: null or empty URL removes remote config and marks service as not installed
// Clear path: null or empty URL removes remote config. If a local nomad_ollama container
// still exists (user had previously installed AI Assistant locally), restart it and keep
// the service marked installed. Otherwise fall back to uninstalled.
if (!remoteUrl || remoteUrl.trim() === '') {
await KVStore.clearValue('ai.remoteOllamaUrl')
ollamaService.installed = false
const hasLocalContainer = await this._startLocalOllamaContainerIfExists()
ollamaService.installed = hasLocalContainer
ollamaService.installation_status = 'idle'
await ollamaService.save()
return { success: true, message: 'Remote Ollama configuration cleared.' }
return {
success: true,
message: hasLocalContainer
? 'Remote Ollama cleared. Local Ollama container restored.'
: 'Remote Ollama configuration cleared.',
}
}
// Validate URL format
@ -253,6 +261,10 @@ export default class OllamaController {
ollamaService.installation_status = 'idle'
await ollamaService.save()
// Stop the local nomad_ollama container (if running) so it doesn't compete with the
// remote host for GPU / port 11434. Preserves the container and its models volume.
await this._stopLocalOllamaContainer()
// Install Qdrant if not already installed (fire-and-forget)
const qdrantService = await Service.query().where('service_name', SERVICE_NAMES.QDRANT).first()
if (qdrantService && !qdrantService.installed) {
@ -270,6 +282,50 @@ export default class OllamaController {
return { success: true, message: 'Remote Ollama configured.' }
}
private async _stopLocalOllamaContainer(): Promise<void> {
try {
const containers = await this.dockerService.docker.listContainers({ all: true })
const ollamaContainer = containers.find((c) =>
c.Names.includes(`/${SERVICE_NAMES.OLLAMA}`)
)
if (!ollamaContainer || ollamaContainer.State !== 'running') {
return
}
await this.dockerService.docker.getContainer(ollamaContainer.Id).stop()
this.dockerService.invalidateServicesStatusCache()
logger.info('[OllamaController] Stopped local nomad_ollama (remote Ollama configured)')
} catch (error: any) {
logger.error(
{ err: error },
'[OllamaController] Failed to stop local nomad_ollama; remote Ollama is still active'
)
}
}
private async _startLocalOllamaContainerIfExists(): Promise<boolean> {
try {
const containers = await this.dockerService.docker.listContainers({ all: true })
const ollamaContainer = containers.find((c) =>
c.Names.includes(`/${SERVICE_NAMES.OLLAMA}`)
)
if (!ollamaContainer) {
return false
}
if (ollamaContainer.State !== 'running') {
await this.dockerService.docker.getContainer(ollamaContainer.Id).start()
this.dockerService.invalidateServicesStatusCache()
logger.info('[OllamaController] Started local nomad_ollama (remote Ollama cleared)')
}
return true
} catch (error: any) {
logger.error(
{ err: error },
'[OllamaController] Failed to start local nomad_ollama on remote clear'
)
return false
}
}
async deleteModel({ request }: HttpContext) {
const reqData = await request.validateUsing(modelNameSchema)
await this.ollamaService.deleteModel(reqData.model)
@ -312,9 +368,18 @@ export default class OllamaController {
}
private async rewriteQueryWithContext(
messages: Message[]
messages: Message[],
model: string
): Promise<string | null> {
const lastUserMessage = [...messages].reverse().find(msg => msg.role === 'user')
try {
// Skip the entire RAG pipeline if there are no documents to search
const hasDocuments = await this.ragService.hasDocuments()
if (!hasDocuments) {
return null
}
// Get recent conversation history (last 6 messages for 3 turns)
const recentMessages = messages.slice(-6)
@ -322,7 +387,7 @@ export default class OllamaController {
// little RAG benefit until there is enough context to matter.
const userMessages = recentMessages.filter(msg => msg.role === 'user')
if (userMessages.length <= 2) {
return userMessages[userMessages.length - 1]?.content || null
return lastUserMessage?.content || null
}
const conversationContext = recentMessages
@ -336,17 +401,8 @@ export default class OllamaController {
})
.join('\n')
const installedModels = await this.ollamaService.getModels(true)
const rewriteModelAvailable = installedModels?.some(model => model.name === DEFAULT_QUERY_REWRITE_MODEL)
if (!rewriteModelAvailable) {
logger.warn(`[RAG] Query rewrite model "${DEFAULT_QUERY_REWRITE_MODEL}" not available. Skipping query rewriting.`)
const lastUserMessage = [...messages].reverse().find(msg => msg.role === 'user')
return lastUserMessage?.content || null
}
// FUTURE ENHANCEMENT: allow the user to specify which model to use for rewriting
const response = await this.ollamaService.chat({
model: DEFAULT_QUERY_REWRITE_MODEL,
model,
messages: [
{
role: 'system',
@ -367,7 +423,6 @@ export default class OllamaController {
`[RAG] Query rewriting failed: ${error instanceof Error ? error.message : error}`
)
// Fallback to last user message if rewriting fails
const lastUserMessage = [...messages].reverse().find(msg => msg.role === 'user')
return lastUserMessage?.content || null
}
}


@ -6,6 +6,7 @@ import app from '@adonisjs/core/services/app'
import { randomBytes } from 'node:crypto'
import { sanitizeFilename } from '../utils/fs.js'
import { deleteFileSchema, getJobStatusSchema } from '#validators/rag'
import logger from '@adonisjs/core/services/logger'
@inject()
export default class RagController {
@ -92,7 +93,13 @@ export default class RagController {
const syncResult = await this.ragService.scanAndSyncStorage()
return response.status(200).json(syncResult)
} catch (error) {
return response.status(500).json({ error: 'Error scanning and syncing storage', details: error.message })
logger.error({ err: error }, '[RagController] Error scanning and syncing storage')
return response.status(500).json({ error: 'Error scanning and syncing storage' })
}
}
public async health({ response }: HttpContext) {
const result = await this.ragService.checkQdrantHealth()
return response.status(200).json(result)
}
}


@ -6,6 +6,7 @@ import { CheckServiceUpdatesJob } from '#jobs/check_service_updates_job'
import { affectServiceValidator, checkLatestVersionValidator, installServiceValidator, subscribeToReleaseNotesValidator, updateServiceValidator } from '#validators/system';
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
import logger from '@adonisjs/core/services/logger'
@inject()
export default class SystemController {
@ -144,7 +145,8 @@ export default class SystemController {
)
response.send({ versions: updates })
} catch (error) {
response.status(500).send({ error: `Failed to fetch versions: ${error.message}` })
logger.error({ err: error }, `[SystemController] Failed to fetch versions for ${serviceName}`)
response.status(500).send({ error: 'Failed to fetch available versions for this service.' })
}
}


@ -6,7 +6,7 @@ import {
remoteDownloadWithMetadataValidator,
selectWikipediaValidator,
} from '#validators/common'
import { listRemoteZimValidator } from '#validators/zim'
import { addCustomLibraryValidator, browseLibraryValidator, idParamValidator, listRemoteZimValidator } from '#validators/zim'
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
@ -85,4 +85,51 @@ export default class ZimController {
const payload = await request.validateUsing(selectWikipediaValidator)
return this.zimService.selectWikipedia(payload.optionId)
}
// Custom library endpoints
async listCustomLibraries({}: HttpContext) {
return this.zimService.listCustomLibraries()
}
async addCustomLibrary({ request, response }: HttpContext) {
const payload = await request.validateUsing(addCustomLibraryValidator)
assertNotPrivateUrl(payload.base_url)
try {
const source = await this.zimService.addCustomLibrary(payload.name, payload.base_url)
return { message: 'Custom library added', library: source }
} catch (error) {
if (error.message === 'Maximum of 10 custom libraries allowed') {
return response.status(400).send({ message: error.message })
}
throw error
}
}
async removeCustomLibrary({ request, response }: HttpContext) {
const payload = await request.validateUsing(idParamValidator)
try {
await this.zimService.removeCustomLibrary(payload.params.id)
return { message: 'Custom library removed' }
} catch (error) {
if (error.message === 'Custom library not found') {
return response.status(404).send({ message: error.message })
}
throw error
}
}
async browseLibrary({ request, response }: HttpContext) {
const payload = await request.validateUsing(browseLibraryValidator)
try {
return await this.zimService.browseLibraryUrl(payload.url)
} catch (error) {
if (error.message?.includes('loopback or link-local')) {
return response.status(400).send({ message: error.message })
}
return response.status(502).send({
message: 'Could not fetch directory listing from the provided URL',
})
}
}
}
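The custom-library endpoints above lean on assertNotPrivateUrl for SSRF protection, and browseLibraryUrl surfaces a "loopback or link-local" error when a user-supplied URL points somewhere private. Neither helper appears in this diff; a minimal sketch of what the guard plausibly does, under those assumptions:
// Hypothetical sketch of assertNotPrivateUrl (not part of this diff): rejects
// obvious loopback and link-local targets before the server fetches a user-supplied URL.
function assertNotPrivateUrl(rawUrl: string): void {
  const { protocol, hostname } = new URL(rawUrl)
  if (protocol !== 'http:' && protocol !== 'https:') {
    throw new Error('Only http(s) URLs are allowed')
  }
  // Strip any brackets an IPv6 literal may carry before matching
  const host = hostname.toLowerCase().replace(/^\[|\]$/g, '')
  const isLoopback = host === 'localhost' || host === '::1' || host.startsWith('127.')
  const isLinkLocal = host.startsWith('169.254.') || host.startsWith('fe80:')
  if (isLoopback || isLinkLocal) {
    throw new Error('URL points to a loopback or link-local address')
  }
}
A real guard would also need to resolve DNS and re-check the resolved address, since a public hostname can point at a private IP.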


@ -21,6 +21,25 @@ export class DownloadModelJob {
return createHash('sha256').update(modelName).digest('hex').slice(0, 16)
}
/** In-memory registry of abort controllers for active model download jobs */
static abortControllers: Map<string, AbortController> = new Map()
/**
* Redis key used to signal cancellation across processes. Uses a `model-cancel` prefix
* so it cannot collide with content download cancel signals (`nomad:download:cancel:*`).
*/
static cancelKey(jobId: string): string {
return `nomad:download:model-cancel:${jobId}`
}
/** Signal cancellation via Redis so the worker process can pick it up on its next poll tick */
static async signalCancel(jobId: string): Promise<void> {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const client = await queue.client
await client.set(this.cancelKey(jobId), '1', 'EX', 300) // 5 min TTL
}
async handle(job: Job) {
const { modelName } = job.data as DownloadModelJobParams
@ -41,43 +60,96 @@ export class DownloadModelJob {
`[DownloadModelJob] Ollama service is ready. Initiating download for ${modelName}`
)
// Services are ready, initiate the download with progress tracking
const result = await ollamaService.downloadModel(modelName, (progressPercent) => {
if (progressPercent) {
job.updateProgress(Math.floor(progressPercent)).catch((err) => {
if (err?.code !== -1) throw err
})
logger.info(
`[DownloadModelJob] Model ${modelName}: ${progressPercent}%`
)
// Register abort controller for this job — used both by in-process cancels (same process
// as the API server) and as the target of the Redis poll loop below.
const abortController = new AbortController()
DownloadModelJob.abortControllers.set(job.id!, abortController)
// Get Redis client for checking cancel signals from the API process
const queueService = new QueueService()
const cancelRedis = await queueService.getQueue(DownloadModelJob.queue).client
// Track whether cancellation was explicitly requested by the user. Only user-initiated
// cancels become UnrecoverableError — other failures (e.g., transient network errors)
// should still benefit from BullMQ's retry logic.
let userCancelled = false
// Poll Redis for cancel signal every 2s — independent of progress events so cancellation
// works even when the pull is mid-blob and not emitting progress updates.
let cancelPollInterval: ReturnType<typeof setInterval> | null = setInterval(async () => {
try {
const val = await cancelRedis.get(DownloadModelJob.cancelKey(job.id!))
if (val) {
await cancelRedis.del(DownloadModelJob.cancelKey(job.id!))
userCancelled = true
abortController.abort('user-cancel')
}
} catch {
// Redis errors are non-fatal; in-process AbortController covers same-process cancels
}
}, 2000)
// Store detailed progress in job data for clients to query
job.updateData({
...job.data,
status: 'downloading',
progress: progressPercent,
progress_timestamp: new Date().toISOString(),
}).catch((err) => {
if (err?.code !== -1) throw err
})
})
try {
// Services are ready, initiate the download with progress tracking
const result = await ollamaService.downloadModel(
modelName,
(progressPercent, bytes) => {
if (progressPercent) {
job.updateProgress(Math.floor(progressPercent)).catch((err) => {
if (err?.code !== -1) throw err
})
}
if (!result.success) {
logger.error(
`[DownloadModelJob] Failed to initiate download for model ${modelName}: ${result.message}`
// Store detailed progress in job data for clients to query
job.updateData({
...job.data,
status: 'downloading',
progress: progressPercent,
downloadedBytes: bytes?.downloadedBytes,
totalBytes: bytes?.totalBytes,
progress_timestamp: new Date().toISOString(),
}).catch((err) => {
if (err?.code !== -1) throw err
})
},
abortController.signal,
job.id!
)
// Don't retry errors that will never succeed (e.g., Ollama version too old)
if (result.retryable === false) {
throw new UnrecoverableError(result.message)
}
throw new Error(`Failed to initiate download for model: ${result.message}`)
}
logger.info(`[DownloadModelJob] Successfully completed download for model ${modelName}`)
return {
modelName,
message: result.message,
if (!result.success) {
logger.error(
`[DownloadModelJob] Failed to initiate download for model ${modelName}: ${result.message}`
)
// User-initiated cancel — must be unrecoverable to avoid the 40-attempt retry storm.
// The downloadModel() catch block returns retryable: false for cancels, so this branch
// catches both Ollama version mismatches (existing) AND user cancels (new).
if (result.retryable === false) {
throw new UnrecoverableError(result.message)
}
throw new Error(`Failed to initiate download for model: ${result.message}`)
}
logger.info(`[DownloadModelJob] Successfully completed download for model ${modelName}`)
return {
modelName,
message: result.message,
}
} catch (error: any) {
// Belt-and-suspenders: if downloadModel didn't recognize the cancel (e.g., the abort
// fired after the response stream completed but before our code returned), the cancel
// flag tells us this was a user action and should be unrecoverable.
if (userCancelled || abortController.signal.reason === 'user-cancel') {
if (!(error instanceof UnrecoverableError)) {
throw new UnrecoverableError(`Model download cancelled: ${error.message ?? error}`)
}
}
throw error
} finally {
if (cancelPollInterval !== null) {
clearInterval(cancelPollInterval)
cancelPollInterval = null
}
DownloadModelJob.abortControllers.delete(job.id!)
}
}
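The cancel path here has two halves: same-process callers can abort through the in-memory abortControllers registry, while a separate API process signals through the Redis model-cancel key that the 2-second poll above watches. A rough sketch of the caller side (the handler name and the getJobId helper are assumptions; only signalCancel and abortControllers are shown in this diff):
// Hypothetical cancel handler: drives both cancellation paths described above.
async cancelModelDownload({ request }: HttpContext) {
  const { model } = await request.validateUsing(modelNameSchema)
  const jobId = DownloadModelJob.getJobId(model) // assumed deterministic-id helper
  // In-process: abort immediately if the worker runs in this process
  DownloadModelJob.abortControllers.get(jobId)?.abort('user-cancel')
  // Cross-process: set the Redis flag; the worker's poll deletes it and aborts
  await DownloadModelJob.signalCancel(jobId)
  return { success: true, message: `Cancellation requested for ${model}` }
}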


@ -0,0 +1,294 @@
import { Job, UnrecoverableError } from 'bullmq'
import { spawn, ChildProcess } from 'child_process'
import { createHash } from 'crypto'
import { readdir, stat } from 'fs/promises'
import { basename, dirname, join } from 'path'
import { QueueService } from '#services/queue_service'
import logger from '@adonisjs/core/services/logger'
import { DownloadProgressData } from '../../types/downloads.js'
import { PMTILES_BINARY_PATH, buildPmtilesExtractArgs } from '../../constants/map_regions.js'
import { deleteFileIfExists } from '../utils/fs.js'
export interface RunExtractPmtilesJobParams {
sourceUrl: string
outputFilepath: string
/** Path to a GeoJSON FeatureCollection file passed to `pmtiles extract --region`. */
regionFilepath: string
maxzoom?: number
/** Hint for progress reporting; obtained from `pmtiles extract --dry-run` preflight */
estimatedBytes?: number
filetype: 'map'
title?: string
resourceMetadata?: {
resource_id: string
version: string
collection_ref: string | null
}
}
export class RunExtractPmtilesJob {
static get queue() {
return 'pmtiles-extract'
}
static get key() {
return 'run-pmtiles-extract'
}
/** In-memory registry of active child processes so in-process cancels can SIGTERM them */
static childProcesses: Map<string, ChildProcess> = new Map()
static getJobId(sourceUrl: string, regionFilepath: string, maxzoom?: number): string {
const payload = JSON.stringify({ sourceUrl, regionFilepath, maxzoom: maxzoom ?? null })
return createHash('sha256').update(payload).digest('hex').slice(0, 16)
}
/** Redis key used to signal cancellation across processes */
static cancelKey(jobId: string): string {
return `nomad:download:pmtiles-cancel:${jobId}`
}
static async signalCancel(jobId: string): Promise<void> {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const client = await queue.client
await client.set(this.cancelKey(jobId), '1', 'EX', 300)
}
/** Awaits job.updateProgress and swallows BullMQ stale-job errors (code -1),
* which occur when the job was removed from Redis (e.g. cancelled) between
* the await being issued and the Redis write completing. Anything else
* re-throws so it's caught by the surrounding try rather than becoming an
* unhandled rejection. */
private async safeUpdateProgress(job: Job, progress: DownloadProgressData): Promise<void> {
try {
await job.updateProgress(progress)
} catch (err: any) {
if (err?.code !== -1) throw err
}
}
async handle(job: Job) {
const params = job.data as RunExtractPmtilesJobParams
const { sourceUrl, outputFilepath, regionFilepath, maxzoom, estimatedBytes } = params
logger.info(
`[RunExtractPmtilesJob] Starting extract: source=${sourceUrl} region=${regionFilepath} ` +
`maxzoom=${maxzoom ?? 'source-max'} out=${outputFilepath}`
)
const queueService = new QueueService()
const cancelRedis = await queueService.getQueue(RunExtractPmtilesJob.queue).client
let userCancelled = false
let proc: ChildProcess | null = null
let lastReportedBytes = -1
// One 2s tick polls the Redis cancel signal and reads file-size for progress. pmtiles
// writes incrementally but rewrites directories near the end so progress isn't strictly
// monotonic — we cap at 99% and skip emit when bytes are unchanged to avoid Redis chatter.
const tick = setInterval(async () => {
try {
const val = await cancelRedis.get(RunExtractPmtilesJob.cancelKey(job.id!))
if (val) {
await cancelRedis.del(RunExtractPmtilesJob.cancelKey(job.id!))
userCancelled = true
proc?.kill('SIGTERM')
}
} catch {
// Redis errors non-fatal — in-memory handle also covers same-process cancels
}
try {
const fileStat = await stat(outputFilepath)
const downloadedBytes = Number(fileStat.size)
if (downloadedBytes === lastReportedBytes) return
lastReportedBytes = downloadedBytes
const totalBytes = estimatedBytes ?? 0
const percent =
totalBytes > 0 ? Math.min(99, Math.floor((downloadedBytes / totalBytes) * 100)) : 0
await this.safeUpdateProgress(job, {
percent,
downloadedBytes,
totalBytes,
lastProgressTime: Date.now(),
} as DownloadProgressData)
} catch {
// File doesn't exist yet (subprocess still setting up)
}
}, 2000)
try {
const args = buildPmtilesExtractArgs({
sourceUrl,
outputFilepath,
regionFilepath,
maxzoom,
downloadThreads: 8,
overfetch: 0.2,
})
proc = spawn(PMTILES_BINARY_PATH, args, { stdio: ['ignore', 'pipe', 'pipe'] })
RunExtractPmtilesJob.childProcesses.set(job.id!, proc)
proc.stdout?.on('data', (chunk) => {
logger.debug(`[RunExtractPmtilesJob:${job.id}] ${chunk.toString().trimEnd()}`)
})
proc.stderr?.on('data', (chunk) => {
logger.debug(`[RunExtractPmtilesJob:${job.id}] ${chunk.toString().trimEnd()}`)
})
const exitCode: number = await new Promise((resolve, reject) => {
proc!.on('close', (code) => resolve(code ?? -1))
proc!.on('error', (err) => reject(err))
})
if (exitCode !== 0) {
await deleteFileIfExists(outputFilepath)
if (userCancelled) {
throw new UnrecoverableError(`Extract cancelled by user (exit ${exitCode})`)
}
throw new Error(`pmtiles extract exited with code ${exitCode}`)
}
// Final progress bump — tick caps at 99 so the UI doesn't flicker to 100 mid-extract
const finalStat = await stat(outputFilepath)
await this.safeUpdateProgress(job, {
percent: 100,
downloadedBytes: Number(finalStat.size),
totalBytes: estimatedBytes ?? Number(finalStat.size),
lastProgressTime: Date.now(),
} as DownloadProgressData)
// Reuse the HTTP download path's post-download hook so the file is registered and
// the previous version (if any) is deleted
await this.onComplete(params)
logger.info(
`[RunExtractPmtilesJob] Completed extract: out=${outputFilepath} size=${finalStat.size} bytes`
)
return { sourceUrl, outputFilepath }
} catch (error: any) {
if (userCancelled && !(error instanceof UnrecoverableError)) {
throw new UnrecoverableError(`Extract cancelled: ${error.message ?? error}`)
}
throw error
} finally {
clearInterval(tick)
RunExtractPmtilesJob.childProcesses.delete(job.id!)
}
}
private async onComplete(params: RunExtractPmtilesJobParams) {
if (!params.resourceMetadata) return
const [{ default: InstalledResource }, { DateTime }, fsUtils] = await Promise.all([
import('#models/installed_resource'),
import('luxon'),
import('../utils/fs.js'),
])
const fileStat = await fsUtils.getFileStatsIfExists(params.outputFilepath)
const existing = await InstalledResource.query()
.where('resource_id', params.resourceMetadata.resource_id)
.where('resource_type', 'map')
.first()
const oldFilePath = existing?.file_path ?? null
await InstalledResource.updateOrCreate(
{
resource_id: params.resourceMetadata.resource_id,
resource_type: 'map',
},
{
version: params.resourceMetadata.version,
collection_ref: params.resourceMetadata.collection_ref,
url: params.sourceUrl,
file_path: params.outputFilepath,
file_size_bytes: fileStat ? Number(fileStat.size) : null,
installed_at: DateTime.now(),
}
)
if (oldFilePath && oldFilePath !== params.outputFilepath) {
try {
await fsUtils.deleteFileIfExists(oldFilePath)
} catch (err) {
logger.warn(`[RunExtractPmtilesJob] Failed to delete old file ${oldFilePath}: ${err}`)
}
}
// Fallback: scan the pmtiles dir for orphans with the same resource_id that the DB
// lookup above didn't catch — e.g. a prior extract crashed before writing its
// InstalledResource row, or an earlier bug wrote a file without registering it.
// Matches both curated (`<id>_YYYY-MM.pmtiles`) and regional (`<id>_YYYYMMDD_zN.pmtiles`)
// naming — the match is prefix-only so new filename formats aren't silently missed.
const dir = dirname(params.outputFilepath)
const keepName = basename(params.outputFilepath)
const prefix = `${params.resourceMetadata.resource_id}_`
try {
const entries = await readdir(dir)
for (const entry of entries) {
if (entry === keepName || !entry.endsWith('.pmtiles')) continue
if (!entry.startsWith(prefix)) continue
const orphanPath = join(dir, entry)
if (orphanPath === oldFilePath) continue
try {
await fsUtils.deleteFileIfExists(orphanPath)
logger.info(`[RunExtractPmtilesJob] Pruned orphan pmtiles ${orphanPath}`)
} catch (err) {
logger.warn(`[RunExtractPmtilesJob] Failed to prune orphan ${orphanPath}: ${err}`)
}
}
} catch (err) {
logger.warn(`[RunExtractPmtilesJob] Directory scan for orphans failed: ${err}`)
}
}
static async getById(jobId: string): Promise<Job | undefined> {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
return await queue.getJob(jobId)
}
static async dispatch(params: RunExtractPmtilesJobParams) {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const jobId = this.getJobId(params.sourceUrl, params.regionFilepath, params.maxzoom)
const existing = await queue.getJob(jobId)
if (existing) {
const state = await existing.getState()
if (state === 'active' || state === 'waiting' || state === 'delayed') {
return {
job: existing,
created: false,
message: `Extract job already exists for these params`,
}
}
// Stale (completed/failed) — remove so we can re-dispatch under the same deterministic id
try {
await existing.remove()
} catch {
// Already gone or locked — add() below will still report a meaningful error
}
}
// Fewer attempts than HTTP downloads — a failed extract usually means the source URL
// rotated or the CDN is throttling, and resuming mid-extract isn't supported by the CLI
const job = await queue.add(this.key, params, {
jobId,
attempts: 3,
backoff: { type: 'exponential', delay: 60000 },
removeOnComplete: true,
})
return {
job,
created: true,
message: `Dispatched pmtiles extract job`,
}
}
}


@ -3,7 +3,21 @@ import type { HttpContext } from '@adonisjs/core/http'
import type { NextFn } from '@adonisjs/core/types/http'
import compression from 'compression'
const compress = env.get('DISABLE_COMPRESSION') ? null : compression()
// Skip compression for Server-Sent Events. The compression library buffers
// response writes to determine encoding, which collapses per-token streaming
// into a single block delivered after generation completes (regression in
// v1.31.0-rc.2, reported in #781 by @toasterking).
const compress = env.get('DISABLE_COMPRESSION')
? null
: compression({
filter: (req: any, res: any) => {
const contentType = res.getHeader('Content-Type')
if (typeof contentType === 'string' && contentType.includes('text/event-stream')) {
return false
}
return compression.filter(req, res)
},
})
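For context, the streaming responses this protects look roughly like the following (the handler shape is an assumption; only the filter above is part of this diff). With the filter in place, each `data:` frame is flushed to the client as it is written instead of arriving in one compressed block after generation finishes:
// Assumed shape of an SSE chat handler: the Content-Type header is what the
// compression filter above keys on to bypass buffering.
response.header('Content-Type', 'text/event-stream')
response.header('Cache-Control', 'no-cache')
const res = response.response // raw Node ServerResponse (assumed accessor)
for await (const chunk of ollamaStream) { // ollamaStream: assumed async iterable of tokens
  res.write(`data: ${JSON.stringify(chunk)}\n\n`) // reaches the browser per token
}
res.end()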
export default class CompressionMiddleware {
async handle({ request, response }: HttpContext, next: NextFn) {


@ -0,0 +1,24 @@
import { DateTime } from 'luxon'
import { BaseModel, column, SnakeCaseNamingStrategy } from '@adonisjs/lucid/orm'
export default class CustomLibrarySource extends BaseModel {
static namingStrategy = new SnakeCaseNamingStrategy()
@column({ isPrimary: true })
declare id: number
@column()
declare name: string
@column()
declare base_url: string
@column()
declare is_default: boolean
@column.dateTime({ autoCreate: true })
declare created_at: DateTime
@column.dateTime({ autoCreate: true, autoUpdate: true })
declare updated_at: DateTime
}


@ -4,7 +4,7 @@ import logger from '@adonisjs/core/services/logger'
import { DateTime } from 'luxon'
import { inject } from '@adonisjs/core'
import { OllamaService } from './ollama_service.js'
import { DEFAULT_QUERY_REWRITE_MODEL, SYSTEM_PROMPTS } from '../../constants/ollama.js'
import { SYSTEM_PROMPTS } from '../../constants/ollama.js'
import { toTitleCase } from '../utils/misc.js'
@inject()
@ -232,29 +232,22 @@ export class ChatService {
}
}
async generateTitle(sessionId: number, userMessage: string, assistantMessage: string) {
async generateTitle(sessionId: number, userMessage: string, assistantMessage: string, model: string) {
try {
const models = await this.ollamaService.getModels()
const titleModelAvailable = models?.some((m) => m.name === DEFAULT_QUERY_REWRITE_MODEL)
let title: string
if (!titleModelAvailable) {
title = userMessage.slice(0, 57) + (userMessage.length > 57 ? '...' : '')
} else {
const response = await this.ollamaService.chat({
model: DEFAULT_QUERY_REWRITE_MODEL,
messages: [
{ role: 'system', content: SYSTEM_PROMPTS.title_generation },
{ role: 'user', content: userMessage },
{ role: 'assistant', content: assistantMessage },
],
})
const response = await this.ollamaService.chat({
model,
messages: [
{ role: 'system', content: SYSTEM_PROMPTS.title_generation },
{ role: 'user', content: userMessage },
{ role: 'assistant', content: assistantMessage },
],
})
title = response?.message?.content?.trim()
if (!title) {
title = userMessage.slice(0, 57) + (userMessage.length > 57 ? '...' : '')
}
title = response?.message?.content?.trim()
if (!title) {
title = userMessage.slice(0, 57) + (userMessage.length > 57 ? '...' : '')
}
await this.updateSession(sessionId, { title })


@ -53,8 +53,10 @@ export class CollectionUpdateService {
`[CollectionUpdateService] Update check complete: ${response.data.length} update(s) available`
)
const updates = await this.enrichWithSizes(response.data)
return {
updates: response.data,
updates,
checked_at: new Date().toISOString(),
}
} catch (error) {
@ -65,7 +67,7 @@ export class CollectionUpdateService {
return {
updates: [],
checked_at: new Date().toISOString(),
error: `Nomad API returned status ${error.response.status}`,
error: 'Failed to check for content updates. The update service may be temporarily unavailable.',
}
}
const message =
@ -74,7 +76,7 @@ export class CollectionUpdateService {
return {
updates: [],
checked_at: new Date().toISOString(),
error: `Failed to contact Nomad API: ${message}`,
error: 'Failed to contact the update service. Please try again later.',
}
}
}
@ -105,6 +107,8 @@ export class CollectionUpdateService {
update.resource_type === 'zim' ? ZIM_MIME_TYPES : PMTILES_MIME_TYPES,
forceNew: true,
filetype: update.resource_type,
title: update.resource_id,
totalBytes: update.size_bytes,
resourceMetadata: {
resource_id: update.resource_id,
version: update.latest_version,
@ -126,21 +130,42 @@ export class CollectionUpdateService {
async applyAllUpdates(
updates: ResourceUpdateInfo[]
): Promise<{ results: Array<{ resource_id: string; success: boolean; jobId?: string; error?: string }> }> {
const results: Array<{
resource_id: string
success: boolean
jobId?: string
error?: string
}> = []
for (const update of updates) {
const result = await this.applyUpdate(update)
results.push({ resource_id: update.resource_id, ...result })
}
const results = await Promise.all(
updates.map(async (update) => {
const result = await this.applyUpdate(update)
return { resource_id: update.resource_id, ...result }
})
)
return { results }
}
/**
* Fetch Content-Length for each update URL in parallel. HEAD failures are non-fatal;
* the update row just renders without a size. Bounded to HEAD_TIMEOUT_MS so a slow
* mirror doesn't block the whole check.
*/
private async enrichWithSizes(updates: ResourceUpdateInfo[]): Promise<ResourceUpdateInfo[]> {
const HEAD_TIMEOUT_MS = 5000
return await Promise.all(
updates.map(async (update) => {
if (update.size_bytes) return update // Trust upstream if it already gave us one
try {
const head = await axios.head(update.download_url, {
timeout: HEAD_TIMEOUT_MS,
maxRedirects: 5,
validateStatus: (s) => s >= 200 && s < 400,
})
const len = Number(head.headers['content-length'])
return Number.isFinite(len) && len > 0 ? { ...update, size_bytes: len } : update
} catch {
return update
}
})
)
}
private buildFilename(update: ResourceUpdateInfo): string {
if (update.resource_type === 'zim') {
return `${update.resource_id}_${update.latest_version}.zim`


@ -0,0 +1,308 @@
import { access, readFile, writeFile, mkdir } from 'fs/promises'
import { join, resolve } from 'path'
import { createHash } from 'crypto'
import { tmpdir } from 'os'
import logger from '@adonisjs/core/services/logger'
import type { Country, CountryCode, CountryGroup } from '../../types/maps.js'
interface NEFeature {
type: 'Feature'
properties: Record<string, any>
geometry: unknown
}
interface NEFeatureCollection {
type: 'FeatureCollection'
features: NEFeature[]
}
const COUNTRY_GEOJSON_PATH = join(
process.cwd(),
'resources',
'geodata',
'ne_50m_admin_0_countries.geojson'
)
// Natural Earth country polygons are land-only (no territorial waters), so a
// strict intersect leaves tiles fully over the ocean out of the extract —
// the sea just off coastal cities renders as grey. Inflate each polygon outward
// by ~11 km to pull in adjacent tiles without ballooning the extract size.
const REGION_BUFFER_DEGREES = 0.1
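For intuition on the ~11 km figure: one degree of latitude is roughly 111 km, so a 0.1° buffer is about 11.1 km north-south, while the same 0.1° of longitude covers less ground as latitude increases. A quick back-of-the-envelope check (hypothetical helper, not part of this service):
// Approximate east-west width of the buffer at a given latitude, in km.
// At the equator this is ~11.1 km; at 60° it shrinks to ~5.6 km, still enough
// to pull in the neighboring tiles the comment above is concerned with.
const bufferWidthKm = (latitudeDeg: number): number =>
  REGION_BUFFER_DEGREES * 111.32 * Math.cos((latitudeDeg * Math.PI) / 180)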
const GROUP_ORDER = [
'north-america',
'south-america',
'europe',
'africa',
'asia',
'oceania',
]
const GROUP_META: Record<string, { id: string; name: string; description: string }> = {
'North America': {
id: 'north-america',
name: 'North America',
description: 'All countries in North America and the Caribbean.',
},
'South America': {
id: 'south-america',
name: 'South America',
description: 'All countries in South America.',
},
Europe: {
id: 'europe',
name: 'Europe',
description: 'All countries in Europe.',
},
Africa: {
id: 'africa',
name: 'Africa',
description: 'All countries in Africa.',
},
Asia: {
id: 'asia',
name: 'Asia',
description: 'All countries in Asia.',
},
Oceania: {
id: 'oceania',
name: 'Oceania',
description: 'Australia, New Zealand, and Pacific island nations.',
},
}
export class CountriesService {
private static instance: CountriesService | null = null
private loadPromise: Promise<void> | null = null
private countries: Country[] = []
private byCode: Map<CountryCode, { country: Country; feature: NEFeature }> = new Map()
private groups: CountryGroup[] = []
static getInstance(): CountriesService {
if (!this.instance) {
this.instance = new CountriesService()
}
return this.instance
}
private async ensureLoaded(): Promise<void> {
if (this.byCode.size > 0) return
if (!this.loadPromise) {
this.loadPromise = this.load()
}
await this.loadPromise
}
private async load(): Promise<void> {
const raw = await readFile(COUNTRY_GEOJSON_PATH, 'utf8')
const fc = JSON.parse(raw) as NEFeatureCollection
// Natural Earth reuses a sovereign state's ISO_A2 for its dependencies
// (e.g. AU covers both Australia and Australian territories). Sort so the
// sovereign mainland wins the ISO-code slot, and skip any subsequent
// same-code dependency — otherwise the "AU" entry ends up being some tiny
// island territory.
const sortedFeatures = [...fc.features].sort((a, b) => typeRank(a) - typeRank(b))
const countries: Country[] = []
const byCode = new Map<CountryCode, { country: Country; feature: NEFeature }>()
const groupCodes: Record<string, CountryCode[]> = {}
for (const feature of sortedFeatures) {
const p = feature.properties
const code = resolveIso2(p)
if (!code) continue
if (byCode.has(code)) continue
const continent = typeof p.CONTINENT === 'string' ? p.CONTINENT : 'Other'
if (continent === 'Antarctica' || continent === 'Seven seas (open ocean)') continue
const country: Country = {
code,
code3: resolveIso3(p) ?? code,
name: typeof p.NAME === 'string' ? p.NAME : code,
continent,
subregion: typeof p.SUBREGION === 'string' ? p.SUBREGION : continent,
population: typeof p.POP_EST === 'number' ? p.POP_EST : 0,
}
countries.push(country)
byCode.set(code, { country, feature })
if (GROUP_META[continent]) {
const groupId = GROUP_META[continent].id
if (!groupCodes[groupId]) groupCodes[groupId] = []
groupCodes[groupId].push(code)
}
}
countries.sort((a, b) => a.name.localeCompare(b.name))
const groups: CountryGroup[] = GROUP_ORDER.flatMap((groupId) => {
const meta = Object.values(GROUP_META).find((m) => m.id === groupId)
if (!meta) return []
const codes = (groupCodes[groupId] ?? []).slice().sort()
if (codes.length === 0) return []
return [{ id: meta.id, name: meta.name, description: meta.description, countries: codes }]
})
this.countries = countries
this.byCode = byCode
this.groups = groups
logger.info(
`[CountriesService] Loaded ${countries.length} countries across ${groups.length} groups`
)
}
async list(): Promise<Country[]> {
await this.ensureLoaded()
return this.countries
}
async listGroups(): Promise<CountryGroup[]> {
await this.ensureLoaded()
return this.groups
}
/** Throws when a supplied code does not map to a known country. */
async resolveCodes(codes: CountryCode[]): Promise<CountryCode[]> {
await this.ensureLoaded()
const normalized = [...new Set(codes.map((c) => c.toUpperCase()))].sort()
const unknown = normalized.filter((c) => !this.byCode.has(c))
if (unknown.length > 0) {
throw new Error(`Unknown country code(s): ${unknown.join(', ')}`)
}
return normalized
}
/**
* Filename is keyed on a hash of the sorted ISO codes + buffer size so
* repeated calls with the same selection reuse the same path, and bumping
* the buffer auto-invalidates stale files.
*/
async writeRegionFile(codes: CountryCode[]): Promise<string> {
await this.ensureLoaded()
const resolved = await this.resolveCodes(codes)
const key = `b${REGION_BUFFER_DEGREES}:${resolved.join(',')}`
const hash = createHash('sha1').update(key).digest('hex').slice(0, 12)
const dir = resolve(tmpdir(), 'nomad-pmtiles-regions')
await mkdir(dir, { recursive: true })
const filepath = join(dir, `region-${hash}.geojson`)
try {
await access(filepath)
return filepath
} catch {}
const fc = {
type: 'FeatureCollection',
features: resolved.map((code) => {
const entry = this.byCode.get(code)!
return {
type: 'Feature',
properties: { iso: code, name: entry.country.name },
geometry: bufferGeometry(entry.feature.geometry, REGION_BUFFER_DEGREES),
}
}),
}
await writeFile(filepath, JSON.stringify(fc))
return filepath
}
}
function typeRank(f: NEFeature): number {
const t = typeof f.properties.TYPE === 'string' ? f.properties.TYPE : ''
if (t === 'Sovereign country') return 0
if (t === 'Country') return 1
if (t === 'Sovereignty') return 2
if (t === 'Disputed') return 3
if (t === 'Dependency') return 4
return 5
}
function resolveIso2(p: Record<string, any>): CountryCode | null {
// Natural Earth's ISO_A2 sometimes holds political escapes like "CN-TW" for
// Taiwan or "-99" for countries involved in disputes. Only accept clean
// 2-letter codes; fall back to ISO_A2_EH (which reliably has the real code).
const primary = typeof p.ISO_A2 === 'string' ? p.ISO_A2 : null
if (primary && /^[A-Z]{2}$/i.test(primary)) return primary.toUpperCase()
const fallback = typeof p.ISO_A2_EH === 'string' ? p.ISO_A2_EH : null
if (fallback && /^[A-Z]{2}$/i.test(fallback)) return fallback.toUpperCase()
return null
}
/**
* Inflate each polygon ring outward by `buffer` degrees via per-vertex
* averaged-normal offset. Not geodesically accurate but at small buffers
* (<= 0.2°) it's within a few percent of a proper geodesic buffer at
* country scale, which is plenty for tile-inclusion purposes.
*/
function bufferGeometry(geometry: unknown, buffer: number): unknown {
const geom = geometry as { type: string; coordinates: any }
if (geom?.type === 'Polygon') {
return { type: 'Polygon', coordinates: bufferPolygonRings(geom.coordinates, buffer) }
}
if (geom?.type === 'MultiPolygon') {
return {
type: 'MultiPolygon',
coordinates: geom.coordinates.map((poly: number[][][]) =>
bufferPolygonRings(poly, buffer)
),
}
}
return geometry
}
function bufferPolygonRings(rings: number[][][], buffer: number): number[][][] {
return rings.map((ring) => bufferRing(ring, buffer))
}
function bufferRing(ring: number[][], buffer: number): number[][] {
if (ring.length < 4) return ring
const sign = signedArea(ring) > 0 ? 1 : -1
const n = ring.length - 1
const out: number[][] = []
for (let i = 0; i < n; i++) {
const prev = ring[(i - 1 + n) % n]
const curr = ring[i]
const next = ring[(i + 1) % n]
const e1x = curr[0] - prev[0]
const e1y = curr[1] - prev[1]
const e2x = next[0] - curr[0]
const e2y = next[1] - curr[1]
const l1 = Math.hypot(e1x, e1y) || 1
const l2 = Math.hypot(e2x, e2y) || 1
const n1x = (e1y / l1) * sign
const n1y = (-e1x / l1) * sign
const n2x = (e2y / l2) * sign
const n2y = (-e2x / l2) * sign
const sumX = n1x + n2x
const sumY = n1y + n2y
const sl = Math.hypot(sumX, sumY) || 1
out.push([curr[0] + (sumX / sl) * buffer, curr[1] + (sumY / sl) * buffer])
}
out.push(out[0])
return out
}
function signedArea(ring: number[][]): number {
let a = 0
for (let i = 0; i < ring.length - 1; i++) {
a += ring[i][0] * ring[i + 1][1] - ring[i + 1][0] * ring[i][1]
}
return a / 2
}
function resolveIso3(p: Record<string, any>): string | null {
const primary = typeof p.ISO_A3 === 'string' ? p.ISO_A3 : null
if (primary && primary !== '-99') return primary.toUpperCase()
const fallback = typeof p.ISO_A3_EH === 'string' ? p.ISO_A3_EH : null
if (fallback && fallback !== '-99') return fallback.toUpperCase()
const adm = typeof p.ADM0_A3 === 'string' ? p.ADM0_A3 : null
if (adm && adm !== '-99') return adm.toUpperCase()
return null
}
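Taken together with the MapsController endpoints earlier in this diff, the extract flow is: validate the requested country codes, write a buffered region file, then dispatch the pmtiles extract job. The MapService glue between those pieces is not part of this diff; a plausible sketch, with the source URL and output-path helpers as assumptions:
// Hypothetical MapService.extractRegion: illustrates how CountriesService and
// RunExtractPmtilesJob (both shown above) are likely wired together.
async extractRegion(payload: { countries: string[]; maxzoom?: number }) {
  const countries = CountriesService.getInstance()
  const regionFilepath = await countries.writeRegionFile(payload.countries) // throws on unknown codes
  const { job, created, message } = await RunExtractPmtilesJob.dispatch({
    sourceUrl: PLANET_PMTILES_URL, // assumed constant for the upstream planet file
    outputFilepath: this.buildOutputPath(payload), // assumed path helper
    regionFilepath,
    maxzoom: payload.maxzoom,
    filetype: 'map',
  })
  return { jobId: job.id, created, message }
}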


@ -10,7 +10,7 @@ import { KiwixLibraryService } from './kiwix_library_service.js'
import { SERVICE_NAMES } from '../../constants/service_names.js'
import { exec } from 'child_process'
import { promisify } from 'util'
// import { readdir } from 'fs/promises'
import { readFile } from 'node:fs/promises'
import KVStore from '#models/kv_store'
import { BROADCAST_CHANNELS } from '../../constants/broadcast.js'
import { KIWIX_LIBRARY_CMD } from '../../constants/kiwix.js'
@ -110,10 +110,10 @@ export class DockerService {
message: `Invalid action: ${action}. Use 'start', 'stop', or 'restart'.`,
}
} catch (error: any) {
logger.error(`Error starting service ${serviceName}: ${error.message}`)
logger.error({ err: error }, `[DockerService] Error controlling service ${serviceName}`)
return {
success: false,
message: `Failed to start service ${serviceName}: ${error.message}`,
message: `Failed to ${action} service ${serviceName}. Check server logs for details.`,
}
}
}
@ -355,8 +355,8 @@ export class DockerService {
)
}
} catch (error: any) {
logger.warn(`Error during container cleanup: ${error.message}`)
this._broadcast(serviceName, 'cleanup-warning', `Warning during cleanup: ${error.message}`)
logger.warn({ err: error }, `[DockerService] Error during container cleanup for ${serviceName}`)
this._broadcast(serviceName, 'cleanup-warning', 'Warning during container cleanup. Check server logs for details.')
}
// Step 3: Clear volumes/data if needed
@ -382,11 +382,11 @@ export class DockerService {
this._broadcast(serviceName, 'no-volumes', `No volumes found to clear`)
}
} catch (error: any) {
logger.warn(`Error during volume cleanup: ${error.message}`)
logger.warn({ err: error }, `[DockerService] Error during volume cleanup for ${serviceName}`)
this._broadcast(
serviceName,
'volume-cleanup-warning',
`Warning during volume cleanup: ${error.message}`
'Warning during volume cleanup. Check server logs for details.'
)
}
@ -411,11 +411,11 @@ export class DockerService {
message: `Service ${serviceName} force reinstall initiated successfully. You can receive updates via server-sent events.`,
}
} catch (error: any) {
logger.error(`Force reinstall failed for ${serviceName}: ${error.message}`)
logger.error({ err: error }, `[DockerService] Force reinstall failed for ${serviceName}`)
await this._cleanupFailedInstallation(serviceName)
return {
success: false,
message: `Failed to force reinstall service ${serviceName}: ${error.message}`,
message: `Failed to force reinstall service ${serviceName}. Check server logs for details.`,
}
}
}
@ -500,6 +500,7 @@ export class DockerService {
// GPU-aware configuration for Ollama
let finalImage = service.container_image
let gpuHostConfig = containerConfig?.HostConfig || {}
let amdGpuConfigured = false
if (service.service_name === SERVICE_NAMES.OLLAMA) {
const gpuResult = await this._detectGPUType()
@ -523,16 +524,51 @@ export class DockerService {
],
}
} else if (gpuResult.type === 'amd') {
this._broadcast(
service.service_name,
'gpu-config',
`AMD GPU detected. ROCm GPU acceleration is not yet supported in this version — proceeding with CPU-only configuration. GPU support for AMD will be available in a future update.`
)
logger.warn('[DockerService] AMD GPU detected but ROCm support is not yet enabled. Using CPU-only configuration.')
// TODO: Re-enable AMD GPU support once ROCm image and device discovery are validated.
// When re-enabling:
// 1. Switch image to 'ollama/ollama:rocm'
// 2. Restore _discoverAMDDevices() to map /dev/kfd and /dev/dri/* into the container
// AMD acceleration is opt-out via the 'ai.amdGpuAcceleration' KV key (default-on).
// Note: KVStore values can come back as either string or boolean — coerce explicitly.
const amdEnabledRaw = await KVStore.getValue('ai.amdGpuAcceleration')
const amdAccelerationEnabled = String(amdEnabledRaw) !== 'false'
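// (String(false) and String('false') both yield 'false', so the comparison treats the
// boolean and string forms the same; an unset key coerces to 'undefined' and keeps
// acceleration enabled by default.)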
if (amdAccelerationEnabled) {
this._broadcast(
service.service_name,
'gpu-config',
`AMD GPU detected. Using ROCm image with /dev/kfd and /dev/dri passthrough...`
)
finalImage = 'ollama/ollama:rocm'
// The pull-if-missing earlier in this function used service.container_image
// (the DB-pinned tag, e.g. ollama/ollama:0.18.2). The AMD branch overrides
// to a different tag — so we need to pull :rocm separately if it's not local.
const rocmImageExists = await this._checkImageExists(finalImage)
if (!rocmImageExists) {
this._broadcast(
service.service_name,
'pulling',
`Pulling Docker image ${finalImage}...`
)
const rocmPullStream = await this.docker.pull(finalImage)
await new Promise((res) => this.docker.modem.followProgress(rocmPullStream, res))
}
const amdDevices = await this._discoverAMDDevices()
gpuHostConfig = {
...gpuHostConfig,
Devices: amdDevices,
}
amdGpuConfigured = true
logger.info(
`[DockerService] Configured ROCm image and ${amdDevices.length} AMD device entries for Ollama`
)
} else {
this._broadcast(
service.service_name,
'gpu-config',
`AMD GPU detected but acceleration is disabled via ai.amdGpuAcceleration. Using CPU-only configuration.`
)
logger.info('[DockerService] AMD GPU acceleration disabled by KV opt-out; using CPU-only configuration.')
}
} else if (gpuResult.toolkitMissing) {
this._broadcast(
service.service_name,
@ -555,6 +591,12 @@ export class DockerService {
if (flashAttentionEnabled !== false) {
ollamaEnv.push('OLLAMA_FLASH_ATTENTION=1')
}
if (amdGpuConfigured) {
// RDNA3 iGPUs (gfx1103: 780M, 880M, 890M, ...) aren't on AMD's official ROCm
// allowlist but work when forced to identify as gfx1100 via HSA_OVERRIDE_GFX_VERSION.
// Harmless on supported discrete cards (gfx1030 RX 6800, etc.) — they ignore the override.
ollamaEnv.push('HSA_OVERRIDE_GFX_VERSION=11.0.0')
}
}
this._broadcast(
@ -664,10 +706,10 @@ export class DockerService {
return { success: true, message: `Service ${serviceName} container removed successfully` }
} catch (error: any) {
logger.error(`Error removing service container: ${error.message}`)
logger.error({ err: error }, `[DockerService] Error removing service container ${serviceName}`)
return {
success: false,
message: `Failed to remove service ${serviceName} container: ${error.message}`,
message: `Failed to remove service ${serviceName} container. Check server logs for details.`,
}
}
}
@ -857,7 +899,10 @@ export class DockerService {
/**
* Detect GPU type and toolkit availability.
* Primary: Check Docker runtimes via docker.info() (works from inside containers).
* Fallback: lspci for host-based installs and AMD detection.
* Secondary: Read /app/storage/.nomad-gpu-type, written by install_nomad.sh; needed
* for AMD detection because lspci isn't available inside the admin container and
* AMD has no Docker runtime registration to query.
* Fallback: lspci for host-based installs.
*/
private async _detectGPUType(): Promise<{ type: 'nvidia' | 'amd' | 'none'; toolkitMissing?: boolean }> {
try {
@ -874,6 +919,24 @@ export class DockerService {
logger.warn(`[DockerService] Could not query Docker info for GPU runtimes: ${error.message}`)
}
// Secondary: install_nomad.sh writes the host-detected GPU type to a marker file in
// the storage volume so the admin container (which lacks lspci) can read it.
try {
const marker = (await readFile('/app/storage/.nomad-gpu-type', 'utf8')).trim()
if (marker === 'nvidia') {
// Hardware present but Docker doesn't have nvidia runtime → toolkit missing
logger.warn('[DockerService] NVIDIA GPU recorded in marker file but NVIDIA Container Toolkit is not installed')
return { type: 'none', toolkitMissing: true }
}
if (marker === 'amd') {
logger.info('[DockerService] AMD GPU detected via install-time marker file')
await this._persistGPUType('amd')
return { type: 'amd' }
}
} catch {
// No marker file — fall through to lspci attempt for host-based installs
}
// Fallback: lspci for host-based installs (not available inside Docker)
const execAsync = promisify(exec)
@ -937,60 +1000,23 @@ export class DockerService {
}
/**
* Discover AMD GPU DRI devices dynamically.
* Returns an array of device configurations for Docker.
* Build the Docker Devices array for AMD GPU passthrough.
*
* Returns /dev/kfd (Kernel Fusion Driver, required by ROCm) and /dev/dri (the DRM
* device tree). Passing /dev/dri as a single directory entry mirrors Docker CLI
* --device behavior: the daemon expands it to all child devices (card*, renderD*)
* regardless of how the host enumerates them. This avoids the brittle hardcoded
* fallback (card0/renderD128) the prior implementation used, which was wrong on
* systems where the AMD GPU enumerates as card1+ (e.g. UM890 Pro 780M iGPU).
*/
// private async _discoverAMDDevices(): Promise<
// Array<{ PathOnHost: string; PathInContainer: string; CgroupPermissions: string }>
// > {
// try {
// const devices: Array<{
// PathOnHost: string
// PathInContainer: string
// CgroupPermissions: string
// }> = []
// // Always add /dev/kfd (Kernel Fusion Driver)
// devices.push({
// PathOnHost: '/dev/kfd',
// PathInContainer: '/dev/kfd',
// CgroupPermissions: 'rwm',
// })
// // Discover DRI devices in /dev/dri/
// try {
// const driDevices = await readdir('/dev/dri')
// for (const device of driDevices) {
// const devicePath = `/dev/dri/${device}`
// devices.push({
// PathOnHost: devicePath,
// PathInContainer: devicePath,
// CgroupPermissions: 'rwm',
// })
// }
// logger.info(
// `[DockerService] Discovered ${driDevices.length} DRI devices: ${driDevices.join(', ')}`
// )
// } catch (error) {
// logger.warn(`[DockerService] Could not read /dev/dri directory: ${error.message}`)
// // Fallback to common device names if directory read fails
// const fallbackDevices = ['card0', 'renderD128']
// for (const device of fallbackDevices) {
// devices.push({
// PathOnHost: `/dev/dri/${device}`,
// PathInContainer: `/dev/dri/${device}`,
// CgroupPermissions: 'rwm',
// })
// }
// logger.info(`[DockerService] Using fallback DRI devices: ${fallbackDevices.join(', ')}`)
// }
// return devices
// } catch (error) {
// logger.error(`[DockerService] Error discovering AMD devices: ${error.message}`)
// return []
// }
// }
private async _discoverAMDDevices(): Promise<
Array<{ PathOnHost: string; PathInContainer: string; CgroupPermissions: string }>
> {
return [
{ PathOnHost: '/dev/kfd', PathInContainer: '/dev/kfd', CgroupPermissions: 'rwm' },
{ PathOnHost: '/dev/dri', PathInContainer: '/dev/dri', CgroupPermissions: 'rwm' },
]
}
/**
* Update a service container to a new image version while preserving volumes and data.
@ -1014,12 +1040,60 @@ export class DockerService {
this.activeInstallations.add(serviceName)
// Compute new image string
// Compute new image string. AMD-on-Ollama overrides this to the rolling :rocm tag
// (set during GPU detection below) since per-version ROCm tags aren't always published.
const currentImage = service.container_image
const imageBase = currentImage.includes(':')
? currentImage.substring(0, currentImage.lastIndexOf(':'))
: currentImage
const newImage = `${imageBase}:${targetVersion}`
let newImage = `${imageBase}:${targetVersion}`
// GPU detection runs before the pull so AMD updates pull ollama/ollama:rocm rather
// than the standard tag. Detection result is reused below when building the new
// container config (devices, env). Non-Ollama services skip this entirely.
let updatedDeviceRequests: any[] | undefined = undefined
let updatedAmdDevices: any[] | undefined = undefined
let updatedAmdGpuConfigured = false
if (serviceName === SERVICE_NAMES.OLLAMA) {
const gpuResult = await this._detectGPUType()
if (gpuResult.type === 'nvidia') {
this._broadcast(
serviceName,
'update-gpu-config',
`NVIDIA container runtime detected. Configuring updated container with GPU support...`
)
updatedDeviceRequests = [
{ Driver: 'nvidia', Count: -1, Capabilities: [['gpu']] },
]
} else if (gpuResult.type === 'amd') {
const amdEnabledRaw = await KVStore.getValue('ai.amdGpuAcceleration')
const amdAccelerationEnabled = String(amdEnabledRaw) !== 'false'
if (amdAccelerationEnabled) {
this._broadcast(
serviceName,
'update-gpu-config',
`AMD GPU detected. Using ROCm image with /dev/kfd and /dev/dri passthrough...`
)
newImage = 'ollama/ollama:rocm'
updatedAmdDevices = await this._discoverAMDDevices()
updatedAmdGpuConfigured = true
} else {
this._broadcast(
serviceName,
'update-gpu-config',
`AMD GPU detected but acceleration is disabled via ai.amdGpuAcceleration. Using CPU-only configuration.`
)
}
} else if (gpuResult.toolkitMissing) {
this._broadcast(
serviceName,
'update-gpu-config',
`NVIDIA GPU detected but NVIDIA Container Toolkit is not installed. Using CPU-only configuration. Install the toolkit and reinstall AI Assistant for GPU acceleration: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html`
)
} else {
this._broadcast(serviceName, 'update-gpu-config', `No GPU detected. Using CPU-only configuration.`)
}
}
// Step 1: Pull new image
this._broadcast(serviceName, 'update-pulling', `Pulling image ${newImage}...`)
@ -1054,48 +1128,21 @@ export class DockerService {
const hostConfig = inspectData.HostConfig || {}
// Re-run GPU detection for Ollama so updates always reflect the current GPU environment.
// This handles cases where the NVIDIA Container Toolkit was installed after the initial
// Ollama setup, and ensures DeviceRequests are always built fresh rather than relying on
// round-tripping the Docker inspect format back into the create API.
let updatedDeviceRequests: any[] | undefined = undefined
if (serviceName === SERVICE_NAMES.OLLAMA) {
const gpuResult = await this._detectGPUType()
if (gpuResult.type === 'nvidia') {
this._broadcast(
serviceName,
'update-gpu-config',
`NVIDIA container runtime detected. Configuring updated container with GPU support...`
)
updatedDeviceRequests = [
{
Driver: 'nvidia',
Count: -1,
Capabilities: [['gpu']],
},
// GPU detection already ran above (before the pull) so we know the right image, devices,
// and whether HSA_OVERRIDE needs injection. For AMD, replace any prior HSA_OVERRIDE in
// the inspect-captured env so updates from older containers pick up the current value.
const baseEnv = inspectData.Config?.Env || []
const finalEnv = updatedAmdGpuConfigured
? [
...baseEnv.filter((e: string) => !e.startsWith('HSA_OVERRIDE_GFX_VERSION=')),
'HSA_OVERRIDE_GFX_VERSION=11.0.0',
]
} else if (gpuResult.type === 'amd') {
this._broadcast(
serviceName,
'update-gpu-config',
`AMD GPU detected. ROCm GPU acceleration is not yet supported — using CPU-only configuration.`
)
} else if (gpuResult.toolkitMissing) {
this._broadcast(
serviceName,
'update-gpu-config',
`NVIDIA GPU detected but NVIDIA Container Toolkit is not installed. Using CPU-only configuration. Install the toolkit and reinstall AI Assistant for GPU acceleration: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html`
)
} else {
this._broadcast(serviceName, 'update-gpu-config', `No GPU detected. Using CPU-only configuration.`)
}
}
: baseEnv
const newContainerConfig: any = {
Image: newImage,
name: serviceName,
Env: inspectData.Config?.Env || undefined,
Env: finalEnv.length > 0 ? finalEnv : undefined,
Cmd: inspectData.Config?.Cmd || undefined,
ExposedPorts: inspectData.Config?.ExposedPorts || undefined,
WorkingDir: inspectData.Config?.WorkingDir || undefined,
@ -1105,7 +1152,7 @@ export class DockerService {
PortBindings: hostConfig.PortBindings || undefined,
RestartPolicy: hostConfig.RestartPolicy || undefined,
DeviceRequests: serviceName === SERVICE_NAMES.OLLAMA ? updatedDeviceRequests : (hostConfig.DeviceRequests || undefined),
Devices: hostConfig.Devices || undefined,
Devices: serviceName === SERVICE_NAMES.OLLAMA && updatedAmdDevices ? updatedAmdDevices : (hostConfig.Devices || undefined),
},
NetworkingConfig: inspectData.NetworkSettings?.Networks
? {
@ -1204,10 +1251,10 @@ export class DockerService {
this._broadcast(
serviceName,
'update-rollback',
`Update failed: ${error.message}`
'Update failed. Check server logs for details.'
)
logger.error(`[DockerService] Update failed for ${serviceName}: ${error.message}`)
return { success: false, message: `Update failed: ${error.message}` }
logger.error({ err: error }, `[DockerService] Update failed for ${serviceName}`)
return { success: false, message: 'Update failed. Check server logs for details.' }
}
}


@ -12,9 +12,10 @@ export class DocsService {
'home': 1,
'getting-started': 2,
'use-cases': 3,
'faq': 4,
'about': 5,
'release-notes': 6,
'community-add-ons': 4,
'faq': 5,
'about': 6,
'release-notes': 7,
}
async getDocs() {
@ -91,6 +92,7 @@ export class DocsService {
private static readonly TITLE_OVERRIDES: Record<string, string> = {
'faq': 'FAQ',
'community-add-ons': 'Community Add-Ons',
}
private prettify(filename: string) {

View File

@ -1,10 +1,18 @@
import { inject } from '@adonisjs/core'
import { QueueService } from './queue_service.js'
import { RunDownloadJob } from '#jobs/run_download_job'
import { RunExtractPmtilesJob } from '#jobs/run_extract_pmtiles_job'
import type { RunExtractPmtilesJobParams } from '#jobs/run_extract_pmtiles_job'
import { DownloadModelJob } from '#jobs/download_model_job'
import { DownloadJobWithProgress, DownloadProgressData } from '../../types/downloads.js'
import type { Job, Queue } from 'bullmq'
import { normalize } from 'path'
import { deleteFileIfExists } from '../utils/fs.js'
import transmit from '@adonisjs/transmit/services/main'
import { BROADCAST_CHANNELS } from '../../constants/broadcast.js'
type FileJobState = 'waiting' | 'active' | 'delayed' | 'failed'
type TaggedJob = { job: Job; state: FileJobState }
@inject()
export class DownloadService {
@ -24,27 +32,32 @@ export class DownloadService {
return { percent: parseInt(String(progress), 10) || 0 }
}
async listDownloadJobs(filetype?: string): Promise<DownloadJobWithProgress[]> {
// Get regular file download jobs (zim, map, etc.) — query each state separately so we can
// tag each job with its actual BullMQ state rather than guessing from progress data.
const queue = this.queueService.getQueue(RunDownloadJob.queue)
type FileJobState = 'waiting' | 'active' | 'delayed' | 'failed'
const [waitingJobs, activeJobs, delayedJobs, failedJobs] = await Promise.all([
/** Fetch all non-completed jobs from a queue, tagged with their current BullMQ state */
private async fetchJobsWithStates(queueName: string): Promise<TaggedJob[]> {
const queue = this.queueService.getQueue(queueName)
const [waiting, active, delayed, failed] = await Promise.all([
queue.getJobs(['waiting']),
queue.getJobs(['active']),
queue.getJobs(['delayed']),
queue.getJobs(['failed']),
])
const taggedFileJobs: Array<{ job: (typeof waitingJobs)[0]; state: FileJobState }> = [
...waitingJobs.map((j) => ({ job: j, state: 'waiting' as const })),
...activeJobs.map((j) => ({ job: j, state: 'active' as const })),
...delayedJobs.map((j) => ({ job: j, state: 'delayed' as const })),
...failedJobs.map((j) => ({ job: j, state: 'failed' as const })),
return [
...waiting.map((j) => ({ job: j, state: 'waiting' as const })),
...active.map((j) => ({ job: j, state: 'active' as const })),
...delayed.map((j) => ({ job: j, state: 'delayed' as const })),
...failed.map((j) => ({ job: j, state: 'failed' as const })),
]
}
const fileDownloads = taggedFileJobs.map(({ job, state }) => {
async listDownloadJobs(filetype?: string): Promise<DownloadJobWithProgress[]> {
const modelQueue = this.queueService.getQueue(DownloadModelJob.queue)
const [fileTagged, extractTagged, modelJobs] = await Promise.all([
this.fetchJobsWithStates(RunDownloadJob.queue),
this.fetchJobsWithStates(RunExtractPmtilesJob.queue),
modelQueue.getJobs(['waiting', 'active', 'delayed', 'failed']),
])
const fileDownloads = fileTagged.map(({ job, state }) => {
const parsed = this.parseProgress(job.progress)
return {
jobId: job.id!.toString(),
@ -61,26 +74,36 @@ export class DownloadService {
}
})
// Get Ollama model download jobs
const modelQueue = this.queueService.getQueue(DownloadModelJob.queue)
const modelJobs = await modelQueue.getJobs(['waiting', 'active', 'delayed', 'failed'])
const extractDownloads = extractTagged.map(({ job, state }) => {
const parsed = this.parseProgress(job.progress)
return {
jobId: job.id!.toString(),
url: job.data.sourceUrl,
progress: parsed.percent,
filepath: normalize(job.data.outputFilepath),
filetype: job.data.filetype || 'map',
title: job.data.title || undefined,
downloadedBytes: parsed.downloadedBytes,
totalBytes: parsed.totalBytes || job.data.estimatedBytes || undefined,
lastProgressTime: parsed.lastProgressTime,
status: state,
failedReason: job.failedReason || undefined,
}
})
const modelDownloads = modelJobs.map((job) => ({
jobId: job.id!.toString(),
url: job.data.modelName || 'Unknown Model', // Use model name as url
url: job.data.modelName || 'Unknown Model',
progress: parseInt(job.progress.toString(), 10),
filepath: job.data.modelName || 'Unknown Model', // Use model name as filepath
filepath: job.data.modelName || 'Unknown Model',
filetype: 'model',
status: (job.failedReason ? 'failed' : 'active') as 'active' | 'failed',
failedReason: job.failedReason || undefined,
}))
const allDownloads = [...fileDownloads, ...modelDownloads]
// Filter by filetype if specified
const allDownloads = [...fileDownloads, ...extractDownloads, ...modelDownloads]
const filtered = allDownloads.filter((job) => !filetype || job.filetype === filetype)
// Sort: active downloads first (by progress desc), then failed at the bottom
return filtered.sort((a, b) => {
if (a.status === 'failed' && b.status !== 'failed') return 1
if (a.status !== 'failed' && b.status === 'failed') return -1
@ -89,7 +112,11 @@ export class DownloadService {
}
async removeFailedJob(jobId: string): Promise<void> {
for (const queueName of [RunDownloadJob.queue, DownloadModelJob.queue]) {
for (const queueName of [
RunDownloadJob.queue,
RunExtractPmtilesJob.queue,
DownloadModelJob.queue,
]) {
const queue = this.queueService.getQueue(queueName)
const job = await queue.getJob(jobId)
if (job) {
@ -114,11 +141,60 @@ export class DownloadService {
const queue = this.queueService.getQueue(RunDownloadJob.queue)
const job = await queue.getJob(jobId)
if (!job) {
// Job already completed (removeOnComplete: true) or doesn't exist
return { success: true, message: 'Job not found (may have already completed)' }
if (job) {
return await this._cancelFileDownloadJob(jobId, job, queue)
}
const extractQueue = this.queueService.getQueue(RunExtractPmtilesJob.queue)
const extractJob = await extractQueue.getJob(jobId)
if (extractJob) {
return await this._cancelExtractJob(jobId, extractJob, extractQueue)
}
const modelQueue = this.queueService.getQueue(DownloadModelJob.queue)
const modelJob = await modelQueue.getJob(jobId)
if (modelJob) {
return await this._cancelModelDownloadJob(jobId, modelJob, modelQueue)
}
return { success: true, message: 'Job not found (may have already completed)' }
}
private async _cancelExtractJob(
jobId: string,
job: Job<RunExtractPmtilesJobParams>,
queue: Queue<RunExtractPmtilesJobParams>
): Promise<{ success: boolean; message: string }> {
const outputFilepath = job.data.outputFilepath
await RunExtractPmtilesJob.signalCancel(jobId)
// Same-process fallback when worker and API share a process
RunExtractPmtilesJob.childProcesses.get(jobId)?.kill('SIGTERM')
RunExtractPmtilesJob.childProcesses.delete(jobId)
await this._pollForTerminalState(job, jobId)
await this._removeJobWithLockFallback(job, queue, RunExtractPmtilesJob.queue, jobId)
if (outputFilepath) {
try {
await deleteFileIfExists(outputFilepath)
} catch {
// File may not exist yet (subprocess may not have opened it)
}
}
return { success: true, message: 'Extract cancelled and partial file deleted' }
}
/** Cancel a content download (zim, map, pmtiles, etc.) */
private async _cancelFileDownloadJob(
jobId: string,
job: any,
queue: any
): Promise<{ success: boolean; message: string }> {
const filepath = job.data.filepath
// Signal the worker process to abort the download via Redis
@ -128,45 +204,8 @@ export class DownloadService {
RunDownloadJob.abortControllers.get(jobId)?.abort('user-cancel')
RunDownloadJob.abortControllers.delete(jobId)
// Poll for terminal state (up to 4s at 250ms intervals) — cooperates with BullMQ's lifecycle
// instead of force-removing an active job and losing the worker's failure/cleanup path.
const POLL_INTERVAL_MS = 250
const POLL_TIMEOUT_MS = 4000
const deadline = Date.now() + POLL_TIMEOUT_MS
let reachedTerminal = false
while (Date.now() < deadline) {
await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS))
try {
const state = await job.getState()
if (state === 'failed' || state === 'completed' || state === 'unknown') {
reachedTerminal = true
break
}
} catch {
reachedTerminal = true // getState() throws if job is already gone
break
}
}
if (!reachedTerminal) {
console.warn(`[DownloadService] cancelJob: job ${jobId} did not reach terminal state within timeout, removing anyway`)
}
// Remove the BullMQ job
try {
await job.remove()
} catch {
// Lock contention fallback: clear lock and retry once
try {
const client = await queue.client
await client.del(`bull:${RunDownloadJob.queue}:${jobId}:lock`)
const updatedJob = await queue.getJob(jobId)
if (updatedJob) await updatedJob.remove()
} catch {
// Best effort - job will be cleaned up on next dismiss attempt
}
}
await this._pollForTerminalState(job, jobId)
await this._removeJobWithLockFallback(job, queue, RunDownloadJob.queue, jobId)
// Delete the partial file from disk
if (filepath) {
@ -195,4 +234,87 @@ export class DownloadService {
return { success: true, message: 'Download cancelled and partial file deleted' }
}
/** Cancel an Ollama model download — mirrors the file cancel pattern but skips file cleanup */
private async _cancelModelDownloadJob(
jobId: string,
job: any,
queue: any
): Promise<{ success: boolean; message: string }> {
const modelName: string = job.data?.modelName ?? 'unknown'
// Signal the worker process to abort the pull via Redis
await DownloadModelJob.signalCancel(jobId)
// Also try in-memory abort (works if worker is in same process)
DownloadModelJob.abortControllers.get(jobId)?.abort('user-cancel')
DownloadModelJob.abortControllers.delete(jobId)
await this._pollForTerminalState(job, jobId)
await this._removeJobWithLockFallback(job, queue, DownloadModelJob.queue, jobId)
// Broadcast a cancelled event so the frontend hook clears the entry. We use percent: -2
// (distinct from -1 = error) so the hook can route it to a 2s auto-clear instead of the
// 15s error display. The frontend ALSO removes the entry optimistically from the API
// response, so this is belt-and-suspenders for cases where the SSE arrives first.
transmit.broadcast(BROADCAST_CHANNELS.OLLAMA_MODEL_DOWNLOAD, {
model: modelName,
jobId,
percent: -2,
status: 'cancelled',
timestamp: new Date().toISOString(),
})
// Note on partial blob cleanup: Ollama manages model blobs internally at
// /root/.ollama/models/blobs/. We deliberately do NOT call /api/delete here — Ollama's
// expected behavior is to retain partial blobs so a re-pull resumes from where it left
// off. If the user wants to reclaim that space, they can re-pull and let it complete,
// or delete the partially-downloaded model from the AI Settings page.
return { success: true, message: 'Model download cancelled' }
}
/** Wait up to 4s (250ms intervals) for the job to reach a terminal state */
private async _pollForTerminalState(job: any, jobId: string): Promise<void> {
const POLL_INTERVAL_MS = 250
const POLL_TIMEOUT_MS = 4000
const deadline = Date.now() + POLL_TIMEOUT_MS
while (Date.now() < deadline) {
await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS))
try {
const state = await job.getState()
if (state === 'failed' || state === 'completed' || state === 'unknown') {
return
}
} catch {
return // getState() throws if job is already gone
}
}
console.warn(
`[DownloadService] cancelJob: job ${jobId} did not reach terminal state within timeout, removing anyway`
)
}
/** Remove a BullMQ job, clearing a stale worker lock if the first attempt fails */
private async _removeJobWithLockFallback(
job: any,
queue: any,
queueName: string,
jobId: string
): Promise<void> {
try {
await job.remove()
} catch {
// Lock contention fallback: clear lock and retry once
try {
const client = await queue.client
await client.del(`bull:${queueName}:${jobId}:lock`)
const updatedJob = await queue.getJob(jobId)
if (updatedJob) await updatedJob.remove()
} catch {
// Best effort - job will be cleaned up on next dismiss attempt
}
}
}
}
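The `signalCancel()` calls and `abortControllers` maps used above live on the job classes and are outside this diff. A sketch of the cross-process cancel pattern they imply, assuming an ioredis client and a key naming scheme chosen here for illustration; the real implementation may differ:

import { Redis } from 'ioredis'

// Sketch only: key names, TTL, and the shared Redis connection are assumptions.
export class CancellableJobBase {
  // Same-process fallback: the API can abort directly when it shares a process with the worker.
  static abortControllers = new Map<string, AbortController>()
  private static redis = new Redis(process.env.REDIS_URL ?? 'redis://127.0.0.1:6379')

  // Called from the API process (DownloadService.cancelJob above).
  static async signalCancel(jobId: string): Promise<void> {
    await this.redis.set(`job-cancel:${jobId}`, '1', 'EX', 600)
  }

  // Polled by the worker between chunks; when true, abort the local controller
  // so the in-flight stream or child process is torn down.
  static async isCancelled(jobId: string): Promise<boolean> {
    return (await this.redis.get(`job-cancel:${jobId}`)) === '1'
  }
}

A worker that sees isCancelled() return true would call abortControllers.get(jobId)?.abort('user-cancel') and let the job fail, which is exactly the terminal state _pollForTerminalState() waits for before the job is removed.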

View File

@ -2,7 +2,7 @@ import { XMLBuilder, XMLParser } from 'fast-xml-parser'
import { readFile, writeFile, rename, readdir } from 'fs/promises'
import { join } from 'path'
import { Archive } from '@openzim/libzim'
import { KIWIX_LIBRARY_XML_PATH, ZIM_STORAGE_PATH, ensureDirectoryExists } from '../utils/fs.js'
import { KIWIX_LIBRARY_XML_PATH, ZIM_STORAGE_PATH, ensureDirectoryExists, isValidZimFile } from '../utils/fs.js'
import logger from '@adonisjs/core/services/logger'
import { randomUUID } from 'node:crypto'
@ -54,8 +54,12 @@ export class KiwixLibraryService {
*
* Returns null on any error so callers can fall back gracefully.
*/
private _readZimMetadata(zimFilePath: string): Partial<KiwixBook> | null {
private async _readZimMetadata(zimFilePath: string): Promise<Partial<KiwixBook> | null> {
try {
if (!(await isValidZimFile(zimFilePath))) {
logger.warn(`[KiwixLibraryService] Skipping invalid/corrupted ZIM file: ${zimFilePath}`)
return null
}
const archive = new Archive(zimFilePath)
const getMeta = (key: string): string | undefined => {
@ -197,17 +201,22 @@ export class KiwixLibraryService {
const excludeSet = new Set(opts?.excludeFilenames ?? [])
const zimFiles = entries.filter((name) => name.endsWith('.zim') && !excludeSet.has(name))
const books: KiwixBook[] = zimFiles.map((filename) => {
const meta = this._readZimMetadata(join(dirPath, filename))
const books: KiwixBook[] = []
for (const filename of zimFiles) {
const meta = await this._readZimMetadata(join(dirPath, filename))
if (meta === null) {
logger.warn(`[KiwixLibraryService] Skipping unreadable ZIM file: ${filename}`)
continue
}
const containerPath = `${CONTAINER_DATA_PATH}/${filename}`
return {
books.push({
...meta,
// Override fields that must be derived locally, not from ZIM metadata
id: meta?.id ?? filename.slice(0, -4),
path: containerPath,
title: meta?.title ?? this._filenameToTitle(filename),
}
})
})
}
const xml = this._buildXml(books)
await this._atomicWrite(xml)
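`_buildXml` and `_atomicWrite` are referenced here but fall outside this hunk. The usual shape of the atomic write (temp file in the same directory, then rename) looks roughly like this; the temp-file naming is an assumption:

import { writeFile, rename } from 'node:fs/promises'
import { dirname, join } from 'node:path'

// Sketch only: the real _atomicWrite may pick a different temp naming scheme.
async function atomicWriteSketch(targetPath: string, contents: string): Promise<void> {
  const tmpPath = join(dirname(targetPath), `.${Date.now()}.library.xml.tmp`)
  await writeFile(tmpPath, contents, 'utf8')
  // rename() within the same filesystem is atomic, so readers (kiwix-serve)
  // never observe a half-written library file.
  await rename(tmpPath, targetPath)
}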
@ -239,7 +248,12 @@ export class KiwixLibraryService {
}
const fullPath = join(process.cwd(), ZIM_STORAGE_PATH, zimFilename)
const meta = this._readZimMetadata(fullPath)
const meta = await this._readZimMetadata(fullPath)
if (meta === null) {
logger.error(`[KiwixLibraryService] Cannot add ${zimFilename}: file is invalid or corrupted.`)
return
}
existingBooks.push({
...meta,

View File

@ -16,10 +16,35 @@ import {
import { join, resolve, sep } from 'path'
import urlJoin from 'url-join'
import { RunDownloadJob } from '#jobs/run_download_job'
import { RunExtractPmtilesJob } from '#jobs/run_extract_pmtiles_job'
import logger from '@adonisjs/core/services/logger'
import { assertNotPrivateUrl } from '#validators/common'
import InstalledResource from '#models/installed_resource'
import { CollectionManifestService } from './collection_manifest_service.js'
import type { CollectionWithStatus, MapsSpec } from '../../types/collections.js'
import type { Country, CountryCode, CountryGroup, MapExtractPreflight } from '../../types/maps.js'
import {
EXTRACT_DEFAULT_MAX_ZOOM,
EXTRACT_MAX_ZOOM,
EXTRACT_MIN_ZOOM,
PMTILES_BINARY_PATH,
WORLD_BASEMAP_FILENAME,
WORLD_BASEMAP_MAX_ZOOM,
WORLD_BASEMAP_SOURCE_NAME,
buildPmtilesExtractArgs,
} from '../../constants/map_regions.js'
import { CountriesService } from './countries_service.js'
import { execFile } from 'child_process'
import { createHash, randomBytes } from 'crypto'
import { tmpdir } from 'os'
import { promisify } from 'util'
const execFileAsync = promisify(execFile)
const DRY_RUN_TIMEOUT_MS = 60_000
const DRY_RUN_MAX_BUFFER = 256 * 1024
// Real extract of z0-5 world tiles; generous to tolerate slow/metered links
// since a failure leaves the map grey for uncovered regions.
const WORLD_BASEMAP_EXTRACT_TIMEOUT_MS = 5 * 60_000
const PROTOMAPS_BUILDS_METADATA_URL = 'https://build-metadata.protomaps.dev/builds.json'
const PROTOMAPS_BUILD_BASE_URL = 'https://build.protomaps.com'
@ -52,10 +77,15 @@ export class MapService implements IMapService {
private readonly baseAssetsTarFile = 'base-assets.tar.gz'
private readonly baseDirPath = join(process.cwd(), this.mapStoragePath)
private baseAssetsExistCache: boolean | null = null
private worldBasemapReady = false
private worldBasemapInFlight: Promise<void> | null = null
async listRegions() {
const files = (await this.listAllMapStorageItems()).filter(
(item) => item.type === 'file' && item.name.endsWith('.pmtiles')
(item) =>
item.type === 'file' &&
item.name.endsWith('.pmtiles') &&
item.name !== WORLD_BASEMAP_FILENAME
)
return {
@ -119,6 +149,13 @@ export class MapService implements IMapService {
const downloadFilenames: string[] = []
for (const resource of toDownload) {
try {
assertNotPrivateUrl(resource.url)
} catch {
logger.warn(`[MapService] Blocked download from private/loopback URL: ${resource.url}`)
continue
}
const existing = await RunDownloadJob.getActiveByUrl(resource.url)
if (existing) {
logger.warn(`[MapService] Download already in progress for URL ${resource.url}, skipping.`)
@ -244,6 +281,7 @@ export class MapService implements IMapService {
url: string
): Promise<{ filename: string; size: number } | { message: string }> {
try {
assertNotPrivateUrl(url)
const parsed = new URL(url)
if (!parsed.pathname.endsWith('.pmtiles')) {
throw new Error(`Invalid PMTiles file URL: ${url}. URL must end with .pmtiles`)
@ -267,7 +305,8 @@ export class MapService implements IMapService {
return { filename, size }
} catch (error: any) {
return { message: `Preflight check failed: ${error.message}` }
logger.error({ err: error }, '[MapService] Preflight check failed for URL')
return { message: 'Preflight check failed. Please verify the URL is valid and accessible.' }
}
}
@ -317,11 +356,76 @@ export class MapService implements IMapService {
async ensureBaseAssets(): Promise<boolean> {
const exists = await this.checkBaseAssetsExist()
if (exists) {
return true
if (!exists) {
const downloaded = await this.downloadBaseAssets()
if (!downloaded) return false
}
return await this.downloadBaseAssets()
try {
await this.ensureWorldBasemap()
} catch (err) {
logger.warn(`[MapService] World basemap setup failed, continuing without it: ${err}`)
}
return true
}
/**
* Extract a low-zoom global basemap once so the map isn't grey outside a
* regional extract's polygon. Cheap (~15 MB, a handful of HTTP range
* requests) and layered underneath regional sources at render time.
*
* Memoizes success in-process, and de-duplicates concurrent callers via a
* shared in-flight promise so two simultaneous `/maps` requests on a cold
* start don't both launch `pmtiles extract` against the same output path.
*/
private async ensureWorldBasemap(): Promise<void> {
if (this.worldBasemapReady) return
if (this.worldBasemapInFlight) return this.worldBasemapInFlight
this.worldBasemapInFlight = this._setupWorldBasemap().finally(() => {
this.worldBasemapInFlight = null
})
return this.worldBasemapInFlight
}
private async _setupWorldBasemap(): Promise<void> {
const basePath = resolve(join(this.baseDirPath, 'pmtiles'))
const filepath = resolve(join(basePath, WORLD_BASEMAP_FILENAME))
if (!filepath.startsWith(basePath + sep)) {
throw new Error('Invalid world basemap path')
}
await ensureDirectoryExists(basePath)
const existing = await getFileStatsIfExists(filepath)
if (existing && Number(existing.size) > 0) {
this.worldBasemapReady = true
return
}
const info = await this.getGlobalMapInfo()
const args = buildPmtilesExtractArgs({
sourceUrl: info.url,
outputFilepath: filepath,
maxzoom: WORLD_BASEMAP_MAX_ZOOM,
downloadThreads: 4,
})
logger.info(
`[MapService] Extracting world basemap (z0-${WORLD_BASEMAP_MAX_ZOOM}) from ${info.url}`
)
try {
await execFileAsync(PMTILES_BINARY_PATH, args, {
timeout: WORLD_BASEMAP_EXTRACT_TIMEOUT_MS,
maxBuffer: DRY_RUN_MAX_BUFFER,
})
this.worldBasemapReady = true
} catch (err: any) {
await deleteFileIfExists(filepath)
throw new Error(
`pmtiles extract for world basemap failed: ${err.message}. stderr: ${err.stderr ?? ''}`
)
}
}
private async checkBaseAssetsExist(useCache: boolean = true): Promise<boolean> {
@ -357,6 +461,19 @@ export class MapService implements IMapService {
const sources: BaseStylesFile['sources'][] = []
const baseUrl = this.getPublicFileBaseUrl(host, 'pmtiles', protocol)
// World basemap goes first so its layers render underneath regional extracts.
// Only emitted when ensureWorldBasemap() succeeded — otherwise the style would
// reference a file that doesn't exist and produce 404s on every tile request.
if (this.worldBasemapReady) {
const worldSource: BaseStylesFile['sources'] = {}
worldSource[WORLD_BASEMAP_SOURCE_NAME] = {
type: 'vector',
attribution: PMTILES_ATTRIBUTION,
url: `pmtiles://${urlJoin(baseUrl, WORLD_BASEMAP_FILENAME)}`,
}
sources.push(worldSource)
}
for (const region of regions) {
if (region.type === 'file' && region.name.endsWith('.pmtiles')) {
// Strip .pmtiles and date suffix (e.g. "alaska_2025-12" -> "alaska") for stable source names
@ -479,12 +596,206 @@ export class MapService implements IMapService {
}
}
async listCountries(): Promise<Country[]> {
return CountriesService.getInstance().list()
}
async listCountryGroups(): Promise<CountryGroup[]> {
return CountriesService.getInstance().listGroups()
}
async extractPreflight(params: {
countries: CountryCode[]
maxzoom?: number
}): Promise<MapExtractPreflight> {
this.validateMaxzoom(params.maxzoom)
const countries = await CountriesService.getInstance().resolveCodes(params.countries)
const regionFilepath = await CountriesService.getInstance().writeRegionFile(countries)
const info = await this.getGlobalMapInfo()
return this.runDryRun(info, regionFilepath, params.maxzoom)
}
private async runDryRun(
info: { url: string; date: string; key: string },
regionFilepath: string,
maxzoom?: number
): Promise<MapExtractPreflight> {
const dryRunOutput = join(tmpdir(), `pmtiles-dry-run-${randomBytes(6).toString('hex')}.pmtiles`)
const args = buildPmtilesExtractArgs({
sourceUrl: info.url,
outputFilepath: dryRunOutput,
regionFilepath,
maxzoom,
dryRun: true,
})
let stdout = ''
let stderr = ''
try {
const result = await execFileAsync(PMTILES_BINARY_PATH, args, {
timeout: DRY_RUN_TIMEOUT_MS,
maxBuffer: DRY_RUN_MAX_BUFFER,
})
stdout = result.stdout
stderr = result.stderr
} catch (err: any) {
throw new Error(
`pmtiles extract --dry-run failed: ${err.message}. stderr: ${err.stderr ?? ''}`
)
}
const parsed = this.parseDryRunOutput(stdout + '\n' + stderr)
return {
tiles: parsed.tiles,
bytes: parsed.bytes,
source: { url: info.url, date: info.date, key: info.key },
}
}
async extractRegion(params: {
countries: CountryCode[]
maxzoom?: number
label?: string
estimatedBytes?: number
}): Promise<{ filename: string; jobId?: string }> {
this.validateMaxzoom(params.maxzoom)
const countriesService = CountriesService.getInstance()
const countries = await countriesService.resolveCodes(params.countries)
const regionFilepath = await countriesService.writeRegionFile(countries)
const maxzoom = params.maxzoom ?? EXTRACT_DEFAULT_MAX_ZOOM
const [baseAssetsExist, info, groups] = await Promise.all([
this.ensureBaseAssets(),
this.getGlobalMapInfo(),
countriesService.listGroups(),
])
if (!baseAssetsExist) {
throw new Error(
'Base map assets are missing and could not be downloaded. Please check your connection and try again.'
)
}
const groupMatch = findExactGroupMatch(countries, groups)
const slug = this.buildRegionSlug(countries, groupMatch)
const dateSlug = info.key.replace('.pmtiles', '')
const filename = `${slug}_${dateSlug}_z${maxzoom}.pmtiles`
const basePath = resolve(join(this.baseDirPath, 'pmtiles'))
const filepath = resolve(join(basePath, filename))
if (!filepath.startsWith(basePath + sep)) {
throw new Error('Invalid filename')
}
let estimatedBytes = params.estimatedBytes ?? 0
if (estimatedBytes === 0) {
try {
const preflight = await this.runDryRun(info, regionFilepath, maxzoom)
estimatedBytes = preflight.bytes
} catch (err) {
logger.warn(`[MapService] extractRegion preflight failed, proceeding without estimate: ${err}`)
}
}
const title = params.label ?? this.buildRegionTitle(countries, groupMatch)
const result = await RunExtractPmtilesJob.dispatch({
sourceUrl: info.url,
outputFilepath: filepath,
regionFilepath,
maxzoom,
estimatedBytes,
filetype: 'map',
title,
resourceMetadata: {
resource_id: slug,
version: dateSlug,
collection_ref: null,
},
})
if (!result.job) {
throw new Error('Failed to dispatch extract job')
}
logger.info(
`[MapService] Dispatched extract job ${result.job.id} for ${filename} ` +
`(countries=[${countries.join(',')}] maxzoom=${maxzoom} est=${estimatedBytes} bytes)`
)
return {
filename,
jobId: result.job.id,
}
}
private buildRegionSlug(countries: CountryCode[], groupMatch: CountryGroup | null): string {
if (groupMatch) return groupMatch.id
if (countries.length === 1) return countries[0].toLowerCase()
const hash = createHash('sha1').update(countries.join(',')).digest('hex').slice(0, 8)
return `custom-${hash}`
}
private buildRegionTitle(countries: CountryCode[], groupMatch: CountryGroup | null): string {
if (groupMatch) return groupMatch.name
if (countries.length === 1) return countries[0]
if (countries.length <= 3) return countries.join(', ')
return `${countries.slice(0, 2).join(', ')} +${countries.length - 2} more`
}
private validateMaxzoom(maxzoom: number | undefined): void {
if (typeof maxzoom !== 'number') return
if (
!Number.isInteger(maxzoom) ||
maxzoom < EXTRACT_MIN_ZOOM ||
maxzoom > EXTRACT_MAX_ZOOM
) {
throw new Error(
`maxzoom must be an integer in [${EXTRACT_MIN_ZOOM}, ${EXTRACT_MAX_ZOOM}]`
)
}
}
// go-pmtiles output format isn't stable across versions — parse loosely and
// fall back to zeros. The extract can still proceed without an estimate.
private parseDryRunOutput(output: string): { tiles: number; bytes: number } {
let bytes = 0
let tiles = 0
const byteLine = output.match(/archive\s+size\s+of\s+([\d,.]+)\s*(B|KB|MB|GB|TB|bytes?)?/i)
if (byteLine) {
const raw = parseFloat(byteLine[1].replace(/,/g, ''))
const unit = (byteLine[2] ?? 'B').toUpperCase()
const multipliers: Record<string, number> = {
B: 1,
BYTE: 1,
BYTES: 1,
KB: 1_000,
MB: 1_000_000,
GB: 1_000_000_000,
TB: 1_000_000_000_000,
}
bytes = Math.round(raw * (multipliers[unit] ?? 1))
}
const tileLine = output.match(/(?:tiles\s+to\s+extract|tiles)[^\d]*([\d,]+)/i)
if (tileLine) {
tiles = parseInt(tileLine[1].replace(/,/g, ''), 10) || 0
}
return { tiles, bytes }
}
async delete(file: string): Promise<void> {
let fileName = file
if (!fileName.endsWith('.pmtiles')) {
fileName += '.pmtiles'
}
if (fileName === WORLD_BASEMAP_FILENAME) {
throw new Error('The world basemap cannot be deleted')
}
const basePath = resolve(join(this.baseDirPath, 'pmtiles'))
const fullPath = resolve(join(basePath, fileName))
@ -563,3 +874,16 @@ export class MapService implements IMapService {
return baseUrl
}
}
function findExactGroupMatch(
countries: CountryCode[],
groups: CountryGroup[]
): CountryGroup | null {
return (
groups.find(
(g) =>
g.countries.length === countries.length &&
g.countries.every((c, i) => c === countries[i])
) ?? null
)
}
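Both the dry-run preflight and the real extract go through `buildPmtilesExtractArgs` from constants/map_regions.js, which is not shown in this diff. A sketch of the argument builder, assuming go-pmtiles' `extract` subcommand flags (--region, --maxzoom, --dry-run, --download-threads); verify the flag names against the pinned pmtiles binary before relying on this:

// Sketch only: flag names are assumptions about the go-pmtiles CLI, not copied from this repo.
interface ExtractArgsSketch {
  sourceUrl: string
  outputFilepath: string
  regionFilepath?: string
  maxzoom?: number
  downloadThreads?: number
  dryRun?: boolean
}

function buildPmtilesExtractArgsSketch(opts: ExtractArgsSketch): string[] {
  const args = ['extract', opts.sourceUrl, opts.outputFilepath]
  if (opts.regionFilepath) args.push(`--region=${opts.regionFilepath}`)
  if (opts.maxzoom !== undefined) args.push(`--maxzoom=${opts.maxzoom}`)
  if (opts.downloadThreads) args.push(`--download-threads=${opts.downloadThreads}`)
  if (opts.dryRun) args.push('--dry-run')
  return args
}

Pairing dryRun: true with parseDryRunOutput() above yields the tile and byte estimate; the same builder without the flag drives the real extract and the world basemap setup.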

View File

@ -53,6 +53,7 @@ export class OllamaService {
private baseUrl: string | null = null
private initPromise: Promise<void> | null = null
private isOllamaNative: boolean | null = null
private activeDownloads: Map<string, Promise<{ success: boolean; message: string; retryable?: boolean }>> = new Map()
constructor() {}
@ -91,10 +92,46 @@ export class OllamaService {
/**
* Downloads a model from Ollama with progress tracking. Only works with Ollama backends.
* Use dispatchModelDownload() for background job processing where possible.
*
* @param signal Optional AbortSignal. When triggered, the underlying axios stream is cancelled
* and the method returns a non-retryable failure so callers can mark the job
* unrecoverable in BullMQ and avoid the 40-attempt retry storm.
* @param jobId Optional BullMQ job id included in progress broadcasts so the frontend can
* correlate Transmit events to a cancellable job.
*/
async downloadModel(
model: string,
progressCallback?: (percent: number) => void
progressCallback?: (
percent: number,
bytes?: { downloadedBytes: number; totalBytes: number }
) => void,
signal?: AbortSignal,
jobId?: string
): Promise<{ success: boolean; message: string; retryable?: boolean }> {
// Deduplicate concurrent downloads of the same model
const existing = this.activeDownloads.get(model)
if (existing) {
logger.info(`[OllamaService] Download already in progress for "${model}", waiting on existing download.`)
return existing
}
const downloadPromise = this._doDownloadModel(model, progressCallback, signal, jobId)
this.activeDownloads.set(model, downloadPromise)
try {
return await downloadPromise
} finally {
this.activeDownloads.delete(model)
}
}
private async _doDownloadModel(
model: string,
progressCallback?: (
percent: number,
bytes?: { downloadedBytes: number; totalBytes: number }
) => void,
signal?: AbortSignal,
jobId?: string
): Promise<{ success: boolean; message: string; retryable?: boolean }> {
await this._ensureDependencies()
if (!this.baseUrl) {
@ -121,15 +158,45 @@ export class OllamaService {
}
}
// Stream pull via Ollama native API
// Stream pull via Ollama native API. axios supports `signal` natively for AbortController
// integration — when triggered, the request errors with code 'ERR_CANCELED' which we detect
// in the catch block below to return a non-retryable cancel result.
const pullResponse = await axios.post(
`${this.baseUrl}/api/pull`,
{ model, stream: true },
{ responseType: 'stream', timeout: 0 }
{ responseType: 'stream', timeout: 0, signal }
)
// Ollama's pull API reports progress per-digest (each blob). A single model can contain
// multiple blobs (weights, tokenizer, template, etc.) and each is reported in turn.
// Aggregate across all digests so the UI shows a single monotonically-increasing total,
// matching the behavior of the content download progress (Active Downloads section).
const digestProgress = new Map<string, { completed: number; total: number }>()
// Throttle broadcasts to once per BROADCAST_THROTTLE_MS — Ollama can emit hundreds of
// progress events per second for fast connections, which would flood the Transmit SSE
// channel and cause jittery speed calculations on the frontend.
const BROADCAST_THROTTLE_MS = 500
let lastBroadcastAt = 0
await new Promise<void>((resolve, reject) => {
let buffer = ''
// If the abort fires after headers are received but mid-stream, axios's signal handling
// destroys the stream which surfaces as an 'error' event — wire the signal listener so
// the promise rejects promptly with a recognizable cancel reason.
const onAbort = () => {
const err: any = new Error('Download cancelled')
err.code = 'ERR_CANCELED'
pullResponse.data.destroy(err)
}
if (signal) {
if (signal.aborted) {
onAbort()
return
}
signal.addEventListener('abort', onAbort, { once: true })
}
pullResponse.data.on('data', (chunk: Buffer) => {
buffer += chunk.toString()
const lines = buffer.split('\n')
@ -138,23 +205,74 @@ export class OllamaService {
if (!line.trim()) continue
try {
const parsed = JSON.parse(line)
if (parsed.completed && parsed.total) {
const percent = parseFloat(((parsed.completed / parsed.total) * 100).toFixed(2))
this.broadcastDownloadProgress(model, percent)
if (progressCallback) progressCallback(percent)
if (parsed.completed && parsed.total && parsed.digest) {
// Update this digest's progress — take the max seen value so transient
// out-of-order updates don't make the aggregate jump backwards.
const existing = digestProgress.get(parsed.digest)
digestProgress.set(parsed.digest, {
completed: Math.max(existing?.completed ?? 0, parsed.completed),
total: Math.max(existing?.total ?? 0, parsed.total),
})
// Compute aggregate across all known blobs
let aggCompleted = 0
let aggTotal = 0
for (const { completed, total } of digestProgress.values()) {
aggCompleted += completed
aggTotal += total
}
const percent = aggTotal > 0
? parseFloat(((aggCompleted / aggTotal) * 100).toFixed(2))
: 0
// Throttle broadcasts. Always call the progressCallback though — the worker
// uses it to update job state in Redis, which should reflect the latest view.
const now = Date.now()
if (now - lastBroadcastAt >= BROADCAST_THROTTLE_MS) {
lastBroadcastAt = now
this.broadcastDownloadProgress(model, percent, jobId, {
downloadedBytes: aggCompleted,
totalBytes: aggTotal,
})
}
if (progressCallback) {
progressCallback(percent, {
downloadedBytes: aggCompleted,
totalBytes: aggTotal,
})
}
}
} catch {
// ignore parse errors on partial lines
}
}
})
pullResponse.data.on('end', resolve)
pullResponse.data.on('error', reject)
pullResponse.data.on('end', () => {
if (signal) signal.removeEventListener('abort', onAbort)
resolve()
})
pullResponse.data.on('error', (err: any) => {
if (signal) signal.removeEventListener('abort', onAbort)
reject(err)
})
})
logger.info(`[OllamaService] Model "${model}" downloaded successfully.`)
return { success: true, message: 'Model downloaded successfully.' }
} catch (error) {
// Detect axios cancel (signal-triggered abort). Don't broadcast an error event for
// user-initiated cancels — the cancel handler in DownloadService already broadcasts
// a cancelled state. Returning retryable: false prevents BullMQ retries.
const isCancelled =
axios.isCancel(error) ||
(error as any)?.code === 'ERR_CANCELED' ||
(error as any)?.name === 'CanceledError'
if (isCancelled) {
logger.info(`[OllamaService] Model "${model}" download cancelled by user.`)
return { success: false, message: 'Download cancelled', retryable: false }
}
const errorMessage = error instanceof Error ? error.message : String(error)
logger.error(
`[OllamaService] Failed to download model "${model}": ${errorMessage}`
@ -362,10 +480,21 @@ export class OllamaService {
}
try {
// Prefer Ollama native endpoint (supports batch input natively)
// Prefer Ollama native endpoint (supports batch input natively).
// Pass num_ctx explicitly so we don't depend on the embedding model's
// modelfile defaults. Some installs ship nomic-embed-text:v1.5 with
// num_ctx=2048, which our chunker (sized for ~1500 tokens) can exceed
// on dense content, causing "input length exceeds context length" errors.
// truncate:true is a runtime safety net for any chunk that still overshoots.
// 8192 matches nomic-embed-text:v1.5's RoPE-extrapolated max.
const response = await axios.post(
`${this.baseUrl}/api/embed`,
{ model, input },
{
model,
input,
truncate: true,
options: { num_ctx: 8192 },
},
{ timeout: 60000 }
)
// Some backends (e.g. LM Studio) return HTTP 200 for unknown endpoints with an incompatible
@ -628,10 +757,19 @@ export class OllamaService {
})
}
private broadcastDownloadProgress(model: string, percent: number) {
private broadcastDownloadProgress(
model: string,
percent: number,
jobId?: string,
bytes?: { downloadedBytes: number; totalBytes: number }
) {
// Conditional spread on jobId/bytes — Transmit's Broadcastable type rejects fields whose
// value is `undefined`, so we omit each key entirely when its value isn't available.
transmit.broadcast(BROADCAST_CHANNELS.OLLAMA_MODEL_DOWNLOAD, {
model,
percent,
...(jobId ? { jobId } : {}),
...(bytes ? { downloadedBytes: bytes.downloadedBytes, totalBytes: bytes.totalBytes } : {}),
timestamp: new Date().toISOString(),
})
logger.info(`[OllamaService] Download progress for model "${model}": ${percent}%`)
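For context, this is roughly how a BullMQ worker could consume the extended downloadModel() contract (signal + jobId) documented above. The `ollamaService` and `abortControllers` declarations exist only to keep the sketch self-contained, and UnrecoverableError is BullMQ's standard way to suppress retries:

import { Job, UnrecoverableError } from 'bullmq'

// Sketch only: mirrors the downloadModel(model, cb, signal, jobId) contract described above.
declare const ollamaService: {
  downloadModel(
    model: string,
    cb?: (percent: number) => void,
    signal?: AbortSignal,
    jobId?: string
  ): Promise<{ success: boolean; message: string; retryable?: boolean }>
}
declare const abortControllers: Map<string, AbortController>

async function processModelDownload(job: Job<{ modelName: string }>): Promise<void> {
  const jobId = job.id!.toString()
  const controller = new AbortController()
  abortControllers.set(jobId, controller)
  try {
    const result = await ollamaService.downloadModel(
      job.data.modelName,
      (percent) => void job.updateProgress(percent), // keep the BullMQ job record current
      controller.signal,
      jobId
    )
    if (!result.success && result.retryable === false) {
      // user cancel or other unrecoverable failure: stop BullMQ from retrying
      throw new UnrecoverableError(result.message)
    }
  } finally {
    abortControllers.delete(jobId)
  }
}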

View File

@ -52,14 +52,33 @@ export class RagService {
this.qdrantInitPromise = (async () => {
const qdrantUrl = await this.dockerService.getServiceURL(SERVICE_NAMES.QDRANT)
if (!qdrantUrl) {
throw new Error('Qdrant service is not installed or running.')
throw new Error('Qdrant vector database is offline. Restart the AI Assistant service in Settings to restore the Knowledge Base.')
}
this.qdrant = new QdrantClient({ url: qdrantUrl })
})()
})().catch((err) => {
this.qdrantInitPromise = null
this.qdrant = null
throw err
})
}
return this.qdrantInitPromise
}
public async checkQdrantHealth(): Promise<{ online: boolean; message?: string }> {
try {
await this._ensureDependencies()
await this.qdrant!.getCollections()
return { online: true }
} catch {
this.qdrant = null
this.qdrantInitPromise = null
return {
online: false,
message: 'Qdrant vector database is offline. Restart the AI Assistant service in Settings to restore the Knowledge Base.',
}
}
}
private async _ensureDependencies() {
if (!this.qdrant) {
await this._initializeQdrantClient()
@ -532,9 +551,12 @@ export class RagService {
}
}
// Count unique articles processed in this batch
// Count unique articles processed in this batch. hasMoreBatches gates on the article
// count — zimChunks.length counts section-level chunks (multiple per article under the
// 'structured' strategy), so comparing it to ZIM_BATCH_SIZE (an article limit) caps
// processing at the first batch for any real archive.
const articlesInBatch = new Set(zimChunks.map((c) => c.documentId)).size
const hasMoreBatches = zimChunks.length === ZIM_BATCH_SIZE
const hasMoreBatches = articlesInBatch >= ZIM_BATCH_SIZE
logger.info(
`[RAG] Successfully embedded ${totalChunks} total chunks from ${articlesInBatch} articles (hasMore: ${hasMoreBatches})`
@ -1013,6 +1035,16 @@ export class RagService {
* Retrieve all unique source files that have been stored in the knowledge base.
* @returns Array of unique full source paths
*/
public async hasDocuments(): Promise<boolean> {
try {
await this._ensureCollection(RagService.CONTENT_COLLECTION_NAME, RagService.EMBEDDING_DIMENSION)
const collectionInfo = await this.qdrant!.getCollection(RagService.CONTENT_COLLECTION_NAME)
return (collectionInfo.points_count ?? 0) > 0
} catch {
return false
}
}
public async getStoredFiles(): Promise<string[]> {
try {
await this._ensureCollection(
@ -1242,8 +1274,12 @@ export class RagService {
logger.info(`[RAG] Found ${sourcesInQdrant.size} unique sources in Qdrant`)
// Find files that are in storage but not in Qdrant
const filesToEmbed = filesInStorage.filter((filePath) => !sourcesInQdrant.has(filePath))
// Find files that are in storage, not already in Qdrant, and have an embeddable type.
// Non-embeddable files (e.g. kiwix-library.xml in /storage/zim) would otherwise be
// dispatched to EmbedFileJob, fail with "Unsupported file type", and retry on every sync.
const filesToEmbed = filesInStorage.filter(
(filePath) => !sourcesInQdrant.has(filePath) && determineFileType(filePath) !== 'unknown'
)
logger.info(`[RAG] Found ${filesToEmbed.length} files that need embedding`)
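`determineFileType` comes from elsewhere in the codebase; the filter above only relies on it returning 'unknown' for files the embedder cannot handle. A sketch of that extension-based shape (the concrete type list is an assumption):

import { extname } from 'node:path'

// Sketch only: the real mapping lives outside this diff. 'unknown' is the value the
// sync filter above uses to skip non-embeddable files like kiwix-library.xml.
function determineFileTypeSketch(filePath: string): 'zim' | 'pdf' | 'text' | 'unknown' {
  switch (extname(filePath).toLowerCase()) {
    case '.zim':
      return 'zim'
    case '.pdf':
      return 'pdf'
    case '.txt':
    case '.md':
      return 'text'
    default:
      return 'unknown'
  }
}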

View File

@ -12,6 +12,7 @@ import {
} from '../../types/system.js'
import { SERVICE_NAMES } from '../../constants/service_names.js'
import { readFileSync } from 'node:fs'
import { readFile } from 'node:fs/promises'
import path, { join } from 'node:path'
import { getAllFilesystems, getFile } from '../utils/fs.js'
import axios from 'axios'
@ -72,6 +73,61 @@ export class SystemService {
return false
}
/**
* Probe Ollama startup logs for the canonical "inference compute" line that records
* which compute backend was selected. This catches silent CPU fallback (e.g. when
* /dev/kfd is mounted but ROCm initialization fails, or NVML dies after an update)
* which the older nvidia-smi exec probe could not detect.
*
* Returns the parsed library, GPU model name, and VRAM in MiB, or null when:
* - the Ollama container is not running
* - the line has not been emitted (Ollama still starting up)
* - logs show CPU-only operation (no GPU detected)
*/
async getOllamaInferenceComputeFromLogs(): Promise<{
library: 'CUDA' | 'ROCm'
name: string
vramMiB: number
} | null> {
try {
const containers = await this.dockerService.docker.listContainers({ all: false })
const ollamaContainer = containers.find((c) => c.Names.includes(`/${SERVICE_NAMES.OLLAMA}`))
if (!ollamaContainer) return null
const container = this.dockerService.docker.getContainer(ollamaContainer.Id)
const buf = (await container.logs({
stdout: true,
stderr: true,
tail: 500,
follow: false,
})) as unknown as Buffer
const logs = buf.toString('utf8')
const lines = logs.split('\n').filter((l) => l.includes('msg="inference compute"'))
if (lines.length === 0) return null
const lastLine = lines[lines.length - 1]
const libraryMatch = lastLine.match(/library=(CUDA|ROCm)/)
if (!libraryMatch) return null
const descMatch = lastLine.match(/description="([^"]+)"/)
const totalMatch = lastLine.match(/total="([0-9.]+)\s*GiB"/)
return {
library: libraryMatch[1] as 'CUDA' | 'ROCm',
name:
descMatch?.[1] ||
(libraryMatch[1] === 'CUDA' ? 'NVIDIA GPU' : 'AMD GPU'),
vramMiB: totalMatch ? Math.round(Number.parseFloat(totalMatch[1]) * 1024) : 0,
}
} catch (error) {
logger.warn(
`[SystemService] Failed to probe Ollama logs for inference compute line: ${error instanceof Error ? error.message : error}`
)
return null
}
}
async getNvidiaSmiInfo(): Promise<
| Array<{ vendor: string; model: string; vram: number }>
| { error: string }
@ -317,10 +373,14 @@ export class SystemService {
logger.error('Error reading disk info file:', error)
}
// GPU health tracking — detect when host has NVIDIA GPU but Ollama can't access it
// GPU health tracking — detect when host has a GPU runtime but Ollama can't access it.
// Primary probe: parse Ollama's "inference compute" startup log line for both NVIDIA
// and AMD. Secondary probe (NVIDIA only): nvidia-smi exec, retained as a fallback for
// hardware enrichment when log parsing has not yet captured a startup line.
let gpuHealth: GpuHealthStatus = {
status: 'no_gpu',
hasNvidiaRuntime: false,
hasRocmRuntime: false,
ollamaGpuAccessible: false,
}
@ -340,27 +400,51 @@ export class SystemService {
}
// If si.graphics() returned no controllers (common inside Docker),
// fall back to nvidia runtime + nvidia-smi detection
// fall back to runtime + Ollama log probe to figure out what's accessible.
if (!graphics.controllers || graphics.controllers.length === 0) {
const runtimes = dockerInfo.Runtimes || {}
if ('nvidia' in runtimes) {
gpuHealth.hasNvidiaRuntime = true
const nvidiaInfo = await this.getNvidiaSmiInfo()
if (Array.isArray(nvidiaInfo)) {
graphics.controllers = nvidiaInfo.map((gpu) => ({
model: gpu.model,
vendor: gpu.vendor,
bus: '',
vram: gpu.vram,
vramDynamic: false, // assume false here, we don't actually use this field for our purposes.
}))
gpuHealth.hasNvidiaRuntime = 'nvidia' in runtimes
// AMD doesn't register a Docker runtime. Detection sources, in priority order:
// 1. KV 'gpu.type' (set by DockerService._detectGPUType after first Ollama install)
// 2. Marker file at /app/storage/.nomad-gpu-type (written by install_nomad.sh)
// The marker file matters because the System page should reflect AMD presence
// even before AI Assistant has been installed for the first time.
let savedGpuType: string | null | undefined = await KVStore.getValue('gpu.type') as string | undefined
if (!savedGpuType) {
try {
savedGpuType = (await readFile('/app/storage/.nomad-gpu-type', 'utf8')).trim()
} catch {}
}
const amdEnabledRaw = await KVStore.getValue('ai.amdGpuAcceleration')
const amdAccelerationEnabled = String(amdEnabledRaw) !== 'false'
gpuHealth.hasRocmRuntime = savedGpuType === 'amd' && amdAccelerationEnabled
if (gpuHealth.hasNvidiaRuntime || gpuHealth.hasRocmRuntime) {
gpuHealth.gpuVendor = gpuHealth.hasNvidiaRuntime ? 'nvidia' : 'amd'
// Primary probe: Ollama log parsing — works for both vendors and catches silent fallback
const logInfo = await this.getOllamaInferenceComputeFromLogs()
if (logInfo) {
graphics.controllers = [
{
model: logInfo.name,
vendor: logInfo.library === 'CUDA' ? 'NVIDIA' : 'AMD',
bus: '',
vram: logInfo.vramMiB,
vramDynamic: false,
},
]
gpuHealth.status = 'ok'
gpuHealth.ollamaGpuAccessible = true
} else if (nvidiaInfo === 'OLLAMA_NOT_FOUND') {
// No local Ollama container — check if a remote Ollama URL is configured
const externalOllamaGpu = await this.getExternalOllamaGpuInfo()
if (externalOllamaGpu) {
graphics.controllers = externalOllamaGpu.map((gpu) => ({
} else if (gpuHealth.hasNvidiaRuntime) {
// NVIDIA secondary path: nvidia-smi exec preserves prior behavior when
// the log parser hasn't seen a startup line yet (e.g. log rotation,
// very fresh container). Distinguishes "no Ollama container" from
// "container exists but GPU broken".
const nvidiaInfo = await this.getNvidiaSmiInfo()
if (Array.isArray(nvidiaInfo)) {
graphics.controllers = nvidiaInfo.map((gpu) => ({
model: gpu.model,
vendor: gpu.vendor,
bus: '',
@ -369,25 +453,66 @@ export class SystemService {
}))
gpuHealth.status = 'ok'
gpuHealth.ollamaGpuAccessible = true
} else if (nvidiaInfo === 'OLLAMA_NOT_FOUND') {
const externalOllamaGpu = await this.getExternalOllamaGpuInfo()
if (externalOllamaGpu) {
graphics.controllers = externalOllamaGpu.map((gpu) => ({
model: gpu.model,
vendor: gpu.vendor,
bus: '',
vram: gpu.vram,
vramDynamic: false,
}))
gpuHealth.status = 'ok'
gpuHealth.ollamaGpuAccessible = true
} else {
gpuHealth.status = 'ollama_not_installed'
}
} else {
gpuHealth.status = 'ollama_not_installed'
const externalOllamaGpu = await this.getExternalOllamaGpuInfo()
if (externalOllamaGpu) {
graphics.controllers = externalOllamaGpu.map((gpu) => ({
model: gpu.model,
vendor: gpu.vendor,
bus: '',
vram: gpu.vram,
vramDynamic: false,
}))
gpuHealth.status = 'ok'
gpuHealth.ollamaGpuAccessible = true
} else {
gpuHealth.status = 'passthrough_failed'
logger.warn(
`NVIDIA runtime detected but GPU passthrough failed: ${typeof nvidiaInfo === 'string' ? nvidiaInfo : JSON.stringify(nvidiaInfo)}`
)
}
}
} else {
const externalOllamaGpu = await this.getExternalOllamaGpuInfo()
if (externalOllamaGpu) {
graphics.controllers = externalOllamaGpu.map((gpu) => ({
model: gpu.model,
vendor: gpu.vendor,
bus: '',
vram: gpu.vram,
vramDynamic: false,
}))
gpuHealth.status = 'ok'
gpuHealth.ollamaGpuAccessible = true
// AMD path: no nvidia-smi equivalent worth running — log parser is authoritative.
// Distinguish "Ollama not running" from "Ollama running but no GPU log line".
const containers = await this.dockerService.docker.listContainers({ all: false })
const ollamaRunning = containers.some((c) =>
c.Names.includes(`/${SERVICE_NAMES.OLLAMA}`)
)
if (!ollamaRunning) {
const externalOllamaGpu = await this.getExternalOllamaGpuInfo()
if (externalOllamaGpu) {
graphics.controllers = externalOllamaGpu.map((gpu) => ({
model: gpu.model,
vendor: gpu.vendor,
bus: '',
vram: gpu.vram,
vramDynamic: false,
}))
gpuHealth.status = 'ok'
gpuHealth.ollamaGpuAccessible = true
} else {
gpuHealth.status = 'ollama_not_installed'
}
} else {
gpuHealth.status = 'passthrough_failed'
logger.warn(
`NVIDIA runtime detected but GPU passthrough failed: ${typeof nvidiaInfo === 'string' ? nvidiaInfo : JSON.stringify(nvidiaInfo)}`
'AMD GPU detected but Ollama logs show no ROCm initialization — passthrough or HSA override may have failed'
)
}
}
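Collected from the branches above, the gpuHealth shape the UI receives looks roughly like this. Field names and status values are taken from this diff; the canonical definition lives in types/system and may carry more fields:

// Sketch assembled from this diff, not copied from types/system.
interface GpuHealthStatusSketch {
  status: 'no_gpu' | 'ok' | 'ollama_not_installed' | 'passthrough_failed'
  hasNvidiaRuntime: boolean
  hasRocmRuntime: boolean
  ollamaGpuAccessible: boolean
  gpuVendor?: 'nvidia' | 'amd'
}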

View File

@ -47,10 +47,10 @@ export class SystemUpdateService {
message: 'System update initiated. The admin container will restart during the process.',
}
} catch (error) {
logger.error('[SystemUpdateService]: Failed to request system update:', error)
logger.error({ err: error }, '[SystemUpdateService] Failed to request system update')
return {
success: false,
message: `Failed to request update: ${error.message}`,
message: 'Failed to request system update. Check server logs for details.',
}
}
}

View File

@ -5,6 +5,7 @@ import logger from '@adonisjs/core/services/logger'
import { ExtractZIMChunkingStrategy, ExtractZIMContentOptions, ZIMContentChunk, ZIMArchiveMetadata } from '../../types/zim.js'
import { randomUUID } from 'node:crypto'
import { access } from 'node:fs/promises'
import { isValidZimFile } from '../utils/fs.js'
export class ZIMExtractionService {
@ -51,7 +52,13 @@ export class ZIMExtractionService {
logger.error(`[ZIMExtractionService]: ZIM file not accessible: ${filePath}`)
throw new Error(`ZIM file not found or not accessible: ${filePath}`)
}
// Validate ZIM magic number before opening with native library.
// A corrupted file causes a native C++ abort that cannot be caught by JS.
if (!(await isValidZimFile(filePath))) {
throw new Error(`ZIM file is invalid or corrupted: ${filePath}`)
}
const archive = new Archive(filePath)
// Extract archive-level metadata once
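isValidZimFile (imported above from ../utils/fs.js) is not part of this hunk. A sketch of the magic-number check it is assumed to perform: read the first four header bytes and compare them to the ZIM magic number 72173914, stored little-endian, before the native Archive constructor ever touches the file:

import { open } from 'node:fs/promises'

// Sketch only: the real helper may also check file size or cache results.
async function isValidZimFileSketch(filePath: string): Promise<boolean> {
  const handle = await open(filePath, 'r')
  try {
    const header = Buffer.alloc(4)
    const { bytesRead } = await handle.read(header, 0, 4, 0)
    // 72173914 === 0x044D495A, the ZIM header magic
    return bytesRead === 4 && header.readUInt32LE(0) === 72173914
  } finally {
    await handle.close()
  }
}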
@ -209,7 +216,10 @@ export class ZIMExtractionService {
const sections: Array<{ heading: string; text: string; level: number }> = [];
let currentSection = { heading: 'Introduction', content: [] as string[], level: 2 };
$('body').children().each((_, element) => {
// Walk the full DOM rather than only direct children of <body>. Modern ZIMs (Devdocs,
// Wikipedia, FreeCodeCamp, etc.) wrap article content in a container div, which under
// .children() would be a single non-heading/non-paragraph element and yield zero sections.
$('body').find('h2, h3, h4, p, ul, ol, dl, table').each((_, element) => {
const $el = $(element);
const tagName = element.tagName?.toLowerCase();
@ -246,6 +256,20 @@ export class ZIMExtractionService {
});
}
// Fallback: if the selector walk produced no sections but the body has meaningful
// text (unusual structure, minimal markup), emit one section with the full body text
// so the article still contributes to the knowledge base.
if (sections.length === 0) {
const bodyText = $('body').text().replace(/\s+/g, ' ').trim();
if (bodyText.length > 0) {
sections.push({
heading: title || 'Content',
text: bodyText,
level: 2,
});
}
}
return {
title,
sections,

View File

@ -4,6 +4,7 @@ import {
RemoteZimFileEntry,
} from '../../types/zim.js'
import axios from 'axios'
import * as cheerio from 'cheerio'
import { XMLParser } from 'fast-xml-parser'
import { isRawListRemoteZimFilesResponse, isRawRemoteZimFileEntry } from '../../util/zim.js'
import logger from '@adonisjs/core/services/logger'
@ -27,6 +28,8 @@ import { SERVICE_NAMES } from '../../constants/service_names.js'
import { CollectionManifestService } from './collection_manifest_service.js'
import { KiwixLibraryService } from './kiwix_library_service.js'
import type { CategoryWithStatus } from '../../types/collections.js'
import CustomLibrarySource from '#models/custom_library_source'
import { assertNotPrivateUrl } from '#validators/common'
const ZIM_MIME_TYPES = ['application/x-zim', 'application/x-openzim', 'application/octet-stream']
const WIKIPEDIA_OPTIONS_URL = 'https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/collections/wikipedia.json'
@ -40,7 +43,21 @@ export class ZimService {
await ensureDirectoryExists(dirPath)
const all = await listDirectoryContents(dirPath)
const files = all.filter((item) => item.name.endsWith('.zim'))
const zimEntries = all.filter((item) => item.name.endsWith('.zim'))
const files = await Promise.all(
zimEntries.map(async (entry) => {
const filePath = entry.type === 'file' ? entry.key : join(dirPath, entry.name)
const stats = await getFileStatsIfExists(filePath)
return {
...entry,
title: null,
summary: null,
author: null,
size_bytes: stats ? Number(stats.size) : null,
}
})
)
return {
files,
@ -57,84 +74,105 @@ export class ZimService {
query?: string
}): Promise<ListRemoteZimFilesResponse> {
const LIBRARY_BASE_URL = 'https://browse.library.kiwix.org/catalog/v2/entries'
// Kiwix returns pages of content unaware of what the user has installed locally. When
// the installed set is large, a single 12-item Kiwix page can come back with everything
// already installed → 0 post-filter items → frontend deadlock (#731). Accumulate across
// upstream pages so we return a useful batch. Bounded by MAX_KIWIX_FETCHES so a heavily
// saturated install doesn't hang a single request; the frontend scroll loop + auto-fetch
// effect handle continuation.
const KIWIX_PAGE_SIZE = 60
const MAX_KIWIX_FETCHES = 5
const res = await axios.get(LIBRARY_BASE_URL, {
params: {
start: start,
count: count,
lang: 'eng',
...(query ? { q: query } : {}),
},
responseType: 'text',
})
const data = res.data
const parser = new XMLParser({
ignoreAttributes: false,
attributeNamePrefix: '',
textNodeName: '#text',
})
const result = parser.parse(data)
if (!isRawListRemoteZimFilesResponse(result)) {
throw new Error('Invalid response format from remote library')
}
const entries = result.feed.entry
? Array.isArray(result.feed.entry)
? result.feed.entry
: [result.feed.entry]
: []
const filtered = entries.filter((entry: any) => {
return isRawRemoteZimFileEntry(entry)
})
const mapped: (RemoteZimFileEntry | null)[] = filtered.map((entry: RawRemoteZimFileEntry) => {
const downloadLink = entry.link.find((link: any) => {
return (
typeof link === 'object' &&
'rel' in link &&
'length' in link &&
'href' in link &&
'type' in link &&
link.type === 'application/x-zim'
)
})
if (!downloadLink) {
return null
}
// downloadLink['href'] will end with .meta4, we need to remove that to get the actual download URL
const download_url = downloadLink['href'].substring(0, downloadLink['href'].length - 6)
const file_name = download_url.split('/').pop() || `${entry.title}.zim`
const sizeBytes = parseInt(downloadLink['length'], 10)
return {
id: entry.id,
title: entry.title,
updated: entry.updated,
summary: entry.summary,
size_bytes: sizeBytes || 0,
download_url: download_url,
author: entry.author.name,
file_name: file_name,
}
})
// Filter out any null entries (those without a valid download link)
// or files that already exist in the local storage
// Snapshot locally-installed files once — the filesystem won't change mid-request.
const existing = await this.list()
const existingKeys = new Set(existing.files.map((file) => file.name))
const withoutExisting = mapped.filter(
(entry): entry is RemoteZimFileEntry => entry !== null && !existingKeys.has(entry.file_name)
)
const accumulated: RemoteZimFileEntry[] = []
const seenIds = new Set<string>()
let currentStart = start
let totalResults = 0
for (let i = 0; i < MAX_KIWIX_FETCHES; i++) {
const res = await axios.get(LIBRARY_BASE_URL, {
params: {
start: currentStart,
count: KIWIX_PAGE_SIZE,
lang: 'eng',
...(query ? { q: query } : {}),
},
responseType: 'text',
})
const parsed = parser.parse(res.data)
if (!isRawListRemoteZimFilesResponse(parsed)) {
throw new Error('Invalid response format from remote library')
}
totalResults = parsed.feed.totalResults
const rawEntries = parsed.feed.entry
? Array.isArray(parsed.feed.entry)
? parsed.feed.entry
: [parsed.feed.entry]
: []
// Empty upstream response — bail even if totalResults suggests more (transient Kiwix
// hiccup or totalResults drift between pages). Prevents a pointless spin.
if (rawEntries.length === 0) break
// Advance by actual returned count, not requested count. Short pages at the tail
// would otherwise cause us to skip entries on the next fetch.
currentStart += rawEntries.length
for (const raw of rawEntries) {
if (!isRawRemoteZimFileEntry(raw)) continue
const entry = raw as RawRemoteZimFileEntry
const downloadLink = entry.link.find(
(link: any) =>
typeof link === 'object' &&
'rel' in link &&
'length' in link &&
'href' in link &&
'type' in link &&
link.type === 'application/x-zim'
)
if (!downloadLink) continue
// downloadLink['href'] ends with .meta4; strip that to get the actual .zim URL.
const download_url = downloadLink['href'].substring(0, downloadLink['href'].length - 6)
const file_name = download_url.split('/').pop() || `${entry.title}.zim`
if (existingKeys.has(file_name)) continue
if (seenIds.has(entry.id)) continue
seenIds.add(entry.id)
const sizeBytes = parseInt(downloadLink['length'], 10)
accumulated.push({
id: entry.id,
title: entry.title,
updated: entry.updated,
summary: entry.summary,
size_bytes: sizeBytes || 0,
download_url,
author: entry.author.name,
file_name,
})
}
if (accumulated.length >= count) break
if (currentStart >= totalResults) break
}
return {
items: withoutExisting,
has_more: result.feed.totalResults > start,
total_count: result.feed.totalResults,
items: accumulated,
has_more: currentStart < totalResults,
total_count: totalResults,
next_start: currentStart,
}
}
@ -552,25 +590,47 @@ export class ZimService {
}
async onWikipediaDownloadComplete(url: string, success: boolean): Promise<void> {
const filename = url.split('/').pop() || ''
const selection = await this.getWikipediaSelection()
if (!selection || selection.url !== url) {
logger.warn(`[ZimService] Wikipedia download complete callback for unknown URL: ${url}`)
return
// Determine which Wikipedia option this file belongs to by matching filename
let matchedOptionId: string | null = null
try {
const options = await this.getWikipediaOptions()
for (const opt of options) {
if (opt.url && opt.url.split('/').pop() === filename) {
matchedOptionId = opt.id
break
}
}
} catch {
// If we can't fetch options, try to continue with existing selection
}
if (success) {
// Update status to installed
selection.status = 'installed'
await selection.save()
// Update or create the selection record
// Match by filename (not URL) so mirror downloads are recognized
if (selection) {
selection.option_id = matchedOptionId || selection.option_id
selection.url = url
selection.filename = filename
selection.status = 'installed'
await selection.save()
} else {
await WikipediaSelection.create({
option_id: matchedOptionId || 'unknown',
url: url,
filename: filename,
status: 'installed',
})
}
logger.info(`[ZimService] Wikipedia download completed successfully: ${selection.filename}`)
logger.info(`[ZimService] Wikipedia download completed successfully: ${filename}`)
// Delete the old Wikipedia file if it exists and is different
// We need to find what was previously installed
// Delete old Wikipedia files (keep only the newly installed one)
const existingFiles = await this.list()
const wikipediaFiles = existingFiles.files.filter((f) =>
f.name.startsWith('wikipedia_en_') && f.name !== selection.filename
f.name.startsWith('wikipedia_en_') && f.name !== filename
)
for (const oldFile of wikipediaFiles) {
@ -582,10 +642,137 @@ export class ZimService {
}
}
} else {
// Download failed - keep the selection record but mark as failed
selection.status = 'failed'
await selection.save()
logger.error(`[ZimService] Wikipedia download failed for: ${selection.filename}`)
// Download failed - update selection if it matches this file
if (selection && (!selection.filename || selection.filename === filename)) {
selection.status = 'failed'
await selection.save()
logger.error(`[ZimService] Wikipedia download failed for: ${filename}`)
} else {
logger.error(`[ZimService] Wikipedia download failed for: ${filename} (no matching selection)`)
}
}
}
// Custom library source management
async listCustomLibraries(): Promise<CustomLibrarySource[]> {
return CustomLibrarySource.all()
}
async addCustomLibrary(name: string, baseUrl: string): Promise<CustomLibrarySource> {
const count = await CustomLibrarySource.query().count('* as total')
const total = Number(count[0].$extras.total)
if (total >= 10) {
throw new Error('Maximum of 10 custom libraries allowed')
}
// Ensure URL ends with /
const normalizedUrl = baseUrl.endsWith('/') ? baseUrl : baseUrl + '/'
return CustomLibrarySource.create({
name,
base_url: normalizedUrl,
})
}
async removeCustomLibrary(id: number): Promise<void> {
const source = await CustomLibrarySource.find(id)
if (!source) {
throw new Error('Custom library not found')
}
if (source.is_default) {
throw new Error('Cannot remove a built-in mirror')
}
await source.delete()
}
async browseLibraryUrl(url: string): Promise<{
directories: { name: string; url: string }[]
files: { name: string; url: string; size_bytes: number | null }[]
}> {
assertNotPrivateUrl(url)
const normalizedUrl = url.endsWith('/') ? url : url + '/'
const res = await axios.get(normalizedUrl, {
responseType: 'text',
timeout: 15000,
headers: {
'Accept': 'text/html',
},
})
const html: string = res.data
const directories: { name: string; url: string }[] = []
const files: { name: string; url: string; size_bytes: number | null }[] = []
const $ = cheerio.load(html)
$('a').each((_, el) => {
const href = el.attribs?.href
if (!href || href === '../' || href === './' || href === '/' || href.startsWith('?') || href.startsWith('#')) {
return
}
if (href.startsWith('/') || href.startsWith('http://') || href.startsWith('https://')) {
return
}
if (href.endsWith('/')) {
const dirName = decodeURIComponent(href.replace(/\/$/, ''))
directories.push({
name: dirName,
url: new URL(href, normalizedUrl).toString(),
})
return
}
if (href.endsWith('.zim')) {
const fileName = decodeURIComponent(href)
// Apache/Nginx autoindex put the date + size in the text node directly
// following </a> within a <pre>. Walk forward across text siblings until
// we find a parseable size token.
let trailingText = ''
let sibling = el.next
while (sibling && sibling.type === 'text') {
trailingText += sibling.data
if (/\n/.test(sibling.data)) break
sibling = sibling.next
}
files.push({
name: fileName,
url: new URL(href, normalizedUrl).toString(),
size_bytes: this._parseListingSize(trailingText),
})
}
})
directories.sort((a, b) => a.name.localeCompare(b.name))
files.sort((a, b) => a.name.localeCompare(b.name))
return { directories, files }
}
/**
* Parse a directory-listing size token out of the text that follows an anchor.
* Apache renders e.g. ` 2024-01-15 10:30 5.1G`; Nginx renders raw bytes.
* Returns bytes or null if no size token is found.
*/
private _parseListingSize(text: string): number | null {
// Skip the date/time columns; grab the last numeric token (with optional suffix)
// before a newline. Matches `5.1G`, `5368709120`, `1.2T`, etc.
const sizeMatch = /([\d.]+\s*[KMGT]?B?|\d+)\s*$/i.exec(text.split('\n')[0].trim())
if (!sizeMatch) return null
const sizeStr = sizeMatch[1].replace(/\s|B$/gi, '')
const num = parseFloat(sizeStr)
if (isNaN(num)) return null
if (/^\d+$/.test(sizeStr)) return num
const suffix = sizeStr.slice(-1).toUpperCase()
const multipliers: Record<string, number> = { K: 1024, M: 1024 ** 2, G: 1024 ** 3, T: 1024 ** 4 }
return multipliers[suffix] ? Math.round(num * multipliers[suffix]) : null
}
}
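A few sample autoindex rows and the byte counts `_parseListingSize` derives from the text trailing each `.zim` anchor (the rows are made up, but shaped like typical Apache and Nginx listings):

```ts
// Apache-style row: date, time, then a human-readable size token.
const apacheTail = '          2026-01-15 10:30  5.1G'
// -> Math.round(5.1 * 1024 ** 3) === 5476083302 bytes (~5.5 GB)

// Nginx-style row: raw byte count at the end of the line.
const nginxTail = '              15-Jan-2026 10:30      5368709120'
// -> 5368709120 (returned as-is)

// No numeric size token before the end of the line.
const noSizeTail = '          2026-01-15 10:30  -'
// -> null
```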

View File

@ -6,6 +6,7 @@ import axios from 'axios'
import { Transform } from 'stream'
import { deleteFileIfExists, ensureDirectoryExists, getFileStatsIfExists } from './fs.js'
import { createWriteStream } from 'fs'
import { rename } from 'fs/promises'
import path from 'path'
/**
@ -27,13 +28,16 @@ export async function doResumableDownload({
const dirname = path.dirname(filepath)
await ensureDirectoryExists(dirname)
// Check if partial file exists for resume
// Stage download to a .tmp file so consumers (e.g. Kiwix) never see a partial file
const tempPath = filepath + '.tmp'
// Check if partial .tmp file exists for resume
let startByte = 0
let appendMode = false
const existingStats = await getFileStatsIfExists(filepath)
const existingStats = await getFileStatsIfExists(tempPath)
if (existingStats && !forceNew) {
startByte = existingStats.size
startByte = Number(existingStats.size)
appendMode = true
}
@ -55,14 +59,24 @@ export async function doResumableDownload({
}
}
// If file is already complete and not forcing overwrite just return filepath
if (startByte === totalBytes && totalBytes > 0 && !forceNew) {
// If final file already exists at correct size, return early (idempotent)
const finalFileStats = await getFileStatsIfExists(filepath)
if (finalFileStats && Number(finalFileStats.size) === totalBytes && totalBytes > 0 && !forceNew) {
return filepath
}
// If server doesn't support range requests and we have a partial file, delete it
// If .tmp file is already at correct size (complete but never renamed), just rename it
if (startByte === totalBytes && totalBytes > 0 && !forceNew) {
await rename(tempPath, filepath)
if (onComplete) {
await onComplete(url, filepath)
}
return filepath
}
// If server doesn't support range requests and we have a partial .tmp file, delete it
if (!supportsRangeRequests && startByte > 0) {
await deleteFileIfExists(filepath)
await deleteFileIfExists(tempPath)
startByte = 0
appendMode = false
}
@ -72,17 +86,29 @@ export async function doResumableDownload({
headers.Range = `bytes=${startByte}-`
}
const response = await axios.get(url, {
responseType: 'stream',
headers,
signal,
timeout,
})
const fetchStream = (hdrs: Record<string, string>) =>
axios.get(url, { responseType: 'stream', headers: hdrs, signal, timeout })
let response = await fetchStream(headers)
if (response.status !== 200 && response.status !== 206) {
throw new Error(`Failed to download: HTTP ${response.status}`)
}
// If we requested a range but the server returned 200 (ignored the Range header),
// appending would corrupt the .tmp file — delete it and restart from byte 0.
if (headers.Range && response.status === 200) {
response.data.destroy()
await deleteFileIfExists(tempPath)
startByte = 0
appendMode = false
delete headers.Range
response = await fetchStream(headers)
if (response.status !== 200 && response.status !== 206) {
throw new Error(`Failed to download: HTTP ${response.status}`)
}
}
return new Promise((resolve, reject) => {
let downloadedBytes = startByte
let lastProgressTime = Date.now()
@ -131,11 +157,10 @@ export async function doResumableDownload({
},
})
const writeStream = createWriteStream(filepath, {
const writeStream = createWriteStream(tempPath, {
flags: appendMode ? 'a' : 'w',
})
// Handle errors and cleanup
const cleanup = (error?: Error) => {
clearStallTimer()
progressStream.destroy()
@ -149,7 +174,6 @@ export async function doResumableDownload({
response.data.on('error', cleanup)
progressStream.on('error', cleanup)
writeStream.on('error', cleanup)
writeStream.on('error', cleanup)
signal?.addEventListener('abort', () => {
cleanup(new Error('Download aborted'))
@ -157,6 +181,20 @@ export async function doResumableDownload({
writeStream.on('finish', async () => {
clearStallTimer()
try {
// Atomically move the completed .tmp file to the final path
await rename(tempPath, filepath)
} catch (renameError) {
// A parallel job may have completed the same file first — treat as success
// if the destination already exists at the expected size.
const existing = await getFileStatsIfExists(filepath)
if (existing && Number(existing.size) === totalBytes && totalBytes > 0) {
// fall through to resolve
} else {
reject(renameError)
return
}
}
if (onProgress) {
onProgress({
downloadedBytes,
@ -207,7 +245,7 @@ export async function doResumableDownloadWithRetry({
})
return result // return on success
} catch (error) {
} catch (error: any) {
attempt++
lastError = error as Error

View File

@ -1,4 +1,4 @@
import { mkdir, readdir, readFile, stat, unlink } from 'fs/promises'
import { mkdir, open, readdir, readFile, stat, unlink } from 'fs/promises'
import path, { join } from 'path'
import { FileEntry } from '../../types/files.js'
import { createReadStream } from 'fs'
@ -99,6 +99,28 @@ export async function getFileStatsIfExists(
}
}
/**
* Validates that a file has the ZIM magic number (0x44D495A).
* Must be called before passing a file to @openzim/libzim Archive,
* because a corrupted ZIM causes a native C++ abort that cannot be
* caught by JS try/catch.
*/
export async function isValidZimFile(filePath: string): Promise<boolean> {
let fh
try {
fh = await open(filePath, 'r')
const buf = Buffer.alloc(4)
const { bytesRead } = await fh.read(buf, 0, 4, 0)
if (bytesRead < 4) return false
// ZIM magic number: bytes 0x5A 0x49 0x4D 0x04 ("ZIM" + 0x04; little-endian value 0x044D495A)
return buf[0] === 0x5a && buf[1] === 0x49 && buf[2] === 0x4d && buf[3] === 0x04
} catch {
return false
} finally {
await fh?.close()
}
}
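A hedged usage sketch of the intended call pattern, gating the native reader behind the magic-number check; the `Archive` import path and constructor signature are assumptions, and only the guard itself comes from this file:

```ts
// Assumed import/constructor for the @openzim/libzim reader named in the docstring above.
import { Archive } from '@openzim/libzim'

async function openZimSafely(filePath: string) {
  if (!(await isValidZimFile(filePath))) {
    // Fail in JS-land instead of letting a corrupted header abort the whole process.
    throw new Error(`Not a valid ZIM file: ${filePath}`)
  }
  return new Archive(filePath)
}
```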
export async function deleteFileIfExists(path: string): Promise<void> {
try {
await unlink(path)

View File

@ -100,6 +100,7 @@ const resourceUpdateInfoBase = vine.object({
installed_version: vine.string().trim(),
latest_version: vine.string().trim().minLength(1),
download_url: vine.string().url({ require_tld: false }).trim(),
size_bytes: vine.number().positive().optional(),
})
export const applyContentUpdateValidator = vine.compile(resourceUpdateInfoBase)
@ -111,3 +112,31 @@ export const applyAllContentUpdatesValidator = vine.compile(
.minLength(1),
})
)
// --- Map extract (regional pmtiles download) ---
// ISO 3166-1 alpha-2, 2 letters. Loose regex; CountriesService.resolveCodes
// does the authoritative check against the polygon dataset.
const countryCodeSchema = vine
.string()
.trim()
.toUpperCase()
.regex(/^[A-Z]{2}$/)
const countriesArraySchema = vine.array(countryCodeSchema).minLength(1).maxLength(300)
export const mapExtractPreflightValidator = vine.compile(
vine.object({
countries: countriesArraySchema.clone(),
maxzoom: vine.number().min(0).max(15).optional(),
})
)
export const mapExtractValidator = vine.compile(
vine.object({
countries: countriesArraySchema.clone(),
maxzoom: vine.number().min(0).max(15).optional(),
label: vine.string().trim().minLength(1).maxLength(64).optional(),
estimatedBytes: vine.number().min(0).optional(),
})
)
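A hedged usage sketch of the extract validator; the payload values are made up, and `validate` is the standard method on a compiled VineJS validator:

```ts
// Mutations run before checks: trim + toUpperCase, then the alpha-2 regex and zoom range.
const data = await mapExtractValidator.validate({
  countries: ['us', 'ca'],
  maxzoom: 12,
})
// data.countries -> ['US', 'CA']
// 'usa' or '1x' is rejected by the regex here; a syntactically valid but unknown
// code like 'ZZ' passes and is only caught later by CountriesService.resolveCodes
// against the polygon dataset.
```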

View File

@ -7,3 +7,30 @@ export const listRemoteZimValidator = vine.compile(
query: vine.string().optional(),
})
)
export const addCustomLibraryValidator = vine.compile(
vine.object({
name: vine.string().trim().minLength(1).maxLength(100),
base_url: vine
.string()
.url({ require_tld: false })
.trim(),
})
)
export const browseLibraryValidator = vine.compile(
vine.object({
url: vine
.string()
.url({ require_tld: false })
.trim(),
})
)
export const idParamValidator = vine.compile(
vine.object({
params: vine.object({
id: vine.number(),
}),
})
)

View File

@ -3,6 +3,7 @@ import type { CommandOptions } from '@adonisjs/core/types/ace'
import { Worker } from 'bullmq'
import queueConfig from '#config/queue'
import { RunDownloadJob } from '#jobs/run_download_job'
import { RunExtractPmtilesJob } from '#jobs/run_extract_pmtiles_job'
import { DownloadModelJob } from '#jobs/download_model_job'
import { RunBenchmarkJob } from '#jobs/run_benchmark_job'
import { EmbedFileJob } from '#jobs/embed_file_job'
@ -126,6 +127,7 @@ export default class QueueWork extends BaseCommand {
const queues = new Map<string, string>()
handlers.set(RunDownloadJob.key, new RunDownloadJob())
handlers.set(RunExtractPmtilesJob.key, new RunExtractPmtilesJob())
handlers.set(DownloadModelJob.key, new DownloadModelJob())
handlers.set(RunBenchmarkJob.key, new RunBenchmarkJob())
handlers.set(EmbedFileJob.key, new EmbedFileJob())
@ -133,6 +135,7 @@ export default class QueueWork extends BaseCommand {
handlers.set(CheckServiceUpdatesJob.key, new CheckServiceUpdatesJob())
queues.set(RunDownloadJob.key, RunDownloadJob.queue)
queues.set(RunExtractPmtilesJob.key, RunExtractPmtilesJob.queue)
queues.set(DownloadModelJob.key, DownloadModelJob.queue)
queues.set(RunBenchmarkJob.key, RunBenchmarkJob.queue)
queues.set(EmbedFileJob.key, EmbedFileJob.queue)
@ -149,6 +152,9 @@ export default class QueueWork extends BaseCommand {
private getConcurrencyForQueue(queueName: string): number {
const concurrencyMap: Record<string, number> = {
[RunDownloadJob.queue]: 3,
// pmtiles extract hits the Protomaps CDN with many parallel range reads per job;
// cap concurrency at 2 so a second extract doesn't starve the first.
[RunExtractPmtilesJob.queue]: 2,
[DownloadModelJob.queue]: 2, // Lower concurrency for resource-intensive model downloads
[RunBenchmarkJob.queue]: 1, // Run benchmarks one at a time for accurate results
[EmbedFileJob.queue]: 2, // Lower concurrency for embedding jobs, can be resource intensive

View File

@ -0,0 +1,32 @@
export const PMTILES_BINARY_PATH = '/usr/local/bin/pmtiles'
// Clamp these so a user can't ask for nonsense that never extracts
export const EXTRACT_MIN_ZOOM = 0
export const EXTRACT_MAX_ZOOM = 15
export const EXTRACT_DEFAULT_MAX_ZOOM = 15
// Low-zoom global fallback extracted once during base-asset setup (~15 MB). Layered
// underneath regional extracts so the map isn't grey outside a region's polygon.
export const WORLD_BASEMAP_FILENAME = 'world.pmtiles'
export const WORLD_BASEMAP_MAX_ZOOM = 5
export const WORLD_BASEMAP_SOURCE_NAME = 'world'
export interface PmtilesExtractArgOptions {
sourceUrl: string
outputFilepath: string
regionFilepath?: string
maxzoom?: number
dryRun?: boolean
downloadThreads?: number
overfetch?: number
}
export function buildPmtilesExtractArgs(opts: PmtilesExtractArgOptions): string[] {
const args = ['extract', opts.sourceUrl, opts.outputFilepath]
if (opts.regionFilepath) args.push(`--region=${opts.regionFilepath}`)
if (typeof opts.maxzoom === 'number') args.push(`--maxzoom=${opts.maxzoom}`)
if (opts.dryRun) args.push('--dry-run')
if (typeof opts.downloadThreads === 'number') args.push(`--download-threads=${opts.downloadThreads}`)
if (typeof opts.overfetch === 'number') args.push(`--overfetch=${opts.overfetch}`)
return args
}
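Two hedged examples of the argv this helper produces; the source URL and output paths are placeholders, and only the flag layout comes from the function above:

```ts
// Regional extract (hypothetical paths).
buildPmtilesExtractArgs({
  sourceUrl: 'https://example-cdn.invalid/planet.pmtiles',
  outputFilepath: '/data/maps/us_20260115_z15.pmtiles',
  regionFilepath: '/tmp/us.geojson',
  maxzoom: 15,
  downloadThreads: 4,
})
// -> ['extract', 'https://example-cdn.invalid/planet.pmtiles', '/data/maps/us_20260115_z15.pmtiles',
//     '--region=/tmp/us.geojson', '--maxzoom=15', '--download-threads=4']

// Low-zoom world basemap described above: no region polygon, capped at z5.
buildPmtilesExtractArgs({
  sourceUrl: 'https://example-cdn.invalid/planet.pmtiles',
  outputFilepath: `/data/maps/${WORLD_BASEMAP_FILENAME}`,
  maxzoom: WORLD_BASEMAP_MAX_ZOOM,
})
// -> ['extract', 'https://example-cdn.invalid/planet.pmtiles', '/data/maps/world.pmtiles', '--maxzoom=5']
```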

View File

@ -0,0 +1,42 @@
import { BaseSchema } from '@adonisjs/lucid/schema'
export default class extends BaseSchema {
protected tableName = 'custom_library_sources'
async up() {
this.schema.createTable(this.tableName, (table) => {
table.increments('id').primary()
table.string('name', 100).notNullable()
table.string('base_url', 2048).notNullable()
table.boolean('is_default').notNullable().defaultTo(false)
table.timestamp('created_at').notNullable()
table.timestamp('updated_at').notNullable()
})
// Seed default Kiwix mirrors
const now = new Date().toISOString().slice(0, 19).replace('T', ' ')
const defaults = [
{ name: 'Debian CDN (Global)', base_url: 'https://cdimage.debian.org/mirror/kiwix.org/zim/' },
{ name: 'Your.org (US)', base_url: 'https://ftpmirror.your.org/pub/kiwix/zim/' },
{ name: 'FAU Erlangen (DE)', base_url: 'https://ftp.fau.de/kiwix/zim/' },
{ name: 'Dotsrc (DK)', base_url: 'https://mirrors.dotsrc.org/kiwix/zim/' },
{ name: 'MirrorService (UK)', base_url: 'https://www.mirrorservice.org/sites/download.kiwix.org/zim/' },
]
for (const d of defaults) {
await this.defer(async (db) => {
await db.table(this.tableName).insert({
name: d.name,
base_url: d.base_url,
is_default: true,
created_at: now,
updated_at: now,
})
})
}
}
async down() {
this.schema.dropTable(this.tableName)
}
}

View File

@ -57,6 +57,10 @@ export default class ServiceSeeder extends BaseSeeder {
PortBindings: { '6333/tcp': [{ HostPort: '6333' }], '6334/tcp': [{ HostPort: '6334' }] },
},
ExposedPorts: { '6333/tcp': {}, '6334/tcp': {} },
// Disable Qdrant's anonymous telemetry to telemetry.qdrant.io. NOMAD is offline-first
// and ships with zero telemetry by default — Qdrant's upstream default of enabled
// telemetry doesn't match that posture.
Env: ['QDRANT__TELEMETRY_DISABLED=true'],
}),
ui_location: '6333',
installed: false,

View File

@ -148,6 +148,15 @@ ZIM files provide offline Wikipedia, books, and other content via Kiwix.
| POST | `/api/maps/download-collection` | Download an entire collection by slug (async) |
| DELETE | `/api/maps/:filename` | Delete a local map file |
### Map Markers
| Method | Path | Description |
|--------|------|-------------|
| GET | `/api/maps/markers` | List map markers |
| POST | `/api/maps/markers` | Add map marker (body: `{"name": "Test Marker", "notes": "Example note", "longitude": 0.0, "latitude": 0.0, "color": "yellow", "marker_type": "pin"}`) |
| PATCH | `/api/maps/markers/{id}` | Update a map marker (body: `{"name": "Test Marker", "notes": "Example note", "longitude": 0.0, "latitude": 0.0, "color": "yellow", "marker_type": "pin"}`); fields that don't change can be omitted |
| DELETE | `/api/maps/markers/{id}` | Delete a map marker |
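A hedged example of creating a marker from a script or browser console; the body fields mirror the table above, and anything beyond the JSON content type (auth, error shape) is an assumption:

```ts
const res = await fetch('/api/maps/markers', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    name: 'Test Marker',
    notes: 'Example note',
    longitude: 0.0,
    latitude: 0.0,
    color: 'yellow',
    marker_type: 'pin',
  }),
})
if (!res.ok) throw new Error(`Marker create failed: HTTP ${res.status}`)
```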
---
## Downloads

View File

@ -0,0 +1,48 @@
# Community Add-Ons
Project N.O.M.A.D. ships with a curated set of built-in tools and content, but the community has started building add-ons that extend the platform with specialized offline content packs. These are third-party projects, not maintained by the N.O.M.A.D. team. Install them at your own discretion, and please direct any bugs or feature requests to the add-on's own repository.
Have you built a NOMAD add-on? Open an issue on the [Project N.O.M.A.D. GitHub repository](https://github.com/Crosstalk-Solutions/project-nomad/issues/new) or send us a note through the [contact form on projectnomad.us](https://www.projectnomad.us/contact), and we'll review it for inclusion on this page.
---
## ZIM Content Packs
ZIM content packs drop additional offline reference material into your existing Kiwix library. They typically ship with an `install.sh` script that downloads source material, builds a ZIM file with `zimwriterfs`, and registers it with your running Kiwix container.
### U.S. Military Field Manuals
**Repository:** [github.com/jrsphoto/ZIM-military-field-manuals](https://github.com/jrsphoto/ZIM-military-field-manuals)
Roughly 180 public-domain U.S. military field manuals covering field medicine, survival, combat first aid, map reading, and more. Built into a searchable ZIM that drops into your Kiwix library.
Final ZIM size is around 2 GB. The builder downloads about 2 GB of source PDFs from archive.org during the build.
### W3Schools Programming Archive
**Repository:** [github.com/kennethbrewer3/ZIM-w3schools-offline](https://github.com/kennethbrewer3/ZIM-w3schools-offline)
A full offline copy of the W3Schools programming tutorials, covering HTML, CSS, JavaScript, Python, SQL, and more. Good for learning to code, looking up syntax, or teaching programming in an environment without internet.
Final ZIM size is around 700 MB. The builder downloads about 6 GB of source files from a GitHub mirror during the build.
---
## Installing a Community Add-On
Each add-on has its own install instructions, but most ZIM packs follow the same shape:
1. Clone the add-on's repository onto your NOMAD host over SSH.
2. Check the README for required build dependencies. Most need `git`, `python3`, `unzip`, and `zim-tools`.
3. Run the included `install.sh` with a `--deploy` flag, pointing it at your Kiwix library path (`/opt/project-nomad/storage/zim`) and your Kiwix container name (`nomad_kiwix_server`).
4. The script builds the ZIM, copies it into your Kiwix library, registers it with Kiwix, and restarts the Kiwix container.
Once the script finishes, the new content will appear in your Information Library the next time you load it.
Expect the initial build to take anywhere from a few minutes to an hour or more depending on the add-on's size and your host's CPU.
---
## A Note on Support
These add-ons are community-built and community-maintained. If something goes wrong with an install script or the content inside a ZIM, please open an issue on the add-on's own repository rather than Project N.O.M.A.D.'s. We're happy to help if the issue is with NOMAD itself, for example if Kiwix isn't picking up a new ZIM after an install, but we can't maintain or support third-party content.

View File

@ -1,5 +1,59 @@
# Release Notes
## Unreleased
### Features
- **AI Assistant**: Added improved support for AMD GPU acceleration for Ollama via ROCm + HSA override. Thanks @chriscrosstalk for the contribution!
- **Content Explorer**: Added support for custom ZIM library sources and pre-seeded ZIM library mirrors in addition to the default Kiwix library. Thanks @chriscrosstalk for the contribution!
- **Content Manager**: Content update sizes and downloads are now properly displayed in Active Downloads with progress bars and friendly names. Thanks @chriscrosstalk for the contribution!
- **Maps**: Map regions can now be extracted and downloaded locally from PMTiles to avoid the need for a full global map download for users who only want specific regions. Thanks @bgauger for the contribution!
### Bug Fixes
- **API**: Compression is now skipped for Server-Sent Events (SSE) responses to prevent issues with streaming endpoints. Thanks @chriscrosstalk for the fix!
- **Maps**: Fixed logic issues with the global map banner display. Thanks @Gujiassh for the fix!
- **Maps**: The selected map file is now properly deleted after confirming the action in the UI. Thanks @cuyua9 for the fix!
- **System**: Fixed an issue where a pending update could still be indicated in the UI even after the system was updated successfully. Thanks @jakeaturner for the fix!
### Improvements
- **Build**: The Command Center image now uses the VERSION build arg to write `app/version.json` with the current version for improved version tracking and debugging, even in RC environments. Thanks @chriscrosstalk for the contribution!
- **Content Manager**: Added a sortable file size column to the ZIM files table in the Content Manager for easier management of storage space. Thanks @chriscrosstalk for the contribution!
- **Dependencies**: All package.json dependencies have been pinned to specific versions to ensure stability and reduce the risk of unexpected breaking changes/supply-chain compromises from upstream packages. Thanks @jakeaturner for the contribution!
- **Dependencies**: Updated various dependencies to close security vulnerabilities and improve stability
- **Docs**: Updated CONTRIBUTING.md to require an issue to be opened before submitting a PR for non-trivial changes to ensure proper discussion and review of proposed changes. Thanks @chriscrosstalk for the contribution!
- **Docs**: Added the map markers endpoints to the API reference documentation. Thanks @kennethbrewer3 for the contribution!
- **Docs**: Added a link to the new WSL2 install guide in the README and FAQ. Thanks @chriscrosstalk for the contribution!
- **Install**: The install script now warns loudly if the user is attempting to install on a non-x86_64/amd64 platform to prevent unsupported installations and potential issues. Thanks @chriscrosstalk for the contribution!
- **Maps**: The maps API endpoints now properly accept and validate notes, marker_type, and position data for map markers and persist them in the database for retrieval in the UI. Thanks @jrsphoto for the contribution!
- **Maps**: The current coordinates of the mouse pointer can now be displayed in the map viewer for easier navigation and exploration. Thanks @kennethbrewer3 for the contribution!
- **RAG**: NOMAD now properly passes `num_ctx` and truncation to the Ollama embedding endpoint to ensure that the context window of the model is best utilized for embeddings. Thanks @chriscrosstalk for the contribution!
- **RAG**: Added a manual start button for Qdrant and a self-healing mechanism for Qdrant's restart-policy to ensure that the vector database is running properly for embedding and retrieval tasks. Thanks @hestela for the contribution!
## Version 1.31.1 - April 21, 2026
### Features
### Bug Fixes
- **AI Assistant**: In-progress model downloads can now be cancelled properly and the progress UI now matches that of file downloads. Thanks @chriscrosstalk for the contribution!
- **AI Assistant**: Fixed an issue where the AI Assistant settings page could crash if a model object did not have a details property. Thanks @hestela for the fix!
- **AI Assistant**: Fixed an issue with non-embeddable files being queued for embedding and flooding logs with errors. Thanks @sbruschke for the bug report and @chriscrosstalk for the fix!
- **AI Assistant**: Fixed an issue with ZIM batch embedding using the wrong batch count and causing remaining batches to be skipped. Thanks @sbruschke for the bug report and @chriscrosstalk for the fix!
- **AI Assistant**: Fixed an issue with ZIM content extraction only extracting the first-level children of the article body and thus missing a lot of content. Thanks @sbruschke for the bug report and @chriscrosstalk for the fix!
- **Disk Collector**: Improved reporting for NFS mount stats and display in the UI. Thanks @bgauger and @bravosierra99 for the contribution!
- **Downloads**: Downloads are now staged to .tmp files and atomically renamed upon completion to prevent issues with incomplete/corrupt files. Thanks @artbird309 for the contribution!
- **Downloads**: Removed a duplicate error listener and improved stability when handling Range requests for file downloads. Thanks @jakeaturner for the contribution!
- **Downloads**: Added improved handling for corrupt ZIM file downloads and removed duplicate Ollama download logs. Thanks @aegisman for the contribution!
- **Security**: Closed a potential SSRF vulnerability in the map file download functionality by implementing stricter URL validation and blocking private IP ranges. Thanks @LuisMIguelFurlanettoSousa for the fix!
- **Security**: Sanitized error messages from the backend to prevent potential information disclosure. Thanks @LuisMIguelFurlanettoSousa for the fix!
- **UI**: Fixed an issue with broken pagination for the Content Explorer that could cause some users to see a "No records found" message indefinitely. Thanks @johno10661 for the bug report and @chriscrosstalk for the fix!
- **UI**: Fixed an issue where all storage devices could report as "NAS Storage" regardless of actual type. Thanks @bgauger for the fix!
### Improvements
- **AI Assistant**: Now uses the currently loaded model for query rewriting and chat title generation for improved performance and consistency. Thanks @hestela for the contribution!
- **AI Assistant**: When a remote Ollama URL is configured, the Command Center will now attempt to stop NOMAD's local Ollama container to free up resources and avoid confusion. Thanks @chriscrosstalk for the contribution!
- **Dependencies**: Updated various dependencies to close security vulnerabilities and improve stability
- **Docs**: Added a "Community Add-Ons" page to the documentation to highlight some of the amazing community contributions that have been made since launch. Thanks @chriscrosstalk for the contribution!
- **Privacy**: Added the appropriate environment variable to disable telemetry for the Qdrant container. Note that this will only take effect on new installations or if the Qdrant container is force re-installed on existing installations. Thanks @berkdamerc for the find and @chriscrosstalk for the contribution!
## Version 1.31.0 - April 3, 2026
### Features

View File

@ -1,50 +1,214 @@
import { useCallback, useRef, useState } from 'react'
import useOllamaModelDownloads from '~/hooks/useOllamaModelDownloads'
import HorizontalBarChart from './HorizontalBarChart'
import StyledSectionHeader from './StyledSectionHeader'
import { IconAlertTriangle } from '@tabler/icons-react'
import StyledModal from './StyledModal'
import { IconAlertTriangle, IconLoader2, IconX } from '@tabler/icons-react'
import api from '~/lib/api'
import { useModals } from '~/context/ModalContext'
import { formatBytes } from '~/lib/util'
interface ActiveModelDownloadsProps {
withHeader?: boolean
}
function formatSpeed(bytesPerSec: number): string {
if (bytesPerSec <= 0) return '0 B/s'
if (bytesPerSec < 1024) return `${Math.round(bytesPerSec)} B/s`
if (bytesPerSec < 1024 * 1024) return `${(bytesPerSec / 1024).toFixed(1)} KB/s`
return `${(bytesPerSec / (1024 * 1024)).toFixed(1)} MB/s`
}
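For reference, a few values `formatSpeed` produces:

```ts
formatSpeed(512)              // -> '512 B/s'
formatSpeed(1536)             // -> '1.5 KB/s'
formatSpeed(2.5 * 1024 ** 2)  // -> '2.5 MB/s'
formatSpeed(0)                // -> '0 B/s'
```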
const ActiveModelDownloads = ({ withHeader = false }: ActiveModelDownloadsProps) => {
const { downloads } = useOllamaModelDownloads()
const { downloads, removeDownload } = useOllamaModelDownloads()
const { openModal, closeAllModals } = useModals()
const [cancellingModels, setCancellingModels] = useState<Set<string>>(new Set())
// Track previous downloadedBytes for speed calculation — mirrors the approach in
// ActiveDownloads.tsx so content + model downloads feel identical.
const prevBytesRef = useRef<Map<string, { bytes: number; time: number }>>(new Map())
const speedRef = useRef<Map<string, number[]>>(new Map())
const getSpeed = useCallback((model: string, currentBytes?: number): number => {
if (!currentBytes || currentBytes <= 0) return 0
const prev = prevBytesRef.current.get(model)
const now = Date.now()
if (prev && prev.bytes > 0 && currentBytes > prev.bytes) {
const deltaBytes = currentBytes - prev.bytes
const deltaSec = (now - prev.time) / 1000
if (deltaSec > 0) {
const instantSpeed = deltaBytes / deltaSec
// Simple moving average (last 5 samples)
const samples = speedRef.current.get(model) || []
samples.push(instantSpeed)
if (samples.length > 5) samples.shift()
speedRef.current.set(model, samples)
const avg = samples.reduce((a, b) => a + b, 0) / samples.length
prevBytesRef.current.set(model, { bytes: currentBytes, time: now })
return avg
}
}
// Only set initial observation; never advance timestamp when bytes unchanged
if (!prev) {
prevBytesRef.current.set(model, { bytes: currentBytes, time: now })
}
return speedRef.current.get(model)?.at(-1) || 0
}, [])
const runCancel = async (download: { model: string; jobId?: string }) => {
// Defensive guard: stale broadcasts during a hot upgrade may not include jobId.
// Without it we have nothing to call the cancel API with.
if (!download.jobId) return
setCancellingModels((prev) => new Set(prev).add(download.model))
try {
await api.cancelDownloadJob(download.jobId)
// Optimistically clear the entry — the Transmit cancelled broadcast usually
// arrives within a second but we don't want to leave the row hanging if it doesn't.
removeDownload(download.model)
// Clean up speed tracking refs for this model
prevBytesRef.current.delete(download.model)
speedRef.current.delete(download.model)
} finally {
setCancellingModels((prev) => {
const next = new Set(prev)
next.delete(download.model)
return next
})
}
}
const confirmCancel = (download: { model: string; jobId?: string }) => {
if (!download.jobId) return
openModal(
<StyledModal
title="Cancel Download?"
onConfirm={() => {
closeAllModals()
runCancel(download)
}}
onCancel={closeAllModals}
open={true}
confirmText="Cancel Download"
cancelText="Keep Downloading"
>
<div className="space-y-3 text-text-primary">
<p>
Stop downloading <span className="font-mono font-semibold">{download.model}</span>?
</p>
<p className="text-sm text-text-muted">
Any data already downloaded will remain on disk. If you re-download
this model later, it will resume from where it left off rather than
starting over.
</p>
</div>
</StyledModal>,
'confirm-cancel-model-download-modal'
)
}
return (
<>
{withHeader && <StyledSectionHeader title="Active Model Downloads" className="mt-12 mb-4" />}
<div className="space-y-4">
{downloads && downloads.length > 0 ? (
downloads.map((download) => (
<div
key={download.model}
className={`bg-desert-white rounded-lg p-4 border shadow-sm hover:shadow-lg transition-shadow ${
download.error ? 'border-red-400' : 'border-desert-stone-light'
}`}
>
{download.error ? (
<div className="flex items-start gap-3">
<IconAlertTriangle className="text-red-500 flex-shrink-0 mt-0.5" size={20} />
<div>
<p className="font-medium text-text-primary">{download.model}</p>
<p className="text-sm text-red-600 mt-1">{download.error}</p>
downloads.map((download) => {
const isCancelling = cancellingModels.has(download.model)
const canCancel = !!download.jobId && !download.error
const speed = getSpeed(download.model, download.downloadedBytes)
const hasBytes = !!(download.downloadedBytes && download.totalBytes)
return (
<div
key={download.model}
className={`rounded-lg p-4 border shadow-sm hover:shadow-lg transition-shadow ${
download.error
? 'bg-surface-primary border-red-300'
: 'bg-surface-primary border-default'
}`}
>
{download.error ? (
<div className="flex items-center gap-2">
<IconAlertTriangle className="w-5 h-5 text-red-500 flex-shrink-0" />
<div className="flex-1 min-w-0">
<p className="text-sm font-medium text-text-primary truncate">
{download.model}
</p>
<p className="text-xs text-red-600 mt-0.5">{download.error}</p>
</div>
</div>
</div>
) : (
<HorizontalBarChart
items={[
{
label: download.model,
value: download.percent,
total: '100%',
used: `${download.percent.toFixed(1)}%`,
type: 'ollama-model',
},
]}
/>
)}
</div>
))
) : (
<div className="space-y-2">
{/* Title + Cancel button row */}
<div className="flex items-start justify-between gap-2">
<div className="flex-1 min-w-0">
<p className="font-semibold text-desert-green truncate">
{download.model}
</p>
<span className="text-xs px-1.5 py-0.5 rounded bg-desert-stone-lighter text-desert-stone-dark font-mono">
ollama
</span>
</div>
{canCancel && (
isCancelling ? (
<IconLoader2 className="w-4 h-4 text-text-muted animate-spin flex-shrink-0" />
) : (
<button
onClick={() => confirmCancel(download)}
className="flex-shrink-0 p-1 rounded hover:bg-red-100 transition-colors"
title="Cancel download"
>
<IconX className="w-4 h-4 text-text-muted hover:text-red-500" />
</button>
)
)}
</div>
{/* Size info */}
<div className="flex justify-between items-baseline text-sm text-text-muted font-mono">
<span>
{hasBytes
? `${formatBytes(download.downloadedBytes!, 1)} / ${formatBytes(download.totalBytes!, 1)}`
: `${download.percent.toFixed(1)}% / 100%`}
</span>
</div>
{/* Progress bar */}
<div className="relative">
<div className="h-6 bg-desert-green-lighter bg-opacity-20 rounded-lg border border-default overflow-hidden">
<div
className="h-full rounded-lg transition-all duration-1000 ease-out bg-desert-green"
style={{ width: `${download.percent}%` }}
/>
</div>
<div
className={`absolute top-1/2 -translate-y-1/2 font-bold text-xs ${
download.percent > 15
? 'left-2 text-white drop-shadow-md'
: 'right-2 text-desert-green'
}`}
>
{Math.round(download.percent)}%
</div>
</div>
{/* Status indicator */}
<div className="flex items-center gap-2">
<div className="w-2 h-2 rounded-full bg-green-500 animate-pulse" />
<span className="text-xs text-text-muted">
Downloading...{speed > 0 ? ` ${formatSpeed(speed)}` : ''}
</span>
</div>
</div>
)}
</div>
)
})
) : (
<p className="text-text-muted">No active model downloads</p>
)}

View File

@ -0,0 +1,414 @@
import { useEffect, useMemo, useRef, useState } from 'react'
import { useQuery } from '@tanstack/react-query'
import { IconCheck, IconSearch, IconX } from '@tabler/icons-react'
import StyledModal, { StyledModalProps } from './StyledModal'
import LoadingSpinner from './LoadingSpinner'
import api from '~/lib/api'
import { formatBytes } from '~/lib/util'
import classNames from '~/lib/classNames'
import {
EXTRACT_DEFAULT_MAX_ZOOM,
EXTRACT_MAX_ZOOM,
EXTRACT_MIN_ZOOM,
} from '../../constants/map_regions'
import type {
Country,
CountryCode,
CountryGroup,
MapExtractPreflight,
} from '../../types/maps'
export type CountryPickerModalProps = Omit<
StyledModalProps,
| 'onConfirm'
| 'open'
| 'confirmText'
| 'cancelText'
| 'confirmVariant'
| 'children'
| 'title'
| 'large'
> & {
onDownloadStart?: () => void
/** Filenames of pmtiles already on disk; used to badge already-installed countries. */
installedFilenames?: string[]
}
// Single-country extracts use the slug `{iso2 lowercase}_{dateSlug}_z{maxzoom}.pmtiles`,
// matching MapService.buildRegionSlug (which lowercases the alpha-2 country code).
// dateSlug comes from the upstream pmtiles key with `.pmtiles` stripped — currently
// YYYYMMDD but we accept any digits/dashes. Group / custom filenames don't reverse-map
// to country codes, so we skip them here.
const SINGLE_COUNTRY_FILENAME_RE = /^([a-z]{2})_[\w-]+_z\d+\.pmtiles$/
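For instance, with a hypothetical on-disk filename that follows the slug convention described above:

```ts
// 'us_20260115_z15.pmtiles' -> match[1] === 'us' -> badge country code 'US' as installed.
// A group/custom filename such as 'western_europe_20260115_z12.pmtiles' does not match
// the two-letter prefix and is simply skipped.
const match = SINGLE_COUNTRY_FILENAME_RE.exec('us_20260115_z15.pmtiles')
```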
const CountryPickerModal: React.FC<CountryPickerModalProps> = ({
onDownloadStart,
installedFilenames = [],
...modalProps
}) => {
const [selected, setSelected] = useState<Set<CountryCode>>(new Set())
const [search, setSearch] = useState('')
const [maxzoom, setMaxzoom] = useState<number>(EXTRACT_DEFAULT_MAX_ZOOM)
const [preflight, setPreflight] = useState<MapExtractPreflight | null>(null)
const [loading, setLoading] = useState(false)
const [downloading, setDownloading] = useState(false)
const [errorMessage, setErrorMessage] = useState<string | null>(null)
const preflightRequestIdRef = useRef(0)
const { data: countries = [], isLoading: countriesLoading } = useQuery({
queryKey: ['maps-countries'],
queryFn: () => api.listCountries(),
staleTime: Infinity,
})
const { data: groups = [] } = useQuery({
queryKey: ['maps-country-groups'],
queryFn: () => api.listCountryGroups(),
staleTime: Infinity,
})
const grouped = useMemo(() => {
const q = search.trim().toLowerCase()
const filtered = q
? countries.filter(
(c) => c.name.toLowerCase().includes(q) || c.code.toLowerCase().includes(q)
)
: countries
const buckets: Record<string, Country[]> = {}
for (const country of filtered) {
if (!buckets[country.continent]) buckets[country.continent] = []
buckets[country.continent].push(country)
}
return Object.entries(buckets).sort(([a], [b]) => a.localeCompare(b))
}, [countries, search])
const selectedCountries = useMemo(
() => countries.filter((c) => selected.has(c.code)),
[countries, selected]
)
const installedCountrySet = useMemo(() => {
const set = new Set<CountryCode>()
for (const filename of installedFilenames) {
const match = SINGLE_COUNTRY_FILENAME_RE.exec(filename)
if (match) set.add(match[1].toUpperCase() as CountryCode)
}
return set
}, [installedFilenames])
function toggleCountry(code: CountryCode) {
setSelected((prev) => {
const next = new Set(prev)
if (next.has(code)) next.delete(code)
else next.add(code)
return next
})
}
function toggleGroup(group: CountryGroup) {
setSelected((prev) => {
const next = new Set(prev)
const allIn = group.countries.every((c) => next.has(c))
if (allIn) {
group.countries.forEach((c) => next.delete(c))
} else {
group.countries.forEach((c) => next.add(c))
}
return next
})
}
function clearAll() {
setSelected(new Set())
}
// Auto-refresh the preflight whenever selection or maxzoom changes. Debounced
// so rapid multi-select clicks and slider drags collapse into a single CDN
// round-trip. Loading state only flips after the debounce expires so the UI
// stays interactive during the wait. Stale-safe via requestId so an earlier
// slow response can't clobber a later one.
useEffect(() => {
if (selected.size === 0) {
setPreflight(null)
setErrorMessage(null)
setLoading(false)
preflightRequestIdRef.current++
return
}
setErrorMessage(null)
const timer = setTimeout(async () => {
const requestId = ++preflightRequestIdRef.current
setLoading(true)
try {
const res = await api.extractMapPreflight({
countries: [...selected],
maxzoom,
})
if (requestId !== preflightRequestIdRef.current) return
if (!res) throw new Error('Preflight returned no data')
setPreflight(res)
} catch (err: any) {
if (requestId !== preflightRequestIdRef.current) return
console.error('Preflight failed:', err)
setErrorMessage(err?.message ?? 'Estimate failed')
} finally {
if (requestId === preflightRequestIdRef.current) setLoading(false)
}
}, 1500)
return () => clearTimeout(timer)
}, [selected, maxzoom])
async function startDownload() {
if (selected.size === 0) {
setErrorMessage('Pick at least one country before downloading.')
return
}
if (loading || !preflight) {
setErrorMessage('Still estimating size — hold on a moment.')
return
}
try {
setDownloading(true)
setErrorMessage(null)
await api.extractMapRegion({
countries: [...selected],
maxzoom,
estimatedBytes: preflight?.bytes,
})
onDownloadStart?.()
} catch (err: any) {
console.error('Extract dispatch failed:', err)
setErrorMessage(err?.message ?? 'Download failed')
} finally {
setDownloading(false)
}
}
return (
<StyledModal
{...modalProps}
title="Download map by country or region"
open={true}
confirmText="Start Download"
confirmIcon="IconDownload"
cancelText="Cancel"
confirmVariant="primary"
confirmLoading={loading || downloading}
cancelLoading={loading || downloading}
onConfirm={startDownload}
large
>
<div className="flex flex-col text-left gap-4 min-h-[60vh]">
<div className="flex gap-3 items-stretch">
<div className="relative flex-1">
<IconSearch className="absolute left-3 top-1/2 -translate-y-1/2 w-4 h-4 text-text-muted" />
<input
type="text"
value={search}
onChange={(e) => setSearch(e.target.value)}
placeholder={`Search ${countries.length} countries...`}
className="w-full pl-9 pr-3 py-2 rounded-md border border-border-default bg-surface-primary text-text-primary text-sm focus:outline-none focus:ring-2 focus:ring-desert-green"
/>
</div>
{selected.size > 0 && (
<button
type="button"
onClick={clearAll}
className="text-sm text-text-muted hover:text-text-primary px-3 cursor-pointer"
>
Clear all
</button>
)}
</div>
{groups.length > 0 && (
<div>
<p className="text-xs uppercase tracking-wide text-text-muted mb-2">
Quick picks
</p>
<div className="flex flex-wrap gap-2">
{groups.map((group) => {
const allIn =
group.countries.length > 0 &&
group.countries.every((c) => selected.has(c))
return (
<button
key={group.id}
type="button"
onClick={() => toggleGroup(group)}
className={classNames(
'px-3 py-1.5 rounded-full text-xs font-medium border transition-colors cursor-pointer',
allIn
? 'bg-desert-green text-white border-desert-green'
: 'bg-surface-primary text-text-primary border-border-default hover:border-desert-green'
)}
>
{allIn && <IconCheck className="inline w-3 h-3 mr-1" />}
{group.name}{' '}
<span className="opacity-60">({group.countries.length})</span>
</button>
)
})}
</div>
</div>
)}
<div className="flex-1 overflow-y-auto max-h-96 border border-border-default rounded-md bg-surface-secondary">
{countriesLoading ? (
<div className="flex items-center justify-center h-40">
<LoadingSpinner />
</div>
) : grouped.length === 0 ? (
<p className="text-text-muted text-sm p-6 text-center">
No countries match "{search}".
</p>
) : (
grouped.map(([continent, list]) => (
<div key={continent}>
<div className="sticky top-0 bg-surface-secondary border-b border-border-default px-4 py-2 text-xs uppercase tracking-wide text-text-muted font-semibold z-10">
{continent}
</div>
<ul>
{list.map((country) => {
const isSelected = selected.has(country.code)
const isInstalled = installedCountrySet.has(country.code)
return (
<li key={country.code}>
<button
type="button"
onClick={() => toggleCountry(country.code)}
className={classNames(
'w-full flex items-center gap-3 px-4 py-2 text-left text-sm transition-colors cursor-pointer',
isSelected
? 'bg-desert-green/10 hover:bg-desert-green/15'
: 'hover:bg-surface-primary'
)}
>
<span
className={classNames(
'w-4 h-4 rounded border flex items-center justify-center shrink-0',
isSelected
? 'bg-desert-green border-desert-green'
: 'border-border-default'
)}
>
{isSelected && <IconCheck className="w-3 h-3 text-white" />}
</span>
<span className="flex-1 text-text-primary">{country.name}</span>
{isInstalled && (
<span
className="text-[10px] uppercase tracking-wide font-semibold px-1.5 py-0.5 rounded bg-desert-green/15 text-desert-green border border-desert-green/30"
title="Already downloaded — re-select to update with a different zoom"
>
Installed
</span>
)}
<span className="text-xs font-mono text-text-muted">
{country.code}
</span>
</button>
</li>
)
})}
</ul>
</div>
))
)}
</div>
{selectedCountries.length > 0 && (
<div>
<p className="text-xs uppercase tracking-wide text-text-muted mb-2">
{selectedCountries.length} selected
</p>
<div className="flex flex-wrap gap-2 max-h-24 overflow-y-auto">
{selectedCountries.map((country) => (
<span
key={country.code}
className="inline-flex items-center gap-1 px-2 py-1 rounded bg-desert-green text-white text-xs"
>
{country.name}
<button
type="button"
onClick={() => toggleCountry(country.code)}
className="hover:bg-white/20 rounded cursor-pointer"
aria-label={`Remove ${country.name}`}
>
<IconX className="w-3 h-3" />
</button>
</span>
))}
</div>
</div>
)}
<div>
<label className="block text-sm text-text-primary font-medium mb-2">
Max zoom level: <span className="font-mono">{maxzoom}</span>
</label>
<input
type="range"
min={EXTRACT_MIN_ZOOM}
max={EXTRACT_MAX_ZOOM}
step={1}
value={maxzoom}
onChange={(e) => setMaxzoom(parseInt(e.target.value, 10))}
className="w-full accent-desert-green"
disabled={downloading}
/>
<div className="flex justify-between text-xs text-text-muted mt-1 font-mono">
<span>z{EXTRACT_MIN_ZOOM} (world)</span>
<span>z{EXTRACT_MAX_ZOOM} (street)</span>
</div>
<p className="text-xs text-text-muted mt-2">
Lower zoom = smaller file, less detail. Zoom 15 shows individual streets;
zoom 10 shows city-level detail.
</p>
</div>
<div className="bg-surface-secondary border border-border-default rounded-md p-3 min-h-14 text-sm font-mono">
<PreflightStatus
errorMessage={errorMessage}
loading={loading}
preflight={preflight}
hasSelection={selected.size > 0}
/>
</div>
</div>
</StyledModal>
)
}
type PreflightStatusProps = {
errorMessage: string | null
loading: boolean
preflight: MapExtractPreflight | null
hasSelection: boolean
}
function PreflightStatus({ errorMessage, loading, preflight, hasSelection }: PreflightStatusProps) {
if (errorMessage) {
return <p className="text-desert-red">{errorMessage}</p>
}
if (loading) {
return <p className="text-text-muted">Estimating size</p>
}
if (preflight) {
return (
<p className="text-text-primary">
{preflight.tiles.toLocaleString()} tiles, ~{formatBytes(preflight.bytes, 1)}{' '}
<span className="text-text-muted">(source build {preflight.source.date})</span>
</p>
)
}
if (!hasSelection) {
return <p className="text-text-muted">Pick at least one country to estimate size.</p>
}
return <p className="text-text-muted">Estimating size</p>
}
export default CountryPickerModal

View File

@ -1,5 +1,5 @@
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query'
import { useRef, useState } from 'react'
import { useEffect, useRef, useState } from 'react'
import FileUploader from '~/components/file-uploader'
import StyledButton from '~/components/StyledButton'
import StyledSectionHeader from '~/components/StyledSectionHeader'
@ -10,6 +10,7 @@ import { IconX } from '@tabler/icons-react'
import { useModals } from '~/context/ModalContext'
import StyledModal from '../StyledModal'
import ActiveEmbedJobs from '~/components/ActiveEmbedJobs'
import { SERVICE_NAMES } from '../../../constants/service_names'
interface KnowledgeBaseModalProps {
aiAssistantName?: string
@ -30,6 +31,19 @@ export default function KnowledgeBaseModal({ aiAssistantName = "AI Assistant", o
const { openModal, closeModal } = useModals()
const queryClient = useQueryClient()
const [isStartingQdrant, setIsStartingQdrant] = useState(false)
const { data: healthStatus } = useQuery({
queryKey: ['qdrantHealth'],
queryFn: () => api.checkRAGHealth(),
refetchInterval: isStartingQdrant ? 3_000 : 30_000,
})
const qdrantOffline = healthStatus?.online === false
useEffect(() => {
if (!qdrantOffline) setIsStartingQdrant(false)
}, [qdrantOffline])
const { data: storedFiles = [], isLoading: isLoadingFiles } = useQuery({
queryKey: ['storedFiles'],
queryFn: () => api.getStoredRAGFiles(),
@ -64,6 +78,17 @@ export default function KnowledgeBaseModal({ aiAssistantName = "AI Assistant", o
},
})
const startQdrantMutation = useMutation({
mutationFn: () => api.affectService(SERVICE_NAMES.QDRANT, 'start'),
onSuccess: () => {
setIsStartingQdrant(true)
queryClient.invalidateQueries({ queryKey: ['qdrantHealth'] })
},
onError: (error: any) => {
addNotification({ type: 'error', message: error?.message || 'Failed to start Qdrant.' })
},
})
const syncMutation = useMutation({
mutationFn: () => api.syncRAGStorage(),
onSuccess: (data) => {
@ -149,6 +174,22 @@ export default function KnowledgeBaseModal({ aiAssistantName = "AI Assistant", o
</button>
</div>
<div className="overflow-y-auto flex-1 p-6">
{qdrantOffline && (
<div className="mb-4 p-4 bg-red-50 border border-red-200 rounded-lg text-red-700 text-sm dark:bg-red-950 dark:border-red-800 dark:text-red-300 flex items-center justify-between gap-4">
<span>
<strong>Knowledge Base unavailable:</strong> The Qdrant vector database is offline.
</span>
<StyledButton
variant="danger"
size="sm"
onClick={() => startQdrantMutation.mutate()}
loading={startQdrantMutation.isPending || isStartingQdrant}
disabled={startQdrantMutation.isPending || isStartingQdrant}
>
{isStartingQdrant ? 'Starting…' : 'Start Qdrant'}
</StyledButton>
</div>
)}
<div className="bg-surface-primary rounded-lg border shadow-md overflow-hidden">
<div className="p-6">
<FileUploader
@ -165,7 +206,7 @@ export default function KnowledgeBaseModal({ aiAssistantName = "AI Assistant", o
size="lg"
icon="IconUpload"
onClick={handleUpload}
disabled={files.length === 0 || isUploading}
disabled={files.length === 0 || isUploading || qdrantOffline}
loading={isUploading}
>
Upload
@ -236,7 +277,7 @@ export default function KnowledgeBaseModal({ aiAssistantName = "AI Assistant", o
icon="IconTrash"
onClick={() => cleanupFailedMutation.mutate()}
loading={cleanupFailedMutation.isPending}
disabled={cleanupFailedMutation.isPending}
disabled={cleanupFailedMutation.isPending || qdrantOffline}
>
Clean Up Failed
</StyledButton>
@ -252,7 +293,7 @@ export default function KnowledgeBaseModal({ aiAssistantName = "AI Assistant", o
size="md"
icon='IconRefresh'
onClick={handleConfirmSync}
disabled={syncMutation.isPending || isUploading}
disabled={syncMutation.isPending || isUploading || qdrantOffline}
loading={syncMutation.isPending || isUploading}
>
Sync Storage

View File

@ -0,0 +1,25 @@
type CoordinateOverlayProps = {
latitude: number
longitude: number
x: number
y: number
}
export default function CoordinateOverlay({
latitude,
longitude,
x,
y,
}: CoordinateOverlayProps) {
return (
<div
className="pointer-events-none absolute z-[9999] -translate-x-1/2 whitespace-nowrap rounded bg-black/75 px-2 py-1 font-mono text-[11px] text-white"
style={{
left: x,
top: y - 36,
}}
>
{latitude.toFixed(6)}, {longitude.toFixed(6)}
</div>
)
}

View File

@ -9,43 +9,111 @@ import Map, {
import type { MapRef, MapLayerMouseEvent } from 'react-map-gl/maplibre'
import maplibregl from 'maplibre-gl'
import 'maplibre-gl/dist/maplibre-gl.css'
import { Protocol } from 'pmtiles'
import { useEffect, useRef, useState, useCallback } from 'react'
type ScaleUnit = 'imperial' | 'metric'
import { useMapMarkers, PIN_COLORS } from '~/hooks/useMapMarkers'
import type { PinColorId } from '~/hooks/useMapMarkers'
import MarkerPin from './MarkerPin'
import MarkerPanel from './MarkerPanel'
import CoordinateOverlay from './CoordinateOverlay'
import ScaleUnitToggle from './ScaleUnitToggle'
export default function MapComponent() {
type ScaleUnit = 'imperial' | 'metric'
type MapComponentProps = {
isHoveringUI: boolean
showCoordinatesEnabled: boolean
}
export default function MapComponent({
isHoveringUI,
showCoordinatesEnabled,
}: MapComponentProps) {
const mapRef = useRef<MapRef>(null)
const animationFrameRef = useRef<number | null>(null)
const { markers, addMarker, deleteMarker } = useMapMarkers()
const [isDraggingMap, setIsDraggingMap] = useState(false)
const [placingMarker, setPlacingMarker] = useState<{ lng: number; lat: number } | null>(null)
const [markerName, setMarkerName] = useState('')
const [markerColor, setMarkerColor] = useState<PinColorId>('orange')
const [selectedMarkerId, setSelectedMarkerId] = useState<number | null>(null)
const [scaleUnit, setScaleUnit] = useState<ScaleUnit>(
() => (localStorage.getItem('nomad:map-scale-unit') as ScaleUnit) || 'metric'
)
const toggleScaleUnit = useCallback(() => {
setScaleUnit((prev) => {
const next = prev === 'metric' ? 'imperial' : 'metric'
localStorage.setItem('nomad:map-scale-unit', next)
return next
})
}, [])
const [cursorLngLat, setCursorLngLat] = useState<{
lng: number
lat: number
x: number
y: number
} | null>(null)
const [showCoordinates, setShowCoordinates] = useState(false)
// Add the PMTiles protocol to maplibre-gl
useEffect(() => {
let protocol = new Protocol()
const protocol = new Protocol()
maplibregl.addProtocol('pmtiles', protocol.tile)
return () => {
maplibregl.removeProtocol('pmtiles')
}
}, [])
useEffect(() => {
return () => {
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current)
}
}
}, [])
const hideCoordinates = useCallback(() => {
setShowCoordinates(false)
setCursorLngLat(null)
}, [])
const handleScaleUnitChange = useCallback((unit: ScaleUnit) => {
setScaleUnit(unit)
localStorage.setItem('nomad:map-scale-unit', unit)
}, [])
const handleMouseMove = useCallback(
(e: MapLayerMouseEvent) => {
const target = e.originalEvent.target as HTMLElement | null
if (
!showCoordinatesEnabled ||
isHoveringUI ||
isDraggingMap ||
target?.closest('.maplibregl-control-container, .maplibregl-ctrl')
) {
hideCoordinates()
return
}
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current)
}
animationFrameRef.current = requestAnimationFrame(() => {
setShowCoordinates(true)
setCursorLngLat({
lng: e.lngLat.lng,
lat: e.lngLat.lat,
x: e.point.x,
y: e.point.y,
})
})
},
[hideCoordinates, isHoveringUI, isDraggingMap, showCoordinatesEnabled]
)
const handleMapClick = useCallback((e: MapLayerMouseEvent) => {
setPlacingMarker({ lng: e.lngLat.lng, lat: e.lngLat.lat })
setMarkerName('')
@ -78,167 +146,180 @@ export default function MapComponent() {
return (
<MapProvider>
<Map
ref={mapRef}
reuseMaps
style={{
width: '100%',
height: '100vh',
<div
style={{ position: 'relative', width: '100%', height: '100vh' }}
onMouseLeave={() => {
setIsDraggingMap(false)
hideCoordinates()
}}
mapStyle={`${window.location.protocol}//${window.location.hostname}:${window.location.port}/api/maps/styles`}
mapLib={maplibregl}
initialViewState={{
longitude: -101,
latitude: 40,
zoom: 3.5,
onMouseMoveCapture={(e) => {
const target = e.target as HTMLElement | null
if (
target?.closest(
'.maplibregl-control-container, .maplibregl-ctrl, .maplibregl-ctrl-group, .maplibregl-ctrl-scale'
)
) {
hideCoordinates()
}
}}
onClick={handleMapClick}
>
<NavigationControl style={{ marginTop: '110px', marginRight: '36px' }} />
<FullscreenControl style={{ marginTop: '30px', marginRight: '36px' }} />
<ScaleControl position="bottom-left" maxWidth={150} unit={scaleUnit} />
<div style={{ position: 'absolute', bottom: '30px', left: '10px', zIndex: 2 }}>
<div
style={{
display: 'inline-flex',
borderRadius: '4px',
boxShadow: '0 0 0 2px rgba(0,0,0,0.1)',
overflow: 'hidden',
fontSize: '11px',
fontWeight: 600,
lineHeight: 1,
}}
>
<button
onClick={() => { if (scaleUnit !== 'metric') toggleScaleUnit() }}
style={{
background: scaleUnit === 'metric' ? '#424420' : 'white',
color: scaleUnit === 'metric' ? 'white' : '#666',
border: 'none',
padding: '4px 8px',
cursor: 'pointer',
}}
>
Metric
</button>
<button
onClick={() => { if (scaleUnit !== 'imperial') toggleScaleUnit() }}
style={{
background: scaleUnit === 'imperial' ? '#424420' : 'white',
color: scaleUnit === 'imperial' ? 'white' : '#666',
border: 'none',
padding: '4px 8px',
cursor: 'pointer',
}}
>
Imperial
</button>
</div>
</div>
<Map
ref={mapRef}
reuseMaps
style={{ width: '100%', height: '100vh' }}
cursor={isDraggingMap ? 'grabbing' : 'crosshair'}
mapStyle={`${window.location.protocol}//${window.location.hostname}:${window.location.port}/api/maps/styles`}
mapLib={maplibregl}
initialViewState={{
longitude: -101,
latitude: 40,
zoom: 3.5,
}}
onMouseDown={() => {
setIsDraggingMap(true)
hideCoordinates()
}}
onMouseUp={() => {
setIsDraggingMap(false)
}}
onDragStart={() => {
setIsDraggingMap(true)
hideCoordinates()
}}
onDragEnd={() => {
setIsDraggingMap(false)
hideCoordinates()
}}
onClick={handleMapClick}
onMouseMove={handleMouseMove}
onMouseLeave={hideCoordinates}
>
<NavigationControl style={{ marginTop: '110px', marginRight: '36px' }} />
<FullscreenControl style={{ marginTop: '30px', marginRight: '36px' }} />
<ScaleControl position="bottom-left" maxWidth={150} unit={scaleUnit} />
{showCoordinates && cursorLngLat && (
<CoordinateOverlay
latitude={cursorLngLat.lat}
longitude={cursorLngLat.lng}
x={cursorLngLat.x}
y={cursorLngLat.y}
/>
)}
<ScaleUnitToggle
scaleUnit={scaleUnit}
onChange={handleScaleUnitChange}
onMouseEnter={hideCoordinates}
/>
{markers.map((marker) => (
<Marker
key={marker.id}
longitude={marker.longitude}
latitude={marker.latitude}
anchor="bottom"
onClick={(e) => {
e.originalEvent.stopPropagation()
setSelectedMarkerId(marker.id === selectedMarkerId ? null : marker.id)
setPlacingMarker(null)
}}
>
<MarkerPin
color={PIN_COLORS.find((c) => c.id === marker.color)?.hex}
active={marker.id === selectedMarkerId}
/>
<div className="mt-1.5 flex gap-1 items-center">
{PIN_COLORS.map((c) => (
<button
key={c.id}
onClick={() => setMarkerColor(c.id)}
title={c.label}
className="rounded-full p-0.5 transition-transform"
style={{
outline: markerColor === c.id ? `2px solid ${c.hex}` : '2px solid transparent',
outlineOffset: '1px',
}}
>
<div
className="w-4 h-4 rounded-full"
style={{ backgroundColor: c.hex }}
/>
</button>
))}
</div>
<div className="mt-1.5 flex gap-1.5 justify-end">
<button
onClick={() => setPlacingMarker(null)}
className="text-xs text-gray-500 hover:text-gray-700 px-2 py-1 rounded transition-colors"
>
Cancel
</button>
<button
onClick={handleSaveMarker}
disabled={!markerName.trim()}
className="text-xs bg-[#424420] text-white rounded px-2.5 py-1 hover:bg-[#525530] disabled:opacity-40 transition-colors"
>
Save
</button>
</div>
</div>
</Popup>
)}
</Map>
</Marker>
))}
{selectedMarker && (
<Popup
longitude={selectedMarker.longitude}
latitude={selectedMarker.latitude}
anchor="bottom"
offset={[0, -36]}
onClose={() => setSelectedMarkerId(null)}
closeOnClick={false}
>
<div className="text-sm font-medium">{selectedMarker.name}</div>
</Popup>
)}
{placingMarker && (
<Popup
longitude={placingMarker.lng}
latitude={placingMarker.lat}
anchor="bottom"
onClose={() => setPlacingMarker(null)}
closeOnClick={false}
>
<div onMouseEnter={hideCoordinates} className="p-1">
<input
autoFocus
type="text"
placeholder="Name this location"
value={markerName}
onChange={(e) => setMarkerName(e.target.value)}
onKeyDown={(e) => {
if (e.key === 'Enter') handleSaveMarker()
if (e.key === 'Escape') setPlacingMarker(null)
}}
className="block w-full rounded border border-gray-300 px-2 py-1 text-sm placeholder:text-gray-400 focus:outline-none focus:border-gray-500"
/>
<div className="mt-1.5 flex gap-1 items-center">
{PIN_COLORS.map((c) => (
<button
key={c.id}
type="button"
onClick={() => setMarkerColor(c.id)}
title={c.label}
className="rounded-full p-0.5 transition-transform"
style={{
outline:
markerColor === c.id ? `2px solid ${c.hex}` : '2px solid transparent',
outlineOffset: '1px',
}}
>
<div className="w-4 h-4 rounded-full" style={{ backgroundColor: c.hex }} />
</button>
))}
</div>
<div className="mt-1.5 flex gap-1.5 justify-end">
<button
type="button"
onClick={() => setPlacingMarker(null)}
className="text-xs text-gray-500 hover:text-gray-700 px-2 py-1 rounded transition-colors"
>
Cancel
</button>
<button
type="button"
onClick={handleSaveMarker}
disabled={!markerName.trim()}
className="text-xs bg-[#424420] text-white rounded px-2.5 py-1 hover:bg-[#525530] disabled:opacity-40 transition-colors"
>
Save
</button>
</div>
</div>
</Popup>
)}
</Map>
</div>
<div onMouseEnter={hideCoordinates}>
<MarkerPanel
markers={markers}
onDelete={handleDeleteMarker}
onFlyTo={handleFlyTo}
onSelect={setSelectedMarkerId}
selectedMarkerId={selectedMarkerId}
/>
</div>
</MapProvider>
)
}
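The CoordinateOverlay component itself is not part of this diff. For orientation, here is a minimal sketch of a component with the prop shape used above (geographic coordinate plus the cursor's pixel position); it is purely illustrative and not the project's actual implementation:

type CoordinateOverlayProps = { latitude: number; longitude: number; x: number; y: number }

// Hypothetical stand-in: x/y come from e.point, so the badge can follow the pointer
// without another projection step; latitude/longitude come from e.lngLat.
function CoordinateOverlaySketch({ latitude, longitude, x, y }: CoordinateOverlayProps) {
  return (
    <div
      style={{ position: 'absolute', left: x + 12, top: y + 12, pointerEvents: 'none', zIndex: 3 }}
      className="rounded bg-black/70 px-2 py-1 text-xs text-white"
    >
      {latitude.toFixed(5)}, {longitude.toFixed(5)}
    </div>
  )
}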

View File

@ -0,0 +1,46 @@
type ScaleUnit = 'imperial' | 'metric'
type ScaleUnitToggleProps = {
scaleUnit: ScaleUnit
onChange: (unit: ScaleUnit) => void
onMouseEnter?: () => void
}
export default function ScaleUnitToggle({
scaleUnit,
onChange,
onMouseEnter,
}: ScaleUnitToggleProps) {
return (
<div
className="absolute bottom-[30px] left-[10px] z-[2]"
onMouseEnter={onMouseEnter}
>
<div className="inline-flex overflow-hidden rounded text-[11px] font-semibold leading-none shadow-[0_0_0_2px_rgba(0,0,0,0.1)]">
<button
type="button"
onClick={() => onChange('metric')}
className="border-0 px-2 py-1"
style={{
background: scaleUnit === 'metric' ? '#424420' : 'white',
color: scaleUnit === 'metric' ? 'white' : '#666',
}}
>
Metric
</button>
<button
type="button"
onClick={() => onChange('imperial')}
className="border-0 px-2 py-1"
style={{
background: scaleUnit === 'imperial' ? '#424420' : 'white',
color: scaleUnit === 'imperial' ? 'white' : '#666',
}}
>
Imperial
</button>
</div>
</div>
)
}

View File

@ -19,36 +19,66 @@ export function getAllDiskDisplayItems(
): DiskDisplayItem[] {
const validDisks = disks?.filter((d) => d.totalSize > 0) || []
// If /app/storage is backed by a network filesystem (NFS/CIFS), it won't
// appear in the block-device list. Prepend it so NAS and OS disk are both
// shown. Local-disk-backed /app/storage is already reported in disk[] and
// fsSize[], so skip it here to avoid a phantom "NAS Storage" entry.
const NETWORK_FS_TYPES = new Set(['nfs', 'nfs4', 'cifs', 'smbfs', 'smb2', 'smb3'])
const storageMount = fsSize?.find(
(fs) =>
fs.mount === '/app/storage' && fs.size > 0 && NETWORK_FS_TYPES.has(fs.type?.toLowerCase())
)
const storageMountItem: DiskDisplayItem[] = storageMount
? [
{
label: 'NAS Storage',
value: storageMount.use || 0,
total: formatBytes(storageMount.size),
used: formatBytes(storageMount.used),
subtext: `${formatBytes(storageMount.used)} / ${formatBytes(storageMount.size)}`,
totalBytes: storageMount.size,
usedBytes: storageMount.used,
},
]
: []
if (validDisks.length > 0) {
return validDisks.map((disk) => ({
label: disk.name || 'Unknown',
value: disk.percentUsed || 0,
total: formatBytes(disk.totalSize),
used: formatBytes(disk.totalUsed),
subtext: `${formatBytes(disk.totalUsed || 0)} / ${formatBytes(disk.totalSize || 0)}`,
totalBytes: disk.totalSize,
usedBytes: disk.totalUsed,
}))
return [
...storageMountItem,
...validDisks.map((disk) => ({
label: disk.name || 'Unknown',
value: disk.percentUsed || 0,
total: formatBytes(disk.totalSize),
used: formatBytes(disk.totalUsed),
subtext: `${formatBytes(disk.totalUsed || 0)} / ${formatBytes(disk.totalSize || 0)}`,
totalBytes: disk.totalSize,
usedBytes: disk.totalUsed,
})),
]
}
if (fsSize && fsSize.length > 0) {
const seen = new Set<number>()
const uniqueFs = fsSize.filter((fs) => {
if (fs.size <= 0 || seen.has(fs.size)) return false
if (storageMount && fs.mount === '/app/storage') return false
seen.add(fs.size)
return true
})
const realDevices = uniqueFs.filter((fs) => fs.fs.startsWith('/dev/'))
const displayFs = realDevices.length > 0 ? realDevices : uniqueFs
return displayFs.map((fs) => ({
label: fs.fs || 'Unknown',
value: fs.use || 0,
total: formatBytes(fs.size),
used: formatBytes(fs.used),
subtext: `${formatBytes(fs.used)} / ${formatBytes(fs.size)}`,
totalBytes: fs.size,
usedBytes: fs.used,
}))
return [
...storageMountItem,
...displayFs.map((fs) => ({
label: fs.fs || 'Unknown',
value: fs.use || 0,
total: formatBytes(fs.size),
used: formatBytes(fs.used),
subtext: `${formatBytes(fs.used)} / ${formatBytes(fs.size)}`,
totalBytes: fs.size,
usedBytes: fs.used,
})),
]
}
return []
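A worked example of the new NAS branch, with inputs shaped like systeminformation's fsSize entries (all values below are hypothetical):

// Illustrative only; not part of the diff.
const exampleFsSize = [
  { fs: '192.168.1.10:/export/nomad', type: 'nfs4', mount: '/app/storage', size: 4e12, used: 1.2e12, use: 30 },
  { fs: '/dev/sda2', type: 'ext4', mount: '/', size: 5e11, used: 2e11, use: 40 },
]
// Given a non-empty disks[] list, getAllDiskDisplayItems(disks, exampleFsSize) prepends
//   { label: 'NAS Storage', value: 30, totalBytes: 4e12, usedBytes: 1.2e12, ... }
// ahead of the items built from disks[]. If /app/storage were ext4-backed instead, its type
// would not be in NETWORK_FS_TYPES and no extra entry would be added.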
@ -59,6 +89,15 @@ export function getPrimaryDiskInfo(
disks: NomadDiskInfo[] | undefined,
fsSize: Systeminformation.FsSizeData[] | undefined
): { totalSize: number; totalUsed: number } | null {
// First, check if /app/storage is on a dedicated filesystem (e.g. NFS mount).
// This is the most accurate source since it reflects the actual backing
// store for NOMAD content, regardless of whether it's a local disk or
// network-attached storage.
const storageMount = fsSize?.find((fs) => fs.mount === '/app/storage' && fs.size > 0)
if (storageMount) {
return { totalSize: storageMount.size, totalUsed: storageMount.used }
}
const validDisks = disks?.filter((d) => d.totalSize > 0) || []
if (validDisks.length > 0) {
const diskWithRoot = validDisks.find((d) =>

View File

@ -19,8 +19,9 @@ const useDownloads = (props: useDownloadsProps) => {
queryFn: () => api.listDownloadJobs(props.filetype),
refetchInterval: (query) => {
const data = query.state.data
// Only poll when there are active downloads; otherwise use a slower interval
return data && data.length > 0 ? 2000 : 30000
// Idle poll is kept tight so newly-dispatched jobs surface quickly — small ZIM
// updates can complete in ~2s, so a 30s idle interval almost always missed them.
return data && data.length > 0 ? 2000 : 3000
},
enabled: props.enabled ?? true,
})

View File

@ -1,11 +1,25 @@
import { useEffect, useRef, useState } from 'react'
import { useCallback, useEffect, useRef, useState } from 'react'
import { useTransmit } from 'react-adonis-transmit'
export type OllamaModelDownload = {
model: string
percent: number
timestamp: string
/**
* BullMQ job id included on progress events from v1.32+ so the frontend can
* call the cancel API. Optional for backward compat with stale broadcasts during
* a hot upgrade.
*/
jobId?: string
/**
* Aggregate bytes across all blobs in the model pull, summed from Ollama's
* per-digest progress events on the backend. Optional for backward compat.
*/
downloadedBytes?: number
totalBytes?: number
error?: string
/** Set to 'cancelled' alongside percent === -2 when the user cancels the download */
status?: 'cancelled'
}
export default function useOllamaModelDownloads() {
@ -13,6 +27,19 @@ export default function useOllamaModelDownloads() {
const [downloads, setDownloads] = useState<Map<string, OllamaModelDownload>>(new Map())
const timeoutsRef = useRef<Set<ReturnType<typeof setTimeout>>>(new Set())
/**
* Optimistically remove a download from local state. Used by the cancel UI to clear
* the entry immediately on a successful API call, in case the Transmit 'cancelled'
* broadcast arrives late or the SSE connection drops at exactly the wrong moment.
*/
const removeDownload = useCallback((model: string) => {
setDownloads((current) => {
const next = new Map(current)
next.delete(model)
return next
})
}, [])
useEffect(() => {
const unsubscribe = subscribe('ollama-model-download', (data: OllamaModelDownload) => {
setDownloads((prev) => {
@ -30,6 +57,21 @@ export default function useOllamaModelDownloads() {
})
}, 15000)
timeoutsRef.current.add(errorTimeout)
} else if (data.percent === -2) {
// Download cancelled — clear quickly (matches the completion TTL).
// Component-level optimistic removal usually beats this branch, but it's
// here as a safety net for cases where the cancel comes from another tab
// or another client.
const cancelTimeout = setTimeout(() => {
timeoutsRef.current.delete(cancelTimeout)
setDownloads((current) => {
const next = new Map(current)
next.delete(data.model)
return next
})
}, 2000)
timeoutsRef.current.add(cancelTimeout)
updated.delete(data.model)
} else if (data.percent >= 100) {
// If download is complete, keep it for a short time before removing to allow UI to show 100% progress
updated.set(data.model, data)
@ -60,5 +102,5 @@ export default function useOllamaModelDownloads() {
const downloadsArray = Array.from(downloads.values())
return { downloads: downloadsArray, activeCount: downloads.size }
return { downloads: downloadsArray, activeCount: downloads.size, removeDownload }
}
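A hedged sketch of how a consumer of this hook might wire the new jobId and removeDownload pieces together; the cancel API method name below is an assumption, since the actual call is not part of this hunk:

// Illustration only. Assumes `api` from '~/lib/api' is in scope and exposes a cancel
// endpoint for Ollama pulls; `cancelOllamaModelDownload` is a made-up name.
async function cancelDownloadSketch(
  download: OllamaModelDownload,
  removeDownload: (model: string) => void
) {
  if (!download.jobId) return // stale pre-v1.32 broadcast: nothing to cancel
  await api.cancelOllamaModelDownload(download.jobId) // assumed API method
  removeDownload(download.model) // optimistic: don't wait for the percent === -2 broadcast
}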

View File

@ -4,6 +4,7 @@ import { ServiceSlim } from '../../types/services'
import { FileEntry } from '../../types/files'
import { CheckLatestVersionResult, SystemInformationResponse, SystemUpdateStatus } from '../../types/system'
import { DownloadJobWithProgress, WikipediaState } from '../../types/downloads'
import type { Country, CountryCode, CountryGroup, MapExtractPreflight } from '../../types/maps'
import { EmbedJobWithProgress } from '../../types/rag'
import type { CategoryWithStatus, CollectionWithStatus, ContentUpdateCheckResult, ResourceUpdateInfo } from '../../types/collections'
import { catchInternal } from './util'
@ -130,6 +131,15 @@ class API {
})()
}
async deleteMapRegionFile(filename: string): Promise<{ message: string }> {
return catchInternal(async () => {
const response = await this.client.delete<{ message: string }>(
`/maps/${encodeURIComponent(filename)}`
)
return response.data
})()
}
async downloadRemoteZimFile(
url: string,
metadata?: { title: string; summary?: string; author?: string; size_bytes?: number }
@ -451,6 +461,13 @@ class API {
})()
}
async checkRAGHealth() {
return catchInternal(async () => {
const response = await this.client.get<{ online: boolean; message?: string }>('/rag/health')
return response.data
})()
}
async getStoredRAGFiles() {
return catchInternal(async () => {
const response = await this.client.get<{ files: string[] }>('/rag/files')
@ -541,6 +558,46 @@ class API {
})()
}
async listCountries() {
return catchInternal(async () => {
const response = await this.client.get<{ countries: Country[] }>('/maps/countries')
return response.data.countries
})()
}
async listCountryGroups() {
return catchInternal(async () => {
const response = await this.client.get<{ groups: CountryGroup[] }>('/maps/country-groups')
return response.data.groups
})()
}
async extractMapPreflight(params: { countries: CountryCode[]; maxzoom?: number }) {
return catchInternal(async () => {
const response = await this.client.post<MapExtractPreflight>(
'/maps/extract-preflight',
params
)
return response.data
})()
}
async extractMapRegion(params: {
countries: CountryCode[]
maxzoom?: number
label?: string
estimatedBytes?: number
}) {
return catchInternal(async () => {
const response = await this.client.post<{
message: string
filename: string
jobId?: string
}>('/maps/extract', params)
return response.data
})()
}
async listCuratedMapCollections() {
return catchInternal(async () => {
const response = await this.client.get<CollectionWithStatus[]>(
@ -624,6 +681,42 @@ class API {
})()
}
async listCustomLibraries() {
return catchInternal(async () => {
const response = await this.client.get<{ id: number; name: string; base_url: string; is_default: boolean }[]>(
'/zim/custom-libraries'
)
return response.data
})()
}
async addCustomLibrary(name: string, base_url: string) {
return catchInternal(async () => {
const response = await this.client.post<{
message: string
library: { id: number; name: string; base_url: string }
}>('/zim/custom-libraries', { name, base_url })
return response.data
})()
}
async removeCustomLibrary(id: number) {
return catchInternal(async () => {
const response = await this.client.delete<{ message: string }>(`/zim/custom-libraries/${id}`)
return response.data
})()
}
async browseLibrary(url: string) {
return catchInternal(async () => {
const response = await this.client.get<{
directories: { name: string; url: string }[]
files: { name: string; url: string; size_bytes: number | null }[]
}>('/zim/browse-library', { params: { url } })
return response.data
})()
}
async deleteZimFile(filename: string) {
return catchInternal(async () => {
const response = await this.client.delete<{ message: string }>(`/zim/${filename}`)

View File

@ -0,0 +1,10 @@
export function hasDownloadedGlobalMap(
globalMapKey: string | null | undefined,
storedMapFiles: Array<{ name: string }>
): boolean {
if (!globalMapKey) {
return false
}
return storedMapFiles.some((file) => file.name === globalMapKey || /^\d{8}\.pmtiles$/.test(file.name))
}
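A few illustrative calls (every file name below is made up) showing the two match paths: an exact key match, or any date-stamped build already on disk.

hasDownloadedGlobalMap('20260401.pmtiles', [{ name: '20260401.pmtiles' }]) // true: exact key match
hasDownloadedGlobalMap('20260401.pmtiles', [{ name: '20250101.pmtiles' }]) // true: any YYYYMMDD.pmtiles build counts
hasDownloadedGlobalMap('20260401.pmtiles', [{ name: 'germany.pmtiles' }])  // false: region files don't match
hasDownloadedGlobalMap(null, [{ name: '20260401.pmtiles' }])               // false: no current build key known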

View File

@ -1,38 +1,66 @@
import { useState } from 'react'
import { Head, Link, router } from '@inertiajs/react'
import { IconArrowLeft } from '@tabler/icons-react'
import MapsLayout from '~/layouts/MapsLayout'
import MapComponent from '~/components/maps/MapComponent'
import StyledButton from '~/components/StyledButton'
import Alert from '~/components/Alert'
import { FileEntry } from '../../types/files'
export default function Maps(props: {
maps: { baseAssetsExist: boolean; regionFiles: FileEntry[] }
}) {
const [isHoveringUI, setIsHoveringUI] = useState(false)
const [showMapCoordinates, setShowMapCoordinates] = useState(true)
const alertMessage = !props.maps.baseAssetsExist
? 'The base map assets have not been installed. Please download them first to enable map functionality.'
: props.maps.regionFiles.length === 0
? 'No map regions have been downloaded yet. Please download some regions to enable map functionality.'
: null
return (
<MapsLayout>
<Head title="Maps" />
<div className="relative w-full h-screen overflow-hidden">
{/* Nav and alerts are overlayed */}
<div className="absolute top-0 left-0 right-0 z-50 flex justify-between p-4 bg-surface-secondary backdrop-blur-sm shadow-sm">
{/* Navbar */}
<div
className="absolute top-0 left-0 right-0 z-50 flex justify-between p-4 bg-surface-secondary backdrop-blur-sm shadow-sm"
onMouseEnter={() => setIsHoveringUI(true)}
onMouseLeave={() => setIsHoveringUI(false)}
>
<Link href="/home" className="flex items-center">
<IconArrowLeft className="mr-2" size={24} />
<p className="text-lg text-text-secondary">Back to Home</p>
</Link>
<Link href="/settings/maps" className='mr-4'>
<StyledButton variant="primary" icon="IconSettings">
Manage Map Regions
</StyledButton>
</Link>
<div className="flex items-center gap-3 mr-4">
<button
type="button"
onClick={() => setShowMapCoordinates((prev) => !prev)}
className="rounded px-3 py-2 text-sm bg-surface-primary text-text-secondary hover:opacity-80 transition"
>
{showMapCoordinates ? 'Hide Coordinates' : 'Show Coordinates'}
</button>
<Link href="/settings/maps">
<StyledButton variant="primary" icon="IconSettings">
Manage Map Regions
</StyledButton>
</Link>
</div>
</div>
{/* Alert */}
{alertMessage && (
<div className="absolute top-20 left-4 right-4 z-50">
<div
className="absolute top-20 left-4 right-4 z-50"
onMouseEnter={() => setIsHoveringUI(true)}
onMouseLeave={() => setIsHoveringUI(false)}
>
<Alert
title={alertMessage}
type="warning"
@ -47,8 +75,13 @@ export default function Maps(props: {
/>
</div>
)}
{/* Map */}
<div className="absolute inset-0">
<MapComponent />
<MapComponent
isHoveringUI={isHoveringUI}
showCoordinatesEnabled={showMapCoordinates}
/>
</div>
</div>
</MapsLayout>

View File

@ -6,17 +6,19 @@ import { useModals } from '~/context/ModalContext'
import StyledModal from '~/components/StyledModal'
import { FileEntry } from '../../../types/files'
import { useNotifications } from '~/context/NotificationContext'
import { useState } from 'react'
import { useEffect, useRef, useState } from 'react'
import api from '~/lib/api'
import DownloadURLModal from '~/components/DownloadURLModal'
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query'
import useDownloads from '~/hooks/useDownloads'
import StyledSectionHeader from '~/components/StyledSectionHeader'
import CuratedCollectionCard from '~/components/CuratedCollectionCard'
import CountryPickerModal from '~/components/CountryPickerModal'
import type { CollectionWithStatus } from '../../../types/collections'
import ActiveDownloads from '~/components/ActiveDownloads'
import Alert from '~/components/Alert'
import { formatBytes } from '~/lib/util'
import { hasDownloadedGlobalMap } from '~/lib/global_map_banner'
const CURATED_COLLECTIONS_KEY = 'curated-map-collections'
const GLOBAL_MAP_INFO_KEY = 'global-map-info'
@ -28,6 +30,7 @@ export default function MapsManager(props: {
const { openModal, closeAllModals } = useModals()
const { addNotification } = useNotifications()
const [downloading, setDownloading] = useState(false)
const [deletingFileKey, setDeletingFileKey] = useState<string | null>(null)
const { data: curatedCollections } = useQuery({
queryKey: [CURATED_COLLECTIONS_KEY],
@ -35,16 +38,28 @@ export default function MapsManager(props: {
refetchOnWindowFocus: false,
})
const { invalidate: invalidateDownloads } = useDownloads({
const { data: activeMapDownloads = [], invalidate: invalidateDownloads } = useDownloads({
filetype: 'map',
enabled: true,
})
// Refresh the Stored Map Files list when a map download finishes. We pass props.maps.regionFiles
// straight through from the server-side render, so without an Inertia partial reload it stays stale
// until the user navigates away and back.
const prevMapDownloadCountRef = useRef(activeMapDownloads.length)
useEffect(() => {
if (activeMapDownloads.length < prevMapDownloadCountRef.current) {
router.reload({ only: ['maps'] })
}
prevMapDownloadCountRef.current = activeMapDownloads.length
}, [activeMapDownloads.length])
const { data: globalMapInfo } = useQuery({
queryKey: [GLOBAL_MAP_INFO_KEY],
queryFn: () => api.getGlobalMapInfo(),
refetchOnWindowFocus: false,
})
const globalMapAlreadyDownloaded = hasDownloadedGlobalMap(globalMapInfo?.key, props.maps.regionFiles)
const downloadGlobalMap = useMutation({
mutationFn: () => api.downloadGlobalMap(),
@ -118,18 +133,40 @@ export default function MapsManager(props: {
}
}
async function deleteFile(file: FileEntry) {
if (file.type !== 'file') return
try {
setDeletingFileKey(file.key)
await api.deleteMapRegionFile(file.key)
addNotification({
type: 'success',
message: `${file.name} has been deleted.`,
})
closeAllModals()
router.reload({ only: ['maps'] })
} catch (error) {
console.error('Error deleting map file:', error)
addNotification({
type: 'error',
message: `Failed to delete ${file.name}. Please try again.`,
})
} finally {
setDeletingFileKey(null)
}
}
async function confirmDeleteFile(file: FileEntry) {
openModal(
<StyledModal
title="Confirm Delete?"
onConfirm={() => {
closeAllModals()
}}
onConfirm={() => deleteFile(file)}
onCancel={closeAllModals}
open={true}
confirmText="Delete"
cancelText="Cancel"
confirmVariant="danger"
confirmLoading={file.type === 'file' && deletingFileKey === file.key}
>
<p className="text-text-secondary">
Are you sure you want to delete {file.name}? This action cannot be undone.
@ -196,6 +233,24 @@ export default function MapsManager(props: {
)
}
function openCountryPickerModal() {
openModal(
<CountryPickerModal
onCancel={closeAllModals}
installedFilenames={(props.maps.regionFiles ?? []).map((f) => f.name)}
onDownloadStart={() => {
invalidateDownloads()
addNotification({
type: 'success',
message: 'Download queued. Watch progress below.',
})
closeAllModals()
}}
/>,
'country-picker-modal'
)
}
async function openDownloadModal() {
openModal(
<DownloadURLModal
@ -251,7 +306,23 @@ export default function MapsManager(props: {
}}
/>
)}
{globalMapInfo && (
{globalMapInfo && globalMapAlreadyDownloaded && (
<Alert
title="Global Map Installed"
message={`Your global map build ${globalMapInfo.date} (${formatBytes(globalMapInfo.size, 1)}) is stored locally and ready for offline use.`}
type="success"
variant="bordered"
className="mt-8"
icon="IconCircleCheck"
buttonProps={{
variant: 'secondary',
children: 'Download latest build',
icon: 'IconRefresh',
onClick: () => confirmGlobalMapDownload(),
}}
/>
)}
{globalMapInfo && !globalMapAlreadyDownloaded && (
<Alert
title="Global Map Coverage Available"
message={`Download a complete worldwide map from Protomaps (${formatBytes(globalMapInfo.size, 1)}, build ${globalMapInfo.date}). This is a large file but covers the entire planet — no individual region downloads needed.`}
@ -268,6 +339,21 @@ export default function MapsManager(props: {
}}
/>
)}
<Alert
title="Download by country or region"
message="Pick the countries you actually need — from a single country to a whole continent — and we'll pull just those tiles from the global Protomaps archive. Much smaller than the full 125 GB global map."
type="info-inverted"
variant="bordered"
className="mt-8"
icon="IconMap2"
buttonProps={{
variant: 'primary',
children: 'Choose Countries',
icon: 'IconMap2',
onClick: openCountryPickerModal,
}}
/>
<div className="mt-8 mb-6 flex items-center justify-between">
<StyledSectionHeader title="Curated Map Regions" className="!mb-0" />
<StyledButton

View File

@ -283,7 +283,7 @@ export default function ModelsPage(props: {
type="warning"
variant="bordered"
title="GPU Not Accessible"
message={`Your system has an NVIDIA GPU, but ${aiAssistantName} can't access it. AI is running on CPU only, which is significantly slower.`}
message={`Your system has ${systemInfo?.gpuHealth?.gpuVendor === 'amd' ? 'an AMD' : 'an NVIDIA'} GPU, but ${aiAssistantName} can't access it. AI is running on CPU only, which is significantly slower.`}
className="!mt-6"
dismissible={true}
onDismiss={handleDismissGpuBanner}
@ -369,7 +369,7 @@ export default function ModelsPage(props: {
</td>
<td className="px-4 py-3">
<span className="text-sm text-text-secondary">
{model.details.parameter_size || 'N/A'}
{model.details?.parameter_size || 'N/A'}
</span>
</td>
<td className="px-4 py-3">

View File

@ -209,7 +209,7 @@ export default function SettingsPage(props: {
type="warning"
variant="bordered"
title="GPU Not Accessible to AI Assistant"
message="Your system has an NVIDIA GPU, but the AI Assistant can't access it. AI is running on CPU only, which is significantly slower."
message={`Your system has ${info?.gpuHealth?.gpuVendor === 'amd' ? 'an AMD' : 'an NVIDIA'} GPU, but the AI Assistant can't access it. AI is running on CPU only, which is significantly slower.`}
dismissible={true}
onDismiss={handleDismissGpuBanner}
buttonProps={{

View File

@ -5,16 +5,17 @@ import StyledTable from '~/components/StyledTable'
import StyledSectionHeader from '~/components/StyledSectionHeader'
import ActiveDownloads from '~/components/ActiveDownloads'
import Alert from '~/components/Alert'
import { useEffect, useState } from 'react'
import { useEffect, useRef, useState } from 'react'
import { IconAlertCircle, IconArrowBigUpLines, IconCheck, IconCircleCheck, IconReload } from '@tabler/icons-react'
import { SystemUpdateStatus } from '../../../types/system'
import type { ContentUpdateCheckResult, ResourceUpdateInfo } from '../../../types/collections'
import api from '~/lib/api'
import Input from '~/components/inputs/Input'
import Switch from '~/components/inputs/Switch'
import { useMutation } from '@tanstack/react-query'
import { useMutation, useQueryClient } from '@tanstack/react-query'
import { useNotifications } from '~/context/NotificationContext'
import { useSystemSetting } from '~/hooks/useSystemSetting'
import { formatBytes } from '~/lib/util'
type Props = {
updateAvailable: boolean
@ -23,8 +24,26 @@ type Props = {
earlyAccess: boolean
}
const STAGE_LABELS: Record<SystemUpdateStatus['stage'], string> = {
idle: 'Preparing Update',
starting: 'Starting Update',
pulling: 'Pulling Images',
pulled: 'Images Pulled',
recreating: 'Recreating Containers',
complete: 'Update Complete',
error: 'Update Failed',
}
const ADVANCED_STAGES: ReadonlySet<SystemUpdateStatus['stage']> = new Set([
'pulling',
'pulled',
'recreating',
'complete',
])
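// Illustrative walk-through of the self-heal these constants support (the stage sequence is
// invented for the example; the real decision lives in the polling code further down):
//   const seen = { current: false }
//   const isDone = (stage: SystemUpdateStatus['stage']) => {
//     if (ADVANCED_STAGES.has(stage)) seen.current = true
//     return stage === 'complete' || (stage === 'idle' && seen.current)
//   }
//   isDone('starting') // false
//   isDone('pulling')  // false, but the session is now marked as having reached an advanced stage
//   // ...admin container restarts; the sidecar's ~5 s 'complete' window passes unobserved...
//   isDone('idle')     // true: treated as the missed completion, so the page reloads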
function ContentUpdatesSection() {
const { addNotification } = useNotifications()
const queryClient = useQueryClient()
const [checkResult, setCheckResult] = useState<ContentUpdateCheckResult | null>(null)
const [isChecking, setIsChecking] = useState(false)
const [applyingIds, setApplyingIds] = useState<Set<string>>(new Set())
@ -60,6 +79,9 @@ function ContentUpdatesSection() {
? { ...prev, updates: prev.updates.filter((u) => u.resource_id !== update.resource_id) }
: prev
)
// Force Active Downloads to refetch now — small updates finish before the next
// idle poll fires, so without this the user wouldn't see them.
queryClient.invalidateQueries({ queryKey: ['download-jobs'] })
} else {
addNotification({ type: 'error', message: result?.error || 'Failed to start update' })
}
@ -95,6 +117,9 @@ function ContentUpdatesSection() {
? { ...prev, updates: prev.updates.filter((u) => !successIds.has(u.resource_id)) }
: prev
)
if (successIds.size > 0) {
queryClient.invalidateQueries({ queryKey: ['download-jobs'] })
}
}
} catch {
addNotification({ type: 'error', message: 'Failed to apply updates' })
@ -182,6 +207,15 @@ function ContentUpdatesSection() {
</span>
),
},
{
accessor: 'size_bytes',
title: 'Size',
render: (record) => (
<span className="text-desert-stone-dark">
{record.size_bytes ? formatBytes(record.size_bytes, 1) : '—'}
</span>
),
},
{
accessor: 'installed_version',
title: 'Version',
@ -234,6 +268,12 @@ export default function SystemUpdatePage(props: { system: Props }) {
const [email, setEmail] = useState('')
const [versionInfo, setVersionInfo] = useState<Omit<Props, 'earlyAccess'>>(props.system)
const [showConnectionLostNotice, setShowConnectionLostNotice] = useState(false)
// Tracks whether this update session has progressed past 'idle'/'starting'.
// The sidecar sits on 'complete' for ~5s before resetting to 'idle' (see
// install/sidecar-updater/update-watcher.sh), and the SPA can miss that
// window across the admin container restart. If we resurface to 'idle'
// after seeing an advanced stage, treat it as the missed completion.
const seenAdvancedStageRef = useRef(false)
const earlyAccessSetting = useSystemSetting({
key: 'system.earlyAccess', initialData: {
@ -253,11 +293,22 @@ export default function SystemUpdatePage(props: { system: Props }) {
}
setUpdateStatus(response)
if (ADVANCED_STAGES.has(response.stage)) {
seenAdvancedStageRef.current = true
}
// If we can connect again, hide the connection lost notice
setShowConnectionLostNotice(false)
// Check if update is complete or errored
if (response.stage === 'complete') {
// Check if update is complete or errored. We also treat a return to
// 'idle' as completion if we previously saw an advanced stage — this
// catches the race where the sidecar's brief 'complete' window passes
// while we're disconnected during the admin container restart.
const isComplete =
response.stage === 'complete' ||
(response.stage === 'idle' && seenAdvancedStageRef.current)
if (isComplete) {
// Re-check version so the KV store clears the stale "update available" flag
// before we reload, otherwise the banner shows "current → current"
try {
@ -287,6 +338,7 @@ export default function SystemUpdatePage(props: { system: Props }) {
const handleStartUpdate = async () => {
try {
setError(null)
seenAdvancedStageRef.current = false
setIsUpdating(true)
const response = await api.startSystemUpdate()
if (!response || !response.success) {
@ -351,7 +403,7 @@ export default function SystemUpdatePage(props: { system: Props }) {
if (updateStatus?.stage === 'error')
return <IconAlertCircle className="h-12 w-12 text-desert-red" />
if (isUpdating) return <IconReload className="h-12 w-12 text-desert-green animate-spin" />
if (props.system.updateAvailable)
if (versionInfo.updateAvailable)
return <IconArrowBigUpLines className="h-16 w-16 text-desert-green" />
return <IconCircleCheck className="h-16 w-16 text-desert-olive" />
}
@ -363,6 +415,9 @@ export default function SystemUpdatePage(props: { system: Props }) {
onSuccess: () => {
addNotification({ message: 'Setting updated successfully.', type: 'success' })
earlyAccessSetting.refetch()
// Toggling Early Access changes which versions are eligible, so re-evaluate
// immediately rather than making the user click Check Again.
checkVersionMutation.mutate()
},
onError: (error) => {
console.error('Error updating setting:', error)
@ -444,11 +499,11 @@ export default function SystemUpdatePage(props: { system: Props }) {
{!isUpdating && (
<>
<h2 className="text-2xl font-bold text-desert-green mb-2">
{props.system.updateAvailable ? 'Update Available' : 'System Up to Date'}
{versionInfo.updateAvailable ? 'Update Available' : 'System Up to Date'}
</h2>
<p className="text-desert-stone-dark mb-6">
{props.system.updateAvailable
? `A new version (${props.system.latestVersion}) is available for your Project N.O.M.A.D. instance.`
{versionInfo.updateAvailable
? `A new version (${versionInfo.latestVersion}) is available for your Project N.O.M.A.D. instance.`
: 'Your system is running the latest version!'}
</p>
</>
@ -456,8 +511,8 @@ export default function SystemUpdatePage(props: { system: Props }) {
{isUpdating && updateStatus && (
<>
<h2 className="text-2xl font-bold text-desert-green mb-2 capitalize">
{updateStatus.stage === 'idle' ? 'Preparing Update' : updateStatus.stage}
<h2 className="text-2xl font-bold text-desert-green mb-2">
{STAGE_LABELS[updateStatus.stage] ?? updateStatus.stage}
</h2>
<p className="text-desert-stone-dark mb-6">{updateStatus.message}</p>
</>

View File

@ -1,5 +1,6 @@
import { Head } from '@inertiajs/react'
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query'
import { useMemo, useState } from 'react'
import StyledTable from '~/components/StyledTable'
import SettingsLayout from '~/layouts/SettingsLayout'
import api from '~/lib/api'
@ -10,11 +11,18 @@ import useServiceInstalledStatus from '~/hooks/useServiceInstalledStatus'
import Alert from '~/components/Alert'
import { ZimFileWithMetadata } from '../../../../types/zim'
import { SERVICE_NAMES } from '../../../../constants/service_names'
import { formatBytes } from '~/lib/util'
import { IconArrowDown, IconArrowUp, IconArrowsSort } from '@tabler/icons-react'
type SortKey = 'name' | 'size'
type SortDirection = 'asc' | 'desc'
export default function ZimPage() {
const queryClient = useQueryClient()
const { openModal, closeAllModals } = useModals()
const { isInstalled } = useServiceInstalledStatus(SERVICE_NAMES.KIWIX)
const [sortKey, setSortKey] = useState<SortKey>('size')
const [sortDirection, setSortDirection] = useState<SortDirection>('desc')
const { data, isLoading } = useQuery<ZimFileWithMetadata[]>({
queryKey: ['zim-files'],
queryFn: getFiles,
@ -25,6 +33,49 @@ export default function ZimPage() {
return res.data.files
}
const sortedData = useMemo(() => {
if (!data) return []
const copy = [...data]
copy.sort((a, b) => {
let cmp = 0
if (sortKey === 'size') {
const aSize = a.size_bytes ?? 0
const bSize = b.size_bytes ?? 0
cmp = aSize - bSize
} else {
const aName = (a.title || a.name).toLowerCase()
const bName = (b.title || b.name).toLowerCase()
cmp = aName.localeCompare(bName)
}
return sortDirection === 'asc' ? cmp : -cmp
})
return copy
}, [data, sortKey, sortDirection])
function toggleSort(key: SortKey) {
if (sortKey === key) {
setSortDirection((d) => (d === 'asc' ? 'desc' : 'asc'))
} else {
setSortKey(key)
setSortDirection(key === 'size' ? 'desc' : 'asc')
}
}
function renderSortHeader(label: string, key: SortKey) {
const active = sortKey === key
const Icon = !active ? IconArrowsSort : sortDirection === 'asc' ? IconArrowUp : IconArrowDown
return (
<button
type="button"
onClick={() => toggleSort(key)}
className="flex items-center gap-1 font-semibold text-text-primary hover:text-desert-orange"
>
{label}
<Icon className="size-4" />
</button>
)
}
async function confirmDeleteFile(file: ZimFileWithMetadata) {
openModal(
<StyledModal
@ -83,7 +134,7 @@ export default function ZimPage() {
columns={[
{
accessor: 'title',
title: 'Title',
title: renderSortHeader('Title', 'name'),
render: (record) => (
<span className="font-medium">
{record.title || record.name}
@ -99,6 +150,15 @@ export default function ZimPage() {
</span>
),
},
{
accessor: 'size_bytes',
title: renderSortHeader('Size', 'size'),
render: (record) => (
<span className="text-text-secondary tabular-nums">
{record.size_bytes ? formatBytes(record.size_bytes, 1) : '—'}
</span>
),
},
{
accessor: 'actions',
title: 'Actions',
@ -117,7 +177,7 @@ export default function ZimPage() {
),
},
]}
data={data || []}
data={sortedData}
/>
</main>
</div>

View File

@ -21,7 +21,16 @@ import useInternetStatus from '~/hooks/useInternetStatus'
import Alert from '~/components/Alert'
import useServiceInstalledStatus from '~/hooks/useServiceInstalledStatus'
import Input from '~/components/inputs/Input'
import { IconSearch, IconBooks } from '@tabler/icons-react'
import {
IconSearch,
IconBooks,
IconFolder,
IconFileDownload,
IconChevronRight,
IconPlus,
IconTrash,
IconLibrary,
} from '@tabler/icons-react'
import useDebounce from '~/hooks/useDebounce'
import CategoryCard from '~/components/CategoryCard'
import TierSelectionModal from '~/components/TierSelectionModal'
@ -34,6 +43,13 @@ import { SERVICE_NAMES } from '../../../../constants/service_names'
const CURATED_CATEGORIES_KEY = 'curated-categories'
const WIKIPEDIA_STATE_KEY = 'wikipedia-state'
const CUSTOM_LIBRARIES_KEY = 'custom-libraries'
type CustomLibrary = { id: number; name: string; base_url: string; is_default: boolean }
type BrowseResult = {
directories: { name: string; url: string }[]
files: { name: string; url: string; size_bytes: number | null }[]
}
export default function ZimRemoteExplorer() {
const queryClient = useQueryClient()
@ -56,6 +72,20 @@ export default function ZimRemoteExplorer() {
const [selectedWikipedia, setSelectedWikipedia] = useState<string | null>(null)
const [isSubmittingWikipedia, setIsSubmittingWikipedia] = useState(false)
// Custom library state - persist selection to localStorage
const [selectedSource, setSelectedSource] = useState<'default' | number>(() => {
try {
const saved = localStorage.getItem('nomad:zim-library-source')
if (saved && saved !== 'default') return parseInt(saved, 10)
} catch {}
return 'default'
})
const [browseUrl, setBrowseUrl] = useState<string | null>(null)
const [breadcrumbs, setBreadcrumbs] = useState<{ name: string; url: string }[]>([])
const [manageModalOpen, setManageModalOpen] = useState(false)
const [newLibraryName, setNewLibraryName] = useState('')
const [newLibraryUrl, setNewLibraryUrl] = useState('')
const debouncedSetQuery = debounce((val: string) => {
setQuery(val)
}, 400)
@ -79,12 +109,34 @@ export default function ZimRemoteExplorer() {
enabled: true,
})
// Fetch custom libraries
const { data: customLibraries } = useQuery({
queryKey: [CUSTOM_LIBRARIES_KEY],
queryFn: () => api.listCustomLibraries(),
refetchOnWindowFocus: false,
})
// Browse custom library directory
const {
data: browseData,
isLoading: isBrowsing,
error: browseError,
} = useQuery<BrowseResult>({
queryKey: ['browse-library', browseUrl],
queryFn: () => api.browseLibrary(browseUrl!) as Promise<BrowseResult>,
enabled: !!browseUrl && selectedSource !== 'default',
refetchOnWindowFocus: false,
retry: false,
})
const { data, fetchNextPage, isFetching, isLoading } =
useInfiniteQuery<ListRemoteZimFilesResponse>({
queryKey: ['remote-zim-files', query],
queryFn: async ({ pageParam = 0 }) => {
const pageParsed = parseInt((pageParam as number).toString(), 10)
const start = isNaN(pageParsed) ? 0 : pageParsed * 12
// pageParam is an opaque Kiwix offset returned by the backend as `next_start`.
// The backend accumulates across multiple upstream pages when needed (#731), so the
// frontend can't derive the next offset from a 12-item page assumption.
const start = typeof pageParam === 'number' ? pageParam : 0
const res = await api.listRemoteZimFiles({ start, count: 12, query: query || undefined })
if (!res) {
throw new Error('Failed to fetch remote ZIM files.')
@ -92,14 +144,10 @@ export default function ZimRemoteExplorer() {
return res.data
},
initialPageParam: 0,
getNextPageParam: (_lastPage, pages) => {
if (!_lastPage.has_more) {
return undefined // No more pages to fetch
}
return pages.length
},
getNextPageParam: (lastPage) => (lastPage.has_more ? lastPage.next_start : undefined),
refetchOnWindowFocus: false,
placeholderData: keepPreviousData,
enabled: selectedSource === 'default',
})
const flatData = useMemo(() => {
@ -119,18 +167,16 @@ export default function ZimRemoteExplorer() {
(parentRef?: HTMLDivElement | null) => {
if (parentRef) {
const { scrollHeight, scrollTop, clientHeight } = parentRef
//once the user has scrolled within 200px of the bottom of the table, fetch more data if we can
if (
scrollHeight - scrollTop - clientHeight < 200 &&
!isFetching &&
hasMore &&
flatData.length > 0
) {
// Fetch more when near the bottom. The `flatData.length > 0` guard that used to be
// here caused the #731 deadlock when a heavily-saturated install returned an empty
// page with has_more=true — removing it lets the existing on-mount/on-data effect
// below drive bounded auto-fetch until hasMore flips false.
if (scrollHeight - scrollTop - clientHeight < 200 && !isFetching && hasMore) {
fetchNextPage()
}
}
},
[fetchNextPage, isFetching, hasMore, flatData.length]
[fetchNextPage, isFetching, hasMore]
)
const virtualizer = useVirtualizer({
@ -145,6 +191,50 @@ export default function ZimRemoteExplorer() {
fetchOnBottomReached(tableParentRef.current)
}, [fetchOnBottomReached])
// Restore custom library selection on mount when data loads
useEffect(() => {
if (selectedSource !== 'default' && customLibraries) {
const lib = customLibraries.find((l) => l.id === selectedSource)
if (lib && !browseUrl) {
setBrowseUrl(lib.base_url)
setBreadcrumbs([{ name: lib.name, url: lib.base_url }])
} else if (!lib) {
// Saved library was deleted
setSelectedSource('default')
localStorage.setItem('nomad:zim-library-source', 'default')
}
}
}, [customLibraries, selectedSource])
// When selecting a custom library, navigate to its root
const handleSourceChange = (value: string) => {
localStorage.setItem('nomad:zim-library-source', value)
if (value === 'default') {
setSelectedSource('default')
setBrowseUrl(null)
setBreadcrumbs([])
} else {
const id = parseInt(value, 10)
const lib = customLibraries?.find((l) => l.id === id)
if (lib) {
setSelectedSource(id)
setBrowseUrl(lib.base_url)
setBreadcrumbs([{ name: lib.name, url: lib.base_url }])
}
}
}
const navigateToDirectory = (name: string, url: string) => {
setBrowseUrl(url)
setBreadcrumbs((prev) => [...prev, { name, url }])
}
const navigateToBreadcrumb = (index: number) => {
const crumb = breadcrumbs[index]
setBrowseUrl(crumb.url)
setBreadcrumbs((prev) => prev.slice(0, index + 1))
}
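// Illustrative breadcrumb/browseUrl progression for the two helpers above (URLs invented):
//   after handleSourceChange('3'):  breadcrumbs = [{ name: 'US Mirror', url: 'https://mirror.example.org/zim/' }]
//   navigateToDirectory('wikipedia', 'https://mirror.example.org/zim/wikipedia/')
//     -> breadcrumbs = [US Mirror, wikipedia]; browseUrl points at the subdirectory
//   navigateToDirectory('wikipedia_en', 'https://mirror.example.org/zim/wikipedia/wikipedia_en/')
//     -> breadcrumbs = [US Mirror, wikipedia, wikipedia_en]
//   navigateToBreadcrumb(0)
//     -> breadcrumbs = [US Mirror] again; browseUrl back at the library root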
async function confirmDownload(record: RemoteZimFileEntry) {
openModal(
<StyledModal
@ -170,6 +260,31 @@ export default function ZimRemoteExplorer() {
)
}
async function confirmCustomDownload(file: { name: string; url: string; size_bytes: number | null }) {
openModal(
<StyledModal
title="Confirm Download?"
onConfirm={() => {
downloadCustomFile(file)
closeAllModals()
}}
onCancel={closeAllModals}
open={true}
confirmText="Download"
cancelText="Cancel"
confirmVariant="primary"
>
<p className="text-text-primary">
Are you sure you want to download{' '}
<strong>{file.name}</strong>
{file.size_bytes ? ` (${formatBytes(file.size_bytes)})` : ''}? The Kiwix
application will be restarted after the download is complete.
</p>
</StyledModal>,
'confirm-download-custom-modal'
)
}
async function downloadFile(record: RemoteZimFileEntry) {
try {
await api.downloadRemoteZimFile(record.download_url, {
@ -184,6 +299,26 @@ export default function ZimRemoteExplorer() {
}
}
async function downloadCustomFile(file: { name: string; url: string; size_bytes: number | null }) {
try {
await api.downloadRemoteZimFile(file.url, {
title: file.name.replace(/\.zim$/, ''),
size_bytes: file.size_bytes ?? undefined,
})
addNotification({
message: `Started downloading "${file.name}"`,
type: 'success',
})
invalidateDownloads()
} catch (error) {
console.error('Error downloading file:', error)
addNotification({
message: 'Failed to start download.',
type: 'error',
})
}
}
// Category/tier handlers
const handleCategoryClick = (category: CategoryWithStatus) => {
if (!isOnline) return
@ -269,6 +404,35 @@ export default function ZimRemoteExplorer() {
},
})
// Custom library management
const addLibraryMutation = useMutation({
mutationFn: () => api.addCustomLibrary(newLibraryName.trim(), newLibraryUrl.trim()),
onSuccess: () => {
addNotification({ message: 'Custom library added.', type: 'success' })
queryClient.invalidateQueries({ queryKey: [CUSTOM_LIBRARIES_KEY] })
setNewLibraryName('')
setNewLibraryUrl('')
},
onError: () => {
addNotification({ message: 'Failed to add custom library.', type: 'error' })
},
})
const removeLibraryMutation = useMutation({
mutationFn: (id: number) => api.removeCustomLibrary(id),
onSuccess: (_data, id) => {
addNotification({ message: 'Custom library removed.', type: 'success' })
queryClient.invalidateQueries({ queryKey: [CUSTOM_LIBRARIES_KEY] })
if (selectedSource === id) {
setSelectedSource('default')
setBrowseUrl(null)
setBreadcrumbs([])
}
},
})
const hasCustomLibraries = customLibraries && customLibraries.length > 0
return (
<SettingsLayout>
<Head title="Content Explorer | Project N.O.M.A.D." />
@ -307,7 +471,7 @@ export default function ZimRemoteExplorer() {
Force Refresh Collections
</StyledButton>
</div>
{/* Wikipedia Selector */}
{isLoadingWikipedia ? (
<div className="mt-8 bg-surface-primary rounded-lg border border-border-subtle p-6">
@ -365,87 +529,303 @@ export default function ZimRemoteExplorer() {
) : (
<p className="text-text-muted mt-4">No curated content categories available.</p>
)}
<StyledSectionHeader title="Browse the Kiwix Library" className="mt-12 mb-4" />
<div className="flex justify-start mt-4">
<Input
name="search"
label=""
placeholder="Search available ZIM files..."
value={queryUI}
onChange={(e) => {
setQueryUI(e.target.value)
debouncedSetQuery(e.target.value)
}}
className="w-1/3"
leftIcon={<IconSearch className="w-5 h-5 text-text-muted" />}
/>
{/* Kiwix Library / Custom Library Browser */}
<div className="mt-12 mb-4 flex items-center justify-between">
<StyledSectionHeader title="Browse the Kiwix Library" className="!mb-0" />
<StyledButton
onClick={() => setManageModalOpen(true)}
disabled={!isOnline}
icon="IconLibrary"
>
{hasCustomLibraries ? 'Manage Custom Libraries' : 'Add Custom Library'}
</StyledButton>
</div>
{/* Source selector dropdown */}
{hasCustomLibraries && (
<div className="flex items-center gap-3 mb-4">
<label className="text-sm font-medium text-text-secondary">Source:</label>
<select
value={selectedSource === 'default' ? 'default' : String(selectedSource)}
onChange={(e) => handleSourceChange(e.target.value)}
className="rounded-md border border-border-default bg-surface-primary text-text-primary px-3 py-1.5 text-sm focus:outline-none focus:ring-2 focus:ring-desert-green"
>
<option value="default">Default (Kiwix)</option>
{customLibraries.map((lib) => (
<option key={lib.id} value={String(lib.id)}>
{lib.name}
</option>
))}
</select>
</div>
)}
{/* Default Kiwix library browser */}
{selectedSource === 'default' && (
<>
<div className="flex justify-start mt-4">
<Input
name="search"
label=""
placeholder="Search available ZIM files..."
value={queryUI}
onChange={(e) => {
setQueryUI(e.target.value)
debouncedSetQuery(e.target.value)
}}
className="w-1/3"
leftIcon={<IconSearch className="w-5 h-5 text-text-muted" />}
/>
</div>
<StyledTable<RemoteZimFileEntry & { actions?: any }>
data={flatData.map((i, idx) => {
const row = virtualizer.getVirtualItems().find((v) => v.index === idx)
return {
...i,
height: `${row?.size || 48}px`,
translateY: row?.start || 0,
}
})}
ref={tableParentRef}
loading={isLoading}
columns={[
{
accessor: 'title',
},
{
accessor: 'author',
},
{
accessor: 'summary',
},
{
accessor: 'updated',
render(record) {
return new Intl.DateTimeFormat('en-US', {
dateStyle: 'medium',
}).format(new Date(record.updated))
},
},
{
accessor: 'size_bytes',
title: 'Size',
render(record) {
return formatBytes(record.size_bytes)
},
},
{
accessor: 'actions',
render(record) {
return (
<div className="flex space-x-2">
<StyledButton
icon={'IconDownload'}
onClick={() => {
confirmDownload(record)
}}
>
Download
</StyledButton>
</div>
)
},
},
]}
className="relative overflow-x-auto overflow-y-auto h-[600px] w-full mt-4"
tableBodyStyle={{
position: 'relative',
height: `${virtualizer.getTotalSize()}px`,
}}
containerProps={{
onScroll: (e) => fetchOnBottomReached(e.currentTarget as HTMLDivElement),
}}
compact
rowLines
/>
</>
)}
{/* Custom library directory browser */}
{selectedSource !== 'default' && (
<div className="mt-4">
{/* Breadcrumb navigation */}
<nav className="flex items-center gap-1 text-sm text-text-muted mb-4 flex-wrap">
{breadcrumbs.map((crumb, idx) => (
<span key={idx} className="flex items-center gap-1">
{idx > 0 && <IconChevronRight className="w-4 h-4" />}
{idx < breadcrumbs.length - 1 ? (
<button
onClick={() => navigateToBreadcrumb(idx)}
className="text-desert-green hover:underline"
>
{crumb.name}
</button>
) : (
<span className="text-text-primary font-medium">{crumb.name}</span>
)}
</span>
))}
</nav>
{isBrowsing && (
<div className="flex justify-center py-12">
<div className="animate-spin rounded-full h-8 w-8 border-b-2 border-desert-green"></div>
</div>
)}
{browseError && (
<Alert
title="Could not fetch directory listing from this URL."
message="The server may not support directory browsing, or the URL may be incorrect."
type="error"
variant="solid"
/>
)}
{!isBrowsing && !browseError && browseData && (
<div className="bg-surface-primary rounded-lg border border-border-subtle overflow-hidden relative" style={{ maxHeight: '600px', overflowY: 'auto' }}>
{browseData.directories.length === 0 && browseData.files.length === 0 ? (
<p className="text-text-muted p-6 text-center">
No directories or ZIM files found at this location.
</p>
) : (
<table className="w-full text-sm">
<thead>
<tr className="border-b border-border-subtle bg-surface-secondary sticky top-0 z-10">
<th className="text-left px-4 py-3 font-medium text-text-secondary">Name</th>
<th className="text-right px-4 py-3 font-medium text-text-secondary w-32">Size</th>
<th className="text-right px-4 py-3 font-medium text-text-secondary w-36"></th>
</tr>
</thead>
<tbody>
{browseData.directories.map((dir) => (
<tr
key={dir.url}
className="border-b border-border-subtle hover:bg-surface-secondary cursor-pointer transition-colors"
onClick={() => navigateToDirectory(dir.name, dir.url)}
>
<td className="px-4 py-3">
<span className="flex items-center gap-2 text-text-primary">
<IconFolder className="w-5 h-5 text-desert-orange" />
{dir.name}
</span>
</td>
<td className="text-right px-4 py-3 text-text-muted">--</td>
<td className="text-right px-4 py-3">
<IconChevronRight className="w-4 h-4 text-text-muted ml-auto" />
</td>
</tr>
))}
{browseData.files.map((file) => (
<tr
key={file.url}
className="border-b border-border-subtle hover:bg-surface-secondary transition-colors"
>
<td className="px-4 py-3">
<span className="flex items-center gap-2 text-text-primary">
<IconFileDownload className="w-5 h-5 text-desert-green" />
{file.name}
</span>
</td>
<td className="text-right px-4 py-3 text-text-muted">
{file.size_bytes ? formatBytes(file.size_bytes) : '--'}
</td>
<td className="text-right px-4 py-3">
<StyledButton
icon="IconDownload"
onClick={() => confirmCustomDownload(file)}
>
Download
</StyledButton>
</td>
</tr>
))}
</tbody>
</table>
)}
</div>
)}
</div>
)}
<ActiveDownloads filetype="zim" withHeader />
{/* Manage Custom Libraries Modal */}
<StyledModal
title="Manage Custom Libraries"
open={manageModalOpen}
onCancel={() => setManageModalOpen(false)}
cancelText="Close"
>
<div className="space-y-6">
<div>
<p className="text-sm text-text-muted mb-4">
Add Kiwix mirrors or other ZIM file sources for faster downloads.
</p>
{/* Existing libraries */}
{customLibraries && customLibraries.length > 0 && (
<div className="space-y-2 mb-6">
{customLibraries.map((lib) => (
<div
key={lib.id}
className="flex items-center justify-between bg-surface-secondary rounded-lg px-4 py-3 border border-border-subtle"
>
<div className="min-w-0 flex-1">
<p className="font-medium text-text-primary truncate">
{lib.name}
{lib.is_default && (
<span className="ml-2 text-xs text-text-muted font-normal">(built-in)</span>
)}
</p>
<p className="text-xs text-text-muted truncate">{lib.base_url}</p>
</div>
{!lib.is_default && (
<button
onClick={() => removeLibraryMutation.mutate(lib.id)}
className="ml-3 p-1.5 text-text-muted hover:text-red-500 transition-colors rounded"
title="Remove library"
>
<IconTrash className="w-4 h-4" />
</button>
)}
</div>
))}
</div>
)}
{/* Add new library form */}
<div className="space-y-3">
<Input
name="library-name"
label="Library Name"
placeholder="e.g., Debian Mirror"
value={newLibraryName}
onChange={(e) => setNewLibraryName(e.target.value)}
/>
<Input
name="library-url"
label="Base URL"
placeholder="e.g., https://cdimage.debian.org/mirror/kiwix.org/zim/"
value={newLibraryUrl}
onChange={(e) => setNewLibraryUrl(e.target.value)}
/>
<StyledButton
icon="IconPlus"
onClick={() => addLibraryMutation.mutate()}
disabled={
!newLibraryName.trim() ||
!newLibraryUrl.trim() ||
addLibraryMutation.isPending
}
>
Add Library
</StyledButton>
</div>
</div>
</div>
</StyledModal>
</main>
</div>
</SettingsLayout>

View File

@ -38,7 +38,7 @@
"@vinejs/vine": "^3.0.1",
"@vitejs/plugin-react": "^4.6.0",
"autoprefixer": "^10.4.21",
"axios": "^1.13.5",
"axios": "^1.15.0",
"better-sqlite3": "^12.1.1",
"bullmq": "^5.65.1",
"cheerio": "^1.2.0",
@ -96,7 +96,7 @@
"prettier": "^3.5.3",
"ts-node-maintained": "^10.9.5",
"typescript": "~5.8.3",
"vite": "^6.4.1"
"vite": "^6.4.2"
}
},
"node_modules/@adobe/css-tools": {
@ -520,9 +520,9 @@
}
},
"node_modules/@adonisjs/http-server": {
"version": "7.8.0",
"resolved": "https://registry.npmjs.org/@adonisjs/http-server/-/http-server-7.8.0.tgz",
"integrity": "sha512-aVMOpExPDNwxjnKGnc4g4sJTIQC3CfNwzWfPFWJm4WnAGXxdI3OxI2zU9FTopB50y0OVK3dWO4/c1Fu6U4vjWQ==",
"version": "7.8.1",
"resolved": "https://registry.npmjs.org/@adonisjs/http-server/-/http-server-7.8.1.tgz",
"integrity": "sha512-ScwKHJstXQbkQXSNqD6MOESowZ+WhRyDXxjSQV/T7IpyMEg/F8NxpR5jAvrpw1BaGzd3t50LrgTrb7ouD8DOpA==",
"license": "MIT",
"dependencies": {
"@paralleldrive/cuid2": "^2.2.2",
@ -4383,7 +4383,6 @@
"cpu": [
"arm64"
],
"dev": true,
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
@ -4400,7 +4399,6 @@
"cpu": [
"x64"
],
"dev": true,
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
@ -4417,7 +4415,6 @@
"cpu": [
"arm"
],
"dev": true,
"license": "Apache-2.0",
"optional": true,
"os": [
@ -4434,7 +4431,6 @@
"cpu": [
"arm64"
],
"dev": true,
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
@ -4451,7 +4447,6 @@
"cpu": [
"arm64"
],
"dev": true,
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
@ -4468,7 +4463,6 @@
"cpu": [
"x64"
],
"dev": true,
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
@ -4485,7 +4479,6 @@
"cpu": [
"x64"
],
"dev": true,
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
@ -4502,7 +4495,6 @@
"cpu": [
"arm64"
],
"dev": true,
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
@ -4519,7 +4511,6 @@
"cpu": [
"ia32"
],
"dev": true,
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
@ -4536,7 +4527,6 @@
"cpu": [
"x64"
],
"dev": true,
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
@ -6408,14 +6398,14 @@
}
},
"node_modules/axios": {
"version": "1.13.5",
"resolved": "https://registry.npmjs.org/axios/-/axios-1.13.5.tgz",
"integrity": "sha512-cz4ur7Vb0xS4/KUN0tPWe44eqxrIu31me+fbang3ijiNscE129POzipJJA6zniq2C/Z6sJCjMimjS8Lc/GAs8Q==",
"version": "1.15.0",
"resolved": "https://registry.npmjs.org/axios/-/axios-1.15.0.tgz",
"integrity": "sha512-wWyJDlAatxk30ZJer+GeCWS209sA42X+N5jU2jy6oHTp7ufw8uzUTVFBX9+wTfAlhiJXGS0Bq7X6efruWjuK9Q==",
"license": "MIT",
"dependencies": {
"follow-redirects": "^1.15.11",
"form-data": "^4.0.5",
"proxy-from-env": "^1.1.0"
"proxy-from-env": "^2.1.0"
}
},
"node_modules/bail": {
@ -9068,9 +9058,9 @@
}
},
"node_modules/follow-redirects": {
"version": "1.15.11",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.11.tgz",
"integrity": "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ==",
"version": "1.16.0",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.16.0.tgz",
"integrity": "sha512-y5rN/uOsadFT/JfYwhxRS5R7Qce+g3zG97+JrtFZlC9klX/W5hD7iiLzScI4nZqUS7DNUdhPgw4xI8W2LuXlUw==",
"funding": [
{
"type": "individual",
@ -11029,9 +11019,9 @@
}
},
"node_modules/lodash": {
"version": "4.17.23",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
"integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
"version": "4.18.1",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.18.1.tgz",
"integrity": "sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q==",
"license": "MIT"
},
"node_modules/lodash-es": {
@ -12184,9 +12174,9 @@
}
},
"node_modules/micromatch/node_modules/picomatch": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz",
"integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==",
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.2.tgz",
"integrity": "sha512-V7+vQEJ06Z+c5tSye8S+nHUfI51xoXIXjHQ99cQtKUkQqqO1kO/KCJUfZXuB47h/YBlDhah2H3hdUGXn8ie0oA==",
"license": "MIT",
"engines": {
"node": ">=8.6"
@ -13361,9 +13351,9 @@
"license": "ISC"
},
"node_modules/picomatch": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.4.tgz",
"integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==",
"license": "MIT",
"engines": {
"node": ">=12"
@ -13758,9 +13748,9 @@
}
},
"node_modules/protobufjs": {
"version": "7.5.4",
"resolved": "https://registry.npmjs.org/protobufjs/-/protobufjs-7.5.4.tgz",
"integrity": "sha512-CvexbZtbov6jW2eXAvLukXjXUW1TzFaivC46BpWc/3BpcCysb5Vffu+B3XHMm8lVEuy2Mm4XGex8hBSg1yapPg==",
"version": "7.5.5",
"resolved": "https://registry.npmjs.org/protobufjs/-/protobufjs-7.5.5.tgz",
"integrity": "sha512-3wY1AxV+VBNW8Yypfd1yQY9pXnqTAN+KwQxL8iYm3/BjKYMNg4i0owhEe26PWDOMaIrzeeF98Lqd5NGz4omiIg==",
"hasInstallScript": true,
"license": "BSD-3-Clause",
"dependencies": {
@ -13782,9 +13772,9 @@
}
},
"node_modules/protocol-buffers-schema": {
"version": "3.6.0",
"resolved": "https://registry.npmjs.org/protocol-buffers-schema/-/protocol-buffers-schema-3.6.0.tgz",
"integrity": "sha512-TdDRD+/QNdrCGCE7v8340QyuXd4kIWIgapsE2+n/SaGiSSbomYl4TjHlvIoCWRpE7wFt02EpB35VVA2ImcBVqw==",
"version": "3.6.1",
"resolved": "https://registry.npmjs.org/protocol-buffers-schema/-/protocol-buffers-schema-3.6.1.tgz",
"integrity": "sha512-VG2K63Igkiv9p76tk1lilczEK1cT+kCjKtkdhw1dQZV3k3IXJbd3o6Ho8b9zJZaHSnT2hKe4I+ObmX9w6m5SmQ==",
"license": "MIT"
},
"node_modules/proxy-addr": {
@ -13801,10 +13791,13 @@
}
},
"node_modules/proxy-from-env": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-1.1.0.tgz",
"integrity": "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==",
"license": "MIT"
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-2.1.0.tgz",
"integrity": "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA==",
"license": "MIT",
"engines": {
"node": ">=10"
}
},
"node_modules/pump": {
"version": "3.0.3",
@ -16425,9 +16418,9 @@
}
},
"node_modules/vite": {
"version": "6.4.1",
"resolved": "https://registry.npmjs.org/vite/-/vite-6.4.1.tgz",
"integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
"version": "6.4.2",
"resolved": "https://registry.npmjs.org/vite/-/vite-6.4.2.tgz",
"integrity": "sha512-2N/55r4JDJ4gdrCvGgINMy+HH3iRpNIz8K6SFwVsA+JbQScLiC+clmAxBgwiSPgcG9U15QmvqCGWzMbqda5zGQ==",
"license": "MIT",
"dependencies": {
"esbuild": "^0.25.0",

View File

@ -38,94 +38,94 @@
"#jobs/*": "./app/jobs/*.js"
},
"devDependencies": {
"@adonisjs/assembler": "^7.8.2",
"@adonisjs/eslint-config": "^2.0.0",
"@adonisjs/prettier-config": "^1.4.4",
"@adonisjs/tsconfig": "^1.4.0",
"@japa/assert": "^4.0.1",
"@japa/plugin-adonisjs": "^4.0.0",
"@japa/runner": "^4.2.0",
"@adonisjs/assembler": "7.8.2",
"@adonisjs/eslint-config": "2.1.2",
"@adonisjs/prettier-config": "1.4.5",
"@adonisjs/tsconfig": "1.4.1",
"@japa/assert": "4.2.0",
"@japa/plugin-adonisjs": "4.0.0",
"@japa/runner": "4.5.0",
"@swc/core": "1.11.24",
"@tanstack/eslint-plugin-query": "^5.81.2",
"@types/compression": "^1.8.1",
"@types/dockerode": "^4.0.1",
"@types/luxon": "^3.6.2",
"@types/node": "^22.15.18",
"@types/react": "^19.1.8",
"@types/react-dom": "^19.1.6",
"@types/stopword": "^2.0.3",
"eslint": "^9.26.0",
"hot-hook": "^0.4.0",
"prettier": "^3.5.3",
"ts-node-maintained": "^10.9.5",
"typescript": "~5.8.3",
"vite": "^6.4.1"
"@tanstack/eslint-plugin-query": "5.91.4",
"@types/compression": "1.8.1",
"@types/dockerode": "4.0.1",
"@types/luxon": "3.7.1",
"@types/node": "22.19.7",
"@types/react": "19.2.10",
"@types/react-dom": "19.2.3",
"@types/stopword": "2.0.3",
"eslint": "9.39.2",
"hot-hook": "0.4.0",
"prettier": "3.8.1",
"ts-node-maintained": "10.9.6",
"typescript": "5.8.3",
"vite": "6.4.2"
},
"dependencies": {
"@adonisjs/auth": "^9.4.0",
"@adonisjs/core": "^6.18.0",
"@adonisjs/cors": "^2.2.1",
"@adonisjs/inertia": "^3.1.1",
"@adonisjs/lucid": "^21.8.2",
"@adonisjs/session": "^7.5.1",
"@adonisjs/shield": "^8.2.0",
"@adonisjs/static": "^1.1.1",
"@adonisjs/transmit": "^2.0.2",
"@adonisjs/transmit-client": "^1.0.0",
"@adonisjs/vite": "^4.0.0",
"@chonkiejs/core": "^0.0.7",
"@headlessui/react": "^2.2.4",
"@inertiajs/react": "^2.0.13",
"@markdoc/markdoc": "^0.5.2",
"@openzim/libzim": "^4.0.0",
"@protomaps/basemaps": "^5.7.0",
"@qdrant/js-client-rest": "^1.16.2",
"@tabler/icons-react": "^3.34.0",
"@tailwindcss/vite": "^4.1.10",
"@tanstack/react-query": "^5.81.5",
"@tanstack/react-query-devtools": "^5.83.0",
"@tanstack/react-virtual": "^3.13.12",
"@uppy/core": "^5.2.0",
"@uppy/dashboard": "^5.1.0",
"@uppy/react": "^5.1.1",
"@vinejs/vine": "^3.0.1",
"@vitejs/plugin-react": "^4.6.0",
"autoprefixer": "^10.4.21",
"axios": "^1.13.5",
"better-sqlite3": "^12.1.1",
"bullmq": "^5.65.1",
"cheerio": "^1.2.0",
"compression": "^1.8.1",
"dockerode": "^4.0.7",
"edge.js": "^6.2.1",
"fast-xml-parser": "^5.5.7",
"fuse.js": "^7.1.0",
"jszip": "^3.10.1",
"luxon": "^3.6.1",
"maplibre-gl": "^4.7.1",
"mysql2": "^3.14.1",
"ollama": "^0.6.3",
"openai": "^6.27.0",
"pdf-parse": "^2.4.5",
"pdf2pic": "^3.2.0",
"pino-pretty": "^13.0.0",
"pmtiles": "^4.4.0",
"postcss": "^8.5.6",
"react": "^19.1.0",
"react-adonis-transmit": "^1.0.1",
"react-dom": "^19.1.0",
"react-map-gl": "^8.1.0",
"react-markdown": "^10.1.0",
"reflect-metadata": "^0.2.2",
"remark-gfm": "^4.0.1",
"sharp": "^0.34.5",
"stopword": "^3.1.5",
"systeminformation": "^5.31.0",
"tailwindcss": "^4.2.1",
"tar": "^7.5.11",
"tesseract.js": "^7.0.0",
"url-join": "^5.0.0",
"yaml": "^2.8.3"
"@adonisjs/auth": "9.6.0",
"@adonisjs/core": "6.19.3",
"@adonisjs/cors": "2.2.1",
"@adonisjs/inertia": "3.1.1",
"@adonisjs/lucid": "21.8.2",
"@adonisjs/session": "7.7.1",
"@adonisjs/shield": "8.2.0",
"@adonisjs/static": "1.1.1",
"@adonisjs/transmit": "2.0.2",
"@adonisjs/transmit-client": "1.1.0",
"@adonisjs/vite": "4.0.0",
"@chonkiejs/core": "0.0.7",
"@headlessui/react": "2.2.9",
"@inertiajs/react": "2.3.13",
"@markdoc/markdoc": "0.5.4",
"@openzim/libzim": "4.0.0",
"@protomaps/basemaps": "5.7.0",
"@qdrant/js-client-rest": "1.16.2",
"@tabler/icons-react": "3.36.1",
"@tailwindcss/vite": "4.1.18",
"@tanstack/react-query": "5.90.20",
"@tanstack/react-query-devtools": "5.91.3",
"@tanstack/react-virtual": "3.13.18",
"@uppy/core": "5.2.0",
"@uppy/dashboard": "5.1.0",
"@uppy/react": "5.1.1",
"@vinejs/vine": "3.0.1",
"@vitejs/plugin-react": "4.7.0",
"autoprefixer": "10.4.24",
"axios": "1.15.0",
"better-sqlite3": "12.6.2",
"bullmq": "5.67.2",
"cheerio": "1.2.0",
"compression": "1.8.1",
"dockerode": "4.0.9",
"edge.js": "6.4.0",
"fast-xml-parser": "5.5.9",
"fuse.js": "7.1.0",
"jszip": "3.10.1",
"luxon": "3.7.2",
"maplibre-gl": "4.7.1",
"mysql2": "3.16.2",
"ollama": "0.6.3",
"openai": "6.27.0",
"pdf-parse": "2.4.5",
"pdf2pic": "3.2.0",
"pino-pretty": "13.1.3",
"pmtiles": "4.4.0",
"postcss": "8.5.6",
"react": "19.2.4",
"react-adonis-transmit": "1.0.1",
"react-dom": "19.2.4",
"react-map-gl": "8.1.0",
"react-markdown": "10.1.0",
"reflect-metadata": "0.2.2",
"remark-gfm": "4.0.1",
"sharp": "0.34.5",
"stopword": "3.1.5",
"systeminformation": "5.31.0",
"tailwindcss": "4.2.2",
"tar": "7.5.11",
"tesseract.js": "7.0.0",
"url-join": "5.0.0",
"yaml": "2.8.3"
},
"hotHook": {
"boundaries": [

View File

@ -0,0 +1,62 @@
import logger from '@adonisjs/core/services/logger'
import type { ApplicationService } from '@adonisjs/core/types'
/**
* Ensures the nomad_qdrant container has the `unless-stopped` restart policy.
*
* Existing installations may have been created before this policy was enforced
* in the service seeder. Docker allows updating a container's restart policy
* without recreating it via the container.update() API.
*
* This provider runs once on every admin startup. If the policy is already
* correct, the check is a no-op.
*/
export default class QdrantRestartPolicyProvider {
constructor(protected app: ApplicationService) {}
async boot() {
if (this.app.getEnvironment() !== 'web') return
setImmediate(async () => {
try {
const Service = (await import('#models/service')).default
const { SERVICE_NAMES } = await import('../constants/service_names.js')
const Docker = (await import('dockerode')).default
const qdrantService = await Service.query()
.where('service_name', SERVICE_NAMES.QDRANT)
.first()
if (!qdrantService?.installed) {
logger.info('[QdrantRestartPolicyProvider] Qdrant not installed — skipping restart policy check.')
return
}
const docker = new Docker({ socketPath: '/var/run/docker.sock' })
const containers = await docker.listContainers({ all: true })
const containerInfo = containers.find((c) => c.Names.includes(`/${SERVICE_NAMES.QDRANT}`))
if (!containerInfo) {
logger.warn('[QdrantRestartPolicyProvider] Qdrant container not found — skipping restart policy check.')
return
}
const container = docker.getContainer(containerInfo.Id)
const inspected = await container.inspect()
const currentPolicy = inspected.HostConfig?.RestartPolicy?.Name
if (currentPolicy === 'unless-stopped') {
logger.info('[QdrantRestartPolicyProvider] Qdrant already has unless-stopped restart policy — no update needed.')
return
}
logger.info(`[QdrantRestartPolicyProvider] Qdrant restart policy is "${currentPolicy ?? 'none'}" — updating to unless-stopped.`)
await container.update({ RestartPolicy: { Name: 'unless-stopped', MaximumRetryCount: 0 } })
logger.info('[QdrantRestartPolicyProvider] Qdrant restart policy updated successfully.')
} catch (err: any) {
logger.error(`[QdrantRestartPolicyProvider] Failed to update Qdrant restart policy: ${err.message}`)
// Non-fatal: the container will still run, just without auto-restart on crash.
}
})
}
}

View File

@ -0,0 +1,56 @@
import logger from '@adonisjs/core/services/logger'
import type { ApplicationService } from '@adonisjs/core/types'
/**
* Self-heals stale `system.updateAvailable` after a sidecar-driven update.
*
* When the admin container is recreated on a new image, the KVStore still
* carries pre-update values for `system.updateAvailable` and
* `system.latestVersion`. Without intervention the UI keeps showing the
* "update available" banner until the next scheduled CheckUpdateJob (could be up to ~12h).
*
* Synchronous self-heal (no network): if the cached "latest" is not newer
* than the version we are now running, clear `updateAvailable`. The next
* scheduled CheckUpdateJob refreshes the cache from GitHub; we deliberately
* do not hit the network at boot, to avoid coupling container startup to a
* GitHub request (e.g. a container restart loop would flood GitHub with requests).
*
* Note: this provider does not set `updateAvailable` to true if the cached
* "latest" is newer than the current version. We rely on the next scheduled
* CheckUpdateJob to do that, to avoid false positives in case of a stale cache.
*/
export default class VersionCheckProvider {
constructor(protected app: ApplicationService) { }
async boot() {
if (this.app.getEnvironment() !== 'web') return
setImmediate(async () => {
try {
const KVStore = (await import('#models/kv_store')).default
const { SystemService } = await import('#services/system_service')
const { isNewerVersion } = await import('../app/utils/version.js')
const current = SystemService.getAppVersion()
if (current === 'dev' || current === '0.0.0') {
logger.info(`[VersionCheckProvider] Skipping self-heal for version ${current}. Appears to be a dev build without a proper version set.`)
return
}
logger.info(`[VersionCheckProvider] Checking for stale updateAvailable (current=${current})`)
const cachedLatest = (await KVStore.getValue('system.latestVersion')) as string | null
const earlyAccess = ((await KVStore.getValue('system.earlyAccess')) ?? false) as boolean
if (cachedLatest && !isNewerVersion(cachedLatest, current, earlyAccess)) {
await KVStore.setValue('system.updateAvailable', false)
logger.info(
`[VersionCheckProvider] Cleared stale updateAvailable (cached=${cachedLatest}, current=${current})`
)
}
} catch (err: any) {
logger.warn(`[VersionCheckProvider] Self-heal skipped: ${err?.message ?? err}`)
}
})
}
}
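
For illustration only, not part of this changeset: the provider above assumes a comparison helper shaped like isNewerVersion(candidate, current, earlyAccess); the project's real implementation lives in app/utils/version.ts and is not shown in this diff. A minimal sketch of such a check, under those assumptions, might look like:

// Hypothetical sketch of a semver-ish comparison gated by an early-access flag.
// It is NOT the project's isNewerVersion; only the call signature is taken from
// the provider above, everything else is assumed for illustration.
function isNewerVersionSketch(candidate: string, current: string, earlyAccess: boolean): boolean {
  const parse = (v: string) => {
    const [core, pre = ''] = v.split('-') // "1.32.0-rc.1" -> core "1.32.0", pre "rc.1"
    const nums = core.split('.').map((n) => Number.parseInt(n, 10) || 0)
    return { nums, pre }
  }
  const a = parse(candidate)
  const b = parse(current)
  // Pre-release candidates only count as "newer" when early access is enabled.
  if (a.pre && !earlyAccess) return false
  for (let i = 0; i < 3; i++) {
    const diff = (a.nums[i] ?? 0) - (b.nums[i] ?? 0)
    if (diff !== 0) return diff > 0
  }
  // Same numeric core: a stable candidate beats a pre-release current; comparing
  // two pre-releases of the same core is left out of this sketch.
  if (!a.pre && b.pre) return true
  return false
}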

File diff suppressed because one or more lines are too long

View File

@ -80,6 +80,10 @@ router
router.post('/download-collection', [MapsController, 'downloadCollection'])
router.get('/global-map-info', [MapsController, 'globalMapInfo'])
router.post('/download-global-map', [MapsController, 'downloadGlobalMap'])
router.get('/countries', [MapsController, 'listCountries'])
router.get('/country-groups', [MapsController, 'listCountryGroups'])
router.post('/extract-preflight', [MapsController, 'extractPreflight'])
router.post('/extract', [MapsController, 'extractRegion'])
router.get('/markers', [MapsController, 'listMarkers'])
router.post('/markers', [MapsController, 'createMarker'])
router.patch('/markers/:id', [MapsController, 'updateMarker'])
@ -143,6 +147,7 @@ router
router.delete('/failed-jobs', [RagController, 'cleanupFailedJobs'])
router.get('/job-status', [RagController, 'getJobStatus'])
router.post('/sync', [RagController, 'scanAndSync'])
router.get('/health', [RagController, 'health'])
})
.prefix('/api/rag')
@ -178,6 +183,12 @@ router
router.get('/wikipedia', [ZimController, 'getWikipediaState'])
router.post('/wikipedia/select', [ZimController, 'selectWikipedia'])
router.get('/custom-libraries', [ZimController, 'listCustomLibraries'])
router.post('/custom-libraries', [ZimController, 'addCustomLibrary'])
router.delete('/custom-libraries/:id', [ZimController, 'removeCustomLibrary'])
router.get('/browse-library', [ZimController, 'browseLibrary'])
router.delete('/:filename', [ZimController, 'delete'])
})
.prefix('/api/zim')
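
For illustration only, not part of this changeset: the custom-library endpoints registered above might be exercised from the SPA roughly as follows. The paths and prefix come from the routes file; the query and body shapes are assumptions based on the Manage Custom Libraries modal shown earlier.

// Hypothetical client calls for the /api/zim custom-library routes added above.
// Field names (name, base_url) mirror the modal's form state; the browse-library
// query parameters are assumptions for illustration.
import axios from 'axios'

export async function customLibraryFlow() {
  // List configured custom ZIM libraries
  const { data: libraries } = await axios.get('/api/zim/custom-libraries')

  // Add a new library source
  await axios.post('/api/zim/custom-libraries', {
    name: 'Debian Mirror',
    base_url: 'https://cdimage.debian.org/mirror/kiwix.org/zim/',
  })

  // Browse a directory of the first library (query shape is assumed)
  const { data: listing } = await axios.get('/api/zim/browse-library', {
    params: { libraryId: libraries[0]?.id, path: '/' },
  })

  // Remove a non-built-in library by id
  await axios.delete(`/api/zim/custom-libraries/${libraries[0]?.id}`)

  return listing
}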

View File

@ -0,0 +1,37 @@
import * as assert from 'node:assert/strict'
import { test } from 'node:test'
import { hasDownloadedGlobalMap } from '../../inertia/lib/global_map_banner.js'
test('returns true when the global map key already exists on disk', () => {
assert.equal(
hasDownloadedGlobalMap('20260402.pmtiles', [
{ name: '20260402.pmtiles' },
{ name: 'california.pmtiles' },
]),
true
)
})
test('returns false when the global map key is missing', () => {
assert.equal(
hasDownloadedGlobalMap('20260402.pmtiles', [
{ name: 'california.pmtiles' },
]),
false
)
})
test('returns true when an older global map build is already on disk', () => {
assert.equal(
hasDownloadedGlobalMap('20260402.pmtiles', [
{ name: '20260315.pmtiles' },
{ name: 'california.pmtiles' },
]),
true
)
})
test('returns false when there is no global map info', () => {
assert.equal(hasDownloadedGlobalMap(undefined, [{ name: '20260402.pmtiles' }]), false)
})

View File

@ -86,6 +86,7 @@ export type ResourceUpdateInfo = {
installed_version: string
latest_version: string
download_url: string
size_bytes?: number
}
export type ContentUpdateCheckResult = {

View File

@ -12,6 +12,7 @@ export const KV_STORE_SCHEMA = {
'gpu.type': 'string',
'ai.remoteOllamaUrl': 'string',
'ai.ollamaFlashAttention': 'boolean',
'ai.amdGpuAcceleration': 'boolean',
} as const
type KVTagToType<T extends string> = T extends 'boolean' ? boolean : string

View File

@ -21,3 +21,37 @@ export type MapLayer = {
'source-layer'?: string
[key: string]: any
}
/** ISO 3166-1 alpha-2 country code (e.g. "DE", "FR", "US"). */
export type CountryCode = string
export type Country = {
code: CountryCode
code3: string
name: string
continent: string
subregion: string
population: number
}
export type CountryGroup = {
id: string
name: string
description: string
countries: CountryCode[]
}
export type MapExtractRequest = {
countries: CountryCode[]
maxzoom?: number
}
export type MapExtractPreflight = {
tiles: number
bytes: number
source: {
url: string
date: string
key: string
}
}
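
For illustration only, not part of this changeset: values conforming to the extract types defined above might look like the following; the concrete numbers and URL are made up.

// Hypothetical example values for the map-extract types defined above.
export const exampleRequest: MapExtractRequest = {
  countries: ['DE', 'FR'], // ISO 3166-1 alpha-2 codes, as documented on CountryCode
  maxzoom: 12, // optional cap on the extracted zoom levels
}

export const examplePreflight: MapExtractPreflight = {
  tiles: 1_250_000, // tile count the extract would copy
  bytes: 4_800_000_000, // estimated size on disk
  source: {
    url: 'https://example.org/planet.pmtiles', // hypothetical source archive
    date: '20260402',
    key: '20260402.pmtiles',
  },
}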

View File

@ -3,7 +3,9 @@ import { Systeminformation } from 'systeminformation'
export type GpuHealthStatus = {
status: 'ok' | 'passthrough_failed' | 'no_gpu' | 'ollama_not_installed'
hasNvidiaRuntime: boolean
hasRocmRuntime: boolean
ollamaGpuAccessible: boolean
gpuVendor?: 'nvidia' | 'amd'
}
export type SystemInformationResponse = {

View File

@ -16,6 +16,7 @@ export type ListRemoteZimFilesResponse = {
items: RemoteZimFileEntry[]
has_more: boolean
total_count: number
next_start: number
}
export type RawRemoteZimFileEntry = {

View File

@ -86,6 +86,21 @@ check_is_debian_based() {
echo -e "${GREEN}#${RESET} This script is running on a Debian-based system.\\n"
}
check_is_x86_64() {
local arch
arch="$(uname -m)"
if [[ "${arch}" != "x86_64" && "${arch}" != "amd64" ]]; then
echo -e "${YELLOW}#${RESET} WARNING: Detected architecture '${arch}'. NOMAD officially supports x86_64 only.\\n"
echo -e "${YELLOW}#${RESET} ARM64/aarch64 support is tracked in PR #419 and is not yet ready.\\n"
echo -e "${YELLOW}#${RESET} Continuing on an unsupported architecture will likely fail and may leave\\n"
echo -e "${YELLOW}#${RESET} partial Docker images and files behind that you'll need to clean up manually.\\n"
echo -e "${YELLOW}#${RESET} Continuing in 10 seconds... press Ctrl+C now to abort.\\n"
sleep 10
return
fi
echo -e "${GREEN}#${RESET} Architecture check passed (${arch}).\\n"
}
ensure_dependencies_installed() {
local missing_deps=()
@ -502,18 +517,35 @@ verify_gpu_setup() {
echo -e "${YELLOW}${RESET} Docker NVIDIA runtime not detected\\n"
fi
# Check for AMD GPU
# Check for AMD GPU — restrict to display controller classes to avoid false positives
# from AMD CPU host bridges, PCI bridges, and chipset devices.
local has_amd_gpu='false'
if command -v lspci &> /dev/null; then
if lspci 2>/dev/null | grep -iE "amd|radeon" &> /dev/null; then
echo -e "${YELLOW}${RESET} AMD GPU detected (ROCm support not currently available)\\n"
if lspci 2>/dev/null | grep -iE "VGA|3D controller|Display" | grep -iE "amd|radeon" &> /dev/null; then
has_amd_gpu='true'
echo -e "${GREEN}${RESET} AMD GPU detected — ROCm acceleration will be configured automatically when AI Assistant is installed.\\n"
fi
fi
# Write detected GPU type to a marker file the admin container can read. The admin
# container lacks lspci and AMD GPUs don't register a Docker runtime, so this is the
# only reliable way for the admin to know an AMD GPU is present at install time.
local gpu_marker_path="${NOMAD_DIR}/storage/.nomad-gpu-type"
if command -v nvidia-smi &> /dev/null; then
echo 'nvidia' | sudo tee "${gpu_marker_path}" > /dev/null 2>&1 || true
elif [[ "${has_amd_gpu}" == 'true' ]]; then
echo 'amd' | sudo tee "${gpu_marker_path}" > /dev/null 2>&1 || true
else
sudo rm -f "${gpu_marker_path}" 2>/dev/null || true
fi
echo -e "${YELLOW}===========================================${RESET}\\n"
# Summary
if command -v nvidia-smi &> /dev/null && docker info 2>/dev/null | grep -q "nvidia"; then
echo -e "${GREEN}#${RESET} GPU acceleration is properly configured! The AI Assistant will use your GPU.\\n"
elif [[ "${has_amd_gpu}" == 'true' ]]; then
echo -e "${GREEN}#${RESET} GPU acceleration will be enabled (AMD/ROCm) when AI Assistant is installed from the dashboard.\\n"
else
echo -e "${YELLOW}#${RESET} GPU acceleration not detected. The AI Assistant will run in CPU-only mode.\\n"
if command -v nvidia-smi &> /dev/null && ! docker info 2>/dev/null | grep -q "nvidia"; then
@ -539,6 +571,7 @@ success_message() {
# Pre-flight checks
check_is_debian_based
check_is_x86_64
check_is_bash
check_has_sudo
ensure_dependencies_installed

View File

@ -44,7 +44,9 @@ while true; do
# These are not real filesystem roots and report misleading sizes
[[ -f "/host${mountpoint}" ]] && continue
STATS=$(df -B1 "/host${mountpoint}" 2>/dev/null | awk 'NR==2{print $2,$3,$4,$5}')
# Use -P (POSIX) to force single-line output even when device names
# are long (e.g. NFS mounts), which otherwise wrap across two lines
STATS=$(df -P -B1 "/host${mountpoint}" 2>/dev/null | awk 'NR==2{print $2,$3,$4,$5}')
[[ -z "$STATS" ]] && continue
read -r size used avail pct <<< "$STATS"
@ -60,7 +62,7 @@ while true; do
# The disk-collector container always has /storage bind-mounted from the host,
# so df on /storage reflects the actual backing device and its capacity.
if [[ "$FIRST" -eq 1 ]] && mountpoint -q /storage 2>/dev/null; then
STATS=$(df -B1 /storage 2>/dev/null | awk 'NR==2{print $1,$2,$3,$4,$5}')
STATS=$(df -P -B1 /storage 2>/dev/null | awk 'NR==2{print $1,$2,$3,$4,$5}')
if [[ -n "$STATS" ]]; then
read -r dev size used avail pct <<< "$STATS"
pct="${pct/\%/}"

View File

@ -1,6 +1,6 @@
{
"name": "project-nomad",
"version": "1.31.0",
"version": "1.32.0-rc.1",
"description": "\"",
"main": "index.js",
"scripts": {