Compare commits

...

417 Commits
v1.0.0 ... main

Author SHA1 Message Date
Henry Estela
44ecf41ca6 Add model download to FAQ.md 2026-03-26 23:12:49 -07:00
cosmistack-bot
5c92c89813 docs(release): finalize v1.30.3 release notes [skip ci] 2026-03-25 23:40:34 +00:00
cosmistack-bot
f9e3773ec3 chore(release): 1.30.3 [skip ci] 2026-03-25 23:39:41 +00:00
cosmistack-bot
e5a7edca03 chore(release): 1.30.3-rc.2 [skip ci] 2026-03-25 16:30:35 -07:00
Jake Turner
bd015f4c56 fix(UI): improve version display in Settings sidebar (#547) 2026-03-25 16:30:35 -07:00
Jake Turner
0e60e246e1 ops: remove deprecated sidecar-updater files from install script (#546) 2026-03-25 16:30:35 -07:00
Jake Turner
c67653b87a fix(UI): use StyledButton in TierSelectionModal for consistency (#543) 2026-03-25 16:30:35 -07:00
cosmistack-bot
643eaea84b chore(release): 1.30.3-rc.1 [skip ci] 2026-03-25 16:30:35 -07:00
Jake Turner
150134a9fa docs: update release notes 2026-03-25 16:30:35 -07:00
Tom Boucher
6b558531be fix: surface actual error message when service installation fails
Backend returned { error: message } on 400 but frontend expected { message }.
catchInternal swallowed Axios errors and returned undefined, causing a
generic 'An internal error occurred' message instead of the real reason
(already installed, already in progress, not found).

- Fix 400 response shape to { success: false, message } in controller
- Replace catchInternal with direct error handling in installService,
  affectService, and forceReinstallService API methods
- Extract error.response.data.message from Axios errors so callers
  see the actual server message
2026-03-25 16:30:35 -07:00
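A minimal sketch of the direct error handling this commit describes, assuming an Axios-based client; the `{ success: false, message }` shape comes from the commit, while the endpoint path and surrounding types are illustrative:

```ts
import axios, { isAxiosError } from 'axios'

// Sketch: surface the server's real message instead of a generic fallback.
async function installService(name: string): Promise<{ success: boolean; message?: string }> {
  try {
    const res = await axios.post(`/api/services/${name}/install`) // path is an assumption
    return res.data
  } catch (err) {
    if (isAxiosError(err) && err.response?.data?.message) {
      // Extract the actual server message (e.g., "already installed")
      // rather than swallowing the error and returning undefined.
      return { success: false, message: err.response.data.message }
    }
    throw err
  }
}
```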
Bortlesboat
4642dee6ce fix: benchmark scores clamped to 0% for below-average hardware
The log2 normalization formula `50 * (1 + log2(ratio))` produces negative
values (clamped to 0) whenever the measured value is less than half the
reference. For example, a CPU scoring 1993 events/sec against a 5000
reference gives ratio=0.4, log2(0.4)=-1.32, score=-16 -> 0%.

Fix by dividing log2 by 3 to widen the usable range. This preserves the
50% score at the reference value while allowing below-average hardware
to receive proportional non-zero scores (e.g., 28% for the CPU above).

Also adds debug logging for CPU sysbench output parsing to aid future
diagnosis of parsing issues.

Fixes #415
2026-03-25 16:30:35 -07:00
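The formula change, sketched in TypeScript (function and variable names are illustrative):

```ts
function benchmarkScore(measured: number, reference: number): number {
  const ratio = measured / reference
  // Old: 50 * (1 + Math.log2(ratio)), which goes negative (clamped to 0) once ratio < 0.5.
  // New: dividing log2 by 3 keeps 50% at the reference and widens the usable range.
  const raw = 50 * (1 + Math.log2(ratio) / 3)
  return Math.max(0, Math.min(100, raw))
}

benchmarkScore(5000, 5000) // 50, unchanged at the reference value
benchmarkScore(1993, 5000) // ≈ 28, instead of 0 under the old formula
```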
Chris Sherwood
78c0b1d24d fix(ai): surface model download errors and prevent silent retry loops
Model downloads that fail (e.g., when Ollama is too old for a model)
were silently retrying 40 times with no UI feedback. Now errors are
broadcast via SSE and shown in the Active Model Downloads section.
Version mismatch errors use UnrecoverableError to fail immediately
instead of retrying. Stale failed jobs are cleared on retry so users
aren't permanently blocked.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 16:30:35 -07:00
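BullMQ's UnrecoverableError is the mechanism named in the commit; here is a hedged sketch of failing fast on a version mismatch (the wrapper and the error-message check are assumptions, not the project's actual code):

```ts
import { UnrecoverableError } from 'bullmq'

// Hypothetical wrapper around the Ollama pull API, for illustration only.
declare function ollamaPull(model: string): Promise<void>

async function pullModel(model: string): Promise<void> {
  try {
    await ollamaPull(model)
  } catch (err) {
    if (err instanceof Error && /requires a newer version/i.test(err.message)) {
      // BullMQ fails the job immediately, skipping the remaining retries.
      throw new UnrecoverableError(`Ollama is too old for ${model}`)
    }
    throw err // transient errors keep the normal retry/backoff behavior
  }
}
```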
Jake Turner
0226e651c7 fix: bump default ollama and cyberchef versions 2026-03-25 16:30:35 -07:00
LuisMIguelFurlanettoSousa
7ab5e65826 fix(zim): add missing deleteZimFile method to API client
The Content Manager called api.deleteZimFile() to delete ZIM files,
but that method was never implemented in the API class, causing
"TypeError: deleteZimFile is not a function".

The backend (DELETE /api/zim/:filename → ZimController.delete) already
existed and worked correctly; only the bridging method in the frontend
client was missing.

Closes #372
2026-03-25 16:30:35 -07:00
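What the missing bridge method might look like, assuming an Axios-based API class (the route is quoted from the commit; the class shape is an assumption):

```ts
import axios from 'axios'

class API {
  // Bridges the Content Manager to DELETE /api/zim/:filename (ZimController.delete).
  async deleteZimFile(filename: string): Promise<void> {
    await axios.delete(`/api/zim/${encodeURIComponent(filename)}`)
  }
}
```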
Chris Sherwood
b7ed8b6694 docs: add installation guide link to README
Link to the full step-by-step install walkthrough on projectnomad.us/install,
placed below the Quick Install command for users who need Ubuntu setup help.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 16:30:35 -07:00
builder555
4443799cc9 fix(Collections): update ZIM files to latest versions (#332)
* fix: update data sources to newer versions
* fix: bump spec version for wikipedia
2026-03-25 16:30:35 -07:00
Divyank Singh
4219e753da build: increase mysql healthcheck retries to avoid race condition on lower-end hardware (#480) 2026-03-25 16:30:35 -07:00
Chris Sherwood
f00bfff77c fix(install): prevent MySQL credential mismatch on reinstall
When the install script runs a second time (e.g., after a failed first
attempt), it generates new random database passwords and writes them to
compose.yml. However, MySQL only initializes credentials on first startup
when its data directory is empty. If /opt/project-nomad/mysql/ persists
from the previous attempt, MySQL skips initialization and keeps the old
passwords, causing "Access denied" errors for nomad_admin.

Fix: remove the MySQL data directory before generating new credentials
so MySQL reinitializes with the correct passwords.

Closes #404

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 16:30:35 -07:00
chriscrosstalk
5e93f2661b fix: correct Rogue Support URL on Support the Project page (#472)
roguesupport.com changed to rogue.support (the actual domain).
Updates href and display text in two places.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 16:30:35 -07:00
Brenex
9a8378d63a build: fix grep command in install script for NVIDIA runtime detection (#526) 2026-03-25 14:36:19 -07:00
Salman Chishti
982dceb949 ci: upgrade checkout action version (#361) 2026-03-25 14:31:53 -07:00
Jake Turner
6a1d0e83f9 ci: bump checkout action version on build-sidecar and validate-collections 2026-03-25 21:29:51 +00:00
Jake Turner
edcfd937e2 ci: add build check for PRs 2026-03-25 20:41:53 +00:00
Jake Turner
f9062616b8 docs: add FAQ 2026-03-25 06:04:17 +00:00
Jake Turner
efe6af9b24 ci: add collection URLs validation check 2026-03-24 05:31:53 +00:00
Jake Turner
8b96793c4d build: add latest initial zim file 2026-03-24 02:08:18 +00:00
cosmistack-bot
735b9e8ae6 chore(release): 1.30.2 [skip ci] 2026-03-23 19:47:10 +00:00
Chris Sherwood
c409896718 fix(collections): update Full Wikipedia URL to current Kiwix mirror
The 2024-01 all_maxi ZIM was removed from Kiwix mirrors, causing
silent 404 failures for users selecting "Complete Wikipedia (Full)".
Updated to 2026-02 release (115 GB).

Closes #216

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 12:46:26 -07:00
Philip L. Welch
f004c002a7 docs: fix typo in README url 2026-03-20 17:15:56 -07:00
cosmistack-bot
d501d2dc7e chore(release): 1.30.1 [skip ci] 2026-03-20 19:29:55 +00:00
Jake Turner
8e84ece2ef fix(ui): ref issue in benchmark page 2026-03-20 19:29:13 +00:00
cosmistack-bot
a4de8d05f7 docs(release): finalize v1.30.0 release notes [skip ci] 2026-03-20 18:48:42 +00:00
cosmistack-bot
28df8e6b23 chore(release): 1.30.0 [skip ci] 2026-03-20 18:46:55 +00:00
Jake Turner
baeb96b863 fix(ui): support proper size override of LoadingSpinner 2026-03-20 11:46:10 -07:00
Jake Turner
d645fc161b fix(ui): reduce SSE reconnect churn and polling overhead on navigation 2026-03-20 11:46:10 -07:00
Jake Turner
b8cf1b6127 fix(disk): correct storage display by fixing device matching and dedup mount entries 2026-03-20 11:46:10 -07:00
cosmistack-bot
f5a181b09f chore(release): 1.30.0-rc.2 [skip ci] 2026-03-20 11:46:10 -07:00
Jake Turner
4784cd6e43 docs: update release notes 2026-03-20 11:46:10 -07:00
Jake Turner
467299b231 docs: update port mapping guidance in compose file 2026-03-20 11:46:10 -07:00
Jake Turner
5dfa6d7810 docs: update release notes 2026-03-20 11:46:10 -07:00
Chris Sherwood
571f6bb5a2 fix(GPU): persist GPU type to KV store for reliable passthrough
GPU detection results were only applied at container creation time and
never persisted. If live detection failed transiently (Docker daemon
hiccup, runtime temporarily unavailable), Ollama would silently fall
back to CPU-only mode with no way to recover short of force-reinstall.

Now _detectGPUType() persists successful detections to the KV store
(gpu.type = 'nvidia' | 'amd') and uses the saved value as a fallback
when live detection returns nothing. This ensures GPU config survives
across container recreations regardless of transient detection failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
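A sketch of the persist-and-fall-back pattern, assuming a simple string KV interface (the gpu.type key and its values are from the commit; everything else is illustrative):

```ts
type GPUType = 'nvidia' | 'amd' | null

interface KVStore {
  get(key: string): Promise<string | null>
  set(key: string, value: string): Promise<void>
}

async function detectGPUType(detectLive: () => Promise<GPUType>, kv: KVStore): Promise<GPUType> {
  const live = await detectLive()
  if (live) {
    await kv.set('gpu.type', live) // persist successful detections
    return live
  }
  // Live detection failed transiently: fall back to the last known value.
  return (await kv.get('gpu.type')) as GPUType
}
```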
Chris Sherwood
023e3f30af fix(downloads): allow users to dismiss failed downloads
Failed download jobs persist in BullMQ forever with no way to clear
them, leaving stale error notifications in Content Explorer and Easy
Setup. Adds a dismiss button (X) on failed download cards that removes
the job from the queue via a new DELETE endpoint.

- Backend: DELETE /api/downloads/jobs/:jobId endpoint
- Frontend: X button on failed download cards with immediate refresh

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
d6c6cb66fa fix(docs): remove internal security audit from public documentation
The security audit report was an internal pre-launch document that
shouldn't be exposed in the user-facing documentation sidebar.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
b8d36da9e1 fix(UI): hide 'Start here!' badge after Easy Setup is completed
The KV store returns ui.hasVisitedEasySetup as boolean true, but the
comparison checked against string 'true'. Since true !== 'true', the
badge was always shown even after completing Easy Setup.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
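The bug in miniature, as a sketch:

```ts
const hasVisited: unknown = true    // KV store returns boolean true for ui.hasVisitedEasySetup
const buggy = hasVisited === 'true' // false: true !== 'true', so the badge never hid
const fixed = hasVisited === true   // true: badge hidden once Easy Setup completes
```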
Chris Sherwood
6b41ebbd45 fix(UI): clear stale update banner after successful update
After an update completes, the page reloads but the KV store still has
updateAvailable=true from the pre-update check. This causes the banner
to show "Current 1.30.0-rc.1 → New 1.30.0-rc.1" until the user
manually clicks Check Again.

Now triggers a version re-check before the post-update reload so the
KV store is updated and the banner reflects the correct state.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
cosmistack-bot
85492454a5 chore(release): 1.30.0-rc.1 [skip ci] 2026-03-20 11:46:10 -07:00
Jake Turner
77e83085d6 docs: updated release notes with latest changes 2026-03-20 11:46:10 -07:00
Jake Turner
0ec5334e0d docs: additional comments in management_compose about storage config 2026-03-20 11:46:10 -07:00
Jake Turner
6cb2a0d944 ops: added additional warning about possible overwrites of existing custom installs 2026-03-20 11:46:10 -07:00
Jake Turner
6934e8b4d1 ops: added a check for docker-compose version in Nomad utility scripts 2026-03-20 11:46:10 -07:00
Jake Turner
bb0c4d19d8 docs: add note about Dozzle optionality 2026-03-20 11:46:10 -07:00
Jake Turner
1c179efde2 docs: improve docs for advanced install 2026-03-20 11:46:10 -07:00
Jake Turner
5dc48477f6 fix(Docker): ensure fresh GPU detection when Ollama ctr updated 2026-03-20 11:46:10 -07:00
Chris Sherwood
b0b8f07661 fix: improve download reliability with stall detection, failure visibility, and Wikipedia status tracking
Three bugs caused downloads to hang, disappear, or leave stuck spinners:
1. Wikipedia downloads that failed never updated the DB status from 'downloading',
   leaving the spinner stuck forever. Now the worker's failed handler marks them as failed.
2. No stall detection on streaming downloads - if data stopped flowing mid-download,
   the job hung indefinitely. Added a 5-minute stall timer that triggers retry.
3. Failed jobs were invisible to users since only waiting/active/delayed states were
   queried. Now failed jobs appear with error indicators in the download list.

Closes #364, closes #216

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
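A sketch of the stall timer from point 2, assuming a Node readable stream for the download (the destroy-to-retry wiring is an assumption about the worker):

```ts
import type { Readable } from 'node:stream'

const STALL_MS = 5 * 60 * 1000 // 5 minutes with no data counts as a stall

function watchForStall(stream: Readable): void {
  let timer = setTimeout(onStall, STALL_MS)
  function onStall(): void {
    // Destroying the stream surfaces an error the job can retry on.
    stream.destroy(new Error('download stalled: no data for 5 minutes'))
  }
  stream.on('data', () => {
    clearTimeout(timer) // data arrived, reset the stall window
    timer = setTimeout(onStall, STALL_MS)
  })
  stream.on('close', () => clearTimeout(timer))
}
```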
Jake Turner
5e290119ab fix(maps): remove DC from South Atlantic until generated 2026-03-20 11:46:10 -07:00
Chris Sherwood
ab5a7cb178 fix(maps): split combined Indiana/Michigan entry into separate states
The East North Central region had a single "indianamichigan" entry pointing
to a pmtiles file that doesn't exist. Indiana and Michigan are separate
files in the maps repo.

Closes #350

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
5b990b7323 fix(collections): update stale React devdocs ZIM URL
Kiwix skipped the January 2026 build of devdocs_en_react — the
2026-01 URL returns 404. Updated to 2026-02 which exists.

Closes #269

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Jake Turner
92ce7400e7 feat: make Nomad fully composable 2026-03-20 11:46:10 -07:00
Andrew Barnes
d53ccd2dc8 fix: prefer real block devices over tmpfs for storage display
The disk-collector could produce an empty fsSize array when
/host/proc/1/mounts is unreadable, causing the admin UI to fall back
to systeminformation's fsSize which includes tmpfs mounts. This led to
the storage display showing ~1.5 GB (tmpfs /run) instead of the actual
storage capacity.

Two changes:
- disk-collector: fall back to df on /storage when host mount table
  yields no real filesystems, since /storage is always bind-mounted
  from the host and reflects the actual backing device.
- easy-setup UI: when falling back to systeminformation fsSize, filter
  for /dev/ block devices and prefer the largest one instead of blindly
  taking the first entry.

Fixes #373
2026-03-20 11:46:10 -07:00
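The UI-side fallback from the second bullet, sketched against the fields systeminformation's fsSize() returns (the selection logic is as described; the entry type is trimmed for illustration):

```ts
interface FsEntry {
  fs: string    // e.g. '/dev/sda1' or 'tmpfs'
  size: number  // bytes
  mount: string
}

function pickStorageDevice(fsSize: FsEntry[]): FsEntry | undefined {
  return fsSize
    .filter((e) => e.fs.startsWith('/dev/')) // drop tmpfs, overlay, etc.
    .sort((a, b) => b.size - a.size)[0]      // prefer the largest block device
}
```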
Jake Turner
c0b1980bbc build: change compose to use prebuilt sidecar-updater image 2026-03-20 11:46:10 -07:00
Jake Turner
9b74c71f29 fix(UI): minor styling fixes for Night Ops 2026-03-20 11:46:10 -07:00
orbisai0security
9802dd7c70 fix: upgrade systeminformation to 5.31.0 (CVE-2026-26318)
systeminformation: Arbitrary code execution via unsanitized `locate` output
Resolves CVE-2026-26318
2026-03-20 11:46:10 -07:00
dependabot[bot]
138ad84286 build(deps): bump fast-xml-parser from 5.3.8 to 5.5.6 in /admin
Bumps [fast-xml-parser](https://github.com/NaturalIntelligence/fast-xml-parser) from 5.3.8 to 5.5.6.
- [Release notes](https://github.com/NaturalIntelligence/fast-xml-parser/releases)
- [Changelog](https://github.com/NaturalIntelligence/fast-xml-parser/blob/master/CHANGELOG.md)
- [Commits](https://github.com/NaturalIntelligence/fast-xml-parser/compare/v5.3.8...v5.5.6)

---
updated-dependencies:
- dependency-name: fast-xml-parser
  dependency-version: 5.5.6
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
34076b107b fix: prevent embedding retry storm when Ollama is not installed
When Ollama isn't installed, every ZIM download dispatches embedding jobs
that fail and retry 30x with 60s backoff. With many ZIM files downloading
in parallel, this exhausts Redis connections with EPIPE/ECONNRESET errors.

Two changes:
1. Don't dispatch embedding jobs when Ollama isn't installed (belt)
2. Use BullMQ UnrecoverableError for "not installed" so jobs fail
   immediately without retrying (suspenders)

Closes #351

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
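Belt and suspenders, sketched with BullMQ (the queue, job name, payload, and isOllamaInstalled helper are illustrative assumptions):

```ts
import { Queue, UnrecoverableError } from 'bullmq'

declare function isOllamaInstalled(): Promise<boolean> // hypothetical helper
declare const embeddingQueue: Queue

async function dispatchEmbeddingJob(zimPath: string): Promise<void> {
  // Belt: don't enqueue at all when Ollama isn't installed.
  if (!(await isOllamaInstalled())) return
  await embeddingQueue.add('embed-zim', { zimPath })
}

async function processEmbeddingJob(zimPath: string): Promise<void> {
  // Suspenders: if a job runs anyway, fail immediately instead of retrying 30x.
  if (!(await isOllamaInstalled())) {
    throw new UnrecoverableError('Ollama is not installed')
  }
  // ...embedding work...
}
```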
dependabot[bot]
5e0fba29ca build(deps): bump undici in /admin
Bumps [undici](https://github.com/nodejs/undici). These dependencies needed to be updated together.

Updates `undici` from 6.23.0 to 6.24.1
- [Release notes](https://github.com/nodejs/undici/releases)
- [Commits](https://github.com/nodejs/undici/compare/v6.23.0...v6.24.1)

Updates `undici` from 7.20.0 to 7.24.3
- [Release notes](https://github.com/nodejs/undici/releases)
- [Commits](https://github.com/nodejs/undici/compare/v7.20.0...v7.24.3)

---
updated-dependencies:
- dependency-name: undici
  dependency-version: 6.24.1
  dependency-type: indirect
- dependency-name: undici
  dependency-version: 7.24.3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-20 11:46:10 -07:00
dependabot[bot]
06e1c4f4f2 build(deps): bump tar from 7.5.10 to 7.5.11 in /admin
Bumps [tar](https://github.com/isaacs/node-tar) from 7.5.10 to 7.5.11.
- [Release notes](https://github.com/isaacs/node-tar/releases)
- [Changelog](https://github.com/isaacs/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/isaacs/node-tar/compare/v7.5.10...v7.5.11)

---
updated-dependencies:
- dependency-name: tar
  dependency-version: 7.5.11
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
fbc48dd115 fix: default LOG_LEVEL to info in production
Debug logging in production is unnecessarily noisy. Users who need
debug output can still set LOG_LEVEL=debug in their compose.yml.

Closes #285

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
e4fde22dd9 feat(UI): add Debug Info modal for bug reporting
Add a "Debug Info" link to the footer and settings sidebar that opens a
modal with non-sensitive system information (version, OS, hardware, GPU,
installed services, internet status, update availability). Users can copy
the formatted text and paste it into GitHub issues.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
826c819b4a docs: update hardware price ranges to reflect 2026 market
Updated hardware guide price references from $200–$800+ to $150–$1,000+
based on community leaderboard data (41 submissions) and current market
pricing. DDR5 RAM and GPU prices are significantly inflated — budget DDR4
refurbs start at $150, recommended AMD APU builds run $500–$800, and
dedicated GPU builds start at $1,000+. Also noted AMD Ryzen 7 with
Radeon graphics as the community sweet spot in the FAQ.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
fe0c2afe60 fix(security): remove MySQL and Redis port exposure to host
MySQL (3306) and Redis (6379) were published to all host interfaces
despite only being accessed by the admin container via Docker's internal
network. Redis has no authentication, so anyone on the LAN could connect.

Removes the port mappings — containers still communicate internally via
Docker service names.

Closes #279

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Jake Turner
9220b4b83d fix(maps): respect request protocol for reverse proxy HTTPS support 2026-03-20 11:46:10 -07:00
Chris Sherwood
6120e257e8 fix(security): also disable Dozzle container actions
Dozzle runs on port 9999 with no authentication. DOZZLE_ENABLE_ACTIONS
allows anyone on the LAN to stop/restart containers. NOMAD already
handles container management through its own admin UI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
bd642ac1e8 fix(security): disable Dozzle web shell access
Dozzle's DOZZLE_ENABLE_SHELL=true on an unauthenticated port allows
anyone on the LAN to open a shell into containers, including nomad_admin
which has the Docker socket mounted — creating a path to host root.

Disables shell access while keeping log viewing and container actions
(restart/stop) enabled.

Closes #278

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
6a737ed83f feat(UI): add Support the Project settings page
Adds a new settings page with Ko-fi donation link, Rogue Support
banner, and community contribution options (GitHub, Discord).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Chris Sherwood
b1edef27e8 feat(UI): add Night Ops dark mode with theme toggle
Add a warm charcoal dark mode ("Night Ops") using CSS variable swapping
under [data-theme="dark"]. All 23 desert palette variables are overridden
with dark-mode counterparts, and ~313 generic Tailwind classes (bg-white,
text-gray-*, border-gray-*) are replaced with semantic tokens.

Infrastructure:
- CSS variable overrides in app.css for both themes
- ThemeProvider + useTheme hook (localStorage + KV store sync)
- ThemeToggle component (moon/sun icons, "Night Ops"/"Day Ops" labels)
- FOUC prevention script in inertia_layout.edge
- Toggle placed in StyledSidebar and Footer for access on every page

Color replacements across 50 files:
- bg-white → bg-surface-primary
- bg-gray-50/100 → bg-surface-secondary
- text-gray-900/800 → text-text-primary
- text-gray-600/500 → text-text-secondary/text-text-muted
- border-gray-200/300 → border-border-subtle/border-border-default
- text-desert-white → text-white (fixes invisible text on colored bg)
- Button hover/active states use dedicated btn-green-hover/active vars

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:46:10 -07:00
Jake Turner
ed0b0f76ec docs: update feature request and issues config 2026-03-19 23:15:24 +00:00
Jake Turner
b40d8190af ci: add sidecar-updater build action 2026-03-19 23:08:13 +00:00
Jake Turner
8bb8b414f8 chore: add additional warnings to migrate-disk-collector 2026-03-15 03:19:52 +00:00
Jake Turner
fb05ab53e2 build: fix collect-disk-info output 2026-03-14 19:54:51 -07:00
Jake Turner
a4e6a9bd9f build: compose and install script updates for disk-collector sidecar 2026-03-14 19:54:51 -07:00
Jake Turner
5113cc3eed build: disk-collector sidecar and associated workflows 2026-03-15 00:00:33 +00:00
cosmistack-bot
86575bfc73 chore(release): 1.29.1 [skip ci] 2026-03-13 20:46:59 +00:00
Chris Sherwood
baf16ae824 fix(security): rotate benchmark HMAC signing secret
Rotate the HMAC secret used for signing benchmark submissions to the
community leaderboard. The previous secret was compromised (hardcoded
in open-source code and used to submit a fake leaderboard entry).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 13:46:17 -07:00
Jake Turner
db22b0c5f6 chore: add Github issue templates 2026-03-13 07:13:42 +00:00
Jake Turner
5d97d471d0 docs: add CONTRIBUTING guidelines 2026-03-12 22:48:53 +00:00
Jake Turner
84aa125c0f docs: add Contributor Covenant Code of Conduct
Added Contributor Covenant Code of Conduct to outline community standards and enforcement guidelines.
2026-03-11 17:07:41 -07:00
cosmistack-bot
0f8a391e39 docs(release): finalize v1.29.0 release notes [skip ci] 2026-03-11 21:09:53 +00:00
cosmistack-bot
3491dda753 chore(release): 1.29.0 [skip ci] 2026-03-11 21:09:31 +00:00
Jake Turner
25f4ed37e6 chore: remove alpha banner from README 2026-03-11 14:08:09 -07:00
cosmistack-bot
62e33aeff5 chore(release): 1.29.0-rc.5 [skip ci] 2026-03-11 14:08:09 -07:00
Jake Turner
e7ab2b197c build: add OCI image labels to Dockerfile 2026-03-11 14:08:09 -07:00
Chris Sherwood
63e1f56aa0 fix(UI): replace WikiHow reference with DIY repair guides
WikiHow ZIM files were deprecated by Kiwix after WikiHow requested
removal to protect their content from LLM training harvesting.
Replace with "DIY repair guides and how-to content" which accurately
reflects the iFixit, Stack Exchange, and other how-to content
available in NOMAD's curated collections.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 14:08:09 -07:00
Chris Sherwood
9422c76bc6 feat(collections): add Project Gutenberg ZIMs and fix broken education entry
Add Project Gutenberg books from the Library of Congress Classification
to relevant curated collection categories:

- Agriculture Comprehensive: Gutenberg Agriculture (LCC-S, 4.3 GB) —
  classic texts on farming, animal husbandry, and food preservation
- Survival Comprehensive: Gutenberg Military Science (LCC-U, 1.2 GB) —
  classic military strategy, tactics, and field manuals

Remove broken gutenberg_en_education entry from Education Standard tier.
The URL returned 404 — Kiwix only publishes LCC-coded Gutenberg ZIMs,
not topic-named ones. The pre-1928 educational philosophy texts were
also not practical enough for NOMAD's audience.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 14:08:09 -07:00
Jake Turner
a77edcaac3 ci: tag with and without v prefix 2026-03-11 14:08:09 -07:00
cosmistack-bot
99561b420f chore(release): 1.29.0-rc.4 [skip ci] 2026-03-11 14:08:09 -07:00
Jake Turner
96e5027055 feat(AI Assistant): performance improvements and smarter RAG context usage 2026-03-11 14:08:09 -07:00
Jake Turner
460756f581 feat(AI Assistant): improved state management and performance 2026-03-11 14:08:09 -07:00
Jake Turner
6f0fae0033 feat(AI Assistant): remember last model used 2026-03-11 14:08:09 -07:00
cosmistack-bot
41c64fb50b chore(release): 1.29.0-rc.3 [skip ci] 2026-03-11 14:08:09 -07:00
Jake Turner
d30c1a1407 fix(System): ensure nomad container image tag resolves correctly 2026-03-11 14:08:09 -07:00
cosmistack-bot
9c74339893 chore(release): 1.29.0-rc.2 [skip ci] 2026-03-11 14:08:09 -07:00
Jake Turner
be25408fe7 fix(Settings): hide AI Assistant from navigation until installed 2026-03-11 14:08:09 -07:00
Chris Sherwood
5d3c659d05 fix(security): narrow SSRF scope to allow RFC1918 LAN addresses
NOMAD is a LAN appliance — blocking RFC1918 private ranges (10.x,
172.16-31.x, 192.168.x) would prevent users from downloading content
from local network mirrors. Narrowed to only block loopback (localhost,
127.x, 0.0.0.0, ::1) and link-local (169.254.x, fe80::) addresses.
Restored require_tld: false for LAN hostnames without TLDs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 14:08:09 -07:00
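The narrowed check might look like this sketch (the assertNotPrivateUrl name comes from the related audit commit that follows; this implementation is illustrative and not exhaustive):

```ts
function assertNotPrivateUrl(rawUrl: string): void {
  const { hostname } = new URL(rawUrl) // strips brackets from IPv6 literals
  const blocked =
    hostname === 'localhost' ||
    hostname === '0.0.0.0' ||
    hostname === '::1' ||
    /^127\./.test(hostname) ||      // IPv4 loopback
    /^169\.254\./.test(hostname) || // IPv4 link-local
    /^fe80:/i.test(hostname)        // IPv6 link-local
  if (blocked) {
    throw new Error(`Refusing to fetch loopback/link-local URL: ${rawUrl}`)
  }
  // RFC1918 ranges (10.x, 172.16-31.x, 192.168.x) are deliberately allowed.
}
```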
Chris Sherwood
75106a8f61 fix(security): path traversal and SSRF protections from pre-launch audit
Fixes 4 high-severity findings from a comprehensive security audit:

1. Path traversal on ZIM file delete — resolve()+startsWith() containment
2. Path traversal on Map file delete — same pattern
3. Path traversal on docs read — same pattern (already used in rag_service)
4. SSRF on download endpoints — block private/internal IPs, require TLD

Also adds assertNotPrivateUrl() to content update endpoints.

Full audit report attached as admin/docs/security-audit-v1.md.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 14:08:09 -07:00
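The resolve()+startsWith() containment pattern from findings 1-3, as a sketch (the base directory and error message are illustrative):

```ts
import { resolve, sep } from 'node:path'

function assertContained(baseDir: string, userPath: string): string {
  const base = resolve(baseDir)
  const target = resolve(base, userPath)
  // Require the resolved target to stay inside base (or be base itself).
  if (target !== base && !target.startsWith(base + sep)) {
    throw new Error('path traversal attempt blocked')
  }
  return target
}

assertContained('/storage/zim', 'wikipedia.zim')       // ok
// assertContained('/storage/zim', '../../etc/passwd') // throws
```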
Chris Sherwood
b9dd32be25 docs: update documentation for recent features and hardware page
- Add hardware guide link (projectnomad.us/hardware) to README, FAQ, and About page
- Add Apache 2.0 license section to README and About page
- Add Early Access Channel FAQ and Getting Started mention
- Add GPU passthrough warning troubleshooting entry to FAQ
- Add Knowledge Base document deletion to FAQ and Getting Started

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 14:08:09 -07:00
Jake Turner
58b106f388 feat: support for updating services 2026-03-11 14:08:09 -07:00
cosmistack-bot
7db8568e19 chore(release): 1.29.0-rc.1 [skip ci] 2026-03-11 14:08:09 -07:00
dependabot[bot]
20a313ce08 build(deps): bump tar from 7.5.9 to 7.5.10 in /admin
Bumps [tar](https://github.com/isaacs/node-tar) from 7.5.9 to 7.5.10.
- [Release notes](https://github.com/isaacs/node-tar/releases)
- [Changelog](https://github.com/isaacs/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/isaacs/node-tar/compare/v7.5.9...v7.5.10)

---
updated-dependencies:
- dependency-name: tar
  dependency-version: 7.5.10
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-11 14:08:09 -07:00
Chris Sherwood
650ae407f3 feat(GPU): warn when GPU passthrough not working and offer one-click fix
Ollama can silently run on CPU even when the host has an NVIDIA GPU,
resulting in ~3 tok/s instead of ~167 tok/s. This happens when Ollama
was installed before the GPU toolkit, or when the container was
recreated without proper DeviceRequests. Users had zero indication.

Adds a GPU health check to the system info API response that detects
when the host has an NVIDIA runtime but nvidia-smi fails inside the
Ollama container. Shows a warning banner on the System Information
and AI Settings pages with a one-click "Reinstall AI Assistant"
button that force-reinstalls Ollama with GPU passthrough.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 14:08:09 -07:00
Jake Turner
db69428193 fix(AI): allow force refresh of models list 2026-03-11 14:08:09 -07:00
Jake Turner
bc016e6c60
ci: configure dependabot to target rc branch 2026-03-11 20:35:52 +00:00
cosmistack-bot
45a30c0188 chore(release): 1.28.1 [skip ci] 2026-03-09 05:45:20 +00:00
Jake Turner
0e94d5daa4
fix: container update pattern in run_updater_fixes 2026-03-05 04:32:09 +00:00
Jake Turner
744504dd1e
fix: typo in run_updater_fixes 2026-03-05 04:18:47 +00:00
cosmistack-bot
e1c808f90d docs(release): finalize v1.28.0 release notes [skip ci] 2026-03-05 04:08:18 +00:00
cosmistack-bot
c1395794d4 chore(release): 1.28.0 [skip ci] 2026-03-05 04:07:56 +00:00
Jake Turner
a105ac1a83
fix: update channel flexibility 2026-03-05 04:06:56 +00:00
cosmistack-bot
bc7f84c123 chore(release): 1.28.0-rc.1 [skip ci] 2026-03-04 20:05:14 -08:00
Jake Turner
dfa896e86b feat(RAG): allow deletion of files from KB 2026-03-04 20:05:14 -08:00
Jake Turner
99b96c3df7 feat(RAG): display embedding queue and improve progress tracking 2026-03-04 20:05:14 -08:00
dependabot[bot]
80ae0aacf8 build(deps-dev): bump minimatch from 3.1.2 to 3.1.5 in /admin
Bumps [minimatch](https://github.com/isaacs/minimatch) from 3.1.2 to 3.1.5.
- [Changelog](https://github.com/isaacs/minimatch/blob/main/changelog.md)
- [Commits](https://github.com/isaacs/minimatch/compare/v3.1.2...v3.1.5)

---
updated-dependencies:
- dependency-name: minimatch
  dependency-version: 3.1.5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-04 20:05:14 -08:00
dependabot[bot]
d9d3d2e068 build(deps): bump fast-xml-parser from 5.3.6 to 5.3.8 in /admin
Bumps [fast-xml-parser](https://github.com/NaturalIntelligence/fast-xml-parser) from 5.3.6 to 5.3.8.
- [Release notes](https://github.com/NaturalIntelligence/fast-xml-parser/releases)
- [Changelog](https://github.com/NaturalIntelligence/fast-xml-parser/blob/master/CHANGELOG.md)
- [Commits](https://github.com/NaturalIntelligence/fast-xml-parser/compare/v5.3.6...v5.3.8)

---
updated-dependencies:
- dependency-name: fast-xml-parser
  dependency-version: 5.3.8
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-04 20:05:14 -08:00
dependabot[bot]
56b0d69421 build(deps): bump rollup from 4.57.1 to 4.59.0 in /admin
Bumps [rollup](https://github.com/rollup/rollup) from 4.57.1 to 4.59.0.
- [Release notes](https://github.com/rollup/rollup/releases)
- [Changelog](https://github.com/rollup/rollup/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rollup/rollup/compare/v4.57.1...v4.59.0)

---
updated-dependencies:
- dependency-name: rollup
  dependency-version: 4.59.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-04 20:05:14 -08:00
Chris Sherwood
782985bac0 fix(legal): update Legal Notices to Apache 2.0 license and add Qdrant attribution
Replace MIT license text with Apache 2.0 to match the repo LICENSE file,
update copyright to 2024-2026, and add Qdrant to third-party attribution.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 20:05:14 -08:00
Jake Turner
96beab7e69 feat(AI Assistant): custom name option for AI Assistant 2026-03-04 20:05:14 -08:00
Chris Sherwood
b806cefe3a chore: add Apache 2.0 license
The repo currently has no license file, which means the code is technically
"all rights reserved" by default. Adding Apache 2.0 to formalize the project
as open source with patent protection, while remaining permissive enough for
institutional adoption (schools, NGOs, government agencies).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 20:05:14 -08:00
Jake Turner
e2b447e142
build: fix wait-for-it url and update to Apache 2 license 2026-03-04 05:09:08 +00:00
cosmistack-bot
639b026e6f docs(release): finalize v1.27.0 release notes [skip ci] 2026-03-04 04:54:55 +00:00
cosmistack-bot
617dc111c2 chore(release): 1.27.0 [skip ci] 2026-03-04 04:54:33 +00:00
Jake Turner
d4a50f3e9c docs: update release notes 2026-03-03 20:51:38 -08:00
Jake Turner
efa57ec010 feat: early access release channel 2026-03-03 20:51:38 -08:00
Jake Turner
6817e2e47e fix: improve type-safety for KVStore values 2026-03-03 20:51:38 -08:00
cosmistack-bot
e12e7c1696 chore(release): 1.27.0-rc.1 [skip ci] 2026-03-03 20:51:38 -08:00
Jake Turner
fbfaf5fdae docs: update release notes 2026-03-03 20:51:38 -08:00
Jake Turner
00bd864831 fix(AI): improved perf via rewrite and streaming logic 2026-03-03 20:51:38 -08:00
Jake Turner
41eb30d84d ops: support RC versions 2026-03-03 20:51:38 -08:00
Jake Turner
6874a2824f feat(Models): paginate available models endpoint 2026-03-03 20:51:38 -08:00
Jake Turner
a3f10dd158 fix: update default branch name 2026-03-01 16:08:46 -08:00
cosmistack-bot
76fcbe46fa chore(release): 1.26.1 [skip ci] 2026-02-19 05:43:13 +00:00
Jake Turner
765207f956 fix(AI): type error in fallback models 2026-02-18 21:42:36 -08:00
cosmistack-bot
7a3c4bfbba docs(release): finalize v1.26.0 release notes [skip ci] 2026-02-19 05:25:28 +00:00
cosmistack-bot
6af9b46e4e chore(release): 1.26.0 [skip ci] 2026-02-19 05:24:42 +00:00
dependabot[bot]
6cb1cfe727 build(deps): bump systeminformation from 5.30.7 to 5.30.8 in /admin
Bumps [systeminformation](https://github.com/sebhildebrandt/systeminformation) from 5.30.7 to 5.30.8.
- [Release notes](https://github.com/sebhildebrandt/systeminformation/releases)
- [Changelog](https://github.com/sebhildebrandt/systeminformation/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sebhildebrandt/systeminformation/compare/v5.30.7...v5.30.8)

---
updated-dependencies:
- dependency-name: systeminformation
  dependency-version: 5.30.8
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-18 21:23:34 -08:00
dependabot[bot]
83d328a29a build(deps): bump tar from 7.5.7 to 7.5.9 in /admin
Bumps [tar](https://github.com/isaacs/node-tar) from 7.5.7 to 7.5.9.
- [Release notes](https://github.com/isaacs/node-tar/releases)
- [Changelog](https://github.com/isaacs/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/isaacs/node-tar/compare/v7.5.7...v7.5.9)

---
updated-dependencies:
- dependency-name: tar
  dependency-version: 7.5.9
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-18 21:23:13 -08:00
Jake Turner
485d34e0c8 fix(UI): move content updates section 2026-02-18 21:22:53 -08:00
Jake Turner
98b65c421c feat(AI): thinking and response streaming 2026-02-18 21:22:53 -08:00
cosmistack-bot
16ce1e2945 docs(release): finalize v1.25.2 release notes [skip ci] 2026-02-18 22:54:36 +00:00
cosmistack-bot
0ee0dc13e8 chore(release): 1.25.2 [skip ci] 2026-02-18 22:53:53 +00:00
dependabot[bot]
5840bfc24b build(deps): bump fast-xml-parser from 5.3.4 to 5.3.6 in /admin
Bumps [fast-xml-parser](https://github.com/NaturalIntelligence/fast-xml-parser) from 5.3.4 to 5.3.6.
- [Release notes](https://github.com/NaturalIntelligence/fast-xml-parser/releases)
- [Changelog](https://github.com/NaturalIntelligence/fast-xml-parser/blob/master/CHANGELOG.md)
- [Commits](https://github.com/NaturalIntelligence/fast-xml-parser/compare/v5.3.4...v5.3.6)

---
updated-dependencies:
- dependency-name: fast-xml-parser
  dependency-version: 5.3.6
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-18 14:52:53 -08:00
dependabot[bot]
cdf931be2f build(deps): bump qs from 6.14.1 to 6.14.2 in /admin
Bumps [qs](https://github.com/ljharb/qs) from 6.14.1 to 6.14.2.
- [Changelog](https://github.com/ljharb/qs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ljharb/qs/compare/v6.14.1...v6.14.2)

---
updated-dependencies:
- dependency-name: qs
  dependency-version: 6.14.2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-18 14:52:35 -08:00
Jake Turner
ed26df7aff docs: updated release notes 2026-02-18 14:52:06 -08:00
Jake Turner
e75d54bd69 fix(UI): gracefully handle legacy docs and knowledge-base paths 2026-02-18 14:52:06 -08:00
Jake Turner
43ebaa93c1 fix(AI): leave chat suggestions disabled by default 2026-02-18 14:52:06 -08:00
Jake Turner
77f1868cf8 fix(AI): improve GPU detection logic 2026-02-18 14:52:06 -08:00
Jake Turner
3ee3cffad9 fix(UI): invert update banner colors 2026-02-18 14:52:06 -08:00
Jake Turner
b2e4ce7261 ops: add optional storage dir removal to uninstall script 2026-02-18 14:52:06 -08:00
Jake Turner
ad31a985ea ops: fix uninstall script to remove network and updater volume 2026-02-18 14:52:06 -08:00
cosmistack-bot
b63c33d277 docs(release): finalize v1.25.1 release notes [skip ci] 2026-02-12 06:49:18 +00:00
cosmistack-bot
8b0add66d9 chore(release): 1.25.1 [skip ci] 2026-02-12 06:49:07 +00:00
Jake Turner
8609a551f2 fix(Settings): improve user guidance during system update 2026-02-11 22:48:27 -08:00
Jake Turner
fcb696587a docs: fix release action to include all commit info 2026-02-11 22:48:27 -08:00
Jake Turner
a49322b63b fix(Updates): avoid issues with stale cache when checking latest version 2026-02-11 22:48:27 -08:00
cosmistack-bot
76ac713406 docs(release): finalize v1.25.0 release notes [skip ci] 2026-02-12 06:12:16 +00:00
cosmistack-bot
0177f25b1f chore(release): 1.25.0 [skip ci] 2026-02-12 06:11:03 +00:00
Jake Turner
279ee1254c fix(Benchmark): improved error reporting and fix sysbench race condition 2026-02-11 22:09:31 -08:00
Jake Turner
d55ff7b466 feat: curated content update checking 2026-02-11 21:49:46 -08:00
Jake Turner
c4514e8c3d fix(Settings): standardize manifest fetching behavior 2026-02-11 16:13:21 -08:00
Jake Turner
d7d3821c06 fix(Settings): improve Maps Manager UI 2026-02-11 16:00:49 -08:00
Jake Turner
32d206cfd7 feat: curated content system overhaul 2026-02-11 15:44:46 -08:00
Jake Turner
4ac261477a feat: Unified release note management 2026-02-11 12:40:39 -08:00
Jake Turner
4425e02c3c fix(UI): icon imports in settings/update.tsx 2026-02-11 11:21:40 -08:00
Jake Turner
df6247b425 feat(Easy Setup): visual cue to start at Easy Setup for OOBE 2026-02-11 11:16:52 -08:00
Jake Turner
988dba318c fix(Updater): file bind mount causing stale compose file ref 2026-02-11 10:43:24 -08:00
Chris Sherwood
f02c5e5cd0 fix(System): use available memory for usage calculation
mem.used on Linux includes reclaimable buff/cache, which caused the
System Information page to show 97% memory usage on a 64GB machine
that actually had 53GB available. The warning banner fired at >90%
creating a false alarm.

Now uses (total - available) for the gauge, percentage, and displayed
values. Also renames "Free RAM" to "Available RAM" using mem.available
instead of mem.free, since free is misleadingly small on Linux (it
excludes reclaimable cache).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 10:17:55 -08:00
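The corrected gauge math, sketched against systeminformation's mem() fields:

```ts
import si from 'systeminformation'

async function memoryUsage() {
  const mem = await si.mem()
  // On Linux, mem.used includes reclaimable buff/cache; total - available
  // reflects actual memory pressure.
  const used = mem.total - mem.available
  return {
    usedBytes: used,
    availableBytes: mem.available, // shown as "Available RAM", not mem.free
    percent: Math.round((used / mem.total) * 100),
  }
}
```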
dependabot[bot]
7f136c6441 build(deps): bump axios from 1.13.4 to 1.13.5 in /admin
Bumps [axios](https://github.com/axios/axios) from 1.13.4 to 1.13.5.
- [Release notes](https://github.com/axios/axios/releases)
- [Changelog](https://github.com/axios/axios/blob/v1.x/CHANGELOG.md)
- [Commits](https://github.com/axios/axios/compare/v1.13.4...v1.13.5)

---
updated-dependencies:
- dependency-name: axios
  dependency-version: 1.13.5
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-11 10:17:21 -08:00
Jake Turner
f090468d20 build: switch to node:22-slim image for libzim compat 2026-02-09 16:23:47 -08:00
cosmistack-bot
3f41c8801c chore(release): 1.24.0 [skip ci] 2026-02-09 23:26:49 +00:00
Jake Turner
cf8c94ddb2 fix(Install): improve Docker GPU configuration 2026-02-09 15:26:14 -08:00
Jake Turner
4747863702 feat(AI Assistant): allow manual scan and resync KB 2026-02-09 15:16:18 -08:00
Jake Turner
9301c44d3f fix(AI Assistant): chat suggestion performance improvements 2026-02-08 16:19:27 -08:00
Jake Turner
276bdcd0b2 feat(AI Assistant): query rewriting for enhanced context retrieval 2026-02-08 16:19:27 -08:00
Jake Turner
921eef30d6 refactor: reusable utility for running nvidia-smi 2026-02-08 15:18:52 -08:00
Chris Sherwood
c16cfc3a93 fix(GPU): detect NVIDIA GPUs via Docker API instead of lspci
The previous lspci-based GPU detection fails inside Docker containers
because lspci isn't available, causing Ollama to always run CPU-only
even when a GPU + NVIDIA Container Toolkit are present on the host.

Replace with Docker API runtime check (docker.info() -> Runtimes) as
primary detection method. This works from inside any container via the
mounted Docker socket and confirms both GPU presence and toolkit
installation. Keep lspci as fallback for host-based installs and AMD.

Also add Docker-based GPU detection to benchmark hardware info — exec
nvidia-smi inside the Ollama container to get the actual GPU model name
instead of showing "Not detected".

Tested on nomad3 (Intel Core Ultra 9 285HX + RTX 5060): AI performance
went from 12.7 tok/s (CPU) to 281.4 tok/s (GPU) — a 22x improvement.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 15:18:52 -08:00
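A sketch of the primary detection path, using dockerode as an assumed client (the commit names docker.info() -> Runtimes; the 'nvidia' runtime key is what the NVIDIA Container Toolkit registers):

```ts
import Docker from 'dockerode'

async function hasNvidiaRuntime(): Promise<boolean> {
  // Works from inside any container with the Docker socket mounted.
  const docker = new Docker({ socketPath: '/var/run/docker.sock' })
  const info = await docker.info()
  return Boolean(info?.Runtimes && 'nvidia' in info.Runtimes)
}
```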
Chris Sherwood
812d13c3da fix(System): correct memory usage percentage calculation
The percentage was using (total - available) / total which excludes
reclaimable buffers/cache, but the displayed "Used RAM" value uses
mem.used which includes them. This mismatch showed 14% alongside
22 GB / 62 GB. Now uses mem.used / mem.total so the percentage
matches the displayed numbers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 13:23:39 -08:00
Chris Sherwood
b0be99700d fix(System): show host OS, hostname, GPU instead of container info
Inside Docker, systeminformation reports the container's Alpine Linux
distro, container ID as hostname, and no GPU. This enriches the System
Information page with actual host details via the Docker API:

- Distribution and kernel version from docker.info()
- Real hostname from docker.info().Name
- GPU model and VRAM via nvidia-smi inside the Ollama container
- Graphics card in System Details (Model, Vendor, VRAM)
- Friendly uptime display (days/hours/minutes instead of minutes only)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 13:23:39 -08:00
Chris Sherwood
569dae057d fix(collections): correct devdocs ZIM filenames in Computing & Technology
Kiwix renamed devdocs files from `devdocs.io_en_*` to `devdocs_en_*`.
The old URLs returned 404, causing all Computing & Technology downloads
to silently fail during Easy Setup. Affects 9 resources across all 3
tiers (Python, JavaScript, HTML, CSS, Node.js, React, Git, Docker,
Linux/Bash docs).

Relates to #127

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 13:21:26 -08:00
Jake Turner
f8117ede68 fix(AI Assistant): inline code rendering 2026-02-08 13:20:10 -08:00
Jake Turner
6745dbf3d1 feat: move KB UI into AI Assistant UI 2026-02-08 13:20:10 -08:00
Jake Turner
8726700a0a feat: zim content embedding 2026-02-08 13:20:10 -08:00
Chris Sherwood
c2b6e079af fix(Downloads): sort active downloads by progress descending
Items actively downloading now appear at the top of the download list
instead of the bottom. Sorts by progress percentage descending so the
item furthest along is always first, and queued items (0%) are last.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 13:14:04 -08:00
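The ordering, as a one-liner sketch (the job shape is an assumption):

```ts
interface DownloadJob { name: string; progress: number } // progress: 0-100

const byProgressDesc = (jobs: DownloadJob[]): DownloadJob[] =>
  [...jobs].sort((a, b) => b.progress - a.progress) // queued items (0%) sink to the bottom
```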
Chris Sherwood
711bd07f7b fix(docs): use download-then-run install command instead of pipe
The install script has interactive prompts (install confirmation and
license acceptance) that require stdin. Piping via `curl | bash`
consumes stdin, causing `read -p` to receive no input and exit with
"Invalid Response."

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 13:13:19 -08:00
Jake Turner
12286b9d34 feat: display model download progress 2026-02-06 16:22:23 -08:00
Jake Turner
2e0ab10075 feat: cron job for system update checks 2026-02-06 15:40:30 -08:00
dependabot[bot]
40741530fd build(deps): bump @adonisjs/bodyparser from 10.1.2 to 10.1.3 in /admin
Bumps [@adonisjs/bodyparser](https://github.com/adonisjs/bodyparser) from 10.1.2 to 10.1.3.
- [Release notes](https://github.com/adonisjs/bodyparser/releases)
- [Commits](https://github.com/adonisjs/bodyparser/compare/v10.1.2...v10.1.3)

---
updated-dependencies:
- dependency-name: "@adonisjs/bodyparser"
  dependency-version: 10.1.3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-06 14:41:47 -08:00
Chris Sherwood
1a95b84a8c feat(docs): polish docs rendering with desert-themed components
Add custom Markdoc renderers for images, links, paragraphs, code blocks,
inline code, and horizontal rules. Restyle existing heading, table, and
list components to match the desert tactical color palette. Add 8
screenshots to docs with polished image presentation (rounded corners,
shadow, captions). Constrain content width for readability.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
8b8e00de8b fix(docs): remove double period after LLC on about page
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
f3c16c674c fix(docs): display FAQ as uppercase in sidebar
Add title override map so 'faq' displays as 'FAQ' instead of 'Faq'.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
184e96df06 docs: note that Wikipedia selection replaces previous download
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
c4730511c9 docs: remove installation section from getting-started
Users reading in-app docs already have NOMAD installed. Remove
install instructions, system requirements, and security/privacy
sections that duplicate the README. Start directly with Easy Setup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
a8f41298fd docs: remove minimum specs, align recommended specs with website
Drop minimum specs section — NOMAD is a premium resource designed
for robust hardware. Align recommended storage to 500 GB+ SSD to
match projectnomad.us. Add Internet-in-a-Box mention for users
seeking a lightweight alternative.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
ceaba61574 fix(docs): point Wikipedia/reference link to Content Explorer
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
aadaac8169 fix(docs): point Wikipedia Selector refs to /settings/zim/remote-explorer
The Wikipedia Selector lives at Content Explorer
(/settings/zim/remote-explorer), not Content Manager (/settings/zim).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
d9ed8e9602 fix(docs): correct broken internal links to match actual routes
Fix links verified against routes.ts:
- /settings → /settings/system
- /settings/updates → /settings/update (singular)
- /settings/maps-manager → /settings/maps
- /settings/wikipedia-selector → /settings/zim
- /settings/zim-manager → /settings/zim
Replace Wikipedia Selector references with Content Manager to
match sidebar labels in SettingsLayout.tsx.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
efb4db4fa8 fix(docs): correct Install Apps link to /settings/apps
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
3dde0c149b docs: overhaul in-app documentation and add sidebar ordering
Update all 6 documentation files and docs_service.ts:

- home.md: Add AI Chat, Knowledge Base, and Benchmark sections;
  replace Open WebUI references with built-in AI Chat links;
  expand Quick Links table with new features

- getting-started.md: Update Easy Setup steps to match current
  wizard (Capabilities/Maps/Content/Review); replace Open WebUI
  section with AI Assistant and Knowledge Base sections; add
  Wikipedia Selector and System Benchmark docs; update GPU specs

- faq.md: Add AI, Knowledge Base, Benchmark, and curated tier
  FAQ entries; add troubleshooting for AI Chat, Knowledge Base
  uploads, and benchmark submission; update all references from
  Open WebUI to built-in AI Chat; add Discord community link

- use-cases.md: Add Knowledge Base mentions across Emergency Prep,
  Homeschooling, Remote Work, Privacy, and Academic Research use
  cases; add "Upload Relevant Documents" setup step; update
  privacy section to emphasize built-in AI

- about.md: Fix "ultime" typo, add project evolution paragraph,
  add community links section

- release-notes.md: Add all versions from v1.11.0 through v1.23.0
  with accurate dates and changes from git history; consolidate
  patch versions; update Support section with Discord link

- docs_service.ts: Replace alphabetical sidebar sort with custom
  ordering (Home > Getting Started > Use Cases > FAQ > About >
  Release Notes)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:41:30 -08:00
Chris Sherwood
474ca2a76b docs: update README with feature overview and fix stop script bug
Expand the "How It Works" section with a proper capability overview,
add a "What's Included" table, fix the stop script referencing
start_nomad.sh, and add AMD GPU support to optimal specs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 14:39:36 -08:00
cosmistack-bot
3c3684497b chore(release): 1.23.0 [skip ci] 2026-02-05 08:18:03 +00:00
Jake Turner
36b6d8ed7a fix: rework content tier system to dynamically determine install status
Removes the InstalledTier model and instead checks for the presence of files on the fly. Avoids broken state by handling this on the server side rather than marking tiers as installed via a client-side API call.
2026-02-04 22:58:21 -08:00
Jake Turner
fcc749ec57 feat: improve global error reporting with user notifs 2026-02-04 22:58:21 -08:00
Jake Turner
52e90041f4 feat(Maps): maps use full page by default 2026-02-04 21:54:36 -08:00
Jake Turner
c3278efc01 fix(AI): add cloud flag to fallback models 2026-02-04 21:35:18 -08:00
Jake Turner
6b17e6ff68 fix(Curated Collections): ensure resources are not duplicated on fetch-latest 2026-02-04 21:35:18 -08:00
Jake Turner
d63c5bc668 feat: add back to home link on standard header 2026-02-04 21:10:31 -08:00
Jake Turner
5e584eb5d0 fix(Kiwix): avoid restarting container while download jobs running 2026-02-04 17:58:50 -08:00
Jake Turner
cc61fbea3b fix(Docs): add pretty rendering for tables 2026-02-04 17:05:47 -08:00
Jake Turner
bfc6c3d113 fix(Docker): ensure containers fully removed on failed service install 2026-02-04 17:05:34 -08:00
Jake Turner
a91c13867d fix: filter cloud models from API response 2026-02-04 17:05:20 -08:00
Jake Turner
d4cbc0c2d5 feat(AI): add fuzzy search to models list 2026-02-04 16:45:12 -08:00
cosmistack-bot
1952d585d3 chore(release): 1.22.0 [skip ci] 2026-02-04 07:35:17 +00:00
Jake Turner
fa8300b5df fix(Maps): ensure asset urls resolve correctly 2026-02-03 23:34:32 -08:00
Jake Turner
42568a9e7e fix(Settings): rename port column in Apps Settings 2026-02-03 23:33:49 -08:00
Jake Turner
ab07551719 feat: auto add NOMAD docs to KB on AI install 2026-02-03 23:15:54 -08:00
Jake Turner
907982062f feat(Ollama): cleanup model download logic and improve progress tracking 2026-02-03 23:15:54 -08:00
Jake Turner
5de3c5f261 fix: hide chat button and page unless AI Assistant installed 2026-02-03 23:15:39 -08:00
Chris Sherwood
18e55c747a fix(Wikipedia): prevent loading spinner overlay during download
The LoadingSpinner component defaults to fullscreen mode which renders
a Semantic UI dimmer overlay with "Loading" text. This was overlapping
with the blue "Downloading Wikipedia" status banner.

Changed to use fullscreen={false} iconOnly to render just the spinner
icon inline within the banner.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 23:15:16 -08:00
Chris Sherwood
738b57e854 fix(EasySetup): scroll to top when navigating between steps
Adds a useEffect that smoothly scrolls the window to the top whenever
the wizard step changes. This ensures users always see the beginning
of each step content rather than remaining scrolled down from the
previous step.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 23:14:50 -08:00
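The effect described above, sketched as a hook (the hook name and the `step` parameter are illustrative):

```ts
import { useEffect } from 'react'

function useScrollToTopOnStepChange(step: number): void {
  useEffect(() => {
    // Smoothly return to the top whenever the wizard step changes.
    window.scrollTo({ top: 0, behavior: 'smooth' })
  }, [step])
}
```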
Chris Sherwood
2c4fc59428 feat(ContentManager): Display friendly names instead of filenames
Content Manager now shows Title and Summary columns from Kiwix metadata
instead of just raw filenames. Metadata is captured when files are
downloaded from Content Explorer and stored in a new zim_file_metadata
table. Existing files without metadata gracefully fall back to showing
the filename.

Changes:
- Add zim_file_metadata table and model for storing title, summary, author
- Update download flow to capture and store metadata from Kiwix library
- Update Content Manager UI to display Title and Summary columns
- Clean up metadata when ZIM files are deleted

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 23:14:28 -08:00
cosmistack-bot
3b31be66f9 chore(release): 1.21.0 [skip ci] 2026-02-02 00:27:10 +00:00
Jake Turner
9731ce839d fix(Docker): pin Qdrant and Ollama versions 2026-02-02 00:26:26 +00:00
Jake Turner
a697d930fe feat(AI): add Ollama support for NVIDIA and AMD GPUs 2026-02-02 00:24:10 +00:00
Jake Turner
d1f40663d3 feat(RAG): initial beta with preprocessing, embedding, semantic retrieval, and ctx passage 2026-02-01 23:59:21 +00:00
Jake Turner
1923cd4cde feat(AI): chat suggestions and assistant settings 2026-02-01 07:24:21 +00:00
Jake Turner
029c2176f7 fix(Chat): sidebar display 2026-02-01 05:48:23 +00:00
Jake Turner
31c671bdb5 fix: service name defs and ollama ui location 2026-02-01 05:46:23 +00:00
Jake Turner
4584844ca6 refactor(Benchmarks): cleanup api calls 2026-02-01 05:23:11 +00:00
Jake Turner
a2aa33168d fix: remove heroicons references and unused icons 2026-01-31 21:00:51 -08:00
Chris Sherwood
68f374e3a8 feat: Add dedicated Wikipedia Selector with smart package management
Adds a standalone Wikipedia selection section that appears prominently in both
the Easy Setup Wizard and Content Explorer. Features include:

- Six Wikipedia package options ranging from Quick Reference (313MB) to Complete
  Wikipedia with Full Media (99.6GB)
- Card-based radio selection UI with clear size indicators
- Smart replacement: downloads new package before deleting old one
- Status tracking: shows Installed, Selected, or Downloading badges
- "No Wikipedia" option for users who want to skip or remove Wikipedia

Technical changes:
- New wikipedia_selections database table and model
- New /api/zim/wikipedia and /api/zim/wikipedia/select endpoints
- WikipediaSelector component with consistent styling
- Integration with existing download queue system
- Callback updates status to 'installed' on successful download
- Wikipedia removed from tiered category system to avoid duplication

UI improvements:
- Added section dividers and icons (AI Models, Wikipedia, Additional Content)
- Consistent spacing between major sections in Easy Setup Wizard
- Content Explorer gets matching Wikipedia section with submit button

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 21:00:51 -08:00
Chris Sherwood
80aa556b42 docs: Add website and community links to README
- Add centered header with logo and tagline
- Add badge links to website, Discord, and benchmark leaderboard
- Update description to "offline-first knowledge and education server"
- Add new "Community & Resources" section with descriptive links

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 20:44:38 -08:00
Chris Sherwood
f6db05bed2 fix(benchmark): Detect Intel Arc Graphics on Core Ultra processors
When running in Docker, the systeminformation library cannot see the
host's GPU hardware. This adds a fallback to detect Intel integrated
graphics from the CPU model name, similar to how we handle AMD APUs
with Radeon graphics.

Intel Core Ultra processors (Meteor Lake, Arrow Lake) include Intel
Arc Graphics integrated. This change detects "Core Ultra" in the CPU
brand and reports "Intel Arc Graphics (Integrated)" as the GPU model.

Note: This is for display purposes only - Ollama does not support
Intel integrated graphics for AI acceleration.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 20:43:41 -08:00
Chris Sherwood
b9ebc6c54e fix(EasySetup): Remove built-in System Benchmark from wizard
System Benchmark is a built-in feature that doesn't require installation,
so it shouldn't appear in the Easy Setup Wizard where users select things
to install. Users can access the benchmark through Settings > Benchmark.

- Removed benchmark entry from ADDITIONAL_TOOLS array
- Removed unused isBuiltInCapability helper and related dead code
- Simplified renderCapabilityCard by removing built-in specific styling
- Removed unused IconArrowRight import

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 20:41:35 -08:00
Jake Turner
adf76d272e fix: remove Open WebUI 2026-01-31 20:39:49 -08:00
Jake Turner
0da050c5a3 fix(UI): switch to tabler icons only for consistency 2026-01-31 20:39:49 -08:00
Jake Turner
243f749090 feat: [wip] native AI chat interface 2026-01-31 20:39:49 -08:00
Jake Turner
50174d2edb feat(RAG): [wip] RAG capabilities 2026-01-31 20:39:49 -08:00
Jake Turner
c78736c8da feat(Docker): avoid repulling existing images 2026-01-31 20:39:49 -08:00
Jake Turner
cb85785cb1 feat(Ollama): fallback list of recommended models if API down 2026-01-28 15:54:15 -08:00
cosmistack-bot
e8cc17a20d chore(release): 1.20.0 [skip ci] 2026-01-28 21:48:04 +00:00
Chris Sherwood
2921017191 feat(collections): Expand curated categories and improve tier modal UX
- Add 3 new curated categories: DIY & Repair, Agriculture & Food, Computing & Technology
- Reorganize content logically (moved DIY/food content from Survival to appropriate new categories)
- Update tier selection modal to show only each tier's own resources
- Add "(plus everything in X)" text for inherited tier content
- Reduces visual redundancy and makes tiers easier to compare

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 13:47:01 -08:00
Chris Sherwood
9a93fc9e04 feat: Expand Legal Notices and move to bottom of Settings sidebar
- Move Legal Notices to bottom of Settings sidebar (below System)
- Add Third-Party Software Attribution section (Kiwix, Kolibri, Open WebUI, Ollama, CyberChef, FlatNotes)
- Add Privacy Statement (zero telemetry, local-first, no accounts, offline-capable)
- Add Content Disclaimer for third-party content
- Add Medical and Emergency Information Disclaimer
- Add Data Storage section with installation paths

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 13:45:43 -08:00
Chris Sherwood
e8aabfce1e fix(install): Handle missing curl dependency on fresh Ubuntu installs
- Add ensure_dependencies_installed function that checks for and installs curl
- Update README with one-liner install command for fresh systems
- Function is extensible for future dependency requirements

Fixes issue where fresh Ubuntu 24.04 installs fail because curl is not
installed by default.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 13:44:54 -08:00
Chris Sherwood
08f9722b59 fix(migrations): Fix timestamp ordering for builder_tag migration
The migration file used a 10-digit timestamp (1769324448) while all other
migrations use 13-digit timestamps. When sorted numerically, this caused
the builder_tag ALTER TABLE migration to run before the benchmark_results
CREATE TABLE migration, breaking fresh installs.

Renamed: 1769324448 -> 1769324448000 (append 000 to match 13-digit format)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 13:43:21 -08:00
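A quick illustration of why the mixed-width prefixes broke ordering: migration prefixes are compared as numbers, and any seconds-based (10-digit) timestamp is smaller than every milliseconds-based (13-digit) one, so it always sorts first. The filenames below are hypothetical:

```typescript
// Numeric sort puts the 10-digit (seconds) prefix ahead of every
// 13-digit (milliseconds) prefix, regardless of actual creation order.
const migrations = [
  '1769324448_add_builder_tag.ts', // 10 digits: ~1.77e9
  '1769300000000_create_benchmark_results.ts', // 13 digits: ~1.77e12
]
const byPrefix = (a: string, b: string) =>
  Number(a.split('_')[0]) - Number(b.split('_')[0])

console.log([...migrations].sort(byPrefix))
// -> builder_tag runs first, altering a table that doesn't exist yet.
// Appending '000' (1769324448 -> 1769324448000) restores the order.
```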
cosmistack-bot
fb09ff3daf chore(release): 1.19.0 [skip ci] 2026-01-28 07:23:30 +00:00
Jake Turner
8cfe490b57 feat: subscribe to release notes 2026-01-27 23:22:26 -08:00
Jake Turner
c8de767052 feat(Maps): automatically download base assets if missing 2026-01-27 20:49:56 -08:00
Chris Sherwood
e7336f2a8e fix(SystemInfo): Fall back to fsSize when disk array is empty
The Storage Devices section on System Information showed "No storage
devices detected" because the disk info file (/storage/nomad-disk-info.json)
returned an empty array. The fsSize data from systeminformation was
available but not used as a fallback.

Applies the same fallback pattern from the Easy Setup wizard (PR #90):
- Try disk array first, filtering to entries with totalSize > 0
- Fall back to fsSize data when disk array is empty
- Deduplicate fsSize entries by size (same disk mounted multiple places)
- Filter to real block devices (/dev/), excluding virtual filesystems
- Update Storage Devices count in System Status to match

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 20:28:45 -08:00
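A sketch of the fallback chain described above, assuming systeminformation-like shapes; field names are illustrative:

```typescript
type Disk = { device: string; totalSize: number }
type FsEntry = { fs: string; size: number }

function resolveStorageDevices(disks: Disk[], fsSize: FsEntry[]) {
  // Prefer the disk array, keeping only entries with a real size.
  const valid = disks.filter((d) => d.totalSize > 0)
  if (valid.length > 0) {
    return valid.map((d) => ({ name: d.device, size: d.totalSize }))
  }
  // Fall back to fsSize: keep real block devices and deduplicate by
  // size, since one disk mounted in several places repeats its size.
  const seen = new Set<number>()
  return fsSize
    .filter((e) => e.fs.startsWith('/dev/'))
    .filter((e) => (seen.has(e.size) ? false : (seen.add(e.size), true)))
    .map((e) => ({ name: e.fs, size: e.size }))
}
```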
chriscrosstalk
7a5a254dd5
feat(benchmark): Require full benchmark with AI for community sharing (#99)
* feat(benchmark): Require full benchmark with AI for community sharing

Only allow users to share benchmark results with the community leaderboard
when they have completed a full benchmark that includes AI performance data.

Frontend changes:
- Add AI Assistant installation check via service API query
- Show pre-flight warning when clicking Full Benchmark without AI installed
- Disable AI Only button when AI Assistant not installed
- Show "Partial Benchmark" info alert for non-shareable results
- Only display "Share with Community" for full benchmarks with AI data
- Add note about AI installation requirement with link to Apps page

Backend changes:
- Validate benchmark_type is 'full' before allowing submission
- Require ai_tokens_per_second > 0 for community submission
- Return clear error messages explaining requirements

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(benchmark): UI improvements and GPU detection fix

- Fix GPU detection to properly identify AMD discrete GPUs
- Fix gauge colors (high scores now green, low scores red)
- Fix gauge centering (SVG size matches container)
- Add info tooltips for Tokens/sec and Time to First Token

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(benchmark): Extract iGPU from AMD APU CPU name as fallback

When systeminformation doesn't detect graphics controllers (common on
headless Linux), extract the integrated GPU name from AMD APU CPU model
strings like "AMD Ryzen AI 9 HX 370 w/ Radeon 890M".

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* feat(benchmark): Add Builder Tag system for community leaderboard

- Add builder_tag column to benchmark_results table
- Create BuilderTagSelector component with word dropdowns + randomize
- Add 50 adjectives and 50 nouns for NOMAD-themed tags (e.g., Tactical-Llama-1234)
- Add anonymous sharing option checkbox
- Add builder tag display in Benchmark Details section
- Add Benchmark History section showing all past benchmarks
- Update submission API to accept anonymous flag
- Add /api/benchmark/builder-tag endpoint to update tags

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* feat(benchmark): Add HMAC signing for leaderboard submissions

Sign benchmark submissions with HMAC-SHA256 to prevent casual API abuse.
Includes X-NOMAD-Timestamp and X-NOMAD-Signature headers.

Note: Since NOMAD is open source, a determined attacker could extract
the secret. This provides protection against casual abuse only.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 00:24:31 -08:00
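For reference, a minimal sketch of the HMAC-SHA256 signing scheme named in the last bullet, using node:crypto. The exact payload NOMAD signs (here, the timestamp joined with the request body) is an assumption:

```typescript
import { createHmac } from 'node:crypto'

// Returns the two headers the submission request carries. The server
// recomputes the HMAC with the shared secret and rejects requests whose
// signature doesn't match or whose timestamp is too old (replay guard).
function signSubmission(body: string, secret: string) {
  const timestamp = Date.now().toString()
  const signature = createHmac('sha256', secret)
    .update(`${timestamp}.${body}`)
    .digest('hex')
  return {
    'X-NOMAD-Timestamp': timestamp,
    'X-NOMAD-Signature': signature,
  }
}
```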
cosmistack-bot
8f1b8de792 chore(release): 1.18.0 [skip ci] 2026-01-24 23:38:11 +00:00
Jake Turner
1b31c6f80d fix(Open WebUI): install status indicator 2026-01-24 15:37:09 -08:00
Chris Sherwood
5afc3a270a feat: Improve curated collections UX with persistent tier selection
- Add installed_tiers table to persist user's tier selection per category
- Change tier selection behavior: clicking a tier now highlights it locally,
  user must click "Submit" to confirm (previously clicked = immediate download)
- Remove "Recommended" badge and asterisk (*) from tier displays
- Highlight installed tier instead of recommended tier in CategoryCard
- Add "Click to choose" hint when no tier is installed
- Save installed tier when downloading from Content Explorer or Easy Setup
- Pass installed tier to modal as default selection

Database:
- New migration: create installed_tiers table (category_slug unique, tier_slug)
- New model: InstalledTier

Backend:
- ZimService.listCuratedCategories() now includes installedTierSlug
- New ZimService.saveInstalledTier() method
- New POST /api/zim/save-installed-tier endpoint

Frontend:
- TierSelectionModal: local selection state, "Close" → "Submit" button
- CategoryCard: highlight based on installedTierSlug, add "Click to choose"
- Content Explorer: save tier after download, refresh categories
- Easy Setup: save tiers on wizard completion

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 15:33:50 -08:00
Jake Turner
64e6e11389 feat(Docker): container URL resolution util and networking improvements 2026-01-24 15:27:56 -08:00
Chris Sherwood
e31f956289 fix(benchmark): Fix AI benchmark connectivity and improve error handling
- Add OLLAMA_API_URL environment variable for Docker networking
- Use host.docker.internal to reach Ollama from NOMAD container
- Add extra_hosts config in compose for Linux compatibility
- Add downloading_ai_model status with clear progress indicator
- Show model download progress on first AI benchmark run
- Fail AI-only benchmarks with clear error if AI unavailable
- Display benchmark errors to users via Alert component
- Improve error messages with error codes for debugging

Fixes issue where AI benchmark silently failed due to NOMAD container
being unable to reach Ollama at localhost:11434.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 15:27:56 -08:00
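The connectivity fix boils down to making the Ollama endpoint configurable instead of assuming localhost inside the container. A sketch; the default value shown is an assumption:

```typescript
// Inside the NOMAD container, localhost:11434 is the container itself,
// not the host running Ollama. host.docker.internal resolves to the
// host (natively on Docker Desktop; via extra_hosts host-gateway on Linux).
const OLLAMA_API_URL =
  process.env.OLLAMA_API_URL ?? 'http://host.docker.internal:11434'
```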
cosmistack-bot
565abca821 chore(release): 1.17.0 [skip ci] 2026-01-23 22:26:11 +00:00
Jake Turner
8ae47e03d8 fix(Notifications): improve styling 2026-01-23 22:25:18 +00:00
Chris Sherwood
b94deef437 feat: Update Settings nomenclature and add tiered content collections
- Rename 'Models Manager' to 'AI Model Manager'
- Rename 'ZIM Manager' to 'Content Manager'
- Rename 'ZIM Remote Explorer' to 'Content Explorer'
- Rename 'Curated ZIM Collections' to 'Curated Content Collections'
- Add tiered category collections (Essential/Standard/Comprehensive) to
  Content Explorer, matching the Easy Setup Wizard Step 3 for consistency
- Reorganize Settings sidebar alphabetically

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 14:22:04 -08:00
Chris Sherwood
42a18c8dc6 fix(EasySetup): select valid primary disk for storage projection bar
The storage projection bar was blindly using disk[0], which on systems
with multiple drives (like the Minisforum AI X1 Pro) could be an empty
or uninitialized drive (e.g., sda showing N/A / N/A).

Now the disk selection:
1. Filters out disks with totalSize === 0 (invalid/empty drives)
2. Prefers disk containing root (/) or /storage mount point
3. Falls back to largest valid disk if no root mount found

This fixes the NaN% and 0 Bytes display on multi-drive systems.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 14:20:11 -08:00
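The three-step selection reads naturally as a small function; the shapes and field names below are illustrative:

```typescript
type DiskInfo = { device: string; totalSize: number; mounts?: string[] }

function pickPrimaryDisk(disks: DiskInfo[]): DiskInfo | undefined {
  // 1. Drop invalid/empty drives (e.g., an uninitialized sda).
  const valid = disks.filter((d) => d.totalSize > 0)
  // 2. Prefer the disk holding the root or /storage mount point.
  const preferred = valid.find((d) =>
    d.mounts?.some((m) => m === '/' || m === '/storage')
  )
  if (preferred) return preferred
  // 3. Otherwise fall back to the largest valid disk.
  return [...valid].sort((a, b) => b.totalSize - a.totalSize)[0]
}
```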
Jake Turner
4ef954c9b5 fix(UI): remove splash screen 2026-01-23 14:17:25 -08:00
Jake Turner
f49b9abb81 fix(Maps): static path resolution 2026-01-23 14:17:25 -08:00
Jake Turner
d5db024eee feat(Queues): support working all queues with a single command 2026-01-23 11:07:47 -08:00
Jake Turner
a42b6b85f6 fix(Benchmark): use node:crypto in favor of uuid package 2026-01-22 21:48:12 -08:00
Jake Turner
525eecbbde fix(Benchmark): icon definitions 2026-01-22 21:48:12 -08:00
Jake Turner
8092fb58d8 fix(Benchmark): remove unused seeder definition 2026-01-22 21:48:12 -08:00
Jake Turner
438d683bac fix(Benchmark): cleanup types for SSOT 2026-01-22 21:48:12 -08:00
Chris Sherwood
6efd049424 fix(benchmark): Add settings nav link, fix submission bug, improve UX
- Add Benchmark to Settings sidebar navigation
- Fix Luxon DateTime bug when saving submission timestamp
- Add privacy explanation text before Share button
- Add error handling and display for failed submissions
- Show "Submitting..." state and success confirmation
- Add link to view leaderboard after successful submission

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 21:48:12 -08:00
Chris Sherwood
755807f95e feat: Add system benchmark feature with NOMAD Score
Add comprehensive benchmarking capability to measure server performance:

Backend:
- BenchmarkService with CPU, memory, disk, and AI benchmarks using sysbench
- Database migrations for benchmark_results and benchmark_settings tables
- REST API endpoints for running benchmarks and retrieving results
- CLI commands: benchmark:run, benchmark:results, benchmark:submit
- BullMQ job for async benchmark execution with SSE progress updates
- Synchronous mode option (?sync=true) for simpler local dev setup

Frontend:
- Benchmark settings page with circular gauges for scores
- NOMAD Score display with weighted composite calculation
- System Performance section (CPU, Memory, Disk Read/Write)
- AI Performance section (tokens/sec, time to first token)
- Hardware Information display
- Expandable Benchmark Details section
- Progress simulation during sync benchmark execution

Easy Setup Integration:
- Added System Benchmark to Additional Tools section
- Built-in capability pattern for non-Docker features
- Click-to-navigate behavior for built-in tools

Fixes:
- Docker log multiplexing issue (Tty: true) for proper output parsing
- Consolidated disk benchmarks into single container execution

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 21:48:12 -08:00
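The "weighted composite calculation" behind the NOMAD Score can be sketched as below; the actual weights are not documented in this log, so the values used here are purely illustrative:

```typescript
// Each sub-score is assumed to be 0-100; the composite is a weighted
// average. The weights themselves are assumptions, not NOMAD's values.
function nomadScore(s: {
  cpu: number
  memory: number
  diskRead: number
  diskWrite: number
  ai: number
}): number {
  const w = { cpu: 0.25, memory: 0.15, diskRead: 0.1, diskWrite: 0.1, ai: 0.4 }
  return Math.round(
    s.cpu * w.cpu +
      s.memory * w.memory +
      s.diskRead * w.diskRead +
      s.diskWrite * w.diskWrite +
      s.ai * w.ai
  )
}
```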
dependabot[bot]
6bee84f367 build(deps): bump lodash from 4.17.21 to 4.17.23 in /admin
Bumps [lodash](https://github.com/lodash/lodash) from 4.17.21 to 4.17.23.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.21...4.17.23)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.17.23
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-22 16:44:10 -08:00
Chris Sherwood
24f10ea3d5 feat: Use friendly app names on Dashboard with open source attribution
Updates the Dashboard to use the same user-friendly names as the Easy Setup
Wizard, giving credit to the open source projects powering each capability:

- Kiwix → Information Library (Powered by Kiwix)
- Kolibri → Education Platform (Powered by Kolibri)
- Open WebUI → AI Assistant (Powered by Open WebUI + Ollama)
- FlatNotes → Notes (Powered by FlatNotes)
- CyberChef → Data Tools (Powered by CyberChef)

Also reorders Dashboard cards to prioritize Core Capabilities first, with
Maps promoted to Core Capability status, followed by Additional Tools,
then system items (Easy Setup, Install Apps, Docs, Settings).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 16:43:32 -08:00
Chris Sherwood
6c650a0ded fix(docs): Remove broken service links that pointed to invalid routes
Services like Kiwix, Kolibri, and Open WebUI run on separate ports,
not as paths under the Command Center. Links like /kiwix, /kolibri,
and /openwebui don't exist - users must launch these from the Apps
page or home screen.

- Update home.md to direct users to Apps page or home screen
- Update getting-started.md with correct launch instructions
- Keep /maps link (Maps is embedded in Command Center)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 16:32:03 -08:00
dependabot[bot]
6236b29e1c build(deps): bump tar from 7.5.3 to 7.5.6 in /admin
Bumps [tar](https://github.com/isaacs/node-tar) from 7.5.3 to 7.5.6.
- [Release notes](https://github.com/isaacs/node-tar/releases)
- [Changelog](https://github.com/isaacs/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/isaacs/node-tar/compare/v7.5.3...v7.5.6)

---
updated-dependencies:
- dependency-name: tar
  dependency-version: 7.5.6
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-22 16:31:21 -08:00
cosmistack-bot
33f04728c5 chore(release): 1.16.0 [skip ci] 2026-01-20 06:50:52 +00:00
Jake Turner
9bb4ff5afc feat: force-reinstall option for apps 2026-01-19 22:50:15 -08:00
Jake Turner
04e169fe7b fix(Easy Setup): add selected model size to storage projection 2026-01-19 22:50:15 -08:00
Jake Turner
937da5d869 feat(Open WebUI): manage models via Command Center 2026-01-19 22:15:52 -08:00
Jake Turner
b3ef977484 feat: [wip] Open WebUI manipulation 2026-01-19 22:15:52 -08:00
Jake Turner
b6e6e10328 fix(CuratedCategories): improve fetching from Github 2026-01-19 14:41:51 -08:00
Jake Turner
111ad5aec8 build: add dockerignore file 2026-01-19 14:41:51 -08:00
cosmistack-bot
ff88e3e868 chore(release): 1.15.0 [skip ci] 2026-01-19 18:32:53 +00:00
copilot-swe-agent[bot]
f905871392 Add NOMAD_STORAGE_PATH schema definition to start/env.ts
Co-authored-by: jakeaturner <52841588+jakeaturner@users.noreply.github.com>
2026-01-19 10:29:24 -08:00
Chris Sherwood
d86c78dba5 feat: Add Windows Docker Desktop support for local development
- Detect Windows platform and use named pipe (//./pipe/docker_engine)
  instead of Unix socket for Docker Desktop compatibility
- Add NOMAD_STORAGE_PATH environment variable for configurable
  storage paths across different platforms
- Update seeder to use environment variable with Linux default
- Document new environment variable in .env.example

This enables local development on Windows machines with Docker Desktop
while maintaining Linux production compatibility.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:29:24 -08:00
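The platform detection described above can be sketched with a dockerode-style client, which accepts a socketPath for both the Unix socket and the Windows named pipe:

```typescript
import Docker from 'dockerode'

// Docker Desktop on Windows exposes a named pipe instead of a Unix
// socket; everything else about the client API stays the same.
const socketPath =
  process.platform === 'win32'
    ? '//./pipe/docker_engine'
    : '/var/run/docker.sock'

const docker = new Docker({ socketPath })
```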
dependabot[bot]
15aa1f3598 build(deps): bump tar from 7.5.2 to 7.5.3 in /admin
Bumps [tar](https://github.com/isaacs/node-tar) from 7.5.2 to 7.5.3.
- [Release notes](https://github.com/isaacs/node-tar/releases)
- [Changelog](https://github.com/isaacs/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/isaacs/node-tar/compare/v7.5.2...v7.5.3)

---
updated-dependencies:
- dependency-name: tar
  dependency-version: 7.5.3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-19 10:28:58 -08:00
chriscrosstalk
59b45a745a
feat: Redesign Easy Setup wizard Step 1 with user-friendly categories (#65)
- Replace technical app names with user-friendly capability categories:
  - "Information Library" (Kiwix) - offline Wikipedia, medical refs, etc.
  - "Education Platform" (Kolibri) - Khan Academy, K-12 content
  - "AI Assistant" (Open WebUI + Ollama) - local AI chat
- Add bullet point feature lists for each core capability
- Move secondary apps (Notes, Data Tools) to collapsible "Additional Tools"
- Show already-installed capabilities with "Installed" badge and disabled state
- Update terminology: "capabilities" instead of "apps", "content packs" instead of "ZIM collections"
- Update Review step to show capability names with technical names in parentheses

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Jake Turner <52841588+jakeaturner@users.noreply.github.com>
2026-01-19 10:22:46 -08:00
Chris Sherwood
f414d9e1c0 chore: Rename step 3 label from 'ZIM Files' to 'Content'
More user-friendly terminology for non-technical users.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:15:20 -08:00
Chris Sherwood
7bf3f25c47 feat: Add storage projection bar to easy setup wizard
Adds a dynamic storage projection bar that shows users how their
selections will impact disk space:

- Displays current disk usage and projected usage after installation
- Updates in real-time as users select maps, ZIM collections, and tiers
- Color-coded warnings (green→tan→orange→red) based on projected usage
- Shows "exceeds available space" warning if selections exceed capacity
- Works on both Linux (disk array) and Windows (fsSize array)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:15:20 -08:00
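A sketch of the color banding for the projection bar. The commit specifies only the palette (green → tan → orange → red) and the over-capacity warning, so the thresholds below are assumptions:

```typescript
function projectionColor(projectedBytes: number, capacityBytes: number) {
  const ratio = projectedBytes / capacityBytes
  if (ratio > 0.95) return 'red'
  if (ratio > 0.85) return 'orange'
  if (ratio > 0.7) return 'tan'
  return 'green'
}
// A separate "exceeds available space" warning fires when ratio > 1.
```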
Chris Sherwood
c03f2ae702 docs: Add categories to-do list for future expansion
Tracks potential Kiwix categories to add to the tiered collections
system, organized by priority level.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:15:20 -08:00
Chris Sherwood
1027bd8e0f chore: Switch categories URL to raw GitHub for dev reliability
jsDelivr CDN was aggressively caching old data during development.
Raw GitHub URLs provide more immediate updates when pushing changes
to the feature branch.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:15:20 -08:00
Chris Sherwood
05441bd6a2 feat: Add Education & Reference category with 3 tiers
Essential (~5 GB):
- Wikipedia Top 45k articles (no images)
- Wikibooks (no images)

Standard (~19 GB, includes Essential):
- TED-Ed educational videos
- Wikiversity tutorials
- LibreTexts STEM (Math, Physics, Chemistry, Biology)
- Project Gutenberg Education

Comprehensive (~59 GB, includes Standard):
- Full Wikipedia (6M+ articles, no images)
- Wikibooks with images
- TED Conference talks
- LibreTexts Humanities, Engineering, Geosciences, Business

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:15:20 -08:00
Chris Sherwood
c9c29955ee chore: Add cache-busting parameter to categories URL
jsDelivr aggressively caches branch references. Adding version
parameter ensures fresh data is fetched when categories are updated.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:15:20 -08:00
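Cache busting here just means making each fetch a distinct URL so the CDN cannot serve a stale copy. A sketch; the parameter name is an assumption:

```typescript
function withCacheBust(url: string, version: string): string {
  // Appending a changing query parameter defeats CDN-level caching
  // because the CDN keys its cache on the full URL.
  const u = new URL(url)
  u.searchParams.set('v', version)
  return u.toString()
}
```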
Chris Sherwood
b007f5e0fe feat: Add Survival & Preparedness category with 3 tiers
Essential (~138 MB):
- Food for Preppers, FOSS Cooking, Based.Cooking

Standard (~3.6 GB, includes Essential):
- Canadian Prepper: Winter Prepping & Bug Out Roll
- Gardening Q&A, Cooking Q&A

Comprehensive (~21 GB, includes Standard):
- Urban Prepper, Canadian Prepper: Prepping Food & Bug Out Concepts
- Learning Self-Reliance, iFixit Repair Guides, DIY Q&A

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:15:20 -08:00
Chris Sherwood
8e6e44e688 fix: Use jsDelivr CDN for categories JSON to avoid CORS issues
GitHub raw URLs don't allow cross-origin requests from localhost.
Using jsDelivr CDN which serves GitHub content with proper CORS headers.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:15:20 -08:00
Chris Sherwood
3cb5dceb1d feat: Add tiered collection categories UI
- Add kiwix-categories.json with Medicine category and 3 tiers
- Create CategoryCard component for displaying category cards
- Create TierSelectionModal for tier selection UI
- Integrate categories into Easy Setup wizard (Step 3)
- Add TypeScript types for categories and tiers
- Fallback to legacy flat collections if categories unavailable

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:15:20 -08:00
Chris Sherwood
6f0c829d36 fix: Notification auto-dismiss not working due to stale closure
The removeNotification function was using a stale reference to the
notifications array from the closure scope, causing the setTimeout
callback to filter against an outdated state.

Changed to use functional update pattern (prev => prev.filter(...))
which correctly references the current state when the timeout fires.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:06:44 -08:00
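The bug and its fix, sketched with React state; names are illustrative:

```typescript
import { useState } from 'react'

type Notification = { id: string; message: string }

function useNotifications() {
  const [notifications, setNotifications] = useState<Notification[]>([])

  const removeNotification = (id: string) => {
    // Buggy version (stale closure): filters the `notifications` array
    // captured when the timeout was scheduled, not the current one:
    //   setNotifications(notifications.filter((n) => n.id !== id))

    // Fix: the functional update receives the state that is current
    // when the setTimeout callback actually fires.
    setNotifications((prev) => prev.filter((n) => n.id !== id))
  }

  return { notifications, removeNotification }
}
```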
Chris Sherwood
109bad9b6e docs: Add installation instructions and CLI maintenance commands
- Add Installation section to getting-started.md with system requirements
- Add install commands, post-install access info
- Add privacy and security notes
- Add Command-Line Maintenance section to FAQ with helper scripts

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:05:59 -08:00
Chris Sherwood
adecb66fa8 docs: Replace placeholder content with comprehensive documentation
- Replace Lorem Ipsum home.md with proper welcome page
- Add getting-started.md: New user onboarding guide
- Add faq.md: FAQ and troubleshooting for offline use
- Add use-cases.md: Use case examples (emergency prep, homeschool, etc.)

Documentation written with non-technical users in mind, focusing on
clarity and self-sufficiency when offline.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:05:59 -08:00
cosmistack-bot
ab909e8ef8 chore(release): 1.14.0 [skip ci] 2026-01-16 18:37:04 +00:00
Jake Turner
08d0f88737 feat: auto-fetch latest curated collections 2026-01-16 10:35:37 -08:00
Jake Turner
003902b84b fix(Docker): improve container state management 2026-01-16 10:35:37 -08:00
cosmistack-bot
e1b1b187b0 chore(release): 1.13.0 [skip ci] 2026-01-15 23:56:37 +00:00
Jake Turner
393c177af1 feat: [wip] self updates 2026-01-15 15:54:59 -08:00
Jake Turner
b6ac6b1e84 feat(Maps): enhance missing assets warnings 2026-01-15 15:54:59 -08:00
Jake Turner
400cd740bd fix: curated collections ui tweak 2026-01-15 15:54:59 -08:00
Jake Turner
4b74118fd9 feat: easy setup wizard 2026-01-15 15:54:59 -08:00
dependabot[bot]
6500599c6d build(deps): bump @adonisjs/lucid from 21.6.1 to 21.8.2 in /admin
Bumps [@adonisjs/lucid](https://github.com/adonisjs/lucid) from 21.6.1 to 21.8.2.
- [Release notes](https://github.com/adonisjs/lucid/releases)
- [Commits](https://github.com/adonisjs/lucid/compare/v21.6.1...v21.8.2)

---
updated-dependencies:
- dependency-name: "@adonisjs/lucid"
  dependency-version: 21.8.2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-15 15:54:59 -08:00
Jake Turner
bb67bab9a9 feat: improved app cards and custom icons 2026-01-15 15:54:59 -08:00
Jake Turner
5793fc2139 feat: [wip] easy setup wizard 2026-01-15 15:54:59 -08:00
Jake Turner
bb0a939458 fix(install): change admin container pull_policy to always 2026-01-15 15:54:59 -08:00
cosmistack-bot
7dad9e3324 chore(release): 1.12.3 [skip ci] 2026-01-13 16:04:08 +00:00
Jake Turner
4bb5dd6a18
fix(scripts): remove disk info file on uninstall 2026-01-13 08:02:36 -08:00
cosmistack-bot
ae46e276fb chore(release): 1.12.2 [skip ci] 2026-01-13 16:00:28 +00:00
Jake Turner
6a9ede1776 fix(admin): disk info mount and stability 2026-01-13 07:59:45 -08:00
cosmistack-bot
752b023798 chore(release): 1.12.1 [skip ci] 2026-01-13 15:03:46 +00:00
Jake Turner
fb8598ff55 fix(admin): improve service install status management 2026-01-13 06:58:05 -08:00
Jake Turner
c46b75e63d fix(admin): improve duplicate install request handling 2026-01-13 06:58:05 -08:00
Jake Turner
3e4985c3c7 fix(admin): missing Zim download API client method 2026-01-13 06:58:05 -08:00
Jake Turner
2440d23986 fix(admin): base map assets download url 2026-01-13 06:58:05 -08:00
Jake Turner
a95c2faf12 fix(install): disk info file mount 2026-01-13 06:58:05 -08:00
Jake Turner
5a19882273 fix(admin): port binding for OpenWebUI 2026-01-13 06:58:05 -08:00
Jake Turner
1cc695ff75 fix(admin): improve memory usage indicators 2026-01-13 06:58:05 -08:00
Jake Turner
da23acbe5e fix(admin): add favicons 2026-01-13 06:58:05 -08:00
Jake Turner
df55b48e1c fix(admin): container healthcheck 2026-01-13 06:58:05 -08:00
Jake Turner
80a1d0eef4 fix(install): ensure update script always pulls latest images 2026-01-13 06:58:05 -08:00
Jake Turner
275ca80931 fix(install): use modern docker compose command in update script 2026-01-13 06:58:05 -08:00
Jake Turner
ed5851eac1 fix(install): ensure update script executable 2026-01-13 06:58:05 -08:00
dependabot[bot]
aa8516c92d build(deps): bump qs from 6.14.0 to 6.14.1 in /admin
Bumps [qs](https://github.com/ljharb/qs) from 6.14.0 to 6.14.1.
- [Changelog](https://github.com/ljharb/qs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ljharb/qs/compare/v6.14.0...v6.14.1)

---
updated-dependencies:
- dependency-name: qs
  dependency-version: 6.14.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-13 06:57:38 -08:00
dependabot[bot]
bfddc793ba build(deps): bump @adonisjs/bodyparser from 10.1.0 to 10.1.2 in /admin
Bumps [@adonisjs/bodyparser](https://github.com/adonisjs/bodyparser) from 10.1.0 to 10.1.2.
- [Release notes](https://github.com/adonisjs/bodyparser/releases)
- [Commits](https://github.com/adonisjs/bodyparser/compare/v10.1.0...v10.1.2)

---
updated-dependencies:
- dependency-name: "@adonisjs/bodyparser"
  dependency-version: 10.1.2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-13 06:57:13 -08:00
cosmistack-bot
f5e3cd32f4 chore(release): 1.12.0 [skip ci] 2025-12-24 20:01:17 +00:00
Jake Turner
a2206b8c13 feat(System): check internet status on backend and allow custom test url 2025-12-24 12:00:32 -08:00
cosmistack-bot
7029e1ea81 chore(release): 1.11.1 [skip ci] 2025-12-24 07:46:29 +00:00
Jake Turner
b020d925ad fix(Maps): custom pmtiles file downloads 2025-12-23 23:45:56 -08:00
cosmistack-bot
0d13ff7bac chore(release): 1.11.0 [skip ci] 2025-12-24 00:02:24 +00:00
dependabot[bot]
51880d0a46 build(deps): bump systeminformation from 5.27.7 to 5.27.14 in /admin
Bumps [systeminformation](https://github.com/sebhildebrandt/systeminformation) from 5.27.7 to 5.27.14.
- [Changelog](https://github.com/sebhildebrandt/systeminformation/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sebhildebrandt/systeminformation/compare/v5.27.7...v5.27.14)

---
updated-dependencies:
- dependency-name: systeminformation
  dependency-version: 5.27.14
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-23 16:01:22 -08:00
Jake Turner
0c8527921c fix(Docs): documentation renderer fixes 2025-12-23 16:00:33 -08:00
Jake Turner
6ac9d147cf feat(Collections): map region collections 2025-12-23 16:00:33 -08:00
Jake Turner
f618512ad1
fix(Collections): naming convention in maps.json 2025-12-19 15:06:50 -08:00
Jake Turner
8a5211d11a
feat(Maps): add maps curated collection 2025-12-19 13:52:47 -08:00
cosmistack-bot
8b34799e9d chore(release): 1.10.1 [skip ci] 2025-12-08 04:19:34 +00:00
Jake Turner
9ec514e145
fix(Zim): storage path 2025-12-07 20:18:58 -08:00
cosmistack-bot
f318426a57 chore(release): 1.10.0 [skip ci] 2025-12-08 03:14:11 +00:00
Jake Turner
5205d5909d
feat: disk info collection 2025-12-07 19:13:43 -08:00
Jake Turner
2ff7b055b5
fix(Kiwix): initial download and setup 2025-12-07 16:04:41 -08:00
Jake Turner
ce8dbd91ab
fix(install): add redis env variables to compose file 2025-12-07 10:57:32 -08:00
cosmistack-bot
1e9e4cb1e7 chore(release): 1.9.0 [skip ci] 2025-12-07 08:00:11 +00:00
Jake Turner
7569aa935d
feat: background job overhaul with bullmq 2025-12-06 23:59:01 -08:00
Jake Turner
2985929079
fix(install): host env variable 2025-12-05 18:41:54 -08:00
Jake Turner
529af9835f
fix(install): character escaping issues with env variable replacement 2025-12-05 18:35:44 -08:00
cosmistack-bot
cf06794db6 chore(release): 1.8.0 [skip ci] 2025-12-06 02:17:02 +00:00
Jake Turner
95ba0a95c9 fix: download util improvements 2025-12-05 18:16:23 -08:00
Jake Turner
a8bfc083d4 feat(install): replace secrets with random passwords and host 2025-12-05 18:16:23 -08:00
Jake Turner
a557ff3ad1 fix(install): url env variable 2025-12-05 18:16:23 -08:00
Jake Turner
605dce11e8 fix(Kiwix): initial zim file download 2025-12-05 18:16:23 -08:00
Jake Turner
e3257d1408 fix(ZimService): cleanup unused variable 2025-12-05 18:16:23 -08:00
cosmistack-bot
098979dd02 chore(release): 1.7.0 [skip ci] 2025-12-05 23:48:18 +00:00
Jake Turner
824fc613b6 fix(DockerService): cleanup old OSM stuff 2025-12-05 15:47:22 -08:00
Jake Turner
035f1c67b1 fix(install): cleanup compose file names 2025-12-05 15:47:22 -08:00
Jake Turner
dd4e7c2c4f feat: curated zim collections 2025-12-05 15:47:22 -08:00
Jake Turner
44c733e244
feat(Collections): store additional data with resources 2025-12-03 23:15:20 -08:00
Jake Turner
27acfed361
feat(Collections): add Preppers Library 2025-12-03 22:59:14 -08:00
Jake Turner
ba04b90cc4
feat(Collections): add slug, icon, language 2025-12-03 22:35:48 -08:00
Jake Turner
5214add84f
docs: add kiwix collections file 2025-12-02 08:50:34 -08:00
Jake Turner
d1842364bc
fix: hide query devtools in prod 2025-12-02 08:39:58 -08:00
Jake Turner
606dd3ad0b
feat: [wip] custom map and zim downloads 2025-12-02 08:25:09 -08:00
Jake Turner
dc2bae1065
feat: system info page redesign 2025-12-01 21:13:44 -08:00
Jake Turner
f4a69ea401
feat: alert and button styles redesign 2025-11-30 23:32:16 -08:00
Jake Turner
12a6f2230d
feat: [wip] new maps system 2025-11-30 22:29:16 -08:00
cosmistack-bot
bff5136564 chore(release): 1.6.0 [skip ci] 2025-11-19 00:35:59 +00:00
Jake Turner
9670a78fb4 feat: kolibri app 2025-11-18 16:35:16 -08:00
dependabot[bot]
c2f33075fd build(deps-dev): bump vite from 6.3.5 to 6.4.1 in /admin
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 6.3.5 to 6.4.1.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/main/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/create-vite@6.4.1/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 6.4.1
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-18 16:33:12 -08:00
dependabot[bot]
44deb0c23b build(deps-dev): bump js-yaml from 4.1.0 to 4.1.1 in /admin
Bumps [js-yaml](https://github.com/nodeca/js-yaml) from 4.1.0 to 4.1.1.
- [Changelog](https://github.com/nodeca/js-yaml/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nodeca/js-yaml/compare/4.1.0...4.1.1)

---
updated-dependencies:
- dependency-name: js-yaml
  dependency-version: 4.1.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-18 16:29:51 -08:00
dependabot[bot]
0750d98572 build(deps): bump validator from 13.15.15 to 13.15.20 in /admin
Bumps [validator](https://github.com/validatorjs/validator.js) from 13.15.15 to 13.15.20.
- [Release notes](https://github.com/validatorjs/validator.js/releases)
- [Changelog](https://github.com/validatorjs/validator.js/blob/master/CHANGELOG.md)
- [Commits](https://github.com/validatorjs/validator.js/compare/13.15.15...13.15.20)

---
updated-dependencies:
- dependency-name: validator
  dependency-version: 13.15.20
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-18 16:29:21 -08:00
dependabot[bot]
09e21f5f0c build(deps): bump axios from 1.10.0 to 1.13.1 in /admin
Bumps [axios](https://github.com/axios/axios) from 1.10.0 to 1.13.1.
- [Release notes](https://github.com/axios/axios/releases)
- [Changelog](https://github.com/axios/axios/blob/v1.x/CHANGELOG.md)
- [Commits](https://github.com/axios/axios/compare/v1.10.0...v1.13.1)

---
updated-dependencies:
- dependency-name: axios
  dependency-version: 1.13.1
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-18 16:29:03 -08:00
dependabot[bot]
cb64617572 build(deps): bump tar-fs from 2.1.3 to 2.1.4 in /admin
Bumps [tar-fs](https://github.com/mafintosh/tar-fs) from 2.1.3 to 2.1.4.
- [Commits](https://github.com/mafintosh/tar-fs/compare/v2.1.3...v2.1.4)

---
updated-dependencies:
- dependency-name: tar-fs
  dependency-version: 2.1.4
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-18 16:28:45 -08:00
Jake Turner
721c6b9653
fix: update container name in management-compose 2025-11-18 16:00:09 -08:00
cosmistack-bot
6d109b856f chore(release): 1.5.0 [skip ci] 2025-11-18 23:53:00 +00:00
Jake Turner
7acfd33d5c
feat: version footer and fix CI version handling 2025-11-18 15:51:45 -08:00
cosmistack-bot
64b874b1f3 chore(release): 1.4.0 [skip ci] 2025-11-18 23:46:13 +00:00
Jake Turner
b8eaaa7ac6
feat(Services): friendly names and descriptions 2025-11-18 14:02:22 -08:00
Jake Turner
c66839655a fix(Scripts): typo in management compose file path 2025-10-09 23:12:21 -07:00
Jake Turner
102835290b fix(Scripts): log dir creation improvements during install 2025-10-09 23:11:59 -07:00
cosmistack-bot
f17a1042fb chore(release): 1.3.0 [skip ci] 2025-10-10 05:08:01 +00:00
Jake Turner
8c5b49d226 feat: add logo to README 2025-10-09 22:01:33 -07:00
Jake Turner
a4297aba36 feat(Scripts): uninstall now removes non-management app containers 2025-10-09 21:52:53 -07:00
Jake Turner
4da08a8312 fix(OSM): apply dir permission fixes more robustly 2025-10-09 21:51:07 -07:00
cosmistack-bot
be0c794d9b chore(release): 1.2.0 [skip ci] 2025-10-07 15:52:22 +00:00
Jake Turner
b677fbbe81 feat: add dozzle for enhanced logs and metrics 2025-10-07 00:13:39 -07:00
Jake Turner
033cc10420 feat: add flatnotes as app 2025-10-06 23:53:40 -07:00
Jake Turner
1df7c490a6 feat: add cyberchef as app 2025-10-06 23:22:50 -07:00
Jake Turner
478427060f fix(OSM): renderer file perms 2025-10-06 22:22:55 -07:00
Jake Turner
85e6b84e32 fix(OSM): use absolute host paths 2025-10-06 21:13:22 -07:00
Jake Turner
4e1377554a fix(OSM): directory paths and access 2025-10-06 21:13:22 -07:00
Jake Turner
51583c8925 fix(OSM): error handling 2025-10-06 21:13:22 -07:00
Jake Turner
4ab36f331a feat: uninstall script 2025-10-06 21:13:22 -07:00
Jake Turner
876475e25b fix(ZIM): host initial zim download in GH repo 2025-09-02 22:44:01 -07:00
cosmistack-bot
0b395ef53b chore(release): 1.1.0 [skip ci] 2025-08-21 06:05:57 +00:00
Jake Turner
3dbcd7a714 fix(OSM): increase memory for import 2025-08-20 23:05:19 -07:00
Jake Turner
b29dd99fd7 fix(OSM): change default import file 2025-08-20 23:05:19 -07:00
Jake Turner
82501883b6 feat(Docs): add release notes 2025-08-20 23:05:19 -07:00
Jake Turner
07a198f918 feat(Settings): display system information 2025-08-20 23:05:19 -07:00
Jake Turner
377f49162f feat(Settings): add legal notices page 2025-08-20 23:05:19 -07:00
Jake Turner
2099750e06 fix(OSM): osm installation 2025-08-20 23:05:19 -07:00
Jake Turner
5ee949b96a fix(Docker): [wip] OSM install fixes 2025-08-20 23:05:19 -07:00
Jake Turner
9e216c366f feat(ZIM): improved ZIM downloading and auto-restart kiwix serve 2025-08-20 23:05:19 -07:00
Jake Turner
7c2b0964dc feat: container controls & convenience scripts 2025-08-08 15:07:32 -07:00
Jake Turner
5fc490715d ref: cleanup service seeder 2025-08-08 15:07:32 -07:00
Jake Turner
d7967d0d74 feat(install): add start & stop helper scripts 2025-08-08 15:07:32 -07:00
cosmistack-bot
fd7a1fd42d chore(release): 1.0.1 [skip ci] 2025-07-12 03:23:57 +00:00
Jake Turner
2373f2c1b2 fix(open-webui): ollama connection 2025-07-11 20:21:44 -07:00
Jake Turner
44b7bfee16 fix(Docs): fix doc rendering 2025-07-11 15:31:07 -07:00
Jake Turner
97655ef75d fix(Install): update script URLs 2025-07-11 14:24:29 -07:00
Jake Turner
d0b451e969 build: update dockerfile location 2025-07-11 13:30:08 -07:00
285 changed files with 36967 additions and 3413 deletions

8
.dockerignore Normal file

@ -0,0 +1,8 @@
.env
.env.*
.git
node_modules
*.log
admin/storage
admin/node_modules
admin/build

193
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file

@ -0,0 +1,193 @@
name: Bug Report
description: Report a bug or issue with Project N.O.M.A.D.
title: "[Bug]: "
labels: ["bug", "needs-triage"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to report a bug! Please fill out the information below to help us diagnose and fix the issue.

        **Before submitting:**
        - Search existing issues to avoid duplicates
        - Ensure you're running the latest version of N.O.M.A.D.
        - Redact any personal or sensitive information from logs/configs
        - Please don't submit issues related to running N.O.M.A.D. on Unraid or another NAS - we don't have plans to support these kinds of platforms at this time
  - type: dropdown
    id: issue-category
    attributes:
      label: Issue Category
      description: What area is this issue related to?
      options:
        - Installation/Setup
        - AI Assistant (Ollama)
        - Knowledge Base/RAG (Document Upload)
        - Docker/Container Issues
        - GPU Configuration
        - Content Downloads (ZIM, Maps, Collections)
        - Service Management (Start/Stop/Update)
        - System Performance/Resources
        - UI/Frontend Issue
        - Other
    validations:
      required: true
  - type: textarea
    id: description
    attributes:
      label: Bug Description
      description: Provide a clear and concise description of what the bug is
      placeholder: What happened? What did you expect to happen?
    validations:
      required: true
  - type: textarea
    id: reproduction
    attributes:
      label: Steps to Reproduce
      description: How can we reproduce this issue?
      placeholder: |
        1. Go to '...'
        2. Click on '...'
        3. See error
    validations:
      required: true
  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected Behavior
      description: What did you expect to happen?
      placeholder: Describe the expected outcome
    validations:
      required: true
  - type: textarea
    id: actual-behavior
    attributes:
      label: Actual Behavior
      description: What actually happened?
      placeholder: Describe what actually occurred, including any error messages
    validations:
      required: true
  - type: input
    id: nomad-version
    attributes:
      label: N.O.M.A.D. Version
      description: What version of N.O.M.A.D. are you running? (Check Settings > Update or run `docker ps` and check nomad_admin image tag)
      placeholder: "e.g., 1.29.0"
    validations:
      required: true
  - type: dropdown
    id: os
    attributes:
      label: Operating System
      description: What OS are you running N.O.M.A.D. on?
      options:
        - Ubuntu 24.04
        - Ubuntu 22.04
        - Ubuntu 20.04
        - Debian 13 (Trixie)
        - Debian 12 (Bookworm)
        - Debian 11 (Bullseye)
        - Other Debian-based
        - Other (not yet officially supported)
    validations:
      required: true
  - type: input
    id: docker-version
    attributes:
      label: Docker Version
      description: What version of Docker are you running? (`docker --version`)
      placeholder: "e.g., Docker version 24.0.7"
  - type: dropdown
    id: gpu-present
    attributes:
      label: Do you have a dedicated GPU?
      options:
        - "Yes"
        - "No"
        - "Not sure"
    validations:
      required: true
  - type: input
    id: gpu-model
    attributes:
      label: GPU Model (if applicable)
      description: What GPU model do you have? (Check Settings > System or run `nvidia-smi` if NVIDIA GPU)
      placeholder: "e.g., NVIDIA GeForce RTX 3060"
  - type: textarea
    id: system-specs
    attributes:
      label: System Specifications
      description: Provide relevant system specs (CPU, RAM, available disk space)
      placeholder: |
        CPU:
        RAM:
        Available Disk Space:
        GPU (if any):
  - type: textarea
    id: service-status
    attributes:
      label: Service Status (if relevant)
      description: If this is a service-related issue, what's the status of relevant services? (Check Settings > Apps or run `docker ps`)
      placeholder: |
        Paste output from `docker ps` or describe service states from the UI
  - type: textarea
    id: logs
    attributes:
      label: Relevant Logs
      description: |
        Include any relevant logs or error messages. **Please redact any personal/sensitive information.**

        Useful commands for collecting logs:
        - N.O.M.A.D. management app: `docker logs nomad_admin`
        - Ollama: `docker logs nomad_ollama`
        - Qdrant: `docker logs nomad_qdrant`
        - Specific service: `docker logs nomad_<service-name>`
      placeholder: Paste relevant log output here
      render: shell
  - type: textarea
    id: browser-console
    attributes:
      label: Browser Console Errors (if UI issue)
      description: If this is a UI issue, include any errors from your browser's developer console (F12)
      placeholder: Paste browser console errors here
      render: javascript
  - type: textarea
    id: screenshots
    attributes:
      label: Screenshots
      description: If applicable, add screenshots to help explain your problem (drag and drop images here)
  - type: textarea
    id: additional-context
    attributes:
      label: Additional Context
      description: Add any other context about the problem here (network setup, custom configurations, recent changes, etc.)
  - type: checkboxes
    id: terms
    attributes:
      label: Pre-submission Checklist
      description: Please confirm the following before submitting
      options:
        - label: I have searched for existing issues that might be related to this bug
          required: true
        - label: I am running the latest version of Project N.O.M.A.D. (or have noted my version above)
          required: true
        - label: I have redacted any personal or sensitive information from logs and screenshots
          required: true
        - label: This issue is NOT related to running N.O.M.A.D. on an unsupported/non-Debian-based OS
          required: false

17
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@ -0,0 +1,17 @@
blank_issues_enabled: false
contact_links:
  - name: 💬 Discord Community
    url: https://discord.com/invite/crosstalksolutions
    about: Join our Discord community for general questions, support, and discussions
  - name: 📖 Documentation
    url: https://projectnomad.us
    about: Check the official documentation and guides
  - name: 🏆 Community Leaderboard
    url: https://benchmark.projectnomad.us
    about: View the N.O.M.A.D. benchmark leaderboard
  - name: 🤝 Contributing Guide
    url: https://github.com/Crosstalk-Solutions/project-nomad/blob/main/CONTRIBUTING.md
    about: Learn how to contribute to Project N.O.M.A.D.
  - name: 📅 Roadmap
    url: https://roadmap.projectnomad.us
    about: See our public roadmap, vote on features, and suggest new ones

150
.github/ISSUE_TEMPLATE/feature_request.yml vendored Normal file

@ -0,0 +1,150 @@
name: Feature Request
description: Suggest a new feature or enhancement for Project N.O.M.A.D.
title: "[Feature]: "
labels: ["enhancement", "needs-discussion"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for your interest in improving Project N.O.M.A.D.! Before you submit a feature request, consider checking our [roadmap](https://roadmap.projectnomad.us) to see if it's already planned or in progress. You're welcome to suggest new ideas there if you don't plan on opening PRs yourself.

        **Please note:** Feature requests are not guaranteed to be implemented. All requests are evaluated based on alignment with the project's goals, feasibility, and community demand.

        **Before submitting:**
        - Search existing feature requests and our [roadmap](https://roadmap.projectnomad.us) to avoid duplicates
        - Consider if this aligns with N.O.M.A.D.'s mission: offline-first knowledge and education
        - Consider the technical feasibility of the feature: N.O.M.A.D. is designed to be containerized and run on a wide range of hardware, so features that require heavy resources (aside from GPU-intensive tasks) or complex host configurations may be less likely to be implemented
        - Consider the scope of the feature: Small, focused enhancements that can be implemented incrementally are more likely to be implemented than large, broad features that would require significant development effort or have an unclear path forward
        - If you're able to contribute code, testing, or documentation, that significantly increases the chances of your feature being implemented
  - type: dropdown
    id: feature-category
    attributes:
      label: Feature Category
      description: What area does this feature relate to?
      options:
        - New Service/Tool Integration
        - AI Assistant Enhancement
        - Knowledge Base/RAG Improvement
        - Content Management (ZIM, Maps, Collections)
        - UI/UX Improvement
        - System Management
        - Performance Optimization
        - Documentation
        - Security
        - Other
    validations:
      required: true
  - type: textarea
    id: problem
    attributes:
      label: Problem Statement
      description: What problem does this feature solve? Is your feature request related to a pain point?
      placeholder: I find it frustrating when... / It would be helpful if... / Users struggle with...
    validations:
      required: true
  - type: textarea
    id: solution
    attributes:
      label: Proposed Solution
      description: Describe the feature or enhancement you'd like to see
      placeholder: Add a feature that... / Change the behavior to... / Integrate with...
    validations:
      required: true
  - type: textarea
    id: alternatives
    attributes:
      label: Alternative Solutions
      description: Have you considered any alternative solutions or workarounds?
      placeholder: I've tried... / Another approach could be... / A workaround is...
  - type: textarea
    id: use-case
    attributes:
      label: Use Case
      description: Describe a specific scenario where this feature would be valuable
      placeholder: |
        As a [type of user], when I [do something], I want to [accomplish something] so that [benefit].

        Example: Because I have a dedicated GPU, I want to be able to see in the UI if GPU support is enabled so that I can optimize performance and troubleshoot issues more easily.
  - type: dropdown
    id: user-type
    attributes:
      label: Who would benefit from this feature?
      description: What type of users would find this most valuable?
      multiple: true
      options:
        - Individual/Home Users
        - Families
        - Teachers/Educators
        - Students
        - Survivalists/Preppers
        - Developers/Contributors
        - Organizations
        - All Users
    validations:
      required: true
  - type: dropdown
    id: priority
    attributes:
      label: How important is this feature to you?
      options:
        - Critical - Blocking my use of N.O.M.A.D.
        - High - Would significantly improve my experience
        - Medium - Would be nice to have
        - Low - Minor convenience
    validations:
      required: true
  - type: textarea
    id: implementation-ideas
    attributes:
      label: Implementation Ideas (Optional)
      description: If you have technical suggestions for how this could be implemented, share them here
      placeholder: This could potentially use... / It might integrate with... / A possible approach is...
  - type: textarea
    id: examples
    attributes:
      label: Examples or References
      description: Are there similar features in other applications? Include links, screenshots, or descriptions
      placeholder: Similar to how [app name] does... / See this example at [URL]
  - type: dropdown
    id: willing-to-contribute
    attributes:
      label: Would you be willing to help implement this?
      description: Contributing increases the likelihood of implementation
      options:
        - "Yes - I can write the code"
        - "Yes - I can help test"
        - "Yes - I can help with documentation"
        - "Maybe - with guidance"
        - "No - I don't have the skills/time"
    validations:
      required: true
  - type: textarea
    id: additional-context
    attributes:
      label: Additional Context
      description: Add any other context, mockups, diagrams, or information about the feature request
  - type: checkboxes
    id: checklist
    attributes:
      label: Pre-submission Checklist
      description: Please confirm the following before submitting
      options:
        - label: I have searched for existing feature requests that might be similar
          required: true
        - label: This feature aligns with N.O.M.A.D.'s mission of offline-first knowledge and education
          required: true
        - label: I understand that feature requests are not guaranteed to be implemented
          required: true

7
.github/dependabot.yaml vendored Normal file

@ -0,0 +1,7 @@
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/admin"
    schedule:
      interval: "weekly"
    target-branch: "rc"

133
.github/scripts/finalize-release-notes.sh vendored Executable file

@ -0,0 +1,133 @@
#!/usr/bin/env bash
#
# finalize-release-notes.sh
#
# Stamps the "## Unreleased" section in a release-notes file with a version
# and date, and extracts the section content for use in GitHub releases / email.
# Also includes all commits since the last release for complete transparency.
#
# Usage: finalize-release-notes.sh <version> <file-path>
#
# Exit codes:
# 0 - Success: section stamped and extracted
# 1 - No "## Unreleased" section found (skip gracefully)
# 2 - Unreleased section exists but is empty (skip gracefully)
set -euo pipefail
VERSION="${1:?Usage: finalize-release-notes.sh <version> <file-path>}"
FILE="${2:?Usage: finalize-release-notes.sh <version> <file-path>}"
if [[ ! -f "$FILE" ]]; then
echo "Error: File not found: $FILE" >&2
exit 1
fi
# Find the line number of the ## Unreleased header (case-insensitive)
HEADER_LINE=$(grep -inm1 '^## unreleased' "$FILE" | cut -d: -f1)
if [[ -z "$HEADER_LINE" ]]; then
echo "No '## Unreleased' section found. Skipping."
exit 1
fi
TOTAL_LINES=$(wc -l < "$FILE")
# Find the next section header (## Version ...) or --- separator after the Unreleased header
NEXT_SECTION_LINE=""
if [[ $HEADER_LINE -lt $TOTAL_LINES ]]; then
NEXT_SECTION_LINE=$(tail -n +"$((HEADER_LINE + 1))" "$FILE" \
| grep -nm1 '^## \|^---$' \
| cut -d: -f1)
fi
if [[ -n "$NEXT_SECTION_LINE" ]]; then
# NEXT_SECTION_LINE is relative to HEADER_LINE+1, convert to absolute
END_LINE=$((HEADER_LINE + NEXT_SECTION_LINE - 1))
else
# Section runs to end of file
END_LINE=$TOTAL_LINES
fi
# Extract content between header and next section (exclusive of both boundaries)
CONTENT_START=$((HEADER_LINE + 1))
CONTENT_END=$END_LINE
# Extract the section body (between header line and the next boundary)
SECTION_BODY=$(sed -n "${CONTENT_START},${CONTENT_END}p" "$FILE" | sed '/^$/N;/^\n$/d')
# Check for actual content: strip blank lines and lines that are only markdown headers (###...)
TRIMMED=$(echo "$SECTION_BODY" | sed '/^[[:space:]]*$/d')
HAS_CONTENT=$(echo "$SECTION_BODY" | sed '/^[[:space:]]*$/d' | grep -v '^###' || true)
if [[ -z "$TRIMMED" || -z "$HAS_CONTENT" ]]; then
echo "Unreleased section is empty. Skipping."
exit 2
fi
# Format the date as "Month Day, Year"
DATE_STAMP=$(date +'%B %-d, %Y')
NEW_HEADER="## Version ${VERSION} - ${DATE_STAMP}"
# Build the replacement: swap the header line, keep everything else intact
{
# Lines before the Unreleased header
if [[ $HEADER_LINE -gt 1 ]]; then
head -n "$((HEADER_LINE - 1))" "$FILE"
fi
# New versioned header
echo "$NEW_HEADER"
# Content between header and next section
sed -n "${CONTENT_START},${CONTENT_END}p" "$FILE"
# Rest of the file after the section
if [[ $END_LINE -lt $TOTAL_LINES ]]; then
tail -n +"$((END_LINE + 1))" "$FILE"
fi
} > "${FILE}.tmp"
mv "${FILE}.tmp" "$FILE"
# Get commits since the last release
LAST_TAG=$(git describe --tags --abbrev=0 HEAD^ 2>/dev/null || echo "")
COMMIT_LIST=""
if [[ -n "$LAST_TAG" ]]; then
echo "Fetching commits since ${LAST_TAG}..."
# Get commits between last tag and HEAD, excluding merge commits and skip ci commits
COMMIT_LIST=$(git log "${LAST_TAG}..HEAD" \
--no-merges \
--pretty=format:"- %s ([%h](https://github.com/${GITHUB_REPOSITORY}/commit/%H))" \
--grep="\[skip ci\]" --invert-grep \
|| echo "")
else
echo "No previous tag found, fetching all commits..."
COMMIT_LIST=$(git log \
--no-merges \
--pretty=format:"- %s ([%h](https://github.com/${GITHUB_REPOSITORY}/commit/%H))" \
--grep="\[skip ci\]" --invert-grep \
|| echo "")
fi
# Write the extracted section content (for GitHub release body / future email)
{
echo "$NEW_HEADER"
echo ""
if [[ -n "$TRIMMED" ]]; then
echo "$TRIMMED"
echo ""
fi
# Add commit history if available
if [[ -n "$COMMIT_LIST" ]]; then
echo "---"
echo ""
echo "### 📝 All Changes"
echo ""
echo "$COMMIT_LIST"
fi
} > "${FILE}.section"
echo "Finalized release notes for v${VERSION}"
echo " Updated: ${FILE}"
echo " Extracted: ${FILE}.section"
exit 0

25
.github/workflows/build-admin-on-pr.yml vendored Normal file

@ -0,0 +1,25 @@
name: Build Admin
on: pull_request
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v6
      - name: Set up Node.js
        uses: actions/setup-node@v6
        with:
          node-version: '24'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
        working-directory: ./admin
      - name: Run build
        run: npm run build
        working-directory: ./admin


@ -0,0 +1,51 @@
name: Build Disk Collector Image
on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Semantic version to label the Docker image under (no "v" prefix, e.g. "1.2.3")'
        required: true
        type: string
      tag_latest:
        description: 'Also tag this image as :latest?'
        required: false
        type: boolean
        default: false
jobs:
  check_authorization:
    name: Check authorization to publish new Docker image
    runs-on: ubuntu-latest
    outputs:
      isAuthorized: ${{ steps.check-auth.outputs.is_authorized }}
    steps:
      - name: check-auth
        id: check-auth
        run: echo "is_authorized=${{ contains(secrets.DEPLOYMENT_AUTHORIZED_USERS, github.triggering_actor) }}" >> $GITHUB_OUTPUT
  build:
    name: Build disk-collector image
    needs: check_authorization
    if: needs.check_authorization.outputs.isAuthorized == 'true'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v6
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: install/sidecar-disk-collector
          push: true
          tags: |
            ghcr.io/crosstalk-solutions/project-nomad-disk-collector:${{ inputs.version }}
            ghcr.io/crosstalk-solutions/project-nomad-disk-collector:v${{ inputs.version }}
            ${{ inputs.tag_latest && 'ghcr.io/crosstalk-solutions/project-nomad-disk-collector:latest' || '' }}


@ -0,0 +1,54 @@
name: Build Primary Docker Image
on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Semantic version to label the Docker image under (no "v" prefix, e.g. "1.2.3")'
        required: true
        type: string
      tag_latest:
        description: 'Also tag this image as :latest? (Keep false for RC and beta releases)'
        required: false
        type: boolean
        default: false
jobs:
  check_authorization:
    name: Check authorization to publish new Docker image
    runs-on: ubuntu-latest
    outputs:
      isAuthorized: ${{ steps.check-auth.outputs.is_authorized }}
    steps:
      - name: check-auth
        id: check-auth
        run: echo "is_authorized=${{ contains(secrets.DEPLOYMENT_AUTHORIZED_USERS, github.triggering_actor) }}" >> $GITHUB_OUTPUT
  build:
    name: Build Docker image
    needs: check_authorization
    if: needs.check_authorization.outputs.isAuthorized == 'true'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v6
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: |
            ghcr.io/crosstalk-solutions/project-nomad:${{ inputs.version }}
            ghcr.io/crosstalk-solutions/project-nomad:v${{ inputs.version }}
            ${{ inputs.tag_latest && 'ghcr.io/crosstalk-solutions/project-nomad:latest' || '' }}
          build-args: |
            VERSION=${{ inputs.version }}
            BUILD_DATE=${{ github.event.workflow_run.created_at }}
            VCS_REF=${{ github.sha }}

View File

@ -1,12 +1,17 @@
-name: Build Docker Image
+name: Build Sidecar Updater Image
 on:
   workflow_dispatch:
     inputs:
       version:
-        description: 'Semantic version to label the Docker image under'
+        description: 'Semantic version to label the Docker image under (no "v" prefix, e.g. "1.2.3")'
         required: true
         type: string
+      tag_latest:
+        description: 'Also tag this image as :latest?'
+        required: false
+        type: boolean
+        default: false
 jobs:
   check_authorization:
@ -19,11 +24,16 @@ jobs:
         id: check-auth
         run: echo "is_authorized=${{ contains(secrets.DEPLOYMENT_AUTHORIZED_USERS, github.triggering_actor) }}" >> $GITHUB_OUTPUT
   build:
-    name: Build Docker image
+    name: Build sidecar-updater image
     needs: check_authorization
     if: needs.check_authorization.outputs.isAuthorized == 'true'
     runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      packages: write
     steps:
+      - name: Checkout code
+        uses: actions/checkout@v6
       - name: Log in to GitHub Container Registry
         uses: docker/login-action@v2
         with:
@ -31,7 +41,11 @@ jobs:
           username: ${{ github.actor }}
           password: ${{ secrets.GITHUB_TOKEN }}
       - name: Build and push
-        uses: docker/build-push-action@v4
+        uses: docker/build-push-action@v5
         with:
+          context: install/sidecar-updater
           push: true
-          tags: ghcr.io/crosstalk-solutions/project-nomad-admin:${{ inputs.version }}
+          tags: |
+            ghcr.io/crosstalk-solutions/project-nomad-sidecar-updater:${{ inputs.version }}
+            ghcr.io/crosstalk-solutions/project-nomad-sidecar-updater:v${{ inputs.version }}
+            ${{ inputs.tag_latest && 'ghcr.io/crosstalk-solutions/project-nomad-sidecar-updater:latest' || '' }}

View File

@ -22,7 +22,7 @@ jobs:
       newVersion: ${{ steps.semver.outputs.new_release_version }}
     steps:
       - name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v6
         with:
          fetch-depth: 0
          persist-credentials: false
@ -34,4 +34,58 @@ jobs:
           GIT_AUTHOR_NAME: cosmistack-bot
           GIT_AUTHOR_EMAIL: dev@cosmistack.com
           GIT_COMMITTER_NAME: cosmistack-bot
           GIT_COMMITTER_EMAIL: dev@cosmistack.com
+      - name: Finalize release notes
+        # Skip for pre-releases (versions containing a hyphen, e.g. 1.27.0-rc.1)
+        if: |
+          steps.semver.outputs.new_release_published == 'true' &&
+          !contains(steps.semver.outputs.new_release_version, '-')
+        id: finalize-notes
+        env:
+          GITHUB_REPOSITORY: ${{ github.repository }}
+        run: |
+          git pull origin main
+          chmod +x .github/scripts/finalize-release-notes.sh
+          EXIT_CODE=0
+          .github/scripts/finalize-release-notes.sh \
+            "${{ steps.semver.outputs.new_release_version }}" \
+            admin/docs/release-notes.md || EXIT_CODE=$?
+          if [[ "$EXIT_CODE" -eq 0 ]]; then
+            echo "has_notes=true" >> $GITHUB_OUTPUT
+          else
+            echo "has_notes=false" >> $GITHUB_OUTPUT
+          fi
+      - name: Commit finalized release notes
+        if: |
+          steps.semver.outputs.new_release_published == 'true' &&
+          steps.finalize-notes.outputs.has_notes == 'true' &&
+          !contains(steps.semver.outputs.new_release_version, '-')
+        run: |
+          git config user.name "cosmistack-bot"
+          git config user.email "dev@cosmistack.com"
+          git remote set-url origin https://x-access-token:${{ secrets.COSMISTACKBOT_ACCESS_TOKEN }}@github.com/${{ github.repository }}.git
+          git add admin/docs/release-notes.md
+          git commit -m "docs(release): finalize v${{ steps.semver.outputs.new_release_version }} release notes [skip ci]"
+          git push origin main
+      - name: Update GitHub release body
+        if: |
+          steps.semver.outputs.new_release_published == 'true' &&
+          steps.finalize-notes.outputs.has_notes == 'true' &&
+          !contains(steps.semver.outputs.new_release_version, '-')
+        env:
+          GH_TOKEN: ${{ secrets.COSMISTACKBOT_ACCESS_TOKEN }}
+        run: |
+          gh release edit "v${{ steps.semver.outputs.new_release_version }}" \
+            --notes-file admin/docs/release-notes.md.section
+      # Future: Send release notes email
+      # - name: Send release notes email
+      #   if: steps.semver.outputs.new_release_published == 'true' && steps.finalize-notes.outputs.has_notes == 'true'
+      #   run: |
+      #     curl -X POST "https://api.projectnomad.us/api/v1/newsletter/release" \
+      #       -H "Authorization: Bearer ${{ secrets.NOMAD_API_KEY }}" \
+      #       -H "Content-Type: application/json" \
+      #       -d "{\"version\": \"${{ steps.semver.outputs.new_release_version }}\", \"body\": $(cat admin/docs/release-notes.md.section | jq -Rs .)}"

View File

@ -0,0 +1,58 @@
name: Validate Collection URLs
on:
  push:
    paths:
      - 'collections/**.json'
  pull_request:
    paths:
      - 'collections/**.json'
jobs:
  validate-urls:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - name: Extract and validate URLs
        run: |
          FAILED=0
          CHECKED=0
          FAILED_URLS=""
          # Recursively extract all non-null string URLs from every JSON file in collections/
          URLS=$(jq -r '.. | .url? | select(type == "string")' collections/*.json | sort -u)
          while IFS= read -r url; do
            [ -z "$url" ] && continue
            CHECKED=$((CHECKED + 1))
            printf "Checking: %s ... " "$url"
            # Use Range: bytes=0-0 to avoid downloading the full file.
            # --max-filesize 1 aborts early if the server ignores the Range header
            # and returns 200 with the full body. The HTTP status is still captured.
            HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
              --range 0-0 \
              --max-filesize 1 \
              --max-time 30 \
              --location \
              "$url")
            if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "206" ]; then
              echo "OK ($HTTP_CODE)"
            else
              echo "FAILED ($HTTP_CODE)"
              FAILED=$((FAILED + 1))
              FAILED_URLS="$FAILED_URLS\n  - $url (HTTP $HTTP_CODE)"
            fi
          done <<< "$URLS"
          echo ""
          echo "Checked $CHECKED URLs, $FAILED failed."
          if [ "$FAILED" -gt 0 ]; then
            echo ""
            echo "Broken URLs:"
            printf "%b\n" "$FAILED_URLS"
            exit 1
          fi

View File

@ -1,5 +1,8 @@
 {
-  "branches": ["master"],
+  "branches": [
+    "main",
+    { "name": "rc", "prerelease": "rc" }
+  ],
   "plugins": [
     "@semantic-release/commit-analyzer",
     "@semantic-release/release-notes-generator",

128
CODE_OF_CONDUCT.md Normal file
View File

@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

187
CONTRIBUTING.md Normal file
View File

@ -0,0 +1,187 @@
# Contributing to Project N.O.M.A.D.
Thank you for your interest in contributing to Project N.O.M.A.D.! Community contributions are what keep this project growing and improving. Please read this guide fully before getting started — it will save you (and the maintainers) a lot of time.
> **Note:** Acceptance of contributions is not guaranteed. All pull requests are evaluated based on quality, relevance, and alignment with the project's goals. The maintainers of Project N.O.M.A.D. ("Nomad") reserve the right to accept, deny, or modify any pull request at their sole discretion.
---
## Table of Contents
- [Code of Conduct](#code-of-conduct)
- [Before You Start](#before-you-start)
- [Getting Started](#getting-started)
- [Development Workflow](#development-workflow)
- [Commit Messages](#commit-messages)
- [Release Notes](#release-notes)
- [Versioning](#versioning)
- [Submitting a Pull Request](#submitting-a-pull-request)
- [Feedback & Community](#feedback--community)
---
## Code of Conduct
Please read and review our full [Code of Conduct](https://github.com/Crosstalk-Solutions/project-nomad/blob/main/CODE_OF_CONDUCT.md) before contributing. In short: please be respectful and considerate in all interactions with maintainers and other contributors.
We are committed to providing a welcoming environment for everyone. Disrespectful or abusive behavior will not be tolerated.
---
## Before You Start
**Open an issue first.** Before writing any code, please [open an issue](../../issues/new) to discuss your proposed change. This helps avoid duplicate work and ensures your contribution aligns with the project's direction.
When opening an issue:
- Use a clear, descriptive title
- Describe the problem you're solving or the feature you want to add
- If it's a bug, include steps to reproduce it and as much detail about your environment as possible
- Ensure you redact any personal or sensitive information in any logs, configs, etc.
---
## Getting Started with Contributing
**Please note**: this is the Getting Started guide for developing and contributing to Nomad, NOT [installing Nomad](https://github.com/Crosstalk-Solutions/project-nomad/blob/main/README.md) for regular use!
### Prerequisites
- A Debian-based OS (Ubuntu recommended)
- `sudo`/root privileges
- Docker installed and running
- A stable internet connection (required for dependency downloads)
- Node.js (for frontend/admin work)
### Fork & Clone
1. Click **Fork** at the top right of this repository
2. Clone your fork locally:
```bash
git clone https://github.com/YOUR_USERNAME/project-nomad.git
cd project-nomad
```
3. Add the upstream remote so you can stay in sync:
```bash
git remote add upstream https://github.com/Crosstalk-Solutions/project-nomad.git
```
### Avoid Installing a Release Version Locally
Because Nomad relies heavily on Docker, we recommend against installing a release version of the project on the same local machine where you are developing, as this can lead to conflicts with ports, volumes, and other resources. Instead, run your development version in a separate Docker environment and keep your local machine clean. Running both side by side certainly __can__ be done, but it adds complexity to your setup and workflow; if you choose to install a release version locally, please ensure you have a clear strategy for managing potential conflicts and resource usage.
---
## Development Workflow
1. **Sync with upstream** before starting any new work. We prefer rebasing over merge commits to keep a clean, linear git history as much as possible (this also makes it easier for maintainers to review and merge your changes). To sync with upstream:
```bash
git fetch upstream
git checkout main
git rebase upstream/main
```
2. **Create a feature branch** off `main` with a descriptive name:
```bash
git checkout -b fix/issue-123
# or
git checkout -b feature/add-new-tool
```
3. **Make your changes.** Follow existing code style and conventions. Test your changes locally against a running N.O.M.A.D. instance before submitting.
4. **Add release notes** (see [Release Notes](#release-notes) below).
5. **Commit your changes** using [Conventional Commits](#commit-messages).
6. **Push your branch** and open a pull request.
---
## Commit Messages
This project uses [Conventional Commits](https://www.conventionalcommits.org/). All commit messages must follow this format:
```
<type>(<scope>): <description>
```
**Common types:**
| Type | When to use |
|------|-------------|
| `feat` | A new user-facing feature |
| `fix` | A bug fix |
| `docs` | Documentation changes only |
| `refactor` | Code change that isn't a fix or feature and does not affect functionality |
| `chore` | Build process, dependency updates, tooling |
| `test` | Adding or updating tests |
**Scope** is optional but encouraged — use it to indicate the area of the codebase affected (e.g., `api`, `ui`, `maps`).
**Examples:**
```
feat(ui): add dark mode toggle to Command Center
fix(api): resolve container status not updating after restart
docs: update hardware requirements in README
chore(deps): bump docker-compose to v2.24
```
---
## Release Notes
Human-readable release notes live in [`admin/docs/release-notes.md`](admin/docs/release-notes.md) and are displayed directly in the Command Center UI.
When your changes include anything user-facing, **add a summary to the `## Unreleased` section** at the top of that file under the appropriate heading:
- **Features** — new user-facing capabilities
- **Bug Fixes** — corrections to existing behavior
- **Improvements** — enhancements, refactors, docs, or dependency updates
Use the format `- **Area**: Description` to stay consistent with existing entries.
**Example:**
```markdown
## Unreleased
### Features
- **Maps**: Added support for downloading South America regional maps
### Bug Fixes
- **AI Chat**: Fixed document upload failing on filenames with special characters
```
> When a release is triggered, CI automatically stamps the version and date, commits the update, and publishes the content to the GitHub release. You do not need to do this manually.
---
## Versioning
This project uses [Semantic Versioning](https://semver.org/). Versions are managed in the root `package.json` and updated automatically by `semantic-release`. The `project-nomad` Docker image uses this version. The `admin/package.json` version stays at `0.0.0` and should not be changed manually.
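For a quick mental model: with semantic-release's default commit-analyzer rules, the commit type determines the bump. The version numbers below are purely illustrative:
```
fix(api): resolve container status bug   -> patch release  (e.g. 1.30.3 -> 1.30.4)
feat(maps): add regional map downloads   -> minor release  (e.g. 1.30.3 -> 1.31.0)
feat(api): rework install API            -> major release  (e.g. 1.30.3 -> 2.0.0)
  BREAKING CHANGE: (a footer like this in the commit body triggers the major bump)
```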
---
## Submitting a Pull Request
1. Push your branch to your fork:
```bash
git push origin your-branch-name
```
2. Open a pull request against the `main` branch of this repository
3. In the PR description:
- Summarize what your changes do and why
- Reference the related issue (e.g., `Closes #123`)
- Note any relevant testing steps or environment details
4. Be responsive to feedback — maintainers may request changes. Pull requests with no activity for an extended period may be closed.
---
## Feedback & Community
Have questions or want to discuss ideas before opening an issue? Join the community:
- **Discord:** [Join the Crosstalk Solutions server](https://discord.com/invite/crosstalksolutions) — the best place to get help, share your builds, and talk with other N.O.M.A.D. users
- **Website:** [www.projectnomad.us](https://www.projectnomad.us)
- **Benchmark Leaderboard:** [benchmark.projectnomad.us](https://benchmark.projectnomad.us)
---
*Project N.O.M.A.D. is licensed under the [Apache License 2.0](LICENSE).*

58
Dockerfile Normal file
View File

@ -0,0 +1,58 @@
FROM node:22-slim AS base
# Install bash & curl for entrypoint script compatibility, graphicsmagick for pdf2pic, and vips-dev & build-base for sharp
RUN apt-get update && apt-get install -y bash curl graphicsmagick libvips-dev build-essential
# All deps stage
FROM base AS deps
WORKDIR /app
ADD admin/package.json admin/package-lock.json ./
RUN npm ci
# Production only deps stage
FROM base AS production-deps
WORKDIR /app
ADD admin/package.json admin/package-lock.json ./
RUN npm ci --omit=dev
# Build stage
FROM base AS build
WORKDIR /app
COPY --from=deps /app/node_modules /app/node_modules
ADD admin/ ./
RUN node ace build
# Production stage
FROM base
ARG VERSION=dev
ARG BUILD_DATE
ARG VCS_REF
# Labels
LABEL org.opencontainers.image.title="Project N.O.M.A.D" \
org.opencontainers.image.description="The Project N.O.M.A.D Official Docker image" \
org.opencontainers.image.version="${VERSION}" \
org.opencontainers.image.created="${BUILD_DATE}" \
org.opencontainers.image.revision="${VCS_REF}" \
org.opencontainers.image.vendor="Crosstalk Solutions, LLC" \
org.opencontainers.image.documentation="https://github.com/CrosstalkSolutions/project-nomad/blob/main/README.md" \
org.opencontainers.image.source="https://github.com/CrosstalkSolutions/project-nomad" \
org.opencontainers.image.licenses="Apache-2.0"
ENV NODE_ENV=production
WORKDIR /app
COPY --from=production-deps /app/node_modules /app/node_modules
COPY --from=build /app/build /app
# Copy root package.json for version info
COPY package.json /app/version.json
# Copy docs and README for access within the container
COPY admin/docs /app/docs
COPY README.md /app/README.md
# Copy entrypoint script and ensure it's executable
COPY install/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

100
FAQ.md Normal file
View File

@ -0,0 +1,100 @@
# Frequently Asked Questions (FAQ)
Find answers to some of the most common questions about Project N.O.M.A.D.
## Can I customize the port(s) that NOMAD uses?
Yes, you can customize the ports that NOMAD's core services (Command Center, MySQL, Redis) use. Please refer to the [Advanced Installation](README.md#advanced-installation) section of the README for more details on how to do this.
Note: As of 3/24/2026, only the core services defined in the `docker-compose.yml` file currently support port customization - the installable applications (e.g. Ollama, Kiwix, etc.) do not yet support this, but we have multiple PRs in the works to add this feature for all installable applications in a future release.
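As a rough sketch of what that looks like (the service name below is hypothetical; the real ones come from the Docker Compose template referenced in the README), remapping a published port is a one-line change in `docker-compose.yml`:
```yaml
services:
  command-center:        # hypothetical service name - check your actual docker-compose.yml
    ports:
      - "9090:8080"      # serve the Command Center on host port 9090 instead of the default 8080
```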
## Can I customize the storage location for NOMAD's data?
Yes, you can customize the storage location for NOMAD's content by modifying the `docker-compose.yml` file to adjust the appropriate bind mounts to point to your desired storage location on your host machine. Please refer to the [Advanced Installation](README.md#advanced-installation) section of the README for more details on how to do this.
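For illustration, a bind mount along these lines (the service name and host path are hypothetical) would relocate content storage to a larger disk:
```yaml
services:
  command-center:        # hypothetical service name - check your actual docker-compose.yml
    volumes:
      - /mnt/bigdisk/nomad-storage:/opt/project-nomad/storage   # host_path:container_path
```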
## Can I run NOMAD on macOS, WSL2, or a non-Debian-based distro?
See [Why does NOMAD require a Debian-based OS?](#why-does-nomad-require-a-debian-based-os)
## Why does NOMAD require a Debian-based OS?
Project N.O.M.A.D. is currently designed to run on Debian-based Linux distributions (with Ubuntu being the recommended distro) because our installation scripts and Docker configurations are optimized for this environment. While it's technically possible to run the Docker containers on other operating systems that support Docker, we have not tested or optimized the installation process for non-Debian-based systems, so we cannot guarantee a smooth experience on those platforms at this time.
Support for other operating systems will come in the future, but because our development resources are limited as a free and open-source project, we needed to prioritize our efforts and focus on a narrower set of supported platforms for the initial release. We chose Debian-based Linux as our starting point because it's widely used, easy to spin up, and provides a stable environment for running Docker containers.
Community members have provided guides for running N.O.M.A.D. on other platforms (e.g. WSL2, Mac, etc.) in our Discord community and [Github Discussions](https://github.com/Crosstalk-Solutions/project-nomad/discussions), so if you're interested in running N.O.M.A.D. on a non-Debian-based system, we recommend checking there for any available resources or guides. However, keep in mind that if you choose to run N.O.M.A.D. on a non-Debian-based system, you may encounter issues that we won't be able to provide support for, and you may need to have a higher level of technical expertise to troubleshoot and resolve any problems that arise.
## Can I run NOMAD on a Raspberry Pi or other ARM-based device?
Project N.O.M.A.D. is currently designed to run on x86-64 architecture, and we have not yet tested or optimized it for ARM-based devices like the Raspberry Pi (and have not published any official images for ARM architecture).
Support for ARM-based devices is on our roadmap, but our initial focus was on x86-64 hardware due to its widespread use and compatibility with a wide range of applications.
Community members have forked and published their own ARM-compatible images and installation guides for running N.O.M.A.D. on Raspberry Pi and other ARM-based devices in our Discord community and [Github Discussions](https://github.com/Crosstalk-Solutions/project-nomad/discussions), but these are not officially supported by the core development team, and we cannot guarantee their functionality or provide support for any issues that arise when using these community-created resources.
## What are the hardware requirements for running NOMAD?
Project N.O.M.A.D. itself is quite lightweight and can run on even modest x86-64 hardware, but the tools and resources you choose to install with N.O.M.A.D. will determine the specs required for your unique deployment. Please see the [Hardware Guide](https://www.projectnomad.us/hardware) for detailed build recommendations at various price points.
## Does NOMAD support languages other than English?
As of March 2026, Project N.O.M.A.D.'s UI is only available in English, and the majority of the tools and resources available through N.O.M.A.D. are also primarily in English. However, we have multi-language support on our roadmap for a future release, and we are actively working on adding support for additional languages both in the UI and in the available tools/resources. If you're interested in contributing to this effort, please check out our [CONTRIBUTING.md](CONTRIBUTING.md) file for guidelines on how to get involved.
## What technologies is NOMAD built with?
Project N.O.M.A.D. is built using a combination of technologies, including:
- **Docker:** for containerization of the Command Center and its dependencies
- **Node.js & TypeScript:** for the backend of the Command Center, particularly the [AdonisJS](https://adonisjs.com/) framework
- **React:** for the frontend of the Command Center, utilizing [Vite](https://vitejs.dev/) and [Inertia.js](https://inertiajs.com/) under the hood
- **MySQL:** for the Command Center's database
- **Redis:** for various caching, background jobs, "cron" tasks, and other internal processes within the Command Center
NOMAD makes use of the Docker-outside-of-Docker ("DooD") pattern, which allows the Command Center to manage and orchestrate other Docker containers on the host machine without needing to run Docker itself inside a container. This approach provides better performance and compatibility with a wider range of host environments while still allowing for powerful container management capabilities through the Command Center's UI.
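A minimal sketch of the DooD pattern itself (this is not NOMAD's exact invocation; the flags and tag are illustrative): bind-mounting the host's Docker socket lets a containerized manager start sibling containers directly on the host.
```bash
# Illustrative only - NOMAD's installer generates the real configuration.
# Mounting /var/run/docker.sock gives the Command Center control of the
# host's Docker daemon, so containers it launches run as siblings on the
# host rather than nested inside another daemon.
docker run -d \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/crosstalk-solutions/project-nomad:latest
```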
## Can I run NOMAD if I have existing Docker containers on my machine?
Yes, you can safely run Project N.O.M.A.D. on a machine that already has existing Docker containers. NOMAD is designed to coexist with other Docker containers and will not interfere with them as long as there are no port conflicts or resource constraints.
All of NOMAD's containers are prefixed with `nomad_` in their names, so they can be easily identified and managed separately from any other containers you may have running. Just make sure to review the ports that NOMAD's core services (Command Center, MySQL, Redis) use during installation and adjust them if necessary to avoid conflicts with your existing containers.
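For example, you can list just NOMAD's containers (and leave your own out of the output) with a name filter:
```bash
docker ps --filter "name=nomad_"
```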
## Why does NOMAD require access to the Docker socket?
See [What technologies is NOMAD built with?](#what-technologies-is-nomad-built-with)
## Can I use any AI models?
By default, NOMAD runs the LLM models for the AI Assistant with Ollama inside a Docker container, so only models published in the Ollama library will work; a model that's only distributed on HuggingFace, for example, can't be used in NOMAD. The list of available models in the AI Assistant settings (/settings/models) may not show every model you're looking for. If you find a model on https://ollama.com/search that you'd like to try and it's not in the settings page, you can download it with a curl command:
`curl -X POST -H "Content-Type: application/json" -d '{"model":"MODEL_NAME_HERE"}' http://localhost:8080/api/ollama/models`, replacing MODEL_NAME_HERE with the model name exactly as it appears on the Ollama website.
## Do I have to install the AI features in NOMAD?
No, the AI features in NOMAD (Ollama, Qdrant, custom RAG pipeline, etc.) are all optional and not required to use the core functionality of NOMAD.
## Is NOMAD actually free? Are there any hidden costs?
Yes, Project N.O.M.A.D. is completely free and open-source software licensed under the Apache License 2.0. There are no hidden costs or fees associated with using NOMAD itself, and we don't have any plans to introduce "premium" features or paid tiers.
Aside from the cost of the hardware you choose to run it on, there are no costs associated with using NOMAD.
## Do you sell hardware or pre-built devices with NOMAD pre-installed?
No, we do not sell hardware or pre-built devices with NOMAD pre-installed at this time. Project N.O.M.A.D. is a free and open-source software project, and we provide detailed installation instructions and hardware recommendations for users to set up their own NOMAD instances on compatible hardware of their choice. The tradeoff to this DIY approach is some additional setup time and technical know-how required on the user's end, but it also allows for greater flexibility and customization in terms of hardware selection and configuration to best suit each user's unique needs, budget, and preferences.
## How quickly are issues resolved when reported?
We strive to address and resolve issues as quickly as possible, but please keep in mind that Project N.O.M.A.D. is a free and open-source project maintained by a small team of volunteers. We prioritize issues based on their severity, impact on users, and the resources required to resolve them. Critical issues that affect a large number of users are typically addressed more quickly, while less severe issues may take longer to resolve. Aside from the development efforts needed to address the issue, we do our best to conduct thorough testing and validation to ensure that any fix we implement doesn't introduce new issues or regressions, which also adds to the time it takes to resolve an issue.
We also encourage community involvement in troubleshooting and resolving issues, so if you encounter a problem, please consider checking our Discord community and Github Discussions for potential solutions or workarounds while we work on an official fix.
## How often are new features added or updates released?
We aim to release updates and new features on a regular basis, but the exact timing can vary based on the complexity of the features being developed, the resources available to our volunteer development team, and the feedback and needs of our community. We typically release smaller "patch" versions more frequently to address bugs and make minor improvements, while larger feature releases may take more time to develop and test before they're ready for release.
## I opened a PR to contribute a new feature or fix a bug. How long does it usually take for PRs to be reviewed and merged?
We appreciate all contributions to the project and strive to review and merge pull requests (PRs) as quickly as possible. The time it takes for a PR to be reviewed and merged can vary based on several factors, including the complexity of the changes, the current workload of our maintainers, and the need for any additional testing or revisions.
Because NOMAD is still a young project, some PRs (particularly those for new features) may take longer to review and merge as we prioritize building out the core functionality and ensuring stability before adding new features. However, we do our best to provide timely feedback on all PRs and keep contributors informed about the status of their contributions.
## I have a question that isn't answered here. Where can I ask for help?
If you have a question that isn't answered in this FAQ, please feel free to ask for help in our Discord community (https://discord.com/invite/crosstalksolutions) or on our Github Discussions page (https://github.com/Crosstalk-Solutions/project-nomad/discussions).
## I have a suggestion for a new feature or improvement. How can I share it?
We welcome and encourage suggestions for new features and improvements! We highly encourage sharing your ideas (or upvoting existing suggestions) on our public roadmap at https://roadmap.projectnomad.us, where we track new feature requests. This is the best way to ensure that your suggestion is seen by the development team and the community, and it also allows other community members to upvote and show support for your idea, which can help prioritize it for future development.

190
LICENSE Normal file
View File

@ -0,0 +1,190 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to the Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by the Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding any notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2024-2026 Crosstalk Solutions LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

111
README.md
View File

@ -1,24 +1,64 @@
-# Project N.O.M.A.D. (Node for Offline Media, Archives, and Data)
-Project N.O.M.A.D., is a self-contained, offline survival computer packed with critical tools, knowledge, and AI to keep you informed and empowered—anytime, anywhere.
+<div align="center">
+<img src="https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/admin/public/project_nomad_logo.png" width="200" height="200"/>
+# Project N.O.M.A.D.
+### Node for Offline Media, Archives, and Data
+**Knowledge That Never Goes Offline**
+[![Website](https://img.shields.io/badge/Website-projectnomad.us-blue)](https://www.projectnomad.us)
+[![Discord](https://img.shields.io/badge/Discord-Join%20Community-5865F2)](https://discord.com/invite/crosstalksolutions)
+[![Benchmark](https://img.shields.io/badge/Benchmark-Leaderboard-green)](https://benchmark.projectnomad.us)
+</div>
+---
+Project N.O.M.A.D. is a self-contained, offline-first knowledge and education server packed with critical tools, knowledge, and AI to keep you informed and empowered—anytime, anywhere.
 ## Installation & Quickstart
 Project N.O.M.A.D. can be installed on any Debian-based operating system (we recommend Ubuntu). Installation is completely terminal-based, and all tools and resources are designed to be accessed through the browser, so there's no need for a desktop environment if you'd rather setup N.O.M.A.D. as a "server" and access it through other clients.
 *Note: sudo/root privileges are required to run the install script*
+### Quick Install (Debian-based OS Only)
 ```bash
-curl -fsSL https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/master/install/install_nomad.sh -o install_nomad.sh
-sudo bash ./install_nomad.sh
+sudo apt-get update && sudo apt-get install -y curl && curl -fsSL https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/install/install_nomad.sh -o install_nomad.sh && sudo bash install_nomad.sh
 ```
 Project N.O.M.A.D. is now installed on your device! Open a browser and navigate to `http://localhost:8080` (or `http://DEVICE_IP:8080`) to start exploring!
-## How It Works
-From a technical standpoint, N.O.M.A.D. is primarily a management UI and API that orchestrates a goodie basket of containerized offline archive tools and resources such as
-[Kiwix](https://kiwix.org/), [OpenStreetMap](https://www.openstreetmap.org/), [Ollama](https://ollama.com/), [OpenWebUI](https://openwebui.com/), and more.
-By abstracting the installation of each of these awesome tools, N.O.M.A.D. makes getting your offline survival computer up and running a breeze! N.O.M.A.D. also includes some additional built-in handy tools, such as a ZIM library managment interface, calculators, and more.
+For a complete step-by-step walkthrough (including Ubuntu installation), see the [Installation Guide](https://www.projectnomad.us/install).
+### Advanced Installation
+For more control over the installation process, copy and paste the [Docker Compose template](https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/install/management_compose.yaml) into a `docker-compose.yml` file and customize it to your liking (be sure to replace any placeholders with your actual values). Then, run `docker compose up -d` to start the Command Center and its dependencies. Note: this method is recommended for advanced users only, as it requires familiarity with Docker and manual configuration before starting.
+## How It Works
+N.O.M.A.D. is a management UI ("Command Center") and API that orchestrates a collection of containerized tools and resources via [Docker](https://www.docker.com/). It handles installation, configuration, and updates for everything — so you don't have to.
+**Built-in capabilities include:**
+- **AI Chat with Knowledge Base** — local AI chat powered by [Ollama](https://ollama.com/), with document upload and semantic search (RAG via [Qdrant](https://qdrant.tech/))
+- **Information Library** — offline Wikipedia, medical references, ebooks, and more via [Kiwix](https://kiwix.org/)
+- **Education Platform** — Khan Academy courses with progress tracking via [Kolibri](https://learningequality.org/kolibri/)
+- **Offline Maps** — downloadable regional maps via [ProtoMaps](https://protomaps.com)
+- **Data Tools** — encryption, encoding, and analysis via [CyberChef](https://gchq.github.io/CyberChef/)
+- **Notes** — local note-taking via [FlatNotes](https://github.com/dullage/flatnotes)
+- **System Benchmark** — hardware scoring with a [community leaderboard](https://benchmark.projectnomad.us)
+- **Easy Setup Wizard** — guided first-time configuration with curated content collections
+N.O.M.A.D. also includes built-in tools like a Wikipedia content selector, ZIM library manager, and content explorer.
+## What's Included
+| Capability | Powered By | What You Get |
+|-----------|-----------|-------------|
+| Information Library | Kiwix | Offline Wikipedia, medical references, survival guides, ebooks |
+| AI Assistant | Ollama + Qdrant | Built-in chat with document upload and semantic search |
+| Education Platform | Kolibri | Khan Academy courses, progress tracking, multi-user support |
+| Offline Maps | ProtoMaps | Downloadable regional maps with search and navigation |
+| Data Tools | CyberChef | Encryption, encoding, hashing, and data analysis |
+| Notes | FlatNotes | Local note-taking with markdown support |
+| System Benchmark | Built-in | Hardware scoring, Builder Tags, and community leaderboard |
 ## Device Requirements
 While many similar offline survival computers are designed to be run on bare-minimum, lightweight hardware, Project N.O.M.A.D. is quite the opposite. To install and run the
@ -40,13 +80,18 @@ To run LLM's and other included AI tools:
 #### Optimal Specs
 - Processor: AMD Ryzen 7 or Intel Core i7 or better
 - RAM: 32 GB system memory
-- Graphics: NVIDIA RTX 3060 or better (more VRAM = run larger models)
+- Graphics: NVIDIA RTX 3060 or AMD equivalent or better (more VRAM = run larger models)
 - Storage: At least 250 GB free disk space (preferably on SSD)
 - OS: Debian-based (Ubuntu recommended)
 - Stable internet connection (required during install only)
+**For detailed build recommendations at three price points ($150–$1,000+), see the [Hardware Guide](https://www.projectnomad.us/hardware).**
 Again, Project N.O.M.A.D. itself is quite lightweight - it's the tools and resources you choose to install with N.O.M.A.D. that will determine the specs required for your unique deployment
+## Frequently Asked Questions (FAQ)
+For answers to common questions about Project N.O.M.A.D., please see our [FAQ](FAQ.md) page.
 ## About Internet Usage & Privacy
 Project N.O.M.A.D. is designed for offline usage. An internet connection is only required during the initial installation (to download dependencies) and if you (the user) decide to download additional tools and resources at a later time. Otherwise, N.O.M.A.D. does not require an internet connection and has ZERO built-in telemetry.
@ -54,3 +99,49 @@ To test internet connectivity, N.O.M.A.D. attempts to make a request to Cloudfla
 ## About Security
 By design, Project N.O.M.A.D. is intended to be open and available without hurdles - it includes no authentication. If you decide to connect your device to a local network after install (e.g. for allowing other devices to access it's resources), you can block/open ports to control which services are exposed.
+**Will authentication be added in the future?** Maybe. It's not currently a priority, but if there's enough demand for it, we may consider building in an optional authentication layer in a future release to support use cases where multiple users need access to the same instance but with different permission levels (e.g. family use with parental controls, classroom use with teacher/admin accounts, etc.). We have a suggestion for this on our public roadmap, so if this is something you'd like to see, please upvote it here: https://roadmap.projectnomad.us/posts/1/user-authentication-please-build-in-user-auth-with-admin-user-roles
+For now, we recommend using network-level controls to manage access if you're planning to expose your N.O.M.A.D. instance to other devices on a local network. N.O.M.A.D. is not designed to be exposed directly to the internet, and we strongly advise against doing so unless you really know what you're doing, have taken appropriate security measures, and understand the risks involved.
+## Contributing
+Contributions are welcome and appreciated! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines on how to contribute to the project.
+## Community & Resources
+- **Website:** [www.projectnomad.us](https://www.projectnomad.us) - Learn more about the project
+- **Discord:** [Join the Community](https://discord.com/invite/crosstalksolutions) - Get help, share your builds, and connect with other NOMAD users
+- **Benchmark Leaderboard:** [benchmark.projectnomad.us](https://benchmark.projectnomad.us) - See how your hardware stacks up against other NOMAD builds
+- **Troubleshooting Guide:** [TROUBLESHOOTING.md](TROUBLESHOOTING.md) - Find solutions to common issues
+- **FAQ:** [FAQ.md](FAQ.md) - Find answers to frequently asked questions
+## License
+Project N.O.M.A.D. is licensed under the [Apache License 2.0](LICENSE).
+## Helper Scripts
+Once installed, Project N.O.M.A.D. has a few helper scripts should you ever need to troubleshoot issues or perform maintenance that can't be done through the Command Center. All of these scripts are found in Project N.O.M.A.D.'s install directory, `/opt/project-nomad`
+###
+###### Start Script - Starts all installed project containers
+```bash
+sudo bash /opt/project-nomad/start_nomad.sh
+```
+###
+###### Stop Script - Stops all installed project containers
+```bash
+sudo bash /opt/project-nomad/stop_nomad.sh
+```
+###
+###### Update Script - Attempts to pull the latest images for the Command Center and its dependencies (i.e. mysql) and recreate the containers. Note: this *only* updates the Command Center containers. It does not update the installable application containers - that should be done through the Command Center UI
+```bash
+sudo bash /opt/project-nomad/update_nomad.sh
+```
+###### Uninstall Script - Need to start fresh? Use the uninstall script to make your life easy. Note: this cannot be undone!
+```bash
+curl -fsSL https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/install/uninstall_nomad.sh -o uninstall_nomad.sh && sudo bash uninstall_nomad.sh
+```

View File

@ -4,10 +4,15 @@ LOG_LEVEL=info
 APP_KEY=some_random_key
 NODE_ENV=development
 SESSION_DRIVER=cookie
-DRIVE_DISK=fs
 DB_HOST=localhost
 DB_PORT=3306
 DB_USER=root
 DB_DATABASE=nomad
 DB_PASSWORD=password
 DB_SSL=false
+REDIS_HOST=localhost
+REDIS_PORT=6379
+# Storage path for NOMAD content (ZIM files, maps, etc.)
+# On Windows dev, use an absolute path like: C:/nomad-storage
+# On Linux production, use: /opt/project-nomad/storage
+NOMAD_STORAGE_PATH=/opt/project-nomad/storage

View File

@ -1,32 +0,0 @@
FROM node:22.16.0-alpine3.22 AS base
# Install bash & curl for entrypoint script compatibility
RUN apk add --no-cache bash curl
# All deps stage
FROM base AS deps
WORKDIR /app
ADD package.json package-lock.json ./
RUN npm ci
# Production only deps stage
FROM base AS production-deps
WORKDIR /app
ADD package.json package-lock.json ./
RUN npm ci --omit=dev
# Build stage
FROM base AS build
WORKDIR /app
COPY --from=deps /app/node_modules /app/node_modules
ADD . .
RUN node ace build
# Production stage
FROM base
ENV NODE_ENV=production
WORKDIR /app
COPY --from=production-deps /app/node_modules /app/node_modules
COPY --from=build /app/build /app
EXPOSE 8080
CMD ["node", "./bin/server.js"]

View File

@ -1,5 +0,0 @@
## Docker container
```
docker run --rm -it -p 8080:8080 jturnercosmistack/projectnomad:admin-latest -e PORT=8080 -e HOST=0.0.0.0 -e APP_KEY=secretlongpasswordsecret -e LOG_LEVEL=debug -e DRIVE_DISK=fs
```


@@ -52,8 +52,8 @@ export default defineConfig({
     () => import('@adonisjs/cors/cors_provider'),
     () => import('@adonisjs/lucid/database_provider'),
     () => import('@adonisjs/inertia/inertia_provider'),
-    () => import('@adonisjs/drive/drive_provider'),
-    () => import('@adonisjs/transmit/transmit_provider')
+    () => import('@adonisjs/transmit/transmit_provider'),
+    () => import('#providers/map_static_provider')
   ],
   /*

@@ -0,0 +1,275 @@
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
import { BenchmarkService } from '#services/benchmark_service'
import { runBenchmarkValidator, submitBenchmarkValidator } from '#validators/benchmark'
import { RunBenchmarkJob } from '#jobs/run_benchmark_job'
import type { BenchmarkType } from '../../types/benchmark.js'
import { randomUUID } from 'node:crypto'
@inject()
export default class BenchmarkController {
constructor(private benchmarkService: BenchmarkService) {}
/**
* Start a benchmark run (async via job queue, or sync if specified)
*/
async run({ request, response }: HttpContext) {
const payload = await request.validateUsing(runBenchmarkValidator)
const benchmarkType: BenchmarkType = payload.benchmark_type || 'full'
const runSync = request.input('sync') === 'true' || request.input('sync') === true
// Check if a benchmark is already running
const status = this.benchmarkService.getStatus()
if (status.status !== 'idle') {
return response.status(409).send({
success: false,
error: 'A benchmark is already running',
current_benchmark_id: status.benchmarkId,
})
}
// Run synchronously if requested (useful for local dev without Redis)
if (runSync) {
try {
let result
switch (benchmarkType) {
case 'full':
result = await this.benchmarkService.runFullBenchmark()
break
case 'system':
result = await this.benchmarkService.runSystemBenchmarks()
break
case 'ai':
result = await this.benchmarkService.runAIBenchmark()
break
default:
result = await this.benchmarkService.runFullBenchmark()
}
return response.send({
success: true,
benchmark_id: result.benchmark_id,
nomad_score: result.nomad_score,
result,
})
} catch (error) {
return response.status(500).send({
success: false,
error: error.message,
})
}
}
// Generate benchmark ID and dispatch job (async)
const benchmarkId = randomUUID()
const { job, created } = await RunBenchmarkJob.dispatch({
benchmark_id: benchmarkId,
benchmark_type: benchmarkType,
include_ai: benchmarkType === 'full' || benchmarkType === 'ai',
})
return response.status(201).send({
success: true,
job_id: job?.id || benchmarkId,
benchmark_id: benchmarkId,
message: created
? `${benchmarkType} benchmark started`
: 'Benchmark job already exists',
})
}
/**
* Run a system-only benchmark (CPU, memory, disk)
*/
async runSystem({ response }: HttpContext) {
const status = this.benchmarkService.getStatus()
if (status.status !== 'idle') {
return response.status(409).send({
success: false,
error: 'A benchmark is already running',
})
}
const benchmarkId = randomUUID()
await RunBenchmarkJob.dispatch({
benchmark_id: benchmarkId,
benchmark_type: 'system',
include_ai: false,
})
return response.status(201).send({
success: true,
benchmark_id: benchmarkId,
message: 'System benchmark started',
})
}
/**
* Run an AI-only benchmark
*/
async runAI({ response }: HttpContext) {
const status = this.benchmarkService.getStatus()
if (status.status !== 'idle') {
return response.status(409).send({
success: false,
error: 'A benchmark is already running',
})
}
const benchmarkId = randomUUID()
await RunBenchmarkJob.dispatch({
benchmark_id: benchmarkId,
benchmark_type: 'ai',
include_ai: true,
})
return response.status(201).send({
success: true,
benchmark_id: benchmarkId,
message: 'AI benchmark started',
})
}
/**
* Get all benchmark results
*/
async results({}: HttpContext) {
const results = await this.benchmarkService.getAllResults()
return {
results,
total: results.length,
}
}
/**
* Get the latest benchmark result
*/
async latest({}: HttpContext) {
const result = await this.benchmarkService.getLatestResult()
if (!result) {
return { result: null }
}
return { result }
}
/**
* Get a specific benchmark result by ID
*/
async show({ params, response }: HttpContext) {
const result = await this.benchmarkService.getResultById(params.id)
if (!result) {
return response.status(404).send({
error: 'Benchmark result not found',
})
}
return { result }
}
/**
* Submit benchmark results to central repository
*/
async submit({ request, response }: HttpContext) {
const payload = await request.validateUsing(submitBenchmarkValidator)
const anonymous = request.input('anonymous') === true || request.input('anonymous') === 'true'
try {
const submitResult = await this.benchmarkService.submitToRepository(payload.benchmark_id, anonymous)
return response.send({
success: true,
repository_id: submitResult.repository_id,
percentile: submitResult.percentile,
})
} catch (error) {
// Pass through the status code from the service if available, otherwise default to 400
const statusCode = (error as any).statusCode || 400
return response.status(statusCode).send({
success: false,
error: error.message,
})
}
}
/**
* Update builder tag for a benchmark result
*/
async updateBuilderTag({ request, response }: HttpContext) {
const benchmarkId = request.input('benchmark_id')
const builderTag = request.input('builder_tag')
if (!benchmarkId) {
return response.status(400).send({
success: false,
error: 'benchmark_id is required',
})
}
const result = await this.benchmarkService.getResultById(benchmarkId)
if (!result) {
return response.status(404).send({
success: false,
error: 'Benchmark result not found',
})
}
// Validate builder tag format if provided
if (builderTag) {
const tagPattern = /^[A-Za-z]+-[A-Za-z]+-\d{4}$/
if (!tagPattern.test(builderTag)) {
return response.status(400).send({
success: false,
error: 'Invalid builder tag format. Expected: Word-Word-0000',
})
}
}
result.builder_tag = builderTag || null
await result.save()
return response.send({
success: true,
builder_tag: result.builder_tag,
})
}
/**
* Get comparison stats from central repository
*/
async comparison({}: HttpContext) {
const stats = await this.benchmarkService.getComparisonStats()
return { stats }
}
/**
* Get current benchmark status
*/
async status({}: HttpContext) {
return this.benchmarkService.getStatus()
}
/**
* Get benchmark settings
*/
async settings({}: HttpContext) {
const { default: BenchmarkSetting } = await import('#models/benchmark_setting')
return await BenchmarkSetting.getAllSettings()
}
/**
* Update benchmark settings
*/
async updateSettings({ request, response }: HttpContext) {
const { default: BenchmarkSetting } = await import('#models/benchmark_setting')
const body = request.body()
if (body.allow_anonymous_submission !== undefined) {
await BenchmarkSetting.setValue(
'allow_anonymous_submission',
body.allow_anonymous_submission ? 'true' : 'false'
)
}
return response.send({
success: true,
settings: await BenchmarkSetting.getAllSettings(),
})
}
}
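For reference, here is a hedged sketch of how a client might call the `run` endpoint above. The route path `/api/benchmark/run` is an assumption (the routes file isn't in this diff); the `benchmark_type` and `sync` inputs come straight from the controller:

```ts
// Sketch only: the /api/benchmark/run path is assumed, not confirmed by this diff.
async function startBenchmark(baseUrl: string) {
  const res = await fetch(`${baseUrl}/api/benchmark/run`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // benchmark_type: 'full' | 'system' | 'ai' (validated by runBenchmarkValidator)
    body: JSON.stringify({ benchmark_type: 'system' }),
  })
  if (res.status === 409) throw new Error('A benchmark is already running')
  // 201 response: { success, job_id, benchmark_id, message }
  return (await res.json()) as { success: boolean; benchmark_id: string; message: string }
}

// Passing sync=true instead runs the benchmark in-request and returns the full
// result, which the controller notes is useful for local dev without Redis.
```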


@@ -0,0 +1,113 @@
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
import { ChatService } from '#services/chat_service'
import { createSessionSchema, updateSessionSchema, addMessageSchema } from '#validators/chat'
import KVStore from '#models/kv_store'
import { SystemService } from '#services/system_service'
import { SERVICE_NAMES } from '../../constants/service_names.js'
@inject()
export default class ChatsController {
constructor(private chatService: ChatService, private systemService: SystemService) {}
async inertia({ inertia, response }: HttpContext) {
const aiAssistantInstalled = await this.systemService.checkServiceInstalled(SERVICE_NAMES.OLLAMA)
if (!aiAssistantInstalled) {
return response.status(404).json({ error: 'AI Assistant service not installed' })
}
const chatSuggestionsEnabled = await KVStore.getValue('chat.suggestionsEnabled')
return inertia.render('chat', {
settings: {
chatSuggestionsEnabled: chatSuggestionsEnabled ?? false,
},
})
}
async index({}: HttpContext) {
return await this.chatService.getAllSessions()
}
async show({ params, response }: HttpContext) {
const sessionId = parseInt(params.id)
const session = await this.chatService.getSession(sessionId)
if (!session) {
return response.status(404).json({ error: 'Session not found' })
}
return session
}
async store({ request, response }: HttpContext) {
try {
const data = await request.validateUsing(createSessionSchema)
const session = await this.chatService.createSession(data.title, data.model)
return response.status(201).json(session)
} catch (error) {
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to create session',
})
}
}
async suggestions({ response }: HttpContext) {
try {
const suggestions = await this.chatService.getChatSuggestions()
return response.status(200).json({ suggestions })
} catch (error) {
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to get suggestions',
})
}
}
async update({ params, request, response }: HttpContext) {
try {
const sessionId = parseInt(params.id)
const data = await request.validateUsing(updateSessionSchema)
const session = await this.chatService.updateSession(sessionId, data)
return session
} catch (error) {
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to update session',
})
}
}
async destroy({ params, response }: HttpContext) {
try {
const sessionId = parseInt(params.id)
await this.chatService.deleteSession(sessionId)
return response.status(204)
} catch (error) {
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to delete session',
})
}
}
async addMessage({ params, request, response }: HttpContext) {
try {
const sessionId = parseInt(params.id)
const data = await request.validateUsing(addMessageSchema)
const message = await this.chatService.addMessage(sessionId, data.role, data.content)
return response.status(201).json(message)
} catch (error) {
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to add message',
})
}
}
async destroyAll({ response }: HttpContext) {
try {
const result = await this.chatService.deleteAllSessions()
return response.status(200).json(result)
} catch (error) {
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to delete all sessions',
})
}
}
}


@@ -0,0 +1,30 @@
import { CollectionUpdateService } from '#services/collection_update_service'
import {
assertNotPrivateUrl,
applyContentUpdateValidator,
applyAllContentUpdatesValidator,
} from '#validators/common'
import type { HttpContext } from '@adonisjs/core/http'
export default class CollectionUpdatesController {
async checkForUpdates({}: HttpContext) {
const service = new CollectionUpdateService()
return await service.checkForUpdates()
}
async applyUpdate({ request }: HttpContext) {
const update = await request.validateUsing(applyContentUpdateValidator)
assertNotPrivateUrl(update.download_url)
const service = new CollectionUpdateService()
return await service.applyUpdate(update)
}
async applyAllUpdates({ request }: HttpContext) {
const { updates } = await request.validateUsing(applyAllContentUpdatesValidator)
for (const update of updates) {
assertNotPrivateUrl(update.download_url)
}
const service = new CollectionUpdateService()
return await service.applyAllUpdates(updates)
}
}
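`assertNotPrivateUrl` is imported from `#validators/common`, which isn't shown in this diff. A minimal sketch of an SSRF guard consistent with how it is called here might look like this (hypothetical implementation, not the project's actual code):

```ts
// Hypothetical sketch; the real implementation lives in #validators/common.
import { URL } from 'node:url'

export function assertNotPrivateUrl(rawUrl: string): void {
  const url = new URL(rawUrl)
  if (!['http:', 'https:'].includes(url.protocol)) {
    throw new Error('Only http(s) URLs are allowed')
  }
  // Reject obvious loopback, link-local, and RFC 1918 targets before downloading
  const privatePatterns = [
    /^localhost$/i,
    /^127\./,
    /^10\./,
    /^192\.168\./,
    /^172\.(1[6-9]|2\d|3[01])\./,
    /^169\.254\./,
    /^\[?::1\]?$/,
  ]
  if (privatePatterns.some((re) => re.test(url.hostname))) {
    throw new Error(`Refusing to download from private address: ${url.hostname}`)
  }
}
```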


@@ -9,15 +9,13 @@ export default class DocsController {
   ) { }

   async list({ }: HttpContext) {
-    const docs = await this.docsService.getDocs();
-    return { articles: docs };
+    return await this.docsService.getDocs();
   }

   async show({ params, inertia }: HttpContext) {
-    const content = await this.docsService.parseFile(`${params.slug}.md`);
+    const content = await this.docsService.parseFile(params.slug);
     return inertia.render('docs/show', {
       content,
-      title: "Documentation"
     });
   }
 }


@@ -0,0 +1,23 @@
import type { HttpContext } from '@adonisjs/core/http'
import { DownloadService } from '#services/download_service'
import { downloadJobsByFiletypeSchema } from '#validators/download'
import { inject } from '@adonisjs/core'
@inject()
export default class DownloadsController {
constructor(private downloadService: DownloadService) {}
async index() {
return this.downloadService.listDownloadJobs()
}
async filetype({ request }: HttpContext) {
const payload = await request.validateUsing(downloadJobsByFiletypeSchema)
return this.downloadService.listDownloadJobs(payload.params.filetype)
}
async removeJob({ params }: HttpContext) {
await this.downloadService.removeFailedJob(params.jobId)
return { success: true }
}
}


@@ -0,0 +1,48 @@
import { SystemService } from '#services/system_service'
import { ZimService } from '#services/zim_service'
import { CollectionManifestService } from '#services/collection_manifest_service'
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
@inject()
export default class EasySetupController {
constructor(
private systemService: SystemService,
private zimService: ZimService
) {}
async index({ inertia }: HttpContext) {
const services = await this.systemService.getServices({ installedOnly: false })
return inertia.render('easy-setup/index', {
system: {
services: services,
},
})
}
async complete({ inertia }: HttpContext) {
return inertia.render('easy-setup/complete')
}
async listCuratedCategories({}: HttpContext) {
return await this.zimService.listCuratedCategories()
}
async refreshManifests({}: HttpContext) {
const manifestService = new CollectionManifestService()
const [zimChanged, mapsChanged, wikiChanged] = await Promise.all([
manifestService.fetchAndCacheSpec('zim_categories'),
manifestService.fetchAndCacheSpec('maps'),
manifestService.fetchAndCacheSpec('wikipedia'),
])
return {
success: true,
changed: {
zim_categories: zimChanged,
maps: mapsChanged,
wikipedia: wikiChanged,
},
}
}
}


@@ -15,7 +15,6 @@ export default class HomeController {
   async home({ inertia }: HttpContext) {
     const services = await this.systemService.getServices({ installedOnly: true });
-    console.log(services)
     return inertia.render('home', {
       system: {
         services


@@ -0,0 +1,108 @@
import { MapService } from '#services/map_service'
import {
assertNotPrivateUrl,
downloadCollectionValidator,
filenameParamValidator,
remoteDownloadValidator,
remoteDownloadValidatorOptional,
} from '#validators/common'
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
@inject()
export default class MapsController {
constructor(private mapService: MapService) {}
async index({ inertia }: HttpContext) {
const baseAssetsCheck = await this.mapService.ensureBaseAssets()
const regionFiles = await this.mapService.listRegions()
return inertia.render('maps', {
maps: {
baseAssetsExist: baseAssetsCheck,
regionFiles: regionFiles.files,
},
})
}
async downloadBaseAssets({ request }: HttpContext) {
const payload = await request.validateUsing(remoteDownloadValidatorOptional)
if (payload.url) assertNotPrivateUrl(payload.url)
await this.mapService.downloadBaseAssets(payload.url)
return { success: true }
}
async downloadRemote({ request }: HttpContext) {
const payload = await request.validateUsing(remoteDownloadValidator)
assertNotPrivateUrl(payload.url)
const filename = await this.mapService.downloadRemote(payload.url)
return {
message: 'Download started successfully',
filename,
url: payload.url,
}
}
async downloadCollection({ request }: HttpContext) {
const payload = await request.validateUsing(downloadCollectionValidator)
const resources = await this.mapService.downloadCollection(payload.slug)
return {
message: 'Collection download started successfully',
slug: payload.slug,
resources,
}
}
// For providing a "preflight" check in the UI before actually starting a background download
async downloadRemotePreflight({ request }: HttpContext) {
const payload = await request.validateUsing(remoteDownloadValidator)
assertNotPrivateUrl(payload.url)
const info = await this.mapService.downloadRemotePreflight(payload.url)
return info
}
async fetchLatestCollections({}: HttpContext) {
const success = await this.mapService.fetchLatestCollections()
return { success }
}
async listCuratedCollections({}: HttpContext) {
return await this.mapService.listCuratedCollections()
}
async listRegions({}: HttpContext) {
return await this.mapService.listRegions()
}
async styles({ request, response }: HttpContext) {
// Automatically ensure base assets are present before generating styles
const baseAssetsExist = await this.mapService.ensureBaseAssets()
if (!baseAssetsExist) {
return response.status(500).send({
message:
'Base map assets are missing and could not be downloaded. Please check your connection and try again.',
})
}
const styles = await this.mapService.generateStylesJSON(request.host(), request.protocol())
return response.json(styles)
}
async delete({ request, response }: HttpContext) {
const payload = await request.validateUsing(filenameParamValidator)
try {
await this.mapService.delete(payload.params.filename)
} catch (error) {
if (error.message === 'not_found') {
return response.status(404).send({
message: `Map file with key ${payload.params.filename} not found`,
})
}
throw error // Re-throw any other errors and let the global error handler catch
}
return {
message: 'Map file deleted successfully',
}
}
}


@@ -0,0 +1,275 @@
import { ChatService } from '#services/chat_service'
import { OllamaService } from '#services/ollama_service'
import { RagService } from '#services/rag_service'
import { modelNameSchema } from '#validators/download'
import { chatSchema, getAvailableModelsSchema } from '#validators/ollama'
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
import { DEFAULT_QUERY_REWRITE_MODEL, RAG_CONTEXT_LIMITS, SYSTEM_PROMPTS } from '../../constants/ollama.js'
import logger from '@adonisjs/core/services/logger'
import type { Message } from 'ollama'
@inject()
export default class OllamaController {
constructor(
private chatService: ChatService,
private ollamaService: OllamaService,
private ragService: RagService
) { }
async availableModels({ request }: HttpContext) {
const reqData = await request.validateUsing(getAvailableModelsSchema)
return await this.ollamaService.getAvailableModels({
sort: reqData.sort,
recommendedOnly: reqData.recommendedOnly,
query: reqData.query || null,
limit: reqData.limit || 15,
force: reqData.force,
})
}
async chat({ request, response }: HttpContext) {
const reqData = await request.validateUsing(chatSchema)
// Flush SSE headers immediately so the client connection is open while
// pre-processing (query rewriting, RAG lookup) runs in the background.
if (reqData.stream) {
response.response.setHeader('Content-Type', 'text/event-stream')
response.response.setHeader('Cache-Control', 'no-cache')
response.response.setHeader('Connection', 'keep-alive')
response.response.flushHeaders()
}
try {
// If there are no system messages in the chat inject system prompts
const hasSystemMessage = reqData.messages.some((msg) => msg.role === 'system')
if (!hasSystemMessage) {
const systemPrompt = {
role: 'system' as const,
content: SYSTEM_PROMPTS.default,
}
logger.debug('[OllamaController] Injecting system prompt')
reqData.messages.unshift(systemPrompt)
}
// Query rewriting for better RAG retrieval with manageable context
// Will return user's latest message if no rewriting is needed
const rewrittenQuery = await this.rewriteQueryWithContext(reqData.messages)
logger.debug(`[OllamaController] Rewritten query for RAG: "${rewrittenQuery}"`)
if (rewrittenQuery) {
const relevantDocs = await this.ragService.searchSimilarDocuments(
rewrittenQuery,
5, // Top 5 most relevant chunks
0.3 // Minimum similarity score of 0.3
)
logger.debug(`[RAG] Retrieved ${relevantDocs.length} relevant documents for query: "${rewrittenQuery}"`)
// If relevant context is found, inject as a system message with adaptive limits
if (relevantDocs.length > 0) {
// Determine context budget based on model size
const { maxResults, maxTokens } = this.getContextLimitsForModel(reqData.model)
let trimmedDocs = relevantDocs.slice(0, maxResults)
// Apply token cap if set (estimate ~4 chars per token)
// Always include the first (most relevant) result — the cap only gates subsequent results
if (maxTokens > 0) {
const charCap = maxTokens * 4
let totalChars = 0
trimmedDocs = trimmedDocs.filter((doc, idx) => {
totalChars += doc.text.length
return idx === 0 || totalChars <= charCap
})
}
logger.debug(
`[RAG] Injecting ${trimmedDocs.length}/${relevantDocs.length} results (model: ${reqData.model}, maxResults: ${maxResults}, maxTokens: ${maxTokens || 'unlimited'})`
)
const contextText = trimmedDocs
.map((doc, idx) => `[Context ${idx + 1}] (Relevance: ${(doc.score * 100).toFixed(1)}%)\n${doc.text}`)
.join('\n\n')
const systemMessage = {
role: 'system' as const,
content: SYSTEM_PROMPTS.rag_context(contextText),
}
// Insert system message at the beginning (after any existing system messages)
const firstNonSystemIndex = reqData.messages.findIndex((msg) => msg.role !== 'system')
const insertIndex = firstNonSystemIndex === -1 ? 0 : firstNonSystemIndex
reqData.messages.splice(insertIndex, 0, systemMessage)
}
}
// Check if the model supports "thinking" capability for enhanced response generation
// If gpt-oss model, it requires a text param for "think" https://docs.ollama.com/api/chat
const thinkingCapability = await this.ollamaService.checkModelHasThinking(reqData.model)
const think: boolean | 'medium' = thinkingCapability ? (reqData.model.startsWith('gpt-oss') ? 'medium' : true) : false
// Separate sessionId from the Ollama request payload — Ollama rejects unknown fields
const { sessionId, ...ollamaRequest } = reqData
// Save user message to DB before streaming if sessionId provided
let userContent: string | null = null
if (sessionId) {
const lastUserMsg = [...reqData.messages].reverse().find((m) => m.role === 'user')
if (lastUserMsg) {
userContent = lastUserMsg.content
await this.chatService.addMessage(sessionId, 'user', userContent)
}
}
if (reqData.stream) {
logger.debug(`[OllamaController] Initiating streaming response for model: "${reqData.model}" with think: ${think}`)
// Headers already flushed above
const stream = await this.ollamaService.chatStream({ ...ollamaRequest, think })
let fullContent = ''
for await (const chunk of stream) {
if (chunk.message?.content) {
fullContent += chunk.message.content
}
response.response.write(`data: ${JSON.stringify(chunk)}\n\n`)
}
response.response.end()
// Save assistant message and optionally generate title
if (sessionId && fullContent) {
await this.chatService.addMessage(sessionId, 'assistant', fullContent)
const messageCount = await this.chatService.getMessageCount(sessionId)
if (messageCount <= 2 && userContent) {
this.chatService.generateTitle(sessionId, userContent, fullContent).catch((err) => {
logger.error(`[OllamaController] Title generation failed: ${err instanceof Error ? err.message : err}`)
})
}
}
return
}
// Non-streaming (legacy) path
const result = await this.ollamaService.chat({ ...ollamaRequest, think })
if (sessionId && result?.message?.content) {
await this.chatService.addMessage(sessionId, 'assistant', result.message.content)
const messageCount = await this.chatService.getMessageCount(sessionId)
if (messageCount <= 2 && userContent) {
this.chatService.generateTitle(sessionId, userContent, result.message.content).catch((err) => {
logger.error(`[OllamaController] Title generation failed: ${err instanceof Error ? err.message : err}`)
})
}
}
return result
} catch (error) {
if (reqData.stream) {
response.response.write(`data: ${JSON.stringify({ error: true })}\n\n`)
response.response.end()
return
}
throw error
}
}
async deleteModel({ request }: HttpContext) {
const reqData = await request.validateUsing(modelNameSchema)
await this.ollamaService.deleteModel(reqData.model)
return {
success: true,
message: `Model deleted: ${reqData.model}`,
}
}
async dispatchModelDownload({ request }: HttpContext) {
const reqData = await request.validateUsing(modelNameSchema)
await this.ollamaService.dispatchModelDownload(reqData.model)
return {
success: true,
message: `Download job dispatched for model: ${reqData.model}`,
}
}
async installedModels({ }: HttpContext) {
return await this.ollamaService.getModels()
}
/**
* Determines RAG context limits based on model size extracted from the model name.
* Parses size indicators like "1b", "3b", "8b", "70b" from model names/tags.
*/
private getContextLimitsForModel(modelName: string): { maxResults: number; maxTokens: number } {
// Extract parameter count from model name (e.g., "llama3.2:3b", "qwen2.5:1.5b", "gemma:7b")
const sizeMatch = modelName.match(/(\d+\.?\d*)[bB]/)
const paramBillions = sizeMatch ? parseFloat(sizeMatch[1]) : 8 // default to 8B if unknown
for (const tier of RAG_CONTEXT_LIMITS) {
if (paramBillions <= tier.maxParams) {
return { maxResults: tier.maxResults, maxTokens: tier.maxTokens }
}
}
// Fallback: no limits
return { maxResults: 5, maxTokens: 0 }
}
private async rewriteQueryWithContext(
messages: Message[]
): Promise<string | null> {
try {
// Get recent conversation history (last 6 messages for 3 turns)
const recentMessages = messages.slice(-6)
// Skip rewriting for short conversations. Rewriting adds latency with
// little RAG benefit until there is enough context to matter.
const userMessages = recentMessages.filter(msg => msg.role === 'user')
if (userMessages.length <= 2) {
return userMessages[userMessages.length - 1]?.content || null
}
const conversationContext = recentMessages
.map(msg => {
const role = msg.role === 'user' ? 'User' : 'Assistant'
// Truncate assistant messages to first 200 chars to keep context manageable
const content = msg.role === 'assistant'
? msg.content.slice(0, 200) + (msg.content.length > 200 ? '...' : '')
: msg.content
return `${role}: "${content}"`
})
.join('\n')
const installedModels = await this.ollamaService.getModels(true)
const rewriteModelAvailable = installedModels?.some(model => model.name === DEFAULT_QUERY_REWRITE_MODEL)
if (!rewriteModelAvailable) {
logger.warn(`[RAG] Query rewrite model "${DEFAULT_QUERY_REWRITE_MODEL}" not available. Skipping query rewriting.`)
const lastUserMessage = [...messages].reverse().find(msg => msg.role === 'user')
return lastUserMessage?.content || null
}
// FUTURE ENHANCEMENT: allow the user to specify which model to use for rewriting
const response = await this.ollamaService.chat({
model: DEFAULT_QUERY_REWRITE_MODEL,
messages: [
{
role: 'system',
content: SYSTEM_PROMPTS.query_rewrite,
},
{
role: 'user',
content: `Conversation:\n${conversationContext}\n\nRewritten Query:`,
},
],
})
const rewrittenQuery = response.message.content.trim()
logger.info(`[RAG] Query rewritten: "${rewrittenQuery}"`)
return rewrittenQuery
} catch (error) {
logger.error(
`[RAG] Query rewriting failed: ${error instanceof Error ? error.message : error}`
)
// Fallback to last user message if rewriting fails
const lastUserMessage = [...messages].reverse().find(msg => msg.role === 'user')
return lastUserMessage?.content || null
}
}
}
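`RAG_CONTEXT_LIMITS` isn't included in this diff. Judging from how `getContextLimitsForModel` iterates it (ascending `maxParams` tiers, first match wins), its shape is presumably something like the following; the concrete numbers are illustrative assumptions:

```ts
// Hypothetical shape for ../../constants/ollama.js; the values are placeholders.
interface RagContextTier {
  maxParams: number   // upper bound on model size, in billions of parameters
  maxResults: number  // how many RAG chunks to inject
  maxTokens: number   // token budget for injected context (0 = unlimited)
}

export const RAG_CONTEXT_LIMITS: RagContextTier[] = [
  { maxParams: 2, maxResults: 2, maxTokens: 600 },   // tiny models: minimal context
  { maxParams: 4, maxResults: 3, maxTokens: 1200 },
  { maxParams: 9, maxResults: 4, maxTokens: 2400 },
  { maxParams: 99, maxResults: 5, maxTokens: 0 },    // large models: effectively no cap
]
```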


@@ -0,0 +1,85 @@
import { RagService } from '#services/rag_service'
import { EmbedFileJob } from '#jobs/embed_file_job'
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
import app from '@adonisjs/core/services/app'
import { randomBytes } from 'node:crypto'
import { sanitizeFilename } from '../utils/fs.js'
import { deleteFileSchema, getJobStatusSchema } from '#validators/rag'
@inject()
export default class RagController {
constructor(private ragService: RagService) { }
public async upload({ request, response }: HttpContext) {
const uploadedFile = request.file('file')
if (!uploadedFile) {
return response.status(400).json({ error: 'No file uploaded' })
}
const randomSuffix = randomBytes(6).toString('hex')
const sanitizedName = sanitizeFilename(uploadedFile.clientName)
const fileName = `${sanitizedName}-${randomSuffix}.${uploadedFile.extname || 'txt'}`
const fullPath = app.makePath(RagService.UPLOADS_STORAGE_PATH, fileName)
await uploadedFile.move(app.makePath(RagService.UPLOADS_STORAGE_PATH), {
name: fileName,
})
// Dispatch background job for embedding
const result = await EmbedFileJob.dispatch({
filePath: fullPath,
fileName,
})
return response.status(202).json({
message: result.message,
jobId: result.jobId,
fileName,
filePath: `/${RagService.UPLOADS_STORAGE_PATH}/${fileName}`,
alreadyProcessing: !result.created,
})
}
public async getActiveJobs({ response }: HttpContext) {
const jobs = await EmbedFileJob.listActiveJobs()
return response.status(200).json(jobs)
}
public async getJobStatus({ request, response }: HttpContext) {
const reqData = await request.validateUsing(getJobStatusSchema)
const fullPath = app.makePath(RagService.UPLOADS_STORAGE_PATH, reqData.filePath)
const status = await EmbedFileJob.getStatus(fullPath)
if (!status.exists) {
return response.status(404).json({ error: 'Job not found for this file' })
}
return response.status(200).json(status)
}
public async getStoredFiles({ response }: HttpContext) {
const files = await this.ragService.getStoredFiles()
return response.status(200).json({ files })
}
public async deleteFile({ request, response }: HttpContext) {
const { source } = await request.validateUsing(deleteFileSchema)
const result = await this.ragService.deleteFileBySource(source)
if (!result.success) {
return response.status(500).json({ error: result.message })
}
return response.status(200).json({ message: result.message })
}
public async scanAndSync({ response }: HttpContext) {
try {
const syncResult = await this.ragService.scanAndSyncStorage()
return response.status(200).json(syncResult)
} catch (error) {
return response.status(500).json({ error: 'Error scanning and syncing storage', details: error.message })
}
}
}


@@ -1,19 +1,28 @@
+import KVStore from '#models/kv_store';
+import { BenchmarkService } from '#services/benchmark_service';
+import { MapService } from '#services/map_service';
+import { OllamaService } from '#services/ollama_service';
 import { SystemService } from '#services/system_service';
+import { updateSettingSchema } from '#validators/settings';
 import { inject } from '@adonisjs/core';
 import type { HttpContext } from '@adonisjs/core/http'
+import type { KVStoreKey } from '../../types/kv_store.js';

 @inject()
 export default class SettingsController {
   constructor(
     private systemService: SystemService,
+    private mapService: MapService,
+    private benchmarkService: BenchmarkService,
+    private ollamaService: OllamaService
   ) { }

   async system({ inertia }: HttpContext) {
-    // const services = await this.systemService.getServices();
+    const systemInfo = await this.systemService.getSystemInfo();
     return inertia.render('settings/system', {
-      // system: {
-      //   services
-      // }
+      system: {
+        info: systemInfo
+      }
     });
   }
@@ -25,6 +34,53 @@ export default class SettingsController {
       }
     });
   }
+
+  async legal({ inertia }: HttpContext) {
+    return inertia.render('settings/legal');
+  }
+
+  async support({ inertia }: HttpContext) {
+    return inertia.render('settings/support');
+  }
+
+  async maps({ inertia }: HttpContext) {
+    const baseAssetsCheck = await this.mapService.ensureBaseAssets();
+    const regionFiles = await this.mapService.listRegions();
+    return inertia.render('settings/maps', {
+      maps: {
+        baseAssetsExist: baseAssetsCheck,
+        regionFiles: regionFiles.files
+      }
+    });
+  }
+
+  async models({ inertia }: HttpContext) {
+    const availableModels = await this.ollamaService.getAvailableModels({ sort: 'pulls', recommendedOnly: false, query: null, limit: 15 });
+    const installedModels = await this.ollamaService.getModels();
+    const chatSuggestionsEnabled = await KVStore.getValue('chat.suggestionsEnabled')
+    const aiAssistantCustomName = await KVStore.getValue('ai.assistantCustomName')
+    return inertia.render('settings/models', {
+      models: {
+        availableModels: availableModels?.models || [],
+        installedModels: installedModels || [],
+        settings: {
+          chatSuggestionsEnabled: chatSuggestionsEnabled ?? false,
+          aiAssistantCustomName: aiAssistantCustomName ?? '',
+        }
+      }
+    });
+  }
+
+  async update({ inertia }: HttpContext) {
+    const updateInfo = await this.systemService.checkLatestVersion();
+    return inertia.render('settings/update', {
+      system: {
+        updateAvailable: updateInfo.updateAvailable,
+        latestVersion: updateInfo.latestVersion,
+        currentVersion: updateInfo.currentVersion
+      }
+    });
+  }

   async zim({ inertia }: HttpContext) {
     return inertia.render('settings/zim/index')
@@ -33,4 +89,28 @@ export default class SettingsController {
   async zimRemote({ inertia }: HttpContext) {
     return inertia.render('settings/zim/remote-explorer');
   }
+
+  async benchmark({ inertia }: HttpContext) {
+    const latestResult = await this.benchmarkService.getLatestResult();
+    const status = this.benchmarkService.getStatus();
+    return inertia.render('settings/benchmark', {
+      benchmark: {
+        latestResult,
+        status: status.status,
+        currentBenchmarkId: status.benchmarkId
+      }
+    });
+  }
+
+  async getSetting({ request, response }: HttpContext) {
+    const key = request.qs().key;
+    const value = await KVStore.getValue(key as KVStoreKey);
+    return response.status(200).send({ key, value });
+  }
+
+  async updateSetting({ request, response }: HttpContext) {
+    const reqData = await request.validateUsing(updateSettingSchema);
+    await this.systemService.updateSetting(reqData.key, reqData.value);
+    return response.status(200).send({ success: true, message: 'Setting updated successfully' });
+  }
 }


@@ -1,6 +1,9 @@
 import { DockerService } from '#services/docker_service';
 import { SystemService } from '#services/system_service'
-import { installServiceValidator } from '#validators/system';
+import { SystemUpdateService } from '#services/system_update_service'
+import { ContainerRegistryService } from '#services/container_registry_service'
+import { CheckServiceUpdatesJob } from '#jobs/check_service_updates_job'
+import { affectServiceValidator, checkLatestVersionValidator, installServiceValidator, subscribeToReleaseNotesValidator, updateServiceValidator } from '#validators/system';
 import { inject } from '@adonisjs/core'
 import type { HttpContext } from '@adonisjs/core/http'
@@ -8,9 +11,19 @@ import type { HttpContext } from '@adonisjs/core/http'
 export default class SystemController {
   constructor(
     private systemService: SystemService,
-    private dockerService: DockerService
+    private dockerService: DockerService,
+    private systemUpdateService: SystemUpdateService,
+    private containerRegistryService: ContainerRegistryService
   ) { }

+  async getInternetStatus({ }: HttpContext) {
+    return await this.systemService.getInternetStatus();
+  }
+
+  async getSystemInfo({ }: HttpContext) {
+    return await this.systemService.getSystemInfo();
+  }
+
   async getServices({ }: HttpContext) {
     return await this.systemService.getServices({ installedOnly: true });
   }
@@ -22,12 +35,147 @@ export default class SystemController {
     if (result.success) {
       response.send({ success: true, message: result.message });
     } else {
-      response.status(400).send({ error: result.message });
+      response.status(400).send({ success: false, message: result.message });
     }
   }

-  async simulateSSE({ response }: HttpContext) {
-    this.dockerService.simulateSSE();
-    response.send({ message: 'Started simulation of SSE' })
+  async affectService({ request, response }: HttpContext) {
+    const payload = await request.validateUsing(affectServiceValidator);
+    const result = await this.dockerService.affectContainer(payload.service_name, payload.action);
+    if (!result) {
+      response.internalServerError({ error: 'Failed to affect service' });
+      return;
+    }
+    response.send({ success: result.success, message: result.message });
   }
+
+  async checkLatestVersion({ request }: HttpContext) {
+    const payload = await request.validateUsing(checkLatestVersionValidator)
+    return await this.systemService.checkLatestVersion(payload.force);
+  }
+
+  async forceReinstallService({ request, response }: HttpContext) {
+    const payload = await request.validateUsing(installServiceValidator);
+    const result = await this.dockerService.forceReinstall(payload.service_name);
+    if (!result) {
+      response.internalServerError({ error: 'Failed to force reinstall service' });
+      return;
+    }
+    response.send({ success: result.success, message: result.message });
+  }
+
+  async requestSystemUpdate({ response }: HttpContext) {
+    if (!this.systemUpdateService.isSidecarAvailable()) {
+      response.status(503).send({
+        success: false,
+        error: 'Update sidecar is not available. Ensure the updater container is running.',
+      });
+      return;
+    }
+    const result = await this.systemUpdateService.requestUpdate();
+    if (result.success) {
+      response.send({
+        success: true,
+        message: result.message,
+        note: 'Monitor update progress via GET /api/system/update/status. The connection may drop during container restart.',
+      });
+    } else {
+      response.status(409).send({
+        success: false,
+        error: result.message,
+      });
+    }
+  }
+
+  async getSystemUpdateStatus({ response }: HttpContext) {
+    const status = this.systemUpdateService.getUpdateStatus();
+    if (!status) {
+      response.status(500).send({
+        error: 'Failed to retrieve update status',
+      });
+      return;
+    }
+    response.send(status);
+  }
+
+  async getSystemUpdateLogs({ response }: HttpContext) {
+    const logs = this.systemUpdateService.getUpdateLogs();
+    response.send({ logs });
+  }
+
+  async subscribeToReleaseNotes({ request }: HttpContext) {
+    const reqData = await request.validateUsing(subscribeToReleaseNotesValidator);
+    return await this.systemService.subscribeToReleaseNotes(reqData.email);
+  }
+
+  async getDebugInfo({}: HttpContext) {
+    const debugInfo = await this.systemService.getDebugInfo()
+    return { debugInfo }
+  }
+
+  async checkServiceUpdates({ response }: HttpContext) {
+    await CheckServiceUpdatesJob.dispatch()
+    response.send({ success: true, message: 'Service update check dispatched' })
+  }
+
+  async getAvailableVersions({ params, response }: HttpContext) {
+    const serviceName = params.name
+    const service = await (await import('#models/service')).default
+      .query()
+      .where('service_name', serviceName)
+      .where('installed', true)
+      .first()
+    if (!service) {
+      return response.status(404).send({ error: `Service ${serviceName} not found or not installed` })
+    }
+    try {
+      const hostArch = await this.getHostArch()
+      const updates = await this.containerRegistryService.getAvailableUpdates(
+        service.container_image,
+        hostArch,
+        service.source_repo
+      )
+      response.send({ versions: updates })
+    } catch (error) {
+      response.status(500).send({ error: `Failed to fetch versions: ${error.message}` })
+    }
+  }
+
+  async updateService({ request, response }: HttpContext) {
+    const payload = await request.validateUsing(updateServiceValidator)
+    const result = await this.dockerService.updateContainer(
+      payload.service_name,
+      payload.target_version
+    )
+    if (result.success) {
+      response.send({ success: true, message: result.message })
+    } else {
+      response.status(400).send({ error: result.message })
+    }
+  }
+
+  private async getHostArch(): Promise<string> {
+    try {
+      const info = await this.dockerService.docker.info()
+      const arch = info.Architecture || ''
+      const archMap: Record<string, string> = {
+        x86_64: 'amd64',
+        aarch64: 'arm64',
+        armv7l: 'arm',
+        amd64: 'amd64',
+        arm64: 'arm64',
+      }
+      return archMap[arch] || arch.toLowerCase()
+    } catch {
+      return 'amd64'
+    }
+  }
 }


@@ -1,47 +1,88 @@
-import { ZimService } from '#services/zim_service';
-import { inject } from '@adonisjs/core';
+import { ZimService } from '#services/zim_service'
+import {
+  assertNotPrivateUrl,
+  downloadCategoryTierValidator,
+  filenameParamValidator,
+  remoteDownloadWithMetadataValidator,
+  selectWikipediaValidator,
+} from '#validators/common'
+import { listRemoteZimValidator } from '#validators/zim'
+import { inject } from '@adonisjs/core'
 import type { HttpContext } from '@adonisjs/core/http'

 @inject()
 export default class ZimController {
-  constructor(
-    private zimService: ZimService
-  ) { }
+  constructor(private zimService: ZimService) {}

-  async list({ }: HttpContext) {
-    return await this.zimService.list();
+  async list({}: HttpContext) {
+    return await this.zimService.list()
   }

   async listRemote({ request }: HttpContext) {
-    const { start = 0, count = 12 } = request.qs();
-    return await this.zimService.listRemote({ start, count });
+    const payload = await request.validateUsing(listRemoteZimValidator)
+    const { start = 0, count = 12, query } = payload
+    return await this.zimService.listRemote({ start, count, query })
   }

-  async downloadRemote({ request, response }: HttpContext) {
-    const { url } = request.body()
-    await this.zimService.downloadRemote(url);
-    response.status(200).send({
-      message: 'Download started successfully'
-    });
+  async downloadRemote({ request }: HttpContext) {
+    const payload = await request.validateUsing(remoteDownloadWithMetadataValidator)
+    assertNotPrivateUrl(payload.url)
+    const { filename, jobId } = await this.zimService.downloadRemote(payload.url)
+    return {
+      message: 'Download started successfully',
+      filename,
+      jobId,
+      url: payload.url,
+    }
+  }
+
+  async listCuratedCategories({}: HttpContext) {
+    return await this.zimService.listCuratedCategories()
+  }
+
+  async downloadCategoryTier({ request }: HttpContext) {
+    const payload = await request.validateUsing(downloadCategoryTierValidator)
+    const resources = await this.zimService.downloadCategoryTier(
+      payload.categorySlug,
+      payload.tierSlug
+    )
+    return {
+      message: 'Download started successfully',
+      categorySlug: payload.categorySlug,
+      tierSlug: payload.tierSlug,
+      resources,
+    }
   }

   async delete({ request, response }: HttpContext) {
-    const { key } = request.params();
+    const payload = await request.validateUsing(filenameParamValidator)
     try {
-      await this.zimService.delete(key);
+      await this.zimService.delete(payload.params.filename)
     } catch (error) {
       if (error.message === 'not_found') {
         return response.status(404).send({
-          message: `ZIM file with key ${key} not found`
-        });
+          message: `ZIM file with key ${payload.params.filename} not found`,
+        })
       }
-      throw error; // Re-throw any other errors and let the global error handler catch
+      throw error // Re-throw any other errors and let the global error handler catch
     }
-    response.status(200).send({
-      message: 'ZIM file deleted successfully'
-    });
+    return {
+      message: 'ZIM file deleted successfully',
+    }
+  }
+
+  // Wikipedia selector endpoints
+
+  async getWikipediaState({}: HttpContext) {
+    return this.zimService.getWikipediaState()
+  }
+
+  async selectWikipedia({ request }: HttpContext) {
+    const payload = await request.validateUsing(selectWikipediaValidator)
+    return this.zimService.selectWikipedia(payload.optionId)
   }
 }


@@ -0,0 +1,6 @@
import { Exception } from '@adonisjs/core/exceptions'
export default class InternalServerErrorException extends Exception {
static status = 500
static code = 'E_INTERNAL_SERVER_ERROR'
}
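For context, this is how such an exception would typically be raised from a service in AdonisJS; the import alias and call site below are illustrative assumptions:

```ts
// Illustrative only: the import alias and call site are assumptions.
import InternalServerErrorException from '#exceptions/internal_server_error_exception'

function failLoudly(reason: string): never {
  // AdonisJS maps the static status/code above onto the HTTP response
  throw new InternalServerErrorException(`Something went wrong: ${reason}`)
}
```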


@@ -0,0 +1,134 @@
import { Job } from 'bullmq'
import { QueueService } from '#services/queue_service'
import { DockerService } from '#services/docker_service'
import { ContainerRegistryService } from '#services/container_registry_service'
import Service from '#models/service'
import logger from '@adonisjs/core/services/logger'
import transmit from '@adonisjs/transmit/services/main'
import { BROADCAST_CHANNELS } from '../../constants/broadcast.js'
import { DateTime } from 'luxon'
export class CheckServiceUpdatesJob {
static get queue() {
return 'service-updates'
}
static get key() {
return 'check-service-updates'
}
async handle(_job: Job) {
logger.info('[CheckServiceUpdatesJob] Checking for service updates...')
const dockerService = new DockerService()
const registryService = new ContainerRegistryService()
// Determine host architecture
const hostArch = await this.getHostArch(dockerService)
const installedServices = await Service.query().where('installed', true)
let updatesFound = 0
for (const service of installedServices) {
try {
const updates = await registryService.getAvailableUpdates(
service.container_image,
hostArch,
service.source_repo
)
const latestUpdate = updates.length > 0 ? updates[0].tag : null
service.available_update_version = latestUpdate
service.update_checked_at = DateTime.now()
await service.save()
if (latestUpdate) {
updatesFound++
logger.info(
`[CheckServiceUpdatesJob] Update available for ${service.service_name}: ${service.container_image}:${latestUpdate}`
)
}
} catch (error) {
logger.error(
`[CheckServiceUpdatesJob] Failed to check updates for ${service.service_name}: ${error.message}`
)
// Continue checking other services
}
}
logger.info(
`[CheckServiceUpdatesJob] Completed. ${updatesFound} update(s) found for ${installedServices.length} service(s).`
)
// Broadcast completion so the frontend can refresh
transmit.broadcast(BROADCAST_CHANNELS.SERVICE_UPDATES, {
status: 'completed',
updatesFound,
timestamp: new Date().toISOString(),
})
return { updatesFound }
}
private async getHostArch(dockerService: DockerService): Promise<string> {
try {
const info = await dockerService.docker.info()
const arch = info.Architecture || ''
// Map Docker architecture names to OCI names
const archMap: Record<string, string> = {
x86_64: 'amd64',
aarch64: 'arm64',
armv7l: 'arm',
amd64: 'amd64',
arm64: 'arm64',
}
return archMap[arch] || arch.toLowerCase()
} catch (error) {
logger.warn(
`[CheckServiceUpdatesJob] Could not detect host architecture: ${error.message}. Defaulting to amd64.`
)
return 'amd64'
}
}
static async scheduleNightly() {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
await queue.upsertJobScheduler(
'nightly-service-update-check',
{ pattern: '0 3 * * *' },
{
name: this.key,
opts: {
removeOnComplete: { count: 7 },
removeOnFail: { count: 5 },
},
}
)
logger.info('[CheckServiceUpdatesJob] Service update check scheduled with cron: 0 3 * * *')
}
static async dispatch() {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const job = await queue.add(
this.key,
{},
{
attempts: 3,
backoff: { type: 'exponential', delay: 60000 },
removeOnComplete: { count: 7 },
removeOnFail: { count: 5 },
}
)
logger.info(`[CheckServiceUpdatesJob] Dispatched ad-hoc service update check job ${job.id}`)
return job
}
}


@@ -0,0 +1,77 @@
import { Job } from 'bullmq'
import { QueueService } from '#services/queue_service'
import { DockerService } from '#services/docker_service'
import { SystemService } from '#services/system_service'
import logger from '@adonisjs/core/services/logger'
import KVStore from '#models/kv_store'
export class CheckUpdateJob {
static get queue() {
return 'system'
}
static get key() {
return 'check-update'
}
async handle(_job: Job) {
logger.info('[CheckUpdateJob] Running update check...')
const dockerService = new DockerService()
const systemService = new SystemService(dockerService)
try {
const result = await systemService.checkLatestVersion()
if (result.updateAvailable) {
logger.info(
`[CheckUpdateJob] Update available: ${result.currentVersion} → ${result.latestVersion}`
)
} else {
await KVStore.setValue('system.updateAvailable', false)
logger.info(
`[CheckUpdateJob] System is up to date (${result.currentVersion})`
)
}
return result
} catch (error) {
logger.error(`[CheckUpdateJob] Update check failed: ${error.message}`)
throw error
}
}
static async scheduleNightly() {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
await queue.upsertJobScheduler(
'nightly-update-check',
{ pattern: '0 2,14 * * *' }, // Every 12 hours at 2am and 2pm
{
name: this.key,
opts: {
removeOnComplete: { count: 7 },
removeOnFail: { count: 5 },
},
}
)
logger.info('[CheckUpdateJob] Update check scheduled with cron: 0 2,14 * * *')
}
static async dispatch() {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const job = await queue.add(this.key, {}, {
attempts: 3,
backoff: { type: 'exponential', delay: 60000 },
removeOnComplete: { count: 7 },
removeOnFail: { count: 5 },
})
logger.info(`[CheckUpdateJob] Dispatched ad-hoc update check job ${job.id}`)
return job
}
}


@@ -0,0 +1,130 @@
import { Job, UnrecoverableError } from 'bullmq'
import { QueueService } from '#services/queue_service'
import { createHash } from 'crypto'
import logger from '@adonisjs/core/services/logger'
import { OllamaService } from '#services/ollama_service'
export interface DownloadModelJobParams {
modelName: string
}
export class DownloadModelJob {
static get queue() {
return 'model-downloads'
}
static get key() {
return 'download-model'
}
static getJobId(modelName: string): string {
return createHash('sha256').update(modelName).digest('hex').slice(0, 16)
}
async handle(job: Job) {
const { modelName } = job.data as DownloadModelJobParams
logger.info(`[DownloadModelJob] Attempting to download model: ${modelName}`)
const ollamaService = new OllamaService()
// Even if no models are installed, this should return an empty array if ready
const existingModels = await ollamaService.getModels()
if (!existingModels) {
logger.warn(
`[DownloadModelJob] Ollama service not ready yet for model ${modelName}. Will retry...`
)
throw new Error('Ollama service not ready yet')
}
logger.info(
`[DownloadModelJob] Ollama service is ready. Initiating download for ${modelName}`
)
// Services are ready, initiate the download with progress tracking
const result = await ollamaService.downloadModel(modelName, (progressPercent) => {
if (progressPercent) {
job.updateProgress(Math.floor(progressPercent))
logger.info(
`[DownloadModelJob] Model ${modelName}: ${progressPercent}%`
)
}
// Store detailed progress in job data for clients to query
job.updateData({
...job.data,
status: 'downloading',
progress: progressPercent,
progress_timestamp: new Date().toISOString(),
})
})
if (!result.success) {
logger.error(
`[DownloadModelJob] Failed to initiate download for model ${modelName}: ${result.message}`
)
// Don't retry errors that will never succeed (e.g., Ollama version too old)
if (result.retryable === false) {
throw new UnrecoverableError(result.message)
}
throw new Error(`Failed to initiate download for model: ${result.message}`)
}
logger.info(`[DownloadModelJob] Successfully completed download for model ${modelName}`)
return {
modelName,
message: result.message,
}
}
static async getByModelName(modelName: string): Promise<Job | undefined> {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const jobId = this.getJobId(modelName)
return await queue.getJob(jobId)
}
static async dispatch(params: DownloadModelJobParams) {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const jobId = this.getJobId(params.modelName)
// Clear any previous failed job so a fresh attempt can be dispatched
const existing = await queue.getJob(jobId)
if (existing) {
const state = await existing.getState()
if (state === 'failed') {
await existing.remove()
}
}
try {
const job = await queue.add(this.key, params, {
jobId,
attempts: 40, // Many attempts since services may take considerable time to install
backoff: {
type: 'fixed',
delay: 60000, // Check every 60 seconds
},
removeOnComplete: false, // Keep for status checking
removeOnFail: false, // Keep failed jobs for debugging
})
return {
job,
created: true,
message: `Dispatched model download job for ${params.modelName}`,
}
} catch (error) {
if (error.message.includes('job already exists')) {
const active = await queue.getJob(jobId)
return {
job: active,
created: false,
message: `Job already exists for model ${params.modelName}`,
}
}
throw error
}
}
}


@@ -0,0 +1,259 @@
import { Job, UnrecoverableError } from 'bullmq'
import { QueueService } from '#services/queue_service'
import { EmbedJobWithProgress } from '../../types/rag.js'
import { RagService } from '#services/rag_service'
import { DockerService } from '#services/docker_service'
import { OllamaService } from '#services/ollama_service'
import { createHash } from 'crypto'
import logger from '@adonisjs/core/services/logger'
export interface EmbedFileJobParams {
filePath: string
fileName: string
fileSize?: number
// Batch processing for large ZIM files
batchOffset?: number // Current batch offset (for ZIM files)
totalArticles?: number // Total articles in ZIM (for progress tracking)
isFinalBatch?: boolean // Whether this is the last batch (prevents premature deletion)
}
export class EmbedFileJob {
static get queue() {
return 'file-embeddings'
}
static get key() {
return 'embed-file'
}
static getJobId(filePath: string): string {
return createHash('sha256').update(filePath).digest('hex').slice(0, 16)
}
async handle(job: Job) {
const { filePath, fileName, batchOffset, totalArticles } = job.data as EmbedFileJobParams
const isZimBatch = batchOffset !== undefined
const batchInfo = isZimBatch ? ` (batch offset: ${batchOffset})` : ''
logger.info(`[EmbedFileJob] Starting embedding process for: ${fileName}${batchInfo}`)
const dockerService = new DockerService()
const ollamaService = new OllamaService()
const ragService = new RagService(dockerService, ollamaService)
try {
// Check if Ollama and Qdrant services are installed and ready
// Use UnrecoverableError for "not installed" so BullMQ won't retry —
// retrying 30x when the service doesn't exist just wastes Redis connections
const ollamaUrl = await dockerService.getServiceURL('nomad_ollama')
if (!ollamaUrl) {
logger.warn('[EmbedFileJob] Ollama is not installed. Skipping embedding for: %s', fileName)
throw new UnrecoverableError('Ollama service is not installed. Install AI Assistant to enable file embeddings.')
}
const existingModels = await ollamaService.getModels()
if (!existingModels) {
logger.warn('[EmbedFileJob] Ollama service not ready yet. Will retry...')
throw new Error('Ollama service not ready yet')
}
const qdrantUrl = await dockerService.getServiceURL('nomad_qdrant')
if (!qdrantUrl) {
logger.warn('[EmbedFileJob] Qdrant is not installed. Skipping embedding for: %s', fileName)
throw new UnrecoverableError('Qdrant service is not installed. Install AI Assistant to enable file embeddings.')
}
logger.info(`[EmbedFileJob] Services ready. Processing file: ${fileName}`)
// Update progress starting
await job.updateProgress(5)
await job.updateData({
...job.data,
status: 'processing',
startedAt: job.data.startedAt || Date.now(),
})
logger.info(`[EmbedFileJob] Processing file: ${filePath}`)
// Progress callback: maps service-reported 0-100% into the 5-95% job range
const onProgress = async (percent: number) => {
await job.updateProgress(Math.min(95, Math.round(5 + percent * 0.9)))
}
// Process and embed the file
// Only allow deletion if explicitly marked as final batch
const allowDeletion = job.data.isFinalBatch === true
const result = await ragService.processAndEmbedFile(
filePath,
allowDeletion,
batchOffset,
onProgress
)
if (!result.success) {
logger.error(`[EmbedFileJob] Failed to process file ${fileName}: ${result.message}`)
throw new Error(result.message)
}
// For ZIM files with batching, check if more batches are needed
if (result.hasMoreBatches) {
const nextOffset = (batchOffset || 0) + (result.articlesProcessed || 0)
logger.info(
`[EmbedFileJob] Batch complete. Dispatching next batch at offset ${nextOffset}`
)
// Dispatch next batch (not final yet)
await EmbedFileJob.dispatch({
filePath,
fileName,
batchOffset: nextOffset,
totalArticles: totalArticles || result.totalArticles,
isFinalBatch: false, // Explicitly not final
})
// Calculate progress based on articles processed
const progress = totalArticles
? Math.round((nextOffset / totalArticles) * 100)
: 50
await job.updateProgress(progress)
await job.updateData({
...job.data,
status: 'batch_completed',
lastBatchAt: Date.now(),
chunks: (job.data.chunks || 0) + (result.chunks || 0),
})
return {
success: true,
fileName,
filePath,
chunks: result.chunks,
hasMoreBatches: true,
nextOffset,
message: `Batch embedded ${result.chunks} chunks, next batch queued`,
}
}
// Final batch or non-batched file - mark as complete
const totalChunks = (job.data.chunks || 0) + (result.chunks || 0)
await job.updateProgress(100)
await job.updateData({
...job.data,
status: 'completed',
completedAt: Date.now(),
chunks: totalChunks,
})
const batchMsg = isZimBatch ? ` (final batch, total chunks: ${totalChunks})` : ''
logger.info(
`[EmbedFileJob] Successfully embedded ${result.chunks} chunks from file: ${fileName}${batchMsg}`
)
return {
success: true,
fileName,
filePath,
chunks: result.chunks,
message: `Successfully embedded ${result.chunks} chunks`,
}
} catch (error) {
logger.error(`[EmbedFileJob] Error embedding file ${fileName}: ${error instanceof Error ? error.message : error}`)
await job.updateData({
...job.data,
status: 'failed',
failedAt: Date.now(),
error: error instanceof Error ? error.message : 'Unknown error',
})
throw error
}
}
static async listActiveJobs(): Promise<EmbedJobWithProgress[]> {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const jobs = await queue.getJobs(['waiting', 'active', 'delayed'])
return jobs.map((job) => ({
jobId: job.id!.toString(),
fileName: (job.data as EmbedFileJobParams).fileName,
filePath: (job.data as EmbedFileJobParams).filePath,
progress: typeof job.progress === 'number' ? job.progress : 0,
status: ((job.data as any).status as string) ?? 'waiting',
}))
}
static async getByFilePath(filePath: string): Promise<Job | undefined> {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const jobId = this.getJobId(filePath)
return await queue.getJob(jobId)
}
static async dispatch(params: EmbedFileJobParams) {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const jobId = this.getJobId(params.filePath)
try {
const job = await queue.add(this.key, params, {
jobId,
attempts: 30,
backoff: {
type: 'fixed',
delay: 60000, // Check every 60 seconds for service readiness
},
removeOnComplete: { count: 50 }, // Keep last 50 completed jobs for history
removeOnFail: { count: 20 } // Keep last 20 failed jobs for debugging
})
logger.info(`[EmbedFileJob] Dispatched embedding job for file: ${params.fileName}`)
return {
job,
created: true,
jobId,
message: `File queued for embedding: ${params.fileName}`,
}
} catch (error) {
if (error.message && error.message.includes('job already exists')) {
const existing = await queue.getJob(jobId)
logger.info(`[EmbedFileJob] Job already exists for file: ${params.fileName}`)
return {
job: existing,
created: false,
jobId,
message: `Embedding job already exists for: ${params.fileName}`,
}
}
throw error
}
}
static async getStatus(filePath: string): Promise<{
exists: boolean
status?: string
progress?: number
chunks?: number
error?: string
}> {
const job = await this.getByFilePath(filePath)
if (!job) {
return { exists: false }
}
const state = await job.getState()
const data = job.data
return {
exists: true,
status: data.status || state,
progress: typeof job.progress === 'number' ? job.progress : undefined,
chunks: data.chunks,
error: data.error,
}
}
}
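A minimal usage sketch (illustrative; the file path is hypothetical, and the status shape mirrors getStatus above):

// Queue a file for embedding; dispatch() derives the jobId from sha256(filePath),
// so a second call for the same path returns { created: false } with the existing job.
const { created, jobId } = await EmbedFileJob.dispatch({
filePath: '/storage/zim/example_2024-01.zim',
fileName: 'example_2024-01.zim',
})
// Later, e.g. from a status endpoint:
const status = await EmbedFileJob.getStatus('/storage/zim/example_2024-01.zim')
// => { exists: true, status: 'processing', progress: 42 } (example values)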

View File

@ -0,0 +1,101 @@
import { Job } from 'bullmq'
import { QueueService } from '#services/queue_service'
import { BenchmarkService } from '#services/benchmark_service'
import type { RunBenchmarkJobParams } from '../../types/benchmark.js'
import logger from '@adonisjs/core/services/logger'
import { DockerService } from '#services/docker_service'
export class RunBenchmarkJob {
static get queue() {
return 'benchmarks'
}
static get key() {
return 'run-benchmark'
}
async handle(job: Job) {
const { benchmark_id, benchmark_type } = job.data as RunBenchmarkJobParams
logger.info(`[RunBenchmarkJob] Starting benchmark ${benchmark_id} of type ${benchmark_type}`)
const dockerService = new DockerService()
const benchmarkService = new BenchmarkService(dockerService)
try {
let result
switch (benchmark_type) {
case 'full':
result = await benchmarkService.runFullBenchmark()
break
case 'system':
result = await benchmarkService.runSystemBenchmarks()
break
case 'ai':
result = await benchmarkService.runAIBenchmark()
break
default:
throw new Error(`Unknown benchmark type: ${benchmark_type}`)
}
logger.info(`[RunBenchmarkJob] Benchmark ${benchmark_id} completed with NOMAD score: ${result.nomad_score}`)
return {
success: true,
benchmark_id: result.benchmark_id,
nomad_score: result.nomad_score,
}
} catch (error) {
logger.error(`[RunBenchmarkJob] Benchmark ${benchmark_id} failed: ${error.message}`)
throw error
}
}
static async dispatch(params: RunBenchmarkJobParams) {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
try {
const job = await queue.add(this.key, params, {
jobId: params.benchmark_id,
attempts: 1, // Benchmarks shouldn't be retried automatically
removeOnComplete: {
count: 10, // Keep last 10 completed jobs
},
removeOnFail: {
count: 5, // Keep last 5 failed jobs
},
})
logger.info(`[RunBenchmarkJob] Dispatched benchmark job ${params.benchmark_id}`)
return {
job,
created: true,
message: `Benchmark job ${params.benchmark_id} dispatched successfully`,
}
} catch (error) {
if (error.message.includes('job already exists')) {
const existing = await queue.getJob(params.benchmark_id)
return {
job: existing,
created: false,
message: `Benchmark job ${params.benchmark_id} already exists`,
}
}
throw error
}
}
static async getJob(benchmarkId: string): Promise<Job | undefined> {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
return await queue.getJob(benchmarkId)
}
static async getJobState(benchmarkId: string): Promise<string | undefined> {
const job = await this.getJob(benchmarkId)
return job ? await job.getState() : undefined
}
}
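A usage sketch for the dispatch/poll flow (assuming the caller generates the UUID, e.g. via node:crypto's randomUUID; values are illustrative):

const benchmarkId = randomUUID()
await RunBenchmarkJob.dispatch({ benchmark_id: benchmarkId, benchmark_type: 'full' })
// Poll BullMQ's job state ('waiting', 'active', 'completed', 'failed', ...) until done:
const state = await RunBenchmarkJob.getJobState(benchmarkId)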

View File

@ -0,0 +1,154 @@
import { Job } from 'bullmq'
import { RunDownloadJobParams } from '../../types/downloads.js'
import { QueueService } from '#services/queue_service'
import { doResumableDownload } from '../utils/downloads.js'
import { createHash } from 'crypto'
import { DockerService } from '#services/docker_service'
import { ZimService } from '#services/zim_service'
import { MapService } from '#services/map_service'
import { EmbedFileJob } from './embed_file_job.js'
export class RunDownloadJob {
static get queue() {
return 'downloads'
}
static get key() {
return 'run-download'
}
static getJobId(url: string): string {
return createHash('sha256').update(url).digest('hex').slice(0, 16)
}
async handle(job: Job) {
const { url, filepath, timeout, allowedMimeTypes, forceNew, filetype, resourceMetadata } =
job.data as RunDownloadJobParams
await doResumableDownload({
url,
filepath,
timeout,
allowedMimeTypes,
forceNew,
onProgress(progress) {
const progressPercent = (progress.downloadedBytes / (progress.totalBytes || 1)) * 100
job.updateProgress(Math.floor(progressPercent))
},
async onComplete(url) {
try {
// Create InstalledResource entry if metadata was provided
if (resourceMetadata) {
const { default: InstalledResource } = await import('#models/installed_resource')
const { DateTime } = await import('luxon')
const { getFileStatsIfExists, deleteFileIfExists } = await import('../utils/fs.js')
const stats = await getFileStatsIfExists(filepath)
// Look up the old entry so we can clean up the previous file after updating
const oldEntry = await InstalledResource.query()
.where('resource_id', resourceMetadata.resource_id)
.where('resource_type', filetype as 'zim' | 'map')
.first()
const oldFilePath = oldEntry?.file_path ?? null
await InstalledResource.updateOrCreate(
{ resource_id: resourceMetadata.resource_id, resource_type: filetype as 'zim' | 'map' },
{
version: resourceMetadata.version,
collection_ref: resourceMetadata.collection_ref,
url: url,
file_path: filepath,
file_size_bytes: stats ? Number(stats.size) : null,
installed_at: DateTime.now(),
}
)
// Delete the old file if it differs from the new one
if (oldFilePath && oldFilePath !== filepath) {
try {
await deleteFileIfExists(oldFilePath)
console.log(`[RunDownloadJob] Deleted old file: ${oldFilePath}`)
} catch (deleteError) {
console.warn(
`[RunDownloadJob] Failed to delete old file ${oldFilePath}:`,
deleteError
)
}
}
}
if (filetype === 'zim') {
const dockerService = new DockerService()
const zimService = new ZimService(dockerService)
await zimService.downloadRemoteSuccessCallback([url], true)
// Only dispatch embedding job if AI Assistant (Ollama) is installed
const ollamaUrl = await dockerService.getServiceURL('nomad_ollama')
if (ollamaUrl) {
try {
await EmbedFileJob.dispatch({
fileName: url.split('/').pop() || '',
filePath: filepath,
})
} catch (error) {
console.error(`[RunDownloadJob] Error dispatching EmbedFileJob for URL ${url}:`, error)
}
}
} else if (filetype === 'map') {
const mapsService = new MapService()
await mapsService.downloadRemoteSuccessCallback([url], false)
}
} catch (error) {
console.error(
`[RunDownloadJob] Error in download success callback for URL ${url}:`,
error
)
}
job.updateProgress(100)
},
})
return {
url,
filepath,
}
}
static async getByUrl(url: string): Promise<Job | undefined> {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const jobId = this.getJobId(url)
return await queue.getJob(jobId)
}
static async dispatch(params: RunDownloadJobParams) {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const jobId = this.getJobId(params.url)
try {
const job = await queue.add(this.key, params, {
jobId,
attempts: 3,
backoff: { type: 'exponential', delay: 2000 },
removeOnComplete: true,
})
return {
job,
created: true,
message: `Dispatched download job for URL ${params.url}`,
}
} catch (error) {
if (error.message.includes('job already exists')) {
const existing = await queue.getJob(jobId)
return {
job: existing,
created: false,
message: `Job already exists for URL ${params.url}`,
}
}
throw error
}
}
}
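Because getJobId hashes the URL, re-dispatching the same URL targets the same BullMQ jobId, and the catch branch above surfaces the existing job instead of creating a duplicate. A sketch (URL and filepath are hypothetical):

const params = {
url: 'https://example.org/maps/region.pmtiles',
filepath: '/storage/maps/region.pmtiles',
filetype: 'map',
} as RunDownloadJobParams
const first = await RunDownloadJob.dispatch(params) // created: true
const again = await RunDownloadJob.dispatch(params) // created: false while the first job still exists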

View File

@ -0,0 +1,20 @@
import type { HttpContext } from '@adonisjs/core/http'
import type { NextFn } from '@adonisjs/core/types/http'
import StaticMiddleware from '@adonisjs/static/static_middleware'
import { AssetsConfig } from '@adonisjs/static/types'
/**
* See #providers/map_static_provider.ts for explanation
* of why this middleware exists.
*/
export default class MapsStaticMiddleware {
constructor(
private path: string,
private config: AssetsConfig
) {}
async handle(ctx: HttpContext, next: NextFn) {
const staticMiddleware = new StaticMiddleware(this.path, this.config)
return staticMiddleware.handle(ctx, next)
}
}

View File

@ -0,0 +1,85 @@
import { DateTime } from 'luxon'
import { BaseModel, column, SnakeCaseNamingStrategy } from '@adonisjs/lucid/orm'
import type { BenchmarkType, DiskType } from '../../types/benchmark.js'
export default class BenchmarkResult extends BaseModel {
static namingStrategy = new SnakeCaseNamingStrategy()
@column({ isPrimary: true })
declare id: number
@column()
declare benchmark_id: string
@column()
declare benchmark_type: BenchmarkType
// Hardware information
@column()
declare cpu_model: string
@column()
declare cpu_cores: number
@column()
declare cpu_threads: number
@column()
declare ram_bytes: number
@column()
declare disk_type: DiskType
@column()
declare gpu_model: string | null
// System benchmark scores
@column()
declare cpu_score: number
@column()
declare memory_score: number
@column()
declare disk_read_score: number
@column()
declare disk_write_score: number
// AI benchmark scores (nullable for system-only benchmarks)
@column()
declare ai_tokens_per_second: number | null
@column()
declare ai_model_used: string | null
@column()
declare ai_time_to_first_token: number | null
// Composite NOMAD score (0-100)
@column()
declare nomad_score: number
// Repository submission tracking
@column({
serialize(value) {
return Boolean(value)
},
})
declare submitted_to_repository: boolean
@column.dateTime()
declare submitted_at: DateTime | null
@column()
declare repository_id: string | null
@column()
declare builder_tag: string | null
@column.dateTime({ autoCreate: true })
declare created_at: DateTime
@column.dateTime({ autoCreate: true, autoUpdate: true })
declare updated_at: DateTime
}

View File

@ -0,0 +1,60 @@
import { DateTime } from 'luxon'
import { BaseModel, column, SnakeCaseNamingStrategy } from '@adonisjs/lucid/orm'
import type { BenchmarkSettingKey } from '../../types/benchmark.js'
export default class BenchmarkSetting extends BaseModel {
static namingStrategy = new SnakeCaseNamingStrategy()
@column({ isPrimary: true })
declare id: number
@column()
declare key: BenchmarkSettingKey
@column()
declare value: string | null
@column.dateTime({ autoCreate: true })
declare created_at: DateTime
@column.dateTime({ autoCreate: true, autoUpdate: true })
declare updated_at: DateTime
/**
* Get a setting value by key
*/
static async getValue(key: BenchmarkSettingKey): Promise<string | null> {
const setting = await this.findBy('key', key)
return setting?.value ?? null
}
/**
* Set a setting value by key (creates if not exists)
*/
static async setValue(key: BenchmarkSettingKey, value: string | null): Promise<BenchmarkSetting> {
const setting = await this.firstOrCreate({ key }, { key, value })
if (setting.value !== value) {
setting.value = value
await setting.save()
}
return setting
}
/**
* Get all benchmark settings as a typed object
*/
static async getAllSettings(): Promise<{
allow_anonymous_submission: boolean
installation_id: string | null
last_benchmark_run: string | null
}> {
const settings = await this.all()
const map = new Map(settings.map((s) => [s.key, s.value]))
return {
allow_anonymous_submission: map.get('allow_anonymous_submission') === 'true',
installation_id: map.get('installation_id') ?? null,
last_benchmark_run: map.get('last_benchmark_run') ?? null,
}
}
}

View File

@ -0,0 +1,29 @@
import { DateTime } from 'luxon'
import { BaseModel, column, belongsTo, SnakeCaseNamingStrategy } from '@adonisjs/lucid/orm'
import type { BelongsTo } from '@adonisjs/lucid/types/relations'
import ChatSession from './chat_session.js'
export default class ChatMessage extends BaseModel {
static namingStrategy = new SnakeCaseNamingStrategy()
@column({ isPrimary: true })
declare id: number
@column()
declare session_id: number
@column()
declare role: 'system' | 'user' | 'assistant'
@column()
declare content: string
@belongsTo(() => ChatSession, { foreignKey: 'session_id', localKey: 'id' })
declare session: BelongsTo<typeof ChatSession>
@column.dateTime({ autoCreate: true })
declare created_at: DateTime
@column.dateTime({ autoCreate: true, autoUpdate: true })
declare updated_at: DateTime
}

View File

@ -0,0 +1,29 @@
import { DateTime } from 'luxon'
import { BaseModel, column, hasMany, SnakeCaseNamingStrategy } from '@adonisjs/lucid/orm'
import type { HasMany } from '@adonisjs/lucid/types/relations'
import ChatMessage from './chat_message.js'
export default class ChatSession extends BaseModel {
static namingStrategy = new SnakeCaseNamingStrategy()
@column({ isPrimary: true })
declare id: number
@column()
declare title: string
@column()
declare model: string | null
@hasMany(() => ChatMessage, {
foreignKey: 'session_id',
localKey: 'id',
})
declare messages: HasMany<typeof ChatMessage>
@column.dateTime({ autoCreate: true })
declare created_at: DateTime
@column.dateTime({ autoCreate: true, autoUpdate: true })
declare updated_at: DateTime
}

View File

@ -0,0 +1,22 @@
import { DateTime } from 'luxon'
import { BaseModel, column, SnakeCaseNamingStrategy } from '@adonisjs/lucid/orm'
import type { ManifestType } from '../../types/collections.js'
export default class CollectionManifest extends BaseModel {
static namingStrategy = new SnakeCaseNamingStrategy()
@column({ isPrimary: true })
declare type: ManifestType
@column()
declare spec_version: string
@column({
consume: (value: any) => (typeof value === 'string' ? JSON.parse(value) : value),
prepare: (value: any) => JSON.stringify(value),
})
declare spec_data: any
@column.dateTime()
declare fetched_at: DateTime
}

View File

@ -0,0 +1,33 @@
import { DateTime } from 'luxon'
import { BaseModel, column, SnakeCaseNamingStrategy } from '@adonisjs/lucid/orm'
export default class InstalledResource extends BaseModel {
static namingStrategy = new SnakeCaseNamingStrategy()
@column({ isPrimary: true })
declare id: number
@column()
declare resource_id: string
@column()
declare resource_type: 'zim' | 'map'
@column()
declare collection_ref: string | null
@column()
declare version: string
@column()
declare url: string
@column()
declare file_path: string
@column()
declare file_size_bytes: number | null
@column.dateTime()
declare installed_at: DateTime
}

View File

@ -0,0 +1,64 @@
import { DateTime } from 'luxon'
import { BaseModel, column, SnakeCaseNamingStrategy } from '@adonisjs/lucid/orm'
import { KV_STORE_SCHEMA, type KVStoreKey, type KVStoreValue } from '../../types/kv_store.js'
import { parseBoolean } from '../utils/misc.js'
/**
* Generic key-value store model for storing various settings
* that don't necessitate their own dedicated models.
*/
export default class KVStore extends BaseModel {
static table = 'kv_store'
static namingStrategy = new SnakeCaseNamingStrategy()
@column({ isPrimary: true })
declare id: number
@column()
declare key: KVStoreKey
@column()
declare value: string | null
@column.dateTime({ autoCreate: true })
declare created_at: DateTime
@column.dateTime({ autoCreate: true, autoUpdate: true })
declare updated_at: DateTime
/**
* Get a setting value by key, automatically deserializing to the correct type.
*/
static async getValue<K extends KVStoreKey>(key: K): Promise<KVStoreValue<K> | null> {
const setting = await this.findBy('key', key)
if (!setting || setting.value === undefined || setting.value === null) {
return null
}
const raw = String(setting.value)
return (KV_STORE_SCHEMA[key] === 'boolean' ? parseBoolean(raw) : raw) as KVStoreValue<K>
}
/**
* Set a setting value by key (creates if not exists), automatically serializing to string.
*/
static async setValue<K extends KVStoreKey>(key: K, value: KVStoreValue<K>): Promise<KVStore> {
const serialized = String(value)
const setting = await this.firstOrCreate({ key }, { key, value: serialized })
if (setting.value !== serialized) {
setting.value = serialized
await setting.save()
}
return setting
}
/**
* Clear a setting value by key, storing null so getValue returns null.
*/
static async clearValue<K extends KVStoreKey>(key: K): Promise<void> {
const setting = await this.findBy('key', key)
if (setting && setting.value !== null) {
setting.value = null
await setting.save()
}
}
}
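A sketch of the typed round-trip, assuming KV_STORE_SCHEMA declares a hypothetical 'setup_complete' key as 'boolean':

await KVStore.setValue('setup_complete', true) // persisted as the string 'true'
const done = await KVStore.getValue('setup_complete') // parseBoolean('true') => true
await KVStore.clearValue('setup_complete') // getValue now returns null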

View File

@ -20,6 +20,21 @@ export default class Service extends BaseModel {
@column()
declare container_config: string | null
@column()
declare friendly_name: string | null
@column()
declare description: string | null
@column()
declare powered_by: string | null
@column()
declare display_order: number | null
@column()
declare icon: string | null // must be a TablerIcons name to be properly rendered in the UI (e.g. "IconBrandDocker")
@column({
serialize(value) {
return Boolean(value)
@ -27,6 +42,9 @@ export default class Service extends BaseModel {
})
declare installed: boolean
@column()
declare installation_status: 'idle' | 'installing' | 'error'
@column()
declare depends_on: string | null
@ -44,6 +62,15 @@ export default class Service extends BaseModel {
@column()
declare metadata: string | null
@column()
declare source_repo: string | null
@column()
declare available_update_version: string | null
@column.dateTime()
declare update_checked_at: DateTime | null
@column.dateTime({ autoCreate: true })
declare created_at: DateTime

View File

@ -0,0 +1,27 @@
import { DateTime } from 'luxon'
import { BaseModel, column, SnakeCaseNamingStrategy } from '@adonisjs/lucid/orm'
export default class WikipediaSelection extends BaseModel {
static namingStrategy = new SnakeCaseNamingStrategy()
@column({ isPrimary: true })
declare id: number
@column()
declare option_id: string
@column()
declare url: string | null
@column()
declare filename: string | null
@column()
declare status: 'none' | 'downloading' | 'installed' | 'failed'
@column.dateTime({ autoCreate: true })
declare created_at: DateTime
@column.dateTime({ autoCreate: true, autoUpdate: true })
declare updated_at: DateTime
}

View File

@ -0,0 +1,834 @@
import { inject } from '@adonisjs/core'
import logger from '@adonisjs/core/services/logger'
import transmit from '@adonisjs/transmit/services/main'
import si from 'systeminformation'
import axios from 'axios'
import { DateTime } from 'luxon'
import BenchmarkResult from '#models/benchmark_result'
import BenchmarkSetting from '#models/benchmark_setting'
import { SystemService } from '#services/system_service'
import type {
BenchmarkType,
BenchmarkStatus,
BenchmarkProgress,
HardwareInfo,
DiskType,
SystemScores,
AIScores,
SysbenchCpuResult,
SysbenchMemoryResult,
SysbenchDiskResult,
RepositorySubmission,
RepositorySubmitResponse,
RepositoryStats,
} from '../../types/benchmark.js'
import { randomUUID, createHmac } from 'node:crypto'
import { DockerService } from './docker_service.js'
import { SERVICE_NAMES } from '../../constants/service_names.js'
import { BROADCAST_CHANNELS } from '../../constants/broadcast.js'
import Dockerode from 'dockerode'
// HMAC secret for signing submissions to the benchmark repository
// This provides basic protection against casual API abuse.
// Note: Since NOMAD is open source, a determined attacker could extract this.
// For stronger protection, see challenge-response authentication.
const BENCHMARK_HMAC_SECRET = '778ba65d0bc0e23119e5ffce4b3716648a7d071f0a47ec3f'
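// For illustration only (not part of this file): the repository server can verify a
// submission by recomputing the HMAC over the same payload and comparing digests,
// e.g. (req.header(...) is a hypothetical server-side accessor):
//   const payload = req.header('X-NOMAD-Timestamp') + JSON.stringify(req.body)
//   const expected = createHmac('sha256', BENCHMARK_HMAC_SECRET).update(payload).digest('hex')
//   // reject when expected !== req.header('X-NOMAD-Signature') or the timestamp is stale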
// Default weights for the composite NOMAD score (the weights sum to 1.0)
const SCORE_WEIGHTS = {
ai_tokens_per_second: 0.30,
cpu: 0.25,
memory: 0.15,
ai_ttft: 0.10,
disk_read: 0.10,
disk_write: 0.10,
}
// Benchmark configuration constants
const SYSBENCH_IMAGE = 'severalnines/sysbench:latest'
const SYSBENCH_CONTAINER_NAME = 'nomad_benchmark_sysbench'
// Reference model for AI benchmark - small but meaningful
const AI_BENCHMARK_MODEL = 'llama3.2:1b'
const AI_BENCHMARK_PROMPT = 'Explain recursion in programming in exactly 100 words.'
// Reference scores for normalization (calibrated to 0-100 scale)
// These represent "expected" scores for a mid-range system (score ~50)
const REFERENCE_SCORES = {
cpu_events_per_second: 5000, // sysbench cpu events/sec for ~50 score
memory_ops_per_second: 5000000, // sysbench memory ops/sec for ~50 score
disk_read_mb_per_sec: 500, // 500 MB/s read for ~50 score
disk_write_mb_per_sec: 400, // 400 MB/s write for ~50 score
ai_tokens_per_second: 30, // 30 tok/s for ~50 score
ai_ttft_ms: 500, // 500ms time to first token for ~50 score (lower is better)
}
@inject()
export class BenchmarkService {
private currentBenchmarkId: string | null = null
private currentStatus: BenchmarkStatus = 'idle'
constructor(private dockerService: DockerService) {}
/**
* Run a full benchmark suite
*/
async runFullBenchmark(): Promise<BenchmarkResult> {
return this._runBenchmark('full', true)
}
/**
* Run system benchmarks only (CPU, memory, disk)
*/
async runSystemBenchmarks(): Promise<BenchmarkResult> {
return this._runBenchmark('system', false)
}
/**
* Run AI benchmark only
*/
async runAIBenchmark(): Promise<BenchmarkResult> {
return this._runBenchmark('ai', true)
}
/**
* Get the latest benchmark result
*/
async getLatestResult(): Promise<BenchmarkResult | null> {
return await BenchmarkResult.query().orderBy('created_at', 'desc').first()
}
/**
* Get all benchmark results
*/
async getAllResults(): Promise<BenchmarkResult[]> {
return await BenchmarkResult.query().orderBy('created_at', 'desc')
}
/**
* Get a specific benchmark result by ID
*/
async getResultById(benchmarkId: string): Promise<BenchmarkResult | null> {
return await BenchmarkResult.findBy('benchmark_id', benchmarkId)
}
/**
* Submit benchmark results to central repository
*/
async submitToRepository(benchmarkId?: string, anonymous?: boolean): Promise<RepositorySubmitResponse> {
const result = benchmarkId
? await this.getResultById(benchmarkId)
: await this.getLatestResult()
if (!result) {
throw new Error('No benchmark result found to submit')
}
// Only allow full benchmarks with AI data to be submitted to repository
if (result.benchmark_type !== 'full') {
throw new Error('Only full benchmarks can be shared with the community. Run a Full Benchmark to share your results.')
}
if (!result.ai_tokens_per_second || result.ai_tokens_per_second <= 0) {
throw new Error('Benchmark must include AI performance data. Ensure AI Assistant is installed and run a Full Benchmark.')
}
if (result.submitted_to_repository) {
throw new Error('Benchmark result has already been submitted')
}
const submission: RepositorySubmission = {
cpu_model: result.cpu_model,
cpu_cores: result.cpu_cores,
cpu_threads: result.cpu_threads,
ram_gb: Math.round(result.ram_bytes / (1024 * 1024 * 1024)),
disk_type: result.disk_type,
gpu_model: result.gpu_model,
cpu_score: result.cpu_score,
memory_score: result.memory_score,
disk_read_score: result.disk_read_score,
disk_write_score: result.disk_write_score,
ai_tokens_per_second: result.ai_tokens_per_second,
ai_time_to_first_token: result.ai_time_to_first_token,
nomad_score: result.nomad_score,
nomad_version: SystemService.getAppVersion(),
benchmark_version: '1.0.0',
builder_tag: anonymous ? null : result.builder_tag,
}
try {
// Generate HMAC signature for submission verification
const timestamp = Date.now().toString()
const payload = timestamp + JSON.stringify(submission)
const signature = createHmac('sha256', BENCHMARK_HMAC_SECRET)
.update(payload)
.digest('hex')
const response = await axios.post(
'https://benchmark.projectnomad.us/api/v1/submit',
submission,
{
timeout: 30000,
headers: {
'X-NOMAD-Timestamp': timestamp,
'X-NOMAD-Signature': signature,
},
}
)
if (response.data.success) {
result.submitted_to_repository = true
result.submitted_at = DateTime.now()
result.repository_id = response.data.repository_id
await result.save()
await BenchmarkSetting.setValue('last_benchmark_run', new Date().toISOString())
}
return response.data as RepositorySubmitResponse
} catch (error) {
const detail = error.response?.data?.error || error.message || 'Unknown error'
const statusCode = error.response?.status
logger.error(`Failed to submit benchmark to repository: ${detail} (Status: ${statusCode})`)
// Create an error with the status code attached for proper handling upstream
const err: any = new Error(`Failed to submit benchmark: ${detail}`)
err.statusCode = statusCode
throw err
}
}
/**
* Get comparison stats from central repository
*/
async getComparisonStats(): Promise<RepositoryStats | null> {
try {
const response = await axios.get('https://benchmark.projectnomad.us/api/v1/stats', {
timeout: 10000,
})
return response.data as RepositoryStats
} catch (error) {
logger.warn(`Failed to fetch comparison stats: ${error.message}`)
return null
}
}
/**
* Get current benchmark status
*/
getStatus(): { status: BenchmarkStatus; benchmarkId: string | null } {
return {
status: this.currentStatus,
benchmarkId: this.currentBenchmarkId,
}
}
/**
* Detect system hardware information
*/
async getHardwareInfo(): Promise<HardwareInfo> {
this._updateStatus('detecting_hardware', 'Detecting system hardware...')
try {
const [cpu, mem, diskLayout, graphics] = await Promise.all([
si.cpu(),
si.mem(),
si.diskLayout(),
si.graphics(),
])
// Determine disk type from primary disk
let diskType: DiskType = 'unknown'
if (diskLayout.length > 0) {
const primaryDisk = diskLayout[0]
if (primaryDisk.type?.toLowerCase().includes('nvme')) {
diskType = 'nvme'
} else if (primaryDisk.type?.toLowerCase().includes('ssd')) {
diskType = 'ssd'
} else if (primaryDisk.type?.toLowerCase().includes('hdd') || primaryDisk.interfaceType === 'SATA') {
// A SATA interface may front an SSD or an HDD; rotational status isn't available here, so assume HDD
diskType = 'hdd'
}
}
// Get GPU model (prefer discrete GPU with dedicated VRAM)
let gpuModel: string | null = null
if (graphics.controllers && graphics.controllers.length > 0) {
// First, look for discrete GPUs (NVIDIA, AMD discrete, or any with significant VRAM)
const discreteGpu = graphics.controllers.find((g) => {
const vendor = g.vendor?.toLowerCase() || ''
const model = g.model?.toLowerCase() || ''
// NVIDIA GPUs are always discrete
if (vendor.includes('nvidia') || model.includes('geforce') || model.includes('rtx') || model.includes('quadro')) {
return true
}
// AMD discrete GPUs (Radeon, not integrated APU graphics)
if ((vendor.includes('amd') || vendor.includes('ati')) &&
(model.includes('radeon') || model.includes('rx ') || model.includes('vega')) &&
!model.includes('graphics')) {
return true
}
// Any GPU with dedicated VRAM > 512MB is likely discrete
if (g.vram && g.vram > 512) {
return true
}
return false
})
gpuModel = discreteGpu?.model || graphics.controllers[0]?.model || null
}
// Fallback: Check Docker for nvidia runtime and query GPU model via nvidia-smi
if (!gpuModel) {
try {
const dockerInfo = await this.dockerService.docker.info()
const runtimes = dockerInfo.Runtimes || {}
if ('nvidia' in runtimes) {
logger.info('[BenchmarkService] NVIDIA container runtime detected, querying GPU model via nvidia-smi')
const systemService = new SystemService(this.dockerService) // already imported statically above
const nvidiaInfo = await systemService.getNvidiaSmiInfo()
if (Array.isArray(nvidiaInfo) && nvidiaInfo.length > 0) {
gpuModel = nvidiaInfo[0].model
} else {
logger.warn(`[BenchmarkService] NVIDIA runtime detected but failed to get GPU info: ${typeof nvidiaInfo === 'string' ? nvidiaInfo : JSON.stringify(nvidiaInfo)}`)
}
}
} catch (dockerError) {
logger.warn(`[BenchmarkService] Could not query Docker info for GPU detection: ${dockerError.message}`)
}
}
// Fallback: Extract integrated GPU from CPU model name
if (!gpuModel) {
const cpuFullName = `${cpu.manufacturer} ${cpu.brand}`
// AMD APUs: e.g., "AMD Ryzen AI 9 HX 370 w/ Radeon 890M" -> "Radeon 890M"
const radeonMatch = cpuFullName.match(/w\/\s*(Radeon\s+\d+\w*)/i)
if (radeonMatch) {
gpuModel = radeonMatch[1]
}
// Intel Core Ultra: These have Intel Arc Graphics integrated
// e.g., "Intel Core Ultra 9 285HX" -> "Intel Arc Graphics (Integrated)"
if (!gpuModel && cpu.manufacturer?.toLowerCase().includes('intel')) {
if (cpu.brand?.toLowerCase().includes('core ultra')) {
gpuModel = 'Intel Arc Graphics (Integrated)'
}
}
}
return {
cpu_model: `${cpu.manufacturer} ${cpu.brand}`,
cpu_cores: cpu.physicalCores,
cpu_threads: cpu.cores,
ram_bytes: mem.total,
disk_type: diskType,
gpu_model: gpuModel,
}
} catch (error) {
logger.error(`Error detecting hardware: ${error.message}`)
throw new Error(`Failed to detect hardware: ${error.message}`)
}
}
/**
* Main benchmark execution method
*/
private async _runBenchmark(type: BenchmarkType, includeAI: boolean): Promise<BenchmarkResult> {
if (this.currentStatus !== 'idle') {
throw new Error('A benchmark is already running')
}
this.currentBenchmarkId = randomUUID()
this._updateStatus('starting', 'Starting benchmark...')
try {
// Detect hardware
const hardware = await this.getHardwareInfo()
// Run system benchmarks
let systemScores: SystemScores = {
cpu_score: 0,
memory_score: 0,
disk_read_score: 0,
disk_write_score: 0,
}
if (type === 'full' || type === 'system') {
systemScores = await this._runSystemBenchmarks()
}
// Run AI benchmark if requested and Ollama is available
let aiScores: Partial<AIScores> = {}
if (includeAI && (type === 'full' || type === 'ai')) {
try {
aiScores = await this._runAIBenchmark()
} catch (error) {
// For AI-only benchmarks, failing is fatal - don't save useless results with all zeros
if (type === 'ai') {
throw new Error(`AI benchmark failed: ${error.message}. Make sure AI Assistant is installed and running.`)
}
// For full benchmarks, AI is optional - continue without it
logger.warn(`AI benchmark skipped: ${error.message}`)
}
}
// Calculate NOMAD score
this._updateStatus('calculating_score', 'Calculating NOMAD score...')
const nomadScore = this._calculateNomadScore(systemScores, aiScores)
// Save result
const result = await BenchmarkResult.create({
benchmark_id: this.currentBenchmarkId,
benchmark_type: type,
cpu_model: hardware.cpu_model,
cpu_cores: hardware.cpu_cores,
cpu_threads: hardware.cpu_threads,
ram_bytes: hardware.ram_bytes,
disk_type: hardware.disk_type,
gpu_model: hardware.gpu_model,
cpu_score: systemScores.cpu_score,
memory_score: systemScores.memory_score,
disk_read_score: systemScores.disk_read_score,
disk_write_score: systemScores.disk_write_score,
ai_tokens_per_second: aiScores.ai_tokens_per_second || null,
ai_model_used: aiScores.ai_model_used || null,
ai_time_to_first_token: aiScores.ai_time_to_first_token || null,
nomad_score: nomadScore,
submitted_to_repository: false,
})
this._updateStatus('completed', 'Benchmark completed successfully')
this.currentStatus = 'idle'
this.currentBenchmarkId = null
return result
} catch (error) {
this._updateStatus('error', `Benchmark failed: ${error.message}`)
this.currentStatus = 'idle'
this.currentBenchmarkId = null
throw error
}
}
/**
* Run system benchmarks using sysbench in Docker
*/
private async _runSystemBenchmarks(): Promise<SystemScores> {
// Ensure sysbench image is available
await this._ensureSysbenchImage()
// Run CPU benchmark
this._updateStatus('running_cpu', 'Running CPU benchmark...')
const cpuResult = await this._runSysbenchCpu()
// Run memory benchmark
this._updateStatus('running_memory', 'Running memory benchmark...')
const memoryResult = await this._runSysbenchMemory()
// Run disk benchmarks
this._updateStatus('running_disk_read', 'Running disk read benchmark...')
const diskReadResult = await this._runSysbenchDiskRead()
this._updateStatus('running_disk_write', 'Running disk write benchmark...')
const diskWriteResult = await this._runSysbenchDiskWrite()
// Normalize scores to 0-100 scale
return {
cpu_score: this._normalizeScore(cpuResult.events_per_second, REFERENCE_SCORES.cpu_events_per_second),
memory_score: this._normalizeScore(memoryResult.operations_per_second, REFERENCE_SCORES.memory_ops_per_second),
disk_read_score: this._normalizeScore(diskReadResult.read_mb_per_sec, REFERENCE_SCORES.disk_read_mb_per_sec),
disk_write_score: this._normalizeScore(diskWriteResult.write_mb_per_sec, REFERENCE_SCORES.disk_write_mb_per_sec),
}
}
/**
* Run AI benchmark using Ollama
*/
private async _runAIBenchmark(): Promise<AIScores> {
try {
this._updateStatus('running_ai', 'Running AI benchmark...')
const ollamaAPIURL = await this.dockerService.getServiceURL(SERVICE_NAMES.OLLAMA)
if (!ollamaAPIURL) {
throw new Error('AI Assistant service location could not be determined. Ensure AI Assistant is installed and running.')
}
// Check if Ollama is available
try {
await axios.get(`${ollamaAPIURL}/api/tags`, { timeout: 5000 })
} catch (error) {
const errorCode = error.code || error.response?.status || 'unknown'
throw new Error(`Ollama is not running or not accessible (${errorCode}). Ensure AI Assistant is installed and running.`)
}
// Check if the benchmark model is available, pull if not
const ollamaService = new (await import('./ollama_service.js')).OllamaService()
const modelResponse = await ollamaService.downloadModel(AI_BENCHMARK_MODEL)
if (!modelResponse.success) {
throw new Error(`Model does not exist and failed to download: ${modelResponse.message}`)
}
// Run inference benchmark
const startTime = Date.now()
const response = await axios.post(
`${ollamaAPIURL}/api/generate`,
{
model: AI_BENCHMARK_MODEL,
prompt: AI_BENCHMARK_PROMPT,
stream: false,
},
{ timeout: 120000 }
)
const endTime = Date.now()
const totalTime = (endTime - startTime) / 1000 // seconds
// Ollama returns eval_count (tokens generated) and eval_duration (nanoseconds)
if (response.data.eval_count && response.data.eval_duration) {
const tokenCount = response.data.eval_count
const evalDurationSeconds = response.data.eval_duration / 1e9
const tokensPerSecond = tokenCount / evalDurationSeconds
// Time to first token from prompt_eval_duration
const ttft = response.data.prompt_eval_duration
? response.data.prompt_eval_duration / 1e6 // Convert to ms
: (totalTime * 1000) / 2 // Estimate if not available
return {
ai_tokens_per_second: Math.round(tokensPerSecond * 100) / 100,
ai_model_used: AI_BENCHMARK_MODEL,
ai_time_to_first_token: Math.round(ttft * 100) / 100,
}
}
// Fallback: estimate tokens from the whitespace-split word count (~1.3 tokens per word)
const wordCount = response.data.response?.split(' ').length
const estimatedTokens = wordCount ? wordCount * 1.3 : 100
const tokensPerSecond = estimatedTokens / totalTime
return {
ai_tokens_per_second: Math.round(tokensPerSecond * 100) / 100,
ai_model_used: AI_BENCHMARK_MODEL,
ai_time_to_first_token: Math.round((totalTime * 1000) / 2),
}
} catch (error) {
throw new Error(`AI benchmark failed: ${error.message}`)
}
}
/**
* Calculate weighted NOMAD score
*/
private _calculateNomadScore(systemScores: SystemScores, aiScores: Partial<AIScores>): number {
let totalWeight = 0
let weightedSum = 0
// CPU score
weightedSum += systemScores.cpu_score * SCORE_WEIGHTS.cpu
totalWeight += SCORE_WEIGHTS.cpu
// Memory score
weightedSum += systemScores.memory_score * SCORE_WEIGHTS.memory
totalWeight += SCORE_WEIGHTS.memory
// Disk scores
weightedSum += systemScores.disk_read_score * SCORE_WEIGHTS.disk_read
totalWeight += SCORE_WEIGHTS.disk_read
weightedSum += systemScores.disk_write_score * SCORE_WEIGHTS.disk_write
totalWeight += SCORE_WEIGHTS.disk_write
// AI scores (if available)
if (aiScores.ai_tokens_per_second !== undefined && aiScores.ai_tokens_per_second !== null) {
const aiScore = this._normalizeScore(
aiScores.ai_tokens_per_second,
REFERENCE_SCORES.ai_tokens_per_second
)
weightedSum += aiScore * SCORE_WEIGHTS.ai_tokens_per_second
totalWeight += SCORE_WEIGHTS.ai_tokens_per_second
}
if (aiScores.ai_time_to_first_token !== undefined && aiScores.ai_time_to_first_token !== null) {
// For TTFT, lower is better, so we invert the score
const ttftScore = this._normalizeScoreInverse(
aiScores.ai_time_to_first_token,
REFERENCE_SCORES.ai_ttft_ms
)
weightedSum += ttftScore * SCORE_WEIGHTS.ai_ttft
totalWeight += SCORE_WEIGHTS.ai_ttft
}
// Normalize by actual weight used (in case AI benchmarks were skipped)
const nomadScore = totalWeight > 0 ? (weightedSum / totalWeight) * 100 : 0
return Math.round(Math.min(100, Math.max(0, nomadScore)) * 100) / 100
}
/**
* Normalize a raw value against its reference using log scaling, returning a
* 0-1 fraction (0.5 at the reference value); _calculateNomadScore scales the
* weighted sum of these fractions back to 0-100. The log scale provides
* diminishing returns for very high values.
*/
private _normalizeScore(value: number, reference: number): number {
if (value <= 0) return 0
// Log scale with widened range: dividing log2 by 3 prevents scores from
// clamping to 0% for below-average hardware. Gives 50% at reference value.
const ratio = value / reference
const score = 50 * (1 + Math.log2(Math.max(0.01, ratio)) / 3)
return Math.min(100, Math.max(0, score)) / 100
}
/**
* Normalize a value where lower is better (like latency); returns a 0-1 fraction like _normalizeScore
*/
private _normalizeScoreInverse(value: number, reference: number): number {
if (value <= 0) return 1
// Inverse: lower values = higher scores, with widened log range
const ratio = reference / value
const score = 50 * (1 + Math.log2(Math.max(0.01, ratio)) / 3)
return Math.min(100, Math.max(0, score)) / 100
}
/**
* Ensure sysbench Docker image is available
*/
private async _ensureSysbenchImage(): Promise<void> {
try {
await this.dockerService.docker.getImage(SYSBENCH_IMAGE).inspect()
} catch {
this._updateStatus('starting', `Pulling sysbench image...`)
const pullStream = await this.dockerService.docker.pull(SYSBENCH_IMAGE)
// Reject on pull errors instead of resolving unconditionally
await new Promise<void>((resolve, reject) =>
this.dockerService.docker.modem.followProgress(pullStream, (err: Error | null) => (err ? reject(err) : resolve()))
)
}
}
/**
* Run sysbench CPU benchmark
*/
private async _runSysbenchCpu(): Promise<SysbenchCpuResult> {
const output = await this._runSysbenchCommand([
'sysbench',
'cpu',
'--cpu-max-prime=20000',
'--threads=4',
'--time=30',
'run',
])
// Parse output for events per second
const eventsMatch = output.match(/events per second:\s*([\d.]+)/i)
const totalTimeMatch = output.match(/total time:\s*([\d.]+)s/i)
const totalEventsMatch = output.match(/total number of events:\s*(\d+)/i)
logger.debug(`[BenchmarkService] CPU output parsing - events/s: ${eventsMatch?.[1]}, total_time: ${totalTimeMatch?.[1]}, total_events: ${totalEventsMatch?.[1]}`)
return {
events_per_second: eventsMatch ? parseFloat(eventsMatch[1]) : 0,
total_time: totalTimeMatch ? parseFloat(totalTimeMatch[1]) : 30,
total_events: totalEventsMatch ? parseInt(totalEventsMatch[1]) : 0,
}
}
/**
* Run sysbench memory benchmark
*/
private async _runSysbenchMemory(): Promise<SysbenchMemoryResult> {
const output = await this._runSysbenchCommand([
'sysbench',
'memory',
'--memory-block-size=1K',
'--memory-total-size=10G',
'--threads=4',
'run',
])
// Parse output
const opsMatch = output.match(/Total operations:\s*\d+\s*\(([\d.]+)\s*per second\)/i)
const transferMatch = output.match(/([\d.]+)\s*MiB\/sec/i)
const timeMatch = output.match(/total time:\s*([\d.]+)s/i)
return {
operations_per_second: opsMatch ? parseFloat(opsMatch[1]) : 0,
transfer_rate_mb_per_sec: transferMatch ? parseFloat(transferMatch[1]) : 0,
total_time: timeMatch ? parseFloat(timeMatch[1]) : 0,
}
}
/**
* Run sysbench disk read benchmark
*/
private async _runSysbenchDiskRead(): Promise<SysbenchDiskResult> {
// Run prepare, test, and cleanup in a single container
// This is necessary because each container has its own filesystem
const output = await this._runSysbenchCommand([
'sh',
'-c',
'sysbench fileio --file-total-size=1G --file-num=4 prepare && ' +
'sysbench fileio --file-total-size=1G --file-num=4 --file-test-mode=seqrd --time=30 run && ' +
'sysbench fileio --file-total-size=1G --file-num=4 cleanup',
])
// Parse output - look for the Throughput section
const readMatch = output.match(/read,\s*MiB\/s:\s*([\d.]+)/i)
const readsPerSecMatch = output.match(/reads\/s:\s*([\d.]+)/i)
logger.debug(`[BenchmarkService] Disk read output parsing - read: ${readMatch?.[1]}, reads/s: ${readsPerSecMatch?.[1]}`)
return {
reads_per_second: readsPerSecMatch ? parseFloat(readsPerSecMatch[1]) : 0,
writes_per_second: 0,
read_mb_per_sec: readMatch ? parseFloat(readMatch[1]) : 0,
write_mb_per_sec: 0,
total_time: 30,
}
}
/**
* Run sysbench disk write benchmark
*/
private async _runSysbenchDiskWrite(): Promise<SysbenchDiskResult> {
// Run prepare, test, and cleanup in a single container
// This is necessary because each container has its own filesystem
const output = await this._runSysbenchCommand([
'sh',
'-c',
'sysbench fileio --file-total-size=1G --file-num=4 prepare && ' +
'sysbench fileio --file-total-size=1G --file-num=4 --file-test-mode=seqwr --time=30 run && ' +
'sysbench fileio --file-total-size=1G --file-num=4 cleanup',
])
// Parse output - look for the Throughput section
const writeMatch = output.match(/written,\s*MiB\/s:\s*([\d.]+)/i)
const writesPerSecMatch = output.match(/writes\/s:\s*([\d.]+)/i)
logger.debug(`[BenchmarkService] Disk write output parsing - written: ${writeMatch?.[1]}, writes/s: ${writesPerSecMatch?.[1]}`)
return {
reads_per_second: 0,
writes_per_second: writesPerSecMatch ? parseFloat(writesPerSecMatch[1]) : 0,
read_mb_per_sec: 0,
write_mb_per_sec: writeMatch ? parseFloat(writeMatch[1]) : 0,
total_time: 30,
}
}
/**
* Run a sysbench command in a Docker container
*/
private async _runSysbenchCommand(cmd: string[]): Promise<string> {
let container: Dockerode.Container | null = null
try {
// Create container with TTY to avoid multiplexed output
container = await this.dockerService.docker.createContainer({
Image: SYSBENCH_IMAGE,
Cmd: cmd,
name: `${SYSBENCH_CONTAINER_NAME}_${Date.now()}`,
Tty: true, // Important: prevents multiplexed stdout/stderr headers
HostConfig: {
AutoRemove: false, // Don't auto-remove to avoid race condition with fetching logs
},
})
// Start container
await container.start()
// Wait for completion
await container.wait()
// Get logs after container has finished
const logs = await container.logs({
stdout: true,
stderr: true,
})
// Parse logs (Docker logs include header bytes)
const output = logs.toString('utf8')
.replace(/[\x00-\x08]/g, '') // Remove control characters
.trim()
// Manually remove the container after getting logs
try {
await container.remove()
} catch (removeError) {
// Log but don't fail if removal fails (container might already be gone)
logger.warn(`Failed to remove sysbench container: ${removeError.message}`)
}
return output
} catch (error) {
// Clean up container on error if it exists
if (container) {
try {
await container.remove({ force: true })
} catch (removeError) {
// Ignore removal errors
}
}
logger.error(`Sysbench command failed: ${error.message}`)
throw new Error(`Sysbench command failed: ${error.message}`)
}
}
/**
* Broadcast benchmark progress update
*/
private _updateStatus(status: BenchmarkStatus, message: string) {
this.currentStatus = status
const progress: BenchmarkProgress = {
status,
progress: this._getProgressPercent(status),
message,
current_stage: this._getStageLabel(status),
timestamp: new Date().toISOString(),
}
transmit.broadcast(BROADCAST_CHANNELS.BENCHMARK_PROGRESS, {
benchmark_id: this.currentBenchmarkId,
...progress,
})
logger.info(`[BenchmarkService] ${status}: ${message}`)
}
/**
* Get progress percentage for a given status
*/
private _getProgressPercent(status: BenchmarkStatus): number {
const progressMap: Record<BenchmarkStatus, number> = {
idle: 0,
starting: 5,
detecting_hardware: 10,
running_cpu: 25,
running_memory: 40,
running_disk_read: 55,
running_disk_write: 70,
downloading_ai_model: 80,
running_ai: 85,
calculating_score: 95,
completed: 100,
error: 0,
}
return progressMap[status] || 0
}
/**
* Get human-readable stage label
*/
private _getStageLabel(status: BenchmarkStatus): string {
const labelMap: Record<BenchmarkStatus, string> = {
idle: 'Idle',
starting: 'Starting',
detecting_hardware: 'Detecting Hardware',
running_cpu: 'CPU Benchmark',
running_memory: 'Memory Benchmark',
running_disk_read: 'Disk Read Test',
running_disk_write: 'Disk Write Test',
downloading_ai_model: 'Downloading AI Model',
running_ai: 'AI Inference Test',
calculating_score: 'Calculating Score',
completed: 'Complete',
error: 'Error',
}
return labelMap[status] || status
}
}
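To make the widened log scale concrete, a standalone sketch of the arithmetic in _normalizeScore (the reference is REFERENCE_SCORES.cpu_events_per_second; the sample measurements are illustrative):

// Mirrors _normalizeScore: 50 * (1 + log2(value / reference) / 3), clamped to [0, 100], then / 100
function normalize(value: number, reference: number): number {
if (value <= 0) return 0
const score = 50 * (1 + Math.log2(Math.max(0.01, value / reference)) / 3)
return Math.min(100, Math.max(0, score)) / 100
}
normalize(5000, 5000) // 0.5 — the reference value lands at 50%
normalize(1993, 5000) // ≈0.28 — below-average hardware still scores above zero
normalize(40000, 5000) // 1.0 — eight times the reference saturates at 100%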

View File

@ -0,0 +1,289 @@
import ChatSession from '#models/chat_session'
import ChatMessage from '#models/chat_message'
import logger from '@adonisjs/core/services/logger'
import { DateTime } from 'luxon'
import { inject } from '@adonisjs/core'
import { OllamaService } from './ollama_service.js'
import { DEFAULT_QUERY_REWRITE_MODEL, SYSTEM_PROMPTS } from '../../constants/ollama.js'
import { toTitleCase } from '../utils/misc.js'
@inject()
export class ChatService {
constructor(private ollamaService: OllamaService) {}
async getAllSessions() {
try {
const sessions = await ChatSession.query().orderBy('updated_at', 'desc')
return sessions.map((session) => ({
id: session.id.toString(),
title: session.title,
model: session.model,
timestamp: session.updated_at.toJSDate(),
lastMessage: null, // Will be populated from messages if needed
}))
} catch (error) {
logger.error(
`[ChatService] Failed to get sessions: ${error instanceof Error ? error.message : error}`
)
return []
}
}
async getChatSuggestions() {
try {
const models = await this.ollamaService.getModels()
if (!models || models.length === 0) {
return [] // If no models are available, return empty suggestions
}
// Larger models generally give "better" responses, so pick the largest one
const largestModel = models.reduce((prev, current) => {
return prev.size > current.size ? prev : current
})
if (!largestModel) {
return []
}
const response = await this.ollamaService.chat({
model: largestModel.name,
messages: [
{
role: 'user',
content: SYSTEM_PROMPTS.chat_suggestions,
}
],
stream: false,
})
if (response && response.message && response.message.content) {
const content = response.message.content.trim()
// Handle both comma-separated and newline-separated formats
let suggestions: string[] = []
// Try splitting by commas first
if (content.includes(',')) {
suggestions = content.split(',').map((s) => s.trim())
}
// Fall back to newline separation
else {
suggestions = content
.split(/\r?\n/)
.map((s) => s.trim())
// Remove numbered list markers (1., 2., 3., etc.) and bullet points
.map((s) => s.replace(/^\d+\.\s*/, '').replace(/^[-*•]\s*/, ''))
// Remove surrounding quotes if present
.map((s) => s.replace(/^["']|["']$/g, ''))
}
// Filter out empty strings and limit to 3 suggestions
const filtered = suggestions
.filter((s) => s.length > 0)
.slice(0, 3)
return filtered.map((s) => toTitleCase(s))
} else {
return []
}
} catch (error) {
logger.error(
`[ChatService] Failed to get chat suggestions: ${
error instanceof Error ? error.message : error
}`
)
return []
}
}
async getSession(sessionId: number) {
try {
const session = await ChatSession.query().where('id', sessionId).preload('messages').first()
if (!session) {
return null
}
return {
id: session.id.toString(),
title: session.title,
model: session.model,
timestamp: session.updated_at.toJSDate(),
messages: session.messages.map((msg) => ({
id: msg.id.toString(),
role: msg.role,
content: msg.content,
timestamp: msg.created_at.toJSDate(),
})),
}
} catch (error) {
logger.error(
`[ChatService] Failed to get session ${sessionId}: ${
error instanceof Error ? error.message : error
}`
)
return null
}
}
async createSession(title: string, model?: string) {
try {
const session = await ChatSession.create({
title,
model: model || null,
})
return {
id: session.id.toString(),
title: session.title,
model: session.model,
timestamp: session.created_at.toJSDate(),
}
} catch (error) {
logger.error(
`[ChatService] Failed to create session: ${error instanceof Error ? error.message : error}`
)
throw new Error('Failed to create chat session')
}
}
async updateSession(sessionId: number, data: { title?: string; model?: string }) {
try {
const session = await ChatSession.findOrFail(sessionId)
if (data.title) {
session.title = data.title
}
if (data.model !== undefined) {
session.model = data.model
}
await session.save()
return {
id: session.id.toString(),
title: session.title,
model: session.model,
timestamp: session.updated_at.toJSDate(),
}
} catch (error) {
logger.error(
`[ChatService] Failed to update session ${sessionId}: ${
error instanceof Error ? error.message : error
}`
)
throw new Error('Failed to update chat session')
}
}
async addMessage(sessionId: number, role: 'system' | 'user' | 'assistant', content: string) {
try {
const message = await ChatMessage.create({
session_id: sessionId,
role,
content,
})
// Update session's updated_at timestamp
const session = await ChatSession.findOrFail(sessionId)
session.updated_at = DateTime.now()
await session.save()
return {
id: message.id.toString(),
role: message.role,
content: message.content,
timestamp: message.created_at.toJSDate(),
}
} catch (error) {
logger.error(
`[ChatService] Failed to add message to session ${sessionId}: ${
error instanceof Error ? error.message : error
}`
)
throw new Error('Failed to add message')
}
}
async deleteSession(sessionId: number) {
try {
const session = await ChatSession.findOrFail(sessionId)
await session.delete()
return { success: true }
} catch (error) {
logger.error(
`[ChatService] Failed to delete session ${sessionId}: ${
error instanceof Error ? error.message : error
}`
)
throw new Error('Failed to delete chat session')
}
}
async getMessageCount(sessionId: number): Promise<number> {
try {
const count = await ChatMessage.query().where('session_id', sessionId).count('* as total')
return Number(count[0].$extras.total)
} catch (error) {
logger.error(
`[ChatService] Failed to get message count for session ${sessionId}: ${error instanceof Error ? error.message : error}`
)
return 0
}
}
async generateTitle(sessionId: number, userMessage: string, assistantMessage: string) {
try {
const models = await this.ollamaService.getModels()
const titleModelAvailable = models?.some((m) => m.name === DEFAULT_QUERY_REWRITE_MODEL)
let title: string
if (!titleModelAvailable) {
title = userMessage.slice(0, 57) + (userMessage.length > 57 ? '...' : '')
} else {
const response = await this.ollamaService.chat({
model: DEFAULT_QUERY_REWRITE_MODEL,
messages: [
{ role: 'system', content: SYSTEM_PROMPTS.title_generation },
{ role: 'user', content: userMessage },
{ role: 'assistant', content: assistantMessage },
],
})
title = response?.message?.content?.trim()
if (!title) {
title = userMessage.slice(0, 57) + (userMessage.length > 57 ? '...' : '')
}
}
await this.updateSession(sessionId, { title })
logger.info(`[ChatService] Generated title for session ${sessionId}: "${title}"`)
} catch (error) {
logger.error(
`[ChatService] Failed to generate title for session ${sessionId}: ${error instanceof Error ? error.message : error}`
)
// Fall back to truncated user message
try {
const fallbackTitle = userMessage.slice(0, 57) + (userMessage.length > 57 ? '...' : '')
await this.updateSession(sessionId, { title: fallbackTitle })
} catch {
// Silently fail - session keeps "New Chat" title
}
}
}
async deleteAllSessions() {
try {
await ChatSession.query().delete()
return { success: true, message: 'All chat sessions deleted' }
} catch (error) {
logger.error(
`[ChatService] Failed to delete all sessions: ${
error instanceof Error ? error.message : error
}`
)
throw new Error('Failed to delete all chat sessions')
}
}
}
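To illustrate the suggestion parsing in getChatSuggestions, a sketch with an invented model response (assuming toTitleCase capitalizes each word):

const content = '1. How does solar power work?\n2. "Explain water purification"\n- First aid basics'
// No commas, so the newline branch runs: list markers and surrounding quotes are
// stripped, empties filtered out, the first 3 kept, then each is title-cased:
// => ['How Does Solar Power Work?', 'Explain Water Purification', 'First Aid Basics']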

View File

@ -0,0 +1,317 @@
import axios from 'axios'
import vine from '@vinejs/vine'
import logger from '@adonisjs/core/services/logger'
import { DateTime } from 'luxon'
import { join } from 'path'
import CollectionManifest from '#models/collection_manifest'
import InstalledResource from '#models/installed_resource'
import { zimCategoriesSpecSchema, mapsSpecSchema, wikipediaSpecSchema } from '#validators/curated_collections'
import {
ensureDirectoryExists,
listDirectoryContents,
getFileStatsIfExists,
ZIM_STORAGE_PATH,
} from '../utils/fs.js'
import type {
ManifestType,
ZimCategoriesSpec,
MapsSpec,
CategoryWithStatus,
CollectionWithStatus,
SpecResource,
SpecTier,
} from '../../types/collections.js'
const SPEC_URLS: Record<ManifestType, string> = {
zim_categories: 'https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/collections/kiwix-categories.json',
maps: 'https://github.com/Crosstalk-Solutions/project-nomad/raw/refs/heads/main/collections/maps.json',
wikipedia: 'https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/collections/wikipedia.json',
}
const VALIDATORS: Record<ManifestType, any> = {
zim_categories: zimCategoriesSpecSchema,
maps: mapsSpecSchema,
wikipedia: wikipediaSpecSchema,
}
export class CollectionManifestService {
private readonly mapStoragePath = '/storage/maps'
// ---- Spec management ----
async fetchAndCacheSpec(type: ManifestType): Promise<boolean> {
try {
const response = await axios.get(SPEC_URLS[type], { timeout: 15000 })
const validated = await vine.validate({
schema: VALIDATORS[type],
data: response.data,
})
const existing = await CollectionManifest.find(type)
const specVersion = validated.spec_version
if (existing) {
const changed = existing.spec_version !== specVersion
existing.spec_version = specVersion
existing.spec_data = validated
existing.fetched_at = DateTime.now()
await existing.save()
return changed
}
await CollectionManifest.create({
type,
spec_version: specVersion,
spec_data: validated,
fetched_at: DateTime.now(),
})
return true
} catch (error) {
logger.error(`[CollectionManifestService] Failed to fetch spec for ${type}: ${error?.message || error}`)
return false
}
}
async getCachedSpec<T>(type: ManifestType): Promise<T | null> {
const manifest = await CollectionManifest.find(type)
if (!manifest) return null
return manifest.spec_data as T
}
async getSpecWithFallback<T>(type: ManifestType): Promise<T | null> {
try {
await this.fetchAndCacheSpec(type)
} catch {
// Fetch failed, will fall back to cache
}
return this.getCachedSpec<T>(type)
}
// ---- Status computation ----
async getCategoriesWithStatus(): Promise<CategoryWithStatus[]> {
const spec = await this.getSpecWithFallback<ZimCategoriesSpec>('zim_categories')
if (!spec) return []
const installedResources = await InstalledResource.query().where('resource_type', 'zim')
const installedMap = new Map(installedResources.map((r) => [r.resource_id, r]))
return spec.categories.map((category) => ({
...category,
installedTierSlug: this.getInstalledTierForCategory(category.tiers, installedMap),
}))
}
async getMapCollectionsWithStatus(): Promise<CollectionWithStatus[]> {
const spec = await this.getSpecWithFallback<MapsSpec>('maps')
if (!spec) return []
const installedResources = await InstalledResource.query().where('resource_type', 'map')
const installedIds = new Set(installedResources.map((r) => r.resource_id))
return spec.collections.map((collection) => {
const installedCount = collection.resources.filter((r) => installedIds.has(r.id)).length
return {
...collection,
all_installed: installedCount === collection.resources.length,
installed_count: installedCount,
total_count: collection.resources.length,
}
})
}
// ---- Tier resolution ----
static resolveTierResources(tier: SpecTier, allTiers: SpecTier[]): SpecResource[] {
const visited = new Set<string>()
return CollectionManifestService._resolveTierResourcesInner(tier, allTiers, visited)
}
private static _resolveTierResourcesInner(
tier: SpecTier,
allTiers: SpecTier[],
visited: Set<string>
): SpecResource[] {
if (visited.has(tier.slug)) return [] // cycle detection
visited.add(tier.slug)
const resources: SpecResource[] = []
if (tier.includesTier) {
const included = allTiers.find((t) => t.slug === tier.includesTier)
if (included) {
resources.push(...CollectionManifestService._resolveTierResourcesInner(included, allTiers, visited))
}
}
resources.push(...tier.resources)
return resources
}
getInstalledTierForCategory(
tiers: SpecTier[],
installedMap: Map<string, InstalledResource>
): string | undefined {
// Check from highest tier to lowest (tiers are ordered low to high in spec)
const reversedTiers = [...tiers].reverse()
for (const tier of reversedTiers) {
const resolved = CollectionManifestService.resolveTierResources(tier, tiers)
if (resolved.length === 0) continue
const allInstalled = resolved.every((r) => installedMap.has(r.id))
if (allInstalled) {
return tier.slug
}
}
return undefined
}
// ---- Filename parsing ----
static parseZimFilename(filename: string): { resource_id: string; version: string } | null {
const name = filename.replace(/\.zim$/, '')
const match = name.match(/^(.+)_(\d{4}-\d{2})$/)
if (!match) return null
return { resource_id: match[1], version: match[2] }
}
static parseMapFilename(filename: string): { resource_id: string; version: string } | null {
const name = filename.replace(/\.pmtiles$/, '')
const match = name.match(/^(.+)_(\d{4}-\d{2})$/)
if (!match) return null
return { resource_id: match[1], version: match[2] }
}
// ---- Filesystem reconciliation ----
async reconcileFromFilesystem(): Promise<{ zim: number; map: number }> {
let zimCount = 0
let mapCount = 0
console.log("RECONCILING FILESYSTEM MANIFESTS...")
// Reconcile ZIM files
try {
const zimDir = join(process.cwd(), ZIM_STORAGE_PATH)
await ensureDirectoryExists(zimDir)
const zimItems = await listDirectoryContents(zimDir)
const zimFiles = zimItems.filter((f) => f.name.endsWith('.zim'))
logger.debug(`[CollectionManifestService] Found ${zimFiles.length} ZIM files on disk, reconciling with database`)
// Get spec for URL lookup
const zimSpec = await this.getCachedSpec<ZimCategoriesSpec>('zim_categories')
const specResourceMap = new Map<string, SpecResource>()
if (zimSpec) {
for (const cat of zimSpec.categories) {
for (const tier of cat.tiers) {
for (const res of tier.resources) {
specResourceMap.set(res.id, res)
}
}
}
}
const seenZimIds = new Set<string>()
for (const file of zimFiles) {
logger.debug(`[CollectionManifestService] Processing ZIM file: ${file.name}`)
// Skip Wikipedia files (managed by WikipediaSelection model)
if (file.name.startsWith('wikipedia_en_')) continue
const parsed = CollectionManifestService.parseZimFilename(file.name)
logger.debug(`[CollectionManifestService] Parsed ZIM filename: ${JSON.stringify(parsed)}`)
if (!parsed) continue
seenZimIds.add(parsed.resource_id)
const specRes = specResourceMap.get(parsed.resource_id)
const filePath = join(zimDir, file.name)
const stats = await getFileStatsIfExists(filePath)
await InstalledResource.updateOrCreate(
{ resource_id: parsed.resource_id, resource_type: 'zim' },
{
version: parsed.version,
url: specRes?.url || '',
file_path: filePath,
file_size_bytes: stats ? Number(stats.size) : null,
installed_at: DateTime.now(),
}
)
zimCount++
}
// Remove entries for ZIM files no longer on disk
const existingZim = await InstalledResource.query().where('resource_type', 'zim')
for (const entry of existingZim) {
if (!seenZimIds.has(entry.resource_id)) {
await entry.delete()
}
}
} catch (error) {
logger.error('[CollectionManifestService] Error reconciling ZIM files:', error)
}
// Reconcile map files
try {
const mapDir = join(process.cwd(), this.mapStoragePath, 'pmtiles')
await ensureDirectoryExists(mapDir)
const mapItems = await listDirectoryContents(mapDir)
const mapFiles = mapItems.filter((f) => f.name.endsWith('.pmtiles'))
// Get spec for URL/version lookup
const mapSpec = await this.getCachedSpec<MapsSpec>('maps')
const mapResourceMap = new Map<string, SpecResource>()
if (mapSpec) {
for (const col of mapSpec.collections) {
for (const res of col.resources) {
mapResourceMap.set(res.id, res)
}
}
}
const seenMapIds = new Set<string>()
for (const file of mapFiles) {
const parsed = CollectionManifestService.parseMapFilename(file.name)
if (!parsed) continue
seenMapIds.add(parsed.resource_id)
const specRes = mapResourceMap.get(parsed.resource_id)
const filePath = join(mapDir, file.name)
const stats = await getFileStatsIfExists(filePath)
await InstalledResource.updateOrCreate(
{ resource_id: parsed.resource_id, resource_type: 'map' },
{
version: parsed.version,
url: specRes?.url || '',
file_path: filePath,
file_size_bytes: stats ? Number(stats.size) : null,
installed_at: DateTime.now(),
}
)
mapCount++
}
// Remove entries for map files no longer on disk
const existingMaps = await InstalledResource.query().where('resource_type', 'map')
for (const entry of existingMaps) {
if (!seenMapIds.has(entry.resource_id)) {
await entry.delete()
}
}
} catch (error) {
logger.error('[CollectionManifestService] Error reconciling map files:', error)
}
logger.info(`[CollectionManifestService] Reconciled ${zimCount} ZIM files, ${mapCount} map files`)
return { zim: zimCount, map: mapCount }
}
}
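
A quick illustration of how resolveTierResources flattens the includesTier chain. This is a hedged sketch: the tier slugs and resource ids below are invented, not taken from the real spec files.

// Hypothetical tiers: 'plus' layers extra content on top of 'basic'.
const tiers: SpecTier[] = [
  { slug: 'basic', resources: [{ id: 'wiktionary_en' }] } as SpecTier,
  { slug: 'plus', includesTier: 'basic', resources: [{ id: 'wikivoyage_en' }] } as SpecTier,
]
const ids = CollectionManifestService.resolveTierResources(tiers[1], tiers).map((r) => r.id)
// ids -> ['wiktionary_en', 'wikivoyage_en']; a tier that (transitively) includes
// itself terminates at the visited-set check instead of recursing forever.

Because getInstalledTierForCategory scans tiers from highest to lowest, a user who has everything in 'plus' installed is reported at 'plus' even though 'basic' is, by construction, also fully installed.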

View File

@ -0,0 +1,157 @@
import logger from '@adonisjs/core/services/logger'
import env from '#start/env'
import axios from 'axios'
import InstalledResource from '#models/installed_resource'
import { RunDownloadJob } from '../jobs/run_download_job.js'
import { ZIM_STORAGE_PATH } from '../utils/fs.js'
import { join } from 'path'
import type {
ResourceUpdateCheckRequest,
ResourceUpdateInfo,
ContentUpdateCheckResult,
} from '../../types/collections.js'
import { NOMAD_API_DEFAULT_BASE_URL } from '../../constants/misc.js'
const MAP_STORAGE_PATH = '/storage/maps'
const ZIM_MIME_TYPES = ['application/x-zim', 'application/x-openzim', 'application/octet-stream']
const PMTILES_MIME_TYPES = ['application/vnd.pmtiles', 'application/octet-stream']
export class CollectionUpdateService {
async checkForUpdates(): Promise<ContentUpdateCheckResult> {
const nomadAPIURL = env.get('NOMAD_API_URL') || NOMAD_API_DEFAULT_BASE_URL
if (!nomadAPIURL) {
return {
updates: [],
checked_at: new Date().toISOString(),
error: 'Nomad API is not configured. Set the NOMAD_API_URL environment variable.',
}
}
const installed = await InstalledResource.all()
if (installed.length === 0) {
return {
updates: [],
checked_at: new Date().toISOString(),
}
}
const requestBody: ResourceUpdateCheckRequest = {
resources: installed.map((r) => ({
resource_id: r.resource_id,
resource_type: r.resource_type,
installed_version: r.version,
})),
}
try {
const response = await axios.post<ResourceUpdateInfo[]>(`${nomadAPIURL}/api/v1/resources/check-updates`, requestBody, {
timeout: 15000,
})
logger.info(
`[CollectionUpdateService] Update check complete: ${response.data.length} update(s) available`
)
return {
updates: response.data,
checked_at: new Date().toISOString(),
}
} catch (error) {
if (axios.isAxiosError(error) && error.response) {
logger.error(
`[CollectionUpdateService] Nomad API returned ${error.response.status}: ${JSON.stringify(error.response.data)}`
)
return {
updates: [],
checked_at: new Date().toISOString(),
error: `Nomad API returned status ${error.response.status}`,
}
}
const message =
error instanceof Error ? error.message : 'Unknown error contacting Nomad API'
logger.error(`[CollectionUpdateService] Failed to check for updates: ${message}`)
return {
updates: [],
checked_at: new Date().toISOString(),
error: `Failed to contact Nomad API: ${message}`,
}
}
}
async applyUpdate(
update: ResourceUpdateInfo
): Promise<{ success: boolean; jobId?: string; error?: string }> {
// Check if a download is already in progress for this URL
const existingJob = await RunDownloadJob.getByUrl(update.download_url)
if (existingJob) {
const state = await existingJob.getState()
if (state === 'active' || state === 'waiting' || state === 'delayed') {
return {
success: false,
error: `A download is already in progress for ${update.resource_id}`,
}
}
}
const filename = this.buildFilename(update)
const filepath = this.buildFilepath(update, filename)
const result = await RunDownloadJob.dispatch({
url: update.download_url,
filepath,
timeout: 30000,
allowedMimeTypes:
update.resource_type === 'zim' ? ZIM_MIME_TYPES : PMTILES_MIME_TYPES,
forceNew: true,
filetype: update.resource_type,
resourceMetadata: {
resource_id: update.resource_id,
version: update.latest_version,
collection_ref: null,
},
})
if (!result || !result.job) {
return { success: false, error: 'Failed to dispatch download job' }
}
logger.info(
`[CollectionUpdateService] Dispatched update download for ${update.resource_id}: ${update.installed_version} → ${update.latest_version}`
)
return { success: true, jobId: result.job.id }
}
async applyAllUpdates(
updates: ResourceUpdateInfo[]
): Promise<{ results: Array<{ resource_id: string; success: boolean; jobId?: string; error?: string }> }> {
const results: Array<{
resource_id: string
success: boolean
jobId?: string
error?: string
}> = []
for (const update of updates) {
const result = await this.applyUpdate(update)
results.push({ resource_id: update.resource_id, ...result })
}
return { results }
}
private buildFilename(update: ResourceUpdateInfo): string {
if (update.resource_type === 'zim') {
return `${update.resource_id}_${update.latest_version}.zim`
}
return `${update.resource_id}_${update.latest_version}.pmtiles`
}
private buildFilepath(update: ResourceUpdateInfo, filename: string): string {
if (update.resource_type === 'zim') {
return join(process.cwd(), ZIM_STORAGE_PATH, filename)
}
return join(process.cwd(), MAP_STORAGE_PATH, 'pmtiles', filename)
}
}
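
Worth noting how the two services interlock: buildFilename emits `<resource_id>_<YYYY-MM>.<ext>`, which is exactly the shape parseZimFilename/parseMapFilename expect when the downloaded file later lands on disk. A small sketch with made-up values:

// Illustrative only; the resource id and version are invented.
const filename = 'wikivoyage_en_2026-03.zim' // what buildFilename would produce
CollectionManifestService.parseZimFilename(filename)
// -> { resource_id: 'wikivoyage_en', version: '2026-03' }, so filesystem
//    reconciliation can re-associate the file with its resource after a restart.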

View File

@ -0,0 +1,484 @@
import logger from '@adonisjs/core/services/logger'
import { isNewerVersion, parseMajorVersion } from '../utils/version.js'
export interface ParsedImageReference {
registry: string
namespace: string
repo: string
tag: string
/** Full name for registry API calls: namespace/repo */
fullName: string
}
export interface AvailableUpdate {
tag: string
isLatest: boolean
releaseUrl?: string
}
interface TokenCacheEntry {
token: string
expiresAt: number
}
const SEMVER_TAG_PATTERN = /^v?(\d+\.\d+(?:\.\d+)?)$/
const PLATFORM_SUFFIXES = ['-arm64', '-amd64', '-alpine', '-slim', '-cuda', '-rocm']
const REJECTED_TAGS = new Set(['latest', 'nightly', 'edge', 'dev', 'beta', 'alpha', 'canary', 'rc', 'test', 'debug'])
export class ContainerRegistryService {
private tokenCache = new Map<string, TokenCacheEntry>()
private sourceUrlCache = new Map<string, string | null>()
private releaseTagPrefixCache = new Map<string, string>()
/**
* Parse a Docker image reference string into its components.
*/
parseImageReference(image: string): ParsedImageReference {
let registry: string
let remainder: string
let tag = 'latest'
// Split off the tag
const lastColon = image.lastIndexOf(':')
if (lastColon > -1 && !image.substring(lastColon).includes('/')) {
tag = image.substring(lastColon + 1)
image = image.substring(0, lastColon)
}
// Determine registry vs image path
const parts = image.split('/')
if (parts.length === 1) {
// e.g. "nginx" → Docker Hub library image
registry = 'registry-1.docker.io'
remainder = `library/${parts[0]}`
} else if (parts.length === 2 && !parts[0].includes('.') && !parts[0].includes(':')) {
// e.g. "ollama/ollama" → Docker Hub user image
registry = 'registry-1.docker.io'
remainder = image
} else {
// e.g. "ghcr.io/kiwix/kiwix-serve" → custom registry
registry = parts[0]
remainder = parts.slice(1).join('/')
}
const namespaceParts = remainder.split('/')
const repo = namespaceParts.pop()!
const namespace = namespaceParts.join('/')
return {
registry,
namespace,
repo,
tag,
fullName: remainder,
}
}
/**
* Get an anonymous auth token for the given registry and repository.
* NOTE: This could be expanded in the future to support private repo authentication
*/
private async getToken(registry: string, fullName: string): Promise<string> {
const cacheKey = `${registry}/${fullName}`
const cached = this.tokenCache.get(cacheKey)
if (cached && cached.expiresAt > Date.now()) {
return cached.token
}
let tokenUrl: string
if (registry === 'registry-1.docker.io') {
tokenUrl = `https://auth.docker.io/token?service=registry.docker.io&scope=repository:${fullName}:pull`
} else if (registry === 'ghcr.io') {
tokenUrl = `https://ghcr.io/token?service=ghcr.io&scope=repository:${fullName}:pull`
} else {
// For other registries, try the standard v2 token endpoint
tokenUrl = `https://${registry}/token?service=${registry}&scope=repository:${fullName}:pull`
}
const response = await this.fetchWithRetry(tokenUrl)
if (!response.ok) {
throw new Error(`Failed to get auth token from ${registry}: ${response.status}`)
}
const data = (await response.json()) as { token?: string; access_token?: string }
const token = data.token || data.access_token || ''
if (!token) {
throw new Error(`No token returned from ${registry}`)
}
// Cache for 5 minutes (tokens usually last longer, but be conservative)
this.tokenCache.set(cacheKey, {
token,
expiresAt: Date.now() + 5 * 60 * 1000,
})
return token
}
/**
* List all tags for a given image from the registry.
*/
async listTags(parsed: ParsedImageReference): Promise<string[]> {
const token = await this.getToken(parsed.registry, parsed.fullName)
const allTags: string[] = []
let url = `https://${parsed.registry}/v2/${parsed.fullName}/tags/list?n=1000`
while (url) {
const response = await this.fetchWithRetry(url, {
headers: { Authorization: `Bearer ${token}` },
})
if (!response.ok) {
throw new Error(`Failed to list tags for ${parsed.fullName}: ${response.status}`)
}
const data = (await response.json()) as { tags?: string[] }
if (data.tags) {
allTags.push(...data.tags)
}
// Handle pagination via Link header
const linkHeader = response.headers.get('link')
if (linkHeader) {
const match = linkHeader.match(/<([^>]+)>;\s*rel="next"/)
url = match ? match[1] : ''
} else {
url = ''
}
}
return allTags
}
/**
* Check if a specific tag supports the given architecture by fetching its manifest.
*/
async checkArchSupport(parsed: ParsedImageReference, tag: string, hostArch: string): Promise<boolean> {
try {
const token = await this.getToken(parsed.registry, parsed.fullName)
const url = `https://${parsed.registry}/v2/${parsed.fullName}/manifests/${tag}`
const response = await this.fetchWithRetry(url, {
headers: {
Authorization: `Bearer ${token}`,
Accept: [
'application/vnd.oci.image.index.v1+json',
'application/vnd.docker.distribution.manifest.list.v2+json',
'application/vnd.oci.image.manifest.v1+json',
'application/vnd.docker.distribution.manifest.v2+json',
].join(', '),
},
})
if (!response.ok) return true // If we can't check, assume it's compatible
const manifest = (await response.json()) as {
mediaType?: string
manifests?: Array<{ platform?: { architecture?: string } }>
}
const mediaType = manifest.mediaType || response.headers.get('content-type') || ''
// Manifest list — check if any platform matches
if (
mediaType.includes('manifest.list') ||
mediaType.includes('image.index') ||
manifest.manifests
) {
const manifests = manifest.manifests || []
return manifests.some(
(m: any) => m.platform && m.platform.architecture === hostArch
)
}
// Single manifest — assume compatible (can't easily determine arch without fetching config blob)
return true
} catch (error) {
logger.warn(`[ContainerRegistryService] Error checking arch for ${tag}: ${error instanceof Error ? error.message : error}`)
return true // Assume compatible on error
}
}
/**
* Extract the source repository URL from an image's OCI labels.
* Uses the standardized `org.opencontainers.image.source` label.
* Result is cached per image (not per tag).
*/
async getSourceUrl(parsed: ParsedImageReference): Promise<string | null> {
const cacheKey = `${parsed.registry}/${parsed.fullName}`
if (this.sourceUrlCache.has(cacheKey)) {
return this.sourceUrlCache.get(cacheKey)!
}
try {
const token = await this.getToken(parsed.registry, parsed.fullName)
// First get the manifest to find the config blob digest
const manifestUrl = `https://${parsed.registry}/v2/${parsed.fullName}/manifests/${parsed.tag}`
const manifestRes = await this.fetchWithRetry(manifestUrl, {
headers: {
Authorization: `Bearer ${token}`,
Accept: [
'application/vnd.oci.image.manifest.v1+json',
'application/vnd.docker.distribution.manifest.v2+json',
'application/vnd.oci.image.index.v1+json',
'application/vnd.docker.distribution.manifest.list.v2+json',
].join(', '),
},
})
if (!manifestRes.ok) {
this.sourceUrlCache.set(cacheKey, null)
return null
}
const manifest = (await manifestRes.json()) as {
config?: { digest?: string }
manifests?: Array<{ digest?: string; mediaType?: string; platform?: { architecture?: string } }>
}
// If this is a manifest list, pick the first manifest to get the config
let configDigest = manifest.config?.digest
if (!configDigest && manifest.manifests?.length) {
const firstManifest = manifest.manifests[0]
if (firstManifest.digest) {
const childRes = await this.fetchWithRetry(
`https://${parsed.registry}/v2/${parsed.fullName}/manifests/${firstManifest.digest}`,
{
headers: {
Authorization: `Bearer ${token}`,
Accept: 'application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json',
},
}
)
if (childRes.ok) {
const childManifest = (await childRes.json()) as { config?: { digest?: string } }
configDigest = childManifest.config?.digest
}
}
}
if (!configDigest) {
this.sourceUrlCache.set(cacheKey, null)
return null
}
// Fetch the config blob to read labels
const blobUrl = `https://${parsed.registry}/v2/${parsed.fullName}/blobs/${configDigest}`
const blobRes = await this.fetchWithRetry(blobUrl, {
headers: { Authorization: `Bearer ${token}` },
})
if (!blobRes.ok) {
this.sourceUrlCache.set(cacheKey, null)
return null
}
const config = (await blobRes.json()) as {
config?: { Labels?: Record<string, string> }
}
const sourceUrl = config.config?.Labels?.['org.opencontainers.image.source'] || null
this.sourceUrlCache.set(cacheKey, sourceUrl)
return sourceUrl
} catch (error) {
logger.warn(`[ContainerRegistryService] Failed to get source URL for ${cacheKey}: ${error instanceof Error ? error.message : error}`)
this.sourceUrlCache.set(cacheKey, null)
return null
}
}
/**
* Detect whether a GitHub/GitLab repo uses a 'v' prefix on release tags.
* Probes the GitHub API with the current tag to determine the convention,
* then caches the result per source URL.
*/
async detectReleaseTagPrefix(sourceUrl: string, sampleTag: string): Promise<string> {
if (this.releaseTagPrefixCache.has(sourceUrl)) {
return this.releaseTagPrefixCache.get(sourceUrl)!
}
try {
const url = new URL(sourceUrl)
if (url.hostname !== 'github.com') {
this.releaseTagPrefixCache.set(sourceUrl, '')
return ''
}
const cleanPath = url.pathname.replace(/\.git$/, '').replace(/\/$/, '')
const strippedTag = sampleTag.replace(/^v/, '')
const vTag = `v${strippedTag}`
// Try both variants against GitHub's API — the one that 200s tells us the convention
// Try v-prefixed first since it's more common
const vRes = await this.fetchWithRetry(
`https://api.github.com/repos${cleanPath}/releases/tags/${vTag}`,
{ headers: { Accept: 'application/vnd.github.v3+json', 'User-Agent': 'ProjectNomad' } },
1
)
if (vRes.ok) {
this.releaseTagPrefixCache.set(sourceUrl, 'v')
return 'v'
}
const plainRes = await this.fetchWithRetry(
`https://api.github.com/repos${cleanPath}/releases/tags/${strippedTag}`,
{ headers: { Accept: 'application/vnd.github.v3+json', 'User-Agent': 'ProjectNomad' } },
1
)
if (plainRes.ok) {
this.releaseTagPrefixCache.set(sourceUrl, '')
return ''
}
} catch {
// On error, fall through to default
}
// Default: no prefix modification
this.releaseTagPrefixCache.set(sourceUrl, '')
return ''
}
/**
* Build a release URL for a specific tag given a source repository URL and
* the detected release tag prefix convention.
* Supports GitHub and GitLab URL patterns.
*/
buildReleaseUrl(sourceUrl: string, tag: string, releaseTagPrefix: string): string | undefined {
try {
const url = new URL(sourceUrl)
if (url.hostname === 'github.com' || url.hostname.includes('gitlab')) {
const cleanPath = url.pathname.replace(/\.git$/, '').replace(/\/$/, '')
const strippedTag = tag.replace(/^v/, '')
const releaseTag = releaseTagPrefix ? `${releaseTagPrefix}${strippedTag}` : strippedTag
return `${url.origin}${cleanPath}/releases/tag/${releaseTag}`
}
} catch {
// Invalid URL, skip
}
return undefined
}
/**
* Filter and sort tags to find compatible updates for a service.
*/
filterCompatibleUpdates(
tags: string[],
currentTag: string,
majorVersion: number
): string[] {
return tags
.filter((tag) => {
// Must match semver pattern
if (!SEMVER_TAG_PATTERN.test(tag)) return false
// Reject known non-version tags
if (REJECTED_TAGS.has(tag.toLowerCase())) return false
// Reject platform suffixes
if (PLATFORM_SUFFIXES.some((suffix) => tag.toLowerCase().endsWith(suffix))) return false
// Must be same major version
if (parseMajorVersion(tag) !== majorVersion) return false
// Must be newer than current
return isNewerVersion(tag, currentTag)
})
.sort((a, b) => (isNewerVersion(a, b) ? -1 : 1)) // Newest first
}
/**
* High-level method to get available updates for a service.
* Returns a sorted list of compatible newer versions (newest first).
*/
async getAvailableUpdates(
containerImage: string,
hostArch: string,
fallbackSourceRepo?: string | null
): Promise<AvailableUpdate[]> {
const parsed = this.parseImageReference(containerImage)
const currentTag = parsed.tag
if (currentTag === 'latest') {
logger.warn(
`[ContainerRegistryService] Cannot check updates for ${containerImage} — using :latest tag`
)
return []
}
const majorVersion = parseMajorVersion(currentTag)
// Fetch tags and source URL in parallel
const [tags, ociSourceUrl] = await Promise.all([
this.listTags(parsed),
this.getSourceUrl(parsed),
])
// OCI label takes precedence, fall back to DB-stored source_repo
const sourceUrl = ociSourceUrl || fallbackSourceRepo || null
const compatible = this.filterCompatibleUpdates(tags, currentTag, majorVersion)
// Detect release tag prefix convention (e.g. 'v' vs no prefix) if we have a source URL
let releaseTagPrefix = ''
if (sourceUrl) {
releaseTagPrefix = await this.detectReleaseTagPrefix(sourceUrl, currentTag)
}
// Check architecture support for the top candidates (limit checks to save API calls)
const maxArchChecks = 10
const results: AvailableUpdate[] = []
for (const tag of compatible.slice(0, maxArchChecks)) {
const supported = await this.checkArchSupport(parsed, tag, hostArch)
if (supported) {
results.push({
tag,
isLatest: results.length === 0,
releaseUrl: sourceUrl ? this.buildReleaseUrl(sourceUrl, tag, releaseTagPrefix) : undefined,
})
}
}
// For remaining tags (beyond arch check limit), include them but mark as not latest
for (const tag of compatible.slice(maxArchChecks)) {
results.push({
tag,
isLatest: false,
releaseUrl: sourceUrl ? this.buildReleaseUrl(sourceUrl, tag, releaseTagPrefix) : undefined,
})
}
return results
}
/**
* Fetch with retry and exponential backoff for rate limiting.
*/
private async fetchWithRetry(
url: string,
init?: RequestInit,
maxRetries = 3
): Promise<Response> {
for (let attempt = 0; attempt <= maxRetries; attempt++) {
const response = await fetch(url, init)
if (response.status === 429 && attempt < maxRetries) {
const retryAfter = response.headers.get('retry-after')
const delay = retryAfter
? parseInt(retryAfter, 10) * 1000
: Math.pow(2, attempt) * 1000
logger.warn(
`[ContainerRegistryService] Rate limited on ${url}, retrying in ${delay}ms`
)
await new Promise((resolve) => setTimeout(resolve, delay))
continue
}
return response
}
throw new Error(`Failed to fetch ${url} after ${maxRetries} retries`)
}
}
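
The splitting rules in parseImageReference, shown on the same image shapes the inline comments mention (the tags here are invented for illustration):

const registry = new ContainerRegistryService()
registry.parseImageReference('nginx')
// -> registry 'registry-1.docker.io', fullName 'library/nginx', tag 'latest'
registry.parseImageReference('ollama/ollama:0.6.2')
// -> registry 'registry-1.docker.io', fullName 'ollama/ollama', tag '0.6.2'
registry.parseImageReference('ghcr.io/kiwix/kiwix-serve:3.7.0')
// -> registry 'ghcr.io', namespace 'kiwix', repo 'kiwix-serve', tag '3.7.0'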

File diff suppressed because it is too large

View File

@ -1,36 +1,108 @@
import Markdoc from '@markdoc/markdoc'
import { streamToString } from '../../util/docs.js'
import { getFile, getFileStatsIfExists, listDirectoryContentsRecursive } from '../utils/fs.js'
import path from 'path'
import InternalServerErrorException from '#exceptions/internal_server_error_exception'
import logger from '@adonisjs/core/services/logger'
export class DocsService {
private docsPath = path.join(process.cwd(), 'docs')
private static readonly DOC_ORDER: Record<string, number> = {
'home': 1,
'getting-started': 2,
'use-cases': 3,
'faq': 4,
'about': 5,
'release-notes': 6,
}
async getDocs() {
const contents = await listDirectoryContentsRecursive(this.docsPath)
const files: Array<{ title: string; slug: string }> = []
for (const item of contents) {
if (item.type === 'file' && item.name.endsWith('.md')) {
const cleaned = this.prettify(item.name)
files.push({
title: cleaned,
slug: item.name.replace(/\.md$/, ''),
})
}
}
return files.sort((a, b) => {
const orderA = DocsService.DOC_ORDER[a.slug] ?? 999
const orderB = DocsService.DOC_ORDER[b.slug] ?? 999
return orderA - orderB
})
}
parse(content: string) {
try {
const ast = Markdoc.parse(content)
const config = this.getConfig()
const errors = Markdoc.validate(ast, config)
// Filter out attribute-undefined errors which may be caused by emojis and special characters
const criticalErrors = errors.filter((e) => e.error.id !== 'attribute-undefined')
if (criticalErrors.length > 0) {
logger.error('Markdoc validation errors:', errors.map((e) => JSON.stringify(e.error)).join(', '))
throw new Error('Markdoc validation failed')
}
return Markdoc.transform(ast, config)
} catch (error) {
logger.error('Error parsing Markdoc content:', error)
throw new InternalServerErrorException(`Error parsing content: ${(error as Error).message}`)
}
}
async parseFile(_filename: string) {
try {
if (!_filename) {
throw new Error('Filename is required')
}
const filename = _filename.endsWith('.md') ? _filename : `${_filename}.md`
// Prevent path traversal — resolved path must stay within the docs directory
const basePath = path.resolve(this.docsPath)
const fullPath = path.resolve(path.join(this.docsPath, filename))
if (!fullPath.startsWith(basePath + path.sep)) {
throw new Error('Invalid document slug')
}
const fileExists = await getFileStatsIfExists(fullPath)
if (!fileExists) {
throw new Error(`File not found: ${filename}`)
}
const fileStream = await getFile(fullPath, 'stream')
if (!fileStream) {
throw new Error(`Failed to read file stream: ${filename}`)
}
const content = await streamToString(fileStream)
return this.parse(content)
} catch (error) {
throw new InternalServerErrorException(`Error parsing file: ${(error as Error).message}`)
}
}
private static readonly TITLE_OVERRIDES: Record<string, string> = {
'faq': 'FAQ',
}
private prettify(filename: string) {
const slug = filename.replace(/\.md$/, '')
if (DocsService.TITLE_OVERRIDES[slug]) {
return DocsService.TITLE_OVERRIDES[slug]
}
// Remove hyphens, underscores, and file extension
const cleaned = slug.replace(/_/g, ' ').replace(/-/g, ' ')
// Convert to Title Case
const titleCased = cleaned.replace(/\b\w/g, (char) => char.toUpperCase())
return titleCased.charAt(0).toUpperCase() + titleCased.slice(1)
}
private getConfig() {
@ -42,12 +114,12 @@ export class DocsService {
type: {
type: String,
default: 'info',
matches: ['info', 'warning', 'error', 'success'],
},
title: {
type: String,
},
},
},
},
nodes: {
@ -55,10 +127,77 @@ export class DocsService {
render: 'Heading',
attributes: {
level: { type: Number, required: true },
id: { type: String },
},
},
list: {
render: 'List',
attributes: {
ordered: { type: Boolean },
start: { type: Number },
},
},
list_item: {
render: 'ListItem',
attributes: {
marker: { type: String },
className: { type: String },
class: { type: String },
},
},
table: {
render: 'Table',
},
thead: {
render: 'TableHead',
},
tbody: {
render: 'TableBody',
},
tr: {
render: 'TableRow',
},
th: {
render: 'TableHeader',
},
td: {
render: 'TableCell',
},
paragraph: {
render: 'Paragraph',
},
image: {
render: 'Image',
attributes: {
src: { type: String, required: true },
alt: { type: String },
title: { type: String },
},
},
link: {
render: 'Link',
attributes: {
href: { type: String, required: true },
title: { type: String },
},
},
fence: {
render: 'CodeBlock',
attributes: {
content: { type: String },
language: { type: String },
},
},
code: {
render: 'InlineCode',
attributes: {
content: { type: String },
},
},
hr: {
render: 'HorizontalRule',
},
},
}
}
}
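
A minimal sketch of what the traversal guard in parseFile accepts and rejects, assuming docsPath resolves to /app/docs (a hypothetical deployment path):

// parseFile('faq')          -> /app/docs/faq.md           (inside basePath, allowed)
// parseFile('guides/intro') -> /app/docs/guides/intro.md  (still inside, allowed)
// parseFile('../.env')      -> /app/.env                  (escapes basePath, rejected)
// The startsWith(basePath + path.sep) comparison runs on resolved absolute paths,
// so '..' segments cannot route a request outside the docs directory.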

View File

@ -0,0 +1,64 @@
import { inject } from '@adonisjs/core'
import { QueueService } from './queue_service.js'
import { RunDownloadJob } from '#jobs/run_download_job'
import { DownloadModelJob } from '#jobs/download_model_job'
import { DownloadJobWithProgress } from '../../types/downloads.js'
import { normalize } from 'path'
@inject()
export class DownloadService {
constructor(private queueService: QueueService) {}
async listDownloadJobs(filetype?: string): Promise<DownloadJobWithProgress[]> {
// Get regular file download jobs (zim, map, etc.)
const queue = this.queueService.getQueue(RunDownloadJob.queue)
const fileJobs = await queue.getJobs(['waiting', 'active', 'delayed', 'failed'])
const fileDownloads = fileJobs.map((job) => ({
jobId: job.id!.toString(),
url: job.data.url,
progress: parseInt(job.progress.toString(), 10),
filepath: normalize(job.data.filepath),
filetype: job.data.filetype,
status: (job.failedReason ? 'failed' : 'active') as 'active' | 'failed', // waiting/delayed jobs surface as 'active'
failedReason: job.failedReason || undefined,
}))
// Get Ollama model download jobs
const modelQueue = this.queueService.getQueue(DownloadModelJob.queue)
const modelJobs = await modelQueue.getJobs(['waiting', 'active', 'delayed', 'failed'])
const modelDownloads = modelJobs.map((job) => ({
jobId: job.id!.toString(),
url: job.data.modelName || 'Unknown Model', // Use model name as url
progress: parseInt(job.progress.toString(), 10),
filepath: job.data.modelName || 'Unknown Model', // Use model name as filepath
filetype: 'model',
status: (job.failedReason ? 'failed' : 'active') as 'active' | 'failed',
failedReason: job.failedReason || undefined,
}))
const allDownloads = [...fileDownloads, ...modelDownloads]
// Filter by filetype if specified
const filtered = allDownloads.filter((job) => !filetype || job.filetype === filetype)
// Sort: active downloads first (by progress desc), then failed at the bottom
return filtered.sort((a, b) => {
if (a.status === 'failed' && b.status !== 'failed') return 1
if (a.status !== 'failed' && b.status === 'failed') return -1
return b.progress - a.progress
})
}
async removeFailedJob(jobId: string): Promise<void> {
for (const queueName of [RunDownloadJob.queue, DownloadModelJob.queue]) {
const queue = this.queueService.getQueue(queueName)
const job = await queue.getJob(jobId)
if (job) {
await job.remove()
return
}
}
}
}
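
The effect of the comparator at the end of listDownloadJobs, with invented jobs:

// Input:  [{ progress: 20, status: 'active' }, { progress: 50, status: 'failed' }, { progress: 90, status: 'active' }]
// Output: [{ progress: 90, ... }, { progress: 20, ... }, { progress: 50, status: 'failed' }]
// Active jobs come first, ordered by progress descending; failed jobs sink to the bottom.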

View File

@ -0,0 +1,457 @@
import { BaseStylesFile, MapLayer } from '../../types/maps.js'
import {
DownloadRemoteSuccessCallback,
FileEntry,
} from '../../types/files.js'
import { doResumableDownloadWithRetry } from '../utils/downloads.js'
import { extract } from 'tar'
import env from '#start/env'
import {
listDirectoryContentsRecursive,
getFileStatsIfExists,
deleteFileIfExists,
getFile,
ensureDirectoryExists,
} from '../utils/fs.js'
import { join, resolve, sep } from 'path'
import urlJoin from 'url-join'
import { RunDownloadJob } from '#jobs/run_download_job'
import logger from '@adonisjs/core/services/logger'
import InstalledResource from '#models/installed_resource'
import { CollectionManifestService } from './collection_manifest_service.js'
import type { CollectionWithStatus, MapsSpec } from '../../types/collections.js'
const BASE_ASSETS_MIME_TYPES = [
'application/gzip',
'application/x-gzip',
'application/octet-stream',
]
const PMTILES_ATTRIBUTION =
'<a href="https://github.com/protomaps/basemaps">Protomaps</a> © <a href="https://openstreetmap.org">OpenStreetMap</a>'
const PMTILES_MIME_TYPES = ['application/vnd.pmtiles', 'application/octet-stream']
interface IMapService {
downloadRemoteSuccessCallback: DownloadRemoteSuccessCallback
}
export class MapService implements IMapService {
private readonly mapStoragePath = '/storage/maps'
private readonly baseStylesFile = 'nomad-base-styles.json'
private readonly basemapsAssetsDir = 'basemaps-assets'
private readonly baseAssetsTarFile = 'base-assets.tar.gz'
private readonly baseDirPath = join(process.cwd(), this.mapStoragePath)
private baseAssetsExistCache: boolean | null = null
async listRegions() {
const files = (await this.listAllMapStorageItems()).filter(
(item) => item.type === 'file' && item.name.endsWith('.pmtiles')
)
return {
files,
}
}
async downloadBaseAssets(url?: string) {
const tempTarPath = join(this.baseDirPath, this.baseAssetsTarFile)
const defaultTarFileURL = new URL(
this.baseAssetsTarFile,
'https://github.com/Crosstalk-Solutions/project-nomad-maps/raw/refs/heads/master/'
)
const resolvedURL = url ? new URL(url) : defaultTarFileURL
await doResumableDownloadWithRetry({
url: resolvedURL.toString(),
filepath: tempTarPath,
timeout: 30000,
max_retries: 2,
allowedMimeTypes: BASE_ASSETS_MIME_TYPES,
onAttemptError(error, attempt) {
logger.warn(`[MapService] Attempt ${attempt} to download tar file failed: ${error.message}`)
},
})
const tarFileStats = await getFileStatsIfExists(tempTarPath)
if (!tarFileStats) {
throw new Error('Failed to download tar file')
}
await extract({
cwd: join(process.cwd(), this.mapStoragePath),
file: tempTarPath,
strip: 1,
})
await deleteFileIfExists(tempTarPath)
// Mark base assets as present in the cache since we just downloaded them
this.baseAssetsExistCache = true
return true
}
async downloadCollection(slug: string): Promise<string[] | null> {
const manifestService = new CollectionManifestService()
const spec = await manifestService.getSpecWithFallback<MapsSpec>('maps')
if (!spec) return null
const collection = spec.collections.find((c) => c.slug === slug)
if (!collection) return null
// Filter out already installed
const installed = await InstalledResource.query().where('resource_type', 'map')
const installedIds = new Set(installed.map((r) => r.resource_id))
const toDownload = collection.resources.filter((r) => !installedIds.has(r.id))
if (toDownload.length === 0) return null
const downloadFilenames: string[] = []
for (const resource of toDownload) {
const existing = await RunDownloadJob.getByUrl(resource.url)
if (existing) {
logger.warn(`[MapService] Download already in progress for URL ${resource.url}, skipping.`)
continue
}
const filename = resource.url.split('/').pop()
if (!filename) {
logger.warn(`[MapService] Could not determine filename from URL ${resource.url}, skipping.`)
continue
}
downloadFilenames.push(filename)
const filepath = join(process.cwd(), this.mapStoragePath, 'pmtiles', filename)
await RunDownloadJob.dispatch({
url: resource.url,
filepath,
timeout: 30000,
allowedMimeTypes: PMTILES_MIME_TYPES,
forceNew: true,
filetype: 'map',
resourceMetadata: {
resource_id: resource.id,
version: resource.version,
collection_ref: slug,
},
})
}
return downloadFilenames.length > 0 ? downloadFilenames : null
}
async downloadRemoteSuccessCallback(urls: string[], _: boolean) {
// Create InstalledResource entries for downloaded map files
for (const url of urls) {
const filename = url.split('/').pop()
if (!filename) continue
const parsed = CollectionManifestService.parseMapFilename(filename)
if (!parsed) continue
const filepath = join(process.cwd(), this.mapStoragePath, 'pmtiles', filename)
const stats = await getFileStatsIfExists(filepath)
try {
const { DateTime } = await import('luxon')
await InstalledResource.updateOrCreate(
{ resource_id: parsed.resource_id, resource_type: 'map' },
{
version: parsed.version,
url: url,
file_path: filepath,
file_size_bytes: stats ? Number(stats.size) : null,
installed_at: DateTime.now(),
}
)
logger.info(`[MapService] Created InstalledResource entry for: ${parsed.resource_id}`)
} catch (error) {
logger.error(`[MapService] Failed to create InstalledResource for ${filename}:`, error)
}
}
}
async downloadRemote(url: string): Promise<{ filename: string; jobId?: string }> {
const parsed = new URL(url)
if (!parsed.pathname.endsWith('.pmtiles')) {
throw new Error(`Invalid PMTiles file URL: ${url}. URL must end with .pmtiles`)
}
const existing = await RunDownloadJob.getByUrl(url)
if (existing) {
throw new Error(`Download already in progress for URL ${url}`)
}
const filename = url.split('/').pop()
if (!filename) {
throw new Error('Could not determine filename from URL')
}
const filepath = join(process.cwd(), this.mapStoragePath, 'pmtiles', filename)
// First, ensure base assets are present - regions depend on them
const baseAssetsExist = await this.ensureBaseAssets()
if (!baseAssetsExist) {
throw new Error(
'Base map assets are missing and could not be downloaded. Please check your connection and try again.'
)
}
// Parse resource metadata
const parsedFilename = CollectionManifestService.parseMapFilename(filename)
const resourceMetadata = parsedFilename
? { resource_id: parsedFilename.resource_id, version: parsedFilename.version, collection_ref: null }
: undefined
// Dispatch background job
const result = await RunDownloadJob.dispatch({
url,
filepath,
timeout: 30000,
allowedMimeTypes: PMTILES_MIME_TYPES,
forceNew: true,
filetype: 'map',
resourceMetadata,
})
if (!result.job) {
throw new Error('Failed to dispatch download job')
}
logger.info(`[MapService] Dispatched download job ${result.job.id} for URL ${url}`)
return {
filename,
jobId: result.job?.id,
}
}
async downloadRemotePreflight(
url: string
): Promise<{ filename: string; size: number } | { message: string }> {
try {
const parsed = new URL(url)
if (!parsed.pathname.endsWith('.pmtiles')) {
throw new Error(`Invalid PMTiles file URL: ${url}. URL must end with .pmtiles`)
}
const filename = url.split('/').pop()
if (!filename) {
throw new Error('Could not determine filename from URL')
}
// Perform a HEAD request to get the content length
const { default: axios } = await import('axios')
const response = await axios.head(url)
if (response.status !== 200) {
throw new Error(`Failed to fetch file info: ${response.status} ${response.statusText}`)
}
const contentLength = response.headers['content-length']
const size = contentLength ? parseInt(contentLength, 10) : 0
return { filename, size }
} catch (error: any) {
return { message: `Preflight check failed: ${error.message}` }
}
}
async generateStylesJSON(host: string | null = null, protocol: string = 'http'): Promise<BaseStylesFile> {
if (!(await this.checkBaseAssetsExist())) {
throw new Error('Base map assets are missing from storage/maps')
}
const baseStylePath = join(this.baseDirPath, this.baseStylesFile)
const baseStyle = await getFile(baseStylePath, 'string')
if (!baseStyle) {
throw new Error('Base styles file not found in storage/maps')
}
const rawStyles = JSON.parse(baseStyle.toString()) as BaseStylesFile
const regions = (await this.listRegions()).files
/** If we have the host, use it to build public URLs; otherwise we fall back to defaults.
* This matters because we need to know which host the user is accessing from in order to
* generate correct URLs in the styles file. E.g. if the user is accessing from "example.com"
* but we generate the default "localhost:8080/...", maps would fail to load.
*/
const sources = this.generateSourcesArray(host, regions, protocol)
const baseUrl = this.getPublicFileBaseUrl(host, this.basemapsAssetsDir, protocol)
const styles = await this.generateStylesFile(
rawStyles,
sources,
urlJoin(baseUrl, 'sprites/v4/light'),
urlJoin(baseUrl, 'fonts/{fontstack}/{range}.pbf')
)
return styles
}
async listCuratedCollections(): Promise<CollectionWithStatus[]> {
const manifestService = new CollectionManifestService()
return manifestService.getMapCollectionsWithStatus()
}
async fetchLatestCollections(): Promise<boolean> {
const manifestService = new CollectionManifestService()
return manifestService.fetchAndCacheSpec('maps')
}
async ensureBaseAssets(): Promise<boolean> {
const exists = await this.checkBaseAssetsExist()
if (exists) {
return true
}
return await this.downloadBaseAssets()
}
private async checkBaseAssetsExist(useCache: boolean = true): Promise<boolean> {
// Return cached result if available and caching is enabled
if (useCache && this.baseAssetsExistCache !== null) {
return this.baseAssetsExistCache
}
await ensureDirectoryExists(this.baseDirPath)
const baseStylePath = join(this.baseDirPath, this.baseStylesFile)
const basemapsAssetsPath = join(this.baseDirPath, this.basemapsAssetsDir)
const [baseStyleExists, basemapsAssetsExists] = await Promise.all([
getFileStatsIfExists(baseStylePath),
getFileStatsIfExists(basemapsAssetsPath),
])
const exists = !!baseStyleExists && !!basemapsAssetsExists
// update cache
this.baseAssetsExistCache = exists
return exists
}
private async listAllMapStorageItems(): Promise<FileEntry[]> {
await ensureDirectoryExists(this.baseDirPath)
return await listDirectoryContentsRecursive(this.baseDirPath)
}
private generateSourcesArray(host: string | null, regions: FileEntry[], protocol: string = 'http'): BaseStylesFile['sources'][] {
const sources: BaseStylesFile['sources'][] = []
const baseUrl = this.getPublicFileBaseUrl(host, 'pmtiles', protocol)
for (const region of regions) {
if (region.type === 'file' && region.name.endsWith('.pmtiles')) {
// Strip .pmtiles and date suffix (e.g. "alaska_2025-12" -> "alaska") for stable source names
const parsed = CollectionManifestService.parseMapFilename(region.name)
const regionName = parsed ? parsed.resource_id : region.name.replace('.pmtiles', '')
const source: BaseStylesFile['sources'] = {}
const sourceUrl = urlJoin(baseUrl, region.name)
source[regionName] = {
type: 'vector',
attribution: PMTILES_ATTRIBUTION,
url: `pmtiles://${sourceUrl}`,
}
sources.push(source)
}
}
return sources
}
private async generateStylesFile(
template: BaseStylesFile,
sources: BaseStylesFile['sources'][],
sprites: string,
glyphs: string
): Promise<BaseStylesFile> {
const layersTemplates = template.layers.filter((layer) => layer.source)
const withoutSources = template.layers.filter((layer) => !layer.source)
template.sources = {} // Clear existing sources
template.layers = [...withoutSources] // Start with layers that don't depend on sources
for (const source of sources) {
for (const layerTemplate of layersTemplates) {
const layer: MapLayer = {
...layerTemplate,
id: `${layerTemplate.id}-${Object.keys(source)[0]}`,
type: layerTemplate.type,
source: Object.keys(source)[0],
}
template.layers.push(layer)
}
template.sources = Object.assign(template.sources, source)
}
template.sprite = sprites
template.glyphs = glyphs
return template
}
async delete(file: string): Promise<void> {
let fileName = file
if (!fileName.endsWith('.pmtiles')) {
fileName += '.pmtiles'
}
const basePath = resolve(join(this.baseDirPath, 'pmtiles'))
const fullPath = resolve(join(basePath, fileName))
// Prevent path traversal — resolved path must stay within the storage directory
if (!fullPath.startsWith(basePath + sep)) {
throw new Error('Invalid filename')
}
const exists = await getFileStatsIfExists(fullPath)
if (!exists) {
throw new Error('not_found')
}
await deleteFileIfExists(fullPath)
// Clean up InstalledResource entry
const parsed = CollectionManifestService.parseMapFilename(fileName)
if (parsed) {
await InstalledResource.query()
.where('resource_id', parsed.resource_id)
.where('resource_type', 'map')
.delete()
logger.info(`[MapService] Deleted InstalledResource entry for: ${parsed.resource_id}`)
}
}
/*
* Gets the appropriate public URL for a map asset depending on environment
*/
private getPublicFileBaseUrl(specifiedHost: string | null, childPath: string, protocol: string = 'http'): string {
function getHost() {
try {
const localUrlRaw = env.get('URL')
if (!localUrlRaw) return 'localhost'
const localUrl = new URL(localUrlRaw)
return localUrl.host
} catch (error) {
return 'localhost'
}
}
const host = specifiedHost || getHost()
const withProtocol = host.startsWith('http') ? host : `${protocol}://${host}`
const baseUrlPath =
process.env.NODE_ENV === 'production' ? childPath : urlJoin(this.mapStoragePath, childPath)
const baseUrl = new URL(baseUrlPath, withProtocol).toString()
return baseUrl
}
}
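
How a region file on disk turns into a style source, sketched with a hypothetical host and filename:

// Given host 'nomad.local' and a file 'alaska_2025-12.pmtiles' on disk,
// generateSourcesArray keys the source by the parsed resource_id so the
// name stays stable across monthly versions:
// {
//   alaska: {
//     type: 'vector',
//     attribution: PMTILES_ATTRIBUTION,
//     url: 'pmtiles://http://nomad.local/pmtiles/alaska_2025-12.pmtiles',
//   }
// }
// generateStylesFile then clones every source-bound layer template once per
// source, suffixing layer ids with the source name so ids stay unique.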

View File

@ -0,0 +1,421 @@
import { inject } from '@adonisjs/core'
import { ChatRequest, Ollama } from 'ollama'
import { NomadOllamaModel } from '../../types/ollama.js'
import { FALLBACK_RECOMMENDED_OLLAMA_MODELS } from '../../constants/ollama.js'
import fs from 'node:fs/promises'
import path from 'node:path'
import logger from '@adonisjs/core/services/logger'
import axios from 'axios'
import { DownloadModelJob } from '#jobs/download_model_job'
import { SERVICE_NAMES } from '../../constants/service_names.js'
import transmit from '@adonisjs/transmit/services/main'
import Fuse, { IFuseOptions } from 'fuse.js'
import { BROADCAST_CHANNELS } from '../../constants/broadcast.js'
import env from '#start/env'
import { NOMAD_API_DEFAULT_BASE_URL } from '../../constants/misc.js'
const NOMAD_MODELS_API_PATH = '/api/v1/ollama/models'
const MODELS_CACHE_FILE = path.join(process.cwd(), 'storage', 'ollama-models-cache.json')
const CACHE_MAX_AGE_MS = 24 * 60 * 60 * 1000 // 24 hours
@inject()
export class OllamaService {
private ollama: Ollama | null = null
private ollamaInitPromise: Promise<void> | null = null
constructor() { }
private async _initializeOllamaClient() {
if (!this.ollamaInitPromise) {
this.ollamaInitPromise = (async () => {
const dockerService = new (await import('./docker_service.js')).DockerService()
const ollamaUrl = await dockerService.getServiceURL(SERVICE_NAMES.OLLAMA)
if (!ollamaUrl) {
throw new Error('Ollama service is not installed or running.')
}
this.ollama = new Ollama({ host: ollamaUrl })
})()
}
return this.ollamaInitPromise
}
private async _ensureDependencies() {
if (!this.ollama) {
await this._initializeOllamaClient()
}
}
/**
* Downloads a model from the Ollama service with progress tracking. Where possible,
* one should dispatch a background job instead of calling this method directly to avoid long blocking.
* @param model Model name to download
* @returns Success status and message
*/
async downloadModel(model: string, progressCallback?: (percent: number) => void): Promise<{ success: boolean; message: string; retryable?: boolean }> {
try {
await this._ensureDependencies()
if (!this.ollama) {
throw new Error('Ollama client is not initialized.')
}
// See if model is already installed
const installedModels = await this.getModels()
if (installedModels && installedModels.some((m) => m.name === model)) {
logger.info(`[OllamaService] Model "${model}" is already installed.`)
return { success: true, message: 'Model is already installed.' }
}
// Returns AbortableAsyncIterator<ProgressResponse>
const downloadStream = await this.ollama.pull({
model,
stream: true,
})
for await (const chunk of downloadStream) {
if (chunk.completed && chunk.total) {
const percent = ((chunk.completed / chunk.total) * 100).toFixed(2)
const percentNum = parseFloat(percent)
this.broadcastDownloadProgress(model, percentNum)
if (progressCallback) {
progressCallback(percentNum)
}
}
}
logger.info(`[OllamaService] Model "${model}" downloaded successfully.`)
return { success: true, message: 'Model downloaded successfully.' }
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error)
logger.error(
`[OllamaService] Failed to download model "${model}": ${errorMessage}`
)
// Check for version mismatch (Ollama 412 response)
const isVersionMismatch = errorMessage.includes('newer version of Ollama')
const userMessage = isVersionMismatch
? 'This model requires a newer version of Ollama. Please update AI Assistant from the Apps page.'
: `Failed to download model: ${errorMessage}`
// Broadcast failure to connected clients so UI can show the error
this.broadcastDownloadError(model, userMessage)
return { success: false, message: userMessage, retryable: !isVersionMismatch }
}
}
async dispatchModelDownload(modelName: string): Promise<{ success: boolean; message: string }> {
try {
logger.info(`[OllamaService] Dispatching model download for ${modelName} via job queue`)
await DownloadModelJob.dispatch({
modelName,
})
return {
success: true,
message:
'Model download has been queued successfully. It will start shortly after Ollama and Open WebUI are ready (if not already).',
}
} catch (error) {
logger.error(
`[OllamaService] Failed to dispatch model download for ${modelName}: ${error instanceof Error ? error.message : error}`
)
return {
success: false,
message: 'Failed to queue model download. Please try again.',
}
}
}
public async getClient() {
await this._ensureDependencies()
return this.ollama!
}
public async chat(chatRequest: ChatRequest & { stream?: boolean }) {
await this._ensureDependencies()
if (!this.ollama) {
throw new Error('Ollama client is not initialized.')
}
return await this.ollama.chat({
...chatRequest,
stream: false,
})
}
public async chatStream(chatRequest: ChatRequest) {
await this._ensureDependencies()
if (!this.ollama) {
throw new Error('Ollama client is not initialized.')
}
return await this.ollama.chat({
...chatRequest,
stream: true,
})
}
public async checkModelHasThinking(modelName: string): Promise<boolean> {
await this._ensureDependencies()
if (!this.ollama) {
throw new Error('Ollama client is not initialized.')
}
const modelInfo = await this.ollama.show({
model: modelName,
})
return modelInfo.capabilities.includes('thinking')
}
public async deleteModel(modelName: string) {
await this._ensureDependencies()
if (!this.ollama) {
throw new Error('Ollama client is not initialized.')
}
return await this.ollama.delete({
model: modelName,
})
}
public async getModels(includeEmbeddings = false) {
await this._ensureDependencies()
if (!this.ollama) {
throw new Error('Ollama client is not initialized.')
}
const response = await this.ollama.list()
if (includeEmbeddings) {
return response.models
}
// Filter out embedding models
return response.models.filter((model) => !model.name.includes('embed'))
}
async getAvailableModels(
{ sort, recommendedOnly, query, limit, force }: { sort?: 'pulls' | 'name'; recommendedOnly?: boolean; query: string | null; limit?: number; force?: boolean } = {
sort: 'pulls',
recommendedOnly: false,
query: null,
limit: 15,
}
): Promise<{ models: NomadOllamaModel[]; hasMore: boolean } | null> {
try {
const models = await this.retrieveAndRefreshModels(sort, force)
if (!models) {
// If we fail to get models from the API, return the fallback recommended models
logger.warn(
'[OllamaService] Returning fallback recommended models due to failure in fetching available models'
)
return {
models: FALLBACK_RECOMMENDED_OLLAMA_MODELS,
hasMore: false
}
}
if (!recommendedOnly) {
const filteredModels = query ? this.fuseSearchModels(models, query) : models
return {
models: filteredModels.slice(0, limit || 15),
hasMore: filteredModels.length > (limit || 15)
}
}
// If recommendedOnly is true, only return the first three models (if sorted by pulls, these will be the top 3)
const sortedByPulls = sort === 'pulls' ? models : this.sortModels(models, 'pulls')
const firstThree = sortedByPulls.slice(0, 3)
// Only return the first tag of each of these models (should be the most lightweight variant)
const recommendedModels = firstThree.map((model) => {
return {
...model,
tags: model.tags && model.tags.length > 0 ? [model.tags[0]] : [],
}
})
if (query) {
const filteredRecommendedModels = this.fuseSearchModels(recommendedModels, query)
return {
models: filteredRecommendedModels,
hasMore: filteredRecommendedModels.length > (limit || 15)
}
}
return {
models: recommendedModels,
hasMore: recommendedModels.length > (limit || 15)
}
} catch (error) {
logger.error(
`[OllamaService] Failed to get available models: ${error instanceof Error ? error.message : error}`
)
return null
}
}
private async retrieveAndRefreshModels(
sort?: 'pulls' | 'name',
force?: boolean
): Promise<NomadOllamaModel[] | null> {
try {
if (!force) {
const cachedModels = await this.readModelsFromCache()
if (cachedModels) {
logger.info('[OllamaService] Using cached available models data')
return this.sortModels(cachedModels, sort)
}
} else {
logger.info('[OllamaService] Force refresh requested, bypassing cache')
}
logger.info('[OllamaService] Fetching fresh available models from API')
const baseUrl = env.get('NOMAD_API_URL') || NOMAD_API_DEFAULT_BASE_URL
const fullUrl = new URL(NOMAD_MODELS_API_PATH, baseUrl).toString()
const response = await axios.get(fullUrl)
if (!response.data || !Array.isArray(response.data.models)) {
logger.warn(
`[OllamaService] Invalid response format when fetching available models: ${JSON.stringify(response.data)}`
)
return null
}
const rawModels = response.data.models as NomadOllamaModel[]
// Filter out tags where cloud is truthy, then remove models with no remaining tags
const noCloud = rawModels
.map((model) => ({
...model,
tags: model.tags.filter((tag) => !tag.cloud),
}))
.filter((model) => model.tags.length > 0)
await this.writeModelsToCache(noCloud)
return this.sortModels(noCloud, sort)
} catch (error) {
logger.error(
`[OllamaService] Failed to retrieve models from Nomad API: ${error instanceof Error ? error.message : error}`
)
return null
}
}
private async readModelsFromCache(): Promise<NomadOllamaModel[] | null> {
try {
const stats = await fs.stat(MODELS_CACHE_FILE)
const cacheAge = Date.now() - stats.mtimeMs
if (cacheAge > CACHE_MAX_AGE_MS) {
logger.info('[OllamaService] Cache is stale, will fetch fresh data')
return null
}
const cacheData = await fs.readFile(MODELS_CACHE_FILE, 'utf-8')
const models = JSON.parse(cacheData) as NomadOllamaModel[]
if (!Array.isArray(models)) {
logger.warn('[OllamaService] Invalid cache format, will fetch fresh data')
return null
}
return models
} catch (error) {
// Cache doesn't exist or is invalid
if ((error as NodeJS.ErrnoException).code !== 'ENOENT') {
logger.warn(
`[OllamaService] Error reading cache: ${error instanceof Error ? error.message : error}`
)
}
return null
}
}
private async writeModelsToCache(models: NomadOllamaModel[]): Promise<void> {
try {
await fs.mkdir(path.dirname(MODELS_CACHE_FILE), { recursive: true })
await fs.writeFile(MODELS_CACHE_FILE, JSON.stringify(models, null, 2), 'utf-8')
logger.info('[OllamaService] Successfully cached available models')
} catch (error) {
logger.warn(
`[OllamaService] Failed to write models cache: ${error instanceof Error ? error.message : error}`
)
}
}
private sortModels(models: NomadOllamaModel[], sort?: 'pulls' | 'name'): NomadOllamaModel[] {
if (sort === 'pulls') {
// Sort by estimated pulls (it should be a string like "1.2K", "500", "4M" etc.)
models.sort((a, b) => {
const parsePulls = (pulls: string) => {
const multiplier = pulls.endsWith('K')
? 1_000
: pulls.endsWith('M')
? 1_000_000
: pulls.endsWith('B')
? 1_000_000_000
: 1
return parseFloat(pulls) * multiplier
}
return parsePulls(b.estimated_pulls) - parsePulls(a.estimated_pulls)
})
} else if (sort === 'name') {
models.sort((a, b) => a.name.localeCompare(b.name))
}
// Always sort model.tags by the size field in ascending order
// Size is a string like '75GB', '8.5GB', '2GB' etc. Smaller variants come first
models.forEach((model) => {
if (model.tags && Array.isArray(model.tags)) {
model.tags.sort((a, b) => {
const parseSize = (size: string) => {
const multiplier = size.endsWith('KB')
? 1 / 1_000_000
: size.endsWith('MB')
? 1 / 1_000
: size.endsWith('GB')
? 1
: size.endsWith('TB')
? 1_000
: 0 // Unknown size format
return parseFloat(size) * multiplier
}
return parseSize(a.size) - parseSize(b.size)
})
}
})
return models
}
private broadcastDownloadError(model: string, error: string) {
transmit.broadcast(BROADCAST_CHANNELS.OLLAMA_MODEL_DOWNLOAD, {
model,
percent: -1,
error,
timestamp: new Date().toISOString(),
})
}
private broadcastDownloadProgress(model: string, percent: number) {
transmit.broadcast(BROADCAST_CHANNELS.OLLAMA_MODEL_DOWNLOAD, {
model,
percent,
timestamp: new Date().toISOString(),
})
logger.info(`[OllamaService] Download progress for model "${model}": ${percent}%`)
}
private fuseSearchModels(models: NomadOllamaModel[], query: string): NomadOllamaModel[] {
const options: IFuseOptions<NomadOllamaModel> = {
ignoreDiacritics: true,
keys: ['name', 'description', 'tags.name'],
threshold: 0.3, // lower threshold for stricter matching
}
const fuse = new Fuse(models, options)
return fuse.search(query).map(result => result.item)
}
}
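For orientation, a standalone sketch (not part of the diff) of the two parsing rules above, with GB as the size baseline; the helper names parsePulls and parseSizeGB are illustrative:

// Illustrative helpers mirroring sortModels' parsing rules.
// Pulls: '1.2K' -> 1200, '4M' -> 4000000. Sizes normalize to GB: '500MB' -> 0.5.
function parsePulls(pulls: string): number {
  const mult = pulls.endsWith('K') ? 1_000 : pulls.endsWith('M') ? 1_000_000 : pulls.endsWith('B') ? 1_000_000_000 : 1
  return parseFloat(pulls) * mult
}
function parseSizeGB(size: string): number {
  const mult = size.endsWith('KB') ? 1 / 1_000_000 : size.endsWith('MB') ? 1 / 1_000 : size.endsWith('GB') ? 1 : size.endsWith('TB') ? 1_000 : 0
  return parseFloat(size) * mult
}
console.log(parsePulls('1.2K')) // 1200
console.log(['8.5GB', '500MB', '2GB'].sort((a, b) => parseSizeGB(a) - parseSizeGB(b))) // ['500MB', '2GB', '8.5GB']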

View File

@ -0,0 +1,22 @@
import { Queue } from 'bullmq'
import queueConfig from '#config/queue'
export class QueueService {
private queues: Map<string, Queue> = new Map()
getQueue(name: string): Queue {
if (!this.queues.has(name)) {
const queue = new Queue(name, {
connection: queueConfig.connection,
})
this.queues.set(name, queue)
}
return this.queues.get(name)!
}
async close() {
for (const queue of this.queues.values()) {
await queue.close()
}
}
}
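A minimal usage sketch for the queue wrapper above, assuming a 'downloads' queue and BullMQ's standard Queue.add(name, data) signature; the job name and payload are illustrative:

// Hypothetical caller: enqueue a job on the shared 'downloads' queue,
// then close all cached queues on shutdown.
const queueService = new QueueService()
const queue = queueService.getQueue('downloads')
await queue.add('zim-download', { url: 'https://example.org/file.zim' })
await queueService.close()

Because getQueue caches instances in the Map, repeated calls with the same name reuse one Redis connection per queue rather than opening a new one each time.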

File diff suppressed because it is too large Load Diff

View File

@ -1,15 +1,624 @@
import Service from "#models/service" import Service from '#models/service'
import { inject } from '@adonisjs/core'
import { DockerService } from '#services/docker_service'
import { ServiceSlim } from '../../types/services.js'
import logger from '@adonisjs/core/services/logger'
import si from 'systeminformation'
import { GpuHealthStatus, NomadDiskInfo, NomadDiskInfoRaw, SystemInformationResponse } from '../../types/system.js'
import { SERVICE_NAMES } from '../../constants/service_names.js'
import { readFileSync } from 'fs'
import path, { join } from 'path'
import { getAllFilesystems, getFile } from '../utils/fs.js'
import axios from 'axios'
import env from '#start/env'
import KVStore from '#models/kv_store'
import { KV_STORE_SCHEMA, KVStoreKey } from '../../types/kv_store.js'
import { isNewerVersion } from '../utils/version.js'
@inject()
export class SystemService {
private static appVersion: string | null = null
private static diskInfoFile = '/storage/nomad-disk-info.json'
constructor(private dockerService: DockerService) { }
async checkServiceInstalled(serviceName: string): Promise<boolean> {
const services = await this.getServices({ installedOnly: true });
return services.some(service => service.service_name === serviceName);
}
async getInternetStatus(): Promise<boolean> {
const DEFAULT_TEST_URL = 'https://1.1.1.1/cdn-cgi/trace'
const MAX_ATTEMPTS = 3
let testUrl = DEFAULT_TEST_URL
let customTestUrl = env.get('INTERNET_STATUS_TEST_URL')?.trim()
// check that customTestUrl is a valid URL, if provided
if (customTestUrl && customTestUrl !== '') {
try {
new URL(customTestUrl)
testUrl = customTestUrl
} catch (error) {
logger.warn(
`Invalid INTERNET_STATUS_TEST_URL: ${customTestUrl}. Falling back to default URL.`
)
}
}
for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
try {
const res = await axios.get(testUrl, { timeout: 5000 })
return res.status === 200
} catch (error) {
logger.warn(
`Internet status check attempt ${attempt}/${MAX_ATTEMPTS} failed: ${error instanceof Error ? error.message : error}`
)
if (attempt < MAX_ATTEMPTS) {
// delay before next attempt
await new Promise((resolve) => setTimeout(resolve, 1000))
}
}
}
logger.warn('All internet status check attempts failed.')
return false
}
async getNvidiaSmiInfo(): Promise<Array<{ vendor: string; model: string; vram: number; }> | { error: string } | 'OLLAMA_NOT_FOUND' | 'BAD_RESPONSE' | 'UNKNOWN_ERROR'> {
try {
const containers = await this.dockerService.docker.listContainers({ all: false })
const ollamaContainer = containers.find((c) =>
c.Names.includes(`/${SERVICE_NAMES.OLLAMA}`)
)
if (!ollamaContainer) {
logger.info('Ollama container not found for nvidia-smi info retrieval. This is expected if Ollama is not installed.')
return 'OLLAMA_NOT_FOUND'
}
// Execute nvidia-smi inside the Ollama container to get GPU info
const container = this.dockerService.docker.getContainer(ollamaContainer.Id)
const exec = await container.exec({
Cmd: ['nvidia-smi', '--query-gpu=name,memory.total', '--format=csv,noheader,nounits'],
AttachStdout: true,
AttachStderr: true,
Tty: true,
})
// Read the output stream with a timeout to prevent hanging if nvidia-smi fails
const stream = await exec.start({ Tty: true })
const output = await new Promise<string>((resolve) => {
let data = ''
const timeout = setTimeout(() => resolve(data), 5000)
stream.on('data', (chunk: Buffer) => { data += chunk.toString() })
stream.on('end', () => { clearTimeout(timeout); resolve(data) })
})
// Remove any non-printable characters and trim the output
const cleaned = output.replace(/[\x00-\x08]/g, '').trim()
if (cleaned && !cleaned.toLowerCase().includes('error') && !cleaned.toLowerCase().includes('not found')) {
// Split by newlines to handle multiple GPUs installed
const lines = cleaned.split('\n').filter(line => line.trim())
// Map each line out to a useful structure for us
const gpus = lines.map(line => {
const parts = line.split(',').map((s) => s.trim())
return {
vendor: 'NVIDIA',
model: parts[0] || 'NVIDIA GPU',
vram: parts[1] ? parseInt(parts[1], 10) : 0,
}
})
return gpus.length > 0 ? gpus : 'BAD_RESPONSE'
}
// If we got output but looks like an error, consider it a bad response from nvidia-smi
return 'BAD_RESPONSE'
}
catch (error) {
logger.error('Error getting nvidia-smi info:', error)
if (error instanceof Error && error.message) {
return { error: error.message }
}
return 'UNKNOWN_ERROR'
}
}
async getServices({ installedOnly = true }: { installedOnly?: boolean }): Promise<ServiceSlim[]> {
await this._syncContainersWithDatabase() // Sync up before fetching to ensure we have the latest status
const query = Service.query()
.orderBy('display_order', 'asc')
.orderBy('friendly_name', 'asc')
.select(
'id',
'service_name',
'installed',
'installation_status',
'ui_location',
'friendly_name',
'description',
'icon',
'powered_by',
'display_order',
'container_image',
'available_update_version'
)
.where('is_dependency_service', false)
if (installedOnly) {
query.where('installed', true)
}
const services = await query
if (!services || services.length === 0) {
return []
}
const statuses = await this.dockerService.getServicesStatus()
const toReturn: ServiceSlim[] = []
for (const service of services) {
const status = statuses.find((s) => s.service_name === service.service_name)
toReturn.push({
id: service.id,
service_name: service.service_name,
friendly_name: service.friendly_name,
description: service.description,
icon: service.icon,
installed: service.installed,
installation_status: service.installation_status,
status: status ? status.status : 'unknown',
ui_location: service.ui_location || '',
powered_by: service.powered_by,
display_order: service.display_order,
container_image: service.container_image,
available_update_version: service.available_update_version,
})
}
return toReturn
}
static getAppVersion(): string {
try {
if (this.appVersion) {
return this.appVersion
}
// Return 'dev' for development environment (version.json won't exist)
if (process.env.NODE_ENV === 'development') {
this.appVersion = 'dev'
return 'dev'
}
const packageJson = readFileSync(join(process.cwd(), 'version.json'), 'utf-8')
const packageData = JSON.parse(packageJson)
const version = packageData.version || '0.0.0'
this.appVersion = version
return version
} catch (error) {
logger.error('Error getting app version:', error)
return '0.0.0'
}
}
async getSystemInfo(): Promise<SystemInformationResponse | undefined> {
try {
const [cpu, mem, os, currentLoad, fsSize, uptime, graphics] = await Promise.all([
si.cpu(),
si.mem(),
si.osInfo(),
si.currentLoad(),
si.fsSize(),
si.time(),
si.graphics(),
])
let diskInfo: NomadDiskInfoRaw | undefined
let disk: NomadDiskInfo[] = []
try {
const diskInfoRawString = await getFile(
path.join(process.cwd(), SystemService.diskInfoFile),
'string'
)
diskInfo = (
diskInfoRawString
? JSON.parse(diskInfoRawString.toString())
: { diskLayout: { blockdevices: [] }, fsSize: [] }
) as NomadDiskInfoRaw
disk = this.calculateDiskUsage(diskInfo)
} catch (error) {
logger.error('Error reading disk info file:', error)
}
// GPU health tracking — detect when host has NVIDIA GPU but Ollama can't access it
let gpuHealth: GpuHealthStatus = {
status: 'no_gpu',
hasNvidiaRuntime: false,
ollamaGpuAccessible: false,
}
// Query Docker API for host-level info (hostname, OS, GPU runtime)
// si.osInfo() returns the container's info inside Docker, not the host's
try {
const dockerInfo = await this.dockerService.docker.info()
if (dockerInfo.Name) {
os.hostname = dockerInfo.Name
}
if (dockerInfo.OperatingSystem) {
os.distro = dockerInfo.OperatingSystem
}
if (dockerInfo.KernelVersion) {
os.kernel = dockerInfo.KernelVersion
}
// If si.graphics() returned no controllers (common inside Docker),
// fall back to nvidia runtime + nvidia-smi detection
if (!graphics.controllers || graphics.controllers.length === 0) {
const runtimes = dockerInfo.Runtimes || {}
if ('nvidia' in runtimes) {
gpuHealth.hasNvidiaRuntime = true
const nvidiaInfo = await this.getNvidiaSmiInfo()
if (Array.isArray(nvidiaInfo)) {
graphics.controllers = nvidiaInfo.map((gpu) => ({
model: gpu.model,
vendor: gpu.vendor,
bus: "",
vram: gpu.vram,
vramDynamic: false, // assume false here, we don't actually use this field for our purposes.
}))
gpuHealth.status = 'ok'
gpuHealth.ollamaGpuAccessible = true
} else if (nvidiaInfo === 'OLLAMA_NOT_FOUND') {
gpuHealth.status = 'ollama_not_installed'
} else {
gpuHealth.status = 'passthrough_failed'
logger.warn(`NVIDIA runtime detected but GPU passthrough failed: ${typeof nvidiaInfo === 'string' ? nvidiaInfo : JSON.stringify(nvidiaInfo)}`)
}
}
} else {
// si.graphics() returned controllers (host install, not Docker) — GPU is working
gpuHealth.status = 'ok'
gpuHealth.ollamaGpuAccessible = true
}
} catch {
// Docker info query failed, skip host-level enrichment
}
return {
cpu,
mem,
os,
disk,
currentLoad,
fsSize,
uptime,
graphics,
gpuHealth,
}
} catch (error) {
logger.error('Error getting system info:', error)
return undefined
}
}
async checkLatestVersion(force?: boolean): Promise<{
success: boolean
updateAvailable: boolean
currentVersion: string
latestVersion: string
message?: string
}> {
try {
const currentVersion = SystemService.getAppVersion()
const cachedUpdateAvailable = await KVStore.getValue('system.updateAvailable')
const cachedLatestVersion = await KVStore.getValue('system.latestVersion')
// Use cached values if not forcing a fresh check.
// the CheckUpdateJob will update these values every 12 hours
if (!force) {
return {
success: true,
updateAvailable: cachedUpdateAvailable ?? false,
currentVersion,
latestVersion: cachedLatestVersion || '',
}
}
const earlyAccess = (await KVStore.getValue('system.earlyAccess')) ?? false
let latestVersion: string
if (earlyAccess) {
const response = await axios.get(
'https://api.github.com/repos/Crosstalk-Solutions/project-nomad/releases',
{ headers: { Accept: 'application/vnd.github+json' }, timeout: 5000 }
)
if (!response?.data?.length) throw new Error('No releases found')
latestVersion = response.data[0].tag_name.replace(/^v/, '').trim()
} else {
const response = await axios.get(
'https://api.github.com/repos/Crosstalk-Solutions/project-nomad/releases/latest',
{ headers: { Accept: 'application/vnd.github+json' }, timeout: 5000 }
)
if (!response?.data?.tag_name) throw new Error('Invalid response from GitHub API')
latestVersion = response.data.tag_name.replace(/^v/, '').trim()
}
logger.info(`Current version: ${currentVersion}, Latest version: ${latestVersion}`)
const updateAvailable = process.env.NODE_ENV === 'development'
? false
: isNewerVersion(latestVersion, currentVersion.trim(), earlyAccess)
// Cache the results in KVStore for frontend checks
await KVStore.setValue('system.updateAvailable', updateAvailable)
await KVStore.setValue('system.latestVersion', latestVersion)
return {
success: true,
updateAvailable,
currentVersion,
latestVersion,
}
} catch (error) {
logger.error('Error checking latest version:', error)
return {
success: false,
updateAvailable: false,
currentVersion: '',
latestVersion: '',
message: `Failed to check latest version: ${error instanceof Error ? error.message : error}`,
}
}
}
async subscribeToReleaseNotes(email: string): Promise<{ success: boolean; message: string }> {
try {
const response = await axios.post(
'https://api.projectnomad.us/api/v1/lists/release-notes/subscribe',
{ email },
{ timeout: 5000 }
)
if (response.status === 200) {
return {
success: true,
message: 'Successfully subscribed to release notes',
}
}
return {
success: false,
message: `Failed to subscribe: ${response.statusText}`,
}
} catch (error) {
logger.error('Error subscribing to release notes:', error)
return {
success: false,
message: `Failed to subscribe: ${error instanceof Error ? error.message : error}`,
}
}
}
async getDebugInfo(): Promise<string> {
const appVersion = SystemService.getAppVersion()
const environment = process.env.NODE_ENV || 'unknown'
const [systemInfo, services, internetStatus, versionCheck] = await Promise.all([
this.getSystemInfo(),
this.getServices({ installedOnly: false }),
this.getInternetStatus().catch(() => null),
this.checkLatestVersion().catch(() => null),
])
const lines: string[] = [
'Project NOMAD Debug Info',
'========================',
`App Version: ${appVersion}`,
`Environment: ${environment}`,
]
if (systemInfo) {
const { cpu, mem, os, disk, fsSize, uptime, graphics } = systemInfo
lines.push('')
lines.push('System:')
if (os.distro) lines.push(` OS: ${os.distro}`)
if (os.hostname) lines.push(` Hostname: ${os.hostname}`)
if (os.kernel) lines.push(` Kernel: ${os.kernel}`)
if (os.arch) lines.push(` Architecture: ${os.arch}`)
if (uptime?.uptime) lines.push(` Uptime: ${this._formatUptime(uptime.uptime)}`)
lines.push('')
lines.push('Hardware:')
if (cpu.brand) {
lines.push(` CPU: ${cpu.brand} (${cpu.cores} cores)`)
}
if (mem.total) {
const total = this._formatBytes(mem.total)
const used = this._formatBytes(mem.total - (mem.available || 0))
const available = this._formatBytes(mem.available || 0)
lines.push(` RAM: ${total} total, ${used} used, ${available} available`)
}
if (graphics.controllers && graphics.controllers.length > 0) {
for (const gpu of graphics.controllers) {
const vram = gpu.vram ? ` (${gpu.vram} MB VRAM)` : ''
lines.push(` GPU: ${gpu.model}${vram}`)
}
} else {
lines.push(' GPU: None detected')
}
// Disk info — try disk array first, fall back to fsSize
const diskEntries = disk.filter((d) => d.totalSize > 0)
if (diskEntries.length > 0) {
for (const d of diskEntries) {
const size = this._formatBytes(d.totalSize)
const type = d.tran?.toUpperCase() || (d.rota ? 'HDD' : 'SSD')
lines.push(` Disk: ${size}, ${Math.round(d.percentUsed)}% used, ${type}`)
}
} else if (fsSize.length > 0) {
const realFs = fsSize.filter((f) => f.fs.startsWith('/dev/'))
const seen = new Set<number>()
for (const f of realFs) {
if (seen.has(f.size)) continue
seen.add(f.size)
lines.push(` Disk: ${this._formatBytes(f.size)}, ${Math.round(f.use)}% used`)
}
}
}
const installed = services.filter((s) => s.installed)
lines.push('')
if (installed.length > 0) {
lines.push('Installed Services:')
for (const svc of installed) {
lines.push(` ${svc.friendly_name} (${svc.service_name}): ${svc.status}`)
}
} else {
lines.push('Installed Services: None')
}
if (internetStatus !== null) {
lines.push('')
lines.push(`Internet Status: ${internetStatus ? 'Online' : 'Offline'}`)
}
if (versionCheck?.success) {
const updateMsg = versionCheck.updateAvailable
? `Yes (${versionCheck.latestVersion} available)`
: `No (${versionCheck.currentVersion} is latest)`
lines.push(`Update Available: ${updateMsg}`)
}
return lines.join('\n')
}
private _formatUptime(seconds: number): string {
const days = Math.floor(seconds / 86400)
const hours = Math.floor((seconds % 86400) / 3600)
const minutes = Math.floor((seconds % 3600) / 60)
if (days > 0) return `${days}d ${hours}h ${minutes}m`
if (hours > 0) return `${hours}h ${minutes}m`
return `${minutes}m`
}
private _formatBytes(bytes: number, decimals = 1): string {
if (bytes === 0) return '0 Bytes'
const k = 1024
const sizes = ['Bytes', 'KB', 'MB', 'GB', 'TB']
const i = Math.floor(Math.log(bytes) / Math.log(k))
return parseFloat((bytes / Math.pow(k, i)).toFixed(decimals)) + ' ' + sizes[i]
}
async updateSetting(key: KVStoreKey, value: any): Promise<void> {
if ((value === '' || value === undefined || value === null) && KV_STORE_SCHEMA[key] === 'string') {
await KVStore.clearValue(key)
} else {
await KVStore.setValue(key, value)
}
}
/**
* Checks the current state of Docker containers against the database records and updates the database accordingly.
* It will mark services as not installed if their corresponding containers do not exist, regardless of their running state.
* Handles cases where a container might have been manually removed, ensuring the database reflects the actual existence of containers.
* Containers that exist but are stopped, paused, or restarting will still be considered installed.
*/
private async _syncContainersWithDatabase() {
try {
const allServices = await Service.all()
const serviceStatusList = await this.dockerService.getServicesStatus()
for (const service of allServices) {
const containerExists = serviceStatusList.find(
(s) => s.service_name === service.service_name
)
if (service.installed) {
// If marked as installed but container doesn't exist, mark as not installed
if (!containerExists) {
logger.warn(
`Service ${service.service_name} is marked as installed but container does not exist. Marking as not installed.`
)
service.installed = false
service.installation_status = 'idle'
await service.save()
}
} else {
// If marked as not installed but container exists (any state), mark as installed
if (containerExists) {
logger.warn(
`Service ${service.service_name} is marked as not installed but container exists. Marking as installed.`
)
service.installed = true
service.installation_status = 'idle'
await service.save()
}
}
}
} catch (error) {
logger.error('Error syncing containers with database:', error)
}
}
private calculateDiskUsage(diskInfo: NomadDiskInfoRaw): NomadDiskInfo[] {
const { diskLayout, fsSize } = diskInfo
if (!diskLayout?.blockdevices || !fsSize) {
return []
}
// Deduplicate: same device path mounted in multiple places (Docker bind-mounts)
// Keep the entry with the largest size — that's the real partition
const deduped = new Map<string, NomadDiskInfoRaw['fsSize'][0]>()
for (const entry of fsSize) {
const existing = deduped.get(entry.fs)
if (!existing || entry.size > existing.size) {
deduped.set(entry.fs, entry)
}
}
const dedupedFsSize = Array.from(deduped.values())
return diskLayout.blockdevices
.filter((disk) => disk.type === 'disk') // Only physical disks
.map((disk) => {
const filesystems = getAllFilesystems(disk, dedupedFsSize)
// Across all partitions
const totalUsed = filesystems.reduce((sum, p) => sum + (p.used || 0), 0)
const totalSize = filesystems.reduce((sum, p) => sum + (p.size || 0), 0)
const percentUsed = totalSize > 0 ? (totalUsed / totalSize) * 100 : 0
return {
name: disk.name,
model: disk.model || 'Unknown',
vendor: disk.vendor || '',
rota: disk.rota || false,
tran: disk.tran || '',
size: disk.size,
totalUsed,
totalSize,
percentUsed: Math.round(percentUsed * 100) / 100,
filesystems: filesystems.map((p) => ({
fs: p.fs,
mount: p.mount,
used: p.used,
size: p.size,
percentUsed: p.use,
})),
}
})
}
}
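As a usage sketch (hypothetical caller code, not from this diff): the default path of checkLatestVersion returns the KVStore-cached values that CheckUpdateJob refreshes every 12 hours, while force=true hits the GitHub releases API directly and rewrites the cache:

// Hypothetical: read the cached result first, then force a fresh check.
// Assumes an existing DockerService instance for the injected constructor.
const system = new SystemService(dockerService)
const cached = await system.checkLatestVersion()
if (!cached.updateAvailable) {
  const fresh = await system.checkLatestVersion(true) // bypasses the 12-hour cache
  console.log(`Update available: ${fresh.updateAvailable} (latest ${fresh.latestVersion})`)
}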

View File

@ -0,0 +1,100 @@
import logger from '@adonisjs/core/services/logger'
import { readFileSync, existsSync } from 'fs'
import { writeFile } from 'fs/promises'
import { join } from 'path'
import KVStore from '#models/kv_store'
interface UpdateStatus {
stage: 'idle' | 'starting' | 'pulling' | 'pulled' | 'recreating' | 'complete' | 'error'
progress: number
message: string
timestamp: string
}
export class SystemUpdateService {
private static SHARED_DIR = '/app/update-shared'
private static REQUEST_FILE = join(SystemUpdateService.SHARED_DIR, 'update-request')
private static STATUS_FILE = join(SystemUpdateService.SHARED_DIR, 'update-status')
private static LOG_FILE = join(SystemUpdateService.SHARED_DIR, 'update-log')
/**
* Requests a system update by creating a request file that the sidecar will detect
*/
async requestUpdate(): Promise<{ success: boolean; message: string }> {
try {
const currentStatus = this.getUpdateStatus()
if (currentStatus && !['idle', 'complete', 'error'].includes(currentStatus.stage)) {
return {
success: false,
message: `Update already in progress (stage: ${currentStatus.stage})`,
}
}
// Determine the Docker image tag to install.
const latestVersion = await KVStore.getValue('system.latestVersion')
const requestData = {
requested_at: new Date().toISOString(),
requester: 'admin-api',
target_tag: latestVersion ? `v${latestVersion}` : 'latest',
}
await writeFile(SystemUpdateService.REQUEST_FILE, JSON.stringify(requestData, null, 2))
logger.info(`[SystemUpdateService]: System update requested (target tag: ${requestData.target_tag}) - sidecar will process shortly`)
return {
success: true,
message: 'System update initiated. The admin container will restart during the process.',
}
} catch (error) {
logger.error('[SystemUpdateService]: Failed to request system update:', error)
return {
success: false,
message: `Failed to request update: ${error.message}`,
}
}
}
getUpdateStatus(): UpdateStatus | null {
try {
if (!existsSync(SystemUpdateService.STATUS_FILE)) {
return {
stage: 'idle',
progress: 0,
message: 'No update in progress',
timestamp: new Date().toISOString(),
}
}
const statusContent = readFileSync(SystemUpdateService.STATUS_FILE, 'utf-8')
return JSON.parse(statusContent) as UpdateStatus
} catch (error) {
logger.error('[SystemUpdateService]: Failed to read update status:', error)
return null
}
}
getUpdateLogs(): string {
try {
if (!existsSync(SystemUpdateService.LOG_FILE)) {
return 'No update logs available'
}
return readFileSync(SystemUpdateService.LOG_FILE, 'utf-8')
} catch (error) {
logger.error('[SystemUpdateService]: Failed to read update logs:', error)
return `Error reading logs: ${error.message}`
}
}
/**
* Check if the update sidecar is reachable (i.e. shared volume is mounted)
*/
isSidecarAvailable(): boolean {
try {
return existsSync(SystemUpdateService.SHARED_DIR)
} catch (error) {
return false
}
}
}
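The request and status files form a simple file-based handshake across the shared volume. A sketch of the sidecar's side of the protocol (hypothetical; the real sidecar is not part of this diff) could poll for the request file that requestUpdate() writes and report progress through the status file that getUpdateStatus() reads:

import { existsSync, readFileSync, writeFileSync, unlinkSync } from 'fs'

// Hypothetical sidecar loop: consume the update request, then publish
// stage/progress updates in the shape the UpdateStatus interface expects.
const REQUEST = '/app/update-shared/update-request'
const STATUS = '/app/update-shared/update-status'

setInterval(() => {
  if (!existsSync(REQUEST)) return
  const { target_tag } = JSON.parse(readFileSync(REQUEST, 'utf-8'))
  unlinkSync(REQUEST) // consume the request so it is processed exactly once
  writeFileSync(STATUS, JSON.stringify({
    stage: 'pulling',
    progress: 10,
    message: `Pulling image tag ${target_tag}`,
    timestamp: new Date().toISOString(),
  }))
  // ...docker pull / container recreate would happen here, ending with stage 'complete'
}, 5000)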

View File

@ -0,0 +1,310 @@
import { Archive, Entry } from '@openzim/libzim'
import * as cheerio from 'cheerio'
import { HTML_SELECTORS_TO_REMOVE, NON_CONTENT_HEADING_PATTERNS } from '../../constants/zim_extraction.js'
import logger from '@adonisjs/core/services/logger'
import { ExtractZIMChunkingStrategy, ExtractZIMContentOptions, ZIMContentChunk, ZIMArchiveMetadata } from '../../types/zim.js'
import { randomUUID } from 'node:crypto'
import { access } from 'node:fs/promises'
export class ZIMExtractionService {
private extractArchiveMetadata(archive: Archive): ZIMArchiveMetadata {
try {
return {
title: archive.getMetadata('Title') || archive.getMetadata('Name') || 'Unknown',
creator: archive.getMetadata('Creator') || 'Unknown',
publisher: archive.getMetadata('Publisher') || 'Unknown',
date: archive.getMetadata('Date') || 'Unknown',
language: archive.getMetadata('Language') || 'Unknown',
description: archive.getMetadata('Description') || '',
}
} catch (error) {
logger.warn('[ZIMExtractionService]: Could not extract all metadata, using defaults', error)
return {
title: 'Unknown',
creator: 'Unknown',
publisher: 'Unknown',
date: 'Unknown',
language: 'Unknown',
description: '',
}
}
}
/**
* Breaks out a ZIM file's entries into their structured content form
* to facilitate better indexing and retrieval.
* Returns enhanced chunks with full article context and metadata.
*
* @param filePath - Path to the ZIM file
* @param opts - Options including maxArticles, strategy, onProgress, startOffset, and batchSize
*/
async extractZIMContent(filePath: string, opts: ExtractZIMContentOptions = {}): Promise<ZIMContentChunk[]> {
try {
logger.info(`[ZIMExtractionService]: Processing ZIM file at path: ${filePath}`)
// defensive - check if file still exists before opening
// could have been deleted by another process or batch
try {
await access(filePath)
} catch (error) {
logger.error(`[ZIMExtractionService]: ZIM file not accessible: ${filePath}`)
throw new Error(`ZIM file not found or not accessible: ${filePath}`)
}
const archive = new Archive(filePath)
// Extract archive-level metadata once
const archiveMetadata = this.extractArchiveMetadata(archive)
logger.info(`[ZIMExtractionService]: Archive metadata - Title: ${archiveMetadata.title}, Language: ${archiveMetadata.language}`)
let articlesProcessed = 0
let articlesSkipped = 0
const processedPaths = new Set<string>()
const toReturn: ZIMContentChunk[] = []
// Support batch processing to avoid lock timeouts on large ZIM files
const startOffset = opts.startOffset || 0
const batchSize = opts.batchSize || (opts.maxArticles || Infinity)
for (const entry of archive.iterByPath()) {
// Skip articles until we reach the start offset
if (articlesSkipped < startOffset) {
if (this.isArticleEntry(entry) && !processedPaths.has(entry.path)) {
articlesSkipped++
}
continue
}
if (articlesProcessed >= batchSize) {
break
}
if (!this.isArticleEntry(entry)) {
logger.debug(`[ZIMExtractionService]: Skipping non-article entry at path: ${entry.path}`)
continue
}
if (processedPaths.has(entry.path)) {
logger.debug(`[ZIMExtractionService]: Skipping duplicate entry at path: ${entry.path}`)
continue
}
processedPaths.add(entry.path)
const item = entry.item
const blob = item.data
const html = this.getCleanedHTMLString(blob.data)
const strategy = opts.strategy || this.chooseChunkingStrategy(html);
logger.debug(`[ZIMExtractionService]: Chosen chunking strategy for path ${entry.path}: ${strategy}`)
// Generate a unique document ID. All chunks from same article will share it
const documentId = randomUUID()
const articleTitle = entry.title || entry.path
let chunks: ZIMContentChunk[]
if (strategy === 'structured') {
const structured = this.extractStructuredContent(html)
chunks = structured.sections.map(s => ({
text: s.text,
articleTitle,
articlePath: entry.path,
sectionTitle: s.heading,
fullTitle: `${articleTitle} - ${s.heading}`,
hierarchy: `${articleTitle} > ${s.heading}`,
sectionLevel: s.level,
documentId,
archiveMetadata,
strategy,
}))
} else {
// Simple strategy - entire article as one chunk
const text = this.extractTextFromHTML(html) || ''
chunks = [{
text,
articleTitle,
articlePath: entry.path,
sectionTitle: articleTitle, // Same as article for simple strategy
fullTitle: articleTitle,
hierarchy: articleTitle,
documentId,
archiveMetadata,
strategy,
}]
}
logger.debug(`Extracted ${chunks.length} chunks from article at path: ${entry.path} using strategy: ${strategy}`)
const nonEmptyChunks = chunks.filter(c => c.text.trim().length > 0)
logger.debug(`After filtering empty chunks, ${nonEmptyChunks.length} chunks remain for article at path: ${entry.path}`)
toReturn.push(...nonEmptyChunks)
articlesProcessed++
if (opts.onProgress) {
opts.onProgress(articlesProcessed, archive.articleCount)
}
}
logger.info(`[ZIMExtractionService]: Completed processing ZIM file. Total articles processed: ${articlesProcessed}`)
logger.debug("Final structured content sample:", toReturn.slice(0, 3).map(c => ({
articleTitle: c.articleTitle,
sectionTitle: c.sectionTitle,
hierarchy: c.hierarchy,
textPreview: c.text.substring(0, 100)
})))
logger.debug("Total structured sections extracted:", toReturn.length)
return toReturn
} catch (error) {
logger.error('Error processing ZIM file:', error)
throw error
}
}
private chooseChunkingStrategy(html: string, options = {
forceStrategy: null as ExtractZIMChunkingStrategy | null,
}): ExtractZIMChunkingStrategy {
const {
forceStrategy = null,
} = options;
if (forceStrategy) return forceStrategy;
// Use a simple analysis to determine if the HTML has any meaningful structure
// that we can leverage for better chunking. If not, we'll just chunk it as one big piece of text.
return this.hasStructuredHeadings(html) ? 'structured' : 'simple';
}
private getCleanedHTMLString(buff: Buffer<ArrayBufferLike>): string {
const rawString = buff.toString('utf-8');
const $ = cheerio.load(rawString);
HTML_SELECTORS_TO_REMOVE.forEach((selector) => {
$(selector).remove()
});
return $.html();
}
private extractTextFromHTML(html: string): string | null {
try {
const $ = cheerio.load(html)
// Search body first, then root if body is absent
const text = $('body').length ? $('body').text() : $.root().text()
return text.replace(/\s+/g, ' ').trim()
} catch (error) {
logger.error('Error extracting text from HTML:', error)
return null
}
}
private extractStructuredContent(html: string) {
const $ = cheerio.load(html);
const title = $('h1').first().text().trim() || $('title').text().trim();
// Extract sections with their headings and heading levels
const sections: Array<{ heading: string; text: string; level: number }> = [];
let currentSection = { heading: 'Introduction', content: [] as string[], level: 2 };
$('body').children().each((_, element) => {
const $el = $(element);
const tagName = element.tagName?.toLowerCase();
if (['h2', 'h3', 'h4'].includes(tagName)) {
// Save current section if it has content
if (currentSection.content.length > 0) {
sections.push({
heading: currentSection.heading,
text: currentSection.content.join(' ').replace(/\s+/g, ' ').trim(),
level: currentSection.level,
});
}
// Start new section
const level = parseInt(tagName.substring(1)); // Extract number from h2, h3, h4
currentSection = {
heading: $el.text().replace(/\[edit\]/gi, '').trim(),
content: [],
level,
};
} else if (['p', 'ul', 'ol', 'dl', 'table'].includes(tagName)) {
const text = $el.text().trim();
if (text.length > 0) {
currentSection.content.push(text);
}
}
});
// Push the last section if it has content
if (currentSection.content.length > 0) {
sections.push({
heading: currentSection.heading,
text: currentSection.content.join(' ').replace(/\s+/g, ' ').trim(),
level: currentSection.level,
});
}
return {
title,
sections,
fullText: sections.map(s => `${s.heading}\n${s.text}`).join('\n\n'),
};
}
private hasStructuredHeadings(html: string): boolean {
const $ = cheerio.load(html);
const headings = $('h2, h3').toArray();
// Consider it structured if it has at least 2 headings to break content into meaningful sections
if (headings.length < 2) return false;
// Check that headings have substantial content between them
let sectionsWithContent = 0;
for (const heading of headings) {
const $heading = $(heading);
const headingText = $heading.text().trim();
// Skip empty or very short headings, likely not meaningful
if (headingText.length < 3) continue;
// Skip common non-content headings
if (NON_CONTENT_HEADING_PATTERNS.some(pattern => pattern.test(headingText))) {
continue;
}
// Content until next heading
let contentLength = 0;
let $next = $heading.next();
while ($next.length && !$next.is('h1, h2, h3, h4')) {
contentLength += $next.text().trim().length;
$next = $next.next();
}
// Consider it a real section if it has at least 100 chars of content
if (contentLength >= 100) {
sectionsWithContent++;
}
}
// Require at least 2 sections with substantial content
return sectionsWithContent >= 2;
}
private isArticleEntry(entry: Entry): boolean {
try {
if (entry.isRedirect) return false;
const item = entry.item;
const mimeType = item.mimetype;
return mimeType === 'text/html' || mimeType === 'application/xhtml+xml';
} catch {
return false;
}
}
}
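A usage sketch of the batching knobs documented above (startOffset and batchSize; the file path is an assumption), processing a large ZIM in fixed-size batches so each call stays short:

// Hypothetical driver: walk the archive 500 articles at a time.
const extractor = new ZIMExtractionService()
const BATCH = 500
let offset = 0
while (true) {
  const chunks = await extractor.extractZIMContent('/storage/zim/example.zim', {
    startOffset: offset,
    batchSize: BATCH,
    onProgress: (done, total) => console.log(`batch progress: ${done}/${total}`),
  })
  // (a real driver would track processed article counts rather than chunk counts)
  if (chunks.length === 0) break
  // ...index `chunks` here...
  offset += BATCH
}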

View File

@ -1,78 +1,114 @@
import drive from "@adonisjs/drive/services/main"; import {
import { ListRemoteZimFilesResponse, RawRemoteZimFileEntry, RemoteZimFileEntry, ZimFilesEntry } from "../../types/zim.js"; ListRemoteZimFilesResponse,
import axios from "axios"; RawRemoteZimFileEntry,
RemoteZimFileEntry,
} from '../../types/zim.js'
import axios from 'axios'
import { XMLParser } from 'fast-xml-parser' import { XMLParser } from 'fast-xml-parser'
import { isRawListRemoteZimFilesResponse, isRawRemoteZimFileEntry } from "../../util/zim.js"; import { isRawListRemoteZimFilesResponse, isRawRemoteZimFileEntry } from '../../util/zim.js'
import logger from '@adonisjs/core/services/logger'
import { DockerService } from './docker_service.js'
import { inject } from '@adonisjs/core'
import {
deleteFileIfExists,
ensureDirectoryExists,
getFileStatsIfExists,
listDirectoryContents,
ZIM_STORAGE_PATH,
} from '../utils/fs.js'
import { join, resolve, sep } from 'path'
import { WikipediaOption, WikipediaState } from '../../types/downloads.js'
import vine from '@vinejs/vine'
import { wikipediaOptionsFileSchema } from '#validators/curated_collections'
import WikipediaSelection from '#models/wikipedia_selection'
import InstalledResource from '#models/installed_resource'
import { RunDownloadJob } from '#jobs/run_download_job'
import { SERVICE_NAMES } from '../../constants/service_names.js'
import { CollectionManifestService } from './collection_manifest_service.js'
import type { CategoryWithStatus } from '../../types/collections.js'
const ZIM_MIME_TYPES = ['application/x-zim', 'application/x-openzim', 'application/octet-stream']
const WIKIPEDIA_OPTIONS_URL = 'https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/collections/wikipedia.json'
@inject()
export class ZimService {
constructor(private dockerService: DockerService) { }
async list() {
const dirPath = join(process.cwd(), ZIM_STORAGE_PATH)
await ensureDirectoryExists(dirPath)
const all = await listDirectoryContents(dirPath)
const files = all.filter((item) => item.name.endsWith('.zim'))
return {
files,
}
}
async listRemote({
start,
count,
query,
}: {
start: number
count: number
query?: string
}): Promise<ListRemoteZimFilesResponse> {
const LIBRARY_BASE_URL = 'https://browse.library.kiwix.org/catalog/v2/entries'
const res = await axios.get(LIBRARY_BASE_URL, {
params: {
start: start,
count: count,
lang: 'eng',
...(query ? { q: query } : {}),
},
responseType: 'text',
})
const data = res.data
const parser = new XMLParser({
ignoreAttributes: false,
attributeNamePrefix: '',
textNodeName: '#text',
})
const result = parser.parse(data)
if (!isRawListRemoteZimFilesResponse(result)) {
throw new Error('Invalid response format from remote library')
}
const entries = result.feed.entry
? Array.isArray(result.feed.entry)
? result.feed.entry
: [result.feed.entry]
: []
const filtered = entries.filter((entry: any) => {
return isRawRemoteZimFileEntry(entry)
})
const mapped: (RemoteZimFileEntry | null)[] = filtered.map((entry: RawRemoteZimFileEntry) => {
const downloadLink = entry.link.find((link: any) => {
return (
typeof link === 'object' &&
'rel' in link &&
'length' in link &&
'href' in link &&
'type' in link &&
link.type === 'application/x-zim'
)
})
if (!downloadLink) {
return null
}
// downloadLink['href'] will end with .meta4, we need to remove that to get the actual download URL
const download_url = downloadLink['href'].substring(0, downloadLink['href'].length - 6)
const file_name = download_url.split('/').pop() || `${entry.title}.zim`
const sizeBytes = parseInt(downloadLink['length'], 10)
return {
id: entry.id,
@ -82,58 +118,441 @@ export class ZimService {
size_bytes: sizeBytes || 0,
download_url: download_url,
author: entry.author.name,
file_name: file_name,
}
})
// Filter out any null entries (those without a valid download link)
// or files that already exist in the local storage
const existing = await this.list()
const existingKeys = new Set(existing.files.map((file) => file.name))
const withoutExisting = mapped.filter(
(entry): entry is RemoteZimFileEntry => entry !== null && !existingKeys.has(entry.file_name)
)
return {
items: withoutExisting,
has_more: result.feed.totalResults > start,
total_count: result.feed.totalResults,
}
}
async downloadRemote(url: string): Promise<{ filename: string; jobId?: string }> {
const parsed = new URL(url)
if (!parsed.pathname.endsWith('.zim')) {
throw new Error(`Invalid ZIM file URL: ${url}. URL must end with .zim`)
}
const existing = await RunDownloadJob.getByUrl(url)
if (existing) {
throw new Error('A download for this URL is already in progress')
}
// Extract the filename from the URL
const filename = url.split('/').pop()
if (!filename) {
throw new Error('Could not determine filename from URL')
}
const filepath = join(process.cwd(), ZIM_STORAGE_PATH, filename)
// Parse resource metadata for the download job
const parsedFilename = CollectionManifestService.parseZimFilename(filename)
const resourceMetadata = parsedFilename
? { resource_id: parsedFilename.resource_id, version: parsedFilename.version, collection_ref: null }
: undefined
// Dispatch a background download job
const result = await RunDownloadJob.dispatch({
url,
filepath,
timeout: 30000,
allowedMimeTypes: ZIM_MIME_TYPES,
forceNew: true,
filetype: 'zim',
resourceMetadata,
})
if (!result || !result.job) {
throw new Error('Failed to dispatch download job')
}
logger.info(`[ZimService] Dispatched background download job for ZIM file: ${filename}`)
return {
filename,
jobId: result.job.id,
}
}
async listCuratedCategories(): Promise<CategoryWithStatus[]> {
const manifestService = new CollectionManifestService()
return manifestService.getCategoriesWithStatus()
}
async downloadCategoryTier(categorySlug: string, tierSlug: string): Promise<string[] | null> {
const manifestService = new CollectionManifestService()
const spec = await manifestService.getSpecWithFallback<import('../../types/collections.js').ZimCategoriesSpec>('zim_categories')
if (!spec) {
throw new Error('Could not load ZIM categories spec')
}
const category = spec.categories.find((c) => c.slug === categorySlug)
if (!category) {
throw new Error(`Category not found: ${categorySlug}`)
}
const tier = category.tiers.find((t) => t.slug === tierSlug)
if (!tier) {
throw new Error(`Tier not found: ${tierSlug}`)
}
const allResources = CollectionManifestService.resolveTierResources(tier, category.tiers)
// Filter out already installed
const installed = await InstalledResource.query().where('resource_type', 'zim')
const installedIds = new Set(installed.map((r) => r.resource_id))
const toDownload = allResources.filter((r) => !installedIds.has(r.id))
if (toDownload.length === 0) return null
const downloadFilenames: string[] = []
for (const resource of toDownload) {
const existingJob = await RunDownloadJob.getByUrl(resource.url)
if (existingJob) {
logger.warn(`[ZimService] Download already in progress for ${resource.url}, skipping.`)
continue
}
const filename = resource.url.split('/').pop()
if (!filename) continue
downloadFilenames.push(filename)
const filepath = join(process.cwd(), ZIM_STORAGE_PATH, filename)
await RunDownloadJob.dispatch({
url: resource.url,
filepath,
timeout: 30000,
allowedMimeTypes: ZIM_MIME_TYPES,
forceNew: true,
filetype: 'zim',
resourceMetadata: {
resource_id: resource.id,
version: resource.version,
collection_ref: categorySlug,
},
})
}
return downloadFilenames.length > 0 ? downloadFilenames : null
}
async downloadRemoteSuccessCallback(urls: string[], restart = true) {
// Check if any URL is a Wikipedia download and handle it
for (const url of urls) {
if (url.includes('wikipedia_en_')) {
await this.onWikipediaDownloadComplete(url, true)
}
}
if (restart) {
// Check if there are any remaining ZIM download jobs before restarting
const { QueueService } = await import('./queue_service.js')
const queueService = new QueueService()
const queue = queueService.getQueue('downloads')
// Get all active and waiting jobs
const [activeJobs, waitingJobs] = await Promise.all([
queue.getActive(),
queue.getWaiting(),
])
// Filter out completed jobs (progress === 100) to avoid race condition
// where this job itself is still in the active queue
const activeIncompleteJobs = activeJobs.filter((job) => {
const progress = typeof job.progress === 'number' ? job.progress : 0
return progress < 100
})
// Check if any remaining incomplete jobs are ZIM downloads
const allJobs = [...activeIncompleteJobs, ...waitingJobs]
const hasRemainingZimJobs = allJobs.some((job) => job.data.filetype === 'zim')
if (hasRemainingZimJobs) {
logger.info('[ZimService] Skipping container restart - more ZIM downloads pending')
} else {
// Restart KIWIX container to pick up new ZIM file
logger.info('[ZimService] No more ZIM downloads pending - restarting KIWIX container')
await this.dockerService
.affectContainer(SERVICE_NAMES.KIWIX, 'restart')
.catch((error) => {
logger.error(`[ZimService] Failed to restart KIWIX container:`, error) // Don't stop the download completion, just log the error.
})
}
}
// Create InstalledResource entries for downloaded files
for (const url of urls) {
// Skip Wikipedia files (managed separately)
if (url.includes('wikipedia_en_')) continue
const filename = url.split('/').pop()
if (!filename) continue
const parsed = CollectionManifestService.parseZimFilename(filename)
if (!parsed) continue
const filepath = join(process.cwd(), ZIM_STORAGE_PATH, filename)
const stats = await getFileStatsIfExists(filepath)
try {
const { DateTime } = await import('luxon')
await InstalledResource.updateOrCreate(
{ resource_id: parsed.resource_id, resource_type: 'zim' },
{
version: parsed.version,
url: url,
file_path: filepath,
file_size_bytes: stats ? Number(stats.size) : null,
installed_at: DateTime.now(),
}
)
logger.info(`[ZimService] Created InstalledResource entry for: ${parsed.resource_id}`)
} catch (error) {
logger.error(`[ZimService] Failed to create InstalledResource for ${filename}:`, error)
}
}
}
async delete(file: string): Promise<void> {
let fileName = file
if (!fileName.endsWith('.zim')) {
fileName += '.zim'
}
} }
const basePath = resolve(join(process.cwd(), ZIM_STORAGE_PATH))
const fullPath = resolve(join(basePath, fileName))
// Prevent path traversal — resolved path must stay within the storage directory
if (!fullPath.startsWith(basePath + sep)) {
throw new Error('Invalid filename')
}
const exists = await getFileStatsIfExists(fullPath)
if (!exists) {
throw new Error('not_found')
}
await deleteFileIfExists(fullPath)
// Clean up InstalledResource entry
const parsed = CollectionManifestService.parseZimFilename(fileName)
if (parsed) {
await InstalledResource.query()
.where('resource_id', parsed.resource_id)
.where('resource_type', 'zim')
.delete()
logger.info(`[ZimService] Deleted InstalledResource entry for: ${parsed.resource_id}`)
}
}
// Wikipedia selector methods
async getWikipediaOptions(): Promise<WikipediaOption[]> {
try {
const response = await axios.get(WIKIPEDIA_OPTIONS_URL)
const data = response.data
const validated = await vine.validate({
schema: wikipediaOptionsFileSchema,
data,
})
return validated.options
} catch (error) {
logger.error(`[ZimService] Failed to fetch Wikipedia options:`, error)
throw new Error('Failed to fetch Wikipedia options')
}
}
async getWikipediaSelection(): Promise<WikipediaSelection | null> {
// Get the single row from wikipedia_selections (there should only ever be one)
return WikipediaSelection.query().first()
}
async getWikipediaState(): Promise<WikipediaState> {
const options = await this.getWikipediaOptions()
const selection = await this.getWikipediaSelection()
return {
options,
currentSelection: selection
? {
optionId: selection.option_id,
status: selection.status,
filename: selection.filename,
url: selection.url,
}
: null,
}
}
async selectWikipedia(optionId: string): Promise<{ success: boolean; jobId?: string; message?: string }> {
const options = await this.getWikipediaOptions()
const selectedOption = options.find((opt) => opt.id === optionId)
if (!selectedOption) {
throw new Error(`Invalid Wikipedia option: ${optionId}`)
}
const currentSelection = await this.getWikipediaSelection()
// If same as currently installed, no action needed
if (currentSelection?.option_id === optionId && currentSelection.status === 'installed') {
return { success: true, message: 'Already installed' }
}
// Handle "none" option - delete current Wikipedia file and update DB
if (optionId === 'none') {
if (currentSelection?.filename) {
try {
await this.delete(currentSelection.filename)
logger.info(`[ZimService] Deleted Wikipedia file: ${currentSelection.filename}`)
} catch (error) {
// File might already be deleted, that's OK
logger.warn(`[ZimService] Could not delete Wikipedia file (may already be gone): ${currentSelection.filename}`)
}
}
// Update or create the selection record (always use first record)
if (currentSelection) {
currentSelection.option_id = 'none'
currentSelection.url = null
currentSelection.filename = null
currentSelection.status = 'none'
await currentSelection.save()
} else {
await WikipediaSelection.create({
option_id: 'none',
url: null,
filename: null,
status: 'none',
})
}
// Restart Kiwix to reflect the change
await this.dockerService
.affectContainer(SERVICE_NAMES.KIWIX, 'restart')
.catch((error) => {
logger.error(`[ZimService] Failed to restart Kiwix after Wikipedia removal:`, error)
})
return { success: true, message: 'Wikipedia removed' }
}
// Start download for the new Wikipedia option
if (!selectedOption.url) {
throw new Error('Selected Wikipedia option has no download URL')
}
// Check if already downloading
const existingJob = await RunDownloadJob.getByUrl(selectedOption.url)
if (existingJob) {
return { success: false, message: 'Download already in progress' }
}
// Extract filename from URL
const filename = selectedOption.url.split('/').pop()
if (!filename) {
throw new Error('Could not determine filename from URL')
}
const filepath = join(process.cwd(), ZIM_STORAGE_PATH, filename)
// Update or create selection record to show downloading status
let selection: WikipediaSelection
if (currentSelection) {
currentSelection.option_id = optionId
currentSelection.url = selectedOption.url
currentSelection.filename = filename
currentSelection.status = 'downloading'
await currentSelection.save()
selection = currentSelection
} else {
selection = await WikipediaSelection.create({
option_id: optionId,
url: selectedOption.url,
filename: filename,
status: 'downloading',
})
}
// Dispatch download job
const result = await RunDownloadJob.dispatch({
url: selectedOption.url,
filepath,
timeout: 30000,
allowedMimeTypes: ZIM_MIME_TYPES,
forceNew: true,
filetype: 'zim',
})
if (!result || !result.job) {
// Revert status on failure to dispatch
selection.option_id = currentSelection?.option_id || 'none'
selection.url = currentSelection?.url || null
selection.filename = currentSelection?.filename || null
selection.status = currentSelection?.status || 'none'
await selection.save()
throw new Error('Failed to dispatch download job')
}
logger.info(`[ZimService] Started Wikipedia download for ${optionId}: ${filename}`)
return {
success: true,
jobId: result.job.id,
message: 'Download started',
}
}
async onWikipediaDownloadComplete(url: string, success: boolean): Promise<void> {
const selection = await this.getWikipediaSelection()
if (!selection || selection.url !== url) {
logger.warn(`[ZimService] Wikipedia download complete callback for unknown URL: ${url}`)
return
}
if (success) {
// Update status to installed
selection.status = 'installed'
await selection.save()
logger.info(`[ZimService] Wikipedia download completed successfully: ${selection.filename}`)
// Delete the old Wikipedia file if it exists and is different
// We need to find what was previously installed
const existingFiles = await this.list()
const wikipediaFiles = existingFiles.files.filter((f) =>
f.name.startsWith('wikipedia_en_') && f.name !== selection.filename
)
for (const oldFile of wikipediaFiles) {
try {
await this.delete(oldFile.name)
logger.info(`[ZimService] Deleted old Wikipedia file: ${oldFile.name}`)
} catch (error) {
logger.warn(`[ZimService] Could not delete old Wikipedia file: ${oldFile.name}`, error)
}
}
} else {
// Download failed - keep the selection record but mark as failed
selection.status = 'failed'
await selection.save()
logger.error(`[ZimService] Wikipedia download failed for: ${selection.filename}`)
}
}
}
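For the traversal guard in delete() above, a quick illustration (with hypothetical values) of why the resolve-then-prefix check blocks escaping the storage directory:

import { join, resolve, sep } from 'path'

// Hypothetical check mirroring delete(): '../../etc/passwd.zim' resolves to
// /app/etc/passwd.zim, outside the base path, so the startsWith test rejects it.
const basePath = resolve(join('/app', 'storage/zim'))
for (const name of ['survival-library.zim', '../../etc/passwd.zim']) {
  const fullPath = resolve(join(basePath, name))
  console.log(name, fullPath.startsWith(basePath + sep) ? 'allowed' : 'rejected')
}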

View File

@ -0,0 +1,241 @@
import {
DoResumableDownloadParams,
DoResumableDownloadWithRetryParams,
} from '../../types/downloads.js'
import axios from 'axios'
import { Transform } from 'stream'
import { deleteFileIfExists, ensureDirectoryExists, getFileStatsIfExists } from './fs.js'
import { createWriteStream } from 'fs'
import path from 'path'
/**
* Perform a resumable download with progress tracking
* @param param0 - Download parameters. Leave allowedMimeTypes empty to skip mime type checking.
* Otherwise, mime types should be in the format "application/pdf", "image/png", etc.
* @returns Path to the downloaded file
*/
export async function doResumableDownload({
url,
filepath,
timeout = 30000,
signal,
onProgress,
onComplete,
forceNew = false,
allowedMimeTypes,
}: DoResumableDownloadParams): Promise<string> {
const dirname = path.dirname(filepath)
await ensureDirectoryExists(dirname)
// Check if partial file exists for resume
let startByte = 0
let appendMode = false
const existingStats = await getFileStatsIfExists(filepath)
if (existingStats && !forceNew) {
startByte = existingStats.size
appendMode = true
}
// Get file info with HEAD request first
const headResponse = await axios.head(url, {
signal,
timeout,
})
const contentType = headResponse.headers['content-type'] || ''
const totalBytes = parseInt(headResponse.headers['content-length'] || '0')
const supportsRangeRequests = headResponse.headers['accept-ranges'] === 'bytes'
// If allowedMimeTypes is provided, check content type
if (allowedMimeTypes && allowedMimeTypes.length > 0) {
const isMimeTypeAllowed = allowedMimeTypes.some((mimeType) => contentType.includes(mimeType))
if (!isMimeTypeAllowed) {
throw new Error(`MIME type ${contentType} is not allowed`)
}
}
// If file is already complete and not forcing overwrite just return filepath
if (startByte === totalBytes && totalBytes > 0 && !forceNew) {
return filepath
}
// If server doesn't support range requests and we have a partial file, delete it
if (!supportsRangeRequests && startByte > 0) {
await deleteFileIfExists(filepath)
startByte = 0
appendMode = false
}
const headers: Record<string, string> = {}
if (supportsRangeRequests && startByte > 0) {
headers.Range = `bytes=${startByte}-`
}
const response = await axios.get(url, {
responseType: 'stream',
headers,
signal,
timeout,
})
if (response.status !== 200 && response.status !== 206) {
throw new Error(`Failed to download: HTTP ${response.status}`)
}
return new Promise((resolve, reject) => {
let downloadedBytes = startByte
let lastProgressTime = Date.now()
let lastDownloadedBytes = startByte
// Stall detection: if no data arrives for 5 minutes, abort the download
const STALL_TIMEOUT_MS = 5 * 60 * 1000
let stallTimer: ReturnType<typeof setTimeout> | null = null
const clearStallTimer = () => {
if (stallTimer) {
clearTimeout(stallTimer)
stallTimer = null
}
}
const resetStallTimer = () => {
clearStallTimer()
stallTimer = setTimeout(() => {
cleanup(new Error('Download stalled - no data received for 5 minutes'))
}, STALL_TIMEOUT_MS)
}
// Progress tracking stream to monitor data flow
const progressStream = new Transform({
transform(chunk: Buffer, _: any, callback: Function) {
downloadedBytes += chunk.length
resetStallTimer()
// Update progress tracking
const now = Date.now()
if (onProgress && now - lastProgressTime >= 500) {
lastProgressTime = now
lastDownloadedBytes = downloadedBytes
onProgress({
downloadedBytes,
totalBytes,
lastProgressTime,
lastDownloadedBytes,
url,
})
}
this.push(chunk)
callback()
},
})
const writeStream = createWriteStream(filepath, {
flags: appendMode ? 'a' : 'w',
})
// Handle errors and cleanup
const cleanup = (error?: Error) => {
clearStallTimer()
progressStream.destroy()
response.data.destroy()
writeStream.destroy()
if (error) {
reject(error)
}
}
response.data.on('error', cleanup)
progressStream.on('error', cleanup)
writeStream.on('error', cleanup)
signal?.addEventListener('abort', () => {
cleanup(new Error('Download aborted'))
})
writeStream.on('finish', async () => {
clearStallTimer()
if (onProgress) {
onProgress({
downloadedBytes,
totalBytes,
lastProgressTime: Date.now(),
lastDownloadedBytes: downloadedBytes,
url,
})
}
if (onComplete) {
await onComplete(url, filepath)
}
resolve(filepath)
})
// Start stall timer and pipe: response -> progressStream -> writeStream
resetStallTimer()
response.data.pipe(progressStream).pipe(writeStream)
})
}
export async function doResumableDownloadWithRetry({
url,
filepath,
signal,
timeout = 30000,
onProgress,
max_retries = 3,
retry_delay = 2000,
onAttemptError,
allowedMimeTypes,
}: DoResumableDownloadWithRetryParams): Promise<string> {
const dirname = path.dirname(filepath)
await ensureDirectoryExists(dirname)
let attempt = 0
let lastError: Error | null = null
while (attempt < max_retries) {
try {
const result = await doResumableDownload({
url,
filepath,
signal,
timeout,
allowedMimeTypes,
onProgress,
})
return result // return on success
} catch (error) {
attempt++
const err = error as NodeJS.ErrnoException
lastError = err
const isAborted = err.name === 'AbortError' || err.code === 'ABORT_ERR'
const isNetworkError =
err.code === 'ECONNRESET' || err.code === 'ENOTFOUND' || err.code === 'ETIMEDOUT'
onAttemptError?.(err, attempt)
if (isAborted) {
throw new Error(`Download aborted for URL: ${url}`)
}
if (attempt < max_retries && isNetworkError) {
await delay(retry_delay)
continue
}
// Non-retriable error or retries exhausted: propagate
throw err
}
}
// should not reach here, but TypeScript needs a return
throw lastError || new Error('Unknown error during download')
}
async function delay(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms))
}
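// Usage sketch for doResumableDownloadWithRetry (illustrative; the URL, path,
// and wrapper function below are hypothetical, not part of this module):
async function exampleDownload(): Promise<void> {
const controller = new AbortController()
const savedPath = await doResumableDownloadWithRetry({
url: 'http://my-nas:8080/library.zim',
filepath: '/storage/zim/library.zim',
signal: controller.signal,
max_retries: 5,
onProgress: ({ downloadedBytes, totalBytes }) => {
console.log(`${downloadedBytes}/${totalBytes} bytes`)
},
})
console.log(`Saved to ${savedPath}`)
}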

admin/app/utils/fs.ts Normal file

@ -0,0 +1,176 @@
import { mkdir, readdir, readFile, stat, unlink } from 'fs/promises'
import path, { join } from 'path'
import { FileEntry } from '../../types/files.js'
import { createReadStream } from 'fs'
import { LSBlockDevice, NomadDiskInfoRaw } from '../../types/system.js'
export const ZIM_STORAGE_PATH = '/storage/zim'
export async function listDirectoryContents(path: string): Promise<FileEntry[]> {
const entries = await readdir(path, { withFileTypes: true })
const results: FileEntry[] = []
for (const entry of entries) {
if (entry.isFile()) {
results.push({
type: 'file',
key: join(path, entry.name),
name: entry.name,
})
} else if (entry.isDirectory()) {
results.push({
type: 'directory',
prefix: join(path, entry.name),
name: entry.name,
})
}
}
return results
}
export async function listDirectoryContentsRecursive(path: string): Promise<FileEntry[]> {
let results: FileEntry[] = []
const entries = await readdir(path, { withFileTypes: true })
for (const entry of entries) {
const fullPath = join(path, entry.name)
if (entry.isDirectory()) {
const subdirectoryContents = await listDirectoryContentsRecursive(fullPath)
results = results.concat(subdirectoryContents)
} else {
results.push({
type: 'file',
key: fullPath,
name: entry.name,
})
}
}
return results
}
export async function ensureDirectoryExists(path: string): Promise<void> {
try {
await stat(path)
} catch (error) {
if (error.code === 'ENOENT') {
await mkdir(path, { recursive: true })
} else {
// Don't swallow unexpected errors (e.g. permission failures)
throw error
}
}
}
export async function getFile(path: string, returnType: 'buffer'): Promise<Buffer | null>
export async function getFile(
path: string,
returnType: 'stream'
): Promise<NodeJS.ReadableStream | null>
export async function getFile(path: string, returnType: 'string'): Promise<string | null>
export async function getFile(
path: string,
returnType: 'buffer' | 'string' | 'stream' = 'buffer'
): Promise<Buffer | string | NodeJS.ReadableStream | null> {
try {
if (returnType === 'string') {
return await readFile(path, 'utf-8')
} else if (returnType === 'stream') {
return createReadStream(path)
}
return await readFile(path)
} catch (error) {
if (error.code === 'ENOENT') {
return null
}
throw error
}
}
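// Usage sketch for the getFile overloads (paths are hypothetical): the return
// type follows the overload selected by the returnType argument, and a missing
// file yields null instead of throwing.
// const text = await getFile('/storage/notes/readme.txt', 'string') // string | null
// const buffer = await getFile('/storage/zim/library.zim', 'buffer') // Buffer | null
// const stream = await getFile('/storage/zim/library.zim', 'stream') // NodeJS.ReadableStream | null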
export async function getFileStatsIfExists(
path: string
): Promise<{ size: number; modifiedTime: Date } | null> {
try {
const stats = await stat(path)
return {
size: stats.size,
modifiedTime: stats.mtime,
}
} catch (error) {
if (error.code === 'ENOENT') {
return null
}
throw error
}
}
export async function deleteFileIfExists(path: string): Promise<void> {
try {
await unlink(path)
} catch (error) {
if (error.code !== 'ENOENT') {
throw error
}
}
}
export function getAllFilesystems(
device: LSBlockDevice,
fsSize: NomadDiskInfoRaw['fsSize']
): NomadDiskInfoRaw['fsSize'] {
const filesystems: NomadDiskInfoRaw['fsSize'] = []
const seen = new Set()
function traverse(dev: LSBlockDevice) {
// Try to find matching filesystem
const fs = fsSize.find((f) => matchesDevice(f.fs, dev.name))
if (fs && !seen.has(fs.fs)) {
filesystems.push(fs)
seen.add(fs.fs)
}
// Traverse children recursively
if (dev.children) {
dev.children.forEach((child) => traverse(child))
}
}
traverse(device)
return filesystems
}
export function matchesDevice(fsPath: string, deviceName: string): boolean {
// Remove /dev/ and /dev/mapper/ prefixes
const normalized = fsPath.replace('/dev/mapper/', '').replace('/dev/', '')
// Direct match (covers /dev/sda1 ↔ sda1, /dev/nvme0n1p1 ↔ nvme0n1p1)
if (normalized === deviceName) {
return true
}
// LVM/device-mapper: e.g., /dev/mapper/ubuntu--vg-ubuntu--lv contains "ubuntu--lv"
if (fsPath.startsWith('/dev/mapper/') && fsPath.includes(deviceName)) {
return true
}
return false
}
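// Illustrative expectations for matchesDevice:
// matchesDevice('/dev/sda1', 'sda1') // true - direct match after prefix strip
// matchesDevice('/dev/mapper/ubuntu--vg-ubuntu--lv', 'ubuntu--lv') // true - device-mapper substring match
// matchesDevice('/dev/sda1', 'sda2') // false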
export function determineFileType(filename: string): 'image' | 'pdf' | 'text' | 'zim' | 'unknown' {
const ext = path.extname(filename).toLowerCase()
if (['.jpg', '.jpeg', '.png', '.gif', '.bmp', '.tiff', '.webp'].includes(ext)) {
return 'image'
} else if (ext === '.pdf') {
return 'pdf'
} else if (['.txt', '.md', '.docx', '.rtf'].includes(ext)) {
return 'text'
} else if (ext === '.zim') {
return 'zim'
} else {
return 'unknown'
}
}
/**
* Sanitize a filename by removing potentially dangerous characters.
* @param filename The original filename
* @returns The sanitized filename
*/
export function sanitizeFilename(filename: string): string {
return filename.replace(/[^a-zA-Z0-9._-]/g, '_')
}
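// Illustrative expectations (hypothetical filenames):
// determineFileType('guide.pdf') // 'pdf'
// determineFileType('wikipedia_en_all.zim') // 'zim'
// sanitizeFilename('my report (final).pdf') // 'my_report__final_.pdf'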

admin/app/utils/misc.ts Normal file

@ -0,0 +1,25 @@
export function formatSpeed(bytesPerSecond: number): string {
if (bytesPerSecond < 1024) return `${bytesPerSecond.toFixed(0)} B/s`
if (bytesPerSecond < 1024 * 1024) return `${(bytesPerSecond / 1024).toFixed(1)} KB/s`
return `${(bytesPerSecond / (1024 * 1024)).toFixed(1)} MB/s`
}
export function toTitleCase(str: string): string {
return str
.toLowerCase()
.split(' ')
.map((word) => word.charAt(0).toUpperCase() + word.slice(1))
.join(' ')
}
export function parseBoolean(value: any): boolean {
if (typeof value === 'boolean') return value
if (typeof value === 'string') {
const lower = value.toLowerCase()
return lower === 'true' || lower === '1'
}
if (typeof value === 'number') {
return value === 1
}
return false
}
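// Illustrative expectations:
// formatSpeed(512) // '512 B/s'
// formatSpeed(2048) // '2.0 KB/s'
// formatSpeed(3 * 1024 * 1024) // '3.0 MB/s'
// toTitleCase('emergency water storage') // 'Emergency Water Storage'
// parseBoolean('TRUE') // true
// parseBoolean(0) // false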


@ -0,0 +1,49 @@
/**
* Compare two semantic version strings to determine if the first is newer than the second.
* @param version1 - The version to check (e.g., "1.25.0")
* @param version2 - The current version (e.g., "1.24.0")
* @param includePreReleases - When false (the default), a pre-release version1 is never considered newer
* @returns true if version1 is newer than version2
*/
export function isNewerVersion(version1: string, version2: string, includePreReleases = false): boolean {
const normalize = (v: string) => v.replace(/^v/, '')
const [base1, pre1] = normalize(version1).split('-')
const [base2, pre2] = normalize(version2).split('-')
// If pre-releases are not included and version1 is a pre-release, don't consider it newer
if (!includePreReleases && pre1) {
return false
}
const v1Parts = base1.split('.').map((p) => parseInt(p, 10) || 0)
const v2Parts = base2.split('.').map((p) => parseInt(p, 10) || 0)
const maxLen = Math.max(v1Parts.length, v2Parts.length)
for (let i = 0; i < maxLen; i++) {
const a = v1Parts[i] || 0
const b = v2Parts[i] || 0
if (a > b) return true
if (a < b) return false
}
// Base versions equal — GA > RC, RC.n+1 > RC.n
if (!pre1 && pre2) return true // v1 is GA, v2 is RC → v1 is newer
if (pre1 && !pre2) return false // v1 is RC, v2 is GA → v2 is newer
if (!pre1 && !pre2) return false // both GA, equal
// Both prerelease: compare numeric suffix (e.g. "rc.2" vs "rc.1")
const pre1Num = parseInt(pre1.split('.')[1], 10) || 0
const pre2Num = parseInt(pre2.split('.')[1], 10) || 0
return pre1Num > pre2Num
}
/**
* Parse the major version number from a tag string.
* Strips the 'v' prefix if present.
* @param tag - Version tag (e.g., "v3.8.1", "10.19.4")
* @returns The major version number
*/
export function parseMajorVersion(tag: string): number {
const normalized = tag.replace(/^v/, '')
const major = parseInt(normalized.split('.')[0], 10)
return isNaN(major) ? 0 : major
}
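// Illustrative expectations, matching the GA/RC ordering described above:
// isNewerVersion('1.25.0', '1.24.0') // true - higher minor version
// isNewerVersion('1.30.3-rc.1', '1.30.2') // false - pre-releases ignored by default
// isNewerVersion('1.30.3-rc.2', '1.30.3-rc.1', true) // true - rc.2 > rc.1
// isNewerVersion('1.30.3', '1.30.3-rc.2') // true - GA is newer than its RC
// parseMajorVersion('v3.8.1') // 3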


@ -0,0 +1,13 @@
import vine from '@vinejs/vine'
export const runBenchmarkValidator = vine.compile(
vine.object({
benchmark_type: vine.enum(['full', 'system', 'ai']).optional(),
})
)
export const submitBenchmarkValidator = vine.compile(
vine.object({
benchmark_id: vine.string().optional(),
})
)


@ -0,0 +1,22 @@
import vine from '@vinejs/vine'
export const createSessionSchema = vine.compile(
vine.object({
title: vine.string().trim().minLength(1).maxLength(200),
model: vine.string().trim().optional(),
})
)
export const updateSessionSchema = vine.compile(
vine.object({
title: vine.string().trim().minLength(1).maxLength(200).optional(),
model: vine.string().trim().optional(),
})
)
export const addMessageSchema = vine.compile(
vine.object({
role: vine.enum(['system', 'user', 'assistant'] as const),
content: vine.string().trim().minLength(1),
})
)


@ -0,0 +1,111 @@
import vine from '@vinejs/vine'
/**
* Checks whether a URL points to a loopback or link-local address.
* Used to prevent SSRF: the server should not fetch from localhost
* or link-local/metadata endpoints (e.g. cloud instance metadata at 169.254.x.x).
*
* RFC1918 private ranges (10.x, 172.16-31.x, 192.168.x) are intentionally
* ALLOWED because NOMAD is a LAN appliance and users may host content
* mirrors on their local network.
*
* Throws an error if the URL is a loopback or link-local address.
*/
export function assertNotPrivateUrl(urlString: string): void {
const parsed = new URL(urlString)
const hostname = parsed.hostname.toLowerCase()
const blockedPatterns = [
/^localhost$/,
/^127\.\d+\.\d+\.\d+$/,
/^0\.0\.0\.0$/,
/^169\.254\.\d+\.\d+$/, // Link-local / cloud metadata
/^\[::1\]$/,
/^\[?fe80:/i, // IPv6 link-local
]
if (blockedPatterns.some((re) => re.test(hostname))) {
throw new Error(`Download URL must not point to a loopback or link-local address: ${hostname}`)
}
}
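// Illustrative expectations (hypothetical URLs):
// assertNotPrivateUrl('http://my-nas:8080/file.zim') // ok - LAN hostname
// assertNotPrivateUrl('http://192.168.1.50/file.zim') // ok - RFC1918 intentionally allowed
// assertNotPrivateUrl('http://127.0.0.1:8080/file.zim') // throws - loopback
// assertNotPrivateUrl('http://169.254.169.254/latest/meta-data/') // throws - cloud metadata endpoint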
export const remoteDownloadValidator = vine.compile(
vine.object({
url: vine
.string()
.url({ require_tld: false }) // Allow LAN URLs (e.g. http://my-nas:8080/file.zim)
.trim(),
})
)
export const remoteDownloadWithMetadataValidator = vine.compile(
vine.object({
url: vine
.string()
.url({ require_tld: false }) // Allow LAN URLs
.trim(),
metadata: vine
.object({
title: vine.string().trim().minLength(1),
summary: vine.string().trim().optional(),
author: vine.string().trim().optional(),
size_bytes: vine.number().optional(),
})
.optional(),
})
)
export const remoteDownloadValidatorOptional = vine.compile(
vine.object({
url: vine
.string()
.url({ require_tld: false }) // Allow LAN URLs
.trim()
.optional(),
})
)
export const filenameParamValidator = vine.compile(
vine.object({
params: vine.object({
filename: vine.string().trim().minLength(1).maxLength(4096),
}),
})
)
export const downloadCollectionValidator = vine.compile(
vine.object({
slug: vine.string(),
})
)
export const downloadCategoryTierValidator = vine.compile(
vine.object({
categorySlug: vine.string().trim().minLength(1),
tierSlug: vine.string().trim().minLength(1),
})
)
export const selectWikipediaValidator = vine.compile(
vine.object({
optionId: vine.string().trim().minLength(1),
})
)
const resourceUpdateInfoBase = vine.object({
resource_id: vine.string().trim().minLength(1),
resource_type: vine.enum(['zim', 'map'] as const),
installed_version: vine.string().trim(),
latest_version: vine.string().trim().minLength(1),
download_url: vine.string().url({ require_tld: false }).trim(),
})
export const applyContentUpdateValidator = vine.compile(resourceUpdateInfoBase)
export const applyAllContentUpdatesValidator = vine.compile(
vine.object({
updates: vine
.array(resourceUpdateInfoBase)
.minLength(1),
})
)


@ -0,0 +1,83 @@
import vine from '@vinejs/vine'
// ---- Versioned resource validators (with id + version) ----
export const specResourceValidator = vine.object({
id: vine.string(),
version: vine.string(),
title: vine.string(),
description: vine.string(),
url: vine.string().url(),
size_mb: vine.number().min(0).optional(),
})
// ---- ZIM Categories spec (versioned) ----
export const zimCategoriesSpecSchema = vine.object({
spec_version: vine.string(),
categories: vine.array(
vine.object({
name: vine.string(),
slug: vine.string(),
icon: vine.string(),
description: vine.string(),
language: vine.string().minLength(2).maxLength(5),
tiers: vine.array(
vine.object({
name: vine.string(),
slug: vine.string(),
description: vine.string(),
recommended: vine.boolean().optional(),
includesTier: vine.string().optional(),
resources: vine.array(specResourceValidator),
})
),
})
),
})
// ---- Maps spec (versioned) ----
export const mapsSpecSchema = vine.object({
spec_version: vine.string(),
collections: vine.array(
vine.object({
slug: vine.string(),
name: vine.string(),
description: vine.string(),
icon: vine.string(),
language: vine.string().minLength(2).maxLength(5),
resources: vine.array(specResourceValidator).minLength(1),
})
).minLength(1),
})
// ---- Wikipedia spec (versioned) ----
export const wikipediaSpecSchema = vine.object({
spec_version: vine.string(),
options: vine.array(
vine.object({
id: vine.string(),
name: vine.string(),
description: vine.string(),
size_mb: vine.number().min(0),
url: vine.string().url().nullable(),
version: vine.string().nullable(),
})
).minLength(1),
})
// ---- Wikipedia validators (used by ZimService) ----
export const wikipediaOptionSchema = vine.object({
id: vine.string(),
name: vine.string(),
description: vine.string(),
size_mb: vine.number().min(0),
url: vine.string().url().nullable(),
})
export const wikipediaOptionsFileSchema = vine.object({
options: vine.array(wikipediaOptionSchema).minLength(1),
})


@ -0,0 +1,15 @@
import vine from '@vinejs/vine'
export const downloadJobsByFiletypeSchema = vine.compile(
vine.object({
params: vine.object({
filetype: vine.string(),
}),
})
)
export const modelNameSchema = vine.compile(
vine.object({
model: vine.string(),
})
)


@ -0,0 +1,25 @@
import vine from '@vinejs/vine'
export const chatSchema = vine.compile(
vine.object({
model: vine.string().trim().minLength(1),
messages: vine.array(
vine.object({
role: vine.enum(['system', 'user', 'assistant'] as const),
content: vine.string(),
})
),
stream: vine.boolean().optional(),
sessionId: vine.number().positive().optional(),
})
)
export const getAvailableModelsSchema = vine.compile(
vine.object({
sort: vine.enum(['pulls', 'name'] as const).optional(),
recommendedOnly: vine.boolean().optional(),
query: vine.string().trim().optional(),
limit: vine.number().positive().optional(),
force: vine.boolean().optional(),
})
)
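// Usage sketch: a compiled VineJS validator is invoked with validate(), which
// returns the typed payload or throws a validation error. The payload below is
// hypothetical.
// const payload = await chatSchema.validate({
//   model: 'llama3.2',
//   messages: [{ role: 'user', content: 'How do I purify water in an emergency?' }],
//   stream: true,
// })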


@ -0,0 +1,13 @@
import vine from '@vinejs/vine'
export const getJobStatusSchema = vine.compile(
vine.object({
filePath: vine.string(),
})
)
export const deleteFileSchema = vine.compile(
vine.object({
source: vine.string(),
})
)


@ -0,0 +1,8 @@
import vine from "@vinejs/vine";
import { SETTINGS_KEYS } from "../../constants/kv_store.js";
export const updateSettingSchema = vine.compile(vine.object({
key: vine.enum(SETTINGS_KEYS),
value: vine.any().optional(),
}))


@ -1,5 +1,33 @@
import vine from '@vinejs/vine'
export const installServiceValidator = vine.compile(
vine.object({
service_name: vine.string().trim(),
})
)
export const affectServiceValidator = vine.compile(
vine.object({
service_name: vine.string().trim(),
action: vine.enum(['start', 'stop', 'restart']),
})
)
export const subscribeToReleaseNotesValidator = vine.compile(
vine.object({
email: vine.string().email().trim(),
})
)
export const checkLatestVersionValidator = vine.compile(
vine.object({
force: vine.boolean().optional(), // Optional flag to force bypassing cache and checking for updates immediately
})
)
export const updateServiceValidator = vine.compile(
vine.object({
service_name: vine.string().trim(),
target_version: vine.string().trim(),
})
)


@ -0,0 +1,9 @@
import vine from '@vinejs/vine'
export const listRemoteZimValidator = vine.compile(
vine.object({
start: vine.number().min(0).optional(),
count: vine.number().min(1).max(100).optional(),
query: vine.string().optional(),
})
)


@ -36,6 +36,15 @@ new Ignitor(APP_ROOT, { importer: IMPORTER })
})
app.listen('SIGTERM', () => app.terminate())
app.listenIf(app.managedByPm2, 'SIGINT', () => app.terminate())
app.ready(async () => {
try {
const collectionManifestService = new (await import('#services/collection_manifest_service')).CollectionManifestService()
await collectionManifestService.reconcileFromFilesystem()
} catch (error) {
// Catch and log any errors during reconciliation to prevent the server from crashing
console.error('Error during collection manifest reconciliation:', error)
}
})
})
.httpServer()
.start()


@ -0,0 +1,98 @@
import { BaseCommand, flags } from '@adonisjs/core/ace'
import type { CommandOptions } from '@adonisjs/core/types/ace'
export default class BenchmarkResults extends BaseCommand {
static commandName = 'benchmark:results'
static description = 'Display benchmark results'
@flags.boolean({ description: 'Show only the latest result', alias: 'l' })
declare latest: boolean
@flags.string({ description: 'Output format (table, json)', default: 'table' })
declare format: string
@flags.string({ description: 'Show specific benchmark by ID', alias: 'i' })
declare id: string
static options: CommandOptions = {
startApp: true,
}
async run() {
const { DockerService } = await import('#services/docker_service')
const { BenchmarkService } = await import('#services/benchmark_service')
const dockerService = new DockerService()
const benchmarkService = new BenchmarkService(dockerService)
try {
let results
if (this.id) {
const result = await benchmarkService.getResultById(this.id)
results = result ? [result] : []
} else if (this.latest) {
const result = await benchmarkService.getLatestResult()
results = result ? [result] : []
} else {
results = await benchmarkService.getAllResults()
}
if (results.length === 0) {
this.logger.info('No benchmark results found.')
this.logger.info('Run "node ace benchmark:run" to create a benchmark.')
return
}
if (this.format === 'json') {
console.log(JSON.stringify(results, null, 2))
return
}
// Table format
for (const result of results) {
this.logger.info('')
this.logger.info(`=== Benchmark ${result.benchmark_id} ===`)
this.logger.info(`Type: ${result.benchmark_type}`)
this.logger.info(`Date: ${result.created_at}`)
this.logger.info('')
this.logger.info('Hardware:')
this.logger.info(` CPU: ${result.cpu_model}`)
this.logger.info(` Cores: ${result.cpu_cores} physical, ${result.cpu_threads} threads`)
this.logger.info(` RAM: ${Math.round(result.ram_bytes / (1024 * 1024 * 1024))} GB`)
this.logger.info(` Disk: ${result.disk_type}`)
if (result.gpu_model) {
this.logger.info(` GPU: ${result.gpu_model}`)
}
this.logger.info('')
this.logger.info('Scores:')
this.logger.info(` CPU: ${result.cpu_score.toFixed(2)}`)
this.logger.info(` Memory: ${result.memory_score.toFixed(2)}`)
this.logger.info(` Disk Read: ${result.disk_read_score.toFixed(2)}`)
this.logger.info(` Disk Write: ${result.disk_write_score.toFixed(2)}`)
if (result.ai_tokens_per_second) {
this.logger.info(` AI Tokens/sec: ${result.ai_tokens_per_second.toFixed(2)}`)
this.logger.info(` AI TTFT: ${result.ai_time_to_first_token?.toFixed(2)} ms`)
}
this.logger.info('')
this.logger.info(`NOMAD Score: ${result.nomad_score.toFixed(2)} / 100`)
if (result.submitted_to_repository) {
this.logger.info(`Submitted: Yes (${result.repository_id})`)
} else {
this.logger.info('Submitted: No')
}
this.logger.info('')
}
this.logger.info(`Total results: ${results.length}`)
} catch (error) {
this.logger.error(`Failed to retrieve results: ${error.message}`)
this.exitCode = 1
}
}
}


@ -0,0 +1,105 @@
import { BaseCommand, flags } from '@adonisjs/core/ace'
import type { CommandOptions } from '@adonisjs/core/types/ace'
export default class BenchmarkRun extends BaseCommand {
static commandName = 'benchmark:run'
static description = 'Run system and/or AI benchmarks to measure server performance'
@flags.boolean({ description: 'Run system benchmarks only (CPU, memory, disk)', alias: 's' })
declare systemOnly: boolean
@flags.boolean({ description: 'Run AI benchmark only', alias: 'a' })
declare aiOnly: boolean
@flags.boolean({ description: 'Submit results to repository after completion', alias: 'S' })
declare submit: boolean
static options: CommandOptions = {
startApp: true,
}
async run() {
const { DockerService } = await import('#services/docker_service')
const { BenchmarkService } = await import('#services/benchmark_service')
const dockerService = new DockerService()
const benchmarkService = new BenchmarkService(dockerService)
// Determine benchmark type
let benchmarkType: 'full' | 'system' | 'ai' = 'full'
if (this.systemOnly) {
benchmarkType = 'system'
} else if (this.aiOnly) {
benchmarkType = 'ai'
}
this.logger.info(`Starting ${benchmarkType} benchmark...`)
this.logger.info('')
try {
// Run the benchmark
let result
switch (benchmarkType) {
case 'system':
this.logger.info('Running system benchmarks (CPU, memory, disk)...')
result = await benchmarkService.runSystemBenchmarks()
break
case 'ai':
this.logger.info('Running AI benchmark...')
result = await benchmarkService.runAIBenchmark()
break
default:
this.logger.info('Running full benchmark suite...')
result = await benchmarkService.runFullBenchmark()
}
// Display results
this.logger.info('')
this.logger.success('Benchmark completed!')
this.logger.info('')
this.logger.info('=== Hardware Info ===')
this.logger.info(`CPU: ${result.cpu_model}`)
this.logger.info(`Cores: ${result.cpu_cores} physical, ${result.cpu_threads} threads`)
this.logger.info(`RAM: ${Math.round(result.ram_bytes / (1024 * 1024 * 1024))} GB`)
this.logger.info(`Disk Type: ${result.disk_type}`)
if (result.gpu_model) {
this.logger.info(`GPU: ${result.gpu_model}`)
}
this.logger.info('')
this.logger.info('=== Benchmark Scores ===')
this.logger.info(`CPU Score: ${result.cpu_score.toFixed(2)}`)
this.logger.info(`Memory Score: ${result.memory_score.toFixed(2)}`)
this.logger.info(`Disk Read Score: ${result.disk_read_score.toFixed(2)}`)
this.logger.info(`Disk Write Score: ${result.disk_write_score.toFixed(2)}`)
if (result.ai_tokens_per_second) {
this.logger.info(`AI Tokens/sec: ${result.ai_tokens_per_second.toFixed(2)}`)
this.logger.info(`AI Time to First Token: ${result.ai_time_to_first_token?.toFixed(2)} ms`)
this.logger.info(`AI Model: ${result.ai_model_used}`)
}
this.logger.info('')
this.logger.info(`NOMAD Score: ${result.nomad_score.toFixed(2)} / 100`)
this.logger.info('')
this.logger.info(`Benchmark ID: ${result.benchmark_id}`)
// Submit if requested
if (this.submit) {
this.logger.info('')
this.logger.info('Submitting results to repository...')
try {
const submitResult = await benchmarkService.submitToRepository(result.benchmark_id)
this.logger.success(`Results submitted! Repository ID: ${submitResult.repository_id}`)
this.logger.info(`Your percentile: ${submitResult.percentile}%`)
} catch (error) {
this.logger.error(`Failed to submit: ${error.message}`)
}
}
} catch (error) {
this.logger.error(`Benchmark failed: ${error.message}`)
this.exitCode = 1
}
}
}


@ -0,0 +1,101 @@
import { BaseCommand, flags } from '@adonisjs/core/ace'
import type { CommandOptions } from '@adonisjs/core/types/ace'
export default class BenchmarkSubmit extends BaseCommand {
static commandName = 'benchmark:submit'
static description = 'Submit benchmark results to the community repository'
@flags.string({ description: 'Benchmark ID to submit (defaults to latest)', alias: 'i' })
declare benchmarkId: string
@flags.boolean({ description: 'Skip confirmation prompt', alias: 'y' })
declare yes: boolean
static options: CommandOptions = {
startApp: true,
}
async run() {
const { DockerService } = await import('#services/docker_service')
const { BenchmarkService } = await import('#services/benchmark_service')
const dockerService = new DockerService()
const benchmarkService = new BenchmarkService(dockerService)
try {
// Get the result to submit
const result = this.benchmarkId
? await benchmarkService.getResultById(this.benchmarkId)
: await benchmarkService.getLatestResult()
if (!result) {
this.logger.error('No benchmark result found.')
this.logger.info('Run "node ace benchmark:run" first to create a benchmark.')
this.exitCode = 1
return
}
if (result.submitted_to_repository) {
this.logger.warning(`Benchmark ${result.benchmark_id} has already been submitted.`)
this.logger.info(`Repository ID: ${result.repository_id}`)
return
}
// Show what will be submitted
this.logger.info('')
this.logger.info('=== Data to be submitted ===')
this.logger.info('')
this.logger.info('Hardware Information:')
this.logger.info(` CPU Model: ${result.cpu_model}`)
this.logger.info(` CPU Cores: ${result.cpu_cores}`)
this.logger.info(` CPU Threads: ${result.cpu_threads}`)
this.logger.info(` RAM: ${Math.round(result.ram_bytes / (1024 * 1024 * 1024))} GB`)
this.logger.info(` Disk Type: ${result.disk_type}`)
if (result.gpu_model) {
this.logger.info(` GPU: ${result.gpu_model}`)
}
this.logger.info('')
this.logger.info('Benchmark Scores:')
this.logger.info(` CPU Score: ${result.cpu_score.toFixed(2)}`)
this.logger.info(` Memory Score: ${result.memory_score.toFixed(2)}`)
this.logger.info(` Disk Read: ${result.disk_read_score.toFixed(2)}`)
this.logger.info(` Disk Write: ${result.disk_write_score.toFixed(2)}`)
if (result.ai_tokens_per_second) {
this.logger.info(` AI Tokens/sec: ${result.ai_tokens_per_second.toFixed(2)}`)
this.logger.info(` AI TTFT: ${result.ai_time_to_first_token?.toFixed(2)} ms`)
}
this.logger.info(` NOMAD Score: ${result.nomad_score.toFixed(2)}`)
this.logger.info('')
this.logger.info('Privacy Notice:')
this.logger.info(' - Only the information shown above will be submitted')
this.logger.info(' - No IP addresses, hostnames, or personal data is collected')
this.logger.info(' - Submissions are completely anonymous')
this.logger.info('')
// Confirm submission
if (!this.yes) {
const confirm = await this.prompt.confirm(
'Do you want to submit this benchmark to the community repository?'
)
if (!confirm) {
this.logger.info('Submission cancelled.')
return
}
}
// Submit
this.logger.info('Submitting benchmark...')
const submitResult = await benchmarkService.submitToRepository(result.benchmark_id)
this.logger.success('Benchmark submitted successfully!')
this.logger.info('')
this.logger.info(`Repository ID: ${submitResult.repository_id}`)
this.logger.info(`Your percentile: ${submitResult.percentile}%`)
this.logger.info('')
this.logger.info('Thank you for contributing to the NOMAD community!')
} catch (error) {
this.logger.error(`Submission failed: ${error.message}`)
this.exitCode = 1
}
}
}


@ -0,0 +1,145 @@
import { BaseCommand, flags } from '@adonisjs/core/ace'
import type { CommandOptions } from '@adonisjs/core/types/ace'
import { Worker } from 'bullmq'
import queueConfig from '#config/queue'
import { RunDownloadJob } from '#jobs/run_download_job'
import { DownloadModelJob } from '#jobs/download_model_job'
import { RunBenchmarkJob } from '#jobs/run_benchmark_job'
import { EmbedFileJob } from '#jobs/embed_file_job'
import { CheckUpdateJob } from '#jobs/check_update_job'
import { CheckServiceUpdatesJob } from '#jobs/check_service_updates_job'
export default class QueueWork extends BaseCommand {
static commandName = 'queue:work'
static description = 'Start processing jobs from the queue'
@flags.string({ description: 'Queue name to process' })
declare queue: string
@flags.boolean({ description: 'Process all queues automatically' })
declare all: boolean
static options: CommandOptions = {
startApp: true,
staysAlive: true,
}
async run() {
// Validate that either --queue or --all is provided
if (!this.queue && !this.all) {
this.logger.error('You must specify either --queue=<name> or --all')
process.exit(1)
}
if (this.queue && this.all) {
this.logger.error('Cannot specify both --queue and --all flags')
process.exit(1)
}
const [jobHandlers, allQueues] = await this.loadJobHandlers()
// Determine which queues to process
const queuesToProcess = this.all ? Array.from(allQueues.values()) : [this.queue]
this.logger.info(`Starting workers for queues: ${queuesToProcess.join(', ')}`)
const workers: Worker[] = []
// Create a worker for each queue
for (const queueName of queuesToProcess) {
const worker = new Worker(
queueName,
async (job) => {
this.logger.info(`[${queueName}] Processing job: ${job.id} of type: ${job.name}`)
const jobHandler = jobHandlers.get(job.name)
if (!jobHandler) {
throw new Error(`No handler found for job: ${job.name}`)
}
return await jobHandler.handle(job)
},
{
connection: queueConfig.connection,
concurrency: this.getConcurrencyForQueue(queueName),
autorun: true,
}
)
worker.on('failed', async (job, err) => {
this.logger.error(`[${queueName}] Job failed: ${job?.id}, Error: ${err.message}`)
// If this was a Wikipedia download, mark it as failed in the DB
if (job?.data?.filetype === 'zim' && job?.data?.url?.includes('wikipedia_en_')) {
try {
const { DockerService } = await import('#services/docker_service')
const { ZimService } = await import('#services/zim_service')
const dockerService = new DockerService()
const zimService = new ZimService(dockerService)
await zimService.onWikipediaDownloadComplete(job.data.url, false)
} catch (e: any) {
this.logger.error(
`[${queueName}] Failed to update Wikipedia status: ${e.message}`
)
}
}
})
worker.on('completed', (job) => {
this.logger.info(`[${queueName}] Job completed: ${job.id}`)
})
workers.push(worker)
this.logger.info(`Worker started for queue: ${queueName}`)
}
// Schedule nightly update checks (idempotent, will persist over restarts)
await CheckUpdateJob.scheduleNightly()
await CheckServiceUpdatesJob.scheduleNightly()
// Graceful shutdown for all workers
process.on('SIGTERM', async () => {
this.logger.info('SIGTERM received. Shutting down workers...')
await Promise.all(workers.map((worker) => worker.close()))
this.logger.info('All workers shut down gracefully.')
process.exit(0)
})
}
private async loadJobHandlers(): Promise<[Map<string, any>, Map<string, string>]> {
const handlers = new Map<string, any>()
const queues = new Map<string, string>()
handlers.set(RunDownloadJob.key, new RunDownloadJob())
handlers.set(DownloadModelJob.key, new DownloadModelJob())
handlers.set(RunBenchmarkJob.key, new RunBenchmarkJob())
handlers.set(EmbedFileJob.key, new EmbedFileJob())
handlers.set(CheckUpdateJob.key, new CheckUpdateJob())
handlers.set(CheckServiceUpdatesJob.key, new CheckServiceUpdatesJob())
queues.set(RunDownloadJob.key, RunDownloadJob.queue)
queues.set(DownloadModelJob.key, DownloadModelJob.queue)
queues.set(RunBenchmarkJob.key, RunBenchmarkJob.queue)
queues.set(EmbedFileJob.key, EmbedFileJob.queue)
queues.set(CheckUpdateJob.key, CheckUpdateJob.queue)
queues.set(CheckServiceUpdatesJob.key, CheckServiceUpdatesJob.queue)
return [handlers, queues]
}
/**
* Get concurrency setting for a specific queue
* Can be customized per queue based on workload characteristics
*/
private getConcurrencyForQueue(queueName: string): number {
const concurrencyMap: Record<string, number> = {
[RunDownloadJob.queue]: 3,
[DownloadModelJob.queue]: 2, // Lower concurrency for resource-intensive model downloads
[RunBenchmarkJob.queue]: 1, // Run benchmarks one at a time for accurate results
[EmbedFileJob.queue]: 2, // Lower concurrency for embedding jobs, can be resource intensive
[CheckUpdateJob.queue]: 1, // No need to run more than one update check at a time
default: 3,
}
return concurrencyMap[queueName] || concurrencyMap.default
}
}


@ -1,26 +0,0 @@
import env from '#start/env'
import app from '@adonisjs/core/services/app'
import { defineConfig, services } from '@adonisjs/drive'
const driveConfig = defineConfig({
default: env.get('DRIVE_DISK'),
/**
* The services object can be used to configure multiple file system
* services each using the same or a different driver.
*/
services: {
fs: services.fs({
location: app.makePath('storage'),
serveFiles: true,
routeBasePath: '/storage',
visibility: 'public',
}),
},
})
export default driveConfig
declare module '@adonisjs/drive/types' {
export interface DriveDisks extends InferDriveDisks<typeof driveConfig> { }
}


@ -1,3 +1,5 @@
import KVStore from '#models/kv_store'
import { SystemService } from '#services/system_service'
import { defineConfig } from '@adonisjs/inertia'
import type { InferSharedProps } from '@adonisjs/inertia/types'
@ -11,7 +13,12 @@ const inertiaConfig = defineConfig({
* Data that should be shared with all rendered pages
*/
sharedData: {
appVersion: () => SystemService.getAppVersion(),
environment: process.env.NODE_ENV || 'production',
aiAssistantName: async () => {
const customName = await KVStore.getValue('ai.assistantCustomName')
return (customName && customName.trim()) ? customName : 'AI Assistant'
},
},
/**


@ -13,12 +13,12 @@ const loggerConfig = defineConfig({
app: {
enabled: true,
name: env.get('APP_NAME'),
level: env.get('NODE_ENV') === 'production' ? env.get('LOG_LEVEL') : 'debug', // default to 'debug' in non-production envs
transport: {
targets:
targets()
.pushIf(!app.inProduction, targets.pretty())
.pushIf(app.inProduction, targets.file({ destination: "/app/storage/logs/admin.log", mkdir: true }))
.toArray(),
},
},

admin/config/queue.ts Normal file

@ -0,0 +1,10 @@
import env from '#start/env'
const queueConfig = {
connection: {
host: env.get('REDIS_HOST'),
port: env.get('REDIS_PORT') ?? 6379,
},
}
export default queueConfig


@ -12,6 +12,7 @@ const staticServerConfig = defineConfig({
etag: true,
lastModified: true,
dotFiles: 'ignore',
acceptRanges: true,
})
export default staticServerConfig


@ -1,6 +1,14 @@
import env from '#start/env'
import { defineConfig } from '@adonisjs/transmit'
import { redis } from '@adonisjs/transmit/transports'
export default defineConfig({
pingInterval: '30s',
transport: {
driver: redis({
host: env.get('REDIS_HOST'),
port: env.get('REDIS_PORT'),
keyPrefix: 'transmit:',
})
}
})


@ -0,0 +1,7 @@
export const BROADCAST_CHANNELS = {
BENCHMARK_PROGRESS: 'benchmark-progress',
OLLAMA_MODEL_DOWNLOAD: 'ollama-model-download',
SERVICE_INSTALLATION: 'service-installation',
SERVICE_UPDATES: 'service-updates',
}


@ -0,0 +1,3 @@
import { KVStoreKey } from '../types/kv_store.js'
export const SETTINGS_KEYS: KVStoreKey[] = [
'chat.suggestionsEnabled',
'chat.lastModel',
'ui.hasVisitedEasySetup',
'ui.theme',
'system.earlyAccess',
'ai.assistantCustomName',
]

admin/constants/misc.ts Normal file

@ -0,0 +1,2 @@
export const NOMAD_API_DEFAULT_BASE_URL = 'https://api.projectnomad.us'

admin/constants/ollama.ts Normal file

@ -0,0 +1,157 @@
import { NomadOllamaModel } from '../types/ollama.js'
/**
* Fallback basic recommended Ollama models in case fetching from the service fails.
*/
export const FALLBACK_RECOMMENDED_OLLAMA_MODELS: NomadOllamaModel[] = [
{
name: 'llama3.1',
description:
'Llama 3.1 is a new state-of-the-art model from Meta available in 8B, 70B and 405B parameter sizes.',
estimated_pulls: '109.3M',
id: '9fe9c575-e77e-4a51-a743-07359458ee71',
first_seen: '2026-01-28T23:37:31.000+00:00',
model_last_updated: '1 year ago',
tags: [
{
name: 'llama3.1:8b-text-q4_1',
size: '5.1 GB',
context: '128k',
input: 'Text',
cloud: false,
thinking: false
},
],
},
{
name: 'deepseek-r1',
description:
'DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as O3 and Gemini 2.5 Pro.',
estimated_pulls: '77.2M',
id: '0b566560-68a6-4964-b0d4-beb3ab1ad694',
first_seen: '2026-01-28T23:37:31.000+00:00',
model_last_updated: '7 months ago',
tags: [
{
name: 'deepseek-r1:1.5b',
size: '1.1 GB',
context: '128k',
input: 'Text',
cloud: false,
thinking: true
},
],
},
{
name: 'llama3.2',
description: "Meta's Llama 3.2 goes small with 1B and 3B models.",
estimated_pulls: '54.7M',
id: 'c9a1bc23-b290-4501-a913-f7c9bb39c3ad',
first_seen: '2026-01-28T23:37:31.000+00:00',
model_last_updated: '1 year ago',
tags: [
{
name: 'llama3.2:1b-text-q2_K',
size: '581 MB',
context: '128k',
input: 'Text',
cloud: false,
thinking: false
},
],
},
]
export const DEFAULT_QUERY_REWRITE_MODEL = 'qwen2.5:3b' // default to qwen2.5 for query rewriting with good balance of text task performance and resource usage
/**
* Adaptive RAG context limits based on model size.
* Smaller models get overwhelmed with too much context, so we cap it.
*/
export const RAG_CONTEXT_LIMITS: { maxParams: number; maxResults: number; maxTokens: number }[] = [
{ maxParams: 3, maxResults: 2, maxTokens: 1000 }, // 1-3B models
{ maxParams: 8, maxResults: 4, maxTokens: 2500 }, // 4-8B models
{ maxParams: Infinity, maxResults: 5, maxTokens: 0 }, // 13B+ (no cap)
]
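// A minimal selection sketch (helper name is hypothetical): pick the first tier
// whose maxParams covers the model's parameter count, in billions.
function ragLimitsForModel(paramsBillions: number) {
return RAG_CONTEXT_LIMITS.find((tier) => paramsBillions <= tier.maxParams)!
}
// ragLimitsForModel(3) -> { maxParams: 3, maxResults: 2, maxTokens: 1000 }
// ragLimitsForModel(7) -> { maxParams: 8, maxResults: 4, maxTokens: 2500 }
// ragLimitsForModel(70) -> { maxParams: Infinity, maxResults: 5, maxTokens: 0 } (no cap)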
export const SYSTEM_PROMPTS = {
default: `
Format all responses using markdown for better readability. Vanilla markdown or GitHub-flavored markdown is preferred.
- Use **bold** and *italic* for emphasis.
- Use code blocks with language identifiers for code snippets.
- Use headers (##, ###) to organize longer responses.
- Use bullet points or numbered lists for clarity.
- Use tables when presenting structured data.
`,
rag_context: (context: string) => `
You have access to relevant information from the knowledge base. This context has been retrieved based on semantic similarity to the user's question.
[Knowledge Base Context]
${context}
IMPORTANT INSTRUCTIONS:
1. If the user's question is directly related to the context above, use this information to provide accurate, detailed answers.
2. Always cite or reference the context when using it (e.g., "According to the information available..." or "Based on the knowledge base...").
3. If the context is only partially relevant, combine it with your general knowledge but be clear about what comes from the knowledge base.
4. If the context is not relevant to the user's question, you can respond using your general knowledge without forcing the context into your answer. Do not mention the context if it's not relevant.
5. Never fabricate information that isn't in the context or your training data.
6. If you're unsure or you don't have enough information to answer the user's question, acknowledge the limitations.
Format your response using markdown for readability.
`,
chat_suggestions: `
You are a helpful assistant that generates conversation starter suggestions for a survivalist/prepper using an AI assistant.
Provide exactly 3 conversation starter topics as direct questions that someone would ask.
These should be clear, complete questions that can start meaningful conversations.
Examples of good suggestions:
- "How do I purify water in an emergency?"
- "What are the best foods for long-term storage?"
- "Help me create a 72-hour emergency kit"
Do NOT use:
- Follow-up questions seeking clarification
- Vague or incomplete suggestions
- Questions that assume prior context
- Statements that are not suggestions themselves, such as praise for asking the question
- Direct questions or commands to the user
Return ONLY the 3 suggestions as a comma-separated list with no additional text, formatting, numbering, or quotation marks.
The suggestions should be in title case.
Ensure that your suggestions are comma-separated with no conjunctions like "and" or "or".
Do not use line breaks, new lines, or extra spacing to separate the suggestions.
Format: suggestion1, suggestion2, suggestion3
`,
title_generation: `You are a title generator. Given the start of a conversation, generate a concise, descriptive title under 50 characters. Return ONLY the title text with no quotes, punctuation wrapping, or extra formatting.`,
query_rewrite: `
You are a query rewriting assistant. Your task is to reformulate the user's latest question to include relevant context from the conversation history.
Given the conversation history, rewrite the user's latest question to be a standalone, context-aware search query that will retrieve the most relevant information.
Rules:
1. Keep the rewritten query concise (under 150 words)
2. Include key entities, topics, and context from previous messages
3. Make it a clear, searchable query
4. Do NOT answer the question - only rewrite the user's query to be more effective for retrieval
5. Output ONLY the rewritten query, nothing else
Examples:
Conversation:
User: "How do I install Gentoo?"
Assistant: [detailed installation guide]
User: "Is an internet connection required to install?"
Rewritten Query: "Is an internet connection required to install Gentoo Linux?"
---
Conversation:
User: "What's the best way to preserve meat?"
Assistant: [preservation methods]
User: "How long does it last?"
Rewritten Query: "How long does preserved meat last using curing or smoking methods?"
`,
}


@ -0,0 +1,8 @@
export const SERVICE_NAMES = {
KIWIX: 'nomad_kiwix_server',
OLLAMA: 'nomad_ollama',
QDRANT: 'nomad_qdrant',
CYBERCHEF: 'nomad_cyberchef',
FLATNOTES: 'nomad_flatnotes',
KOLIBRI: 'nomad_kolibri',
}


@ -0,0 +1,48 @@
export const HTML_SELECTORS_TO_REMOVE = [
'script',
'style',
'nav',
'header',
'footer',
'noscript',
'iframe',
'svg',
'.navbox',
'.sidebar',
'.infobox',
'.mw-editsection',
'.reference',
'.reflist',
'.toc',
'.noprint',
'.mw-jump-link',
'.mw-headline-anchor',
'[role="navigation"]',
'.navbar',
'.hatnote',
'.ambox',
'.sistersitebox',
'.portal',
'#coordinates',
'.geo-nondefault',
'.authority-control',
]
// Common heading names that usually don't have meaningful content under them
export const NON_CONTENT_HEADING_PATTERNS = [
/^see also$/i,
/^references$/i,
/^external links$/i,
/^further reading$/i,
/^notes$/i,
/^bibliography$/i,
/^navigation$/i,
]
/**
* Batch size for processing ZIM articles to prevent lock timeout errors.
* Processing 50 articles at a time balances throughput with job duration.
* Typical processing time: 2-5 minutes per batch depending on article complexity.
*/
export const ZIM_BATCH_SIZE = 50
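// A minimal batching sketch using the constant above (the item list and
// processor callback are hypothetical):
async function processInBatches<T>(items: T[], process: (batch: T[]) => Promise<void>): Promise<void> {
for (let i = 0; i < items.length; i += ZIM_BATCH_SIZE) {
await process(items.slice(i, i + ZIM_BATCH_SIZE))
}
}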

Some files were not shown because too many files have changed in this diff.