Compare commits

...

29 Commits

Author SHA1 Message Date
gujishh
9d73628ee3 docs(faq): add recovery steps for missing Kiwix library XML 2026-05-04 10:27:50 -07:00
chriscrosstalk
9cbf8c2135
build: write version.json from VERSION build-arg (#754)
The Dockerfile copied root package.json to /app/version.json, which
SystemService.getAppVersion() reads on every render of the app version in
the UI. semantic-release only reliably commits that bump back on the main
branch; on the rc branch it does not, so v1.31.1-rc.1 and v1.31.1-rc.2
both shipped with a version.json still reading 1.31.0. Result: a user who
upgrades to rc.2 sees "1.31.0" in the UI and a persistent "update to
v1.31.1-rc.2 available" prompt.

The build workflow already passes VERSION as a build-arg (used today only
for the OCI image label). Generating version.json from that arg at build
time makes the image tag the single source of truth and eliminates the
drift, regardless of what the committed-back package.json says.

Dev builds (no VERSION override) write "dev", which matches the existing
NODE_ENV=development short-circuit in getAppVersion().

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 00:58:43 +00:00
cosmistack-bot
3117a1be9d docs(release): finalize v1.31.1 release notes [skip ci] 2026-04-21 21:27:53 +00:00
cosmistack-bot
1a81290b31 chore(release): 1.31.1 [skip ci] 2026-04-21 21:27:16 +00:00
Jake Turner
bd20ba87bd ci: ensure tags are force fetched on semantic release 2026-04-21 14:26:28 -07:00
Jake Turner
5cbe6f5203 docs: update release notes 2026-04-21 14:26:28 -07:00
chriscrosstalk
216509ae0d fix(rag): repair ZIM embedding pipeline (sync filter, batch gate, DOM walk) (#745)
Three bugs in the RAG embedding pipeline, diagnosed and patched by @sbruschke
against v1.31.0 with working before/after chunk counts. All three are
root-cause contributors to #388.

1. scanAndSyncStorage queued every file under /storage/zim/ for embedding,
   including Kiwix's generated kiwix-library.xml. EmbedFileJob rejected it
   with "Unsupported file type" and the default 30-attempt retry policy
   kept it looping on every sync, flooding nomad_admin logs. Now gated on
   determineFileType(filePath) !== 'unknown'.

2. hasMoreBatches compared zimChunks.length (section-level chunk count
   under the 'structured' strategy) against ZIM_BATCH_SIZE (an article
   limit). Because articles emit multiple sections, the two are never
   equal for real archives and processing silently stopped after the
   first 50 articles. Now gated on articlesInBatch >= ZIM_BATCH_SIZE.

3. extractStructuredContent walked only direct children of <body>, so any
   ZIM that wraps content in a container div (Devdocs, Wikipedia,
   FreeCodeCamp, React docs, etc.) produced zero sections and silently
   embedded zero chunks while reporting success. Now walks the full DOM
   via $('body').find('h2, h3, h4, p, ul, ol, dl, table'), with a
   whole-body text fallback when the selector walk yields nothing.
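Fixes 1 and 2 are both small gates. A minimal TypeScript sketch of the idea — `determineFileType`, the extension set, and `filesToEmbed` are illustrative names modeled on the commit description, not the actual NOMAD source:

```typescript
// Hypothetical sketch — helper names and the extension list are assumptions.
const KNOWN_TYPES = new Set(['zim', 'pdf', 'txt', 'md'])

function determineFileType(filePath: string): string {
  const ext = filePath.split('.').pop()?.toLowerCase() ?? ''
  return KNOWN_TYPES.has(ext) ? ext : 'unknown'
}

// Fix 1: only queue files EmbedFileJob can actually process, so generated
// artifacts like kiwix-library.xml never enter the retry loop.
function filesToEmbed(paths: string[]): string[] {
  return paths.filter((p) => determineFileType(p) !== 'unknown')
}

// Fix 2: gate continuation on the article count, not the section-level chunk
// count — each article emits multiple section chunks under 'structured'.
const ZIM_BATCH_SIZE = 50

function hasMoreBatches(articlesInBatch: number): boolean {
  return articlesInBatch >= ZIM_BATCH_SIZE
}
```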

Before/after chunk counts confirmed by @sbruschke on v1.31.0:
  devdocs_en_git   0 -> 916
  devdocs_en_react 0 -> 481
  devdocs_en_node  0 -> 423
  libretexts_en_eng 1 -> 35 (climbing)
Wikipedia resumed progressing normally through its 6M articles.

Closes #718
Closes #719
Closes #720
Closes #388

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 14:26:28 -07:00
chriscrosstalk
810a70acb7 fix(ZIM): accumulate across Kiwix pages to prevent empty Content Explorer (#746)
When many ZIMs are already installed locally, a single Kiwix catalog page
(12 items) could return 12 already-installed items, which zim_service
would fully filter out client-side. The endpoint returned items: [] with
has_more: true, and the frontend's infinite-scroll guard
(flatData.length > 0) blocked fetchNextPage — leaving the user with
"No records found" despite plenty of uninstalled ZIMs available.

Backend now accumulates across up to 5 Kiwix fetches (60 items each)
until it has enough post-filter results to return, dedupes by entry id,
advances currentStart by actual entries returned (not requested), and
returns a next_start cursor. The frontend consumes that cursor instead
of computing Kiwix offsets locally, and the flatData.length > 0 guard is
removed so the existing on-mount effect drives bounded auto-fetch when
a short page lands.
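The accumulation strategy can be sketched as follows. This is a simplified, synchronous model (the real endpoint awaits Kiwix over HTTP); `fetchPage`, `PAGE_SIZE`, and the field names are assumptions, not the actual zim_service code:

```typescript
// Hypothetical sketch of accumulate-until-enough with dedupe and a cursor.
type Entry = { id: string; installed: boolean }
type CatalogPage = { entries: Entry[]; total: number }

const PAGE_SIZE = 60
const MAX_FETCHES = 5

function listUninstalled(
  fetchPage: (start: number, count: number) => CatalogPage,
  start: number,
  wanted: number
) {
  const seen = new Set<string>()
  const items: Entry[] = []
  let cursor = start
  let total = Number.POSITIVE_INFINITY
  for (let i = 0; i < MAX_FETCHES && items.length < wanted && cursor < total; i++) {
    const page = fetchPage(cursor, PAGE_SIZE)
    total = page.total
    if (page.entries.length === 0) break
    // Advance by entries actually returned, not by the requested count.
    cursor += page.entries.length
    for (const e of page.entries) {
      // Dedupe by entry id and filter installed ZIMs server-side.
      if (!e.installed && !seen.has(e.id)) {
        seen.add(e.id)
        items.push(e)
      }
    }
  }
  // has_more compares total against the post-fetch position, which is the
  // implicit fix for the old off-by-one that compared against the input start.
  return { items, next_start: cursor, has_more: cursor < total }
}
```

With this shape, a page of all-installed entries no longer yields `items: []` with `has_more: true` and a stuck frontend — the loop keeps fetching until it has results or exhausts the bounded fetch budget.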

The pre-existing has_more off-by-one (compared totalResults against the
input start rather than the post-fetch position) is fixed implicitly.

Diagnosis credit: @johno10661.

Closes #731

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 14:26:28 -07:00
chriscrosstalk
6646b3480b fix(AI): stop local nomad_ollama container when remote Ollama is configured (#744)
When users set a remote Ollama URL via AI Settings, the local nomad_ollama
container continued running and competed with the remote host for port 11434
and GPU access. Now configureRemote stops the local container on set and
restores it on clear (if still present). Container and its models volume are
preserved so the local install can be re-enabled later.

Closes #662

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 14:26:28 -07:00
0xGlitch
33727c744f fix(UI): gate NAS Storage label on network filesystem type (#749)
Closes #743
2026-04-21 14:26:28 -07:00
chriscrosstalk
0c76a195b9 fix(qdrant): disable anonymous telemetry by default (#747)
Qdrant's upstream default enables anonymous telemetry to telemetry.qdrant.io,
which doesn't match NOMAD's offline-first "zero telemetry" posture. Adding
QDRANT__TELEMETRY_DISABLED=true to the container environment turns it off for
fresh installs and reinstalls.

Existing installs keep their current telemetry-enabled container until the
Qdrant service is force-reinstalled via the Knowledge Base panel or the next
container recreation — Docker bakes Env into containers at create time, so
env changes require a new container.

Closes #742

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 14:26:28 -07:00
chriscrosstalk
056556497c docs: add Community Add-Ons page with field manuals + W3Schools packs (#753)
Introduces a dedicated page listing third-party ZIM content packs built
by the community. Launches with the two current add-ons (jrsphoto field
manuals, kennethbrewer W3Schools) and explains how to install a ZIM pack
and where to submit a new one for inclusion.

- New doc at admin/docs/community-add-ons.md
- Wired into DocsService DOC_ORDER (slot 4) and TITLE_OVERRIDES so the
  hyphen in "Add-Ons" is preserved in the sidebar
- README gets a link under Community & Resources

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-21 14:26:28 -07:00
Jake Turner
b7b3bf00de docs: update release notes 2026-04-21 14:26:28 -07:00
dependabot[bot]
7ec3d790d1 build(deps): bump lodash from 4.17.23 to 4.18.1 in /admin (#643)
Bumps [lodash](https://github.com/lodash/lodash) from 4.17.23 to 4.18.1.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.23...4.18.1)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.18.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-21 14:26:28 -07:00
dependabot[bot]
b6bb0f2321 build(deps-dev): bump vite from 6.4.1 to 6.4.2 in /admin (#677)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 6.4.1 to 6.4.2.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v6.4.2/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v6.4.2/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 6.4.2
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-21 14:26:28 -07:00
dependabot[bot]
92b6f3c22f build(deps): bump @adonisjs/http-server from 7.8.0 to 7.8.1 in /admin (#724)
Bumps [@adonisjs/http-server](https://github.com/adonisjs/http-server) from 7.8.0 to 7.8.1.
- [Release notes](https://github.com/adonisjs/http-server/releases)
- [Commits](https://github.com/adonisjs/http-server/compare/v7.8.0...v7.8.1)

---
updated-dependencies:
- dependency-name: "@adonisjs/http-server"
  dependency-version: 7.8.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-21 14:26:28 -07:00
dependabot[bot]
6ec0678752 build(deps): bump protobufjs from 7.5.4 to 7.5.5 in /admin (#737)
Bumps [protobufjs](https://github.com/protobufjs/protobuf.js) from 7.5.4 to 7.5.5.
- [Release notes](https://github.com/protobufjs/protobuf.js/releases)
- [Changelog](https://github.com/protobufjs/protobuf.js/blob/master/CHANGELOG.md)
- [Commits](https://github.com/protobufjs/protobuf.js/compare/protobufjs-v7.5.4...protobufjs-v7.5.5)

---
updated-dependencies:
- dependency-name: protobufjs
  dependency-version: 7.5.5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-21 14:26:28 -07:00
dependabot[bot]
56dbf95c66 build(deps): bump protocol-buffers-schema from 3.6.0 to 3.6.1 in /admin (#736)
Bumps [protocol-buffers-schema](https://github.com/mafintosh/protocol-buffers-schema) from 3.6.0 to 3.6.1.
- [Commits](https://github.com/mafintosh/protocol-buffers-schema/compare/v3.6.0...v3.6.1)

---
updated-dependencies:
- dependency-name: protocol-buffers-schema
  dependency-version: 3.6.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-21 14:26:28 -07:00
dependabot[bot]
5f0463bb08 build(deps): bump axios from 1.13.5 to 1.15.0 in /admin (#708)
Bumps [axios](https://github.com/axios/axios) from 1.13.5 to 1.15.0.
- [Release notes](https://github.com/axios/axios/releases)
- [Changelog](https://github.com/axios/axios/blob/v1.x/CHANGELOG.md)
- [Commits](https://github.com/axios/axios/compare/v1.13.5...v1.15.0)

---
updated-dependencies:
- dependency-name: axios
  dependency-version: 1.15.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-21 14:26:28 -07:00
dependabot[bot]
540c0abee5 build(deps): bump follow-redirects from 1.15.11 to 1.16.0 in /admin (#729)
Bumps [follow-redirects](https://github.com/follow-redirects/follow-redirects) from 1.15.11 to 1.16.0.
- [Release notes](https://github.com/follow-redirects/follow-redirects/releases)
- [Commits](https://github.com/follow-redirects/follow-redirects/compare/v1.15.11...v1.16.0)

---
updated-dependencies:
- dependency-name: follow-redirects
  dependency-version: 1.16.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-21 14:26:28 -07:00
chriscrosstalk
6c33a96972 fix(AI): allow cancelling in-progress model downloads and ensure consistent progress UI (#701)
Adds a cancel button to in-progress Ollama model downloads and unifies
the Active Model Downloads card layout with the Active Downloads card
used for ZIMs, maps, and pmtiles (byte counts, progress bar, live speed,
status indicator).

Closes #676.
2026-04-21 14:26:28 -07:00
Luís Miguel
806b2c1714 fix(security): SSRF validation for map downloads and error sanitization (CWE-918, CWE-209) (#552)
* fix(security): add SSRF validation to map download URLs from manifest
* fix(security): sanitize verbose error in rag controller scan endpoint
* fix(security): sanitize verbose errors in benchmark controller
* fix(security): sanitize verbose error in system controller version fetch
* fix(security): sanitize verbose errors in chats controller (6 instances)
* fix(security): sanitize verbose errors in docker service (6 instances)
* fix(security): sanitize verbose error in system update service
* fix(security): sanitize verbose errors in collection update service
---------
Co-authored-by: Jake Turner <52841588+jakeaturner@users.noreply.github.com>
2026-04-21 14:26:28 -07:00
Jake Turner
2b8c847295 fix(Downloads): remove duplicate error listener and improve Range request stability 2026-04-21 14:26:28 -07:00
Aaron Bird
8d026da06e fix(downloads): stage downloads to .tmp to prevent Kiwix loading partial files
Downloads are now written to `filepath + '.tmp'` and atomically renamed
to the final path only on successful completion. Kiwix globs for `*.zim`
and ZimService filters `.endsWith('.zim')`, so `.tmp` files are invisible
to both during download. The same staging applies to `.pmtiles` map files.

Ref #372

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 14:26:28 -07:00
Ben Gauger
151b454ad9 fix(disk-display): show NAS Storage label in fsSize fallback path
Co-Authored-By: Ben Smith <bravosierra99@gmail.com>
2026-04-21 14:26:28 -07:00
Ben Gauger
84399b19d9 fix(disk-collector): fix storage reporting for NFS mounts
Co-Authored-By: Ben Smith <bravosierra99@gmail.com>
2026-04-21 14:26:28 -07:00
Jake Turner
c8cb79a3a5 fix: prevent ZIM corrupt file crash and deduplicate Ollama download logs (#741)
Corrupted ZIM files cause a native C++ abort (ZimFileFormatError) that
bypasses JS try/catch and kills the process. Add magic number validation
before passing files to @openzim/libzim so invalid files are skipped
gracefully. Also deduplicate Ollama download progress broadcasts — both
within a single stream (skip unchanged percentages) and across concurrent
callers (share one download promise per model).
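The magic-number check could look like the sketch below. The openZIM spec defines the header's first four bytes as the little-endian integer 72173914; the function name and call site here are assumptions, not the commit's actual code:

```typescript
// Hypothetical sketch: cheap header validation before @openzim/libzim touches
// the file, since its native parser aborts the whole process on corrupt input.
import { open } from 'node:fs/promises'

const ZIM_MAGIC = 72_173_914 // 0x044D495A per the openZIM file format spec

async function isValidZim(filePath: string): Promise<boolean> {
  const handle = await open(filePath, 'r')
  try {
    const buf = Buffer.alloc(4)
    const { bytesRead } = await handle.read(buf, 0, 4, 0)
    return bytesRead === 4 && buf.readUInt32LE(0) === ZIM_MAGIC
  } finally {
    await handle.close()
  }
}
```

A falsy result lets the caller skip the file and log a warning in ordinary JS, instead of hitting the native ZimFileFormatError abort that bypasses try/catch.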

Co-authored-by: aegisman <aegis@manicode.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-21 14:26:28 -07:00
Henry Estela
6510f42184 fix(AI): qwen2.5 loading on every chat message (#649)
Use the currently loaded model for chat title generation and query rewrite.
2026-04-21 14:26:28 -07:00
Henry Estela
4d866167a2 fix(AI): add null check to model name (#645)
When the OpenAI-compatible fallback (/v1/models) is used, models are mapped as { name: m.id, size: 0 } with no details field. Accessing model.details.parameter_size throws `TypeError: Cannot read properties of undefined`, which crashes the React render and causes the entire page to go blank.
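The guard reduces to optional chaining with a fallback. A minimal sketch — the type shape mirrors the commit's description, and any field beyond `details.parameter_size` is assumed:

```typescript
// Hypothetical sketch of the null-safe render helper.
type OllamaModel = {
  name: string
  size: number
  details?: { parameter_size?: string }
}

function parameterSizeLabel(model: OllamaModel): string {
  // /v1/models fallback entries carry no `details` field, so optional
  // chaining keeps the render path alive instead of throwing a TypeError
  // that blanks the whole page.
  return model.details?.parameter_size ?? 'Unknown'
}
```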
2026-04-21 14:26:28 -07:00
37 changed files with 1174 additions and 368 deletions

View File

@@ -26,6 +26,8 @@ jobs:
with:
fetch-depth: 0
persist-credentials: false
- name: Sync tags
run: git fetch --tags --force
- name: semantic-release
uses: cycjimmy/semantic-release-action@v6
id: semver

View File

@@ -43,8 +43,10 @@ ENV NODE_ENV=production
WORKDIR /app
COPY --from=production-deps /app/node_modules /app/node_modules
COPY --from=build /app/build /app
# Copy root package.json for version info
COPY package.json /app/version.json
# Generate version.json from the VERSION build-arg so the image tag is the
# single source of truth (previously copied root package.json, which drifted
# from the tag when semantic-release did not commit the bump back).
RUN echo "{\"version\":\"${VERSION}\"}" > /app/version.json
# Copy docs and README for access within the container
COPY admin/docs /app/docs

View File

@@ -124,6 +124,7 @@ Contributions are welcome and appreciated! Please see [CONTRIBUTING.md](CONTRIBU
- **Benchmark Leaderboard:** [benchmark.projectnomad.us](https://benchmark.projectnomad.us) - See how your hardware stacks up against other NOMAD builds
- **Troubleshooting Guide:** [TROUBLESHOOTING.md](TROUBLESHOOTING.md) - Find solutions to common issues
- **FAQ:** [FAQ.md](FAQ.md) - Find answers to frequently asked questions
- **Community Add-Ons:** [admin/docs/community-add-ons.md](admin/docs/community-add-ons.md) - Third-party content packs built by the community
## License

View File

@@ -5,6 +5,7 @@ import { runBenchmarkValidator, submitBenchmarkValidator } from '#validators/ben
import { RunBenchmarkJob } from '#jobs/run_benchmark_job'
import type { BenchmarkType } from '../../types/benchmark.js'
import { randomUUID } from 'node:crypto'
import logger from '@adonisjs/core/services/logger'
@inject()
export default class BenchmarkController {
@@ -52,9 +53,10 @@ export default class BenchmarkController {
result,
})
} catch (error) {
logger.error({ err: error }, '[BenchmarkController] Benchmark run failed')
return response.status(500).send({
success: false,
error: error.message,
error: 'An internal error occurred while running the benchmark.',
})
}
}
@@ -181,9 +183,10 @@ export default class BenchmarkController {
} catch (error) {
// Pass through the status code from the service if available, otherwise default to 400
const statusCode = (error as any).statusCode || 400
logger.error({ err: error }, '[BenchmarkController] Benchmark submit failed')
return response.status(statusCode).send({
success: false,
error: error.message,
error: 'Failed to submit benchmark results.',
})
}
}

View File

@@ -5,6 +5,7 @@ import { createSessionSchema, updateSessionSchema, addMessageSchema } from '#val
import KVStore from '#models/kv_store'
import { SystemService } from '#services/system_service'
import { SERVICE_NAMES } from '../../constants/service_names.js'
import logger from '@adonisjs/core/services/logger'
@inject()
export default class ChatsController {
@@ -45,8 +46,9 @@ export default class ChatsController {
const session = await this.chatService.createSession(data.title, data.model)
return response.status(201).json(session)
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to create session')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to create session',
error: 'Failed to create session',
})
}
}
@@ -56,8 +58,9 @@ export default class ChatsController {
const suggestions = await this.chatService.getChatSuggestions()
return response.status(200).json({ suggestions })
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to get suggestions')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to get suggestions',
error: 'Failed to get suggestions',
})
}
}
@@ -69,8 +72,9 @@ export default class ChatsController {
const session = await this.chatService.updateSession(sessionId, data)
return session
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to update session')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to update session',
error: 'Failed to update session',
})
}
}
@@ -81,8 +85,9 @@ export default class ChatsController {
await this.chatService.deleteSession(sessionId)
return response.status(204)
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to delete session')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to delete session',
error: 'Failed to delete session',
})
}
}
@@ -94,8 +99,9 @@ export default class ChatsController {
const message = await this.chatService.addMessage(sessionId, data.role, data.content)
return response.status(201).json(message)
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to add message')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to add message',
error: 'Failed to add message',
})
}
}
@@ -105,8 +111,9 @@ export default class ChatsController {
const result = await this.chatService.deleteAllSessions()
return response.status(200).json(result)
} catch (error) {
logger.error({ err: error }, '[ChatsController] Failed to delete all sessions')
return response.status(500).json({
error: error instanceof Error ? error.message : 'Failed to delete all sessions',
error: 'Failed to delete all sessions',
})
}
}

View File

@@ -8,7 +8,7 @@ import { modelNameSchema } from '#validators/download'
import { chatSchema, getAvailableModelsSchema } from '#validators/ollama'
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
import { DEFAULT_QUERY_REWRITE_MODEL, RAG_CONTEXT_LIMITS, SYSTEM_PROMPTS } from '../../constants/ollama.js'
import { RAG_CONTEXT_LIMITS, SYSTEM_PROMPTS } from '../../constants/ollama.js'
import { SERVICE_NAMES } from '../../constants/service_names.js'
import logger from '@adonisjs/core/services/logger'
type Message = { role: 'system' | 'user' | 'assistant'; content: string }
@@ -59,7 +59,7 @@ export default class OllamaController {
// Query rewriting for better RAG retrieval with manageable context
// Will return user's latest message if no rewriting is needed
const rewrittenQuery = await this.rewriteQueryWithContext(reqData.messages)
const rewrittenQuery = await this.rewriteQueryWithContext(reqData.messages, reqData.model)
logger.debug(`[OllamaController] Rewritten query for RAG: "${rewrittenQuery}"`)
if (rewrittenQuery) {
@@ -157,7 +157,7 @@ export default class OllamaController {
await this.chatService.addMessage(sessionId, 'assistant', fullContent)
const messageCount = await this.chatService.getMessageCount(sessionId)
if (messageCount <= 2 && userContent) {
this.chatService.generateTitle(sessionId, userContent, fullContent).catch((err) => {
this.chatService.generateTitle(sessionId, userContent, fullContent, reqData.model).catch((err) => {
logger.error(`[OllamaController] Title generation failed: ${err instanceof Error ? err.message : err}`)
})
}
@@ -172,7 +172,7 @@ export default class OllamaController {
await this.chatService.addMessage(sessionId, 'assistant', result.message.content)
const messageCount = await this.chatService.getMessageCount(sessionId)
if (messageCount <= 2 && userContent) {
this.chatService.generateTitle(sessionId, userContent, result.message.content).catch((err) => {
this.chatService.generateTitle(sessionId, userContent, result.message.content, reqData.model).catch((err) => {
logger.error(`[OllamaController] Title generation failed: ${err instanceof Error ? err.message : err}`)
})
}
@@ -212,13 +212,21 @@ export default class OllamaController {
return response.status(404).send({ success: false, message: 'Ollama service record not found.' })
}
// Clear path: null or empty URL removes remote config and marks service as not installed
// Clear path: null or empty URL removes remote config. If a local nomad_ollama container
// still exists (user had previously installed AI Assistant locally), restart it and keep
// the service marked installed. Otherwise fall back to uninstalled.
if (!remoteUrl || remoteUrl.trim() === '') {
await KVStore.clearValue('ai.remoteOllamaUrl')
ollamaService.installed = false
const hasLocalContainer = await this._startLocalOllamaContainerIfExists()
ollamaService.installed = hasLocalContainer
ollamaService.installation_status = 'idle'
await ollamaService.save()
return { success: true, message: 'Remote Ollama configuration cleared.' }
return {
success: true,
message: hasLocalContainer
? 'Remote Ollama cleared. Local Ollama container restored.'
: 'Remote Ollama configuration cleared.',
}
}
// Validate URL format
@@ -253,6 +261,10 @@ export default class OllamaController {
ollamaService.installation_status = 'idle'
await ollamaService.save()
// Stop the local nomad_ollama container (if running) so it doesn't compete with the
// remote host for GPU / port 11434. Preserves the container and its models volume.
await this._stopLocalOllamaContainer()
// Install Qdrant if not already installed (fire-and-forget)
const qdrantService = await Service.query().where('service_name', SERVICE_NAMES.QDRANT).first()
if (qdrantService && !qdrantService.installed) {
@@ -270,6 +282,50 @@ export default class OllamaController {
return { success: true, message: 'Remote Ollama configured.' }
}
private async _stopLocalOllamaContainer(): Promise<void> {
try {
const containers = await this.dockerService.docker.listContainers({ all: true })
const ollamaContainer = containers.find((c) =>
c.Names.includes(`/${SERVICE_NAMES.OLLAMA}`)
)
if (!ollamaContainer || ollamaContainer.State !== 'running') {
return
}
await this.dockerService.docker.getContainer(ollamaContainer.Id).stop()
this.dockerService.invalidateServicesStatusCache()
logger.info('[OllamaController] Stopped local nomad_ollama (remote Ollama configured)')
} catch (error: any) {
logger.error(
{ err: error },
'[OllamaController] Failed to stop local nomad_ollama; remote Ollama is still active'
)
}
}
private async _startLocalOllamaContainerIfExists(): Promise<boolean> {
try {
const containers = await this.dockerService.docker.listContainers({ all: true })
const ollamaContainer = containers.find((c) =>
c.Names.includes(`/${SERVICE_NAMES.OLLAMA}`)
)
if (!ollamaContainer) {
return false
}
if (ollamaContainer.State !== 'running') {
await this.dockerService.docker.getContainer(ollamaContainer.Id).start()
this.dockerService.invalidateServicesStatusCache()
logger.info('[OllamaController] Started local nomad_ollama (remote Ollama cleared)')
}
return true
} catch (error: any) {
logger.error(
{ err: error },
'[OllamaController] Failed to start local nomad_ollama on remote clear'
)
return false
}
}
async deleteModel({ request }: HttpContext) {
const reqData = await request.validateUsing(modelNameSchema)
await this.ollamaService.deleteModel(reqData.model)
@@ -312,9 +368,18 @@
}
private async rewriteQueryWithContext(
messages: Message[]
messages: Message[],
model: string
): Promise<string | null> {
const lastUserMessage = [...messages].reverse().find(msg => msg.role === 'user')
try {
// Skip the entire RAG pipeline if there are no documents to search
const hasDocuments = await this.ragService.hasDocuments()
if (!hasDocuments) {
return null
}
// Get recent conversation history (last 6 messages for 3 turns)
const recentMessages = messages.slice(-6)
@@ -322,7 +387,7 @@
// little RAG benefit until there is enough context to matter.
const userMessages = recentMessages.filter(msg => msg.role === 'user')
if (userMessages.length <= 2) {
return userMessages[userMessages.length - 1]?.content || null
return lastUserMessage?.content || null
}
const conversationContext = recentMessages
@@ -336,17 +401,8 @@
})
.join('\n')
const installedModels = await this.ollamaService.getModels(true)
const rewriteModelAvailable = installedModels?.some(model => model.name === DEFAULT_QUERY_REWRITE_MODEL)
if (!rewriteModelAvailable) {
logger.warn(`[RAG] Query rewrite model "${DEFAULT_QUERY_REWRITE_MODEL}" not available. Skipping query rewriting.`)
const lastUserMessage = [...messages].reverse().find(msg => msg.role === 'user')
return lastUserMessage?.content || null
}
// FUTURE ENHANCEMENT: allow the user to specify which model to use for rewriting
const response = await this.ollamaService.chat({
model: DEFAULT_QUERY_REWRITE_MODEL,
model,
messages: [
{
role: 'system',
@@ -367,7 +423,6 @@
`[RAG] Query rewriting failed: ${error instanceof Error ? error.message : error}`
)
// Fallback to last user message if rewriting fails
const lastUserMessage = [...messages].reverse().find(msg => msg.role === 'user')
return lastUserMessage?.content || null
}
}

View File

@ -6,6 +6,7 @@ import app from '@adonisjs/core/services/app'
import { randomBytes } from 'node:crypto'
import { sanitizeFilename } from '../utils/fs.js'
import { deleteFileSchema, getJobStatusSchema } from '#validators/rag'
import logger from '@adonisjs/core/services/logger'
@inject()
export default class RagController {
@@ -92,7 +93,8 @@ export default class RagController {
const syncResult = await this.ragService.scanAndSyncStorage()
return response.status(200).json(syncResult)
} catch (error) {
return response.status(500).json({ error: 'Error scanning and syncing storage', details: error.message })
logger.error({ err: error }, '[RagController] Error scanning and syncing storage')
return response.status(500).json({ error: 'Error scanning and syncing storage' })
}
}
}

View File

@@ -6,6 +6,7 @@ import { CheckServiceUpdatesJob } from '#jobs/check_service_updates_job'
import { affectServiceValidator, checkLatestVersionValidator, installServiceValidator, subscribeToReleaseNotesValidator, updateServiceValidator } from '#validators/system';
import { inject } from '@adonisjs/core'
import type { HttpContext } from '@adonisjs/core/http'
import logger from '@adonisjs/core/services/logger'
@inject()
export default class SystemController {
@@ -144,7 +145,8 @@ export default class SystemController {
)
response.send({ versions: updates })
} catch (error) {
response.status(500).send({ error: `Failed to fetch versions: ${error.message}` })
logger.error({ err: error }, `[SystemController] Failed to fetch versions for ${serviceName}`)
response.status(500).send({ error: 'Failed to fetch available versions for this service.' })
}
}

View File

@@ -21,6 +21,25 @@ export class DownloadModelJob {
return createHash('sha256').update(modelName).digest('hex').slice(0, 16)
}
/** In-memory registry of abort controllers for active model download jobs */
static abortControllers: Map<string, AbortController> = new Map()
/**
* Redis key used to signal cancellation across processes. Uses a `model-cancel` prefix
* so it cannot collide with content download cancel signals (`nomad:download:cancel:*`).
*/
static cancelKey(jobId: string): string {
return `nomad:download:model-cancel:${jobId}`
}
/** Signal cancellation via Redis so the worker process can pick it up on its next poll tick */
static async signalCancel(jobId: string): Promise<void> {
const queueService = new QueueService()
const queue = queueService.getQueue(this.queue)
const client = await queue.client
await client.set(this.cancelKey(jobId), '1', 'EX', 300) // 5 min TTL
}
async handle(job: Job) {
const { modelName } = job.data as DownloadModelJobParams
@@ -41,43 +60,96 @@
`[DownloadModelJob] Ollama service is ready. Initiating download for ${modelName}`
)
// Services are ready, initiate the download with progress tracking
const result = await ollamaService.downloadModel(modelName, (progressPercent) => {
if (progressPercent) {
job.updateProgress(Math.floor(progressPercent)).catch((err) => {
if (err?.code !== -1) throw err
})
logger.info(
`[DownloadModelJob] Model ${modelName}: ${progressPercent}%`
)
// Register abort controller for this job — used both by in-process cancels (same process
// as the API server) and as the target of the Redis poll loop below.
const abortController = new AbortController()
DownloadModelJob.abortControllers.set(job.id!, abortController)
// Get Redis client for checking cancel signals from the API process
const queueService = new QueueService()
const cancelRedis = await queueService.getQueue(DownloadModelJob.queue).client
// Track whether cancellation was explicitly requested by the user. Only user-initiated
// cancels become UnrecoverableError — other failures (e.g., transient network errors)
// should still benefit from BullMQ's retry logic.
let userCancelled = false
// Poll Redis for cancel signal every 2s — independent of progress events so cancellation
// works even when the pull is mid-blob and not emitting progress updates.
let cancelPollInterval: ReturnType<typeof setInterval> | null = setInterval(async () => {
try {
const val = await cancelRedis.get(DownloadModelJob.cancelKey(job.id!))
if (val) {
await cancelRedis.del(DownloadModelJob.cancelKey(job.id!))
userCancelled = true
abortController.abort('user-cancel')
}
} catch {
// Redis errors are non-fatal; in-process AbortController covers same-process cancels
}
}, 2000)
// Store detailed progress in job data for clients to query
job.updateData({
...job.data,
status: 'downloading',
progress: progressPercent,
progress_timestamp: new Date().toISOString(),
}).catch((err) => {
if (err?.code !== -1) throw err
})
})
try {
// Services are ready, initiate the download with progress tracking
const result = await ollamaService.downloadModel(
modelName,
(progressPercent, bytes) => {
if (progressPercent) {
job.updateProgress(Math.floor(progressPercent)).catch((err) => {
if (err?.code !== -1) throw err
})
}
if (!result.success) {
logger.error(
`[DownloadModelJob] Failed to initiate download for model ${modelName}: ${result.message}`
// Store detailed progress in job data for clients to query
job.updateData({
...job.data,
status: 'downloading',
progress: progressPercent,
downloadedBytes: bytes?.downloadedBytes,
totalBytes: bytes?.totalBytes,
progress_timestamp: new Date().toISOString(),
}).catch((err) => {
if (err?.code !== -1) throw err
})
},
abortController.signal,
job.id!
)
// Don't retry errors that will never succeed (e.g., Ollama version too old)
if (result.retryable === false) {
throw new UnrecoverableError(result.message)
}
throw new Error(`Failed to initiate download for model: ${result.message}`)
}
logger.info(`[DownloadModelJob] Successfully completed download for model ${modelName}`)
return {
modelName,
message: result.message,
if (!result.success) {
logger.error(
`[DownloadModelJob] Failed to initiate download for model ${modelName}: ${result.message}`
)
// User-initiated cancel — must be unrecoverable to avoid the 40-attempt retry storm.
// The downloadModel() catch block returns retryable: false for cancels, so this branch
// catches both Ollama version mismatches (existing) AND user cancels (new).
if (result.retryable === false) {
throw new UnrecoverableError(result.message)
}
throw new Error(`Failed to initiate download for model: ${result.message}`)
}
logger.info(`[DownloadModelJob] Successfully completed download for model ${modelName}`)
return {
modelName,
message: result.message,
}
} catch (error: any) {
// Belt-and-suspenders: if downloadModel didn't recognize the cancel (e.g., the abort
// fired after the response stream completed but before our code returned), the cancel
// flag tells us this was a user action and should be unrecoverable.
if (userCancelled || abortController.signal.reason === 'user-cancel') {
if (!(error instanceof UnrecoverableError)) {
throw new UnrecoverableError(`Model download cancelled: ${error.message ?? error}`)
}
}
throw error
} finally {
if (cancelPollInterval !== null) {
clearInterval(cancelPollInterval)
cancelPollInterval = null
}
DownloadModelJob.abortControllers.delete(job.id!)
}
}
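The cancel path above pairs an in-process AbortController with a Redis poll so cancels reach a worker in another process. A minimal standalone sketch of that pattern, with a plain Map standing in for the shared Redis client (the key name and TTL comment are illustrative, not the project's exact schema):

```typescript
// In-memory stand-in for Redis; real code uses SET cancel:<jobId> 1 EX 300
// so stale cancel signals expire on their own.
const cancelStore = new Map<string, string>()

function signalCancel(jobId: string): void {
  cancelStore.set(`cancel:${jobId}`, '1')
}

// Poll for the cancel key and fire the AbortController when it appears.
// Returns a stop function the worker runs in its finally block.
function watchForCancel(
  jobId: string,
  controller: AbortController,
  pollMs = 250
): () => void {
  const interval = setInterval(() => {
    if (cancelStore.has(`cancel:${jobId}`)) {
      cancelStore.delete(`cancel:${jobId}`) // consume the one-shot signal
      controller.abort('user-cancel')
    }
  }, pollMs)
  return () => clearInterval(interval)
}
```

The abort reason (`'user-cancel'`) is what lets the catch block distinguish user cancels from transient failures when deciding whether to throw UnrecoverableError.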

View File

@@ -4,7 +4,7 @@ import logger from '@adonisjs/core/services/logger'
import { DateTime } from 'luxon'
import { inject } from '@adonisjs/core'
import { OllamaService } from './ollama_service.js'
import { DEFAULT_QUERY_REWRITE_MODEL, SYSTEM_PROMPTS } from '../../constants/ollama.js'
import { SYSTEM_PROMPTS } from '../../constants/ollama.js'
import { toTitleCase } from '../utils/misc.js'
@inject()
@@ -232,29 +232,22 @@ export class ChatService {
}
}
async generateTitle(sessionId: number, userMessage: string, assistantMessage: string) {
async generateTitle(sessionId: number, userMessage: string, assistantMessage: string, model: string) {
try {
const models = await this.ollamaService.getModels()
const titleModelAvailable = models?.some((m) => m.name === DEFAULT_QUERY_REWRITE_MODEL)
let title: string
if (!titleModelAvailable) {
title = userMessage.slice(0, 57) + (userMessage.length > 57 ? '...' : '')
} else {
const response = await this.ollamaService.chat({
model: DEFAULT_QUERY_REWRITE_MODEL,
messages: [
{ role: 'system', content: SYSTEM_PROMPTS.title_generation },
{ role: 'user', content: userMessage },
{ role: 'assistant', content: assistantMessage },
],
})
const response = await this.ollamaService.chat({
model,
messages: [
{ role: 'system', content: SYSTEM_PROMPTS.title_generation },
{ role: 'user', content: userMessage },
{ role: 'assistant', content: assistantMessage },
],
})
title = response?.message?.content?.trim()
if (!title) {
title = userMessage.slice(0, 57) + (userMessage.length > 57 ? '...' : '')
}
title = response?.message?.content?.trim()
if (!title) {
title = userMessage.slice(0, 57) + (userMessage.length > 57 ? '...' : '')
}
await this.updateSession(sessionId, { title })

View File

@@ -65,7 +65,7 @@ export class CollectionUpdateService {
return {
updates: [],
checked_at: new Date().toISOString(),
error: `Nomad API returned status ${error.response.status}`,
error: 'Failed to check for content updates. The update service may be temporarily unavailable.',
}
}
const message =
@@ -74,7 +74,7 @@ export class CollectionUpdateService {
return {
updates: [],
checked_at: new Date().toISOString(),
error: `Failed to contact Nomad API: ${message}`,
error: 'Failed to contact the update service. Please try again later.',
}
}
}

View File

@@ -110,10 +110,10 @@ export class DockerService {
message: `Invalid action: ${action}. Use 'start', 'stop', or 'restart'.`,
}
} catch (error: any) {
logger.error(`Error starting service ${serviceName}: ${error.message}`)
logger.error({ err: error }, `[DockerService] Error controlling service ${serviceName}`)
return {
success: false,
message: `Failed to start service ${serviceName}: ${error.message}`,
message: `Failed to ${action} service ${serviceName}. Check server logs for details.`,
}
}
}
@@ -355,8 +355,8 @@ export class DockerService {
)
}
} catch (error: any) {
logger.warn(`Error during container cleanup: ${error.message}`)
this._broadcast(serviceName, 'cleanup-warning', `Warning during cleanup: ${error.message}`)
logger.warn({ err: error }, `[DockerService] Error during container cleanup for ${serviceName}`)
this._broadcast(serviceName, 'cleanup-warning', 'Warning during container cleanup. Check server logs for details.')
}
// Step 3: Clear volumes/data if needed
@@ -382,11 +382,11 @@ export class DockerService {
this._broadcast(serviceName, 'no-volumes', `No volumes found to clear`)
}
} catch (error: any) {
logger.warn(`Error during volume cleanup: ${error.message}`)
logger.warn({ err: error }, `[DockerService] Error during volume cleanup for ${serviceName}`)
this._broadcast(
serviceName,
'volume-cleanup-warning',
`Warning during volume cleanup: ${error.message}`
'Warning during volume cleanup. Check server logs for details.'
)
}
@@ -411,11 +411,11 @@ export class DockerService {
message: `Service ${serviceName} force reinstall initiated successfully. You can receive updates via server-sent events.`,
}
} catch (error: any) {
logger.error(`Force reinstall failed for ${serviceName}: ${error.message}`)
logger.error({ err: error }, `[DockerService] Force reinstall failed for ${serviceName}`)
await this._cleanupFailedInstallation(serviceName)
return {
success: false,
message: `Failed to force reinstall service ${serviceName}: ${error.message}`,
message: `Failed to force reinstall service ${serviceName}. Check server logs for details.`,
}
}
}
@@ -664,10 +664,10 @@ export class DockerService {
return { success: true, message: `Service ${serviceName} container removed successfully` }
} catch (error: any) {
logger.error(`Error removing service container: ${error.message}`)
logger.error({ err: error }, `[DockerService] Error removing service container ${serviceName}`)
return {
success: false,
message: `Failed to remove service ${serviceName} container: ${error.message}`,
message: `Failed to remove service ${serviceName} container. Check server logs for details.`,
}
}
}
@@ -1204,10 +1204,10 @@ export class DockerService {
this._broadcast(
serviceName,
'update-rollback',
`Update failed: ${error.message}`
'Update failed. Check server logs for details.'
)
logger.error(`[DockerService] Update failed for ${serviceName}: ${error.message}`)
return { success: false, message: `Update failed: ${error.message}` }
logger.error({ err: error }, `[DockerService] Update failed for ${serviceName}`)
return { success: false, message: 'Update failed. Check server logs for details.' }
}
}

View File

@@ -12,9 +12,10 @@ export class DocsService {
'home': 1,
'getting-started': 2,
'use-cases': 3,
'faq': 4,
'about': 5,
'release-notes': 6,
'community-add-ons': 4,
'faq': 5,
'about': 6,
'release-notes': 7,
}
async getDocs() {
@@ -91,6 +92,7 @@ export class DocsService {
private static readonly TITLE_OVERRIDES: Record<string, string> = {
'faq': 'FAQ',
'community-add-ons': 'Community Add-Ons',
}
private prettify(filename: string) {

View File

@@ -5,6 +5,8 @@ import { DownloadModelJob } from '#jobs/download_model_job'
import { DownloadJobWithProgress, DownloadProgressData } from '../../types/downloads.js'
import { normalize } from 'path'
import { deleteFileIfExists } from '../utils/fs.js'
import transmit from '@adonisjs/transmit/services/main'
import { BROADCAST_CHANNELS } from '../../constants/broadcast.js'
@inject()
export class DownloadService {
@@ -111,14 +113,32 @@ export class DownloadService {
}
async cancelJob(jobId: string): Promise<{ success: boolean; message: string }> {
// Try the file download queue first (the original PR #554 path)
const queue = this.queueService.getQueue(RunDownloadJob.queue)
const job = await queue.getJob(jobId)
if (!job) {
// Job already completed (removeOnComplete: true) or doesn't exist
return { success: true, message: 'Job not found (may have already completed)' }
if (job) {
return await this._cancelFileDownloadJob(jobId, job, queue)
}
// Fall through to the model download queue
const modelQueue = this.queueService.getQueue(DownloadModelJob.queue)
const modelJob = await modelQueue.getJob(jobId)
if (modelJob) {
return await this._cancelModelDownloadJob(jobId, modelJob, modelQueue)
}
// Not found in either queue
return { success: true, message: 'Job not found (may have already completed)' }
}
/** Cancel a content download (zim, map, pmtiles, etc.) — original PR #554 logic */
private async _cancelFileDownloadJob(
jobId: string,
job: any,
queue: any
): Promise<{ success: boolean; message: string }> {
const filepath = job.data.filepath
// Signal the worker process to abort the download via Redis
@@ -128,45 +148,8 @@ export class DownloadService {
RunDownloadJob.abortControllers.get(jobId)?.abort('user-cancel')
RunDownloadJob.abortControllers.delete(jobId)
// Poll for terminal state (up to 4s at 250ms intervals) — cooperates with BullMQ's lifecycle
// instead of force-removing an active job and losing the worker's failure/cleanup path.
const POLL_INTERVAL_MS = 250
const POLL_TIMEOUT_MS = 4000
const deadline = Date.now() + POLL_TIMEOUT_MS
let reachedTerminal = false
while (Date.now() < deadline) {
await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS))
try {
const state = await job.getState()
if (state === 'failed' || state === 'completed' || state === 'unknown') {
reachedTerminal = true
break
}
} catch {
reachedTerminal = true // getState() throws if job is already gone
break
}
}
if (!reachedTerminal) {
console.warn(`[DownloadService] cancelJob: job ${jobId} did not reach terminal state within timeout, removing anyway`)
}
// Remove the BullMQ job
try {
await job.remove()
} catch {
// Lock contention fallback: clear lock and retry once
try {
const client = await queue.client
await client.del(`bull:${RunDownloadJob.queue}:${jobId}:lock`)
const updatedJob = await queue.getJob(jobId)
if (updatedJob) await updatedJob.remove()
} catch {
// Best effort - job will be cleaned up on next dismiss attempt
}
}
await this._pollForTerminalState(job, jobId)
await this._removeJobWithLockFallback(job, queue, RunDownloadJob.queue, jobId)
// Delete the partial file from disk
if (filepath) {
@@ -195,4 +178,87 @@ export class DownloadService {
return { success: true, message: 'Download cancelled and partial file deleted' }
}
/** Cancel an Ollama model download — mirrors the file cancel pattern but skips file cleanup */
private async _cancelModelDownloadJob(
jobId: string,
job: any,
queue: any
): Promise<{ success: boolean; message: string }> {
const modelName: string = job.data?.modelName ?? 'unknown'
// Signal the worker process to abort the pull via Redis
await DownloadModelJob.signalCancel(jobId)
// Also try in-memory abort (works if worker is in same process)
DownloadModelJob.abortControllers.get(jobId)?.abort('user-cancel')
DownloadModelJob.abortControllers.delete(jobId)
await this._pollForTerminalState(job, jobId)
await this._removeJobWithLockFallback(job, queue, DownloadModelJob.queue, jobId)
// Broadcast a cancelled event so the frontend hook clears the entry. We use percent: -2
// (distinct from -1 = error) so the hook can route it to a 2s auto-clear instead of the
// 15s error display. The frontend ALSO removes the entry optimistically from the API
// response, so this is belt-and-suspenders for cases where the SSE arrives first.
transmit.broadcast(BROADCAST_CHANNELS.OLLAMA_MODEL_DOWNLOAD, {
model: modelName,
jobId,
percent: -2,
status: 'cancelled',
timestamp: new Date().toISOString(),
})
// Note on partial blob cleanup: Ollama manages model blobs internally at
// /root/.ollama/models/blobs/. We deliberately do NOT call /api/delete here — Ollama's
// expected behavior is to retain partial blobs so a re-pull resumes from where it left
// off. If the user wants to reclaim that space, they can re-pull and let it complete,
// or delete the partially-downloaded model from the AI Settings page.
return { success: true, message: 'Model download cancelled' }
}
/** Wait up to 4s (250ms intervals) for the job to reach a terminal state */
private async _pollForTerminalState(job: any, jobId: string): Promise<void> {
const POLL_INTERVAL_MS = 250
const POLL_TIMEOUT_MS = 4000
const deadline = Date.now() + POLL_TIMEOUT_MS
while (Date.now() < deadline) {
await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS))
try {
const state = await job.getState()
if (state === 'failed' || state === 'completed' || state === 'unknown') {
return
}
} catch {
return // getState() throws if job is already gone
}
}
console.warn(
`[DownloadService] cancelJob: job ${jobId} did not reach terminal state within timeout, removing anyway`
)
}
/** Remove a BullMQ job, clearing a stale worker lock if the first attempt fails */
private async _removeJobWithLockFallback(
job: any,
queue: any,
queueName: string,
jobId: string
): Promise<void> {
try {
await job.remove()
} catch {
// Lock contention fallback: clear lock and retry once
try {
const client = await queue.client
await client.del(`bull:${queueName}:${jobId}:lock`)
const updatedJob = await queue.getJob(jobId)
if (updatedJob) await updatedJob.remove()
} catch {
// Best effort - job will be cleaned up on next dismiss attempt
}
}
}
}
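The terminal-state poll shared by both cancel paths can be sketched in isolation. In this hedged version `getState()` is injected so the loop is testable without BullMQ, and it returns a boolean instead of logging (timings mirror the 250ms/4s constants above):

```typescript
// Wait for an injected getState() to report a terminal BullMQ-style state.
// Returns true if a terminal state was reached (or the job vanished),
// false on timeout — the caller removes the job anyway and logs a warning.
async function pollForTerminalState(
  getState: () => Promise<string>,
  opts = { intervalMs: 250, timeoutMs: 4000 }
): Promise<boolean> {
  const deadline = Date.now() + opts.timeoutMs
  while (Date.now() < deadline) {
    await new Promise((resolve) => setTimeout(resolve, opts.intervalMs))
    try {
      const state = await getState()
      if (state === 'failed' || state === 'completed' || state === 'unknown') {
        return true
      }
    } catch {
      return true // getState() throws once the job is already gone
    }
  }
  return false
}
```

Injecting the state getter is the only change from the code above; the sleep-then-check ordering is kept so a just-aborted job gets at least one interval to settle before the first check.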

View File

@@ -2,7 +2,7 @@ import { XMLBuilder, XMLParser } from 'fast-xml-parser'
import { readFile, writeFile, rename, readdir } from 'fs/promises'
import { join } from 'path'
import { Archive } from '@openzim/libzim'
import { KIWIX_LIBRARY_XML_PATH, ZIM_STORAGE_PATH, ensureDirectoryExists } from '../utils/fs.js'
import { KIWIX_LIBRARY_XML_PATH, ZIM_STORAGE_PATH, ensureDirectoryExists, isValidZimFile } from '../utils/fs.js'
import logger from '@adonisjs/core/services/logger'
import { randomUUID } from 'node:crypto'
@@ -54,8 +54,12 @@ export class KiwixLibraryService {
*
* Returns null on any error so callers can fall back gracefully.
*/
private _readZimMetadata(zimFilePath: string): Partial<KiwixBook> | null {
private async _readZimMetadata(zimFilePath: string): Promise<Partial<KiwixBook> | null> {
try {
if (!(await isValidZimFile(zimFilePath))) {
logger.warn(`[KiwixLibraryService] Skipping invalid/corrupted ZIM file: ${zimFilePath}`)
return null
}
const archive = new Archive(zimFilePath)
const getMeta = (key: string): string | undefined => {
@@ -197,17 +201,22 @@ export class KiwixLibraryService {
const excludeSet = new Set(opts?.excludeFilenames ?? [])
const zimFiles = entries.filter((name) => name.endsWith('.zim') && !excludeSet.has(name))
const books: KiwixBook[] = zimFiles.map((filename) => {
const meta = this._readZimMetadata(join(dirPath, filename))
const books: KiwixBook[] = []
for (const filename of zimFiles) {
const meta = await this._readZimMetadata(join(dirPath, filename))
if (meta === null) {
logger.warn(`[KiwixLibraryService] Skipping unreadable ZIM file: ${filename}`)
continue
}
const containerPath = `${CONTAINER_DATA_PATH}/${filename}`
return {
books.push({
...meta,
// Override fields that must be derived locally, not from ZIM metadata
id: meta?.id ?? filename.slice(0, -4),
path: containerPath,
title: meta?.title ?? this._filenameToTitle(filename),
}
})
})
}
const xml = this._buildXml(books)
await this._atomicWrite(xml)
@@ -239,7 +248,12 @@ export class KiwixLibraryService {
}
const fullPath = join(process.cwd(), ZIM_STORAGE_PATH, zimFilename)
const meta = this._readZimMetadata(fullPath)
const meta = await this._readZimMetadata(fullPath)
if (meta === null) {
logger.error(`[KiwixLibraryService] Cannot add ${zimFilename}: file is invalid or corrupted.`)
return
}
existingBooks.push({
...meta,

View File

@@ -17,6 +17,7 @@ import { join, resolve, sep } from 'path'
import urlJoin from 'url-join'
import { RunDownloadJob } from '#jobs/run_download_job'
import logger from '@adonisjs/core/services/logger'
import { assertNotPrivateUrl } from '#validators/common'
import InstalledResource from '#models/installed_resource'
import { CollectionManifestService } from './collection_manifest_service.js'
import type { CollectionWithStatus, MapsSpec } from '../../types/collections.js'
@@ -119,6 +120,13 @@ export class MapService implements IMapService {
const downloadFilenames: string[] = []
for (const resource of toDownload) {
try {
assertNotPrivateUrl(resource.url)
} catch {
logger.warn(`[MapService] Blocked download from private/loopback URL: ${resource.url}`)
continue
}
const existing = await RunDownloadJob.getActiveByUrl(resource.url)
if (existing) {
logger.warn(`[MapService] Download already in progress for URL ${resource.url}, skipping.`)
@@ -244,6 +252,7 @@ export class MapService implements IMapService {
url: string
): Promise<{ filename: string; size: number } | { message: string }> {
try {
assertNotPrivateUrl(url)
const parsed = new URL(url)
if (!parsed.pathname.endsWith('.pmtiles')) {
throw new Error(`Invalid PMTiles file URL: ${url}. URL must end with .pmtiles`)
@@ -267,7 +276,8 @@ export class MapService implements IMapService {
return { filename, size }
} catch (error: any) {
return { message: `Preflight check failed: ${error.message}` }
logger.error({ err: error }, '[MapService] Preflight check failed for URL')
return { message: 'Preflight check failed. Please verify the URL is valid and accessible.' }
}
}

View File

@@ -53,6 +53,7 @@ export class OllamaService {
private baseUrl: string | null = null
private initPromise: Promise<void> | null = null
private isOllamaNative: boolean | null = null
private activeDownloads: Map<string, Promise<{ success: boolean; message: string; retryable?: boolean }>> = new Map()
constructor() {}
@@ -91,10 +92,46 @@ export class OllamaService {
/**
* Downloads a model from Ollama with progress tracking. Only works with Ollama backends.
* Use dispatchModelDownload() for background job processing where possible.
*
* @param signal Optional AbortSignal. When triggered, the underlying axios stream is cancelled
* and the method returns a non-retryable failure so callers can mark the job
* unrecoverable in BullMQ and avoid the 40-attempt retry storm.
* @param jobId Optional BullMQ job id included in progress broadcasts so the frontend can
* correlate Transmit events to a cancellable job.
*/
async downloadModel(
model: string,
progressCallback?: (percent: number) => void
progressCallback?: (
percent: number,
bytes?: { downloadedBytes: number; totalBytes: number }
) => void,
signal?: AbortSignal,
jobId?: string
): Promise<{ success: boolean; message: string; retryable?: boolean }> {
// Deduplicate concurrent downloads of the same model
const existing = this.activeDownloads.get(model)
if (existing) {
logger.info(`[OllamaService] Download already in progress for "${model}", waiting on existing download.`)
return existing
}
const downloadPromise = this._doDownloadModel(model, progressCallback, signal, jobId)
this.activeDownloads.set(model, downloadPromise)
try {
return await downloadPromise
} finally {
this.activeDownloads.delete(model)
}
}
private async _doDownloadModel(
model: string,
progressCallback?: (
percent: number,
bytes?: { downloadedBytes: number; totalBytes: number }
) => void,
signal?: AbortSignal,
jobId?: string
): Promise<{ success: boolean; message: string; retryable?: boolean }> {
await this._ensureDependencies()
if (!this.baseUrl) {
@@ -121,15 +158,45 @@ export class OllamaService {
}
}
// Stream pull via Ollama native API
// Stream pull via Ollama native API. axios supports `signal` natively for AbortController
// integration — when triggered, the request errors with code 'ERR_CANCELED' which we detect
// in the catch block below to return a non-retryable cancel result.
const pullResponse = await axios.post(
`${this.baseUrl}/api/pull`,
{ model, stream: true },
{ responseType: 'stream', timeout: 0 }
{ responseType: 'stream', timeout: 0, signal }
)
// Ollama's pull API reports progress per-digest (each blob). A single model can contain
// multiple blobs (weights, tokenizer, template, etc.) and each is reported in turn.
// Aggregate across all digests so the UI shows a single monotonically-increasing total,
// matching the behavior of the content download progress (Active Downloads section).
const digestProgress = new Map<string, { completed: number; total: number }>()
// Throttle broadcasts to once per BROADCAST_THROTTLE_MS — Ollama can emit hundreds of
// progress events per second for fast connections, which would flood the Transmit SSE
// channel and cause jittery speed calculations on the frontend.
const BROADCAST_THROTTLE_MS = 500
let lastBroadcastAt = 0
await new Promise<void>((resolve, reject) => {
let buffer = ''
// If the abort fires after headers are received but mid-stream, axios's signal handling
// destroys the stream which surfaces as an 'error' event — wire the signal listener so
// the promise rejects promptly with a recognizable cancel reason.
const onAbort = () => {
const err: any = new Error('Download cancelled')
err.code = 'ERR_CANCELED'
pullResponse.data.destroy(err)
}
if (signal) {
if (signal.aborted) {
onAbort()
return
}
signal.addEventListener('abort', onAbort, { once: true })
}
pullResponse.data.on('data', (chunk: Buffer) => {
buffer += chunk.toString()
const lines = buffer.split('\n')
@@ -138,23 +205,74 @@ export class OllamaService {
if (!line.trim()) continue
try {
const parsed = JSON.parse(line)
if (parsed.completed && parsed.total) {
const percent = parseFloat(((parsed.completed / parsed.total) * 100).toFixed(2))
this.broadcastDownloadProgress(model, percent)
if (progressCallback) progressCallback(percent)
if (parsed.completed && parsed.total && parsed.digest) {
// Update this digest's progress — take the max seen value so transient
// out-of-order updates don't make the aggregate jump backwards.
const existing = digestProgress.get(parsed.digest)
digestProgress.set(parsed.digest, {
completed: Math.max(existing?.completed ?? 0, parsed.completed),
total: Math.max(existing?.total ?? 0, parsed.total),
})
// Compute aggregate across all known blobs
let aggCompleted = 0
let aggTotal = 0
for (const { completed, total } of digestProgress.values()) {
aggCompleted += completed
aggTotal += total
}
const percent = aggTotal > 0
? parseFloat(((aggCompleted / aggTotal) * 100).toFixed(2))
: 0
// Throttle broadcasts. Always call the progressCallback though — the worker
// uses it to update job state in Redis, which should reflect the latest view.
const now = Date.now()
if (now - lastBroadcastAt >= BROADCAST_THROTTLE_MS) {
lastBroadcastAt = now
this.broadcastDownloadProgress(model, percent, jobId, {
downloadedBytes: aggCompleted,
totalBytes: aggTotal,
})
}
if (progressCallback) {
progressCallback(percent, {
downloadedBytes: aggCompleted,
totalBytes: aggTotal,
})
}
}
} catch {
// ignore parse errors on partial lines
}
}
})
pullResponse.data.on('end', resolve)
pullResponse.data.on('error', reject)
pullResponse.data.on('end', () => {
if (signal) signal.removeEventListener('abort', onAbort)
resolve()
})
pullResponse.data.on('error', (err: any) => {
if (signal) signal.removeEventListener('abort', onAbort)
reject(err)
})
})
logger.info(`[OllamaService] Model "${model}" downloaded successfully.`)
return { success: true, message: 'Model downloaded successfully.' }
} catch (error) {
// Detect axios cancel (signal-triggered abort). Don't broadcast an error event for
// user-initiated cancels — the cancel handler in DownloadService already broadcasts
// a cancelled state. Returning retryable: false prevents BullMQ retries.
const isCancelled =
axios.isCancel(error) ||
(error as any)?.code === 'ERR_CANCELED' ||
(error as any)?.name === 'CanceledError'
if (isCancelled) {
logger.info(`[OllamaService] Model "${model}" download cancelled by user.`)
return { success: false, message: 'Download cancelled', retryable: false }
}
const errorMessage = error instanceof Error ? error.message : String(error)
logger.error(
`[OllamaService] Failed to download model "${model}": ${errorMessage}`
@@ -628,10 +746,19 @@ export class OllamaService {
})
}
private broadcastDownloadProgress(model: string, percent: number) {
private broadcastDownloadProgress(
model: string,
percent: number,
jobId?: string,
bytes?: { downloadedBytes: number; totalBytes: number }
) {
// Conditional spread on jobId/bytes — Transmit's Broadcastable type rejects fields whose
// value is `undefined`, so we omit each key entirely when its value isn't available.
transmit.broadcast(BROADCAST_CHANNELS.OLLAMA_MODEL_DOWNLOAD, {
model,
percent,
...(jobId ? { jobId } : {}),
...(bytes ? { downloadedBytes: bytes.downloadedBytes, totalBytes: bytes.totalBytes } : {}),
timestamp: new Date().toISOString(),
})
logger.info(`[OllamaService] Download progress for model "${model}": ${percent}%`)
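The per-digest aggregation this file adds can be illustrated standalone. A hedged sketch (the event shape is assumed from Ollama's pull stream as described in the comments above): keep the max `completed`/`total` seen per digest so out-of-order events never move the aggregate backwards, then sum across digests for a single percent:

```typescript
// One pull-stream progress event, keyed by blob digest.
type PullEvent = { digest: string; completed: number; total: number }

// Returns a function that folds events into a monotonically reasonable
// aggregate percent across all blobs seen so far.
function makeAggregator(): (ev: PullEvent) => number {
  const perDigest = new Map<string, { completed: number; total: number }>()
  return (ev) => {
    const prev = perDigest.get(ev.digest)
    perDigest.set(ev.digest, {
      completed: Math.max(prev?.completed ?? 0, ev.completed),
      total: Math.max(prev?.total ?? 0, ev.total),
    })
    let completed = 0
    let total = 0
    for (const d of perDigest.values()) {
      completed += d.completed
      total += d.total
    }
    return total > 0 ? parseFloat(((completed / total) * 100).toFixed(2)) : 0
  }
}
```

Note the percent can still dip when a new digest first appears (its total joins the denominator before much of it has downloaded); the max-per-digest rule only guards against regressions from reordered events for a blob already being tracked.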

View File

@@ -532,9 +532,12 @@ export class RagService {
}
}
// Count unique articles processed in this batch
// Count unique articles processed in this batch. hasMoreBatches gates on the article
// count — zimChunks.length counts section-level chunks (multiple per article under the
// 'structured' strategy), so comparing it to ZIM_BATCH_SIZE (an article limit) caps
// processing at the first batch for any real archive.
const articlesInBatch = new Set(zimChunks.map((c) => c.documentId)).size
const hasMoreBatches = zimChunks.length === ZIM_BATCH_SIZE
const hasMoreBatches = articlesInBatch >= ZIM_BATCH_SIZE
logger.info(
`[RAG] Successfully embedded ${totalChunks} total chunks from ${articlesInBatch} articles (hasMore: ${hasMoreBatches})`
@@ -1013,6 +1016,16 @@ export class RagService {
* Retrieve all unique source files that have been stored in the knowledge base.
* @returns Array of unique full source paths
*/
public async hasDocuments(): Promise<boolean> {
try {
await this._ensureCollection(RagService.CONTENT_COLLECTION_NAME, RagService.EMBEDDING_DIMENSION)
const collectionInfo = await this.qdrant!.getCollection(RagService.CONTENT_COLLECTION_NAME)
return (collectionInfo.points_count ?? 0) > 0
} catch {
return false
}
}
public async getStoredFiles(): Promise<string[]> {
try {
await this._ensureCollection(
@@ -1242,8 +1255,12 @@ export class RagService {
logger.info(`[RAG] Found ${sourcesInQdrant.size} unique sources in Qdrant`)
// Find files that are in storage but not in Qdrant
const filesToEmbed = filesInStorage.filter((filePath) => !sourcesInQdrant.has(filePath))
// Find files that are in storage, not already in Qdrant, and have an embeddable type.
// Non-embeddable files (e.g. kiwix-library.xml in /storage/zim) would otherwise be
// dispatched to EmbedFileJob, fail with "Unsupported file type", and retry on every sync.
const filesToEmbed = filesInStorage.filter(
(filePath) => !sourcesInQdrant.has(filePath) && determineFileType(filePath) !== 'unknown'
)
logger.info(`[RAG] Found ${filesToEmbed.length} files that need embedding`)
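The batch-gate fix above boils down to counting unique `documentId` values rather than section-level chunks before comparing against the article-limit batch size. A minimal sketch (the `Chunk` shape is illustrative):

```typescript
type Chunk = { documentId: string; text: string }

// Under the 'structured' strategy one article yields several chunks, so the
// gate must count distinct articles, not chunks, against the article limit.
function hasMoreBatches(chunks: Chunk[], batchSize: number): boolean {
  const articlesInBatch = new Set(chunks.map((c) => c.documentId)).size
  return articlesInBatch >= batchSize
}
```

With the old `chunks.length === batchSize` comparison, five chunks from two articles against a batch size of two would report no more batches and stop after the first batch.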

View File

@@ -47,10 +47,10 @@ export class SystemUpdateService {
message: 'System update initiated. The admin container will restart during the process.',
}
} catch (error) {
logger.error('[SystemUpdateService]: Failed to request system update:', error)
logger.error({ err: error }, '[SystemUpdateService] Failed to request system update')
return {
success: false,
message: `Failed to request update: ${error.message}`,
message: 'Failed to request system update. Check server logs for details.',
}
}
}

View File

@@ -5,6 +5,7 @@ import logger from '@adonisjs/core/services/logger'
import { ExtractZIMChunkingStrategy, ExtractZIMContentOptions, ZIMContentChunk, ZIMArchiveMetadata } from '../../types/zim.js'
import { randomUUID } from 'node:crypto'
import { access } from 'node:fs/promises'
import { isValidZimFile } from '../utils/fs.js'
export class ZIMExtractionService {
@@ -51,7 +52,13 @@ export class ZIMExtractionService {
logger.error(`[ZIMExtractionService]: ZIM file not accessible: ${filePath}`)
throw new Error(`ZIM file not found or not accessible: ${filePath}`)
}
// Validate ZIM magic number before opening with native library.
// A corrupted file causes a native C++ abort that cannot be caught by JS.
if (!(await isValidZimFile(filePath))) {
throw new Error(`ZIM file is invalid or corrupted: ${filePath}`)
}
const archive = new Archive(filePath)
// Extract archive-level metadata once
@@ -209,7 +216,10 @@ export class ZIMExtractionService {
const sections: Array<{ heading: string; text: string; level: number }> = [];
let currentSection = { heading: 'Introduction', content: [] as string[], level: 2 };
$('body').children().each((_, element) => {
// Walk the full DOM rather than only direct children of <body>. Modern ZIMs (Devdocs,
// Wikipedia, FreeCodeCamp, etc.) wrap article content in a container div, which under
// .children() would be a single non-heading/non-paragraph element and yield zero sections.
$('body').find('h2, h3, h4, p, ul, ol, dl, table').each((_, element) => {
const $el = $(element);
const tagName = element.tagName?.toLowerCase();
@@ -246,6 +256,20 @@ export class ZIMExtractionService {
});
}
// Fallback: if the selector walk produced no sections but the body has meaningful
// text (unusual structure, minimal markup), emit one section with the full body text
// so the article still contributes to the knowledge base.
if (sections.length === 0) {
const bodyText = $('body').text().replace(/\s+/g, ' ').trim();
if (bodyText.length > 0) {
sections.push({
heading: title || 'Content',
text: bodyText,
level: 2,
});
}
}
return {
title,
sections,
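The `isValidZimFile` guard this file relies on is imported from `../utils/fs.js` and not shown in the diff. A hypothetical sketch of such a check, using the OpenZIM spec's 4-byte magic number (0x044D495A, stored little-endian on disk) so a corrupt file is rejected before libzim's native Archive constructor can abort the process:

```typescript
import { open, type FileHandle } from 'node:fs/promises'

const ZIM_MAGIC = 0x044d495a // little-endian bytes on disk: 5A 49 4D 04

// Pure check on the first bytes of a file, split out for testability.
export function hasZimMagic(header: Buffer): boolean {
  return header.length >= 4 && header.readUInt32LE(0) === ZIM_MAGIC
}

// Read only the first four bytes; never hand the path to native code first.
export async function looksLikeZim(filePath: string): Promise<boolean> {
  let handle: FileHandle | undefined
  try {
    handle = await open(filePath, 'r')
    const buf = Buffer.alloc(4)
    const { bytesRead } = await handle.read(buf, 0, 4, 0)
    return hasZimMagic(buf.subarray(0, bytesRead))
  } catch {
    return false // unreadable counts as invalid, mirroring the warn-and-skip path
  } finally {
    await handle?.close()
  }
}
```

A magic-number check only catches truncated or wrong-format files; a ZIM with a valid header but corrupted body can still trip the native library, so the JS-side guard reduces rather than eliminates the risk.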

View File

@@ -57,84 +57,105 @@ export class ZimService {
query?: string
}): Promise<ListRemoteZimFilesResponse> {
const LIBRARY_BASE_URL = 'https://browse.library.kiwix.org/catalog/v2/entries'
// Kiwix returns pages of content unaware of what the user has installed locally. When
// the installed set is large, a single 12-item Kiwix page can come back with everything
// already installed → 0 post-filter items → frontend deadlock (#731). Accumulate across
// upstream pages so we return a useful batch. Bounded by MAX_KIWIX_FETCHES so a heavily
// saturated install doesn't hang a single request; the frontend scroll loop + auto-fetch
// effect handle continuation.
const KIWIX_PAGE_SIZE = 60
const MAX_KIWIX_FETCHES = 5
const res = await axios.get(LIBRARY_BASE_URL, {
params: {
start: start,
count: count,
lang: 'eng',
...(query ? { q: query } : {}),
},
responseType: 'text',
})
const data = res.data
const parser = new XMLParser({
ignoreAttributes: false,
attributeNamePrefix: '',
textNodeName: '#text',
})
const result = parser.parse(data)
if (!isRawListRemoteZimFilesResponse(result)) {
throw new Error('Invalid response format from remote library')
}
const entries = result.feed.entry
? Array.isArray(result.feed.entry)
? result.feed.entry
: [result.feed.entry]
: []
const filtered = entries.filter((entry: any) => {
return isRawRemoteZimFileEntry(entry)
})
const mapped: (RemoteZimFileEntry | null)[] = filtered.map((entry: RawRemoteZimFileEntry) => {
const downloadLink = entry.link.find((link: any) => {
return (
typeof link === 'object' &&
'rel' in link &&
'length' in link &&
'href' in link &&
'type' in link &&
link.type === 'application/x-zim'
)
})
if (!downloadLink) {
return null
}
// downloadLink['href'] will end with .meta4, we need to remove that to get the actual download URL
const download_url = downloadLink['href'].substring(0, downloadLink['href'].length - 6)
const file_name = download_url.split('/').pop() || `${entry.title}.zim`
const sizeBytes = parseInt(downloadLink['length'], 10)
return {
id: entry.id,
title: entry.title,
updated: entry.updated,
summary: entry.summary,
size_bytes: sizeBytes || 0,
download_url: download_url,
author: entry.author.name,
file_name: file_name,
}
})
// Filter out any null entries (those without a valid download link)
// or files that already exist in the local storage
// Snapshot locally-installed files once — the filesystem won't change mid-request.
const existing = await this.list()
const existingKeys = new Set(existing.files.map((file) => file.name))
const withoutExisting = mapped.filter(
(entry): entry is RemoteZimFileEntry => entry !== null && !existingKeys.has(entry.file_name)
)
const accumulated: RemoteZimFileEntry[] = []
const seenIds = new Set<string>()
let currentStart = start
let totalResults = 0
for (let i = 0; i < MAX_KIWIX_FETCHES; i++) {
const res = await axios.get(LIBRARY_BASE_URL, {
params: {
start: currentStart,
count: KIWIX_PAGE_SIZE,
lang: 'eng',
...(query ? { q: query } : {}),
},
responseType: 'text',
})
const parsed = parser.parse(res.data)
if (!isRawListRemoteZimFilesResponse(parsed)) {
throw new Error('Invalid response format from remote library')
}
totalResults = parsed.feed.totalResults
const rawEntries = parsed.feed.entry
? Array.isArray(parsed.feed.entry)
? parsed.feed.entry
: [parsed.feed.entry]
: []
// Empty upstream response — bail even if totalResults suggests more (transient Kiwix
// hiccup or totalResults drift between pages). Prevents a pointless spin.
if (rawEntries.length === 0) break
// Advance by actual returned count, not requested count. Short pages at the tail
// would otherwise cause us to skip entries on the next fetch.
currentStart += rawEntries.length
for (const raw of rawEntries) {
if (!isRawRemoteZimFileEntry(raw)) continue
const entry = raw as RawRemoteZimFileEntry
const downloadLink = entry.link.find(
(link: any) =>
typeof link === 'object' &&
'rel' in link &&
'length' in link &&
'href' in link &&
'type' in link &&
link.type === 'application/x-zim'
)
if (!downloadLink) continue
// downloadLink['href'] ends with .meta4; strip that to get the actual .zim URL.
const download_url = downloadLink['href'].substring(0, downloadLink['href'].length - 6)
const file_name = download_url.split('/').pop() || `${entry.title}.zim`
if (existingKeys.has(file_name)) continue
if (seenIds.has(entry.id)) continue
seenIds.add(entry.id)
const sizeBytes = parseInt(downloadLink['length'], 10)
accumulated.push({
id: entry.id,
title: entry.title,
updated: entry.updated,
summary: entry.summary,
size_bytes: sizeBytes || 0,
download_url,
author: entry.author.name,
file_name,
})
}
if (accumulated.length >= count) break
if (currentStart >= totalResults) break
}
return {
items: withoutExisting,
has_more: result.feed.totalResults > start,
total_count: result.feed.totalResults,
items: accumulated,
has_more: currentStart < totalResults,
total_count: totalResults,
next_start: currentStart,
}
}
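The link-to-filename derivation used inside the loop is self-contained enough to sketch separately (`zimUrlFromMeta4` is an illustrative helper name, not a function in the codebase; the suffix guard is added here for safety, where the patch strips six characters unconditionally):

```typescript
// The catalog's download link ends in .meta4 (a metalink descriptor);
// stripping that suffix yields the direct .zim URL, and the last path
// segment becomes the file name the installed-set filter keys on.
export function zimUrlFromMeta4(href: string, fallbackTitle: string) {
  const download_url = href.endsWith('.meta4')
    ? href.slice(0, -'.meta4'.length)
    : href
  const file_name = download_url.split('/').pop() || `${fallbackTitle}.zim`
  return { download_url, file_name }
}
```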

View File

@ -6,6 +6,7 @@ import axios from 'axios'
import { Transform } from 'stream'
import { deleteFileIfExists, ensureDirectoryExists, getFileStatsIfExists } from './fs.js'
import { createWriteStream } from 'fs'
import { rename } from 'fs/promises'
import path from 'path'
/**
@ -27,13 +28,16 @@ export async function doResumableDownload({
const dirname = path.dirname(filepath)
await ensureDirectoryExists(dirname)
// Check if partial file exists for resume
// Stage download to a .tmp file so consumers (e.g. Kiwix) never see a partial file
const tempPath = filepath + '.tmp'
// Check if partial .tmp file exists for resume
let startByte = 0
let appendMode = false
const existingStats = await getFileStatsIfExists(filepath)
const existingStats = await getFileStatsIfExists(tempPath)
if (existingStats && !forceNew) {
startByte = existingStats.size
startByte = Number(existingStats.size)
appendMode = true
}
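The stage-then-rename pattern this hunk introduces can be shown in miniature. A synchronous sketch under the assumption of a single writer (`stageWrite` is illustrative, not the patch's API):

```typescript
import { writeFileSync, renameSync, readFileSync } from 'fs'
import { tmpdir } from 'os'
import { join } from 'path'

// Minimal form of the staging pattern: write to <path>.tmp, then rename into
// place. rename() within a single filesystem is atomic on POSIX, so consumers
// like Kiwix never observe a partial file at the final path.
export function stageWrite(filepath: string, data: Buffer): void {
  const tempPath = filepath + '.tmp' // same suffix the download util resumes from
  writeFileSync(tempPath, data)
  renameSync(tempPath, filepath)
}

// Usage: stage a small payload, then read the final path back.
const demoPath = join(tmpdir(), `stage-demo-${process.pid}.bin`)
stageWrite(demoPath, Buffer.from('hello'))
export const roundTrip = readFileSync(demoPath, 'utf8')
```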
@ -55,14 +59,24 @@ export async function doResumableDownload({
}
}
// If file is already complete and not forcing overwrite just return filepath
if (startByte === totalBytes && totalBytes > 0 && !forceNew) {
// If final file already exists at correct size, return early (idempotent)
const finalFileStats = await getFileStatsIfExists(filepath)
if (finalFileStats && Number(finalFileStats.size) === totalBytes && totalBytes > 0 && !forceNew) {
return filepath
}
// If server doesn't support range requests and we have a partial file, delete it
// If .tmp file is already at correct size (complete but never renamed), just rename it
if (startByte === totalBytes && totalBytes > 0 && !forceNew) {
await rename(tempPath, filepath)
if (onComplete) {
await onComplete(url, filepath)
}
return filepath
}
// If server doesn't support range requests and we have a partial .tmp file, delete it
if (!supportsRangeRequests && startByte > 0) {
await deleteFileIfExists(filepath)
await deleteFileIfExists(tempPath)
startByte = 0
appendMode = false
}
@ -72,17 +86,29 @@ export async function doResumableDownload({
headers.Range = `bytes=${startByte}-`
}
const response = await axios.get(url, {
responseType: 'stream',
headers,
signal,
timeout,
})
const fetchStream = (hdrs: Record<string, string>) =>
axios.get(url, { responseType: 'stream', headers: hdrs, signal, timeout })
let response = await fetchStream(headers)
if (response.status !== 200 && response.status !== 206) {
throw new Error(`Failed to download: HTTP ${response.status}`)
}
// If we requested a range but the server returned 200 (ignored the Range header),
// appending would corrupt the .tmp file — delete it and restart from byte 0.
if (headers.Range && response.status === 200) {
response.data.destroy()
await deleteFileIfExists(tempPath)
startByte = 0
appendMode = false
delete headers.Range
response = await fetchStream(headers)
if (response.status !== 200 && response.status !== 206) {
throw new Error(`Failed to download: HTTP ${response.status}`)
}
}
return new Promise((resolve, reject) => {
let downloadedBytes = startByte
let lastProgressTime = Date.now()
@ -131,11 +157,10 @@ export async function doResumableDownload({
},
})
const writeStream = createWriteStream(filepath, {
const writeStream = createWriteStream(tempPath, {
flags: appendMode ? 'a' : 'w',
})
// Handle errors and cleanup
const cleanup = (error?: Error) => {
clearStallTimer()
progressStream.destroy()
@ -149,7 +174,6 @@ export async function doResumableDownload({
response.data.on('error', cleanup)
progressStream.on('error', cleanup)
writeStream.on('error', cleanup)
writeStream.on('error', cleanup)
signal?.addEventListener('abort', () => {
cleanup(new Error('Download aborted'))
@ -157,6 +181,20 @@ export async function doResumableDownload({
writeStream.on('finish', async () => {
clearStallTimer()
try {
// Atomically move the completed .tmp file to the final path
await rename(tempPath, filepath)
} catch (renameError) {
// A parallel job may have completed the same file first — treat as success
// if the destination already exists at the expected size.
const existing = await getFileStatsIfExists(filepath)
if (existing && Number(existing.size) === totalBytes && totalBytes > 0) {
// fall through to resolve
} else {
reject(renameError)
return
}
}
if (onProgress) {
onProgress({
downloadedBytes,
@ -207,7 +245,7 @@ export async function doResumableDownloadWithRetry({
})
return result // return on success
} catch (error) {
} catch (error: any) {
attempt++
lastError = error as Error

View File

@ -1,4 +1,4 @@
import { mkdir, readdir, readFile, stat, unlink } from 'fs/promises'
import { mkdir, open, readdir, readFile, stat, unlink } from 'fs/promises'
import path, { join } from 'path'
import { FileEntry } from '../../types/files.js'
import { createReadStream } from 'fs'
@ -99,6 +99,28 @@ export async function getFileStatsIfExists(
}
}
/**
 * Validates that a file has the ZIM magic number (0x044D495A).
* Must be called before passing a file to @openzim/libzim Archive,
* because a corrupted ZIM causes a native C++ abort that cannot be
* caught by JS try/catch.
*/
export async function isValidZimFile(filePath: string): Promise<boolean> {
let fh
try {
fh = await open(filePath, 'r')
const buf = Buffer.alloc(4)
const { bytesRead } = await fh.read(buf, 0, 4, 0)
if (bytesRead < 4) return false
// ZIM magic number: bytes 5A 49 4D 04 ("ZIM" + 0x04), little-endian uint32 0x044D495A
return buf[0] === 0x5a && buf[1] === 0x49 && buf[2] === 0x4d && buf[3] === 0x04
} catch {
return false
} finally {
await fh?.close()
}
}
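The byte-level check is easiest to verify in buffer form (`hasZimMagic` is an illustrative name, not a function in the patch; the file-based version reads these same four bytes at offset 0):

```typescript
// A valid ZIM file starts with "ZIM" + 0x04, i.e. bytes 5A 49 4D 04,
// which read as a little-endian uint32 is 0x044D495A.
export function hasZimMagic(header: Buffer): boolean {
  if (header.length < 4) return false
  return (
    header[0] === 0x5a &&
    header[1] === 0x49 &&
    header[2] === 0x4d &&
    header[3] === 0x04
  )
}
```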
export async function deleteFileIfExists(path: string): Promise<void> {
try {
await unlink(path)

View File

@ -57,6 +57,10 @@ export default class ServiceSeeder extends BaseSeeder {
PortBindings: { '6333/tcp': [{ HostPort: '6333' }], '6334/tcp': [{ HostPort: '6334' }] },
},
ExposedPorts: { '6333/tcp': {}, '6334/tcp': {} },
// Disable Qdrant's anonymous telemetry to telemetry.qdrant.io. NOMAD is offline-first
// and ships with zero telemetry by default — Qdrant's upstream default of enabled
// telemetry doesn't match that posture.
Env: ['QDRANT__TELEMETRY_DISABLED=true'],
}),
ui_location: '6333',
installed: false,

View File

@ -0,0 +1,48 @@
# Community Add-Ons
Project N.O.M.A.D. ships with a curated set of built-in tools and content, but the community has started building add-ons that extend the platform with specialized offline content packs. These are third-party projects, not maintained by the N.O.M.A.D. team. Install them at your own discretion, and please direct any bugs or feature requests to the add-on's own repository.
Have you built a NOMAD add-on? Open an issue on the [Project N.O.M.A.D. GitHub repository](https://github.com/Crosstalk-Solutions/project-nomad/issues/new) or send us a note through the [contact form on projectnomad.us](https://www.projectnomad.us/contact), and we'll review it for inclusion on this page.
---
## ZIM Content Packs
ZIM content packs drop additional offline reference material into your existing Kiwix library. They typically ship with an `install.sh` script that downloads source material, builds a ZIM file with `zimwriterfs`, and registers it with your running Kiwix container.
### U.S. Military Field Manuals
**Repository:** [github.com/jrsphoto/ZIM-military-field-manuals](https://github.com/jrsphoto/ZIM-military-field-manuals)
Roughly 180 public-domain U.S. military field manuals covering field medicine, survival, combat first aid, map reading, and more. Built into a searchable ZIM that drops into your Kiwix library.
Final ZIM size is around 2 GB. The builder downloads about 2 GB of source PDFs from archive.org during the build.
### W3Schools Programming Archive
**Repository:** [github.com/kennethbrewer3/ZIM-w3schools-offline](https://github.com/kennethbrewer3/ZIM-w3schools-offline)
A full offline copy of the W3Schools programming tutorials, covering HTML, CSS, JavaScript, Python, SQL, and more. Good for learning to code, looking up syntax, or teaching programming in an environment without internet.
Final ZIM size is around 700 MB. The builder downloads about 6 GB of source files from a GitHub mirror during the build.
---
## Installing a Community Add-On
Each add-on has its own install instructions, but most ZIM packs follow the same shape:
1. Clone the add-on's repository onto your NOMAD host over SSH.
2. Check the README for required build dependencies. Most need `git`, `python3`, `unzip`, and `zim-tools`.
3. Run the included `install.sh` with a `--deploy` flag, pointing it at your Kiwix library path (`/opt/project-nomad/storage/zim`) and your Kiwix container name (`nomad_kiwix_server`).
4. The script builds the ZIM, copies it into your Kiwix library, registers it with Kiwix, and restarts the Kiwix container.
Once the script finishes, the new content will appear in your Information Library the next time you load it.
Expect the initial build to take anywhere from a few minutes to an hour or more depending on the add-on's size and your host's CPU.
---
## A Note on Support
These add-ons are community-built and community-maintained. If something goes wrong with an install script or the content inside a ZIM, please open an issue on the add-on's own repository rather than Project N.O.M.A.D.'s. We're happy to help if the issue is with NOMAD itself, for example if Kiwix isn't picking up a new ZIM after an install, but we can't maintain or support third-party content.

View File

@ -114,6 +114,18 @@ The Maps feature requires downloaded map data. If you see a blank area:
3. Wait for downloads to complete
4. Return to Maps and refresh
### ERROR: Failed to load the XML library file '/data/kiwix-library.xml'
This usually means the Information Library service started before its Kiwix library index was fully initialized.
Try this recovery flow:
1. Go to **[Apps](/settings/apps)**
2. Stop **Information Library (Kiwix)**
3. Wait 10-15 seconds, then start it again
4. If the error persists, run **Force Reinstall** for Information Library from the same page
After restart/reinstall completes, refresh the Information Library page.
### AI responses are slow
Local AI requires significant computing power. To improve speed:

View File

@ -1,5 +1,31 @@
# Release Notes
## Version 1.31.1 - April 21, 2026
### Features
### Bug Fixes
- **AI Assistant**: In-progress model downloads can now be cancelled properly and the progress UI now matches that of file downloads. Thanks @chriscrosstalk for the contribution!
- **AI Assistant**: Fixed an issue where the AI Assistant settings page could crash if a model object did not have a details property. Thanks @hestela for the fix!
- **AI Assistant**: Fixed an issue with non-embeddable files being queued for embedding and flooding logs with errors. Thanks @sbruschke for the bug report and @chriscrosstalk for the fix!
- **AI Assistant**: Fixed an issue with ZIM batch embedding using the wrong batch count and causing remaining batches to be skipped. Thanks @sbruschke for the bug report and @chriscrosstalk for the fix!
- **AI Assistant**: Fixed an issue with ZIM content extraction only extracting the first-level children of the article body and thus missing a lot of content. Thanks @sbruschke for the bug report and @chriscrosstalk for the fix!
- **Disk Collector**: Improved reporting for NFS mount stats and display in the UI. Thanks @bgauger and @bravosierra99 for the contribution!
- **Downloads**: Downloads are now staged to .tmp files and atomically renamed upon completion to prevent issues with incomplete/corrupt files. Thanks @artbird309 for the contribution!
- **Downloads**: Removed a duplicate error listener and improved stability when handling Range requests for file downloads. Thanks @jakeaturner for the contribution!
- **Downloads**: Added improved handling for corrupt ZIM file downloads and removed duplicate Ollama download logs. Thanks @aegisman for the contribution!
- **Security**: Closed a potential SSRF vulnerability in the map file download functionality by implementing stricter URL validation and blocking private IP ranges. Thanks @LuisMIguelFurlanettoSousa for the fix!
- **Security**: Sanitized error messages from the backend to prevent potential information disclosure. Thanks @LuisMIguelFurlanettoSousa for the fix!
- **UI**: Fixed an issue with broken pagination for the Content Explorer that could cause some users to see a "No records found" message indefinitely. Thanks @johno10661 for the bug report and @chriscrosstalk for the fix!
- **UI**: Fixed an issue where all storage devices could report as "NAS Storage" regardless of actual type. Thanks @bgauger for the fix!
### Improvements
- **AI Assistant**: Now uses the currently loaded model for query rewriting and chat title generation for improved performance and consistency. Thanks @hestela for the contribution!
- **AI Assistant**: When a remote Ollama URL is configured, the Command Center will now attempt to stop NOMAD's local Ollama container to free up resources and avoid confusion. Thanks @chriscrosstalk for the contribution!
- **Dependencies**: Updated various dependencies to close security vulnerabilities and improve stability
- **Docs**: Added a "Community Add-Ons" page to the documentation to highlight some of the amazing community contributions that have been made since launch. Thanks @chriscrosstalk for the contribution!
- **Privacy**: Added the appropriate environment variable to disable telemetry for the Qdrant container. Note that this will only take effect on new installations, or if the Qdrant container is force re-installed on existing installations. Thanks @berkdamerc for the find and @chriscrosstalk for the contribution!
## Version 1.31.0 - April 3, 2026
### Features

View File

@ -1,50 +1,214 @@
import { useCallback, useRef, useState } from 'react'
import useOllamaModelDownloads from '~/hooks/useOllamaModelDownloads'
import HorizontalBarChart from './HorizontalBarChart'
import StyledSectionHeader from './StyledSectionHeader'
import { IconAlertTriangle } from '@tabler/icons-react'
import StyledModal from './StyledModal'
import { IconAlertTriangle, IconLoader2, IconX } from '@tabler/icons-react'
import api from '~/lib/api'
import { useModals } from '~/context/ModalContext'
import { formatBytes } from '~/lib/util'
interface ActiveModelDownloadsProps {
withHeader?: boolean
}
function formatSpeed(bytesPerSec: number): string {
if (bytesPerSec <= 0) return '0 B/s'
if (bytesPerSec < 1024) return `${Math.round(bytesPerSec)} B/s`
if (bytesPerSec < 1024 * 1024) return `${(bytesPerSec / 1024).toFixed(1)} KB/s`
return `${(bytesPerSec / (1024 * 1024)).toFixed(1)} MB/s`
}
const ActiveModelDownloads = ({ withHeader = false }: ActiveModelDownloadsProps) => {
const { downloads } = useOllamaModelDownloads()
const { downloads, removeDownload } = useOllamaModelDownloads()
const { openModal, closeAllModals } = useModals()
const [cancellingModels, setCancellingModels] = useState<Set<string>>(new Set())
// Track previous downloadedBytes for speed calculation — mirrors the approach in
// ActiveDownloads.tsx so content + model downloads feel identical.
const prevBytesRef = useRef<Map<string, { bytes: number; time: number }>>(new Map())
const speedRef = useRef<Map<string, number[]>>(new Map())
const getSpeed = useCallback((model: string, currentBytes?: number): number => {
if (!currentBytes || currentBytes <= 0) return 0
const prev = prevBytesRef.current.get(model)
const now = Date.now()
if (prev && prev.bytes > 0 && currentBytes > prev.bytes) {
const deltaBytes = currentBytes - prev.bytes
const deltaSec = (now - prev.time) / 1000
if (deltaSec > 0) {
const instantSpeed = deltaBytes / deltaSec
// Simple moving average (last 5 samples)
const samples = speedRef.current.get(model) || []
samples.push(instantSpeed)
if (samples.length > 5) samples.shift()
speedRef.current.set(model, samples)
const avg = samples.reduce((a, b) => a + b, 0) / samples.length
prevBytesRef.current.set(model, { bytes: currentBytes, time: now })
return avg
}
}
// Only set initial observation; never advance timestamp when bytes unchanged
if (!prev) {
prevBytesRef.current.set(model, { bytes: currentBytes, time: now })
}
return speedRef.current.get(model)?.at(-1) || 0
}, [])
const runCancel = async (download: { model: string; jobId?: string }) => {
// Defensive guard: stale broadcasts during a hot upgrade may not include jobId.
// Without it we have nothing to call the cancel API with.
if (!download.jobId) return
setCancellingModels((prev) => new Set(prev).add(download.model))
try {
await api.cancelDownloadJob(download.jobId)
// Optimistically clear the entry — the Transmit cancelled broadcast usually
// arrives within a second but we don't want to leave the row hanging if it doesn't.
removeDownload(download.model)
// Clean up speed tracking refs for this model
prevBytesRef.current.delete(download.model)
speedRef.current.delete(download.model)
} finally {
setCancellingModels((prev) => {
const next = new Set(prev)
next.delete(download.model)
return next
})
}
}
const confirmCancel = (download: { model: string; jobId?: string }) => {
if (!download.jobId) return
openModal(
<StyledModal
title="Cancel Download?"
onConfirm={() => {
closeAllModals()
runCancel(download)
}}
onCancel={closeAllModals}
open={true}
confirmText="Cancel Download"
cancelText="Keep Downloading"
>
<div className="space-y-3 text-text-primary">
<p>
Stop downloading <span className="font-mono font-semibold">{download.model}</span>?
</p>
<p className="text-sm text-text-muted">
Any data already downloaded will remain on disk. If you re-download
this model later, it will resume from where it left off rather than
starting over.
</p>
</div>
</StyledModal>,
'confirm-cancel-model-download-modal'
)
}
return (
<>
{withHeader && <StyledSectionHeader title="Active Model Downloads" className="mt-12 mb-4" />}
<div className="space-y-4">
{downloads && downloads.length > 0 ? (
downloads.map((download) => (
<div
key={download.model}
className={`bg-desert-white rounded-lg p-4 border shadow-sm hover:shadow-lg transition-shadow ${
download.error ? 'border-red-400' : 'border-desert-stone-light'
}`}
>
{download.error ? (
<div className="flex items-start gap-3">
<IconAlertTriangle className="text-red-500 flex-shrink-0 mt-0.5" size={20} />
<div>
<p className="font-medium text-text-primary">{download.model}</p>
<p className="text-sm text-red-600 mt-1">{download.error}</p>
downloads.map((download) => {
const isCancelling = cancellingModels.has(download.model)
const canCancel = !!download.jobId && !download.error
const speed = getSpeed(download.model, download.downloadedBytes)
const hasBytes = !!(download.downloadedBytes && download.totalBytes)
return (
<div
key={download.model}
className={`rounded-lg p-4 border shadow-sm hover:shadow-lg transition-shadow ${
download.error
? 'bg-surface-primary border-red-300'
: 'bg-surface-primary border-default'
}`}
>
{download.error ? (
<div className="flex items-center gap-2">
<IconAlertTriangle className="w-5 h-5 text-red-500 flex-shrink-0" />
<div className="flex-1 min-w-0">
<p className="text-sm font-medium text-text-primary truncate">
{download.model}
</p>
<p className="text-xs text-red-600 mt-0.5">{download.error}</p>
</div>
</div>
</div>
) : (
<HorizontalBarChart
items={[
{
label: download.model,
value: download.percent,
total: '100%',
used: `${download.percent.toFixed(1)}%`,
type: 'ollama-model',
},
]}
/>
)}
</div>
))
) : (
<div className="space-y-2">
{/* Title + Cancel button row */}
<div className="flex items-start justify-between gap-2">
<div className="flex-1 min-w-0">
<p className="font-semibold text-desert-green truncate">
{download.model}
</p>
<span className="text-xs px-1.5 py-0.5 rounded bg-desert-stone-lighter text-desert-stone-dark font-mono">
ollama
</span>
</div>
{canCancel && (
isCancelling ? (
<IconLoader2 className="w-4 h-4 text-text-muted animate-spin flex-shrink-0" />
) : (
<button
onClick={() => confirmCancel(download)}
className="flex-shrink-0 p-1 rounded hover:bg-red-100 transition-colors"
title="Cancel download"
>
<IconX className="w-4 h-4 text-text-muted hover:text-red-500" />
</button>
)
)}
</div>
{/* Size info */}
<div className="flex justify-between items-baseline text-sm text-text-muted font-mono">
<span>
{hasBytes
? `${formatBytes(download.downloadedBytes!, 1)} / ${formatBytes(download.totalBytes!, 1)}`
: `${download.percent.toFixed(1)}% / 100%`}
</span>
</div>
{/* Progress bar */}
<div className="relative">
<div className="h-6 bg-desert-green-lighter bg-opacity-20 rounded-lg border border-default overflow-hidden">
<div
className="h-full rounded-lg transition-all duration-1000 ease-out bg-desert-green"
style={{ width: `${download.percent}%` }}
/>
</div>
<div
className={`absolute top-1/2 -translate-y-1/2 font-bold text-xs ${
download.percent > 15
? 'left-2 text-white drop-shadow-md'
: 'right-2 text-desert-green'
}`}
>
{Math.round(download.percent)}%
</div>
</div>
{/* Status indicator */}
<div className="flex items-center gap-2">
<div className="w-2 h-2 rounded-full bg-green-500 animate-pulse" />
<span className="text-xs text-text-muted">
Downloading...{speed > 0 ? ` ${formatSpeed(speed)}` : ''}
</span>
</div>
</div>
)}
</div>
)
})
) : (
<p className="text-text-muted">No active model downloads</p>
)}

View File

@ -19,36 +19,66 @@ export function getAllDiskDisplayItems(
): DiskDisplayItem[] {
const validDisks = disks?.filter((d) => d.totalSize > 0) || []
// If /app/storage is backed by a network filesystem (NFS/CIFS), it won't
// appear in the block-device list. Prepend it so NAS and OS disk are both
// shown. Local-disk-backed /app/storage is already reported in disk[] and
// fsSize[], so skip it here to avoid a phantom "NAS Storage" entry.
const NETWORK_FS_TYPES = new Set(['nfs', 'nfs4', 'cifs', 'smbfs', 'smb2', 'smb3'])
const storageMount = fsSize?.find(
(fs) =>
fs.mount === '/app/storage' && fs.size > 0 && NETWORK_FS_TYPES.has(fs.type?.toLowerCase())
)
const storageMountItem: DiskDisplayItem[] = storageMount
? [
{
label: 'NAS Storage',
value: storageMount.use || 0,
total: formatBytes(storageMount.size),
used: formatBytes(storageMount.used),
subtext: `${formatBytes(storageMount.used)} / ${formatBytes(storageMount.size)}`,
totalBytes: storageMount.size,
usedBytes: storageMount.used,
},
]
: []
if (validDisks.length > 0) {
return validDisks.map((disk) => ({
label: disk.name || 'Unknown',
value: disk.percentUsed || 0,
total: formatBytes(disk.totalSize),
used: formatBytes(disk.totalUsed),
subtext: `${formatBytes(disk.totalUsed || 0)} / ${formatBytes(disk.totalSize || 0)}`,
totalBytes: disk.totalSize,
usedBytes: disk.totalUsed,
}))
return [
...storageMountItem,
...validDisks.map((disk) => ({
label: disk.name || 'Unknown',
value: disk.percentUsed || 0,
total: formatBytes(disk.totalSize),
used: formatBytes(disk.totalUsed),
subtext: `${formatBytes(disk.totalUsed || 0)} / ${formatBytes(disk.totalSize || 0)}`,
totalBytes: disk.totalSize,
usedBytes: disk.totalUsed,
})),
]
}
if (fsSize && fsSize.length > 0) {
const seen = new Set<number>()
const uniqueFs = fsSize.filter((fs) => {
if (fs.size <= 0 || seen.has(fs.size)) return false
if (storageMount && fs.mount === '/app/storage') return false
seen.add(fs.size)
return true
})
const realDevices = uniqueFs.filter((fs) => fs.fs.startsWith('/dev/'))
const displayFs = realDevices.length > 0 ? realDevices : uniqueFs
return displayFs.map((fs) => ({
label: fs.fs || 'Unknown',
value: fs.use || 0,
total: formatBytes(fs.size),
used: formatBytes(fs.used),
subtext: `${formatBytes(fs.used)} / ${formatBytes(fs.size)}`,
totalBytes: fs.size,
usedBytes: fs.used,
}))
return [
...storageMountItem,
...displayFs.map((fs) => ({
label: fs.fs || 'Unknown',
value: fs.use || 0,
total: formatBytes(fs.size),
used: formatBytes(fs.used),
subtext: `${formatBytes(fs.used)} / ${formatBytes(fs.size)}`,
totalBytes: fs.size,
usedBytes: fs.used,
})),
]
}
return []
@ -59,6 +89,15 @@ export function getPrimaryDiskInfo(
disks: NomadDiskInfo[] | undefined,
fsSize: Systeminformation.FsSizeData[] | undefined
): { totalSize: number; totalUsed: number } | null {
// First, check if /app/storage is on a dedicated filesystem (e.g. NFS mount).
// This is the most accurate source since it reflects the actual backing
// store for NOMAD content, regardless of whether it's a local disk or
// network-attached storage.
const storageMount = fsSize?.find((fs) => fs.mount === '/app/storage' && fs.size > 0)
if (storageMount) {
return { totalSize: storageMount.size, totalUsed: storageMount.used }
}
const validDisks = disks?.filter((d) => d.totalSize > 0) || []
if (validDisks.length > 0) {
const diskWithRoot = validDisks.find((d) =>

View File

@ -1,11 +1,25 @@
import { useEffect, useRef, useState } from 'react'
import { useCallback, useEffect, useRef, useState } from 'react'
import { useTransmit } from 'react-adonis-transmit'
export type OllamaModelDownload = {
model: string
percent: number
timestamp: string
/**
* BullMQ job id included on progress events from v1.32+ so the frontend can
* call the cancel API. Optional for backward compat with stale broadcasts during
* a hot upgrade.
*/
jobId?: string
/**
* Aggregate bytes across all blobs in the model pull, summed from Ollama's
* per-digest progress events on the backend. Optional for backward compat.
*/
downloadedBytes?: number
totalBytes?: number
error?: string
/** Set to 'cancelled' alongside percent === -2 when the user cancels the download */
status?: 'cancelled'
}
export default function useOllamaModelDownloads() {
@ -13,6 +27,19 @@ export default function useOllamaModelDownloads() {
const [downloads, setDownloads] = useState<Map<string, OllamaModelDownload>>(new Map())
const timeoutsRef = useRef<Set<ReturnType<typeof setTimeout>>>(new Set())
/**
 * Optimistically remove a download from local state. Used by the cancel UI to
 * clear the entry immediately on a successful API call, in case the Transmit
 * cancelled broadcast arrives late or the SSE connection drops at exactly the
*/
const removeDownload = useCallback((model: string) => {
setDownloads((current) => {
const next = new Map(current)
next.delete(model)
return next
})
}, [])
useEffect(() => {
const unsubscribe = subscribe('ollama-model-download', (data: OllamaModelDownload) => {
setDownloads((prev) => {
@ -30,6 +57,21 @@ export default function useOllamaModelDownloads() {
})
}, 15000)
timeoutsRef.current.add(errorTimeout)
} else if (data.percent === -2) {
// Download cancelled — clear quickly (matches the completion TTL).
// Component-level optimistic removal usually beats this branch, but it's
// here as a safety net for cases where the cancel comes from another tab
// or another client.
const cancelTimeout = setTimeout(() => {
timeoutsRef.current.delete(cancelTimeout)
setDownloads((current) => {
const next = new Map(current)
next.delete(data.model)
return next
})
}, 2000)
timeoutsRef.current.add(cancelTimeout)
updated.delete(data.model)
} else if (data.percent >= 100) {
// If download is complete, keep it for a short time before removing to allow UI to show 100% progress
updated.set(data.model, data)
@ -60,5 +102,5 @@ export default function useOllamaModelDownloads() {
const downloadsArray = Array.from(downloads.values())
return { downloads: downloadsArray, activeCount: downloads.size }
return { downloads: downloadsArray, activeCount: downloads.size, removeDownload }
}

View File

@ -369,7 +369,7 @@ export default function ModelsPage(props: {
</td>
<td className="px-4 py-3">
<span className="text-sm text-text-secondary">
{model.details.parameter_size || 'N/A'}
{model.details?.parameter_size || 'N/A'}
</span>
</td>
<td className="px-4 py-3">

View File

@ -83,8 +83,10 @@ export default function ZimRemoteExplorer() {
useInfiniteQuery<ListRemoteZimFilesResponse>({
queryKey: ['remote-zim-files', query],
queryFn: async ({ pageParam = 0 }) => {
const pageParsed = parseInt((pageParam as number).toString(), 10)
const start = isNaN(pageParsed) ? 0 : pageParsed * 12
// pageParam is an opaque Kiwix offset returned by the backend as `next_start`.
// The backend accumulates across multiple upstream pages when needed (#731), so the
// frontend can't derive the next offset from a 12-item page assumption.
const start = typeof pageParam === 'number' ? pageParam : 0
const res = await api.listRemoteZimFiles({ start, count: 12, query: query || undefined })
if (!res) {
throw new Error('Failed to fetch remote ZIM files.')
@@ -92,12 +94,7 @@
         return res.data
       },
       initialPageParam: 0,
-      getNextPageParam: (_lastPage, pages) => {
-        if (!_lastPage.has_more) {
-          return undefined // No more pages to fetch
-        }
-        return pages.length
-      },
+      getNextPageParam: (lastPage) => (lastPage.has_more ? lastPage.next_start : undefined),
       refetchOnWindowFocus: false,
       placeholderData: keepPreviousData,
     })
@@ -119,18 +116,16 @@ export default function ZimRemoteExplorer() {
     (parentRef?: HTMLDivElement | null) => {
       if (parentRef) {
         const { scrollHeight, scrollTop, clientHeight } = parentRef
-        //once the user has scrolled within 200px of the bottom of the table, fetch more data if we can
-        if (
-          scrollHeight - scrollTop - clientHeight < 200 &&
-          !isFetching &&
-          hasMore &&
-          flatData.length > 0
-        ) {
+        // Fetch more when near the bottom. The `flatData.length > 0` guard that used to be
+        // here caused the #731 deadlock when a heavily-saturated install returned an empty
+        // page with has_more=true — removing it lets the existing on-mount/on-data effect
+        // below drive bounded auto-fetch until hasMore flips false.
+        if (scrollHeight - scrollTop - clientHeight < 200 && !isFetching && hasMore) {
           fetchNextPage()
         }
       }
     },
-    [fetchNextPage, isFetching, hasMore, flatData.length]
+    [fetchNextPage, isFetching, hasMore]
   )
   const virtualizer = useVirtualizer({
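Taken together, these hunks replace client-derived offsets with a server-owned cursor. That contract can be isolated as a pure function (a sketch; `Page` is a hypothetical stand-in for `ListRemoteZimFilesResponse`):

```typescript
// Page shape assumed from the diff: the backend owns the cursor and returns
// it as next_start; the client echoes it back verbatim instead of computing
// pages.length * 12.
type Page = { items: unknown[]; has_more: boolean; next_start: number }

function getNextPageParam(lastPage: Page): number | undefined {
  return lastPage.has_more ? lastPage.next_start : undefined
}
```

Because the backend owns `next_start`, even an empty page with `has_more=true` advances the cursor instead of re-requesting the same window, which is the case that unwound the #731 deadlock.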


@@ -38,7 +38,7 @@
         "@vinejs/vine": "^3.0.1",
         "@vitejs/plugin-react": "^4.6.0",
         "autoprefixer": "^10.4.21",
-        "axios": "^1.13.5",
+        "axios": "^1.15.0",
         "better-sqlite3": "^12.1.1",
         "bullmq": "^5.65.1",
         "cheerio": "^1.2.0",
@@ -96,7 +96,7 @@
         "prettier": "^3.5.3",
         "ts-node-maintained": "^10.9.5",
         "typescript": "~5.8.3",
-        "vite": "^6.4.1"
+        "vite": "^6.4.2"
       }
     },
     "node_modules/@adobe/css-tools": {
@@ -520,9 +520,9 @@
       }
     },
     "node_modules/@adonisjs/http-server": {
-      "version": "7.8.0",
-      "resolved": "https://registry.npmjs.org/@adonisjs/http-server/-/http-server-7.8.0.tgz",
-      "integrity": "sha512-aVMOpExPDNwxjnKGnc4g4sJTIQC3CfNwzWfPFWJm4WnAGXxdI3OxI2zU9FTopB50y0OVK3dWO4/c1Fu6U4vjWQ==",
+      "version": "7.8.1",
+      "resolved": "https://registry.npmjs.org/@adonisjs/http-server/-/http-server-7.8.1.tgz",
+      "integrity": "sha512-ScwKHJstXQbkQXSNqD6MOESowZ+WhRyDXxjSQV/T7IpyMEg/F8NxpR5jAvrpw1BaGzd3t50LrgTrb7ouD8DOpA==",
       "license": "MIT",
       "dependencies": {
         "@paralleldrive/cuid2": "^2.2.2",
@@ -4383,7 +4383,6 @@
       "cpu": [
         "arm64"
       ],
-      "dev": true,
       "license": "Apache-2.0 AND MIT",
       "optional": true,
       "os": [
@@ -4400,7 +4399,6 @@
       "cpu": [
         "x64"
       ],
-      "dev": true,
       "license": "Apache-2.0 AND MIT",
       "optional": true,
       "os": [
@@ -4417,7 +4415,6 @@
       "cpu": [
         "arm"
       ],
-      "dev": true,
       "license": "Apache-2.0",
       "optional": true,
       "os": [
@@ -4434,7 +4431,6 @@
       "cpu": [
         "arm64"
       ],
-      "dev": true,
       "license": "Apache-2.0 AND MIT",
       "optional": true,
       "os": [
@@ -4451,7 +4447,6 @@
       "cpu": [
         "arm64"
       ],
-      "dev": true,
       "license": "Apache-2.0 AND MIT",
       "optional": true,
       "os": [
@@ -4468,7 +4463,6 @@
       "cpu": [
         "x64"
       ],
-      "dev": true,
       "license": "Apache-2.0 AND MIT",
       "optional": true,
       "os": [
@@ -4485,7 +4479,6 @@
       "cpu": [
         "x64"
       ],
-      "dev": true,
       "license": "Apache-2.0 AND MIT",
       "optional": true,
       "os": [
@@ -4502,7 +4495,6 @@
       "cpu": [
         "arm64"
       ],
-      "dev": true,
       "license": "Apache-2.0 AND MIT",
       "optional": true,
       "os": [
@@ -4519,7 +4511,6 @@
       "cpu": [
         "ia32"
       ],
-      "dev": true,
       "license": "Apache-2.0 AND MIT",
       "optional": true,
       "os": [
@@ -4536,7 +4527,6 @@
       "cpu": [
         "x64"
       ],
-      "dev": true,
       "license": "Apache-2.0 AND MIT",
       "optional": true,
       "os": [
@@ -6408,14 +6398,14 @@
       }
     },
     "node_modules/axios": {
-      "version": "1.13.5",
-      "resolved": "https://registry.npmjs.org/axios/-/axios-1.13.5.tgz",
-      "integrity": "sha512-cz4ur7Vb0xS4/KUN0tPWe44eqxrIu31me+fbang3ijiNscE129POzipJJA6zniq2C/Z6sJCjMimjS8Lc/GAs8Q==",
+      "version": "1.15.0",
+      "resolved": "https://registry.npmjs.org/axios/-/axios-1.15.0.tgz",
+      "integrity": "sha512-wWyJDlAatxk30ZJer+GeCWS209sA42X+N5jU2jy6oHTp7ufw8uzUTVFBX9+wTfAlhiJXGS0Bq7X6efruWjuK9Q==",
       "license": "MIT",
       "dependencies": {
         "follow-redirects": "^1.15.11",
         "form-data": "^4.0.5",
-        "proxy-from-env": "^1.1.0"
+        "proxy-from-env": "^2.1.0"
       }
     },
     "node_modules/bail": {
@@ -9068,9 +9058,9 @@
       }
     },
     "node_modules/follow-redirects": {
-      "version": "1.15.11",
-      "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.11.tgz",
-      "integrity": "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ==",
+      "version": "1.16.0",
+      "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.16.0.tgz",
+      "integrity": "sha512-y5rN/uOsadFT/JfYwhxRS5R7Qce+g3zG97+JrtFZlC9klX/W5hD7iiLzScI4nZqUS7DNUdhPgw4xI8W2LuXlUw==",
       "funding": [
         {
           "type": "individual",
@@ -11029,9 +11019,9 @@
       }
     },
     "node_modules/lodash": {
-      "version": "4.17.23",
-      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
-      "integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
+      "version": "4.18.1",
+      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.18.1.tgz",
+      "integrity": "sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q==",
       "license": "MIT"
     },
     "node_modules/lodash-es": {
@@ -13758,9 +13748,9 @@
       }
     },
     "node_modules/protobufjs": {
-      "version": "7.5.4",
-      "resolved": "https://registry.npmjs.org/protobufjs/-/protobufjs-7.5.4.tgz",
-      "integrity": "sha512-CvexbZtbov6jW2eXAvLukXjXUW1TzFaivC46BpWc/3BpcCysb5Vffu+B3XHMm8lVEuy2Mm4XGex8hBSg1yapPg==",
+      "version": "7.5.5",
+      "resolved": "https://registry.npmjs.org/protobufjs/-/protobufjs-7.5.5.tgz",
+      "integrity": "sha512-3wY1AxV+VBNW8Yypfd1yQY9pXnqTAN+KwQxL8iYm3/BjKYMNg4i0owhEe26PWDOMaIrzeeF98Lqd5NGz4omiIg==",
       "hasInstallScript": true,
       "license": "BSD-3-Clause",
       "dependencies": {
@@ -13782,9 +13772,9 @@
       }
     },
     "node_modules/protocol-buffers-schema": {
-      "version": "3.6.0",
-      "resolved": "https://registry.npmjs.org/protocol-buffers-schema/-/protocol-buffers-schema-3.6.0.tgz",
-      "integrity": "sha512-TdDRD+/QNdrCGCE7v8340QyuXd4kIWIgapsE2+n/SaGiSSbomYl4TjHlvIoCWRpE7wFt02EpB35VVA2ImcBVqw==",
+      "version": "3.6.1",
+      "resolved": "https://registry.npmjs.org/protocol-buffers-schema/-/protocol-buffers-schema-3.6.1.tgz",
+      "integrity": "sha512-VG2K63Igkiv9p76tk1lilczEK1cT+kCjKtkdhw1dQZV3k3IXJbd3o6Ho8b9zJZaHSnT2hKe4I+ObmX9w6m5SmQ==",
       "license": "MIT"
     },
     "node_modules/proxy-addr": {
@@ -13801,10 +13791,13 @@
       }
     },
     "node_modules/proxy-from-env": {
-      "version": "1.1.0",
-      "resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-1.1.0.tgz",
-      "integrity": "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==",
-      "license": "MIT"
+      "version": "2.1.0",
+      "resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-2.1.0.tgz",
+      "integrity": "sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA==",
+      "license": "MIT",
+      "engines": {
+        "node": ">=10"
+      }
     },
     "node_modules/pump": {
       "version": "3.0.3",
@@ -16425,9 +16418,9 @@
       }
     },
     "node_modules/vite": {
-      "version": "6.4.1",
-      "resolved": "https://registry.npmjs.org/vite/-/vite-6.4.1.tgz",
-      "integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
+      "version": "6.4.2",
+      "resolved": "https://registry.npmjs.org/vite/-/vite-6.4.2.tgz",
+      "integrity": "sha512-2N/55r4JDJ4gdrCvGgINMy+HH3iRpNIz8K6SFwVsA+JbQScLiC+clmAxBgwiSPgcG9U15QmvqCGWzMbqda5zGQ==",
       "license": "MIT",
       "dependencies": {
         "esbuild": "^0.25.0",


@@ -59,7 +59,7 @@
     "prettier": "^3.5.3",
     "ts-node-maintained": "^10.9.5",
     "typescript": "~5.8.3",
-    "vite": "^6.4.1"
+    "vite": "^6.4.2"
   },
   "dependencies": {
     "@adonisjs/auth": "^9.4.0",
@@ -91,7 +91,7 @@
     "@vinejs/vine": "^3.0.1",
     "@vitejs/plugin-react": "^4.6.0",
     "autoprefixer": "^10.4.21",
-    "axios": "^1.13.5",
+    "axios": "^1.15.0",
     "better-sqlite3": "^12.1.1",
     "bullmq": "^5.65.1",
     "cheerio": "^1.2.0",


@@ -16,6 +16,7 @@ export type ListRemoteZimFilesResponse = {
   items: RemoteZimFileEntry[]
   has_more: boolean
   total_count: number
+  next_start: number
 }
 export type RawRemoteZimFileEntry = {


@@ -44,7 +44,9 @@ while true; do
   # These are not real filesystem roots and report misleading sizes
   [[ -f "/host${mountpoint}" ]] && continue
-  STATS=$(df -B1 "/host${mountpoint}" 2>/dev/null | awk 'NR==2{print $2,$3,$4,$5}')
+  # Use -P (POSIX) to force single-line output even when device names
+  # are long (e.g. NFS mounts), which otherwise wrap across two lines
+  STATS=$(df -P -B1 "/host${mountpoint}" 2>/dev/null | awk 'NR==2{print $2,$3,$4,$5}')
   [[ -z "$STATS" ]] && continue
   read -r size used avail pct <<< "$STATS"
@@ -60,7 +62,7 @@ while true; do
   # The disk-collector container always has /storage bind-mounted from the host,
   # so df on /storage reflects the actual backing device and its capacity.
   if [[ "$FIRST" -eq 1 ]] && mountpoint -q /storage 2>/dev/null; then
-    STATS=$(df -B1 /storage 2>/dev/null | awk 'NR==2{print $1,$2,$3,$4,$5}')
+    STATS=$(df -P -B1 /storage 2>/dev/null | awk 'NR==2{print $1,$2,$3,$4,$5}')
     if [[ -n "$STATS" ]]; then
       read -r dev size used avail pct <<< "$STATS"
       pct="${pct/\%/}"
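The fixed column positions that the `awk 'NR==2{...}'` extraction depends on can be illustrated with fabricated sample output (a sketch, not the collector's code): with `-P`, each mount is guaranteed to occupy a single record, so splitting row 2 on whitespace yields device, size, used, available, and percent at known indices.

```typescript
// Fabricated df -P -B1 output for illustration. Without -P, a long device
// name would wrap onto its own line and row 2 would hold only the numeric
// columns, breaking fixed-position parsing.
const dfOutput = `Filesystem 1-blocks Used Available Capacity Mounted on
/dev/sda1 1000000000 400000000 600000000 40% /storage`

function parseDf(output: string): { dev: string; size: number; used: number; avail: number; pct: number } {
  // NR==2 equivalent: take the second line, whitespace-split like awk does
  const cols = output.split('\n')[1].trim().split(/\s+/)
  return {
    dev: cols[0],
    size: Number(cols[1]),
    used: Number(cols[2]),
    avail: Number(cols[3]),
    pct: Number(cols[4].replace('%', '')), // mirrors pct="${pct/\%/}"
  }
}
```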


@@ -1,6 +1,6 @@
 {
   "name": "project-nomad",
-  "version": "1.31.0",
+  "version": "1.31.1",
   "description": "\"",
   "main": "index.js",
   "scripts": {