Fixes an issue seen with some models in LM Studio resulting in:
"The number of tokens to keep from the initial prompt is greater than the context length (n_keep: 4705>= n_ctx: 4096)"
Fixed the chars-per-token estimate: the old value was too optimistic,
causing the character cap to admit more text than the token budget
actually allowed.
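A minimal sketch of the idea, assuming a conservative ratio of ~3 chars per token; the constant name and value are illustrative, not the exact ones in this change:

```typescript
const CHARS_PER_TOKEN = 3; // assumption: conservative, over-estimates token counts

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

// Cap RAG text by characters so the token estimate never exceeds the budget.
function capToTokenBudget(text: string, maxTokens: number): string {
  const maxChars = maxTokens * CHARS_PER_TOKEN;
  return text.length <= maxChars ? text : text.slice(0, maxChars);
}
```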
After RAG injection, the system prompt's token count is estimated.
If it exceeds ~3000 tokens, the next standard context size (8192, 16384,
32768, or 65536) is requested: the smallest one large enough to fit the
prompt plus a 2048-token buffer for the conversation and response.
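A sketch of that escalation step; the thresholds come from the text above, while the function name is hypothetical:

```typescript
const STANDARD_CONTEXTS = [8192, 16384, 32768, 65536];
const RESPONSE_BUFFER = 2048; // room for the conversation and response
const ESCALATION_THRESHOLD = 3000; // at or below this, keep the default n_ctx

function pickContextSize(promptTokens: number): number | undefined {
  if (promptTokens <= ESCALATION_THRESHOLD) return undefined;
  // Smallest standard size that fits the prompt plus the buffer;
  // also undefined if even 65536 is too small, which the caller must handle.
  return STANDARD_CONTEXTS.find((n) => promptTokens + RESPONSE_BUFFER <= n);
}
```

For example, the prompt from the error above (4705 tokens) escalates to 8192, since 4705 + 2048 = 6753 fits within it.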
For Ollama, num_ctx is honoured per request and loads the model with
that context window. For LM Studio, the parameter is silently ignored,
but the tighter character estimate also reduces how much RAG text gets
stuffed in, so overflow is less likely.
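A sketch of how the per-request option is passed (the host handling and message shape are assumptions): Ollama's /api/chat accepts options.num_ctx, while OpenAI-compatible servers such as LM Studio have no equivalent field, so it is simply omitted for them.

```typescript
async function ollamaChat(
  host: string,
  model: string,
  messages: Array<{ role: string; content: string }>,
  numCtx?: number,
) {
  const res = await fetch(`${host}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model,
      messages,
      stream: false,
      // Only set when escalation picked a size; Ollama reloads the
      // model with this context window.
      ...(numCtx !== undefined ? { options: { num_ctx: numCtx } } : {}),
    }),
  });
  return res.json();
}
```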
Added a "Cleanup failed" button for the Processing Queue in the
Knowledge Base, since documents that fail to process tend to get stuck
and then can't be cleared.
Fixed document ingestion for OpenAI-compatible servers.
Updated some text in the chat and chat settings, since users will need
to manually download models when using a non-Ollama remote GPU server.
Existing Ollama API support still functions as before. The OpenAI and
Ollama APIs mostly have the same features; however, model file size is
not supported by OpenAI's API, so when a user chooses one of those
servers, models show up as just the model name without the size.
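A sketch of why the size disappears (the helper names are hypothetical): the OpenAI /v1/models endpoint returns only ids, while Ollama's /api/tags also reports a byte size per model.

```typescript
import OpenAI from 'openai';

async function listOpenAiModels(baseURL: string, apiKey = 'lm-studio') {
  const client = new OpenAI({ baseURL, apiKey });
  const page = await client.models.list();
  // Entries carry { id, object, created, owned_by } and nothing else,
  // so the UI falls back to showing the bare model name.
  return page.data.map((m) => ({ name: m.id, size: undefined as number | undefined }));
}

async function listOllamaModels(host: string) {
  const res = await fetch(`${host}/api/tags`);
  const { models } = (await res.json()) as {
    models: Array<{ name: string; size: number }>;
  };
  // Ollama includes size in bytes, so the UI can render it next to the name.
  return models.map((m) => ({ name: m.name, size: m.size }));
}
```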
`npm install openai` triggered some updates in admin/package-lock.json,
such as adding many instances of `"dev": true`.
This further enhances the user's ability to run the LLM on a different
host.
Model downloads that fail (e.g., when Ollama is too old for a model)
were silently retrying 40 times with no UI feedback. Now errors are
broadcast via SSE and shown in the Active Model Downloads section.
Version mismatch errors use UnrecoverableError to fail immediately
instead of retrying. Stale failed jobs are cleared on retry so users
aren't permanently blocked.
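A sketch of the fail-fast path, assuming a BullMQ worker (the queue name, job shape, error matching, and SSE helper are assumptions): throwing UnrecoverableError inside a processor moves the job straight to the failed state instead of consuming its remaining retry attempts.

```typescript
import { Worker, UnrecoverableError } from 'bullmq';

// Hypothetical stand-ins for the app's own helpers.
declare function downloadModel(model: string): Promise<void>;
declare function broadcastSse(event: string, data: unknown): void;

const worker = new Worker(
  'model-downloads',
  async (job) => {
    try {
      await downloadModel(job.data.model);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      // Surface the failure in the Active Model Downloads section.
      broadcastSse('download-error', { model: job.data.model, message });
      if (/requires a newer version of ollama/i.test(message)) {
        // A version mismatch can never succeed on retry; fail immediately.
        throw new UnrecoverableError(message);
      }
      throw err; // other errors keep their normal retry behaviour
    }
  },
  { connection: { host: 'localhost', port: 6379 } }, // assumption: local Redis
);
```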
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>