feat(docs): update README.md for remote AI host

Update some ollama specific text in the chat settings
This commit is contained in:
Henry Estela 2026-03-12 22:51:37 -07:00
parent f98664921a
commit 80dec52b0a
2 changed files with 13 additions and 3 deletions

View File

@@ -37,7 +37,7 @@ For more control over the installation process, copy and paste the [Docker Compo
N.O.M.A.D. is a management UI ("Command Center") and API that orchestrates a collection of containerized tools and resources via [Docker](https://www.docker.com/). It handles installation, configuration, and updates for everything — so you don't have to.
**Built-in capabilities include:**
- **AI Chat with Knowledge Base** — local AI chat powered by [Ollama](https://ollama.com/), with document upload and semantic search (RAG via [Qdrant](https://qdrant.tech/))
- **AI Chat with Knowledge Base** — local AI chat powered by [Ollama](https://ollama.com/) or any OpenAI-compatible API server (such as LM Studio or llama.cpp), with document upload and semantic search (RAG via [Qdrant](https://qdrant.tech/))
- **Information Library** — offline Wikipedia, medical references, ebooks, and more via [Kiwix](https://kiwix.org/)
- **Education Platform** — Khan Academy courses with progress tracking via [Kolibri](https://learningequality.org/kolibri/)
- **Offline Maps** — downloadable regional maps via [ProtoMaps](https://protomaps.com)
@@ -85,6 +85,16 @@ To run LLMs and other included AI tools:
- OS: Debian-based (Ubuntu recommended)
- Stable internet connection (required during install only)
#### Running AI models on a different host
By default, N.O.M.A.D.'s installer will attempt to set up Ollama on the host when the AI Assistant is installed. However, if you would like to run the AI model on a different host, go to the settings of the AI Assistant and enter the URL of either an Ollama or OpenAI-compatible API server (such as LM Studio or llama.cpp).
Note that if you run Ollama on a different host, you must start the server with the environment variable `OLLAMA_HOST=0.0.0.0` so it listens on all interfaces.
You are responsible for setting up the Ollama or OpenAI-compatible server on the other host.
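The remote-Ollama setup above can be sketched as a quick check. A minimal example, assuming Ollama's default port 11434 and a hypothetical AI host at 192.168.1.100 (substitute your own LAN address):

```shell
# On the remote AI host: bind Ollama to all interfaces
# (the default is 127.0.0.1, which rejects connections from other machines)
export OLLAMA_HOST=0.0.0.0
ollama serve &

# From the N.O.M.A.D. host: verify the server is reachable
# before entering the URL in the AI Assistant settings
curl http://192.168.1.100:11434/api/version
```

If the `curl` call returns a JSON version string, the same `http://192.168.1.100:11434` URL should work in the AI Assistant settings field.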
**For detailed build recommendations at three price points ($150–$1,000+), see the [Hardware Guide](https://www.projectnomad.us/hardware).**
Again, Project N.O.M.A.D. itself is quite lightweight - it's the tools and resources you choose to install with N.O.M.A.D. that will determine the specs required for your unique deployment.

View File

@@ -338,7 +338,7 @@ export default function ModelsPage(props: {
<div className="flex-1">
<Input
name="remoteOllamaUrl"
label="Remote Ollama URL"
label="Remote Ollama/OpenAI API URL"
placeholder="http://192.168.1.100:11434 (or :1234 for OpenAI-compatible apps)"
value={remoteOllamaUrl}
onChange={(e) => {