Existing Ollama API support still functions as before. The OpenAI and
Ollama APIs mostly expose the same features; however, model file size
is not supported by OpenAI's API, so when a user chooses an OpenAI
endpoint the models will show up as the model name only, without the size.
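
For illustration, here is a minimal sketch of how the two listings differ. The endpoint and field names follow the public OpenAI Node SDK and Ollama REST API; the wrapper function itself is hypothetical, not the actual code in this PR:

```ts
import OpenAI from "openai";

// Hypothetical helper: returns display labels for the model picker.
// Ollama's /api/tags includes a byte size per model; OpenAI's
// /v1/models does not, so those entries fall back to the bare name.
async function listModelLabels(baseURL: string, useOpenAI: boolean): Promise<string[]> {
  if (useOpenAI) {
    const client = new OpenAI({ baseURL, apiKey: process.env.OPENAI_API_KEY ?? "" });
    const models = await client.models.list();
    return models.data.map((m) => m.id); // no size field available
  }
  const res = await fetch(`${baseURL}/api/tags`);
  const { models } = (await res.json()) as {
    models: { name: string; size: number }[];
  };
  return models.map((m) => `${m.name} (${(m.size / 1e9).toFixed(1)} GB)`);
}
```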
Running `npm install openai` triggered some updates in
admin/package-lock.json, such as adding many instances of `"dev": true`.
This further enhances the user's ability to run the LLM on a different
host.
This adds a new setting in the chat app under "models & settings" where
the user can set "Remote Ollama URL" to the IP or hostname of another
device on the network that is running Ollama with
`OLLAMA_HOST=0.0.0.0:11434` so it accepts connections from other hosts.