mirror of
https://github.com/Crosstalk-Solutions/project-nomad.git
synced 2026-04-02 14:59:26 +02:00
New defaults:

**OLLAMA_NO_CLOUD=1** — "Ollama can run in local only mode by disabling Ollama's cloud features. By turning off Ollama's cloud features, you will lose the ability to use Ollama's cloud models and web search."

https://ollama.com/blog/web-search
https://docs.ollama.com/faq#how-do-i-disable-ollama%E2%80%99s-cloud-features

Example output:

```
ollama run minimax-m2.7:cloud
Error: ollama cloud is disabled: remote model details are unavailable
```

Cloud features can safely be disabled because logging in to Ollama cloud requires clicking a link, and there's no real way to do that in Nomad outside of reading the nomad_ollama logs. This default can be turned off in settings in case there's a model out there that doesn't play nice, but that doesn't seem necessary so far.

**OLLAMA_FLASH_ATTENTION=1** — "Flash Attention is a feature of most modern models that can significantly reduce memory usage as the context size grows."

Tested with llama3.2:

```
docker logs nomad_ollama --tail 1000 2>&1 | grep --color -i flash_attn
llama_context: flash_attn = enabled
```

Also tested with second_constantine/deepseek-coder-v2, which is based on https://huggingface.co/lmstudio-community/DeepSeek-Coder-V2-Lite-Instruct-GGUF — a model that specifically calls out that you should disable flash attention. During testing, it seems Ollama can do this for you automatically:

```
docker logs nomad_ollama --tail 1000 2>&1 | grep --color -i flash_attn
llama_context: flash_attn = disabled
```
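As a sketch of how defaults like these could be wired into the Ollama container, here is a hypothetical docker-compose fragment. The service name `nomad_ollama` is taken from the log commands above; the image tag and overall file layout are assumptions, not taken from the repo:

```yaml
# Hypothetical compose fragment illustrating the new env-var defaults.
# Only the service name comes from the text; everything else is assumed.
services:
  nomad_ollama:
    image: ollama/ollama:latest
    environment:
      - OLLAMA_NO_CLOUD=1        # local-only mode: no cloud models, no web search
      - OLLAMA_FLASH_ATTENTION=1 # lower memory use as context size grows
```

Since both values are plain environment variables, the same effect could be had with `docker run -e OLLAMA_NO_CLOUD=1 -e OLLAMA_FLASH_ATTENTION=1 ...` for a one-off container.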