| Option | Description |
|---|---|
| services.ollama.models | The directory that the ollama service reads models from and downloads new models to. |
| services.ollama.loadModels | Models to download with `ollama pull` as soon as `ollama.service` has started. |
| services.ollama.syncModels | Whether to synchronize the currently installed models with those declared in `services.ollama.loadModels`, removing any installed models that are not declared there. |
| services.ollama.enable | Whether to enable the ollama server for local large language models. |
| services.ollama.package | The ollama package to use. |
| services.ollama.acceleration | The interface to use for hardware acceleration. |
| services.ollama.rocmOverrideGfx | Override the GPU model that ROCm detects. |
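A minimal sketch of how these options fit together in a NixOS configuration; the model name, paths, and GFX version below are illustrative assumptions, not defaults:

```nix
{ config, pkgs, ... }:
{
  services.ollama = {
    # Run the ollama server for local large language models.
    enable = true;
    # Directory the service reads models from and downloads new models to
    # (path is an example).
    models = "/var/lib/ollama/models";
    # Pulled via `ollama pull` once ollama.service has started
    # (model name is an example).
    loadModels = [ "llama3.2" ];
    # Remove installed models that are not listed in loadModels.
    syncModels = true;
    # Hardware acceleration backend, e.g. "rocm" or "cuda".
    acceleration = "rocm";
    # Override the GPU model ROCm detects (value is an example).
    rocmOverrideGfx = "10.3.0";
  };
}
```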