| options/nixos/services.ollama.models | The directory that the ollama service will read models from and download new models to.
|
| options/nixos/services.ollama.loadModels | Download these models using ollama pull as soon as ollama.service has started.
|
| options/nixos/services.ollama.syncModels | Synchronize all currently installed models with those declared in services.ollama.loadModels,
removing any models that are installed but not currently declared there.
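As a sketch, the two options above can be combined so the installed model set always matches the declared one (the model names here are illustrative, not a recommendation):

```nix
{
  services.ollama = {
    enable = true;
    # Pull these models with `ollama pull` once ollama.service is up.
    loadModels = [ "llama3.2" "mistral" ];
    # Remove any installed models not listed in loadModels.
    syncModels = true;
  };
}
```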
|
| options/nixos/services.ollama.enable | Whether to enable ollama server for local large language models.
|
| options/home-manager/services.ollama.enable | Whether to enable ollama server for local large language models.
|
| options/nixos/services.ollama.package | The ollama package to use.
|
| options/home-manager/services.ollama.acceleration | What interface to use for hardware acceleration.
- null: default behavior
  - if nixpkgs.config.rocmSupport is enabled, uses "rocm"
  - if nixpkgs.config.cudaSupport is enabled, uses "cuda"
  - otherwise defaults to false
- false: disable GPU, only use CPU
- "rocm": supported by most modern AMD GPUs
  - may require overriding the GPU type with services.ollama.rocmOverrideGfx if ROCm doesn't detect your AMD GPU
- "cuda": supported by most modern NVIDIA GPUs
|
| options/nixos/services.ollama.acceleration | What interface to use for hardware acceleration.
|
| options/nixos/services.ollama.rocmOverrideGfx | Override the GPU model that ROCm detects your card as.
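A minimal sketch of enabling ROCm acceleration with an explicit GFX override, for cases where ROCm misdetects the GPU (the version string is an example; check your card's actual gfx target before setting it):

```nix
{
  services.ollama = {
    enable = true;
    acceleration = "rocm";
    # Force the GFX version ROCm should assume for this GPU.
    rocmOverrideGfx = "10.3.0";
  };
}
```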
|