llama-stack-mirror/llama_stack/providers/remote/inference/ollama/config.py
Matthew Farrellee bc47900ec0 feat: add refresh_models support to inference adapters (default: false)
Inference adapters can now configure `refresh_models: bool` to control periodic model listing from their providers.

BREAKING CHANGE: the Together inference adapter's default changed. It previously always refreshed; it now follows the config.
2025-10-07 07:23:07 -04:00
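As a hedged illustration of the flag this commit introduces (the field name comes from the commit message; the surrounding run-config layout is an assumption based on the usual remote-provider shape), an Ollama provider entry might look like:

```yaml
# Hypothetical run-config fragment; `refresh_models` is the field added by
# this commit, the other keys are assumed to follow the standard shape.
inference:
  - provider_id: ollama
    provider_type: remote::ollama
    config:
      url: ${env.OLLAMA_URL:=http://localhost:11434}
      refresh_models: false  # default per this commit; set true to periodically re-list models
```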


# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
from typing import Any

from llama_stack.providers.utils.inference.model_registry import RemoteInferenceProviderConfig

DEFAULT_OLLAMA_URL = "http://localhost:11434"


class OllamaImplConfig(RemoteInferenceProviderConfig):
    url: str = DEFAULT_OLLAMA_URL

    @classmethod
    def sample_run_config(cls, url: str = "${env.OLLAMA_URL:=http://localhost:11434}", **kwargs) -> dict[str, Any]:
        return {
            "url": url,
        }
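To show how this config behaves without pulling in the llama_stack package, here is a minimal self-contained sketch. `OllamaImplConfigSketch` is a hypothetical stand-in for the real class (which inherits its `refresh_models` field from `RemoteInferenceProviderConfig`); the defaults mirror the file and the commit message above.

```python
from typing import Any

DEFAULT_OLLAMA_URL = "http://localhost:11434"


class OllamaImplConfigSketch:
    """Hypothetical stand-in for OllamaImplConfig; the real class is a
    pydantic model inheriting from RemoteInferenceProviderConfig."""

    def __init__(self, url: str = DEFAULT_OLLAMA_URL, refresh_models: bool = False):
        self.url = url
        # Per the commit message, refresh_models defaults to False.
        self.refresh_models = refresh_models

    @classmethod
    def sample_run_config(cls, url: str = "${env.OLLAMA_URL:=http://localhost:11434}", **kwargs) -> dict[str, Any]:
        # The "${env.VAR:=default}" placeholder is left verbatim here;
        # the stack resolves it against the environment at run time.
        return {"url": url}


cfg = OllamaImplConfigSketch()
print(cfg.url)                                        # http://localhost:11434
print(OllamaImplConfigSketch.sample_run_config()["url"])  # ${env.OLLAMA_URL:=http://localhost:11434}
```

Note that `sample_run_config` returns the unexpanded `${env.…}` template rather than a resolved URL: sample configs are meant to be written into a run-config file, where environment substitution happens later.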