llama-stack-mirror/llama_stack/providers/utils/inference
Sébastien Han 803bf0e029
fix: solve ruff B008 warnings (#1444)
# What does this PR do?

The commit addresses Ruff warning B008 by refactoring the code to
avoid calling SamplingParams() directly in function argument defaults.
Instead, it either uses Field(default_factory=SamplingParams) for
Pydantic model fields, or defaults the argument to None and instantiates
SamplingParams inside the function body when the argument is None.
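
A minimal sketch of the two patterns, using a simplified stand-in for the
real SamplingParams type; the class and function names below are
illustrative, not actual Llama Stack code:

```python
from pydantic import BaseModel, Field


# Simplified stand-in for the real llama_stack SamplingParams type.
class SamplingParams(BaseModel):
    temperature: float = 1.0
    max_tokens: int = 512


# Pattern 1: Pydantic model fields use default_factory, so a fresh
# SamplingParams is built per instance instead of one shared default.
class InferenceRequest(BaseModel):
    prompt: str
    sampling_params: SamplingParams = Field(default_factory=SamplingParams)


# Pattern 2: plain function arguments default to None and the object is
# created inside the body, so no call happens at definition time (B008).
def generate(prompt: str, sampling_params: SamplingParams | None = None) -> str:
    if sampling_params is None:
        sampling_params = SamplingParams()
    return f"{prompt} (temperature={sampling_params.temperature})"
```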

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-06 16:48:35 -08:00
__init__.py chore: move all Llama Stack types from llama-models to llama-stack (#1098) 2025-02-14 09:10:59 -08:00
embedding_mixin.py fix: dont assume SentenceTransformer is imported 2025-02-25 16:53:01 -08:00
litellm_openai_mixin.py fix: solve ruff B008 warnings (#1444) 2025-03-06 16:48:35 -08:00
model_registry.py feat(providers): support non-llama models for inference providers (#1200) 2025-02-21 13:21:28 -08:00
openai_compat.py chore(lint): update Ruff ignores for project conventions and maintainability (#1184) 2025-02-28 09:36:49 -08:00
prompt_adapter.py feat: better using get_default_tool_prompt_format (#1360) 2025-03-03 14:50:06 -08:00