Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-07 22:34:37 +00:00)
* Since our API packages use import * in __init__.py, we can import directly from llama_stack.apis.models instead of llama_stack.apis.models.models. However, the choice to use import * is debatable and may need to be reconsidered in the future.
* Remove the unnecessary Ruff F401 suppression.
* Consolidate the Ruff F403 rule configuration in pyproject.toml.

Signed-off-by: Sébastien Han <seb@redhat.com>
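A minimal sketch of the wildcard re-export pattern the commit message describes, assuming llama-stack is installed. The symbol name Model is an assumption used only for illustration; substitute any name actually exported by llama_stack.apis.models.models.

```python
# Sketch of the re-export pattern (illustrative, not the authoritative layout).
#
# The package __init__.py contains roughly:
#     from .models import *   # F403 for this pattern is configured once in pyproject.toml
#
# Because __init__.py re-exports the submodule's public names, callers can
# import from the package instead of the submodule.

# Submodule import (still works, but no longer necessary):
from llama_stack.apis.models.models import Model as ModelFromSubmodule

# Package-level import (preferred, enabled by the wildcard re-export).
# "Model" is an assumed example symbol.
from llama_stack.apis.models import Model as ModelFromPackage

# Both paths resolve to the same class object.
assert ModelFromSubmodule is ModelFromPackage
```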
agents
datasetio
eval
files/localfs
inference
ios/inference
post_training
safety
scoring
telemetry
tool_runtime
vector_io
__init__.py