llama-stack-mirror/llama_stack/providers/utils/inference
Commit c245cb580c by Sébastien Han
chore: remove nested imports
* Since our API packages re-export their public names with import * in
  __init__.py, callers can import directly from llama_stack.apis.models
  instead of the nested llama_stack.apis.models.models (see the sketch
  after this list). Whether import * is the right choice is debatable
  and may need to be reconsidered in the future.

* Remove the unnecessary Ruff F401 suppression.

* Consolidate the Ruff F403 rule configuration in pyproject.toml.
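
A minimal sketch of the import pattern behind the first bullet, assuming
llama_stack/apis/models/__init__.py performs a star re-export of its nested
module; the Model name is used only as an example and the snippet is
illustrative, not copied from the repository:

    # Assumed shape of llama_stack/apis/models/__init__.py:
    #     from .models import *
    # The F403 warning this pattern triggers ("unable to detect undefined
    # names from star imports") is configured centrally in pyproject.toml
    # rather than suppressed with inline "# noqa" comments.
    #
    # Because of that re-export, both import paths below resolve to the
    # same class object, so call sites can use the shorter one.
    from llama_stack.apis.models import Model as FlatModel            # preferred
    from llama_stack.apis.models.models import Model as NestedModel   # nested form this commit removes

    assert FlatModel is NestedModel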

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-06-25 13:07:15 +02:00
File                     Last commit                                                           Last updated
__init__.py              chore: enable pyupgrade fixes (#1806)                                 2025-05-01 14:23:50 -07:00
embedding_mixin.py       feat: New OpenAI compat embeddings API (#2314)                        2025-05-31 22:11:47 -07:00
inference_store.py       feat: support auth attributes in inference/responses stores (#2389)  2025-06-20 10:24:45 -07:00
litellm_openai_mixin.py  chore: remove nested imports                                          2025-06-25 13:07:15 +02:00
model_registry.py        chore: remove nested imports                                          2025-06-25 13:07:15 +02:00
openai_compat.py         chore: remove nested imports                                          2025-06-25 13:07:15 +02:00
prompt_adapter.py        chore: more mypy fixes (#2029)                                        2025-05-06 09:52:31 -07:00
stream_utils.py          feat: drop python 3.10 support (#2469)                                2025-06-19 12:07:14 +05:30