llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Sébastien Han c245cb580c
chore: remove nested imports
* Since our API packages use import * in __init__.py, we can import
  directly from llama_stack.apis.models instead of
  llama_stack.apis.models.models (see the import sketch below).
  However, the choice to use import * is debatable and may need to be
  reconsidered in the future.

* Remove the unnecessary Ruff F401 suppression.

* Consolidate the Ruff F403 rule configuration in pyproject.toml (see
  the lint configuration sketch below).

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-06-25 13:07:15 +02:00
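
A minimal sketch of the import flattening described in the first bullet, assuming llama_stack.apis.models re-exports its submodule's symbols via a star import; Model is one of the exported names and is used here purely for illustration, and the alias exists only to show that both spellings bind the same class:

```python
# Before: a nested import reaching into the submodule directly.
from llama_stack.apis.models.models import Model as ModelNested

# After: the package __init__.py already does `from .models import *`,
# so the shallower package-level import resolves to the same class.
from llama_stack.apis.models import Model

# Call sites need no other changes: both spellings bind the same object.
assert Model is ModelNested
```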
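
And a hedged sketch of what the lint cleanup in the other two bullets looks like from inside an API package __init__.py; the file path and the placement of the old suppressions are assumptions, only the move of the F403 setting into pyproject.toml comes from the commit message:

```python
# Illustrative __init__.py for an API package (e.g. llama_stack/apis/models/).
# This star re-export is what makes `from llama_stack.apis.models import Model`
# work; Ruff's F403 warning for it is configured once in pyproject.toml rather
# than suppressed line by line.
from .models import *

# Explicit re-exports such as the (hypothetical) line below used to carry a
# `# noqa: F401` marker; with the star import covering them, that suppression
# is unnecessary and can be deleted.
# from .models import Model  # noqa: F401
```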
File              Last updated                Last commit
__init__.py       2024-11-23 15:59:00 -08:00  add NVIDIA NIM inference adapter (#355)
config.py         2025-05-01 14:23:50 -07:00  chore: enable pyupgrade fixes (#1806)
models.py         2025-04-17 06:50:40 -07:00  chore: add meta/llama-3.3-70b-instruct as supported nvidia inference provider model (#1985)
NVIDIA.md         2025-04-17 05:54:30 -07:00  docs: Add NVIDIA platform distro docs (#1971)
nvidia.py         2025-06-25 13:07:15 +02:00  chore: remove nested imports
openai_utils.py   2025-05-01 14:23:50 -07:00  chore: enable pyupgrade fixes (#1806)
utils.py          2025-05-01 14:23:50 -07:00  chore: enable pyupgrade fixes (#1806)