llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Sébastien Han ac5fd57387
chore: remove nested imports (#2515)
# What does this PR do?

* Given that our API packages use `import *` in `__init__.py`, we don't
need to import from `llama_stack.apis.models.models` but can simply import
from `llama_stack.apis.models` (see the sketch after this list). The decision
to use `import *` is debatable and should probably be revisited at some point.

* Remove unneeded Ruff F401 rule
* Consolidate the Ruff F403 rule in the pyproject
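
For illustration, a minimal sketch of the import change, assuming the package's
`__init__.py` does `from .models import *` and that `ModelType` is one of the
re-exported names (both are assumptions for the example, not verified against
the repository):

```python
# Before this PR: consumers reached into the nested module explicitly.
from llama_stack.apis.models.models import ModelType

# After this PR: the package-level import is enough, because
# llama_stack/apis/models/__init__.py does `from .models import *`
# and therefore re-exports the same names.
from llama_stack.apis.models import ModelType
```

The trade-off is the usual one with star imports: shorter import paths for
callers versus a less explicit public API, which is why the decision is noted
above as debatable.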

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-06-26 08:01:05 +05:30
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | add NVIDIA NIM inference adapter (#355) | 2024-11-23 15:59:00 -08:00 |
| `config.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `models.py` | chore: add meta/llama-3.3-70b-instruct as supported nvidia inference provider model (#1985) | 2025-04-17 06:50:40 -07:00 |
| `NVIDIA.md` | docs: Add NVIDIA platform distro docs (#1971) | 2025-04-17 05:54:30 -07:00 |
| `nvidia.py` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `openai_utils.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `utils.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |