llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Yuan Tang 34ab7a3b6c
Fix precommit check after moving to ruff (#927)
The lint check on the main branch is failing. This fixes the lint check after we
moved to ruff in https://github.com/meta-llama/llama-stack/pull/921. We also
need to move to a `ruff.toml` file, as well as fix and ignore some additional
checks.
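
For illustration, a minimal `ruff.toml` of the kind the message refers to might look like the sketch below. The line length and the specific rule codes selected and ignored here are placeholders for this example, not the project's actual configuration:

```toml
# Hypothetical ruff.toml sketch -- values are illustrative, not the project's real settings.
line-length = 100

[lint]
# Enable pycodestyle (E), pyflakes (F), and import-sorting (I) rule families.
select = ["E", "F", "I"]
# Ignore line-too-long; line length is already enforced by the formatter.
ignore = ["E501"]
```

With a `ruff.toml` at the repository root, `ruff check .` picks up these settings automatically.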

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-02-02 06:46:45 -08:00
File             Last commit                                       Date
__init__.py      add NVIDIA NIM inference adapter (#355)           2024-11-23 15:59:00 -08:00
config.py        Fix precommit check after moving to ruff (#927)   2025-02-02 06:46:45 -08:00
nvidia.py        Fix precommit check after moving to ruff (#927)   2025-02-02 06:46:45 -08:00
openai_utils.py  Fix precommit check after moving to ruff (#927)   2025-02-02 06:46:45 -08:00
utils.py         add NVIDIA NIM inference adapter (#355)           2024-11-23 15:59:00 -08:00