llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Latest commit 639bc912d5 by Matthew Farrellee (2025-07-21 07:27:27 -04:00):
chore: create OpenAIMixin for inference providers with an OpenAI-compat API that need to implement openai_* methods.
Usage is demonstrated by refactoring OpenAIInferenceAdapter, NVIDIAInferenceAdapter (which adds embedding support), and LlamaCompatInferenceAdapter.
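The commit above describes a mixin pattern: providers that already talk to an OpenAI-compatible endpoint inherit their openai_* methods instead of reimplementing them, and each provider only declares how to reach its endpoint. The sketch below is illustrative only; the class and hook names (OpenAICompatMixin, get_base_url, get_api_key) are assumptions made for this example, not the actual llama-stack OpenAIMixin interface.

```python
# Illustrative sketch of the mixin idea, not the llama-stack implementation.
from abc import ABC, abstractmethod

from openai import AsyncOpenAI


class OpenAICompatMixin(ABC):
    """Shared openai_* behavior for providers exposing an OpenAI-compatible API."""

    @abstractmethod
    def get_base_url(self) -> str:
        """Return the provider's OpenAI-compatible endpoint (hypothetical hook)."""

    @abstractmethod
    def get_api_key(self) -> str:
        """Return the credential for that endpoint (hypothetical hook)."""

    @property
    def _client(self) -> AsyncOpenAI:
        # One place builds the client; subclasses only supply endpoint details.
        return AsyncOpenAI(base_url=self.get_base_url(), api_key=self.get_api_key())

    async def openai_chat_completion(self, model: str, messages: list[dict], **kwargs):
        return await self._client.chat.completions.create(
            model=model, messages=messages, **kwargs
        )

    async def openai_embeddings(self, model: str, input: list[str], **kwargs):
        return await self._client.embeddings.create(model=model, input=input, **kwargs)
```

Under this pattern, refactoring an adapter mostly means deleting its hand-rolled openai_* methods and implementing the two small hooks.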
Contents (file, last commit, date):

__init__.py       add NVIDIA NIM inference adapter (#355)    2024-11-23 15:59:00 -08:00
config.py         fix: allow default empty vars for conditionals (#2570)    2025-07-01 14:42:05 +02:00
models.py         ci: test safety with starter (#2628)    2025-07-09 16:53:50 +02:00
NVIDIA.md         docs: Add NVIDIA platform distro docs (#1971)    2025-04-17 05:54:30 -07:00
nvidia.py         chore: create OpenAIMixin for inference providers with an OpenAI-compat API that need to implement openai_* methods    2025-07-21 07:27:27 -04:00
openai_utils.py   chore: enable pyupgrade fixes (#1806)    2025-05-01 14:23:50 -07:00
utils.py          chore: enable pyupgrade fixes (#1806)    2025-05-01 14:23:50 -07:00
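To show how the pieces in this directory fit together, here is a standalone wiring sketch: a config object (the real schema lives in config.py) and an adapter that sends OpenAI-compat calls, including the embeddings path the latest commit adds to the NVIDIA adapter. Every name, field, environment variable, and default URL below is an assumption for illustration, not the code in nvidia.py.

```python
# Hypothetical wiring sketch; not the config.py schema or the nvidia.py adapter.
import os
from dataclasses import dataclass

from openai import AsyncOpenAI


@dataclass
class NVIDIAConfigSketch:
    # NIM endpoints speak the OpenAI wire format, so a plain OpenAI client
    # pointed at the NIM URL covers chat and embeddings. Env var names and the
    # default endpoint are assumptions for this example.
    url: str = os.environ.get("NVIDIA_BASE_URL", "https://integrate.api.nvidia.com/v1")
    api_key: str = os.environ.get("NVIDIA_API_KEY", "")


class NVIDIAAdapterSketch:
    """Adapter-shaped wrapper: all OpenAI-compat calls go through one client."""

    def __init__(self, config: NVIDIAConfigSketch):
        self._client = AsyncOpenAI(base_url=config.url, api_key=config.api_key)

    async def openai_embeddings(self, model: str, texts: list[str]):
        # Embedding support is what the latest commit adds to the NVIDIA adapter.
        return await self._client.embeddings.create(model=model, input=texts)
```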