llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Last updated: 2025-10-07 16:26:42 -07:00
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| `config.py` | chore: use remoteinferenceproviderconfig for remote inference providers (#3668) | 2025-10-03 08:48:42 -07:00 |
| `NVIDIA.md` | chore: unpublish /inference/chat-completion (#3609) | 2025-09-30 11:00:42 -07:00 |
| `nvidia.py` | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| `utils.py` | chore: remove dead code | 2025-10-07 16:26:42 -07:00 |