llama-stack-mirror/llama_stack/providers/remote/inference
Jiayi Ni b72169ca47
docs: update the docs for NVIDIA Inference provider (#3227)
# What does this PR do?
- Documentation update and fix for the NVIDIA Inference provider.
- Update `run_moderation` in the safety API with a `NotImplementedError` placeholder; otherwise, initializing the NVIDIA inference client raises an error. A sketch of such a placeholder is shown below.

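For illustration only, here is a minimal sketch of what such a placeholder might look like, assuming an async `run_moderation` method on the NVIDIA safety adapter; the class name and signature are illustrative assumptions, not the exact llama-stack definitions:

```python
# Hypothetical sketch of a NotImplementedError placeholder for run_moderation.
# The class name and method signature are illustrative assumptions, not the
# exact llama-stack API.
class NVIDIASafetyAdapter:
    async def run_moderation(self, input: str, model: str):
        # Defining the method keeps provider initialization from failing,
        # while clearly signalling that moderation is not yet supported.
        raise NotImplementedError("run_moderation is not implemented for the NVIDIA provider")
```
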
## Test Plan
N/A
2025-08-21 15:59:39 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| anthropic | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| bedrock | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| cerebras | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| databricks | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| fireworks | chore(tests): fix responses and vector_io tests (#3119) | 2025-08-12 16:15:53 -07:00 |
| gemini | feat: Flash-Lite 2.0 and 2.5 models added to Gemini inference provider (#3058) | 2025-08-08 13:48:15 -07:00 |
| groq | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| llama_openai_compat | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| nvidia | docs: update the docs for NVIDIA Inference provider (#3227) | 2025-08-21 15:59:39 -07:00 |
| ollama | refactor: standardize InferenceRouter model handling (#2965) | 2025-08-12 04:20:39 -06:00 |
| openai | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| passthrough | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| runpod | ci: test safety with starter (#2628) | 2025-07-09 16:53:50 +02:00 |
| sambanova | fix: sambanova inference provider (#2996) | 2025-08-01 09:09:14 -07:00 |
| tgi | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| together | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| vertexai | feat: Add Google Vertex AI inference provider support (#2841) | 2025-08-11 08:22:04 -04:00 |
| vllm | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| watsonx | fix: allow default empty vars for conditionals (#2570) | 2025-07-01 14:42:05 +02:00 |
| `__init__.py` | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |