llama-stack-mirror/llama_stack/providers/remote

Latest commit 165b8b07f4 by Jiayi Ni (2025-10-20 09:51:43 -07:00):
docs: Documentation update for NVIDIA Inference Provider (#3840)
# What does this PR do?
- Fix the examples in the NVIDIA inference documentation so they align with current API requirements (an illustrative usage sketch follows below).
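As context for the docs change, the sketch below shows the general shape of an OpenAI-compatible chat completion request against a Llama Stack server whose inference is backed by the NVIDIA provider. The endpoint URL, API-key handling, and model identifier are illustrative assumptions, not values taken from this PR; consult the NVIDIA inference provider documentation for the exact configuration.

```python
# Hedged sketch (not from this PR): an OpenAI-compatible chat completion request
# routed through a Llama Stack server that uses the NVIDIA inference provider.
# The base_url, api_key placeholder, and model ID are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1",  # assumed Llama Stack server endpoint
    api_key="unused-placeholder",         # assumed: the NVIDIA API key is configured server-side
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # assumed NVIDIA-hosted model identifier
    messages=[{"role": "user", "content": "Summarize the NVIDIA inference provider in one sentence."}],
)
print(response.choices[0].message.content)
```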

## Test Plan
N/A
| Name | Last commit | Date |
|---|---|---|
| agents | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| datasetio | chore(misc): make tests and starter faster (#3042) | 2025-08-05 14:55:05 -07:00 |
| eval | feat: add static embedding metadata to dynamic model listings for providers using OpenAIMixin (#3547) | 2025-09-25 17:17:00 -04:00 |
| files/s3 | feat(tests): make inference_recorder into api_recorder (include tool_invoke) (#3403) | 2025-10-09 14:27:51 -07:00 |
| inference | docs: Documentation update for NVIDIA Inference Provider (#3840) | 2025-10-20 09:51:43 -07:00 |
| post_training | fix: remove inference.completion from docs (#3589) | 2025-09-29 13:14:41 -07:00 |
| safety | chore!: Safety api refactoring to use OpenAIMessageParam (#3796) | 2025-10-12 08:01:00 -07:00 |
| tool_runtime | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| vector_io | feat: Enable setting a default embedding model in the stack (#3803) | 2025-10-14 18:25:13 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |