diff --git a/docs/source/distributions/self_hosted_distro/nvidia.md b/docs/source/distributions/self_hosted_distro/nvidia.md
index 8679fd9be..ef3ec8213 100644
--- a/docs/source/distributions/self_hosted_distro/nvidia.md
+++ b/docs/source/distributions/self_hosted_distro/nvidia.md
@@ -172,3 +172,6 @@ llama stack run ./run.yaml \
   --env NVIDIA_API_KEY=$NVIDIA_API_KEY \
   --env INFERENCE_MODEL=$INFERENCE_MODEL
 ```
+
+## Example Notebooks
+For examples of how to use the NVIDIA Distribution to run inference, fine-tune, evaluate, and run safety checks on your LLMs, refer to the example notebooks in `docs/notebooks/nvidia`.
diff --git a/llama_stack/templates/nvidia/doc_template.md b/llama_stack/templates/nvidia/doc_template.md
index 068dd7ac3..3fc73ee5e 100644
--- a/llama_stack/templates/nvidia/doc_template.md
+++ b/llama_stack/templates/nvidia/doc_template.md
@@ -144,3 +144,6 @@ llama stack run ./run.yaml \
   --env NVIDIA_API_KEY=$NVIDIA_API_KEY \
   --env INFERENCE_MODEL=$INFERENCE_MODEL
 ```
+
+## Example Notebooks
+For examples of how to use the NVIDIA Distribution to run inference, fine-tune, evaluate, and run safety checks on your LLMs, refer to the example notebooks in `docs/notebooks/nvidia`.
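
The notebooks in `docs/notebooks/nvidia` cover the full workflows (inference, fine-tuning, evaluation, safety). As a quick orientation, below is a minimal inference sketch, not part of the patch, assuming the stack was started with the `llama stack run ./run.yaml` command shown in the hunk context, is listening on the default local port 8321, and that the `llama-stack-client` Python package is installed; the model id is illustrative.

```python
# Minimal sketch: call the locally running NVIDIA distribution for chat inference.
# Assumptions: server at http://localhost:8321 (default port), llama-stack-client
# installed, and the model id below registered with the stack (illustrative value).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
)
print(response.completion_message.content)
```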