diff --git a/docs/source/building_applications/index.md b/docs/source/building_applications/index.md
index 55485ddbc..45dca5a1c 100644
--- a/docs/source/building_applications/index.md
+++ b/docs/source/building_applications/index.md
@@ -4,7 +4,7 @@ Llama Stack provides all the building blocks needed to create sophisticated AI a
 
 The best way to get started is to look at this notebook which walks through the various APIs (from basic inference, to RAG agents) and how to use them.
 
-**Notebook**: [Building AI Applications](docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb)
+**Notebook**: [Building AI Applications](https://github.com/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb)
 
 Here are some key topics that will help you build effective agents:
diff --git a/docs/source/building_applications/tools.md b/docs/source/building_applications/tools.md
index 81b4ab68e..c4229b64d 100644
--- a/docs/source/building_applications/tools.md
+++ b/docs/source/building_applications/tools.md
@@ -142,7 +142,7 @@ config = AgentConfig(
 )
 ```
 
-Refer to [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/e2e_loop_with_custom_tools.py) for an example of how to use client provided tools.
+Refer to [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/e2e_loop_with_client_tools.py) for an example of how to use client provided tools.
 
 ## Tool Structure
diff --git a/docs/source/distributions/selection.md b/docs/source/distributions/selection.md
index 08c3e985a..aaaf246ee 100644
--- a/docs/source/distributions/selection.md
+++ b/docs/source/distributions/selection.md
@@ -16,7 +16,7 @@ Which templates / distributions to choose depends on the hardware you have for r
   - {dockerhub}`distribution-tgi` ([Guide](self_hosted_distro/tgi))
   - {dockerhub}`distribution-nvidia` ([Guide](self_hosted_distro/nvidia))
 
-- **Are you running on a "regular" desktop or laptop ?** We suggest using the ollama templte for quick prototyping and get started without having to worry about needing GPUs.
+- **Are you running on a "regular" desktop or laptop ?** We suggest using the ollama template for quick prototyping and get started without having to worry about needing GPUs.
   - {dockerhub}`distribution-ollama` ([link](self_hosted_distro/ollama))
 
 - **Do you have an API key for a remote inference provider like Fireworks, Together, etc.?** If so, we suggest:
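For reviewers unfamiliar with the `e2e_loop_with_client_tools.py` example the tools.md hunk now points at: "client provided tools" are local Python functions the client registers, advertises to the agent alongside `AgentConfig`, and executes locally when the model emits a tool call. The sketch below illustrates that pattern generically; every name in it (`ToolRegistry`, `get_boiling_point`, the call/schema dicts) is illustrative and is not the llama-stack-apps or llama-stack-client API.

```python
# Generic sketch of the client-provided-tools pattern (assumed names,
# not the real llama-stack API): register local callables, expose their
# schemas to the agent, dispatch tool calls locally.
import inspect
import json
from typing import Any, Callable, Dict, List


class ToolRegistry:
    """Maps tool names to local callables and their parameter lists."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, fn: Callable[..., Any]) -> Callable[..., Any]:
        # Usable as a decorator; the function name becomes the tool name.
        self._tools[fn.__name__] = fn
        return fn

    def definitions(self) -> List[dict]:
        # Tool schemas the client would hand to the agent's config.
        return [
            {
                "tool_name": name,
                "parameters": list(inspect.signature(fn).parameters),
            }
            for name, fn in self._tools.items()
        ]

    def dispatch(self, call: Dict[str, Any]) -> str:
        # Execute a tool call requested by the model, locally on the client.
        fn = self._tools[call["tool_name"]]
        return json.dumps(fn(**call["arguments"]))


registry = ToolRegistry()


@registry.register
def get_boiling_point(liquid_name: str) -> dict:
    # Toy local tool; a real client tool could query any local resource.
    known = {"water": 100}
    return {"liquid": liquid_name, "celsius": known.get(liquid_name)}


result = registry.dispatch(
    {"tool_name": "get_boiling_point", "arguments": {"liquid_name": "water"}}
)
print(result)
```

The key design point the renamed example demonstrates is that the tool-execution loop runs on the client, so tools can reach resources the server never sees.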