Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-08 13:00:52 +00:00)
docs: fix broken links (#3540)
# What does this PR do?

Fixes broken links and Docusaurus search.

Closes #3518

## Test Plan

The following should produce a clean build with no warnings and search enabled:

```
npm install
npm run gen-api-docs all
npm run build
npm run serve
```
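Beyond the manual build in the Test Plan, the class of breakage fixed here can be scanned for mechanically. A minimal sketch (illustrative only, not part of this PR; it assumes the broken targets are the absolute `/docs/...` routes that the hunks below replace with relative paths):

```python
import re

# Matches markdown links of the form [text](target)
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)\s]+)\)")

def find_absolute_doc_links(markdown: str) -> list[str]:
    """Return link targets that use absolute /docs/ routes,
    the style this PR rewrites to relative paths."""
    return [t for t in LINK_RE.findall(markdown) if t.startswith("/docs/")]

sample = (
    "- **[Evaluation Reference](/docs/references/evals-reference)**\n"
    "- **[Tools Integration](./tools)**\n"
)
print(find_absolute_doc_links(sample))  # ['/docs/references/evals-reference']
```

Running something like this over `docs/**/*.mdx` would flag the same lines this PR touches; relative targets (`./tools`, `../references/...`) pass through untouched.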
Parent: 8537ada11b
Commit: 6101c8e015

52 changed files with 188 additions and 981 deletions
```diff
@@ -8,7 +8,7 @@ sidebar_position: 7
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-This guide walks you through the process of evaluating an LLM application built using Llama Stack. For detailed API reference, check out the [Evaluation Reference](/docs/references/evals-reference) guide that covers the complete set of APIs and developer experience flow.
+This guide walks you through the process of evaluating an LLM application built using Llama Stack. For detailed API reference, check out the [Evaluation Reference](../references/evals_reference/) guide that covers the complete set of APIs and developer experience flow.
 
 :::tip[Interactive Examples]
 Check out our [Colab notebook](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing) for working examples with evaluations, or try the [Getting Started notebook](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb).
```
```diff
@@ -251,6 +251,6 @@ results = client.scoring.score(
 
 - **[Agents](./agent)** - Building agents for evaluation
 - **[Tools Integration](./tools)** - Using tools in evaluated agents
-- **[Evaluation Reference](/docs/references/evals-reference)** - Complete API reference for evaluations
+- **[Evaluation Reference](../references/evals_reference/)** - Complete API reference for evaluations
 - **[Getting Started Notebook](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)** - Interactive examples
 - **[Evaluation Examples](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing)** - Additional evaluation scenarios
```
```diff
@@ -20,23 +20,23 @@ The best way to get started is to look at this comprehensive notebook which walk
 Here are the key topics that will help you build effective AI applications:
 
 ### 🤖 **Agent Development**
-- **[Agent Framework](./agent)** - Understand the components and design patterns of the Llama Stack agent framework
-- **[Agent Execution Loop](./agent_execution_loop)** - How agents process information, make decisions, and execute actions
-- **[Agents vs Responses API](./responses_vs_agents)** - Learn when to use each API for different use cases
+- **[Agent Framework](./agent.mdx)** - Understand the components and design patterns of the Llama Stack agent framework
+- **[Agent Execution Loop](./agent_execution_loop.mdx)** - How agents process information, make decisions, and execute actions
+- **[Agents vs Responses API](./responses_vs_agents.mdx)** - Learn when to use each API for different use cases
 
 ### 📚 **Knowledge Integration**
-- **[RAG (Retrieval-Augmented Generation)](./rag)** - Enhance your agents with external knowledge through retrieval mechanisms
+- **[RAG (Retrieval-Augmented Generation)](./rag.mdx)** - Enhance your agents with external knowledge through retrieval mechanisms
 
 ### 🛠️ **Capabilities & Extensions**
-- **[Tools](./tools)** - Extend your agents' capabilities by integrating with external tools and APIs
+- **[Tools](./tools.mdx)** - Extend your agents' capabilities by integrating with external tools and APIs
 
 ### 📊 **Quality & Monitoring**
-- **[Evaluations](./evals)** - Evaluate your agents' effectiveness and identify areas for improvement
-- **[Telemetry](./telemetry)** - Monitor and analyze your agents' performance and behavior
-- **[Safety](./safety)** - Implement guardrails and safety measures to ensure responsible AI behavior
+- **[Evaluations](./evals.mdx)** - Evaluate your agents' effectiveness and identify areas for improvement
+- **[Telemetry](./telemetry.mdx)** - Monitor and analyze your agents' performance and behavior
+- **[Safety](./safety.mdx)** - Implement guardrails and safety measures to ensure responsible AI behavior
 
 ### 🎮 **Interactive Development**
-- **[Playground](./playground)** - Interactive environment for testing and developing applications
+- **[Playground](./playground.mdx)** - Interactive environment for testing and developing applications
 
 ## Application Patterns
 
```
```diff
@@ -77,7 +77,7 @@ Build production-ready systems with:
 
 ## Related Resources
 
-- **[Getting Started](/docs/getting-started/)** - Basic setup and concepts
+- **[Getting Started](/docs/getting_started/quickstart)** - Basic setup and concepts
 - **[Providers](/docs/providers/)** - Available AI service providers
 - **[Distributions](/docs/distributions/)** - Pre-configured deployment packages
-- **[API Reference](/docs/api/)** - Complete API documentation
+- **[API Reference](/docs/api/llama-stack-specification)** - Complete API documentation
```
```diff
@@ -291,9 +291,9 @@ llama stack run meta-reference
 
 ## Related Resources
 
-- **[Getting Started Guide](/docs/getting-started)** - Complete setup and introduction
+- **[Getting Started Guide](../getting_started/quickstart)** - Complete setup and introduction
 - **[Core Concepts](/docs/concepts)** - Understanding Llama Stack fundamentals
 - **[Agents](./agent)** - Building intelligent agents
 - **[RAG (Retrieval Augmented Generation)](./rag)** - Knowledge-enhanced applications
 - **[Evaluations](./evals)** - Comprehensive evaluation framework
-- **[API Reference](/docs/api-reference)** - Complete API documentation
+- **[API Reference](/docs/api/llama-stack-specification)** - Complete API documentation
```
```diff
@@ -13,7 +13,7 @@ import TabItem from '@theme/TabItem';
 Llama Stack (LLS) provides two different APIs for building AI applications with tool calling capabilities: the **Agents API** and the **OpenAI Responses API**. While both enable AI systems to use tools, and maintain full conversation history, they serve different use cases and have distinct characteristics.
 
 :::note
-**Note:** For simple and basic inferencing, you may want to use the [Chat Completions API](/docs/providers/openai-compatibility#chat-completions) directly, before progressing to Agents or Responses API.
+**Note:** For simple and basic inferencing, you may want to use the [Chat Completions API](../providers/openai#chat-completions) directly, before progressing to Agents or Responses API.
 :::
 
 ## Overview
```
```diff
@@ -217,5 +217,5 @@ Use this framework to choose the right API for your use case:
 - **[Agents](./agent)** - Understanding the Agents API fundamentals
 - **[Agent Execution Loop](./agent_execution_loop)** - How agents process turns and steps
 - **[Tools Integration](./tools)** - Adding capabilities to both APIs
-- **[OpenAI Compatibility](/docs/providers/openai-compatibility)** - Using OpenAI-compatible endpoints
+- **[OpenAI Compatibility](../providers/openai)** - Using OpenAI-compatible endpoints
 - **[Safety Guardrails](./safety)** - Implementing safety measures in agents
```
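Most hunks in this PR follow the same shape: an absolute or stale route becomes a target that resolves in the current Docusaurus build. As a summary of the convention (example drawn from the first hunk):

```md
<!-- before: absolute route reported as broken by the build -->
[Evaluation Reference](/docs/references/evals-reference)

<!-- after: path relative to the current doc -->
[Evaluation Reference](../references/evals_reference/)
```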