Mirror of https://github.com/meta-llama/llama-stack.git
Synced 2025-08-12 04:50:39 +00:00
docs: part 1 - fix warnings in documentation generation (#2861)
**Description**

This PR removes some of the warnings emitted when uv builds the docs:

- Errors appear when generating docs about `.md` files not appearing in a toctree. ~~Adding content to the `providers-gen.py` file that adds `--- orphan: true ---` to each file.~~ Added a toctree generator to the `providers-gen.py` file; this gets rid of the errors in the builds.
- Deletes the `_openai_compat` files, an extension of PR #2849.
- Adds the `files` APIs section to the `providers` toctree on the index page.
- Manually adds the `--- orphan: true ---` front matter to the advanced APIs. I'll try to find a way to modify the provider code gen so it adds this automatically, but this fixes the errors for now.
- Adds `testing.md` to the `contributing` toctree.
- Adds `starting_llama_stack_server.md` to the `distributions` toctree.

There are some other warnings I'm still looking at, but this PR gets rid of most of the toctree errors. There's also an issue with the actual distribution codegen that I can investigate in another PR; opened a bug for it here: #2873.
parent 38d5c44354
commit 026caa5551
27 changed files with 210 additions and 230 deletions
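
Before the hunks themselves, here is a minimal sketch of the two generator-side fixes described above. This is a hypothetical illustration, not the actual `providers-gen.py` code: the helper names are invented, but the output layout mirrors the generated index pages in this diff.

```python
from pathlib import Path

# Hypothetical helpers, not the actual providers-gen.py code.

ORPHAN_HEADER = "---\norphan: true\n---\n\n"

def add_orphan_front_matter(md_file: Path) -> None:
    """Prepend MyST 'orphan' front matter so Sphinx stops warning that
    the page is missing from every toctree."""
    text = md_file.read_text()
    if not text.startswith("---"):
        md_file.write_text(ORPHAN_HEADER + text)

def render_api_index(api: str, provider_pages: list[str]) -> str:
    """Render an API index page that reaches its provider pages through
    a {toctree} block instead of a plain bullet list."""
    entries = "\n".join(provider_pages)
    return (
        f"# {api.title()}\n"
        "\n## Overview\n\n"
        "This section contains documentation for all available providers "
        f"for the **{api}** API.\n"
        "\n## Providers\n\n"
        "```{toctree}\n"  # plain (non-f) string, so the braces stay literal
        ":maxdepth: 1\n\n"
        f"{entries}\n"
        "```\n"
    )

if __name__ == "__main__":
    print(render_api_index("files", ["inline_localfs"]))
```

The `orphan: true` front matter tells Sphinx not to warn when a page is absent from every toctree, while the generated `{toctree}` block makes each provider page a proper child of its API index, which is what eliminates the remaining warnings.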
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::meta-reference
 
 ## Description

@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # remote::nvidia
 
 ## Description

@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::huggingface
 
 ## Description

@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::torchtune
 
 ## Description

@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # remote::nvidia
 
 ## Description

@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::basic
 
 ## Description

@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::braintrust
 
 ## Description

@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::llm-as-judge
 
 ## Description

@@ -11,4 +11,5 @@ See the [Adding a New API Provider](new_api_provider.md) which describes how to
 :hidden:
 
 new_api_provider
+testing
 ```

@@ -9,6 +9,7 @@ This section provides an overview of the distributions available in Llama Stack.
 list_of_distributions
 building_distro
 customizing_run_yaml
+starting_llama_stack_server
 importing_as_library
 configuration
 ```

@@ -1,5 +1,13 @@
-# Agents Providers
+# Agents
+
+## Overview
 
 This section contains documentation for all available providers for the **agents** API.
 
-- [inline::meta-reference](inline_meta-reference.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_meta-reference
+```

@@ -1,7 +1,15 @@
-# Datasetio Providers
+# Datasetio
+
+## Overview
 
 This section contains documentation for all available providers for the **datasetio** API.
 
-- [inline::localfs](inline_localfs.md)
-- [remote::huggingface](remote_huggingface.md)
-- [remote::nvidia](remote_nvidia.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_localfs
+remote_huggingface
+remote_nvidia
+```

@@ -1,6 +1,14 @@
-# Eval Providers
+# Eval
+
+## Overview
 
 This section contains documentation for all available providers for the **eval** API.
 
-- [inline::meta-reference](inline_meta-reference.md)
-- [remote::nvidia](remote_nvidia.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_meta-reference
+remote_nvidia
+```

@@ -1,5 +1,13 @@
-# Files Providers
+# Files
+
+## Overview
 
 This section contains documentation for all available providers for the **files** API.
 
-- [inline::localfs](inline_localfs.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_localfs
+```

@@ -1,4 +1,4 @@
-# API Providers Overview
+# API Providers
 
 The goal of Llama Stack is to build an ecosystem where users can easily swap out different implementations for the same API. Examples for these include:
 - LLM inference providers (e.g., Meta Reference, Ollama, Fireworks, Together, AWS Bedrock, Groq, Cerebras, SambaNova, vLLM, OpenAI, Anthropic, Gemini, WatsonX, etc.),

@@ -12,81 +12,17 @@ Providers come in two flavors:
 
 Importantly, Llama Stack always strives to provide at least one fully inline provider for each API so you can iterate on a fully featured environment locally.
 
-## External Providers
-Llama Stack supports external providers that live outside of the main codebase. This allows you to create and maintain your own providers independently.
-
-```{toctree}
-:maxdepth: 1
-
-external.md
-```
-
-```{include} openai.md
-:start-after: ## OpenAI API Compatibility
-```
-
-## Inference
-Runs inference with an LLM.
-
 ```{toctree}
 :maxdepth: 1
 
+external
+openai
 inference/index
-```
-
-## Agents
-Run multi-step agentic workflows with LLMs with tool usage, memory (RAG), etc.
-
-```{toctree}
-:maxdepth: 1
-
 agents/index
-```
-
-## DatasetIO
-Interfaces with datasets and data loaders.
-
-```{toctree}
-:maxdepth: 1
-
 datasetio/index
-```
-
-## Safety
-Applies safety policies to the output at a Systems (not only model) level.
-
-```{toctree}
-:maxdepth: 1
-
 safety/index
-```
-
-## Telemetry
-Collects telemetry data from the system.
-
-```{toctree}
-:maxdepth: 1
-
 telemetry/index
-```
-
-## Vector IO
-
-Vector IO refers to operations on vector databases, such as adding documents, searching, and deleting documents.
-Vector IO plays a crucial role in [Retreival Augmented Generation (RAG)](../..//building_applications/rag), where the vector
-io and database are used to store and retrieve documents for retrieval.
-
-```{toctree}
-:maxdepth: 1
-
 vector_io/index
-```
-
-## Tool Runtime
-Is associated with the ToolGroup resources.
-
-```{toctree}
-:maxdepth: 1
-
 tool_runtime/index
-```
+files/index
+```

@@ -1,26 +1,34 @@
-# Inference Providers
+# Inference
+
+## Overview
 
 This section contains documentation for all available providers for the **inference** API.
 
-- [inline::meta-reference](inline_meta-reference.md)
-- [inline::sentence-transformers](inline_sentence-transformers.md)
-- [remote::anthropic](remote_anthropic.md)
-- [remote::bedrock](remote_bedrock.md)
-- [remote::cerebras](remote_cerebras.md)
-- [remote::databricks](remote_databricks.md)
-- [remote::fireworks](remote_fireworks.md)
-- [remote::gemini](remote_gemini.md)
-- [remote::groq](remote_groq.md)
-- [remote::hf::endpoint](remote_hf_endpoint.md)
-- [remote::hf::serverless](remote_hf_serverless.md)
-- [remote::llama-openai-compat](remote_llama-openai-compat.md)
-- [remote::nvidia](remote_nvidia.md)
-- [remote::ollama](remote_ollama.md)
-- [remote::openai](remote_openai.md)
-- [remote::passthrough](remote_passthrough.md)
-- [remote::runpod](remote_runpod.md)
-- [remote::sambanova](remote_sambanova.md)
-- [remote::tgi](remote_tgi.md)
-- [remote::together](remote_together.md)
-- [remote::vllm](remote_vllm.md)
-- [remote::watsonx](remote_watsonx.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_meta-reference
+inline_sentence-transformers
+remote_anthropic
+remote_bedrock
+remote_cerebras
+remote_databricks
+remote_fireworks
+remote_gemini
+remote_groq
+remote_hf_endpoint
+remote_hf_serverless
+remote_llama-openai-compat
+remote_nvidia
+remote_ollama
+remote_openai
+remote_passthrough
+remote_runpod
+remote_sambanova
+remote_tgi
+remote_together
+remote_vllm
+remote_watsonx
+```

@@ -1,21 +0,0 @@
-# remote::cerebras-openai-compat
-
-## Description
-
-Cerebras OpenAI-compatible provider for using Cerebras models with OpenAI API format.
-
-## Configuration
-
-| Field | Type | Required | Default | Description |
-|-------|------|----------|---------|-------------|
-| `api_key` | `str \| None` | No | | The Cerebras API key |
-| `openai_compat_api_base` | `<class 'str'>` | No | https://api.cerebras.ai/v1 | The URL for the Cerebras API server |
-
-## Sample Configuration
-
-```yaml
-openai_compat_api_base: https://api.cerebras.ai/v1
-api_key: ${env.CEREBRAS_API_KEY}
-
-```
-

@@ -1,21 +0,0 @@
-# remote::fireworks-openai-compat
-
-## Description
-
-Fireworks AI OpenAI-compatible provider for using Fireworks models with OpenAI API format.
-
-## Configuration
-
-| Field | Type | Required | Default | Description |
-|-------|------|----------|---------|-------------|
-| `api_key` | `str \| None` | No | | The Fireworks API key |
-| `openai_compat_api_base` | `<class 'str'>` | No | https://api.fireworks.ai/inference/v1 | The URL for the Fireworks API server |
-
-## Sample Configuration
-
-```yaml
-openai_compat_api_base: https://api.fireworks.ai/inference/v1
-api_key: ${env.FIREWORKS_API_KEY}
-
-```
-

@@ -1,21 +0,0 @@
-# remote::groq-openai-compat
-
-## Description
-
-Groq OpenAI-compatible provider for using Groq models with OpenAI API format.
-
-## Configuration
-
-| Field | Type | Required | Default | Description |
-|-------|------|----------|---------|-------------|
-| `api_key` | `str \| None` | No | | The Groq API key |
-| `openai_compat_api_base` | `<class 'str'>` | No | https://api.groq.com/openai/v1 | The URL for the Groq API server |
-
-## Sample Configuration
-
-```yaml
-openai_compat_api_base: https://api.groq.com/openai/v1
-api_key: ${env.GROQ_API_KEY}
-
-```
-

@@ -1,21 +0,0 @@
-# remote::together-openai-compat
-
-## Description
-
-Together AI OpenAI-compatible provider for using Together models with OpenAI API format.
-
-## Configuration
-
-| Field | Type | Required | Default | Description |
-|-------|------|----------|---------|-------------|
-| `api_key` | `str \| None` | No | | The Together API key |
-| `openai_compat_api_base` | `<class 'str'>` | No | https://api.together.xyz/v1 | The URL for the Together API server |
-
-## Sample Configuration
-
-```yaml
-openai_compat_api_base: https://api.together.xyz/v1
-api_key: ${env.TOGETHER_API_KEY}
-
-```
-

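The four deleted `_openai_compat` pages above all use the same `${env.VAR}` placeholder convention in their sample configurations. As an aside, here is a minimal sketch of how such placeholders can be expanded before the YAML is parsed; this is illustrative only, not llama-stack's actual resolver:

```python
import os
import re

# Illustrative only: not llama-stack's actual config machinery.
_ENV_PATTERN = re.compile(r"\$\{env\.([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env_placeholders(raw: str) -> str:
    """Replace each ${env.NAME} occurrence with the value of the NAME
    environment variable, failing loudly when it is unset."""

    def _lookup(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]

    return _ENV_PATTERN.sub(_lookup, raw)

# Example with the Together sample configuration shown above:
# os.environ["TOGETHER_API_KEY"] = "..."
# print(resolve_env_placeholders("api_key: ${env.TOGETHER_API_KEY}"))
```
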
@@ -1,7 +1,15 @@
-# Post_Training Providers
+# Post_Training
+
+## Overview
 
 This section contains documentation for all available providers for the **post_training** API.
 
-- [inline::huggingface](inline_huggingface.md)
-- [inline::torchtune](inline_torchtune.md)
-- [remote::nvidia](remote_nvidia.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_huggingface
+inline_torchtune
+remote_nvidia
+```

@@ -1,10 +1,18 @@
-# Safety Providers
+# Safety
+
+## Overview
 
 This section contains documentation for all available providers for the **safety** API.
 
-- [inline::code-scanner](inline_code-scanner.md)
-- [inline::llama-guard](inline_llama-guard.md)
-- [inline::prompt-guard](inline_prompt-guard.md)
-- [remote::bedrock](remote_bedrock.md)
-- [remote::nvidia](remote_nvidia.md)
-- [remote::sambanova](remote_sambanova.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_code-scanner
+inline_llama-guard
+inline_prompt-guard
+remote_bedrock
+remote_nvidia
+remote_sambanova
+```

@@ -1,7 +1,15 @@
-# Scoring Providers
+# Scoring
+
+## Overview
 
 This section contains documentation for all available providers for the **scoring** API.
 
-- [inline::basic](inline_basic.md)
-- [inline::braintrust](inline_braintrust.md)
-- [inline::llm-as-judge](inline_llm-as-judge.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_basic
+inline_braintrust
+inline_llm-as-judge
+```

@@ -1,5 +1,13 @@
-# Telemetry Providers
+# Telemetry
+
+## Overview
 
 This section contains documentation for all available providers for the **telemetry** API.
 
-- [inline::meta-reference](inline_meta-reference.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_meta-reference
+```

@@ -1,10 +1,18 @@
-# Tool_Runtime Providers
+# Tool_Runtime
+
+## Overview
 
 This section contains documentation for all available providers for the **tool_runtime** API.
 
-- [inline::rag-runtime](inline_rag-runtime.md)
-- [remote::bing-search](remote_bing-search.md)
-- [remote::brave-search](remote_brave-search.md)
-- [remote::model-context-protocol](remote_model-context-protocol.md)
-- [remote::tavily-search](remote_tavily-search.md)
-- [remote::wolfram-alpha](remote_wolfram-alpha.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_rag-runtime
+remote_bing-search
+remote_brave-search
+remote_model-context-protocol
+remote_tavily-search
+remote_wolfram-alpha
+```

@@ -1,16 +1,24 @@
-# Vector_Io Providers
+# Vector_Io
+
+## Overview
 
 This section contains documentation for all available providers for the **vector_io** API.
 
-- [inline::chromadb](inline_chromadb.md)
-- [inline::faiss](inline_faiss.md)
-- [inline::meta-reference](inline_meta-reference.md)
-- [inline::milvus](inline_milvus.md)
-- [inline::qdrant](inline_qdrant.md)
-- [inline::sqlite-vec](inline_sqlite-vec.md)
-- [inline::sqlite_vec](inline_sqlite_vec.md)
-- [remote::chromadb](remote_chromadb.md)
-- [remote::milvus](remote_milvus.md)
-- [remote::pgvector](remote_pgvector.md)
-- [remote::qdrant](remote_qdrant.md)
-- [remote::weaviate](remote_weaviate.md)
+## Providers
+
+```{toctree}
+:maxdepth: 1
+
+inline_chromadb
+inline_faiss
+inline_meta-reference
+inline_milvus
+inline_qdrant
+inline_sqlite-vec
+inline_sqlite_vec
+remote_chromadb
+remote_milvus
+remote_pgvector
+remote_qdrant
+remote_weaviate
+```