docs: auto generated documentation for providers (#2543)
# What does this PR do?
A simple approach to get some provider pages into the docs. Add or update `description` fields in the provider configuration classes using Pydantic's `Field`, making sure the descriptions are clear and complete, since they are used to auto-generate the provider documentation via `./scripts/distro_codegen.py` instead of editing the docs manually.

Signed-off-by: Sébastien Han <seb@redhat.com>
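For context, a minimal sketch of the convention this PR relies on (the class and field names below are illustrative assumptions, not code from this PR): a provider config class documents itself through Pydantic `Field` descriptions, which `./scripts/distro_codegen.py` then renders into the provider pages.

```python
from pydantic import BaseModel, Field


class ExampleInferenceConfig(BaseModel):
    """Hypothetical provider config. The Field(description=...) values
    are what the docs generator reads to build the provider page."""

    url: str = Field(
        default="http://localhost:11434",
        description="URL of the inference server to connect to.",
    )
    timeout: int = Field(
        default=60,
        description="Request timeout in seconds for inference calls.",
    )
```

After updating the descriptions, re-running `./scripts/distro_codegen.py` regenerates the provider docs rather than requiring manual edits.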
parent 8d8e90d78e
commit c9a49a80e8
96 changed files with 2562 additions and 65 deletions
@@ -6,7 +6,7 @@ Llama Stack is a stateful service with REST APIs to support the seamless transit
 environments. You can build and test using a local server first and deploy to a hosted endpoint for production.
 
 In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/)
-as the inference [provider](../providers/index.md#inference) for a Llama Model.
+as the inference [provider](../providers/inference/index) for a Llama Model.
 
 #### Step 1: Install and setup
 1. Install [uv](https://docs.astral.sh/uv/)