docs: remove Readthedocs references

Alexey Rybak 2025-09-24 10:44:28 -07:00 committed by raghotham
parent e50de07bef
commit d993ecbea4
7 changed files with 32 additions and 26 deletions


@@ -21,4 +21,3 @@ Llama Stack uses GitHub Actions for Continuous Integration (CI). Below is a tabl
 | Test External API and Providers | [test-external.yml](test-external.yml) | Test the External API and Provider mechanisms |
 | UI Tests | [ui-unit-tests.yml](ui-unit-tests.yml) | Run the UI test suite |
 | Unit Tests | [unit-tests.yml](unit-tests.yml) | Run the unit test suite |
-| Update ReadTheDocs | [update-readthedocs.yml](update-readthedocs.yml) | Update the Llama Stack ReadTheDocs site |


@@ -187,14 +187,16 @@ Note that the provider "description" field will be used to generate the provider
 ### Building the Documentation
-If you are making changes to the documentation at [https://llamastack.github.io/latest/](https://llamastack.github.io/latest/), you can use the following command to build the documentation and preview your changes. You will need [Sphinx](https://www.sphinx-doc.org/en/master/) and the readthedocs theme.
+If you are making changes to the documentation at [https://llamastack.github.io/](https://llamastack.github.io/), you can use the following command to build the documentation and preview your changes.
 ```bash
-# This rebuilds the documentation pages.
-uv run --group docs make -C docs/ html
+# This rebuilds the documentation pages and the OpenAPI spec.
+npm install
+npm run gen-api-docs all
+npm run build
-# This will start a local server (usually at http://127.0.0.1:8000) that automatically rebuilds and refreshes when you make changes to the documentation.
-uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all
+# This will start a local server (usually at http://127.0.0.1:3000).
+npm run serve
 ```
 ### Update API Documentation
@@ -205,4 +207,4 @@ If you modify or add new API endpoints, update the API documentation accordingly
 uv run ./docs/openapi_generator/run_openapi_generator.sh
 ```
-The generated API documentation will be available in `docs/_static/`. Make sure to review the changes before committing.
+The generated API schema will be available in `docs/static/`. Make sure to review the changes before committing.
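The npm-based flow introduced in this hunk can be sketched as a small wrapper. `docs_preview` is a hypothetical helper name, and the npm scripts (`gen-api-docs`, `build`, `serve`) and the port are assumed from the hunk above rather than independently verified:

```shell
#!/bin/sh
# Hypothetical wrapper around the npm commands shown in the diff above.
# Assumes Node.js/npm and a llama-stack checkout are available.
docs_preview() {
    command -v npm >/dev/null 2>&1 || { echo "npm not found; install Node.js first" >&2; return 1; }
    cd "$1" || return 1              # expects the repo's docs/ directory
    npm install &&                   # install Docusaurus and plugins
    npm run gen-api-docs all &&      # regenerate API reference pages
    npm run build &&                 # static build; surfaces broken links
    npm run serve                    # serves the build at http://127.0.0.1:3000
}
```

Unlike the old `sphinx-autobuild` loop, `npm run serve` serves a finished build, so edits require re-running `npm run build` to show up.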


@@ -1,14 +1,17 @@
 # Llama Stack Documentation
-Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our [Github page](https://llamastack.github.io/latest/getting_started/index.html).
+Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our [Github page](https://llamastack.github.io/getting_started/quickstart).
 ## Render locally
-From the llama-stack root directory, run the following command to render the docs locally:
+From the llama-stack `docs/` directory, run the following commands to render the docs locally:
 ```bash
-uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all
+npm install
+npm run gen-api-docs all
+npm run build
+npm run serve
 ```
-You can open up the docs in your browser at http://localhost:8000
+You can open up the docs in your browser at http://localhost:3000
 ## Content


@@ -187,14 +187,16 @@ Note that the provider "description" field will be used to generate the provider
 ### Building the Documentation
-If you are making changes to the documentation at [https://llamastack.github.io/latest/](https://llamastack.github.io/latest/), you can use the following command to build the documentation and preview your changes. You will need [Sphinx](https://www.sphinx-doc.org/en/master/) and the readthedocs theme.
+If you are making changes to the documentation at [https://llamastack.github.io/](https://llamastack.github.io/), you can use the following command to build the documentation and preview your changes.
 ```bash
-# This rebuilds the documentation pages.
-uv run --group docs make -C docs/ html
+# This rebuilds the documentation pages and the OpenAPI spec.
+npm install
+npm run gen-api-docs all
+npm run build
-# This will start a local server (usually at http://127.0.0.1:8000) that automatically rebuilds and refreshes when you make changes to the documentation.
-uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all
+# This will start a local server (usually at http://127.0.0.1:3000).
+npm run serve
 ```
 ### Update API Documentation
@@ -205,7 +207,7 @@ If you modify or add new API endpoints, update the API documentation accordingly
 uv run ./docs/openapi_generator/run_openapi_generator.sh
 ```
-The generated API documentation will be available in `docs/_static/`. Make sure to review the changes before committing.
+The generated API schema will be available in `docs/static/`. Make sure to review the changes before committing.
 ## Adding a New Provider


@@ -45,9 +45,9 @@ Llama Stack consists of a server (with multiple pluggable API providers) and Cli
 ## Quick Links
-- Ready to build? Check out the [Getting Started Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) to get started.
+- Ready to build? Check out the [Getting Started Guide](https://llama-stack.github.io/getting_started/quickstart) to get started.
-- Want to contribute? See the [Contributing Guide](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md).
+- Want to contribute? See the [Contributing Guide](https://github.com/llamastack/llama-stack/blob/main/CONTRIBUTING.md).
-- Explore [Example Applications](https://github.com/meta-llama/llama-stack-apps) built with Llama Stack.
+- Explore [Example Applications](https://github.com/llamastack/llama-stack-apps) built with Llama Stack.
 ## Rich Ecosystem Support
@@ -59,13 +59,13 @@ Llama Stack provides adapters for popular providers across all API categories:
 - **Training & Evaluation**: HuggingFace, TorchTune, NVIDIA NEMO
 :::info Provider Details
-For complete provider compatibility and setup instructions, see our [Providers Documentation](https://llama-stack.readthedocs.io/en/latest/providers/index.html).
+For complete provider compatibility and setup instructions, see our [Providers Documentation](https://llamastack.github.io/providers/).
 :::
 ## Get Started Today
 <div style={{display: 'flex', gap: '1rem', flexWrap: 'wrap', margin: '2rem 0'}}>
-<a href="https://llama-stack.readthedocs.io/en/latest/getting_started/index.html"
+<a href="https://llama-stack.github.io/getting_started/quickstart"
 style={{
 background: 'var(--ifm-color-primary)',
 color: 'white',
@@ -76,7 +76,7 @@ For complete provider compatibility and setup instructions, see our [Providers D
 }}>
 🚀 Quick Start Guide
 </a>
-<a href="https://github.com/meta-llama/llama-stack-apps"
+<a href="https://github.com/llamastack/llama-stack-apps"
 style={{
 border: '2px solid var(--ifm-color-primary)',
 color: 'var(--ifm-color-primary)',
@@ -87,7 +87,7 @@ For complete provider compatibility and setup instructions, see our [Providers D
 }}>
 📚 Example Apps
 </a>
-<a href="https://github.com/meta-llama/llama-stack"
+<a href="https://github.com/llamastack/llama-stack"
 style={{
 border: '2px solid #666',
 color: '#666',
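The link rewrites in this file follow a mechanical pattern, so the host migration can be expressed as a sed sketch. The function name is illustrative, and the mapping covers only the host/prefix moves visible in this diff, not per-page renames such as `getting_started/index.html` → `getting_started/quickstart`:

```shell
#!/bin/sh
# Illustrative URL migration mirroring the substitutions in this commit:
# ReadTheDocs URLs and the old /latest/ prefix move to the GitHub Pages host.
migrate_url() {
    printf '%s\n' "$1" | sed \
        -e 's#https://llama-stack\.readthedocs\.io/en/latest/#https://llamastack.github.io/#' \
        -e 's#https://llamastack\.github\.io/latest/#https://llamastack.github.io/#'
}

migrate_url "https://llama-stack.readthedocs.io/en/latest/providers/index.html"
# → https://llamastack.github.io/providers/index.html
```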


@@ -15,7 +15,7 @@
 "\n",
 "[Llama Stack](https://github.com/meta-llama/llama-stack) defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented in the form of interoperable APIs with a broad set of Service Providers providing their implementations.\n",
 "\n",
-"Read more about the project here: https://llamastack.github.io/latest/getting_started/index.html\n",
+"Read more about the project here: https://llamastack.github.io\n",
 "\n",
 "In this guide, we will showcase how you can build LLM-powered agentic applications using Llama Stack.\n",
 "\n",


@@ -14,7 +14,7 @@
 "We will also showcase how to leverage existing Llama stack [inference APIs](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/apis/inference/inference.py) (ollama as provider) to get the new model's output and the [eval APIs](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/apis/eval/eval.py) to help you better measure the new model performance. We hope the flywheel of post-training -> eval -> inference can greatly empower agentic apps development.\n",
 "\n",
 "\n",
-"- Read more about Llama Stack: https://llamastack.github.io/latest/index.html\n",
+"- Read more about Llama Stack: https://llamastack.github.io/\n",
 "- Read more about post training APIs definition: https://github.com/meta-llama/llama-stack/blob/main/llama_stack/apis/post_training/post_training.py\n",
 "\n",
 "\n",