Fix broken links with docs

Ashwin Bharambe 2024-11-22 20:42:17 -08:00
parent 36938b716c
commit 0481fa9540
7 changed files with 63 additions and 13 deletions

docs/contbuild.sh (new file)

@@ -0,0 +1,7 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
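
# Rebuild the docs whenever files under source/ change and serve the result locally.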
sphinx-autobuild --write-all source build/html --watch source/

@@ -12,6 +12,8 @@
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
from docutils import nodes

project = "llama-stack"
copyright = "2024, Meta"
author = "Meta"
@@ -59,6 +61,10 @@ myst_enable_extensions = [
    "tasklist",
]

myst_substitutions = {
    "docker_hub": "https://hub.docker.com/repository/docker/llamastack",
}

# Copy button settings
copybutton_prompt_text = "$ "  # for bash prompts
copybutton_prompt_is_regexp = True
@@ -98,3 +104,26 @@ redoc = [
]

redoc_uri = "https://cdn.redoc.ly/redoc/latest/bundles/redoc.standalone.js"


def setup(app):
    # {dockerhub}`image-name` -> link to that image under the llamastack org on Docker Hub
    def dockerhub_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
        url = f"https://hub.docker.com/r/llamastack/{text}"
        node = nodes.reference(rawtext, text, refuri=url, **options)
        return [node], []

    # {repopath}`link text::path` (or just {repopath}`path`) -> link to that path in the GitHub repo
    def repopath_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
        parts = text.split("::")
        if len(parts) == 2:
            link_text = parts[0]
            url_path = parts[1]
        else:
            link_text = text
            url_path = text

        url = f"https://github.com/meta-llama/llama-stack/tree/main/{url_path}"
        node = nodes.reference(rawtext, link_text, refuri=url, **options)
        return [node], []

    app.add_role("dockerhub", dockerhub_role)
    app.add_role("repopath", repopath_role)

@@ -5,15 +5,15 @@ This guide contains references to walk you through adding a new API provider.

1. First, decide which API your provider falls into (e.g. Inference, Safety, Agents, Memory).
2. Decide whether your provider is a remote provider or an inline implementation. A remote provider is a provider that makes a remote request to a service. An inline provider is a provider whose implementation is executed locally. Check out the examples and follow the structure to add your own API provider. Please find the following code pointers:
   - {repopath}`Remote Providers::llama_stack/providers/remote`
   - {repopath}`Inline Providers::llama_stack/providers/inline`
3. [Build a Llama Stack distribution](https://llama-stack.readthedocs.io/en/latest/distribution_dev/building_distro.html) with your API provider.
4. Test your code!

## Testing your newly added API providers

1. Start with an _integration test_ for your provider. That means we will instantiate the real provider, pass it real configuration, and if it is a remote service, we will actually hit the remote service. We **strongly** discourage mocking for these tests at the provider level. Llama Stack is first and foremost about integration, so we need to make sure stuff works end-to-end. See {repopath}`llama_stack/providers/tests/inference/test_text_inference.py` for an example.
2. In addition, if you want to unit test functionality within your provider, feel free to do so. You can find some tests in `tests/` but they aren't well supported so far.

@@ -5,7 +5,7 @@ We offer both remote and on-device use of Llama Stack in Swift via two components

1. [llama-stack-client-swift](https://github.com/meta-llama/llama-stack-client-swift/)
2. [LocalInferenceImpl](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/inline/ios/inference)

```{image} ../../../_static/remote_or_local.gif
:alt: Seamlessly switching between local, on-device inference and remote hosted inference
:width: 412px
:align: center

@@ -1,13 +1,27 @@
# Self-Hosted Distributions

```{toctree}
:maxdepth: 1
:hidden:
ollama
tgi
remote-vllm
meta-reference-gpu
meta-reference-quantized-gpu
together
fireworks
bedrock
```

We offer deployable distributions where you can host your own Llama Stack server using local inference.

| **Distribution** | **Llama Stack Docker** | Start This Distribution |
|:----------------: |:------------------------------------------: |:-----------------------: |
| Ollama | {dockerhub}`distribution-ollama` | [Guide](ollama) |
| TGI | {dockerhub}`distribution-tgi` | [Guide](tgi) |
| vLLM | {dockerhub}`distribution-remote-vllm` | [Guide](remote-vllm) |
| Meta Reference | {dockerhub}`distribution-meta-reference-gpu` | [Guide](meta-reference-gpu) |
| Meta Reference Quantized | {dockerhub}`distribution-meta-reference-quantized-gpu` | [Guide](meta-reference-quantized-gpu) |
| Together | {dockerhub}`distribution-together` | [Guide](together) |
| Fireworks | {dockerhub}`distribution-fireworks` | [Guide](fireworks) |
| Bedrock | {dockerhub}`distribution-bedrock` | [Guide](bedrock) |

@@ -29,7 +29,7 @@ You have two ways to install Llama Stack:

## `llama` subcommands

1. `download`: The `llama` CLI supports downloading models from Meta or Hugging Face.
2. `model`: Lists available models and their properties.
3. `stack`: Allows you to build and run a Llama Stack server. You can read more about this [here](../../distributions/building_distro).

### Sample Usage
@@ -228,7 +228,7 @@ You can even run `llama model prompt-format` to see all of the templates and their

```
llama model prompt-format -m Llama3.2-3B-Instruct
```

![alt text](../../../resources/prompt-format.png)