Xi Yan 2024-11-01 10:53:57 -07:00
parent 1f0788f7b2
commit d5474a8c47
2 changed files with 4 additions and 4 deletions


@@ -94,7 +94,7 @@ You have two ways to install this repository:
 ## Documentations
-Please checkout our [Docs](https://llama-stack.readthedocs.io/en/latest/index.html) page for more details.
+Please check out our [Documentations](https://llama-stack.readthedocs.io/en/latest/index.html) page for more details.
 * [CLI reference](https://llama-stack.readthedocs.io/en/latest/cli_reference/index.html)
   * Guide using `llama` CLI to work with Llama models (download, study prompts), and building/starting a Llama Stack distribution.


@@ -6,10 +6,10 @@ This guide contains references to walk you through adding a new API provider.
 1. First, decide which API your provider falls into (e.g. Inference, Safety, Agents, Memory).
 2. Decide whether your provider is a remote provider or an inline implementation. A remote provider makes a remote request to an external service; an inline provider's implementation is executed locally. Check out the examples and follow their structure to add your own API provider; see the following code pointers (and the sketch after this diff):
-   - [Inference Remote Adapter](../llama_stack/providers/adapters/inference/)
-   - [Inference Inline Provider](../llama_stack/providers/impls/)
+   - [Inference Remote Adapter](https://github.com/meta-llama/llama-stack/tree/docs/llama_stack/providers/adapters/inference)
+   - [Inference Inline Provider](https://github.com/meta-llama/llama-stack/tree/docs/llama_stack/providers/impls/meta_reference/inference)
-3. [Build a Llama Stack distribution](./building_distro.md) with your API provider.
+3. [Build a Llama Stack distribution](https://llama-stack.readthedocs.io/en/latest/distribution_dev/building_distro.html) with your API provider.
 4. Test your code!
 ### Testing your newly added API providers
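To make the remote-versus-inline distinction in step 2 concrete, here is a minimal sketch. The class names, the `completion` method, and the use of `httpx` are illustrative assumptions, not llama-stack's actual provider interfaces; the adapter and provider directories linked in the diff show the real structure.

```python
# Illustrative sketch only -- these class/method names are hypothetical,
# not the real llama-stack provider interfaces (see the linked code pointers).
import httpx  # assumed HTTP client for the remote case


class RemoteInferenceAdapter:
    """Remote provider: forwards each request to an external service."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    async def completion(self, prompt: str) -> str:
        # The adapter holds no model weights; it only relays the call.
        async with httpx.AsyncClient() as client:
            resp = await client.post(
                f"{self.base_url}/completion", json={"prompt": prompt}
            )
            resp.raise_for_status()
            return resp.json()["text"]


class InlineInferenceProvider:
    """Inline provider: the implementation runs inside the stack process."""

    def __init__(self, model) -> None:
        self.model = model  # e.g. weights loaded locally at startup

    async def completion(self, prompt: str) -> str:
        # No network hop: inference executes in-process.
        return self.model.generate(prompt)
```

The split mostly matters for packaging: a remote adapter only needs a lightweight client dependency, while an inline provider pulls the full inference dependencies into the distribution you build in step 3.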