diff --git a/docs/source/contributing/new_api_provider.md b/docs/source/contributing/new_api_provider.md
index e0a35e946..3fa875c50 100644
--- a/docs/source/contributing/new_api_provider.md
+++ b/docs/source/contributing/new_api_provider.md
@@ -3,7 +3,7 @@
 This guide contains references to walk you through adding a new API provider.
 
 1. First, decide which API your provider falls into (e.g. Inference, Safety, Agents, Memory).
-2. Decide whether your provider is a remote provider, or inline implmentation. A remote provider is a provider that makes a remote request to an service. An inline provider is a provider where implementation is executed locally. Checkout the examples, and follow the structure to add your own API provider. Please find the following code pointers:
+2. Decide whether your provider is a remote provider or an inline implementation. A remote provider is one that makes a remote request to a service; an inline provider is one whose implementation is executed locally. Check out the examples, and follow the structure to add your own API provider. Please refer to the following code pointers:
 
    - {repopath}`Remote Providers::llama_stack/providers/remote`
    - {repopath}`Inline Providers::llama_stack/providers/inline`
@@ -15,7 +15,7 @@
 1. Start with an _integration test_ for your provider. That means we will instantiate the real provider, pass it real configuration and if it is a remote service, we will actually hit the remote service. We **strongly** discourage mocking for these tests at the provider level. Llama Stack is first and foremost about integration so we need to make sure stuff works end-to-end. See {repopath}`llama_stack/providers/tests/inference/test_text_inference.py` for an example.
 
-2. In addition, if you want to unit test functionality within your provider, feel free to do so. You can find some tests in `tests/` but they aren't well supported so far.
+2. In addition, if you want to unit test functionality within your provider, feel free to do so. You can find some tests in `tests/` but they aren't well-supported so far.
 
 3. Test with a client-server Llama Stack setup. (a) Start a Llama Stack server with your own distribution which includes the new provider. (b) Send a client request to the server. See `llama_stack/apis/<api>/client.py` for how this is done. These client scripts can serve as lightweight tests.
diff --git a/docs/source/distributions/configuration.md b/docs/source/distributions/configuration.md
index 6fee67936..41df26618 100644
--- a/docs/source/distributions/configuration.md
+++ b/docs/source/distributions/configuration.md
@@ -1,6 +1,6 @@
 # Configuring a Stack
 
-The Llama Stack runtime configuration is specified as a YAML file. Here is a simplied version of an example configuration file for the Ollama distribution:
+The Llama Stack runtime configuration is specified as a YAML file. Here is a simplified version of an example configuration file for the Ollama distribution:
 
 ```{dropdown} Sample Configuration File
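
The "simplified version of an example configuration file" that the configuration.md hunk introduces is a YAML run config. As a rough sketch of its shape only (the field values, provider ID, and env-var default below are illustrative assumptions, not the file's actual contents, which live in the dropdown that follows in the doc):

```yaml
# Illustrative skeleton of an Ollama run configuration.
version: '2'
image_name: ollama
apis:
- inference
- safety
providers:
  inference:
  - provider_id: ollama
    provider_type: remote::ollama
    config:
      # Assumed default; points at a locally running Ollama server.
      url: ${env.OLLAMA_URL:http://localhost:11434}
```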
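
For the client-server check in step 3 of the testing notes, a minimal client script can serve as the lightweight test the doc mentions. The sketch below is not part of the patch: it assumes the `llama_stack_client` Python SDK, a server already running with a distribution that includes the new provider, and a placeholder port and model ID.

```python
# Hypothetical smoke test for a newly added inference provider.
# The base URL and model_id are placeholders; adjust for your setup.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5001")

# Send a real request through the server to the provider under test.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Reply with a single word."}],
)
print(response.completion_message.content)
```

If the request round-trips and prints a completion, the provider is wired up end-to-end, which is the integration-first behavior the testing section asks for.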