docs!: adjust external provider docs (#3484)

# What does this PR do?

Now that we have consolidated the provider spec types and removed
`AdapterSpec`, adjust external.md to match.

BREAKING CHANGE: external providers must update their
`get_provider_spec` function to construct a `RemoteProviderSpec`
directly instead of wrapping an `AdapterSpec`.
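
A minimal sketch of the migration, mirroring the ramalama example updated
in this diff (the field values are that example's, not a general requirement):

```python
from llama_stack.providers.datatypes import (
    Api,
    ProviderSpec,
    RemoteProviderSpec,  # AdapterSpec and remote_provider_spec are gone
)


def get_provider_spec() -> ProviderSpec:
    # Before: return remote_provider_spec(api=..., adapter=AdapterSpec(...))
    # After: pass the former AdapterSpec fields directly to RemoteProviderSpec.
    return RemoteProviderSpec(
        api=Api.inference,
        adapter_type="ramalama",
        pip_packages=["ramalama>=0.8.5", "pymilvus"],
        config_class="ramalama_stack.config.RamalamaImplConfig",
        module="ramalama_stack",
    )
```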

Signed-off-by: Charlie Doern <cdoern@redhat.com>
Charlie Doern 2025-10-03 09:48:41 -04:00, committed by GitHub
parent d266c59c2a
commit a09e30bd87
2 changed files with 52 additions and 97 deletions


@@ -152,7 +152,6 @@ __all__ = ["WeatherAPI", "available_providers"]
 from typing import Protocol

 from llama_stack.providers.datatypes import (
-    AdapterSpec,
     Api,
     ProviderSpec,
     RemoteProviderSpec,
@@ -166,12 +165,10 @@ def available_providers() -> list[ProviderSpec]:
             api=Api.weather,
             provider_type="remote::kaze",
             config_class="llama_stack_provider_kaze.KazeProviderConfig",
-            adapter=AdapterSpec(
-                adapter_type="kaze",
-                module="llama_stack_provider_kaze",
-                pip_packages=["llama_stack_provider_kaze"],
-                config_class="llama_stack_provider_kaze.KazeProviderConfig",
-            ),
+            adapter_type="kaze",
+            module="llama_stack_provider_kaze",
+            pip_packages=["llama_stack_provider_kaze"],
         ),
     ]
@@ -325,11 +322,10 @@ class WeatherKazeAdapter(WeatherProvider):
 ```yaml
 # ~/.llama/providers.d/remote/weather/kaze.yaml
-adapter:
-  adapter_type: kaze
-  pip_packages: ["llama_stack_provider_kaze"]
-  config_class: llama_stack_provider_kaze.config.KazeProviderConfig
-  module: llama_stack_provider_kaze
+adapter_type: kaze
+pip_packages: ["llama_stack_provider_kaze"]
+config_class: llama_stack_provider_kaze.config.KazeProviderConfig
+module: llama_stack_provider_kaze
 optional_api_dependencies: []
 ```


@@ -11,38 +11,6 @@ an example entry in your build.yaml should look like:
 module: ramalama_stack
 ```
-Additionally you can configure the `external_providers_dir` in your Llama Stack configuration. This method is in the process of being deprecated in favor of the `module` method. If using this method, the external provider directory should contain your external provider specifications:
-
-```yaml
-external_providers_dir: ~/.llama/providers.d/
-```
-
-## Directory Structure
-
-The external providers directory should follow this structure:
-
-```
-providers.d/
-  remote/
-    inference/
-      custom_ollama.yaml
-      vllm.yaml
-    vector_io/
-      qdrant.yaml
-    safety/
-      llama-guard.yaml
-  inline/
-    inference/
-      custom_ollama.yaml
-      vllm.yaml
-    vector_io/
-      qdrant.yaml
-    safety/
-      llama-guard.yaml
-```
-
-Each YAML file in these directories defines a provider specification for that particular API.
 ## Provider Types

 Llama Stack supports two types of external providers:
@@ -50,30 +18,37 @@ Llama Stack supports two types of external providers:
 1. **Remote Providers**: Providers that communicate with external services (e.g., cloud APIs)
 2. **Inline Providers**: Providers that run locally within the Llama Stack process

+### Provider Specification (Common between inline and remote providers)
+
+- `provider_type`: The type of the provider to be installed (remote or inline). eg. `remote::ollama`
+- `api`: The API for this provider, eg. `inference`
+- `config_class`: The full path to the configuration class
+- `module`: The Python module containing the provider implementation
+- `optional_api_dependencies`: List of optional Llama Stack APIs that this provider can use
+- `api_dependencies`: List of Llama Stack APIs that this provider depends on
+- `provider_data_validator`: Optional validator for provider data.
+- `pip_packages`: List of Python packages required by the provider
 ### Remote Provider Specification

 Remote providers are used when you need to communicate with external services. Here's an example for a custom Ollama provider:

 ```yaml
-adapter:
-  adapter_type: custom_ollama
-  pip_packages:
-    - ollama
-    - aiohttp
-  config_class: llama_stack_ollama_provider.config.OllamaImplConfig
-  module: llama_stack_ollama_provider
+adapter_type: custom_ollama
+provider_type: "remote::ollama"
+pip_packages:
+- ollama
+- aiohttp
+config_class: llama_stack_ollama_provider.config.OllamaImplConfig
+module: llama_stack_ollama_provider
 api_dependencies: []
 optional_api_dependencies: []
 ```
-#### Adapter Configuration
+#### Remote Provider Configuration

-The `adapter` section defines how to load and configure the provider:
-
-- `adapter_type`: A unique identifier for this adapter
-- `pip_packages`: List of Python packages required by the provider
-- `config_class`: The full path to the configuration class
-- `module`: The Python module containing the provider implementation
+- `adapter_type`: A unique identifier for this adapter, eg. `ollama`
 ### Inline Provider Specification

@@ -81,6 +56,7 @@ Inline providers run locally within the Llama Stack process. Here's an example f
 ```yaml
 module: llama_stack_vector_provider
+provider_type: inline::llama_stack_vector_provider
 config_class: llama_stack_vector_provider.config.VectorStoreConfig
 pip_packages:
 - faiss-cpu
@@ -95,12 +71,6 @@ container_image: custom-vector-store:latest # optional
 #### Inline Provider Fields

-- `module`: The Python module containing the provider implementation
-- `config_class`: The full path to the configuration class
-- `pip_packages`: List of Python packages required by the provider
-- `api_dependencies`: List of Llama Stack APIs that this provider depends on
-- `optional_api_dependencies`: List of optional Llama Stack APIs that this provider can use
-- `provider_data_validator`: Optional validator for provider data
 - `container_image`: Optional container image to use instead of pip packages

 ## Required Fields
@@ -113,20 +83,17 @@ All providers must contain a `get_provider_spec` function in their `provider` mo
 from llama_stack.providers.datatypes import (
     ProviderSpec,
     Api,
-    AdapterSpec,
-    remote_provider_spec,
+    RemoteProviderSpec,
 )

 def get_provider_spec() -> ProviderSpec:
-    return remote_provider_spec(
+    return RemoteProviderSpec(
         api=Api.inference,
-        adapter=AdapterSpec(
-            adapter_type="ramalama",
-            pip_packages=["ramalama>=0.8.5", "pymilvus"],
-            config_class="ramalama_stack.config.RamalamaImplConfig",
-            module="ramalama_stack",
-        ),
+        adapter_type="ramalama",
+        pip_packages=["ramalama>=0.8.5", "pymilvus"],
+        config_class="ramalama_stack.config.RamalamaImplConfig",
+        module="ramalama_stack",
     )
 ```
@@ -197,18 +164,16 @@ information. Execute the test for the Provider type you are developing.
 If your external provider isn't being loaded:

-1. Check that the `external_providers_dir` path is correct and accessible.
+1. Check that `module` points to a published pip package with a top level `provider` module including `get_provider_spec`.
 2. Verify that the YAML files are properly formatted.
 3. Ensure all required Python packages are installed.
 4. Check the Llama Stack server logs for any error messages - turn on debug logging to get more
    information using `LLAMA_STACK_LOGGING=all=debug`.
-5. Verify that the provider package is installed in your Python environment if using `external_providers_dir`.
 ## Examples

-### Example using `external_providers_dir`: Custom Ollama Provider
+### How to create an external provider module

-Here's a complete example of creating and using a custom Ollama provider:
+If you are creating a new external provider called `llama-stack-provider-ollama` here is how you would set up the package properly:

 1. First, create the provider package:
@@ -230,33 +195,28 @@ requires-python = ">=3.12"
 dependencies = ["llama-stack", "pydantic", "ollama", "aiohttp"]
 ```

-3. Create the provider specification:
-
-```yaml
-# ~/.llama/providers.d/remote/inference/custom_ollama.yaml
-adapter:
-  adapter_type: custom_ollama
-  pip_packages: ["ollama", "aiohttp"]
-  config_class: llama_stack_provider_ollama.config.OllamaImplConfig
-  module: llama_stack_provider_ollama
-api_dependencies: []
-optional_api_dependencies: []
-```
-
-4. Install the provider:
+3. Install the provider:

 ```bash
 uv pip install -e .
 ```

-5. Configure Llama Stack to use external providers:
+4. Edit `provider.py`

-```yaml
-external_providers_dir: ~/.llama/providers.d/
-```
+provider.py must be updated to contain `get_provider_spec`. This is used by llama stack to install the provider.
+
+```python
+def get_provider_spec() -> ProviderSpec:
+    return RemoteProviderSpec(
+        api=Api.inference,
+        adapter_type="llama-stack-provider-ollama",
+        pip_packages=["ollama", "aiohttp"],
+        config_class="llama_stack_provider_ollama.config.OllamaImplConfig",
+        module="llama_stack_provider_ollama",
+    )
+```

-The provider will now be available in Llama Stack with the type `remote::custom_ollama`.
+5. Implement the provider as outlined above with `get_provider_impl` or `get_adapter_impl`, etc.
 ### Example using `module`: ramalama-stack

@@ -275,7 +235,6 @@ distribution_spec:
 module: ramalama_stack==0.3.0a0
 image_type: venv
 image_name: null
-external_providers_dir: null
 additional_pip_packages:
 - aiosqlite
 - sqlalchemy[asyncio]