docs!: adjust external provider docs

now that we consolidated the `ProviderSpec` types and got rid of `AdapterSpec`, adjust external.md and external-provider-guide.md

external_providers_guide.md specifically needed some updates to the proper fields required in a `ProviderSpec`, beyond the changes in this PR.

BREAKING CHANGE: external providers must update their `get_provider_spec` function to use `RemoteProviderSpec` properly

Signed-off-by: Charlie Doern <cdoern@redhat.com>
Charlie Doern 2025-09-18 14:52:41 -04:00
parent 4dfbe46954
commit da607957c0
2 changed files with 52 additions and 97 deletions


@@ -152,7 +152,6 @@ __all__ = ["WeatherAPI", "available_providers"]
from typing import Protocol
from llama_stack.providers.datatypes import (
-    AdapterSpec,
    Api,
    ProviderSpec,
    RemoteProviderSpec,
@@ -166,13 +165,11 @@ def available_providers() -> list[ProviderSpec]:
            api=Api.weather,
            provider_type="remote::kaze",
+            config_class="llama_stack_provider_kaze.KazeProviderConfig",
-            adapter=AdapterSpec(
            adapter_type="kaze",
            module="llama_stack_provider_kaze",
            pip_packages=["llama_stack_provider_kaze"],
-                config_class="llama_stack_provider_kaze.KazeProviderConfig",
-            ),
        ),
    ]
@@ -325,7 +322,6 @@ class WeatherKazeAdapter(WeatherProvider):
```yaml
# ~/.llama/providers.d/remote/weather/kaze.yaml
-adapter:
adapter_type: kaze
pip_packages: ["llama_stack_provider_kaze"]
config_class: llama_stack_provider_kaze.config.KazeProviderConfig


@@ -11,38 +11,6 @@ an example entry in your build.yaml should look like:
module: ramalama_stack
```
-Additionally you can configure the `external_providers_dir` in your Llama Stack configuration. This method is in the process of being deprecated in favor of the `module` method. If using this method, the external provider directory should contain your external provider specifications:
-```yaml
-external_providers_dir: ~/.llama/providers.d/
-```
-## Directory Structure
-The external providers directory should follow this structure:
-```
-providers.d/
-  remote/
-    inference/
-      custom_ollama.yaml
-      vllm.yaml
-    vector_io/
-      qdrant.yaml
-    safety/
-      llama-guard.yaml
-  inline/
-    inference/
-      custom_ollama.yaml
-      vllm.yaml
-    vector_io/
-      qdrant.yaml
-    safety/
-      llama-guard.yaml
-```
-Each YAML file in these directories defines a provider specification for that particular API.
## Provider Types
Llama Stack supports two types of external providers:
@@ -50,13 +18,25 @@ Llama Stack supports two types of external providers:
1. **Remote Providers**: Providers that communicate with external services (e.g., cloud APIs)
2. **Inline Providers**: Providers that run locally within the Llama Stack process
+### Provider Specification (Common between inline and remote providers)
+- `provider_type`: The type of the provider to be installed (remote or inline), e.g. `remote::ollama`
+- `api`: The API for this provider, e.g. `inference`
+- `config_class`: The full path to the configuration class
+- `module`: The Python module containing the provider implementation
+- `optional_api_dependencies`: List of optional Llama Stack APIs that this provider can use
+- `api_dependencies`: List of Llama Stack APIs that this provider depends on
+- `provider_data_validator`: Optional validator for provider data
+- `pip_packages`: List of Python packages required by the provider
### Remote Provider Specification
Remote providers are used when you need to communicate with external services. Here's an example for a custom Ollama provider:
```yaml
-adapter:
adapter_type: custom_ollama
+provider_type: "remote::ollama"
pip_packages:
- ollama
- aiohttp
@@ -66,14 +46,9 @@ api_dependencies: []
optional_api_dependencies: []
```
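For orientation, the same remote spec can also be expressed in Python via `get_provider_spec`; a minimal sketch, assuming the consolidated `RemoteProviderSpec` constructor shown later in this guide (field values mirror the YAML above):

```python
# Sketch only: mirrors the YAML example above using the consolidated
# RemoteProviderSpec type; the constructor fields are assumed from the
# examples later in this guide.
from llama_stack.providers.datatypes import Api, ProviderSpec, RemoteProviderSpec


def get_provider_spec() -> ProviderSpec:
    return RemoteProviderSpec(
        api=Api.inference,
        provider_type="remote::ollama",
        adapter_type="custom_ollama",
        pip_packages=["ollama", "aiohttp"],
        config_class="llama_stack_provider_ollama.config.OllamaImplConfig",
        module="llama_stack_provider_ollama",
    )
```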
-#### Adapter Configuration
+#### Remote Provider Configuration
-The `adapter` section defines how to load and configure the provider:
-- `adapter_type`: A unique identifier for this adapter
-- `pip_packages`: List of Python packages required by the provider
-- `config_class`: The full path to the configuration class
-- `module`: The Python module containing the provider implementation
+- `adapter_type`: A unique identifier for this adapter, e.g. `ollama`
### Inline Provider Specification
@@ -81,6 +56,7 @@ Inline providers run locally within the Llama Stack process. Here's an example f
```yaml
module: llama_stack_vector_provider
+provider_type: inline::llama_stack_vector_provider
config_class: llama_stack_vector_provider.config.VectorStoreConfig
pip_packages:
- faiss-cpu
@@ -95,12 +71,6 @@ container_image: custom-vector-store:latest # optional
#### Inline Provider Fields
-- `module`: The Python module containing the provider implementation
-- `config_class`: The full path to the configuration class
-- `pip_packages`: List of Python packages required by the provider
-- `api_dependencies`: List of Llama Stack APIs that this provider depends on
-- `optional_api_dependencies`: List of optional Llama Stack APIs that this provider can use
-- `provider_data_validator`: Optional validator for provider data
- `container_image`: Optional container image to use instead of pip packages
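Inline providers are loaded through an entry point in the provider module (`get_provider_impl`, referenced later in this guide). As a rough sketch, with hypothetical class names and a signature assumed from Llama Stack's inline provider conventions:

```python
# llama_stack_vector_provider/__init__.py -- a sketch; VectorStoreConfig and
# VectorStoreImpl are hypothetical, and the (config, deps) signature is an
# assumption based on Llama Stack's inline provider conventions.
from typing import Any

from .config import VectorStoreConfig  # hypothetical config class


async def get_provider_impl(config: VectorStoreConfig, deps: dict[Any, Any]):
    from .vector_store import VectorStoreImpl  # hypothetical implementation class

    impl = VectorStoreImpl(config)
    await impl.initialize()
    return impl
```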
## Required Fields
@@ -113,20 +83,17 @@ All providers must contain a `get_provider_spec` function in their `provider` mo
from llama_stack.providers.datatypes import (
    ProviderSpec,
    Api,
-    AdapterSpec,
-    remote_provider_spec,
+    RemoteProviderSpec,
)
def get_provider_spec() -> ProviderSpec:
-    return remote_provider_spec(
+    return RemoteProviderSpec(
        api=Api.inference,
-        adapter=AdapterSpec(
        adapter_type="ramalama",
        pip_packages=["ramalama>=0.8.5", "pymilvus"],
        config_class="ramalama_stack.config.RamalamaImplConfig",
        module="ramalama_stack",
-        ),
    )
```
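The example above covers the remote case; for an inline provider, the same entry point would return an inline spec instead. A minimal sketch, assuming an `InlineProviderSpec` counterpart in the same module (field values borrowed from the vector store example above):

```python
# Sketch of the inline counterpart; InlineProviderSpec is assumed to live
# alongside RemoteProviderSpec in llama_stack.providers.datatypes.
from llama_stack.providers.datatypes import Api, InlineProviderSpec, ProviderSpec


def get_provider_spec() -> ProviderSpec:
    return InlineProviderSpec(
        api=Api.vector_io,
        provider_type="inline::llama_stack_vector_provider",
        module="llama_stack_vector_provider",
        config_class="llama_stack_vector_provider.config.VectorStoreConfig",
        pip_packages=["faiss-cpu"],
    )
```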
@@ -197,18 +164,16 @@ information. Execute the test for the Provider type you are developing.
If your external provider isn't being loaded:
+1. Check that `module` points to a published pip package with a top-level `provider` module including `get_provider_spec`.
1. Check that the `external_providers_dir` path is correct and accessible.
2. Verify that the YAML files are properly formatted.
3. Ensure all required Python packages are installed.
4. Check the Llama Stack server logs for any error messages; turn on debug logging to get more
information using `LLAMA_STACK_LOGGING=all=debug`, as shown below.
5. Verify that the provider package is installed in your Python environment if using `external_providers_dir`.
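For step 4 above, one way to capture that debug output is to set the variable when starting the server; the run command and config path here are illustrative:

```bash
# Illustrative only: enable debug logging for every category while starting
# the server; replace ./run.yaml with your actual run configuration.
LLAMA_STACK_LOGGING=all=debug llama stack run ./run.yaml
```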
## Examples
-### Example using `external_providers_dir`: Custom Ollama Provider
+### How to create an external provider module
-Here's a complete example of creating and using a custom Ollama provider:
+If you are creating a new external provider called `llama-stack-provider-ollama`, here is how you would set up the package properly:
1. First, create the provider package:
@@ -230,33 +195,28 @@ requires-python = ">=3.12"
dependencies = ["llama-stack", "pydantic", "ollama", "aiohttp"]
```
-3. Create the provider specification:
-```yaml
-# ~/.llama/providers.d/remote/inference/custom_ollama.yaml
-adapter:
-  adapter_type: custom_ollama
-  pip_packages: ["ollama", "aiohttp"]
-  config_class: llama_stack_provider_ollama.config.OllamaImplConfig
-  module: llama_stack_provider_ollama
-api_dependencies: []
-optional_api_dependencies: []
-```
-4. Install the provider:
+3. Install the provider:
```bash
uv pip install -e .
```
-5. Configure Llama Stack to use external providers:
+4. Edit `provider.py`
-```yaml
-external_providers_dir: ~/.llama/providers.d/
-```
+`provider.py` must be updated to contain `get_provider_spec`. This function is used by Llama Stack to install the provider.
+```python
+def get_provider_spec() -> ProviderSpec:
+    return RemoteProviderSpec(
+        api=Api.inference,
+        adapter_type="llama-stack-provider-ollama",
+        pip_packages=["ollama", "aiohttp"],
+        config_class="llama_stack_provider_ollama.config.OllamaImplConfig",
+        module="llama_stack_provider_ollama",
+    )
+```
-The provider will now be available in Llama Stack with the type `remote::custom_ollama`.
+5. Implement the provider as outlined above with `get_provider_impl` or `get_adapter_impl`, etc.
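For step 5, a remote provider's entry point conventionally constructs and initializes the adapter. A minimal sketch, with a hypothetical adapter class and a signature assumed from existing remote providers:

```python
# llama_stack_provider_ollama/__init__.py -- a sketch; OllamaAdapter is
# hypothetical, and the (config, _deps) signature is an assumption based on
# existing remote providers.
from .config import OllamaImplConfig


async def get_adapter_impl(config: OllamaImplConfig, _deps):
    from .ollama import OllamaAdapter  # hypothetical adapter class

    impl = OllamaAdapter(config)
    await impl.initialize()
    return impl
```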
### Example using `module`: ramalama-stack
@@ -275,7 +235,6 @@ distribution_spec:
module: ramalama_stack==0.3.0a0
image_type: venv
image_name: null
-external_providers_dir: null
additional_pip_packages:
- aiosqlite
- sqlalchemy[asyncio]
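With a build file like the one above, the provider is pulled in through its `module` at build time; assuming the standard Llama Stack CLI, the invocation would look roughly like:

```bash
# Assumed CLI usage (verify against your installed version): build the
# distribution described by this build.yaml
llama stack build --config build.yaml
```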