forked from phoenix-oss/llama-stack-mirror

Doc updates

parent 9351a4b2d7
commit 2118f37350

5 changed files with 75 additions and 129 deletions
@@ -1,73 +1,14 @@
 # Contributing to Llama Stack
 
-If you are interested in contributing to Llama Stack, this guide will cover some of the key topics that might help you get started.
-
-Also, check out our [Contributing Guide](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md) for more details on how to contribute to Llama Stack.
-
-## Adding a New API Provider
-
-This guide will walk you through the process of adding a new API provider to Llama Stack.
-
-### Getting Started
-
-1. **Choose Your API Category**
-   - Determine which API category your provider belongs to (Inference, Safety, Agents, VectorIO)
-   - Review the core concepts of Llama Stack in the [concepts guide](../concepts/index.md)
-
-2. **Determine Provider Type**
-   - **Remote Provider**: Makes requests to external services
-   - **Inline Provider**: Executes implementation locally
-
-   Reference existing implementations:
-   - {repopath}`Remote Providers::llama_stack/providers/remote`
-   - {repopath}`Inline Providers::llama_stack/providers/inline`
-
-   Example PRs:
-   - [Grok Inference Implementation](https://github.com/meta-llama/llama-stack/pull/609)
-   - [Nvidia Inference Implementation](https://github.com/meta-llama/llama-stack/pull/355)
-   - [Model context protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/665)
-
-3. **Register Your Provider**
-   - Add your provider to the appropriate {repopath}`Registry::llama_stack/providers/registry/`
-   - Specify any required pip dependencies
-
-4. **Integration**
-   - Update the run.yaml file to include your provider
-   - To make your provider a default option or create a new distribution, look at the templates in {repopath}`llama_stack/templates/` and run {repopath}`llama_stack/scripts/distro_codegen.py`
-   - Example PRs:
-     - [Adding Model Context Protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/816)
-
-### Testing Guidelines
-
-#### 1. Integration Testing
-- Create integration tests that use real provider instances and configurations
-- For remote services, test actual API interactions
-- Avoid mocking at the provider level
-- Reference examples in {repopath}`tests/client-sdk`
-
-#### 2. Unit Testing (Optional)
-- Add unit tests for provider-specific functionality
-- See examples in {repopath}`llama_stack/providers/tests/inference/test_text_inference.py`
-
-#### 3. End-to-End Testing
-1. Start a Llama Stack server with your new provider
-2. Test using client requests
-3. Verify compatibility with existing client scripts in the [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main) repository
-4. Document which scripts are compatible with your provider
-
-### Submitting Your PR
-
-1. Ensure all tests pass
-2. Include a comprehensive test plan in your PR summary
-3. Document any known limitations or considerations
-4. Submit your pull request for review
+Start with the [Contributing Guide](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md) for some general tips. This section covers a few key topics in more detail.
+
+- [Adding a New API Provider](new_api_provider.md) describes adding new API providers to the Stack.
+- [Testing Llama Stack](testing.md) provides details about the testing framework and how to test providers and distributions.
+
+```{toctree}
+:maxdepth: 1
+:hidden:
+
+new_api_provider
+memory_api
+testing
+```
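The end-to-end testing steps described in this commit (start a server with your new provider, then exercise it with client requests) can be sketched with the standard library alone. Everything here is illustrative: the port, the route, and the payload shape are placeholders rather than the actual Llama Stack API, and the real POST is left commented out because it requires a live server.

```python
import json
from urllib import request

# Hypothetical local server address; the real port and routes depend on
# your run.yaml configuration.
BASE_URL = "http://localhost:8321"

def build_chat_request(model: str, user_message: str) -> request.Request:
    """Assemble a JSON POST for a chat-style inference endpoint (illustrative route)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return request.Request(
        f"{BASE_URL}/inference/chat-completion",  # placeholder route
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("my-test-model", "Hello")
# request.urlopen(req)  # only works against a running server
```

A test plan for the PR can then record which such requests were issued and what the provider returned.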
@@ -2,41 +2,25 @@
 
 This guide will walk you through the process of adding a new API provider to Llama Stack.
 
 ## Getting Started
 
-1. **Choose Your API Category**
-   - Determine which API category your provider belongs to (Inference, Safety, Agents, VectorIO)
-   - Review the core concepts of Llama Stack in the [concepts guide](../concepts/index.md)
+- Begin by reviewing the [core concepts](../concepts/) of Llama Stack and choose the API your provider belongs to (Inference, Safety, VectorIO, etc.)
+- Determine the provider type ({repopath}`Remote::llama_stack/providers/remote` or {repopath}`Inline::llama_stack/providers/inline`). Remote providers make requests to external services, while inline providers execute their implementation locally.
+- Add your provider to the appropriate {repopath}`Registry::llama_stack/providers/registry/`, specifying any necessary pip dependencies.
+- Update any distribution {repopath}`Templates::llama_stack/templates/` build.yaml and run.yaml files if they should include your provider by default, and run {repopath}`llama_stack/scripts/distro_codegen.py` if necessary.
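The registry step above can be pictured with a small sketch. The `ProviderEntry` class and its fields are illustrative stand-ins, not the actual llama_stack spec types; the real ones live under {repopath}`llama_stack/providers/registry/`, so copy an existing entry there rather than this one.

```python
from dataclasses import dataclass, field

@dataclass
class ProviderEntry:
    """Illustrative stand-in for a provider registry entry."""
    api: str                       # e.g. "inference", "safety", "vector_io"
    provider_type: str             # e.g. "remote::myservice" or "inline::mylib"
    pip_packages: list = field(default_factory=list)
    module: str = ""               # import path of the implementation

# "myservice" is a hypothetical provider used for illustration.
entry = ProviderEntry(
    api="inference",
    provider_type="remote::myservice",
    pip_packages=["myservice-sdk"],  # hypothetical dependency
    module="llama_stack.providers.remote.inference.myservice",
)
```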
-
-2. **Determine Provider Type**
-   - **Remote Provider**: Makes requests to external services
-   - **Inline Provider**: Executes implementation locally
-
-   Reference existing implementations:
-   - {repopath}`Remote Providers::llama_stack/providers/remote`
-   - {repopath}`Inline Providers::llama_stack/providers/inline`
-
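One way to picture the remote/inline split: both kinds of provider expose the same API surface, and only where the work happens differs. The names below are illustrative stand-ins, not actual llama_stack types.

```python
from typing import Protocol

class Completer(Protocol):
    """Illustrative provider interface shared by both kinds."""
    def complete(self, prompt: str) -> str: ...

class InlineEcho:
    """Inline provider: does the work in-process."""
    def complete(self, prompt: str) -> str:
        return prompt.upper()

class RemoteEcho:
    """Remote provider: would forward the call to an external service."""
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def complete(self, prompt: str) -> str:
        # A real adapter would POST to self.base_url here.
        raise NotImplementedError("requires a live service")

provider: Completer = InlineEcho()
```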
-Example PRs:
+Here are some example PRs to help you get started:
 - [Grok Inference Implementation](https://github.com/meta-llama/llama-stack/pull/609)
 - [Nvidia Inference Implementation](https://github.com/meta-llama/llama-stack/pull/355)
 - [Model context protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/665)
-
-3. **Register Your Provider**
-   - Add your provider to the appropriate {repopath}`Registry::llama_stack/providers/registry/`
-   - Specify any required pip dependencies
-
-4. **Integration**
-   - Update the run.yaml file to include your provider
-   - To make your provider a default option or create a new distribution, look at the templates in {repopath}`llama_stack/templates/` and run {repopath}`llama_stack/scripts/distro_codegen.py`
-   - Example PRs:
-     - [Adding Model Context Protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/816)
-
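Concretely, the run.yaml change discussed above amounts to listing your provider under its API. This fragment is schematic: the key names follow the general shape of existing templates, so copy from a real template under {repopath}`llama_stack/templates/` rather than from here (`myservice` is a hypothetical provider).

```yaml
providers:
  inference:
  - provider_id: myservice
    provider_type: remote::myservice   # should match the registry entry
    config:
      url: https://api.myservice.example/v1
```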
-## Testing Guidelines
+## Testing the Provider
 
 ### 1. Integration Testing
 - Create integration tests that use real provider instances and configurations
 - For remote services, test actual API interactions
-- Avoid mocking at the provider level
+- Avoid mocking at the provider level since adapter layers tend to be thin
 - Reference examples in {repopath}`tests/client-sdk`
 
 ### 2. Unit Testing (Optional)
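As a sketch of the optional unit-testing step, the snippet below exercises a provider-specific helper in isolation, with no server involved. The config class and its validation rule are hypothetical, not part of llama_stack.

```python
from dataclasses import dataclass

@dataclass
class MyProviderConfig:
    """Hypothetical provider config with a small validation rule."""
    url: str
    timeout_s: float = 30.0

    def validate(self) -> None:
        if not self.url.startswith(("http://", "https://")):
            raise ValueError(f"invalid url: {self.url!r}")
        if self.timeout_s <= 0:
            raise ValueError("timeout_s must be positive")

def test_rejects_bad_url():
    try:
        MyProviderConfig(url="ftp://example.com").validate()
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for non-HTTP url")

test_rejects_bad_url()
```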
docs/source/contributing/testing.md (new file, 6 lines)

@@ -0,0 +1,6 @@
+# Testing Llama Stack
+
+Tests are of three different kinds:
+- Unit tests
+- Provider-focused integration tests
+- Client SDK tests