docs: Contributor guidelines for creating Internal or External providers (#3111)
**Description:** Adding information and guidelines on when contributors should create an in-tree vs. out-of-tree provider. I'm still learning a bit about this subject, so I'm very open to feedback on this PR. I will also add this section to the API Providers section of the docs.
This commit is contained in:
parent d73955a41e
commit 1a9fa3c0b8
1 changed file with 7 additions and 0 deletions
@@ -14,6 +14,13 @@ Here are some example PRs to help you get started:
- [Nvidia Inference Implementation](https://github.com/meta-llama/llama-stack/pull/355)
- [Model context protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/665)
## Guidelines for creating Internal or External Providers
|**Type** |Internal (in-tree) |External (out-of-tree) |
|---------|-------------------|-----------------------|
|**Description** |A provider that lives directly in the Llama Stack codebase. |A provider that lives outside the Llama Stack core codebase but is still accessible and usable by Llama Stack. |
|**Benefits** |Interact with the provider with minimal additional configuration or installation. |Contributors do not have to modify the core code to make a provider available to Llama Stack; provider-specific code stays separate from the core Llama Stack code. |
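As a rough illustration of the external (out-of-tree) column above, the sketch below shows what a separately packaged provider could look like. This is a hypothetical example: the package, class, and function names (`AcmeInferenceConfig`, `AcmeInferenceAdapter`, `get_adapter_impl`) are illustrative assumptions rather than the exact Llama Stack interfaces, so consult the provider documentation for the actual entry points.

```python
# Hypothetical out-of-tree provider package; all names are illustrative and not
# the exact Llama Stack interfaces. The key point is that this code lives in its
# own package and is only wired into Llama Stack through configuration.
from pydantic import BaseModel


class AcmeInferenceConfig(BaseModel):
    """Configuration for a hypothetical externally packaged inference provider."""

    api_key: str
    base_url: str = "https://api.acme.example/v1"


class AcmeInferenceAdapter:
    """Minimal adapter skeleton; a real provider would implement the Inference API methods."""

    def __init__(self, config: AcmeInferenceConfig) -> None:
        self.config = config

    async def initialize(self) -> None:
        # Open HTTP clients / validate credentials here.
        pass


async def get_adapter_impl(config: AcmeInferenceConfig, _deps=None):
    """Entry point a distribution could point at to construct the provider."""
    impl = AcmeInferenceAdapter(config)
    await impl.initialize()
    return impl
```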
## Inference Provider Patterns
For Inference providers that target OpenAI-compatible APIs, Llama Stack provides several mixin classes that simplify development and ensure consistent behavior across providers.
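The sketch below illustrates the general mixin pattern described here; the class and method names (`OpenAICompatMixinSketch`, `get_base_url`, `get_api_key`) are stand-ins for illustration and not the actual Llama Stack mixin classes, so treat it as the shape of the approach rather than the real API.

```python
# Illustrative sketch of the OpenAI-compatible mixin pattern; class and method
# names are stand-ins, not the real Llama Stack mixins.
from openai import AsyncOpenAI


class OpenAICompatMixinSketch:
    """Shared behavior for providers that talk to an OpenAI-compatible endpoint."""

    # Subclasses supply these so the mixin can build the client.
    def get_base_url(self) -> str:
        raise NotImplementedError

    def get_api_key(self) -> str:
        raise NotImplementedError

    @property
    def client(self) -> AsyncOpenAI:
        # One consistent client-construction path for every provider that mixes this in.
        return AsyncOpenAI(base_url=self.get_base_url(), api_key=self.get_api_key())

    async def chat_completion(self, model: str, messages: list[dict]):
        # Common request/response handling lives in the mixin, so individual
        # providers only override what is genuinely provider-specific.
        return await self.client.chat.completions.create(model=model, messages=messages)


class ExampleInferenceProvider(OpenAICompatMixinSketch):
    """A concrete provider only specifies where its endpoint is and how to authenticate."""

    def __init__(self, base_url: str, api_key: str) -> None:
        self._base_url = base_url
        self._api_key = api_key

    def get_base_url(self) -> str:
        return self._base_url

    def get_api_key(self) -> str:
        return self._api_key
```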