docs: Contributor guidelines for creating Internal or External providers (#3111)

**Description:** 
Adding information and guidelines on when contributors should create an
in-tree vs out-of-tree provider.


I'm still learning a bit about this subject, so I'm very open to feedback
on this PR.

Will also add this section to the API Providers section of the docs
Kelly Brown 2025-08-28 06:26:47 -04:00 committed by GitHub
parent d73955a41e
commit 1a9fa3c0b8

@@ -14,6 +14,13 @@ Here are some example PRs to help you get started:
- [Nvidia Inference Implementation](https://github.com/meta-llama/llama-stack/pull/355)
- [Model context protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/665)
## Guidelines for creating Internal or External Providers
| **Type** | Internal (in-tree) | External (out-of-tree) |
|----------|--------------------|------------------------|
| **Description** | A provider that lives directly in the Llama Stack codebase. | A provider that lives outside the Llama Stack core codebase but is still accessible and usable by Llama Stack. |
| **Benefits** | Can be used with minimal additional configuration or installation. | Contributors do not have to modify the core code to make a provider available to Llama Stack, and provider-specific code stays separate from the core Llama Stack code. |
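To make the distinction more concrete, here is a rough sketch of what the entry point of an out-of-tree provider package might look like. The package, class, and function names below (including `get_adapter_impl`) are illustrative assumptions rather than the exact contract; see the external providers documentation for the real requirements.

```python
# my_org_provider/__init__.py -- hypothetical entry point for a provider that
# lives outside the Llama Stack core repository.
from pydantic import BaseModel


class MyOrgInferenceConfig(BaseModel):
    """Hypothetical config model for the external provider."""

    url: str = "http://localhost:9000"
    api_key: str | None = None


class MyOrgInferenceAdapter:
    """Hypothetical adapter class implemented entirely in this external package."""

    def __init__(self, config: MyOrgInferenceConfig) -> None:
        self.config = config

    async def initialize(self) -> None:
        # Set up any clients or connections against self.config.url here.
        ...


async def get_adapter_impl(config: MyOrgInferenceConfig, _deps=None):
    # Assumed entry point that Llama Stack calls when it loads the provider;
    # the exact function name and signature are defined by the external
    # provider contract, not by this sketch.
    impl = MyOrgInferenceAdapter(config)
    await impl.initialize()
    return impl
```

Because the package ships on its own, it can be versioned and released independently of the core repository while still being wired into a Llama Stack distribution.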
## Inference Provider Patterns
When you implement an inference provider for an OpenAI-compatible API, Llama Stack provides several mixin classes that simplify development and ensure consistent behavior across providers.
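As a rough illustration of the pattern, the sketch below shows an adapter that delegates its OpenAI-compatible request handling to a mixin. The import path, mixin name, and hook methods (`get_api_key`, `get_base_url`) are assumptions made for illustration; check the actual mixin classes under `llama_stack/providers/utils/inference/` for the real interface.

```python
# A minimal sketch of an OpenAI-compatible inference adapter built on a shared
# mixin. The import path, mixin name, and hook methods are assumptions; check
# the real mixins under llama_stack/providers/utils/inference/.
from dataclasses import dataclass

from llama_stack.providers.utils.inference.openai_mixin import OpenAIMixin  # assumed path


@dataclass
class AcmeInferenceConfig:
    """Hypothetical config for a fictional OpenAI-compatible 'Acme' endpoint."""

    api_key: str
    base_url: str = "https://api.acme.example/v1"


class AcmeInferenceAdapter(OpenAIMixin):
    """Adapter that lets the mixin handle the OpenAI-compatible request plumbing."""

    def __init__(self, config: AcmeInferenceConfig) -> None:
        self.config = config

    # The mixin is assumed to drive its OpenAI client through small hooks like
    # these, so the adapter only supplies credentials and an endpoint.
    def get_api_key(self) -> str:
        return self.config.api_key

    def get_base_url(self) -> str:
        return self.config.base_url
```

The idea is that the provider-specific class stays small: it supplies configuration, while the shared mixin keeps request/response handling consistent across all OpenAI-compatible providers.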