From d03cd30b33ea0effcdd4fb6b5fbf8e4a62610798 Mon Sep 17 00:00:00 2001
From: Kelly Brown
Date: Tue, 12 Aug 2025 10:59:46 -0400
Subject: [PATCH] docs: Contributor guidelines for creating Internal or
 External providers

---
 docs/source/contributing/new_api_provider.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/docs/source/contributing/new_api_provider.md b/docs/source/contributing/new_api_provider.md
index 6f8f59a47..9a7a62a38 100644
--- a/docs/source/contributing/new_api_provider.md
+++ b/docs/source/contributing/new_api_provider.md
@@ -14,6 +14,13 @@ Here are some example PRs to help you get started:
 - [Nvidia Inference Implementation](https://github.com/meta-llama/llama-stack/pull/355)
 - [Model context protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/665)
 
+## Guidelines for creating Internal or External Providers
+
+| **Type** | Internal (in-tree) | External (out-of-tree) |
+|----------|--------------------|------------------------|
+| **Description** | A provider that lives directly in the Llama Stack codebase. | A provider that lives outside the Llama Stack core codebase but is still accessible and usable by Llama Stack. |
+| **Benefits** | Can be used with minimal additional configuration or installation. | Contributors can make providers available to Llama Stack without modifying the core codebase, keeping provider-specific code separate from core Llama Stack code. |
+
 ## Inference Provider Patterns
 
 When implementing Inference providers for OpenAI-compatible APIs, Llama Stack provides several mixin classes to simplify development and ensure consistent behavior across providers.
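
The trailing context line of the hunk references the mixin classes Llama Stack provides for OpenAI-compatible inference providers. The sketch below is only a rough illustration of that mixin pattern, not the actual Llama Stack API: the names `OpenAICompatMixin`, `get_base_url`, `get_api_key`, `build_headers`, and `ExampleInferenceAdapter` are assumptions made for this example; consult the provider utilities in the llama-stack repository for the real classes.

```python
# Illustrative sketch of a mixin-based provider. All class and method names
# here are hypothetical, not the actual Llama Stack API.
from abc import ABC, abstractmethod


class OpenAICompatMixin(ABC):
    """Shared plumbing for providers that talk to an OpenAI-compatible endpoint."""

    @abstractmethod
    def get_base_url(self) -> str:
        """Return the provider's OpenAI-compatible endpoint URL."""

    @abstractmethod
    def get_api_key(self) -> str:
        """Return the credential used to authenticate requests."""

    def build_headers(self) -> dict[str, str]:
        # Behavior shared by every provider lives in the mixin, so each
        # concrete provider only supplies its endpoint and credentials.
        return {"Authorization": f"Bearer {self.get_api_key()}"}


class ExampleInferenceAdapter(OpenAICompatMixin):
    """Hypothetical provider that inherits the shared OpenAI-compatible behavior."""

    def get_base_url(self) -> str:
        return "https://api.example.com/v1"

    def get_api_key(self) -> str:
        return "sk-example"
```

The idea behind the pattern is that endpoint-specific details stay in the concrete adapter while request-building behavior is shared by the mixin, which is what keeps behavior consistent across providers.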