From 1a9fa3c0b88a60aece2cbbcaa9c98dc635becc48 Mon Sep 17 00:00:00 2001
From: Kelly Brown <86735520+kelbrown20@users.noreply.github.com>
Date: Thu, 28 Aug 2025 06:26:47 -0400
Subject: [PATCH] docs: Contributor guidelines for creating Internal or
 External providers (#3111)

**Description:** Adds information and guidelines on when contributors should
create an in-tree vs. an out-of-tree provider. I'm still learning about this
subject, so I'm very open to feedback on this PR.

I will also add this section to the API Providers section of the docs.
---
 docs/source/contributing/new_api_provider.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/docs/source/contributing/new_api_provider.md b/docs/source/contributing/new_api_provider.md
index 6f8f59a47..9a7a62a38 100644
--- a/docs/source/contributing/new_api_provider.md
+++ b/docs/source/contributing/new_api_provider.md
@@ -14,6 +14,13 @@ Here are some example PRs to help you get started:
 - [Nvidia Inference Implementation](https://github.com/meta-llama/llama-stack/pull/355)
 - [Model context protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/665)
 
+## Guidelines for creating Internal or External Providers
+
+| **Type** | Internal (in-tree) | External (out-of-tree) |
+|----------|--------------------|------------------------|
+| **Description** | A provider that lives directly in the Llama Stack codebase. | A provider that lives outside the Llama Stack core codebase but is still accessible and usable by Llama Stack. |
+| **Benefits** | Can be used with minimal additional configuration or installation. | Keeps provider-specific code separate from the core Llama Stack code; contributors do not have to modify the core codebase to make a provider available in Llama Stack. |
+
 ## Inference Provider Patterns
 
 When implementing Inference providers for OpenAI-compatible APIs, Llama Stack provides several mixin classes to simplify development and ensure consistent behavior across providers.
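A note for reviewers: to make the external (out-of-tree) column of the table concrete, here is a minimal sketch of what such a provider package can look like. It assumes the `get_adapter_impl()` entry-point pattern that in-tree remote providers use; the package, module, and class names are hypothetical and not part of this patch.

```python
# llama_stack_my_provider/__init__.py  (hypothetical out-of-tree package)
# Sketch only: mirrors the get_adapter_impl() entry-point pattern used by
# in-tree remote providers; check the current Llama Stack source for the
# exact interfaces before relying on this.

from pydantic import BaseModel


class MyProviderConfig(BaseModel):
    """Configuration surfaced to users in their run config."""

    url: str = "http://localhost:8000"


async def get_adapter_impl(config: MyProviderConfig, _deps):
    """Entry point Llama Stack calls to instantiate the provider."""
    from .inference import MyProviderInferenceAdapter  # hypothetical module

    impl = MyProviderInferenceAdapter(config)
    await impl.initialize()
    return impl
```

Because the package lives outside the core repo, users install it separately and the core code stays untouched, which is the separation the Benefits row describes.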