From 8c537d1802b1f5128010e06d0c265d6ac81f4f5c Mon Sep 17 00:00:00 2001
From: Dinesh Yeduguru
Date: Thu, 23 Jan 2025 12:04:58 -0800
Subject: [PATCH] address feedback

---
 docs/source/contributing/new_api_provider.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/docs/source/contributing/new_api_provider.md b/docs/source/contributing/new_api_provider.md
index 99e389d6a..439021685 100644
--- a/docs/source/contributing/new_api_provider.md
+++ b/docs/source/contributing/new_api_provider.md
@@ -19,6 +19,7 @@ This guide will walk you through the process of adding a new API provider to Llama Stack
    Example PRs:
    - [Grok Inference Implementation](https://github.com/meta-llama/llama-stack/pull/609)
    - [Nvidia Inference Implementation](https://github.com/meta-llama/llama-stack/pull/355)
+   - [Model Context Protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/665)

 3. **Register Your Provider**
    - Add your provider to the appropriate {repopath}`Registry::llama_stack/providers/registry/`
@@ -26,7 +27,9 @@ This guide will walk you through the process of adding a new API provider to Llama Stack

 4. **Integration**
    - Update the run.yaml file to include your provider
-   - To make your provider a default option or create a new distribution, open an issue to discuss with the Llama Stack team
+   - To make your provider a default option or create a new distribution, look at the templates in {repopath}`llama_stack/templates/` and run {repopath}`llama_stack/scripts/distro_codegen.py`
+   - Example PRs:
+     - [Adding Model Context Protocol Tool Runtime](https://github.com/meta-llama/llama-stack/pull/816)

 ## Testing Guidelines
