Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-08-06 10:42:39 +00:00
fix: Update link to redirect to index page
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
This commit is contained in:
parent 7558678b8c
commit dac3f0595c

1 changed file with 1 addition and 1 deletion
@@ -3,7 +3,7 @@
 This guide will walk you through the process of adding a new API provider to Llama Stack.
 
 
-- Begin by reviewing the [core concepts](../concepts/) of Llama Stack and choose the API your provider belongs to (Inference, Safety, VectorIO, etc.)
+- Begin by reviewing the [core concepts](../concepts/index.md) of Llama Stack and choose the API your provider belongs to (Inference, Safety, VectorIO, etc.)
 - Determine the provider type ({repopath}`Remote::llama_stack/providers/remote` or {repopath}`Inline::llama_stack/providers/inline`). Remote providers make requests to external services, while inline providers execute implementation locally.
 - Add your provider to the appropriate {repopath}`Registry::llama_stack/providers/registry/`. Specify pip dependencies necessary.
 - Update any distribution {repopath}`Templates::llama_stack/templates/` build.yaml and run.yaml files if they should include your provider by default. Run {repopath}`llama_stack/scripts/distro_codegen.py` if necessary.
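The registry step in the guide above is the one that usually needs the most context, so here is a rough, hypothetical sketch of what a registry entry for a new provider might look like. This is not code from this commit: the import path, spec classes, field names, module paths, and provider names below are assumptions modeled on the general shape of llama_stack's provider registry and should be checked against the actual datatypes in the repository.

```python
# Hypothetical sketch of a provider registry entry.
# Class names, fields, and module paths are assumptions modeled on
# llama_stack's provider registry; verify against the repo before use.
from typing import List

from llama_stack.providers.datatypes import (  # assumed import path
    AdapterSpec,
    Api,
    InlineProviderSpec,
    ProviderSpec,
    remote_provider_spec,
)


def available_providers() -> List[ProviderSpec]:
    return [
        # Inline provider: runs in-process, so it declares its own pip dependencies.
        InlineProviderSpec(
            api=Api.inference,
            provider_type="inline::my-provider",  # hypothetical name
            pip_packages=["my-inference-lib"],  # hypothetical dependency
            module="llama_stack.providers.inline.inference.my_provider",
            config_class="llama_stack.providers.inline.inference.my_provider.MyProviderConfig",
        ),
        # Remote provider: makes requests to an external service via an adapter.
        remote_provider_spec(
            api=Api.inference,
            adapter=AdapterSpec(
                adapter_type="my-remote-service",  # hypothetical name
                pip_packages=["my-remote-client"],  # hypothetical dependency
                module="llama_stack.providers.remote.inference.my_remote_service",
                config_class="llama_stack.providers.remote.inference.my_remote_service.MyRemoteConfig",
            ),
        ),
    ]
```

Once an entry like this exists, the remaining steps from the guide apply: reference the provider in any distribution build.yaml and run.yaml templates that should include it by default, and rerun llama_stack/scripts/distro_codegen.py if needed.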