Charlie Doern 436f8ade9e feat: implement provider updating
add `v1/providers/`, which uses PUT to allow users to change their provider configuration

this is a follow-up to #1429 and related to #1359

a user can call something like:

`llama_stack_client.providers.update(api="inference", provider_id="ollama", provider_type="remote::ollama", config={'url': 'http://localhost:12345'})`

or

`llama-stack-client providers update inference ollama remote::ollama "{'url': 'http://localhost:12345'}"`

this API works by adding a `RequestMiddleware` to the server that inspects each request; when a user issues `PUT /v1/providers`, the routes are re-registered with the re-initialized provider configurations and methods
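for illustration, here is a minimal sketch of what such a middleware could look like, assuming a Starlette-style ASGI server; `re_register_routes` and the exact path check are hypothetical stand-ins, not the actual implementation:

```python
# hypothetical sketch only -- helper names and path handling are illustrative
from starlette.applications import Starlette
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request


async def re_register_routes(app: Starlette) -> None:
    """Placeholder: rebuild the route table so it dispatches to the
    re-initialized provider implementations."""


class ProviderUpdateMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        response = await call_next(request)
        # if a provider config was just updated successfully, refresh the
        # routes so they point at the newly configured provider methods
        if (
            request.method == "PUT"
            and request.url.path.startswith("/v1/providers")
            and response.status_code < 400
        ):
            await re_register_routes(request.app)
        return response
```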

for the client, `self.impls` is updated to hold the proper methods and configurations
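conceptually, the library client keeps a mapping from API to provider implementation; a toy illustration (names hypothetical, not the real client code):

```python
# toy illustration -- names are hypothetical, not the real client internals
class LibraryClient:
    def __init__(self) -> None:
        # maps an API name (e.g. "inference") to its provider implementation
        self.impls: dict[str, object] = {}

    def on_provider_updated(self, api: str, new_impl: object) -> None:
        # swap in the re-initialized implementation so subsequent calls
        # use the updated provider configuration
        self.impls[api] = new_impl
```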

this depends on a client PR, so CI will fail until that lands; the tests succeeded locally

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-07-01 10:04:10 -04:00
| Name | Last commit | Date |
| --- | --- | --- |
| `_static` | feat: implement provider updating | 2025-07-01 10:04:10 -04:00 |
| `notebooks` | feat: Add Nvidia e2e beginner notebook and tool calling notebook (#1964) | 2025-06-16 11:29:01 -04:00 |
| `openapi_generator` | feat: Add webmethod for deleting openai responses (#2160) | 2025-06-30 11:28:02 +02:00 |
| `resources` | Several documentation fixes and fix link to API reference | 2025-02-04 14:00:43 -08:00 |
| `source` | docs: specify the ability to train non-Llama models (#2573) | 2025-07-01 19:29:06 +05:30 |
| `zero_to_hero_guide` | fix: update zero-to-hero guide for modern llama stack (#2555) | 2025-06-30 18:09:33 -07:00 |
| `conftest.py` | fix: sleep after notebook test | 2025-03-23 14:03:35 -07:00 |
| `contbuild.sh` | Fix broken links with docs | 2024-11-22 20:42:17 -08:00 |
| `dog.jpg` | Support for Llama3.2 models and Swift SDK (#98) | 2024-09-25 10:29:58 -07:00 |
| `getting_started.ipynb` | chore: remove last instances of code-interpreter provider (#2143) | 2025-05-12 10:54:43 -07:00 |
| `getting_started_llama4.ipynb` | docs: llama4 getting started nb (#1878) | 2025-04-06 18:51:34 -07:00 |
| `getting_started_llama_api.ipynb` | feat: add api.llama provider, llama-guard-4 model (#2058) | 2025-04-29 10:07:41 -07:00 |
| `license_header.txt` | Initial commit | 2024-07-23 08:32:33 -07:00 |
| `make.bat` | feat(pre-commit): enhance pre-commit hooks with additional checks (#2014) | 2025-04-30 11:35:49 -07:00 |
| `Makefile` | first version of readthedocs (#278) | 2024-10-22 10:15:58 +05:30 |
| `readme.md` | chore: use groups when running commands (#2298) | 2025-05-28 09:13:16 -07:00 |

# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our ReadTheDocs page.

## Render locally

From the llama-stack root directory, run the following command to render the docs locally:

```bash
uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all
```

You can then open the docs in your browser at http://localhost:8000.

## Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: