mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-03 19:57:35 +00:00)

docs: fix broken links

parent 59127a75f9
commit d8ae3198bd

22 changed files with 49 additions and 62 deletions

@@ -190,4 +190,4 @@ The Scoring API works closely with the [Evaluation](./evaluation.mdx) API to pro
 - Check out the [Evaluation](./evaluation.mdx) guide for running complete evaluations
 - See the [Building Applications - Evaluation](../building_applications/evals.mdx) guide for application examples
 - Review the [Evaluation Reference](../references/evals_reference/) for comprehensive scoring function usage
-- Explore the [Evaluation Concepts](../concepts/evaluation_concepts.mdx) for detailed conceptual information
+- Explore the [Evaluation Concepts](../concepts/evaluation_concepts) for detailed conceptual information

@@ -8,7 +8,7 @@ sidebar_position: 7
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-This guide walks you through the process of evaluating an LLM application built using Llama Stack. For detailed API reference, check out the [Evaluation Reference](/docs/references/evals-reference) guide that covers the complete set of APIs and developer experience flow.
+This guide walks you through the process of evaluating an LLM application built using Llama Stack. For detailed API reference, check out the [Evaluation Reference](../references/evals_reference/) guide that covers the complete set of APIs and developer experience flow.
 
 :::tip[Interactive Examples]
 Check out our [Colab notebook](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing) for working examples with evaluations, or try the [Getting Started notebook](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb).

@@ -251,6 +251,6 @@ results = client.scoring.score(
 
 - **[Agents](./agent)** - Building agents for evaluation
 - **[Tools Integration](./tools)** - Using tools in evaluated agents
-- **[Evaluation Reference](/docs/references/evals-reference)** - Complete API reference for evaluations
+- **[Evaluation Reference](../references/evals_reference/)** - Complete API reference for evaluations
 - **[Getting Started Notebook](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)** - Interactive examples
 - **[Evaluation Examples](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing)** - Additional evaluation scenarios

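The hunk header above references `client.scoring.score(...)` from the scoring guide. As a point of reference, a minimal sketch of that call (assuming the `llama-stack-client` Python SDK; the row fields and scoring-function ID are illustrative):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Illustrative rows; real evaluations usually pull these from a registered dataset.
rows = [
    {
        "input_query": "What is the capital of France?",
        "generated_answer": "Paris",
        "expected_answer": "Paris",
    }
]

# Map each scoring function ID to its (optional) parameters.
results = client.scoring.score(
    input_rows=rows,
    scoring_functions={"basic::subset_of": None},
)
print(results.results)
```
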
@@ -77,7 +77,7 @@ Build production-ready systems with:
 
 ## Related Resources
 
-- **[Getting Started](/docs/getting-started/quickstart)** - Basic setup and concepts
+- **[Getting Started](/docs/getting_started/quickstart)** - Basic setup and concepts
 - **[Providers](/docs/providers/)** - Available AI service providers
 - **[Distributions](/docs/distributions/)** - Pre-configured deployment packages
 - **[API Reference](/docs/api/llama-stack-specification)** - Complete API documentation

@@ -291,7 +291,7 @@ llama stack run meta-reference
 
 ## Related Resources
 
-- **[Getting Started Guide](/docs/getting-started/quickstart)** - Complete setup and introduction
+- **[Getting Started Guide](../getting_started/quickstart)** - Complete setup and introduction
 - **[Core Concepts](/docs/concepts)** - Understanding Llama Stack fundamentals
 - **[Agents](./agent)** - Building intelligent agents
 - **[RAG (Retrieval Augmented Generation)](./rag)** - Knowledge-enhanced applications

@@ -13,7 +13,7 @@ import TabItem from '@theme/TabItem';
 Llama Stack (LLS) provides two different APIs for building AI applications with tool calling capabilities: the **Agents API** and the **OpenAI Responses API**. While both enable AI systems to use tools and maintain full conversation history, they serve different use cases and have distinct characteristics.
 
 :::note
-**Note:** For simple and basic inferencing, you may want to use the [Chat Completions API](/docs/providers/openai-compatibility#chat-completions) directly, before progressing to Agents or Responses API.
+**Note:** For simple and basic inferencing, you may want to use the [Chat Completions API](../providers/openai#chat-completions) directly, before progressing to Agents or Responses API.
 :::
 
 ## Overview

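For the Chat Completions note in the hunk above, a minimal sketch of a direct call, with no agent loop or tool calling (assuming the `llama-stack-client` Python SDK; the model ID is illustrative):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Simple, stateless inference: no turns, no tools, no persisted sessions.
response = client.chat.completions.create(
    model="llama3.2:3b",  # illustrative; use a model registered on your server
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```
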
@@ -217,5 +217,5 @@ Use this framework to choose the right API for your use case:
 - **[Agents](./agent)** - Understanding the Agents API fundamentals
 - **[Agent Execution Loop](./agent_execution_loop)** - How agents process turns and steps
 - **[Tools Integration](./tools)** - Adding capabilities to both APIs
-- **[OpenAI Compatibility](/docs/providers/openai-compatibility)** - Using OpenAI-compatible endpoints
+- **[OpenAI Compatibility](../providers/openai)** - Using OpenAI-compatible endpoints
 - **[Safety Guardrails](./safety)** - Implementing safety measures in agents

@@ -37,7 +37,7 @@ The list of open-benchmarks we currently support:
 - [SimpleQA](https://openai.com/index/introducing-simpleqa/): Benchmark designed to assess models' ability to answer short, fact-seeking questions.
 - [MMMU](https://arxiv.org/abs/2311.16502) (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI): Benchmark designed to evaluate multimodal models.
 
-You can follow this [contributing guide](../references/evals_reference.mdx#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack
+You can follow this [contributing guide](../references/evals_reference/#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack
 
 ### Run evaluation on open-benchmarks via CLI
 

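The section heading above introduces running open-benchmarks via the CLI. A hedged sketch of such an invocation (the benchmark ID, model ID, and exact flag spellings are illustrative; check `llama-stack-client eval --help` for your version):

```bash
# Illustrative; confirm the subcommand and flag names against your CLI version.
llama-stack-client eval run-benchmark "meta-reference-mmlu" \
  --model-id meta-llama/Llama-3.3-70B-Instruct \
  --output-dir ./eval_results \
  --num-examples 10
```
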
@@ -75,4 +75,4 @@ evaluation results over there.
 
 - Check out our Colab notebook on working examples with running benchmark evaluations [here](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb#scrollTo=mxLCsP4MvFqP).
 - Check out our [Building Applications - Evaluation](../building_applications/evals.mdx) guide for more details on how to use the Evaluation APIs to evaluate your applications.
-- Check out our [Evaluation Reference](../references/evals_reference.mdx) for more details on the APIs.
+- Check out our [Evaluation Reference](../references/evals_reference/) for more details on the APIs.

@@ -12,7 +12,7 @@ Given Llama Stack's service-oriented philosophy, a few concepts and workflows ar
 This section covers the fundamental concepts of Llama Stack:
 
 - **[Architecture](architecture.mdx)** - Learn about Llama Stack's architectural design and principles
-- **[APIs](apis)** - Understanding the core APIs and their stability levels
+- **[APIs](/docs/concepts/apis/)** - Understanding the core APIs and their stability levels
   - [API Overview](apis/index.mdx) - Core APIs available in Llama Stack
   - [API Providers](apis/api_providers.mdx) - How providers implement APIs
   - [External APIs](apis/external.mdx) - External APIs available in Llama Stack

@@ -2,7 +2,7 @@
 title: Resources
 description: Resource federation and registration in Llama Stack
 sidebar_label: Resources
-sidebar_position: 6
+sidebar_position: 4
 ---
 
 # Resources

@@ -148,7 +148,7 @@ As a general guideline:
   that describes the configuration. These descriptions will be used to generate the provider
   documentation.
 * When possible, use keyword arguments only when calling functions.
-* Llama Stack utilizes [custom Exception classes](llama_stack/apis/common/errors.py) for certain Resources that should be used where applicable.
+* Llama Stack utilizes custom Exception classes for certain Resources that should be used where applicable.
 
 ### License
 By contributing to Llama, you agree that your contributions will be licensed

@@ -212,35 +212,22 @@ The generated API schema will be available in `docs/static/`. Make sure to revie
 ## Adding a New Provider
 
 See:
-- [Adding a New API Provider Page](new_api_provider.md) which describes how to add new API providers to the Stack.
-- [Vector Database Page](new_vector_database.md) which describes how to add a new vector database with Llama Stack.
-- [External Provider Page](../providers/external/index.md) which describes how to add external providers to the Stack.
+- [Adding a New API Provider Page](./new_api_provider.mdx) which describes how to add new API providers to the Stack.
+- [Vector Database Page](./new_vector_database.mdx) which describes how to add a new vector database with Llama Stack.
+- [External Provider Page](/docs/providers/external/) which describes how to add external providers to the Stack.
 
-```{toctree}
-:maxdepth: 1
-:hidden:
-
-new_api_provider
-new_vector_database
-```
 
 ## Testing
 
-```{include} ../../../tests/README.md
-```
+See the [Testing README](https://github.com/meta-llama/llama-stack/blob/main/tests/README.md) for detailed testing information.
 
 ## Advanced Topics
 
 For developers who need deeper understanding of the testing system internals:
 
-```{toctree}
-:maxdepth: 1
-
-testing/record-replay
-```
+- [Record-Replay Testing](./testing/record-replay.mdx)
 
 ### Benchmarking
 
-```{include} ../../../benchmarking/k8s-benchmark/README.md
-```
+See the [Benchmarking README](https://github.com/meta-llama/llama-stack/blob/main/benchmarking/k8s-benchmark/README.md) for benchmarking information.

@@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem';
 This guide will walk you through the process of adding a new API provider to Llama Stack.
 
 
-- Begin by reviewing the [core concepts](../concepts/index.md) of Llama Stack and choose the API your provider belongs to (Inference, Safety, VectorIO, etc.)
+- Begin by reviewing the [core concepts](../concepts/) of Llama Stack and choose the API your provider belongs to (Inference, Safety, VectorIO, etc.)
 - Determine the provider type ([Remote](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/remote) or [Inline](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/inline)). Remote providers make requests to external services, while inline providers execute their implementation locally.
 - Add your provider to the appropriate [Registry](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/registry/). Specify the pip dependencies necessary.
 - Update any distribution [Templates](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/distributions/) `build.yaml` and `run.yaml` files if they should include your provider by default. Run [./scripts/distro_codegen.py](https://github.com/meta-llama/llama-stack/blob/main/scripts/distro_codegen.py) if necessary. Note that `distro_codegen.py` will fail if the new provider causes any distribution template to attempt to import provider-specific dependencies. This usually means the distribution's `get_distribution_template()` code path should only import any necessary Config or model alias definitions from each provider and not the provider's actual implementation.

@@ -219,6 +219,6 @@ kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- curl http:
 
 ## Related Resources
 
-- **[Deployment Overview](./index)** - Overview of deployment options
+- **[Deployment Overview](/docs/deploying/)** - Overview of deployment options
 - **[Distributions](/docs/distributions)** - Understanding Llama Stack distributions
 - **[Configuration](/docs/distributions/configuration)** - Detailed configuration options

@@ -251,7 +251,7 @@ directory or a git repository (git must be installed on the build environment).
 llama stack build --config my-external-stack.yaml
 ```
 
-For more information on external providers, including directory structure, provider types, and implementation requirements, see the [External Providers documentation](../providers/external.md).
+For more information on external providers, including directory structure, provider types, and implementation requirements, see the [External Providers documentation](../providers/external/).
 </TabItem>
 <TabItem value="container" label="Building Container">
 

@@ -206,7 +206,7 @@ models:
   provider_model_id: null
   model_type: llm
 ```
-A Model is an instance of a "Resource" (see [Concepts](../concepts/index)) and is associated with a specific inference provider (in this case, the provider with identifier `ollama`). This is an instance of a "pre-registered" model. While we always encourage the clients to register models before using them, some Stack servers may come up with a list of "already known and available" models.
+A Model is an instance of a "Resource" (see [Concepts](../concepts/)) and is associated with a specific inference provider (in this case, the provider with identifier `ollama`). This is an instance of a "pre-registered" model. While we always encourage the clients to register models before using them, some Stack servers may come up with a list of "already known and available" models.
 
 What's with the `provider_model_id` field? This is an identifier for the model inside the provider's model catalog. Contrast it with `model_id`, which is the identifier for the same model for Llama Stack's purposes. For example, you may want to name "llama3.2:vision-11b" as "image_captioning_model" when you use it in your Stack interactions. When omitted, the server will set `provider_model_id` to be the same as `model_id`.
 

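To make the `model_id` vs `provider_model_id` distinction concrete, a small sketch of registering the alias described above (assuming the `llama-stack-client` Python SDK):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# "image_captioning_model" is the Stack-facing model_id; the ollama provider
# knows the same model by its catalog name "llama3.2:vision-11b".
client.models.register(
    model_id="image_captioning_model",
    provider_id="ollama",
    provider_model_id="llama3.2:vision-11b",
    model_type="llm",
)
```
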
@@ -33,7 +33,7 @@ Then, you can access the APIs like `models` and `inference` on the client and ca
 response = client.models.list()
 ```
 
-If you've created a [custom distribution](building_distro.md), you can also use the run.yaml configuration file directly:
+If you've created a [custom distribution](./building_distro), you can also use the run.yaml configuration file directly:
 
 ```python
 client = LlamaStackAsLibraryClient(config_path)

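A fuller sketch of the library-client flow this hunk edits (the import path is an assumption and has moved between releases; `config_path` points at your run.yaml):

```python
# Import path is an assumption; newer releases may expose this under
# llama_stack.core.library_client instead.
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("run.yaml")  # template name or path to run.yaml
client.initialize()

response = client.models.list()
print(response)
```
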
@@ -13,9 +13,9 @@ This section provides an overview of the distributions available in Llama Stack.
 
 ## Distribution Guides
 
-- **[Available Distributions](./list_of_distributions)** - Complete list and comparison of all distributions
-- **[Building Custom Distributions](./building_distro)** - Create your own distribution from scratch
-- **[Customizing Configuration](./customizing_run_yaml)** - Customize run.yaml for your needs
-- **[Starting Llama Stack Server](./starting_llama_stack_server)** - How to run distributions
-- **[Importing as Library](./importing_as_library)** - Use distributions in your code
-- **[Configuration Reference](./configuration)** - Configuration file format details
+- **[Available Distributions](./list_of_distributions.mdx)** - Complete list and comparison of all distributions
+- **[Building Custom Distributions](./building_distro.mdx)** - Create your own distribution from scratch
+- **[Customizing Configuration](./customizing_run_yaml.mdx)** - Customize run.yaml for your needs
+- **[Starting Llama Stack Server](./starting_llama_stack_server.mdx)** - How to run distributions
+- **[Importing as Library](./importing_as_library.mdx)** - Use distributions in your code
+- **[Configuration Reference](./configuration.mdx)** - Configuration file format details

@@ -62,7 +62,7 @@ docker pull llama-stack/distribution-meta-reference-gpu
 
 **Partners:** [Fireworks.ai](https://fireworks.ai) and [Together.xyz](https://together.xyz)
 
-**Guides:** [Remote-Hosted Endpoints](remote_hosted_distro/index)
+**Guides:** [Remote-Hosted Endpoints](./remote_hosted_distro/)
 
 ### 📱 Mobile Development
 

@@ -81,7 +81,7 @@ docker pull llama-stack/distribution-meta-reference-gpu
 - You need custom configurations
 - You want to optimize for your specific use case
 
-**Guides:** [Building Custom Distributions](building_distro.md)
+**Guides:** [Building Custom Distributions](./building_distro)
 
 ## Detailed Documentation
 

@@ -131,4 +131,4 @@ graph TD
 3. **Configure your providers** with API keys or local models
 4. **Start building** with Llama Stack!
 
-For help choosing or troubleshooting, check our [Getting Started Guide](../getting_started/index.md) or [Community Support](https://github.com/llama-stack/llama-stack/discussions).
+For help choosing or troubleshooting, check our [Getting Started Guide](/docs/getting_started/quickstart) or [Community Support](https://github.com/llama-stack/llama-stack/discussions).

@@ -66,7 +66,7 @@ llama stack run starter --port 5050
 
 Ensure the Llama Stack server version is the same as the Kotlin SDK Library for maximum compatibility.
 
-Other inference providers: [Table](../../index.md#supported-llama-stack-implementations)
+Other inference providers: [Table](/docs/)
 
 How to set remote localhost in Demo App: [Settings](https://github.com/meta-llama/llama-stack-client-kotlin/tree/latest-release/examples/android_app#settings)
 

@@ -16,11 +16,11 @@ This is the simplest way to get started. Using Llama Stack as a library means yo
 
 ## Container:
 
-Another simple way to start interacting with Llama Stack is to just spin up a container (via Docker or Podman) which is pre-built with all the providers you need. We provide a number of pre-built images so you can start a Llama Stack server instantly. You can also build your own custom container. Which distribution to choose depends on the hardware you have. See [Selection of a Distribution](selection) for more details.
+Another simple way to start interacting with Llama Stack is to just spin up a container (via Docker or Podman) which is pre-built with all the providers you need. We provide a number of pre-built images so you can start a Llama Stack server instantly. You can also build your own custom container. Which distribution to choose depends on the hardware you have. See [Selection of a Distribution](./list_of_distributions) for more details.
 
 ## Kubernetes:
 
-If you have built a container image and want to deploy it in a Kubernetes cluster instead of starting the Llama Stack server locally, see the [Kubernetes Deployment Guide](kubernetes_deployment) for more details.
+If you have built a container image and want to deploy it in a Kubernetes cluster instead of starting the Llama Stack server locally, see the [Kubernetes Deployment Guide](../deploying/kubernetes_deployment) for more details.
 
 
 ```{toctree}

@@ -18,7 +18,7 @@ In Llama Stack, we provide a server exposing multiple APIs. These APIs are backe
 Llama Stack is a stateful service with REST APIs to support seamless transition of AI applications across different environments. The server can be run in a variety of ways, including as a standalone binary, Docker container, or hosted service. You can build and test using a local server first and deploy to a hosted endpoint for production.
 
 In this guide, we'll walk through how to build a RAG agent locally using Llama Stack with [Ollama](https://ollama.com/)
-as the inference [provider](../providers/index.md#inference) for a Llama Model.
+as the inference [provider](/docs/providers/inference/) for a Llama Model.
 
 ### Step 1: Installation and Setup
 

@@ -60,8 +60,8 @@ Llama Stack is a server that exposes multiple APIs, you connect with it using th
 <TabItem value="venv" label="Using venv">
 You can use Python to build and run the Llama Stack server, which is useful for testing and development.
 
-Llama Stack uses a [YAML configuration file](../distributions/configuration.md) to specify the stack setup,
-which defines the providers and their settings. The generated configuration serves as a starting point that you can [customize for your specific needs](../distributions/customizing_run_yaml.md).
+Llama Stack uses a [YAML configuration file](../distributions/configuration) to specify the stack setup,
+which defines the providers and their settings. The generated configuration serves as a starting point that you can [customize for your specific needs](../distributions/customizing_run_yaml).
 Now let's build and run the Llama Stack config for Ollama.
 We use `starter` as the template. By default all providers are disabled, so you must enable Ollama by passing environment variables.

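For the environment-variable step mentioned above, a hedged example (assuming the starter template reads `OLLAMA_URL`; check your template's run.yaml for the exact variable names it substitutes):

```bash
# OLLAMA_URL is an assumption based on common starter templates.
OLLAMA_URL=http://localhost:11434 \
  llama stack build --distro starter --image-type venv --run
```
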
@@ -73,7 +73,7 @@ llama stack build --distro starter --image-type venv --run
 You can use a container image to run the Llama Stack server. We provide several container images for the server
 component that work with different inference providers out of the box. For this guide, we will use
 `llamastack/distribution-starter` as the container image. If you'd like to build your own image or customize the
-configurations, please check out [this guide](../distributions/building_distro.md).
+configurations, please check out [this guide](../distributions/building_distro).
 First, let's set up some environment variables and create a local directory to mount into the container's file system.
 ```bash
 export LLAMA_STACK_PORT=8321

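Continuing the container setup above, a hedged `docker run` sketch (the image tag and provider variables are illustrative; adjust the Ollama URL for your host networking):

```bash
export LLAMA_STACK_PORT=8321
mkdir -p ~/.llama  # local directory to mount into the container's file system

# Illustrative invocation; exact flags and env vars may differ for your setup.
docker run -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-starter \
  --port $LLAMA_STACK_PORT \
  --env OLLAMA_URL=http://host.docker.internal:11434
```
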
@@ -145,7 +145,7 @@ pip install llama-stack-client
 </TabItem>
 </Tabs>
 
-Now let's use the `llama-stack-client` [CLI](../references/llama_stack_client_cli_reference.md) to check the
+Now let's use the `llama-stack-client` [CLI](../references/llama_stack_client_cli_reference) to check the
 connectivity to the server.
 
 ```bash

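A short sketch of the connectivity check described above (assuming the `llama-stack-client` CLI; `none` as the API key is a placeholder for a local, unauthenticated server):

```bash
# Point the CLI at the local server, then list the models it serves.
llama-stack-client configure --endpoint http://localhost:8321 --api-key none
llama-stack-client models list
```
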
@@ -216,8 +216,8 @@ OpenAIChatCompletion(
 
 ### Step 4: Run the Demos
 
-Note that these demos show the [Python Client SDK](../references/python_sdk_reference/index).
-Other SDKs are also available, please refer to the [Client SDK](../index.md#client-sdks) list for the complete options.
+Note that these demos show the [Python Client SDK](../references/python_sdk_reference/).
+Other SDKs are also available, please refer to the [Client SDK](/docs/) list for the complete options.
 
 <Tabs>
 <TabItem value="inference" label="Basic Inference">

@@ -538,4 +538,4 @@ uv run python rag_agent.py
 
 **You're Ready to Build Your Own Apps!**
 
-Congrats! 🥳 Now you're ready to [build your own Llama Stack applications](../building_applications/index)! 🚀
+Congrats! 🥳 Now you're ready to [build your own Llama Stack applications](../building_applications/)! 🚀

@@ -140,7 +140,7 @@ If you are getting a **401 Client Error** from HuggingFace for the **all-MiniLM-
 ### Next Steps
 
 Now you're ready to dive deeper into Llama Stack!
-- Explore the [Detailed Tutorial](/docs/detailed_tutorial).
+- Explore the [Detailed Tutorial](./detailed_tutorial).
 - Try the [Getting Started Notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb).
 - Browse more [Notebooks on GitHub](https://github.com/meta-llama/llama-stack/tree/main/docs/notebooks).
 - Learn about Llama Stack [Concepts](/docs/concepts).

docs/docs/providers/external/index.mdx (vendored, 4 changes)

@@ -7,5 +7,5 @@ Llama Stack supports external providers that live outside of the main codebase.
 
 ## External Provider Documentation
 
-- [Known External Providers](external-providers-list)
-- [Creating External Providers](external-providers-guide)
+- [Known External Providers](./external-providers-list.mdx)
+- [Creating External Providers](./external-providers-guide.mdx)

@@ -7,6 +7,6 @@ sidebar_position: 1
 
 # References
 
-- [Python SDK Reference](python_sdk_reference/index)
-- [Llama CLI](llama_cli_reference/index) for building and running your Llama Stack server
-- [Llama Stack Client CLI](llama_stack_client_cli_reference) for interacting with your Llama Stack server
+- [Python SDK Reference](/docs/references/python_sdk_reference/)
+- [Llama CLI](/docs/references/llama_cli_reference/) for building and running your Llama Stack server
+- [Llama Stack Client CLI](./llama_stack_client_cli_reference.md) for interacting with your Llama Stack server
