Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-08-02 16:54:42 +00:00)

Commit fa4ce70aed: Merge branch 'meta-llama:main' into main
41 changed files with 603 additions and 150 deletions

README.md (16 lines changed)
@@ -93,12 +93,12 @@ Additionally, we have designed every element of the Stack such that APIs as well
 | **Distribution** | **Llama Stack Docker** | Start This Distribution |
 |:----------------: |:------------------------------------------: |:-----------------------: |
-| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) |
+| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/meta-reference-gpu.html) |
-| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) |
+| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) |
-| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) |
+| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/ollama.html) |
-| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) |
+| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/tgi.html) |
-| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html) |
+| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/together.html) |
-| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html) |
+| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/fireworks.html) |

 ## Installation

@@ -128,7 +128,7 @@ You have two ways to install this repository:
 Please checkout our [Documentation](https://llama-stack.readthedocs.io/en/latest/index.html) page for more details.

-* [CLI reference](https://llama-stack.readthedocs.io/en/latest/cli_reference/index.html)
+* [CLI reference](https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/index.html)
 * Guide using `llama` CLI to work with Llama models (download, study prompts), and building/starting a Llama Stack distribution.
 * [Getting Started](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html)
 * Quick guide to start a Llama Stack server.

@@ -136,7 +136,7 @@ Please checkout our [Documentation](https://llama-stack.readthedocs.io/en/latest
 * The complete Llama Stack lesson [Colab notebook](https://colab.research.google.com/drive/1dtVmxotBsI4cGZQNsJRYPrLiDeT0Wnwt) of the new [Llama 3.2 course on Deeplearning.ai](https://learn.deeplearning.ai/courses/introducing-multimodal-llama-3-2/lesson/8/llama-stack).
 * A [Zero-to-Hero Guide](https://github.com/meta-llama/llama-stack/tree/main/docs/zero_to_hero_guide) that guide you through all the key components of llama stack with code samples.
 * [Contributing](CONTRIBUTING.md)
-* [Adding a new API Provider](https://llama-stack.readthedocs.io/en/latest/api_providers/new_api_provider.html) to walk-through how to add a new API provider.
+* [Adding a new API Provider](https://llama-stack.readthedocs.io/en/latest/contributing/new_api_provider.html) to walk-through how to add a new API provider.

 ## Llama Stack Client SDKs

@@ -8,7 +8,7 @@ This guide contains references to walk you through adding a new API provider.
 - {repopath}`Remote Providers::llama_stack/providers/remote`
 - {repopath}`Inline Providers::llama_stack/providers/inline`

-3. [Build a Llama Stack distribution](https://llama-stack.readthedocs.io/en/latest/distribution_dev/building_distro.html) with your API provider.
+3. [Build a Llama Stack distribution](https://llama-stack.readthedocs.io/en/latest/distributions/building_distro.html) with your API provider.
 4. Test your code!

 ## Testing your newly added API providers

@@ -36,7 +36,7 @@ The following environment variables can be configured:

 ## Prerequisite: Downloading Models

-Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
+Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.

 ```
 $ ls ~/.llama/checkpoints

@@ -36,7 +36,7 @@ The following environment variables can be configured:

 ## Prerequisite: Downloading Models

-Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
+Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.

 ```
 $ ls ~/.llama/checkpoints

@@ -118,9 +118,9 @@ llama stack run ./run-with-safety.yaml \

 ### (Optional) Update Model Serving Configuration

-> [!NOTE]
-> Please check the [OLLAMA_SUPPORTED_MODELS](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers.remote/inference/ollama/ollama.py) for the supported Ollama models.
+```{note}
+Please check the [model_aliases](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/ollama.py#L45) variable for supported Ollama models.
+```

 To serve a new model with `ollama`
 ```bash

@@ -27,8 +27,6 @@ $ llama-stack-client configure
 Done! You can now use the Llama Stack Client CLI with endpoint http://localhost:5000
 ```

-## Provider Commands
-
 ### `llama-stack-client providers list`
 ```bash
 $ llama-stack-client providers list

@@ -119,8 +117,25 @@ $ llama-stack-client memory_banks list
 +--------------+----------------+--------+-------------------+------------------------+--------------------------+
 ```

-## Shield Management
+### `llama-stack-client memory_banks register`
+```bash
+$ llama-stack-client memory_banks register <memory-bank-id> --type <type> [--provider-id <provider-id>] [--provider-memory-bank-id <provider-memory-bank-id>] [--chunk-size <chunk-size>] [--embedding-model <embedding-model>] [--overlap-size <overlap-size>]
+```
+
+Options:
+- `--type`: Required. Type of memory bank. Choices: "vector", "keyvalue", "keyword", "graph"
+- `--provider-id`: Optional. Provider ID for the memory bank
+- `--provider-memory-bank-id`: Optional. Provider's memory bank ID
+- `--chunk-size`: Optional. Chunk size in tokens (for vector type). Default: 512
+- `--embedding-model`: Optional. Embedding model (for vector type). Default: "all-MiniLM-L6-v2"
+- `--overlap-size`: Optional. Overlap size in tokens (for vector type). Default: 64
+
+### `llama-stack-client memory_banks unregister`
+```bash
+$ llama-stack-client memory_banks unregister <memory-bank-id>
+```
+
+## Shield Management
 ### `llama-stack-client shields list`
 ```bash
 $ llama-stack-client shields list

@@ -134,16 +149,51 @@ $ llama-stack-client shields list
 +--------------+----------+----------------+-------------+
 ```

-## Evaluation Tasks
+### `llama-stack-client shields register`
+```bash
+$ llama-stack-client shields register --shield-id <shield-id> [--provider-id <provider-id>] [--provider-shield-id <provider-shield-id>] [--params <params>]
+```
+
+Options:
+- `--shield-id`: Required. ID of the shield
+- `--provider-id`: Optional. Provider ID for the shield
+- `--provider-shield-id`: Optional. Provider's shield ID
+- `--params`: Optional. JSON configuration parameters for the shield
+
+## Eval Task Management

 ### `llama-stack-client eval_tasks list`
 ```bash
-$ llama-stack-client eval run_benchmark <task_id1> <task_id2> --num-examples 10 --output-dir ./ --eval-task-config ~/eval_task_config.json
+$ llama-stack-client eval_tasks list
 ```

-where `eval_task_config.json` is the path to the eval task config file in JSON format. An example eval_task_config
+### `llama-stack-client eval_tasks register`
+```bash
+$ llama-stack-client eval_tasks register --eval-task-id <eval-task-id> --dataset-id <dataset-id> --scoring-functions <function1> [<function2> ...] [--provider-id <provider-id>] [--provider-eval-task-id <provider-eval-task-id>] [--metadata <metadata>]
 ```
-$ cat ~/eval_task_config.json
+
+Options:
+- `--eval-task-id`: Required. ID of the eval task
+- `--dataset-id`: Required. ID of the dataset to evaluate
+- `--scoring-functions`: Required. One or more scoring functions to use for evaluation
+- `--provider-id`: Optional. Provider ID for the eval task
+- `--provider-eval-task-id`: Optional. Provider's eval task ID
+- `--metadata`: Optional. Metadata for the eval task in JSON format
+
+## Eval execution
+
+### `llama-stack-client eval run-benchmark`
+```bash
+$ llama-stack-client eval run-benchmark <eval-task-id1> [<eval-task-id2> ...] --eval-task-config <config-file> --output-dir <output-dir> [--num-examples <num>] [--visualize]
+```
+
+Options:
+- `--eval-task-config`: Required. Path to the eval task config file in JSON format
+- `--output-dir`: Required. Path to the directory where evaluation results will be saved
+- `--num-examples`: Optional. Number of examples to evaluate (useful for debugging)
+- `--visualize`: Optional flag. If set, visualizes evaluation results after completion
+
+Example eval_task_config.json:
+```json
 {
   "type": "benchmark",
   "eval_candidate": {

@@ -160,3 +210,14 @@ $ cat ~/eval_task_config.json
     }
   }
 ```
+
+### `llama-stack-client eval run-scoring`
+```bash
+$ llama-stack-client eval run-scoring <eval-task-id> --eval-task-config <config-file> --output-dir <output-dir> [--num-examples <num>] [--visualize]
+```
+
+Options:
+- `--eval-task-config`: Required. Path to the eval task config file in JSON format
+- `--output-dir`: Required. Path to the directory where scoring results will be saved
+- `--num-examples`: Optional. Number of examples to evaluate (useful for debugging)
+- `--visualize`: Optional flag. If set, visualizes scoring results after completion

@@ -13,13 +13,13 @@ Based on your developer needs, below are references to guides to help you get st
 * Developer Need: I want to start a local Llama Stack server with my GPU using meta-reference implementations.
 * Effort: 5min
 * Guide:
-  - Please see our [meta-reference-gpu](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/meta-reference-gpu.html) on starting up a meta-reference Llama Stack server.
+  - Please see our [meta-reference-gpu](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/meta-reference-gpu.html) on starting up a meta-reference Llama Stack server.

 ### Llama Stack Server with Remote Providers
 * Developer need: I want a Llama Stack distribution with a remote provider.
 * Effort: 10min
 * Guide
-  - Please see our [Distributions Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/index.html) on starting up distributions with remote providers.
+  - Please see our [Distributions Guide](https://llama-stack.readthedocs.io/en/latest/concepts/index.html#distributions) on starting up distributions with remote providers.


 ### On-Device (iOS) Llama Stack

@@ -38,4 +38,4 @@ Based on your developer needs, below are references to guides to help you get st
 * Developer Need: I want to add a new API provider to Llama Stack.
 * Effort: 3hr
 * Guide
-  - Please see our [Adding a New API Provider](https://llama-stack.readthedocs.io/en/latest/api_providers/new_api_provider.html) guide for adding a new API provider.
+  - Please see our [Adding a New API Provider](https://llama-stack.readthedocs.io/en/latest/contributing/new_api_provider.html) guide for adding a new API provider.

@@ -231,7 +231,7 @@
    "source": [
     "Thanks for checking out this notebook! \n",
     "\n",
-    "The next one will be a guide on [Prompt Engineering](./01_Prompt_Engineering101.ipynb), please continue learning!"
+    "The next one will be a guide on [Prompt Engineering](./02_Prompt_Engineering101.ipynb), please continue learning!"
    ]
   }
  ],

@@ -276,7 +276,7 @@
    "source": [
     "Thanks for checking out this notebook! \n",
     "\n",
-    "The next one will be a guide on how to chat with images, continue to the notebook [here](./02_Image_Chat101.ipynb). Happy learning!"
+    "The next one will be a guide on how to chat with images, continue to the notebook [here](./03_Image_Chat101.ipynb). Happy learning!"
    ]
   }
  ],

@@ -175,7 +175,7 @@
    "source": [
     "Thanks for checking out this notebook! \n",
     "\n",
-    "The next one in the series will teach you one of the favorite applications of Large Language Models: [Tool Calling](./03_Tool_Calling101.ipynb). Enjoy!"
+    "The next one in the series will teach you one of the favorite applications of Large Language Models: [Tool Calling](./04_Tool_Calling101.ipynb). Enjoy!"
    ]
   }
  ],

@@ -373,7 +373,7 @@
    "source": [
     "Awesome, now we can embed all our notes with Llama-stack and ask it about the meaning of life :)\n",
     "\n",
-    "Next up, we will learn about the safety features and how to use them: [notebook link](./05_Safety101.ipynb)"
+    "Next up, we will learn about the safety features and how to use them: [notebook link](./06_Safety101.ipynb)."
    ]
   }
  ],

@@ -107,7 +107,7 @@
    "source": [
     "Thanks for leaning about the Safety API of Llama-Stack. \n",
     "\n",
-    "Finally, we learn about the Agents API, [here](./06_Agents101.ipynb)"
+    "Finally, we learn about the Agents API, [here](./07_Agents101.ipynb)."
    ]
   }
  ],

@@ -1,37 +1,21 @@
 # Llama Stack: from Zero to Hero

-Llama-Stack allows you to configure your distribution from various providers, allowing you to focus on going from zero to production super fast.
+Llama Stack defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented in the form of interoperable APIs with a broad set of Providers providing their implementations. These building blocks are assembled into Distributions which are easy for developers to get from zero to production.

-This guide will walk you through how to build a local distribution, using Ollama as an inference provider.
+This guide will walk you through an end-to-end workflow with Llama Stack with Ollama as the inference provider and ChromaDB as the memory provider. Please note the steps for configuring your provider and distribution will vary a little depending on the services you use. However, the user experience will remain universal - this is the power of Llama-Stack.

-We also have a set of notebooks walking you through how to use Llama-Stack APIs:
-
-- Inference
-- Prompt Engineering
-- Chatting with Images
-- Tool Calling
-- Memory API for RAG
-- Safety API
-- Agentic API
-
-Below, we will learn how to get started with Ollama as an inference provider, please note the steps for configuring your provider will vary a little depending on the service. However, the user experience will remain universal-this is the power of Llama-Stack.
-
-Prototype locally using Ollama, deploy to the cloud with your favorite provider or own deployment. Use any API from any provider while focussing on development.
-
-# Ollama Quickstart Guide
-
-This guide will walk you through setting up an end-to-end workflow with Llama Stack with ollama, enabling you to perform text generation using the `Llama3.2-3B-Instruct` model. Follow these steps to get started quickly.
-
-If you're looking for more specific topics like tool calling or agent setup, we have a [Zero to Hero Guide](#next-steps) that covers everything from Tool Calling to Agents in detail. Feel free to skip to the end to explore the advanced topics you're interested in.
-
-> If you'd prefer not to set up a local server, explore our notebook on [tool calling with the Together API](Tool_Calling101_Using_Together's_Llama_Stack_Server.ipynb). This guide will show you how to leverage Together.ai's Llama Stack Server API, allowing you to get started with Llama Stack without the need for a locally built and running server.
+If you're looking for more specific topics, we have a [Zero to Hero Guide](#next-steps) that covers everything from Tool Calling to Agents in detail. Feel free to skip to the end to explore the advanced topics you're interested in.
+
+> If you'd prefer not to set up a local server, explore our notebook on [tool calling with the Together API](Tool_Calling101_Using_Together's_Llama_Stack_Server.ipynb). This notebook will show you how to leverage together.ai's Llama Stack Server API, allowing you to get started with Llama Stack without the need for a locally built and running server.

 ## Table of Contents
-1. [Setup ollama](#setup-ollama)
+1. [Setup and run ollama](#setup-ollama)
 2. [Install Dependencies and Set Up Environment](#install-dependencies-and-set-up-environment)
 3. [Build, Configure, and Run Llama Stack](#build-configure-and-run-llama-stack)
-4. [Run Ollama Model](#run-ollama-model)
-5. [Next Steps](#next-steps)
+4. [Test with llama-stack-client CLI](#test-with-llama-stack-client-cli)
+5. [Test with curl](#test-with-curl)
+6. [Test with Python](#test-with-python)
+7. [Next Steps](#next-steps)

 ---

@@ -39,107 +23,137 @@ If you're looking for more specific topics like tool calling or agent setup, we

 1. **Download Ollama App**:
 - Go to [https://ollama.com/download](https://ollama.com/download).
-- Download and unzip `Ollama-darwin.zip`.
+- Follow instructions based on the OS you are on. For example, if you are on a Mac, download and unzip `Ollama-darwin.zip`.
 - Run the `Ollama` application.

 1. **Download the Ollama CLI**:
-- Ensure you have the `ollama` command line tool by downloading and installing it from the same website.
+Ensure you have the `ollama` command line tool by downloading and installing it from the same website.

 1. **Start ollama server**:
-- Open the terminal and run:
+Open the terminal and run:
 ```
 ollama serve
 ```

 1. **Run the model**:
-- Open the terminal and run:
+Open the terminal and run:
 ```bash
-ollama run llama3.2:3b-instruct-fp16
+ollama run llama3.2:3b-instruct-fp16 --keepalive -1m
 ```
-**Note**: The supported models for llama stack for now is listed in [here](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/ollama.py#L43)
+**Note**:
+- The supported models for llama stack for now is listed in [here](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/inference/ollama/ollama.py#L43)
+- `keepalive -1m` is used so that ollama continues to keep the model in memory indefinitely. Otherwise, ollama frees up memory and you would have to run `ollama run` again.

 ---

 ## Install Dependencies and Set Up Environment

 1. **Create a Conda Environment**:
-- Create a new Conda environment with Python 3.10:
+Create a new Conda environment with Python 3.10:
 ```bash
 conda create -n ollama python=3.10
 ```
-- Activate the environment:
+Activate the environment:
 ```bash
 conda activate ollama
 ```

 2. **Install ChromaDB**:
-- Install `chromadb` using `pip`:
+Install `chromadb` using `pip`:
 ```bash
 pip install chromadb
 ```

 3. **Run ChromaDB**:
-- Start the ChromaDB server:
+Start the ChromaDB server:
 ```bash
 chroma run --host localhost --port 8000 --path ./my_chroma_data
 ```

 4. **Install Llama Stack**:
-- Open a new terminal and install `llama-stack`:
+Open a new terminal and install `llama-stack`:
 ```bash
-conda activate hack
-pip install llama-stack==0.0.53
+conda activate ollama
+pip install llama-stack==0.0.55
 ```

 ---

 ## Build, Configure, and Run Llama Stack

 1. **Build the Llama Stack**:
-- Build the Llama Stack using the `ollama` template:
+Build the Llama Stack using the `ollama` template:
 ```bash
 llama stack build --template ollama --image-type conda
 ```
-After this step, you will see the console output:
+**Expected Output:**
 ```
+...
 Build Successful! Next steps:
 1. Set the environment variables: LLAMASTACK_PORT, OLLAMA_URL, INFERENCE_MODEL, SAFETY_MODEL
-2. `llama stack run /Users/username/.llama/distributions/llamastack-ollama/ollama-run.yaml`
+2. `llama stack run /Users/<username>/.llama/distributions/llamastack-ollama/ollama-run.yaml
 ```

-2. **Set the ENV variables by exporting them to the terminal**:
+3. **Set the ENV variables by exporting them to the terminal**:
 ```bash
 export OLLAMA_URL="http://localhost:11434"
-export LLAMA_STACK_PORT=5001
+export LLAMA_STACK_PORT=5051
 export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
 export SAFETY_MODEL="meta-llama/Llama-Guard-3-1B"
 ```

 3. **Run the Llama Stack**:
-- Run the stack with command shared by the API from earlier:
+Run the stack with command shared by the API from earlier:
 ```bash
 llama stack run ollama \
 --port $LLAMA_STACK_PORT \
 --env INFERENCE_MODEL=$INFERENCE_MODEL \
 --env SAFETY_MODEL=$SAFETY_MODEL \
---env OLLAMA_URL=http://localhost:11434
+--env OLLAMA_URL=$OLLAMA_URL
 ```
-Note: Everytime you run a new model with `ollama run`, you will need to restart the llama stack. Otherwise it won't see the new model
+Note: Everytime you run a new model with `ollama run`, you will need to restart the llama stack. Otherwise it won't see the new model.

 The server will start and listen on `http://localhost:5051`.

 ---

-## Testing with `curl`
+## Test with `llama-stack-client` CLI
+After setting up the server, open a new terminal window and install the llama-stack-client package.
+
+1. Install the llama-stack-client package
+```bash
+conda activate ollama
+pip install llama-stack-client
+```
+2. Configure the CLI to point to the llama-stack server.
+```bash
+llama-stack-client configure --endpoint http://localhost:5051
+```
+**Expected Output:**
+```bash
+Done! You can now use the Llama Stack Client CLI with endpoint http://localhost:5051
+```
+3. Test the CLI by running inference:
+```bash
+llama-stack-client inference chat-completion --message "Write me a 2-sentence poem about the moon"
+```
+**Expected Output:**
+```bash
+ChatCompletionResponse(
+    completion_message=CompletionMessage(
+        content='Here is a 2-sentence poem about the moon:\n\nSilver crescent shining bright in the night,\nA beacon of wonder, full of gentle light.',
+        role='assistant',
+        stop_reason='end_of_turn',
+        tool_calls=[]
+    ),
+    logprobs=None
+)
+```
+
+## Test with `curl`

 After setting up the server, open a new terminal window and verify it's working by sending a `POST` request using `curl`:

 ```bash
-curl http://localhost:5051/inference/chat_completion \
+curl http://localhost:$LLAMA_STACK_PORT/inference/chat_completion \
 -H "Content-Type: application/json" \
 -d '{
   "model": "Llama3.2-3B-Instruct",

@@ -168,15 +182,16 @@ You can check the available models with the command `llama-stack-client models l

 ---

-## Testing with Python
+## Test with Python

 You can also interact with the Llama Stack server using a simple Python script. Below is an example:

-### 1. Active Conda Environment and Install Required Python Packages
+### 1. Activate Conda Environment and Install Required Python Packages
 The `llama-stack-client` library offers a robust and efficient python methods for interacting with the Llama Stack server.

 ```bash
-conda activate your-llama-stack-conda-env
+conda activate ollama
+pip install llama-stack-client
 ```

 Note, the client library gets installed by default if you install the server library

@@ -188,6 +203,8 @@ touch test_llama_stack.py

 ### 3. Create a Chat Completion Request in Python

+In `test_llama_stack.py`, write the following code:
+
 ```python
 from llama_stack_client import LlamaStackClient

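The diff cuts the body of `test_llama_stack.py` off after the import. A minimal sketch of what such a script could look like against a 0.0.55-era server follows; the exact call signature (`model` vs `model_id`, typed message objects vs plain dicts) has shifted between client releases, so treat every name here as an assumption rather than the repository's actual file.

```python
# Illustrative sketch only, not the repository's test_llama_stack.py.
# Assumes a Llama Stack server on port 5051 serving Llama3.2-3B-Instruct.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5051")

# Plain dict message; some client versions expect typed UserMessage objects instead.
response = client.inference.chat_completion(
    model="Llama3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Write me a 2-sentence poem about the moon"}],
)

# Mirrors the ChatCompletionResponse shape shown in the CLI test above.
print(response.completion_message.content)
```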
@@ -227,15 +244,15 @@ This command initializes the model to interact with your local Llama Stack insta
 ## Next Steps

 **Explore Other Guides**: Dive deeper into specific topics by following these guides:
-- [Understanding Distribution](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html#decide-your-inference-provider)
+- [Understanding Distribution](https://llama-stack.readthedocs.io/en/latest/concepts/index.html#distributions)
 - [Inference 101](00_Inference101.ipynb)
-- [Local and Cloud Model Toggling 101](00_Local_Cloud_Inference101.ipynb)
-- [Prompt Engineering](01_Prompt_Engineering101.ipynb)
-- [Chat with Image - LlamaStack Vision API](02_Image_Chat101.ipynb)
-- [Tool Calling: How to and Details](03_Tool_Calling101.ipynb)
-- [Memory API: Show Simple In-Memory Retrieval](04_Memory101.ipynb)
-- [Using Safety API in Conversation](05_Safety101.ipynb)
-- [Agents API: Explain Components](06_Agents101.ipynb)
+- [Local and Cloud Model Toggling 101](01_Local_Cloud_Inference101.ipynb)
+- [Prompt Engineering](02_Prompt_Engineering101.ipynb)
+- [Chat with Image - LlamaStack Vision API](03_Image_Chat101.ipynb)
+- [Tool Calling: How to and Details](04_Tool_Calling101.ipynb)
+- [Memory API: Show Simple In-Memory Retrieval](05_Memory101.ipynb)
+- [Using Safety API in Conversation](06_Safety101.ipynb)
+- [Agents API: Explain Components](07_Agents101.ipynb)


 **Explore Client SDKs**: Utilize our client SDKs for various languages to integrate Llama Stack into your applications:

@@ -244,7 +261,7 @@ This command initializes the model to interact with your local Llama Stack insta
 - [Swift SDK](https://github.com/meta-llama/llama-stack-client-swift)
 - [Kotlin SDK](https://github.com/meta-llama/llama-stack-client-kotlin)

-**Advanced Configuration**: Learn how to customize your Llama Stack distribution by referring to the [Building a Llama Stack Distribution](https://llama-stack.readthedocs.io/en/latest/distributions/index.html#building-your-own-distribution) guide.
+**Advanced Configuration**: Learn how to customize your Llama Stack distribution by referring to the [Building a Llama Stack Distribution](https://llama-stack.readthedocs.io/en/latest/distributions/building_distro.html) guide.

 **Explore Example Apps**: Check out [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) for example applications built using Llama Stack.

@@ -35,7 +35,7 @@ class NeedsRequestProviderData:
             provider_data = validator(**val)
             return provider_data
         except Exception as e:
-            log.error("Error parsing provider data", e)
+            log.error(f"Error parsing provider data: {e}")


 def set_request_provider_data(headers: Dict[str, str]):

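One line of context on why this change matters: with the standard `logging` module, extra positional arguments are treated as %-format arguments for the message string. The original message has no placeholder, so formatting the record fails and the exception text never reaches the log. A standalone sketch (not repo code) of the two styles:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

try:
    int("not a number")
except Exception as e:
    # Old style: "e" becomes a %-format argument with no placeholder to land in,
    # so the logging machinery reports an internal formatting error instead.
    log.error("Error parsing provider data", e)
    # New style (as in the diff): the exception text is part of the message.
    log.error(f"Error parsing provider data: {e}")
    # Equivalent lazy-formatting alternative: log.error("Error parsing provider data: %s", e)
```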
llama_stack/distribution/ui/README.md (new file, 11 lines)
@@ -0,0 +1,11 @@
+# LLama Stack UI
+
+[!NOTE] This is a work in progress.
+
+## Running Streamlit App
+
+```
+cd llama_stack/distribution/ui
+pip install -r requirements.txt
+streamlit run app.py
+```

llama_stack/distribution/ui/__init__.py (new file, 5 lines)
@@ -0,0 +1,5 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.

llama_stack/distribution/ui/app.py (new file, 173 lines)
@@ -0,0 +1,173 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.
+
+import json
+
+import pandas as pd
+
+import streamlit as st
+
+from modules.api import LlamaStackEvaluation
+
+from modules.utils import process_dataset
+
+EVALUATION_API = LlamaStackEvaluation()
+
+
+def main():
+    # Add collapsible sidebar
+    with st.sidebar:
+        # Add collapse button
+        if "sidebar_state" not in st.session_state:
+            st.session_state.sidebar_state = True
+
+        if st.session_state.sidebar_state:
+            st.title("Navigation")
+            page = st.radio(
+                "Select a Page",
+                ["Application Evaluation"],
+                index=0,
+            )
+        else:
+            page = "Application Evaluation"  # Default page when sidebar is collapsed
+
+    # Main content area
+    st.title("🦙 Llama Stack Evaluations")
+
+    if page == "Application Evaluation":
+        application_evaluation_page()
+
+
+def application_evaluation_page():
+    # File uploader
+    uploaded_file = st.file_uploader("Upload Dataset", type=["csv", "xlsx", "xls"])
+
+    if uploaded_file is None:
+        st.error("No file uploaded")
+        return
+
+    # Process uploaded file
+    df = process_dataset(uploaded_file)
+    if df is None:
+        st.error("Error processing file")
+        return
+
+    # Display dataset information
+    st.success("Dataset loaded successfully!")
+
+    # Display dataframe preview
+    st.subheader("Dataset Preview")
+    st.dataframe(df)
+
+    # Select Scoring Functions to Run Evaluation On
+    st.subheader("Select Scoring Functions")
+    scoring_functions = EVALUATION_API.list_scoring_functions()
+    scoring_functions = {sf.identifier: sf for sf in scoring_functions}
+    scoring_functions_names = list(scoring_functions.keys())
+    selected_scoring_functions = st.multiselect(
+        "Choose one or more scoring functions",
+        options=scoring_functions_names,
+        help="Choose one or more scoring functions.",
+    )
+
+    available_models = EVALUATION_API.list_models()
+    available_models = [m.identifier for m in available_models]
+
+    scoring_params = {}
+    if selected_scoring_functions:
+        st.write("Selected:")
+        for scoring_fn_id in selected_scoring_functions:
+            scoring_fn = scoring_functions[scoring_fn_id]
+            st.write(f"- **{scoring_fn_id}**: {scoring_fn.description}")
+            new_params = None
+            if scoring_fn.params:
+                new_params = {}
+                for param_name, param_value in scoring_fn.params.to_dict().items():
+                    if param_name == "type":
+                        new_params[param_name] = param_value
+                        continue
+
+                    if param_name == "judge_model":
+                        value = st.selectbox(
+                            f"Select **{param_name}** for {scoring_fn_id}",
+                            options=available_models,
+                            index=0,
+                            key=f"{scoring_fn_id}_{param_name}",
+                        )
+                        new_params[param_name] = value
+                    else:
+                        value = st.text_area(
+                            f"Enter value for **{param_name}** in {scoring_fn_id} in valid JSON format",
+                            value=json.dumps(param_value, indent=2),
+                            height=80,
+                        )
+                        try:
+                            new_params[param_name] = json.loads(value)
+                        except json.JSONDecodeError:
+                            st.error(
+                                f"Invalid JSON for **{param_name}** in {scoring_fn_id}"
+                            )
+
+            st.json(new_params)
+            scoring_params[scoring_fn_id] = new_params
+
+    # Add run evaluation button & slider
+    total_rows = len(df)
+    num_rows = st.slider("Number of rows to evaluate", 1, total_rows, total_rows)
+
+    if st.button("Run Evaluation"):
+        progress_text = "Running evaluation..."
+        progress_bar = st.progress(0, text=progress_text)
+        rows = df.to_dict(orient="records")
+        if num_rows < total_rows:
+            rows = rows[:num_rows]
+
+        # Create separate containers for progress text and results
+        progress_text_container = st.empty()
+        results_container = st.empty()
+        output_res = {}
+        for i, r in enumerate(rows):
+            # Update progress
+            progress = i / len(rows)
+            progress_bar.progress(progress, text=progress_text)
+
+            # Run evaluation for current row
+            score_res = EVALUATION_API.run_scoring(
+                r,
+                scoring_function_ids=selected_scoring_functions,
+                scoring_params=scoring_params,
+            )
+
+            for k in r.keys():
+                if k not in output_res:
+                    output_res[k] = []
+                output_res[k].append(r[k])
+
+            for fn_id in selected_scoring_functions:
+                if fn_id not in output_res:
+                    output_res[fn_id] = []
+                output_res[fn_id].append(score_res.results[fn_id].score_rows[0])
+
+            # Display current row results using separate containers
+            progress_text_container.write(
+                f"Expand to see current processed result ({i+1}/{len(rows)})"
+            )
+            results_container.json(
+                score_res.to_json(),
+                expanded=2,
+            )
+
+        progress_bar.progress(1.0, text="Evaluation complete!")
+
+        # Display results in dataframe
+        if output_res:
+            output_df = pd.DataFrame(output_res)
+            st.subheader("Evaluation Results")
+            st.dataframe(output_df)
+
+
+if __name__ == "__main__":
+    main()

llama_stack/distribution/ui/modules/api.py (new file, 41 lines)
@@ -0,0 +1,41 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.
+
+import os
+
+from typing import Optional
+
+from llama_stack_client import LlamaStackClient
+
+
+class LlamaStackEvaluation:
+    def __init__(self):
+        self.client = LlamaStackClient(
+            base_url=os.environ.get("LLAMA_STACK_ENDPOINT", "http://localhost:5000"),
+            provider_data={
+                "fireworks_api_key": os.environ.get("FIREWORKS_API_KEY", ""),
+                "together_api_key": os.environ.get("TOGETHER_API_KEY", ""),
+                "openai_api_key": os.environ.get("OPENAI_API_KEY", ""),
+            },
+        )
+
+    def list_scoring_functions(self):
+        """List all available scoring functions"""
+        return self.client.scoring_functions.list()
+
+    def list_models(self):
+        """List all available judge models"""
+        return self.client.models.list()
+
+    def run_scoring(
+        self, row, scoring_function_ids: list[str], scoring_params: Optional[dict]
+    ):
+        """Run scoring on a single row"""
+        if not scoring_params:
+            scoring_params = {fn_id: None for fn_id in scoring_function_ids}
+        return self.client.scoring.score(
+            input_rows=[row], scoring_functions=scoring_params
+        )

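A quick usage sketch for this helper, assuming a Llama Stack server is reachable at `LLAMA_STACK_ENDPOINT` and that a scorer such as the `braintrust::answer-correctness` function registered elsewhere in this commit is available; the row keys follow the input_query/generated_answer/expected_answer convention used by that scorer:

```python
import os

from modules.api import LlamaStackEvaluation  # module added in this commit

os.environ.setdefault("LLAMA_STACK_ENDPOINT", "http://localhost:5000")
api = LlamaStackEvaluation()

# Discover what the connected server offers.
print([fn.identifier for fn in api.list_scoring_functions()])
print([m.identifier for m in api.list_models()])

# Score a single hypothetical row, the same call app.py makes per dataset row.
row = {
    "input_query": "What is the capital of France?",
    "generated_answer": "Paris",
    "expected_answer": "Paris",
}
result = api.run_scoring(
    row,
    scoring_function_ids=["braintrust::answer-correctness"],
    scoring_params=None,
)
print(result.results["braintrust::answer-correctness"].score_rows[0])
```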
llama_stack/distribution/ui/modules/utils.py (new file, 31 lines)
@@ -0,0 +1,31 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.
+
+import os
+
+import pandas as pd
+import streamlit as st
+
+
+def process_dataset(file):
+    if file is None:
+        return "No file uploaded", None
+
+    try:
+        # Determine file type and read accordingly
+        file_ext = os.path.splitext(file.name)[1].lower()
+        if file_ext == ".csv":
+            df = pd.read_csv(file)
+        elif file_ext in [".xlsx", ".xls"]:
+            df = pd.read_excel(file)
+        else:
+            return "Unsupported file format. Please upload a CSV or Excel file.", None
+
+        return df
+
+    except Exception as e:
+        st.error(f"Error processing file: {str(e)}")
+        return None

llama_stack/distribution/ui/requirements.txt (new file, 3 lines)
@@ -0,0 +1,3 @@
+streamlit
+pandas
+llama-stack-client>=0.0.55

llama_stack/providers/inline/datasetio/__init__.py (new file, 5 lines)
@@ -0,0 +1,5 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.

@@ -6,10 +6,15 @@
 from typing import Dict

 from llama_stack.distribution.datatypes import Api, ProviderSpec
+from pydantic import BaseModel

 from .config import BraintrustScoringConfig


+class BraintrustProviderDataValidator(BaseModel):
+    openai_api_key: str
+
+
 async def get_provider_impl(
     config: BraintrustScoringConfig,
     deps: Dict[Api, ProviderSpec],

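The validator added above is an ordinary pydantic model, so the JSON payload carried in the `X-LlamaStack-ProviderData` header is checked like any other pydantic input. A small standalone illustration (the key value is a placeholder):

```python
from pydantic import BaseModel, ValidationError


class BraintrustProviderDataValidator(BaseModel):
    openai_api_key: str


# Well-formed header payload passes validation.
ok = BraintrustProviderDataValidator(**{"openai_api_key": "sk-example-not-real"})
print(ok.openai_api_key)

# A payload missing the key is rejected before it reaches the provider.
try:
    BraintrustProviderDataValidator(**{})
except ValidationError as err:
    print(err)
```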
@@ -12,9 +12,11 @@ from llama_stack.apis.common.type_system import * # noqa: F403
 from llama_stack.apis.datasetio import * # noqa: F403
 from llama_stack.apis.datasets import * # noqa: F403

-# from .scoring_fn.braintrust_scoring_fn import BraintrustScoringFn
+import os
+
 from autoevals.llm import Factuality
 from autoevals.ragas import AnswerCorrectness
+from llama_stack.distribution.request_headers import NeedsRequestProviderData
 from llama_stack.providers.datatypes import ScoringFunctionsProtocolPrivate

 from llama_stack.providers.utils.scoring.aggregation_utils import aggregate_average

@@ -24,7 +26,9 @@ from .scoring_fn.fn_defs.answer_correctness import answer_correctness_fn_def
 from .scoring_fn.fn_defs.factuality import factuality_fn_def


-class BraintrustScoringImpl(Scoring, ScoringFunctionsProtocolPrivate):
+class BraintrustScoringImpl(
+    Scoring, ScoringFunctionsProtocolPrivate, NeedsRequestProviderData
+):
     def __init__(
         self,
         config: BraintrustScoringConfig,

@@ -79,12 +83,25 @@ class BraintrustScoringImpl(Scoring, ScoringFunctionsProtocolPrivate):
                     f"Dataset {dataset_id} does not have a '{required_column}' column of type 'string'."
                 )

+    async def set_api_key(self) -> None:
+        # api key is in the request headers
+        if self.config.openai_api_key is None:
+            provider_data = self.get_request_provider_data()
+            if provider_data is None or not provider_data.openai_api_key:
+                raise ValueError(
+                    'Pass OpenAI API Key in the header X-LlamaStack-ProviderData as { "openai_api_key": <your api key>}'
+                )
+            self.config.openai_api_key = provider_data.openai_api_key
+
+        os.environ["OPENAI_API_KEY"] = self.config.openai_api_key
+
     async def score_batch(
         self,
         dataset_id: str,
         scoring_functions: List[str],
         save_results_dataset: bool = False,
     ) -> ScoreBatchResponse:
+        await self.set_api_key()
         await self.validate_scoring_input_dataset_schema(dataset_id=dataset_id)
         all_rows = await self.datasetio_api.get_rows_paginated(
             dataset_id=dataset_id,

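The practical effect of `set_api_key` is that callers can supply the OpenAI key per request instead of baking it into the provider config. The Streamlit helper added in this same commit already passes it through the client's `provider_data` argument, which is what ends up in the `X-LlamaStack-ProviderData` header this method reads. A minimal sketch of the same idea (endpoint and key values are placeholders):

```python
import os

from llama_stack_client import LlamaStackClient

# provider_data travels to the server with each request; set_api_key() falls back
# to it whenever BraintrustScoringConfig has no openai_api_key of its own.
client = LlamaStackClient(
    base_url="http://localhost:5000",
    provider_data={"openai_api_key": os.environ.get("OPENAI_API_KEY", "sk-placeholder")},
)

response = client.scoring.score(
    input_rows=[{"input_query": "2+2?", "generated_answer": "4", "expected_answer": "4"}],
    scoring_functions={"braintrust::answer-correctness": None},
)
print(response.results)
```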
@@ -105,6 +122,7 @@ class BraintrustScoringImpl(Scoring, ScoringFunctionsProtocolPrivate):
     async def score_row(
         self, input_row: Dict[str, Any], scoring_fn_identifier: Optional[str] = None
     ) -> ScoringResultRow:
+        await self.set_api_key()
         assert scoring_fn_identifier is not None, "scoring_fn_identifier cannot be None"
         expected_answer = input_row["expected_answer"]
         generated_answer = input_row["generated_answer"]

@@ -118,6 +136,7 @@ class BraintrustScoringImpl(Scoring, ScoringFunctionsProtocolPrivate):
     async def score(
         self, input_rows: List[Dict[str, Any]], scoring_functions: List[str]
     ) -> ScoreResponse:
+        await self.set_api_key()
         res = {}
         for scoring_fn_id in scoring_functions:
             if scoring_fn_id not in self.supported_fn_defs_registry:

@@ -6,4 +6,8 @@
 from llama_stack.apis.scoring import * # noqa: F401, F403


-class BraintrustScoringConfig(BaseModel): ...
+class BraintrustScoringConfig(BaseModel):
+    openai_api_key: Optional[str] = Field(
+        default=None,
+        description="The OpenAI API Key",
+    )

@@ -10,7 +10,7 @@ from llama_stack.apis.scoring_functions import ScoringFn

 answer_correctness_fn_def = ScoringFn(
     identifier="braintrust::answer-correctness",
-    description="Test whether an output is factual, compared to an original (`expected`) value. One of Braintrust LLM basd scorer https://github.com/braintrustdata/autoevals/blob/main/py/autoevals/llm.py",
+    description="Scores the correctness of the answer based on the ground truth.. One of Braintrust LLM basd scorer https://github.com/braintrustdata/autoevals/blob/main/py/autoevals/llm.py",
     params=None,
     provider_id="braintrust",
     provider_resource_id="answer-correctness",

@@ -44,5 +44,6 @@ def available_providers() -> List[ProviderSpec]:
                 Api.datasetio,
                 Api.datasets,
             ],
+            provider_data_validator="llama_stack.providers.inline.scoring.braintrust.BraintrustProviderDataValidator",
         ),
     ]
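
The registry entry now names `BraintrustProviderDataValidator`, so callers can supply credentials as per-request provider data instead of baking them into the run config. The validator's actual fields are not shown in this diff; the sketch below assumes it carries an `openai_api_key` and is purely illustrative.

```python
import json
from typing import Optional

from pydantic import BaseModel

class BraintrustProviderData(BaseModel):
    # Hypothetical field; the real BraintrustProviderDataValidator lives in
    # llama_stack.providers.inline.scoring.braintrust and is not shown in this diff.
    openai_api_key: Optional[str] = None

# Provider data typically arrives as a JSON payload attached to the request.
payload = json.dumps({"openai_api_key": "sk-example"})
print(BraintrustProviderData.model_validate_json(payload))
```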
llama_stack/providers/remote/datasetio/__init__.py (new file, +5)
@@ -0,0 +1,5 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.
@@ -3,12 +3,13 @@
 #
 # This source code is licensed under the terms described in the LICENSE file in
 # the root directory of this source tree.
+from pydantic import BaseModel
+
 from llama_stack.distribution.utils.config_dirs import RUNTIME_BASE_DIR
 from llama_stack.providers.utils.kvstore.config import (
     KVStoreConfig,
     SqliteKVStoreConfig,
 )
-from pydantic import BaseModel


 class HuggingfaceDatasetIOConfig(BaseModel):
@@ -9,6 +9,7 @@ from llama_stack.apis.datasetio import *  # noqa: F403

+
 import datasets as hf_datasets

 from llama_stack.providers.datatypes import DatasetsProtocolPrivate
 from llama_stack.providers.utils.datasetio.url_utils import get_dataframe_from_url
 from llama_stack.providers.utils.kvstore import kvstore_impl
@@ -35,7 +35,9 @@ class NVIDIAConfig(BaseModel):
     """

     url: str = Field(
-        default="https://integrate.api.nvidia.com",
+        default_factory=lambda: os.getenv(
+            "NVIDIA_BASE_URL", "https://integrate.api.nvidia.com"
+        ),
         description="A base url for accessing the NVIDIA NIM",
     )
     api_key: Optional[str] = Field(
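
With `default_factory`, the NVIDIA base URL is read from `NVIDIA_BASE_URL` each time the config is instantiated, falling back to the public endpoint. A self-contained illustration of that behavior, using only pydantic and `os` and mirroring the field above:

```python
import os

from pydantic import BaseModel, Field

class NVIDIAConfig(BaseModel):
    url: str = Field(
        default_factory=lambda: os.getenv(
            "NVIDIA_BASE_URL", "https://integrate.api.nvidia.com"
        ),
        description="A base url for accessing the NVIDIA NIM",
    )

print(NVIDIAConfig().url)  # https://integrate.api.nvidia.com
os.environ["NVIDIA_BASE_URL"] = "http://localhost:8000"
print(NVIDIAConfig().url)  # http://localhost:8000 -- the factory runs per instantiation
```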
@@ -89,8 +89,9 @@ class _HfAdapter(Inference, ModelsProtocolPrivate):
         stream: Optional[bool] = False,
         logprobs: Optional[LogProbConfig] = None,
     ) -> AsyncGenerator:
+        model = await self.model_store.get_model(model_id)
         request = CompletionRequest(
-            model=model_id,
+            model=model.provider_resource_id,
             content=content,
             sampling_params=sampling_params,
             response_format=response_format,
@@ -194,8 +195,9 @@ class _HfAdapter(Inference, ModelsProtocolPrivate):
         stream: Optional[bool] = False,
         logprobs: Optional[LogProbConfig] = None,
     ) -> AsyncGenerator:
+        model = await self.model_store.get_model(model_id)
         request = ChatCompletionRequest(
-            model=model_id,
+            model=model.provider_resource_id,
             messages=messages,
             sampling_params=sampling_params,
             tools=tools or [],
@@ -249,7 +251,7 @@ class _HfAdapter(Inference, ModelsProtocolPrivate):

     def _get_params(self, request: ChatCompletionRequest) -> dict:
         prompt, input_tokens = chat_completion_request_to_model_input_info(
-            request, self.formatter
+            request, self.register_helper.get_llama_model(request.model), self.formatter
         )
         return dict(
             prompt=prompt,
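
The hunks above switch the TGI adapter from sending the caller's `model_id` to sending the registered model's `provider_resource_id`. A toy sketch of that resolution step; a dict stands in for the adapter's model store, and the model names are illustrative rather than taken from the repository.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Model:
    identifier: str            # the id callers pass as model_id
    provider_resource_id: str  # the name the backing TGI/HF endpoint expects

# Illustrative registry; the adapter calls `await self.model_store.get_model(model_id)` instead.
REGISTRY: Dict[str, Model] = {
    "my-llama": Model(
        identifier="my-llama",
        provider_resource_id="meta-llama/Llama-3.1-8B-Instruct",
    ),
}

def resolve(model_id: str) -> str:
    model = REGISTRY[model_id]
    return model.provider_resource_id

print(resolve("my-llama"))  # meta-llama/Llama-3.1-8B-Instruct
```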
@@ -6,10 +6,14 @@

 import pytest

+from ..agents.fixtures import AGENTS_FIXTURES
+
 from ..conftest import get_provider_fixture_overrides

 from ..datasetio.fixtures import DATASETIO_FIXTURES
 from ..inference.fixtures import INFERENCE_FIXTURES
+from ..memory.fixtures import MEMORY_FIXTURES
+from ..safety.fixtures import SAFETY_FIXTURES
 from ..scoring.fixtures import SCORING_FIXTURES
 from .fixtures import EVAL_FIXTURES

@@ -20,6 +24,9 @@ DEFAULT_PROVIDER_COMBINATIONS = [
             "scoring": "basic",
             "datasetio": "localfs",
             "inference": "fireworks",
+            "agents": "meta_reference",
+            "safety": "llama_guard",
+            "memory": "faiss",
         },
         id="meta_reference_eval_fireworks_inference",
         marks=pytest.mark.meta_reference_eval_fireworks_inference,
@@ -30,6 +37,9 @@ DEFAULT_PROVIDER_COMBINATIONS = [
             "scoring": "basic",
             "datasetio": "localfs",
             "inference": "together",
+            "agents": "meta_reference",
+            "safety": "llama_guard",
+            "memory": "faiss",
         },
         id="meta_reference_eval_together_inference",
         marks=pytest.mark.meta_reference_eval_together_inference,
@@ -40,6 +50,9 @@ DEFAULT_PROVIDER_COMBINATIONS = [
             "scoring": "basic",
             "datasetio": "huggingface",
             "inference": "together",
+            "agents": "meta_reference",
+            "safety": "llama_guard",
+            "memory": "faiss",
         },
         id="meta_reference_eval_together_inference_huggingface_datasetio",
         marks=pytest.mark.meta_reference_eval_together_inference_huggingface_datasetio,
@@ -75,6 +88,9 @@ def pytest_generate_tests(metafunc):
         "scoring": SCORING_FIXTURES,
         "datasetio": DATASETIO_FIXTURES,
         "inference": INFERENCE_FIXTURES,
+        "agents": AGENTS_FIXTURES,
+        "safety": SAFETY_FIXTURES,
+        "memory": MEMORY_FIXTURES,
     }
     combinations = (
         get_provider_fixture_overrides(metafunc.config, available_fixtures)
@@ -40,14 +40,30 @@ async def eval_stack(request):

     providers = {}
     provider_data = {}
-    for key in ["datasetio", "eval", "scoring", "inference"]:
+    for key in [
+        "datasetio",
+        "eval",
+        "scoring",
+        "inference",
+        "agents",
+        "safety",
+        "memory",
+    ]:
         fixture = request.getfixturevalue(f"{key}_{fixture_dict[key]}")
         providers[key] = fixture.providers
         if fixture.provider_data:
             provider_data.update(fixture.provider_data)

     test_stack = await construct_stack_for_test(
-        [Api.eval, Api.datasetio, Api.inference, Api.scoring],
+        [
+            Api.eval,
+            Api.datasetio,
+            Api.inference,
+            Api.scoring,
+            Api.agents,
+            Api.safety,
+            Api.memory,
+        ],
         providers,
         provider_data,
     )
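
The eval stack fixture now threads agents, safety, and memory through the same collection loop as the original four APIs. A distilled version of that pattern, with stub fixtures standing in for pytest's `request.getfixturevalue`; the names and values are illustrative only.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class ProviderFixture:
    providers: List[str]
    provider_data: Dict[str, Any] = field(default_factory=dict)

# Stub fixtures keyed the way request.getfixturevalue(f"{key}_{fixture_dict[key]}") would find them.
fixtures = {
    "inference_together": ProviderFixture(["together"], {"together_api_key": "..."}),
    "scoring_basic": ProviderFixture(["basic"]),
    "memory_faiss": ProviderFixture(["faiss"]),
}
fixture_dict = {"inference": "together", "scoring": "basic", "memory": "faiss"}

providers: Dict[str, Any] = {}
provider_data: Dict[str, Any] = {}
for key in ["inference", "scoring", "memory"]:
    fixture = fixtures[f"{key}_{fixture_dict[key]}"]
    providers[key] = fixture.providers
    if fixture.provider_data:
        provider_data.update(fixture.provider_data)

print(providers)      # {'inference': ['together'], 'scoring': ['basic'], 'memory': ['faiss']}
print(provider_data)  # {'together_api_key': '...'}
```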
@@ -20,6 +20,7 @@ from llama_stack.providers.remote.inference.bedrock import BedrockConfig
 from llama_stack.providers.remote.inference.fireworks import FireworksImplConfig
 from llama_stack.providers.remote.inference.nvidia import NVIDIAConfig
 from llama_stack.providers.remote.inference.ollama import OllamaImplConfig
+from llama_stack.providers.remote.inference.tgi import TGIImplConfig
 from llama_stack.providers.remote.inference.together import TogetherImplConfig
 from llama_stack.providers.remote.inference.vllm import VLLMInferenceAdapterConfig
 from llama_stack.providers.tests.resolver import construct_stack_for_test
@@ -156,6 +157,22 @@ def inference_nvidia() -> ProviderFixture:
     )


+@pytest.fixture(scope="session")
+def inference_tgi() -> ProviderFixture:
+    return ProviderFixture(
+        providers=[
+            Provider(
+                provider_id="tgi",
+                provider_type="remote::tgi",
+                config=TGIImplConfig(
+                    url=get_env_or_fail("TGI_URL"),
+                    api_token=os.getenv("TGI_API_TOKEN", None),
+                ).model_dump(),
+            )
+        ],
+    )
+
+
 def get_model_short_name(model_name: str) -> str:
     """Convert model name to a short test identifier.

@@ -190,6 +207,7 @@ INFERENCE_FIXTURES = [
     "remote",
     "bedrock",
     "nvidia",
+    "tgi",
 ]

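
The new `inference_tgi` fixture expects `TGI_URL` to be set and treats `TGI_API_TOKEN` as optional. A sketch of that environment contract; the `get_env_or_fail` below is a stand-in analogue for the helper imported from `..env`, not its actual implementation.

```python
import os

def get_env_or_fail(name: str) -> str:
    # Stand-in for the helper from ..env: fail fast when the variable is missing.
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} must be set to run the TGI inference fixture")
    return value

# e.g. export TGI_URL=http://localhost:8080 before selecting the tgi inference fixture
tgi_kwargs = {
    "url": get_env_or_fail("TGI_URL"),
    "api_token": os.getenv("TGI_API_TOKEN", None),
}
print(tgi_kwargs)
```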
@@ -211,7 +211,15 @@ class TestInference:
         response = await inference_impl.chat_completion(
             model_id=inference_model,
             messages=[
-                SystemMessage(content="You are a helpful assistant."),
+                # we include context about Michael Jordan in the prompt so that the test is
+                # focused on the functionality of the model and not on the information embedded
+                # in the model. Llama 3.2 3B Instruct tends to think MJ played for 14 seasons.
+                SystemMessage(
+                    content=(
+                        "You are a helpful assistant.\n\n"
+                        "Michael Jordan was born in 1963. He played basketball for the Chicago Bulls for 15 seasons."
+                    )
+                ),
                 UserMessage(content="Please give me information about Michael Jordan."),
             ],
             stream=False,
@@ -10,9 +10,10 @@ import pytest_asyncio
 from llama_stack.apis.models import ModelInput

 from llama_stack.distribution.datatypes import Api, Provider
+from llama_stack.providers.inline.scoring.braintrust import BraintrustScoringConfig
 from llama_stack.providers.tests.resolver import construct_stack_for_test
 from ..conftest import ProviderFixture, remote_stack_fixture
+from ..env import get_env_or_fail


 @pytest.fixture(scope="session")
@@ -40,7 +41,9 @@ def scoring_braintrust() -> ProviderFixture:
             Provider(
                 provider_id="braintrust",
                 provider_type="inline::braintrust",
-                config={},
+                config=BraintrustScoringConfig(
+                    openai_api_key=get_env_or_fail("OPENAI_API_KEY"),
+                ).model_dump(),
             )
         ],
     )
llama_stack/providers/utils/scoring/__init__.py (new file, +5)
@@ -0,0 +1,5 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.
@@ -29,7 +29,7 @@ The following environment variables can be configured:

 ## Prerequisite: Downloading Models

-Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
+Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.

 ```
 $ ls ~/.llama/checkpoints
@@ -31,7 +31,7 @@ The following environment variables can be configured:

 ## Prerequisite: Downloading Models

-Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
+Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See [installation guide](https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/download_models.html) here to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.

 ```
 $ ls ~/.llama/checkpoints
@@ -2,8 +2,8 @@ blobfile
 fire
 httpx
 huggingface-hub
-llama-models>=0.0.55
-llama-stack-client>=0.0.55
+llama-models>=0.0.57
+llama-stack-client>=0.0.57
 prompt-toolkit
 python-dotenv
 pydantic>=2
setup.py (2 changed lines)
@@ -16,7 +16,7 @@ def read_requirements():

 setup(
     name="llama_stack",
-    version="0.0.55",
+    version="0.0.57",
     author="Meta Llama",
     author_email="llama-oss@meta.com",
     description="Llama Stack",