Composable building blocks to build Llama Apps https://llama-stack.readthedocs.io
Aidan Do 095125e463
[#391] Add support for json structured output for vLLM (#528)
# What does this PR do?

Addresses issue (#391)

- Adds json structured output for vLLM
- Enables structured output tests for vLLM

> Give me a recipe for Spaghetti Bolognaise:

```json
{
  "recipe_name": "Spaghetti Bolognaise",
  "preamble": "Ah, spaghetti bolognaise - the quintessential Italian dish that fills my kitchen with the aromas of childhood nostalgia. As a child, I would watch my nonna cook up a big pot of spaghetti bolognaise every Sunday, filling our small Italian household with the savory scent of simmering meat and tomatoes. The way the sauce would thicken and the spaghetti would al dente - it was love at first bite. And now, as a chef, I want to share that same love with you, so you can recreate these warm, comforting memories at home.",
  "ingredients": [
    "500g minced beef",
    "1 medium onion, finely chopped",
    "2 cloves garlic, minced",
    "1 carrot, finely chopped",
    " celery, finely chopped",
    "1 (28 oz) can whole peeled tomatoes",
    "1 tbsp tomato paste",
    "1 tsp dried basil",
    "1 tsp dried oregano",
    "1 tsp salt",
    "1/2 tsp black pepper",
    "1/2 tsp sugar",
    "1 lb spaghetti",
    "Grated Parmesan cheese, for serving",
    "Extra virgin olive oil, for serving"
  ],
  "steps": [
    "Heat a large pot over medium heat and add a generous drizzle of extra virgin olive oil.",
    "Add the chopped onion, garlic, carrot, and celery and cook until the vegetables are soft and translucent, about 5-7 minutes.",
    "Add the minced beef and cook until browned, breaking it up with a spoon as it cooks.",
    "Add the tomato paste and cook for 1-2 minutes, stirring constantly.",
    "Add the canned tomatoes, dried basil, dried oregano, salt, black pepper, and sugar. Stir well to combine.",
    "Bring the sauce to a simmer and let it cook for 20-30 minutes, stirring occasionally, until the sauce has thickened and the flavors have melded together.",
    "While the sauce cooks, bring a large pot of salted water to a boil and cook the spaghetti according to the package instructions until al dente. Reserve 1 cup of pasta water before draining the spaghetti.",
    "Add the reserved pasta water to the sauce and stir to combine.",
    "Combine the cooked spaghetti and sauce, tossing to coat the pasta evenly.",
    "Serve hot, topped with grated Parmesan cheese and a drizzle of extra virgin olive oil.",
    "Enjoy!"
  ]
}
```

Generated with Llama-3.2-3B-Instruct model - pretty good for a 3B
parameter model 👍
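
For reference, here is a minimal sketch of how a client might request this kind of structured output once the server is running (setup in the Test Plan below). It assumes the llama-stack-client Python SDK and the `response_format={"type": "json_schema", ...}` shape this PR passes through to vLLM; the schema and field names are illustrative, not taken from the tests.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5001")

# Illustrative JSON schema for the recipe example above.
recipe_schema = {
    "type": "object",
    "properties": {
        "recipe_name": {"type": "string"},
        "ingredients": {"type": "array", "items": {"type": "string"}},
        "steps": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["recipe_name", "ingredients", "steps"],
}

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Give me a recipe for Spaghetti Bolognaise."}],
    # The provider constrains vLLM's decoding so the output conforms to the schema.
    response_format={"type": "json_schema", "json_schema": recipe_schema},
)
print(response.completion_message.content)  # should be valid JSON matching recipe_schema
```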

## Test Plan

`pytest -v -s
llama_stack/providers/tests/inference/test_text_inference.py -k
llama_3b-vllm_remote`

With the following setup:

```bash
# Environment
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export INFERENCE_PORT=8000
export VLLM_URL=http://localhost:8000/v1

# vLLM server
sudo docker run --gpus all \
    -v $STORAGE_DIR/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=$(cat ~/.cache/huggingface/token)" \
    -p 8000:$INFERENCE_PORT \
    --ipc=host \
    --net=host \
    vllm/vllm-openai:v0.6.3.post1 \
    --model $INFERENCE_MODEL

# llama-stack server
llama stack build --template remote-vllm --image-type conda && llama stack run distributions/remote-vllm/run.yaml \
  --port 5001 \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
```

Results:

```
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_model_list[llama_3b-vllm_remote] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_completion[llama_3b-vllm_remote] SKIPPED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_completions_structured_output[llama_3b-vllm_remote] SKIPPED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_non_streaming[llama_3b-vllm_remote] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_structured_output[llama_3b-vllm_remote] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_streaming[llama_3b-vllm_remote] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling[llama_3b-vllm_remote] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling_streaming[llama_3b-vllm_remote] PASSED

================================ 6 passed, 2 skipped, 120 deselected, 2 warnings in 13.26s ================================
```

## Sources

- https://github.com/vllm-project/vllm/discussions/8300
- By default, vLLM uses https://github.com/dottxt-ai/outlines for
structured outputs
[[1](32e7db2536/vllm/engine/arg_utils.py (L279-L280))]
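
For a quick sanity check outside of Llama Stack, the same guided decoding can be exercised directly against the vLLM server's OpenAI-compatible endpoint. This is a hedged sketch using the `guided_json` extra-body field (a vLLM-specific extension backed by outlines); the schema below is illustrative.

```python
from openai import OpenAI

# Talk to the vLLM server started in the Test Plan above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

schema = {
    "type": "object",
    "properties": {"recipe_name": {"type": "string"}},
    "required": ["recipe_name"],
}

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Name one pasta dish."}],
    extra_body={"guided_json": schema},  # vLLM-specific structured-output parameter
)
print(completion.choices[0].message.content)
```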

## Before submitting

- [N/A] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [N/A?] Updated relevant documentation. Couldn't find any relevant documentation. Let me know if I've missed anything.
- [x] Wrote necessary unit or integration tests.

# Llama Stack


Quick Start | Documentation | Zero-to-Hero Guide

Llama Stack defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented in the form of interoperable APIs with a broad set of Service Providers providing their implementations.


Our goal is to provide pre-packaged implementations which can be operated in a variety of deployment environments: developers start iterating with Desktops or their mobile devices and can seamlessly transition to on-prem or public cloud deployments. At every point in this transition, the same set of APIs and the same developer experience is available.

> ⚠️ **Note:** The Stack APIs are rapidly improving, but are still very much a work in progress, and we invite feedback as well as direct contributions.

## APIs

We have working implementations of the following APIs today:

  • Inference
  • Safety
  • Memory
  • Agents
  • Eval
  • Telemetry

Alongside these APIs, we also have related APIs for operating on the associated resources (see Concepts):

  • Models
  • Shields
  • Memory Banks
  • EvalTasks
  • Datasets
  • Scoring Functions

We are also working on the following APIs which will be released soon:

  • Post Training
  • Synthetic Data Generation
  • Reward Scoring

Each of these APIs is itself a collection of REST endpoints.
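
As a rough illustration of this, any HTTP client can talk to a running Stack server directly; the route and payload below are illustrative assumptions (modeled on the Inference API), not the authoritative paths, so consult the generated API reference for the exact endpoints.

```python
import requests

# Hypothetical example: calling an inference endpoint on a locally running server.
resp = requests.post(
    "http://localhost:5001/inference/chat_completion",  # illustrative route
    json={
        "model_id": "meta-llama/Llama-3.2-3B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
)
print(resp.json())
```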

## Philosophy

### Service-oriented design

Unlike other frameworks, Llama Stack is built with a service-oriented, REST API-first approach. Such a design not only allows for seamless transitions from local to remote deployments, but also forces the design to be more declarative. We believe this restriction can result in a much simpler, more robust developer experience. It necessarily trades off against expressivity; however, if we get the APIs right, it can lead to a very powerful platform.

### Composability

We expect the set of APIs we design to be composable. An Agent abstractly depends on { Inference, Memory, Safety } APIs but does not care about the actual implementation details. Safety itself may require model inference and hence can depend on the Inference API.
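
As a rough conceptual sketch (not the actual llama-stack interfaces), an agent written purely against these API surfaces might look like the following, where any provider implementing the protocols can be swapped in:

```python
from typing import Protocol


class Inference(Protocol):
    def chat_completion(self, messages: list[dict]) -> str: ...


class Memory(Protocol):
    def query(self, text: str) -> list[str]: ...


class Safety(Protocol):
    def run_shield(self, text: str) -> bool: ...


class Agent:
    """Composes Inference, Memory, and Safety without knowing their implementations."""

    def __init__(self, inference: Inference, memory: Memory, safety: Safety):
        self.inference, self.memory, self.safety = inference, memory, safety

    def step(self, user_message: str) -> str:
        if not self.safety.run_shield(user_message):  # Safety may itself call Inference
            return "Request blocked by safety shield."
        context = self.memory.query(user_message)  # retrieve supporting documents
        prompt = [{"role": "user", "content": f"{context}\n\n{user_message}"}]
        return self.inference.chat_completion(prompt)
```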

### Turnkey one-stop solutions

We expect to provide turnkey solutions for popular deployment scenarios. It should be easy to deploy a Llama Stack server on AWS or on a private data center. Either of these should allow a developer to get started with powerful agentic apps, model evaluations or fine-tuning services in a matter of minutes. They should all result in the same uniform observability and developer experience.

### Focus on Llama models

As a Meta-initiated project, we have started by explicitly focusing on Meta's Llama series of models. Supporting the broad set of open models is no easy task, and we want to start with the models we understand best.

### Supporting the Ecosystem

There is a vibrant ecosystem of Providers offering efficient inference, scalable vector stores, and powerful observability solutions. We want to make sure it is easy for developers to pick and choose the best implementations for their use cases. We also want to make sure it is easy for new Providers to onboard and participate in the ecosystem.

Additionally, we have designed every element of the Stack such that APIs as well as Resources (like Models) can be federated.

## Supported Llama Stack Implementations

### API Providers

| API Provider Builder | Environments | Agents | Inference | Memory | Safety | Telemetry |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| Meta Reference | Single Node | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Cerebras | Hosted | | ✔️ | | | |
| Fireworks | Hosted | ✔️ | ✔️ | ✔️ | | |
| AWS Bedrock | Hosted | | ✔️ | | ✔️ | |
| Together | Hosted | ✔️ | ✔️ | | ✔️ | |
| Ollama | Single Node | | ✔️ | | | |
| TGI | Hosted and Single Node | | ✔️ | | | |
| Chroma | Single Node | | | ✔️ | | |
| PG Vector | Single Node | | | ✔️ | | |
| PyTorch ExecuTorch | On-device iOS | ✔️ | ✔️ | | | |

### Distributions

| Distribution | Llama Stack Docker | Start This Distribution |
| :--: | :--: | :--: |
| Meta Reference | llamastack/distribution-meta-reference-gpu | Guide |
| Meta Reference Quantized | llamastack/distribution-meta-reference-quantized-gpu | Guide |
| Cerebras | llamastack/distribution-cerebras | Guide |
| Ollama | llamastack/distribution-ollama | Guide |
| TGI | llamastack/distribution-tgi | Guide |
| Together | llamastack/distribution-together | Guide |
| Fireworks | llamastack/distribution-fireworks | Guide |

## Installation

You have two ways to install this repository:

1. **Install as a package**: You can install the repository directly from PyPI by running the following command:

   ```bash
   pip install llama-stack
   ```

2. **Install from source**: If you prefer to install from the source code, make sure you have conda installed. Then, follow these steps:

   ```bash
   mkdir -p ~/local
   cd ~/local
   git clone git@github.com:meta-llama/llama-stack.git

   conda create -n stack python=3.10
   conda activate stack

   cd llama-stack
   $CONDA_PREFIX/bin/pip install -e .
   ```

## Documentation

Please check out our Documentation page for more details.

## Llama Stack Client SDKs

| Language | Client SDK | Package |
| :--: | :--: | :--: |
| Python | llama-stack-client-python | PyPI |
| Swift | llama-stack-client-swift | Swift Package Index |
| Node | llama-stack-client-node | NPM |
| Kotlin | llama-stack-client-kotlin | Maven |

Check out our client SDKs for connecting to a Llama Stack server in your preferred language; you can choose from Python, Node, Swift, and Kotlin to quickly build your applications.

You can find more example scripts with client SDKs to talk with the Llama Stack server in our llama-stack-apps repo.
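
As a starting point, here is a hedged sketch with the Python SDK: connect to a locally running server, list the registered models, and run a basic chat completion. Field names such as `identifier` and `completion_message` reflect the SDK at the time of writing and may differ across versions.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5001")

# List the models registered with the server.
for model in client.models.list():
    print(model.identifier)

# Run a simple (non-streaming) chat completion.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "What is Llama Stack?"}],
)
print(response.completion_message.content)
```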