# What does this PR do?

This PR fixes two bugs in agent turn handling:

- Agents could not keep track of their instructions after the first turn, because the system instructions were not being appended to the messages list.
- Turns were not being fetched in the correct sequence.

Addresses issue (#issue)

## Test Plan

I have a file with a precise prompt that requires more than one turn to execute (shared below). I ran it as a Python script to verify that, after the code change, the turns are executed according to the instructions.

```
import asyncio
from typing import List, Optional, Dict

from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types import SamplingParams, UserMessage
from llama_stack_client.types.agent_create_params import AgentConfig

LLAMA_STACK_API_TOGETHER_URL = "http://10.12.79.177:5001"


class Agent:
    def __init__(self):
        self.client = LlamaStackClient(
            base_url=LLAMA_STACK_API_TOGETHER_URL,
        )

    def create_agent(self, agent_config: AgentConfig):
        agent = self.client.agents.create(
            agent_config=agent_config,
        )
        self.agent_id = agent.agent_id
        session = self.client.agents.session.create(
            agent_id=agent.agent_id,
            session_name="example_session",
        )
        self.session_id = session.session_id

    async def execute_turn(self, content: str):
        response = self.client.agents.turn.create(
            agent_id=self.agent_id,
            session_id=self.session_id,
            messages=[
                UserMessage(content=content, role="user"),
            ],
            stream=True,
        )
        for chunk in response:
            if chunk.event.payload.event_type != "turn_complete":
                yield chunk


async def run_main():
    system_prompt = """You are an AI Agent tasked with Capturing Book Renting Information for a Library.
You will politely gather the book and user details one step at a time to send over the book to the user. Here’s how to proceed:

1. Data Security: Inform the user that their data will be kept secure.
2. Optional Participation: Let them know they are not required to share details but that doing so will help them learn about the books offered.
3. Sequential Information Capture: Follow the steps below, one question at a time. Do not skip or combine questions.

Steps
Step 1: Politely ask to provide the name of the book.
Step 2: Ask for the name of the author.
Step 3: Ask for the Author's country.
Step 4: Ask for the year of publication.
Step 5: If any information is missing or seems incorrect, ask the user to re-enter that specific detail.
Step 6: Confirm that the user consents to share the entered information.
Step 7: Thank the user for providing the details and let them know they will receive an email about the book.

Do not do any validation of the user entered information.
Do not print the Steps or your internal thoughts in the response.
Do not print the prompts or data structure object in the response.
Do not fill in the requested user data on your own. It has to be entered by the user only.
Finally, compile and print the user-provided information as a JSON object in your response.
"""

    agent_config = AgentConfig(
        model="Llama3.2-11B-Vision-Instruct",
        instructions=system_prompt,
        enable_session_persistence=True,
    )

    agent = Agent()
    agent.create_agent(agent_config)
    print("Agent and Session:", agent.agent_id, agent.session_id)

    while True:
        query = input("Enter your query (or type 'exit' to quit): ")
        if query.lower() == "exit":
            print("Exiting the loop.")
            break
        prompt = query
        print(f"User> {prompt}")
        response = agent.execute_turn(content=prompt)
        async for log in EventLogger().log(response):
            if log is not None:
                log.print()


if __name__ == "__main__":
    asyncio.run(run_main())
```

Below is a screenshot of the results of the first commit:

<img width="1770" alt="Screenshot 2024-11-13 at 3 15 29 PM" src="https://github.com/user-attachments/assets/1a7a090d-fc92-49cc-a786-bfc812e3d9cc">

Below is a screenshot of the results of the second commit:

<img width="1792" alt="Screenshot 2024-11-13 at 6 40 56 PM" src="https://github.com/user-attachments/assets/a9474f75-cd8c-4d49-82cd-5ff81ff12b07">

Also a screenshot of a print statement showing that the turns are now fetched in sequence:

<img width="1783" alt="Screenshot 2024-11-13 at 6 42 22 PM" src="https://github.com/user-attachments/assets/b906404e-a3e4-48a2-b893-69f36bbdcb98">

## Sources

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.
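For context, the behavior the first fix targets can be sketched as follows. This is a minimal illustration, not the actual llama-stack code: the function name `build_turn_messages` and the dict-based message shape are hypothetical, chosen only to show that the system instructions must be re-prepended on every turn, not just the first.

```python
# Hypothetical sketch of the fix's intent (not the real llama-stack internals):
# the system instructions are prepended to the message list on EVERY turn,
# so the agent keeps following them after turn 1.

def build_turn_messages(instructions, history, new_user_message):
    """Build the message list for one turn, always starting with the
    system instructions, then prior turns in order, then the new input."""
    messages = [{"role": "system", "content": instructions}]
    messages.extend(history)  # prior turns, in chronological order
    messages.append({"role": "user", "content": new_user_message})
    return messages

history = [
    {"role": "user", "content": "I want to rent a book"},
    {"role": "assistant", "content": "May I have the book's name?"},
]
msgs = build_turn_messages(
    "Capture book renting info step by step.", history, "The name is Dune"
)
```

Before the fix, the equivalent of the `messages[0]` system entry was missing from turn 2 onward, so the model lost its instructions.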
# Llama Stack
This repository contains the Llama Stack API specifications as well as API Providers and Llama Stack Distributions.
The Llama Stack defines and standardizes the building blocks needed to bring generative AI applications to market. These blocks span the entire development lifecycle: from model training and fine-tuning, through product evaluation, to building and running AI agents in production. Beyond definition, we are building providers for the Llama Stack APIs: we are developing open-source versions and partnering with providers, ensuring developers can assemble AI solutions using consistent, interlocking pieces across platforms. The ultimate goal is to accelerate innovation in the AI space.
The Stack APIs are rapidly improving, but they are still very much a work in progress, and we invite feedback as well as direct contributions.
## APIs
The Llama Stack consists of the following set of APIs:
- Inference
- Safety
- Memory
- Agentic System
- Evaluation
- Post Training
- Synthetic Data Generation
- Reward Scoring
Each API is itself a collection of REST endpoints.
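To make this concrete, a client talks to each API over HTTP. The sketch below builds such a request; the endpoint path `/inference/chat_completion` and the payload shape are illustrative assumptions, not the exact Llama Stack wire format.

```python
import json

def build_chat_request(base_url, model, user_content):
    """Assemble a URL and JSON body for a chat-style inference call.
    The path and payload fields here are assumptions for illustration."""
    url = f"{base_url}/inference/chat_completion"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }
    return url, json.dumps(payload)

url, body = build_chat_request(
    "http://localhost:5001", "Llama3.2-3B-Instruct", "Hello"
)
```

In practice you would use one of the client SDKs listed below rather than hand-rolling requests like this.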
## API Providers
A Provider is what makes the API real -- they provide the actual implementation backing the API.
As an example, for Inference, the implementation could be backed by open-source libraries like [ torch | vLLM | TensorRT ] as possible options.
A provider can also be just a pointer to a remote REST service -- for example, cloud providers or dedicated inference providers could serve these APIs.
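As an illustration, a distribution's run configuration might wire the inference API to such a remote provider roughly like this. This is a hypothetical sketch: the field names and values are assumptions for illustration, not the exact llama-stack configuration schema.

```
providers:
  inference:
    - provider_id: together0
      provider_type: remote::together   # a pointer to a remote REST service
      config:
        url: https://api.together.xyz   # placeholder endpoint
```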
## Llama Stack Distribution
A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix and match providers: some can be backed by local code and some can be remote. As a hobbyist, you can serve a small model locally but choose a cloud provider for a large model. Either way, the higher-level APIs your app works with don't need to change at all. You can even imagine moving across the server / mobile-device boundary while always using the same uniform set of APIs for developing generative AI applications.
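The mix-and-match idea can be sketched in a few lines: the application only ever changes which server URL it points at, while the client code stays identical. The profile names and URLs below are placeholders, not real endpoints.

```python
import os

def pick_base_url(profile: str) -> str:
    """Choose a Llama Stack server URL by deployment profile.
    The URLs are placeholders; the app code using the client is unchanged
    regardless of which distribution is behind the URL."""
    urls = {
        "local": "http://localhost:5001",      # e.g. a local Ollama distribution
        "cloud": "https://stack.example.com",  # e.g. a hosted distribution
    }
    return urls.get(profile, urls["local"])

base_url = pick_base_url(os.environ.get("STACK_PROFILE", "local"))
```

A `LlamaStackClient(base_url=...)` constructed from this URL (as in the test script above) would then work the same against either deployment.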
## Supported Llama Stack Implementations
### API Providers
API Provider Builder | Environments | Agents | Inference | Memory | Safety | Telemetry |
---|---|---|---|---|---|---|
Meta Reference | Single Node | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
Fireworks | Hosted | ✔️ | ✔️ | ✔️ | | |
AWS Bedrock | Hosted | | ✔️ | | ✔️ | |
Together | Hosted | ✔️ | ✔️ | | ✔️ | |
Ollama | Single Node | | ✔️ | | | |
TGI | Hosted and Single Node | | ✔️ | | | |
Chroma | Single Node | | | ✔️ | | |
PG Vector | Single Node | | | ✔️ | | |
PyTorch ExecuTorch | On-device iOS | ✔️ | ✔️ | | | |
### Distributions
Distribution | Llama Stack Docker | Start This Distribution | Inference | Agents | Memory | Safety | Telemetry |
---|---|---|---|---|---|---|---|
Meta Reference | llamastack/distribution-meta-reference-gpu | Guide | meta-reference | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
Meta Reference Quantized | llamastack/distribution-meta-reference-quantized-gpu | Guide | meta-reference-quantized | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
Ollama | llamastack/distribution-ollama | Guide | remote::ollama | meta-reference | remote::pgvector; remote::chromadb | meta-reference | meta-reference |
TGI | llamastack/distribution-tgi | Guide | remote::tgi | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
Together | llamastack/distribution-together | Guide | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
Fireworks | llamastack/distribution-fireworks | Guide | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |
## Installation
You have two ways to install this repository:

1. **Install as a package**: Install the repository directly from PyPI by running the following command:

   ```
   pip install llama-stack
   ```

2. **Install from source**: If you prefer to install from the source code, follow these steps:

   ```
   mkdir -p ~/local
   cd ~/local
   git clone git@github.com:meta-llama/llama-stack.git

   conda create -n stack python=3.10
   conda activate stack

   cd llama-stack
   $CONDA_PREFIX/bin/pip install -e .
   ```
## Documentation

Please check out our Documentation page for more details.
- CLI reference
  - Guide to using the `llama` CLI to work with Llama models (download, study prompts) and to building/starting a Llama Stack distribution.
- Getting Started
  - Quick guide to start a Llama Stack server.
  - Jupyter notebook walking through how to use the simple text and vision inference `llama_stack_client` APIs.
- Contributing
  - Adding a new API Provider: a walk-through of how to add a new API provider.
## Llama Stack Client SDK
Language | Client SDK | Package |
---|---|---|
Python | llama-stack-client-python | |
Swift | llama-stack-client-swift | |
Node | llama-stack-client-node | |
Kotlin | llama-stack-client-kotlin |
Check out our client SDKs for connecting to a Llama Stack server in your preferred language. You can choose from Python, Node, Swift, and Kotlin to quickly build your applications.
You can find more example scripts with client SDKs to talk with the Llama Stack server in our llama-stack-apps repo.