# What does this PR do?

- Adding a notebook equivalent of the [getting_started/index.md#Quickstart guide](https://github.com/meta-llama/llama-stack/blob/main/docs/source/getting_started/index.md).

## To discuss

**Note:** This works locally, but I am encountering issues when attempting to run through the notebook on Google Colab. Specifically, on the last step of the demo, the `knowledge_search` tool doesn't seem to be called, i.e.:

```
rag_tool> Ingesting document: https://www.paulgraham.com/greatwork.html
prompt> How do you do great work?
inference> I don't have personal experiences or emotions, but I was trained on a large corpus of text data and use various techniques such as natural language processing (NLP) and machine learning algorithms to generate human-like responses.
```

I would expect to get something like:

```
rag_tool> Ingesting document: https://www.paulgraham.com/greatwork.html
prompt> How do you do great work?
inference> [knowledge_search(query="What is the key to doing great work")]
tool_execution> Tool:knowledge_search Args:{'query': 'What is the key to doing great work'}
tool_execution> Tool:knowledge_search Response:[TextContentItem(text='knowledge_search tool found 5 chunks: ....
....
```
# Quickstart

Get started with Llama Stack in minutes!

Llama Stack is a stateful service with REST APIs to support the seamless transition of AI applications across different environments. You can build and test using a local server first and deploy to a hosted endpoint for production.

In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/) as the inference [provider](../providers/inference/index) for a Llama Model.

**💡 Notebook Version:** You can also follow this quickstart guide in a Jupyter notebook format: [quick_start.ipynb](https://github.com/meta-llama/llama-stack/blob/main/docs/quick_start.ipynb)
#### Step 1: Install and setup

1. Install [uv](https://docs.astral.sh/uv/)
2. Run inference on a Llama model with [Ollama](https://ollama.com/download)

```bash
ollama run llama3.2:3b --keepalive 60m
```
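Before moving on, you can optionally confirm that Ollama is up and has the model available. This check is not part of the original guide; it is a minimal sketch that assumes Ollama's default local port (11434) and its standard `/api/tags` model-listing endpoint:

```python
import requests

# Assumption: Ollama serves its HTTP API on localhost:11434, and
# GET /api/tags returns {"models": [{"name": ...}, ...]}.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
names = [m["name"] for m in resp.json().get("models", [])]
print("Ollama models:", names)
assert any(n.startswith("llama3.2") for n in names), "llama3.2:3b has not been pulled yet"
```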
#### Step 2: Run the Llama Stack server

We will use `uv` to run the Llama Stack server.

```bash
INFERENCE_MODEL=llama3.2:3b uv run --with llama-stack llama stack build --template ollama --image-type venv --run
```
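Once the server reports that it is listening, you can verify it is reachable before running the full demo. A minimal sketch, reusing the same `LlamaStackClient` and default port (8321) that the demo script below uses:

```python
from llama_stack_client import LlamaStackClient

# Assumption: the server from Step 2 is listening on its default port, 8321.
client = LlamaStackClient(base_url="http://localhost:8321")

# You should see at least one LLM (llama3.2:3b via Ollama) and one
# embedding model; the demo script selects from this same list.
for m in client.models.list():
    print(m.model_type, m.identifier)
```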
#### Step 3: Run the demo

Now open up a new terminal and copy the following script into a file named `demo_script.py`.
```python
from llama_stack_client import Agent, AgentEventLogger, RAGDocument, LlamaStackClient

vector_db_id = "my_demo_vector_db"
client = LlamaStackClient(base_url="http://localhost:8321")

models = client.models.list()

# Select the first LLM and the first embedding model
model_id = next(m for m in models if m.model_type == "llm").identifier
embedding_model_id = (
    em := next(m for m in models if m.model_type == "embedding")
).identifier
embedding_dimension = em.metadata["embedding_dimension"]

# Register a vector database backed by the inline FAISS provider
_ = client.vector_dbs.register(
    vector_db_id=vector_db_id,
    embedding_model=embedding_model_id,
    embedding_dimension=embedding_dimension,
    provider_id="faiss",
)

# Download, chunk, and embed the document into the vector database
source = "https://www.paulgraham.com/greatwork.html"
print("rag_tool> Ingesting document:", source)
document = RAGDocument(
    document_id="document_1",
    content=source,
    mime_type="text/html",
    metadata={},
)
client.tool_runtime.rag_tool.insert(
    documents=[document],
    vector_db_id=vector_db_id,
    chunk_size_in_tokens=50,
)

# Create an agent with the built-in RAG tool, scoped to our vector database
agent = Agent(
    client,
    model=model_id,
    instructions="You are a helpful assistant",
    tools=[
        {
            "name": "builtin::rag/knowledge_search",
            "args": {"vector_db_ids": [vector_db_id]},
        }
    ],
)

prompt = "How do you do great work?"
print("prompt>", prompt)

# Stream the turn, printing inference and tool-execution events as they arrive
response = agent.create_turn(
    messages=[{"role": "user", "content": prompt}],
    session_id=agent.create_session("rag_session"),
    stream=True,
)

for log in AgentEventLogger().log(response):
    log.print()
```
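Note that the script passes `agent.create_session("rag_session")` inline, so each call to `create_turn` gets a fresh session. For a multi-turn conversation you would typically create the session once and reuse its id; a minimal sketch of that pattern, using the same `agent` as above (the follow-up prompt is just illustrative):

```python
# Reuse one session across turns so the agent retains conversation context.
session_id = agent.create_session("rag_session")

for prompt in ["How do you do great work?", "Summarize that in one sentence."]:
    print("prompt>", prompt)
    response = agent.create_turn(
        messages=[{"role": "user", "content": prompt}],
        session_id=session_id,
        stream=True,
    )
    for log in AgentEventLogger().log(response):
        log.print()
```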
|
|
We will use `uv` to run the script
|
|
```
|
|
uv run --with llama-stack-client,fire,requests demo_script.py
|
|
```
|
|
You should see output like the following.

```
rag_tool> Ingesting document: https://www.paulgraham.com/greatwork.html

prompt> How do you do great work?

inference> [knowledge_search(query="What is the key to doing great work")]

tool_execution> Tool:knowledge_search Args:{'query': 'What is the key to doing great work'}

tool_execution> Tool:knowledge_search Response:[TextContentItem(text='knowledge_search tool found 5 chunks:\nBEGIN of knowledge_search tool results.\n', type='text'), TextContentItem(text="Result 1:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 2:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 3:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 4:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 5:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text='END of knowledge_search tool results.\n', type='text')]

inference> Based on the search results, it seems that doing great work means doing something important so well that you expand people's ideas of what's possible. However, there is no clear threshold for importance, and it can be difficult to judge at the time.

To further clarify, I would suggest that doing great work involves:

* Completing tasks with high quality and attention to detail
* Expanding on existing knowledge or ideas
* Making a positive impact on others through your work
* Striving for excellence and continuous improvement

Ultimately, great work is about making a meaningful contribution and leaving a lasting impression.
```
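If, as noted in the PR description above, the model answers without ever invoking `knowledge_search`, it can help to check that ingestion and retrieval work independently of the agent's tool-calling. A minimal sketch, assuming the client exposes the RAG tool runtime's `query` method alongside the `insert` call used in the script:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Assumption: rag_tool.query mirrors the server-side RAG tool runtime API,
# taking the query content plus the vector DBs to search.
result = client.tool_runtime.rag_tool.query(
    content="What is the key to doing great work?",
    vector_db_ids=["my_demo_vector_db"],
)

# The result content should be TextContentItem chunks, as in the
# tool_execution output shown above.
for item in result.content:
    print(item.text)
```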
Congratulations! You've successfully built your first RAG application using Llama Stack! 🎉🥳

## Next Steps

Now you're ready to dive deeper into Llama Stack!
- Explore the [Detailed Tutorial](./detailed_tutorial.md).
- Try the [Getting Started Notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb).
- Browse more [Notebooks on GitHub](https://github.com/meta-llama/llama-stack/tree/main/docs/notebooks).
- Learn about Llama Stack [Concepts](../concepts/index.md).
- Discover how to [Build Llama Stacks](../distributions/index.md).
- Refer to our [References](../references/index.md) for details on the Llama CLI and Python SDK.
- Check out the [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repository for example applications and tutorials.