{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c1e7571c",
   "metadata": {
    "id": "c1e7571c"
   },
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)\n",
    "\n",
    "# Llama Stack - Building AI Applications\n",
    "\n",
    "<img src=\"https://llama-stack.readthedocs.io/en/latest/_images/llama-stack.png\" alt=\"drawing\" width=\"500\"/>\n",
    "\n",
    "Get started with Llama Stack in minutes!\n",
    "\n",
    "[Llama Stack](https://github.com/meta-llama/llama-stack) is a stateful service with REST APIs that lets AI applications move seamlessly across environments: you can build and test against a local server first, then deploy to a hosted endpoint for production.\n",
    "\n",
    "In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/)\n",
    "as the inference [provider](docs/source/providers/index.md#inference) for a Llama model.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4CV1Q19BDMVw",
   "metadata": {
    "id": "4CV1Q19BDMVw"
   },
   "source": [
    "## Step 1: Install and set up"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "K4AvfUAJZOeS",
   "metadata": {
    "id": "K4AvfUAJZOeS"
   },
   "source": [
    "### 1.1. Install uv and Ollama\n",
    "\n",
    "We'll install [uv](https://docs.astral.sh/uv/) to set up the Python virtual environment, along with [colab-xterm](https://github.com/InfuseAI/colab-xterm) for running command-line tools, and [Ollama](https://ollama.com/download) as the inference provider."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7a2d7b85",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install uv llama_stack llama-stack-client\n",
    "\n",
    "## If running on Colab:\n",
    "# !pip install colab-xterm\n",
    "# %load_ext colabxterm\n",
    "\n",
    "!curl -fsSL https://ollama.com/install.sh | sh"
   ]
  },
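  {
   "cell_type": "markdown",
   "id": "check-ollama-version-md",
   "metadata": {},
   "source": [
    "As an optional sanity check, you can confirm the install succeeded by asking the `ollama` CLI for its version (this assumes the install script above completed and put `ollama` on your PATH)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "check-ollama-version",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check: verify the Ollama CLI is installed and on PATH.\n",
    "!ollama --version"
   ]
  },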
  {
   "cell_type": "markdown",
   "id": "39fa584b",
   "metadata": {},
   "source": [
    "### 1.2. Test inference with Ollama"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3bf81522",
   "metadata": {},
   "source": [
    "We’ll now launch a terminal and run inference on a Llama model with Ollama to verify that the model is working correctly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a7e8e0f1",
   "metadata": {},
   "outputs": [],
   "source": [
    "## If running on Colab:\n",
    "# %xterm\n",
    "\n",
    "## To be run in the terminal:\n",
    "# ollama serve &\n",
    "# ollama run llama3.2:3b --keepalive 60m"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f3c5f243",
   "metadata": {},
   "source": [
    "If successful, you should see the model respond to a prompt.\n",
    "\n",
    "...\n",
    "```\n",
    ">>> hi\n",
    "Hello! How can I assist you today?\n",
    "```"
   ]
  },
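  {
   "cell_type": "markdown",
   "id": "check-ollama-api-md",
   "metadata": {},
   "source": [
    "You can also verify Ollama from Python. The sketch below is an optional check that assumes Ollama is serving its REST API on the default port 11434; it lists the locally available models, which should include `llama3.2:3b` after the `ollama run` above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "check-ollama-api",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: confirm Ollama's REST API is reachable on its default port (11434)\n",
    "# and list the locally available models.\n",
    "import requests\n",
    "\n",
    "resp = requests.get(\"http://localhost:11434/api/tags\")\n",
    "resp.raise_for_status()\n",
    "print([m[\"name\"] for m in resp.json().get(\"models\", [])])"
   ]
  },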
  {
   "cell_type": "markdown",
   "id": "oDUB7M_qe-Gs",
   "metadata": {
    "id": "oDUB7M_qe-Gs"
   },
   "source": [
    "## Step 2: Run the Llama Stack server\n",
    "\n",
    "Next, we'll start a Llama Stack server running locally."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "732eadc6",
   "metadata": {},
   "source": [
    "### 2.1. Set up the Llama Stack Server"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "J2kGed0R5PSf",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "collapsed": true,
    "id": "J2kGed0R5PSf",
    "outputId": "2478ea60-8d35-48a1-b011-f233831740c5"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import subprocess\n",
    "\n",
    "if \"UV_SYSTEM_PYTHON\" in os.environ:\n",
    "    del os.environ[\"UV_SYSTEM_PYTHON\"]\n",
    "\n",
    "# This command installs all the dependencies needed for the Llama Stack server with the Ollama inference provider\n",
    "!uv run --with llama-stack llama stack build --template ollama --image-type venv --image-name myvenv\n",
    "\n",
    "def run_llama_stack_server_background():\n",
    "    log_file = open(\"llama_stack_server.log\", \"w\")\n",
    "    process = subprocess.Popen(\n",
    "        \"uv run --with llama-stack llama stack run ollama --image-type venv --image-name myvenv --env INFERENCE_MODEL=llama3.2:3b\",\n",
    "        shell=True,\n",
    "        stdout=log_file,\n",
    "        stderr=log_file,\n",
    "        text=True,\n",
    "    )\n",
    "\n",
    "    print(f\"Starting Llama Stack server with PID: {process.pid}\")\n",
    "    return process\n",
    "\n",
    "def wait_for_server_to_start():\n",
    "    import requests\n",
    "    from requests.exceptions import ConnectionError\n",
    "    import time\n",
    "\n",
    "    url = \"http://0.0.0.0:8321/v1/health\"\n",
    "    max_retries = 30\n",
    "    retry_interval = 1\n",
    "\n",
    "    print(\"Waiting for server to start\", end=\"\")\n",
    "    for _ in range(max_retries):\n",
    "        try:\n",
    "            response = requests.get(url)\n",
    "            if response.status_code == 200:\n",
    "                print(\"\\nServer is ready!\")\n",
    "                return True\n",
    "        except ConnectionError:\n",
    "            print(\".\", end=\"\", flush=True)\n",
    "            time.sleep(retry_interval)\n",
    "\n",
    "    print(\"\\nServer failed to start after\", max_retries * retry_interval, \"seconds\")\n",
    "    return False\n",
    "\n",
    "\n",
    "# Use this helper if needed to kill the server\n",
    "def kill_llama_stack_server():\n",
    "    # Kill any existing llama stack server processes\n",
    "    os.system(\"ps aux | grep -v grep | grep llama_stack.distribution.server.server | awk '{print $2}' | xargs kill -9\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c40e9efd",
   "metadata": {},
   "source": [
    "### 2.2. Start the Llama Stack Server"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "f779283d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Starting Llama Stack server with PID: 787100\n",
      "Waiting for server to start\n",
      "Server is ready!\n"
     ]
    }
   ],
   "source": [
    "server_process = run_llama_stack_server_background()\n",
    "assert wait_for_server_to_start()"
   ]
  },
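  {
   "cell_type": "markdown",
   "id": "server-smoke-test-md",
   "metadata": {},
   "source": [
    "Before moving on, you can optionally confirm the server is answering API calls. This minimal sketch assumes the default port 8321 used above and simply lists the models the Ollama distribution has registered."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "server-smoke-test",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_stack_client import LlamaStackClient\n",
    "\n",
    "# Optional smoke test: list the models the running server exposes.\n",
    "client = LlamaStackClient(base_url=\"http://0.0.0.0:8321\")\n",
    "for m in client.models.list():\n",
    "    print(m.model_type, m.identifier)"
   ]
  },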
  {
   "cell_type": "markdown",
   "id": "28477c03",
   "metadata": {},
   "source": [
    "## Step 3: Run the demo"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "7da71011",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
"rag_tool> Ingesting document: https://www.paulgraham.com/greatwork.html\n",
"prompt> How do you do great work?\n",
"\u001b[33minference> \u001b[0m\u001b[33m[k\u001b[0m\u001b[33mnowledge\u001b[0m\u001b[33m_search\u001b[0m\u001b[33m(query\u001b[0m\u001b[33m=\"\u001b[0m\u001b[33mWhat\u001b[0m\u001b[33m is\u001b[0m\u001b[33m the\u001b[0m\u001b[33m key\u001b[0m\u001b[33m to\u001b[0m\u001b[33m doing\u001b[0m\u001b[33m great\u001b[0m\u001b[33m work\u001b[0m\u001b[33m\")]\u001b[0m\u001b[97m\u001b[0m\n",
"\u001b[32mtool_execution> Tool:knowledge_search Args:{'query': 'What is the key to doing great work'}\u001b[0m\n",
"\u001b[32mtool_execution> Tool:knowledge_search Response:[TextContentItem(text='knowledge_search tool found 5 chunks:\\nBEGIN of knowledge_search tool results.\\n', type='text'), TextContentItem(text=\"Result 1:\\nDocument_id:docum\\nContent: work. Doing great work means doing something important\\nso well that you expand people's ideas of what's possible. But\\nthere's no threshold for importance. It's a matter of degree, and\\noften hard to judge at the time anyway.\\n\", type='text'), TextContentItem(text=\"Result 2:\\nDocument_id:docum\\nContent: work. Doing great work means doing something important\\nso well that you expand people's ideas of what's possible. But\\nthere's no threshold for importance. It's a matter of degree, and\\noften hard to judge at the time anyway.\\n\", type='text'), TextContentItem(text=\"Result 3:\\nDocument_id:docum\\nContent: work. Doing great work means doing something important\\nso well that you expand people's ideas of what's possible. But\\nthere's no threshold for importance. It's a matter of degree, and\\noften hard to judge at the time anyway.\\n\", type='text'), TextContentItem(text=\"Result 4:\\nDocument_id:docum\\nContent: work. Doing great work means doing something important\\nso well that you expand people's ideas of what's possible. But\\nthere's no threshold for importance. It's a matter of degree, and\\noften hard to judge at the time anyway.\\n\", type='text'), TextContentItem(text=\"Result 5:\\nDocument_id:docum\\nContent: work. Doing great work means doing something important\\nso well that you expand people's ideas of what's possible. But\\nthere's no threshold for importance. It's a matter of degree, and\\noften hard to judge at the time anyway.\\n\", type='text'), TextContentItem(text='END of knowledge_search tool results.\\n', type='text'), TextContentItem(text='The above results were retrieved to help answer the user\\'s query: \"What is the key to doing great work\". Use them as supporting information only in answering this query.\\n', type='text')]\u001b[0m\n",
"\u001b[33minference> \u001b[0m\u001b[33mDoing\u001b[0m\u001b[33m great\u001b[0m\u001b[33m work\u001b[0m\u001b[33m means\u001b[0m\u001b[33m doing\u001b[0m\u001b[33m something\u001b[0m\u001b[33m important\u001b[0m\u001b[33m so\u001b[0m\u001b[33m well\u001b[0m\u001b[33m that\u001b[0m\u001b[33m you\u001b[0m\u001b[33m expand\u001b[0m\u001b[33m people\u001b[0m\u001b[33m's\u001b[0m\u001b[33m ideas\u001b[0m\u001b[33m of\u001b[0m\u001b[33m what\u001b[0m\u001b[33m's\u001b[0m\u001b[33m possible\u001b[0m\u001b[33m.\u001b[0m\u001b[33m However\u001b[0m\u001b[33m,\u001b[0m\u001b[33m there\u001b[0m\u001b[33m's\u001b[0m\u001b[33m no\u001b[0m\u001b[33m threshold\u001b[0m\u001b[33m for\u001b[0m\u001b[33m importance\u001b[0m\u001b[33m,\u001b[0m\u001b[33m and\u001b[0m\u001b[33m it\u001b[0m\u001b[33m's\u001b[0m\u001b[33m often\u001b[0m\u001b[33m hard\u001b[0m\u001b[33m to\u001b[0m\u001b[33m judge\u001b[0m\u001b[33m at\u001b[0m\u001b[33m the\u001b[0m\u001b[33m time\u001b[0m\u001b[33m anyway\u001b[0m\u001b[33m.\u001b[0m\u001b[33m Great\u001b[0m\u001b[33m work\u001b[0m\u001b[33m is\u001b[0m\u001b[33m a\u001b[0m\u001b[33m matter\u001b[0m\u001b[33m of\u001b[0m\u001b[33m degree\u001b[0m\u001b[33m,\u001b[0m\u001b[33m and\u001b[0m\u001b[33m it\u001b[0m\u001b[33m can\u001b[0m\u001b[33m be\u001b[0m\u001b[33m difficult\u001b[0m\u001b[33m to\u001b[0m\u001b[33m determine\u001b[0m\u001b[33m whether\u001b[0m\u001b[33m someone\u001b[0m\u001b[33m has\u001b[0m\u001b[33m done\u001b[0m\u001b[33m great\u001b[0m\u001b[33m work\u001b[0m\u001b[33m until\u001b[0m\u001b[33m after\u001b[0m\u001b[33m the\u001b[0m\u001b[33m fact\u001b[0m\u001b[33m.\u001b[0m\u001b[97m\u001b[0m\n",
"\u001b[30m\u001b[0m"
     ]
    }
   ],
   "source": [
    "from llama_stack_client import Agent, AgentEventLogger, RAGDocument, LlamaStackClient\n",
    "\n",
    "vector_db_id = \"my_demo_vector_db\"\n",
    "client = LlamaStackClient(base_url=\"http://0.0.0.0:8321\")\n",
    "\n",
    "models = client.models.list()\n",
    "\n",
    "# Select the first LLM and the first embedding model\n",
    "model_id = next(m for m in models if m.model_type == \"llm\").identifier\n",
    "embedding_model = next(m for m in models if m.model_type == \"embedding\")\n",
    "embedding_model_id = embedding_model.identifier\n",
    "embedding_dimension = embedding_model.metadata[\"embedding_dimension\"]\n",
    "\n",
    "_ = client.vector_dbs.register(\n",
    "    vector_db_id=vector_db_id,\n",
    "    embedding_model=embedding_model_id,\n",
    "    embedding_dimension=embedding_dimension,\n",
    "    provider_id=\"faiss\",\n",
    ")\n",
    "source = \"https://www.paulgraham.com/greatwork.html\"\n",
    "print(\"rag_tool> Ingesting document:\", source)\n",
    "document = RAGDocument(\n",
    "    document_id=\"document_1\",\n",
    "    content=source,\n",
    "    mime_type=\"text/html\",\n",
    "    metadata={},\n",
    ")\n",
    "client.tool_runtime.rag_tool.insert(\n",
    "    documents=[document],\n",
    "    vector_db_id=vector_db_id,\n",
    "    chunk_size_in_tokens=50,\n",
    ")\n",
    "agent = Agent(\n",
    "    client,\n",
    "    model=model_id,\n",
    "    instructions=\"You are a helpful assistant\",\n",
    "    tools=[\n",
    "        {\n",
    "            \"name\": \"builtin::rag/knowledge_search\",\n",
    "            \"args\": {\"vector_db_ids\": [vector_db_id]},\n",
    "        }\n",
    "    ],\n",
    ")\n",
    "\n",
    "prompt = \"How do you do great work?\"\n",
    "print(\"prompt>\", prompt)\n",
    "\n",
    "response = agent.create_turn(\n",
    "    messages=[{\"role\": \"user\", \"content\": prompt}],\n",
    "    session_id=agent.create_session(\"rag_session\"),\n",
    "    stream=True,\n",
    ")\n",
    "\n",
    "for log in AgentEventLogger().log(response):\n",
    "    log.print()"
   ]
  },
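  {
   "cell_type": "markdown",
   "id": "non-streaming-md",
   "metadata": {},
   "source": [
    "If you'd rather collect the final answer as a single object instead of streaming event logs, you can rerun the turn without streaming. This is a minimal sketch that assumes the client's non-streaming mode returns a completed turn with an `output_message`; the session name here is arbitrary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "non-streaming-demo",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Non-streaming variant: ask the same question and print only the final answer.\n",
    "# Assumes create_turn(..., stream=False) returns a completed turn object.\n",
    "turn = agent.create_turn(\n",
    "    messages=[{\"role\": \"user\", \"content\": prompt}],\n",
    "    session_id=agent.create_session(\"rag_session_non_streaming\"),\n",
    "    stream=False,\n",
    ")\n",
    "print(turn.output_message.content)"
   ]
  },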
  {
   "cell_type": "markdown",
   "id": "341aaadf",
   "metadata": {},
   "source": [
    "Congratulations! You've successfully built your first RAG application using Llama Stack! 🎉🥳"
   ]
  },
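  {
   "cell_type": "markdown",
   "id": "cleanup-md",
   "metadata": {},
   "source": [
    "When you're done experimenting, stop the background server with the helper defined in Step 2.1."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cleanup",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Stop the background Llama Stack server started in Step 2.2.\n",
    "kill_llama_stack_server()"
   ]
  },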
  {
   "cell_type": "markdown",
   "id": "e88e1185",
   "metadata": {},
   "source": [
    "## Next Steps"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bcb73600",
   "metadata": {},
   "source": [
    "Now you're ready to dive deeper into Llama Stack!\n",
    "- Explore the [Detailed Tutorial](./detailed_tutorial.md).\n",
    "- Try the [Getting Started Notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb).\n",
    "- Browse more [Notebooks on GitHub](https://github.com/meta-llama/llama-stack/tree/main/docs/notebooks).\n",
    "- Learn about Llama Stack [Concepts](../concepts/index.md).\n",
    "- Discover how to [Build Llama Stacks](../distributions/index.md).\n",
    "- Refer to our [References](../references/index.md) for details on the Llama CLI and Python SDK.\n",
    "- Check out the [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repository for example applications and tutorials."
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "gpuType": "T4",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}