{
"cells": [
{
"cell_type": "markdown",
"id": "c1e7571c",
"metadata": {
"id": "c1e7571c"
},
"source": [
"[](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)\n",
|
||
"\n",
|
||
"# Llama Stack - Building AI Applications\n",
|
||
"\n",
|
||
"<img src=\"https://llamastack.github.io/latest/_images/llama-stack.png\" alt=\"drawing\" width=\"500\"/>\n",
|
||
"\n",
|
||
"Get started with Llama Stack in minutes!\n",
|
||
"\n",
|
||
"[Llama Stack](https://github.com/meta-llama/llama-stack) is a stateful service with REST APIs to support the seamless transition of AI applications across different environments. You can build and test using a local server first and deploy to a hosted endpoint for production.\n",
|
||
"\n",
|
||
"In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/)\n",
|
||
"as the inference [provider](docs/source/providers/index.md#inference) for a Llama Model.\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "4CV1Q19BDMVw",
|
||
"metadata": {
|
||
"id": "4CV1Q19BDMVw"
|
||
},
|
||
"source": [
|
||
"## Step 1: Install and setup"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "K4AvfUAJZOeS",
|
||
"metadata": {
|
||
"id": "K4AvfUAJZOeS"
|
||
},
|
||
"source": [
|
||
"### 1.1. Install uv and test inference with Ollama\n",
|
||
"\n",
|
||
"We'll install [uv](https://docs.astral.sh/uv/) to setup the Python virtual environment, along with [colab-xterm](https://github.com/InfuseAI/colab-xterm) for running command-line tools, and [Ollama](https://ollama.com/download) as the inference provider."
|
||
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7a2d7b85",
"metadata": {},
"outputs": [],
"source": [
"%pip install uv llama_stack llama-stack-client\n",
"\n",
"## If running on Collab:\n",
|
||
"# !pip install colab-xterm\n",
|
||
"# %load_ext colabxterm\n",
|
||
"\n",
|
||
"!curl https://ollama.ai/install.sh | sh"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "39fa584b",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 1.2. Test inference with Ollama"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "3bf81522",
|
||
"metadata": {},
|
||
"source": [
|
||
"We’ll now launch a terminal and run inference on a Llama model with Ollama to verify that the model is working correctly."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "a7e8e0f1",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"## If running on Colab:\n",
|
||
"# %xterm\n",
|
||
"\n",
|
||
"## To be ran in the terminal:\n",
|
||
"# ollama serve &\n",
|
||
"# ollama run llama3.2:3b --keepalive 60m"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "f3c5f243",
|
||
"metadata": {},
|
||
"source": [
|
||
"If successful, you should see the model respond to a prompt.\n",
|
||
"\n",
|
||
"...\n",
|
||
"```\n",
|
||
">>> hi\n",
|
||
"Hello! How can I assist you today?\n",
|
||
"```"
|
||
]
|
||
},
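{
"cell_type": "markdown",
"id": "ollama-check-md",
"metadata": {},
"source": [
"As an optional sanity check from inside the notebook, the cell below lists the models Ollama is currently serving. This is a minimal sketch that assumes the `ollama serve` process from the previous step is still running on this machine."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ollama-check",
"metadata": {},
"outputs": [],
"source": [
"## Optional: list the models Ollama is serving to confirm llama3.2:3b is available.\n",
"## Assumes `ollama serve` from the previous step is still running locally.\n",
"!ollama list"
]
},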
{
"cell_type": "markdown",
"id": "oDUB7M_qe-Gs",
"metadata": {
"id": "oDUB7M_qe-Gs"
},
"source": [
"## Step 2: Run the Llama Stack server\n",
"\n",
"In this showcase, we will start a Llama Stack server that is running locally."
|
||
]
},
{
"cell_type": "markdown",
"id": "732eadc6",
"metadata": {},
"source": [
"### 2.1. Setup the Llama Stack Server"
|
||
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "J2kGed0R5PSf",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "J2kGed0R5PSf",
"outputId": "2478ea60-8d35-48a1-b011-f233831740c5"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[2mUsing Python 3.12.12 environment at: /opt/homebrew/Caskroom/miniconda/base/envs/test\u001b[0m\n",
"\u001b[2mAudited \u001b[1m52 packages\u001b[0m \u001b[2min 1.56s\u001b[0m\u001b[0m\n",
"\u001b[2mUsing Python 3.12.12 environment at: /opt/homebrew/Caskroom/miniconda/base/envs/test\u001b[0m\n",
"\u001b[2mAudited \u001b[1m3 packages\u001b[0m \u001b[2min 122ms\u001b[0m\u001b[0m\n",
"\u001b[2mUsing Python 3.12.12 environment at: /opt/homebrew/Caskroom/miniconda/base/envs/test\u001b[0m\n",
"\u001b[2mAudited \u001b[1m3 packages\u001b[0m \u001b[2min 197ms\u001b[0m\u001b[0m\n",
"\u001b[2mUsing Python 3.12.12 environment at: /opt/homebrew/Caskroom/miniconda/base/envs/test\u001b[0m\n",
"\u001b[2mAudited \u001b[1m1 package\u001b[0m \u001b[2min 11ms\u001b[0m\u001b[0m\n"
]
}
],
"source": [
"import os\n",
|
||
"import subprocess\n",
|
||
"\n",
|
||
"if \"UV_SYSTEM_PYTHON\" in os.environ:\n",
|
||
" del os.environ[\"UV_SYSTEM_PYTHON\"]\n",
|
||
"\n",
|
||
"# this command installs all the dependencies needed for the llama stack server with the ollama inference provider\n",
|
||
"!uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install\n",
|
||
"\n",
|
||
"def run_llama_stack_server_background():\n",
|
||
" log_file = open(\"llama_stack_server.log\", \"w\")\n",
|
||
" process = subprocess.Popen(\n",
|
||
" f\"OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack run starter\",\n",
|
||
" shell=True,\n",
|
||
" stdout=log_file,\n",
|
||
" stderr=log_file,\n",
|
||
" text=True\n",
|
||
" )\n",
|
||
"\n",
|
||
" print(f\"Starting Llama Stack server with PID: {process.pid}\")\n",
|
||
" return process\n",
|
||
"\n",
|
||
"def wait_for_server_to_start():\n",
|
||
" import requests\n",
|
||
" from requests.exceptions import ConnectionError\n",
|
||
" import time\n",
|
||
"\n",
|
||
" url = \"http://0.0.0.0:8321/v1/health\"\n",
|
||
" max_retries = 30\n",
|
||
" retry_interval = 1\n",
|
||
"\n",
|
||
" print(\"Waiting for server to start\", end=\"\")\n",
|
||
" for _ in range(max_retries):\n",
|
||
" try:\n",
|
||
" response = requests.get(url)\n",
|
||
" if response.status_code == 200:\n",
|
||
" print(\"\\nServer is ready!\")\n",
|
||
" return True\n",
|
||
" except ConnectionError:\n",
|
||
" print(\".\", end=\"\", flush=True)\n",
|
||
" time.sleep(retry_interval)\n",
|
||
"\n",
|
||
" print(\"\\nServer failed to start after\", max_retries * retry_interval, \"seconds\")\n",
|
||
" return False\n",
|
||
"\n",
|
||
"\n",
|
||
"# use this helper if needed to kill the server\n",
|
||
"def kill_llama_stack_server():\n",
|
||
" # Kill any existing llama stack server processes\n",
|
||
" os.system(\"ps aux | grep -v grep | grep llama_stack.core.server.server | awk '{print $2}' | xargs kill -9\")\n"
|
||
]
},
{
"cell_type": "markdown",
"id": "c40e9efd",
"metadata": {},
"source": [
"### 2.2. Start the Llama Stack Server"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f779283d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Starting Llama Stack server with PID: 20778\n",
"Waiting for server to start........\n",
"Server is ready!\n"
]
}
],
"source": [
"server_process = run_llama_stack_server_background()\n",
"assert wait_for_server_to_start()"
]
},
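{
"cell_type": "markdown",
"id": "server-check-md",
"metadata": {},
"source": [
"Before running the demo, you can optionally confirm the server is reachable from the client side. The cell below is a minimal sketch that assumes the server started above is listening on the default port `8321`; it simply lists the models the server exposes."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "server-check",
"metadata": {},
"outputs": [],
"source": [
"from llama_stack_client import LlamaStackClient\n",
"\n",
"# Optional: list the models the freshly started server exposes.\n",
"# Assumes the default Llama Stack port 8321 used in the helper above.\n",
"client = LlamaStackClient(base_url=\"http://0.0.0.0:8321\")\n",
"for m in client.models.list():\n",
"    print(m.model_type, \"-\", m.identifier)"
]
},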
{
"cell_type": "markdown",
"id": "28477c03",
"metadata": {},
"source": [
"## Step 3: Run the demo"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "7da71011",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: GET http://0.0.0.0:8321/v1/models \"HTTP/1.1 200 OK\"\n",
"INFO:httpx:HTTP Request: POST http://0.0.0.0:8321/v1/files \"HTTP/1.1 200 OK\"\n",
"INFO:httpx:HTTP Request: POST http://0.0.0.0:8321/v1/vector_stores \"HTTP/1.1 200 OK\"\n",
"INFO:httpx:HTTP Request: POST http://0.0.0.0:8321/v1/conversations \"HTTP/1.1 200 OK\"\n",
"INFO:httpx:HTTP Request: POST http://0.0.0.0:8321/v1/responses \"HTTP/1.1 200 OK\"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"prompt> How do you do great work?\n",
"🤔 Doing great work involves a combination of skills, habits, and mindsets. Here are some key principles:\n",
"\n",
"1. **Set Clear Goals**: Start with a clear vision of what you want to achieve. Define specific, measurable, achievable, relevant, and time-bound (SMART) goals.\n",
"\n",
"2. **Plan and Prioritize**: Break your goals into smaller, manageable tasks. Prioritize these tasks based on their importance and urgency.\n",
"\n",
"3. **Focus on Quality**: Aim for high-quality outcomes rather than just finishing tasks. Pay attention to detail, and ensure your work meets or exceeds standards.\n",
"\n",
"4. **Stay Organized**: Keep your workspace, both physical and digital, organized to help you stay focused and efficient.\n",
"\n",
"5. **Manage Your Time**: Use time management techniques such as the Pomodoro Technique, time blocking, or the Eisenhower Box to maximize productivity.\n",
"\n",
"6. **Seek Feedback and Learn**: Regularly seek feedback from peers, mentors, or supervisors. Use constructive criticism to improve continuously.\n",
"\n",
"7. **Innovate and Improve**: Look for ways to improve processes or introduce new ideas. Be open to change and willing to adapt.\n",
"\n",
"8. **Stay Motivated and Persistent**: Keep your end goals in mind to stay motivated. Overcome setbacks with resilience and persistence.\n",
"\n",
"9. **Balance and Rest**: Ensure you maintain a healthy work-life balance. Take breaks and manage stress to sustain long-term productivity.\n",
"\n",
"10. **Reflect and Adjust**: Regularly assess your progress and adjust your strategies as needed. Reflect on what works well and what doesn't.\n",
"\n",
"By incorporating these elements, you can consistently produce high-quality work and achieve excellence in your endeavors.\n"
]
}
],
"source": [
"from llama_stack_client import Agent, AgentEventLogger, RAGDocument, LlamaStackClient\n",
|
||
"import requests\n",
|
||
"\n",
|
||
"vector_store_id = \"my_demo_vector_db\"\n",
|
||
"client = LlamaStackClient(base_url=\"http://0.0.0.0:8321\")\n",
|
||
"\n",
|
||
"models = client.models.list()\n",
|
||
"\n",
|
||
"# Select the first ollama and first ollama's embedding model\n",
|
||
"model_id = next(m for m in models if m.model_type == \"llm\" and m.provider_id == \"ollama\").identifier\n",
|
||
"\n",
|
||
"\n",
|
||
"source = \"https://www.paulgraham.com/greatwork.html\"\n",
|
||
"response = requests.get(source)\n",
|
||
"file = client.files.create(\n",
|
||
" file=response.content,\n",
|
||
" purpose='assistants'\n",
|
||
")\n",
|
||
"vector_store = client.vector_stores.create(\n",
|
||
" name=vector_store_id,\n",
|
||
" file_ids=[file.id],\n",
|
||
")\n",
|
||
"\n",
|
||
"agent = Agent(\n",
|
||
" client,\n",
|
||
" model=model_id,\n",
|
||
" instructions=\"You are a helpful assistant\",\n",
|
||
" tools=[\n",
|
||
" {\n",
|
||
" \"type\": \"file_search\",\n",
|
||
" \"vector_store_ids\": [vector_store_id],\n",
|
||
" }\n",
|
||
" ],\n",
|
||
")\n",
|
||
"\n",
|
||
"prompt = \"How do you do great work?\"\n",
|
||
"print(\"prompt>\", prompt)\n",
|
||
"\n",
|
||
"response = agent.create_turn(\n",
|
||
" messages=[{\"role\": \"user\", \"content\": prompt}],\n",
|
||
" session_id=agent.create_session(\"rag_session\"),\n",
|
||
" stream=True,\n",
|
||
")\n",
|
||
"\n",
|
||
"for log in AgentEventLogger().log(response):\n",
|
||
" print(log, end=\"\")"
|
||
]
},
{
"cell_type": "markdown",
"id": "341aaadf",
"metadata": {},
"source": [
"Congratulations! You've successfully built your first RAG application using Llama Stack! 🎉🥳"
]
},
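{
"cell_type": "markdown",
"id": "cleanup-md",
"metadata": {},
"source": [
"When you're done experimenting, you can optionally stop the local server. The cell below is left commented out so it doesn't interrupt a \"Run all\"; it reuses the `kill_llama_stack_server()` helper defined in Step 2.1 (calling `server_process.terminate()` would work as well)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cleanup",
"metadata": {},
"outputs": [],
"source": [
"## Optional cleanup: stop the local Llama Stack server started in Step 2.2.\n",
"# kill_llama_stack_server()\n",
"## Or stop just the process we started above:\n",
"# server_process.terminate()"
]
},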
{
"cell_type": "markdown",
"id": "e88e1185",
"metadata": {},
"source": [
"## Next Steps"
]
},
{
"cell_type": "markdown",
"id": "bcb73600",
"metadata": {},
"source": [
"Now you're ready to dive deeper into Llama Stack!\n",
"- Explore the [Detailed Tutorial](./detailed_tutorial.md).\n",
"- Try the [Getting Started Notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb).\n",
"- Browse more [Notebooks on GitHub](https://github.com/meta-llama/llama-stack/tree/main/docs/notebooks).\n",
"- Learn about Llama Stack [Concepts](../concepts/index.md).\n",
"- Discover how to [Build Llama Stacks](../distributions/index.md).\n",
"- Refer to our [References](../references/index.md) for details on the Llama CLI and Python SDK.\n",
"- Check out the [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repository for example applications and tutorials."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}