llama-stack-mirror/docs/zero_to_hero_guide/00_Inference101.ipynb

{
"cells": [
{
"cell_type": "markdown",
"id": "c1e7571c",
"metadata": {},
"source": [
"# Llama Stack Inference Guide\n",
"\n",
"This document provides instructions on how to use Llama Stack's `chat_completion` function for generating text using the `Llama3.2-11B-Vision-Instruct` model. \n",
"\n",
"Before you begin, please ensure Llama Stack is installed and set up by following the [Getting Started Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html).\n",
"\n",
"\n",
"### Table of Contents\n",
"1. [Quickstart](#quickstart)\n",
"2. [Building Effective Prompts](#building-effective-prompts)\n",
"3. [Conversation Loop](#conversation-loop)\n",
"4. [Conversation History](#conversation-history)\n",
"5. [Streaming Responses](#streaming-responses)\n"
]
},
{
"cell_type": "markdown",
"id": "414301dc",
"metadata": {},
"source": [
"## Quickstart\n",
"\n",
"This section walks through each step to set up and make a simple text generation request.\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "25b97dfe",
"metadata": {},
"source": [
"### 0. Configuration\n",
"Set up your connection parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38a39e44",
"metadata": {},
"outputs": [],
"source": [
"HOST = \"localhost\" # Replace with your host\n",
"PORT = 5001 # Replace with your port"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1d097ab",
"metadata": {},
"outputs": [],
"source": [
"### 1. Set Up the Client\n",
"\n",
"Begin by importing the necessary components from Llama Stacks client library:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7a573752",
"metadata": {},
"outputs": [],
"source": [
"from llama_stack_client import LlamaStackClient\n",
"from llama_stack_client.types import SystemMessage, UserMessage\n",
"\n",
"client = LlamaStackClient(base_url='http://{HOST}:{PORT}')"
]
},
{
"cell_type": "markdown",
"id": "86366383",
"metadata": {},
"source": [
"### 2. Create a Chat Completion Request\n",
"\n",
"Use the `chat_completion` function to define the conversation context. Each message you include should have a specific role and content:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "77c29dba",
"metadata": {},
"outputs": [],
"source": [
"response = client.inference.chat_completion(\n",
" messages=[\n",
" SystemMessage(content='You are a friendly assistant.', role='system'),\n",
" UserMessage(content='Write a two-sentence poem about llama.', role='user')\n",
" ],\n",
" model='Llama3.2-11B-Vision-Instruct',\n",
")\n",
"\n",
"print(response.completion_message.content)"
]
},
{
"cell_type": "markdown",
"id": "e5f16949",
"metadata": {},
"source": [
"## Building Effective Prompts\n",
"\n",
"Effective prompt creation (often called 'prompt engineering') is essential for quality responses. Here are best practices for structuring your prompts to get the most out of the Llama Stack model:\n",
"\n",
"1. **System Messages**: Use `SystemMessage` to set the model's behavior. This is similar to providing top-level instructions for tone, format, or specific behavior.\n",
" - **Example**: `SystemMessage(content='You are a friendly assistant that explains complex topics simply.')`\n",
"2. **User Messages**: Define the task or question you want to ask the model with a `UserMessage`. The clearer and more direct you are, the better the response.\n",
" - **Example**: `UserMessage(content='Explain recursion in programming in simple terms.')`\n",
"\n",
"### Sample Prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c6812da",
"metadata": {},
"outputs": [],
"source": [
"response = client.inference.chat_completion(\n",
" messages=[\n",
" SystemMessage(content='You are shakespeare.', role='system'),\n",
" UserMessage(content='Write a two-sentence poem about llama.', role='user')\n",
" ],\n",
" model='Llama3.2-11B-Vision-Instruct',\n",
")\n",
"\n",
"print(response.completion_message.content)"
]
},
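{
"cell_type": "markdown",
"id": "a4b1c2d3",
"metadata": {},
"source": [
"The pattern above can be wrapped in a small helper so you can try different system prompts without repeating the request boilerplate. This is a minimal sketch that reuses the `client`, `SystemMessage`, and `UserMessage` objects already defined in this notebook; the helper name `chat` is just an illustrative choice."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b5c2d3e4",
"metadata": {},
"outputs": [],
"source": [
"def chat(system_prompt: str, user_prompt: str) -> str:\n",
"    # Reuses the client created earlier; returns only the generated text.\n",
"    response = client.inference.chat_completion(\n",
"        messages=[\n",
"            SystemMessage(content=system_prompt, role='system'),\n",
"            UserMessage(content=user_prompt, role='user'),\n",
"        ],\n",
"        model='Llama3.2-11B-Vision-Instruct',\n",
"    )\n",
"    return response.completion_message.content\n",
"\n",
"print(chat('You are a concise technical writer.', 'Explain recursion in one sentence.'))"
]
},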
{
"cell_type": "markdown",
"id": "c8690ef0",
"metadata": {},
"source": [
"## Conversation Loop\n",
"\n",
"To create a continuous conversation loop, where users can input multiple messages in a session, use the following structure. This example runs an asynchronous loop, ending when the user types 'exit,' 'quit,' or 'bye.'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "02211625",
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"from llama_stack_client import LlamaStackClient\n",
"from llama_stack_client.types import UserMessage\n",
"from termcolor import cprint\n",
"\n",
"client = LlamaStackClient(base_url='http://{HOST}:{PORT}')\n",
"\n",
"async def chat_loop():\n",
" while True:\n",
" user_input = input('User> ')\n",
" if user_input.lower() in ['exit', 'quit', 'bye']:\n",
" cprint('Ending conversation. Goodbye!', 'yellow')\n",
" break\n",
"\n",
" message = UserMessage(content=user_input, role='user')\n",
" response = client.inference.chat_completion(\n",
" messages=[message],\n",
" model='Llama3.2-11B-Vision-Instruct',\n",
" )\n",
" cprint(f'> Response: {response.completion_message.content}', 'cyan')\n",
"\n",
"asyncio.run(chat_loop())"
]
},
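{
"cell_type": "markdown",
"id": "c6d3e4f5",
"metadata": {},
"source": [
"Note: `asyncio.run()` raises a `RuntimeError` when an event loop is already running, which is the case inside Jupyter. If you are executing this notebook interactively, you can rely on Jupyter's support for top-level `await` instead, as sketched below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7e4f5a6",
"metadata": {},
"outputs": [],
"source": [
"# Inside Jupyter, await the coroutine directly instead of calling asyncio.run()\n",
"await chat_loop()"
]
},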
{
"cell_type": "markdown",
"id": "8cf0d555",
"metadata": {},
"source": [
"## Conversation History\n",
"\n",
"Maintaining a conversation history allows the model to retain context from previous interactions. Use a list to accumulate messages, enabling continuity throughout the chat session."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9496f75c",
"metadata": {},
"outputs": [],
"source": [
"async def chat_loop():\n",
" conversation_history = []\n",
" while True:\n",
" user_input = input('User> ')\n",
" if user_input.lower() in ['exit', 'quit', 'bye']:\n",
" cprint('Ending conversation. Goodbye!', 'yellow')\n",
" break\n",
"\n",
" user_message = UserMessage(content=user_input, role='user')\n",
" conversation_history.append(user_message)\n",
"\n",
" response = client.inference.chat_completion(\n",
" messages=conversation_history,\n",
" model='Llama3.2-11B-Vision-Instruct',\n",
" )\n",
" cprint(f'> Response: {response.completion_message.content}', 'cyan')\n",
"\n",
" assistant_message = UserMessage(content=response.completion_message.content, role='user')\n",
" conversation_history.append(assistant_message)\n",
"\n",
"asyncio.run(chat_loop())"
]
},
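{
"cell_type": "markdown",
"id": "e8f5a6b7",
"metadata": {},
"source": [
"If you want the model to follow the same instructions on every turn, you can seed the history with a `SystemMessage` before the loop starts. This is a minimal variation of the cell above, assuming the imports and `client` from earlier in the notebook; the function name and system prompt are illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9a6b7c8",
"metadata": {},
"outputs": [],
"source": [
"async def chat_loop_with_system_prompt():\n",
"    # Seed the history so the instruction applies to every turn.\n",
"    conversation_history = [\n",
"        SystemMessage(content='You are a friendly assistant that answers in one short paragraph.', role='system')\n",
"    ]\n",
"    while True:\n",
"        user_input = input('User> ')\n",
"        if user_input.lower() in ['exit', 'quit', 'bye']:\n",
"            cprint('Ending conversation. Goodbye!', 'yellow')\n",
"            break\n",
"\n",
"        conversation_history.append(UserMessage(content=user_input, role='user'))\n",
"        response = client.inference.chat_completion(\n",
"            messages=conversation_history,\n",
"            model='Llama3.2-11B-Vision-Instruct',\n",
"        )\n",
"        cprint(f'> Response: {response.completion_message.content}', 'cyan')\n",
"        conversation_history.append(response.completion_message)\n",
"\n",
"# In Jupyter you can await directly; in a script use asyncio.run(...)\n",
"await chat_loop_with_system_prompt()"
]
},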
{
"cell_type": "markdown",
"id": "03fcf5e0",
"metadata": {},
"source": [
"## Streaming Responses\n",
"\n",
"Llama Stack offers a `stream` parameter in the `chat_completion` function, which allows partial responses to be returned progressively as they are generated. This can enhance user experience by providing immediate feedback without waiting for the entire response to be processed.\n",
"\n",
"### Example: Streaming Responses"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d119026e",
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"from llama_stack_client import LlamaStackClient\n",
"from llama_stack_client.lib.inference.event_logger import EventLogger\n",
"from llama_stack_client.types import UserMessage\n",
"from termcolor import cprint\n",
"\n",
"async def run_main(stream: bool = True):\n",
" client = LlamaStackClient(base_url='http://{HOST}:{PORT}')\n",
"\n",
" message = UserMessage(\n",
" content='hello world, write me a 2 sentence poem about the moon', role='user'\n",
" )\n",
" print(f'User>{message.content}', 'green')\n",
"\n",
" response = client.inference.chat_completion(\n",
" messages=[message],\n",
" model='Llama3.2-11B-Vision-Instruct',\n",
" stream=stream,\n",
" )\n",
"\n",
" if not stream:\n",
" cprint(f'> Response: {response}', 'cyan')\n",
" else:\n",
" async for log in EventLogger().log(response):\n",
" log.print()\n",
"\n",
" models_response = client.models.list()\n",
" print(models_response)\n",
"\n",
"if __name__ == '__main__':\n",
" asyncio.run(run_main())"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}