This change builds on the existing "Agents vs. OpenAI Responses API" section of "AI Application Examples", explaining how several popular AI frameworks provide some form of OpenAI API compatibility, and how that compatibility allows such applications to be deployed on Llama Stack. This change also:

- introduces a simple LangChain/LangGraph example that runs on Llama Stack via the OpenAI-compatible API
- circles back to the Responses API and introduces a page of external references to examples
- makes it clear that other OpenAI API compatible AI frameworks can be added as the community has time to dive into them
## Example: A multi-node LangGraph Agent Application that registers a simple tool that adds two numbers together
### Setup
#### Activate the model

```bash
ollama run llama3.2:3b-instruct-fp16 --keepalive 60m
```
Note: this blocks the terminal because `ollama run` opens a chat session with the model, so type `/bye` to return to your command prompt. To confirm the model is in fact running, you can run `ollama ps`.
#### Start up Llama Stack

```bash
OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack build --distro starter --image-type venv --run
```
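Once the server is up, you can optionally sanity-check its OpenAI-compatible endpoint before wiring up LangChain. This is a minimal sketch using the standard `openai` Python client, assuming the default Llama Stack port of 8321 and the `/v1/openai/v1` path used later on this page:

```python
from openai import OpenAI

# Point the standard OpenAI client at Llama Stack's OpenAI-compatible endpoint.
# A local Llama Stack server ignores the api_key, but the client requires one.
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

# List the models Llama Stack has registered; the Ollama model started above
# should appear in this list.
print([m.id for m in client.models.list()])
```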
#### Install dependencies

To install LangChain, LangGraph, OpenAI, and their related dependencies, run:

```bash
uv pip install langgraph langchain openai langchain_openai langchain_community
```
### Application details
To run the application, execute the following from the root of your Llama Stack git repository clone:

```bash
python docs/source/building_applications/langgraph-agent-add.py
```

and you should see this in the output:

```
HUMAN: What is 16 plus 9?
AI:
TOOL: 25
```
The sample also adds some debug output that illustrates the use of the OpenAI Chat Completions API, as the `response.response_metadata` field equates to the Chat Completion object.

```
LLM returned Chat Completion object: {'token_usage': {'completion_tokens': 23, 'prompt_tokens': 169, 'total_tokens': 192, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'llama3.2:3b-instruct-fp16', 'system_fingerprint': 'fp_ollama', 'id': 'chatcmpl-51307b80-a1a1-4092-b005-21ea9cde29a0', 'service_tier': None, 'finish_reason': 'tool_calls', 'logprobs': None}
```
This is analogous to the object returned by the Llama Stack Client's `chat.completions.create` call.
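For comparison, here is a hedged sketch of making that call directly, without LangChain, using the standard `openai` client pointed at the same endpoint (the model name and endpoint match the ones used elsewhere on this page):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

# The returned object carries the same Chat Completion fields that LangChain
# surfaces via response.response_metadata (model, usage, finish_reason, ...).
completion = client.chat.completions.create(
    model="ollama/llama3.2:3b-instruct-fp16",
    messages=[{"role": "user", "content": "What is 16 plus 9?"}],
)
print(completion.model, completion.usage, completion.choices[0].finish_reason)
```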
The example application leverages a series of LangGraph and LangChain APIs. The two key ones are:

- `ChatOpenAI` is the primary LangChain OpenAI-compatible chat API. The standard parameters for this API supply the Llama Stack OpenAI provider endpoint, followed by a model registered with Llama Stack.
- `StateGraph` provides the LangGraph API for building the nodes and edges of the graph that define the (potential) steps of the agentic workflow.
Additional LangChain APIs are leveraged in order to:

- register or bind the tool used by the LangGraph agentic workflow
- process any messages generated by the workflow
- supply user and tool prompts
Ultimately, this agentic workflow application performs the simple task of adding numbers together.
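To make the structure concrete before the full listing, here is a condensed sketch of the pattern the sample follows. The node names, the `add` tool definition, and the unconditional agent-to-tools edge are illustrative choices for this sketch, not necessarily the exact contents of `langgraph-agent-add.py`:

```python
from typing import Annotated, TypedDict

from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode


@tool
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b


tools = [add]

# ChatOpenAI pointed at Llama Stack's OpenAI-compatible endpoint, using the
# Ollama model started earlier; bind_tools advertises the add tool to the LLM.
llm = ChatOpenAI(
    model="ollama/llama3.2:3b-instruct-fp16",
    openai_api_key="none",
    openai_api_base="http://localhost:8321/v1/openai/v1",
).bind_tools(tools)


class State(TypedDict):
    # add_messages appends each node's output to the running message list.
    messages: Annotated[list, add_messages]


def call_model(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


# Wire the graph: the agent node calls the LLM, the tools node executes the
# tool call the LLM requested, and the run ends after the tool result.
graph = StateGraph(State)
graph.add_node("agent", call_model)
graph.add_node("tools", ToolNode(tools))
graph.add_edge(START, "agent")
graph.add_edge("agent", "tools")
graph.add_edge("tools", END)
app = graph.compile()

result = app.invoke({"messages": [HumanMessage(content="What is 16 plus 9?")]})
for message in result["messages"]:
    print(f"{message.type.upper()}: {message.content}")
```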
The full sample follows:

```{literalinclude} langgraph-agent-add.py
:language: python
```
### Minor application tweak - the OpenAI Responses API
It is very easy to switch from the default OpenAI Chat Completions API to the newer OpenAI Responses API. Simply modify the `ChatOpenAI` constructor call with the additional `use_responses_api=True` flag:
```python
llm = ChatOpenAI(
    model="ollama/llama3.2:3b-instruct-fp16",
    openai_api_key="none",
    openai_api_base="http://localhost:8321/v1/openai/v1",
    use_responses_api=True,
).bind_tools(tools)
```
For convenience, here is the entire sample with that change to the constructor:
:language: python
If you are examining the Llama Stack server logs while running the application, you'll see use of the `/v1/openai/v1/responses` REST API instead of the `/v1/openai/v1/chat/completions` REST API.
In the sample application's output, the debug statement displaying the response from the LLM will now illustrate that, instead of a Chat Completion object, the LLM returns a Response object from the Responses API.

```
LLM returned Responses object: {'id': 'resp-9dbaa1e1-7ba4-45cd-978e-e84448aee278', 'created_at': 1756140326.0, 'model': 'ollama/llama3.2:3b-instruct-fp16', 'object': 'response', 'status': 'completed', 'model_name': 'ollama/llama3.2:3b-instruct-fp16'}
```
This is analogous to the object returned by the Llama Stack Client's `responses.create` call.
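Again for comparison, here is a hedged sketch of the equivalent direct call using the standard `openai` client; the parameters shown are the basic ones, and the exact Llama Stack Client call may differ slightly:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

# The Responses API takes a flat input rather than a messages list.
response = client.responses.create(
    model="ollama/llama3.2:3b-instruct-fp16",
    input="What is 16 plus 9?",
)
print(response.id, response.status, response.model)
```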
The Responses API is considered the next generation of OpenAI's core agentic API primitive. For a detailed comparison with, and migration suggestions from, the Chat Completions API, visit the OpenAI documentation.
### Comparing Llama Stack Agents and LangGraph Agents
Expressing the agent workflow as a LangGraph `StateGraph` is an alternative approach to the Llama Stack agent execution loop as discussed in this prior section.
To summarize some of the key takeaways detailed earlier:
- LangGraph does not offer the easy integration with Llama Stack's API providers, such as the shields / safety mechanisms, that Llama Stack Agents benefit from.
- Llama Stack agents provide a simpler execution model: a predefined sequence of steps incorporated in a loop is the standard execution pattern (similar to LangChain), where multiple LLM calls and tool invocations are possible.
- LangGraph execution order is more flexible, with edges allowing for conditional branching along with loops, as sketched below. Each node is an LLM call or tool call. Also, a mutable state is passed between nodes, enabling complex, multi-turn interactions and adaptive behavior, i.e. workflow orchestration.
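To make that flexibility concrete, here is a hedged sketch of how the earlier example graph could be rewired with a conditional edge and a loop, using LangGraph's prebuilt `tools_condition` router and reusing `State`, `call_model`, and `tools` from the sketch above:

```python
from langgraph.graph import StateGraph, START
from langgraph.prebuilt import ToolNode, tools_condition

# Reuses State, call_model, and tools from the earlier sketch.
graph = StateGraph(State)
graph.add_node("agent", call_model)
graph.add_node("tools", ToolNode(tools))
graph.add_edge(START, "agent")

# Conditional branch: if the LLM requested a tool call, route to the "tools"
# node; otherwise the run ends.
graph.add_conditional_edges("agent", tools_condition)

# Loop: feed the tool result back to the agent so it can answer in its own
# words or request further tool calls.
graph.add_edge("tools", "agent")
app = graph.compile()
```

With this wiring, the number of LLM calls and tool invocations is decided at runtime by the model's own tool-call decisions rather than by a fixed sequence of edges.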