docs: add an "AI frameworks with common OpenAI API compatibility" section to AI Application Examples

This change builds on the existing "Agents vs. OpenAI Responses API" section of "AI Application Examples" and explains how several popular AI frameworks provide some form of OpenAI API compatibility, and how that compatibility allows such applications to be deployed on Llama Stack.
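
To illustrate the idea (this sketch is not part of the docs added by this change), a stock OpenAI client can be pointed at a Llama Stack server's OpenAI-compatible endpoint. The base URL, port, and model id below are assumptions and will vary by deployment:

```python
# Minimal sketch: a plain OpenAI client talking to Llama Stack's
# OpenAI-compatible endpoint. The base_url, port, and model id are
# assumptions -- adjust them to your own Llama Stack deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # assumed Llama Stack endpoint
    api_key="none",                                 # no real OpenAI key needed
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",  # assumed model id
    messages=[{"role": "user", "content": "Say hello from Llama Stack"}],
)
print(resp.choices[0].message.content)
```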

This change also:
- introduces a simple LangChain/LangGraph example that runs on Llama Stack via its OpenAI-compatible API (a minimal sketch appears after this list)
- circles back to the Responses API and introduces a page of external references to examples
- makes it clear that other OpenAI API compatible AI frameworks can be added as the community has time to dive into them.
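
For context, a minimal LangChain/LangGraph sketch along the lines of the example added here might look like the following; the endpoint URL and model id are assumptions, and the actual example in the docs may differ:

```python
# Minimal sketch of a LangChain/LangGraph app running on Llama Stack via its
# OpenAI-compatible API. The endpoint URL and model id are assumptions; see
# the docs page added by this change for the actual example.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(
    model="meta-llama/Llama-3.2-3B-Instruct",       # assumed model id
    base_url="http://localhost:8321/v1/openai/v1",  # assumed Llama Stack endpoint
    api_key="none",                                 # no real OpenAI key needed
)

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

# A one-tool ReAct-style agent is enough to show the wiring.
agent = create_react_agent(llm, tools=[add])

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is 2 + 3? Use the add tool."}]}
)
print(result["messages"][-1].content)
```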
gabemontero 2025-08-24 20:54:23 -04:00
parent 7394828c7a
commit 20fd5ff54c
7 changed files with 342 additions and 0 deletions

@@ -12,6 +12,7 @@ Here are some key topics that will help you build effective agents:
- **[Agent](agent)**: Understand the components and design patterns of the Llama Stack agent framework.
- **[Agent Execution Loop](agent_execution_loop)**: Understand how agents process information, make decisions, and execute actions in a continuous loop.
- **[Agents vs Responses API](responses_vs_agents)**: Learn the differences between the Agents API and Responses API, and when to use each one.
- **[OpenAI API](more_on_openai_compatibility)**: Learn how Llama Stack's OpenAI API compatibility also allows other AI frameworks to be used on the platform.
- **[Tools](tools)**: Extend your agents' capabilities by integrating with external tools and APIs.
- **[Evals](evals)**: Evaluate your agents' effectiveness and identify areas for improvement.
- **[Telemetry](telemetry)**: Monitor and analyze your agents' performance and behavior.
@@ -25,6 +26,7 @@ rag
agent
agent_execution_loop
responses_vs_agents
more_on_openai_compatibility
tools
evals
telemetry