llama-stack-mirror/scripts/openapi_generator/app.py
Sébastien Han eb3cab1eec
feat: Implement FastAPI router system
This commit introduces a new FastAPI router-based system for defining
API endpoints, enabling a migration path away from the legacy @webmethod
decorator system. The implementation includes the router
infrastructure, the migration of the Batches API as a first
example, and updates to the server, OpenAPI generation, and
inspection systems to support both routing approaches.

The router infrastructure consists of a router registry system
that allows APIs to register FastAPI router factories, which
are then automatically discovered and included in the server
application. Standard error responses are centralized in
router_utils to ensure consistent OpenAPI specification
generation with proper $ref references to component responses.
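
As a rough illustration (register_router and the factory
signature are assumptions, not the final API), registering a
router factory could look like:

    from fastapi import APIRouter
    from llama_stack_api import Api
    # helper name assumed; the registry module itself is real
    from llama_stack.core.server.router_registry import register_router

    def build_batches_router(impl_getter) -> APIRouter:
        router = APIRouter(tags=["Batches"])
        # endpoint definitions are attached to `router` here
        return router

    register_router(Api.batches, build_batches_router)

The server and the OpenAPI generator then call
create_router(api, impl_getter) for every API that has_router()
reports as registered.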

The Batches API has been migrated to demonstrate the new
pattern: the protocol definition and models remain in
llama_stack_api/batches, while the FastAPI router implementation
lives in llama_stack/core/server/routers/batches. This follows
the established split where API contracts are defined in
llama_stack_api and server routing logic lives in
llama_stack/core/server.
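
In code terms the split looks roughly like this (the
contract-side class name is illustrative):

    # API contract: protocol + Pydantic models, no FastAPI imports
    from llama_stack_api.batches import Batches

    # Server routing: the FastAPI router module that registers itself
    from llama_stack.core.server.routers import batches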

The server now checks for registered routers before falling
back to the legacy webmethod-based route discovery, ensuring
backward compatibility during the migration period. The
OpenAPI generator has been updated to handle both router-based
and webmethod-based routes, correctly extracting metadata from
FastAPI route decorators and Pydantic Field descriptions. The
inspect endpoint now includes routes from both systems, with
proper filtering for deprecated routes and API levels.
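
Simplified, the dispatch on both sides now reads like the sketch
below (the legacy helper name is illustrative; the generator and
the server each use their own endpoint-creation code):

    if has_router(api):
        app.include_router(create_router(api, impl_getter))
    else:
        for route, webmethod in get_all_api_routes()[api]:
            create_legacy_endpoint(app, route, webmethod, api)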

Response descriptions are now explicitly defined in router decorators,
ensuring the generated OpenAPI specification matches the
previous format. Error responses use $ref references to
component responses (BadRequest400, TooManyRequests429, etc.)
as required by the specification. This will also let us remove a
significant amount of boilerplate from the generator once the
migration is complete.
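
For example, a migrated route can declare its responses roughly
like this (path, description, and handler are illustrative; the
shared error entries come from router_utils):

    from fastapi import APIRouter

    router = APIRouter()

    @router.post(
        "/v1/batches",
        responses={
            200: {"description": "The created batch object."},
            400: {"$ref": "#/components/responses/BadRequest400"},
            429: {"$ref": "#/components/responses/TooManyRequests429"},
        },
    )
    async def create_batch() -> dict:
        ...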

This implementation provides a foundation for incrementally migrating
other APIs to the router system while maintaining full backward
compatibility with existing webmethod-based APIs.

Closes: https://github.com/llamastack/llama-stack/issues/4188
Signed-off-by: Sébastien Han <seb@redhat.com>
2025-11-19 17:07:24 +01:00

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
FastAPI app creation for OpenAPI generation.
"""

import inspect
from typing import Any

from fastapi import FastAPI

from llama_stack.core.resolver import api_protocol_map
from llama_stack_api import Api

from .state import _protocol_methods_cache


def _get_protocol_method(api: Api, method_name: str) -> Any | None:
    """
    Get a protocol method function by API and method name.

    Uses caching to avoid repeated lookups.

    Args:
        api: The API enum
        method_name: The method name (function name)

    Returns:
        The function object, or None if not found
    """
    global _protocol_methods_cache

    if _protocol_methods_cache is None:
        _protocol_methods_cache = {}
        protocols = api_protocol_map()

        from llama_stack_api.tools import SpecialToolGroup, ToolRuntime

        toolgroup_protocols = {
            SpecialToolGroup.rag_tool: ToolRuntime,
        }

        for api_key, protocol in protocols.items():
            method_map: dict[str, Any] = {}
            protocol_methods = inspect.getmembers(protocol, predicate=inspect.isfunction)
            for name, method in protocol_methods:
                method_map[name] = method

            # Handle tool_runtime special case
            if api_key == Api.tool_runtime:
                for tool_group, sub_protocol in toolgroup_protocols.items():
                    sub_protocol_methods = inspect.getmembers(sub_protocol, predicate=inspect.isfunction)
                    for name, method in sub_protocol_methods:
                        if hasattr(method, "__webmethod__"):
                            method_map[f"{tool_group.value}.{name}"] = method

            _protocol_methods_cache[api_key] = method_map

    return _protocol_methods_cache.get(api, {}).get(method_name)


def create_llama_stack_app() -> FastAPI:
    """
    Create a FastAPI app that represents the Llama Stack API.

    This uses both router-based routes (for migrated APIs) and the existing
    route discovery system for legacy webmethod-based routes.
    """
    app = FastAPI(
        title="Llama Stack API",
        description="A comprehensive API for building and deploying AI applications",
        version="1.0.0",
        servers=[
            {"url": "http://any-hosted-llama-stack.com"},
        ],
    )

    # Import batches router to trigger router registration
    try:
        from llama_stack.core.server.routers import batches  # noqa: F401
    except ImportError:
        pass

    # Include routers for APIs that have them registered
    from llama_stack.core.server.router_registry import create_router, has_router

    def dummy_impl_getter(api: Api) -> Any:
        """Dummy implementation getter for OpenAPI generation."""
        return None

    # Get all APIs that might have routers
    protocols = api_protocol_map()
    for api in protocols.keys():
        if has_router(api):
            router = create_router(api, dummy_impl_getter)
            if router:
                app.include_router(router)

    # Get all API routes (for legacy webmethod-based routes)
    from llama_stack.core.server.routes import get_all_api_routes

    api_routes = get_all_api_routes()

    # Create FastAPI routes from the discovered routes (skip APIs that have routers)
    from . import endpoints

    for api, routes in api_routes.items():
        # Skip APIs that have routers - they're already included above
        if has_router(api):
            continue
        for route, webmethod in routes:
            # Convert the route to a FastAPI endpoint
            endpoints._create_fastapi_endpoint(app, route, webmethod, api)

    return app
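

# Minimal usage sketch (illustrative only, not part of the generator pipeline):
# assuming the package is importable, e.g. via `python -m scripts.openapi_generator.app`,
# this dumps the spec assembled from both router-based and webmethod-based routes.
if __name__ == "__main__":
    import json

    print(json.dumps(create_llama_stack_app().openapi(), indent=2))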