llama-stack-mirror/src/llama_stack_api
Sébastien Han 97f535c4f1
feat(openapi): switch to fastapi-based generator (#3944)
# What does this PR do?
This replaces the legacy "pyopenapi + strong_typing" pipeline with a
FastAPI-backed generator that has an explicit schema registry inside
`llama_stack_api`. The key changes:

1. **New generator architecture.** FastAPI now builds the OpenAPI schema
directly from the real routes, while helper modules
(`schema_collection`, `endpoints`, `schema_transforms`, etc.)
post-process the result. The old pyopenapi stack and its strong_typing
helpers are removed entirely, so we no longer rely on fragile AST
analysis or top-level import side effects (see the first sketch below).

2. **Schema registry in `llama_stack_api`.** `schema_utils.py` keeps a
`SchemaInfo` record for every type decorated with `@json_schema_type`, every
explicit `register_schema` call, and every dynamically created request model.
The OpenAPI generator and other tooling query this registry instead of
scanning the package tree, producing deterministic names (e.g.,
`{MethodName}Request`), capturing all optional/nullable fields, and making
schema discovery testable. A new unit test covers the registry behavior (see
the second sketch below).

3. **Regenerated specs + CI alignment.** All docs/Stainless specs are
regenerated from the new pipeline, so optional/nullable fields now match
reality (expect the API Conformance workflow to report breaking
changes—this PR establishes the new baseline). The workflow itself is
back to the stock oasdiff invocation so future regressions surface
normally.

*Conformance will be RED on this PR; we choose to accept the
deviations.*
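
Two illustrative sketches of changes 1 and 2 follow. Both are assumptions for
illustration only: the route, models, field names, and signatures below are
invented, and the real code lives in the `scripts.openapi_generator` modules
and `schema_utils.py`.

First, the FastAPI-based generation flow (change 1):

# Sketch only: the /v1/health route and HealthResponse model are invented for
# this example; the real generator is in scripts/openapi_generator
# (schema_collection, endpoints, schema_transforms).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class HealthResponse(BaseModel):
    status: str


@app.get("/v1/health", response_model=HealthResponse)
async def health() -> HealthResponse:
    return HealthResponse(status="OK")


# FastAPI builds the OpenAPI document directly from the registered routes;
# post-processing passes then adjust schema names, nullability, etc. on this dict.
spec = app.openapi()
print(sorted(spec["paths"].keys()))

Second, a rough shape for the schema registry (change 2):

# Assumed shape for illustration; the actual SchemaInfo and register_schema in
# llama_stack_api/schema_utils.py may differ in fields and signature.
from dataclasses import dataclass


@dataclass
class SchemaInfo:
    name: str                       # deterministic name, e.g. "{MethodName}Request"
    cls: type                       # the registered model / annotated type
    is_request_model: bool = False  # True for dynamically built request models


_REGISTRY: dict[str, SchemaInfo] = {}


def register_schema(cls: type, name: str | None = None, is_request_model: bool = False) -> type:
    info = SchemaInfo(name=name or cls.__name__, cls=cls, is_request_model=is_request_model)
    _REGISTRY[info.name] = info
    return cls


def registered_schemas() -> list[SchemaInfo]:
    # The generator iterates this registry instead of scanning the package tree.
    return sorted(_REGISTRY.values(), key=lambda s: s.name)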

## Test Plan
- `uv run pytest tests/unit/server/test_schema_registry.py`
- `uv run python -m scripts.openapi_generator.main docs/static`

---------

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-11-14 15:53:53 -08:00
common fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
strong_typing feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
__init__.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
agents.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
batches.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
benchmarks.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
conversations.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
datasetio.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
datasets.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
datatypes.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
eval.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
files.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
inference.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
inspect.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
models.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
openai_responses.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
post_training.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
prompts.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
providers.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
py.typed fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
pyproject.toml fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
rag_tool.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
README.md fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
resource.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
safety.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
schema_utils.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
scoring.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
scoring_functions.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
shields.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
tools.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
uv.lock fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
vector_io.py feat(openapi): switch to fastapi-based generator (#3944) 2025-11-14 15:53:53 -08:00
vector_stores.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
version.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00

llama-stack-api

API and Provider specifications for Llama Stack - a lightweight package with protocol definitions and provider specs.

Overview

llama-stack-api is a minimal-dependency package that contains:

  • API Protocol Definitions: Type-safe protocol definitions for all Llama Stack APIs (inference, agents, safety, etc.)
  • Provider Specifications: Provider spec definitions for building custom providers
  • Data Types: Shared data types and models used across the Llama Stack ecosystem
  • Type Utilities: Strong typing utilities and schema validation (sketched below)
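
For example, the type utilities can be used to mark a model for schema export and registration. This is an illustrative sketch only: MyToolCall is an invented model, and the exact decorator behavior is defined in llama_stack_api/schema_utils.py.

# Illustrative sketch: MyToolCall is invented; json_schema_type is the
# decorator exposed by llama_stack_api.schema_utils (check that module for
# its exact behavior and any additional registration helpers).
from pydantic import BaseModel

from llama_stack_api.schema_utils import json_schema_type


@json_schema_type
class MyToolCall(BaseModel):
    tool_name: str
    arguments: dict[str, str]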

What This Package Does NOT Include

  • Server implementation (see llama-stack package)
  • Provider implementations (see llama-stack package)
  • CLI tools (see llama-stack package)
  • Runtime orchestration (see llama-stack package)

Use Cases

This package is designed for:

  1. Third-party Provider Developers: Build custom providers without depending on the full Llama Stack server
  2. Client Library Authors: Use type definitions without server dependencies
  3. Documentation Generation: Generate API docs from protocol definitions
  4. Type Checking: Validate implementations against the official specs

Installation

pip install llama-stack-api

Or with uv:

uv pip install llama-stack-api

Dependencies

Minimal dependencies:

  • pydantic>=2.11.9 - For data validation and serialization
  • jsonschema - For JSON schema utilities

Versioning

This package follows semantic versioning independently of the main llama-stack package:

  • Patch versions (0.1.x): Documentation, internal improvements
  • Minor versions (0.x.0): New APIs, backward-compatible changes
  • Major versions (x.0.0): Breaking changes to existing APIs

Current version: 0.4.0.dev0

Usage Example

from llama_stack_api.inference import Inference, ChatCompletionRequest
from llama_stack_api.providers.datatypes import ProviderSpec, InlineProviderSpec
from llama_stack_api.datatypes import Api


# Use protocol definitions for type checking
class MyInferenceProvider(Inference):
    async def chat_completion(self, request: ChatCompletionRequest):
        # Your implementation
        pass


# Define provider specifications
my_provider_spec = InlineProviderSpec(
    api=Api.inference,
    provider_type="inline::my-provider",
    pip_packages=["my-dependencies"],
    module="my_package.providers.inference",
    config_class="my_package.providers.inference.MyConfig",
)

Relationship to llama-stack

The main llama-stack package depends on llama-stack-api and provides:

  • Full server implementation
  • Built-in provider implementations
  • CLI tools for running and managing stacks
  • Runtime provider resolution and orchestration

Contributing

See the main Llama Stack repository for contribution guidelines.

License

MIT License - see LICENSE file for details.