llama-stack-api

API and provider specifications for Llama Stack: a lightweight package containing protocol definitions, provider specs, and shared data types.

Overview

llama-stack-api is a minimal-dependency package that contains:

  • API Protocol Definitions: Type-safe protocol definitions for all Llama Stack APIs (inference, agents, safety, etc.)
  • Provider Specifications: Provider spec definitions for building custom providers
  • Data Types: Shared data types and models used across the Llama Stack ecosystem (see the sketch after this list)
  • Type Utilities: Strong typing utilities and schema validation
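
As a quick illustration of the surface this package exposes, the sketch below iterates the Api type imported in the Usage Example further down; it assumes Api is an enum, which its use as Api.inference in that example suggests but which is not stated here:

from llama_stack_api.datatypes import Api

# Each Api member names one API surface (inference, agents, safety, ...).
for api in Api:
    print(api.value)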

What This Package Does NOT Include

  • Server implementation (see llama-stack package)
  • Provider implementations (see llama-stack package)
  • CLI tools (see llama-stack package)
  • Runtime orchestration (see llama-stack package)

Use Cases

This package is designed for:

  1. Third-party Provider Developers: Build custom providers without depending on the full Llama Stack server
  2. Client Library Authors: Use type definitions without server dependencies
  3. Documentation Generation: Generate API docs from protocol definitions (see the sketch after this list)
  4. Type Checking: Validate implementations against the official specs
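
A rough sketch of use case 3, reusing the Inference protocol import shown in the Usage Example below; the point is only that protocol classes can be introspected like any other Python class:

import inspect

from llama_stack_api.inference import Inference

# Print every public method declared on the protocol together with its signature.
for name, member in inspect.getmembers(Inference, predicate=inspect.isfunction):
    if not name.startswith("_"):
        print(f"{name}{inspect.signature(member)}")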

Installation

pip install llama-stack-api

Or with uv:

uv pip install llama-stack-api

Dependencies

Minimal dependencies:

  • pydantic>=2.11.9 - For data validation and serialization
  • jsonschema - For JSON schema utilities

Versioning

This package follows semantic versioning independently of the main llama-stack package:

  • Patch versions (0.1.x): Documentation, internal improvements
  • Minor versions (0.x.0): New APIs, backward-compatible changes
  • Major versions (x.0.0): Breaking changes to existing APIs

Current version: 0.4.0.dev0
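
For example, a downstream package that follows this policy and wants backward-compatible additions but no breaking changes might constrain the dependency along these lines (the exact bounds are a downstream choice):

pip install "llama-stack-api>=0.4.0,<1.0"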

Usage Example

from llama_stack_api.inference import Inference, ChatCompletionRequest
from llama_stack_api.providers.datatypes import ProviderSpec, InlineProviderSpec
from llama_stack_api.datatypes import Api


# Use protocol definitions for type checking
class MyInferenceProvider(Inference):
    async def chat_completion(self, request: ChatCompletionRequest):
        # Your implementation
        pass


# Define provider specifications
my_provider_spec = InlineProviderSpec(
    api=Api.inference,
    provider_type="inline::my-provider",
    pip_packages=["my-dependencies"],
    module="my_package.providers.inference",
    config_class="my_package.providers.inference.MyConfig",
)
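
Because module and config_class are dotted strings, a runtime can resolve the provider lazily. The snippet below is only an illustration of that idea; load_config_class is a hypothetical helper, not the actual llama-stack loader:

import importlib


def load_config_class(spec):
    # Split "my_package.providers.inference.MyConfig" into a module path and a
    # class name, import the module, and return the class object.
    module_path, _, class_name = spec.config_class.rpartition(".")
    return getattr(importlib.import_module(module_path), class_name)


config_cls = load_config_class(my_provider_spec)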

Relationship to llama-stack

The main llama-stack package depends on llama-stack-api and provides:

  • Full server implementation
  • Built-in provider implementations
  • CLI tools for running and managing stacks
  • Runtime provider resolution and orchestration

Contributing

See the main Llama Stack repository for contribution guidelines.

License

MIT License - see LICENSE file for details.