llama-stack-api

A lightweight package containing the API protocol definitions, provider specifications, and shared data types for Llama Stack.

Overview

llama-stack-api is a minimal-dependency package that contains:

  • API Protocol Definitions: Type-safe protocol definitions for all Llama Stack APIs (inference, agents, safety, etc.)
  • Provider Specifications: Provider spec definitions for building custom providers
  • Data Types: Shared data types and models used across the Llama Stack ecosystem
  • Type Utilities: Strong typing utilities and schema validation

What This Package Does NOT Include

  • Server implementation (see llama-stack package)
  • Provider implementations (see llama-stack package)
  • CLI tools (see llama-stack package)
  • Runtime orchestration (see llama-stack package)

Use Cases

This package is designed for:

  1. Third-party Provider Developers: Build custom providers without depending on the full Llama Stack server
  2. Client Library Authors: Use type definitions without server dependencies
  3. Documentation Generation: Generate API docs from protocol definitions
  4. Type Checking: Validate implementations against the official specs
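The documentation-generation use case, for example, amounts to introspecting the protocol classes. A minimal sketch of the idea, using a hypothetical stand-in class rather than a real llama_stack_api protocol:

```python
import inspect
from typing import Protocol


# Hypothetical stand-in mirroring the shape of a llama_stack_api
# protocol class (the real ones live in llama_stack_api.inference, etc.).
class ExampleAPI(Protocol):
    async def chat_completion(self, request: dict) -> dict:
        """Generate a chat completion."""
        ...


# Collect public method names and docstrings, as a doc generator might.
docs = {
    name: inspect.getdoc(member)
    for name, member in inspect.getmembers(ExampleAPI, inspect.isfunction)
    if not name.startswith("_")
}
print(docs)  # -> {'chat_completion': 'Generate a chat completion.'}
```

The same traversal applied to the real protocol classes is essentially how API reference pages can be produced without running a server.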

Installation

pip install llama-stack-api

Or with uv:

uv pip install llama-stack-api

Dependencies

Minimal dependencies:

  • pydantic>=2.11.9 - For data validation and serialization
  • jsonschema - For JSON schema utilities

Versioning

This package follows semantic versioning independently of the main llama-stack package:

  • Patch versions (0.1.x): Documentation, internal improvements
  • Minor versions (0.x.0): New APIs, backward-compatible changes
  • Major versions (x.0.0): Breaking changes to existing APIs

Current version: 0.4.0.dev0
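Downstream projects that rely on these guarantees may want to pin a compatible range, e.g. in a requirements file (bounds illustrative):

```
llama-stack-api>=0.4.0.dev0,<0.5
```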

Usage Example

from llama_stack_api.inference import Inference, ChatCompletionRequest
from llama_stack_api.providers.datatypes import InlineProviderSpec
from llama_stack_api.datatypes import Api


# Use protocol definitions for type checking
class MyInferenceProvider(Inference):
    async def chat_completion(self, request: ChatCompletionRequest):
        # Your implementation
        pass


# Define provider specifications
my_provider_spec = InlineProviderSpec(
    api=Api.inference,
    provider_type="inline::my-provider",
    pip_packages=["my-dependencies"],
    module="my_package.providers.inference",
    config_class="my_package.providers.inference.MyConfig",
)
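Beyond static checking with mypy or pyright, Python's runtime-checkable protocols allow a coarse conformance check at runtime. A minimal sketch with a hypothetical stand-in protocol (the real Inference protocol lives in llama_stack_api.inference):

```python
from typing import Protocol, runtime_checkable


# Hypothetical stand-in that mirrors the shape of an API protocol.
@runtime_checkable
class InferenceLike(Protocol):
    async def chat_completion(self, request: dict) -> dict: ...


class GoodProvider:
    async def chat_completion(self, request: dict) -> dict:
        return {"echo": request}


class IncompleteProvider:
    pass


# runtime_checkable lets isinstance() verify method presence
# (names only, not signatures; use a static checker for full validation).
print(isinstance(GoodProvider(), InferenceLike))        # True
print(isinstance(IncompleteProvider(), InferenceLike))  # False
```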

Relationship to llama-stack

The main llama-stack package depends on llama-stack-api and provides:

  • Full server implementation
  • Built-in provider implementations
  • CLI tools for running and managing stacks
  • Runtime provider resolution and orchestration

Contributing

See the main Llama Stack repository for contribution guidelines.

License

MIT License - see LICENSE file for details.