chore: use Pydantic to generate OpenAPI schema

Removes the need for the strong_typing and pyopenapi packages and relies
purely on Pydantic for schema generation.

The generator now relies purely on Pydantic and FastAPI. It lives at
`scripts/fastapi_generator.py` and can be run like so:

```
uv run ./scripts/run_openapi_generator.sh
```

The generator will:

* Generate the deprecated, experimental, stable, and combined specs
* Validate every spec it generates against the OpenAPI standard
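
For reference, a successful run should leave one spec per flavor under
`docs/static/`. The listing below is illustrative rather than captured
from an actual run; the file names are inferred from the generator's
`{stability}-llama-stack-spec.yaml` naming and the Stainless sync step:

```
$ uv run ./scripts/run_openapi_generator.sh
$ ls docs/static/*llama-stack-spec.yaml
docs/static/deprecated-llama-stack-spec.yaml
docs/static/experimental-llama-stack-spec.yaml
docs/static/llama-stack-spec.yaml
docs/static/stainless-llama-stack-spec.yaml
```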

A few schema changes required updates to the oasdiff configuration, so
I've added the following ignore rules. The new Pydantic-based generator
is likely more correct and follows OpenAPI standards more closely than
the old pyopenapi generator. Instead of trying to make the new generator
match the old one's quirks, we should focus on what's actually correct
according to the OpenAPI standard.

These are non-critical changes:

* response-property-became-nullable: Backward compatible:
  existing non-null values still work, and null is now also accepted
  (see the example after this list)
* response-required-property-removed: oasdiff reports a false
  positive because it doesn't resolve $refs inside anyOf; we could use a
  tool like 'redocly' to flatten the schema into a single file.
* response-property-type-changed: properties are still object
  types, but oasdiff doesn't resolve $refs, so it flags the missing
  inline `type: object` even though the referenced schemas define
  `type: object`.
* request-property-one-of-removed: These are false positives
  caused by schema restructuring (wrapping in anyOf for nullability,
  using -Input variants, or simplifying nested oneOf structures)
  that don't change the actual API contract - the same data types are
  still accepted, just represented differently in the schema.
* request-parameter-enum-value-removed: These are false
  positives caused by oasdiff not resolving $refs - the enum values
  (asc, desc, assistants, batch) are still present in the referenced
  schemas (Order and OpenAIFilePurpose), just represented via schema
  references instead of inline enums.
* request-property-enum-value-removed: This is a false positive caused
  by oasdiff not resolving $refs - the enum values (llm, embedding,
  rerank) are still present in the referenced ModelType schema, just
  represented via a schema reference instead of an inline enum.
* request-property-type-changed: These are schema quality issues
  where type information is missing (due to Any fallback in dynamic
  model creation), but the API contract remains unchanged -
  properties still exist with correct names and defaults, so the same
  requests will work.
* response-body-type-changed: These are false positives caused
  by schema representation changes (from inferred/empty types to
  explicit $ref schemas, or vice versa) - the actual response types
  and API contract remain unchanged, just how they're represented in
  the OpenAPI spec.
* response-media-type-removed: This is a false positive caused
  by FastAPI's OpenAPI generator not documenting union return types with
  AsyncIterator - the streaming functionality with text/event-stream
  media type still works when stream=True is passed, it's just not
  reflected in the generated OpenAPI spec.
* request-body-type-changed: This is a schema correction - the
  old spec incorrectly represented the request body as an object, but
  the function signature shows chunks: list[Chunk], so the new spec
  correctly shows it as an array, matching the actual API
  implementation.
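
To make the $ref-related false positives concrete, here is a hand-written
sketch (illustrative, not copied from the generated specs) of the kind of
restructuring oasdiff misreads when it doesn't dereference $refs:

```
# Old generator: inline enum, inline type
order:
  type: string
  enum: [asc, desc]

# New generator: referenced schema, nullability via anyOf
order:
  anyOf:
    - $ref: '#/components/schemas/Order'
    - type: 'null'

components:
  schemas:
    Order:
      type: string
      enum: [asc, desc]
```

A diff tool that doesn't follow the reference sees the inline enum
disappear and reports its values as removed, even though the Order
schema still defines exactly the same values.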

Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han 2025-10-29 14:38:56 +01:00
parent a078f089d9
commit e3cb8ed74a
45 changed files with 46606 additions and 18616 deletions


@@ -129,12 +129,32 @@ jobs:
           echo "Will compare: ${BASE_SPEC} -> ${CURRENT_SPEC}"
+      - name: Write ignore file
+        run: |
+          cat <<EOF > ignore-oasdiff
+          response-property-became-nullable none
+          response-property-list-of-types-widened none
+          request-parameter-default-value-added none
+          request-property-min-items-increased none
+          response-property-became-optional none
+          response-required-property-removed none
+          response-property-one-of-added none
+          response-property-type-changed none
+          request-property-one-of-removed none
+          request-parameter-enum-value-removed none
+          request-property-enum-value-removed none
+          request-property-type-changed none
+          response-body-type-changed none
+          response-media-type-removed none
+          request-body-type-changed none
+          EOF
       # Run oasdiff to detect breaking changes in the API specification
       # This step will fail if incompatible changes are detected, preventing breaking changes from being merged
       - name: Run OpenAPI Breaking Change Diff
         if: steps.skip-check.outputs.skip != 'true'
         run: |
-          oasdiff breaking --fail-on ERR $BASE_SPEC $CURRENT_SPEC --match-path '^/v1/'
+          oasdiff breaking --fail-on ERR --severity-levels ignore-oasdiff $BASE_SPEC $CURRENT_SPEC --match-path '^/v1/'
       # Report when test is skipped
       - name: Report skip reason


@@ -42,7 +42,6 @@ repos:
       hooks:
         - id: ruff
           args: [ --fix ]
-          exclude: ^(src/llama_stack_api/strong_typing/.*)$
         - id: ruff-format
   - repo: https://github.com/adamchainz/blacken-docs
@@ -111,11 +110,20 @@ repos:
         name: API Spec Codegen
         additional_dependencies:
           - uv==0.7.8
-        entry: sh -c './scripts/uv-run-with-index.sh run ./docs/openapi_generator/run_openapi_generator.sh > /dev/null'
+        entry: sh -c './scripts/uv-run-with-index.sh run scripts/run_openapi_generator.sh'
         language: python
         pass_filenames: false
         require_serial: true
-        files: ^src/llama_stack/apis/|^docs/openapi_generator/
+        files: ^src/llama_stack/apis/
+      - id: openapi-validate
+        name: OpenAPI Schema Validation
+        additional_dependencies:
+          - uv==0.7.8
+        entry: uv run scripts/validate_openapi.py docs/static/ --quiet
+        language: python
+        pass_filenames: false
+        require_serial: true
+        files: ^docs/static/.*\.ya?ml$
       - id: check-workflows-use-hashes
         name: Check GitHub Actions use SHA-pinned actions
         entry: ./scripts/check-workflows-use-hashes.sh

File diff suppressed because it is too large


@@ -1 +0,0 @@
The RFC Specification (OpenAPI format) is generated from the set of API endpoints located in `llama_stack.core/server/endpoints.py` using the `generate.py` utility.


@@ -1,134 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described found in the
# LICENSE file in the root directory of this source tree.

from datetime import datetime
from pathlib import Path
import sys

import fire
import ruamel.yaml as yaml

from llama_stack_api import LLAMA_STACK_API_V1  # noqa: E402
from llama_stack.core.stack import LlamaStack  # noqa: E402

from .pyopenapi.options import Options  # noqa: E402
from .pyopenapi.specification import Info, Server  # noqa: E402
from .pyopenapi.utility import Specification, validate_api  # noqa: E402


def str_presenter(dumper, data):
    if data.startswith(f"/{LLAMA_STACK_API_V1}") or data.startswith(
        "#/components/schemas/"
    ):
        style = None
    else:
        style = ">" if "\n" in data or len(data) > 40 else None
    return dumper.represent_scalar("tag:yaml.org,2002:str", data, style=style)


def generate_spec(output_dir: Path, stability_filter: str = None, main_spec: bool = False, combined_spec: bool = False):
    """Generate OpenAPI spec with optional stability filtering."""
    if combined_spec:
        # Special case for combined stable + experimental APIs
        title_suffix = " - Stable & Experimental APIs"
        filename_prefix = "stainless-"
        description_suffix = "\n\n**🔗 COMBINED**: This specification includes both stable production-ready APIs and experimental pre-release APIs. Use stable APIs for production deployments and experimental APIs for testing new features."
        # Use the special "stainless" filter to include stable + experimental APIs
        stability_filter = "stainless"
    elif stability_filter:
        title_suffix = {
            "stable": " - Stable APIs" if not main_spec else "",
            "experimental": " - Experimental APIs",
            "deprecated": " - Deprecated APIs",
        }.get(stability_filter, f" - {stability_filter.title()} APIs")
        # Use main spec filename for stable when main_spec=True
        if main_spec and stability_filter == "stable":
            filename_prefix = ""
        else:
            filename_prefix = f"{stability_filter}-"
        description_suffix = {
            "stable": "\n\n**✅ STABLE**: Production-ready APIs with backward compatibility guarantees.",
            "experimental": "\n\n**🧪 EXPERIMENTAL**: Pre-release APIs (v1alpha, v1beta) that may change before becoming stable.",
            "deprecated": "\n\n**⚠️ DEPRECATED**: Legacy APIs that may be removed in future versions. Use for migration reference only.",
        }.get(stability_filter, "")
    else:
        title_suffix = ""
        filename_prefix = ""
        description_suffix = ""

    spec = Specification(
        LlamaStack,
        Options(
            server=Server(url="http://any-hosted-llama-stack.com"),
            info=Info(
                title=f"Llama Stack Specification{title_suffix}",
                version=LLAMA_STACK_API_V1,
                description=f"""This is the specification of the Llama Stack that provides
                a set of endpoints and their corresponding interfaces that are tailored to
                best leverage Llama Models.{description_suffix}""",
            ),
            include_standard_error_responses=True,
            stability_filter=stability_filter,  # Pass the filter to the generator
        ),
    )

    yaml_filename = f"{filename_prefix}llama-stack-spec.yaml"
    with open(output_dir / yaml_filename, "w", encoding="utf-8") as fp:
        y = yaml.YAML()
        y.default_flow_style = False
        y.block_seq_indent = 2
        y.map_indent = 2
        y.sequence_indent = 4
        y.sequence_dash_offset = 2
        y.width = 80
        y.allow_unicode = True
        y.representer.add_representer(str, str_presenter)
        y.dump(
            spec.get_json(),
            fp,
        )


def main(output_dir: str):
    output_dir = Path(output_dir)
    if not output_dir.exists():
        raise ValueError(f"Directory {output_dir} does not exist")

    # Validate API protocols before generating spec
    return_type_errors = validate_api()
    if return_type_errors:
        print("\nAPI Method Return Type Validation Errors:\n")
        for error in return_type_errors:
            print(error, file=sys.stderr)
        sys.exit(1)

    now = str(datetime.now())
    print(f"Converting the spec to YAML (openapi.yaml) and HTML (openapi.html) at {now}")
    print("")

    # Generate main spec as stable APIs (llama-stack-spec.yaml)
    print("Generating main specification (stable APIs)...")
    generate_spec(output_dir, "stable", main_spec=True)

    print("Generating other stability-filtered specifications...")
    generate_spec(output_dir, "experimental")
    generate_spec(output_dir, "deprecated")

    print("Generating combined stable + experimental specification...")
    generate_spec(output_dir, combined_spec=True)


if __name__ == "__main__":
    fire.Fire(main)


@@ -1 +0,0 @@
This is forked from https://github.com/hunyadi/pyopenapi


@@ -1,5 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

File diff suppressed because it is too large


@@ -1,459 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import collections.abc
import enum
import inspect
import typing
from dataclasses import dataclass
from typing import Any, Callable, Dict, Iterable, Iterator, List, Optional, Tuple, Union

from termcolor import colored
from typing import get_origin, get_args
from fastapi import UploadFile
from fastapi.params import File, Form
from typing import Annotated

from llama_stack_api import (
    ExtraBodyField,
    LLAMA_STACK_API_V1,
    LLAMA_STACK_API_V1ALPHA,
    LLAMA_STACK_API_V1BETA,
    get_signature,
)


def split_prefix(
    s: str, sep: str, prefix: Union[str, Iterable[str]]
) -> Tuple[Optional[str], str]:
    """
    Recognizes a prefix at the beginning of a string.

    :param s: The string to check.
    :param sep: A separator between (one of) the prefix(es) and the rest of the string.
    :param prefix: A string or a set of strings to identify as a prefix.
    :return: A tuple of the recognized prefix (if any) and the rest of the string excluding the separator (or the entire string).
    """
    if isinstance(prefix, str):
        if s.startswith(prefix + sep):
            return prefix, s[len(prefix) + len(sep) :]
        else:
            return None, s
    for p in prefix:
        if s.startswith(p + sep):
            return p, s[len(p) + len(sep) :]
    return None, s


def _get_annotation_type(annotation: Union[type, str], callable: Callable) -> type:
    "Maps a stringized reference to a type, as if using `from __future__ import annotations`."
    if isinstance(annotation, str):
        return eval(annotation, callable.__globals__)
    else:
        return annotation


class HTTPMethod(enum.Enum):
    "HTTP method used to invoke an endpoint operation."

    GET = "GET"
    POST = "POST"
    PUT = "PUT"
    DELETE = "DELETE"
    PATCH = "PATCH"


OperationParameter = Tuple[str, type]


class ValidationError(TypeError):
    pass


@dataclass
class EndpointOperation:
    """
    Type information and metadata associated with an endpoint operation.

    :param defining_class: The most specific class that defines the endpoint operation.
    :param name: The short name of the endpoint operation.
    :param func_name: The name of the function to invoke when the operation is triggered.
    :param func_ref: The callable to invoke when the operation is triggered.
    :param route: A custom route string assigned to the operation.
    :param path_params: Parameters of the operation signature that are passed in the path component of the URL string.
    :param query_params: Parameters of the operation signature that are passed in the query string as `key=value` pairs.
    :param request_params: The parameter that corresponds to the data transmitted in the request body.
    :param multipart_params: Parameters that indicate multipart/form-data request body.
    :param extra_body_params: Parameters that arrive via extra_body and are documented but not in SDK.
    :param event_type: The Python type of the data that is transmitted out-of-band (e.g. via websockets) while the operation is in progress.
    :param response_type: The Python type of the data that is transmitted in the response body.
    :param http_method: The HTTP method used to invoke the endpoint such as POST, GET or PUT.
    :param public: True if the operation can be invoked without prior authentication.
    :param request_examples: Sample requests that the operation might take.
    :param response_examples: Sample responses that the operation might produce.
    """

    defining_class: type
    name: str
    func_name: str
    func_ref: Callable[..., Any]
    route: Optional[str]
    path_params: List[OperationParameter]
    query_params: List[OperationParameter]
    request_params: Optional[OperationParameter]
    multipart_params: List[OperationParameter]
    extra_body_params: List[tuple[str, type, str | None]]
    event_type: Optional[type]
    response_type: type
    http_method: HTTPMethod
    public: bool
    request_examples: Optional[List[Any]] = None
    response_examples: Optional[List[Any]] = None

    def get_route(self, webmethod) -> str:
        api_level = webmethod.level
        if self.route is not None:
            return "/".join(["", api_level, self.route.lstrip("/")])
        route_parts = ["", api_level, self.name]
        for param_name, _ in self.path_params:
            route_parts.append("{" + param_name + "}")
        return "/".join(route_parts)


class _FormatParameterExtractor:
    "A visitor to extract parameters in a format string."

    keys: List[str]

    def __init__(self) -> None:
        self.keys = []

    def __getitem__(self, key: str) -> None:
        self.keys.append(key)
        return None


def _get_route_parameters(route: str) -> List[str]:
    extractor = _FormatParameterExtractor()
    # Replace all occurrences of ":path" with empty string
    route = route.replace(":path", "")
    route.format_map(extractor)
    return extractor.keys


def _get_endpoint_functions(
    endpoint: type, prefixes: List[str]
) -> Iterator[Tuple[str, str, str, Callable]]:
    if not inspect.isclass(endpoint):
        raise ValueError(f"object is not a class type: {endpoint}")
    functions = inspect.getmembers(endpoint, inspect.isfunction)
    for func_name, func_ref in functions:
        webmethods = []
        # Check for multiple webmethods (stacked decorators)
        if hasattr(func_ref, "__webmethods__"):
            webmethods = func_ref.__webmethods__
        if not webmethods:
            continue
        for webmethod in webmethods:
            print(f"Processing {colored(func_name, 'white')}...")
            operation_name = func_name
            if webmethod.method == "GET":
                prefix = "get"
            elif webmethod.method == "DELETE":
                prefix = "delete"
            elif webmethod.method == "POST":
                prefix = "post"
            elif operation_name.startswith("get_") or operation_name.endswith("/get"):
                prefix = "get"
            elif (
                operation_name.startswith("delete_")
                or operation_name.startswith("remove_")
                or operation_name.endswith("/delete")
                or operation_name.endswith("/remove")
            ):
                prefix = "delete"
            else:
                # by default everything else is a POST
                prefix = "post"
            yield prefix, operation_name, func_name, func_ref


def _get_defining_class(member_fn: str, derived_cls: type) -> type:
    "Find the class in which a member function is first defined in a class inheritance hierarchy."
    # iterate in reverse member resolution order to find most specific class first
    for cls in reversed(inspect.getmro(derived_cls)):
        for name, _ in inspect.getmembers(cls, inspect.isfunction):
            if name == member_fn:
                return cls
    raise ValidationError(
        f"cannot find defining class for {member_fn} in {derived_cls}"
    )


def get_endpoint_operations(
    endpoint: type, use_examples: bool = True
) -> List[EndpointOperation]:
    """
    Extracts a list of member functions in a class eligible for HTTP interface binding.

    These member functions are expected to have a signature like
    ```
    async def get_object(self, uuid: str, version: int) -> Object:
        ...
    ```
    where the prefix `get_` translates to an HTTP GET, `object` corresponds to the name of the endpoint operation,
    `uuid` and `version` are mapped to route path elements in "/object/{uuid}/{version}", and `Object` becomes
    the response payload type, transmitted as an object serialized to JSON.

    If the member function has a composite class type in the argument list, it becomes the request payload type,
    and the caller is expected to provide the data as serialized JSON in an HTTP POST request.

    :param endpoint: A class with member functions that can be mapped to an HTTP endpoint.
    :param use_examples: Whether to return examples associated with member functions.
    """
    result = []
    for prefix, operation_name, func_name, func_ref in _get_endpoint_functions(
        endpoint,
        [
            "create",
            "delete",
            "do",
            "get",
            "post",
            "put",
            "remove",
            "set",
            "update",
        ],
    ):
        # Get all webmethods for this function
        webmethods = getattr(func_ref, "__webmethods__", [])
        # Create one EndpointOperation for each webmethod
        for webmethod in webmethods:
            route = webmethod.route
            route_params = _get_route_parameters(route) if route is not None else None
            public = webmethod.public
            request_examples = webmethod.request_examples
            response_examples = webmethod.response_examples

            # inspect function signature for path and query parameters, and request/response payload type
            signature = get_signature(func_ref)

            path_params = []
            query_params = []
            request_params = []
            multipart_params = []
            extra_body_params = []

            for param_name, parameter in signature.parameters.items():
                param_type = _get_annotation_type(parameter.annotation, func_ref)

                # omit "self" for instance methods
                if param_name == "self" and param_type is inspect.Parameter.empty:
                    continue

                # check if all parameters have explicit type
                if parameter.annotation is inspect.Parameter.empty:
                    raise ValidationError(
                        f"parameter '{param_name}' in function '{func_name}' has no type annotation"
                    )

                # Check if this is an extra_body parameter
                is_extra_body, extra_body_desc = _is_extra_body_param(param_type)
                if is_extra_body:
                    # Store in a separate list for documentation
                    extra_body_params.append((param_name, param_type, extra_body_desc))
                    continue  # Skip adding to request_params

                is_multipart = _is_multipart_param(param_type)

                if prefix in ["get", "delete"]:
                    if route_params is not None and param_name in route_params:
                        path_params.append((param_name, param_type))
                    else:
                        query_params.append((param_name, param_type))
                else:
                    if route_params is not None and param_name in route_params:
                        path_params.append((param_name, param_type))
                    elif is_multipart:
                        multipart_params.append((param_name, param_type))
                    else:
                        request_params.append((param_name, param_type))

            # check if function has explicit return type
            if signature.return_annotation is inspect.Signature.empty:
                raise ValidationError(
                    f"function '{func_name}' has no return type annotation"
                )

            return_type = _get_annotation_type(signature.return_annotation, func_ref)

            # operations that produce events are labeled as Generator[YieldType, SendType, ReturnType]
            # where YieldType is the event type, SendType is None, and ReturnType is the immediate response type to the request
            if typing.get_origin(return_type) is collections.abc.Generator:
                event_type, send_type, response_type = typing.get_args(return_type)
                if send_type is not type(None):
                    raise ValidationError(
                        f"function '{func_name}' has a return type Generator[Y,S,R] and therefore looks like an event but has an explicit send type"
                    )
            else:
                event_type = None

                def process_type(t):
                    if typing.get_origin(t) is collections.abc.AsyncIterator:
                        # NOTE(ashwin): this is SSE and there is no way to represent it. either we make it a List
                        # or the item type. I am choosing it to be the latter
                        args = typing.get_args(t)
                        return args[0]
                    elif typing.get_origin(t) is typing.Union:
                        types = [process_type(a) for a in typing.get_args(t)]
                        return typing._UnionGenericAlias(typing.Union, tuple(types))
                    else:
                        return t

                response_type = process_type(return_type)

            if prefix in ["delete", "remove"]:
                http_method = HTTPMethod.DELETE
            elif prefix == "post":
                http_method = HTTPMethod.POST
            elif prefix == "get":
                http_method = HTTPMethod.GET
            elif prefix == "set":
                http_method = HTTPMethod.PUT
            elif prefix == "update":
                http_method = HTTPMethod.PATCH
            else:
                raise ValidationError(f"unknown prefix {prefix}")

            # Create an EndpointOperation for this specific webmethod
            operation = EndpointOperation(
                defining_class=_get_defining_class(func_name, endpoint),
                name=operation_name,
                func_name=func_name,
                func_ref=func_ref,
                route=route,
                path_params=path_params,
                query_params=query_params,
                request_params=request_params,
                multipart_params=multipart_params,
                extra_body_params=extra_body_params,
                event_type=event_type,
                response_type=response_type,
                http_method=http_method,
                public=public,
                request_examples=request_examples if use_examples else None,
                response_examples=response_examples if use_examples else None,
            )
            # Store the specific webmethod with this operation
            operation.webmethod = webmethod
            result.append(operation)

    if not result:
        raise ValidationError(f"no eligible endpoint operations in type {endpoint}")

    return result


def get_endpoint_events(endpoint: type) -> Dict[str, type]:
    results = {}
    for decl in typing.get_type_hints(endpoint).values():
        # check if signature is Callable[...]
        origin = typing.get_origin(decl)
        if origin is None or not issubclass(origin, Callable):  # type: ignore
            continue
        # check if signature is Callable[[...], Any]
        args = typing.get_args(decl)
        if len(args) != 2:
            continue
        params_type, return_type = args
        if not isinstance(params_type, list):
            continue
        # check if signature is Callable[[...], None]
        if not issubclass(return_type, type(None)):
            continue
        # check if signature is Callable[[EventType], None]
        if len(params_type) != 1:
            continue
        param_type = params_type[0]
        results[param_type.__name__] = param_type
    return results


def _is_multipart_param(param_type: type) -> bool:
    """
    Check if a parameter type indicates multipart form data.

    Returns True if the type is:
    - UploadFile
    - Annotated[UploadFile, File()]
    - Annotated[str, Form()]
    - Annotated[Any, File()]
    - Annotated[Any, Form()]
    """
    if param_type is UploadFile:
        return True
    # Check for Annotated types
    origin = get_origin(param_type)
    if origin is None:
        return False
    if origin is Annotated:
        args = get_args(param_type)
        if len(args) < 2:
            return False
        # Check the annotations for File() or Form()
        for annotation in args[1:]:
            if isinstance(annotation, (File, Form)):
                return True
    return False


def _is_extra_body_param(param_type: type) -> tuple[bool, str | None]:
    """
    Check if parameter is marked as coming from extra_body.

    Returns:
        (is_extra_body, description): Tuple of boolean and optional description
    """
    origin = get_origin(param_type)
    if origin is Annotated:
        args = get_args(param_type)
        for annotation in args[1:]:
            if isinstance(annotation, ExtraBodyField):
                return True, annotation.description
            # Also check by type name for cases where import matters
            if type(annotation).__name__ == 'ExtraBodyField':
                return True, getattr(annotation, 'description', None)
    return False, None


@@ -1,78 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import dataclasses
from dataclasses import dataclass
from http import HTTPStatus
from typing import Callable, ClassVar, Dict, List, Optional, Tuple, Union

from .specification import (
    Info,
    SecurityScheme,
    SecuritySchemeAPI,
    SecuritySchemeHTTP,
    SecuritySchemeOpenIDConnect,
    Server,
)

HTTPStatusCode = Union[HTTPStatus, int, str]


@dataclass
class Options:
    """
    :param server: Base URL for the API endpoint.
    :param info: Meta-information for the endpoint specification.
    :param version: OpenAPI specification version as a tuple of major, minor, revision.
    :param default_security_scheme: Security scheme to apply to endpoints, unless overridden on a per-endpoint basis.
    :param extra_types: Extra types in addition to those found in operation signatures. Use a dictionary to group related types.
    :param use_examples: Whether to emit examples for operations.
    :param success_responses: Associates operation response types with HTTP status codes.
    :param error_responses: Associates error response types with HTTP status codes.
    :param error_wrapper: True if errors are encapsulated in an error object wrapper.
    :param property_description_fun: Custom transformation function to apply to class property documentation strings.
    :param captions: User-defined captions for sections such as "Operations" or "Types", and (if applicable) groups of extra types.
    :param include_standard_error_responses: Whether to include standard error responses (400, 429, 500, 503) in all operations.
    """

    server: Server
    info: Info
    version: Tuple[int, int, int] = (3, 1, 0)
    default_security_scheme: Optional[SecurityScheme] = None
    extra_types: Union[List[type], Dict[str, List[type]], None] = None
    use_examples: bool = True
    success_responses: Dict[type, HTTPStatusCode] = dataclasses.field(
        default_factory=dict
    )
    error_responses: Dict[type, HTTPStatusCode] = dataclasses.field(
        default_factory=dict
    )
    error_wrapper: bool = False
    property_description_fun: Optional[Callable[[type, str, str], str]] = None
    captions: Optional[Dict[str, str]] = None
    include_standard_error_responses: bool = True
    stability_filter: Optional[str] = None

    default_captions: ClassVar[Dict[str, str]] = {
        "Operations": "Operations",
        "Types": "Types",
        "Events": "Events",
        "AdditionalTypes": "Additional types",
    }

    def map(self, id: str) -> str:
        "Maps a language-neutral placeholder string to language-dependent text."
        if self.captions is not None:
            caption = self.captions.get(id)
            if caption is not None:
                return caption
        caption = self.__class__.default_captions.get(id)
        if caption is not None:
            return caption
        raise KeyError(f"no caption found for ID: {id}")


@@ -1,269 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import dataclasses
import enum
from dataclasses import dataclass
from typing import Any, ClassVar, Dict, List, Optional, Union

from llama_stack_api import JsonType, Schema, StrictJsonType

URL = str


@dataclass
class Ref:
    ref_type: ClassVar[str]
    id: str

    def to_json(self) -> StrictJsonType:
        return {"$ref": f"#/components/{self.ref_type}/{self.id}"}


@dataclass
class SchemaRef(Ref):
    ref_type: ClassVar[str] = "schemas"


SchemaOrRef = Union[Schema, SchemaRef]


@dataclass
class ResponseRef(Ref):
    ref_type: ClassVar[str] = "responses"


@dataclass
class ParameterRef(Ref):
    ref_type: ClassVar[str] = "parameters"


@dataclass
class ExampleRef(Ref):
    ref_type: ClassVar[str] = "examples"


@dataclass
class Contact:
    name: Optional[str] = None
    url: Optional[URL] = None
    email: Optional[str] = None


@dataclass
class License:
    name: str
    url: Optional[URL] = None


@dataclass
class Info:
    title: str
    version: str
    description: Optional[str] = None
    termsOfService: Optional[str] = None
    contact: Optional[Contact] = None
    license: Optional[License] = None


@dataclass
class MediaType:
    schema: Optional[SchemaOrRef] = None
    example: Optional[Any] = None
    examples: Optional[Dict[str, Union["Example", ExampleRef]]] = None


@dataclass
class RequestBody:
    content: Dict[str, MediaType | Dict[str, Any]]
    description: Optional[str] = None
    required: Optional[bool] = None


@dataclass
class Response:
    description: str
    content: Optional[Dict[str, MediaType]] = None


class ParameterLocation(enum.Enum):
    Query = "query"
    Header = "header"
    Path = "path"
    Cookie = "cookie"


@dataclass
class Parameter:
    name: str
    in_: ParameterLocation
    description: Optional[str] = None
    required: Optional[bool] = None
    schema: Optional[SchemaOrRef] = None
    example: Optional[Any] = None


@dataclass
class ExtraBodyParameter:
    """Represents a parameter that arrives via extra_body in the request."""

    name: str
    schema: SchemaOrRef
    description: Optional[str] = None
    required: Optional[bool] = None


@dataclass
class Operation:
    responses: Dict[str, Union[Response, ResponseRef]]
    tags: Optional[List[str]] = None
    summary: Optional[str] = None
    description: Optional[str] = None
    operationId: Optional[str] = None
    parameters: Optional[List[Parameter]] = None
    requestBody: Optional[RequestBody] = None
    callbacks: Optional[Dict[str, "Callback"]] = None
    security: Optional[List["SecurityRequirement"]] = None
    deprecated: Optional[bool] = None
    extraBodyParameters: Optional[List[ExtraBodyParameter]] = None


@dataclass
class PathItem:
    summary: Optional[str] = None
    description: Optional[str] = None
    get: Optional[Operation] = None
    put: Optional[Operation] = None
    post: Optional[Operation] = None
    delete: Optional[Operation] = None
    options: Optional[Operation] = None
    head: Optional[Operation] = None
    patch: Optional[Operation] = None
    trace: Optional[Operation] = None

    def update(self, other: "PathItem") -> None:
        "Merges another instance of this class into this object."
        for field in dataclasses.fields(self.__class__):
            value = getattr(other, field.name)
            if value is not None:
                setattr(self, field.name, value)


# maps run-time expressions such as "$request.body#/url" to path items
Callback = Dict[str, PathItem]


@dataclass
class Example:
    summary: Optional[str] = None
    description: Optional[str] = None
    value: Optional[Any] = None
    externalValue: Optional[URL] = None


@dataclass
class Server:
    url: URL
    description: Optional[str] = None


class SecuritySchemeType(enum.Enum):
    ApiKey = "apiKey"
    HTTP = "http"
    OAuth2 = "oauth2"
    OpenIDConnect = "openIdConnect"


@dataclass
class SecurityScheme:
    type: SecuritySchemeType
    description: str


@dataclass(init=False)
class SecuritySchemeAPI(SecurityScheme):
    name: str
    in_: ParameterLocation

    def __init__(self, description: str, name: str, in_: ParameterLocation) -> None:
        super().__init__(SecuritySchemeType.ApiKey, description)
        self.name = name
        self.in_ = in_


@dataclass(init=False)
class SecuritySchemeHTTP(SecurityScheme):
    scheme: str
    bearerFormat: Optional[str] = None

    def __init__(
        self, description: str, scheme: str, bearerFormat: Optional[str] = None
    ) -> None:
        super().__init__(SecuritySchemeType.HTTP, description)
        self.scheme = scheme
        self.bearerFormat = bearerFormat


@dataclass(init=False)
class SecuritySchemeOpenIDConnect(SecurityScheme):
    openIdConnectUrl: str

    def __init__(self, description: str, openIdConnectUrl: str) -> None:
        super().__init__(SecuritySchemeType.OpenIDConnect, description)
        self.openIdConnectUrl = openIdConnectUrl


@dataclass
class Components:
    schemas: Optional[Dict[str, Schema]] = None
    responses: Optional[Dict[str, Response]] = None
    parameters: Optional[Dict[str, Parameter]] = None
    examples: Optional[Dict[str, Example]] = None
    requestBodies: Optional[Dict[str, RequestBody]] = None
    securitySchemes: Optional[Dict[str, SecurityScheme]] = None
    callbacks: Optional[Dict[str, Callback]] = None


SecurityScope = str
SecurityRequirement = Dict[str, List[SecurityScope]]


@dataclass
class Tag:
    name: str
    description: Optional[str] = None
    displayName: Optional[str] = None


@dataclass
class TagGroup:
    """
    A ReDoc extension to provide information about groups of tags.

    Exposed via the vendor-specific property "x-tagGroups" of the top-level object.
    """

    name: str
    tags: List[str]


@dataclass
class Document:
    """
    This class is a Python dataclass adaptation of the OpenAPI Specification.

    For details, see <https://swagger.io/specification/>
    """

    openapi: str
    info: Info
    servers: List[Server]
    paths: Dict[str, PathItem]
    jsonSchemaDialect: Optional[str] = None
    components: Optional[Components] = None
    security: Optional[List[SecurityRequirement]] = None
    tags: Optional[List[Tag]] = None
    tagGroups: Optional[List[TagGroup]] = None


@@ -1,41 +0,0 @@
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>OpenAPI specification</title>
    <link href="https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700" rel="stylesheet">
    <script type="module" src="https://cdn.jsdelivr.net/npm/@stoplight/elements/web-components.min.js"></script>
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@stoplight/elements/styles.min.css">
    <style>
        body {
            margin: 0;
            padding: 0;
            height: 100vh;
        }
        elements-api {
            height: 100%;
        }
    </style>
</head>
<body>
    <elements-api id="openapi-container" router="hash" layout="sidebar" hideExport="true"
        hideInternal="true"></elements-api>
    <script>
        document.addEventListener("DOMContentLoaded", function () {
            const spec = { /* OPENAPI_SPECIFICATION */ };
            const element = document.getElementById("openapi-container");
            element.apiDescriptionDocument = spec;
            if (spec.info && spec.info.title) {
                document.title = spec.info.title;
            }
        });
    </script>
</body>
</html>


@@ -1,287 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import json
import typing
import inspect
from pathlib import Path
from typing import Any, List, Optional, TextIO, Union, get_type_hints, get_origin, get_args

from pydantic import BaseModel

from llama_stack_api import StrictJsonType, is_unwrapped_body_param, object_to_json
from llama_stack.core.resolver import api_protocol_map

from .generator import Generator
from .options import Options
from .specification import Document

THIS_DIR = Path(__file__).parent


class Specification:
    document: Document

    def __init__(self, endpoint: type, options: Options):
        generator = Generator(endpoint, options)
        self.document = generator.generate()

    def get_json(self) -> StrictJsonType:
        """
        Returns the OpenAPI specification as a Python data type (e.g. `dict` for an object, `list` for an array).

        The result can be serialized to a JSON string with `json.dump` or `json.dumps`.
        """
        json_doc = typing.cast(StrictJsonType, object_to_json(self.document))

        if isinstance(json_doc, dict):
            # rename vendor-specific properties
            tag_groups = json_doc.pop("tagGroups", None)
            if tag_groups:
                json_doc["x-tagGroups"] = tag_groups
            tags = json_doc.get("tags")
            if tags and isinstance(tags, list):
                for tag in tags:
                    if not isinstance(tag, dict):
                        continue
                    display_name = tag.pop("displayName", None)
                    if display_name:
                        tag["x-displayName"] = display_name
            # Handle operations to rename extraBodyParameters -> x-llama-stack-extra-body-params
            paths = json_doc.get("paths", {})
            for path_item in paths.values():
                if isinstance(path_item, dict):
                    for method in ["get", "post", "put", "delete", "patch"]:
                        operation = path_item.get(method)
                        if operation and isinstance(operation, dict):
                            extra_body_params = operation.pop("extraBodyParameters", None)
                            if extra_body_params:
                                operation["x-llama-stack-extra-body-params"] = extra_body_params

        return json_doc

    def get_json_string(self, pretty_print: bool = False) -> str:
        """
        Returns the OpenAPI specification as a JSON string.

        :param pretty_print: Whether to use line indents to beautify the output.
        """
        json_doc = self.get_json()
        if pretty_print:
            return json.dumps(
                json_doc, check_circular=False, ensure_ascii=False, indent=4
            )
        else:
            return json.dumps(
                json_doc,
                check_circular=False,
                ensure_ascii=False,
                separators=(",", ":"),
            )

    def write_json(self, f: TextIO, pretty_print: bool = False) -> None:
        """
        Writes the OpenAPI specification to a file as a JSON string.

        :param pretty_print: Whether to use line indents to beautify the output.
        """
        json_doc = self.get_json()
        if pretty_print:
            json.dump(
                json_doc,
                f,
                check_circular=False,
                ensure_ascii=False,
                indent=4,
            )
        else:
            json.dump(
                json_doc,
                f,
                check_circular=False,
                ensure_ascii=False,
                separators=(",", ":"),
            )

    def write_html(self, f: TextIO, pretty_print: bool = False) -> None:
        """
        Creates a stand-alone HTML page for the OpenAPI specification with ReDoc.

        :param pretty_print: Whether to use line indents to beautify the JSON string in the HTML file.
        """
        path = THIS_DIR / "template.html"
        with path.open(encoding="utf-8", errors="strict") as html_template_file:
            html_template = html_template_file.read()
        html = html_template.replace(
            "{ /* OPENAPI_SPECIFICATION */ }",
            self.get_json_string(pretty_print=pretty_print),
        )
        f.write(html)


def is_optional_type(type_: Any) -> bool:
    """Check if a type is Optional."""
    origin = get_origin(type_)
    args = get_args(type_)
    return origin is Optional or (origin is Union and type(None) in args)


def _validate_api_method_return_type(method) -> str | None:
    hints = get_type_hints(method)
    if 'return' not in hints:
        return "has no return type annotation"
    return_type = hints['return']
    if is_optional_type(return_type):
        return "returns Optional type where a return value is mandatory"


def _validate_api_method_doesnt_return_list(method) -> str | None:
    hints = get_type_hints(method)
    if 'return' not in hints:
        return "has no return type annotation"
    return_type = hints['return']
    if get_origin(return_type) is list:
        return "returns a list where a PaginatedResponse or List*Response object is expected"


def _validate_api_delete_method_returns_none(method) -> str | None:
    hints = get_type_hints(method)
    if 'return' not in hints:
        return "has no return type annotation"
    return_type = hints['return']
    # Allow OpenAI endpoints to return response objects since they follow OpenAI specification
    method_name = getattr(method, '__name__', '')
    if method_name.__contains__('openai_'):
        return None
    if return_type is not None and return_type is not type(None):
        return "does not return None where None is mandatory"


def _validate_list_parameters_contain_data(method) -> str | None:
    hints = get_type_hints(method)
    if 'return' not in hints:
        return "has no return type annotation"
    return_type = hints['return']
    if not inspect.isclass(return_type):
        return
    if not return_type.__name__.startswith('List'):
        return
    if 'data' not in return_type.model_fields:
        return "does not have a mandatory data attribute containing the list of objects"


def _validate_has_ellipsis(method) -> str | None:
    source = inspect.getsource(method)
    if "..." not in source and not "NotImplementedError" in source:
        return "does not contain ellipsis (...) in its implementation"


def _validate_has_return_in_docstring(method) -> str | None:
    source = inspect.getsource(method)
    return_type = method.__annotations__.get('return')
    if return_type is not None and return_type != type(None) and ":returns:" not in source:
        return "does not have a ':returns:' in its docstring"


def _validate_has_params_in_docstring(method) -> str | None:
    source = inspect.getsource(method)
    sig = inspect.signature(method)
    params_list = [p for p in sig.parameters.values() if p.name != "self"]
    if len(params_list) == 1:
        param = params_list[0]
        param_type = param.annotation
        if is_unwrapped_body_param(param_type):
            return
    # Only check if the method has more than one parameter
    if len(sig.parameters) > 1 and ":param" not in source:
        return "does not have a ':param' in its docstring"


def _validate_has_no_return_none_in_docstring(method) -> str | None:
    source = inspect.getsource(method)
    return_type = method.__annotations__.get('return')
    if return_type is None and ":returns: None" in source:
        return "has a ':returns: None' in its docstring which is redundant for None-returning functions"


def _validate_docstring_lines_end_with_dot(method) -> str | None:
    docstring = inspect.getdoc(method)
    if docstring is None:
        return None
    lines = docstring.split('\n')
    for line in lines:
        line = line.strip()
        if line and not any(line.endswith(char) for char in '.:{}[]()",'):
            return f"docstring line '{line}' does not end with a valid character: . : {{ }} [ ] ( ) , \""


_VALIDATORS = {
    "GET": [
        _validate_api_method_return_type,
        _validate_list_parameters_contain_data,
        _validate_api_method_doesnt_return_list,
        _validate_has_ellipsis,
        _validate_has_return_in_docstring,
        _validate_has_params_in_docstring,
        _validate_docstring_lines_end_with_dot,
    ],
    "DELETE": [
        _validate_api_delete_method_returns_none,
        _validate_has_ellipsis,
        _validate_has_return_in_docstring,
        _validate_has_params_in_docstring,
        _validate_has_no_return_none_in_docstring,
    ],
    "POST": [
        _validate_has_ellipsis,
        _validate_has_return_in_docstring,
        _validate_has_params_in_docstring,
        _validate_has_no_return_none_in_docstring,
        _validate_docstring_lines_end_with_dot,
    ],
}


def _get_methods_by_type(protocol, method_type: str):
    members = inspect.getmembers(protocol, predicate=inspect.isfunction)
    return {
        method_name: method
        for method_name, method in members
        if (webmethod := getattr(method, '__webmethod__', None))
        if webmethod and webmethod.method == method_type
    }


def validate_api() -> List[str]:
    """Validate the API protocols."""
    errors = []
    protocols = api_protocol_map()
    for target, validators in _VALIDATORS.items():
        for protocol_name, protocol in protocols.items():
            for validator in validators:
                for method_name, method in _get_methods_by_type(protocol, target).items():
                    err = validator(method)
                    if err:
                        errors.append(f"Method {protocol_name}.{method_name} {err}")
    return errors


@@ -1,34 +0,0 @@
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

PYTHONPATH=${PYTHONPATH:-}
THIS_DIR="$(cd "$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")" && pwd)"

set -euo pipefail

missing_packages=()

check_package() {
    if ! pip show "$1" &>/dev/null; then
        missing_packages+=("$1")
    fi
}

if [ ${#missing_packages[@]} -ne 0 ]; then
    echo "Error: The following package(s) are not installed:"
    printf "  - %s\n" "${missing_packages[@]}"
    echo "Please install them using:"
    echo "pip install ${missing_packages[*]}"
    exit 1
fi

stack_dir=$(dirname $(dirname $THIS_DIR))
PYTHONPATH=$PYTHONPATH:$stack_dir \
    python -m docs.openapi_generator.generate $(dirname $THIS_DIR)/static

cp $stack_dir/docs/static/stainless-llama-stack-spec.yaml $stack_dir/client-sdks/stainless/openapi.yml

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -31,7 +31,7 @@ dependencies = [
     "httpx",
     "jinja2>=3.1.6",
     "jsonschema",
     "llama-stack-api", # API and provider specifications (local dev via tool.uv.sources)
     "openai>=2.5.0",
     "prompt-toolkit",
     "python-dotenv",
@@ -50,11 +50,12 @@ dependencies = [
     "aiosqlite>=0.21.0", # server - for metadata store
     "asyncpg", # for metadata store
     "sqlalchemy[asyncio]>=2.0.41", # server - for conversations
+    "pyyaml>=6.0.2",
 ]

 [project.optional-dependencies]
 client = [
     "llama-stack-client>=0.3.0", # Optional for library-only usage
 ]

 [dependency-groups]
@@ -65,13 +66,14 @@ dev = [
     "pytest-cov",
     "pytest-html",
     "pytest-json-report",
     "pytest-socket", # For blocking network access in unit tests
     "nbval", # For notebook testing
     "black",
     "ruff",
     "mypy",
     "pre-commit>=4.4.0",
     "ruamel.yaml", # needed for openapi generator
+    "openapi-spec-validator>=0.7.2",
 ]
 # Type checking dependencies - includes type stubs and optional runtime dependencies
 # needed for complete mypy coverage across all optional features
@@ -181,7 +183,12 @@ install-wheel-from-presigned = "llama_stack.cli.scripts.run:install_wheel_from_p
 [tool.setuptools.packages.find]
 where = ["src"]
-include = ["llama_stack", "llama_stack.*", "llama_stack_api", "llama_stack_api.*"]
+include = [
+    "llama_stack",
+    "llama_stack.*",
+    "llama_stack_api",
+    "llama_stack_api.*",
+]

 [[tool.uv.index]]
 name = "pytorch-cpu"
@@ -248,7 +255,9 @@ unfixable = [
 # Ignore the following errors for the following files
 [tool.ruff.lint.per-file-ignores]
 "tests/**/*.py" = ["DTZ"] # Ignore datetime rules for tests
-"src/llama_stack/providers/inline/scoring/basic/utils/ifeval_utils.py" = ["RUF001"]
+"src/llama_stack/providers/inline/scoring/basic/utils/ifeval_utils.py" = [
+    "RUF001",
+]
 "src/llama_stack/providers/inline/scoring/basic/scoring_fn/fn_defs/regex_parser_multiple_choice_answer.py" = [
     "RUF001",
     "PLE2515",
@@ -340,7 +349,6 @@ exclude = [
     "^src/llama_stack/providers/utils/telemetry/dataset_mixin\\.py$",
     "^src/llama_stack/providers/utils/telemetry/trace_protocol\\.py$",
     "^src/llama_stack/providers/utils/telemetry/tracing\\.py$",
-    "^src/llama_stack_api/strong_typing/auxiliary\\.py$",
     "^src/llama_stack/distributions/template\\.py$",
 ]

scripts/fastapi_generator.py (new executable file, 1591 lines)

File diff suppressed because it is too large


@@ -0,0 +1,19 @@
#!/usr/bin/env bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

PYTHONPATH=${PYTHONPATH:-}
THIS_DIR="$(cd "$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")" && pwd)"

set -euo pipefail

stack_dir=$(dirname "$THIS_DIR")
PYTHONPATH=$PYTHONPATH:$stack_dir \
    python3 -m scripts.fastapi_generator "$stack_dir"/docs/static

cp "$stack_dir"/docs/static/stainless-llama-stack-spec.yaml "$stack_dir"/client-sdks/stainless/openapi.yml

scripts/validate_openapi.py (new executable file, 290 lines)

@@ -0,0 +1,290 @@
#!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
OpenAPI Schema Validator for Llama Stack.

This script provides comprehensive validation of OpenAPI specifications
using multiple validation tools and approaches.
"""

import argparse
import json
import sys
from pathlib import Path
from typing import Any

import yaml
from openapi_spec_validator import validate_spec
from openapi_spec_validator.exceptions import OpenAPISpecValidatorError


def validate_openapi_schema(schema: dict[str, Any], schema_name: str = "OpenAPI schema") -> bool:
    """
    Validate an OpenAPI schema using openapi-spec-validator.

    Args:
        schema: The OpenAPI schema dictionary to validate
        schema_name: Name of the schema for error reporting

    Returns:
        True if valid, False otherwise
    """
    try:
        validate_spec(schema)
        print(f"{schema_name} is valid")
        return True
    except OpenAPISpecValidatorError as e:
        print(f"{schema_name} validation failed:")
        print(f"  {e}")
        return False
    except Exception as e:
        print(f"{schema_name} validation error: {e}")
        return False


def validate_schema_file(file_path: Path) -> bool:
    """
    Validate an OpenAPI schema file (YAML or JSON).

    Args:
        file_path: Path to the schema file

    Returns:
        True if valid, False otherwise
    """
    try:
        with open(file_path) as f:
            if file_path.suffix.lower() in [".yaml", ".yml"]:
                schema = yaml.safe_load(f)
            elif file_path.suffix.lower() == ".json":
                schema = json.load(f)
            else:
                print(f"❌ Unsupported file format: {file_path.suffix}")
                return False
        return validate_openapi_schema(schema, str(file_path))
    except Exception as e:
        print(f"❌ Failed to read {file_path}: {e}")
        return False


def validate_directory(directory: Path, pattern: str = "*.yaml") -> bool:
    """
    Validate all OpenAPI schema files in a directory.

    Args:
        directory: Directory containing schema files
        pattern: Glob pattern to match schema files

    Returns:
        True if all files are valid, False otherwise
    """
    if not directory.exists():
        print(f"❌ Directory not found: {directory}")
        return False

    schema_files = list(directory.glob(pattern)) + list(directory.glob("*.yml")) + list(directory.glob("*.json"))
    if not schema_files:
        print(f"❌ No schema files found in {directory}")
        return False

    print(f"🔍 Found {len(schema_files)} schema files to validate")
    all_valid = True
    for schema_file in schema_files:
        print(f"\n📄 Validating {schema_file.name}...")
        is_valid = validate_schema_file(schema_file)
        if not is_valid:
            all_valid = False
    return all_valid


def get_schema_stats(schema: dict[str, Any]) -> dict[str, int]:
    """
    Get statistics about an OpenAPI schema.

    Args:
        schema: The OpenAPI schema dictionary

    Returns:
        Dictionary with schema statistics
    """
    stats = {
        "paths": len(schema.get("paths", {})),
        "schemas": len(schema.get("components", {}).get("schemas", {})),
        "operations": 0,
        "parameters": 0,
        "responses": 0,
    }
    # Count operations
    for path_info in schema.get("paths", {}).values():
        for method in ["get", "post", "put", "delete", "patch", "head", "options"]:
            if method in path_info:
                stats["operations"] += 1
                operation = path_info[method]
                if "parameters" in operation:
                    stats["parameters"] += len(operation["parameters"])
                if "responses" in operation:
                    stats["responses"] += len(operation["responses"])
    return stats


def print_schema_stats(schema: dict[str, Any], schema_name: str = "Schema") -> None:
    """
    Print statistics about an OpenAPI schema.

    Args:
        schema: The OpenAPI schema dictionary
        schema_name: Name of the schema for display
    """
    stats = get_schema_stats(schema)
    print(f"\n📊 {schema_name} Statistics:")
    print(f"  🛣️ Paths: {stats['paths']}")
    print(f"  📋 Schemas: {stats['schemas']}")
    print(f"  🔧 Operations: {stats['operations']}")
    print(f"  📝 Parameters: {stats['parameters']}")
    print(f"  📤 Responses: {stats['responses']}")


def main():
    """Main entry point for the OpenAPI validator."""
    parser = argparse.ArgumentParser(
        description="Validate OpenAPI specifications",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Validate a specific file
  python validate_openapi.py docs/static/llama-stack-spec.yaml

  # Validate all YAML files in a directory
  python validate_openapi.py docs/static/

  # Validate with detailed statistics
  python validate_openapi.py docs/static/llama-stack-spec.yaml --stats

  # Validate and show only errors
  python validate_openapi.py docs/static/ --quiet
        """,
    )
    parser.add_argument("path", help="Path to schema file or directory containing schema files")
    parser.add_argument("--stats", action="store_true", help="Show detailed schema statistics")
    parser.add_argument("--quiet", action="store_true", help="Only show errors, suppress success messages")
    parser.add_argument("--pattern", default="*.yaml", help="Glob pattern for schema files (default: *.yaml)")
    args = parser.parse_args()

    path = Path(args.path)
    if not path.exists():
        print(f"❌ Path not found: {path}")
        return 1

    if path.is_file():
        # Validate a single file
        if args.quiet:
            # Override the validation function to be quiet
            def quiet_validate(schema, name):
                try:
                    validate_spec(schema)
                    return True
                except Exception as e:
                    print(f"{name}: {e}")
                    return False

            try:
                with open(path) as f:
                    if path.suffix.lower() in [".yaml", ".yml"]:
                        schema = yaml.safe_load(f)
                    elif path.suffix.lower() == ".json":
                        schema = json.load(f)
                    else:
                        print(f"❌ Unsupported file format: {path.suffix}")
                        return 1
                is_valid = quiet_validate(schema, str(path))
                if is_valid and args.stats:
                    print_schema_stats(schema, path.name)
                return 0 if is_valid else 1
            except Exception as e:
                print(f"❌ Failed to read {path}: {e}")
                return 1
        else:
            is_valid = validate_schema_file(path)
            if is_valid and args.stats:
                try:
                    with open(path) as f:
                        if path.suffix.lower() in [".yaml", ".yml"]:
                            schema = yaml.safe_load(f)
                        elif path.suffix.lower() == ".json":
                            schema = json.load(f)
                        else:
                            return 1
                    print_schema_stats(schema, path.name)
                except Exception:
                    pass
            return 0 if is_valid else 1
    elif path.is_dir():
        # Validate all files in directory
        if args.quiet:
            all_valid = True
            schema_files = list(path.glob(args.pattern)) + list(path.glob("*.yml")) + list(path.glob("*.json"))
            for schema_file in schema_files:
                try:
                    with open(schema_file) as f:
                        if schema_file.suffix.lower() in [".yaml", ".yml"]:
                            schema = yaml.safe_load(f)
                        elif schema_file.suffix.lower() == ".json":
                            schema = json.load(f)
                        else:
                            continue
                    try:
                        validate_spec(schema)
                    except Exception as e:
                        print(f"{schema_file.name}: {e}")
                        all_valid = False
                except Exception as e:
                    print(f"❌ Failed to read {schema_file.name}: {e}")
                    all_valid = False
            return 0 if all_valid else 1
        else:
            all_valid = validate_directory(path, args.pattern)
            if all_valid and args.stats:
                # Show stats for all files
                schema_files = list(path.glob(args.pattern)) + list(path.glob("*.yml")) + list(path.glob("*.json"))
                for schema_file in schema_files:
                    try:
                        with open(schema_file) as f:
                            if schema_file.suffix.lower() in [".yaml", ".yml"]:
                                schema = yaml.safe_load(f)
                            elif schema_file.suffix.lower() == ".json":
                                schema = json.load(f)
                            else:
                                continue
                        print_schema_stats(schema, schema_file.name)
                    except Exception:
                        continue
            return 0 if all_valid else 1
    else:
        print(f"❌ Invalid path type: {path}")
        return 1


if __name__ == "__main__":
    sys.exit(main())


@@ -42,22 +42,16 @@ from termcolor import cprint
 from llama_stack.core.build import print_pip_install_help
 from llama_stack.core.configure import parse_and_maybe_upgrade_config
 from llama_stack.core.datatypes import BuildConfig, BuildProvider, DistributionSpec
-from llama_stack.core.request_headers import (
-    PROVIDER_DATA_VAR,
-    request_provider_data_context,
-)
+from llama_stack.core.request_headers import PROVIDER_DATA_VAR, request_provider_data_context
 from llama_stack.core.resolver import ProviderRegistry
 from llama_stack.core.server.routes import RouteImpls, find_matching_route, initialize_route_impls
-from llama_stack.core.stack import (
-    Stack,
-    get_stack_run_config_from_distro,
-    replace_env_vars,
-)
+from llama_stack.core.stack import Stack, get_stack_run_config_from_distro, replace_env_vars
 from llama_stack.core.telemetry import Telemetry
 from llama_stack.core.telemetry.tracing import CURRENT_TRACE_CONTEXT, end_trace, setup_logger, start_trace
 from llama_stack.core.utils.config import redact_sensitive_fields
 from llama_stack.core.utils.context import preserve_contexts_async_generator
 from llama_stack.core.utils.exec import in_notebook
+from llama_stack.core.utils.type_inspection import is_unwrapped_body_param
 from llama_stack.log import get_logger, setup_logging

 logger = get_logger(name=__name__, category="core")


@@ -0,0 +1,45 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Utility functions for type inspection and parameter handling.
"""

import inspect
import typing
from typing import Any, get_args, get_origin

from pydantic import BaseModel
from pydantic.fields import FieldInfo


def is_unwrapped_body_param(param_type: Any) -> bool:
    """
    Check if a parameter type represents an unwrapped body parameter.

    An unwrapped body parameter is an Annotated type with Body(embed=False).
    This is used to determine whether request parameters should be flattened
    in OpenAPI specs and client libraries (matching FastAPI's embed=False behavior).

    Args:
        param_type: The parameter type annotation to check

    Returns:
        True if the parameter should be treated as an unwrapped body parameter
    """
    # Check if it's Annotated with Body(embed=False)
    if get_origin(param_type) is typing.Annotated:
        args = get_args(param_type)
        base_type = args[0]
        metadata = args[1:]

        # Look for Body annotation with embed=False
        # Body() returns a FieldInfo object, so we check for that type and the embed attribute
        for item in metadata:
            if isinstance(item, FieldInfo) and hasattr(item, "embed") and not item.embed:
                return inspect.isclass(base_type) and issubclass(base_type, BaseModel)

    return False
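A short usage sketch of `is_unwrapped_body_param` (the request model is a hypothetical stand-in; it assumes FastAPI's `Body()`, whose return value subclasses Pydantic's `FieldInfo` and carries the `embed` attribute checked above):

```python
from typing import Annotated

from fastapi import Body
from pydantic import BaseModel


class ChunksRequest(BaseModel):  # hypothetical model
    chunks: list[str]


# Body(embed=False): the model's own fields form the request body, so the
# parameter should be flattened in the generated spec.
assert is_unwrapped_body_param(Annotated[ChunksRequest, Body(embed=False)])
# A bare model annotation is not treated as unwrapped.
assert not is_unwrapped_body_param(ChunksRequest)
```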


@@ -54,6 +54,7 @@ class VersionInfo(BaseModel):
     version: str

+@json_schema_type
 class ListRoutesResponse(BaseModel):
     """Response containing a list of all available API routes.


@@ -1314,6 +1314,7 @@ OpenAIResponseInput = Annotated[
 register_schema(OpenAIResponseInput, name="OpenAIResponseInput")

+@json_schema_type
 class ListOpenAIResponseInputItem(BaseModel):
     """List container for OpenAI response input items.


@@ -8,8 +8,6 @@ from collections.abc import Callable
 from dataclasses import dataclass
 from typing import Any, TypeVar

-from .strong_typing.schema import json_schema_type, register_schema  # noqa: F401
-
 class ExtraBodyField[T]:
     """
@@ -48,6 +46,47 @@ class ExtraBodyField[T]:
         self.description = description

+def json_schema_type(cls):
+    """
+    Decorator to mark a Pydantic model for top-level component registration.
+
+    Models marked with this decorator will be registered as top-level components
+    in the OpenAPI schema, while unmarked models will be inlined.
+
+    This provides control over schema registration to avoid unnecessary indirection
+    for simple one-off types while keeping complex reusable types as components.
+    """
+    cls._llama_stack_schema_type = True
+    return cls
+
+
+# Global registry for registered schemas
+_registered_schemas = {}
+
+
+def register_schema(schema_type, name: str | None = None):
+    """
+    Register a schema type for top-level component registration.
+
+    This replicates the behavior of strong_typing's register_schema function.
+    It's used for union types and other complex types that should appear as
+    top-level components in the OpenAPI schema.
+
+    Args:
+        schema_type: The type to register (e.g., union types, Annotated types)
+        name: Optional name for the schema in the OpenAPI spec. If not provided,
+            uses the type's __name__ or a generated name.
+    """
+    if name is None:
+        name = getattr(schema_type, "__name__", f"Anonymous_{id(schema_type)}")
+    # Store the registration information in a global registry
+    # since union types don't allow setting attributes
+    _registered_schemas[schema_type] = {"name": name, "type": schema_type}
+    return schema_type
+
+
 @dataclass
 class WebMethod:
     level: str | None = None
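A short sketch of how the two replacement helpers above compose; the message models are hypothetical stand-ins:

```python
from typing import Annotated, Literal

from pydantic import BaseModel, Field


@json_schema_type
class UserMessage(BaseModel):  # hypothetical model
    role: Literal["user"] = "user"
    content: str


@json_schema_type
class SystemMessage(BaseModel):  # hypothetical model
    role: Literal["system"] = "system"
    content: str


# Unions can't carry attributes, so they go through the registry instead.
Message = Annotated[UserMessage | SystemMessage, Field(discriminator="role")]
register_schema(Message, name="Message")

assert UserMessage._llama_stack_schema_type is True
assert _registered_schemas[Message]["name"] == "Message"
```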


@@ -1,19 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
Provides auxiliary services for working with Python type annotations, converting typed data to and from JSON,
and generating a JSON schema for a complex type.
"""
__version__ = "0.3.4"
__author__ = "Levente Hunyadi"
__copyright__ = "Copyright 2021-2024, Levente Hunyadi"
__license__ = "MIT"
__maintainer__ = "Levente Hunyadi"
__status__ = "Production"


@@ -1,229 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
import dataclasses
import sys
from collections.abc import Callable
from dataclasses import is_dataclass
from typing import TypeVar, overload
if sys.version_info >= (3, 9):
from typing import Annotated as Annotated
else:
from typing import Annotated as Annotated
if sys.version_info >= (3, 10):
from typing import TypeAlias as TypeAlias
else:
from typing import TypeAlias as TypeAlias
if sys.version_info >= (3, 11):
from typing import dataclass_transform as dataclass_transform
else:
from typing import dataclass_transform as dataclass_transform
T = TypeVar("T")
def _compact_dataclass_repr(obj: object) -> str:
"""
Compact data-class representation where positional arguments are used instead of keyword arguments.
:param obj: A data-class object.
:returns: A string that matches the pattern `Class(arg1, arg2, ...)`.
"""
if is_dataclass(obj):
arglist = ", ".join(repr(getattr(obj, field.name)) for field in dataclasses.fields(obj))
return f"{obj.__class__.__name__}({arglist})"
else:
return obj.__class__.__name__
class CompactDataClass:
"A data class whose repr() uses positional rather than keyword arguments."
def __repr__(self) -> str:
return _compact_dataclass_repr(self)
@overload
def typeannotation(cls: type[T], /) -> type[T]: ...
@overload
def typeannotation(cls: None, *, eq: bool = True, order: bool = False) -> Callable[[type[T]], type[T]]: ...
@dataclass_transform(eq_default=True, order_default=False)
def typeannotation(
cls: type[T] | None = None, *, eq: bool = True, order: bool = False
) -> type[T] | Callable[[type[T]], type[T]]:
"""
Returns the same class as was passed in, with dunder methods added based on the fields defined in the class.
:param cls: The data-class type to transform into a type annotation.
:param eq: Whether to generate functions to support equality comparison.
:param order: Whether to generate functions to support ordering.
:returns: A data-class type, or a wrapper for data-class types.
"""
def wrap(cls: type[T]) -> type[T]:
# mypy fails to equate bound-y functions (first argument interpreted as
# the bound object) with class methods, hence the `ignore` directive.
cls.__repr__ = _compact_dataclass_repr # type: ignore[method-assign]
if not dataclasses.is_dataclass(cls):
cls = dataclasses.dataclass( # type: ignore[call-overload]
cls,
init=True,
repr=False,
eq=eq,
order=order,
unsafe_hash=False,
frozen=True,
)
return cls
# see if decorator is used as @typeannotation or @typeannotation()
if cls is None:
# called with parentheses
return wrap
else:
# called without parentheses
return wrap(cls)
@typeannotation
class Alias:
"Alternative name of a property, typically used in JSON serialization."
name: str
@typeannotation
class Signed:
"Signedness of an integer type."
is_signed: bool
@typeannotation
class Storage:
"Number of bytes the binary representation of an integer type takes, e.g. 4 bytes for an int32."
bytes: int
@typeannotation
class IntegerRange:
"Minimum and maximum value of an integer. The range is inclusive."
minimum: int
maximum: int
@typeannotation
class Precision:
"Precision of a floating-point value."
significant_digits: int
decimal_digits: int = 0
@property
def integer_digits(self) -> int:
return self.significant_digits - self.decimal_digits
@typeannotation
class TimePrecision:
"""
Precision of a timestamp or time interval.
:param decimal_digits: Number of fractional digits retained in the sub-seconds field for a timestamp.
"""
decimal_digits: int = 0
@typeannotation
class Length:
"Exact length of a string."
value: int
@typeannotation
class MinLength:
"Minimum length of a string."
value: int
@typeannotation
class MaxLength:
"Maximum length of a string."
value: int
@typeannotation
class SpecialConversion:
"Indicates that the annotated type is subject to custom conversion rules."
int8: TypeAlias = Annotated[int, Signed(True), Storage(1), IntegerRange(-128, 127)]
int16: TypeAlias = Annotated[int, Signed(True), Storage(2), IntegerRange(-32768, 32767)]
int32: TypeAlias = Annotated[
int,
Signed(True),
Storage(4),
IntegerRange(-2147483648, 2147483647),
]
int64: TypeAlias = Annotated[
int,
Signed(True),
Storage(8),
IntegerRange(-9223372036854775808, 9223372036854775807),
]
uint8: TypeAlias = Annotated[int, Signed(False), Storage(1), IntegerRange(0, 255)]
uint16: TypeAlias = Annotated[int, Signed(False), Storage(2), IntegerRange(0, 65535)]
uint32: TypeAlias = Annotated[
int,
Signed(False),
Storage(4),
IntegerRange(0, 4294967295),
]
uint64: TypeAlias = Annotated[
int,
Signed(False),
Storage(8),
IntegerRange(0, 18446744073709551615),
]
float32: TypeAlias = Annotated[float, Storage(4)]
float64: TypeAlias = Annotated[float, Storage(8)]
# maps globals of type Annotated[T, ...] defined in this module to their string names
_auxiliary_types: dict[object, str] = {}
module = sys.modules[__name__]
for var in dir(module):
typ = getattr(module, var)
if getattr(typ, "__metadata__", None) is not None:
# type is Annotated[T, ...]
_auxiliary_types[typ] = var
def get_auxiliary_format(data_type: object) -> str | None:
"Returns the JSON format string corresponding to an auxiliary type."
return _auxiliary_types.get(data_type)
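For context, a quick sketch of the lookup this removed module provided (using the aliases defined above):

```python
# The module-level scan maps each Annotated alias to its format name.
assert get_auxiliary_format(int32) == "int32"
assert get_auxiliary_format(float64) == "float64"
assert get_auxiliary_format(str) is None  # plain types carry no format
```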


@@ -1,440 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
import copy
import dataclasses
import datetime
import decimal
import enum
import ipaddress
import math
import re
import sys
import types
import typing
import uuid
from dataclasses import dataclass
from typing import Any, Literal, TypeVar, Union
from .auxiliary import (
Alias,
Annotated,
MaxLength,
Precision,
float32,
float64,
int16,
int32,
int64,
)
from .core import JsonType, Schema
from .docstring import Docstring, DocstringParam
from .inspection import TypeLike
from .serialization import json_to_object, object_to_json
T = TypeVar("T")
@dataclass
class JsonSchemaNode:
title: str | None
description: str | None
@dataclass
class JsonSchemaType(JsonSchemaNode):
type: str
format: str | None
@dataclass
class JsonSchemaBoolean(JsonSchemaType):
type: Literal["boolean"]
const: bool | None
default: bool | None
examples: list[bool] | None
@dataclass
class JsonSchemaInteger(JsonSchemaType):
type: Literal["integer"]
const: int | None
default: int | None
examples: list[int] | None
enum: list[int] | None
minimum: int | None
maximum: int | None
@dataclass
class JsonSchemaNumber(JsonSchemaType):
type: Literal["number"]
const: float | None
default: float | None
examples: list[float] | None
minimum: float | None
maximum: float | None
exclusiveMinimum: float | None
exclusiveMaximum: float | None
multipleOf: float | None
@dataclass
class JsonSchemaString(JsonSchemaType):
type: Literal["string"]
const: str | None
default: str | None
examples: list[str] | None
enum: list[str] | None
minLength: int | None
maxLength: int | None
@dataclass
class JsonSchemaArray(JsonSchemaType):
type: Literal["array"]
items: "JsonSchemaAny"
@dataclass
class JsonSchemaObject(JsonSchemaType):
type: Literal["object"]
properties: dict[str, "JsonSchemaAny"] | None
additionalProperties: bool | None
required: list[str] | None
@dataclass
class JsonSchemaRef(JsonSchemaNode):
ref: Annotated[str, Alias("$ref")]
@dataclass
class JsonSchemaAllOf(JsonSchemaNode):
allOf: list["JsonSchemaAny"]
@dataclass
class JsonSchemaAnyOf(JsonSchemaNode):
anyOf: list["JsonSchemaAny"]
@dataclass
class Discriminator:
propertyName: str
mapping: dict[str, str]
@dataclass
class JsonSchemaOneOf(JsonSchemaNode):
oneOf: list["JsonSchemaAny"]
discriminator: Discriminator | None
JsonSchemaAny = Union[
JsonSchemaRef,
JsonSchemaBoolean,
JsonSchemaInteger,
JsonSchemaNumber,
JsonSchemaString,
JsonSchemaArray,
JsonSchemaObject,
JsonSchemaOneOf,
]
@dataclass
class JsonSchemaTopLevelObject(JsonSchemaObject):
schema: Annotated[str, Alias("$schema")]
definitions: dict[str, JsonSchemaAny] | None
def integer_range_to_type(min_value: float, max_value: float) -> type:
if min_value >= -(2**15) and max_value < 2**15:
return int16
elif min_value >= -(2**31) and max_value < 2**31:
return int32
else:
return int64
def enum_safe_name(name: str) -> str:
name = re.sub(r"\W", "_", name)
is_dunder = name.startswith("__")
is_sunder = name.startswith("_") and name.endswith("_")
if is_dunder or is_sunder: # provide an alternative for dunder and sunder names
name = f"v{name}"
return name
def enum_values_to_type(
module: types.ModuleType,
name: str,
values: dict[str, Any],
title: str | None = None,
description: str | None = None,
) -> type[enum.Enum]:
enum_class: type[enum.Enum] = enum.Enum(name, values) # type: ignore
# assign the newly created type to the same module where the defining class is
enum_class.__module__ = module.__name__
enum_class.__doc__ = str(Docstring(short_description=title, long_description=description))
setattr(module, name, enum_class)
return enum.unique(enum_class)
def schema_to_type(schema: Schema, *, module: types.ModuleType, class_name: str) -> TypeLike:
"""
Creates a Python type from a JSON schema.
:param schema: The JSON schema that the types would correspond to.
:param module: The module in which to create the new types.
:param class_name: The name assigned to the top-level class.
"""
top_node = typing.cast(JsonSchemaTopLevelObject, json_to_object(JsonSchemaTopLevelObject, schema))
if top_node.definitions is not None:
for type_name, type_node in top_node.definitions.items():
type_def = node_to_typedef(module, type_name, type_node)
if type_def.default is not dataclasses.MISSING:
raise TypeError("disallowed: `default` for top-level type definitions")
type_def.type.__module__ = module.__name__
setattr(module, type_name, type_def.type)
return node_to_typedef(module, class_name, top_node).type
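For context, a hedged sketch of what the removed `schema_to_type` produced; the schema below is illustrative:

```python
import sys

schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"],
    "additionalProperties": False,
}

# Materializes a dataclass named Person in the given module.
Person = schema_to_type(schema, module=sys.modules[__name__], class_name="Person")
print(Person(name="Ada"))  # Person(name='Ada')
```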
@dataclass
class TypeDef:
type: TypeLike
default: Any = dataclasses.MISSING
def json_to_value(target_type: TypeLike, data: JsonType) -> Any:
if data is not None:
return json_to_object(target_type, data)
else:
return dataclasses.MISSING
def node_to_typedef(module: types.ModuleType, context: str, node: JsonSchemaNode) -> TypeDef:
if isinstance(node, JsonSchemaRef):
match_obj = re.match(r"^#/definitions/(\w+)$", node.ref)
if not match_obj:
raise ValueError(f"invalid reference: {node.ref}")
type_name = match_obj.group(1)
return TypeDef(getattr(module, type_name), dataclasses.MISSING)
elif isinstance(node, JsonSchemaBoolean):
if node.const is not None:
return TypeDef(Literal[node.const], dataclasses.MISSING)
default = json_to_value(bool, node.default)
return TypeDef(bool, default)
elif isinstance(node, JsonSchemaInteger):
if node.const is not None:
return TypeDef(Literal[node.const], dataclasses.MISSING)
integer_type: TypeLike
if node.format == "int16":
integer_type = int16
elif node.format == "int32":
integer_type = int32
elif node.format == "int64":
integer_type = int64
else:
if node.enum is not None:
integer_type = integer_range_to_type(min(node.enum), max(node.enum))
elif node.minimum is not None and node.maximum is not None:
integer_type = integer_range_to_type(node.minimum, node.maximum)
else:
integer_type = int
default = json_to_value(integer_type, node.default)
return TypeDef(integer_type, default)
elif isinstance(node, JsonSchemaNumber):
if node.const is not None:
return TypeDef(Literal[node.const], dataclasses.MISSING)
number_type: TypeLike
if node.format == "float32":
number_type = float32
elif node.format == "float64":
number_type = float64
else:
if (
node.exclusiveMinimum is not None
and node.exclusiveMaximum is not None
and node.exclusiveMinimum == -node.exclusiveMaximum
):
integer_digits = round(math.log10(node.exclusiveMaximum))
else:
integer_digits = None
if node.multipleOf is not None:
decimal_digits = -round(math.log10(node.multipleOf))
else:
decimal_digits = None
if integer_digits is not None and decimal_digits is not None:
number_type = Annotated[
decimal.Decimal,
Precision(integer_digits + decimal_digits, decimal_digits),
]
else:
number_type = float
default = json_to_value(number_type, node.default)
return TypeDef(number_type, default)
elif isinstance(node, JsonSchemaString):
if node.const is not None:
return TypeDef(Literal[node.const], dataclasses.MISSING)
string_type: TypeLike
if node.format == "date-time":
string_type = datetime.datetime
elif node.format == "uuid":
string_type = uuid.UUID
elif node.format == "ipv4":
string_type = ipaddress.IPv4Address
elif node.format == "ipv6":
string_type = ipaddress.IPv6Address
elif node.enum is not None:
string_type = enum_values_to_type(
module,
context,
{enum_safe_name(e): e for e in node.enum},
title=node.title,
description=node.description,
)
elif node.maxLength is not None:
string_type = Annotated[str, MaxLength(node.maxLength)]
else:
string_type = str
default = json_to_value(string_type, node.default)
return TypeDef(string_type, default)
elif isinstance(node, JsonSchemaArray):
type_def = node_to_typedef(module, context, node.items)
if type_def.default is not dataclasses.MISSING:
raise TypeError("disallowed: `default` for array element type")
list_type = list[(type_def.type,)] # type: ignore
return TypeDef(list_type, dataclasses.MISSING)
elif isinstance(node, JsonSchemaObject):
if node.properties is None:
return TypeDef(JsonType, dataclasses.MISSING)
if node.additionalProperties is None or node.additionalProperties is not False:
raise TypeError("expected: `additionalProperties` equals `false`")
required = node.required if node.required is not None else []
class_name = context
fields: list[tuple[str, Any, dataclasses.Field]] = []
params: dict[str, DocstringParam] = {}
for prop_name, prop_node in node.properties.items():
type_def = node_to_typedef(module, f"{class_name}__{prop_name}", prop_node)
if prop_name in required:
prop_type = type_def.type
else:
prop_type = Union[(None, type_def.type)]
fields.append((prop_name, prop_type, dataclasses.field(default=type_def.default)))
prop_desc = prop_node.title or prop_node.description
if prop_desc is not None:
params[prop_name] = DocstringParam(prop_name, prop_desc)
fields.sort(key=lambda t: t[2].default is not dataclasses.MISSING)
if sys.version_info >= (3, 12):
class_type = dataclasses.make_dataclass(class_name, fields, module=module.__name__)
else:
class_type = dataclasses.make_dataclass(class_name, fields, namespace={"__module__": module.__name__})
class_type.__doc__ = str(
Docstring(
short_description=node.title,
long_description=node.description,
params=params,
)
)
setattr(module, class_name, class_type)
return TypeDef(class_type, dataclasses.MISSING)
elif isinstance(node, JsonSchemaOneOf):
union_defs = tuple(node_to_typedef(module, context, n) for n in node.oneOf)
if any(d.default is not dataclasses.MISSING for d in union_defs):
raise TypeError("disallowed: `default` for union member type")
union_types = tuple(d.type for d in union_defs)
return TypeDef(Union[union_types], dataclasses.MISSING)
raise NotImplementedError()
@dataclass
class SchemaFlatteningOptions:
qualified_names: bool = False
recursive: bool = False
def flatten_schema(schema: Schema, *, options: SchemaFlatteningOptions | None = None) -> Schema:
top_node = typing.cast(JsonSchemaTopLevelObject, json_to_object(JsonSchemaTopLevelObject, schema))
flattener = SchemaFlattener(options)
obj = flattener.flatten(top_node)
return typing.cast(Schema, object_to_json(obj))
class SchemaFlattener:
options: SchemaFlatteningOptions
def __init__(self, options: SchemaFlatteningOptions | None = None) -> None:
self.options = options or SchemaFlatteningOptions()
def flatten(self, source_node: JsonSchemaObject) -> JsonSchemaObject:
if source_node.type != "object":
return source_node
source_props = source_node.properties or {}
target_props: dict[str, JsonSchemaAny] = {}
source_reqs = source_node.required or []
target_reqs: list[str] = []
for name, prop in source_props.items():
if not isinstance(prop, JsonSchemaObject):
target_props[name] = prop
if name in source_reqs:
target_reqs.append(name)
continue
if self.options.recursive:
obj = self.flatten(prop)
else:
obj = prop
if obj.properties is not None:
if self.options.qualified_names:
target_props.update((f"{name}.{n}", p) for n, p in obj.properties.items())
else:
target_props.update(obj.properties.items())
if obj.required is not None:
if self.options.qualified_names:
target_reqs.extend(f"{name}.{n}" for n in obj.required)
else:
target_reqs.extend(obj.required)
target_node = copy.copy(source_node)
target_node.properties = target_props or None
target_node.additionalProperties = False
target_node.required = target_reqs or None
return target_node
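For context, a hedged sketch of the removed `flatten_schema` helper hoisting nested object properties to the top level; the schema is illustrative:

```python
nested = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "additionalProperties": False,
    "properties": {
        "inner": {
            "type": "object",
            "additionalProperties": False,
            "properties": {"x": {"type": "integer"}},
            "required": ["x"],
        }
    },
    "required": ["inner"],
}

flat = flatten_schema(nested)
assert "x" in flat["properties"]  # inner properties hoisted to the top level
```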


@@ -1,46 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
from typing import Union
class JsonObject:
"Placeholder type for an unrestricted JSON object."
class JsonArray:
"Placeholder type for an unrestricted JSON array."
# a JSON type with possible `null` values
JsonType = Union[
None,
bool,
int,
float,
str,
dict[str, "JsonType"],
list["JsonType"],
]
# a JSON type that cannot contain `null` values
StrictJsonType = Union[
bool,
int,
float,
str,
dict[str, "StrictJsonType"],
list["StrictJsonType"],
]
# a meta-type that captures the object type in a JSON schema
Schema = dict[str, JsonType]


@@ -1,872 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
import abc
import base64
import dataclasses
import datetime
import enum
import inspect
import ipaddress
import sys
import typing
import uuid
from collections.abc import Callable
from types import ModuleType
from typing import (
Any,
Generic,
Literal,
NamedTuple,
Optional,
TypeVar,
Union,
)
from .core import JsonType
from .exception import JsonKeyError, JsonTypeError, JsonValueError
from .inspection import (
TypeLike,
create_object,
enum_value_types,
evaluate_type,
get_class_properties,
get_class_property,
get_resolved_hints,
is_dataclass_instance,
is_dataclass_type,
is_named_tuple_type,
is_type_annotated,
is_type_literal,
is_type_optional,
unwrap_annotated_type,
unwrap_literal_values,
unwrap_optional_type,
)
from .mapping import python_field_to_json_property
from .name import python_type_to_str
E = TypeVar("E", bound=enum.Enum)
T = TypeVar("T")
R = TypeVar("R")
K = TypeVar("K")
V = TypeVar("V")
class Deserializer(abc.ABC, Generic[T]):
"Parses a JSON value into a Python type."
def build(self, context: ModuleType | None) -> None:
"""
Creates auxiliary parsers that this parser is depending on.
:param context: A module context for evaluating types specified as a string.
"""
@abc.abstractmethod
def parse(self, data: JsonType) -> T:
"""
Parses a JSON value into a Python type.
:param data: The JSON value to de-serialize.
:returns: The Python object that the JSON value de-serializes to.
"""
class NoneDeserializer(Deserializer[None]):
"Parses JSON `null` values into Python `None`."
def parse(self, data: JsonType) -> None:
if data is not None:
raise JsonTypeError(f"`None` type expects JSON `null` but instead received: {data}")
return None
class BoolDeserializer(Deserializer[bool]):
"Parses JSON `boolean` values into Python `bool` type."
def parse(self, data: JsonType) -> bool:
if not isinstance(data, bool):
raise JsonTypeError(f"`bool` type expects JSON `boolean` data but instead received: {data}")
return bool(data)
class IntDeserializer(Deserializer[int]):
"Parses JSON `number` values into Python `int` type."
def parse(self, data: JsonType) -> int:
if not isinstance(data, int):
raise JsonTypeError(f"`int` type expects integer data as JSON `number` but instead received: {data}")
return int(data)
class FloatDeserializer(Deserializer[float]):
"Parses JSON `number` values into Python `float` type."
def parse(self, data: JsonType) -> float:
if not isinstance(data, float) and not isinstance(data, int):
raise JsonTypeError(f"`int` type expects data as JSON `number` but instead received: {data}")
return float(data)
class StringDeserializer(Deserializer[str]):
"Parses JSON `string` values into Python `str` type."
def parse(self, data: JsonType) -> str:
if not isinstance(data, str):
raise JsonTypeError(f"`str` type expects JSON `string` data but instead received: {data}")
return str(data)
class BytesDeserializer(Deserializer[bytes]):
"Parses JSON `string` values of Base64-encoded strings into Python `bytes` type."
def parse(self, data: JsonType) -> bytes:
if not isinstance(data, str):
raise JsonTypeError(f"`bytes` type expects JSON `string` data but instead received: {data}")
return base64.b64decode(data, validate=True)
class DateTimeDeserializer(Deserializer[datetime.datetime]):
"Parses JSON `string` values representing timestamps in ISO 8601 format to Python `datetime` with time zone."
def parse(self, data: JsonType) -> datetime.datetime:
if not isinstance(data, str):
raise JsonTypeError(f"`datetime` type expects JSON `string` data but instead received: {data}")
if data.endswith("Z"):
data = f"{data[:-1]}+00:00" # Python's isoformat() does not support military time zones like "Zulu" for UTC
timestamp = datetime.datetime.fromisoformat(data)
if timestamp.tzinfo is None:
raise JsonValueError(f"timestamp lacks explicit time zone designator: {data}")
return timestamp
class DateDeserializer(Deserializer[datetime.date]):
"Parses JSON `string` values representing dates in ISO 8601 format to Python `date` type."
def parse(self, data: JsonType) -> datetime.date:
if not isinstance(data, str):
raise JsonTypeError(f"`date` type expects JSON `string` data but instead received: {data}")
return datetime.date.fromisoformat(data)
class TimeDeserializer(Deserializer[datetime.time]):
"Parses JSON `string` values representing time instances in ISO 8601 format to Python `time` type with time zone."
def parse(self, data: JsonType) -> datetime.time:
if not isinstance(data, str):
raise JsonTypeError(f"`time` type expects JSON `string` data but instead received: {data}")
return datetime.time.fromisoformat(data)
class UUIDDeserializer(Deserializer[uuid.UUID]):
"Parses JSON `string` values of UUID strings into Python `uuid.UUID` type."
def parse(self, data: JsonType) -> uuid.UUID:
if not isinstance(data, str):
raise JsonTypeError(f"`UUID` type expects JSON `string` data but instead received: {data}")
return uuid.UUID(data)
class IPv4Deserializer(Deserializer[ipaddress.IPv4Address]):
"Parses JSON `string` values of IPv4 address strings into Python `ipaddress.IPv4Address` type."
def parse(self, data: JsonType) -> ipaddress.IPv4Address:
if not isinstance(data, str):
raise JsonTypeError(f"`IPv4Address` type expects JSON `string` data but instead received: {data}")
return ipaddress.IPv4Address(data)
class IPv6Deserializer(Deserializer[ipaddress.IPv6Address]):
"Parses JSON `string` values of IPv6 address strings into Python `ipaddress.IPv6Address` type."
def parse(self, data: JsonType) -> ipaddress.IPv6Address:
if not isinstance(data, str):
raise JsonTypeError(f"`IPv6Address` type expects JSON `string` data but instead received: {data}")
return ipaddress.IPv6Address(data)
class ListDeserializer(Deserializer[list[T]]):
"Recursively de-serializes a JSON array into a Python `list`."
item_type: type[T]
item_parser: Deserializer
def __init__(self, item_type: type[T]) -> None:
self.item_type = item_type
def build(self, context: ModuleType | None) -> None:
self.item_parser = _get_deserializer(self.item_type, context)
def parse(self, data: JsonType) -> list[T]:
if not isinstance(data, list):
type_name = python_type_to_str(self.item_type)
raise JsonTypeError(f"type `List[{type_name}]` expects JSON `array` data but instead received: {data}")
return [self.item_parser.parse(item) for item in data]
class DictDeserializer(Deserializer[dict[K, V]]):
"Recursively de-serializes a JSON object into a Python `dict`."
key_type: type[K]
value_type: type[V]
value_parser: Deserializer[V]
def __init__(self, key_type: type[K], value_type: type[V]) -> None:
self.key_type = key_type
self.value_type = value_type
self._check_key_type()
def build(self, context: ModuleType | None) -> None:
self.value_parser = _get_deserializer(self.value_type, context)
def _check_key_type(self) -> None:
if self.key_type is str:
return
if issubclass(self.key_type, enum.Enum):
value_types = enum_value_types(self.key_type)
if len(value_types) != 1:
raise JsonTypeError(
f"type `{self.container_type}` has invalid key type, "
f"enumerations must have a consistent member value type but several types found: {value_types}"
)
value_type = value_types.pop()
if value_type is not str:
f"`type `{self.container_type}` has invalid enumeration key type, expected `enum.Enum` with string values"
return
raise JsonTypeError(
f"`type `{self.container_type}` has invalid key type, expected `str` or `enum.Enum` with string values"
)
@property
def container_type(self) -> str:
key_type_name = python_type_to_str(self.key_type)
value_type_name = python_type_to_str(self.value_type)
return f"Dict[{key_type_name}, {value_type_name}]"
def parse(self, data: JsonType) -> dict[K, V]:
if not isinstance(data, dict):
raise JsonTypeError(
f"`type `{self.container_type}` expects JSON `object` data but instead received: {data}"
)
return dict(
(self.key_type(key), self.value_parser.parse(value)) # type: ignore[call-arg]
for key, value in data.items()
)
class SetDeserializer(Deserializer[set[T]]):
"Recursively de-serializes a JSON list into a Python `set`."
member_type: type[T]
member_parser: Deserializer
def __init__(self, member_type: type[T]) -> None:
self.member_type = member_type
def build(self, context: ModuleType | None) -> None:
self.member_parser = _get_deserializer(self.member_type, context)
def parse(self, data: JsonType) -> set[T]:
if not isinstance(data, list):
type_name = python_type_to_str(self.member_type)
raise JsonTypeError(f"type `Set[{type_name}]` expects JSON `array` data but instead received: {data}")
return set(self.member_parser.parse(item) for item in data)
class TupleDeserializer(Deserializer[tuple[Any, ...]]):
"Recursively de-serializes a JSON list into a Python `tuple`."
item_types: tuple[type[Any], ...]
item_parsers: tuple[Deserializer[Any], ...]
def __init__(self, item_types: tuple[type[Any], ...]) -> None:
self.item_types = item_types
def build(self, context: ModuleType | None) -> None:
self.item_parsers = tuple(_get_deserializer(item_type, context) for item_type in self.item_types)
@property
def container_type(self) -> str:
type_names = ", ".join(python_type_to_str(item_type) for item_type in self.item_types)
return f"Tuple[{type_names}]"
def parse(self, data: JsonType) -> tuple[Any, ...]:
if not isinstance(data, list) or len(data) != len(self.item_parsers):
if not isinstance(data, list):
raise JsonTypeError(
f"type `{self.container_type}` expects JSON `array` data but instead received: {data}"
)
else:
count = len(self.item_parsers)
raise JsonValueError(
f"type `{self.container_type}` expects a JSON `array` of length {count} but received length {len(data)}"
)
return tuple(item_parser.parse(item) for item_parser, item in zip(self.item_parsers, data, strict=False))
class UnionDeserializer(Deserializer):
"De-serializes a JSON value (of any type) into a Python union type."
member_types: tuple[type, ...]
member_parsers: tuple[Deserializer, ...]
def __init__(self, member_types: tuple[type, ...]) -> None:
self.member_types = member_types
def build(self, context: ModuleType | None) -> None:
self.member_parsers = tuple(_get_deserializer(member_type, context) for member_type in self.member_types)
def parse(self, data: JsonType) -> Any:
for member_parser in self.member_parsers:
# iterate over potential types of discriminated union
try:
return member_parser.parse(data)
except (JsonKeyError, JsonTypeError):
# indicates a required field is missing from JSON dict -OR- the data cannot be cast to the expected type,
# i.e. we don't have the type that we are looking for
continue
type_names = ", ".join(python_type_to_str(member_type) for member_type in self.member_types)
raise JsonKeyError(f"type `Union[{type_names}]` could not be instantiated from: {data}")
def get_literal_properties(typ: type) -> set[str]:
"Returns the names of all properties in a class that are of a literal type."
return set(
property_name for property_name, property_type in get_class_properties(typ) if is_type_literal(property_type)
)
def get_discriminating_properties(types: tuple[type, ...]) -> set[str]:
"Returns a set of properties with literal type that are common across all specified classes."
if not types or not all(isinstance(typ, type) for typ in types):
return set()
props = get_literal_properties(types[0])
for typ in types[1:]:
props = props & get_literal_properties(typ)
return props
class TaggedUnionDeserializer(Deserializer):
"De-serializes a JSON value with one or more disambiguating properties into a Python union type."
member_types: tuple[type, ...]
disambiguating_properties: set[str]
member_parsers: dict[tuple[str, Any], Deserializer]
def __init__(self, member_types: tuple[type, ...]) -> None:
self.member_types = member_types
self.disambiguating_properties = get_discriminating_properties(member_types)
def build(self, context: ModuleType | None) -> None:
self.member_parsers = {}
for member_type in self.member_types:
for property_name in self.disambiguating_properties:
literal_type = get_class_property(member_type, property_name)
if not literal_type:
continue
for literal_value in unwrap_literal_values(literal_type):
tpl = (property_name, literal_value)
if tpl in self.member_parsers:
raise JsonTypeError(
f"disambiguating property `{property_name}` in type `{self.union_type}` has a duplicate value: {literal_value}"
)
self.member_parsers[tpl] = _get_deserializer(member_type, context)
@property
def union_type(self) -> str:
type_names = ", ".join(python_type_to_str(member_type) for member_type in self.member_types)
return f"Union[{type_names}]"
def parse(self, data: JsonType) -> Any:
if not isinstance(data, dict):
raise JsonTypeError(
f"tagged union type `{self.union_type}` expects JSON `object` data but instead received: {data}"
)
for property_name in self.disambiguating_properties:
disambiguating_value = data.get(property_name)
if disambiguating_value is None:
continue
member_parser = self.member_parsers.get((property_name, disambiguating_value))
if member_parser is None:
raise JsonTypeError(
f"disambiguating property value is invalid for tagged union type `{self.union_type}`: {data}"
)
return member_parser.parse(data)
raise JsonTypeError(
f"disambiguating property value is missing for tagged union type `{self.union_type}`: {data}"
)
class LiteralDeserializer(Deserializer):
"De-serializes a JSON value into a Python literal type."
values: tuple[Any, ...]
parser: Deserializer
def __init__(self, values: tuple[Any, ...]) -> None:
self.values = values
def build(self, context: ModuleType | None) -> None:
literal_type_tuple = tuple(type(value) for value in self.values)
literal_type_set = set(literal_type_tuple)
if len(literal_type_set) != 1:
value_names = ", ".join(repr(value) for value in self.values)
raise TypeError(
f"type `Literal[{value_names}]` expects consistent literal value types but got: {literal_type_tuple}"
)
literal_type = literal_type_set.pop()
self.parser = _get_deserializer(literal_type, context)
def parse(self, data: JsonType) -> Any:
value = self.parser.parse(data)
if value not in self.values:
value_names = ", ".join(repr(value) for value in self.values)
raise JsonTypeError(f"type `Literal[{value_names}]` could not be instantiated from: {data}")
return value
class EnumDeserializer(Deserializer[E]):
"Returns an enumeration instance based on the enumeration value read from a JSON value."
enum_type: type[E]
def __init__(self, enum_type: type[E]) -> None:
self.enum_type = enum_type
def parse(self, data: JsonType) -> E:
return self.enum_type(data)
class CustomDeserializer(Deserializer[T]):
"Uses the `from_json` class method in class to de-serialize the object from JSON."
converter: Callable[[JsonType], T]
def __init__(self, converter: Callable[[JsonType], T]) -> None:
self.converter = converter
def parse(self, data: JsonType) -> T:
return self.converter(data)
class FieldDeserializer(abc.ABC, Generic[T, R]):
"""
Deserializes a JSON property into a Python object field.
:param property_name: The name of the JSON property to read from a JSON `object`.
:param field_name: The name of the field in a Python class to write data to.
:param parser: A compatible deserializer that can handle the field's type.
"""
property_name: str
field_name: str
parser: Deserializer[T]
def __init__(self, property_name: str, field_name: str, parser: Deserializer[T]) -> None:
self.property_name = property_name
self.field_name = field_name
self.parser = parser
@abc.abstractmethod
def parse_field(self, data: dict[str, JsonType]) -> R: ...
class RequiredFieldDeserializer(FieldDeserializer[T, T]):
"Deserializes a JSON property into a mandatory Python object field."
def parse_field(self, data: dict[str, JsonType]) -> T:
if self.property_name not in data:
raise JsonKeyError(f"missing required property `{self.property_name}` from JSON object: {data}")
return self.parser.parse(data[self.property_name])
class OptionalFieldDeserializer(FieldDeserializer[T, Optional[T]]):
"Deserializes a JSON property into an optional Python object field with a default value of `None`."
def parse_field(self, data: dict[str, JsonType]) -> T | None:
value = data.get(self.property_name)
if value is not None:
return self.parser.parse(value)
else:
return None
class DefaultFieldDeserializer(FieldDeserializer[T, T]):
"Deserializes a JSON property into a Python object field with an explicit default value."
default_value: T
def __init__(
self,
property_name: str,
field_name: str,
parser: Deserializer,
default_value: T,
) -> None:
super().__init__(property_name, field_name, parser)
self.default_value = default_value
def parse_field(self, data: dict[str, JsonType]) -> T:
value = data.get(self.property_name)
if value is not None:
return self.parser.parse(value)
else:
return self.default_value
class DefaultFactoryFieldDeserializer(FieldDeserializer[T, T]):
"Deserializes a JSON property into an optional Python object field with an explicit default value factory."
default_factory: Callable[[], T]
def __init__(
self,
property_name: str,
field_name: str,
parser: Deserializer[T],
default_factory: Callable[[], T],
) -> None:
super().__init__(property_name, field_name, parser)
self.default_factory = default_factory
def parse_field(self, data: dict[str, JsonType]) -> T:
value = data.get(self.property_name)
if value is not None:
return self.parser.parse(value)
else:
return self.default_factory()
class ClassDeserializer(Deserializer[T]):
"Base class for de-serializing class-like types such as data classes, named tuples and regular classes."
class_type: type
property_parsers: list[FieldDeserializer]
property_fields: set[str]
def __init__(self, class_type: type[T]) -> None:
self.class_type = class_type
def assign(self, property_parsers: list[FieldDeserializer]) -> None:
self.property_parsers = property_parsers
self.property_fields = set(property_parser.property_name for property_parser in property_parsers)
def parse(self, data: JsonType) -> T:
if not isinstance(data, dict):
type_name = python_type_to_str(self.class_type)
raise JsonTypeError(f"`type `{type_name}` expects JSON `object` data but instead received: {data}")
object_data: dict[str, JsonType] = typing.cast(dict[str, JsonType], data)
field_values = {}
for property_parser in self.property_parsers:
field_values[property_parser.field_name] = property_parser.parse_field(object_data)
if not self.property_fields.issuperset(object_data):
unassigned_names = [name for name in object_data if name not in self.property_fields]
raise JsonKeyError(f"unrecognized fields in JSON object: {unassigned_names}")
return self.create(**field_values)
def create(self, **field_values: Any) -> T:
"Instantiates an object with a collection of property values."
obj: T = create_object(self.class_type)
# use `setattr` on newly created object instance
for field_name, field_value in field_values.items():
setattr(obj, field_name, field_value)
return obj
class NamedTupleDeserializer(ClassDeserializer[NamedTuple]):
"De-serializes a named tuple from a JSON `object`."
def build(self, context: ModuleType | None) -> None:
property_parsers: list[FieldDeserializer] = [
RequiredFieldDeserializer(field_name, field_name, _get_deserializer(field_type, context))
for field_name, field_type in get_resolved_hints(self.class_type).items()
]
super().assign(property_parsers)
def create(self, **field_values: Any) -> NamedTuple:
# mypy fails to deduce that this class returns NamedTuples only, hence the `ignore` directive
return self.class_type(**field_values) # type: ignore[no-any-return]
class DataclassDeserializer(ClassDeserializer[T]):
"De-serializes a data class from a JSON `object`."
def __init__(self, class_type: type[T]) -> None:
if not dataclasses.is_dataclass(class_type):
raise TypeError("expected: data-class type")
super().__init__(class_type) # type: ignore[arg-type]
def build(self, context: ModuleType | None) -> None:
property_parsers: list[FieldDeserializer] = []
resolved_hints = get_resolved_hints(self.class_type)
for field in dataclasses.fields(self.class_type):
field_type = resolved_hints[field.name]
property_name = python_field_to_json_property(field.name, field_type)
is_optional = is_type_optional(field_type)
has_default = field.default is not dataclasses.MISSING
has_default_factory = field.default_factory is not dataclasses.MISSING
if is_optional:
required_type: type[T] = unwrap_optional_type(field_type)
else:
required_type = field_type
parser = _get_deserializer(required_type, context)
if has_default:
field_parser: FieldDeserializer = DefaultFieldDeserializer(
property_name, field.name, parser, field.default
)
elif has_default_factory:
default_factory = typing.cast(Callable[[], Any], field.default_factory)
field_parser = DefaultFactoryFieldDeserializer(property_name, field.name, parser, default_factory)
elif is_optional:
field_parser = OptionalFieldDeserializer(property_name, field.name, parser)
else:
field_parser = RequiredFieldDeserializer(property_name, field.name, parser)
property_parsers.append(field_parser)
super().assign(property_parsers)
class FrozenDataclassDeserializer(DataclassDeserializer[T]):
"De-serializes a frozen data class from a JSON `object`."
def create(self, **field_values: Any) -> T:
"Instantiates an object with a collection of property values."
# create object instance without calling `__init__`
obj: T = create_object(self.class_type)
# can't use `setattr` on frozen dataclasses, pass member variable values to `__init__`
obj.__init__(**field_values) # type: ignore
return obj
class TypedClassDeserializer(ClassDeserializer[T]):
"De-serializes a class with type annotations from a JSON `object` by iterating over class properties."
def build(self, context: ModuleType | None) -> None:
property_parsers: list[FieldDeserializer] = []
for field_name, field_type in get_resolved_hints(self.class_type).items():
property_name = python_field_to_json_property(field_name, field_type)
is_optional = is_type_optional(field_type)
if is_optional:
required_type: type[T] = unwrap_optional_type(field_type)
else:
required_type = field_type
parser = _get_deserializer(required_type, context)
if is_optional:
field_parser: FieldDeserializer = OptionalFieldDeserializer(property_name, field_name, parser)
else:
field_parser = RequiredFieldDeserializer(property_name, field_name, parser)
property_parsers.append(field_parser)
super().assign(property_parsers)
def create_deserializer(typ: TypeLike, context: ModuleType | None = None) -> Deserializer:
"""
Creates a de-serializer engine to produce a Python object from an object obtained from a JSON string.
When de-serializing a JSON object into a Python object, the following transformations are applied:
* Fundamental types are parsed as `bool`, `int`, `float` or `str`.
* Date and time types are parsed from the ISO 8601 format with time zone into the corresponding Python type
`datetime`, `date` or `time`.
* Byte arrays are read from a string with Base64 encoding into a `bytes` instance.
* UUIDs are extracted from a UUID string compliant with RFC 4122 into a `uuid.UUID` instance.
* Enumerations are instantiated with a lookup on enumeration value.
* Containers (e.g. `list`, `dict`, `set`, `tuple`) are parsed recursively.
* Complex objects with properties (including data class types) are populated from dictionaries of key-value pairs
using reflection (enumerating type annotations).
:raises TypeError: A de-serializer engine cannot be constructed for the input type.
"""
if context is None:
if isinstance(typ, type):
context = sys.modules[typ.__module__]
return _get_deserializer(typ, context)
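For context, a hedged sketch of the removed deserialization entry point applied to a simple dataclass (`Point` is a hypothetical type):

```python
from dataclasses import dataclass


@dataclass
class Point:  # hypothetical type for illustration
    x: int
    y: int


# Builds a DataclassDeserializer with required-field parsers for x and y.
parser = create_deserializer(Point)
assert parser.parse({"x": 1, "y": 2}) == Point(x=1, y=2)
```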
_CACHE: dict[tuple[str, str], Deserializer] = {}
def _get_deserializer(typ: TypeLike, context: ModuleType | None) -> Deserializer:
"Creates or re-uses a de-serializer engine to parse an object obtained from a JSON string."
cache_key = None
if isinstance(typ, (str, typing.ForwardRef)):
if context is None:
raise TypeError(f"missing context for evaluating type: {typ}")
if isinstance(typ, str):
if hasattr(context, typ):
cache_key = (context.__name__, typ)
elif isinstance(typ, typing.ForwardRef):
if hasattr(context, typ.__forward_arg__):
cache_key = (context.__name__, typ.__forward_arg__)
typ = evaluate_type(typ, context)
typ = unwrap_annotated_type(typ) if is_type_annotated(typ) else typ
if isinstance(typ, type) and typing.get_origin(typ) is None:
cache_key = (typ.__module__, typ.__name__)
if cache_key is not None:
deserializer = _CACHE.get(cache_key)
if deserializer is None:
deserializer = _create_deserializer(typ)
# store de-serializer immediately in cache to avoid stack overflow for recursive types
_CACHE[cache_key] = deserializer
if isinstance(typ, type):
# use type's own module as context for evaluating member types
context = sys.modules[typ.__module__]
# create any de-serializers this de-serializer is depending on
deserializer.build(context)
else:
# special forms are not always hashable, create a new de-serializer every time
deserializer = _create_deserializer(typ)
deserializer.build(context)
return deserializer
def _create_deserializer(typ: TypeLike) -> Deserializer:
"Creates a de-serializer engine to parse an object obtained from a JSON string."
# check for well-known types
if typ is type(None):
return NoneDeserializer()
elif typ is bool:
return BoolDeserializer()
elif typ is int:
return IntDeserializer()
elif typ is float:
return FloatDeserializer()
elif typ is str:
return StringDeserializer()
elif typ is bytes:
return BytesDeserializer()
elif typ is datetime.datetime:
return DateTimeDeserializer()
elif typ is datetime.date:
return DateDeserializer()
elif typ is datetime.time:
return TimeDeserializer()
elif typ is uuid.UUID:
return UUIDDeserializer()
elif typ is ipaddress.IPv4Address:
return IPv4Deserializer()
elif typ is ipaddress.IPv6Address:
return IPv6Deserializer()
# dynamically-typed collection types
if typ is list:
raise TypeError("explicit item type required: use `List[T]` instead of `list`")
if typ is dict:
raise TypeError("explicit key and value types required: use `Dict[K, V]` instead of `dict`")
if typ is set:
raise TypeError("explicit member type required: use `Set[T]` instead of `set`")
if typ is tuple:
raise TypeError("explicit item type list required: use `Tuple[T, ...]` instead of `tuple`")
# generic types (e.g. list, dict, set, etc.)
origin_type = typing.get_origin(typ)
if origin_type is list:
(list_item_type,) = typing.get_args(typ) # unpack single tuple element
return ListDeserializer(list_item_type)
elif origin_type is dict:
key_type, value_type = typing.get_args(typ)
return DictDeserializer(key_type, value_type)
elif origin_type is set:
(set_member_type,) = typing.get_args(typ) # unpack single tuple element
return SetDeserializer(set_member_type)
elif origin_type is tuple:
return TupleDeserializer(typing.get_args(typ))
elif origin_type is Union:
union_args = typing.get_args(typ)
if get_discriminating_properties(union_args):
return TaggedUnionDeserializer(union_args)
else:
return UnionDeserializer(union_args)
elif origin_type is Literal:
return LiteralDeserializer(typing.get_args(typ))
if not inspect.isclass(typ):
if is_dataclass_instance(typ):
raise TypeError(f"dataclass type expected but got instance: {typ}")
else:
raise TypeError(f"unable to de-serialize unrecognized type: {typ}")
if issubclass(typ, enum.Enum):
return EnumDeserializer(typ)
if is_named_tuple_type(typ):
return NamedTupleDeserializer(typ)
# check if object has custom serialization method
convert_func = getattr(typ, "from_json", None)
if callable(convert_func):
return CustomDeserializer(convert_func)
if is_dataclass_type(typ):
dataclass_params = getattr(typ, "__dataclass_params__", None)
if dataclass_params is not None and dataclass_params.frozen:
return FrozenDataclassDeserializer(typ)
else:
return DataclassDeserializer(typ)
return TypedClassDeserializer(typ)


@@ -1,410 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
import builtins
import collections.abc
import dataclasses
import inspect
import re
import sys
import types
import typing
from collections.abc import Callable
from dataclasses import dataclass
from io import StringIO
from typing import Any, Protocol, TypeVar
if sys.version_info >= (3, 10):
from typing import TypeGuard
else:
from typing import TypeGuard
from .inspection import (
DataclassInstance,
get_class_properties,
get_signature,
is_dataclass_type,
is_type_enum,
)
T = TypeVar("T")
@dataclass
class DocstringParam:
"""
A parameter declaration in a parameter block.
:param name: The name of the parameter.
:param description: The description text for the parameter.
"""
name: str
description: str
param_type: type | str = inspect.Signature.empty
def __str__(self) -> str:
return f":param {self.name}: {self.description}"
@dataclass
class DocstringReturns:
"""
A `returns` declaration extracted from a docstring.
:param description: The description text for the return value.
"""
description: str
return_type: type = inspect.Signature.empty
def __str__(self) -> str:
return f":returns: {self.description}"
@dataclass
class DocstringRaises:
"""
A `raises` declaration extracted from a docstring.
:param typename: The type name of the exception raised.
:param description: The description associated with the exception raised.
"""
typename: str
description: str
raise_type: type = inspect.Signature.empty
def __str__(self) -> str:
return f":raises {self.typename}: {self.description}"
@dataclass
class Docstring:
"""
Represents the documentation string (a.k.a. docstring) for a type such as a (data) class or function.
A docstring is broken down into the following components:
* A short description, which is the first block of text in the documentation string, and ends with a double
newline or a parameter block.
* A long description, which is the optional block of text following the short description, and ends with
a parameter block.
* A parameter block of named parameter and description string pairs in ReST-style.
* A `returns` declaration, which adds explanation to the return value.
* A `raises` declaration, which adds explanation to the exception type raised by the function on error.
When the docstring is attached to a data class, it is understood as the documentation string of the class
`__init__` method.
:param short_description: The short description text parsed from a docstring.
:param long_description: The long description text parsed from a docstring.
:param params: The parameter block extracted from a docstring.
:param returns: The returns declaration extracted from a docstring.
"""
short_description: str | None = None
long_description: str | None = None
params: dict[str, DocstringParam] = dataclasses.field(default_factory=dict)
returns: DocstringReturns | None = None
raises: dict[str, DocstringRaises] = dataclasses.field(default_factory=dict)
@property
def full_description(self) -> str | None:
if self.short_description and self.long_description:
return f"{self.short_description}\n\n{self.long_description}"
elif self.short_description:
return self.short_description
else:
return None
def __str__(self) -> str:
output = StringIO()
has_description = self.short_description or self.long_description
has_blocks = self.params or self.returns or self.raises
if has_description:
if self.short_description and self.long_description:
output.write(self.short_description)
output.write("\n\n")
output.write(self.long_description)
elif self.short_description:
output.write(self.short_description)
if has_blocks:
if has_description:
output.write("\n")
for param in self.params.values():
output.write("\n")
output.write(str(param))
if self.returns:
output.write("\n")
output.write(str(self.returns))
for raises in self.raises.values():
output.write("\n")
output.write(str(raises))
s = output.getvalue()
output.close()
return s
def is_exception(member: object) -> TypeGuard[type[BaseException]]:
return isinstance(member, type) and issubclass(member, BaseException)
def get_exceptions(module: types.ModuleType) -> dict[str, type[BaseException]]:
"Returns all exception classes declared in a module."
return {name: class_type for name, class_type in inspect.getmembers(module, is_exception)}
class SupportsDoc(Protocol):
__doc__: str | None
def _maybe_unwrap_async_iterator(t):
origin_type = typing.get_origin(t)
if origin_type is collections.abc.AsyncIterator:
return typing.get_args(t)[0]
return t
def parse_type(typ: SupportsDoc) -> Docstring:
"""
Parse the docstring of a type into its components.
:param typ: The type whose documentation string to parse.
:returns: Components of the documentation string.
"""
# Use docstring from the iterator origin type for streaming apis
typ = _maybe_unwrap_async_iterator(typ)
doc = get_docstring(typ)
if doc is None:
return Docstring()
docstring = parse_text(doc)
check_docstring(typ, docstring)
# assign parameter and return types
if is_dataclass_type(typ):
properties = dict(get_class_properties(typing.cast(type, typ)))
for name, param in docstring.params.items():
param.param_type = properties[name]
elif inspect.isfunction(typ):
signature = get_signature(typ)
for name, param in docstring.params.items():
param.param_type = signature.parameters[name].annotation
if docstring.returns:
docstring.returns.return_type = signature.return_annotation
# assign exception types
defining_module = inspect.getmodule(typ)
if defining_module:
context: dict[str, type] = {}
context.update(get_exceptions(builtins))
context.update(get_exceptions(defining_module))
for exc_name, exc in docstring.raises.items():
raise_type = context.get(exc_name)
if raise_type is None:
type_name = getattr(typ, "__qualname__", None) or getattr(typ, "__name__", None) or None
raise TypeError(
f"doc-string exception type `{exc_name}` is not an exception defined in the context of `{type_name}`"
)
exc.raise_type = raise_type
return docstring
def parse_text(text: str) -> Docstring:
"""
Parse a ReST-style docstring into its components.
:param text: The documentation string to parse, typically acquired as `type.__doc__`.
:returns: Components of the documentation string.
"""
if not text:
return Docstring()
# find the start of the metadata block (e.g. `:param p:` or `:returns:`)
text = inspect.cleandoc(text)
match = re.search("^:", text, flags=re.MULTILINE)
if match:
desc_chunk = text[: match.start()]
meta_chunk = text[match.start() :] # noqa: E203
else:
desc_chunk = text
meta_chunk = ""
# split description text into short and long description
parts = desc_chunk.split("\n\n", 1)
# ensure short description has no newlines
short_description = parts[0].strip().replace("\n", " ") or None
# ensure long description preserves its structure (e.g. preformatted text)
if len(parts) > 1:
long_description = parts[1].strip() or None
else:
long_description = None
params: dict[str, DocstringParam] = {}
raises: dict[str, DocstringRaises] = {}
returns = None
for match in re.finditer(r"(^:.*?)(?=^:|\Z)", meta_chunk, flags=re.DOTALL | re.MULTILINE):
chunk = match.group(0)
if not chunk:
continue
args_chunk, desc_chunk = chunk.lstrip(":").split(":", 1)
args = args_chunk.split()
desc = re.sub(r"\s+", " ", desc_chunk.strip())
if len(args) > 0:
kw = args[0]
if len(args) == 2:
if kw == "param":
params[args[1]] = DocstringParam(
name=args[1],
description=desc,
)
elif kw == "raise" or kw == "raises":
raises[args[1]] = DocstringRaises(
typename=args[1],
description=desc,
)
elif len(args) == 1:
if kw == "return" or kw == "returns":
returns = DocstringReturns(description=desc)
return Docstring(
long_description=long_description,
short_description=short_description,
params=params,
returns=returns,
raises=raises,
)
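For a sense of what this produces, here is a minimal sketch (not part of the removed file; the docstring content is invented) of `parse_text` decomposing a typical ReST-style docstring, assuming the definitions above are in scope:

```
doc = """
Frobnicate a widget.

Extended description spanning
multiple lines.

:param widget: The widget to frobnicate.
:param count: How many times to frobnicate.
:returns: The frobnicated widget.
:raises ValueError: If count is negative.
"""

parsed = parse_text(doc)
assert parsed.short_description == "Frobnicate a widget."
assert set(parsed.params) == {"widget", "count"}
assert parsed.returns is not None
assert "ValueError" in parsed.raises
```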
def has_default_docstring(typ: SupportsDoc) -> bool:
"Check if class has the auto-generated string assigned by @dataclass."
if not isinstance(typ, type):
return False
if is_dataclass_type(typ):
return typ.__doc__ is not None and re.match(f"^{re.escape(typ.__name__)}[(].*[)]$", typ.__doc__) is not None
if is_type_enum(typ):
return typ.__doc__ is not None and typ.__doc__ == "An enumeration."
return False
def has_docstring(typ: SupportsDoc) -> bool:
"Check if class has a documentation string other than the auto-generated string assigned by @dataclass."
if has_default_docstring(typ):
return False
return bool(typ.__doc__)
def get_docstring(typ: SupportsDoc) -> str | None:
if typ.__doc__ is None:
return None
if has_default_docstring(typ):
return None
return typ.__doc__
def check_docstring(typ: SupportsDoc, docstring: Docstring, strict: bool = False) -> None:
"""
Verifies the doc-string of a type.
:raises TypeError: Raised on a mismatch between doc-string parameters, and function or type signature.
"""
if is_dataclass_type(typ):
check_dataclass_docstring(typ, docstring, strict)
elif inspect.isfunction(typ):
check_function_docstring(typ, docstring, strict)
def check_dataclass_docstring(typ: type[DataclassInstance], docstring: Docstring, strict: bool = False) -> None:
"""
Verifies the doc-string of a data-class type.
:param strict: Whether to check if all data-class members have doc-strings.
:raises TypeError: Raised on a mismatch between doc-string parameters and data-class members.
"""
if not is_dataclass_type(typ):
raise TypeError("not a data-class type")
properties = dict(get_class_properties(typ))
class_name = typ.__name__
for name in docstring.params:
if name not in properties:
raise TypeError(f"doc-string parameter `{name}` is not a member of the data-class `{class_name}`")
if not strict:
return
for name in properties:
if name not in docstring.params:
raise TypeError(f"member `{name}` in data-class `{class_name}` is missing its doc-string")
def check_function_docstring(fn: Callable[..., Any], docstring: Docstring, strict: bool = False) -> None:
"""
Verifies the doc-string of a function or member function.
:param strict: Whether to check if all function parameters and the return type have doc-strings.
:raises TypeError: Raised on a mismatch between doc-string parameters and function signature.
"""
signature = get_signature(fn)
func_name = fn.__qualname__
for name in docstring.params:
if name not in signature.parameters:
raise TypeError(f"doc-string parameter `{name}` is absent from signature of function `{func_name}`")
if docstring.returns is not None and signature.return_annotation is inspect.Signature.empty:
raise TypeError(f"doc-string has returns description in function `{func_name}` with no return type annotation")
if not strict:
return
for name, param in signature.parameters.items():
# ignore `self` in member function signatures
if name == "self" and (
param.kind is inspect.Parameter.POSITIONAL_ONLY or param.kind is inspect.Parameter.POSITIONAL_OR_KEYWORD
):
continue
if name not in docstring.params:
raise TypeError(f"function parameter `{name}` in `{func_name}` is missing its doc-string")
if signature.return_annotation is not inspect.Signature.empty and docstring.returns is None:
raise TypeError(f"function `{func_name}` has no returns description in its doc-string")
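To show how `parse_type` ties these pieces together, a hedged sketch with a hypothetical `scale` function: the parser resolves `param_type` and `return_type` from the signature, and `raise_type` from the builtins/module context.

```
def scale(value: float, factor: float = 2.0) -> float:
    """
    Scale a value by a factor.

    :param value: The input value.
    :param factor: The multiplier.
    :returns: The scaled value.
    :raises ValueError: If factor is not finite.
    """
    if factor != factor:  # NaN guard, purely illustrative
        raise ValueError("factor is not finite")
    return value * factor

doc = parse_type(scale)
assert doc.params["value"].param_type is float
assert doc.returns is not None and doc.returns.return_type is float
assert doc.raises["ValueError"].raise_type is ValueError
```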


@@ -1,23 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
class JsonKeyError(Exception):
"Raised when deserialization for a class or union type has failed because a matching member was not found."
class JsonValueError(Exception):
"Raised when (de)serialization of data has failed due to invalid value."
class JsonTypeError(Exception):
"Raised when deserialization of data has failed due to a type mismatch."

File diff suppressed because it is too large


@@ -1,39 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
import keyword
from .auxiliary import Alias
from .inspection import get_annotation
def python_field_to_json_property(python_id: str, python_type: object | None = None) -> str:
"""
Map a Python field identifier to a JSON property name.
Authors may use an underscore appended at the end of a Python identifier as per PEP 8 if it clashes with a Python
keyword: e.g. `in` would become `in_` and `from` would become `from_`. Remove these suffixes when exporting to JSON.
Authors may supply an explicit alias with the type annotation `Alias`, e.g. `Annotated[MyType, Alias("alias")]`.
"""
if python_type is not None:
alias = get_annotation(python_type, Alias)
if alias:
return alias.name
if python_id.endswith("_"):
id = python_id[:-1]
if keyword.iskeyword(id):
return id
return python_id
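A quick sketch of the mapping behaviour (illustrative, assuming `Alias` from the package's auxiliary module is in scope):

```
from typing import Annotated

assert python_field_to_json_property("model_id") == "model_id"
assert python_field_to_json_property("from_") == "from"        # keyword escape stripped
assert python_field_to_json_property("schema_") == "schema_"   # "schema" is not a keyword
assert python_field_to_json_property("x", Annotated[str, Alias("y")]) == "y"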


@@ -1,188 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
import typing
from typing import Any, Literal, Union
from .auxiliary import _auxiliary_types
from .inspection import (
TypeLike,
is_generic_dict,
is_generic_list,
is_generic_sequence,
is_type_optional,
is_type_union,
unwrap_generic_dict,
unwrap_generic_list,
unwrap_generic_sequence,
unwrap_optional_type,
unwrap_union_types,
)
class TypeFormatter:
"""
Type formatter.
:param use_union_operator: Whether to emit union types as `X | Y` as per PEP 604.
"""
use_union_operator: bool
def __init__(self, use_union_operator: bool = False) -> None:
self.use_union_operator = use_union_operator
def union_to_str(self, data_type_args: tuple[TypeLike, ...]) -> str:
if self.use_union_operator:
return " | ".join(self.python_type_to_str(t) for t in data_type_args)
else:
if len(data_type_args) == 2 and type(None) in data_type_args:
# Optional[T] is represented as Union[T, None]
origin_name = "Optional"
data_type_args = tuple(t for t in data_type_args if t is not type(None))
else:
origin_name = "Union"
args = ", ".join(self.python_type_to_str(t) for t in data_type_args)
return f"{origin_name}[{args}]"
def plain_type_to_str(self, data_type: TypeLike) -> str:
"Returns the string representation of a Python type without metadata."
# return forward references as the annotation string
if isinstance(data_type, typing.ForwardRef):
fwd: typing.ForwardRef = data_type
return fwd.__forward_arg__
elif isinstance(data_type, str):
return data_type
origin = typing.get_origin(data_type)
if origin is not None:
data_type_args = typing.get_args(data_type)
if origin is dict: # Dict[T]
origin_name = "Dict"
elif origin is list: # List[T]
origin_name = "List"
elif origin is set: # Set[T]
origin_name = "Set"
elif origin is Union:
return self.union_to_str(data_type_args)
elif origin is Literal:
args = ", ".join(repr(arg) for arg in data_type_args)
return f"Literal[{args}]"
else:
origin_name = origin.__name__
args = ", ".join(self.python_type_to_str(t) for t in data_type_args)
return f"{origin_name}[{args}]"
return data_type.__name__
def python_type_to_str(self, data_type: TypeLike) -> str:
"Returns the string representation of a Python type."
if data_type is type(None):
return "None"
# use compact name for alias types
name = _auxiliary_types.get(data_type)
if name is not None:
return name
metadata = getattr(data_type, "__metadata__", None)
if metadata is not None:
# type is Annotated[T, ...]
metatuple: tuple[Any, ...] = metadata
arg = typing.get_args(data_type)[0]
# check for auxiliary types with user-defined annotations
metaset = set(metatuple)
for auxiliary_type, auxiliary_name in _auxiliary_types.items():
auxiliary_arg = typing.get_args(auxiliary_type)[0]
if arg is not auxiliary_arg:
continue
auxiliary_metatuple: tuple[Any, ...] | None = getattr(auxiliary_type, "__metadata__", None)
if auxiliary_metatuple is None:
continue
if metaset.issuperset(auxiliary_metatuple):
# type is an auxiliary type with extra annotations
auxiliary_args = ", ".join(repr(m) for m in metatuple if m not in auxiliary_metatuple)
return f"Annotated[{auxiliary_name}, {auxiliary_args}]"
# type is an annotated type
args = ", ".join(repr(m) for m in metatuple)
return f"Annotated[{self.plain_type_to_str(arg)}, {args}]"
else:
# type is a regular type
return self.plain_type_to_str(data_type)
def python_type_to_str(data_type: TypeLike, use_union_operator: bool = False) -> str:
"""
Returns the string representation of a Python type.
:param use_union_operator: Whether to emit union types as `X | Y` as per PEP 604.
"""
fmt = TypeFormatter(use_union_operator)
return fmt.python_type_to_str(data_type)
def python_type_to_name(data_type: TypeLike, force: bool = False) -> str:
"""
Returns the short name of a Python type.
:param force: Whether to produce a name for composite types such as generics.
"""
# use compact name for alias types
name = _auxiliary_types.get(data_type)
if name is not None:
return name
# unwrap annotated types
metadata = getattr(data_type, "__metadata__", None)
if metadata is not None:
# type is Annotated[T, ...]
arg = typing.get_args(data_type)[0]
return python_type_to_name(arg, force=force)
if force:
# generic types
if is_type_optional(data_type, strict=True):
inner_name = python_type_to_name(unwrap_optional_type(data_type), force=True)
return f"Optional__{inner_name}"
elif is_generic_list(data_type):
item_name = python_type_to_name(unwrap_generic_list(data_type), force=True)
return f"List__{item_name}"
elif is_generic_sequence(data_type):
# Treat Sequence the same as List for schema generation purposes
item_name = python_type_to_name(unwrap_generic_sequence(data_type), force=True)
return f"List__{item_name}"
elif is_generic_dict(data_type):
key_type, value_type = unwrap_generic_dict(data_type)
key_name = python_type_to_name(key_type, force=True)
value_name = python_type_to_name(value_type, force=True)
return f"Dict__{key_name}__{value_name}"
elif is_type_union(data_type):
member_types = unwrap_union_types(data_type)
member_names = "__".join(python_type_to_name(member_type, force=True) for member_type in member_types)
return f"Union__{member_names}"
# named system or user-defined type
if hasattr(data_type, "__name__") and not typing.get_args(data_type):
return data_type.__name__
raise TypeError(f"cannot assign a simple name to type: {data_type}")
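A short sketch of the two helpers on a few composite types (exact outputs assume none of these types are registered as auxiliary alias types):

```
from typing import Optional, Union

assert python_type_to_str(Optional[int]) == "Optional[int]"
assert python_type_to_str(Union[int, str], use_union_operator=True) == "int | str"
assert python_type_to_name(list[str], force=True) == "List__str"
assert python_type_to_name(dict[str, int], force=True) == "Dict__str__int"
```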


@@ -1,791 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
import collections.abc
import dataclasses
import datetime
import decimal
import enum
import functools
import inspect
import json
import types
import typing
import uuid
from collections.abc import Callable
from copy import deepcopy
from typing import (
Annotated,
Any,
ClassVar,
Literal,
TypeVar,
Union,
overload,
)
import jsonschema
from . import docstring
from .auxiliary import (
Alias,
IntegerRange,
MaxLength,
MinLength,
Precision,
get_auxiliary_format,
)
from .core import JsonArray, JsonObject, JsonType, Schema, StrictJsonType
from .inspection import (
TypeLike,
enum_value_types,
get_annotation,
get_class_properties,
is_type_enum,
is_type_like,
is_type_optional,
unwrap_optional_type,
)
from .name import python_type_to_name
from .serialization import object_to_json
# determines the maximum number of distinct enum members up to which a Dict[EnumType, Any] is converted into a JSON
# schema with explicitly listed properties (rather than employing a pattern constraint on property names)
OBJECT_ENUM_EXPANSION_LIMIT = 4
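To make the cutoff concrete, a hypothetical sketch of the two shapes a `Dict[Color, str]` schema can take, where `Color` is an invented enum: explicit properties at or below the limit, a property-name pattern above it.

```
# At or below OBJECT_ENUM_EXPANSION_LIMIT: members become explicit properties.
small = {
    "type": "object",
    "properties": {
        "red": {"type": "string"},
        "green": {"type": "string"},
        "blue": {"type": "string"},
    },
    "additionalProperties": False,
}

# Above the limit: property names are constrained by a pattern instead.
large = {
    "type": "object",
    "propertyNames": {"pattern": "^(red|green|blue|cyan|magenta)$"},
    "additionalProperties": {"type": "string"},
}
```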
T = TypeVar("T")
def get_class_docstrings(data_type: type) -> tuple[str | None, str | None]:
docstr = docstring.parse_type(data_type)
# check if class has a doc-string other than the auto-generated string assigned by @dataclass
if docstring.has_default_docstring(data_type):
return None, None
return docstr.short_description, docstr.long_description
def get_class_property_docstrings(
data_type: type, transform_fun: Callable[[type, str, str], str] | None = None
) -> dict[str, str]:
"""
Extracts the documentation strings associated with the properties of a composite type.
:param data_type: The object whose properties to iterate over.
:param transform_fun: An optional function that maps a property documentation string to a custom tailored string.
:returns: A dictionary mapping property names to descriptions.
"""
result: dict[str, str] = {}
# Only try to get MRO if data_type is actually a class
# Special types like Literal, Union, etc. don't have MRO
if not inspect.isclass(data_type):
return result
for base in inspect.getmro(data_type):
docstr = docstring.parse_type(base)
for param in docstr.params.values():
if param.name in result:
continue
if transform_fun:
description = transform_fun(data_type, param.name, param.description)
else:
description = param.description
result[param.name] = description
return result
def docstring_to_schema(data_type: type) -> Schema:
short_description, long_description = get_class_docstrings(data_type)
schema: Schema = {
"title": python_type_to_name(data_type, force=True),
}
description = "\n".join(filter(None, [short_description, long_description]))
if description:
schema["description"] = description
return schema
def id_from_ref(data_type: typing.ForwardRef | str | type) -> str:
"Extracts the name of a possibly forward-referenced type."
if isinstance(data_type, typing.ForwardRef):
forward_type: typing.ForwardRef = data_type
return forward_type.__forward_arg__
elif isinstance(data_type, str):
return data_type
else:
return data_type.__name__
def type_from_ref(data_type: typing.ForwardRef | str | type) -> tuple[str, type]:
"Creates a type from a forward reference."
if isinstance(data_type, typing.ForwardRef):
forward_type: typing.ForwardRef = data_type
true_type = eval(forward_type.__forward_code__)
return forward_type.__forward_arg__, true_type
elif isinstance(data_type, str):
true_type = eval(data_type)
return data_type, true_type
else:
return data_type.__name__, data_type
@dataclasses.dataclass
class TypeCatalogEntry:
schema: Schema | None
identifier: str
examples: JsonType | None = None
class TypeCatalog:
"Maintains an association of well-known Python types to their JSON schema."
_by_type: dict[TypeLike, TypeCatalogEntry]
_by_name: dict[str, TypeCatalogEntry]
def __init__(self) -> None:
self._by_type = {}
self._by_name = {}
def __contains__(self, data_type: TypeLike) -> bool:
if isinstance(data_type, typing.ForwardRef):
fwd: typing.ForwardRef = data_type
name = fwd.__forward_arg__
return name in self._by_name
else:
return data_type in self._by_type
def add(
self,
data_type: TypeLike,
schema: Schema | None,
identifier: str,
examples: list[JsonType] | None = None,
) -> None:
if isinstance(data_type, typing.ForwardRef):
raise TypeError("forward references cannot be used to register a type")
if data_type in self._by_type:
raise ValueError(f"type {data_type} is already registered in the catalog")
entry = TypeCatalogEntry(schema, identifier, examples)
self._by_type[data_type] = entry
self._by_name[identifier] = entry
def get(self, data_type: TypeLike) -> TypeCatalogEntry:
if isinstance(data_type, typing.ForwardRef):
fwd: typing.ForwardRef = data_type
name = fwd.__forward_arg__
return self._by_name[name]
else:
return self._by_type[data_type]
@dataclasses.dataclass
class SchemaOptions:
definitions_path: str = "#/definitions/"
use_descriptions: bool = True
use_examples: bool = True
property_description_fun: Callable[[type, str, str], str] | None = None
class JsonSchemaGenerator:
"Creates a JSON schema with user-defined type definitions."
type_catalog: ClassVar[TypeCatalog] = TypeCatalog()
types_used: dict[str, TypeLike]
options: SchemaOptions
def __init__(self, options: SchemaOptions | None = None):
if options is None:
self.options = SchemaOptions()
else:
self.options = options
self.types_used = {}
@functools.singledispatchmethod
def _metadata_to_schema(self, arg: object) -> Schema:
# unrecognized annotation
return {}
@_metadata_to_schema.register
def _(self, arg: IntegerRange) -> Schema:
return {"minimum": arg.minimum, "maximum": arg.maximum}
@_metadata_to_schema.register
def _(self, arg: Precision) -> Schema:
return {
"multipleOf": 10 ** (-arg.decimal_digits),
"exclusiveMinimum": -(10**arg.integer_digits),
"exclusiveMaximum": (10**arg.integer_digits),
}
@_metadata_to_schema.register
def _(self, arg: MinLength) -> Schema:
return {"minLength": arg.value}
@_metadata_to_schema.register
def _(self, arg: MaxLength) -> Schema:
return {"maxLength": arg.value}
def _with_metadata(self, type_schema: Schema, metadata: tuple[Any, ...] | None) -> Schema:
if metadata:
for m in metadata:
type_schema.update(self._metadata_to_schema(m))
return type_schema
def _simple_type_to_schema(self, typ: TypeLike, json_schema_extra: dict | None = None) -> Schema | None:
"""
Returns the JSON schema associated with a simple, unrestricted type.
:returns: The schema for a simple type, or `None`.
"""
if typ is type(None):
return {"type": "null"}
elif typ is bool:
return {"type": "boolean"}
elif typ is int:
return {"type": "integer"}
elif typ is float:
return {"type": "number"}
elif typ is str:
if json_schema_extra and "contentEncoding" in json_schema_extra:
return {
"type": "string",
"contentEncoding": json_schema_extra["contentEncoding"],
}
return {"type": "string"}
elif typ is bytes:
return {"type": "string", "contentEncoding": "base64"}
elif typ is datetime.datetime:
# 2018-11-13T20:20:39+00:00
return {
"type": "string",
"format": "date-time",
}
elif typ is datetime.date:
# 2018-11-13
return {"type": "string", "format": "date"}
elif typ is datetime.time:
# 20:20:39+00:00
return {"type": "string", "format": "time"}
elif typ is decimal.Decimal:
return {"type": "number"}
elif typ is uuid.UUID:
# f81d4fae-7dec-11d0-a765-00a0c91e6bf6
return {"type": "string", "format": "uuid"}
elif typ is Any:
return {
"oneOf": [
{"type": "null"},
{"type": "boolean"},
{"type": "number"},
{"type": "string"},
{"type": "array"},
{"type": "object"},
]
}
elif typ is JsonObject:
return {"type": "object"}
elif typ is JsonArray:
return {"type": "array"}
else:
# not a simple type
return None
def type_to_schema(
self,
data_type: TypeLike,
force_expand: bool = False,
json_schema_extra: dict | None = None,
) -> Schema:
common_info = {}
if json_schema_extra and "deprecated" in json_schema_extra:
common_info["deprecated"] = json_schema_extra["deprecated"]
return self._type_to_schema(data_type, force_expand, json_schema_extra) | common_info
def _type_to_schema(
self,
data_type: TypeLike,
force_expand: bool = False,
json_schema_extra: dict | None = None,
) -> Schema:
"""
Returns the JSON schema associated with a type.
:param data_type: The Python type whose JSON schema to return.
:param force_expand: Forces a JSON schema to be returned even if the type is registered in the catalog of known types.
:returns: The JSON schema associated with the type.
"""
# short-circuit for common simple types
schema = self._simple_type_to_schema(data_type, json_schema_extra)
if schema is not None:
return schema
# types registered in the type catalog of well-known types
type_catalog = JsonSchemaGenerator.type_catalog
if not force_expand and data_type in type_catalog:
# user-defined type
identifier = type_catalog.get(data_type).identifier
self.types_used.setdefault(identifier, data_type)
return {"$ref": f"{self.options.definitions_path}{identifier}"}
# unwrap annotated types
metadata = getattr(data_type, "__metadata__", None)
if metadata is not None:
# type is Annotated[T, ...]
typ = typing.get_args(data_type)[0]
schema = self._simple_type_to_schema(typ)
if schema is not None:
# recognize well-known auxiliary types
fmt = get_auxiliary_format(data_type)
if fmt is not None:
schema.update({"format": fmt})
return schema
else:
return self._with_metadata(schema, metadata)
else:
# type is a regular type
typ = data_type
if isinstance(typ, typing.ForwardRef) or isinstance(typ, str):
if force_expand:
identifier, true_type = type_from_ref(typ)
return self.type_to_schema(true_type, force_expand=True)
else:
try:
identifier, true_type = type_from_ref(typ)
self.types_used[identifier] = true_type
except NameError:
identifier = id_from_ref(typ)
return {"$ref": f"{self.options.definitions_path}{identifier}"}
if is_type_enum(typ):
enum_type: type[enum.Enum] = typ
value_types = enum_value_types(enum_type)
if len(value_types) != 1:
raise ValueError(
f"enumerations must have a consistent member value type but several types found: {value_types}"
)
enum_value_type = value_types.pop()
enum_schema: Schema
if enum_value_type is bool or enum_value_type is int or enum_value_type is float or enum_value_type is str:
if enum_value_type is bool:
enum_schema_type = "boolean"
elif enum_value_type is int:
enum_schema_type = "integer"
elif enum_value_type is float:
enum_schema_type = "number"
elif enum_value_type is str:
enum_schema_type = "string"
enum_schema = {
"type": enum_schema_type,
"enum": [object_to_json(e.value) for e in enum_type],
}
if self.options.use_descriptions:
enum_schema.update(docstring_to_schema(typ))
return enum_schema
else:
enum_schema = self.type_to_schema(enum_value_type)
if self.options.use_descriptions:
enum_schema.update(docstring_to_schema(typ))
return enum_schema
origin_type = typing.get_origin(typ)
if origin_type is list:
(list_type,) = typing.get_args(typ) # unpack single tuple element
return {"type": "array", "items": self.type_to_schema(list_type)}
elif origin_type is collections.abc.Sequence:
# Treat Sequence the same as list for JSON schema (both are arrays)
(sequence_type,) = typing.get_args(typ) # unpack single tuple element
return {"type": "array", "items": self.type_to_schema(sequence_type)}
elif origin_type is dict:
key_type, value_type = typing.get_args(typ)
if not (key_type is str or key_type is int or is_type_enum(key_type)):
raise ValueError("`dict` with key type not coercible to `str` is not supported")
dict_schema: Schema
value_schema = self.type_to_schema(value_type)
if is_type_enum(key_type):
enum_values = [str(e.value) for e in key_type]
if len(enum_values) > OBJECT_ENUM_EXPANSION_LIMIT:
dict_schema = {
"propertyNames": {"pattern": "^(" + "|".join(enum_values) + ")$"},
"additionalProperties": value_schema,
}
else:
dict_schema = {
"properties": dict.fromkeys(enum_values, value_schema),
"additionalProperties": False,
}
else:
dict_schema = {"additionalProperties": value_schema}
schema = {"type": "object"}
schema.update(dict_schema)
return schema
elif origin_type is set:
(set_type,) = typing.get_args(typ) # unpack single tuple element
return {
"type": "array",
"items": self.type_to_schema(set_type),
"uniqueItems": True,
}
elif origin_type is tuple:
args = typing.get_args(typ)
return {
"type": "array",
"minItems": len(args),
"maxItems": len(args),
"prefixItems": [self.type_to_schema(member_type) for member_type in args],
}
elif origin_type in (Union, types.UnionType):
discriminator = None
if typing.get_origin(data_type) is Annotated:
discriminator = typing.get_args(data_type)[1].discriminator
ret: Schema = {"oneOf": [self.type_to_schema(union_type) for union_type in typing.get_args(typ)]}
if discriminator:
# for each union type, we need to read the value of the discriminator
mapping: dict[str, JsonType] = {}
for union_type in typing.get_args(typ):
props = self.type_to_schema(union_type, force_expand=True)["properties"]
# mypy is confused here because JsonType allows multiple types, some of them
# not indexable (bool?) or not indexable by string (list?). The correctness of
# types depends on correct model definitions. Hence multiple ignore statements below.
discriminator_value = props[discriminator]["default"] # type: ignore[index,call-overload]
mapping[discriminator_value] = self.type_to_schema(union_type)["$ref"] # type: ignore[index]
ret["discriminator"] = {
"propertyName": discriminator,
"mapping": mapping,
}
return ret
elif origin_type is Literal:
literal_args = typing.get_args(typ)
if len(literal_args) == 1:
(literal_value,) = literal_args
schema = self.type_to_schema(type(literal_value))
schema["const"] = literal_value
return schema
elif len(literal_args) > 1:
first_value = literal_args[0]
schema = self.type_to_schema(type(first_value))
schema["enum"] = list(literal_args)
return schema
else:
return {"enum": []}
elif origin_type is type:
(concrete_type,) = typing.get_args(typ) # unpack single tuple element
return {"const": self.type_to_schema(concrete_type, force_expand=True)}
elif origin_type is collections.abc.AsyncIterator:
(concrete_type,) = typing.get_args(typ)
return self.type_to_schema(concrete_type)
# dictionary of class attributes
members = dict(inspect.getmembers(typ, lambda a: not inspect.isroutine(a)))
property_docstrings = get_class_property_docstrings(typ, self.options.property_description_fun)
properties: dict[str, Schema] = {}
required: list[str] = []
for property_name, property_type in get_class_properties(typ):
# rename property if an alias name is specified
alias = get_annotation(property_type, Alias)
if alias:
output_name = alias.name
else:
output_name = property_name
defaults = {}
json_schema_extra = None
if "model_fields" in members:
f = members["model_fields"]
defaults = {k: finfo.default for k, finfo in f.items()}
if output_name in f:
finfo = f[output_name]
json_schema_extra = finfo.json_schema_extra or {}
if finfo.deprecated:
json_schema_extra["deprecated"] = True
if is_type_optional(property_type):
optional_type: type = unwrap_optional_type(property_type)
property_def = self.type_to_schema(optional_type, json_schema_extra=json_schema_extra)
else:
property_def = self.type_to_schema(property_type, json_schema_extra=json_schema_extra)
required.append(output_name)
# check if attribute has a default value initializer
if defaults.get(property_name) is not None:
def_value = defaults[property_name]
# check if value can be directly represented in JSON
if isinstance(
def_value,
(
bool,
int,
float,
str,
enum.Enum,
datetime.datetime,
datetime.date,
datetime.time,
),
):
property_def["default"] = object_to_json(def_value)
# add property docstring if available
property_doc = property_docstrings.get(property_name)
if property_doc:
# print(output_name, property_doc)
property_def.pop("title", None)
property_def["description"] = property_doc
properties[output_name] = property_def
schema = {"type": "object"}
if len(properties) > 0:
schema["properties"] = typing.cast(JsonType, properties)
schema["additionalProperties"] = False
if len(required) > 0:
schema["required"] = typing.cast(JsonType, required)
if self.options.use_descriptions:
schema.update(docstring_to_schema(typ))
return schema
def _type_to_schema_with_lookup(self, data_type: TypeLike) -> Schema:
"""
Returns the JSON schema associated with a type that may be registered in the catalog of known types.
:param data_type: The type whose JSON schema we seek.
:returns: The JSON schema associated with the type.
"""
entry = JsonSchemaGenerator.type_catalog.get(data_type)
if entry.schema is None:
type_schema = self.type_to_schema(data_type, force_expand=True)
else:
type_schema = deepcopy(entry.schema)
# add descriptive text (if present)
if self.options.use_descriptions:
if isinstance(data_type, type) and not isinstance(data_type, typing.ForwardRef):
type_schema.update(docstring_to_schema(data_type))
# add example (if present)
if self.options.use_examples and entry.examples:
type_schema["examples"] = entry.examples
return type_schema
def classdef_to_schema(self, data_type: TypeLike, force_expand: bool = False) -> tuple[Schema, dict[str, Schema]]:
"""
Returns the JSON schema associated with a type and any nested types.
:param data_type: The type whose JSON schema to return.
:param force_expand: True if a full JSON schema is to be returned even for well-known types; false if a schema
reference is to be used for well-known types.
:returns: A tuple of the JSON schema, and a mapping between nested type names and their corresponding schema.
"""
if not is_type_like(data_type):
raise TypeError(f"expected a type-like object but got: {data_type}")
self.types_used = {}
try:
type_schema = self.type_to_schema(data_type, force_expand=force_expand)
types_defined: dict[str, Schema] = {}
while len(self.types_used) > len(types_defined):
# make a snapshot copy; original collection is going to be modified
types_undefined = {
sub_name: sub_type
for sub_name, sub_type in self.types_used.items()
if sub_name not in types_defined
}
# expand undefined types, which may lead to additional types to be defined
for sub_name, sub_type in types_undefined.items():
types_defined[sub_name] = self._type_to_schema_with_lookup(sub_type)
type_definitions = dict(sorted(types_defined.items()))
finally:
self.types_used = {}
return type_schema, type_definitions
class Validator(enum.Enum):
"Defines constants for JSON schema standards."
Draft7 = jsonschema.Draft7Validator
Draft201909 = jsonschema.Draft201909Validator
Draft202012 = jsonschema.Draft202012Validator
Latest = jsonschema.Draft202012Validator
def classdef_to_schema(
data_type: TypeLike,
options: SchemaOptions | None = None,
validator: Validator = Validator.Latest,
) -> Schema:
"""
Returns the JSON schema corresponding to the given type.
:param data_type: The Python type used to generate the JSON schema
:returns: A JSON object that you can serialize to a JSON string with json.dump or json.dumps
:raises TypeError: Indicates that the generated JSON schema does not validate against the desired meta-schema.
"""
# short-circuit with an error message when passing invalid data
if not is_type_like(data_type):
raise TypeError(f"expected a type-like object but got: {data_type}")
generator = JsonSchemaGenerator(options)
type_schema, type_definitions = generator.classdef_to_schema(data_type)
class_schema: Schema = {}
if type_definitions:
class_schema["definitions"] = typing.cast(JsonType, type_definitions)
class_schema.update(type_schema)
validator_id = validator.value.META_SCHEMA["$id"]
try:
validator.value.check_schema(class_schema)
except jsonschema.exceptions.SchemaError:
raise TypeError(f"schema does not validate against meta-schema <{validator_id}>")
schema = {"$schema": validator_id}
schema.update(class_schema)
return schema
def validate_object(data_type: TypeLike, json_dict: JsonType) -> None:
"""
Validates if the JSON dictionary object conforms to the expected type.
:param data_type: The type to match against.
:param json_dict: A JSON object obtained with `json.load` or `json.loads`.
:raises jsonschema.exceptions.ValidationError: Indicates that the JSON object cannot represent the type.
"""
schema_dict = classdef_to_schema(data_type)
jsonschema.validate(json_dict, schema_dict, format_checker=jsonschema.FormatChecker())
def print_schema(data_type: type) -> None:
"""Pretty-prints the JSON schema corresponding to the type."""
s = classdef_to_schema(data_type)
print(json.dumps(s, indent=4))
def get_schema_identifier(data_type: type) -> str | None:
if data_type in JsonSchemaGenerator.type_catalog:
return JsonSchemaGenerator.type_catalog.get(data_type).identifier
else:
return None
def register_schema(
data_type: T,
schema: Schema | None = None,
name: str | None = None,
examples: list[JsonType] | None = None,
) -> T:
"""
Associates a type with a JSON schema definition.
:param data_type: The type to associate with a JSON schema.
:param schema: The schema to associate the type with. Derived automatically if omitted.
:param name: The name used for looking up the type. Determined automatically if omitted.
:returns: The input type.
"""
JsonSchemaGenerator.type_catalog.add(
data_type,
schema,
name if name is not None else python_type_to_name(data_type),
examples,
)
return data_type
@overload
def json_schema_type(cls: type[T], /) -> type[T]: ...
@overload
def json_schema_type(cls: None, *, schema: Schema | None = None) -> Callable[[type[T]], type[T]]: ...
def json_schema_type(
cls: type[T] | None = None,
*,
schema: Schema | None = None,
examples: list[JsonType] | None = None,
) -> type[T] | Callable[[type[T]], type[T]]:
"""Decorator to add user-defined schema definition to a class."""
def wrap(cls: type[T]) -> type[T]:
return register_schema(cls, schema, examples=examples)
# see if decorator is used as @json_schema_type or @json_schema_type()
if cls is None:
# called with parentheses
return wrap
else:
# called as @json_schema_type without parentheses
return wrap(cls)
register_schema(JsonObject, name="JsonObject")
register_schema(JsonArray, name="JsonArray")
register_schema(
JsonType,
name="JsonType",
examples=[
{
"property1": None,
"property2": True,
"property3": 64,
"property4": "string",
"property5": ["item"],
"property6": {"key": "value"},
}
],
)
register_schema(
StrictJsonType,
name="StrictJsonType",
examples=[
{
"property1": True,
"property2": 64,
"property3": "string",
"property4": ["item"],
"property5": {"key": "value"},
}
],
)
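End to end, the module boils down to the following sketch (illustrative; `Point` is a hypothetical dataclass and assumes the definitions above are importable): generate a self-validating schema, then check an instance against it.

```
import dataclasses

@dataclasses.dataclass
class Point:
    """
    A 2D point.

    :param x: Horizontal coordinate.
    :param y: Vertical coordinate.
    """

    x: float
    y: float

schema = classdef_to_schema(Point)
assert schema["$schema"] == "https://json-schema.org/draft/2020-12/schema"
validate_object(Point, {"x": 1.0, "y": 2.0})  # raises ValidationError on mismatch
```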


@@ -1,97 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
import inspect
import json
import sys
from types import ModuleType
from typing import Any, TextIO, TypeVar
from .core import JsonType
from .deserializer import create_deserializer
from .inspection import TypeLike
from .serializer import create_serializer
T = TypeVar("T")
def object_to_json(obj: Any) -> JsonType:
"""
Converts a Python object to a representation that can be exported to JSON.
* Fundamental types (e.g. numeric types) are written as is.
* Date and time types are serialized in the ISO 8601 format with time zone.
* A byte array is written as a string with Base64 encoding.
* UUIDs are written as a UUID string.
* Enumerations are written as their value.
* Containers (e.g. `list`, `dict`, `set`, `tuple`) are exported recursively.
* Objects with properties (including data class types) are converted to dictionaries of key-value pairs.
"""
typ: type = type(obj)
generator = create_serializer(typ)
return generator.generate(obj)
def json_to_object(typ: TypeLike, data: JsonType, *, context: ModuleType | None = None) -> object:
"""
Creates an object from a representation that has been de-serialized from JSON.
When de-serializing a JSON object into a Python object, the following transformations are applied:
* Fundamental types are parsed as `bool`, `int`, `float` or `str`.
* Date and time types are parsed from the ISO 8601 format with time zone into the corresponding Python type
`datetime`, `date` or `time`
* A byte array is read from a string with Base64 encoding into a `bytes` instance.
* UUIDs are extracted from a UUID string into a `uuid.UUID` instance.
* Enumerations are instantiated with a lookup on enumeration value.
* Containers (e.g. `list`, `dict`, `set`, `tuple`) are parsed recursively.
* Complex objects with properties (including data class types) are populated from dictionaries of key-value pairs
using reflection (enumerating type annotations).
:raises TypeError: A de-serializing engine cannot be constructed for the input type.
:raises JsonKeyError: Deserialization for a class or union type has failed because a matching member was not found.
:raises JsonTypeError: Deserialization for data has failed due to a type mismatch.
"""
# use caller context for evaluating types if no context is supplied
if context is None:
this_frame = inspect.currentframe()
if this_frame is not None:
caller_frame = this_frame.f_back
del this_frame
if caller_frame is not None:
try:
context = sys.modules[caller_frame.f_globals["__name__"]]
finally:
del caller_frame
parser = create_deserializer(typ, context)
return parser.parse(data)
def json_dump_string(json_object: JsonType) -> str:
"Dump an object as a JSON string with a compact representation."
return json.dumps(json_object, ensure_ascii=False, check_circular=False, separators=(",", ":"))
def json_dump(json_object: JsonType, file: TextIO) -> None:
json.dump(
json_object,
file,
ensure_ascii=False,
check_circular=False,
separators=(",", ":"),
)
file.write("\n")


@@ -1,494 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
import abc
import base64
import datetime
import enum
import functools
import inspect
import ipaddress
import sys
import typing
import uuid
from collections.abc import Callable
from types import FunctionType, MethodType, ModuleType
from typing import (
Any,
Generic,
Literal,
NamedTuple,
TypeVar,
Union,
)
from .core import JsonType
from .exception import JsonTypeError, JsonValueError
from .inspection import (
TypeLike,
enum_value_types,
evaluate_type,
get_class_properties,
get_resolved_hints,
is_dataclass_type,
is_named_tuple_type,
is_reserved_property,
is_type_annotated,
is_type_enum,
unwrap_annotated_type,
)
from .mapping import python_field_to_json_property
T = TypeVar("T")
class Serializer(abc.ABC, Generic[T]):
@abc.abstractmethod
def generate(self, data: T) -> JsonType: ...
class NoneSerializer(Serializer[None]):
def generate(self, data: None) -> None:
# can be directly represented in JSON
return None
class BoolSerializer(Serializer[bool]):
def generate(self, data: bool) -> bool:
# can be directly represented in JSON
return data
class IntSerializer(Serializer[int]):
def generate(self, data: int) -> int:
# can be directly represented in JSON
return data
class FloatSerializer(Serializer[float]):
def generate(self, data: float) -> float:
# can be directly represented in JSON
return data
class StringSerializer(Serializer[str]):
def generate(self, data: str) -> str:
# can be directly represented in JSON
return data
class BytesSerializer(Serializer[bytes]):
def generate(self, data: bytes) -> str:
return base64.b64encode(data).decode("ascii")
class DateTimeSerializer(Serializer[datetime.datetime]):
def generate(self, obj: datetime.datetime) -> str:
if obj.tzinfo is None:
raise JsonValueError(f"timestamp lacks explicit time zone designator: {obj}")
fmt = obj.isoformat()
if fmt.endswith("+00:00"):
fmt = f"{fmt[:-6]}Z" # Python's isoformat() does not support military time zones like "Zulu" for UTC
return fmt
class DateSerializer(Serializer[datetime.date]):
def generate(self, obj: datetime.date) -> str:
return obj.isoformat()
class TimeSerializer(Serializer[datetime.time]):
def generate(self, obj: datetime.time) -> str:
return obj.isoformat()
class UUIDSerializer(Serializer[uuid.UUID]):
def generate(self, obj: uuid.UUID) -> str:
return str(obj)
class IPv4Serializer(Serializer[ipaddress.IPv4Address]):
def generate(self, obj: ipaddress.IPv4Address) -> str:
return str(obj)
class IPv6Serializer(Serializer[ipaddress.IPv6Address]):
def generate(self, obj: ipaddress.IPv6Address) -> str:
return str(obj)
class EnumSerializer(Serializer[enum.Enum]):
def generate(self, obj: enum.Enum) -> int | str:
value = obj.value
if isinstance(value, int):
return value
return str(value)
class UntypedListSerializer(Serializer[list]):
def generate(self, obj: list) -> list[JsonType]:
return [object_to_json(item) for item in obj]
class UntypedDictSerializer(Serializer[dict]):
def generate(self, obj: dict) -> dict[str, JsonType]:
if obj and isinstance(next(iter(obj.keys())), enum.Enum):
iterator = ((key.value, object_to_json(value)) for key, value in obj.items())
else:
iterator = ((str(key), object_to_json(value)) for key, value in obj.items())
return dict(iterator)
class UntypedSetSerializer(Serializer[set]):
def generate(self, obj: set) -> list[JsonType]:
return [object_to_json(item) for item in obj]
class UntypedTupleSerializer(Serializer[tuple]):
def generate(self, obj: tuple) -> list[JsonType]:
return [object_to_json(item) for item in obj]
class TypedCollectionSerializer(Serializer, Generic[T]):
generator: Serializer[T]
def __init__(self, item_type: type[T], context: ModuleType | None) -> None:
self.generator = _get_serializer(item_type, context)
class TypedListSerializer(TypedCollectionSerializer[T]):
def generate(self, obj: list[T]) -> list[JsonType]:
return [self.generator.generate(item) for item in obj]
class TypedStringDictSerializer(TypedCollectionSerializer[T]):
def __init__(self, value_type: type[T], context: ModuleType | None) -> None:
super().__init__(value_type, context)
def generate(self, obj: dict[str, T]) -> dict[str, JsonType]:
return {key: self.generator.generate(value) for key, value in obj.items()}
class TypedEnumDictSerializer(TypedCollectionSerializer[T]):
def __init__(
self,
key_type: type[enum.Enum],
value_type: type[T],
context: ModuleType | None,
) -> None:
super().__init__(value_type, context)
value_types = enum_value_types(key_type)
if len(value_types) != 1:
raise JsonTypeError(
f"invalid key type, enumerations must have a consistent member value type but several types found: {value_types}"
)
value_type = value_types.pop()
if value_type is not str:
raise JsonTypeError("invalid enumeration key type, expected `enum.Enum` with string values")
def generate(self, obj: dict[enum.Enum, T]) -> dict[str, JsonType]:
return {key.value: self.generator.generate(value) for key, value in obj.items()}
class TypedSetSerializer(TypedCollectionSerializer[T]):
def generate(self, obj: set[T]) -> JsonType:
return [self.generator.generate(item) for item in obj]
class TypedTupleSerializer(Serializer[tuple]):
item_generators: tuple[Serializer, ...]
def __init__(self, item_types: tuple[type, ...], context: ModuleType | None) -> None:
self.item_generators = tuple(_get_serializer(item_type, context) for item_type in item_types)
def generate(self, obj: tuple) -> list[JsonType]:
return [item_generator.generate(item) for item_generator, item in zip(self.item_generators, obj, strict=False)]
class CustomSerializer(Serializer):
converter: Callable[[object], JsonType]
def __init__(self, converter: Callable[[object], JsonType]) -> None:
self.converter = converter
def generate(self, obj: object) -> JsonType:
return self.converter(obj)
class FieldSerializer(Generic[T]):
"""
Serializes a Python object field into a JSON property.
:param field_name: The name of the field in a Python class to read data from.
:param property_name: The name of the JSON property to write to a JSON `object`.
:param generator: A compatible serializer that can handle the field's type.
"""
field_name: str
property_name: str
generator: Serializer
def __init__(self, field_name: str, property_name: str, generator: Serializer[T]) -> None:
self.field_name = field_name
self.property_name = property_name
self.generator = generator
def generate_field(self, obj: object, object_dict: dict[str, JsonType]) -> None:
value = getattr(obj, self.field_name)
if value is not None:
object_dict[self.property_name] = self.generator.generate(value)
class TypedClassSerializer(Serializer[T]):
property_generators: list[FieldSerializer]
def __init__(self, class_type: type[T], context: ModuleType | None) -> None:
self.property_generators = [
FieldSerializer(
field_name,
python_field_to_json_property(field_name, field_type),
_get_serializer(field_type, context),
)
for field_name, field_type in get_class_properties(class_type)
]
def generate(self, obj: T) -> dict[str, JsonType]:
object_dict: dict[str, JsonType] = {}
for property_generator in self.property_generators:
property_generator.generate_field(obj, object_dict)
return object_dict
class TypedNamedTupleSerializer(TypedClassSerializer[NamedTuple]):
def __init__(self, class_type: type[NamedTuple], context: ModuleType | None) -> None:
super().__init__(class_type, context)
class DataclassSerializer(TypedClassSerializer[T]):
def __init__(self, class_type: type[T], context: ModuleType | None) -> None:
super().__init__(class_type, context)
class UnionSerializer(Serializer):
def generate(self, obj: Any) -> JsonType:
return object_to_json(obj)
class LiteralSerializer(Serializer):
generator: Serializer
def __init__(self, values: tuple[Any, ...], context: ModuleType | None) -> None:
literal_type_tuple = tuple(type(value) for value in values)
literal_type_set = set(literal_type_tuple)
if len(literal_type_set) != 1:
value_names = ", ".join(repr(value) for value in values)
raise TypeError(
f"type `Literal[{value_names}]` expects consistent literal value types but got: {literal_type_tuple}"
)
literal_type = literal_type_set.pop()
self.generator = _get_serializer(literal_type, context)
def generate(self, obj: Any) -> JsonType:
return self.generator.generate(obj)
class UntypedNamedTupleSerializer(Serializer):
fields: dict[str, str]
def __init__(self, class_type: type[NamedTuple]) -> None:
# named tuples are also instances of tuple
self.fields = {}
field_names: tuple[str, ...] = class_type._fields
for field_name in field_names:
self.fields[field_name] = python_field_to_json_property(field_name)
def generate(self, obj: NamedTuple) -> JsonType:
object_dict = {}
for field_name, property_name in self.fields.items():
value = getattr(obj, field_name)
object_dict[property_name] = object_to_json(value)
return object_dict
class UntypedClassSerializer(Serializer):
def generate(self, obj: object) -> JsonType:
# iterate over object attributes to get a standard representation
object_dict = {}
for name in dir(obj):
if is_reserved_property(name):
continue
value = getattr(obj, name)
if value is None:
continue
# filter instance methods
if inspect.ismethod(value):
continue
object_dict[python_field_to_json_property(name)] = object_to_json(value)
return object_dict
def create_serializer(typ: TypeLike, context: ModuleType | None = None) -> Serializer:
"""
Creates a serializer engine to produce an object that can be directly converted into a JSON string.
When serializing a Python object into a JSON object, the following transformations are applied:
* Fundamental types (`bool`, `int`, `float` or `str`) are returned as-is.
* Date and time types (`datetime`, `date` or `time`) produce an ISO 8601 format string with time zone
(ending with `Z` for UTC).
* Byte arrays (`bytes`) are written as a string with Base64 encoding.
* UUIDs (`uuid.UUID`) are written as a UUID string as per RFC 4122.
* Enumerations yield their enumeration value.
* Containers (e.g. `list`, `dict`, `set`, `tuple`) are processed recursively.
* Complex objects with properties (including data class types) generate dictionaries of key-value pairs.
:raises TypeError: A serializer engine cannot be constructed for the input type.
"""
if context is None:
if isinstance(typ, type):
context = sys.modules[typ.__module__]
return _get_serializer(typ, context)
def _get_serializer(typ: TypeLike, context: ModuleType | None) -> Serializer:
if isinstance(typ, (str, typing.ForwardRef)):
if context is None:
raise TypeError(f"missing context for evaluating type: {typ}")
typ = evaluate_type(typ, context)
if isinstance(typ, type):
return _fetch_serializer(typ)
else:
# special forms are not always hashable
return _create_serializer(typ, context)
@functools.cache
def _fetch_serializer(typ: type) -> Serializer:
context = sys.modules[typ.__module__]
return _create_serializer(typ, context)
def _create_serializer(typ: TypeLike, context: ModuleType | None) -> Serializer:
# check for well-known types
if typ is type(None):
return NoneSerializer()
elif typ is bool:
return BoolSerializer()
elif typ is int:
return IntSerializer()
elif typ is float:
return FloatSerializer()
elif typ is str:
return StringSerializer()
elif typ is bytes:
return BytesSerializer()
elif typ is datetime.datetime:
return DateTimeSerializer()
elif typ is datetime.date:
return DateSerializer()
elif typ is datetime.time:
return TimeSerializer()
elif typ is uuid.UUID:
return UUIDSerializer()
elif typ is ipaddress.IPv4Address:
return IPv4Serializer()
elif typ is ipaddress.IPv6Address:
return IPv6Serializer()
# dynamically-typed collection types
if typ is list:
return UntypedListSerializer()
elif typ is dict:
return UntypedDictSerializer()
elif typ is set:
return UntypedSetSerializer()
elif typ is tuple:
return UntypedTupleSerializer()
# generic types (e.g. list, dict, set, etc.)
origin_type = typing.get_origin(typ)
if origin_type is list:
(list_item_type,) = typing.get_args(typ) # unpack single tuple element
return TypedListSerializer(list_item_type, context)
elif origin_type is dict:
key_type, value_type = typing.get_args(typ)
if key_type is str:
return TypedStringDictSerializer(value_type, context)
elif issubclass(key_type, enum.Enum):
return TypedEnumDictSerializer(key_type, value_type, context)
elif origin_type is set:
(set_member_type,) = typing.get_args(typ) # unpack single tuple element
return TypedSetSerializer(set_member_type, context)
elif origin_type is tuple:
return TypedTupleSerializer(typing.get_args(typ), context)
elif origin_type is Union:
return UnionSerializer()
elif origin_type is Literal:
return LiteralSerializer(typing.get_args(typ), context)
if is_type_annotated(typ):
return create_serializer(unwrap_annotated_type(typ))
# check if object has custom serialization method
convert_func = getattr(typ, "to_json", None)
if callable(convert_func):
return CustomSerializer(convert_func)
if is_type_enum(typ):
return EnumSerializer()
if is_dataclass_type(typ):
return DataclassSerializer(typ, context)
if is_named_tuple_type(typ):
if getattr(typ, "__annotations__", None):
return TypedNamedTupleSerializer(typ, context)
else:
return UntypedNamedTupleSerializer(typ)
# fail early if caller passes an object with an exotic type
if not isinstance(typ, type) or typ is FunctionType or typ is MethodType or typ is type or typ is ModuleType:
raise TypeError(f"object of type {typ} cannot be represented in JSON")
if get_resolved_hints(typ):
return TypedClassSerializer(typ, context)
else:
return UntypedClassSerializer()
def object_to_json(obj: Any) -> JsonType:
"""
Converts a Python object to a representation that can be exported to JSON.
* Fundamental types (e.g. numeric types) are written as is.
* Date and time types are serialized in the ISO 8601 format with time zone.
* A byte array is written as a string with Base64 encoding.
* UUIDs are written as a UUID string.
* Enumerations are written as their value.
* Containers (e.g. `list`, `dict`, `set`, `tuple`) are exported recursively.
* Objects with properties (including data class types) are converted to dictionaries of key-value pairs.
"""
typ: type = type(obj)
generator = create_serializer(typ)
return generator.generate(obj)
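One behaviour worth calling out: a type exposing a callable `to_json` is routed to `CustomSerializer` before the reflection-based class paths. A hedged sketch with an invented `Color` class:

```
class Color:
    def __init__(self, r: int, g: int, b: int) -> None:
        self.r, self.g, self.b = r, g, b

    @staticmethod
    def to_json(obj: "Color") -> str:
        return f"#{obj.r:02x}{obj.g:02x}{obj.b:02x}"

serializer = create_serializer(Color)
assert serializer.generate(Color(255, 0, 0)) == "#ff0000"
```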


@@ -1,27 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
from typing import Any, TypeVar
T = TypeVar("T")
class SlotsMeta(type):
def __new__(cls: type[T], name: str, bases: tuple[type, ...], ns: dict[str, Any]) -> T:
# caller may have already provided slots, in which case just retain them and keep going
slots: tuple[str, ...] = ns.get("__slots__", ())
# add fields with type annotations to slots
annotations: dict[str, Any] = ns.get("__annotations__", {})
members = tuple(member for member in annotations.keys() if member not in slots)
# assign slots
ns["__slots__"] = slots + tuple(members)
return super().__new__(cls, name, bases, ns) # type: ignore
class Slots(metaclass=SlotsMeta):
pass
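A sketch of the metaclass in action, with a hypothetical `Point`: annotated fields become `__slots__`, so instances carry no per-instance `__dict__`.

```
class Point(Slots):
    x: float
    y: float

p = Point()
p.x, p.y = 1.0, 2.0
assert Point.__slots__ == ("x", "y")
assert not hasattr(p, "__dict__")
```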


@@ -1,90 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
"""
Type-safe data interchange for Python data classes.
:see: https://github.com/hunyadi/strong_typing
"""
from collections.abc import Callable, Iterable
from typing import TypeVar
from .inspection import TypeCollector
T = TypeVar("T")
def topological_sort(graph: dict[T, set[T]]) -> list[T]:
"""
Performs a topological sort of a graph.
Nodes with no outgoing edges are first. Nodes with no incoming edges are last.
The topological ordering is not unique.
:param graph: A dictionary of mappings from nodes to adjacent nodes. Keys and set members must be hashable.
:returns: The list of nodes in topological order.
"""
# empty list that will contain the sorted nodes (in reverse order)
ordered: list[T] = []
seen: dict[T, bool] = {}
def _visit(n: T) -> None:
status = seen.get(n)
if status is not None:
if status: # node has a permanent mark
return
else: # node has a temporary mark
raise RuntimeError(f"cycle detected in graph for node {n}")
seen[n] = False # apply temporary mark
for m in graph[n]: # visit all adjacent nodes
if m != n: # ignore self-referencing nodes
_visit(m)
seen[n] = True # apply permanent mark
ordered.append(n)
for n in graph.keys():
_visit(n)
return ordered
def type_topological_sort(
types: Iterable[type],
dependency_fn: Callable[[type], Iterable[type]] | None = None,
) -> list[type]:
"""
Performs a topological sort of a list of types.
Types that don't depend on other types (i.e. fundamental types) are first. Types on which no other types depend
are last. The topological ordering is not unique.
:param types: A list of types (simple or composite).
:param dependency_fn: Returns a list of additional dependencies for a class (e.g. classes referenced by a foreign key).
:returns: The list of types in topological order.
"""
if not all(isinstance(typ, type) for typ in types):
raise TypeError("expected a list of types")
collector = TypeCollector()
collector.traverse_all(types)
graph = collector.graph
if dependency_fn:
new_types: set[type] = set()
for source_type, references in graph.items():
dependent_types = dependency_fn(source_type)
references.update(dependent_types)
new_types.update(dependent_types)
for new_type in new_types:
graph[new_type] = set()
return topological_sort(graph)
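A small usage sketch (not in the diff) of the removed DFS-based sort; per the docstring, edges point from a node to the nodes it depends on, so dependencies come out first. `type_topological_sort` applies the same ordering to the dependency graph that `TypeCollector` builds from each type's fields:

```python
# keys map to the nodes they point at (their dependencies)
graph = {"app": {"lib", "util"}, "lib": {"util"}, "util": set()}

order = topological_sort(graph)
# "util" has no outgoing edges, so it sorts first; "app" sorts last
assert order.index("util") < order.index("lib") < order.index("app")
```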


@@ -97,6 +97,7 @@ class ListToolGroupsResponse(BaseModel):
     data: list[ToolGroup]

+@json_schema_type
 class ListToolDefsResponse(BaseModel):
     """Response containing a list of tool definitions.


@@ -15,8 +15,7 @@ from pydantic import BaseModel, Field
 from llama_stack_api.common.tracing import telemetry_traceable
 from llama_stack_api.inference import InterleavedContent
-from llama_stack_api.schema_utils import json_schema_type, webmethod
-from llama_stack_api.strong_typing.schema import register_schema
+from llama_stack_api.schema_utils import json_schema_type, register_schema, webmethod
 from llama_stack_api.vector_stores import VectorStore
 from llama_stack_api.version import LLAMA_STACK_API_V1

uv.lock (generated)

@@ -1824,6 +1824,21 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/fe/54/c86cd8e011fe98803d7e382fd67c0df5ceab8d2b7ad8c5a81524f791551c/jsonschema-4.25.0-py3-none-any.whl", hash = "sha256:24c2e8da302de79c8b9382fee3e76b355e44d2a4364bb207159ce10b517bd716", size = 89184, upload-time = "2025-07-18T15:39:42.956Z" },
 ]
+
+[[package]]
+name = "jsonschema-path"
+version = "0.3.4"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "pathable" },
+    { name = "pyyaml" },
+    { name = "referencing" },
+    { name = "requests" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/6e/45/41ebc679c2a4fced6a722f624c18d658dee42612b83ea24c1caf7c0eb3a8/jsonschema_path-0.3.4.tar.gz", hash = "sha256:8365356039f16cc65fddffafda5f58766e34bebab7d6d105616ab52bc4297001", size = 11159, upload-time = "2025-01-24T14:33:16.547Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/cb/58/3485da8cb93d2f393bce453adeef16896751f14ba3e2024bc21dc9597646/jsonschema_path-0.3.4-py3-none-any.whl", hash = "sha256:f502191fdc2b22050f9a81c9237be9d27145b9001c55842bece5e94e382e52f8", size = 14810, upload-time = "2025-01-24T14:33:14.652Z" },
+]

 [[package]]
 name = "jsonschema-specifications"
 version = "2025.4.1"
@@ -1903,6 +1918,38 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/89/43/d9bebfc3db7dea6ec80df5cb2aad8d274dd18ec2edd6c4f21f32c237cbbb/kubernetes-33.1.0-py2.py3-none-any.whl", hash = "sha256:544de42b24b64287f7e0aa9513c93cb503f7f40eea39b20f66810011a86eabc5", size = 1941335, upload-time = "2025-06-09T21:57:56.327Z" },
 ]
+
+[[package]]
+name = "lazy-object-proxy"
+version = "1.12.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/08/a2/69df9c6ba6d316cfd81fe2381e464db3e6de5db45f8c43c6a23504abf8cb/lazy_object_proxy-1.12.0.tar.gz", hash = "sha256:1f5a462d92fd0cfb82f1fab28b51bfb209fabbe6aabf7f0d51472c0c124c0c61", size = 43681, upload-time = "2025-08-22T13:50:06.783Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/0d/1b/b5f5bd6bda26f1e15cd3232b223892e4498e34ec70a7f4f11c401ac969f1/lazy_object_proxy-1.12.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8ee0d6027b760a11cc18281e702c0309dd92da458a74b4c15025d7fc490deede", size = 26746, upload-time = "2025-08-22T13:42:37.572Z" },
+    { url = "https://files.pythonhosted.org/packages/55/64/314889b618075c2bfc19293ffa9153ce880ac6153aacfd0a52fcabf21a66/lazy_object_proxy-1.12.0-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:4ab2c584e3cc8be0dfca422e05ad30a9abe3555ce63e9ab7a559f62f8dbc6ff9", size = 71457, upload-time = "2025-08-22T13:42:38.743Z" },
+    { url = "https://files.pythonhosted.org/packages/11/53/857fc2827fc1e13fbdfc0ba2629a7d2579645a06192d5461809540b78913/lazy_object_proxy-1.12.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:14e348185adbd03ec17d051e169ec45686dcd840a3779c9d4c10aabe2ca6e1c0", size = 71036, upload-time = "2025-08-22T13:42:40.184Z" },
+    { url = "https://files.pythonhosted.org/packages/2b/24/e581ffed864cd33c1b445b5763d617448ebb880f48675fc9de0471a95cbc/lazy_object_proxy-1.12.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:c4fcbe74fb85df8ba7825fa05eddca764138da752904b378f0ae5ab33a36c308", size = 69329, upload-time = "2025-08-22T13:42:41.311Z" },
+    { url = "https://files.pythonhosted.org/packages/78/be/15f8f5a0b0b2e668e756a152257d26370132c97f2f1943329b08f057eff0/lazy_object_proxy-1.12.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:563d2ec8e4d4b68ee7848c5ab4d6057a6d703cb7963b342968bb8758dda33a23", size = 70690, upload-time = "2025-08-22T13:42:42.51Z" },
+    { url = "https://files.pythonhosted.org/packages/5d/aa/f02be9bbfb270e13ee608c2b28b8771f20a5f64356c6d9317b20043c6129/lazy_object_proxy-1.12.0-cp312-cp312-win_amd64.whl", hash = "sha256:53c7fd99eb156bbb82cbc5d5188891d8fdd805ba6c1e3b92b90092da2a837073", size = 26563, upload-time = "2025-08-22T13:42:43.685Z" },
+    { url = "https://files.pythonhosted.org/packages/f4/26/b74c791008841f8ad896c7f293415136c66cc27e7c7577de4ee68040c110/lazy_object_proxy-1.12.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:86fd61cb2ba249b9f436d789d1356deae69ad3231dc3c0f17293ac535162672e", size = 26745, upload-time = "2025-08-22T13:42:44.982Z" },
+    { url = "https://files.pythonhosted.org/packages/9b/52/641870d309e5d1fb1ea7d462a818ca727e43bfa431d8c34b173eb090348c/lazy_object_proxy-1.12.0-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:81d1852fb30fab81696f93db1b1e55a5d1ff7940838191062f5f56987d5fcc3e", size = 71537, upload-time = "2025-08-22T13:42:46.141Z" },
+    { url = "https://files.pythonhosted.org/packages/47/b6/919118e99d51c5e76e8bf5a27df406884921c0acf2c7b8a3b38d847ab3e9/lazy_object_proxy-1.12.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:be9045646d83f6c2664c1330904b245ae2371b5c57a3195e4028aedc9f999655", size = 71141, upload-time = "2025-08-22T13:42:47.375Z" },
+    { url = "https://files.pythonhosted.org/packages/e5/47/1d20e626567b41de085cf4d4fb3661a56c159feaa73c825917b3b4d4f806/lazy_object_proxy-1.12.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:67f07ab742f1adfb3966c40f630baaa7902be4222a17941f3d85fd1dae5565ff", size = 69449, upload-time = "2025-08-22T13:42:48.49Z" },
+    { url = "https://files.pythonhosted.org/packages/58/8d/25c20ff1a1a8426d9af2d0b6f29f6388005fc8cd10d6ee71f48bff86fdd0/lazy_object_proxy-1.12.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:75ba769017b944fcacbf6a80c18b2761a1795b03f8899acdad1f1c39db4409be", size = 70744, upload-time = "2025-08-22T13:42:49.608Z" },
+    { url = "https://files.pythonhosted.org/packages/c0/67/8ec9abe15c4f8a4bcc6e65160a2c667240d025cbb6591b879bea55625263/lazy_object_proxy-1.12.0-cp313-cp313-win_amd64.whl", hash = "sha256:7b22c2bbfb155706b928ac4d74c1a63ac8552a55ba7fff4445155523ea4067e1", size = 26568, upload-time = "2025-08-22T13:42:57.719Z" },
+    { url = "https://files.pythonhosted.org/packages/23/12/cd2235463f3469fd6c62d41d92b7f120e8134f76e52421413a0ad16d493e/lazy_object_proxy-1.12.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:4a79b909aa16bde8ae606f06e6bbc9d3219d2e57fb3e0076e17879072b742c65", size = 27391, upload-time = "2025-08-22T13:42:50.62Z" },
+    { url = "https://files.pythonhosted.org/packages/60/9e/f1c53e39bbebad2e8609c67d0830cc275f694d0ea23d78e8f6db526c12d3/lazy_object_proxy-1.12.0-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:338ab2f132276203e404951205fe80c3fd59429b3a724e7b662b2eb539bb1be9", size = 80552, upload-time = "2025-08-22T13:42:51.731Z" },
+    { url = "https://files.pythonhosted.org/packages/4c/b6/6c513693448dcb317d9d8c91d91f47addc09553613379e504435b4cc8b3e/lazy_object_proxy-1.12.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8c40b3c9faee2e32bfce0df4ae63f4e73529766893258eca78548bac801c8f66", size = 82857, upload-time = "2025-08-22T13:42:53.225Z" },
+    { url = "https://files.pythonhosted.org/packages/12/1c/d9c4aaa4c75da11eb7c22c43d7c90a53b4fca0e27784a5ab207768debea7/lazy_object_proxy-1.12.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:717484c309df78cedf48396e420fa57fc8a2b1f06ea889df7248fdd156e58847", size = 80833, upload-time = "2025-08-22T13:42:54.391Z" },
+    { url = "https://files.pythonhosted.org/packages/0b/ae/29117275aac7d7d78ae4f5a4787f36ff33262499d486ac0bf3e0b97889f6/lazy_object_proxy-1.12.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:a6b7ea5ea1ffe15059eb44bcbcb258f97bcb40e139b88152c40d07b1a1dfc9ac", size = 79516, upload-time = "2025-08-22T13:42:55.812Z" },
+    { url = "https://files.pythonhosted.org/packages/19/40/b4e48b2c38c69392ae702ae7afa7b6551e0ca5d38263198b7c79de8b3bdf/lazy_object_proxy-1.12.0-cp313-cp313t-win_amd64.whl", hash = "sha256:08c465fb5cd23527512f9bd7b4c7ba6cec33e28aad36fbbe46bf7b858f9f3f7f", size = 27656, upload-time = "2025-08-22T13:42:56.793Z" },
+    { url = "https://files.pythonhosted.org/packages/ef/3a/277857b51ae419a1574557c0b12e0d06bf327b758ba94cafc664cb1e2f66/lazy_object_proxy-1.12.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c9defba70ab943f1df98a656247966d7729da2fe9c2d5d85346464bf320820a3", size = 26582, upload-time = "2025-08-22T13:49:49.366Z" },
+    { url = "https://files.pythonhosted.org/packages/1a/b6/c5e0fa43535bb9c87880e0ba037cdb1c50e01850b0831e80eb4f4762f270/lazy_object_proxy-1.12.0-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:6763941dbf97eea6b90f5b06eb4da9418cc088fce0e3883f5816090f9afcde4a", size = 71059, upload-time = "2025-08-22T13:49:50.488Z" },
+    { url = "https://files.pythonhosted.org/packages/06/8a/7dcad19c685963c652624702f1a968ff10220b16bfcc442257038216bf55/lazy_object_proxy-1.12.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:fdc70d81235fc586b9e3d1aeef7d1553259b62ecaae9db2167a5d2550dcc391a", size = 71034, upload-time = "2025-08-22T13:49:54.224Z" },
+    { url = "https://files.pythonhosted.org/packages/12/ac/34cbfb433a10e28c7fd830f91c5a348462ba748413cbb950c7f259e67aa7/lazy_object_proxy-1.12.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:0a83c6f7a6b2bfc11ef3ed67f8cbe99f8ff500b05655d8e7df9aab993a6abc95", size = 69529, upload-time = "2025-08-22T13:49:55.29Z" },
+    { url = "https://files.pythonhosted.org/packages/6f/6a/11ad7e349307c3ca4c0175db7a77d60ce42a41c60bcb11800aabd6a8acb8/lazy_object_proxy-1.12.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:256262384ebd2a77b023ad02fbcc9326282bcfd16484d5531154b02bc304f4c5", size = 70391, upload-time = "2025-08-22T13:49:56.35Z" },
+    { url = "https://files.pythonhosted.org/packages/59/97/9b410ed8fbc6e79c1ee8b13f8777a80137d4bc189caf2c6202358e66192c/lazy_object_proxy-1.12.0-cp314-cp314-win_amd64.whl", hash = "sha256:7601ec171c7e8584f8ff3f4e440aa2eebf93e854f04639263875b8c2971f819f", size = 26988, upload-time = "2025-08-22T13:49:57.302Z" },
+]

 [[package]]
 name = "linkify"
 version = "1.4"
@@ -1982,6 +2029,7 @@ dev = [
     { name = "black" },
     { name = "mypy" },
     { name = "nbval" },
+    { name = "openapi-spec-validator" },
     { name = "pre-commit" },
     { name = "pytest" },
     { name = "pytest-asyncio" },
@@ -2107,6 +2155,7 @@ requires-dist = [
     { name = "python-dotenv" },
     { name = "python-multipart", specifier = ">=0.0.20" },
     { name = "pyyaml", specifier = ">=6.0" },
+    { name = "pyyaml", specifier = ">=6.0.2" },
     { name = "rich" },
     { name = "sqlalchemy", extras = ["asyncio"], specifier = ">=2.0.41" },
     { name = "starlette" },
@@ -3000,6 +3049,35 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/14/f3/ebbd700d8dc1e6380a7a382969d96bc0cbea8717b52fb38ff0ca2a7653e8/openai-2.5.0-py3-none-any.whl", hash = "sha256:21380e5f52a71666dbadbf322dd518bdf2b9d11ed0bb3f96bea17310302d6280", size = 999851, upload-time = "2025-10-17T18:14:45.528Z" },
 ]
+
+[[package]]
+name = "openapi-schema-validator"
+version = "0.6.3"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "jsonschema" },
+    { name = "jsonschema-specifications" },
+    { name = "rfc3339-validator" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/8b/f3/5507ad3325169347cd8ced61c232ff3df70e2b250c49f0fe140edb4973c6/openapi_schema_validator-0.6.3.tar.gz", hash = "sha256:f37bace4fc2a5d96692f4f8b31dc0f8d7400fd04f3a937798eaf880d425de6ee", size = 11550, upload-time = "2025-01-10T18:08:22.268Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/21/c6/ad0fba32775ae749016829dace42ed80f4407b171da41313d1a3a5f102e4/openapi_schema_validator-0.6.3-py3-none-any.whl", hash = "sha256:f3b9870f4e556b5a62a1c39da72a6b4b16f3ad9c73dc80084b1b11e74ba148a3", size = 8755, upload-time = "2025-01-10T18:08:19.758Z" },
+]
+
+[[package]]
+name = "openapi-spec-validator"
+version = "0.7.2"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "jsonschema" },
+    { name = "jsonschema-path" },
+    { name = "lazy-object-proxy" },
+    { name = "openapi-schema-validator" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/82/af/fe2d7618d6eae6fb3a82766a44ed87cd8d6d82b4564ed1c7cfb0f6378e91/openapi_spec_validator-0.7.2.tar.gz", hash = "sha256:cc029309b5c5dbc7859df0372d55e9d1ff43e96d678b9ba087f7c56fc586f734", size = 36855, upload-time = "2025-06-07T14:48:56.299Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/27/dd/b3fd642260cb17532f66cc1e8250f3507d1e580483e209dc1e9d13bd980d/openapi_spec_validator-0.7.2-py3-none-any.whl", hash = "sha256:4bbdc0894ec85f1d1bea1d6d9c8b2c3c8d7ccaa13577ef40da9c006c9fd0eb60", size = 39713, upload-time = "2025-06-07T14:48:54.077Z" },
+]

 [[package]]
 name = "opentelemetry-api"
 version = "1.36.0"
@@ -3236,6 +3314,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/c6/ac/dac4a63f978e4dcb3c6d3a78c4d8e0192a113d288502a1216950c41b1027/parso-0.8.4-py2.py3-none-any.whl", hash = "sha256:a418670a20291dacd2dddc80c377c5c3791378ee1e8d12bffc35420643d43f18", size = 103650, upload-time = "2024-04-05T09:43:53.299Z" },
 ]
+
+[[package]]
+name = "pathable"
+version = "0.4.4"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/67/93/8f2c2075b180c12c1e9f6a09d1a985bc2036906b13dff1d8917e395f2048/pathable-0.4.4.tar.gz", hash = "sha256:6905a3cd17804edfac7875b5f6c9142a218c7caef78693c2dbbbfbac186d88b2", size = 8124, upload-time = "2025-01-10T18:43:13.247Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/7d/eb/b6260b31b1a96386c0a880edebe26f89669098acea8e0318bff6adb378fd/pathable-0.4.4-py3-none-any.whl", hash = "sha256:5ae9e94793b6ef5a4cbe0a7ce9dbbefc1eec38df253763fd0aeeacf2762dbbc2", size = 9592, upload-time = "2025-01-10T18:43:11.88Z" },
+]

 [[package]]
 name = "pathspec"
 version = "0.12.1"
@@ -4393,6 +4480,18 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/1c/4c/cc276ce57e572c102d9542d383b2cfd551276581dc60004cb94fe8774c11/responses-0.25.8-py3-none-any.whl", hash = "sha256:0c710af92def29c8352ceadff0c3fe340ace27cf5af1bbe46fb71275bcd2831c", size = 34769, upload-time = "2025-08-08T19:01:45.018Z" },
 ]
+
+[[package]]
+name = "rfc3339-validator"
+version = "0.1.4"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "six" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/28/ea/a9387748e2d111c3c2b275ba970b735e04e15cdb1eb30693b6b5708c4dbd/rfc3339_validator-0.1.4.tar.gz", hash = "sha256:138a2abdf93304ad60530167e51d2dfb9549521a836871b88d7f4695d0022f6b", size = 5513, upload-time = "2021-05-12T16:37:54.178Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/7b/44/4e421b96b67b2daff264473f7465db72fbdf36a07e05494f50300cc7b0c6/rfc3339_validator-0.1.4-py2.py3-none-any.whl", hash = "sha256:24f6ec1eda14ef823da9e36ec7113124b39c04d50a4d3d3a3c2859577e7791fa", size = 3490, upload-time = "2021-05-12T16:37:52.536Z" },
+]

 [[package]]
 name = "rich"
 version = "14.1.0"
@@ -4505,40 +4604,46 @@ wheels = [
 [[package]]
 name = "ruamel-yaml"
-version = "0.18.14"
+version = "0.18.16"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "ruamel-yaml-clib", marker = "python_full_version < '3.14' and platform_python_implementation == 'CPython'" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/39/87/6da0df742a4684263261c253f00edd5829e6aca970fff69e75028cccc547/ruamel.yaml-0.18.14.tar.gz", hash = "sha256:7227b76aaec364df15936730efbf7d72b30c0b79b1d578bbb8e3dcb2d81f52b7", size = 145511, upload-time = "2025-06-09T08:51:09.828Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/9f/c7/ee630b29e04a672ecfc9b63227c87fd7a37eb67c1bf30fe95376437f897c/ruamel.yaml-0.18.16.tar.gz", hash = "sha256:a6e587512f3c998b2225d68aa1f35111c29fad14aed561a26e73fab729ec5e5a", size = 147269, upload-time = "2025-10-22T17:54:02.346Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/af/6d/6fe4805235e193aad4aaf979160dd1f3c487c57d48b810c816e6e842171b/ruamel.yaml-0.18.14-py3-none-any.whl", hash = "sha256:710ff198bb53da66718c7db27eec4fbcc9aa6ca7204e4c1df2f282b6fe5eb6b2", size = 118570, upload-time = "2025-06-09T08:51:06.348Z" },
+    { url = "https://files.pythonhosted.org/packages/0f/73/bb1bc2529f852e7bf64a2dec885e89ff9f5cc7bbf6c9340eed30ff2c69c5/ruamel.yaml-0.18.16-py3-none-any.whl", hash = "sha256:048f26d64245bae57a4f9ef6feb5b552a386830ef7a826f235ffb804c59efbba", size = 119858, upload-time = "2025-10-22T17:53:59.012Z" },
 ]

 [[package]]
 name = "ruamel-yaml-clib"
-version = "0.2.12"
+version = "0.2.14"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/20/84/80203abff8ea4993a87d823a5f632e4d92831ef75d404c9fc78d0176d2b5/ruamel.yaml.clib-0.2.12.tar.gz", hash = "sha256:6c8fbb13ec503f99a91901ab46e0b07ae7941cd527393187039aec586fdfd36f", size = 225315, upload-time = "2024-10-20T10:10:56.22Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/d8/e9/39ec4d4b3f91188fad1842748f67d4e749c77c37e353c4e545052ee8e893/ruamel.yaml.clib-0.2.14.tar.gz", hash = "sha256:803f5044b13602d58ea378576dd75aa759f52116a0232608e8fdada4da33752e", size = 225394, upload-time = "2025-09-22T19:51:23.753Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/48/41/e7a405afbdc26af961678474a55373e1b323605a4f5e2ddd4a80ea80f628/ruamel.yaml.clib-0.2.12-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:20b0f8dc160ba83b6dcc0e256846e1a02d044e13f7ea74a3d1d56ede4e48c632", size = 133433, upload-time = "2024-10-20T10:12:55.657Z" },
-    { url = "https://files.pythonhosted.org/packages/ec/b0/b850385604334c2ce90e3ee1013bd911aedf058a934905863a6ea95e9eb4/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux2014_aarch64.whl", hash = "sha256:943f32bc9dedb3abff9879edc134901df92cfce2c3d5c9348f172f62eb2d771d", size = 647362, upload-time = "2024-10-20T10:12:57.155Z" },
-    { url = "https://files.pythonhosted.org/packages/44/d0/3f68a86e006448fb6c005aee66565b9eb89014a70c491d70c08de597f8e4/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95c3829bb364fdb8e0332c9931ecf57d9be3519241323c5274bd82f709cebc0c", size = 754118, upload-time = "2024-10-20T10:12:58.501Z" },
-    { url = "https://files.pythonhosted.org/packages/52/a9/d39f3c5ada0a3bb2870d7db41901125dbe2434fa4f12ca8c5b83a42d7c53/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:749c16fcc4a2b09f28843cda5a193e0283e47454b63ec4b81eaa2242f50e4ccd", size = 706497, upload-time = "2024-10-20T10:13:00.211Z" },
-    { url = "https://files.pythonhosted.org/packages/b0/fa/097e38135dadd9ac25aecf2a54be17ddf6e4c23e43d538492a90ab3d71c6/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:bf165fef1f223beae7333275156ab2022cffe255dcc51c27f066b4370da81e31", size = 698042, upload-time = "2024-10-21T11:26:46.038Z" },
-    { url = "https://files.pythonhosted.org/packages/ec/d5/a659ca6f503b9379b930f13bc6b130c9f176469b73b9834296822a83a132/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:32621c177bbf782ca5a18ba4d7af0f1082a3f6e517ac2a18b3974d4edf349680", size = 745831, upload-time = "2024-10-21T11:26:47.487Z" },
-    { url = "https://files.pythonhosted.org/packages/db/5d/36619b61ffa2429eeaefaab4f3374666adf36ad8ac6330d855848d7d36fd/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b82a7c94a498853aa0b272fd5bc67f29008da798d4f93a2f9f289feb8426a58d", size = 715692, upload-time = "2024-12-11T19:58:17.252Z" },
-    { url = "https://files.pythonhosted.org/packages/b1/82/85cb92f15a4231c89b95dfe08b09eb6adca929ef7df7e17ab59902b6f589/ruamel.yaml.clib-0.2.12-cp312-cp312-win32.whl", hash = "sha256:e8c4ebfcfd57177b572e2040777b8abc537cdef58a2120e830124946aa9b42c5", size = 98777, upload-time = "2024-10-20T10:13:01.395Z" },
-    { url = "https://files.pythonhosted.org/packages/d7/8f/c3654f6f1ddb75daf3922c3d8fc6005b1ab56671ad56ffb874d908bfa668/ruamel.yaml.clib-0.2.12-cp312-cp312-win_amd64.whl", hash = "sha256:0467c5965282c62203273b838ae77c0d29d7638c8a4e3a1c8bdd3602c10904e4", size = 115523, upload-time = "2024-10-20T10:13:02.768Z" },
-    { url = "https://files.pythonhosted.org/packages/29/00/4864119668d71a5fa45678f380b5923ff410701565821925c69780356ffa/ruamel.yaml.clib-0.2.12-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:4c8c5d82f50bb53986a5e02d1b3092b03622c02c2eb78e29bec33fd9593bae1a", size = 132011, upload-time = "2024-10-20T10:13:04.377Z" },
-    { url = "https://files.pythonhosted.org/packages/7f/5e/212f473a93ae78c669ffa0cb051e3fee1139cb2d385d2ae1653d64281507/ruamel.yaml.clib-0.2.12-cp313-cp313-manylinux2014_aarch64.whl", hash = "sha256:e7e3736715fbf53e9be2a79eb4db68e4ed857017344d697e8b9749444ae57475", size = 642488, upload-time = "2024-10-20T10:13:05.906Z" },
-    { url = "https://files.pythonhosted.org/packages/1f/8f/ecfbe2123ade605c49ef769788f79c38ddb1c8fa81e01f4dbf5cf1a44b16/ruamel.yaml.clib-0.2.12-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0b7e75b4965e1d4690e93021adfcecccbca7d61c7bddd8e22406ef2ff20d74ef", size = 745066, upload-time = "2024-10-20T10:13:07.26Z" },
-    { url = "https://files.pythonhosted.org/packages/e2/a9/28f60726d29dfc01b8decdb385de4ced2ced9faeb37a847bd5cf26836815/ruamel.yaml.clib-0.2.12-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96777d473c05ee3e5e3c3e999f5d23c6f4ec5b0c38c098b3a5229085f74236c6", size = 701785, upload-time = "2024-10-20T10:13:08.504Z" },
-    { url = "https://files.pythonhosted.org/packages/84/7e/8e7ec45920daa7f76046578e4f677a3215fe8f18ee30a9cb7627a19d9b4c/ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:3bc2a80e6420ca8b7d3590791e2dfc709c88ab9152c00eeb511c9875ce5778bf", size = 693017, upload-time = "2024-10-21T11:26:48.866Z" },
-    { url = "https://files.pythonhosted.org/packages/c5/b3/d650eaade4ca225f02a648321e1ab835b9d361c60d51150bac49063b83fa/ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:e188d2699864c11c36cdfdada94d781fd5d6b0071cd9c427bceb08ad3d7c70e1", size = 741270, upload-time = "2024-10-21T11:26:50.213Z" },
-    { url = "https://files.pythonhosted.org/packages/87/b8/01c29b924dcbbed75cc45b30c30d565d763b9c4d540545a0eeecffb8f09c/ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4f6f3eac23941b32afccc23081e1f50612bdbe4e982012ef4f5797986828cd01", size = 709059, upload-time = "2024-12-11T19:58:18.846Z" },
-    { url = "https://files.pythonhosted.org/packages/30/8c/ed73f047a73638257aa9377ad356bea4d96125b305c34a28766f4445cc0f/ruamel.yaml.clib-0.2.12-cp313-cp313-win32.whl", hash = "sha256:6442cb36270b3afb1b4951f060eccca1ce49f3d087ca1ca4563a6eb479cb3de6", size = 98583, upload-time = "2024-10-20T10:13:09.658Z" },
-    { url = "https://files.pythonhosted.org/packages/b0/85/e8e751d8791564dd333d5d9a4eab0a7a115f7e349595417fd50ecae3395c/ruamel.yaml.clib-0.2.12-cp313-cp313-win_amd64.whl", hash = "sha256:e5b8daf27af0b90da7bb903a876477a9e6d7270be6146906b276605997c7e9a3", size = 115190, upload-time = "2024-10-20T10:13:10.66Z" },
+    { url = "https://files.pythonhosted.org/packages/b4/42/ccfb34a25289afbbc42017e4d3d4288e61d35b2e00cfc6b92974a6a1f94b/ruamel.yaml.clib-0.2.14-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:6aeadc170090ff1889f0d2c3057557f9cd71f975f17535c26a5d37af98f19c27", size = 271775, upload-time = "2025-09-23T14:24:12.771Z" },
+    { url = "https://files.pythonhosted.org/packages/82/73/e628a92e80197ff6a79ab81ec3fa00d4cc082d58ab78d3337b7ba7043301/ruamel.yaml.clib-0.2.14-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:5e56ac47260c0eed992789fa0b8efe43404a9adb608608631a948cee4fc2b052", size = 138842, upload-time = "2025-09-22T19:50:49.156Z" },
+    { url = "https://files.pythonhosted.org/packages/2b/c5/346c7094344a60419764b4b1334d9e0285031c961176ff88ffb652405b0c/ruamel.yaml.clib-0.2.14-cp312-cp312-manylinux2014_aarch64.whl", hash = "sha256:a911aa73588d9a8b08d662b9484bc0567949529824a55d3885b77e8dd62a127a", size = 647404, upload-time = "2025-09-22T19:50:52.921Z" },
+    { url = "https://files.pythonhosted.org/packages/df/99/65080c863eb06d4498de3d6c86f3e90595e02e159fd8529f1565f56cfe2c/ruamel.yaml.clib-0.2.14-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a05ba88adf3d7189a974b2de7a9d56731548d35dc0a822ec3dc669caa7019b29", size = 753141, upload-time = "2025-09-22T19:50:50.294Z" },
+    { url = "https://files.pythonhosted.org/packages/3d/e3/0de85f3e3333f8e29e4b10244374a202a87665d1131798946ee22cf05c7c/ruamel.yaml.clib-0.2.14-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fb04c5650de6668b853623eceadcdb1a9f2fee381f5d7b6bc842ee7c239eeec4", size = 703477, upload-time = "2025-09-22T19:50:51.508Z" },
+    { url = "https://files.pythonhosted.org/packages/d9/25/0d2f09d8833c7fd77ab8efeff213093c16856479a9d293180a0d89f6bed9/ruamel.yaml.clib-0.2.14-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:df3ec9959241d07bc261f4983d25a1205ff37703faf42b474f15d54d88b4f8c9", size = 741157, upload-time = "2025-09-23T18:42:50.408Z" },
+    { url = "https://files.pythonhosted.org/packages/d3/8c/959f10c2e2153cbdab834c46e6954b6dd9e3b109c8f8c0a3cf1618310985/ruamel.yaml.clib-0.2.14-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:fbc08c02e9b147a11dfcaa1ac8a83168b699863493e183f7c0c8b12850b7d259", size = 745859, upload-time = "2025-09-22T19:50:54.497Z" },
+    { url = "https://files.pythonhosted.org/packages/ed/6b/e580a7c18b485e1a5f30a32cda96b20364b0ba649d9d2baaf72f8bd21f83/ruamel.yaml.clib-0.2.14-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:c099cafc1834d3c5dac305865d04235f7c21c167c8dd31ebc3d6bbc357e2f023", size = 770200, upload-time = "2025-09-22T19:50:55.718Z" },
+    { url = "https://files.pythonhosted.org/packages/ef/44/3455eebc761dc8e8fdced90f2b0a3fa61e32ba38b50de4130e2d57db0f21/ruamel.yaml.clib-0.2.14-cp312-cp312-win32.whl", hash = "sha256:b5b0f7e294700b615a3bcf6d28b26e6da94e8eba63b079f4ec92e9ba6c0d6b54", size = 98829, upload-time = "2025-09-22T19:50:58.895Z" },
+    { url = "https://files.pythonhosted.org/packages/76/ab/5121f7f3b651db93de546f8c982c241397aad0a4765d793aca1dac5eadee/ruamel.yaml.clib-0.2.14-cp312-cp312-win_amd64.whl", hash = "sha256:a37f40a859b503304dd740686359fcf541d6fb3ff7fc10f539af7f7150917c68", size = 115570, upload-time = "2025-09-22T19:50:57.981Z" },
+    { url = "https://files.pythonhosted.org/packages/d7/ae/e3811f05415594025e96000349d3400978adaed88d8f98d494352d9761ee/ruamel.yaml.clib-0.2.14-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:7e4f9da7e7549946e02a6122dcad00b7c1168513acb1f8a726b1aaf504a99d32", size = 269205, upload-time = "2025-09-23T14:24:15.06Z" },
+    { url = "https://files.pythonhosted.org/packages/72/06/7d51f4688d6d72bb72fa74254e1593c4f5ebd0036be5b41fe39315b275e9/ruamel.yaml.clib-0.2.14-cp313-cp313-macosx_15_0_arm64.whl", hash = "sha256:dd7546c851e59c06197a7c651335755e74aa383a835878ca86d2c650c07a2f85", size = 137417, upload-time = "2025-09-22T19:50:59.82Z" },
+    { url = "https://files.pythonhosted.org/packages/5a/08/b4499234a420ef42960eeb05585df5cc7eb25ccb8c980490b079e6367050/ruamel.yaml.clib-0.2.14-cp313-cp313-manylinux2014_aarch64.whl", hash = "sha256:1c1acc3a0209ea9042cc3cfc0790edd2eddd431a2ec3f8283d081e4d5018571e", size = 642558, upload-time = "2025-09-22T19:51:03.388Z" },
+    { url = "https://files.pythonhosted.org/packages/b6/ba/1975a27dedf1c4c33306ee67c948121be8710b19387aada29e2f139c43ee/ruamel.yaml.clib-0.2.14-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2070bf0ad1540d5c77a664de07ebcc45eebd1ddcab71a7a06f26936920692beb", size = 744087, upload-time = "2025-09-22T19:51:00.897Z" },
+    { url = "https://files.pythonhosted.org/packages/20/15/8a19a13d27f3bd09fa18813add8380a29115a47b553845f08802959acbce/ruamel.yaml.clib-0.2.14-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9bd8fe07f49c170e09d76773fb86ad9135e0beee44f36e1576a201b0676d3d1d", size = 699709, upload-time = "2025-09-22T19:51:02.075Z" },
+    { url = "https://files.pythonhosted.org/packages/19/ee/8d6146a079ad21e534b5083c9ee4a4c8bec42f79cf87594b60978286b39a/ruamel.yaml.clib-0.2.14-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ff86876889ea478b1381089e55cf9e345707b312beda4986f823e1d95e8c0f59", size = 708926, upload-time = "2025-09-23T18:42:51.707Z" },
+    { url = "https://files.pythonhosted.org/packages/a9/f5/426b714abdc222392e68f3b8ad323930d05a214a27c7e7a0f06c69126401/ruamel.yaml.clib-0.2.14-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:1f118b707eece8cf84ecbc3e3ec94d9db879d85ed608f95870d39b2d2efa5dca", size = 740202, upload-time = "2025-09-22T19:51:04.673Z" },
+    { url = "https://files.pythonhosted.org/packages/3d/ac/3c5c2b27a183f4fda8a57c82211721c016bcb689a4a175865f7646db9f94/ruamel.yaml.clib-0.2.14-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:b30110b29484adc597df6bd92a37b90e63a8c152ca8136aad100a02f8ba6d1b6", size = 765196, upload-time = "2025-09-22T19:51:05.916Z" },
+    { url = "https://files.pythonhosted.org/packages/92/2e/06f56a71fd55021c993ed6e848c9b2e5e9cfce180a42179f0ddd28253f7c/ruamel.yaml.clib-0.2.14-cp313-cp313-win32.whl", hash = "sha256:f4e97a1cf0b7a30af9e1d9dad10a5671157b9acee790d9e26996391f49b965a2", size = 98635, upload-time = "2025-09-22T19:51:08.183Z" },
+    { url = "https://files.pythonhosted.org/packages/51/79/76aba16a1689b50528224b182f71097ece338e7a4ab55e84c2e73443b78a/ruamel.yaml.clib-0.2.14-cp313-cp313-win_amd64.whl", hash = "sha256:090782b5fb9d98df96509eecdbcaffd037d47389a89492320280d52f91330d78", size = 115238, upload-time = "2025-09-22T19:51:07.081Z" },
+    { url = "https://files.pythonhosted.org/packages/21/e2/a59ff65c26aaf21a24eb38df777cb9af5d87ba8fc8107c163c2da9d1e85e/ruamel.yaml.clib-0.2.14-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:7df6f6e9d0e33c7b1d435defb185095386c469109de723d514142632a7b9d07f", size = 271441, upload-time = "2025-09-23T14:24:16.498Z" },
+    { url = "https://files.pythonhosted.org/packages/6b/fa/3234f913fe9a6525a7b97c6dad1f51e72b917e6872e051a5e2ffd8b16fbb/ruamel.yaml.clib-0.2.14-cp314-cp314-macosx_15_0_arm64.whl", hash = "sha256:70eda7703b8126f5e52fcf276e6c0f40b0d314674f896fc58c47b0aef2b9ae83", size = 137970, upload-time = "2025-09-22T19:51:09.472Z" },
+    { url = "https://files.pythonhosted.org/packages/ef/ec/4edbf17ac2c87fa0845dd366ef8d5852b96eb58fcd65fc1ecf5fe27b4641/ruamel.yaml.clib-0.2.14-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:a0cb71ccc6ef9ce36eecb6272c81afdc2f565950cdcec33ae8e6cd8f7fc86f27", size = 739639, upload-time = "2025-09-22T19:51:10.566Z" },
+    { url = "https://files.pythonhosted.org/packages/15/18/b0e1fafe59051de9e79cdd431863b03593ecfa8341c110affad7c8121efc/ruamel.yaml.clib-0.2.14-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:e7cb9ad1d525d40f7d87b6df7c0ff916a66bc52cb61b66ac1b2a16d0c1b07640", size = 764456, upload-time = "2025-09-22T19:51:11.736Z" },
 ]

 [[package]]