Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 09:53:45 +00:00)
feat(openapi): switch to fastapi-based generator (#3944)
Some checks failed
Pre-commit / pre-commit (push) Successful in 3m27s
SqlStore Integration Tests / test-postgres (3.12) (push) Failing after 0s
Integration Auth Tests / test-matrix (oauth2_token) (push) Failing after 1s
SqlStore Integration Tests / test-postgres (3.13) (push) Failing after 0s
Integration Tests (Replay) / generate-matrix (push) Successful in 3s
Test Llama Stack Build / generate-matrix (push) Successful in 3s
Test External Providers Installed via Module / test-external-providers-from-module (venv) (push) Has been skipped
Test llama stack list-deps / generate-matrix (push) Successful in 3s
Python Package Build Test / build (3.12) (push) Failing after 4s
API Conformance Tests / check-schema-compatibility (push) Successful in 11s
Test llama stack list-deps / show-single-provider (push) Successful in 25s
Test External API and Providers / test-external (venv) (push) Failing after 34s
Vector IO Integration Tests / test-matrix (push) Failing after 43s
Test Llama Stack Build / build (push) Successful in 37s
Test Llama Stack Build / build-single-provider (push) Successful in 48s
Test llama stack list-deps / list-deps-from-config (push) Successful in 52s
Test llama stack list-deps / list-deps (push) Failing after 52s
Python Package Build Test / build (3.13) (push) Failing after 1m2s
UI Tests / ui-tests (22) (push) Successful in 1m15s
Test Llama Stack Build / build-custom-container-distribution (push) Successful in 1m29s
Unit Tests / unit-tests (3.12) (push) Failing after 1m45s
Test Llama Stack Build / build-ubi9-container-distribution (push) Successful in 1m54s
Unit Tests / unit-tests (3.13) (push) Failing after 2m13s
Integration Tests (Replay) / Integration Tests (, , , client=, ) (push) Failing after 2m20s
# What does this PR do?
This replaces the legacy "pyopenapi + strong_typing" pipeline with a
FastAPI-backed generator that has an explicit schema registry inside
`llama_stack_api`. The key changes:
1. **New generator architecture.** FastAPI now builds the OpenAPI schema
directly from the real routes, while helper modules
(`schema_collection`, `endpoints`, `schema_transforms`, etc.)
post-process the result. The old pyopenapi stack and its strong_typing
helpers are removed entirely, so we no longer rely on fragile AST
analysis or top-level import side effects. A rough sketch of this flow
appears below.
2. **Schema registry in `llama_stack_api`.** `schema_utils.py` keeps a
`SchemaInfo` record for every `@json_schema_type` class, every
`register_schema` call, and every dynamically created request model. The
OpenAPI generator and other tooling query this registry instead of
scanning the package tree, which produces deterministic names (e.g.,
`{MethodName}Request`), captures all optional/nullable fields, and makes
schema discovery testable. A new unit test covers the registry behavior
(see the sketch after the test plan).
3. **Regenerated specs + CI alignment.** All docs/Stainless specs are
regenerated from the new pipeline, so optional/nullable fields now match
reality (expect the API Conformance workflow to report breaking changes;
this PR establishes the new baseline). The workflow itself is back to
the stock oasdiff invocation, so future regressions surface normally.
*Conformance will be RED on this PR; we choose to accept the
deviations.*
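
As a rough illustration of the new flow (the function names below are
placeholders for this sketch, not the actual generator API):

```python
# Minimal sketch of the FastAPI-backed generation flow, assuming a
# hypothetical build_app() elsewhere that mounts the real routes.
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi


def generate_openapi(app: FastAPI) -> dict:
    # FastAPI derives the schema directly from the registered routes,
    # so no AST scanning or import side effects are involved.
    spec = get_openapi(
        title="Llama Stack Specification",
        version="v1",
        routes=app.routes,
    )
    # Post-processing passes (schema_collection, endpoints,
    # schema_transforms, ...) would then rewrite `spec` in place
    # before it is written out under docs/static.
    return spec
```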
## Test Plan
- `uv run pytest tests/unit/server/test_schema_registry.py`
- `uv run python -m scripts.openapi_generator.main docs/static`
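
For context, the registry behavior exercised by the unit test above can
be pictured roughly like this (an illustrative sketch only; the real
`SchemaInfo` in `schema_utils.py` may differ in shape):

```python
# Hypothetical sketch of a schema registry; field and helper names are
# assumptions for illustration, not the actual llama_stack_api code.
from dataclasses import dataclass


@dataclass
class SchemaInfo:
    name: str    # deterministic name, e.g. "{MethodName}Request"
    model: type  # the registered Pydantic model or annotated type


_REGISTRY: dict[str, SchemaInfo] = {}


def register_schema(model: type, name: str | None = None) -> type:
    info = SchemaInfo(name=name or model.__name__, model=model)
    _REGISTRY[info.name] = info
    return model


def iter_schemas() -> list[SchemaInfo]:
    # Tooling queries the registry instead of scanning the package
    # tree, keeping schema discovery deterministic and testable.
    return sorted(_REGISTRY.values(), key=lambda s: s.name)
```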
---------
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
Parent: cc88789071
Commit: 97f535c4f1
64 changed files with 47592 additions and 30218 deletions
```diff
@@ -42,7 +42,6 @@ repos:
     hooks:
       - id: ruff
         args: [ --fix ]
-        exclude: ^(src/llama_stack_api/strong_typing/.*)$
       - id: ruff-format

   - repo: https://github.com/adamchainz/blacken-docs
@@ -106,16 +105,16 @@ repos:
         language: python
         pass_filenames: false
         require_serial: true
-        files: ^src/llama_stack/providers/.*$
+        files: ^src/llama_stack/providers/.*$|^scripts/run_openapi_generator.sh$
       - id: openapi-codegen
         name: API Spec Codegen
         additional_dependencies:
           - uv==0.7.8
-        entry: sh -c './scripts/uv-run-with-index.sh run ./docs/openapi_generator/run_openapi_generator.sh > /dev/null'
+        entry: sh -c './scripts/uv-run-with-index.sh run scripts/run_openapi_generator.sh'
         language: python
         pass_filenames: false
         require_serial: true
-        files: ^src/llama_stack/apis/|^docs/openapi_generator/
+        files: ^src/llama_stack_api/.*$
       - id: check-workflows-use-hashes
         name: Check GitHub Actions use SHA-pinned actions
         entry: ./scripts/check-workflows-use-hashes.sh
```
````diff
@@ -231,7 +231,7 @@ npm run serve
 If you modify or add new API endpoints, update the API documentation accordingly. You can do this by running the following command:

 ```bash
-uv run ./docs/openapi_generator/run_openapi_generator.sh
+uv run ./scripts/run_openapi_generator.sh
 ```

 The generated API schema will be available in `docs/static/`. Make sure to review the changes before committing.
````
```diff
@@ -5,4 +5,4 @@ These are the source-of-truth configuration files used to generate the Stainless

 A small side note: notice the `.yml` suffixes since Stainless uses that suffix typically for its configuration files.

-These files go hand-in-hand. As of now, only the `openapi.yml` file is automatically generated using the `run_openapi_generator.sh` script.
+These files go hand-in-hand. As of now, only the `openapi.yml` file is automatically generated using the `scripts/run_openapi_generator.sh` script.
```
```diff
@@ -115,9 +115,6 @@ resources:
       sampling_params: SamplingParams
       scoring_result: ScoringResult
       system_message: SystemMessage
-      query_result: RAGQueryResult
-      document: RAGDocument
-      query_config: RAGQueryConfig
   toolgroups:
     models:
       tool_group: ToolGroup
@@ -143,11 +140,6 @@ resources:
         endpoint: get /v1/tool-runtime/list-tools
         paginated: false
       invoke_tool: post /v1/tool-runtime/invoke
-    subresources:
-      rag_tool:
-        methods:
-          insert: post /v1/tool-runtime/rag-tool/insert
-          query: post /v1/tool-runtime/rag-tool/query

   responses:
     models:
@@ -173,6 +165,7 @@ resources:
       list:
         type: http
         endpoint: get /v1/responses/{response_id}/input_items
+        paginated: false

   prompts:
     models:
@@ -220,6 +213,9 @@ resources:
       create:
         type: http
         endpoint: post /v1/conversations/{conversation_id}/items
+      delete:
+        type: http
+        endpoint: delete /v1/conversations/{conversation_id}/items/{item_id}

   inspect:
     models:
@@ -252,6 +248,7 @@ resources:
       list:
         type: http
         endpoint: get /v1/chat/completions
+        paginated: false
       retrieve:
         type: http
         endpoint: get /v1/chat/completions/{completion_id}
@@ -375,6 +372,7 @@ resources:
         endpoint: get /v1/scoring-functions
         paginated: false
       register: post /v1/scoring-functions
+      unregister: delete /v1/scoring-functions/{scoring_fn_id}
     models:
       scoring_fn: ScoringFn
       scoring_fn_params: ScoringFnParams
@@ -392,6 +390,13 @@ resources:
       list_files_response: ListOpenAIFileResponse
       delete_file_response: OpenAIFileDeleteResponse

+  batches:
+    methods:
+      create: post /v1/batches
+      list: get /v1/batches
+      retrieve: get /v1/batches/{batch_id}
+      cancel: post /v1/batches/{batch_id}/cancel
+
   alpha:
     subresources:
       inference:
@@ -423,6 +428,7 @@ resources:
         endpoint: get /v1alpha/eval/benchmarks
         paginated: false
       register: post /v1alpha/eval/benchmarks
+      unregister: delete /v1alpha/eval/benchmarks/{benchmark_id}
     models:
       benchmark: Benchmark
       list_benchmarks_response: ListBenchmarksResponse
@@ -519,7 +525,7 @@ readme:
       params: &ref_0 {}
     headline:
       type: request
-      endpoint: post /v1/models
+      endpoint: get /v1/models
       params: *ref_0
     pagination:
       type: request
```
*(One file diff suppressed because it is too large.)*
```diff
@@ -1 +0,0 @@
-The RFC Specification (OpenAPI format) is generated from the set of API endpoints located in `llama_stack.core/server/endpoints.py` using the `generate.py` utility.
```
```diff
@@ -1,134 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the terms described in the LICENSE file in
-# the root directory of this source tree.
-
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the terms described found in the
-# LICENSE file in the root directory of this source tree.
-
-from datetime import datetime
-from pathlib import Path
-import sys
-import fire
-import ruamel.yaml as yaml
-
-from llama_stack_api import LLAMA_STACK_API_V1  # noqa: E402
-from llama_stack.core.stack import LlamaStack  # noqa: E402
-
-from .pyopenapi.options import Options  # noqa: E402
-from .pyopenapi.specification import Info, Server  # noqa: E402
-from .pyopenapi.utility import Specification, validate_api  # noqa: E402
-
-
-def str_presenter(dumper, data):
-    if data.startswith(f"/{LLAMA_STACK_API_V1}") or data.startswith(
-        "#/components/schemas/"
-    ):
-        style = None
-    else:
-        style = ">" if "\n" in data or len(data) > 40 else None
-    return dumper.represent_scalar("tag:yaml.org,2002:str", data, style=style)
-
-
-def generate_spec(output_dir: Path, stability_filter: str = None, main_spec: bool = False, combined_spec: bool = False):
-    """Generate OpenAPI spec with optional stability filtering."""
-
-    if combined_spec:
-        # Special case for combined stable + experimental APIs
-        title_suffix = " - Stable & Experimental APIs"
-        filename_prefix = "stainless-"
-        description_suffix = "\n\n**🔗 COMBINED**: This specification includes both stable production-ready APIs and experimental pre-release APIs. Use stable APIs for production deployments and experimental APIs for testing new features."
-        # Use the special "stainless" filter to include stable + experimental APIs
-        stability_filter = "stainless"
-    elif stability_filter:
-        title_suffix = {
-            "stable": " - Stable APIs" if not main_spec else "",
-            "experimental": " - Experimental APIs",
-            "deprecated": " - Deprecated APIs"
-        }.get(stability_filter, f" - {stability_filter.title()} APIs")
-
-        # Use main spec filename for stable when main_spec=True
-        if main_spec and stability_filter == "stable":
-            filename_prefix = ""
-        else:
-            filename_prefix = f"{stability_filter}-"
-
-        description_suffix = {
-            "stable": "\n\n**✅ STABLE**: Production-ready APIs with backward compatibility guarantees.",
-            "experimental": "\n\n**🧪 EXPERIMENTAL**: Pre-release APIs (v1alpha, v1beta) that may change before becoming stable.",
-            "deprecated": "\n\n**⚠️ DEPRECATED**: Legacy APIs that may be removed in future versions. Use for migration reference only."
-        }.get(stability_filter, "")
-    else:
-        title_suffix = ""
-        filename_prefix = ""
-        description_suffix = ""
-
-    spec = Specification(
-        LlamaStack,
-        Options(
-            server=Server(url="http://any-hosted-llama-stack.com"),
-            info=Info(
-                title=f"Llama Stack Specification{title_suffix}",
-                version=LLAMA_STACK_API_V1,
-                description=f"""This is the specification of the Llama Stack that provides
-a set of endpoints and their corresponding interfaces that are tailored to
-best leverage Llama Models.{description_suffix}""",
-            ),
-            include_standard_error_responses=True,
-            stability_filter=stability_filter,  # Pass the filter to the generator
-        ),
-    )
-
-    yaml_filename = f"{filename_prefix}llama-stack-spec.yaml"
-
-    with open(output_dir / yaml_filename, "w", encoding="utf-8") as fp:
-        y = yaml.YAML()
-        y.default_flow_style = False
-        y.block_seq_indent = 2
-        y.map_indent = 2
-        y.sequence_indent = 4
-        y.sequence_dash_offset = 2
-        y.width = 80
-        y.allow_unicode = True
-        y.representer.add_representer(str, str_presenter)
-
-        y.dump(
-            spec.get_json(),
-            fp,
-        )
-
-def main(output_dir: str):
-    output_dir = Path(output_dir)
-    if not output_dir.exists():
-        raise ValueError(f"Directory {output_dir} does not exist")
-
-    # Validate API protocols before generating spec
-    return_type_errors = validate_api()
-    if return_type_errors:
-        print("\nAPI Method Return Type Validation Errors:\n")
-        for error in return_type_errors:
-            print(error, file=sys.stderr)
-        sys.exit(1)
-
-    now = str(datetime.now())
-    print(f"Converting the spec to YAML (openapi.yaml) and HTML (openapi.html) at {now}")
-    print("")
-
-    # Generate main spec as stable APIs (llama-stack-spec.yaml)
-    print("Generating main specification (stable APIs)...")
-    generate_spec(output_dir, "stable", main_spec=True)
-
-    print("Generating other stability-filtered specifications...")
-    generate_spec(output_dir, "experimental")
-    generate_spec(output_dir, "deprecated")
-
-    print("Generating combined stable + experimental specification...")
-    generate_spec(output_dir, combined_spec=True)
-
-
-if __name__ == "__main__":
-    fire.Fire(main)
```
```diff
@@ -1 +0,0 @@
-This is forked from https://github.com/hunyadi/pyopenapi
```
*(One file diff suppressed because it is too large.)*
```diff
@@ -1,459 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the terms described in the LICENSE file in
-# the root directory of this source tree.
-
-import collections.abc
-import enum
-import inspect
-import typing
-from dataclasses import dataclass
-from typing import Any, Callable, Dict, Iterable, Iterator, List, Optional, Tuple, Union
-
-from termcolor import colored
-
-from typing import get_origin, get_args
-
-from fastapi import UploadFile
-from fastapi.params import File, Form
-from typing import Annotated
-
-from llama_stack_api import (
-    ExtraBodyField,
-    LLAMA_STACK_API_V1,
-    LLAMA_STACK_API_V1ALPHA,
-    LLAMA_STACK_API_V1BETA,
-    get_signature,
-)
-
-
-def split_prefix(
-    s: str, sep: str, prefix: Union[str, Iterable[str]]
-) -> Tuple[Optional[str], str]:
-    """
-    Recognizes a prefix at the beginning of a string.
-
-    :param s: The string to check.
-    :param sep: A separator between (one of) the prefix(es) and the rest of the string.
-    :param prefix: A string or a set of strings to identify as a prefix.
-    :return: A tuple of the recognized prefix (if any) and the rest of the string excluding the separator (or the entire string).
-    """
-
-    if isinstance(prefix, str):
-        if s.startswith(prefix + sep):
-            return prefix, s[len(prefix) + len(sep) :]
-        else:
-            return None, s
-
-    for p in prefix:
-        if s.startswith(p + sep):
-            return p, s[len(p) + len(sep) :]
-
-    return None, s
-
-
-def _get_annotation_type(annotation: Union[type, str], callable: Callable) -> type:
-    "Maps a stringized reference to a type, as if using `from __future__ import annotations`."
-
-    if isinstance(annotation, str):
-        return eval(annotation, callable.__globals__)
-    else:
-        return annotation
-
-
-class HTTPMethod(enum.Enum):
-    "HTTP method used to invoke an endpoint operation."
-
-    GET = "GET"
-    POST = "POST"
-    PUT = "PUT"
-    DELETE = "DELETE"
-    PATCH = "PATCH"
-
-
-OperationParameter = Tuple[str, type]
-
-
-class ValidationError(TypeError):
-    pass
-
-
-@dataclass
-class EndpointOperation:
-    """
-    Type information and metadata associated with an endpoint operation.
-
-    "param defining_class: The most specific class that defines the endpoint operation.
-    :param name: The short name of the endpoint operation.
-    :param func_name: The name of the function to invoke when the operation is triggered.
-    :param func_ref: The callable to invoke when the operation is triggered.
-    :param route: A custom route string assigned to the operation.
-    :param path_params: Parameters of the operation signature that are passed in the path component of the URL string.
-    :param query_params: Parameters of the operation signature that are passed in the query string as `key=value` pairs.
-    :param request_params: The parameter that corresponds to the data transmitted in the request body.
-    :param multipart_params: Parameters that indicate multipart/form-data request body.
-    :param extra_body_params: Parameters that arrive via extra_body and are documented but not in SDK.
-    :param event_type: The Python type of the data that is transmitted out-of-band (e.g. via websockets) while the operation is in progress.
-    :param response_type: The Python type of the data that is transmitted in the response body.
-    :param http_method: The HTTP method used to invoke the endpoint such as POST, GET or PUT.
-    :param public: True if the operation can be invoked without prior authentication.
-    :param request_examples: Sample requests that the operation might take.
-    :param response_examples: Sample responses that the operation might produce.
-    """
-
-    defining_class: type
-    name: str
-    func_name: str
-    func_ref: Callable[..., Any]
-    route: Optional[str]
-    path_params: List[OperationParameter]
-    query_params: List[OperationParameter]
-    request_params: Optional[OperationParameter]
-    multipart_params: List[OperationParameter]
-    extra_body_params: List[tuple[str, type, str | None]]
-    event_type: Optional[type]
-    response_type: type
-    http_method: HTTPMethod
-    public: bool
-    request_examples: Optional[List[Any]] = None
-    response_examples: Optional[List[Any]] = None
-
-    def get_route(self, webmethod) -> str:
-        api_level = webmethod.level
-
-        if self.route is not None:
-            return "/".join(["", api_level, self.route.lstrip("/")])
-
-        route_parts = ["", api_level, self.name]
-        for param_name, _ in self.path_params:
-            route_parts.append("{" + param_name + "}")
-        return "/".join(route_parts)
-
-
-class _FormatParameterExtractor:
-    "A visitor to exract parameters in a format string."
-
-    keys: List[str]
-
-    def __init__(self) -> None:
-        self.keys = []
-
-    def __getitem__(self, key: str) -> None:
-        self.keys.append(key)
-        return None
-
-
-def _get_route_parameters(route: str) -> List[str]:
-    extractor = _FormatParameterExtractor()
-    # Replace all occurrences of ":path" with empty string
-    route = route.replace(":path", "")
-    route.format_map(extractor)
-    return extractor.keys
-
-
-def _get_endpoint_functions(
-    endpoint: type, prefixes: List[str]
-) -> Iterator[Tuple[str, str, str, Callable]]:
-    if not inspect.isclass(endpoint):
-        raise ValueError(f"object is not a class type: {endpoint}")
-
-    functions = inspect.getmembers(endpoint, inspect.isfunction)
-    for func_name, func_ref in functions:
-        webmethods = []
-
-        # Check for multiple webmethods (stacked decorators)
-        if hasattr(func_ref, "__webmethods__"):
-            webmethods = func_ref.__webmethods__
-
-        if not webmethods:
-            continue
-
-        for webmethod in webmethods:
-            print(f"Processing {colored(func_name, 'white')}...")
-            operation_name = func_name
-
-            if webmethod.method == "GET":
-                prefix = "get"
-            elif webmethod.method == "DELETE":
-                prefix = "delete"
-            elif webmethod.method == "POST":
-                prefix = "post"
-            elif operation_name.startswith("get_") or operation_name.endswith("/get"):
-                prefix = "get"
-            elif (
-                operation_name.startswith("delete_")
-                or operation_name.startswith("remove_")
-                or operation_name.endswith("/delete")
-                or operation_name.endswith("/remove")
-            ):
-                prefix = "delete"
-            else:
-                # by default everything else is a POST
-                prefix = "post"
-
-            yield prefix, operation_name, func_name, func_ref
-
-
-def _get_defining_class(member_fn: str, derived_cls: type) -> type:
-    "Find the class in which a member function is first defined in a class inheritance hierarchy."
-
-    # iterate in reverse member resolution order to find most specific class first
-    for cls in reversed(inspect.getmro(derived_cls)):
-        for name, _ in inspect.getmembers(cls, inspect.isfunction):
-            if name == member_fn:
-                return cls
-
-    raise ValidationError(
-        f"cannot find defining class for {member_fn} in {derived_cls}"
-    )
-
-
-def get_endpoint_operations(
-    endpoint: type, use_examples: bool = True
-) -> List[EndpointOperation]:
-    """
-    Extracts a list of member functions in a class eligible for HTTP interface binding.
-
-    These member functions are expected to have a signature like
-    ```
-    async def get_object(self, uuid: str, version: int) -> Object:
-        ...
-    ```
-    where the prefix `get_` translates to an HTTP GET, `object` corresponds to the name of the endpoint operation,
-    `uuid` and `version` are mapped to route path elements in "/object/{uuid}/{version}", and `Object` becomes
-    the response payload type, transmitted as an object serialized to JSON.
-
-    If the member function has a composite class type in the argument list, it becomes the request payload type,
-    and the caller is expected to provide the data as serialized JSON in an HTTP POST request.
-
-    :param endpoint: A class with member functions that can be mapped to an HTTP endpoint.
-    :param use_examples: Whether to return examples associated with member functions.
-    """
-
-    result = []
-
-    for prefix, operation_name, func_name, func_ref in _get_endpoint_functions(
-        endpoint,
-        [
-            "create",
-            "delete",
-            "do",
-            "get",
-            "post",
-            "put",
-            "remove",
-            "set",
-            "update",
-        ],
-    ):
-        # Get all webmethods for this function
-        webmethods = getattr(func_ref, "__webmethods__", [])
-
-        # Create one EndpointOperation for each webmethod
-        for webmethod in webmethods:
-            route = webmethod.route
-            route_params = _get_route_parameters(route) if route is not None else None
-            public = webmethod.public
-            request_examples = webmethod.request_examples
-            response_examples = webmethod.response_examples
-
-            # inspect function signature for path and query parameters, and request/response payload type
-            signature = get_signature(func_ref)
-
-            path_params = []
-            query_params = []
-            request_params = []
-            multipart_params = []
-            extra_body_params = []
-
-            for param_name, parameter in signature.parameters.items():
-                param_type = _get_annotation_type(parameter.annotation, func_ref)
-
-                # omit "self" for instance methods
-                if param_name == "self" and param_type is inspect.Parameter.empty:
-                    continue
-
-                # check if all parameters have explicit type
-                if parameter.annotation is inspect.Parameter.empty:
-                    raise ValidationError(
-                        f"parameter '{param_name}' in function '{func_name}' has no type annotation"
-                    )
-
-                # Check if this is an extra_body parameter
-                is_extra_body, extra_body_desc = _is_extra_body_param(param_type)
-                if is_extra_body:
-                    # Store in a separate list for documentation
-                    extra_body_params.append((param_name, param_type, extra_body_desc))
-                    continue  # Skip adding to request_params
-
-                is_multipart = _is_multipart_param(param_type)
-
-                if prefix in ["get", "delete"]:
-                    if route_params is not None and param_name in route_params:
-                        path_params.append((param_name, param_type))
-                    else:
-                        query_params.append((param_name, param_type))
-                else:
-                    if route_params is not None and param_name in route_params:
-                        path_params.append((param_name, param_type))
-                    elif is_multipart:
-                        multipart_params.append((param_name, param_type))
-                    else:
-                        request_params.append((param_name, param_type))
-
-            # check if function has explicit return type
-            if signature.return_annotation is inspect.Signature.empty:
-                raise ValidationError(
-                    f"function '{func_name}' has no return type annotation"
-                )
-
-            return_type = _get_annotation_type(signature.return_annotation, func_ref)
-
-            # operations that produce events are labeled as Generator[YieldType, SendType, ReturnType]
-            # where YieldType is the event type, SendType is None, and ReturnType is the immediate response type to the request
-            if typing.get_origin(return_type) is collections.abc.Generator:
-                event_type, send_type, response_type = typing.get_args(return_type)
-                if send_type is not type(None):
-                    raise ValidationError(
-                        f"function '{func_name}' has a return type Generator[Y,S,R] and therefore looks like an event but has an explicit send type"
-                    )
-            else:
-                event_type = None
-
-                def process_type(t):
-                    if typing.get_origin(t) is collections.abc.AsyncIterator:
-                        # NOTE(ashwin): this is SSE and there is no way to represent it. either we make it a List
-                        # or the item type. I am choosing it to be the latter
-                        args = typing.get_args(t)
-                        return args[0]
-                    elif typing.get_origin(t) is typing.Union:
-                        types = [process_type(a) for a in typing.get_args(t)]
-                        return typing._UnionGenericAlias(typing.Union, tuple(types))
-                    else:
-                        return t
-
-                response_type = process_type(return_type)
-
-            if prefix in ["delete", "remove"]:
-                http_method = HTTPMethod.DELETE
-            elif prefix == "post":
-                http_method = HTTPMethod.POST
-            elif prefix == "get":
-                http_method = HTTPMethod.GET
-            elif prefix == "set":
-                http_method = HTTPMethod.PUT
-            elif prefix == "update":
-                http_method = HTTPMethod.PATCH
-            else:
-                raise ValidationError(f"unknown prefix {prefix}")
-
-            # Create an EndpointOperation for this specific webmethod
-            operation = EndpointOperation(
-                defining_class=_get_defining_class(func_name, endpoint),
-                name=operation_name,
-                func_name=func_name,
-                func_ref=func_ref,
-                route=route,
-                path_params=path_params,
-                query_params=query_params,
-                request_params=request_params,
-                multipart_params=multipart_params,
-                extra_body_params=extra_body_params,
-                event_type=event_type,
-                response_type=response_type,
-                http_method=http_method,
-                public=public,
-                request_examples=request_examples if use_examples else None,
-                response_examples=response_examples if use_examples else None,
-            )
-
-            # Store the specific webmethod with this operation
-            operation.webmethod = webmethod
-            result.append(operation)
-
-    if not result:
-        raise ValidationError(f"no eligible endpoint operations in type {endpoint}")
-
-    return result
-
-
-def get_endpoint_events(endpoint: type) -> Dict[str, type]:
-    results = {}
-
-    for decl in typing.get_type_hints(endpoint).values():
-        # check if signature is Callable[...]
-        origin = typing.get_origin(decl)
-        if origin is None or not issubclass(origin, Callable):  # type: ignore
-            continue
-
-        # check if signature is Callable[[...], Any]
-        args = typing.get_args(decl)
-        if len(args) != 2:
-            continue
-        params_type, return_type = args
-        if not isinstance(params_type, list):
-            continue
-
-        # check if signature is Callable[[...], None]
-        if not issubclass(return_type, type(None)):
-            continue
-
-        # check if signature is Callable[[EventType], None]
-        if len(params_type) != 1:
-            continue
-
-        param_type = params_type[0]
-        results[param_type.__name__] = param_type
-
-    return results
-
-
-def _is_multipart_param(param_type: type) -> bool:
-    """
-    Check if a parameter type indicates multipart form data.
-
-    Returns True if the type is:
-    - UploadFile
-    - Annotated[UploadFile, File()]
-    - Annotated[str, Form()]
-    - Annotated[Any, File()]
-    - Annotated[Any, Form()]
-    """
-    if param_type is UploadFile:
-        return True
-
-    # Check for Annotated types
-    origin = get_origin(param_type)
-    if origin is None:
-        return False
-
-    if origin is Annotated:
-        args = get_args(param_type)
-        if len(args) < 2:
-            return False
-
-        # Check the annotations for File() or Form()
-        for annotation in args[1:]:
-            if isinstance(annotation, (File, Form)):
-                return True
-    return False
-
-
-def _is_extra_body_param(param_type: type) -> tuple[bool, str | None]:
-    """
-    Check if parameter is marked as coming from extra_body.
-
-    Returns:
-        (is_extra_body, description): Tuple of boolean and optional description
-    """
-    origin = get_origin(param_type)
-    if origin is Annotated:
-        args = get_args(param_type)
-        for annotation in args[1:]:
-            if isinstance(annotation, ExtraBodyField):
-                return True, annotation.description
-            # Also check by type name for cases where import matters
-            if type(annotation).__name__ == 'ExtraBodyField':
-                return True, getattr(annotation, 'description', None)
-    return False, None
```
```diff
@@ -1,78 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the terms described in the LICENSE file in
-# the root directory of this source tree.
-
-import dataclasses
-from dataclasses import dataclass
-from http import HTTPStatus
-from typing import Callable, ClassVar, Dict, List, Optional, Tuple, Union
-
-from .specification import (
-    Info,
-    SecurityScheme,
-    SecuritySchemeAPI,
-    SecuritySchemeHTTP,
-    SecuritySchemeOpenIDConnect,
-    Server,
-)
-
-HTTPStatusCode = Union[HTTPStatus, int, str]
-
-
-@dataclass
-class Options:
-    """
-    :param server: Base URL for the API endpoint.
-    :param info: Meta-information for the endpoint specification.
-    :param version: OpenAPI specification version as a tuple of major, minor, revision.
-    :param default_security_scheme: Security scheme to apply to endpoints, unless overridden on a per-endpoint basis.
-    :param extra_types: Extra types in addition to those found in operation signatures. Use a dictionary to group related types.
-    :param use_examples: Whether to emit examples for operations.
-    :param success_responses: Associates operation response types with HTTP status codes.
-    :param error_responses: Associates error response types with HTTP status codes.
-    :param error_wrapper: True if errors are encapsulated in an error object wrapper.
-    :param property_description_fun: Custom transformation function to apply to class property documentation strings.
-    :param captions: User-defined captions for sections such as "Operations" or "Types", and (if applicable) groups of extra types.
-    :param include_standard_error_responses: Whether to include standard error responses (400, 429, 500, 503) in all operations.
-    """
-
-    server: Server
-    info: Info
-    version: Tuple[int, int, int] = (3, 1, 0)
-    default_security_scheme: Optional[SecurityScheme] = None
-    extra_types: Union[List[type], Dict[str, List[type]], None] = None
-    use_examples: bool = True
-    success_responses: Dict[type, HTTPStatusCode] = dataclasses.field(
-        default_factory=dict
-    )
-    error_responses: Dict[type, HTTPStatusCode] = dataclasses.field(
-        default_factory=dict
-    )
-    error_wrapper: bool = False
-    property_description_fun: Optional[Callable[[type, str, str], str]] = None
-    captions: Optional[Dict[str, str]] = None
-    include_standard_error_responses: bool = True
-    stability_filter: Optional[str] = None
-
-    default_captions: ClassVar[Dict[str, str]] = {
-        "Operations": "Operations",
-        "Types": "Types",
-        "Events": "Events",
-        "AdditionalTypes": "Additional types",
-    }
-
-    def map(self, id: str) -> str:
-        "Maps a language-neutral placeholder string to language-dependent text."
-
-        if self.captions is not None:
-            caption = self.captions.get(id)
-            if caption is not None:
-                return caption
-
-        caption = self.__class__.default_captions.get(id)
-        if caption is not None:
-            return caption
-
-        raise KeyError(f"no caption found for ID: {id}")
```
```diff
@@ -1,269 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the terms described in the LICENSE file in
-# the root directory of this source tree.
-
-import dataclasses
-import enum
-from dataclasses import dataclass
-from typing import Any, ClassVar, Dict, List, Optional, Union
-
-from llama_stack_api import JsonType, Schema, StrictJsonType
-
-URL = str
-
-
-@dataclass
-class Ref:
-    ref_type: ClassVar[str]
-    id: str
-
-    def to_json(self) -> StrictJsonType:
-        return {"$ref": f"#/components/{self.ref_type}/{self.id}"}
-
-
-@dataclass
-class SchemaRef(Ref):
-    ref_type: ClassVar[str] = "schemas"
-
-
-SchemaOrRef = Union[Schema, SchemaRef]
-
-
-@dataclass
-class ResponseRef(Ref):
-    ref_type: ClassVar[str] = "responses"
-
-
-@dataclass
-class ParameterRef(Ref):
-    ref_type: ClassVar[str] = "parameters"
-
-
-@dataclass
-class ExampleRef(Ref):
-    ref_type: ClassVar[str] = "examples"
-
-
-@dataclass
-class Contact:
-    name: Optional[str] = None
-    url: Optional[URL] = None
-    email: Optional[str] = None
-
-
-@dataclass
-class License:
-    name: str
-    url: Optional[URL] = None
-
-
-@dataclass
-class Info:
-    title: str
-    version: str
-    description: Optional[str] = None
-    termsOfService: Optional[str] = None
-    contact: Optional[Contact] = None
-    license: Optional[License] = None
-
-
-@dataclass
-class MediaType:
-    schema: Optional[SchemaOrRef] = None
-    example: Optional[Any] = None
-    examples: Optional[Dict[str, Union["Example", ExampleRef]]] = None
-
-
-@dataclass
-class RequestBody:
-    content: Dict[str, MediaType | Dict[str, Any]]
-    description: Optional[str] = None
-    required: Optional[bool] = None
-
-
-@dataclass
-class Response:
-    description: str
-    content: Optional[Dict[str, MediaType]] = None
-
-
-class ParameterLocation(enum.Enum):
-    Query = "query"
-    Header = "header"
-    Path = "path"
-    Cookie = "cookie"
-
-
-@dataclass
-class Parameter:
-    name: str
-    in_: ParameterLocation
-    description: Optional[str] = None
-    required: Optional[bool] = None
-    schema: Optional[SchemaOrRef] = None
-    example: Optional[Any] = None
-
-
-@dataclass
-class ExtraBodyParameter:
-    """Represents a parameter that arrives via extra_body in the request."""
-    name: str
-    schema: SchemaOrRef
-    description: Optional[str] = None
-    required: Optional[bool] = None
-
-
-@dataclass
-class Operation:
-    responses: Dict[str, Union[Response, ResponseRef]]
-    tags: Optional[List[str]] = None
-    summary: Optional[str] = None
-    description: Optional[str] = None
-    operationId: Optional[str] = None
-    parameters: Optional[List[Parameter]] = None
-    requestBody: Optional[RequestBody] = None
-    callbacks: Optional[Dict[str, "Callback"]] = None
-    security: Optional[List["SecurityRequirement"]] = None
-    deprecated: Optional[bool] = None
-    extraBodyParameters: Optional[List[ExtraBodyParameter]] = None
-
-
-@dataclass
-class PathItem:
-    summary: Optional[str] = None
-    description: Optional[str] = None
-    get: Optional[Operation] = None
-    put: Optional[Operation] = None
-    post: Optional[Operation] = None
-    delete: Optional[Operation] = None
-    options: Optional[Operation] = None
-    head: Optional[Operation] = None
-    patch: Optional[Operation] = None
-    trace: Optional[Operation] = None
-
-    def update(self, other: "PathItem") -> None:
-        "Merges another instance of this class into this object."
-
-        for field in dataclasses.fields(self.__class__):
-            value = getattr(other, field.name)
-            if value is not None:
-                setattr(self, field.name, value)
-
-
-# maps run-time expressions such as "$request.body#/url" to path items
-Callback = Dict[str, PathItem]
-
-
-@dataclass
-class Example:
-    summary: Optional[str] = None
-    description: Optional[str] = None
-    value: Optional[Any] = None
-    externalValue: Optional[URL] = None
-
-
-@dataclass
-class Server:
-    url: URL
-    description: Optional[str] = None
-
-
-class SecuritySchemeType(enum.Enum):
-    ApiKey = "apiKey"
-    HTTP = "http"
-    OAuth2 = "oauth2"
-    OpenIDConnect = "openIdConnect"
-
-
-@dataclass
-class SecurityScheme:
-    type: SecuritySchemeType
-    description: str
-
-
-@dataclass(init=False)
-class SecuritySchemeAPI(SecurityScheme):
-    name: str
-    in_: ParameterLocation
-
-    def __init__(self, description: str, name: str, in_: ParameterLocation) -> None:
-        super().__init__(SecuritySchemeType.ApiKey, description)
-        self.name = name
-        self.in_ = in_
-
-
-@dataclass(init=False)
-class SecuritySchemeHTTP(SecurityScheme):
-    scheme: str
-    bearerFormat: Optional[str] = None
-
-    def __init__(
-        self, description: str, scheme: str, bearerFormat: Optional[str] = None
-    ) -> None:
-        super().__init__(SecuritySchemeType.HTTP, description)
-        self.scheme = scheme
-        self.bearerFormat = bearerFormat
-
-
-@dataclass(init=False)
-class SecuritySchemeOpenIDConnect(SecurityScheme):
-    openIdConnectUrl: str
-
-    def __init__(self, description: str, openIdConnectUrl: str) -> None:
-        super().__init__(SecuritySchemeType.OpenIDConnect, description)
-        self.openIdConnectUrl = openIdConnectUrl
-
-
-@dataclass
-class Components:
-    schemas: Optional[Dict[str, Schema]] = None
-    responses: Optional[Dict[str, Response]] = None
-    parameters: Optional[Dict[str, Parameter]] = None
-    examples: Optional[Dict[str, Example]] = None
-    requestBodies: Optional[Dict[str, RequestBody]] = None
-    securitySchemes: Optional[Dict[str, SecurityScheme]] = None
-    callbacks: Optional[Dict[str, Callback]] = None
-
-
-SecurityScope = str
-SecurityRequirement = Dict[str, List[SecurityScope]]
-
-
-@dataclass
-class Tag:
-    name: str
-    description: Optional[str] = None
-    displayName: Optional[str] = None
-
-
-@dataclass
-class TagGroup:
-    """
-    A ReDoc extension to provide information about groups of tags.
-
-    Exposed via the vendor-specific property "x-tagGroups" of the top-level object.
-    """
-
-    name: str
-    tags: List[str]
-
-
-@dataclass
-class Document:
-    """
-    This class is a Python dataclass adaptation of the OpenAPI Specification.
-
-    For details, see <https://swagger.io/specification/>
-    """
-
-    openapi: str
-    info: Info
-    servers: List[Server]
-    paths: Dict[str, PathItem]
-    jsonSchemaDialect: Optional[str] = None
-    components: Optional[Components] = None
-    security: Optional[List[SecurityRequirement]] = None
-    tags: Optional[List[Tag]] = None
-    tagGroups: Optional[List[TagGroup]] = None
```
```diff
@@ -1,41 +0,0 @@
-<!DOCTYPE html>
-<html>
-
-<head>
-  <meta charset="utf-8" />
-  <meta name="viewport" content="width=device-width, initial-scale=1">
-  <title>OpenAPI specification</title>
-  <link href="https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700" rel="stylesheet">
-  <script type="module" src="https://cdn.jsdelivr.net/npm/@stoplight/elements/web-components.min.js"></script>
-  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@stoplight/elements/styles.min.css">
-  <style>
-    body {
-      margin: 0;
-      padding: 0;
-      height: 100vh;
-    }
-
-    elements-api {
-      height: 100%;
-    }
-  </style>
-</head>
-
-<body>
-  <elements-api id="openapi-container" router="hash" layout="sidebar" hideExport="true"
-    hideInternal="true"></elements-api>
-
-  <script>
-    document.addEventListener("DOMContentLoaded", function () {
-      const spec = { /* OPENAPI_SPECIFICATION */ };
-      const element = document.getElementById("openapi-container");
-      element.apiDescriptionDocument = spec;
-
-      if (spec.info && spec.info.title) {
-        document.title = spec.info.title;
-      }
-    });
-  </script>
-</body>
-
-</html>
```
@ -1,287 +0,0 @@
|
||||||
# Copyright (c) Meta Platforms, Inc. and affiliates.
|
|
||||||
# All rights reserved.
|
|
||||||
#
|
|
||||||
# This source code is licensed under the terms described in the LICENSE file in
|
|
||||||
# the root directory of this source tree.
|
|
||||||
|
|
||||||
import json
|
|
||||||
import typing
|
|
||||||
import inspect
|
|
||||||
from pathlib import Path
|
|
||||||
from typing import Any, List, Optional, TextIO, Union, get_type_hints, get_origin, get_args
|
|
||||||
|
|
||||||
from pydantic import BaseModel
|
|
||||||
from llama_stack_api import StrictJsonType, is_unwrapped_body_param, object_to_json
|
|
||||||
from llama_stack.core.resolver import api_protocol_map
|
|
||||||
|
|
||||||
from .generator import Generator
|
|
||||||
from .options import Options
|
|
||||||
from .specification import Document
|
|
||||||
|
|
||||||
THIS_DIR = Path(__file__).parent
|
|
||||||
|
|
||||||
|
|
||||||
class Specification:
|
|
||||||
document: Document
|
|
||||||
|
|
||||||
def __init__(self, endpoint: type, options: Options):
|
|
||||||
generator = Generator(endpoint, options)
|
|
||||||
self.document = generator.generate()
|
|
||||||
|
|
||||||
def get_json(self) -> StrictJsonType:
|
|
||||||
"""
|
|
||||||
Returns the OpenAPI specification as a Python data type (e.g. `dict` for an object, `list` for an array).
|
|
||||||
|
|
||||||
The result can be serialized to a JSON string with `json.dump` or `json.dumps`.
|
|
||||||
"""
|
|
||||||
|
|
||||||
json_doc = typing.cast(StrictJsonType, object_to_json(self.document))
|
|
||||||
|
|
||||||
if isinstance(json_doc, dict):
|
|
||||||
# rename vendor-specific properties
|
|
||||||
tag_groups = json_doc.pop("tagGroups", None)
|
|
||||||
if tag_groups:
|
|
||||||
json_doc["x-tagGroups"] = tag_groups
|
|
||||||
tags = json_doc.get("tags")
|
|
||||||
if tags and isinstance(tags, list):
|
|
||||||
for tag in tags:
|
|
||||||
if not isinstance(tag, dict):
|
|
||||||
continue
|
|
||||||
|
|
||||||
display_name = tag.pop("displayName", None)
|
|
||||||
if display_name:
|
|
||||||
tag["x-displayName"] = display_name
|
|
||||||
|
|
||||||
# Handle operations to rename extraBodyParameters -> x-llama-stack-extra-body-params
|
|
||||||
paths = json_doc.get("paths", {})
|
|
||||||
for path_item in paths.values():
|
|
||||||
if isinstance(path_item, dict):
|
|
||||||
for method in ["get", "post", "put", "delete", "patch"]:
|
|
||||||
operation = path_item.get(method)
|
|
||||||
if operation and isinstance(operation, dict):
|
|
||||||
extra_body_params = operation.pop("extraBodyParameters", None)
|
|
||||||
if extra_body_params:
|
|
||||||
operation["x-llama-stack-extra-body-params"] = extra_body_params
|
|
||||||
|
|
||||||
return json_doc
|
|
||||||
|
|
||||||
def get_json_string(self, pretty_print: bool = False) -> str:
|
|
||||||
"""
|
|
||||||
Returns the OpenAPI specification as a JSON string.
|
|
||||||
|
|
||||||
:param pretty_print: Whether to use line indents to beautify the output.
|
|
||||||
"""
|
|
||||||
|
|
||||||
json_doc = self.get_json()
|
|
||||||
if pretty_print:
|
|
||||||
return json.dumps(
|
|
||||||
json_doc, check_circular=False, ensure_ascii=False, indent=4
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
return json.dumps(
|
|
||||||
json_doc,
|
|
||||||
check_circular=False,
|
|
||||||
ensure_ascii=False,
|
|
||||||
separators=(",", ":"),
|
|
||||||
)
|
|
||||||
|
|
||||||
def write_json(self, f: TextIO, pretty_print: bool = False) -> None:
|
|
||||||
"""
|
|
||||||
Writes the OpenAPI specification to a file as a JSON string.
|
|
||||||
|
|
||||||
:param pretty_print: Whether to use line indents to beautify the output.
|
|
||||||
"""
|
|
||||||
|
|
||||||
json_doc = self.get_json()
|
|
||||||
if pretty_print:
|
|
||||||
json.dump(
|
|
||||||
json_doc,
|
|
||||||
f,
|
|
||||||
check_circular=False,
|
|
||||||
ensure_ascii=False,
|
|
||||||
indent=4,
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
json.dump(
|
|
||||||
json_doc,
|
|
||||||
f,
|
|
||||||
check_circular=False,
|
|
||||||
ensure_ascii=False,
|
|
||||||
separators=(",", ":"),
|
|
||||||
)
|
|
||||||
|
|
||||||
def write_html(self, f: TextIO, pretty_print: bool = False) -> None:
|
|
||||||
"""
|
|
||||||
Creates a stand-alone HTML page for the OpenAPI specification with ReDoc.
|
|
||||||
|
|
||||||
:param pretty_print: Whether to use line indents to beautify the JSON string in the HTML file.
|
|
||||||
"""
|
|
||||||
|
|
||||||
path = THIS_DIR / "template.html"
|
|
||||||
with path.open(encoding="utf-8", errors="strict") as html_template_file:
|
|
||||||
html_template = html_template_file.read()
|
|
||||||
|
|
||||||
html = html_template.replace(
|
|
||||||
"{ /* OPENAPI_SPECIFICATION */ }",
|
|
||||||
self.get_json_string(pretty_print=pretty_print),
|
|
||||||
)
|
|
||||||
|
|
||||||
f.write(html)
|
|
||||||
|
|
||||||
def is_optional_type(type_: Any) -> bool:
    """Check if a type is Optional."""
    origin = get_origin(type_)
    args = get_args(type_)
    return origin is Optional or (origin is Union and type(None) in args)


def _validate_api_method_return_type(method) -> str | None:
    hints = get_type_hints(method)

    if 'return' not in hints:
        return "has no return type annotation"

    return_type = hints['return']
    if is_optional_type(return_type):
        return "returns Optional type where a return value is mandatory"


def _validate_api_method_doesnt_return_list(method) -> str | None:
    hints = get_type_hints(method)

    if 'return' not in hints:
        return "has no return type annotation"

    return_type = hints['return']
    if get_origin(return_type) is list:
        return "returns a list where a PaginatedResponse or List*Response object is expected"


def _validate_api_delete_method_returns_none(method) -> str | None:
    hints = get_type_hints(method)

    if 'return' not in hints:
        return "has no return type annotation"

    return_type = hints['return']

    # Allow OpenAI endpoints to return response objects since they follow OpenAI specification
    method_name = getattr(method, '__name__', '')
    if 'openai_' in method_name:
        return None

    if return_type is not None and return_type is not type(None):
        return "does not return None where None is mandatory"


def _validate_list_parameters_contain_data(method) -> str | None:
    hints = get_type_hints(method)

    if 'return' not in hints:
        return "has no return type annotation"

    return_type = hints['return']
    if not inspect.isclass(return_type):
        return

    if not return_type.__name__.startswith('List'):
        return

    if 'data' not in return_type.model_fields:
        return "does not have a mandatory data attribute containing the list of objects"


def _validate_has_ellipsis(method) -> str | None:
    source = inspect.getsource(method)
    if "..." not in source and "NotImplementedError" not in source:
        return "does not contain ellipsis (...) in its implementation"


def _validate_has_return_in_docstring(method) -> str | None:
    source = inspect.getsource(method)
    return_type = method.__annotations__.get('return')
    if return_type is not None and return_type != type(None) and ":returns:" not in source:
        return "does not have a ':returns:' in its docstring"


def _validate_has_params_in_docstring(method) -> str | None:
    source = inspect.getsource(method)
    sig = inspect.signature(method)

    params_list = [p for p in sig.parameters.values() if p.name != "self"]
    if len(params_list) == 1:
        param = params_list[0]
        param_type = param.annotation
        if is_unwrapped_body_param(param_type):
            return

    # Only check if the method has more than one parameter
    if len(sig.parameters) > 1 and ":param" not in source:
        return "does not have a ':param' in its docstring"


def _validate_has_no_return_none_in_docstring(method) -> str | None:
    source = inspect.getsource(method)
    return_type = method.__annotations__.get('return')
    if return_type is None and ":returns: None" in source:
        return "has a ':returns: None' in its docstring which is redundant for None-returning functions"


def _validate_docstring_lines_end_with_dot(method) -> str | None:
    docstring = inspect.getdoc(method)
    if docstring is None:
        return None

    lines = docstring.split('\n')
    for line in lines:
        line = line.strip()
        if line and not any(line.endswith(char) for char in '.:{}[]()",'):
            return f"docstring line '{line}' does not end with a valid character: . : {{ }} [ ] ( ) , \""


_VALIDATORS = {
    "GET": [
        _validate_api_method_return_type,
        _validate_list_parameters_contain_data,
        _validate_api_method_doesnt_return_list,
        _validate_has_ellipsis,
        _validate_has_return_in_docstring,
        _validate_has_params_in_docstring,
        _validate_docstring_lines_end_with_dot,
    ],
    "DELETE": [
        _validate_api_delete_method_returns_none,
        _validate_has_ellipsis,
        _validate_has_return_in_docstring,
        _validate_has_params_in_docstring,
        _validate_has_no_return_none_in_docstring,
    ],
    "POST": [
        _validate_has_ellipsis,
        _validate_has_return_in_docstring,
        _validate_has_params_in_docstring,
        _validate_has_no_return_none_in_docstring,
        _validate_docstring_lines_end_with_dot,
    ],
}


def _get_methods_by_type(protocol, method_type: str):
    members = inspect.getmembers(protocol, predicate=inspect.isfunction)
    return {
        method_name: method
        for method_name, method in members
        if (webmethod := getattr(method, '__webmethod__', None)) and webmethod.method == method_type
    }


def validate_api() -> List[str]:
    """Validate the API protocols."""
    errors = []
    protocols = api_protocol_map()

    for target, validators in _VALIDATORS.items():
        for protocol_name, protocol in protocols.items():
            for validator in validators:
                for method_name, method in _get_methods_by_type(protocol, target).items():
                    err = validator(method)
                    if err:
                        errors.append(f"Method {protocol_name}.{method_name} {err}")

    return errors
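
# Editor's sketch of how validate_api() is typically consumed; the exit-code
# handling is an assumption, not code from this diff:
if __name__ == "__main__":
    import sys

    failures = validate_api()
    for failure in failures:
        print(failure)
    sys.exit(1 if failures else 0)
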
@@ -1,34 +0,0 @@
#!/bin/bash

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

PYTHONPATH=${PYTHONPATH:-}
THIS_DIR="$(cd "$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")" && pwd)"

set -euo pipefail

missing_packages=()

check_package() {
    if ! pip show "$1" &>/dev/null; then
        missing_packages+=("$1")
    fi
}

if [ ${#missing_packages[@]} -ne 0 ]; then
    echo "Error: The following package(s) are not installed:"
    printf " - %s\n" "${missing_packages[@]}"
    echo "Please install them using:"
    echo "pip install ${missing_packages[*]}"
    exit 1
fi

stack_dir=$(dirname "$(dirname "$THIS_DIR")")
PYTHONPATH=$PYTHONPATH:$stack_dir \
    python -m docs.openapi_generator.generate "$(dirname "$THIS_DIR")/static"

cp "$stack_dir/docs/static/stainless-llama-stack-spec.yaml" "$stack_dir/client-sdks/stainless/openapi.yml"
[Diffs suppressed -- vendored spec files too large to display:]
docs/static/deprecated-llama-stack-spec.yaml   (10520 lines, vendored)
docs/static/experimental-llama-stack-spec.yaml (9653 lines, vendored)
docs/static/llama-stack-spec.yaml              (12292 lines, vendored)
docs/static/stainless-llama-stack-spec.yaml    (13285 lines, vendored)
pyproject.toml

@@ -50,6 +50,7 @@ dependencies = [
     "aiosqlite>=0.21.0", # server - for metadata store
     "asyncpg", # for metadata store
     "sqlalchemy[asyncio]>=2.0.41", # server - for conversations
+    "pyyaml>=6.0.2",
     "starlette>=0.49.1",
 ]

@@ -73,6 +74,7 @@ dev = [
     "mypy",
     "pre-commit>=4.4.0",
     "ruamel.yaml", # needed for openapi generator
+    "openapi-spec-validator>=0.7.2",
 ]
 # Type checking dependencies - includes type stubs and optional runtime dependencies
 # needed for complete mypy coverage across all optional features

@@ -182,7 +184,12 @@ install-wheel-from-presigned = "llama_stack.cli.scripts.run:install_wheel_from_p

 [tool.setuptools.packages.find]
 where = ["src"]
-include = ["llama_stack", "llama_stack.*", "llama_stack_api", "llama_stack_api.*"]
+include = [
+    "llama_stack",
+    "llama_stack.*",
+    "llama_stack_api",
+    "llama_stack_api.*",
+]

 [[tool.uv.index]]
 name = "pytorch-cpu"

@@ -249,7 +256,9 @@ unfixable = [
 # Ignore the following errors for the following files
 [tool.ruff.lint.per-file-ignores]
 "tests/**/*.py" = ["DTZ"] # Ignore datetime rules for tests
-"src/llama_stack/providers/inline/scoring/basic/utils/ifeval_utils.py" = ["RUF001"]
+"src/llama_stack/providers/inline/scoring/basic/utils/ifeval_utils.py" = [
+    "RUF001",
+]
 "src/llama_stack/providers/inline/scoring/basic/scoring_fn/fn_defs/regex_parser_multiple_choice_answer.py" = [
     "RUF001",
     "PLE2515",

@@ -341,7 +350,6 @@ exclude = [
     "^src/llama_stack/providers/utils/telemetry/dataset_mixin\\.py$",
     "^src/llama_stack/providers/utils/telemetry/trace_protocol\\.py$",
     "^src/llama_stack/providers/utils/telemetry/tracing\\.py$",
-    "^src/llama_stack_api/strong_typing/auxiliary\\.py$",
     "^src/llama_stack/distributions/template\\.py$",
 ]
scripts/openapi_generator/__init__.py (new file, 16 lines)

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
OpenAPI generator module for Llama Stack.

This module provides functionality to generate OpenAPI specifications
from FastAPI applications.
"""

from .main import generate_openapi_spec, main

__all__ = ["generate_openapi_spec", "main"]
@@ -3,3 +3,12 @@
 #
 # This source code is licensed under the terms described in the LICENSE file in
 # the root directory of this source tree.
+
+"""
+Entry point for running the openapi_generator module as a package.
+"""
+
+from .main import main
+
+if __name__ == "__main__":
+    main()
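
For reference, the package can also be driven programmatically; a minimal sketch (the
output-directory argument is an assumption mirroring the old shell wrapper, and the
exact signature of generate_openapi_spec lives in scripts/openapi_generator/main.py,
which is not shown in this diff):

# Hypothetical programmatic use; see main.py for the real signature.
from scripts.openapi_generator import generate_openapi_spec

generate_openapi_spec("docs/static")  # assumed output-directory argument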
scripts/openapi_generator/_legacy_order.py (new file, 502 lines)

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Temporary ordering helpers extracted from origin/main client-sdks/stainless/openapi.yml.

These lists help the new generator match the previous ordering so that diffs
remain readable while we debug schema content regressions. Remove once stable.
"""

LEGACY_PATH_ORDER = [
    "/v1/batches",
    "/v1/batches/{batch_id}",
    "/v1/batches/{batch_id}/cancel",
    "/v1/chat/completions",
    "/v1/chat/completions/{completion_id}",
    "/v1/completions",
    "/v1/conversations",
    "/v1/conversations/{conversation_id}",
    "/v1/conversations/{conversation_id}/items",
    "/v1/conversations/{conversation_id}/items/{item_id}",
    "/v1/embeddings",
    "/v1/files",
    "/v1/files/{file_id}",
    "/v1/files/{file_id}/content",
    "/v1/health",
    "/v1/inspect/routes",
    "/v1/models",
    "/v1/models/{model_id}",
    "/v1/moderations",
    "/v1/prompts",
    "/v1/prompts/{prompt_id}",
    "/v1/prompts/{prompt_id}/set-default-version",
    "/v1/prompts/{prompt_id}/versions",
    "/v1/providers",
    "/v1/providers/{provider_id}",
    "/v1/responses",
    "/v1/responses/{response_id}",
    "/v1/responses/{response_id}/input_items",
    "/v1/safety/run-shield",
    "/v1/scoring-functions",
    "/v1/scoring-functions/{scoring_fn_id}",
    "/v1/scoring/score",
    "/v1/scoring/score-batch",
    "/v1/shields",
    "/v1/shields/{identifier}",
    "/v1/tool-runtime/invoke",
    "/v1/tool-runtime/list-tools",
    "/v1/toolgroups",
    "/v1/toolgroups/{toolgroup_id}",
    "/v1/tools",
    "/v1/tools/{tool_name}",
    "/v1/vector-io/insert",
    "/v1/vector-io/query",
    "/v1/vector_stores",
    "/v1/vector_stores/{vector_store_id}",
    "/v1/vector_stores/{vector_store_id}/file_batches",
    "/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}",
    "/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}/cancel",
    "/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}/files",
    "/v1/vector_stores/{vector_store_id}/files",
    "/v1/vector_stores/{vector_store_id}/files/{file_id}",
    "/v1/vector_stores/{vector_store_id}/files/{file_id}/content",
    "/v1/vector_stores/{vector_store_id}/search",
    "/v1/version",
    "/v1beta/datasetio/append-rows/{dataset_id}",
    "/v1beta/datasetio/iterrows/{dataset_id}",
    "/v1beta/datasets",
    "/v1beta/datasets/{dataset_id}",
    "/v1alpha/eval/benchmarks",
    "/v1alpha/eval/benchmarks/{benchmark_id}",
    "/v1alpha/eval/benchmarks/{benchmark_id}/evaluations",
    "/v1alpha/eval/benchmarks/{benchmark_id}/jobs",
    "/v1alpha/eval/benchmarks/{benchmark_id}/jobs/{job_id}",
    "/v1alpha/eval/benchmarks/{benchmark_id}/jobs/{job_id}/result",
    "/v1alpha/inference/rerank",
    "/v1alpha/post-training/job/artifacts",
    "/v1alpha/post-training/job/cancel",
    "/v1alpha/post-training/job/status",
    "/v1alpha/post-training/jobs",
    "/v1alpha/post-training/preference-optimize",
    "/v1alpha/post-training/supervised-fine-tune",
]

LEGACY_SCHEMA_ORDER = [
    "Error",
    "ListBatchesResponse",
    "CreateBatchRequest",
    "Batch",
    "Order",
    "ListOpenAIChatCompletionResponse",
    "OpenAIAssistantMessageParam",
    "OpenAIChatCompletionContentPartImageParam",
    "OpenAIChatCompletionContentPartParam",
    "OpenAIChatCompletionContentPartTextParam",
    "OpenAIChatCompletionToolCall",
    "OpenAIChatCompletionToolCallFunction",
    "OpenAIChatCompletionUsage",
    "OpenAIChoice",
    "OpenAIChoiceLogprobs",
    "OpenAIDeveloperMessageParam",
    "OpenAIFile",
    "OpenAIFileFile",
    "OpenAIImageURL",
    "OpenAIMessageParam",
    "OpenAISystemMessageParam",
    "OpenAITokenLogProb",
    "OpenAIToolMessageParam",
    "OpenAITopLogProb",
    "OpenAIUserMessageParam",
    "OpenAIJSONSchema",
    "OpenAIResponseFormatJSONObject",
    "OpenAIResponseFormatJSONSchema",
    "OpenAIResponseFormatParam",
    "OpenAIResponseFormatText",
    "OpenAIChatCompletionRequestWithExtraBody",
    "OpenAIChatCompletion",
    "OpenAIChatCompletionChunk",
    "OpenAIChoiceDelta",
    "OpenAIChunkChoice",
    "OpenAICompletionWithInputMessages",
    "OpenAICompletionRequestWithExtraBody",
    "OpenAICompletion",
    "OpenAICompletionChoice",
    "ConversationItem",
    "OpenAIResponseAnnotationCitation",
    "OpenAIResponseAnnotationContainerFileCitation",
    "OpenAIResponseAnnotationFileCitation",
    "OpenAIResponseAnnotationFilePath",
    "OpenAIResponseAnnotations",
    "OpenAIResponseContentPartRefusal",
    "OpenAIResponseInputFunctionToolCallOutput",
    "OpenAIResponseInputMessageContent",
    "OpenAIResponseInputMessageContentFile",
    "OpenAIResponseInputMessageContentImage",
    "OpenAIResponseInputMessageContentText",
    "OpenAIResponseMCPApprovalRequest",
    "OpenAIResponseMCPApprovalResponse",
    "OpenAIResponseMessage",
    "OpenAIResponseOutputMessageContent",
    "OpenAIResponseOutputMessageContentOutputText",
    "OpenAIResponseOutputMessageFileSearchToolCall",
    "OpenAIResponseOutputMessageFunctionToolCall",
    "OpenAIResponseOutputMessageMCPCall",
    "OpenAIResponseOutputMessageMCPListTools",
    "OpenAIResponseOutputMessageWebSearchToolCall",
    "CreateConversationRequest",
    "Conversation",
    "UpdateConversationRequest",
    "ConversationDeletedResource",
    "ConversationItemList",
    "AddItemsRequest",
    "ConversationItemDeletedResource",
    "OpenAIEmbeddingsRequestWithExtraBody",
    "OpenAIEmbeddingData",
    "OpenAIEmbeddingUsage",
    "OpenAIEmbeddingsResponse",
    "OpenAIFilePurpose",
    "ListOpenAIFileResponse",
    "OpenAIFileObject",
    "ExpiresAfter",
    "OpenAIFileDeleteResponse",
    "Response",
    "HealthInfo",
    "RouteInfo",
    "ListRoutesResponse",
    "OpenAIModel",
    "OpenAIListModelsResponse",
    "Model",
    "ModelType",
    "RunModerationRequest",
    "ModerationObject",
    "ModerationObjectResults",
    "Prompt",
    "ListPromptsResponse",
    "CreatePromptRequest",
    "UpdatePromptRequest",
    "SetDefaultVersionRequest",
    "ProviderInfo",
    "ListProvidersResponse",
    "ListOpenAIResponseObject",
    "OpenAIResponseError",
    "OpenAIResponseInput",
    "OpenAIResponseInputToolFileSearch",
    "OpenAIResponseInputToolFunction",
    "OpenAIResponseInputToolWebSearch",
    "OpenAIResponseObjectWithInput",
    "OpenAIResponseOutput",
    "OpenAIResponsePrompt",
    "OpenAIResponseText",
    "OpenAIResponseTool",
    "OpenAIResponseToolMCP",
    "OpenAIResponseUsage",
    "ResponseGuardrailSpec",
    "OpenAIResponseInputTool",
    "OpenAIResponseInputToolMCP",
    "CreateOpenaiResponseRequest",
    "OpenAIResponseObject",
    "OpenAIResponseContentPartOutputText",
    "OpenAIResponseContentPartReasoningSummary",
    "OpenAIResponseContentPartReasoningText",
    "OpenAIResponseObjectStream",
    "OpenAIResponseObjectStreamResponseCompleted",
    "OpenAIResponseObjectStreamResponseContentPartAdded",
    "OpenAIResponseObjectStreamResponseContentPartDone",
    "OpenAIResponseObjectStreamResponseCreated",
    "OpenAIResponseObjectStreamResponseFailed",
    "OpenAIResponseObjectStreamResponseFileSearchCallCompleted",
    "OpenAIResponseObjectStreamResponseFileSearchCallInProgress",
    "OpenAIResponseObjectStreamResponseFileSearchCallSearching",
    "OpenAIResponseObjectStreamResponseFunctionCallArgumentsDelta",
    "OpenAIResponseObjectStreamResponseFunctionCallArgumentsDone",
    "OpenAIResponseObjectStreamResponseInProgress",
    "OpenAIResponseObjectStreamResponseIncomplete",
    "OpenAIResponseObjectStreamResponseMcpCallArgumentsDelta",
    "OpenAIResponseObjectStreamResponseMcpCallArgumentsDone",
    "OpenAIResponseObjectStreamResponseMcpCallCompleted",
    "OpenAIResponseObjectStreamResponseMcpCallFailed",
    "OpenAIResponseObjectStreamResponseMcpCallInProgress",
    "OpenAIResponseObjectStreamResponseMcpListToolsCompleted",
    "OpenAIResponseObjectStreamResponseMcpListToolsFailed",
    "OpenAIResponseObjectStreamResponseMcpListToolsInProgress",
    "OpenAIResponseObjectStreamResponseOutputItemAdded",
    "OpenAIResponseObjectStreamResponseOutputItemDone",
    "OpenAIResponseObjectStreamResponseOutputTextAnnotationAdded",
    "OpenAIResponseObjectStreamResponseOutputTextDelta",
    "OpenAIResponseObjectStreamResponseOutputTextDone",
    "OpenAIResponseObjectStreamResponseReasoningSummaryPartAdded",
    "OpenAIResponseObjectStreamResponseReasoningSummaryPartDone",
    "OpenAIResponseObjectStreamResponseReasoningSummaryTextDelta",
    "OpenAIResponseObjectStreamResponseReasoningSummaryTextDone",
    "OpenAIResponseObjectStreamResponseReasoningTextDelta",
    "OpenAIResponseObjectStreamResponseReasoningTextDone",
    "OpenAIResponseObjectStreamResponseRefusalDelta",
    "OpenAIResponseObjectStreamResponseRefusalDone",
    "OpenAIResponseObjectStreamResponseWebSearchCallCompleted",
    "OpenAIResponseObjectStreamResponseWebSearchCallInProgress",
    "OpenAIResponseObjectStreamResponseWebSearchCallSearching",
    "OpenAIDeleteResponseObject",
    "ListOpenAIResponseInputItem",
    "RunShieldRequest",
    "RunShieldResponse",
    "SafetyViolation",
    "ViolationLevel",
    "AggregationFunctionType",
    "ArrayType",
    "BasicScoringFnParams",
    "BooleanType",
    "ChatCompletionInputType",
    "CompletionInputType",
    "JsonType",
    "LLMAsJudgeScoringFnParams",
    "NumberType",
    "ObjectType",
    "RegexParserScoringFnParams",
    "ScoringFn",
    "ScoringFnParams",
    "ScoringFnParamsType",
    "StringType",
    "UnionType",
    "ListScoringFunctionsResponse",
    "ScoreRequest",
    "ScoreResponse",
    "ScoringResult",
    "ScoreBatchRequest",
    "ScoreBatchResponse",
    "Shield",
    "ListShieldsResponse",
    "InvokeToolRequest",
    "ImageContentItem",
    "InterleavedContent",
    "InterleavedContentItem",
    "TextContentItem",
    "ToolInvocationResult",
    "URL",
    "ToolDef",
    "ListToolDefsResponse",
    "ToolGroup",
    "ListToolGroupsResponse",
    "Chunk",
    "ChunkMetadata",
    "InsertChunksRequest",
    "QueryChunksRequest",
    "QueryChunksResponse",
    "VectorStoreFileCounts",
    "VectorStoreListResponse",
    "VectorStoreObject",
    "VectorStoreChunkingStrategy",
    "VectorStoreChunkingStrategyAuto",
    "VectorStoreChunkingStrategyStatic",
    "VectorStoreChunkingStrategyStaticConfig",
    "OpenAICreateVectorStoreRequestWithExtraBody",
    "OpenaiUpdateVectorStoreRequest",
    "VectorStoreDeleteResponse",
    "OpenAICreateVectorStoreFileBatchRequestWithExtraBody",
    "VectorStoreFileBatchObject",
    "VectorStoreFileStatus",
    "VectorStoreFileLastError",
    "VectorStoreFileObject",
    "VectorStoreFilesListInBatchResponse",
    "VectorStoreListFilesResponse",
    "OpenaiAttachFileToVectorStoreRequest",
    "OpenaiUpdateVectorStoreFileRequest",
    "VectorStoreFileDeleteResponse",
    "bool",
    "VectorStoreContent",
    "VectorStoreFileContentResponse",
    "OpenaiSearchVectorStoreRequest",
    "VectorStoreSearchResponse",
    "VectorStoreSearchResponsePage",
    "VersionInfo",
    "AppendRowsRequest",
    "PaginatedResponse",
    "Dataset",
    "RowsDataSource",
    "URIDataSource",
    "ListDatasetsResponse",
    "Benchmark",
    "ListBenchmarksResponse",
    "BenchmarkConfig",
    "GreedySamplingStrategy",
    "ModelCandidate",
    "SamplingParams",
    "SystemMessage",
    "TopKSamplingStrategy",
    "TopPSamplingStrategy",
    "EvaluateRowsRequest",
    "EvaluateResponse",
    "RunEvalRequest",
    "Job",
    "RerankRequest",
    "RerankData",
    "RerankResponse",
    "Checkpoint",
    "PostTrainingJobArtifactsResponse",
    "PostTrainingMetric",
    "CancelTrainingJobRequest",
    "PostTrainingJobStatusResponse",
    "ListPostTrainingJobsResponse",
    "DPOAlignmentConfig",
    "DPOLossType",
    "DataConfig",
    "DatasetFormat",
    "EfficiencyConfig",
    "OptimizerConfig",
    "OptimizerType",
    "TrainingConfig",
    "PreferenceOptimizeRequest",
    "PostTrainingJob",
    "AlgorithmConfig",
    "LoraFinetuningConfig",
    "QATFinetuningConfig",
    "SupervisedFineTuneRequest",
    "RegisterModelRequest",
    "ParamType",
    "RegisterScoringFunctionRequest",
    "RegisterShieldRequest",
    "RegisterToolGroupRequest",
    "DataSource",
    "RegisterDatasetRequest",
    "RegisterBenchmarkRequest",
]

LEGACY_RESPONSE_ORDER = ["BadRequest400", "TooManyRequests429", "InternalServerError500", "DefaultError"]

LEGACY_TAGS = [
    {
        "description": "APIs for creating and interacting with agentic systems.",
        "name": "Agents",
        "x-displayName": "Agents",
    },
    {
        "description": "The API is designed to allow use of openai client libraries for seamless integration.\n"
        "\n"
        "This API provides the following extensions:\n"
        " - idempotent batch creation\n"
        "\n"
        "Note: This API is currently under active development and may undergo changes.",
        "name": "Batches",
        "x-displayName": "The Batches API enables efficient processing of multiple requests in a single operation, "
        "particularly useful for processing large datasets, batch evaluation workflows, and cost-effective "
        "inference at scale.",
    },
    {"description": "", "name": "Benchmarks"},
    {
        "description": "Protocol for conversation management operations.",
        "name": "Conversations",
        "x-displayName": "Conversations",
    },
    {"description": "", "name": "DatasetIO"},
    {"description": "", "name": "Datasets"},
    {
        "description": "Llama Stack Evaluation API for running evaluations on model and agent candidates.",
        "name": "Eval",
        "x-displayName": "Evaluations",
    },
    {
        "description": "This API is used to upload documents that can be used with other Llama Stack APIs.",
        "name": "Files",
        "x-displayName": "Files",
    },
    {
        "description": "Llama Stack Inference API for generating completions, chat completions, and embeddings.\n"
        "\n"
        "This API provides the raw interface to the underlying models. Three kinds of models are supported:\n"
        '- LLM models: these models generate "raw" and "chat" (conversational) completions.\n'
        "- Embedding models: these models generate embeddings to be used for semantic search.\n"
        "- Rerank models: these models reorder the documents based on their relevance to a query.",
        "name": "Inference",
        "x-displayName": "Inference",
    },
    {
        "description": "APIs for inspecting the Llama Stack service, including health status, available API routes with "
        "methods and implementing providers.",
        "name": "Inspect",
        "x-displayName": "Inspect",
    },
    {"description": "", "name": "Models"},
    {"description": "", "name": "PostTraining (Coming Soon)"},
    {"description": "Protocol for prompt management operations.", "name": "Prompts", "x-displayName": "Prompts"},
    {
        "description": "Providers API for inspecting, listing, and modifying providers and their configurations.",
        "name": "Providers",
        "x-displayName": "Providers",
    },
    {"description": "OpenAI-compatible Moderations API.", "name": "Safety", "x-displayName": "Safety"},
    {"description": "", "name": "Scoring"},
    {"description": "", "name": "ScoringFunctions"},
    {"description": "", "name": "Shields"},
    {"description": "", "name": "ToolGroups"},
    {"description": "", "name": "ToolRuntime"},
    {"description": "", "name": "VectorIO"},
]

LEGACY_TAG_ORDER = [
    "Agents",
    "Batches",
    "Benchmarks",
    "Conversations",
    "DatasetIO",
    "Datasets",
    "Eval",
    "Files",
    "Inference",
    "Inspect",
    "Models",
    "PostTraining (Coming Soon)",
    "Prompts",
    "Providers",
    "Safety",
    "Scoring",
    "ScoringFunctions",
    "Shields",
    "ToolGroups",
    "ToolRuntime",
    "VectorIO",
]

LEGACY_TAG_GROUPS = [
    {
        "name": "Operations",
        "tags": [
            "Agents",
            "Batches",
            "Benchmarks",
            "Conversations",
            "DatasetIO",
            "Datasets",
            "Eval",
            "Files",
            "Inference",
            "Inspect",
            "Models",
            "PostTraining (Coming Soon)",
            "Prompts",
            "Providers",
            "Safety",
            "Scoring",
            "ScoringFunctions",
            "Shields",
            "ToolGroups",
            "ToolRuntime",
            "VectorIO",
        ],
    }
]

LEGACY_SECURITY = [{"Default": []}]

LEGACY_OPERATION_KEYS = [
    "responses",
    "tags",
    "summary",
    "description",
    "operationId",
    "parameters",
    "requestBody",
    "deprecated",
]
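
These constants are plain data; one plausible way to apply them (an editor's sketch,
not code from this diff) is to sort spec sections by legacy index, pushing unknown
keys to the end:

from scripts.openapi_generator._legacy_order import LEGACY_PATH_ORDER


def _legacy_sorted_paths(paths: dict) -> dict:
    """Order paths as the old generator did; unknown paths sort last, alphabetically."""

    def key(item):
        path = item[0]
        try:
            return (LEGACY_PATH_ORDER.index(path), path)
        except ValueError:
            return (len(LEGACY_PATH_ORDER), path)

    return dict(sorted(paths.items(), key=key))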
scripts/openapi_generator/app.py (new file, 91 lines)

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
FastAPI app creation for OpenAPI generation.
"""

import inspect
from typing import Any

from fastapi import FastAPI

from llama_stack.core.resolver import api_protocol_map
from llama_stack_api import Api

from .state import _protocol_methods_cache


def _get_protocol_method(api: Api, method_name: str) -> Any | None:
    """
    Get a protocol method function by API and method name.
    Uses caching to avoid repeated lookups.

    Args:
        api: The API enum
        method_name: The method name (function name)

    Returns:
        The function object, or None if not found
    """
    global _protocol_methods_cache

    if _protocol_methods_cache is None:
        _protocol_methods_cache = {}
        protocols = api_protocol_map()
        from llama_stack_api.tools import SpecialToolGroup, ToolRuntime

        toolgroup_protocols = {
            SpecialToolGroup.rag_tool: ToolRuntime,
        }

        for api_key, protocol in protocols.items():
            method_map: dict[str, Any] = {}
            protocol_methods = inspect.getmembers(protocol, predicate=inspect.isfunction)
            for name, method in protocol_methods:
                method_map[name] = method

            # Handle tool_runtime special case
            if api_key == Api.tool_runtime:
                for tool_group, sub_protocol in toolgroup_protocols.items():
                    sub_protocol_methods = inspect.getmembers(sub_protocol, predicate=inspect.isfunction)
                    for name, method in sub_protocol_methods:
                        if hasattr(method, "__webmethod__"):
                            method_map[f"{tool_group.value}.{name}"] = method

            _protocol_methods_cache[api_key] = method_map

    return _protocol_methods_cache.get(api, {}).get(method_name)


def create_llama_stack_app() -> FastAPI:
    """
    Create a FastAPI app that represents the Llama Stack API.
    This uses the existing route discovery system to automatically find all routes.
    """
    app = FastAPI(
        title="Llama Stack API",
        description="A comprehensive API for building and deploying AI applications",
        version="1.0.0",
        servers=[
            {"url": "http://any-hosted-llama-stack.com"},
        ],
    )

    # Get all API routes
    from llama_stack.core.server.routes import get_all_api_routes

    api_routes = get_all_api_routes()

    # Create FastAPI routes from the discovered routes
    from . import endpoints

    for api, routes in api_routes.items():
        for route, webmethod in routes:
            # Convert the route to a FastAPI endpoint
            endpoints._create_fastapi_endpoint(app, route, webmethod, api)

    return app
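
FastAPI already knows how to serialize the app it builds; a minimal sketch of getting
the raw spec dict (editor's addition, assuming scripts/ is importable from the repo root):

from scripts.openapi_generator.app import create_llama_stack_app

app = create_llama_stack_app()
spec = app.openapi()  # FastAPI's built-in generator returns a plain dict
print(spec["info"]["title"])  # -> "Llama Stack API"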
scripts/openapi_generator/endpoints.py (new file, 657 lines)

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Endpoint generation logic for FastAPI OpenAPI generation.
"""

import inspect
import re
import types
import typing
from typing import Annotated, Any, get_args, get_origin

from fastapi import FastAPI
from pydantic import Field, create_model

from llama_stack.log import get_logger
from llama_stack_api import Api
from llama_stack_api.schema_utils import get_registered_schema_info

from . import app as app_module
from .state import _extra_body_fields, register_dynamic_model

logger = get_logger(name=__name__, category="core")


def _to_pascal_case(segment: str) -> str:
    tokens = re.findall(r"[A-Za-z]+|\d+", segment)
    return "".join(token.capitalize() for token in tokens if token)


def _compose_request_model_name(api: Api, method_name: str, variant: str | None = None) -> str:
    """Generate a deterministic model name from the protocol method."""

    def _to_pascal_from_snake(value: str) -> str:
        return "".join(segment.capitalize() for segment in value.split("_") if segment)

    base_name = _to_pascal_from_snake(method_name)
    if not base_name:
        base_name = _to_pascal_case(api.value)
    base_name = f"{base_name}Request"
    if variant:
        base_name = f"{base_name}{variant}"
    return base_name


def _extract_path_parameters(path: str) -> list[dict[str, Any]]:
    """Extract path parameters from a URL path and return them as OpenAPI parameter definitions."""
    matches = re.findall(r"\{([^}:]+)(?::[^}]+)?\}", path)
    return [
        {
            "name": param_name,
            "in": "path",
            "required": True,
            "schema": {"type": "string"},
            "description": f"Path parameter: {param_name}",
        }
        for param_name in matches
    ]
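
# A quick, self-contained illustration of the helper above (editor's addition,
# not used by the generator); the regex tolerates converter-style suffixes:
def _example_extract_path_parameters() -> None:
    params = _extract_path_parameters("/v1/vector_stores/{vector_store_id}/files/{file_id:path}")
    assert [p["name"] for p in params] == ["vector_store_id", "file_id"]
    assert all(p["in"] == "path" and p["required"] for p in params)
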

def _create_endpoint_with_request_model(
    request_model: type, response_model: type | None, operation_description: str | None
):
    """Create an endpoint function with a request body model."""

    async def endpoint(request: request_model) -> response_model:
        return response_model() if response_model else {}

    if operation_description:
        endpoint.__doc__ = operation_description
    return endpoint


def _build_field_definitions(query_parameters: list[tuple[str, type, Any]], use_any: bool = False) -> dict[str, tuple]:
    """Build field definitions for a Pydantic model from query parameters."""
    from pydantic.fields import FieldInfo

    field_definitions = {}
    for param_name, param_type, default_value in query_parameters:
        if use_any:
            field_definitions[param_name] = (Any, ... if default_value is inspect.Parameter.empty else default_value)
            continue

        base_type = param_type
        extracted_field = None
        if get_origin(param_type) is Annotated:
            args = get_args(param_type)
            if args:
                base_type = args[0]
                for arg in args[1:]:
                    # Field() returns a FieldInfo instance, so the check must be against
                    # FieldInfo; isinstance against the Field function itself would raise.
                    if isinstance(arg, FieldInfo):
                        extracted_field = arg
                        break

        try:
            if extracted_field:
                field_definitions[param_name] = (base_type, extracted_field)
            else:
                field_definitions[param_name] = (
                    base_type,
                    ... if default_value is inspect.Parameter.empty else default_value,
                )
        except (TypeError, ValueError):
            field_definitions[param_name] = (Any, ... if default_value is inspect.Parameter.empty else default_value)

    # Ensure all parameters are included
    expected_params = {name for name, _, _ in query_parameters}
    missing = expected_params - set(field_definitions.keys())
    if missing:
        for param_name, _, default_value in query_parameters:
            if param_name in missing:
                field_definitions[param_name] = (
                    Any,
                    ... if default_value is inspect.Parameter.empty else default_value,
                )

    return field_definitions


def _create_dynamic_request_model(
    api: Api,
    webmethod,
    method_name: str,
    http_method: str,
    query_parameters: list[tuple[str, type, Any]],
    use_any: bool = False,
    variant_suffix: str | None = None,
) -> type | None:
    """Create a dynamic Pydantic model for request body."""
    try:
        field_definitions = _build_field_definitions(query_parameters, use_any)
        if not field_definitions:
            return None
        model_name = _compose_request_model_name(api, method_name, variant_suffix or None)
        request_model = create_model(model_name, **field_definitions)
        return register_dynamic_model(model_name, request_model)
    except Exception:
        return None
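
# Editor's sketch of the create_model pattern used above; the names are
# illustrative only, not generated by this module:
def _example_dynamic_model() -> None:
    # (type, ...) marks a required field; (type, default) supplies a default.
    ExampleRequest = create_model("ExampleRequest", model_id=(str, ...), top_k=(int, 0))
    assert ExampleRequest(model_id="llama-3").model_dump() == {"model_id": "llama-3", "top_k": 0}
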

def _build_signature_params(
    query_parameters: list[tuple[str, type, Any]],
) -> tuple[list[inspect.Parameter], dict[str, type]]:
    """Build signature parameters and annotations from query parameters."""
    signature_params = []
    param_annotations = {}
    for param_name, param_type, default_value in query_parameters:
        param_annotations[param_name] = param_type
        signature_params.append(
            inspect.Parameter(
                param_name,
                inspect.Parameter.POSITIONAL_OR_KEYWORD,
                default=default_value if default_value is not inspect.Parameter.empty else inspect.Parameter.empty,
                annotation=param_type,
            )
        )
    return signature_params, param_annotations


def _extract_operation_description_from_docstring(api: Api, method_name: str) -> str | None:
    """Extract operation description from the actual function docstring."""
    func = app_module._get_protocol_method(api, method_name)
    if not func or not func.__doc__:
        return None

    doc_lines = func.__doc__.split("\n")
    description_lines = []
    metadata_markers = (":param", ":type", ":return", ":returns", ":raises", ":exception", ":yield", ":yields", ":cvar")

    for line in doc_lines:
        if line.strip().startswith(metadata_markers):
            break
        description_lines.append(line)

    description = "\n".join(description_lines).strip()
    return description if description else None


def _extract_response_description_from_docstring(webmethod, response_model, api: Api, method_name: str) -> str:
    """Extract response description from the actual function docstring."""
    func = app_module._get_protocol_method(api, method_name)
    if not func or not func.__doc__:
        return "Successful Response"
    for line in func.__doc__.split("\n"):
        if line.strip().startswith(":returns:"):
            if desc := line.strip()[9:].strip():
                return desc
    return "Successful Response"


def _get_tag_from_api(api: Api) -> str:
    """Extract a tag name from the API enum for API grouping."""
    return api.value.replace("_", " ").title()


def _is_file_or_form_param(param_type: Any) -> bool:
    """Check if a parameter type is annotated with File() or Form()."""
    if get_origin(param_type) is Annotated:
        args = get_args(param_type)
        if len(args) > 1:
            # Check metadata for File or Form
            for metadata in args[1:]:
                # Check if it's a File or Form instance
                if hasattr(metadata, "__class__"):
                    class_name = metadata.__class__.__name__
                    if class_name in ("File", "Form"):
                        return True
    return False


def _is_extra_body_field(metadata_item: Any) -> bool:
    """Check if a metadata item is an ExtraBodyField instance."""
    from llama_stack_api.schema_utils import ExtraBodyField

    return isinstance(metadata_item, ExtraBodyField)


def _is_async_iterator_type(type_obj: Any) -> bool:
    """Check if a type is AsyncIterator or AsyncIterable."""
    from collections.abc import AsyncIterable, AsyncIterator

    origin = get_origin(type_obj)
    if origin is None:
        # Check if it's the class itself
        return type_obj in (AsyncIterator, AsyncIterable) or (
            hasattr(type_obj, "__origin__") and type_obj.__origin__ in (AsyncIterator, AsyncIterable)
        )
    return origin in (AsyncIterator, AsyncIterable)


def _extract_response_models_from_union(union_type: Any) -> tuple[type | None, type | None]:
    """
    Extract non-streaming and streaming response models from a union type.

    Returns:
        tuple: (non_streaming_model, streaming_model)
    """
    non_streaming_model = None
    streaming_model = None

    args = get_args(union_type)
    for arg in args:
        # Check if it's an AsyncIterator
        if _is_async_iterator_type(arg):
            # Extract the type argument from AsyncIterator[T]
            iterator_args = get_args(arg)
            if iterator_args:
                inner_type = iterator_args[0]
                # Check if the inner type is a registered schema (union type)
                # or a Pydantic model
                if hasattr(inner_type, "model_json_schema"):
                    streaming_model = inner_type
                else:
                    # Might be a registered schema - check if it's registered
                    if get_registered_schema_info(inner_type):
                        # We'll need to look this up later, but for now store the type
                        streaming_model = inner_type
        elif hasattr(arg, "model_json_schema"):
            # Non-streaming Pydantic model
            if non_streaming_model is None:
                non_streaming_model = arg

    return non_streaming_model, streaming_model


def _find_models_for_endpoint(
    webmethod, api: Api, method_name: str, is_post_put: bool = False
) -> tuple[type | None, type | None, list[tuple[str, type, Any]], list[inspect.Parameter], type | None, str | None]:
    """
    Find appropriate request and response models for an endpoint by analyzing the actual function signature.
    This uses the protocol function to determine the correct models dynamically.

    Args:
        webmethod: The webmethod metadata
        api: The API enum for looking up the function
        method_name: The method name (function name)
        is_post_put: Whether this is a POST, PUT, or PATCH request (GET requests should never have request bodies)

    Returns:
        tuple: (request_model, response_model, query_parameters, file_form_params, streaming_response_model, response_schema_name)
        where query_parameters is a list of (name, type, default_value) tuples
        and file_form_params is a list of inspect.Parameter objects for File()/Form() params
        and streaming_response_model is the model for streaming responses (AsyncIterator content)
    """
    route_descriptor = f"{webmethod.method or 'UNKNOWN'} {webmethod.route}"
    try:
        # Get the function from the protocol
        func = app_module._get_protocol_method(api, method_name)
        if not func:
            logger.warning("No protocol method for %s.%s (%s)", api, method_name, route_descriptor)
            return None, None, [], [], None, None

        # Analyze the function signature
        sig = inspect.signature(func)

        # Find request model and collect all body parameters
        request_model = None
        query_parameters = []
        file_form_params = []
        path_params = set()
        extra_body_params = []
        response_schema_name = None

        # Extract path parameters from the route
        if webmethod and hasattr(webmethod, "route"):
            path_matches = re.findall(r"\{([^}:]+)(?::[^}]+)?\}", webmethod.route)
            path_params = set(path_matches)

        for param_name, param in sig.parameters.items():
            if param_name == "self":
                continue

            # Skip *args and **kwargs parameters - these are not real API parameters
            if param.kind in (inspect.Parameter.VAR_POSITIONAL, inspect.Parameter.VAR_KEYWORD):
                continue

            # Check if this is a path parameter
            if param_name in path_params:
                # Path parameters are handled separately, skip them
                continue

            # Check if it's a File() or Form() parameter - these need special handling
            param_type = param.annotation
            if _is_file_or_form_param(param_type):
                # File() and Form() parameters must be in the function signature directly
                # They cannot be part of a Pydantic model
                file_form_params.append(param)
                continue

            # Check for ExtraBodyField in Annotated types
            is_extra_body = False
            extra_body_description = None
            if get_origin(param_type) is Annotated:
                args = get_args(param_type)
                base_type = args[0] if args else param_type
                metadata = args[1:] if len(args) > 1 else []

                # Check if any metadata item is an ExtraBodyField
                for metadata_item in metadata:
                    if _is_extra_body_field(metadata_item):
                        is_extra_body = True
                        extra_body_description = metadata_item.description
                        break

                if is_extra_body:
                    # Store as extra body parameter - exclude from request model
                    extra_body_params.append((param_name, base_type, extra_body_description))
                    continue

            # Check if it's a Pydantic model (for POST/PUT requests)
            if hasattr(param_type, "model_json_schema"):
                # Collect all body parameters including Pydantic models
                # We'll decide later whether to use a single model or create a combined one
                query_parameters.append((param_name, param_type, param.default))
            elif get_origin(param_type) is Annotated:
                # Handle Annotated types - get the base type
                args = get_args(param_type)
                if args and hasattr(args[0], "model_json_schema"):
                    # Collect Pydantic models from Annotated types
                    query_parameters.append((param_name, args[0], param.default))
                else:
                    # Regular annotated parameter (but not File/Form, already handled above)
                    query_parameters.append((param_name, param_type, param.default))
            else:
                # This is likely a body parameter for POST/PUT or query parameter for GET
                # Store the parameter info for later use
                # Preserve inspect.Parameter.empty to distinguish "no default" from "default=None"
                default_value = param.default

                # Extract the base type from union types (e.g., str | None -> str)
                # Also make it safe for FastAPI to avoid forward reference issues
                query_parameters.append((param_name, param_type, default_value))

        # Store extra body fields for later use in post-processing
        # We'll store them when the endpoint is created, as we need the full path
        # For now, attach to the function for later retrieval
        if extra_body_params:
            func._extra_body_params = extra_body_params  # type: ignore

        # If there's exactly one body parameter and it's a Pydantic model, use it directly
        # Otherwise, we'll create a combined request model from all parameters
        # BUT: For GET requests, never create a request body - all parameters should be query parameters
        if is_post_put and len(query_parameters) == 1:
            param_name, param_type, default_value = query_parameters[0]
            if hasattr(param_type, "model_json_schema"):
                request_model = param_type
                query_parameters = []  # Clear query_parameters so we use the single model

        # Find response model from return annotation
        # Also detect streaming response models (AsyncIterator)
        response_model = None
        streaming_response_model = None
        return_annotation = sig.return_annotation
        if return_annotation != inspect.Signature.empty:
            origin = get_origin(return_annotation)
            if hasattr(return_annotation, "model_json_schema"):
                response_model = return_annotation
            elif origin is Annotated:
                # Handle Annotated return types
                args = get_args(return_annotation)
                if args:
                    # Check if the first argument is a Pydantic model
                    if hasattr(args[0], "model_json_schema"):
                        response_model = args[0]
                    else:
                        # Check if the first argument is a union type
                        inner_origin = get_origin(args[0])
                        if inner_origin is not None and (
                            inner_origin is types.UnionType or inner_origin is typing.Union
                        ):
                            response_model, streaming_response_model = _extract_response_models_from_union(args[0])
            elif origin is not None and (origin is types.UnionType or origin is typing.Union):
                # Handle union types - extract both non-streaming and streaming models
                response_model, streaming_response_model = _extract_response_models_from_union(return_annotation)
            else:
                try:
                    from fastapi import Response as FastAPIResponse
                except ImportError:
                    fastapi_response_cls = None
                else:
                    fastapi_response_cls = FastAPIResponse
                try:
                    from starlette.responses import Response as StarletteResponse
                except ImportError:
                    starlette_response_cls = None
                else:
                    starlette_response_cls = StarletteResponse

                response_types = tuple(t for t in (fastapi_response_cls, starlette_response_cls) if t is not None)
                if response_types and any(return_annotation is t for t in response_types):
                    response_schema_name = "Response"

        return (
            request_model,
            response_model,
            query_parameters,
            file_form_params,
            streaming_response_model,
            response_schema_name,
        )

    except Exception as exc:
        logger.warning(
            "Failed to analyze endpoint %s.%s (%s): %s", api, method_name, route_descriptor, exc, exc_info=True
        )
        return None, None, [], [], None, None
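
# Editor's illustration of the union handling above; the two models are hypothetical:
def _example_union_extraction() -> None:
    from collections.abc import AsyncIterator

    from pydantic import BaseModel

    class _Completion(BaseModel):
        id: str

    class _CompletionChunk(BaseModel):
        id: str

    non_stream, stream = _extract_response_models_from_union(_Completion | AsyncIterator[_CompletionChunk])
    assert non_stream is _Completion
    assert stream is _CompletionChunk
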

def _create_fastapi_endpoint(app: FastAPI, route, webmethod, api: Api):
    """Create a FastAPI endpoint from a discovered route and webmethod."""
    path = route.path
    raw_methods = route.methods or set()
    method_list = sorted({method.upper() for method in raw_methods if method and method.upper() != "HEAD"})
    if not method_list:
        method_list = ["GET"]
    primary_method = method_list[0]
    name = route.name
    fastapi_path = path  # route paths already use FastAPI-style {param} placeholders
    is_post_put = any(method in ["POST", "PUT", "PATCH"] for method in method_list)

    (
        request_model,
        response_model,
        query_parameters,
        file_form_params,
        streaming_response_model,
        response_schema_name,
    ) = _find_models_for_endpoint(webmethod, api, name, is_post_put)
    operation_description = _extract_operation_description_from_docstring(api, name)
    response_description = _extract_response_description_from_docstring(webmethod, response_model, api, name)

    # Retrieve and store extra body fields for this endpoint
    func = app_module._get_protocol_method(api, name)
    extra_body_params = getattr(func, "_extra_body_params", []) if func else []
    if extra_body_params:
        for method in method_list:
            key = (fastapi_path, method.upper())
            _extra_body_fields[key] = extra_body_params

    if is_post_put and not request_model and not file_form_params and query_parameters:
        request_model = _create_dynamic_request_model(
            api, webmethod, name, primary_method, query_parameters, use_any=False
        )
        if not request_model:
            request_model = _create_dynamic_request_model(
                api, webmethod, name, primary_method, query_parameters, use_any=True, variant_suffix="Loose"
            )
        if request_model:
            query_parameters = []

    if file_form_params and is_post_put:
        signature_params = list(file_form_params)
        param_annotations = {param.name: param.annotation for param in file_form_params}
        for param_name, param_type, default_value in query_parameters:
            signature_params.append(
                inspect.Parameter(
                    param_name,
                    inspect.Parameter.POSITIONAL_OR_KEYWORD,
                    default=default_value if default_value is not inspect.Parameter.empty else inspect.Parameter.empty,
                    annotation=param_type,
                )
            )
            param_annotations[param_name] = param_type

        async def file_form_endpoint():
            return response_model() if response_model else {}

        if operation_description:
            file_form_endpoint.__doc__ = operation_description
        file_form_endpoint.__signature__ = inspect.Signature(signature_params)
        file_form_endpoint.__annotations__ = param_annotations
        endpoint_func = file_form_endpoint
    elif request_model and response_model:
        endpoint_func = _create_endpoint_with_request_model(request_model, response_model, operation_description)
    elif request_model:
        endpoint_func = _create_endpoint_with_request_model(request_model, None, operation_description)
    elif response_model and query_parameters:
        if is_post_put:
            request_model = _create_dynamic_request_model(
                api, webmethod, name, primary_method, query_parameters, use_any=False
            )
            if not request_model:
                request_model = _create_dynamic_request_model(
                    api, webmethod, name, primary_method, query_parameters, use_any=True, variant_suffix="Loose"
|
||||||
|
)
|
||||||
|
|
||||||
|
if request_model:
|
||||||
|
endpoint_func = _create_endpoint_with_request_model(
|
||||||
|
request_model, response_model, operation_description
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
|
||||||
|
async def empty_endpoint() -> response_model:
|
||||||
|
return response_model() if response_model else {}
|
||||||
|
|
||||||
|
if operation_description:
|
||||||
|
empty_endpoint.__doc__ = operation_description
|
||||||
|
endpoint_func = empty_endpoint
|
||||||
|
else:
|
||||||
|
sorted_params = sorted(query_parameters, key=lambda x: (x[2] is not inspect.Parameter.empty, x[0]))
|
||||||
|
signature_params, param_annotations = _build_signature_params(sorted_params)
|
||||||
|
|
||||||
|
async def query_endpoint():
|
||||||
|
return response_model()
|
||||||
|
|
||||||
|
if operation_description:
|
||||||
|
query_endpoint.__doc__ = operation_description
|
||||||
|
query_endpoint.__signature__ = inspect.Signature(signature_params)
|
||||||
|
query_endpoint.__annotations__ = param_annotations
|
||||||
|
endpoint_func = query_endpoint
|
||||||
|
elif response_model:
|
||||||
|
|
||||||
|
async def response_only_endpoint() -> response_model:
|
||||||
|
return response_model()
|
||||||
|
|
||||||
|
if operation_description:
|
||||||
|
response_only_endpoint.__doc__ = operation_description
|
||||||
|
endpoint_func = response_only_endpoint
|
||||||
|
elif query_parameters:
|
||||||
|
signature_params, param_annotations = _build_signature_params(query_parameters)
|
||||||
|
|
||||||
|
async def params_only_endpoint():
|
||||||
|
return {}
|
||||||
|
|
||||||
|
if operation_description:
|
||||||
|
params_only_endpoint.__doc__ = operation_description
|
||||||
|
params_only_endpoint.__signature__ = inspect.Signature(signature_params)
|
||||||
|
params_only_endpoint.__annotations__ = param_annotations
|
||||||
|
endpoint_func = params_only_endpoint
|
||||||
|
else:
|
||||||
|
# Endpoint with no parameters and no response model
|
||||||
|
# If we have a response_model from the function signature, use it even if _find_models_for_endpoint didn't find it
|
||||||
|
# This can happen if there was an exception during model finding
|
||||||
|
if response_model is None:
|
||||||
|
# Try to get response model directly from the function signature as a fallback
|
||||||
|
func = app_module._get_protocol_method(api, name)
|
||||||
|
if func:
|
||||||
|
try:
|
||||||
|
sig = inspect.signature(func)
|
||||||
|
return_annotation = sig.return_annotation
|
||||||
|
if return_annotation != inspect.Signature.empty:
|
||||||
|
if hasattr(return_annotation, "model_json_schema"):
|
||||||
|
response_model = return_annotation
|
||||||
|
elif get_origin(return_annotation) is Annotated:
|
||||||
|
args = get_args(return_annotation)
|
||||||
|
if args and hasattr(args[0], "model_json_schema"):
|
||||||
|
response_model = args[0]
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
|
||||||
|
if response_model:
|
||||||
|
|
||||||
|
async def no_params_endpoint() -> response_model:
|
||||||
|
return response_model() if response_model else {}
|
||||||
|
else:
|
||||||
|
|
||||||
|
async def no_params_endpoint():
|
||||||
|
return {}
|
||||||
|
|
||||||
|
if operation_description:
|
||||||
|
no_params_endpoint.__doc__ = operation_description
|
||||||
|
endpoint_func = no_params_endpoint
|
||||||
|
|
||||||
|
# Build response content with both application/json and text/event-stream if streaming
|
||||||
|
response_content: dict[str, Any] = {}
|
||||||
|
if response_model:
|
||||||
|
response_content["application/json"] = {"schema": {"$ref": f"#/components/schemas/{response_model.__name__}"}}
|
||||||
|
elif response_schema_name:
|
||||||
|
response_content["application/json"] = {"schema": {"$ref": f"#/components/schemas/{response_schema_name}"}}
|
||||||
|
if streaming_response_model:
|
||||||
|
# Get the schema name for the streaming model
|
||||||
|
# It might be a registered schema or a Pydantic model
|
||||||
|
streaming_schema_name = None
|
||||||
|
# Check if it's a registered schema first (before checking __name__)
|
||||||
|
# because registered schemas might be Annotated types
|
||||||
|
if schema_info := get_registered_schema_info(streaming_response_model):
|
||||||
|
streaming_schema_name = schema_info.name
|
||||||
|
elif hasattr(streaming_response_model, "__name__"):
|
||||||
|
streaming_schema_name = streaming_response_model.__name__
|
||||||
|
|
||||||
|
if streaming_schema_name:
|
||||||
|
response_content["text/event-stream"] = {
|
||||||
|
"schema": {"$ref": f"#/components/schemas/{streaming_schema_name}"}
|
||||||
|
}
|
||||||
|
|
||||||
|
# If no content types, use empty schema
|
||||||
|
# Add the endpoint to the FastAPI app
|
||||||
|
is_deprecated = webmethod.deprecated or False
|
||||||
|
route_kwargs = {
|
||||||
|
"name": name,
|
||||||
|
"tags": [_get_tag_from_api(api)],
|
||||||
|
"deprecated": is_deprecated,
|
||||||
|
"responses": {
|
||||||
|
400: {"$ref": "#/components/responses/BadRequest400"},
|
||||||
|
429: {"$ref": "#/components/responses/TooManyRequests429"},
|
||||||
|
500: {"$ref": "#/components/responses/InternalServerError500"},
|
||||||
|
"default": {"$ref": "#/components/responses/DefaultError"},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
success_response: dict[str, Any] = {"description": response_description}
|
||||||
|
if response_content:
|
||||||
|
success_response["content"] = response_content
|
||||||
|
route_kwargs["responses"][200] = success_response
|
||||||
|
|
||||||
|
# FastAPI needs response_model parameter to properly generate OpenAPI spec
|
||||||
|
# Use the non-streaming response model if available
|
||||||
|
if response_model:
|
||||||
|
route_kwargs["response_model"] = response_model
|
||||||
|
|
||||||
|
method_map = {"GET": app.get, "POST": app.post, "PUT": app.put, "DELETE": app.delete, "PATCH": app.patch}
|
||||||
|
for method in method_list:
|
||||||
|
if handler := method_map.get(method):
|
||||||
|
handler(fastapi_path, **route_kwargs)(endpoint_func)
|
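
The stub endpoints above never serve traffic; they exist so FastAPI has something to introspect when building the spec. The load-bearing trick is that FastAPI resolves parameters through `inspect.signature()`, which honors a function's `__signature__` attribute. A minimal self-contained sketch of that technique, with illustrative names not taken from the generator:

import inspect

from fastapi import FastAPI

app = FastAPI()


async def stub_endpoint():
    # Placeholder body; only the attached metadata matters for the spec.
    return {}


# Suppose route discovery found a single query parameter `limit: int = 10`.
stub_endpoint.__signature__ = inspect.Signature(
    [inspect.Parameter("limit", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=10, annotation=int)]
)
stub_endpoint.__annotations__ = {"limit": int}

app.get("/items")(stub_endpoint)
# app.openapi()["paths"]["/items"]["get"] now documents `limit` as a query parameter.
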
scripts/openapi_generator/main.py (new executable file)
@@ -0,0 +1,241 @@
#!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Main entry point for the FastAPI OpenAPI generator.
"""

import copy
from pathlib import Path
from typing import Any

import yaml
from fastapi.openapi.utils import get_openapi

from . import app, schema_collection, schema_filtering, schema_transforms, state


def generate_openapi_spec(output_dir: str) -> dict[str, Any]:
    """
    Generate OpenAPI specification using FastAPI's built-in method.

    Args:
        output_dir: Directory to save the generated files

    Returns:
        The generated OpenAPI specification as a dictionary
    """
    state.reset_generator_state()
    # Create the FastAPI app
    fastapi_app = app.create_llama_stack_app()

    # Generate the OpenAPI schema
    openapi_schema = get_openapi(
        title=fastapi_app.title,
        version=fastapi_app.version,
        description=fastapi_app.description,
        routes=fastapi_app.routes,
        servers=fastapi_app.servers,
    )

    # Set OpenAPI version to 3.1.0
    openapi_schema["openapi"] = "3.1.0"

    # Add standard error responses
    openapi_schema = schema_transforms._add_error_responses(openapi_schema)

    # Ensure all @json_schema_type decorated models are included
    openapi_schema = schema_collection._ensure_json_schema_types_included(openapi_schema)

    # Fix $ref references to point to components/schemas instead of $defs
    openapi_schema = schema_transforms._fix_ref_references(openapi_schema)

    # Fix path parameter resolution issues
    openapi_schema = schema_transforms._fix_path_parameters(openapi_schema)

    # Eliminate $defs section entirely for oasdiff compatibility
    openapi_schema = schema_transforms._eliminate_defs_section(openapi_schema)

    # Clean descriptions in schema definitions by removing docstring metadata
    openapi_schema = schema_transforms._clean_schema_descriptions(openapi_schema)
    openapi_schema = schema_transforms._normalize_empty_responses(openapi_schema)

    # Remove query parameters from POST/PUT/PATCH endpoints that have a request body
    # FastAPI sometimes infers parameters as query params even when they should be in the request body
    openapi_schema = schema_transforms._remove_query_params_from_body_endpoints(openapi_schema)

    # Add x-llama-stack-extra-body-params extension for ExtraBodyField parameters
    openapi_schema = schema_transforms._add_extra_body_params_extension(openapi_schema)

    # Remove request bodies from GET endpoints (GET requests should never have request bodies)
    # This must run AFTER _add_extra_body_params_extension to ensure any request bodies
    # that FastAPI incorrectly added to GET endpoints are removed
    openapi_schema = schema_transforms._remove_request_bodies_from_get_endpoints(openapi_schema)

    # Extract duplicate union types to shared schema references
    openapi_schema = schema_transforms._extract_duplicate_union_types(openapi_schema)

    # Split into stable (v1 only), experimental (v1alpha + v1beta), deprecated, and combined (stainless) specs
    # Each spec needs its own deep copy of the full schema to avoid cross-contamination
    stable_schema = schema_filtering._filter_schema_by_version(
        copy.deepcopy(openapi_schema), stable_only=True, exclude_deprecated=True
    )
    experimental_schema = schema_filtering._filter_schema_by_version(
        copy.deepcopy(openapi_schema), stable_only=False, exclude_deprecated=True
    )
    deprecated_schema = schema_filtering._filter_deprecated_schema(copy.deepcopy(openapi_schema))
    combined_schema = schema_filtering._filter_combined_schema(copy.deepcopy(openapi_schema))

    # Apply duplicate union extraction to combined schema (used by Stainless)
    combined_schema = schema_transforms._extract_duplicate_union_types(combined_schema)

    base_description = (
        "This is the specification of the Llama Stack that provides\n"
        " a set of endpoints and their corresponding interfaces that are\n"
        " tailored to\n"
        " best leverage Llama Models."
    )

    schema_configs = [
        (
            stable_schema,
            "Llama Stack Specification",
            "**✅ STABLE**: Production-ready APIs with backward compatibility guarantees.",
        ),
        (
            experimental_schema,
            "Llama Stack Specification - Experimental APIs",
            "**🧪 EXPERIMENTAL**: Pre-release APIs (v1alpha, v1beta) that may change before\n becoming stable.",
        ),
        (
            deprecated_schema,
            "Llama Stack Specification - Deprecated APIs",
            "**⚠️ DEPRECATED**: Legacy APIs that may be removed in future versions. Use for\n migration reference only.",
        ),
        (
            combined_schema,
            "Llama Stack Specification - Stable & Experimental APIs",
            "**🔗 COMBINED**: This specification includes both stable production-ready APIs\n and experimental pre-release APIs. Use stable APIs for production deployments\n and experimental APIs for testing new features.",
        ),
    ]

    for schema, title, description_suffix in schema_configs:
        if "info" not in schema:
            schema["info"] = {}
        schema["info"].update(
            {
                "title": title,
                "version": "v1",
                "description": f"{base_description}\n\n {description_suffix}",
            }
        )

    schemas_to_validate = [
        (stable_schema, "Stable schema"),
        (experimental_schema, "Experimental schema"),
        (deprecated_schema, "Deprecated schema"),
        (combined_schema, "Combined (stainless) schema"),
    ]

    for schema, _ in schemas_to_validate:
        schema_transforms._fix_schema_issues(schema)
        schema_transforms._apply_legacy_sorting(schema)

    print("\nValidating generated schemas...")
    failed_schemas = [
        name for schema, name in schemas_to_validate if not schema_transforms.validate_openapi_schema(schema, name)
    ]
    if failed_schemas:
        raise ValueError(f"Invalid schemas: {', '.join(failed_schemas)}")

    # Ensure output directory exists
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    # Save the stable specification
    yaml_path = output_path / "llama-stack-spec.yaml"
    schema_transforms._write_yaml_file(yaml_path, stable_schema)
    # Post-process the YAML file to remove $defs section and fix references
    with open(yaml_path) as f:
        yaml_content = f.read()

    if " $defs:" in yaml_content or "#/$defs/" in yaml_content:
        # Use string replacement to fix references directly
        if "#/$defs/" in yaml_content:
            yaml_content = yaml_content.replace("#/$defs/", "#/components/schemas/")

        # Parse the YAML content
        yaml_data = yaml.safe_load(yaml_content)

        # Move $defs to components/schemas if it exists
        if "$defs" in yaml_data:
            if "components" not in yaml_data:
                yaml_data["components"] = {}
            if "schemas" not in yaml_data["components"]:
                yaml_data["components"]["schemas"] = {}

            # Move all $defs to components/schemas
            for def_name, def_schema in yaml_data["$defs"].items():
                yaml_data["components"]["schemas"][def_name] = def_schema

            # Remove the $defs section
            del yaml_data["$defs"]

        # Write the modified YAML back
        schema_transforms._write_yaml_file(yaml_path, yaml_data)

    print(f"Generated YAML (stable): {yaml_path}")

    experimental_yaml_path = output_path / "experimental-llama-stack-spec.yaml"
    schema_transforms._write_yaml_file(experimental_yaml_path, experimental_schema)
    print(f"Generated YAML (experimental): {experimental_yaml_path}")

    deprecated_yaml_path = output_path / "deprecated-llama-stack-spec.yaml"
    schema_transforms._write_yaml_file(deprecated_yaml_path, deprecated_schema)
    print(f"Generated YAML (deprecated): {deprecated_yaml_path}")

    # Generate combined (stainless) spec
    stainless_yaml_path = output_path / "stainless-llama-stack-spec.yaml"
    schema_transforms._write_yaml_file(stainless_yaml_path, combined_schema)
    print(f"Generated YAML (stainless/combined): {stainless_yaml_path}")

    return stable_schema
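
# A usage sketch (assuming the generator package is importable as
# scripts.openapi_generator; the relative imports above require package context):
#
#   from scripts.openapi_generator.main import generate_openapi_spec
#
#   spec = generate_openapi_spec("/tmp/openapi-out")
#   assert spec["openapi"] == "3.1.0"  # the stable spec is returned
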
def main():
    """Main entry point for the FastAPI OpenAPI generator."""
    import argparse

    parser = argparse.ArgumentParser(description="Generate OpenAPI specification using FastAPI")
    parser.add_argument("output_dir", help="Output directory for generated files")

    args = parser.parse_args()

    print("Generating OpenAPI specification using FastAPI...")
    print(f"Output directory: {args.output_dir}")

    try:
        openapi_schema = generate_openapi_spec(output_dir=args.output_dir)

        print("\nOpenAPI specification generated successfully!")
        print(f"Schemas: {len(openapi_schema.get('components', {}).get('schemas', {}))}")
        print(f"Paths: {len(openapi_schema.get('paths', {}))}")
        operation_count = sum(
            1
            for path_info in openapi_schema.get("paths", {}).values()
            for method in ["get", "post", "put", "delete", "patch"]
            if method in path_info
        )
        print(f"Operations: {operation_count}")

    except Exception as e:
        print(f"Error generating OpenAPI specification: {e}")
        raise


if __name__ == "__main__":
    main()
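
Any of the emitted YAML files can be re-checked outside the generator with the same `openapi_spec_validator` package the transforms module imports; a minimal sketch with a placeholder path:

import yaml
from openapi_spec_validator import validate_spec

with open("/tmp/openapi-out/llama-stack-spec.yaml") as f:
    validate_spec(yaml.safe_load(f))  # raises if the document is not a valid OpenAPI spec
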
scripts/openapi_generator/schema_collection.py (new file)
@@ -0,0 +1,131 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Schema discovery and collection for OpenAPI generation.
"""

import importlib
from typing import Any


def _ensure_components_schemas(openapi_schema: dict[str, Any]) -> None:
    """Ensure components.schemas exists in the schema."""
    if "components" not in openapi_schema:
        openapi_schema["components"] = {}
    if "schemas" not in openapi_schema["components"]:
        openapi_schema["components"]["schemas"] = {}


def _load_extra_schema_modules() -> None:
    """
    Import modules outside llama_stack_api that use schema_utils to register schemas.

    The API package already imports its submodules via __init__, but server-side modules
    like telemetry need to be imported explicitly so their decorator side effects run.
    """
    extra_modules = [
        "llama_stack.core.telemetry.telemetry",
    ]
    for module_name in extra_modules:
        try:
            importlib.import_module(module_name)
        except ImportError:
            continue


def _extract_and_fix_defs(schema: dict[str, Any], openapi_schema: dict[str, Any]) -> None:
    """
    Extract $defs from a schema, move them to components/schemas, and fix references.
    This handles both TypeAdapter-generated schemas and model_json_schema() schemas.
    """
    if "$defs" in schema:
        defs = schema.pop("$defs")
        for def_name, def_schema in defs.items():
            if def_name not in openapi_schema["components"]["schemas"]:
                openapi_schema["components"]["schemas"][def_name] = def_schema
                # Recursively handle $defs in nested schemas
                _extract_and_fix_defs(def_schema, openapi_schema)

    # Fix any references in the main schema that point to $defs
    def fix_refs_in_schema(obj: Any) -> None:
        if isinstance(obj, dict):
            if "$ref" in obj and obj["$ref"].startswith("#/$defs/"):
                obj["$ref"] = obj["$ref"].replace("#/$defs/", "#/components/schemas/")
            for value in obj.values():
                fix_refs_in_schema(value)
        elif isinstance(obj, list):
            for item in obj:
                fix_refs_in_schema(item)

    fix_refs_in_schema(schema)


def _ensure_json_schema_types_included(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Ensure all registered schemas (decorated, explicit, and dynamic) are included in the OpenAPI schema.
    Relies on llama_stack_api's registry instead of recursively importing every module.
    """
    _ensure_components_schemas(openapi_schema)

    from pydantic import TypeAdapter

    from llama_stack_api.schema_utils import (
        iter_dynamic_schema_types,
        iter_json_schema_types,
        iter_registered_schema_types,
    )

    # Import extra modules (e.g., telemetry) whose schema registrations live outside llama_stack_api
    _load_extra_schema_modules()

    # Handle explicitly registered schemas first (union types, Annotated structs, etc.)
    for registration_info in iter_registered_schema_types():
        schema_type = registration_info.type
        schema_name = registration_info.name
        if schema_name not in openapi_schema["components"]["schemas"]:
            try:
                adapter = TypeAdapter(schema_type)
                schema = adapter.json_schema(ref_template="#/components/schemas/{model}")
                _extract_and_fix_defs(schema, openapi_schema)
                openapi_schema["components"]["schemas"][schema_name] = schema
            except Exception as e:
                print(f"Warning: Failed to generate schema for registered type {schema_name}: {e}")
                import traceback

                traceback.print_exc()
                continue

    # Add @json_schema_type decorated models
    for model in iter_json_schema_types():
        schema_name = getattr(model, "_llama_stack_schema_name", None) or getattr(model, "__name__", None)
        if not schema_name:
            continue
        if schema_name not in openapi_schema["components"]["schemas"]:
            try:
                if hasattr(model, "model_json_schema"):
                    schema = model.model_json_schema(ref_template="#/components/schemas/{model}")
                else:
                    adapter = TypeAdapter(model)
                    schema = adapter.json_schema(ref_template="#/components/schemas/{model}")
                _extract_and_fix_defs(schema, openapi_schema)
                openapi_schema["components"]["schemas"][schema_name] = schema
            except Exception as e:
                print(f"Warning: Failed to generate schema for {schema_name}: {e}")
                continue

    # Include any dynamic models generated while building endpoints
    for model in iter_dynamic_schema_types():
        try:
            schema_name = model.__name__
            if schema_name not in openapi_schema["components"]["schemas"]:
                schema = model.model_json_schema(ref_template="#/components/schemas/{model}")
                _extract_and_fix_defs(schema, openapi_schema)
                openapi_schema["components"]["schemas"][schema_name] = schema
        except Exception:
            continue

    return openapi_schema
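
The `$defs` hoisting that `_extract_and_fix_defs` automates can be reproduced in isolation with plain Pydantic; a minimal sketch with illustrative model names:

from pydantic import BaseModel, TypeAdapter


class Inner(BaseModel):
    x: int


class Outer(BaseModel):
    inner: Inner


# With ref_template, nested $refs already point at components/schemas, but the
# nested definitions themselves still land under "$defs" and must be hoisted.
schema = TypeAdapter(Outer).json_schema(ref_template="#/components/schemas/{model}")
hoisted = dict(schema.pop("$defs", {}))
hoisted["Outer"] = schema
# hoisted now holds {"Inner": ..., "Outer": ...}, ready to merge into components/schemas.
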
scripts/openapi_generator/schema_filtering.py (new file)
@@ -0,0 +1,297 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Schema filtering and version filtering for OpenAPI generation.
"""

from typing import Any

from llama_stack_api.schema_utils import iter_json_schema_types, iter_registered_schema_types
from llama_stack_api.version import (
    LLAMA_STACK_API_V1,
    LLAMA_STACK_API_V1ALPHA,
    LLAMA_STACK_API_V1BETA,
)


def _get_all_json_schema_type_names() -> set[str]:
    """Collect schema names from @json_schema_type-decorated models."""
    schema_names = set()
    for model in iter_json_schema_types():
        schema_name = getattr(model, "_llama_stack_schema_name", None) or getattr(model, "__name__", None)
        if schema_name:
            schema_names.add(schema_name)
    return schema_names


def _get_explicit_schema_names(openapi_schema: dict[str, Any]) -> set[str]:
    """Schema names to keep even if not referenced by a path."""
    registered_schema_names = {info.name for info in iter_registered_schema_types()}
    json_schema_type_names = _get_all_json_schema_type_names()
    return registered_schema_names | json_schema_type_names


def _find_schema_refs_in_object(obj: Any) -> set[str]:
    """
    Recursively find all schema references ($ref) in an object.
    """
    refs = set()

    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == "$ref" and isinstance(value, str) and value.startswith("#/components/schemas/"):
                schema_name = value.split("/")[-1]
                refs.add(schema_name)
            else:
                refs.update(_find_schema_refs_in_object(value))
    elif isinstance(obj, list):
        for item in obj:
            refs.update(_find_schema_refs_in_object(item))

    return refs


def _add_transitive_references(
    referenced_schemas: set[str], all_schemas: dict[str, Any], initial_schemas: set[str] | None = None
) -> set[str]:
    """Add transitive references for given schemas."""
    if initial_schemas:
        referenced_schemas.update(initial_schemas)
        additional_schemas = set()
        for schema_name in initial_schemas:
            if schema_name in all_schemas:
                additional_schemas.update(_find_schema_refs_in_object(all_schemas[schema_name]))
    else:
        additional_schemas = set()
        for schema_name in referenced_schemas:
            if schema_name in all_schemas:
                additional_schemas.update(_find_schema_refs_in_object(all_schemas[schema_name]))

    while additional_schemas:
        new_schemas = additional_schemas - referenced_schemas
        if not new_schemas:
            break
        referenced_schemas.update(new_schemas)
        additional_schemas = set()
        for schema_name in new_schemas:
            if schema_name in all_schemas:
                additional_schemas.update(_find_schema_refs_in_object(all_schemas[schema_name]))

    return referenced_schemas
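
# Worked example: with reference chains A -> B -> C, seeding the closure with
# {"A"} pulls in all three names.
#
#   >>> schemas = {
#   ...     "A": {"$ref": "#/components/schemas/B"},
#   ...     "B": {"properties": {"c": {"$ref": "#/components/schemas/C"}}},
#   ...     "C": {"type": "object"},
#   ... }
#   >>> sorted(_add_transitive_references(set(), schemas, {"A"}))
#   ['A', 'B', 'C']
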
def _find_schemas_referenced_by_paths(filtered_paths: dict[str, Any], openapi_schema: dict[str, Any]) -> set[str]:
    """
    Find all schemas that are referenced by the filtered paths.
    This recursively traverses the path definitions to find all $ref references.
    """
    referenced_schemas = set()

    # Traverse all filtered paths
    for _, path_item in filtered_paths.items():
        if not isinstance(path_item, dict):
            continue

        # Check each HTTP method in the path
        for method in ["get", "post", "put", "delete", "patch", "head", "options"]:
            if method in path_item:
                operation = path_item[method]
                if isinstance(operation, dict):
                    # Find all schema references in this operation
                    referenced_schemas.update(_find_schema_refs_in_object(operation))

    # Also check the responses section for schema references
    if "components" in openapi_schema and "responses" in openapi_schema["components"]:
        referenced_schemas.update(_find_schema_refs_in_object(openapi_schema["components"]["responses"]))

    # Also include schemas that are referenced by other schemas (transitive references)
    # This ensures we include all dependencies
    all_schemas = openapi_schema.get("components", {}).get("schemas", {})
    additional_schemas = set()

    for schema_name in referenced_schemas:
        if schema_name in all_schemas:
            additional_schemas.update(_find_schema_refs_in_object(all_schemas[schema_name]))

    # Keep adding transitive references until no new ones are found
    while additional_schemas:
        new_schemas = additional_schemas - referenced_schemas
        if not new_schemas:
            break
        referenced_schemas.update(new_schemas)
        additional_schemas = set()
        for schema_name in new_schemas:
            if schema_name in all_schemas:
                additional_schemas.update(_find_schema_refs_in_object(all_schemas[schema_name]))

    return referenced_schemas


def _filter_schemas_by_references(
    filtered_schema: dict[str, Any], filtered_paths: dict[str, Any], openapi_schema: dict[str, Any]
) -> dict[str, Any]:
    """Filter schemas to only include ones referenced by filtered paths and explicit schemas."""
    if "components" not in filtered_schema or "schemas" not in filtered_schema["components"]:
        return filtered_schema

    referenced_schemas = _find_schemas_referenced_by_paths(filtered_paths, openapi_schema)
    all_schemas = openapi_schema.get("components", {}).get("schemas", {})
    explicit_names = _get_explicit_schema_names(openapi_schema)
    referenced_schemas = _add_transitive_references(referenced_schemas, all_schemas, explicit_names)

    filtered_schemas = {
        name: schema for name, schema in filtered_schema["components"]["schemas"].items() if name in referenced_schemas
    }
    filtered_schema["components"]["schemas"] = filtered_schemas

    if "components" in openapi_schema and "$defs" in openapi_schema["components"]:
        if "components" not in filtered_schema:
            filtered_schema["components"] = {}
        filtered_schema["components"]["$defs"] = openapi_schema["components"]["$defs"]

    return filtered_schema


def _path_starts_with_version(path: str, version: str) -> bool:
    """Check if a path starts with a specific API version prefix."""
    return path.startswith(f"/{version}/")


def _is_stable_path(path: str) -> bool:
    """Check if a path is a stable v1 path (not v1alpha or v1beta)."""
    return (
        _path_starts_with_version(path, LLAMA_STACK_API_V1)
        and not _path_starts_with_version(path, LLAMA_STACK_API_V1ALPHA)
        and not _path_starts_with_version(path, LLAMA_STACK_API_V1BETA)
    )


def _is_experimental_path(path: str) -> bool:
    """Check if a path is an experimental path (v1alpha or v1beta)."""
    return _path_starts_with_version(path, LLAMA_STACK_API_V1ALPHA) or _path_starts_with_version(
        path, LLAMA_STACK_API_V1BETA
    )


def _is_path_deprecated(path_item: dict[str, Any]) -> bool:
    """Check if a path item has any deprecated operations."""
    if not isinstance(path_item, dict):
        return False
    for method in ["get", "post", "put", "delete", "patch", "head", "options"]:
        if isinstance(path_item.get(method), dict) and path_item[method].get("deprecated", False):
            return True
    return False


def _filter_schema_by_version(
    openapi_schema: dict[str, Any], stable_only: bool = True, exclude_deprecated: bool = True
) -> dict[str, Any]:
    """
    Filter OpenAPI schema by API version.

    Args:
        openapi_schema: The full OpenAPI schema
        stable_only: If True, return only /v1/ paths (stable). If False, return only /v1alpha/ and /v1beta/ paths (experimental).
        exclude_deprecated: If True, exclude deprecated endpoints from the result.

    Returns:
        Filtered OpenAPI schema
    """
    filtered_schema = openapi_schema.copy()

    if "paths" not in filtered_schema:
        return filtered_schema

    filtered_paths = {}
    for path, path_item in filtered_schema["paths"].items():
        if not isinstance(path_item, dict):
            continue

        # Filter at operation level, not path level
        # This allows paths with both deprecated and non-deprecated operations
        filtered_path_item = {}
        for method in ["get", "post", "put", "delete", "patch", "head", "options"]:
            if method not in path_item:
                continue
            operation = path_item[method]
            if not isinstance(operation, dict):
                continue

            # Skip deprecated operations if exclude_deprecated is True
            if exclude_deprecated and operation.get("deprecated", False):
                continue

            filtered_path_item[method] = operation

        # Only include path if it has at least one operation after filtering
        if filtered_path_item:
            # Check if path matches version filter
            if (stable_only and _is_stable_path(path)) or (not stable_only and _is_experimental_path(path)):
                filtered_paths[path] = filtered_path_item

    filtered_schema["paths"] = filtered_paths
    return _filter_schemas_by_references(filtered_schema, filtered_paths, openapi_schema)


def _filter_deprecated_schema(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Filter OpenAPI schema to include only deprecated endpoints.
    Includes all deprecated endpoints regardless of version (v1, v1alpha, v1beta).
    """
    filtered_schema = openapi_schema.copy()

    if "paths" not in filtered_schema:
        return filtered_schema

    # Filter paths to only include deprecated ones
    filtered_paths = {}
    for path, path_item in filtered_schema["paths"].items():
        if _is_path_deprecated(path_item):
            filtered_paths[path] = path_item

    filtered_schema["paths"] = filtered_paths

    return filtered_schema


def _filter_combined_schema(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Filter OpenAPI schema to include both stable (v1) and experimental (v1alpha, v1beta) APIs.
    Includes deprecated endpoints. This is used for the combined "stainless" spec.
    """
    filtered_schema = openapi_schema.copy()

    if "paths" not in filtered_schema:
        return filtered_schema

    # Filter paths to include stable (v1) and experimental (v1alpha, v1beta), including deprecated operations
    filtered_paths = {}
    for path, path_item in filtered_schema["paths"].items():
        if not isinstance(path_item, dict):
            continue

        # Filter at operation level, not path level
        # This allows paths with both deprecated and non-deprecated operations
        filtered_path_item = {}
        for method in ["get", "post", "put", "delete", "patch", "head", "options"]:
            if method not in path_item:
                continue
            operation = path_item[method]
            if not isinstance(operation, dict):
                continue

            filtered_path_item[method] = operation

        # Only include path if it has at least one operation after filtering
        if filtered_path_item:
            # Check if path matches version filter (stable or experimental)
            if _is_stable_path(path) or _is_experimental_path(path):
                filtered_paths[path] = filtered_path_item

    filtered_schema["paths"] = filtered_paths

    return _filter_schemas_by_references(filtered_schema, filtered_paths, openapi_schema)
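
Applied to a toy spec, the filters above split cleanly; a sketch assuming the version constants are "v1", "v1alpha", and "v1beta" as the docstrings indicate:

spec = {
    "paths": {
        "/v1/models": {"get": {"responses": {}}},
        "/v1alpha/eval": {"post": {"responses": {}}},
        "/v1/legacy": {"get": {"deprecated": True, "responses": {}}},
    }
}
stable = _filter_schema_by_version(spec, stable_only=True, exclude_deprecated=True)
# -> keeps only "/v1/models": the v1alpha path fails the version check, and the
#    deprecated operation on "/v1/legacy" is dropped at the operation level.
experimental = _filter_schema_by_version(spec, stable_only=False, exclude_deprecated=True)
# -> keeps only "/v1alpha/eval"
deprecated = _filter_deprecated_schema(spec)
# -> keeps only "/v1/legacy"
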
scripts/openapi_generator/schema_transforms.py (new file)
@@ -0,0 +1,963 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Schema transformations and fixes for OpenAPI generation.
"""

import copy
from collections import OrderedDict
from pathlib import Path
from typing import Any

import yaml
from openapi_spec_validator import validate_spec
from openapi_spec_validator.exceptions import OpenAPISpecValidatorError

from . import endpoints, schema_collection
from ._legacy_order import (
    LEGACY_OPERATION_KEYS,
    LEGACY_PATH_ORDER,
    LEGACY_RESPONSE_ORDER,
    LEGACY_SCHEMA_ORDER,
    LEGACY_SECURITY,
    LEGACY_TAG_GROUPS,
    LEGACY_TAGS,
)
from .state import _extra_body_fields


def _fix_ref_references(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Fix $ref references to point to components/schemas instead of $defs.
    This prevents the YAML dumper from creating a root-level $defs section.
    """

    def fix_refs(obj: Any) -> None:
        if isinstance(obj, dict):
            if "$ref" in obj and obj["$ref"].startswith("#/$defs/"):
                # Replace #/$defs/ with #/components/schemas/
                obj["$ref"] = obj["$ref"].replace("#/$defs/", "#/components/schemas/")
            for value in obj.values():
                fix_refs(value)
        elif isinstance(obj, list):
            for item in obj:
                fix_refs(item)

    fix_refs(openapi_schema)
    return openapi_schema


def _normalize_empty_responses(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """Convert empty 200 responses into 204 No Content."""

    for path_item in openapi_schema.get("paths", {}).values():
        if not isinstance(path_item, dict):
            continue
        for method in list(path_item.keys()):
            operation = path_item.get(method)
            if not isinstance(operation, dict):
                continue
            responses = operation.get("responses")
            if not isinstance(responses, dict):
                continue
            response_200 = responses.get("200") or responses.get(200)
            if response_200 is None:
                continue
            content = response_200.get("content")
            if content and any(
                isinstance(media, dict) and media.get("schema") not in ({}, None) for media in content.values()
            ):
                continue
            responses.pop("200", None)
            responses.pop(200, None)
            responses["204"] = {"description": response_200.get("description", "No Content")}
    return openapi_schema
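
# Worked example: a 200 with no content schema becomes a 204 that keeps the
# original description.
#
#   >>> spec = {"paths": {"/v1/x": {"delete": {"responses": {"200": {"description": "OK"}}}}}}
#   >>> _normalize_empty_responses(spec)["paths"]["/v1/x"]["delete"]["responses"]
#   {'204': {'description': 'OK'}}
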
def _eliminate_defs_section(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Eliminate $defs section entirely by moving all definitions to components/schemas.
    This matches the structure of the old pyopenapi generator for oasdiff compatibility.
    """
    schema_collection._ensure_components_schemas(openapi_schema)

    # First pass: collect all $defs from anywhere in the schema
    defs_to_move = {}

    def collect_defs(obj: Any) -> None:
        if isinstance(obj, dict):
            if "$defs" in obj:
                # Collect $defs for later processing
                for def_name, def_schema in obj["$defs"].items():
                    if def_name not in defs_to_move:
                        defs_to_move[def_name] = def_schema

            # Recursively process all values
            for value in obj.values():
                collect_defs(value)
        elif isinstance(obj, list):
            for item in obj:
                collect_defs(item)

    # Collect all $defs
    collect_defs(openapi_schema)

    # Move all $defs to components/schemas
    for def_name, def_schema in defs_to_move.items():
        if def_name not in openapi_schema["components"]["schemas"]:
            openapi_schema["components"]["schemas"][def_name] = def_schema

    # Also move any existing root-level $defs to components/schemas
    if "$defs" in openapi_schema:
        print(f"Found root-level $defs with {len(openapi_schema['$defs'])} items, moving to components/schemas")
        for def_name, def_schema in openapi_schema["$defs"].items():
            if def_name not in openapi_schema["components"]["schemas"]:
                openapi_schema["components"]["schemas"][def_name] = def_schema
        # Remove the root-level $defs
        del openapi_schema["$defs"]

    # Second pass: remove all $defs sections from anywhere in the schema
    def remove_defs(obj: Any) -> None:
        if isinstance(obj, dict):
            if "$defs" in obj:
                del obj["$defs"]

            # Recursively process all values
            for value in obj.values():
                remove_defs(value)
        elif isinstance(obj, list):
            for item in obj:
                remove_defs(item)

    # Remove all $defs sections
    remove_defs(openapi_schema)

    return openapi_schema


def _add_error_responses(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Add standard error response definitions to the OpenAPI schema.
    Uses the actual Error model from the codebase for consistency.
    """
    if "components" not in openapi_schema:
        openapi_schema["components"] = {}
    if "responses" not in openapi_schema["components"]:
        openapi_schema["components"]["responses"] = {}

    try:
        from llama_stack_api.datatypes import Error

        schema_collection._ensure_components_schemas(openapi_schema)
        if "Error" not in openapi_schema["components"]["schemas"]:
            openapi_schema["components"]["schemas"]["Error"] = Error.model_json_schema()
    except ImportError:
        pass

    schema_collection._ensure_components_schemas(openapi_schema)
    if "Response" not in openapi_schema["components"]["schemas"]:
        openapi_schema["components"]["schemas"]["Response"] = {"title": "Response", "type": "object"}

    # Define standard HTTP error responses
    error_responses = {
        400: {
            "name": "BadRequest400",
            "description": "The request was invalid or malformed",
            "example": {"status": 400, "title": "Bad Request", "detail": "The request was invalid or malformed"},
        },
        429: {
            "name": "TooManyRequests429",
            "description": "The client has sent too many requests in a given amount of time",
            "example": {
                "status": 429,
                "title": "Too Many Requests",
                "detail": "You have exceeded the rate limit. Please try again later.",
            },
        },
        500: {
            "name": "InternalServerError500",
            "description": "The server encountered an unexpected error",
            "example": {"status": 500, "title": "Internal Server Error", "detail": "An unexpected error occurred"},
        },
    }

    # Add each error response to the schema
    for _, error_info in error_responses.items():
        response_name = error_info["name"]
        openapi_schema["components"]["responses"][response_name] = {
            "description": error_info["description"],
            "content": {
                "application/json": {"schema": {"$ref": "#/components/schemas/Error"}, "example": error_info["example"]}
            },
        }

    # Add a default error response
    openapi_schema["components"]["responses"]["DefaultError"] = {
        "description": "An error occurred",
        "content": {"application/json": {"schema": {"$ref": "#/components/schemas/Error"}}},
    }

    return openapi_schema
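
# Worked example (assumes llama_stack_api is importable so the Error model
# registers; on ImportError the Error schema is simply skipped):
#
#   >>> spec = _add_error_responses({})
#   >>> sorted(spec["components"]["responses"])
#   ['BadRequest400', 'DefaultError', 'InternalServerError500', 'TooManyRequests429']
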
def _fix_path_parameters(openapi_schema: dict[str, Any]) -> dict[str, Any]:
|
||||||
|
"""
|
||||||
|
Fix path parameter resolution issues by adding explicit parameter definitions.
|
||||||
|
"""
|
||||||
|
if "paths" not in openapi_schema:
|
||||||
|
return openapi_schema
|
||||||
|
|
||||||
|
for path, path_item in openapi_schema["paths"].items():
|
||||||
|
# Extract path parameters from the URL
|
||||||
|
path_params = endpoints._extract_path_parameters(path)
|
||||||
|
|
||||||
|
if not path_params:
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Add parameters to each operation in this path
|
||||||
|
for method in ["get", "post", "put", "delete", "patch", "head", "options"]:
|
||||||
|
if method in path_item and isinstance(path_item[method], dict):
|
||||||
|
operation = path_item[method]
|
||||||
|
if "parameters" not in operation:
|
||||||
|
operation["parameters"] = []
|
||||||
|
|
||||||
|
# Add path parameters that aren't already defined
|
||||||
|
existing_param_names = {p.get("name") for p in operation["parameters"] if p.get("in") == "path"}
|
||||||
|
for param in path_params:
|
||||||
|
if param["name"] not in existing_param_names:
|
||||||
|
operation["parameters"].append(param)
|
||||||
|
|
||||||
|
return openapi_schema
|
||||||
|
|
||||||
|
|
||||||
|
def _get_schema_title(item: dict[str, Any]) -> str | None:
|
||||||
|
"""Extract a title for a schema item to use in union variant names."""
|
||||||
|
if "$ref" in item:
|
||||||
|
return item["$ref"].split("/")[-1]
|
||||||
|
elif "type" in item:
|
||||||
|
type_val = item["type"]
|
||||||
|
if type_val == "null":
|
||||||
|
return None
|
||||||
|
if type_val == "array" and "items" in item:
|
||||||
|
items = item["items"]
|
||||||
|
if isinstance(items, dict):
|
||||||
|
if "anyOf" in items or "oneOf" in items:
|
||||||
|
nested_union = items.get("anyOf") or items.get("oneOf")
|
||||||
|
if isinstance(nested_union, list) and len(nested_union) > 0:
|
||||||
|
nested_types = []
|
||||||
|
for nested_item in nested_union:
|
||||||
|
if isinstance(nested_item, dict):
|
||||||
|
if "$ref" in nested_item:
|
||||||
|
nested_types.append(nested_item["$ref"].split("/")[-1])
|
||||||
|
elif "oneOf" in nested_item:
|
||||||
|
one_of_items = nested_item.get("oneOf", [])
|
||||||
|
if one_of_items and isinstance(one_of_items[0], dict) and "$ref" in one_of_items[0]:
|
||||||
|
base_name = one_of_items[0]["$ref"].split("/")[-1].split("-")[0]
|
||||||
|
nested_types.append(f"{base_name}Union")
|
||||||
|
else:
|
||||||
|
nested_types.append("Union")
|
||||||
|
elif "type" in nested_item and nested_item["type"] != "null":
|
||||||
|
nested_types.append(nested_item["type"])
|
||||||
|
if nested_types:
|
||||||
|
unique_nested = list(dict.fromkeys(nested_types))
|
||||||
|
# Use more descriptive names for better code generation
|
||||||
|
if len(unique_nested) <= 3:
|
||||||
|
return f"list[{' | '.join(unique_nested)}]"
|
||||||
|
else:
|
||||||
|
# Include first few types for better naming
|
||||||
|
return f"list[{unique_nested[0]} | {unique_nested[1]} | ...]"
|
||||||
|
return "list[Union]"
|
||||||
|
elif "$ref" in items:
|
||||||
|
return f"list[{items['$ref'].split('/')[-1]}]"
|
||||||
|
elif "type" in items:
|
||||||
|
return f"list[{items['type']}]"
|
||||||
|
return "array"
|
||||||
|
return type_val
|
||||||
|
elif "title" in item:
|
||||||
|
return item["title"]
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
def _add_titles_to_unions(obj: Any, parent_key: str | None = None) -> None:
|
||||||
|
"""Recursively add titles to union schemas (anyOf/oneOf) to help code generators infer names."""
|
||||||
|
if isinstance(obj, dict):
|
||||||
|
# Check if this is a union schema (anyOf or oneOf)
|
||||||
|
if "anyOf" in obj or "oneOf" in obj:
|
||||||
|
union_type = "anyOf" if "anyOf" in obj else "oneOf"
|
||||||
|
union_items = obj[union_type]
|
||||||
|
|
||||||
|
if isinstance(union_items, list) and len(union_items) > 0:
|
||||||
|
# Skip simple nullable unions (type | null) - these don't need titles
|
||||||
|
is_simple_nullable = (
|
||||||
|
len(union_items) == 2
|
||||||
|
and any(isinstance(item, dict) and item.get("type") == "null" for item in union_items)
|
||||||
|
and any(
|
||||||
|
isinstance(item, dict) and "type" in item and item.get("type") != "null" for item in union_items
|
||||||
|
)
|
||||||
|
and not any(
|
||||||
|
isinstance(item, dict) and ("$ref" in item or "anyOf" in item or "oneOf" in item)
|
||||||
|
for item in union_items
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
if is_simple_nullable:
|
||||||
|
# Remove title from simple nullable unions if it exists
|
||||||
|
if "title" in obj:
|
||||||
|
del obj["title"]
|
||||||
|
else:
|
||||||
|
# Add titles to individual union variants that need them
|
||||||
|
for item in union_items:
|
||||||
|
if isinstance(item, dict):
|
||||||
|
# Skip null types
|
||||||
|
if item.get("type") == "null":
|
||||||
|
continue
|
||||||
|
# Add title to complex variants (arrays with unions, nested unions, etc.)
|
||||||
|
# Also add to simple types if they're part of a complex union
|
||||||
|
needs_title = (
|
||||||
|
"items" in item
|
||||||
|
or "anyOf" in item
|
||||||
|
or "oneOf" in item
|
||||||
|
or ("$ref" in item and "title" not in item)
|
||||||
|
)
|
||||||
|
if needs_title and "title" not in item:
|
||||||
|
variant_title = _get_schema_title(item)
|
||||||
|
if variant_title:
|
||||||
|
item["title"] = variant_title
|
||||||
|
|
||||||
|
# Try to infer a meaningful title from the union items for the parent
|
||||||
|
titles = []
|
||||||
|
for item in union_items:
|
||||||
|
if isinstance(item, dict):
|
||||||
|
title = _get_schema_title(item)
|
||||||
|
if title:
|
||||||
|
titles.append(title)
|
||||||
|
|
||||||
|
if titles:
|
||||||
|
# Create a title from the union items
|
||||||
|
unique_titles = list(dict.fromkeys(titles)) # Preserve order, remove duplicates
|
||||||
|
if len(unique_titles) <= 3:
|
||||||
|
title = " | ".join(unique_titles)
|
||||||
|
else:
|
||||||
|
title = f"{unique_titles[0]} | ... ({len(unique_titles)} variants)"
|
||||||
|
# Always set the title for unions to help code generators
|
||||||
|
# This will replace generic property titles with union-specific ones
|
||||||
|
obj["title"] = title
|
||||||
|
elif "title" not in obj and parent_key:
|
||||||
|
# Use parent key as fallback only if no title exists
|
||||||
|
obj["title"] = f"{parent_key.title()}Union"
|
||||||
|
|
||||||
|
# Recursively process all values
|
||||||
|
for key, value in obj.items():
|
||||||
|
_add_titles_to_unions(value, key)
|
||||||
|
elif isinstance(obj, list):
|
||||||
|
for item in obj:
|
||||||
|
_add_titles_to_unions(item, parent_key)
|
||||||
|
|
||||||
|
|
||||||
|
def _convert_anyof_const_to_enum(obj: Any) -> None:
    """Convert anyOf with multiple const string values to a proper enum."""
    if isinstance(obj, dict):
        if "anyOf" in obj:
            any_of = obj["anyOf"]
            if isinstance(any_of, list):
                # Check if all items are const string values
                const_values = []
                has_null = False
                can_convert = True
                for item in any_of:
                    if isinstance(item, dict):
                        if item.get("type") == "null":
                            has_null = True
                        elif item.get("type") == "string" and "const" in item:
                            const_values.append(item["const"])
                        else:
                            # Not a simple const pattern, skip conversion for this anyOf
                            can_convert = False
                            break

                # If we have const values and they're all strings, convert to enum
                if can_convert and const_values and len(const_values) == len(any_of) - (1 if has_null else 0):
                    # Convert to enum
                    obj["type"] = "string"
                    obj["enum"] = const_values
                    # Preserve default if present, otherwise try to get from first const item
                    if "default" not in obj:
                        for item in any_of:
                            if isinstance(item, dict) and "const" in item:
                                obj["default"] = item["const"]
                                break
                    # Remove anyOf
                    del obj["anyOf"]
                    # Handle nullable
                    if has_null:
                        obj["nullable"] = True
                    # Remove title if it's just "string"
                    if obj.get("title") == "string":
                        del obj["title"]

        # Recursively process all values
        for value in obj.values():
            _convert_anyof_const_to_enum(value)
    elif isinstance(obj, list):
        for item in obj:
            _convert_anyof_const_to_enum(item)

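
# Illustrative sketch (editorial note, not part of the diff): how the pass
# above rewrites an anyOf-of-const schema in place. The input dict below is a
# hypothetical example.
_example_const_union = {
    "anyOf": [
        {"type": "string", "const": "auto"},
        {"type": "string", "const": "required"},
        {"type": "null"},
    ],
    "title": "string",
}
_convert_anyof_const_to_enum(_example_const_union)
assert _example_const_union == {"type": "string", "enum": ["auto", "required"], "default": "auto", "nullable": True}
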

def _fix_schema_recursive(obj: Any) -> None:
    """Recursively fix schema issues: exclusiveMinimum and null defaults."""
    if isinstance(obj, dict):
        if "exclusiveMinimum" in obj and isinstance(obj["exclusiveMinimum"], int | float):
            obj["minimum"] = obj.pop("exclusiveMinimum")
        if "default" in obj and obj["default"] is None:
            del obj["default"]
            obj["nullable"] = True
        for value in obj.values():
            _fix_schema_recursive(value)
    elif isinstance(obj, list):
        for item in obj:
            _fix_schema_recursive(item)

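
# Illustrative sketch (editorial note, not part of the diff): exclusiveMinimum
# is downgraded to minimum, and a null default is dropped in favor of nullable.
_example_schema = {"properties": {"temperature": {"exclusiveMinimum": 0, "default": None}}}
_fix_schema_recursive(_example_schema)
assert _example_schema == {"properties": {"temperature": {"minimum": 0, "nullable": True}}}
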

def _clean_description(description: str) -> str:
    """Remove :param, :type, :returns, and other docstring metadata from description."""
    if not description:
        return description

    lines = description.split("\n")
    cleaned_lines = []
    skip_until_empty = False

    for line in lines:
        stripped = line.strip()
        # Skip lines that start with docstring metadata markers
        if stripped.startswith(
            (":param", ":type", ":return", ":returns", ":raises", ":exception", ":yield", ":yields", ":cvar")
        ):
            skip_until_empty = True
            continue
        # If we're skipping and hit an empty line, resume normal processing
        if skip_until_empty:
            if not stripped:
                skip_until_empty = False
            continue
        # Include the line if we're not skipping
        cleaned_lines.append(line)

    # Join and strip trailing whitespace
    result = "\n".join(cleaned_lines).strip()
    return result

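
# Illustrative sketch (editorial note, not part of the diff): the hypothetical
# docstring below loses its :param/:returns block but keeps the prose.
_raw_doc = "List all models.\n\n:param limit: Max items.\n:returns: The models.\n\nOnly prose survives."
assert _clean_description(_raw_doc) == "List all models.\n\nOnly prose survives."
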

def _clean_schema_descriptions(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """Clean descriptions in schema definitions by removing docstring metadata."""
    if "components" not in openapi_schema or "schemas" not in openapi_schema["components"]:
        return openapi_schema

    schemas = openapi_schema["components"]["schemas"]
    for schema_def in schemas.values():
        if isinstance(schema_def, dict) and "description" in schema_def and isinstance(schema_def["description"], str):
            schema_def["description"] = _clean_description(schema_def["description"])

    return openapi_schema


def _add_extra_body_params_extension(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Add x-llama-stack-extra-body-params extension to requestBody for endpoints with ExtraBodyField parameters.
    """
    if "paths" not in openapi_schema:
        return openapi_schema

    from pydantic import TypeAdapter

    for path, path_item in openapi_schema["paths"].items():
        if not isinstance(path_item, dict):
            continue

        for method in ["get", "post", "put", "delete", "patch", "head", "options"]:
            if method not in path_item:
                continue

            operation = path_item[method]
            if not isinstance(operation, dict):
                continue

            # Check if we have extra body fields for this path/method
            key = (path, method.upper())
            if key not in _extra_body_fields:
                continue

            extra_body_params = _extra_body_fields[key]

            # Ensure requestBody exists
            if "requestBody" not in operation:
                continue

            request_body = operation["requestBody"]
            if not isinstance(request_body, dict):
                continue

            # Get the schema from requestBody
            content = request_body.get("content", {})
            json_content = content.get("application/json", {})
            schema_ref = json_content.get("schema", {})

            # Remove extra body fields from the schema if they exist as properties
            # Handle both $ref schemas and inline schemas
            if isinstance(schema_ref, dict):
                if "$ref" in schema_ref:
                    # Schema is a reference - remove from the referenced schema
                    ref_path = schema_ref["$ref"]
                    if ref_path.startswith("#/components/schemas/"):
                        schema_name = ref_path.split("/")[-1]
                        if "components" in openapi_schema and "schemas" in openapi_schema["components"]:
                            schema_def = openapi_schema["components"]["schemas"].get(schema_name)
                            if isinstance(schema_def, dict) and "properties" in schema_def:
                                for param_name, _, _ in extra_body_params:
                                    if param_name in schema_def["properties"]:
                                        del schema_def["properties"][param_name]
                                    # Also remove from required if present
                                    if "required" in schema_def and param_name in schema_def["required"]:
                                        schema_def["required"].remove(param_name)
                elif "properties" in schema_ref:
                    # Schema is inline - remove directly from it
                    for param_name, _, _ in extra_body_params:
                        if param_name in schema_ref["properties"]:
                            del schema_ref["properties"][param_name]
                        # Also remove from required if present
                        if "required" in schema_ref and param_name in schema_ref["required"]:
                            schema_ref["required"].remove(param_name)

            # Build the extra body params schema
            extra_params_schema = {}
            for param_name, param_type, description in extra_body_params:
                try:
                    # Generate JSON schema for the parameter type
                    adapter = TypeAdapter(param_type)
                    param_schema = adapter.json_schema(ref_template="#/components/schemas/{model}")

                    # Add description if provided
                    if description:
                        param_schema["description"] = description

                    extra_params_schema[param_name] = param_schema
                except Exception:
                    # If we can't generate schema, skip this parameter
                    continue

            if extra_params_schema:
                # Add the extension to requestBody
                if "x-llama-stack-extra-body-params" not in request_body:
                    request_body["x-llama-stack-extra-body-params"] = extra_params_schema

    return openapi_schema

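
# Illustrative sketch (editorial note, not part of the diff): after this pass,
# a requestBody for an endpoint that declares an ExtraBodyField parameter
# carries an extension shaped roughly like this (the field name and schema
# below are hypothetical):
#
#   requestBody:
#     content: {...}
#     x-llama-stack-extra-body-params:
#       guardrails:
#         type: array
#         items: {...}
#         description: Guardrails to apply during generation.
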

def _remove_query_params_from_body_endpoints(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Remove query parameters from POST/PUT/PATCH endpoints that have a request body.
    FastAPI sometimes infers parameters as query params even when they should be in the request body.
    """
    if "paths" not in openapi_schema:
        return openapi_schema

    body_methods = {"post", "put", "patch"}

    for _path, path_item in openapi_schema["paths"].items():
        if not isinstance(path_item, dict):
            continue

        for method in body_methods:
            if method not in path_item:
                continue

            operation = path_item[method]
            if not isinstance(operation, dict):
                continue

            # Check if this operation has a request body
            has_request_body = "requestBody" in operation and operation["requestBody"]

            if has_request_body:
                # Remove all query parameters (parameters with "in": "query")
                if "parameters" in operation:
                    # Filter out query parameters, keep path and header parameters
                    operation["parameters"] = [
                        param
                        for param in operation["parameters"]
                        if isinstance(param, dict) and param.get("in") != "query"
                    ]
                    # Remove the parameters key if it's now empty
                    if not operation["parameters"]:
                        del operation["parameters"]

    return openapi_schema

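
# Illustrative sketch (editorial note, not part of the diff): a POST operation
# with a requestBody keeps only its non-query parameters (the path below is
# hypothetical).
_example_op = {
    "requestBody": {"content": {}},
    "parameters": [
        {"name": "model", "in": "query"},
        {"name": "model_id", "in": "path"},
    ],
}
_example_spec = {"paths": {"/v1/example": {"post": _example_op}}}
_remove_query_params_from_body_endpoints(_example_spec)
assert _example_op["parameters"] == [{"name": "model_id", "in": "path"}]
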

def _remove_request_bodies_from_get_endpoints(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Remove request bodies from GET endpoints and convert their parameters to query parameters.

    GET requests should never have request bodies - all parameters should be query parameters.
    This function removes any requestBody that FastAPI may have incorrectly added to GET endpoints
    and converts any parameters in the requestBody to query parameters.
    """
    if "paths" not in openapi_schema:
        return openapi_schema

    for _path, path_item in openapi_schema["paths"].items():
        if not isinstance(path_item, dict):
            continue

        # Check GET method specifically
        if "get" in path_item:
            operation = path_item["get"]
            if not isinstance(operation, dict):
                continue

            if "requestBody" in operation:
                request_body = operation["requestBody"]
                # Extract parameters from requestBody and convert to query parameters
                if isinstance(request_body, dict) and "content" in request_body:
                    content = request_body.get("content", {})
                    json_content = content.get("application/json", {})
                    schema = json_content.get("schema", {})

                    if "parameters" not in operation:
                        operation["parameters"] = []
                    elif not isinstance(operation["parameters"], list):
                        operation["parameters"] = []

                    # If the schema has properties, convert each to a query parameter
                    if isinstance(schema, dict) and "properties" in schema:
                        for param_name, param_schema in schema["properties"].items():
                            # Check if this parameter is already in the parameters list
                            existing_param = None
                            for existing in operation["parameters"]:
                                if isinstance(existing, dict) and existing.get("name") == param_name:
                                    existing_param = existing
                                    break

                            if not existing_param:
                                # Create a new query parameter from the requestBody property
                                required = param_name in schema.get("required", [])
                                query_param = {
                                    "name": param_name,
                                    "in": "query",
                                    "required": required,
                                    "schema": param_schema,
                                }
                                # Add description if present
                                if "description" in param_schema:
                                    query_param["description"] = param_schema["description"]
                                operation["parameters"].append(query_param)
                    elif isinstance(schema, dict):
                        # Handle direct schema (not a model with properties)
                        # Try to infer parameter name from schema title
                        param_name = schema.get("title", "").lower().replace(" ", "_")
                        if param_name:
                            # Check if this parameter is already in the parameters list
                            existing_param = None
                            for existing in operation["parameters"]:
                                if isinstance(existing, dict) and existing.get("name") == param_name:
                                    existing_param = existing
                                    break

                            if not existing_param:
                                # Create a new query parameter from the requestBody schema
                                query_param = {
                                    "name": param_name,
                                    "in": "query",
                                    "required": False,  # Default to optional for GET requests
                                    "schema": schema,
                                }
                                # Add description if present
                                if "description" in schema:
                                    query_param["description"] = schema["description"]
                                operation["parameters"].append(query_param)

                # Remove request body from GET endpoint
                del operation["requestBody"]

    return openapi_schema

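
# Illustrative sketch (editorial note, not part of the diff): a GET endpoint's
# stray requestBody is dissolved into query parameters (the path below is
# hypothetical).
_example_get_op = {
    "requestBody": {
        "content": {
            "application/json": {
                "schema": {
                    "properties": {"limit": {"type": "integer"}},
                    "required": ["limit"],
                }
            }
        }
    }
}
_example_get_spec = {"paths": {"/v1/things": {"get": _example_get_op}}}
_remove_request_bodies_from_get_endpoints(_example_get_spec)
assert "requestBody" not in _example_get_op
assert _example_get_op["parameters"] == [
    {"name": "limit", "in": "query", "required": True, "schema": {"type": "integer"}}
]
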

def _extract_duplicate_union_types(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Extract duplicate union types to shared schema references.

    Stainless generates type names from union types based on their context, which can cause
    duplicate names when the same union appears in different places. This function extracts
    these duplicate unions to shared schema definitions and replaces inline definitions with
    references to them.

    According to Stainless docs, when duplicate types are detected, they should be extracted
    to the same ref and declared as a model. This ensures Stainless generates consistent
    type names regardless of where the union is referenced.

    Fixes: https://www.stainless.com/docs/reference/diagnostics#Python/DuplicateDeclaration
    """
    if "components" not in openapi_schema or "schemas" not in openapi_schema["components"]:
        return openapi_schema

    schemas = openapi_schema["components"]["schemas"]

    # Extract the Output union type (used in OpenAIResponseObjectWithInput-Output and ListOpenAIResponseInputItem)
    output_union_schema_name = "OpenAIResponseMessageOutputUnion"
    output_union_title = None

    # Get the union type from OpenAIResponseObjectWithInput-Output.input.items.anyOf
    if "OpenAIResponseObjectWithInput-Output" in schemas:
        schema = schemas["OpenAIResponseObjectWithInput-Output"]
        if isinstance(schema, dict) and "properties" in schema:
            input_prop = schema["properties"].get("input")
            if isinstance(input_prop, dict) and "items" in input_prop:
                items = input_prop["items"]
                if isinstance(items, dict) and "anyOf" in items:
                    # Extract the union schema with deep copy
                    output_union_schema = copy.deepcopy(items["anyOf"])
                    output_union_title = items.get("title", "OpenAIResponseMessageOutputUnion")

                    # Collect all refs from the oneOf to detect duplicates
                    refs_in_oneof = set()
                    for item in output_union_schema:
                        if isinstance(item, dict) and "oneOf" in item:
                            oneof = item["oneOf"]
                            if isinstance(oneof, list):
                                for variant in oneof:
                                    if isinstance(variant, dict) and "$ref" in variant:
                                        refs_in_oneof.add(variant["$ref"])
                            item["x-stainless-naming"] = "OpenAIResponseMessageOutputOneOf"

                    # Remove duplicate refs from anyOf that are already in oneOf
                    deduplicated_schema = []
                    for item in output_union_schema:
                        if isinstance(item, dict) and "$ref" in item:
                            if item["$ref"] not in refs_in_oneof:
                                deduplicated_schema.append(item)
                        else:
                            deduplicated_schema.append(item)
                    output_union_schema = deduplicated_schema

                    # Create the shared schema with x-stainless-naming to ensure consistent naming
                    if output_union_schema_name not in schemas:
                        schemas[output_union_schema_name] = {
                            "anyOf": output_union_schema,
                            "title": output_union_title,
                            "x-stainless-naming": output_union_schema_name,
                        }
                    # Replace with reference
                    input_prop["items"] = {"$ref": f"#/components/schemas/{output_union_schema_name}"}

    # Replace the same union in ListOpenAIResponseInputItem.data.items.anyOf
    if "ListOpenAIResponseInputItem" in schemas and output_union_schema_name in schemas:
        schema = schemas["ListOpenAIResponseInputItem"]
        if isinstance(schema, dict) and "properties" in schema:
            data_prop = schema["properties"].get("data")
            if isinstance(data_prop, dict) and "items" in data_prop:
                items = data_prop["items"]
                if isinstance(items, dict) and "anyOf" in items:
                    # Replace with reference
                    data_prop["items"] = {"$ref": f"#/components/schemas/{output_union_schema_name}"}

    # Extract the Input union type (used in _responses_Request.input.anyOf[1].items.anyOf)
    input_union_schema_name = "OpenAIResponseMessageInputUnion"

    if "_responses_Request" in schemas:
        schema = schemas["_responses_Request"]
        if isinstance(schema, dict) and "properties" in schema:
            input_prop = schema["properties"].get("input")
            if isinstance(input_prop, dict) and "anyOf" in input_prop:
                any_of = input_prop["anyOf"]
                if isinstance(any_of, list) and len(any_of) > 1:
                    # Check the second item (index 1) which should be the array type
                    second_item = any_of[1]
                    if isinstance(second_item, dict) and "items" in second_item:
                        items = second_item["items"]
                        if isinstance(items, dict) and "anyOf" in items:
                            # Extract the union schema with deep copy
                            input_union_schema = copy.deepcopy(items["anyOf"])
                            input_union_title = items.get("title", "OpenAIResponseMessageInputUnion")

                            # Collect all refs from the oneOf to detect duplicates
                            refs_in_oneof = set()
                            for item in input_union_schema:
                                if isinstance(item, dict) and "oneOf" in item:
                                    oneof = item["oneOf"]
                                    if isinstance(oneof, list):
                                        for variant in oneof:
                                            if isinstance(variant, dict) and "$ref" in variant:
                                                refs_in_oneof.add(variant["$ref"])
                                    item["x-stainless-naming"] = "OpenAIResponseMessageInputOneOf"

                            # Remove duplicate refs from anyOf that are already in oneOf
                            deduplicated_schema = []
                            for item in input_union_schema:
                                if isinstance(item, dict) and "$ref" in item:
                                    if item["$ref"] not in refs_in_oneof:
                                        deduplicated_schema.append(item)
                                else:
                                    deduplicated_schema.append(item)
                            input_union_schema = deduplicated_schema

                            # Create the shared schema with x-stainless-naming to ensure consistent naming
                            if input_union_schema_name not in schemas:
                                schemas[input_union_schema_name] = {
                                    "anyOf": input_union_schema,
                                    "title": input_union_title,
                                    "x-stainless-naming": input_union_schema_name,
                                }
                            # Replace with reference
                            second_item["items"] = {"$ref": f"#/components/schemas/{input_union_schema_name}"}

    return openapi_schema


def _convert_multiline_strings_to_literal(obj: Any) -> Any:
    """Recursively convert multi-line strings to LiteralScalarString for YAML block scalar formatting."""
    try:
        from ruamel.yaml.scalarstring import LiteralScalarString

        if isinstance(obj, str) and "\n" in obj:
            return LiteralScalarString(obj)
        elif isinstance(obj, dict):
            return {key: _convert_multiline_strings_to_literal(value) for key, value in obj.items()}
        elif isinstance(obj, list):
            return [_convert_multiline_strings_to_literal(item) for item in obj]
        else:
            return obj
    except ImportError:
        return obj


def _write_yaml_file(file_path: Path, schema: dict[str, Any]) -> None:
    """Write schema to YAML file using ruamel.yaml if available, otherwise standard yaml."""
    try:
        from ruamel.yaml import YAML

        yaml_writer = YAML()
        yaml_writer.default_flow_style = False
        yaml_writer.sort_keys = False
        yaml_writer.width = 4096
        yaml_writer.allow_unicode = True
        schema = _convert_multiline_strings_to_literal(schema)
        with open(file_path, "w") as f:
            yaml_writer.dump(schema, f)
    except ImportError:
        with open(file_path, "w") as f:
            yaml.dump(schema, f, default_flow_style=False, sort_keys=False)

    # Post-process to remove trailing whitespace from all lines
    with open(file_path) as f:
        lines = f.readlines()

    # Strip trailing whitespace from each line, preserving newlines
    cleaned_lines = [line.rstrip() + "\n" if line.endswith("\n") else line.rstrip() for line in lines]

    with open(file_path, "w") as f:
        f.writelines(cleaned_lines)


def _apply_legacy_sorting(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """
    Temporarily match the legacy ordering from origin/main so diffs are easier to read.
    Remove this once the generator output stabilizes and we no longer need legacy diffs.
    """

    def order_mapping(data: dict[str, Any], priority: list[str]) -> OrderedDict[str, Any]:
        ordered: OrderedDict[str, Any] = OrderedDict()
        for key in priority:
            if key in data:
                ordered[key] = data[key]
        for key, value in data.items():
            if key not in ordered:
                ordered[key] = value
        return ordered

    paths = openapi_schema.get("paths")
    if isinstance(paths, dict):
        openapi_schema["paths"] = order_mapping(paths, LEGACY_PATH_ORDER)
        for path, path_item in openapi_schema["paths"].items():
            if not isinstance(path_item, dict):
                continue
            ordered_path_item = OrderedDict()
            for method in ["get", "post", "put", "delete", "patch", "head", "options"]:
                if method in path_item:
                    ordered_path_item[method] = order_mapping(path_item[method], LEGACY_OPERATION_KEYS)
            for key, value in path_item.items():
                if key not in ordered_path_item:
                    if isinstance(value, dict) and key.lower() in {
                        "get",
                        "post",
                        "put",
                        "delete",
                        "patch",
                        "head",
                        "options",
                    }:
                        ordered_path_item[key] = order_mapping(value, LEGACY_OPERATION_KEYS)
                    else:
                        ordered_path_item[key] = value
            openapi_schema["paths"][path] = ordered_path_item

    components = openapi_schema.setdefault("components", {})
    schemas = components.get("schemas")
    if isinstance(schemas, dict):
        components["schemas"] = order_mapping(schemas, LEGACY_SCHEMA_ORDER)
    responses = components.get("responses")
    if isinstance(responses, dict):
        components["responses"] = order_mapping(responses, LEGACY_RESPONSE_ORDER)

    if LEGACY_TAGS:
        openapi_schema["tags"] = LEGACY_TAGS

    if LEGACY_TAG_GROUPS:
        openapi_schema["x-tagGroups"] = LEGACY_TAG_GROUPS

    if LEGACY_SECURITY:
        openapi_schema["security"] = LEGACY_SECURITY

    return openapi_schema


def _fix_schema_issues(openapi_schema: dict[str, Any]) -> dict[str, Any]:
    """Fix common schema issues: exclusiveMinimum, null defaults, and add titles to unions."""
    # Convert anyOf with const values to enums across the entire schema
    _convert_anyof_const_to_enum(openapi_schema)

    # Fix other schema issues and add titles to unions
    if "components" in openapi_schema and "schemas" in openapi_schema["components"]:
        for schema_name, schema_def in openapi_schema["components"]["schemas"].items():
            _fix_schema_recursive(schema_def)
            _add_titles_to_unions(schema_def, schema_name)
    return openapi_schema

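
# Illustrative sketch (editorial note, not part of the diff): the fix-up passes
# compose over a spec produced by FastAPI's get_openapi(); the order shown here
# is illustrative, not necessarily the generator's exact sequence.
def _postprocess_sketch(spec: dict[str, Any]) -> dict[str, Any]:
    spec = _fix_schema_issues(spec)
    spec = _clean_schema_descriptions(spec)
    spec = _remove_query_params_from_body_endpoints(spec)
    spec = _remove_request_bodies_from_get_endpoints(spec)
    return spec
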

def validate_openapi_schema(schema: dict[str, Any], schema_name: str = "OpenAPI schema") -> bool:
    """
    Validate an OpenAPI schema using openapi-spec-validator.

    Args:
        schema: The OpenAPI schema dictionary to validate
        schema_name: Name of the schema for error reporting

    Returns:
        True if valid, False otherwise. Validation errors are caught and
        reported rather than raised.
    """
    try:
        validate_spec(schema)
        print(f"{schema_name} is valid")
        return True
    except OpenAPISpecValidatorError as e:
        print(f"{schema_name} validation failed: {e}")
        return False
    except Exception as e:
        print(f"{schema_name} validation error: {e}")
        return False
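
# Illustrative usage (editorial note, not part of the diff): fail a generation
# run when the produced spec does not validate (`spec` is assumed to be the
# generated dict).
#
#   if not validate_openapi_schema(spec, schema_name="llama-stack-spec"):
#       raise SystemExit(1)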
scripts/openapi_generator/state.py (new file, 41 lines)
@@ -0,0 +1,41 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Shared state for the OpenAPI generator module.
"""

from typing import Any

from llama_stack_api import Api
from llama_stack_api.schema_utils import clear_dynamic_schema_types, register_dynamic_schema_type

_dynamic_model_registry: dict[str, type] = {}

# Cache for protocol methods to avoid repeated lookups
_protocol_methods_cache: dict[Api, dict[str, Any]] | None = None

# Global dict to store extra body field information by endpoint
# Key: (path, method) tuple, Value: list of (param_name, param_type, description) tuples
_extra_body_fields: dict[tuple[str, str], list[tuple[str, type, str | None]]] = {}


def register_dynamic_model(name: str, model: type) -> type:
    """Register and deduplicate dynamically generated request models."""
    existing = _dynamic_model_registry.get(name)
    if existing is not None:
        register_dynamic_schema_type(existing)
        return existing
    _dynamic_model_registry[name] = model
    register_dynamic_schema_type(model)
    return model


def reset_generator_state() -> None:
    """Clear per-run caches so repeated generations stay deterministic."""
    _dynamic_model_registry.clear()
    _extra_body_fields.clear()
    clear_dynamic_schema_types()
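
# Illustrative usage (editorial note, not part of the diff): repeated runs
# reuse the first dynamically built model, and reset_generator_state() wipes
# the caches between generations. The model name below is hypothetical.
#
#   from pydantic import create_model
#
#   first = register_dynamic_model("FooRequest", create_model("FooRequest", x=(int, ...)))
#   second = register_dynamic_model("FooRequest", create_model("FooRequest", x=(int, ...)))
#   assert first is second
#   reset_generator_state()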

scripts/run_openapi_generator.sh (new executable file, 19 lines)
@@ -0,0 +1,19 @@
#!/usr/bin/env bash

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

PYTHONPATH=${PYTHONPATH:-}
THIS_DIR="$(cd "$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")" && pwd)"

set -euo pipefail


stack_dir=$(dirname "$THIS_DIR")
PYTHONPATH=$PYTHONPATH:$stack_dir \
    python3 -m scripts.openapi_generator "$stack_dir"/docs/static

cp "$stack_dir"/docs/static/stainless-llama-stack-spec.yaml "$stack_dir"/client-sdks/stainless/openapi.yml

@@ -19,7 +19,7 @@ import httpx
 import yaml
 from fastapi import Response as FastAPIResponse

-from llama_stack_api import is_unwrapped_body_param
+from llama_stack.core.utils.type_inspection import is_unwrapped_body_param

 try:
     from llama_stack_client import (

@@ -42,17 +42,10 @@ from termcolor import cprint
 from llama_stack.core.build import print_pip_install_help
 from llama_stack.core.configure import parse_and_maybe_upgrade_config
 from llama_stack.core.datatypes import BuildConfig, BuildProvider, DistributionSpec
-from llama_stack.core.request_headers import (
-    PROVIDER_DATA_VAR,
-    request_provider_data_context,
-)
+from llama_stack.core.request_headers import PROVIDER_DATA_VAR, request_provider_data_context
 from llama_stack.core.resolver import ProviderRegistry
 from llama_stack.core.server.routes import RouteImpls, find_matching_route, initialize_route_impls
-from llama_stack.core.stack import (
-    Stack,
-    get_stack_run_config_from_distro,
-    replace_env_vars,
-)
+from llama_stack.core.stack import Stack, get_stack_run_config_from_distro, replace_env_vars
 from llama_stack.core.telemetry import Telemetry
 from llama_stack.core.telemetry.tracing import CURRENT_TRACE_CONTEXT, end_trace, setup_logger, start_trace
 from llama_stack.core.utils.config import redact_sensitive_fields

src/llama_stack/core/utils/type_inspection.py (new file, 45 lines)
@@ -0,0 +1,45 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Utility functions for type inspection and parameter handling.
"""

import inspect
import typing
from typing import Any, get_args, get_origin

from pydantic import BaseModel
from pydantic.fields import FieldInfo


def is_unwrapped_body_param(param_type: Any) -> bool:
    """
    Check if a parameter type represents an unwrapped body parameter.

    An unwrapped body parameter is an Annotated type with Body(embed=False).

    This is used to determine whether request parameters should be flattened
    in OpenAPI specs and client libraries (matching FastAPI's embed=False behavior).

    Args:
        param_type: The parameter type annotation to check

    Returns:
        True if the parameter should be treated as an unwrapped body parameter
    """
    # Check if it's Annotated with Body(embed=False)
    if get_origin(param_type) is typing.Annotated:
        args = get_args(param_type)
        base_type = args[0]
        metadata = args[1:]

        # Look for Body annotation with embed=False
        # Body() returns a FieldInfo object, so we check for that type and the embed attribute
        for item in metadata:
            if isinstance(item, FieldInfo) and hasattr(item, "embed") and not item.embed:
                return inspect.isclass(base_type) and issubclass(base_type, BaseModel)

    return False
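
# Illustrative usage (editorial note, not part of the diff): a Pydantic model
# annotated with Body(embed=False) is treated as an unwrapped body parameter
# (`CreateFooRequest` below is a hypothetical model).
#
#   from typing import Annotated
#   from fastapi import Body
#   from pydantic import BaseModel
#
#   class CreateFooRequest(BaseModel):
#       name: str
#
#   assert is_unwrapped_body_param(Annotated[CreateFooRequest, Body(embed=False)])
#   assert not is_unwrapped_body_param(CreateFooRequest)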

@@ -353,8 +353,15 @@ from .safety import (
 from .schema_utils import (
     CallableT,
     ExtraBodyField,
+    SchemaInfo,
     WebMethod,
+    clear_dynamic_schema_types,
+    get_registered_schema_info,
+    iter_dynamic_schema_types,
+    iter_json_schema_types,
+    iter_registered_schema_types,
     json_schema_type,
+    register_dynamic_schema_type,
     register_schema,
     webmethod,
 )

@@ -388,27 +395,6 @@ from .shields import (
 )

 # Import from strong_typing
-from .strong_typing.core import JsonType
-from .strong_typing.docstring import Docstring, parse_type
-from .strong_typing.inspection import (
-    get_signature,
-    is_generic_list,
-    is_type_optional,
-    is_type_union,
-    is_unwrapped_body_param,
-    unwrap_generic_list,
-    unwrap_optional_type,
-    unwrap_union_types,
-)
-from .strong_typing.name import python_type_to_name
-from .strong_typing.schema import (
-    JsonSchemaGenerator,
-    Schema,
-    SchemaOptions,
-    StrictJsonType,
-    get_schema_identifier,
-)
-from .strong_typing.serialization import json_dump_string, object_to_json
 from .tools import (
     ListToolDefsResponse,
     ListToolGroupsResponse,

@@ -537,6 +523,7 @@ __all__ = [
     "ExtraBodyField",
     "Files",
     "Fp8QuantizationConfig",
+    "clear_dynamic_schema_types",
     "get_schema_identifier",
     "get_signature",
     "GrammarResponseFormat",

@@ -557,6 +544,10 @@ __all__ = [
     "is_type_optional",
     "is_type_union",
     "is_unwrapped_body_param",
+    "iter_dynamic_schema_types",
+    "iter_json_schema_types",
+    "iter_registered_schema_types",
+    "get_registered_schema_info",
     "Job",
     "JobStatus",
     "json_dump_string",

@@ -759,6 +750,7 @@ __all__ = [
     "RAGQueryGeneratorConfig",
     "RAGQueryResult",
     "RAGSearchMode",
+    "register_dynamic_schema_type",
     "register_schema",
     "RLHFAlgorithm",
     "RRFRanker",

@@ -796,6 +788,7 @@ __all__ = [
     "ScoringResult",
     "ScoringResultRow",
     "Schema",
+    "SchemaInfo",
     "SchemaOptions",
     "SearchRankingOptions",
     "Shield",

@@ -48,6 +48,7 @@ class BenchmarkInput(CommonBenchmarkFields, BaseModel):
     provider_benchmark_id: str | None = None


+@json_schema_type
 class ListBenchmarksResponse(BaseModel):
     data: list[Benchmark]

@@ -136,6 +136,7 @@ class DatasetInput(CommonDatasetFields, BaseModel):
     dataset_id: str


+@json_schema_type
 class ListDatasetsResponse(BaseModel):
     """Response from listing datasets.

@@ -54,6 +54,7 @@ class VersionInfo(BaseModel):
     version: str


+@json_schema_type
 class ListRoutesResponse(BaseModel):
     """Response containing a list of all available API routes.

@@ -100,6 +100,7 @@ class OpenAIModel(BaseModel):
     custom_metadata: dict[str, Any] | None = None


+@json_schema_type
 class OpenAIListModelsResponse(BaseModel):
     data: list[OpenAIModel]

@@ -1316,6 +1316,7 @@ OpenAIResponseInput = Annotated[
 register_schema(OpenAIResponseInput, name="OpenAIResponseInput")


+@json_schema_type
 class ListOpenAIResponseInputItem(BaseModel):
     """List container for OpenAI response input items.

@@ -236,6 +236,7 @@ class PostTrainingRLHFRequest(BaseModel):
     logger_config: dict[str, Any]


+@json_schema_type
 class PostTrainingJob(BaseModel):
     job_uuid: str

@@ -265,6 +266,7 @@ class PostTrainingJobStatusResponse(BaseModel):
     checkpoints: list[Checkpoint] = Field(default_factory=list)


+@json_schema_type
 class ListPostTrainingJobsResponse(BaseModel):
     data: list[PostTrainingJob]

@@ -85,6 +85,7 @@ class Prompt(BaseModel):
         return f"pmpt_{hex_string}"


+@json_schema_type
 class ListPromptsResponse(BaseModel):
     """Response model to list prompts."""

@@ -31,6 +31,7 @@ class ProviderInfo(BaseModel):
     health: HealthResponse


+@json_schema_type
 class ListProvidersResponse(BaseModel):
     """Response containing a list of all available providers.

@@ -4,11 +4,9 @@
 # This source code is licensed under the terms described in the LICENSE file in
 # the root directory of this source tree.

-from collections.abc import Callable
+from collections.abc import Callable, Iterable
 from dataclasses import dataclass
-from typing import Any, TypeVar
+from typing import Any, Literal, TypeVar

-from .strong_typing.schema import json_schema_type, register_schema  # noqa: F401
-

 class ExtraBodyField[T]:

@@ -48,6 +46,98 @@ class ExtraBodyField[T]:
         self.description = description


+SchemaSource = Literal["json_schema_type", "registered_schema", "dynamic_schema"]
+
+
+@dataclass(frozen=True)
+class SchemaInfo:
+    """Metadata describing a schema entry exposed to OpenAPI generation."""
+
+    name: str
+    type: Any
+    source: SchemaSource
+
+
+_json_schema_types: dict[type, SchemaInfo] = {}
+
+
+def json_schema_type(cls):
+    """
+    Decorator to mark a Pydantic model for top-level component registration.
+
+    Models marked with this decorator will be registered as top-level components
+    in the OpenAPI schema, while unmarked models will be inlined.
+
+    This provides control over schema registration to avoid unnecessary indirection
+    for simple one-off types while keeping complex reusable types as components.
+    """
+    cls._llama_stack_schema_type = True
+    schema_name = getattr(cls, "__name__", f"Anonymous_{id(cls)}")
+    cls._llama_stack_schema_name = schema_name
+    _json_schema_types.setdefault(cls, SchemaInfo(name=schema_name, type=cls, source="json_schema_type"))
+    return cls
+
+
+# Global registries for schemas discoverable by the generator
+_registered_schemas: dict[Any, SchemaInfo] = {}
+_dynamic_schema_types: dict[type, SchemaInfo] = {}
+
+
+def register_schema(schema_type, name: str | None = None):
+    """
+    Register a schema type for top-level component registration.
+
+    This replicates the behavior of strong_typing's register_schema function.
+    It's used for union types and other complex types that should appear as
+    top-level components in the OpenAPI schema.
+
+    Args:
+        schema_type: The type to register (e.g., union types, Annotated types)
+        name: Optional name for the schema in the OpenAPI spec. If not provided,
+            uses the type's __name__ or a generated name.
+    """
+    if name is None:
+        name = getattr(schema_type, "__name__", f"Anonymous_{id(schema_type)}")
+
+    # Store the registration information in a global registry
+    # since union types don't allow setting attributes
+    _registered_schemas[schema_type] = SchemaInfo(name=name, type=schema_type, source="registered_schema")
+
+    return schema_type
+
+
+def get_registered_schema_info(schema_type: Any) -> SchemaInfo | None:
+    """Return the registration metadata for a schema type if present."""
+    return _registered_schemas.get(schema_type)
+
+
+def iter_registered_schema_types() -> Iterable[SchemaInfo]:
+    """Iterate over all explicitly registered schema entries."""
+    return tuple(_registered_schemas.values())
+
+
+def iter_json_schema_types() -> Iterable[type]:
+    """Iterate over all Pydantic models decorated with @json_schema_type."""
+    return tuple(info.type for info in _json_schema_types.values())
+
+
+def iter_dynamic_schema_types() -> Iterable[type]:
+    """Iterate over dynamic models registered at generation time."""
+    return tuple(info.type for info in _dynamic_schema_types.values())
+
+
+def register_dynamic_schema_type(schema_type: type, name: str | None = None) -> type:
+    """Register a dynamic model generated at runtime for schema inclusion."""
+    schema_name = name if name is not None else getattr(schema_type, "__name__", f"Anonymous_{id(schema_type)}")
+    _dynamic_schema_types[schema_type] = SchemaInfo(name=schema_name, type=schema_type, source="dynamic_schema")
+    return schema_type
+
+
+def clear_dynamic_schema_types() -> None:
+    """Clear dynamic schema registrations."""
+    _dynamic_schema_types.clear()
+
+
 @dataclass
 class WebMethod:
     level: str | None = None
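
# Illustrative usage (editorial note, not part of the diff): how the
# registries added above are meant to be consumed by the generator
# (`MyUnion` and `MyModel` below are hypothetical types).
#
#   MyUnion = str | int
#   register_schema(MyUnion, name="MyUnion")
#   info = get_registered_schema_info(MyUnion)
#   assert info is not None and info.source == "registered_schema"
#
#   @json_schema_type
#   class MyModel(BaseModel):
#       x: int
#
#   assert MyModel in iter_json_schema_types()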

@@ -155,6 +155,7 @@ class ScoringFnInput(CommonScoringFnFields, BaseModel):
     provider_scoring_fn_id: str | None = None


+@json_schema_type
 class ListScoringFunctionsResponse(BaseModel):
     data: list[ScoringFn]

@@ -43,6 +43,7 @@ class ShieldInput(CommonShieldFields):
     provider_shield_id: str | None = None


+@json_schema_type
 class ListShieldsResponse(BaseModel):
     data: list[Shield]

@@ -1,19 +0,0 @@ (file removed)
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

Provides auxiliary services for working with Python type annotations, converting typed data to and from JSON,
and generating a JSON schema for a complex type.
"""

__version__ = "0.3.4"
__author__ = "Levente Hunyadi"
__copyright__ = "Copyright 2021-2024, Levente Hunyadi"
__license__ = "MIT"
__maintainer__ = "Levente Hunyadi"
__status__ = "Production"

@@ -1,229 +0,0 @@ (file removed)
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

:see: https://github.com/hunyadi/strong_typing
"""

import dataclasses
import sys
from collections.abc import Callable
from dataclasses import is_dataclass
from typing import TypeVar, overload

if sys.version_info >= (3, 9):
    from typing import Annotated as Annotated
else:
    from typing import Annotated as Annotated

if sys.version_info >= (3, 10):
    from typing import TypeAlias as TypeAlias
else:
    from typing import TypeAlias as TypeAlias

if sys.version_info >= (3, 11):
    from typing import dataclass_transform as dataclass_transform
else:
    from typing import dataclass_transform as dataclass_transform

T = TypeVar("T")


def _compact_dataclass_repr(obj: object) -> str:
    """
    Compact data-class representation where positional arguments are used instead of keyword arguments.

    :param obj: A data-class object.
    :returns: A string that matches the pattern `Class(arg1, arg2, ...)`.
    """

    if is_dataclass(obj):
        arglist = ", ".join(repr(getattr(obj, field.name)) for field in dataclasses.fields(obj))
        return f"{obj.__class__.__name__}({arglist})"
    else:
        return obj.__class__.__name__


class CompactDataClass:
    "A data class whose repr() uses positional rather than keyword arguments."

    def __repr__(self) -> str:
        return _compact_dataclass_repr(self)


@overload
def typeannotation(cls: type[T], /) -> type[T]: ...


@overload
def typeannotation(cls: None, *, eq: bool = True, order: bool = False) -> Callable[[type[T]], type[T]]: ...


@dataclass_transform(eq_default=True, order_default=False)
def typeannotation(
    cls: type[T] | None = None, *, eq: bool = True, order: bool = False
) -> type[T] | Callable[[type[T]], type[T]]:
    """
    Returns the same class as was passed in, with dunder methods added based on the fields defined in the class.

    :param cls: The data-class type to transform into a type annotation.
    :param eq: Whether to generate functions to support equality comparison.
    :param order: Whether to generate functions to support ordering.
    :returns: A data-class type, or a wrapper for data-class types.
    """

    def wrap(cls: type[T]) -> type[T]:
        # mypy fails to equate bound-y functions (first argument interpreted as
        # the bound object) with class methods, hence the `ignore` directive.
        cls.__repr__ = _compact_dataclass_repr  # type: ignore[method-assign]
        if not dataclasses.is_dataclass(cls):
            cls = dataclasses.dataclass(  # type: ignore[call-overload]
                cls,
                init=True,
                repr=False,
                eq=eq,
                order=order,
                unsafe_hash=False,
                frozen=True,
            )
        return cls

    # see if decorator is used as @typeannotation or @typeannotation()
    if cls is None:
        # called with parentheses
        return wrap
    else:
        # called without parentheses
        return wrap(cls)


@typeannotation
class Alias:
    "Alternative name of a property, typically used in JSON serialization."

    name: str


@typeannotation
class Signed:
    "Signedness of an integer type."

    is_signed: bool


@typeannotation
class Storage:
    "Number of bytes the binary representation of an integer type takes, e.g. 4 bytes for an int32."

    bytes: int


@typeannotation
class IntegerRange:
    "Minimum and maximum value of an integer. The range is inclusive."

    minimum: int
    maximum: int


@typeannotation
class Precision:
    "Precision of a floating-point value."

    significant_digits: int
    decimal_digits: int = 0

    @property
    def integer_digits(self) -> int:
        return self.significant_digits - self.decimal_digits


@typeannotation
class TimePrecision:
    """
    Precision of a timestamp or time interval.

    :param decimal_digits: Number of fractional digits retained in the sub-seconds field for a timestamp.
    """

    decimal_digits: int = 0


@typeannotation
class Length:
    "Exact length of a string."

    value: int


@typeannotation
class MinLength:
    "Minimum length of a string."

    value: int


@typeannotation
class MaxLength:
    "Maximum length of a string."

    value: int


@typeannotation
class SpecialConversion:
    "Indicates that the annotated type is subject to custom conversion rules."


int8: TypeAlias = Annotated[int, Signed(True), Storage(1), IntegerRange(-128, 127)]
int16: TypeAlias = Annotated[int, Signed(True), Storage(2), IntegerRange(-32768, 32767)]
int32: TypeAlias = Annotated[
    int,
    Signed(True),
    Storage(4),
    IntegerRange(-2147483648, 2147483647),
]
int64: TypeAlias = Annotated[
    int,
    Signed(True),
    Storage(8),
    IntegerRange(-9223372036854775808, 9223372036854775807),
]

uint8: TypeAlias = Annotated[int, Signed(False), Storage(1), IntegerRange(0, 255)]
uint16: TypeAlias = Annotated[int, Signed(False), Storage(2), IntegerRange(0, 65535)]
uint32: TypeAlias = Annotated[
    int,
    Signed(False),
    Storage(4),
    IntegerRange(0, 4294967295),
]
uint64: TypeAlias = Annotated[
    int,
    Signed(False),
    Storage(8),
    IntegerRange(0, 18446744073709551615),
]

float32: TypeAlias = Annotated[float, Storage(4)]
float64: TypeAlias = Annotated[float, Storage(8)]

# maps globals of type Annotated[T, ...] defined in this module to their string names
_auxiliary_types: dict[object, str] = {}
module = sys.modules[__name__]
for var in dir(module):
    typ = getattr(module, var)
    if getattr(typ, "__metadata__", None) is not None:
        # type is Annotated[T, ...]
        _auxiliary_types[typ] = var


def get_auxiliary_format(data_type: object) -> str | None:
    "Returns the JSON format string corresponding to an auxiliary type."

    return _auxiliary_types.get(data_type)
@ -1,440 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import copy
import dataclasses
import datetime
import decimal
import enum
import ipaddress
import math
import re
import sys
import types
import typing
import uuid
from dataclasses import dataclass
from typing import Any, Literal, TypeVar, Union

from .auxiliary import (
    Alias,
    Annotated,
    MaxLength,
    Precision,
    float32,
    float64,
    int16,
    int32,
    int64,
)
from .core import JsonType, Schema
from .docstring import Docstring, DocstringParam
from .inspection import TypeLike
from .serialization import json_to_object, object_to_json

T = TypeVar("T")


@dataclass
class JsonSchemaNode:
    title: str | None
    description: str | None


@dataclass
class JsonSchemaType(JsonSchemaNode):
    type: str
    format: str | None


@dataclass
class JsonSchemaBoolean(JsonSchemaType):
    type: Literal["boolean"]
    const: bool | None
    default: bool | None
    examples: list[bool] | None


@dataclass
class JsonSchemaInteger(JsonSchemaType):
    type: Literal["integer"]
    const: int | None
    default: int | None
    examples: list[int] | None
    enum: list[int] | None
    minimum: int | None
    maximum: int | None


@dataclass
class JsonSchemaNumber(JsonSchemaType):
    type: Literal["number"]
    const: float | None
    default: float | None
    examples: list[float] | None
    minimum: float | None
    maximum: float | None
    exclusiveMinimum: float | None
    exclusiveMaximum: float | None
    multipleOf: float | None


@dataclass
class JsonSchemaString(JsonSchemaType):
    type: Literal["string"]
    const: str | None
    default: str | None
    examples: list[str] | None
    enum: list[str] | None
    minLength: int | None
    maxLength: int | None


@dataclass
class JsonSchemaArray(JsonSchemaType):
    type: Literal["array"]
    items: "JsonSchemaAny"


@dataclass
class JsonSchemaObject(JsonSchemaType):
    type: Literal["object"]
    properties: dict[str, "JsonSchemaAny"] | None
    additionalProperties: bool | None
    required: list[str] | None


@dataclass
class JsonSchemaRef(JsonSchemaNode):
    ref: Annotated[str, Alias("$ref")]


@dataclass
class JsonSchemaAllOf(JsonSchemaNode):
    allOf: list["JsonSchemaAny"]


@dataclass
class JsonSchemaAnyOf(JsonSchemaNode):
    anyOf: list["JsonSchemaAny"]


@dataclass
class Discriminator:
    propertyName: str
    mapping: dict[str, str]


@dataclass
class JsonSchemaOneOf(JsonSchemaNode):
    oneOf: list["JsonSchemaAny"]
    discriminator: Discriminator | None


JsonSchemaAny = Union[
    JsonSchemaRef,
    JsonSchemaBoolean,
    JsonSchemaInteger,
    JsonSchemaNumber,
    JsonSchemaString,
    JsonSchemaArray,
    JsonSchemaObject,
    JsonSchemaOneOf,
]


@dataclass
class JsonSchemaTopLevelObject(JsonSchemaObject):
    schema: Annotated[str, Alias("$schema")]
    definitions: dict[str, JsonSchemaAny] | None


def integer_range_to_type(min_value: float, max_value: float) -> type:
    if min_value >= -(2**15) and max_value < 2**15:
        return int16
    elif min_value >= -(2**31) and max_value < 2**31:
        return int32
    else:
        return int64
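
# A minimal usage sketch (hypothetical helper, added for illustration): the function
# above selects the narrowest annotated integer type whose signed range covers both bounds.
def _example_integer_width() -> None:
    assert integer_range_to_type(0, 100) is int16  # fits within [-2**15, 2**15)
    assert integer_range_to_type(0, 100_000) is int32  # exceeds 2**15 - 1
    assert integer_range_to_type(0, 2**40) is int64  # exceeds 2**31 - 1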

def enum_safe_name(name: str) -> str:
    name = re.sub(r"\W", "_", name)
    is_dunder = name.startswith("__")
    is_sunder = name.startswith("_") and name.endswith("_")
    if is_dunder or is_sunder:  # provide an alternative for dunder and sunder names
        name = f"v{name}"
    return name


def enum_values_to_type(
    module: types.ModuleType,
    name: str,
    values: dict[str, Any],
    title: str | None = None,
    description: str | None = None,
) -> type[enum.Enum]:
    enum_class: type[enum.Enum] = enum.Enum(name, values)  # type: ignore

    # assign the newly created type to the same module where the defining class is
    enum_class.__module__ = module.__name__
    enum_class.__doc__ = str(Docstring(short_description=title, long_description=description))
    setattr(module, name, enum_class)

    return enum.unique(enum_class)


def schema_to_type(schema: Schema, *, module: types.ModuleType, class_name: str) -> TypeLike:
    """
    Creates a Python type from a JSON schema.

    :param schema: The JSON schema that the types would correspond to.
    :param module: The module in which to create the new types.
    :param class_name: The name assigned to the top-level class.
    """

    top_node = typing.cast(JsonSchemaTopLevelObject, json_to_object(JsonSchemaTopLevelObject, schema))
    if top_node.definitions is not None:
        for type_name, type_node in top_node.definitions.items():
            type_def = node_to_typedef(module, type_name, type_node)
            if type_def.default is not dataclasses.MISSING:
                raise TypeError("disallowed: `default` for top-level type definitions")

            type_def.type.__module__ = module.__name__
            setattr(module, type_name, type_def.type)

    return node_to_typedef(module, class_name, top_node).type
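
# A minimal usage sketch (hypothetical schema and helper, added for illustration):
# `schema_to_type` converts an object schema into a dynamically generated dataclass
# that is registered in the target module.
def _example_schema_to_type() -> None:
    schema: Schema = {
        "$schema": "http://json-schema.org/draft-07/schema#",
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"],
        "additionalProperties": False,
    }
    person_type = schema_to_type(schema, module=sys.modules[__name__], class_name="Person")
    person = person_type(name="Ada")  # type: ignore[operator]  # dataclass with a required `name: str` field
    assert person.name == "Ada"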

@dataclass
class TypeDef:
    type: TypeLike
    default: Any = dataclasses.MISSING


def json_to_value(target_type: TypeLike, data: JsonType) -> Any:
    if data is not None:
        return json_to_object(target_type, data)
    else:
        return dataclasses.MISSING


def node_to_typedef(module: types.ModuleType, context: str, node: JsonSchemaNode) -> TypeDef:
    if isinstance(node, JsonSchemaRef):
        match_obj = re.match(r"^#/definitions/(\w+)$", node.ref)
        if not match_obj:
            raise ValueError(f"invalid reference: {node.ref}")

        type_name = match_obj.group(1)
        return TypeDef(getattr(module, type_name), dataclasses.MISSING)

    elif isinstance(node, JsonSchemaBoolean):
        if node.const is not None:
            return TypeDef(Literal[node.const], dataclasses.MISSING)

        default = json_to_value(bool, node.default)
        return TypeDef(bool, default)

    elif isinstance(node, JsonSchemaInteger):
        if node.const is not None:
            return TypeDef(Literal[node.const], dataclasses.MISSING)

        integer_type: TypeLike
        if node.format == "int16":
            integer_type = int16
        elif node.format == "int32":
            integer_type = int32
        elif node.format == "int64":
            integer_type = int64
        else:
            if node.enum is not None:
                integer_type = integer_range_to_type(min(node.enum), max(node.enum))
            elif node.minimum is not None and node.maximum is not None:
                integer_type = integer_range_to_type(node.minimum, node.maximum)
            else:
                integer_type = int

        default = json_to_value(integer_type, node.default)
        return TypeDef(integer_type, default)

    elif isinstance(node, JsonSchemaNumber):
        if node.const is not None:
            return TypeDef(Literal[node.const], dataclasses.MISSING)

        number_type: TypeLike
        if node.format == "float32":
            number_type = float32
        elif node.format == "float64":
            number_type = float64
        else:
            if (
                node.exclusiveMinimum is not None
                and node.exclusiveMaximum is not None
                and node.exclusiveMinimum == -node.exclusiveMaximum
            ):
                integer_digits = round(math.log10(node.exclusiveMaximum))
            else:
                integer_digits = None

            if node.multipleOf is not None:
                decimal_digits = -round(math.log10(node.multipleOf))
            else:
                decimal_digits = None

            if integer_digits is not None and decimal_digits is not None:
                number_type = Annotated[
                    decimal.Decimal,
                    Precision(integer_digits + decimal_digits, decimal_digits),
                ]
            else:
                number_type = float

        default = json_to_value(number_type, node.default)
        return TypeDef(number_type, default)

    elif isinstance(node, JsonSchemaString):
        if node.const is not None:
            return TypeDef(Literal[node.const], dataclasses.MISSING)

        string_type: TypeLike
        if node.format == "date-time":
            string_type = datetime.datetime
        elif node.format == "uuid":
            string_type = uuid.UUID
        elif node.format == "ipv4":
            string_type = ipaddress.IPv4Address
        elif node.format == "ipv6":
            string_type = ipaddress.IPv6Address

        elif node.enum is not None:
            string_type = enum_values_to_type(
                module,
                context,
                {enum_safe_name(e): e for e in node.enum},
                title=node.title,
                description=node.description,
            )

        elif node.maxLength is not None:
            string_type = Annotated[str, MaxLength(node.maxLength)]
        else:
            string_type = str

        default = json_to_value(string_type, node.default)
        return TypeDef(string_type, default)

    elif isinstance(node, JsonSchemaArray):
        type_def = node_to_typedef(module, context, node.items)
        if type_def.default is not dataclasses.MISSING:
            raise TypeError("disallowed: `default` for array element type")
        list_type = list[(type_def.type,)]  # type: ignore
        return TypeDef(list_type, dataclasses.MISSING)

    elif isinstance(node, JsonSchemaObject):
        if node.properties is None:
            return TypeDef(JsonType, dataclasses.MISSING)

        if node.additionalProperties is None or node.additionalProperties is not False:
            raise TypeError("expected: `additionalProperties` equals `false`")

        required = node.required if node.required is not None else []

        class_name = context

        fields: list[tuple[str, Any, dataclasses.Field]] = []
        params: dict[str, DocstringParam] = {}
        for prop_name, prop_node in node.properties.items():
            type_def = node_to_typedef(module, f"{class_name}__{prop_name}", prop_node)
            if prop_name in required:
                prop_type = type_def.type
            else:
                prop_type = Union[(None, type_def.type)]
            fields.append((prop_name, prop_type, dataclasses.field(default=type_def.default)))
            prop_desc = prop_node.title or prop_node.description
            if prop_desc is not None:
                params[prop_name] = DocstringParam(prop_name, prop_desc)

        fields.sort(key=lambda t: t[2].default is not dataclasses.MISSING)
        if sys.version_info >= (3, 12):
            class_type = dataclasses.make_dataclass(class_name, fields, module=module.__name__)
        else:
            class_type = dataclasses.make_dataclass(class_name, fields, namespace={"__module__": module.__name__})
        class_type.__doc__ = str(
            Docstring(
                short_description=node.title,
                long_description=node.description,
                params=params,
            )
        )
        setattr(module, class_name, class_type)
        return TypeDef(class_type, dataclasses.MISSING)

    elif isinstance(node, JsonSchemaOneOf):
        union_defs = tuple(node_to_typedef(module, context, n) for n in node.oneOf)
        if any(d.default is not dataclasses.MISSING for d in union_defs):
            raise TypeError("disallowed: `default` for union member type")
        union_types = tuple(d.type for d in union_defs)
        return TypeDef(Union[union_types], dataclasses.MISSING)

    raise NotImplementedError()
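
# A worked sketch of the `JsonSchemaNumber` branch above (hypothetical helper, added
# for illustration): symmetric exclusive bounds of +/-1000 give
# integer_digits = round(log10(1000)) = 3, and multipleOf = 0.01 gives
# decimal_digits = -round(log10(0.01)) = 2, so the node maps to
# Annotated[decimal.Decimal, Precision(3 + 2, 2)]; without both inputs it falls back to `float`.
def _example_decimal_precision() -> TypeDef:
    node = JsonSchemaNumber(
        title=None,
        description=None,
        type="number",
        format=None,
        const=None,
        default=None,
        examples=None,
        minimum=None,
        maximum=None,
        exclusiveMinimum=-1000.0,
        exclusiveMaximum=1000.0,
        multipleOf=0.01,
    )
    return node_to_typedef(sys.modules[__name__], "Price", node)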

@dataclass
class SchemaFlatteningOptions:
    qualified_names: bool = False
    recursive: bool = False


def flatten_schema(schema: Schema, *, options: SchemaFlatteningOptions | None = None) -> Schema:
    top_node = typing.cast(JsonSchemaTopLevelObject, json_to_object(JsonSchemaTopLevelObject, schema))
    flattener = SchemaFlattener(options)
    obj = flattener.flatten(top_node)
    return typing.cast(Schema, object_to_json(obj))


class SchemaFlattener:
    options: SchemaFlatteningOptions

    def __init__(self, options: SchemaFlatteningOptions | None = None) -> None:
        self.options = options or SchemaFlatteningOptions()

    def flatten(self, source_node: JsonSchemaObject) -> JsonSchemaObject:
        if source_node.type != "object":
            return source_node

        source_props = source_node.properties or {}
        target_props: dict[str, JsonSchemaAny] = {}

        source_reqs = source_node.required or []
        target_reqs: list[str] = []

        for name, prop in source_props.items():
            if not isinstance(prop, JsonSchemaObject):
                target_props[name] = prop
                if name in source_reqs:
                    target_reqs.append(name)
                continue

            if self.options.recursive:
                obj = self.flatten(prop)
            else:
                obj = prop
            if obj.properties is not None:
                if self.options.qualified_names:
                    target_props.update((f"{name}.{n}", p) for n, p in obj.properties.items())
                else:
                    target_props.update(obj.properties.items())
            if obj.required is not None:
                if self.options.qualified_names:
                    target_reqs.extend(f"{name}.{n}" for n in obj.required)
                else:
                    target_reqs.extend(obj.required)

        target_node = copy.copy(source_node)
        target_node.properties = target_props or None
        target_node.additionalProperties = False
        target_node.required = target_reqs or None
        return target_node
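
# A minimal usage sketch (hypothetical schema and helper, added for illustration):
# flattening hoists the properties of nested object schemas into the parent; with
# qualified_names=True the hoisted keys are prefixed with the parent property name.
def _example_flatten_schema() -> None:
    schema: Schema = {
        "$schema": "http://json-schema.org/draft-07/schema#",
        "type": "object",
        "properties": {
            "address": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
        "additionalProperties": False,
    }
    flat = flatten_schema(schema, options=SchemaFlatteningOptions(qualified_names=True))
    assert "address.city" in flat["properties"]  # type: ignore[index]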
@@ -1,46 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

:see: https://github.com/hunyadi/strong_typing
"""

from typing import Union


class JsonObject:
    "Placeholder type for an unrestricted JSON object."


class JsonArray:
    "Placeholder type for an unrestricted JSON array."


# a JSON type with possible `null` values
JsonType = Union[
    None,
    bool,
    int,
    float,
    str,
    dict[str, "JsonType"],
    list["JsonType"],
]

# a JSON type that cannot contain `null` values
StrictJsonType = Union[
    bool,
    int,
    float,
    str,
    dict[str, "StrictJsonType"],
    list["StrictJsonType"],
]

# a meta-type that captures the object type in a JSON schema
Schema = dict[str, JsonType]
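
# An illustrative value (added for illustration, not part of the original file): any
# document produced by `json.loads` conforms to `JsonType`; `StrictJsonType` further
# rules out `null` at any depth.
_example_document: JsonType = {
    "id": 42,
    "tags": ["alpha", "beta"],
    "owner": None,  # permitted by JsonType, but not by StrictJsonType
}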
@@ -1,872 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

:see: https://github.com/hunyadi/strong_typing
"""

import abc
import base64
import dataclasses
import datetime
import enum
import inspect
import ipaddress
import sys
import typing
import uuid
from collections.abc import Callable
from types import ModuleType
from typing import (
    Any,
    Generic,
    Literal,
    NamedTuple,
    Optional,
    TypeVar,
    Union,
)

from .core import JsonType
from .exception import JsonKeyError, JsonTypeError, JsonValueError
from .inspection import (
    TypeLike,
    create_object,
    enum_value_types,
    evaluate_type,
    get_class_properties,
    get_class_property,
    get_resolved_hints,
    is_dataclass_instance,
    is_dataclass_type,
    is_named_tuple_type,
    is_type_annotated,
    is_type_literal,
    is_type_optional,
    unwrap_annotated_type,
    unwrap_literal_values,
    unwrap_optional_type,
)
from .mapping import python_field_to_json_property
from .name import python_type_to_str

E = TypeVar("E", bound=enum.Enum)
T = TypeVar("T")
R = TypeVar("R")
K = TypeVar("K")
V = TypeVar("V")


class Deserializer(abc.ABC, Generic[T]):
    "Parses a JSON value into a Python type."

    def build(self, context: ModuleType | None) -> None:
        """
        Creates auxiliary parsers that this parser is depending on.

        :param context: A module context for evaluating types specified as a string.
        """

    @abc.abstractmethod
    def parse(self, data: JsonType) -> T:
        """
        Parses a JSON value into a Python type.

        :param data: The JSON value to de-serialize.
        :returns: The Python object that the JSON value de-serializes to.
        """


class NoneDeserializer(Deserializer[None]):
    "Parses JSON `null` values into Python `None`."

    def parse(self, data: JsonType) -> None:
        if data is not None:
            raise JsonTypeError(f"`None` type expects JSON `null` but instead received: {data}")
        return None


class BoolDeserializer(Deserializer[bool]):
    "Parses JSON `boolean` values into Python `bool` type."

    def parse(self, data: JsonType) -> bool:
        if not isinstance(data, bool):
            raise JsonTypeError(f"`bool` type expects JSON `boolean` data but instead received: {data}")
        return bool(data)


class IntDeserializer(Deserializer[int]):
    "Parses JSON `number` values into Python `int` type."

    def parse(self, data: JsonType) -> int:
        if not isinstance(data, int):
            raise JsonTypeError(f"`int` type expects integer data as JSON `number` but instead received: {data}")
        return int(data)


class FloatDeserializer(Deserializer[float]):
    "Parses JSON `number` values into Python `float` type."

    def parse(self, data: JsonType) -> float:
        if not isinstance(data, float) and not isinstance(data, int):
            raise JsonTypeError(f"`float` type expects data as JSON `number` but instead received: {data}")
        return float(data)


class StringDeserializer(Deserializer[str]):
    "Parses JSON `string` values into Python `str` type."

    def parse(self, data: JsonType) -> str:
        if not isinstance(data, str):
            raise JsonTypeError(f"`str` type expects JSON `string` data but instead received: {data}")
        return str(data)


class BytesDeserializer(Deserializer[bytes]):
    "Parses JSON `string` values of Base64-encoded strings into Python `bytes` type."

    def parse(self, data: JsonType) -> bytes:
        if not isinstance(data, str):
            raise JsonTypeError(f"`bytes` type expects JSON `string` data but instead received: {data}")
        return base64.b64decode(data, validate=True)


class DateTimeDeserializer(Deserializer[datetime.datetime]):
    "Parses JSON `string` values representing timestamps in ISO 8601 format to Python `datetime` with time zone."

    def parse(self, data: JsonType) -> datetime.datetime:
        if not isinstance(data, str):
            raise JsonTypeError(f"`datetime` type expects JSON `string` data but instead received: {data}")

        if data.endswith("Z"):
            data = f"{data[:-1]}+00:00"  # Python's isoformat() does not support military time zones like "Zulu" for UTC
        timestamp = datetime.datetime.fromisoformat(data)
        if timestamp.tzinfo is None:
            raise JsonValueError(f"timestamp lacks explicit time zone designator: {data}")
        return timestamp
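
# A minimal usage sketch (hypothetical helper, added for illustration): the parser
# above rewrites a trailing "Z" to "+00:00" before delegating to `fromisoformat`,
# and rejects timestamps that carry no time zone at all.
def _example_parse_timestamp() -> None:
    ts = DateTimeDeserializer().parse("2024-01-01T12:00:00Z")
    assert ts.tzinfo is not None  # the "Zulu" suffix became an explicit UTC offset
    # DateTimeDeserializer().parse("2024-01-01T12:00:00") would raise JsonValueError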

class DateDeserializer(Deserializer[datetime.date]):
    "Parses JSON `string` values representing dates in ISO 8601 format to Python `date` type."

    def parse(self, data: JsonType) -> datetime.date:
        if not isinstance(data, str):
            raise JsonTypeError(f"`date` type expects JSON `string` data but instead received: {data}")

        return datetime.date.fromisoformat(data)


class TimeDeserializer(Deserializer[datetime.time]):
    "Parses JSON `string` values representing time instances in ISO 8601 format to Python `time` type with time zone."

    def parse(self, data: JsonType) -> datetime.time:
        if not isinstance(data, str):
            raise JsonTypeError(f"`time` type expects JSON `string` data but instead received: {data}")

        return datetime.time.fromisoformat(data)


class UUIDDeserializer(Deserializer[uuid.UUID]):
    "Parses JSON `string` values of UUID strings into Python `uuid.UUID` type."

    def parse(self, data: JsonType) -> uuid.UUID:
        if not isinstance(data, str):
            raise JsonTypeError(f"`UUID` type expects JSON `string` data but instead received: {data}")
        return uuid.UUID(data)


class IPv4Deserializer(Deserializer[ipaddress.IPv4Address]):
    "Parses JSON `string` values of IPv4 address strings into Python `ipaddress.IPv4Address` type."

    def parse(self, data: JsonType) -> ipaddress.IPv4Address:
        if not isinstance(data, str):
            raise JsonTypeError(f"`IPv4Address` type expects JSON `string` data but instead received: {data}")
        return ipaddress.IPv4Address(data)


class IPv6Deserializer(Deserializer[ipaddress.IPv6Address]):
    "Parses JSON `string` values of IPv6 address strings into Python `ipaddress.IPv6Address` type."

    def parse(self, data: JsonType) -> ipaddress.IPv6Address:
        if not isinstance(data, str):
            raise JsonTypeError(f"`IPv6Address` type expects JSON `string` data but instead received: {data}")
        return ipaddress.IPv6Address(data)


class ListDeserializer(Deserializer[list[T]]):
    "Recursively de-serializes a JSON array into a Python `list`."

    item_type: type[T]
    item_parser: Deserializer

    def __init__(self, item_type: type[T]) -> None:
        self.item_type = item_type

    def build(self, context: ModuleType | None) -> None:
        self.item_parser = _get_deserializer(self.item_type, context)

    def parse(self, data: JsonType) -> list[T]:
        if not isinstance(data, list):
            type_name = python_type_to_str(self.item_type)
            raise JsonTypeError(f"type `List[{type_name}]` expects JSON `array` data but instead received: {data}")

        return [self.item_parser.parse(item) for item in data]


class DictDeserializer(Deserializer[dict[K, V]]):
    "Recursively de-serializes a JSON object into a Python `dict`."

    key_type: type[K]
    value_type: type[V]
    value_parser: Deserializer[V]

    def __init__(self, key_type: type[K], value_type: type[V]) -> None:
        self.key_type = key_type
        self.value_type = value_type
        self._check_key_type()

    def build(self, context: ModuleType | None) -> None:
        self.value_parser = _get_deserializer(self.value_type, context)

    def _check_key_type(self) -> None:
        if self.key_type is str:
            return

        if issubclass(self.key_type, enum.Enum):
            value_types = enum_value_types(self.key_type)
            if len(value_types) != 1:
                raise JsonTypeError(
                    f"type `{self.container_type}` has invalid key type, "
                    f"enumerations must have a consistent member value type but several types found: {value_types}"
                )
            value_type = value_types.pop()
            if value_type is not str:
                # the original statement was a bare f-string (a no-op); raise as clearly intended
                raise JsonTypeError(
                    f"type `{self.container_type}` has invalid enumeration key type, expected `enum.Enum` with string values"
                )
            return

        raise JsonTypeError(
            f"type `{self.container_type}` has invalid key type, expected `str` or `enum.Enum` with string values"
        )

    @property
    def container_type(self) -> str:
        key_type_name = python_type_to_str(self.key_type)
        value_type_name = python_type_to_str(self.value_type)
        return f"Dict[{key_type_name}, {value_type_name}]"

    def parse(self, data: JsonType) -> dict[K, V]:
        if not isinstance(data, dict):
            raise JsonTypeError(
                f"type `{self.container_type}` expects JSON `object` data but instead received: {data}"
            )

        return dict(
            (self.key_type(key), self.value_parser.parse(value))  # type: ignore[call-arg]
            for key, value in data.items()
        )


class SetDeserializer(Deserializer[set[T]]):
    "Recursively de-serializes a JSON list into a Python `set`."

    member_type: type[T]
    member_parser: Deserializer

    def __init__(self, member_type: type[T]) -> None:
        self.member_type = member_type

    def build(self, context: ModuleType | None) -> None:
        self.member_parser = _get_deserializer(self.member_type, context)

    def parse(self, data: JsonType) -> set[T]:
        if not isinstance(data, list):
            type_name = python_type_to_str(self.member_type)
            raise JsonTypeError(f"type `Set[{type_name}]` expects JSON `array` data but instead received: {data}")

        return set(self.member_parser.parse(item) for item in data)


class TupleDeserializer(Deserializer[tuple[Any, ...]]):
    "Recursively de-serializes a JSON list into a Python `tuple`."

    item_types: tuple[type[Any], ...]
    item_parsers: tuple[Deserializer[Any], ...]

    def __init__(self, item_types: tuple[type[Any], ...]) -> None:
        self.item_types = item_types

    def build(self, context: ModuleType | None) -> None:
        self.item_parsers = tuple(_get_deserializer(item_type, context) for item_type in self.item_types)

    @property
    def container_type(self) -> str:
        type_names = ", ".join(python_type_to_str(item_type) for item_type in self.item_types)
        return f"Tuple[{type_names}]"

    def parse(self, data: JsonType) -> tuple[Any, ...]:
        if not isinstance(data, list) or len(data) != len(self.item_parsers):
            if not isinstance(data, list):
                raise JsonTypeError(
                    f"type `{self.container_type}` expects JSON `array` data but instead received: {data}"
                )
            else:
                count = len(self.item_parsers)
                raise JsonValueError(
                    f"type `{self.container_type}` expects a JSON `array` of length {count} but received length {len(data)}"
                )

        return tuple(item_parser.parse(item) for item_parser, item in zip(self.item_parsers, data, strict=False))


class UnionDeserializer(Deserializer):
    "De-serializes a JSON value (of any type) into a Python union type."

    member_types: tuple[type, ...]
    member_parsers: tuple[Deserializer, ...]

    def __init__(self, member_types: tuple[type, ...]) -> None:
        self.member_types = member_types

    def build(self, context: ModuleType | None) -> None:
        self.member_parsers = tuple(_get_deserializer(member_type, context) for member_type in self.member_types)

    def parse(self, data: JsonType) -> Any:
        for member_parser in self.member_parsers:
            # iterate over potential types of discriminated union
            try:
                return member_parser.parse(data)
            except (JsonKeyError, JsonTypeError):
                # indicates a required field is missing from JSON dict -OR- the data cannot be cast to the expected type,
                # i.e. we don't have the type that we are looking for
                continue

        type_names = ", ".join(python_type_to_str(member_type) for member_type in self.member_types)
        raise JsonKeyError(f"type `Union[{type_names}]` could not be instantiated from: {data}")


def get_literal_properties(typ: type) -> set[str]:
    "Returns the names of all properties in a class that are of a literal type."

    return set(
        property_name for property_name, property_type in get_class_properties(typ) if is_type_literal(property_type)
    )


def get_discriminating_properties(types: tuple[type, ...]) -> set[str]:
    "Returns a set of properties with literal type that are common across all specified classes."

    if not types or not all(isinstance(typ, type) for typ in types):
        return set()

    props = get_literal_properties(types[0])
    for typ in types[1:]:
        props = props & get_literal_properties(typ)

    return props
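
# An illustrative sketch (hypothetical classes, added for illustration): dataclasses
# that share a `kind` field with distinct Literal values form a tagged union, so
# get_discriminating_properties((_Cat, _Dog)) == {"kind"}, and _create_deserializer
# (defined below) routes Union[_Cat, _Dog] to TaggedUnionDeserializer rather than
# trying each member in turn.
@dataclasses.dataclass
class _Cat:
    kind: Literal["cat"]
    meows: bool


@dataclasses.dataclass
class _Dog:
    kind: Literal["dog"]
    barks: bool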

class TaggedUnionDeserializer(Deserializer):
    "De-serializes a JSON value with one or more disambiguating properties into a Python union type."

    member_types: tuple[type, ...]
    disambiguating_properties: set[str]
    member_parsers: dict[tuple[str, Any], Deserializer]

    def __init__(self, member_types: tuple[type, ...]) -> None:
        self.member_types = member_types
        self.disambiguating_properties = get_discriminating_properties(member_types)

    def build(self, context: ModuleType | None) -> None:
        self.member_parsers = {}
        for member_type in self.member_types:
            for property_name in self.disambiguating_properties:
                literal_type = get_class_property(member_type, property_name)
                if not literal_type:
                    continue

                for literal_value in unwrap_literal_values(literal_type):
                    tpl = (property_name, literal_value)
                    if tpl in self.member_parsers:
                        raise JsonTypeError(
                            f"disambiguating property `{property_name}` in type `{self.union_type}` has a duplicate value: {literal_value}"
                        )

                    self.member_parsers[tpl] = _get_deserializer(member_type, context)

    @property
    def union_type(self) -> str:
        type_names = ", ".join(python_type_to_str(member_type) for member_type in self.member_types)
        return f"Union[{type_names}]"

    def parse(self, data: JsonType) -> Any:
        if not isinstance(data, dict):
            raise JsonTypeError(
                f"tagged union type `{self.union_type}` expects JSON `object` data but instead received: {data}"
            )

        for property_name in self.disambiguating_properties:
            disambiguating_value = data.get(property_name)
            if disambiguating_value is None:
                continue

            member_parser = self.member_parsers.get((property_name, disambiguating_value))
            if member_parser is None:
                raise JsonTypeError(
                    f"disambiguating property value is invalid for tagged union type `{self.union_type}`: {data}"
                )

            return member_parser.parse(data)

        raise JsonTypeError(
            f"disambiguating property value is missing for tagged union type `{self.union_type}`: {data}"
        )


class LiteralDeserializer(Deserializer):
    "De-serializes a JSON value into a Python literal type."

    values: tuple[Any, ...]
    parser: Deserializer

    def __init__(self, values: tuple[Any, ...]) -> None:
        self.values = values

    def build(self, context: ModuleType | None) -> None:
        literal_type_tuple = tuple(type(value) for value in self.values)
        literal_type_set = set(literal_type_tuple)
        if len(literal_type_set) != 1:
            value_names = ", ".join(repr(value) for value in self.values)
            raise TypeError(
                f"type `Literal[{value_names}]` expects consistent literal value types but got: {literal_type_tuple}"
            )

        literal_type = literal_type_set.pop()
        self.parser = _get_deserializer(literal_type, context)

    def parse(self, data: JsonType) -> Any:
        value = self.parser.parse(data)
        if value not in self.values:
            value_names = ", ".join(repr(value) for value in self.values)
            raise JsonTypeError(f"type `Literal[{value_names}]` could not be instantiated from: {data}")
        return value


class EnumDeserializer(Deserializer[E]):
    "Returns an enumeration instance based on the enumeration value read from a JSON value."

    enum_type: type[E]

    def __init__(self, enum_type: type[E]) -> None:
        self.enum_type = enum_type

    def parse(self, data: JsonType) -> E:
        return self.enum_type(data)


class CustomDeserializer(Deserializer[T]):
    "Uses the `from_json` class method in class to de-serialize the object from JSON."

    converter: Callable[[JsonType], T]

    def __init__(self, converter: Callable[[JsonType], T]) -> None:
        self.converter = converter

    def parse(self, data: JsonType) -> T:
        return self.converter(data)


class FieldDeserializer(abc.ABC, Generic[T, R]):
    """
    Deserializes a JSON property into a Python object field.

    :param property_name: The name of the JSON property to read from a JSON `object`.
    :param field_name: The name of the field in a Python class to write data to.
    :param parser: A compatible deserializer that can handle the field's type.
    """

    property_name: str
    field_name: str
    parser: Deserializer[T]

    def __init__(self, property_name: str, field_name: str, parser: Deserializer[T]) -> None:
        self.property_name = property_name
        self.field_name = field_name
        self.parser = parser

    @abc.abstractmethod
    def parse_field(self, data: dict[str, JsonType]) -> R: ...


class RequiredFieldDeserializer(FieldDeserializer[T, T]):
    "Deserializes a JSON property into a mandatory Python object field."

    def parse_field(self, data: dict[str, JsonType]) -> T:
        if self.property_name not in data:
            raise JsonKeyError(f"missing required property `{self.property_name}` from JSON object: {data}")

        return self.parser.parse(data[self.property_name])


class OptionalFieldDeserializer(FieldDeserializer[T, Optional[T]]):
    "Deserializes a JSON property into an optional Python object field with a default value of `None`."

    def parse_field(self, data: dict[str, JsonType]) -> T | None:
        value = data.get(self.property_name)
        if value is not None:
            return self.parser.parse(value)
        else:
            return None


class DefaultFieldDeserializer(FieldDeserializer[T, T]):
    "Deserializes a JSON property into a Python object field with an explicit default value."

    default_value: T

    def __init__(
        self,
        property_name: str,
        field_name: str,
        parser: Deserializer,
        default_value: T,
    ) -> None:
        super().__init__(property_name, field_name, parser)
        self.default_value = default_value

    def parse_field(self, data: dict[str, JsonType]) -> T:
        value = data.get(self.property_name)
        if value is not None:
            return self.parser.parse(value)
        else:
            return self.default_value


class DefaultFactoryFieldDeserializer(FieldDeserializer[T, T]):
    "Deserializes a JSON property into an optional Python object field with an explicit default value factory."

    default_factory: Callable[[], T]

    def __init__(
        self,
        property_name: str,
        field_name: str,
        parser: Deserializer[T],
        default_factory: Callable[[], T],
    ) -> None:
        super().__init__(property_name, field_name, parser)
        self.default_factory = default_factory

    def parse_field(self, data: dict[str, JsonType]) -> T:
        value = data.get(self.property_name)
        if value is not None:
            return self.parser.parse(value)
        else:
            return self.default_factory()


class ClassDeserializer(Deserializer[T]):
    "Base class for de-serializing class-like types such as data classes, named tuples and regular classes."

    class_type: type
    property_parsers: list[FieldDeserializer]
    property_fields: set[str]

    def __init__(self, class_type: type[T]) -> None:
        self.class_type = class_type

    def assign(self, property_parsers: list[FieldDeserializer]) -> None:
        self.property_parsers = property_parsers
        self.property_fields = set(property_parser.property_name for property_parser in property_parsers)

    def parse(self, data: JsonType) -> T:
        if not isinstance(data, dict):
            type_name = python_type_to_str(self.class_type)
            raise JsonTypeError(f"type `{type_name}` expects JSON `object` data but instead received: {data}")

        object_data: dict[str, JsonType] = typing.cast(dict[str, JsonType], data)

        field_values = {}
        for property_parser in self.property_parsers:
            field_values[property_parser.field_name] = property_parser.parse_field(object_data)

        if not self.property_fields.issuperset(object_data):
            unassigned_names = [name for name in object_data if name not in self.property_fields]
            raise JsonKeyError(f"unrecognized fields in JSON object: {unassigned_names}")

        return self.create(**field_values)

    def create(self, **field_values: Any) -> T:
        "Instantiates an object with a collection of property values."

        obj: T = create_object(self.class_type)

        # use `setattr` on newly created object instance
        for field_name, field_value in field_values.items():
            setattr(obj, field_name, field_value)
        return obj


class NamedTupleDeserializer(ClassDeserializer[NamedTuple]):
    "De-serializes a named tuple from a JSON `object`."

    def build(self, context: ModuleType | None) -> None:
        property_parsers: list[FieldDeserializer] = [
            RequiredFieldDeserializer(field_name, field_name, _get_deserializer(field_type, context))
            for field_name, field_type in get_resolved_hints(self.class_type).items()
        ]
        super().assign(property_parsers)

    def create(self, **field_values: Any) -> NamedTuple:
        # mypy fails to deduce that this class returns NamedTuples only, hence the `ignore` directive
        return self.class_type(**field_values)  # type: ignore[no-any-return]


class DataclassDeserializer(ClassDeserializer[T]):
    "De-serializes a data class from a JSON `object`."

    def __init__(self, class_type: type[T]) -> None:
        if not dataclasses.is_dataclass(class_type):
            raise TypeError("expected: data-class type")
        super().__init__(class_type)  # type: ignore[arg-type]

    def build(self, context: ModuleType | None) -> None:
        property_parsers: list[FieldDeserializer] = []
        resolved_hints = get_resolved_hints(self.class_type)
        for field in dataclasses.fields(self.class_type):
            field_type = resolved_hints[field.name]
            property_name = python_field_to_json_property(field.name, field_type)

            is_optional = is_type_optional(field_type)
            has_default = field.default is not dataclasses.MISSING
            has_default_factory = field.default_factory is not dataclasses.MISSING

            if is_optional:
                required_type: type[T] = unwrap_optional_type(field_type)
            else:
                required_type = field_type

            parser = _get_deserializer(required_type, context)

            if has_default:
                field_parser: FieldDeserializer = DefaultFieldDeserializer(
                    property_name, field.name, parser, field.default
                )
            elif has_default_factory:
                default_factory = typing.cast(Callable[[], Any], field.default_factory)
                field_parser = DefaultFactoryFieldDeserializer(property_name, field.name, parser, default_factory)
            elif is_optional:
                field_parser = OptionalFieldDeserializer(property_name, field.name, parser)
            else:
                field_parser = RequiredFieldDeserializer(property_name, field.name, parser)

            property_parsers.append(field_parser)

        super().assign(property_parsers)


class FrozenDataclassDeserializer(DataclassDeserializer[T]):
    "De-serializes a frozen data class from a JSON `object`."

    def create(self, **field_values: Any) -> T:
        "Instantiates an object with a collection of property values."

        # create object instance without calling `__init__`
        obj: T = create_object(self.class_type)

        # can't use `setattr` on frozen dataclasses, pass member variable values to `__init__`
        obj.__init__(**field_values)  # type: ignore
        return obj


class TypedClassDeserializer(ClassDeserializer[T]):
    "De-serializes a class with type annotations from a JSON `object` by iterating over class properties."

    def build(self, context: ModuleType | None) -> None:
        property_parsers: list[FieldDeserializer] = []
        for field_name, field_type in get_resolved_hints(self.class_type).items():
            property_name = python_field_to_json_property(field_name, field_type)

            is_optional = is_type_optional(field_type)

            if is_optional:
                required_type: type[T] = unwrap_optional_type(field_type)
            else:
                required_type = field_type

            parser = _get_deserializer(required_type, context)

            if is_optional:
                field_parser: FieldDeserializer = OptionalFieldDeserializer(property_name, field_name, parser)
            else:
                field_parser = RequiredFieldDeserializer(property_name, field_name, parser)

            property_parsers.append(field_parser)

        super().assign(property_parsers)


def create_deserializer(typ: TypeLike, context: ModuleType | None = None) -> Deserializer:
    """
    Creates a de-serializer engine to produce a Python object from an object obtained from a JSON string.

    When de-serializing a JSON object into a Python object, the following transformations are applied:

    * Fundamental types are parsed as `bool`, `int`, `float` or `str`.
    * Date and time types are parsed from the ISO 8601 format with time zone into the corresponding Python type
      `datetime`, `date` or `time`.
    * Byte arrays are read from a string with Base64 encoding into a `bytes` instance.
    * UUIDs are extracted from a UUID string compliant with RFC 4122 into a `uuid.UUID` instance.
    * Enumerations are instantiated with a lookup on enumeration value.
    * Containers (e.g. `list`, `dict`, `set`, `tuple`) are parsed recursively.
    * Complex objects with properties (including data class types) are populated from dictionaries of key-value pairs
      using reflection (enumerating type annotations).

    :raises TypeError: A de-serializer engine cannot be constructed for the input type.
    """

    if context is None:
        if isinstance(typ, type):
            context = sys.modules[typ.__module__]

    return _get_deserializer(typ, context)
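
# A minimal usage sketch (hypothetical class and helper, added for illustration):
# build an engine once with the public entry point above, then parse JSON-shaped
# data into typed instances.
@dataclasses.dataclass
class _Point:
    x: float
    y: float


def _example_create_deserializer() -> None:
    engine = create_deserializer(_Point)
    assert engine.parse({"x": 1.0, "y": 2.0}) == _Point(x=1.0, y=2.0)
    # engine.parse({"x": 1.0}) would raise JsonKeyError for the missing `y` property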

_CACHE: dict[tuple[str, str], Deserializer] = {}


def _get_deserializer(typ: TypeLike, context: ModuleType | None) -> Deserializer:
    "Creates or re-uses a de-serializer engine to parse an object obtained from a JSON string."

    cache_key = None

    if isinstance(typ, (str, typing.ForwardRef)):
        if context is None:
            raise TypeError(f"missing context for evaluating type: {typ}")

        if isinstance(typ, str):
            if hasattr(context, typ):
                cache_key = (context.__name__, typ)
        elif isinstance(typ, typing.ForwardRef):
            if hasattr(context, typ.__forward_arg__):
                cache_key = (context.__name__, typ.__forward_arg__)

        typ = evaluate_type(typ, context)

    typ = unwrap_annotated_type(typ) if is_type_annotated(typ) else typ

    if isinstance(typ, type) and typing.get_origin(typ) is None:
        cache_key = (typ.__module__, typ.__name__)

    if cache_key is not None:
        deserializer = _CACHE.get(cache_key)
        if deserializer is None:
            deserializer = _create_deserializer(typ)

            # store de-serializer immediately in cache to avoid stack overflow for recursive types
            _CACHE[cache_key] = deserializer

            if isinstance(typ, type):
                # use type's own module as context for evaluating member types
                context = sys.modules[typ.__module__]

            # create any de-serializers this de-serializer is depending on
            deserializer.build(context)
    else:
        # special forms are not always hashable, create a new de-serializer every time
        deserializer = _create_deserializer(typ)
        deserializer.build(context)

    return deserializer


def _create_deserializer(typ: TypeLike) -> Deserializer:
    "Creates a de-serializer engine to parse an object obtained from a JSON string."

    # check for well-known types
    if typ is type(None):
        return NoneDeserializer()
    elif typ is bool:
        return BoolDeserializer()
    elif typ is int:
        return IntDeserializer()
    elif typ is float:
        return FloatDeserializer()
    elif typ is str:
        return StringDeserializer()
    elif typ is bytes:
        return BytesDeserializer()
    elif typ is datetime.datetime:
        return DateTimeDeserializer()
    elif typ is datetime.date:
        return DateDeserializer()
    elif typ is datetime.time:
        return TimeDeserializer()
    elif typ is uuid.UUID:
        return UUIDDeserializer()
    elif typ is ipaddress.IPv4Address:
        return IPv4Deserializer()
    elif typ is ipaddress.IPv6Address:
        return IPv6Deserializer()

    # dynamically-typed collection types
    if typ is list:
        raise TypeError("explicit item type required: use `List[T]` instead of `list`")
    if typ is dict:
        raise TypeError("explicit key and value types required: use `Dict[K, V]` instead of `dict`")
    if typ is set:
        raise TypeError("explicit member type required: use `Set[T]` instead of `set`")
    if typ is tuple:
        raise TypeError("explicit item type list required: use `Tuple[T, ...]` instead of `tuple`")

    # generic types (e.g. list, dict, set, etc.)
    origin_type = typing.get_origin(typ)
    if origin_type is list:
        (list_item_type,) = typing.get_args(typ)  # unpack single tuple element
        return ListDeserializer(list_item_type)
    elif origin_type is dict:
        key_type, value_type = typing.get_args(typ)
        return DictDeserializer(key_type, value_type)
    elif origin_type is set:
        (set_member_type,) = typing.get_args(typ)  # unpack single tuple element
        return SetDeserializer(set_member_type)
    elif origin_type is tuple:
        return TupleDeserializer(typing.get_args(typ))
    elif origin_type is Union:
        union_args = typing.get_args(typ)
        if get_discriminating_properties(union_args):
            return TaggedUnionDeserializer(union_args)
        else:
            return UnionDeserializer(union_args)
    elif origin_type is Literal:
        return LiteralDeserializer(typing.get_args(typ))

    if not inspect.isclass(typ):
        if is_dataclass_instance(typ):
            raise TypeError(f"dataclass type expected but got instance: {typ}")
        else:
            raise TypeError(f"unable to de-serialize unrecognized type: {typ}")

    if issubclass(typ, enum.Enum):
        return EnumDeserializer(typ)

    if is_named_tuple_type(typ):
        return NamedTupleDeserializer(typ)

    # check if object has custom serialization method
    convert_func = getattr(typ, "from_json", None)
    if callable(convert_func):
        return CustomDeserializer(convert_func)

    if is_dataclass_type(typ):
        dataclass_params = getattr(typ, "__dataclass_params__", None)
        if dataclass_params is not None and dataclass_params.frozen:
            return FrozenDataclassDeserializer(typ)
        else:
            return DataclassDeserializer(typ)

    return TypedClassDeserializer(typ)
@@ -1,410 +0,0 @@
|
||||||
# Copyright (c) Meta Platforms, Inc. and affiliates.
|
|
||||||
# All rights reserved.
|
|
||||||
#
|
|
||||||
# This source code is licensed under the terms described in the LICENSE file in
|
|
||||||
# the root directory of this source tree.
|
|
||||||
|
|
||||||
"""
|
|
||||||
Type-safe data interchange for Python data classes.
|
|
||||||
|
|
||||||
:see: https://github.com/hunyadi/strong_typing
|
|
||||||
"""
|
|
||||||
|
|
||||||
import builtins
|
|
||||||
import collections.abc
|
|
||||||
import dataclasses
|
|
||||||
import inspect
|
|
||||||
import re
|
|
||||||
import sys
|
|
||||||
import types
|
|
||||||
import typing
|
|
||||||
from collections.abc import Callable
|
|
||||||
from dataclasses import dataclass
|
|
||||||
from io import StringIO
|
|
||||||
from typing import Any, Protocol, TypeVar
|
|
||||||
|
|
||||||
if sys.version_info >= (3, 10):
|
|
||||||
from typing import TypeGuard
|
|
||||||
else:
|
|
||||||
from typing import TypeGuard
|
|
||||||
|
|
||||||
from .inspection import (
|
|
||||||
DataclassInstance,
|
|
||||||
get_class_properties,
|
|
||||||
get_signature,
|
|
||||||
is_dataclass_type,
|
|
||||||
is_type_enum,
|
|
||||||
)
|
|
||||||
|
|
||||||
T = TypeVar("T")
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
|
||||||
class DocstringParam:
|
|
||||||
"""
|
|
||||||
A parameter declaration in a parameter block.
|
|
||||||
|
|
||||||
:param name: The name of the parameter.
|
|
||||||
:param description: The description text for the parameter.
|
|
||||||
"""
|
|
||||||
|
|
||||||
name: str
|
|
||||||
description: str
|
|
||||||
param_type: type | str = inspect.Signature.empty
|
|
||||||
|
|
||||||
def __str__(self) -> str:
|
|
||||||
return f":param {self.name}: {self.description}"
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
|
||||||
class DocstringReturns:
|
|
||||||
"""
|
|
||||||
A `returns` declaration extracted from a docstring.
|
|
||||||
|
|
||||||
:param description: The description text for the return value.
|
|
||||||
"""
|
|
||||||
|
|
||||||
description: str
|
|
||||||
return_type: type = inspect.Signature.empty
|
|
||||||
|
|
||||||
def __str__(self) -> str:
|
|
||||||
return f":returns: {self.description}"
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass
|
|
||||||
class DocstringRaises:
|
|
||||||
"""
|
|
||||||
A `raises` declaration extracted from a docstring.
|
|
||||||
|
|
||||||
:param typename: The type name of the exception raised.
|
|
||||||
:param description: The description associated with the exception raised.
|
|
||||||
"""
|
|
||||||
|
|
||||||
typename: str
|
|
||||||
description: str
|
|
||||||
raise_type: type = inspect.Signature.empty
|
|
||||||
|
|
||||||
def __str__(self) -> str:
|
|
||||||
return f":raises {self.typename}: {self.description}"
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass
class Docstring:
    """
    Represents the documentation string (a.k.a. docstring) for a type such as a (data) class or function.

    A docstring is broken down into the following components:

    * A short description, which is the first block of text in the documentation string, and ends with a double
      newline or a parameter block.
    * A long description, which is the optional block of text following the short description, and ends with
      a parameter block.
    * A parameter block of named parameter and description string pairs in ReST style.
    * A `returns` declaration, which adds explanation to the return value.
    * A `raises` declaration, which adds explanation to the exception type raised by the function on error.

    When the docstring is attached to a data class, it is understood as the documentation string of the class
    `__init__` method.

    :param short_description: The short description text parsed from a docstring.
    :param long_description: The long description text parsed from a docstring.
    :param params: The parameter block extracted from a docstring.
    :param returns: The returns declaration extracted from a docstring.
    :param raises: The raises declarations extracted from a docstring, keyed by exception type name.
    """

    short_description: str | None = None
    long_description: str | None = None
    params: dict[str, DocstringParam] = dataclasses.field(default_factory=dict)
    returns: DocstringReturns | None = None
    raises: dict[str, DocstringRaises] = dataclasses.field(default_factory=dict)

    @property
    def full_description(self) -> str | None:
        if self.short_description and self.long_description:
            return f"{self.short_description}\n\n{self.long_description}"
        elif self.short_description:
            return self.short_description
        else:
            return None

    def __str__(self) -> str:
        output = StringIO()

        has_description = self.short_description or self.long_description
        has_blocks = self.params or self.returns or self.raises

        if has_description:
            if self.short_description and self.long_description:
                output.write(self.short_description)
                output.write("\n\n")
                output.write(self.long_description)
            elif self.short_description:
                output.write(self.short_description)

        if has_blocks:
            if has_description:
                output.write("\n")

            for param in self.params.values():
                output.write("\n")
                output.write(str(param))
            if self.returns:
                output.write("\n")
                output.write(str(self.returns))
            for raises in self.raises.values():
                output.write("\n")
                output.write(str(raises))

        s = output.getvalue()
        output.close()
        return s
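
# Illustrative sketch (not part of the original module): `Docstring.__str__`
# renders the parsed components back into ReST-style text. The helper below is
# hypothetical, added only for demonstration.
def _demo_docstring_str() -> None:
    doc = Docstring(
        short_description="Add two numbers.",
        params={"a": DocstringParam(name="a", description="First addend.")},
        returns=DocstringReturns(description="The sum."),
    )
    assert str(doc) == "Add two numbers.\n\n:param a: First addend.\n:returns: The sum."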

def is_exception(member: object) -> TypeGuard[type[BaseException]]:
    return isinstance(member, type) and issubclass(member, BaseException)


def get_exceptions(module: types.ModuleType) -> dict[str, type[BaseException]]:
    "Returns all exception classes declared in a module."

    return {name: class_type for name, class_type in inspect.getmembers(module, is_exception)}


class SupportsDoc(Protocol):
    __doc__: str | None


def _maybe_unwrap_async_iterator(t):
    origin_type = typing.get_origin(t)
    if origin_type is collections.abc.AsyncIterator:
        return typing.get_args(t)[0]
    return t


def parse_type(typ: SupportsDoc) -> Docstring:
    """
    Parse the docstring of a type into its components.

    :param typ: The type whose documentation string to parse.
    :returns: Components of the documentation string.
    """

    # use docstring from the iterator origin type for streaming APIs
    typ = _maybe_unwrap_async_iterator(typ)

    doc = get_docstring(typ)
    if doc is None:
        return Docstring()

    docstring = parse_text(doc)
    check_docstring(typ, docstring)

    # assign parameter and return types
    if is_dataclass_type(typ):
        properties = dict(get_class_properties(typing.cast(type, typ)))

        for name, param in docstring.params.items():
            param.param_type = properties[name]

    elif inspect.isfunction(typ):
        signature = get_signature(typ)
        for name, param in docstring.params.items():
            param.param_type = signature.parameters[name].annotation
        if docstring.returns:
            docstring.returns.return_type = signature.return_annotation

    # assign exception types
    defining_module = inspect.getmodule(typ)
    if defining_module:
        context: dict[str, type] = {}
        context.update(get_exceptions(builtins))
        context.update(get_exceptions(defining_module))
        for exc_name, exc in docstring.raises.items():
            raise_type = context.get(exc_name)
            if raise_type is None:
                type_name = getattr(typ, "__qualname__", None) or getattr(typ, "__name__", None) or None
                raise TypeError(
                    f"doc-string exception type `{exc_name}` is not an exception defined in the context of `{type_name}`"
                )

            exc.raise_type = raise_type

    return docstring
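
# Illustrative sketch (not part of the original module): `parse_type` resolves
# documented parameter and return types from the function signature. The helper
# below is hypothetical, added only for demonstration.
def _demo_parse_type() -> None:
    def add(a: int, b: int) -> int:
        """
        Add two integers.

        :param a: First addend.
        :param b: Second addend.
        :returns: The sum of `a` and `b`.
        """
        return a + b

    doc = parse_type(add)
    assert doc.params["a"].param_type is int
    assert doc.returns is not None and doc.returns.return_type is int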

def parse_text(text: str) -> Docstring:
    """
    Parse a ReST-style docstring into its components.

    :param text: The documentation string to parse, typically acquired as `type.__doc__`.
    :returns: Components of the documentation string.
    """

    if not text:
        return Docstring()

    # find block that starts object metadata block (e.g. `:param p:` or `:returns:`)
    text = inspect.cleandoc(text)
    match = re.search("^:", text, flags=re.MULTILINE)
    if match:
        desc_chunk = text[: match.start()]
        meta_chunk = text[match.start() :]  # noqa: E203
    else:
        desc_chunk = text
        meta_chunk = ""

    # split description text into short and long description
    parts = desc_chunk.split("\n\n", 1)

    # ensure short description has no newlines
    short_description = parts[0].strip().replace("\n", " ") or None

    # ensure long description preserves its structure (e.g. preformatted text)
    if len(parts) > 1:
        long_description = parts[1].strip() or None
    else:
        long_description = None

    params: dict[str, DocstringParam] = {}
    raises: dict[str, DocstringRaises] = {}
    returns = None
    for match in re.finditer(r"(^:.*?)(?=^:|\Z)", meta_chunk, flags=re.DOTALL | re.MULTILINE):
        chunk = match.group(0)
        if not chunk:
            continue

        args_chunk, desc_chunk = chunk.lstrip(":").split(":", 1)
        args = args_chunk.split()
        desc = re.sub(r"\s+", " ", desc_chunk.strip())

        if len(args) > 0:
            kw = args[0]
            if len(args) == 2:
                if kw == "param":
                    params[args[1]] = DocstringParam(
                        name=args[1],
                        description=desc,
                    )
                elif kw == "raise" or kw == "raises":
                    raises[args[1]] = DocstringRaises(
                        typename=args[1],
                        description=desc,
                    )

            elif len(args) == 1:
                if kw == "return" or kw == "returns":
                    returns = DocstringReturns(description=desc)

    return Docstring(
        long_description=long_description,
        short_description=short_description,
        params=params,
        returns=returns,
        raises=raises,
    )
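
# Illustrative sketch (not part of the original module): how `parse_text`
# decomposes a ReST-style docstring. The helper below is hypothetical, added
# only for demonstration.
def _demo_parse_text() -> None:
    doc = parse_text(
        "Add two numbers.\n"
        "\n"
        ":param a: First addend.\n"
        ":param b: Second addend.\n"
        ":returns: The sum of `a` and `b`.\n"
    )
    assert doc.short_description == "Add two numbers."
    assert sorted(doc.params) == ["a", "b"]
    assert doc.returns is not None and doc.returns.description == "The sum of `a` and `b`."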

def has_default_docstring(typ: SupportsDoc) -> bool:
    "Check if class has the auto-generated string assigned by @dataclass."

    if not isinstance(typ, type):
        return False

    if is_dataclass_type(typ):
        return typ.__doc__ is not None and re.match(f"^{re.escape(typ.__name__)}[(].*[)]$", typ.__doc__) is not None

    if is_type_enum(typ):
        return typ.__doc__ is not None and typ.__doc__ == "An enumeration."

    return False


def has_docstring(typ: SupportsDoc) -> bool:
    "Check if class has a documentation string other than the auto-generated string assigned by @dataclass."

    if has_default_docstring(typ):
        return False

    return bool(typ.__doc__)


def get_docstring(typ: SupportsDoc) -> str | None:
    if typ.__doc__ is None:
        return None

    if has_default_docstring(typ):
        return None

    return typ.__doc__


def check_docstring(typ: SupportsDoc, docstring: Docstring, strict: bool = False) -> None:
    """
    Verifies the doc-string of a type.

    :param strict: Whether to also require that all members (or parameters) have doc-strings.
    :raises TypeError: Raised on a mismatch between doc-string parameters, and function or type signature.
    """

    if is_dataclass_type(typ):
        check_dataclass_docstring(typ, docstring, strict)
    elif inspect.isfunction(typ):
        check_function_docstring(typ, docstring, strict)


def check_dataclass_docstring(typ: type[DataclassInstance], docstring: Docstring, strict: bool = False) -> None:
    """
    Verifies the doc-string of a data-class type.

    :param strict: Whether to check if all data-class members have doc-strings.
    :raises TypeError: Raised on a mismatch between doc-string parameters and data-class members.
    """

    if not is_dataclass_type(typ):
        raise TypeError("not a data-class type")

    properties = dict(get_class_properties(typ))
    class_name = typ.__name__

    for name in docstring.params:
        if name not in properties:
            raise TypeError(f"doc-string parameter `{name}` is not a member of the data-class `{class_name}`")

    if not strict:
        return

    for name in properties:
        if name not in docstring.params:
            raise TypeError(f"member `{name}` in data-class `{class_name}` is missing its doc-string")


def check_function_docstring(fn: Callable[..., Any], docstring: Docstring, strict: bool = False) -> None:
    """
    Verifies the doc-string of a function or member function.

    :param strict: Whether to check if all function parameters and the return type have doc-strings.
    :raises TypeError: Raised on a mismatch between doc-string parameters and function signature.
    """

    signature = get_signature(fn)
    func_name = fn.__qualname__

    for name in docstring.params:
        if name not in signature.parameters:
            raise TypeError(f"doc-string parameter `{name}` is absent from signature of function `{func_name}`")

    if docstring.returns is not None and signature.return_annotation is inspect.Signature.empty:
        raise TypeError(f"doc-string has returns description in function `{func_name}` with no return type annotation")

    if not strict:
        return

    for name, param in signature.parameters.items():
        # ignore `self` in member function signatures
        if name == "self" and (
            param.kind is inspect.Parameter.POSITIONAL_ONLY or param.kind is inspect.Parameter.POSITIONAL_OR_KEYWORD
        ):
            continue

        if name not in docstring.params:
            raise TypeError(f"function parameter `{name}` in `{func_name}` is missing its doc-string")

    if signature.return_annotation is not inspect.Signature.empty and docstring.returns is None:
        raise TypeError(f"function `{func_name}` has no returns description in its doc-string")
@@ -1,23 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

:see: https://github.com/hunyadi/strong_typing
"""


class JsonKeyError(Exception):
    "Raised when deserialization for a class or union type has failed because a matching member was not found."


class JsonValueError(Exception):
    "Raised when (de)serialization of data has failed due to an invalid value."


class JsonTypeError(Exception):
    "Raised when deserialization of data has failed due to a type mismatch."
File diff suppressed because it is too large
@@ -1,39 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

:see: https://github.com/hunyadi/strong_typing
"""

import keyword

from .auxiliary import Alias
from .inspection import get_annotation


def python_field_to_json_property(python_id: str, python_type: object | None = None) -> str:
    """
    Map a Python field identifier to a JSON property name.

    Authors may use an underscore appended at the end of a Python identifier as per PEP 8 if it clashes with a Python
    keyword: e.g. `in` would become `in_` and `from` would become `from_`. Remove these suffixes when exporting to JSON.

    Authors may supply an explicit alias with the type annotation `Alias`, e.g. `Annotated[MyType, Alias("alias")]`.
    """

    if python_type is not None:
        alias = get_annotation(python_type, Alias)
        if alias:
            return alias.name

    if python_id.endswith("_"):
        identifier = python_id[:-1]
        if keyword.iskeyword(identifier):
            return identifier

    return python_id
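
# Illustrative sketch (not part of the original module): keyword-suffix and
# alias handling. The helper below is hypothetical, added only for demonstration.
def _demo_python_field_to_json_property() -> None:
    from typing import Annotated

    assert python_field_to_json_property("from_") == "from"  # PEP 8 keyword suffix is dropped
    assert python_field_to_json_property("value_") == "value_"  # `value` is not a keyword, so kept as-is
    assert python_field_to_json_property("model_id", Annotated[str, Alias("modelId")]) == "modelId"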
@@ -1,188 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

:see: https://github.com/hunyadi/strong_typing
"""

import typing
from typing import Any, Literal, Union

from .auxiliary import _auxiliary_types
from .inspection import (
    TypeLike,
    is_generic_dict,
    is_generic_list,
    is_generic_sequence,
    is_type_optional,
    is_type_union,
    unwrap_generic_dict,
    unwrap_generic_list,
    unwrap_generic_sequence,
    unwrap_optional_type,
    unwrap_union_types,
)


class TypeFormatter:
    """
    Type formatter.

    :param use_union_operator: Whether to emit union types as `X | Y` as per PEP 604.
    """

    use_union_operator: bool

    def __init__(self, use_union_operator: bool = False) -> None:
        self.use_union_operator = use_union_operator

    def union_to_str(self, data_type_args: tuple[TypeLike, ...]) -> str:
        if self.use_union_operator:
            return " | ".join(self.python_type_to_str(t) for t in data_type_args)
        else:
            if len(data_type_args) == 2 and type(None) in data_type_args:
                # Optional[T] is represented as Union[T, None]
                origin_name = "Optional"
                data_type_args = tuple(t for t in data_type_args if t is not type(None))
            else:
                origin_name = "Union"

            args = ", ".join(self.python_type_to_str(t) for t in data_type_args)
            return f"{origin_name}[{args}]"

    def plain_type_to_str(self, data_type: TypeLike) -> str:
        "Returns the string representation of a Python type without metadata."

        # return forward references as the annotation string
        if isinstance(data_type, typing.ForwardRef):
            fwd: typing.ForwardRef = data_type
            return fwd.__forward_arg__
        elif isinstance(data_type, str):
            return data_type

        origin = typing.get_origin(data_type)
        if origin is not None:
            data_type_args = typing.get_args(data_type)

            if origin is dict:  # Dict[K, V]
                origin_name = "Dict"
            elif origin is list:  # List[T]
                origin_name = "List"
            elif origin is set:  # Set[T]
                origin_name = "Set"
            elif origin is Union:
                return self.union_to_str(data_type_args)
            elif origin is Literal:
                args = ", ".join(repr(arg) for arg in data_type_args)
                return f"Literal[{args}]"
            else:
                origin_name = origin.__name__

            args = ", ".join(self.python_type_to_str(t) for t in data_type_args)
            return f"{origin_name}[{args}]"

        return data_type.__name__

    def python_type_to_str(self, data_type: TypeLike) -> str:
        "Returns the string representation of a Python type."

        if data_type is type(None):
            return "None"

        # use compact name for alias types
        name = _auxiliary_types.get(data_type)
        if name is not None:
            return name

        metadata = getattr(data_type, "__metadata__", None)
        if metadata is not None:
            # type is Annotated[T, ...]
            metatuple: tuple[Any, ...] = metadata
            arg = typing.get_args(data_type)[0]

            # check for auxiliary types with user-defined annotations
            metaset = set(metatuple)
            for auxiliary_type, auxiliary_name in _auxiliary_types.items():
                auxiliary_arg = typing.get_args(auxiliary_type)[0]
                if arg is not auxiliary_arg:
                    continue

                auxiliary_metatuple: tuple[Any, ...] | None = getattr(auxiliary_type, "__metadata__", None)
                if auxiliary_metatuple is None:
                    continue

                if metaset.issuperset(auxiliary_metatuple):
                    # type is an auxiliary type with extra annotations
                    auxiliary_args = ", ".join(repr(m) for m in metatuple if m not in auxiliary_metatuple)
                    return f"Annotated[{auxiliary_name}, {auxiliary_args}]"

            # type is an annotated type
            args = ", ".join(repr(m) for m in metatuple)
            return f"Annotated[{self.plain_type_to_str(arg)}, {args}]"
        else:
            # type is a regular type
            return self.plain_type_to_str(data_type)


def python_type_to_str(data_type: TypeLike, use_union_operator: bool = False) -> str:
    """
    Returns the string representation of a Python type.

    :param use_union_operator: Whether to emit union types as `X | Y` as per PEP 604.
    """

    fmt = TypeFormatter(use_union_operator)
    return fmt.python_type_to_str(data_type)
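
# Illustrative sketch (not part of the original module): typing-style vs
# PEP 604 rendering. The helper below is hypothetical, added only for
# demonstration.
def _demo_python_type_to_str() -> None:
    from typing import Optional

    assert python_type_to_str(dict[str, int]) == "Dict[str, int]"
    assert python_type_to_str(Optional[int]) == "Optional[int]"
    assert python_type_to_str(Optional[int], use_union_operator=True) == "int | None"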

def python_type_to_name(data_type: TypeLike, force: bool = False) -> str:
    """
    Returns the short name of a Python type.

    :param force: Whether to produce a name for composite types such as generics.
    """

    # use compact name for alias types
    name = _auxiliary_types.get(data_type)
    if name is not None:
        return name

    # unwrap annotated types
    metadata = getattr(data_type, "__metadata__", None)
    if metadata is not None:
        # type is Annotated[T, ...]
        arg = typing.get_args(data_type)[0]
        return python_type_to_name(arg, force=force)

    if force:
        # generic types
        if is_type_optional(data_type, strict=True):
            inner_name = python_type_to_name(unwrap_optional_type(data_type), force=True)
            return f"Optional__{inner_name}"
        elif is_generic_list(data_type):
            item_name = python_type_to_name(unwrap_generic_list(data_type), force=True)
            return f"List__{item_name}"
        elif is_generic_sequence(data_type):
            # treat Sequence the same as List for schema generation purposes
            item_name = python_type_to_name(unwrap_generic_sequence(data_type), force=True)
            return f"List__{item_name}"
        elif is_generic_dict(data_type):
            key_type, value_type = unwrap_generic_dict(data_type)
            key_name = python_type_to_name(key_type, force=True)
            value_name = python_type_to_name(value_type, force=True)
            return f"Dict__{key_name}__{value_name}"
        elif is_type_union(data_type):
            member_types = unwrap_union_types(data_type)
            member_names = "__".join(python_type_to_name(member_type, force=True) for member_type in member_types)
            return f"Union__{member_names}"

    # named system or user-defined type
    if hasattr(data_type, "__name__") and not typing.get_args(data_type):
        return data_type.__name__

    raise TypeError(f"cannot assign a simple name to type: {data_type}")
@@ -1,791 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

:see: https://github.com/hunyadi/strong_typing
"""

import collections.abc
import dataclasses
import datetime
import decimal
import enum
import functools
import inspect
import json
import types
import typing
import uuid
from collections.abc import Callable
from copy import deepcopy
from typing import (
    Annotated,
    Any,
    ClassVar,
    Literal,
    TypeVar,
    Union,
    overload,
)

import jsonschema

from . import docstring
from .auxiliary import (
    Alias,
    IntegerRange,
    MaxLength,
    MinLength,
    Precision,
    get_auxiliary_format,
)
from .core import JsonArray, JsonObject, JsonType, Schema, StrictJsonType
from .inspection import (
    TypeLike,
    enum_value_types,
    get_annotation,
    get_class_properties,
    is_type_enum,
    is_type_like,
    is_type_optional,
    unwrap_optional_type,
)
from .name import python_type_to_name
from .serialization import object_to_json

# determines the maximum number of distinct enum members up to which a Dict[EnumType, Any] is converted into a JSON
# schema with explicitly listed properties (rather than employing a pattern constraint on property names)
OBJECT_ENUM_EXPANSION_LIMIT = 4


T = TypeVar("T")


def get_class_docstrings(data_type: type) -> tuple[str | None, str | None]:
    docstr = docstring.parse_type(data_type)

    # check if class has a doc-string other than the auto-generated string assigned by @dataclass
    if docstring.has_default_docstring(data_type):
        return None, None

    return docstr.short_description, docstr.long_description


def get_class_property_docstrings(
    data_type: type, transform_fun: Callable[[type, str, str], str] | None = None
) -> dict[str, str]:
    """
    Extracts the documentation strings associated with the properties of a composite type.

    :param data_type: The object whose properties to iterate over.
    :param transform_fun: An optional function that maps a property documentation string to a custom tailored string.
    :returns: A dictionary mapping property names to descriptions.
    """

    result: dict[str, str] = {}
    # only try to get the MRO if data_type is actually a class;
    # special types like Literal, Union, etc. don't have an MRO
    if not inspect.isclass(data_type):
        return result

    for base in inspect.getmro(data_type):
        docstr = docstring.parse_type(base)
        for param in docstr.params.values():
            if param.name in result:
                continue

            if transform_fun:
                description = transform_fun(data_type, param.name, param.description)
            else:
                description = param.description

            result[param.name] = description
    return result


def docstring_to_schema(data_type: type) -> Schema:
    short_description, long_description = get_class_docstrings(data_type)
    schema: Schema = {
        "title": python_type_to_name(data_type, force=True),
    }

    description = "\n".join(filter(None, [short_description, long_description]))
    if description:
        schema["description"] = description
    return schema


def id_from_ref(data_type: typing.ForwardRef | str | type) -> str:
    "Extracts the name of a possibly forward-referenced type."

    if isinstance(data_type, typing.ForwardRef):
        forward_type: typing.ForwardRef = data_type
        return forward_type.__forward_arg__
    elif isinstance(data_type, str):
        return data_type
    else:
        return data_type.__name__


def type_from_ref(data_type: typing.ForwardRef | str | type) -> tuple[str, type]:
    "Creates a type from a forward reference."

    if isinstance(data_type, typing.ForwardRef):
        forward_type: typing.ForwardRef = data_type
        true_type = eval(forward_type.__forward_code__)
        return forward_type.__forward_arg__, true_type
    elif isinstance(data_type, str):
        true_type = eval(data_type)
        return data_type, true_type
    else:
        return data_type.__name__, data_type


@dataclasses.dataclass
class TypeCatalogEntry:
    schema: Schema | None
    identifier: str
    examples: JsonType | None = None


class TypeCatalog:
    "Maintains an association of well-known Python types to their JSON schema."

    _by_type: dict[TypeLike, TypeCatalogEntry]
    _by_name: dict[str, TypeCatalogEntry]

    def __init__(self) -> None:
        self._by_type = {}
        self._by_name = {}

    def __contains__(self, data_type: TypeLike) -> bool:
        if isinstance(data_type, typing.ForwardRef):
            fwd: typing.ForwardRef = data_type
            name = fwd.__forward_arg__
            return name in self._by_name
        else:
            return data_type in self._by_type

    def add(
        self,
        data_type: TypeLike,
        schema: Schema | None,
        identifier: str,
        examples: list[JsonType] | None = None,
    ) -> None:
        if isinstance(data_type, typing.ForwardRef):
            raise TypeError("forward references cannot be used to register a type")

        if data_type in self._by_type:
            raise ValueError(f"type {data_type} is already registered in the catalog")

        entry = TypeCatalogEntry(schema, identifier, examples)
        self._by_type[data_type] = entry
        self._by_name[identifier] = entry

    def get(self, data_type: TypeLike) -> TypeCatalogEntry:
        if isinstance(data_type, typing.ForwardRef):
            fwd: typing.ForwardRef = data_type
            name = fwd.__forward_arg__
            return self._by_name[name]
        else:
            return self._by_type[data_type]

@dataclasses.dataclass
class SchemaOptions:
    definitions_path: str = "#/definitions/"
    use_descriptions: bool = True
    use_examples: bool = True
    property_description_fun: Callable[[type, str, str], str] | None = None


class JsonSchemaGenerator:
    "Creates a JSON schema with user-defined type definitions."

    type_catalog: ClassVar[TypeCatalog] = TypeCatalog()
    types_used: dict[str, TypeLike]
    options: SchemaOptions

    def __init__(self, options: SchemaOptions | None = None):
        if options is None:
            self.options = SchemaOptions()
        else:
            self.options = options
        self.types_used = {}

    @functools.singledispatchmethod
    def _metadata_to_schema(self, arg: object) -> Schema:
        # unrecognized annotation
        return {}

    @_metadata_to_schema.register
    def _(self, arg: IntegerRange) -> Schema:
        return {"minimum": arg.minimum, "maximum": arg.maximum}

    @_metadata_to_schema.register
    def _(self, arg: Precision) -> Schema:
        return {
            "multipleOf": 10 ** (-arg.decimal_digits),
            "exclusiveMinimum": -(10**arg.integer_digits),
            "exclusiveMaximum": (10**arg.integer_digits),
        }

    @_metadata_to_schema.register
    def _(self, arg: MinLength) -> Schema:
        return {"minLength": arg.value}

    @_metadata_to_schema.register
    def _(self, arg: MaxLength) -> Schema:
        return {"maxLength": arg.value}

    def _with_metadata(self, type_schema: Schema, metadata: tuple[Any, ...] | None) -> Schema:
        if metadata:
            for m in metadata:
                type_schema.update(self._metadata_to_schema(m))
        return type_schema

    def _simple_type_to_schema(self, typ: TypeLike, json_schema_extra: dict | None = None) -> Schema | None:
        """
        Returns the JSON schema associated with a simple, unrestricted type.

        :returns: The schema for a simple type, or `None`.
        """

        if typ is type(None):
            return {"type": "null"}
        elif typ is bool:
            return {"type": "boolean"}
        elif typ is int:
            return {"type": "integer"}
        elif typ is float:
            return {"type": "number"}
        elif typ is str:
            if json_schema_extra and "contentEncoding" in json_schema_extra:
                return {
                    "type": "string",
                    "contentEncoding": json_schema_extra["contentEncoding"],
                }
            return {"type": "string"}
        elif typ is bytes:
            return {"type": "string", "contentEncoding": "base64"}
        elif typ is datetime.datetime:
            # 2018-11-13T20:20:39+00:00
            return {
                "type": "string",
                "format": "date-time",
            }
        elif typ is datetime.date:
            # 2018-11-13
            return {"type": "string", "format": "date"}
        elif typ is datetime.time:
            # 20:20:39+00:00
            return {"type": "string", "format": "time"}
        elif typ is decimal.Decimal:
            return {"type": "number"}
        elif typ is uuid.UUID:
            # f81d4fae-7dec-11d0-a765-00a0c91e6bf6
            return {"type": "string", "format": "uuid"}
        elif typ is Any:
            return {
                "oneOf": [
                    {"type": "null"},
                    {"type": "boolean"},
                    {"type": "number"},
                    {"type": "string"},
                    {"type": "array"},
                    {"type": "object"},
                ]
            }
        elif typ is JsonObject:
            return {"type": "object"}
        elif typ is JsonArray:
            return {"type": "array"}
        else:
            # not a simple type
            return None

    def type_to_schema(
        self,
        data_type: TypeLike,
        force_expand: bool = False,
        json_schema_extra: dict | None = None,
    ) -> Schema:
        common_info: Schema = {}
        if json_schema_extra and "deprecated" in json_schema_extra:
            common_info["deprecated"] = json_schema_extra["deprecated"]
        return self._type_to_schema(data_type, force_expand, json_schema_extra) | common_info

    def _type_to_schema(
        self,
        data_type: TypeLike,
        force_expand: bool = False,
        json_schema_extra: dict | None = None,
    ) -> Schema:
        """
        Returns the JSON schema associated with a type.

        :param data_type: The Python type whose JSON schema to return.
        :param force_expand: Forces a JSON schema to be returned even if the type is registered in the catalog of known types.
        :returns: The JSON schema associated with the type.
        """

        # short-circuit for common simple types
        schema = self._simple_type_to_schema(data_type, json_schema_extra)
        if schema is not None:
            return schema

        # types registered in the type catalog of well-known types
        type_catalog = JsonSchemaGenerator.type_catalog
        if not force_expand and data_type in type_catalog:
            # user-defined type
            identifier = type_catalog.get(data_type).identifier
            self.types_used.setdefault(identifier, data_type)
            return {"$ref": f"{self.options.definitions_path}{identifier}"}

        # unwrap annotated types
        metadata = getattr(data_type, "__metadata__", None)
        if metadata is not None:
            # type is Annotated[T, ...]
            typ = typing.get_args(data_type)[0]
            schema = self._simple_type_to_schema(typ)
            if schema is not None:
                # recognize well-known auxiliary types
                fmt = get_auxiliary_format(data_type)
                if fmt is not None:
                    schema.update({"format": fmt})
                    return schema
                else:
                    return self._with_metadata(schema, metadata)

        else:
            # type is a regular type
            typ = data_type

        if isinstance(typ, typing.ForwardRef) or isinstance(typ, str):
            if force_expand:
                identifier, true_type = type_from_ref(typ)
                return self.type_to_schema(true_type, force_expand=True)
            else:
                try:
                    identifier, true_type = type_from_ref(typ)
                    self.types_used[identifier] = true_type
                except NameError:
                    identifier = id_from_ref(typ)

                return {"$ref": f"{self.options.definitions_path}{identifier}"}

        if is_type_enum(typ):
            enum_type: type[enum.Enum] = typ
            value_types = enum_value_types(enum_type)
            if len(value_types) != 1:
                raise ValueError(
                    f"enumerations must have a consistent member value type but several types found: {value_types}"
                )
            enum_value_type = value_types.pop()

            enum_schema: Schema
            if enum_value_type is bool or enum_value_type is int or enum_value_type is float or enum_value_type is str:
                if enum_value_type is bool:
                    enum_schema_type = "boolean"
                elif enum_value_type is int:
                    enum_schema_type = "integer"
                elif enum_value_type is float:
                    enum_schema_type = "number"
                elif enum_value_type is str:
                    enum_schema_type = "string"

                enum_schema = {
                    "type": enum_schema_type,
                    "enum": [object_to_json(e.value) for e in enum_type],
                }
                if self.options.use_descriptions:
                    enum_schema.update(docstring_to_schema(typ))
                return enum_schema
            else:
                enum_schema = self.type_to_schema(enum_value_type)
                if self.options.use_descriptions:
                    enum_schema.update(docstring_to_schema(typ))
                return enum_schema

        origin_type = typing.get_origin(typ)
        if origin_type is list:
            (list_type,) = typing.get_args(typ)  # unpack single tuple element
            return {"type": "array", "items": self.type_to_schema(list_type)}
        elif origin_type is collections.abc.Sequence:
            # Treat Sequence the same as list for JSON schema (both are arrays)
            (sequence_type,) = typing.get_args(typ)  # unpack single tuple element
            return {"type": "array", "items": self.type_to_schema(sequence_type)}
        elif origin_type is dict:
            key_type, value_type = typing.get_args(typ)
            if not (key_type is str or key_type is int or is_type_enum(key_type)):
                raise ValueError("`dict` with key type not coercible to `str` is not supported")

            dict_schema: Schema
            value_schema = self.type_to_schema(value_type)
            if is_type_enum(key_type):
                enum_values = [str(e.value) for e in key_type]
                if len(enum_values) > OBJECT_ENUM_EXPANSION_LIMIT:
                    dict_schema = {
                        "propertyNames": {"pattern": "^(" + "|".join(enum_values) + ")$"},
                        "additionalProperties": value_schema,
                    }
                else:
                    dict_schema = {
                        "properties": dict.fromkeys(enum_values, value_schema),
                        "additionalProperties": False,
                    }
            else:
                dict_schema = {"additionalProperties": value_schema}

            schema = {"type": "object"}
            schema.update(dict_schema)
            return schema
        elif origin_type is set:
            (set_type,) = typing.get_args(typ)  # unpack single tuple element
            return {
                "type": "array",
                "items": self.type_to_schema(set_type),
                "uniqueItems": True,
            }
        elif origin_type is tuple:
            args = typing.get_args(typ)
            return {
                "type": "array",
                "minItems": len(args),
                "maxItems": len(args),
                "prefixItems": [self.type_to_schema(member_type) for member_type in args],
            }
        elif origin_type in (Union, types.UnionType):
            discriminator = None
            if typing.get_origin(data_type) is Annotated:
                discriminator = typing.get_args(data_type)[1].discriminator
            ret: Schema = {"oneOf": [self.type_to_schema(union_type) for union_type in typing.get_args(typ)]}
            if discriminator:
                # for each union type, we need to read the value of the discriminator
                mapping: dict[str, JsonType] = {}
                for union_type in typing.get_args(typ):
                    props = self.type_to_schema(union_type, force_expand=True)["properties"]
                    # mypy is confused here because JsonType allows multiple types, some of them
                    # not indexable (bool?) or not indexable by string (list?). The correctness of
                    # types depends on correct model definitions. Hence multiple ignore statements below.
                    discriminator_value = props[discriminator]["default"]  # type: ignore[index,call-overload]
                    mapping[discriminator_value] = self.type_to_schema(union_type)["$ref"]  # type: ignore[index]

                ret["discriminator"] = {
                    "propertyName": discriminator,
                    "mapping": mapping,
                }
            return ret
        elif origin_type is Literal:
            literal_args = typing.get_args(typ)
            if len(literal_args) == 1:
                (literal_value,) = literal_args
                schema = self.type_to_schema(type(literal_value))
                schema["const"] = literal_value
                return schema
            elif len(literal_args) > 1:
                first_value = literal_args[0]
                schema = self.type_to_schema(type(first_value))
                schema["enum"] = list(literal_args)
                return schema
            else:
                return {"enum": []}
        elif origin_type is type:
            (concrete_type,) = typing.get_args(typ)  # unpack single tuple element
            return {"const": self.type_to_schema(concrete_type, force_expand=True)}
        elif origin_type is collections.abc.AsyncIterator:
            (concrete_type,) = typing.get_args(typ)
            return self.type_to_schema(concrete_type)

        # dictionary of class attributes
        members = dict(inspect.getmembers(typ, lambda a: not inspect.isroutine(a)))

        property_docstrings = get_class_property_docstrings(typ, self.options.property_description_fun)
        properties: dict[str, Schema] = {}
        required: list[str] = []
        for property_name, property_type in get_class_properties(typ):
            # rename property if an alias name is specified
            alias = get_annotation(property_type, Alias)
            if alias:
                output_name = alias.name
            else:
                output_name = property_name

            defaults = {}
            json_schema_extra = None
            if "model_fields" in members:
                f = members["model_fields"]
                defaults = {k: finfo.default for k, finfo in f.items()}
                if output_name in f:
                    finfo = f[output_name]
                    json_schema_extra = finfo.json_schema_extra or {}
                    if finfo.deprecated:
                        json_schema_extra["deprecated"] = True

            if is_type_optional(property_type):
                optional_type: type = unwrap_optional_type(property_type)
                property_def = self.type_to_schema(optional_type, json_schema_extra=json_schema_extra)
            else:
                property_def = self.type_to_schema(property_type, json_schema_extra=json_schema_extra)
                required.append(output_name)

            # check if attribute has a default value initializer
            if defaults.get(property_name) is not None:
                def_value = defaults[property_name]
                # check if value can be directly represented in JSON
                if isinstance(
                    def_value,
                    (
                        bool,
                        int,
                        float,
                        str,
                        enum.Enum,
                        datetime.datetime,
                        datetime.date,
                        datetime.time,
                    ),
                ):
                    property_def["default"] = object_to_json(def_value)

            # add property docstring if available
            property_doc = property_docstrings.get(property_name)
            if property_doc:
                property_def.pop("title", None)
                property_def["description"] = property_doc

            properties[output_name] = property_def

        schema = {"type": "object"}
        if len(properties) > 0:
            schema["properties"] = typing.cast(JsonType, properties)
            schema["additionalProperties"] = False
        if len(required) > 0:
            schema["required"] = typing.cast(JsonType, required)
        if self.options.use_descriptions:
            schema.update(docstring_to_schema(typ))
        return schema

    def _type_to_schema_with_lookup(self, data_type: TypeLike) -> Schema:
        """
        Returns the JSON schema associated with a type that may be registered in the catalog of known types.

        :param data_type: The type whose JSON schema we seek.
        :returns: The JSON schema associated with the type.
        """

        entry = JsonSchemaGenerator.type_catalog.get(data_type)
        if entry.schema is None:
            type_schema = self.type_to_schema(data_type, force_expand=True)
        else:
            type_schema = deepcopy(entry.schema)

        # add descriptive text (if present)
        if self.options.use_descriptions:
            if isinstance(data_type, type) and not isinstance(data_type, typing.ForwardRef):
                type_schema.update(docstring_to_schema(data_type))

        # add example (if present)
        if self.options.use_examples and entry.examples:
            type_schema["examples"] = entry.examples

        return type_schema

    def classdef_to_schema(self, data_type: TypeLike, force_expand: bool = False) -> tuple[Schema, dict[str, Schema]]:
        """
        Returns the JSON schema associated with a type and any nested types.

        :param data_type: The type whose JSON schema to return.
        :param force_expand: True if a full JSON schema is to be returned even for well-known types; false if a schema
            reference is to be used for well-known types.
        :returns: A tuple of the JSON schema, and a mapping between nested type names and their corresponding schema.
        """

        if not is_type_like(data_type):
            raise TypeError(f"expected a type-like object but got: {data_type}")

        self.types_used = {}
        try:
            type_schema = self.type_to_schema(data_type, force_expand=force_expand)

            types_defined: dict[str, Schema] = {}
            while len(self.types_used) > len(types_defined):
                # make a snapshot copy; original collection is going to be modified
                types_undefined = {
                    sub_name: sub_type
                    for sub_name, sub_type in self.types_used.items()
                    if sub_name not in types_defined
                }

                # expand undefined types, which may lead to additional types to be defined
                for sub_name, sub_type in types_undefined.items():
                    types_defined[sub_name] = self._type_to_schema_with_lookup(sub_type)

            type_definitions = dict(sorted(types_defined.items()))
        finally:
            self.types_used = {}

        return type_schema, type_definitions

class Validator(enum.Enum):
    "Defines constants for JSON schema standards."

    Draft7 = jsonschema.Draft7Validator
    Draft201909 = jsonschema.Draft201909Validator
    Draft202012 = jsonschema.Draft202012Validator
    Latest = jsonschema.Draft202012Validator


def classdef_to_schema(
    data_type: TypeLike,
    options: SchemaOptions | None = None,
    validator: Validator = Validator.Latest,
) -> Schema:
    """
    Returns the JSON schema corresponding to the given type.

    :param data_type: The Python type used to generate the JSON schema.
    :returns: A JSON object that you can serialize to a JSON string with `json.dump` or `json.dumps`.
    :raises TypeError: Indicates that the generated JSON schema does not validate against the desired meta-schema.
    """

    # short-circuit with an error message when passing invalid data
    if not is_type_like(data_type):
        raise TypeError(f"expected a type-like object but got: {data_type}")

    generator = JsonSchemaGenerator(options)
    type_schema, type_definitions = generator.classdef_to_schema(data_type)

    class_schema: Schema = {}
    if type_definitions:
        class_schema["definitions"] = typing.cast(JsonType, type_definitions)
    class_schema.update(type_schema)

    validator_id = validator.value.META_SCHEMA["$id"]
    try:
        validator.value.check_schema(class_schema)
    except jsonschema.exceptions.SchemaError:
        raise TypeError(f"schema does not validate against meta-schema <{validator_id}>")

    schema = {"$schema": validator_id}
    schema.update(class_schema)
    return schema
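
# Illustrative sketch (not part of the original module): generating a schema
# for a documented dataclass. `Point` and the helper are hypothetical, added
# only for demonstration.
def _demo_classdef_to_schema() -> None:
    @dataclasses.dataclass
    class Point:
        """
        A 2D point.

        :param x: Horizontal coordinate.
        :param y: Vertical coordinate.
        """

        x: float
        y: float

    schema = classdef_to_schema(Point)
    assert schema["type"] == "object"
    assert set(schema["required"]) == {"x", "y"}
    assert schema["properties"]["x"]["description"] == "Horizontal coordinate."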

def validate_object(data_type: TypeLike, json_dict: JsonType) -> None:
    """
    Validates if the JSON dictionary object conforms to the expected type.

    :param data_type: The type to match against.
    :param json_dict: A JSON object obtained with `json.load` or `json.loads`.
    :raises jsonschema.exceptions.ValidationError: Indicates that the JSON object cannot represent the type.
    """

    schema_dict = classdef_to_schema(data_type)
    jsonschema.validate(json_dict, schema_dict, format_checker=jsonschema.FormatChecker())
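
# Illustrative sketch (not part of the original module): `validate_object`
# raises `jsonschema.exceptions.ValidationError` on non-conforming input. The
# helper below is hypothetical, added only for demonstration.
def _demo_validate_object() -> None:
    validate_object(list[int], [1, 2, 3])  # conforms; returns silently
    try:
        validate_object(list[int], ["not-an-int"])
    except jsonschema.exceptions.ValidationError:
        pass  # expected: a string is not a valid item for list[int]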

def print_schema(data_type: type) -> None:
    """Pretty-prints the JSON schema corresponding to the type."""

    s = classdef_to_schema(data_type)
    print(json.dumps(s, indent=4))


def get_schema_identifier(data_type: type) -> str | None:
    if data_type in JsonSchemaGenerator.type_catalog:
        return JsonSchemaGenerator.type_catalog.get(data_type).identifier
    else:
        return None


def register_schema(
    data_type: T,
    schema: Schema | None = None,
    name: str | None = None,
    examples: list[JsonType] | None = None,
) -> T:
    """
    Associates a type with a JSON schema definition.

    :param data_type: The type to associate with a JSON schema.
    :param schema: The schema to associate the type with. Derived automatically if omitted.
    :param name: The name used for looking up the type. Determined automatically if omitted.
    :param examples: Sample JSON values exported alongside the schema.
    :returns: The input type.
    """

    JsonSchemaGenerator.type_catalog.add(
        data_type,
        schema,
        name if name is not None else python_type_to_name(data_type),
        examples,
    )
    return data_type


@overload
def json_schema_type(cls: type[T], /) -> type[T]: ...


@overload
def json_schema_type(cls: None, *, schema: Schema | None = None) -> Callable[[type[T]], type[T]]: ...


def json_schema_type(
    cls: type[T] | None = None,
    *,
    schema: Schema | None = None,
    examples: list[JsonType] | None = None,
) -> type[T] | Callable[[type[T]], type[T]]:
    """Decorator to add a user-defined schema definition to a class."""

    def wrap(cls: type[T]) -> type[T]:
        return register_schema(cls, schema, examples=examples)

    # see if decorator is used as @json_schema_type or @json_schema_type()
    if cls is None:
        # called with parentheses
        return wrap
    else:
        # called as @json_schema_type without parentheses
        return wrap(cls)
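
# Illustrative sketch (not part of the original module): decorating a class
# registers it in the shared type catalog so that schema generation emits a
# `$ref` to it. The helper and `Color` type are hypothetical, added only for
# demonstration. Note that registration mutates the process-wide catalog, so
# calling this helper twice would raise ValueError.
def _demo_json_schema_type() -> None:
    @json_schema_type
    @dataclasses.dataclass
    class Color:
        """
        An RGB color.

        :param r: Red channel.
        :param g: Green channel.
        :param b: Blue channel.
        """

        r: int
        g: int
        b: int

    assert get_schema_identifier(Color) == "Color"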

register_schema(JsonObject, name="JsonObject")
register_schema(JsonArray, name="JsonArray")

register_schema(
    JsonType,
    name="JsonType",
    examples=[
        {
            "property1": None,
            "property2": True,
            "property3": 64,
            "property4": "string",
            "property5": ["item"],
            "property6": {"key": "value"},
        }
    ],
)
register_schema(
    StrictJsonType,
    name="StrictJsonType",
    examples=[
        {
            "property1": True,
            "property2": 64,
            "property3": "string",
            "property4": ["item"],
            "property5": {"key": "value"},
        }
    ],
)
@@ -1,97 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

:see: https://github.com/hunyadi/strong_typing
"""

import inspect
import json
import sys
from types import ModuleType
from typing import Any, TextIO, TypeVar

from .core import JsonType
from .deserializer import create_deserializer
from .inspection import TypeLike
from .serializer import create_serializer

T = TypeVar("T")


def object_to_json(obj: Any) -> JsonType:
    """
    Converts a Python object to a representation that can be exported to JSON.

    * Fundamental types (e.g. numeric types) are written as is.
    * Date and time types are serialized in the ISO 8601 format with time zone.
    * A byte array is written as a string with Base64 encoding.
    * UUIDs are written as a UUID string.
    * Enumerations are written as their value.
    * Containers (e.g. `list`, `dict`, `set`, `tuple`) are exported recursively.
    * Objects with properties (including data class types) are converted to dictionaries of key-value pairs.
    """

    typ: type = type(obj)
    generator = create_serializer(typ)
    return generator.generate(obj)


def json_to_object(typ: TypeLike, data: JsonType, *, context: ModuleType | None = None) -> object:
    """
    Creates an object from a representation that has been de-serialized from JSON.

    When de-serializing a JSON object into a Python object, the following transformations are applied:

    * Fundamental types are parsed as `bool`, `int`, `float` or `str`.
    * Date and time types are parsed from the ISO 8601 format with time zone into the corresponding Python type
      `datetime`, `date` or `time`.
    * A byte array is read from a string with Base64 encoding into a `bytes` instance.
    * UUIDs are extracted from a UUID string into a `uuid.UUID` instance.
    * Enumerations are instantiated with a lookup on enumeration value.
    * Containers (e.g. `list`, `dict`, `set`, `tuple`) are parsed recursively.
    * Complex objects with properties (including data class types) are populated from dictionaries of key-value pairs
      using reflection (enumerating type annotations).

    :raises TypeError: A de-serializing engine cannot be constructed for the input type.
    :raises JsonKeyError: Deserialization for a class or union type has failed because a matching member was not found.
    :raises JsonTypeError: Deserialization for data has failed due to a type mismatch.
    """

    # use caller context for evaluating types if no context is supplied
    if context is None:
        this_frame = inspect.currentframe()
        if this_frame is not None:
            caller_frame = this_frame.f_back
            del this_frame

            if caller_frame is not None:
                try:
                    context = sys.modules[caller_frame.f_globals["__name__"]]
                finally:
                    del caller_frame

    parser = create_deserializer(typ, context)
    return parser.parse(data)
|
|
||||||
|
|
||||||
|
|
||||||
def json_dump_string(json_object: JsonType) -> str:
|
|
||||||
"Dump an object as a JSON string with a compact representation."
|
|
||||||
|
|
||||||
return json.dumps(json_object, ensure_ascii=False, check_circular=False, separators=(",", ":"))
|
|
||||||
|
|
||||||
|
|
||||||
def json_dump(json_object: JsonType, file: TextIO) -> None:
|
|
||||||
json.dump(
|
|
||||||
json_object,
|
|
||||||
file,
|
|
||||||
ensure_ascii=False,
|
|
||||||
check_circular=False,
|
|
||||||
separators=(",", ":"),
|
|
||||||
)
|
|
||||||
file.write("\n")
|
|
||||||
|
|
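For context, a minimal round-trip sketch of the interface removed above (the `Color` and `Event` types are illustrative, and the import path assumes the pre-removal package layout):

    import enum
    import uuid
    from dataclasses import dataclass

    from llama_stack_api.strong_typing.serialization import (
        json_dump_string,
        json_to_object,
        object_to_json,
    )

    class Color(enum.Enum):
        RED = "red"

    @dataclass
    class Event:
        id: uuid.UUID
        color: Color
        tags: list[str]

    event = Event(id=uuid.uuid4(), color=Color.RED, tags=["a", "b"])
    data = object_to_json(event)   # {"id": "...", "color": "red", "tags": ["a", "b"]}
    text = json_dump_string(data)  # compact separators (",", ":"), ensure_ascii=False
    restored = json_to_object(Event, data)
    assert restored == event       # dataclass equality holds after the round trip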
@@ -1,494 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

:see: https://github.com/hunyadi/strong_typing
"""

import abc
import base64
import datetime
import enum
import functools
import inspect
import ipaddress
import sys
import typing
import uuid
from collections.abc import Callable
from types import FunctionType, MethodType, ModuleType
from typing import (
    Any,
    Generic,
    Literal,
    NamedTuple,
    TypeVar,
    Union,
)

from .core import JsonType
from .exception import JsonTypeError, JsonValueError
from .inspection import (
    TypeLike,
    enum_value_types,
    evaluate_type,
    get_class_properties,
    get_resolved_hints,
    is_dataclass_type,
    is_named_tuple_type,
    is_reserved_property,
    is_type_annotated,
    is_type_enum,
    unwrap_annotated_type,
)
from .mapping import python_field_to_json_property

T = TypeVar("T")


class Serializer(abc.ABC, Generic[T]):
    @abc.abstractmethod
    def generate(self, data: T) -> JsonType: ...


class NoneSerializer(Serializer[None]):
    def generate(self, data: None) -> None:
        # can be directly represented in JSON
        return None


class BoolSerializer(Serializer[bool]):
    def generate(self, data: bool) -> bool:
        # can be directly represented in JSON
        return data


class IntSerializer(Serializer[int]):
    def generate(self, data: int) -> int:
        # can be directly represented in JSON
        return data


class FloatSerializer(Serializer[float]):
    def generate(self, data: float) -> float:
        # can be directly represented in JSON
        return data


class StringSerializer(Serializer[str]):
    def generate(self, data: str) -> str:
        # can be directly represented in JSON
        return data


class BytesSerializer(Serializer[bytes]):
    def generate(self, data: bytes) -> str:
        return base64.b64encode(data).decode("ascii")


class DateTimeSerializer(Serializer[datetime.datetime]):
    def generate(self, obj: datetime.datetime) -> str:
        if obj.tzinfo is None:
            raise JsonValueError(f"timestamp lacks explicit time zone designator: {obj}")
        fmt = obj.isoformat()
        if fmt.endswith("+00:00"):
            fmt = f"{fmt[:-6]}Z"  # Python's isoformat() does not support military time zones like "Zulu" for UTC
        return fmt


class DateSerializer(Serializer[datetime.date]):
    def generate(self, obj: datetime.date) -> str:
        return obj.isoformat()


class TimeSerializer(Serializer[datetime.time]):
    def generate(self, obj: datetime.time) -> str:
        return obj.isoformat()


class UUIDSerializer(Serializer[uuid.UUID]):
    def generate(self, obj: uuid.UUID) -> str:
        return str(obj)


class IPv4Serializer(Serializer[ipaddress.IPv4Address]):
    def generate(self, obj: ipaddress.IPv4Address) -> str:
        return str(obj)


class IPv6Serializer(Serializer[ipaddress.IPv6Address]):
    def generate(self, obj: ipaddress.IPv6Address) -> str:
        return str(obj)


class EnumSerializer(Serializer[enum.Enum]):
    def generate(self, obj: enum.Enum) -> int | str:
        value = obj.value
        if isinstance(value, int):
            return value
        return str(value)


class UntypedListSerializer(Serializer[list]):
    def generate(self, obj: list) -> list[JsonType]:
        return [object_to_json(item) for item in obj]


class UntypedDictSerializer(Serializer[dict]):
    def generate(self, obj: dict) -> dict[str, JsonType]:
        if obj and isinstance(next(iter(obj.keys())), enum.Enum):
            iterator = ((key.value, object_to_json(value)) for key, value in obj.items())
        else:
            iterator = ((str(key), object_to_json(value)) for key, value in obj.items())
        return dict(iterator)


class UntypedSetSerializer(Serializer[set]):
    def generate(self, obj: set) -> list[JsonType]:
        return [object_to_json(item) for item in obj]


class UntypedTupleSerializer(Serializer[tuple]):
    def generate(self, obj: tuple) -> list[JsonType]:
        return [object_to_json(item) for item in obj]


class TypedCollectionSerializer(Serializer, Generic[T]):
    generator: Serializer[T]

    def __init__(self, item_type: type[T], context: ModuleType | None) -> None:
        self.generator = _get_serializer(item_type, context)


class TypedListSerializer(TypedCollectionSerializer[T]):
    def generate(self, obj: list[T]) -> list[JsonType]:
        return [self.generator.generate(item) for item in obj]


class TypedStringDictSerializer(TypedCollectionSerializer[T]):
    def __init__(self, value_type: type[T], context: ModuleType | None) -> None:
        super().__init__(value_type, context)

    def generate(self, obj: dict[str, T]) -> dict[str, JsonType]:
        return {key: self.generator.generate(value) for key, value in obj.items()}


class TypedEnumDictSerializer(TypedCollectionSerializer[T]):
    def __init__(
        self,
        key_type: type[enum.Enum],
        value_type: type[T],
        context: ModuleType | None,
    ) -> None:
        super().__init__(value_type, context)

        value_types = enum_value_types(key_type)
        if len(value_types) != 1:
            raise JsonTypeError(
                f"invalid key type, enumerations must have a consistent member value type but several types found: {value_types}"
            )

        value_type = value_types.pop()
        if value_type is not str:
            raise JsonTypeError("invalid enumeration key type, expected `enum.Enum` with string values")

    def generate(self, obj: dict[enum.Enum, T]) -> dict[str, JsonType]:
        return {key.value: self.generator.generate(value) for key, value in obj.items()}


class TypedSetSerializer(TypedCollectionSerializer[T]):
    def generate(self, obj: set[T]) -> JsonType:
        return [self.generator.generate(item) for item in obj]


class TypedTupleSerializer(Serializer[tuple]):
    item_generators: tuple[Serializer, ...]

    def __init__(self, item_types: tuple[type, ...], context: ModuleType | None) -> None:
        self.item_generators = tuple(_get_serializer(item_type, context) for item_type in item_types)

    def generate(self, obj: tuple) -> list[JsonType]:
        return [item_generator.generate(item) for item_generator, item in zip(self.item_generators, obj, strict=False)]


class CustomSerializer(Serializer):
    converter: Callable[[object], JsonType]

    def __init__(self, converter: Callable[[object], JsonType]) -> None:
        self.converter = converter

    def generate(self, obj: object) -> JsonType:
        return self.converter(obj)


class FieldSerializer(Generic[T]):
    """
    Serializes a Python object field into a JSON property.

    :param field_name: The name of the field in a Python class to read data from.
    :param property_name: The name of the JSON property to write to a JSON `object`.
    :param generator: A compatible serializer that can handle the field's type.
    """

    field_name: str
    property_name: str
    generator: Serializer

    def __init__(self, field_name: str, property_name: str, generator: Serializer[T]) -> None:
        self.field_name = field_name
        self.property_name = property_name
        self.generator = generator

    def generate_field(self, obj: object, object_dict: dict[str, JsonType]) -> None:
        value = getattr(obj, self.field_name)
        if value is not None:
            object_dict[self.property_name] = self.generator.generate(value)


class TypedClassSerializer(Serializer[T]):
    property_generators: list[FieldSerializer]

    def __init__(self, class_type: type[T], context: ModuleType | None) -> None:
        self.property_generators = [
            FieldSerializer(
                field_name,
                python_field_to_json_property(field_name, field_type),
                _get_serializer(field_type, context),
            )
            for field_name, field_type in get_class_properties(class_type)
        ]

    def generate(self, obj: T) -> dict[str, JsonType]:
        object_dict: dict[str, JsonType] = {}
        for property_generator in self.property_generators:
            property_generator.generate_field(obj, object_dict)

        return object_dict


class TypedNamedTupleSerializer(TypedClassSerializer[NamedTuple]):
    def __init__(self, class_type: type[NamedTuple], context: ModuleType | None) -> None:
        super().__init__(class_type, context)


class DataclassSerializer(TypedClassSerializer[T]):
    def __init__(self, class_type: type[T], context: ModuleType | None) -> None:
        super().__init__(class_type, context)


class UnionSerializer(Serializer):
    def generate(self, obj: Any) -> JsonType:
        return object_to_json(obj)


class LiteralSerializer(Serializer):
    generator: Serializer

    def __init__(self, values: tuple[Any, ...], context: ModuleType | None) -> None:
        literal_type_tuple = tuple(type(value) for value in values)
        literal_type_set = set(literal_type_tuple)
        if len(literal_type_set) != 1:
            value_names = ", ".join(repr(value) for value in values)
            raise TypeError(
                f"type `Literal[{value_names}]` expects consistent literal value types but got: {literal_type_tuple}"
            )

        literal_type = literal_type_set.pop()
        self.generator = _get_serializer(literal_type, context)

    def generate(self, obj: Any) -> JsonType:
        return self.generator.generate(obj)


class UntypedNamedTupleSerializer(Serializer):
    fields: dict[str, str]

    def __init__(self, class_type: type[NamedTuple]) -> None:
        # named tuples are also instances of tuple
        self.fields = {}
        field_names: tuple[str, ...] = class_type._fields
        for field_name in field_names:
            self.fields[field_name] = python_field_to_json_property(field_name)

    def generate(self, obj: NamedTuple) -> JsonType:
        object_dict = {}
        for field_name, property_name in self.fields.items():
            value = getattr(obj, field_name)
            object_dict[property_name] = object_to_json(value)

        return object_dict


class UntypedClassSerializer(Serializer):
    def generate(self, obj: object) -> JsonType:
        # iterate over object attributes to get a standard representation
        object_dict = {}
        for name in dir(obj):
            if is_reserved_property(name):
                continue

            value = getattr(obj, name)
            if value is None:
                continue

            # filter instance methods
            if inspect.ismethod(value):
                continue

            object_dict[python_field_to_json_property(name)] = object_to_json(value)

        return object_dict


def create_serializer(typ: TypeLike, context: ModuleType | None = None) -> Serializer:
    """
    Creates a serializer engine to produce an object that can be directly converted into a JSON string.

    When serializing a Python object into a JSON object, the following transformations are applied:

    * Fundamental types (`bool`, `int`, `float` or `str`) are returned as-is.
    * Date and time types (`datetime`, `date` or `time`) produce an ISO 8601 format string with time zone
      (ending with `Z` for UTC).
    * Byte arrays (`bytes`) are written as a string with Base64 encoding.
    * UUIDs (`uuid.UUID`) are written as a UUID string as per RFC 4122.
    * Enumerations yield their enumeration value.
    * Containers (e.g. `list`, `dict`, `set`, `tuple`) are processed recursively.
    * Complex objects with properties (including data class types) generate dictionaries of key-value pairs.

    :raises TypeError: A serializer engine cannot be constructed for the input type.
    """

    if context is None:
        if isinstance(typ, type):
            context = sys.modules[typ.__module__]

    return _get_serializer(typ, context)


def _get_serializer(typ: TypeLike, context: ModuleType | None) -> Serializer:
    if isinstance(typ, (str, typing.ForwardRef)):
        if context is None:
            raise TypeError(f"missing context for evaluating type: {typ}")

        typ = evaluate_type(typ, context)

    if isinstance(typ, type):
        return _fetch_serializer(typ)
    else:
        # special forms are not always hashable
        return _create_serializer(typ, context)


@functools.cache
def _fetch_serializer(typ: type) -> Serializer:
    context = sys.modules[typ.__module__]
    return _create_serializer(typ, context)


def _create_serializer(typ: TypeLike, context: ModuleType | None) -> Serializer:
    # check for well-known types
    if typ is type(None):
        return NoneSerializer()
    elif typ is bool:
        return BoolSerializer()
    elif typ is int:
        return IntSerializer()
    elif typ is float:
        return FloatSerializer()
    elif typ is str:
        return StringSerializer()
    elif typ is bytes:
        return BytesSerializer()
    elif typ is datetime.datetime:
        return DateTimeSerializer()
    elif typ is datetime.date:
        return DateSerializer()
    elif typ is datetime.time:
        return TimeSerializer()
    elif typ is uuid.UUID:
        return UUIDSerializer()
    elif typ is ipaddress.IPv4Address:
        return IPv4Serializer()
    elif typ is ipaddress.IPv6Address:
        return IPv6Serializer()

    # dynamically-typed collection types
    if typ is list:
        return UntypedListSerializer()
    elif typ is dict:
        return UntypedDictSerializer()
    elif typ is set:
        return UntypedSetSerializer()
    elif typ is tuple:
        return UntypedTupleSerializer()

    # generic types (e.g. list, dict, set, etc.)
    origin_type = typing.get_origin(typ)
    if origin_type is list:
        (list_item_type,) = typing.get_args(typ)  # unpack single tuple element
        return TypedListSerializer(list_item_type, context)
    elif origin_type is dict:
        key_type, value_type = typing.get_args(typ)
        if key_type is str:
            return TypedStringDictSerializer(value_type, context)
        elif issubclass(key_type, enum.Enum):
            return TypedEnumDictSerializer(key_type, value_type, context)
    elif origin_type is set:
        (set_member_type,) = typing.get_args(typ)  # unpack single tuple element
        return TypedSetSerializer(set_member_type, context)
    elif origin_type is tuple:
        return TypedTupleSerializer(typing.get_args(typ), context)
    elif origin_type is Union:
        return UnionSerializer()
    elif origin_type is Literal:
        return LiteralSerializer(typing.get_args(typ), context)

    if is_type_annotated(typ):
        return create_serializer(unwrap_annotated_type(typ))

    # check if object has custom serialization method
    convert_func = getattr(typ, "to_json", None)
    if callable(convert_func):
        return CustomSerializer(convert_func)

    if is_type_enum(typ):
        return EnumSerializer()
    if is_dataclass_type(typ):
        return DataclassSerializer(typ, context)
    if is_named_tuple_type(typ):
        if getattr(typ, "__annotations__", None):
            return TypedNamedTupleSerializer(typ, context)
        else:
            return UntypedNamedTupleSerializer(typ)

    # fail early if caller passes an object with an exotic type
    if not isinstance(typ, type) or typ is FunctionType or typ is MethodType or typ is type or typ is ModuleType:
        raise TypeError(f"object of type {typ} cannot be represented in JSON")

    if get_resolved_hints(typ):
        return TypedClassSerializer(typ, context)
    else:
        return UntypedClassSerializer()


def object_to_json(obj: Any) -> JsonType:
    """
    Converts a Python object to a representation that can be exported to JSON.

    * Fundamental types (e.g. numeric types) are written as is.
    * Date and time types are serialized in the ISO 8601 format with time zone.
    * A byte array is written as a string with Base64 encoding.
    * UUIDs are written as a UUID string.
    * Enumerations are written as their value.
    * Containers (e.g. `list`, `dict`, `set`, `tuple`) are exported recursively.
    * Objects with properties (including data class types) are converted to dictionaries of key-value pairs.
    """

    typ: type = type(obj)
    generator = create_serializer(typ)
    return generator.generate(obj)
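For illustration, the serializer engine above was typically driven like this (a sketch; the import path assumes the pre-removal layout):

    import datetime

    from llama_stack_api.strong_typing.serializer import create_serializer

    serializer = create_serializer(datetime.datetime)
    ts = datetime.datetime(2025, 1, 1, tzinfo=datetime.timezone.utc)
    assert serializer.generate(ts) == "2025-01-01T00:00:00Z"  # "+00:00" is folded into "Z"

    naive = datetime.datetime(2025, 1, 1)
    # serializer.generate(naive) raises JsonValueError: timestamps must carry a time zone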
@@ -1,27 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

from typing import Any, TypeVar

T = TypeVar("T")


class SlotsMeta(type):
    def __new__(cls: type[T], name: str, bases: tuple[type, ...], ns: dict[str, Any]) -> T:
        # caller may have already provided slots, in which case just retain them and keep going
        slots: tuple[str, ...] = ns.get("__slots__", ())

        # add fields with type annotations to slots
        annotations: dict[str, Any] = ns.get("__annotations__", {})
        members = tuple(member for member in annotations.keys() if member not in slots)

        # assign slots
        ns["__slots__"] = slots + tuple(members)
        return super().__new__(cls, name, bases, ns)  # type: ignore


class Slots(metaclass=SlotsMeta):
    pass
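A short usage sketch of the deleted metaclass (`Point` is illustrative): subclassing `Slots` synthesizes `__slots__` from the class's type annotations:

    class Point(Slots):
        x: int
        y: int

    assert Point.__slots__ == ("x", "y")

    p = Point()
    p.x, p.y = 1, 2  # annotated fields are backed by slots
    # p.z = 3 would raise AttributeError: instances get no __dict__ and there is no "z" slot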
@@ -1,90 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""
Type-safe data interchange for Python data classes.

:see: https://github.com/hunyadi/strong_typing
"""

from collections.abc import Callable, Iterable
from typing import TypeVar

from .inspection import TypeCollector

T = TypeVar("T")


def topological_sort(graph: dict[T, set[T]]) -> list[T]:
    """
    Performs a topological sort of a graph.

    Nodes with no outgoing edges are first. Nodes with no incoming edges are last.
    The topological ordering is not unique.

    :param graph: A dictionary of mappings from nodes to adjacent nodes. Keys and set members must be hashable.
    :returns: The list of nodes in topological order.
    """

    # empty list that will contain the sorted nodes (in reverse order)
    ordered: list[T] = []

    seen: dict[T, bool] = {}

    def _visit(n: T) -> None:
        status = seen.get(n)
        if status is not None:
            if status:  # node has a permanent mark
                return
            else:  # node has a temporary mark
                raise RuntimeError(f"cycle detected in graph for node {n}")

        seen[n] = False  # apply temporary mark
        for m in graph[n]:  # visit all adjacent nodes
            if m != n:  # ignore self-referencing nodes
                _visit(m)

        seen[n] = True  # apply permanent mark
        ordered.append(n)

    for n in graph.keys():
        _visit(n)

    return ordered


def type_topological_sort(
    types: Iterable[type],
    dependency_fn: Callable[[type], Iterable[type]] | None = None,
) -> list[type]:
    """
    Performs a topological sort of a list of types.

    Types that don't depend on other types (i.e. fundamental types) are first. Types on which no other types depend
    are last. The topological ordering is not unique.

    :param types: A list of types (simple or composite).
    :param dependency_fn: Returns a list of additional dependencies for a class (e.g. classes referenced by a foreign key).
    :returns: The list of types in topological order.
    """

    if not all(isinstance(typ, type) for typ in types):
        raise TypeError("expected a list of types")

    collector = TypeCollector()
    collector.traverse_all(types)
    graph = collector.graph

    if dependency_fn:
        new_types: set[type] = set()
        for source_type, references in graph.items():
            dependent_types = dependency_fn(source_type)
            references.update(dependent_types)
            new_types.update(dependent_types)
        for new_type in new_types:
            graph[new_type] = set()

    return topological_sort(graph)
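A quick sketch of the deleted sort's behavior (inputs illustrative): a node appears only after the nodes it points to, and a cycle raises `RuntimeError`:

    graph = {"a": set(), "b": {"a"}, "c": {"a", "b"}}
    assert topological_sort(graph) == ["a", "b", "c"]

    # topological_sort({"x": {"y"}, "y": {"x"}}) raises RuntimeError ("cycle detected ...")
    # type_topological_sort([SomeClass]) builds the same kind of graph from type
    # references (via TypeCollector) before delegating to topological_sort.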
@@ -88,6 +88,7 @@ class ToolStore(Protocol):
     async def get_tool_group(self, toolgroup_id: str) -> ToolGroup: ...
 
 
+@json_schema_type
 class ListToolGroupsResponse(BaseModel):
     """Response containing a list of tool groups.
 
@@ -97,6 +98,7 @@ class ListToolGroupsResponse(BaseModel):
     data: list[ToolGroup]
 
 
+@json_schema_type
 class ListToolDefsResponse(BaseModel):
     """Response containing a list of tool definitions.
 
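The decorator added in these two hunks registers the response models with the schema registry that the generator consults (the unit tests below exercise this registry via `iter_json_schema_types`), so the models are emitted as named component schemas rather than inline objects. Conceptually the registration is just bookkeeping, roughly like this (a sketch, not the actual `schema_utils` implementation):

    _json_schema_types: set[type] = set()

    def json_schema_type(cls: type) -> type:
        # record the model so the OpenAPI generator can emit a named schema for it
        _json_schema_types.add(cls)
        return cls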
@@ -15,8 +15,7 @@ from pydantic import BaseModel, Field
 
 from llama_stack_api.common.tracing import telemetry_traceable
 from llama_stack_api.inference import InterleavedContent
-from llama_stack_api.schema_utils import json_schema_type, webmethod
-from llama_stack_api.strong_typing.schema import register_schema
+from llama_stack_api.schema_utils import json_schema_type, register_schema, webmethod
 from llama_stack_api.vector_stores import VectorStore
 from llama_stack_api.version import LLAMA_STACK_API_V1
 
@@ -738,8 +737,8 @@ class VectorIO(Protocol):
         self,
         vector_store_id: str,
         file_id: str,
-        include_embeddings: Annotated[bool | None, Query(default=False)] = False,
-        include_metadata: Annotated[bool | None, Query(default=False)] = False,
+        include_embeddings: Annotated[bool | None, Query()] = False,
+        include_metadata: Annotated[bool | None, Query()] = False,
     ) -> VectorStoreFileContentResponse:
         """Retrieves the contents of a vector store file.
 
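The `Query(default=False)` to `Query()` change follows FastAPI's rule for `Annotated` parameters: the default belongs on the parameter itself, and FastAPI rejects a default set inside `Query()` when `Annotated` is used. A minimal sketch of the accepted pattern (route and names illustrative):

    from typing import Annotated

    from fastapi import FastAPI, Query

    app = FastAPI()

    @app.get("/vector_stores/{vector_store_id}/files/{file_id}/contents")
    async def file_contents(
        vector_store_id: str,
        file_id: str,
        include_embeddings: Annotated[bool | None, Query()] = False,
    ) -> dict:
        # the parameter default (= False) is what FastAPI documents and applies
        return {"file_id": file_id, "include_embeddings": include_embeddings}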
tests/unit/server/test_schema_registry.py (new file, 48 lines)

@@ -0,0 +1,48 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

from pydantic import BaseModel

from llama_stack_api import Conversation, SamplingStrategy
from llama_stack_api.schema_utils import (
    clear_dynamic_schema_types,
    get_registered_schema_info,
    iter_dynamic_schema_types,
    iter_json_schema_types,
    iter_registered_schema_types,
    register_dynamic_schema_type,
)


def test_json_schema_registry_contains_known_model() -> None:
    assert Conversation in iter_json_schema_types()


def test_registered_schema_registry_contains_sampling_strategy() -> None:
    registered_names = {info.name for info in iter_registered_schema_types()}
    assert "SamplingStrategy" in registered_names

    schema_info = get_registered_schema_info(SamplingStrategy)
    assert schema_info is not None
    assert schema_info.name == "SamplingStrategy"


def test_dynamic_schema_registration_round_trip() -> None:
    existing_models = tuple(iter_dynamic_schema_types())
    clear_dynamic_schema_types()
    try:

        class TemporaryModel(BaseModel):
            foo: str

        register_dynamic_schema_type(TemporaryModel)
        assert TemporaryModel in iter_dynamic_schema_types()

        clear_dynamic_schema_types()
        assert TemporaryModel not in iter_dynamic_schema_types()
    finally:
        for model in existing_models:
            register_dynamic_schema_type(model)
uv.lock (generated; 152 lines changed)

@@ -1834,6 +1834,21 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/fe/54/c86cd8e011fe98803d7e382fd67c0df5ceab8d2b7ad8c5a81524f791551c/jsonschema-4.25.0-py3-none-any.whl", hash = "sha256:24c2e8da302de79c8b9382fee3e76b355e44d2a4364bb207159ce10b517bd716", size = 89184, upload-time = "2025-07-18T15:39:42.956Z" },
 ]
 
+[[package]]
+name = "jsonschema-path"
+version = "0.3.4"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "pathable" },
+    { name = "pyyaml" },
+    { name = "referencing" },
+    { name = "requests" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/6e/45/41ebc679c2a4fced6a722f624c18d658dee42612b83ea24c1caf7c0eb3a8/jsonschema_path-0.3.4.tar.gz", hash = "sha256:8365356039f16cc65fddffafda5f58766e34bebab7d6d105616ab52bc4297001", size = 11159, upload-time = "2025-01-24T14:33:16.547Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/cb/58/3485da8cb93d2f393bce453adeef16896751f14ba3e2024bc21dc9597646/jsonschema_path-0.3.4-py3-none-any.whl", hash = "sha256:f502191fdc2b22050f9a81c9237be9d27145b9001c55842bece5e94e382e52f8", size = 14810, upload-time = "2025-01-24T14:33:14.652Z" },
+]
+
 [[package]]
 name = "jsonschema-specifications"
 version = "2025.4.1"

@@ -1913,6 +1928,38 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/89/43/d9bebfc3db7dea6ec80df5cb2aad8d274dd18ec2edd6c4f21f32c237cbbb/kubernetes-33.1.0-py2.py3-none-any.whl", hash = "sha256:544de42b24b64287f7e0aa9513c93cb503f7f40eea39b20f66810011a86eabc5", size = 1941335, upload-time = "2025-06-09T21:57:56.327Z" },
 ]
 
+[[package]]
+name = "lazy-object-proxy"
+version = "1.12.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/08/a2/69df9c6ba6d316cfd81fe2381e464db3e6de5db45f8c43c6a23504abf8cb/lazy_object_proxy-1.12.0.tar.gz", hash = "sha256:1f5a462d92fd0cfb82f1fab28b51bfb209fabbe6aabf7f0d51472c0c124c0c61", size = 43681, upload-time = "2025-08-22T13:50:06.783Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/0d/1b/b5f5bd6bda26f1e15cd3232b223892e4498e34ec70a7f4f11c401ac969f1/lazy_object_proxy-1.12.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8ee0d6027b760a11cc18281e702c0309dd92da458a74b4c15025d7fc490deede", size = 26746, upload-time = "2025-08-22T13:42:37.572Z" },
+    { url = "https://files.pythonhosted.org/packages/55/64/314889b618075c2bfc19293ffa9153ce880ac6153aacfd0a52fcabf21a66/lazy_object_proxy-1.12.0-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:4ab2c584e3cc8be0dfca422e05ad30a9abe3555ce63e9ab7a559f62f8dbc6ff9", size = 71457, upload-time = "2025-08-22T13:42:38.743Z" },
+    { url = "https://files.pythonhosted.org/packages/11/53/857fc2827fc1e13fbdfc0ba2629a7d2579645a06192d5461809540b78913/lazy_object_proxy-1.12.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:14e348185adbd03ec17d051e169ec45686dcd840a3779c9d4c10aabe2ca6e1c0", size = 71036, upload-time = "2025-08-22T13:42:40.184Z" },
+    { url = "https://files.pythonhosted.org/packages/2b/24/e581ffed864cd33c1b445b5763d617448ebb880f48675fc9de0471a95cbc/lazy_object_proxy-1.12.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:c4fcbe74fb85df8ba7825fa05eddca764138da752904b378f0ae5ab33a36c308", size = 69329, upload-time = "2025-08-22T13:42:41.311Z" },
+    { url = "https://files.pythonhosted.org/packages/78/be/15f8f5a0b0b2e668e756a152257d26370132c97f2f1943329b08f057eff0/lazy_object_proxy-1.12.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:563d2ec8e4d4b68ee7848c5ab4d6057a6d703cb7963b342968bb8758dda33a23", size = 70690, upload-time = "2025-08-22T13:42:42.51Z" },
+    { url = "https://files.pythonhosted.org/packages/5d/aa/f02be9bbfb270e13ee608c2b28b8771f20a5f64356c6d9317b20043c6129/lazy_object_proxy-1.12.0-cp312-cp312-win_amd64.whl", hash = "sha256:53c7fd99eb156bbb82cbc5d5188891d8fdd805ba6c1e3b92b90092da2a837073", size = 26563, upload-time = "2025-08-22T13:42:43.685Z" },
+    { url = "https://files.pythonhosted.org/packages/f4/26/b74c791008841f8ad896c7f293415136c66cc27e7c7577de4ee68040c110/lazy_object_proxy-1.12.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:86fd61cb2ba249b9f436d789d1356deae69ad3231dc3c0f17293ac535162672e", size = 26745, upload-time = "2025-08-22T13:42:44.982Z" },
+    { url = "https://files.pythonhosted.org/packages/9b/52/641870d309e5d1fb1ea7d462a818ca727e43bfa431d8c34b173eb090348c/lazy_object_proxy-1.12.0-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:81d1852fb30fab81696f93db1b1e55a5d1ff7940838191062f5f56987d5fcc3e", size = 71537, upload-time = "2025-08-22T13:42:46.141Z" },
+    { url = "https://files.pythonhosted.org/packages/47/b6/919118e99d51c5e76e8bf5a27df406884921c0acf2c7b8a3b38d847ab3e9/lazy_object_proxy-1.12.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:be9045646d83f6c2664c1330904b245ae2371b5c57a3195e4028aedc9f999655", size = 71141, upload-time = "2025-08-22T13:42:47.375Z" },
+    { url = "https://files.pythonhosted.org/packages/e5/47/1d20e626567b41de085cf4d4fb3661a56c159feaa73c825917b3b4d4f806/lazy_object_proxy-1.12.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:67f07ab742f1adfb3966c40f630baaa7902be4222a17941f3d85fd1dae5565ff", size = 69449, upload-time = "2025-08-22T13:42:48.49Z" },
+    { url = "https://files.pythonhosted.org/packages/58/8d/25c20ff1a1a8426d9af2d0b6f29f6388005fc8cd10d6ee71f48bff86fdd0/lazy_object_proxy-1.12.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:75ba769017b944fcacbf6a80c18b2761a1795b03f8899acdad1f1c39db4409be", size = 70744, upload-time = "2025-08-22T13:42:49.608Z" },
+    { url = "https://files.pythonhosted.org/packages/c0/67/8ec9abe15c4f8a4bcc6e65160a2c667240d025cbb6591b879bea55625263/lazy_object_proxy-1.12.0-cp313-cp313-win_amd64.whl", hash = "sha256:7b22c2bbfb155706b928ac4d74c1a63ac8552a55ba7fff4445155523ea4067e1", size = 26568, upload-time = "2025-08-22T13:42:57.719Z" },
+    { url = "https://files.pythonhosted.org/packages/23/12/cd2235463f3469fd6c62d41d92b7f120e8134f76e52421413a0ad16d493e/lazy_object_proxy-1.12.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:4a79b909aa16bde8ae606f06e6bbc9d3219d2e57fb3e0076e17879072b742c65", size = 27391, upload-time = "2025-08-22T13:42:50.62Z" },
+    { url = "https://files.pythonhosted.org/packages/60/9e/f1c53e39bbebad2e8609c67d0830cc275f694d0ea23d78e8f6db526c12d3/lazy_object_proxy-1.12.0-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:338ab2f132276203e404951205fe80c3fd59429b3a724e7b662b2eb539bb1be9", size = 80552, upload-time = "2025-08-22T13:42:51.731Z" },
+    { url = "https://files.pythonhosted.org/packages/4c/b6/6c513693448dcb317d9d8c91d91f47addc09553613379e504435b4cc8b3e/lazy_object_proxy-1.12.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8c40b3c9faee2e32bfce0df4ae63f4e73529766893258eca78548bac801c8f66", size = 82857, upload-time = "2025-08-22T13:42:53.225Z" },
+    { url = "https://files.pythonhosted.org/packages/12/1c/d9c4aaa4c75da11eb7c22c43d7c90a53b4fca0e27784a5ab207768debea7/lazy_object_proxy-1.12.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:717484c309df78cedf48396e420fa57fc8a2b1f06ea889df7248fdd156e58847", size = 80833, upload-time = "2025-08-22T13:42:54.391Z" },
+    { url = "https://files.pythonhosted.org/packages/0b/ae/29117275aac7d7d78ae4f5a4787f36ff33262499d486ac0bf3e0b97889f6/lazy_object_proxy-1.12.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:a6b7ea5ea1ffe15059eb44bcbcb258f97bcb40e139b88152c40d07b1a1dfc9ac", size = 79516, upload-time = "2025-08-22T13:42:55.812Z" },
+    { url = "https://files.pythonhosted.org/packages/19/40/b4e48b2c38c69392ae702ae7afa7b6551e0ca5d38263198b7c79de8b3bdf/lazy_object_proxy-1.12.0-cp313-cp313t-win_amd64.whl", hash = "sha256:08c465fb5cd23527512f9bd7b4c7ba6cec33e28aad36fbbe46bf7b858f9f3f7f", size = 27656, upload-time = "2025-08-22T13:42:56.793Z" },
+    { url = "https://files.pythonhosted.org/packages/ef/3a/277857b51ae419a1574557c0b12e0d06bf327b758ba94cafc664cb1e2f66/lazy_object_proxy-1.12.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c9defba70ab943f1df98a656247966d7729da2fe9c2d5d85346464bf320820a3", size = 26582, upload-time = "2025-08-22T13:49:49.366Z" },
+    { url = "https://files.pythonhosted.org/packages/1a/b6/c5e0fa43535bb9c87880e0ba037cdb1c50e01850b0831e80eb4f4762f270/lazy_object_proxy-1.12.0-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:6763941dbf97eea6b90f5b06eb4da9418cc088fce0e3883f5816090f9afcde4a", size = 71059, upload-time = "2025-08-22T13:49:50.488Z" },
+    { url = "https://files.pythonhosted.org/packages/06/8a/7dcad19c685963c652624702f1a968ff10220b16bfcc442257038216bf55/lazy_object_proxy-1.12.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:fdc70d81235fc586b9e3d1aeef7d1553259b62ecaae9db2167a5d2550dcc391a", size = 71034, upload-time = "2025-08-22T13:49:54.224Z" },
+    { url = "https://files.pythonhosted.org/packages/12/ac/34cbfb433a10e28c7fd830f91c5a348462ba748413cbb950c7f259e67aa7/lazy_object_proxy-1.12.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:0a83c6f7a6b2bfc11ef3ed67f8cbe99f8ff500b05655d8e7df9aab993a6abc95", size = 69529, upload-time = "2025-08-22T13:49:55.29Z" },
+    { url = "https://files.pythonhosted.org/packages/6f/6a/11ad7e349307c3ca4c0175db7a77d60ce42a41c60bcb11800aabd6a8acb8/lazy_object_proxy-1.12.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:256262384ebd2a77b023ad02fbcc9326282bcfd16484d5531154b02bc304f4c5", size = 70391, upload-time = "2025-08-22T13:49:56.35Z" },
+    { url = "https://files.pythonhosted.org/packages/59/97/9b410ed8fbc6e79c1ee8b13f8777a80137d4bc189caf2c6202358e66192c/lazy_object_proxy-1.12.0-cp314-cp314-win_amd64.whl", hash = "sha256:7601ec171c7e8584f8ff3f4e440aa2eebf93e854f04639263875b8c2971f819f", size = 26988, upload-time = "2025-08-22T13:49:57.302Z" },
+]
+
 [[package]]
 name = "linkify"
 version = "1.4"

@@ -1992,6 +2039,7 @@ dev = [
     { name = "black" },
     { name = "mypy" },
     { name = "nbval" },
+    { name = "openapi-spec-validator" },
     { name = "pre-commit" },
     { name = "pytest" },
     { name = "pytest-asyncio" },

@@ -2117,6 +2165,7 @@ requires-dist = [
     { name = "python-dotenv" },
     { name = "python-multipart", specifier = ">=0.0.20" },
     { name = "pyyaml", specifier = ">=6.0" },
+    { name = "pyyaml", specifier = ">=6.0.2" },
     { name = "rich" },
     { name = "sqlalchemy", extras = ["asyncio"], specifier = ">=2.0.41" },
     { name = "starlette" },

@@ -2138,6 +2187,7 @@ dev = [
     { name = "black" },
     { name = "mypy" },
     { name = "nbval" },
+    { name = "openapi-spec-validator", specifier = ">=0.7.2" },
     { name = "pre-commit", specifier = ">=4.4.0" },
     { name = "pytest", specifier = ">=8.4" },
     { name = "pytest-asyncio", specifier = ">=1.0" },

@@ -3011,6 +3061,35 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/14/f3/ebbd700d8dc1e6380a7a382969d96bc0cbea8717b52fb38ff0ca2a7653e8/openai-2.5.0-py3-none-any.whl", hash = "sha256:21380e5f52a71666dbadbf322dd518bdf2b9d11ed0bb3f96bea17310302d6280", size = 999851, upload-time = "2025-10-17T18:14:45.528Z" },
 ]
 
+[[package]]
+name = "openapi-schema-validator"
+version = "0.6.3"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "jsonschema" },
+    { name = "jsonschema-specifications" },
+    { name = "rfc3339-validator" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/8b/f3/5507ad3325169347cd8ced61c232ff3df70e2b250c49f0fe140edb4973c6/openapi_schema_validator-0.6.3.tar.gz", hash = "sha256:f37bace4fc2a5d96692f4f8b31dc0f8d7400fd04f3a937798eaf880d425de6ee", size = 11550, upload-time = "2025-01-10T18:08:22.268Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/21/c6/ad0fba32775ae749016829dace42ed80f4407b171da41313d1a3a5f102e4/openapi_schema_validator-0.6.3-py3-none-any.whl", hash = "sha256:f3b9870f4e556b5a62a1c39da72a6b4b16f3ad9c73dc80084b1b11e74ba148a3", size = 8755, upload-time = "2025-01-10T18:08:19.758Z" },
+]
+
+[[package]]
+name = "openapi-spec-validator"
+version = "0.7.2"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "jsonschema" },
+    { name = "jsonschema-path" },
+    { name = "lazy-object-proxy" },
+    { name = "openapi-schema-validator" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/82/af/fe2d7618d6eae6fb3a82766a44ed87cd8d6d82b4564ed1c7cfb0f6378e91/openapi_spec_validator-0.7.2.tar.gz", hash = "sha256:cc029309b5c5dbc7859df0372d55e9d1ff43e96d678b9ba087f7c56fc586f734", size = 36855, upload-time = "2025-06-07T14:48:56.299Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/27/dd/b3fd642260cb17532f66cc1e8250f3507d1e580483e209dc1e9d13bd980d/openapi_spec_validator-0.7.2-py3-none-any.whl", hash = "sha256:4bbdc0894ec85f1d1bea1d6d9c8b2c3c8d7ccaa13577ef40da9c006c9fd0eb60", size = 39713, upload-time = "2025-06-07T14:48:54.077Z" },
+]
+
 [[package]]
 name = "opentelemetry-api"
 version = "1.36.0"

@@ -3247,6 +3326,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/c6/ac/dac4a63f978e4dcb3c6d3a78c4d8e0192a113d288502a1216950c41b1027/parso-0.8.4-py2.py3-none-any.whl", hash = "sha256:a418670a20291dacd2dddc80c377c5c3791378ee1e8d12bffc35420643d43f18", size = 103650, upload-time = "2024-04-05T09:43:53.299Z" },
 ]
 
+[[package]]
+name = "pathable"
+version = "0.4.4"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/67/93/8f2c2075b180c12c1e9f6a09d1a985bc2036906b13dff1d8917e395f2048/pathable-0.4.4.tar.gz", hash = "sha256:6905a3cd17804edfac7875b5f6c9142a218c7caef78693c2dbbbfbac186d88b2", size = 8124, upload-time = "2025-01-10T18:43:13.247Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/7d/eb/b6260b31b1a96386c0a880edebe26f89669098acea8e0318bff6adb378fd/pathable-0.4.4-py3-none-any.whl", hash = "sha256:5ae9e94793b6ef5a4cbe0a7ce9dbbefc1eec38df253763fd0aeeacf2762dbbc2", size = 9592, upload-time = "2025-01-10T18:43:11.88Z" },
+]
+
 [[package]]
 name = "pathspec"
 version = "0.12.1"

@@ -4404,6 +4492,18 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/1c/4c/cc276ce57e572c102d9542d383b2cfd551276581dc60004cb94fe8774c11/responses-0.25.8-py3-none-any.whl", hash = "sha256:0c710af92def29c8352ceadff0c3fe340ace27cf5af1bbe46fb71275bcd2831c", size = 34769, upload-time = "2025-08-08T19:01:45.018Z" },
 ]
 
+[[package]]
+name = "rfc3339-validator"
+version = "0.1.4"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "six" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/28/ea/a9387748e2d111c3c2b275ba970b735e04e15cdb1eb30693b6b5708c4dbd/rfc3339_validator-0.1.4.tar.gz", hash = "sha256:138a2abdf93304ad60530167e51d2dfb9549521a836871b88d7f4695d0022f6b", size = 5513, upload-time = "2021-05-12T16:37:54.178Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/7b/44/4e421b96b67b2daff264473f7465db72fbdf36a07e05494f50300cc7b0c6/rfc3339_validator-0.1.4-py2.py3-none-any.whl", hash = "sha256:24f6ec1eda14ef823da9e36ec7113124b39c04d50a4d3d3a3c2859577e7791fa", size = 3490, upload-time = "2021-05-12T16:37:52.536Z" },
+]
+
 [[package]]
 name = "rich"
 version = "14.1.0"

@@ -4516,40 +4616,46 @@ wheels = [
 
 [[package]]
 name = "ruamel-yaml"
-version = "0.18.14"
+version = "0.18.16"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "ruamel-yaml-clib", marker = "python_full_version < '3.14' and platform_python_implementation == 'CPython'" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/39/87/6da0df742a4684263261c253f00edd5829e6aca970fff69e75028cccc547/ruamel.yaml-0.18.14.tar.gz", hash = "sha256:7227b76aaec364df15936730efbf7d72b30c0b79b1d578bbb8e3dcb2d81f52b7", size = 145511, upload-time = "2025-06-09T08:51:09.828Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/9f/c7/ee630b29e04a672ecfc9b63227c87fd7a37eb67c1bf30fe95376437f897c/ruamel.yaml-0.18.16.tar.gz", hash = "sha256:a6e587512f3c998b2225d68aa1f35111c29fad14aed561a26e73fab729ec5e5a", size = 147269, upload-time = "2025-10-22T17:54:02.346Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/af/6d/6fe4805235e193aad4aaf979160dd1f3c487c57d48b810c816e6e842171b/ruamel.yaml-0.18.14-py3-none-any.whl", hash = "sha256:710ff198bb53da66718c7db27eec4fbcc9aa6ca7204e4c1df2f282b6fe5eb6b2", size = 118570, upload-time = "2025-06-09T08:51:06.348Z" },
+    { url = "https://files.pythonhosted.org/packages/0f/73/bb1bc2529f852e7bf64a2dec885e89ff9f5cc7bbf6c9340eed30ff2c69c5/ruamel.yaml-0.18.16-py3-none-any.whl", hash = "sha256:048f26d64245bae57a4f9ef6feb5b552a386830ef7a826f235ffb804c59efbba", size = 119858, upload-time = "2025-10-22T17:53:59.012Z" },
 ]
 
 [[package]]
 name = "ruamel-yaml-clib"
-version = "0.2.12"
+version = "0.2.14"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/20/84/80203abff8ea4993a87d823a5f632e4d92831ef75d404c9fc78d0176d2b5/ruamel.yaml.clib-0.2.12.tar.gz", hash = "sha256:6c8fbb13ec503f99a91901ab46e0b07ae7941cd527393187039aec586fdfd36f", size = 225315, upload-time = "2024-10-20T10:10:56.22Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/d8/e9/39ec4d4b3f91188fad1842748f67d4e749c77c37e353c4e545052ee8e893/ruamel.yaml.clib-0.2.14.tar.gz", hash = "sha256:803f5044b13602d58ea378576dd75aa759f52116a0232608e8fdada4da33752e", size = 225394, upload-time = "2025-09-22T19:51:23.753Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/48/41/e7a405afbdc26af961678474a55373e1b323605a4f5e2ddd4a80ea80f628/ruamel.yaml.clib-0.2.12-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:20b0f8dc160ba83b6dcc0e256846e1a02d044e13f7ea74a3d1d56ede4e48c632", size = 133433, upload-time = "2024-10-20T10:12:55.657Z" },
+    { url = "https://files.pythonhosted.org/packages/b4/42/ccfb34a25289afbbc42017e4d3d4288e61d35b2e00cfc6b92974a6a1f94b/ruamel.yaml.clib-0.2.14-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:6aeadc170090ff1889f0d2c3057557f9cd71f975f17535c26a5d37af98f19c27", size = 271775, upload-time = "2025-09-23T14:24:12.771Z" },
-    { url = "https://files.pythonhosted.org/packages/ec/b0/b850385604334c2ce90e3ee1013bd911aedf058a934905863a6ea95e9eb4/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux2014_aarch64.whl", hash = "sha256:943f32bc9dedb3abff9879edc134901df92cfce2c3d5c9348f172f62eb2d771d", size = 647362, upload-time = "2024-10-20T10:12:57.155Z" },
+    { url = "https://files.pythonhosted.org/packages/82/73/e628a92e80197ff6a79ab81ec3fa00d4cc082d58ab78d3337b7ba7043301/ruamel.yaml.clib-0.2.14-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:5e56ac47260c0eed992789fa0b8efe43404a9adb608608631a948cee4fc2b052", size = 138842, upload-time = "2025-09-22T19:50:49.156Z" },
-    { url = "https://files.pythonhosted.org/packages/44/d0/3f68a86e006448fb6c005aee66565b9eb89014a70c491d70c08de597f8e4/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95c3829bb364fdb8e0332c9931ecf57d9be3519241323c5274bd82f709cebc0c", size = 754118, upload-time = "2024-10-20T10:12:58.501Z" },
+    { url = "https://files.pythonhosted.org/packages/2b/c5/346c7094344a60419764b4b1334d9e0285031c961176ff88ffb652405b0c/ruamel.yaml.clib-0.2.14-cp312-cp312-manylinux2014_aarch64.whl", hash = "sha256:a911aa73588d9a8b08d662b9484bc0567949529824a55d3885b77e8dd62a127a", size = 647404, upload-time = "2025-09-22T19:50:52.921Z" },
-    { url = "https://files.pythonhosted.org/packages/52/a9/d39f3c5ada0a3bb2870d7db41901125dbe2434fa4f12ca8c5b83a42d7c53/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:749c16fcc4a2b09f28843cda5a193e0283e47454b63ec4b81eaa2242f50e4ccd", size = 706497, upload-time = "2024-10-20T10:13:00.211Z" },
+    { url = "https://files.pythonhosted.org/packages/df/99/65080c863eb06d4498de3d6c86f3e90595e02e159fd8529f1565f56cfe2c/ruamel.yaml.clib-0.2.14-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a05ba88adf3d7189a974b2de7a9d56731548d35dc0a822ec3dc669caa7019b29", size = 753141, upload-time = "2025-09-22T19:50:50.294Z" },
-    { url = "https://files.pythonhosted.org/packages/b0/fa/097e38135dadd9ac25aecf2a54be17ddf6e4c23e43d538492a90ab3d71c6/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:bf165fef1f223beae7333275156ab2022cffe255dcc51c27f066b4370da81e31", size = 698042, upload-time = "2024-10-21T11:26:46.038Z" },
+    { url = "https://files.pythonhosted.org/packages/3d/e3/0de85f3e3333f8e29e4b10244374a202a87665d1131798946ee22cf05c7c/ruamel.yaml.clib-0.2.14-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fb04c5650de6668b853623eceadcdb1a9f2fee381f5d7b6bc842ee7c239eeec4", size = 703477, upload-time = "2025-09-22T19:50:51.508Z" },
-    { url = "https://files.pythonhosted.org/packages/ec/d5/a659ca6f503b9379b930f13bc6b130c9f176469b73b9834296822a83a132/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:32621c177bbf782ca5a18ba4d7af0f1082a3f6e517ac2a18b3974d4edf349680", size = 745831, upload-time = "2024-10-21T11:26:47.487Z" },
+    { url = "https://files.pythonhosted.org/packages/d9/25/0d2f09d8833c7fd77ab8efeff213093c16856479a9d293180a0d89f6bed9/ruamel.yaml.clib-0.2.14-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:df3ec9959241d07bc261f4983d25a1205ff37703faf42b474f15d54d88b4f8c9", size = 741157, upload-time = "2025-09-23T18:42:50.408Z" },
-    { url = "https://files.pythonhosted.org/packages/db/5d/36619b61ffa2429eeaefaab4f3374666adf36ad8ac6330d855848d7d36fd/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b82a7c94a498853aa0b272fd5bc67f29008da798d4f93a2f9f289feb8426a58d", size = 715692, upload-time = "2024-12-11T19:58:17.252Z" },
+    { url = "https://files.pythonhosted.org/packages/d3/8c/959f10c2e2153cbdab834c46e6954b6dd9e3b109c8f8c0a3cf1618310985/ruamel.yaml.clib-0.2.14-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:fbc08c02e9b147a11dfcaa1ac8a83168b699863493e183f7c0c8b12850b7d259", size = 745859, upload-time = "2025-09-22T19:50:54.497Z" },
-    { url = "https://files.pythonhosted.org/packages/b1/82/85cb92f15a4231c89b95dfe08b09eb6adca929ef7df7e17ab59902b6f589/ruamel.yaml.clib-0.2.12-cp312-cp312-win32.whl", hash = "sha256:e8c4ebfcfd57177b572e2040777b8abc537cdef58a2120e830124946aa9b42c5", size = 98777, upload-time = "2024-10-20T10:13:01.395Z" },
+    { url = "https://files.pythonhosted.org/packages/ed/6b/e580a7c18b485e1a5f30a32cda96b20364b0ba649d9d2baaf72f8bd21f83/ruamel.yaml.clib-0.2.14-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:c099cafc1834d3c5dac305865d04235f7c21c167c8dd31ebc3d6bbc357e2f023", size = 770200, upload-time = "2025-09-22T19:50:55.718Z" },
-    { url = "https://files.pythonhosted.org/packages/d7/8f/c3654f6f1ddb75daf3922c3d8fc6005b1ab56671ad56ffb874d908bfa668/ruamel.yaml.clib-0.2.12-cp312-cp312-win_amd64.whl", hash = "sha256:0467c5965282c62203273b838ae77c0d29d7638c8a4e3a1c8bdd3602c10904e4", size = 115523, upload-time = "2024-10-20T10:13:02.768Z" },
+    { url = "https://files.pythonhosted.org/packages/ef/44/3455eebc761dc8e8fdced90f2b0a3fa61e32ba38b50de4130e2d57db0f21/ruamel.yaml.clib-0.2.14-cp312-cp312-win32.whl", hash = "sha256:b5b0f7e294700b615a3bcf6d28b26e6da94e8eba63b079f4ec92e9ba6c0d6b54", size = 98829, upload-time = "2025-09-22T19:50:58.895Z" },
-    { url = "https://files.pythonhosted.org/packages/29/00/4864119668d71a5fa45678f380b5923ff410701565821925c69780356ffa/ruamel.yaml.clib-0.2.12-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:4c8c5d82f50bb53986a5e02d1b3092b03622c02c2eb78e29bec33fd9593bae1a", size = 132011, upload-time = "2024-10-20T10:13:04.377Z" },
+    { url = "https://files.pythonhosted.org/packages/76/ab/5121f7f3b651db93de546f8c982c241397aad0a4765d793aca1dac5eadee/ruamel.yaml.clib-0.2.14-cp312-cp312-win_amd64.whl", hash = "sha256:a37f40a859b503304dd740686359fcf541d6fb3ff7fc10f539af7f7150917c68", size = 115570, upload-time = "2025-09-22T19:50:57.981Z" },
-    { url = "https://files.pythonhosted.org/packages/7f/5e/212f473a93ae78c669ffa0cb051e3fee1139cb2d385d2ae1653d64281507/ruamel.yaml.clib-0.2.12-cp313-cp313-manylinux2014_aarch64.whl", hash = "sha256:e7e3736715fbf53e9be2a79eb4db68e4ed857017344d697e8b9749444ae57475", size = 642488, upload-time = "2024-10-20T10:13:05.906Z" },
+    { url = "https://files.pythonhosted.org/packages/d7/ae/e3811f05415594025e96000349d3400978adaed88d8f98d494352d9761ee/ruamel.yaml.clib-0.2.14-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:7e4f9da7e7549946e02a6122dcad00b7c1168513acb1f8a726b1aaf504a99d32", size = 269205, upload-time = "2025-09-23T14:24:15.06Z" },
-    { url = "https://files.pythonhosted.org/packages/1f/8f/ecfbe2123ade605c49ef769788f79c38ddb1c8fa81e01f4dbf5cf1a44b16/ruamel.yaml.clib-0.2.12-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0b7e75b4965e1d4690e93021adfcecccbca7d61c7bddd8e22406ef2ff20d74ef", size = 745066, upload-time = "2024-10-20T10:13:07.26Z" },
+    { url = "https://files.pythonhosted.org/packages/72/06/7d51f4688d6d72bb72fa74254e1593c4f5ebd0036be5b41fe39315b275e9/ruamel.yaml.clib-0.2.14-cp313-cp313-macosx_15_0_arm64.whl", hash = "sha256:dd7546c851e59c06197a7c651335755e74aa383a835878ca86d2c650c07a2f85", size = 137417, upload-time = "2025-09-22T19:50:59.82Z" },
-    { url = "https://files.pythonhosted.org/packages/e2/a9/28f60726d29dfc01b8decdb385de4ced2ced9faeb37a847bd5cf26836815/ruamel.yaml.clib-0.2.12-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96777d473c05ee3e5e3c3e999f5d23c6f4ec5b0c38c098b3a5229085f74236c6", size = 701785, upload-time = "2024-10-20T10:13:08.504Z" },
+    { url = "https://files.pythonhosted.org/packages/5a/08/b4499234a420ef42960eeb05585df5cc7eb25ccb8c980490b079e6367050/ruamel.yaml.clib-0.2.14-cp313-cp313-manylinux2014_aarch64.whl", hash = "sha256:1c1acc3a0209ea9042cc3cfc0790edd2eddd431a2ec3f8283d081e4d5018571e", size = 642558, upload-time = "2025-09-22T19:51:03.388Z" },
-    { url = "https://files.pythonhosted.org/packages/84/7e/8e7ec45920daa7f76046578e4f677a3215fe8f18ee30a9cb7627a19d9b4c/ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:3bc2a80e6420ca8b7d3590791e2dfc709c88ab9152c00eeb511c9875ce5778bf", size = 693017, upload-time = "2024-10-21T11:26:48.866Z" },
+    { url = "https://files.pythonhosted.org/packages/b6/ba/1975a27dedf1c4c33306ee67c948121be8710b19387aada29e2f139c43ee/ruamel.yaml.clib-0.2.14-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2070bf0ad1540d5c77a664de07ebcc45eebd1ddcab71a7a06f26936920692beb", size = 744087, upload-time = "2025-09-22T19:51:00.897Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/c5/b3/d650eaade4ca225f02a648321e1ab835b9d361c60d51150bac49063b83fa/ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:e188d2699864c11c36cdfdada94d781fd5d6b0071cd9c427bceb08ad3d7c70e1", size = 741270, upload-time = "2024-10-21T11:26:50.213Z" },
|
{ url = "https://files.pythonhosted.org/packages/20/15/8a19a13d27f3bd09fa18813add8380a29115a47b553845f08802959acbce/ruamel.yaml.clib-0.2.14-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9bd8fe07f49c170e09d76773fb86ad9135e0beee44f36e1576a201b0676d3d1d", size = 699709, upload-time = "2025-09-22T19:51:02.075Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/87/b8/01c29b924dcbbed75cc45b30c30d565d763b9c4d540545a0eeecffb8f09c/ruamel.yaml.clib-0.2.12-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4f6f3eac23941b32afccc23081e1f50612bdbe4e982012ef4f5797986828cd01", size = 709059, upload-time = "2024-12-11T19:58:18.846Z" },
|
{ url = "https://files.pythonhosted.org/packages/19/ee/8d6146a079ad21e534b5083c9ee4a4c8bec42f79cf87594b60978286b39a/ruamel.yaml.clib-0.2.14-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ff86876889ea478b1381089e55cf9e345707b312beda4986f823e1d95e8c0f59", size = 708926, upload-time = "2025-09-23T18:42:51.707Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/30/8c/ed73f047a73638257aa9377ad356bea4d96125b305c34a28766f4445cc0f/ruamel.yaml.clib-0.2.12-cp313-cp313-win32.whl", hash = "sha256:6442cb36270b3afb1b4951f060eccca1ce49f3d087ca1ca4563a6eb479cb3de6", size = 98583, upload-time = "2024-10-20T10:13:09.658Z" },
|
{ url = "https://files.pythonhosted.org/packages/a9/f5/426b714abdc222392e68f3b8ad323930d05a214a27c7e7a0f06c69126401/ruamel.yaml.clib-0.2.14-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:1f118b707eece8cf84ecbc3e3ec94d9db879d85ed608f95870d39b2d2efa5dca", size = 740202, upload-time = "2025-09-22T19:51:04.673Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/b0/85/e8e751d8791564dd333d5d9a4eab0a7a115f7e349595417fd50ecae3395c/ruamel.yaml.clib-0.2.12-cp313-cp313-win_amd64.whl", hash = "sha256:e5b8daf27af0b90da7bb903a876477a9e6d7270be6146906b276605997c7e9a3", size = 115190, upload-time = "2024-10-20T10:13:10.66Z" },
|
{ url = "https://files.pythonhosted.org/packages/3d/ac/3c5c2b27a183f4fda8a57c82211721c016bcb689a4a175865f7646db9f94/ruamel.yaml.clib-0.2.14-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:b30110b29484adc597df6bd92a37b90e63a8c152ca8136aad100a02f8ba6d1b6", size = 765196, upload-time = "2025-09-22T19:51:05.916Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/92/2e/06f56a71fd55021c993ed6e848c9b2e5e9cfce180a42179f0ddd28253f7c/ruamel.yaml.clib-0.2.14-cp313-cp313-win32.whl", hash = "sha256:f4e97a1cf0b7a30af9e1d9dad10a5671157b9acee790d9e26996391f49b965a2", size = 98635, upload-time = "2025-09-22T19:51:08.183Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/51/79/76aba16a1689b50528224b182f71097ece338e7a4ab55e84c2e73443b78a/ruamel.yaml.clib-0.2.14-cp313-cp313-win_amd64.whl", hash = "sha256:090782b5fb9d98df96509eecdbcaffd037d47389a89492320280d52f91330d78", size = 115238, upload-time = "2025-09-22T19:51:07.081Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/21/e2/a59ff65c26aaf21a24eb38df777cb9af5d87ba8fc8107c163c2da9d1e85e/ruamel.yaml.clib-0.2.14-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:7df6f6e9d0e33c7b1d435defb185095386c469109de723d514142632a7b9d07f", size = 271441, upload-time = "2025-09-23T14:24:16.498Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/6b/fa/3234f913fe9a6525a7b97c6dad1f51e72b917e6872e051a5e2ffd8b16fbb/ruamel.yaml.clib-0.2.14-cp314-cp314-macosx_15_0_arm64.whl", hash = "sha256:70eda7703b8126f5e52fcf276e6c0f40b0d314674f896fc58c47b0aef2b9ae83", size = 137970, upload-time = "2025-09-22T19:51:09.472Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/ef/ec/4edbf17ac2c87fa0845dd366ef8d5852b96eb58fcd65fc1ecf5fe27b4641/ruamel.yaml.clib-0.2.14-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:a0cb71ccc6ef9ce36eecb6272c81afdc2f565950cdcec33ae8e6cd8f7fc86f27", size = 739639, upload-time = "2025-09-22T19:51:10.566Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/15/18/b0e1fafe59051de9e79cdd431863b03593ecfa8341c110affad7c8121efc/ruamel.yaml.clib-0.2.14-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:e7cb9ad1d525d40f7d87b6df7c0ff916a66bc52cb61b66ac1b2a16d0c1b07640", size = 764456, upload-time = "2025-09-22T19:51:11.736Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
|
|
|
||||||