From aebd130b082814a5433f04dd9a4f50d873068b5c Mon Sep 17 00:00:00 2001 From: Anil Vishnoi Date: Thu, 13 Feb 2025 20:39:26 +0000 Subject: [PATCH 01/37] docs: Fix url to the llama-stack-spec yaml/html files (#1081) # What does this PR do? Fixes urls in the rfc doc (RFC-0001-llama-stack.md) Also fixes minor markdown linting issues Signed-off-by: Anil Vishnoi --- rfcs/RFC-0001-llama-stack.md | 30 +++++++++++++----------------- 1 file changed, 13 insertions(+), 17 deletions(-) diff --git a/rfcs/RFC-0001-llama-stack.md b/rfcs/RFC-0001-llama-stack.md index 2ff7838c1..7ba125c36 100644 --- a/rfcs/RFC-0001-llama-stack.md +++ b/rfcs/RFC-0001-llama-stack.md @@ -1,12 +1,15 @@ # The Llama Stack API **Authors:** + * Meta: @raghotham, @ashwinb, @hjshah, @jspisak ## Summary + As part of the Llama 3.1 release, Meta is releasing an RFC for ‘Llama Stack’, a comprehensive set of interfaces / API for ML developers building on top of Llama foundation models. We are looking for feedback on where the API can be improved, any corner cases we may have missed and your general thoughts on how useful this will be. Ultimately, our hope is to create a standard for working with Llama models in order to simplify the developer experience and foster innovation across the Llama ecosystem. ## Motivation + Llama models were always intended to work as part of an overall system that can orchestrate several components, including calling external tools. Our vision is to go beyond the foundation models and give developers access to a broader system that gives them the flexibility to design and create custom offerings that align with their vision. This thinking started last year when we first introduced a system-level safety model. Meta has continued to release new components for orchestration at the system level and, most recently in Llama 3.1, we’ve introduced the Llama Guard 3 safety model that is multilingual, a prompt injection filter, Prompt Guard and refreshed v3 of our CyberSec Evals. We are also releasing a reference implementation of an agentic system to demonstrate how all the pieces fit together. While building the reference implementation, we realized that having a clean and consistent way to interface between components could be valuable not only for us but for anyone leveraging Llama models and other components as part of their system. We’ve also heard from the community as they face a similar challenge as components exist with overlapping functionality and there are incompatible interfaces and yet don't cover the end-to-end model life cycle. @@ -16,22 +19,21 @@ With these motivations, we engaged folks in industry, startups, and the broader We welcome feedback and ways to improve the proposal. We’re excited to grow the ecosystem around Llama and lower barriers for both developers and platform providers. ## Design decisions -Meta releases weights of both the pretrained and instruction fine-tuned Llama models to support several use cases. These weights can be improved - fine tuned and aligned - with curated datasets to then be deployed for inference to support specific applications. The curated datasets can be produced manually by humans or synthetically by other models or by leveraging human feedback by collecting usage data of the application itself. This results in a continuous improvement cycle where the model gets better over time. This is the model life cycle. +Meta releases weights of both the pretrained and instruction fine-tuned Llama models to support several use cases. 
These weights can be improved - fine tuned and aligned - with curated datasets to then be deployed for inference to support specific applications. The curated datasets can be produced manually by humans or synthetically by other models or by leveraging human feedback by collecting usage data of the application itself. This results in a continuous improvement cycle where the model gets better over time. This is the model life cycle. ### Model Lifecycle ![Figure 1: Model Life Cycle](../docs/resources/model-lifecycle.png) - For each of the operations that need to be performed (e.g. fine tuning, inference, evals etc) during the model life cycle, we identified the capabilities as toolchain APIs that are needed. Some of these capabilities are primitive operations like inference while other capabilities like synthetic data generation are composed of other capabilities. The list of APIs we have identified to support the lifecycle of Llama models is below: -- /datasets - to support creating training and evaluation data sets -- /post_training - to support creating and managing supervised finetuning (SFT) or preference optimization jobs -- /evaluations - to support creating and managing evaluations for capabilities like question answering, summarization, or text - generation -- /synthetic_data_generation - to support generating synthetic data using data generation model and a reward model -- /reward_scoring - to support synthetic data generation -- /inference - to support serving the models for applications +* /datasets - to support creating training and evaluation data sets +* /post_training - to support creating and managing supervised finetuning (SFT) or preference optimization jobs +* /evaluations - to support creating and managing evaluations for capabilities like question answering, summarization, or text - generation +* /synthetic_data_generation - to support generating synthetic data using data generation model and a reward model +* /reward_scoring - to support synthetic data generation +* /inference - to support serving the models for applications ### Agentic System @@ -41,6 +43,7 @@ In addition to the model lifecycle, we considered the different components invol Note that as of today, in the OSS world, such a “loop” is often coded explicitly via elaborate prompt engineering using a ReAct pattern (typically) or preconstructed execution graph. Llama 3.1 (and future Llamas) attempts to absorb this multi-step reasoning loop inside the main model itself. **Let's consider an example:** + 1. The user asks the system "Who played the NBA finals last year?" 1. The model "understands" that this question needs to be answered using web search. It answers this abstractly with a message of the form "Please call the search tool for me with the query: 'List finalist teams for NBA in the last year' ". Note that the model by itself does not call the tool (of course!) 1. The executor consults the set of tool implementations which have been configured by the developer to find an implementation for the "search tool". If it does not find it, it returns an error to the model. Otherwise, it executes this tool and returns the result of this tool back to the model. @@ -62,14 +65,7 @@ We define the Llama Stack as a layer cake shown below. ![Figure 3: Llama Stack](../docs/resources/llama-stack.png) - - - -The API is defined in the [YAML](../docs/resources/llama-stack-spec.yaml) and [HTML](../docs/resources/llama-stack-spec.html) files. 
These files were generated using the Pydantic definitions in (api/datatypes.py and api/endpoints.py) files that are in the llama-models, llama-stack, and llama-agentic-system repositories. - - - - +The API is defined in the [YAML](../docs/_static/llama-stack-spec.yaml) and [HTML](../docs/_static/llama-stack-spec.html) files. These files were generated using the Pydantic definitions in (api/datatypes.py and api/endpoints.py) files that are in the llama-models, llama-stack, and llama-agentic-system repositories. ## Sample implementations @@ -77,8 +73,8 @@ To prove out the API, we implemented a handful of use cases to make things more There is also a sample inference endpoint implementation in the [llama-stack](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/distribution/server/server.py) repository. - ## Limitations + The reference implementation for Llama Stack APIs to date only includes sample implementations using the inference API. We are planning to flesh out the design of Llama Stack Distributions (distros) by combining capabilities from different providers into a single vertically integrated stack. We plan to implement other APIs and, of course, we’d love contributions!! Thank you in advance for your feedback, support and contributions to make this a better API. From 5858777ff038385539fb137bef6a8f9a5a87a177 Mon Sep 17 00:00:00 2001 From: Yuan Tang Date: Thu, 13 Feb 2025 18:39:13 -0500 Subject: [PATCH 02/37] fix: Update VectorIO config classes in registry (#1079) This was missed in https://github.com/meta-llama/llama-stack/pull/1023. ``` Traceback (most recent call last): File "/home/yutang/.conda/envs/distribution-myenv/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/yutang/.conda/envs/distribution-myenv/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/yutang/repos/llama-stack/llama_stack/distribution/server/server.py", line 488, in main() File "/home/yutang/repos/llama-stack/llama_stack/distribution/server/server.py", line 389, in main impls = asyncio.run(construct_stack(config)) File "/home/yutang/.conda/envs/distribution-myenv/lib/python3.10/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/home/yutang/.conda/envs/distribution-myenv/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete return future.result() File "/home/yutang/repos/llama-stack/llama_stack/distribution/stack.py", line 202, in construct_stack impls = await resolve_impls(run_config, provider_registry or get_provider_registry(), dist_registry) File "/home/yutang/repos/llama-stack/llama_stack/distribution/resolver.py", line 230, in resolve_impls impl = await instantiate_provider( File "/home/yutang/repos/llama-stack/llama_stack/distribution/resolver.py", line 312, in instantiate_provider config_type = instantiate_class_type(provider_spec.config_class) File "/home/yutang/repos/llama-stack/llama_stack/distribution/utils/dynamic.py", line 13, in instantiate_class_type return getattr(module, class_name) AttributeError: module 'llama_stack.providers.inline.vector_io.faiss' has no attribute 'FaissImplConfig' ``` --------- Signed-off-by: Yuan Tang --- llama_stack/providers/registry/vector_io.py | 16 ++++++++-------- .../providers/remote/vector_io/chroma/chroma.py | 7 ++++--- 2 files changed, 12 insertions(+), 11 deletions(-) diff --git a/llama_stack/providers/registry/vector_io.py b/llama_stack/providers/registry/vector_io.py index 4422baba5..88a65397a 100644 --- 
a/llama_stack/providers/registry/vector_io.py +++ b/llama_stack/providers/registry/vector_io.py @@ -42,7 +42,7 @@ def available_providers() -> List[ProviderSpec]: provider_type="inline::meta-reference", pip_packages=EMBEDDING_DEPS + ["faiss-cpu"], module="llama_stack.providers.inline.vector_io.faiss", - config_class="llama_stack.providers.inline.vector_io.faiss.FaissImplConfig", + config_class="llama_stack.providers.inline.vector_io.faiss.FaissVectorIOConfig", deprecation_warning="Please use the `inline::faiss` provider instead.", api_dependencies=[Api.inference], ), @@ -51,7 +51,7 @@ def available_providers() -> List[ProviderSpec]: provider_type="inline::faiss", pip_packages=EMBEDDING_DEPS + ["faiss-cpu"], module="llama_stack.providers.inline.vector_io.faiss", - config_class="llama_stack.providers.inline.vector_io.faiss.FaissImplConfig", + config_class="llama_stack.providers.inline.vector_io.faiss.FaissVectorIOConfig", api_dependencies=[Api.inference], ), InlineProviderSpec( @@ -68,7 +68,7 @@ def available_providers() -> List[ProviderSpec]: adapter_type="chromadb", pip_packages=EMBEDDING_DEPS + ["chromadb-client"], module="llama_stack.providers.remote.vector_io.chroma", - config_class="llama_stack.providers.remote.vector_io.chroma.ChromaRemoteImplConfig", + config_class="llama_stack.providers.remote.vector_io.chroma.ChromaVectorIOConfig", ), api_dependencies=[Api.inference], ), @@ -77,7 +77,7 @@ def available_providers() -> List[ProviderSpec]: provider_type="inline::chromadb", pip_packages=EMBEDDING_DEPS + ["chromadb"], module="llama_stack.providers.inline.vector_io.chroma", - config_class="llama_stack.providers.inline.vector_io.chroma.ChromaInlineImplConfig", + config_class="llama_stack.providers.inline.vector_io.chroma.ChromaVectorIOConfig", api_dependencies=[Api.inference], ), remote_provider_spec( @@ -86,7 +86,7 @@ def available_providers() -> List[ProviderSpec]: adapter_type="pgvector", pip_packages=EMBEDDING_DEPS + ["psycopg2-binary"], module="llama_stack.providers.remote.vector_io.pgvector", - config_class="llama_stack.providers.remote.vector_io.pgvector.PGVectorConfig", + config_class="llama_stack.providers.remote.vector_io.pgvector.PGVectorVectorIOConfig", ), api_dependencies=[Api.inference], ), @@ -96,7 +96,7 @@ def available_providers() -> List[ProviderSpec]: adapter_type="weaviate", pip_packages=EMBEDDING_DEPS + ["weaviate-client"], module="llama_stack.providers.remote.vector_io.weaviate", - config_class="llama_stack.providers.remote.vector_io.weaviate.WeaviateConfig", + config_class="llama_stack.providers.remote.vector_io.weaviate.WeaviateVectorIOConfig", provider_data_validator="llama_stack.providers.remote.vector_io.weaviate.WeaviateRequestProviderData", ), api_dependencies=[Api.inference], @@ -107,7 +107,7 @@ def available_providers() -> List[ProviderSpec]: adapter_type="sample", pip_packages=[], module="llama_stack.providers.remote.vector_io.sample", - config_class="llama_stack.providers.remote.vector_io.sample.SampleConfig", + config_class="llama_stack.providers.remote.vector_io.sample.SampleVectorIOConfig", ), api_dependencies=[], ), @@ -117,7 +117,7 @@ def available_providers() -> List[ProviderSpec]: adapter_type="qdrant", pip_packages=EMBEDDING_DEPS + ["qdrant-client"], module="llama_stack.providers.remote.vector_io.qdrant", - config_class="llama_stack.providers.remote.vector_io.qdrant.QdrantConfig", + config_class="llama_stack.providers.remote.vector_io.qdrant.QdrantVectorIOConfig", ), api_dependencies=[Api.inference], ), diff --git 
a/llama_stack/providers/remote/vector_io/chroma/chroma.py b/llama_stack/providers/remote/vector_io/chroma/chroma.py index f894a8e65..bd684160a 100644 --- a/llama_stack/providers/remote/vector_io/chroma/chroma.py +++ b/llama_stack/providers/remote/vector_io/chroma/chroma.py @@ -16,12 +16,13 @@ from llama_stack.apis.inference import InterleavedContent from llama_stack.apis.vector_dbs import VectorDB from llama_stack.apis.vector_io import Chunk, QueryChunksResponse, VectorIO from llama_stack.providers.datatypes import Api, VectorDBsProtocolPrivate +from llama_stack.providers.inline.vector_io.chroma import ChromaVectorIOConfig as InlineChromaVectorIOConfig from llama_stack.providers.utils.memory.vector_store import ( EmbeddingIndex, VectorDBWithIndex, ) -from .config import ChromaVectorIOConfig +from .config import ChromaVectorIOConfig as RemoteChromaVectorIOConfig log = logging.getLogger(__name__) @@ -88,7 +89,7 @@ class ChromaIndex(EmbeddingIndex): class ChromaVectorIOAdapter(VectorIO, VectorDBsProtocolPrivate): def __init__( self, - config: Union[ChromaVectorIOConfig, ChromaVectorIOConfig], + config: Union[RemoteChromaVectorIOConfig, InlineChromaVectorIOConfig], inference_api: Api.inference, ) -> None: log.info(f"Initializing ChromaVectorIOAdapter with url: {config}") @@ -99,7 +100,7 @@ class ChromaVectorIOAdapter(VectorIO, VectorDBsProtocolPrivate): self.cache = {} async def initialize(self) -> None: - if isinstance(self.config, ChromaVectorIOConfig): + if isinstance(self.config, RemoteChromaVectorIOConfig): log.info(f"Connecting to Chroma server at: {self.config.url}") url = self.config.url.rstrip("/") parsed = urlparse(url) From 32d1e50a6f2a771032fd42512780c7acfb490ca3 Mon Sep 17 00:00:00 2001 From: Bill Murdock Date: Thu, 13 Feb 2025 18:44:55 -0500 Subject: [PATCH 03/37] test: Add qdrant to provider tests (#1039) # What does this PR do? This is a follow on to #1022 . It includes the changes I needed to be able to test the Qdrant support as requested by @terrytangyuan . I uncovered a lot of bigger, more systemic issues with the vector DB testing and I will open a new issue for those. For now, I am just delivering the work I already did on that. ## Test Plan As discussed on #1022: ``` podman pull qdrant/qdrant mkdir qdrant-data podman run -p 6333:6333 -v $(pwd)/qdrant-data:/qdrant/storage qdrant/qdrant ``` ``` ollama pull all-minilm:l6-v2 curl http://localhost:11434/api/embeddings -d '{"model": "all-minilm", "prompt": "Hello world"}' ``` ``` EMBEDDING_DIMENSION=384 QDRANT_URL=http://localhost pytest llama_stack/providers/tests/vector_io/test_vector_io.py -m "qdrant" -v -s --tb=short --embedding-model all-minilm:latest --disable-warnings ``` These show 3 tests passing and 15 deselected which is presumably working as intended. 
--------- Signed-off-by: Bill Murdock --- .../providers/tests/vector_io/conftest.py | 2 +- .../providers/tests/vector_io/fixtures.py | 28 ++++++++++++++----- 2 files changed, 22 insertions(+), 8 deletions(-) diff --git a/llama_stack/providers/tests/vector_io/conftest.py b/llama_stack/providers/tests/vector_io/conftest.py index 3da64ff2e..1f9799100 100644 --- a/llama_stack/providers/tests/vector_io/conftest.py +++ b/llama_stack/providers/tests/vector_io/conftest.py @@ -57,7 +57,7 @@ DEFAULT_PROVIDER_COMBINATIONS = [ ), pytest.param( { - "inference": "bedrock", + "inference": "ollama", "vector_io": "qdrant", }, id="qdrant", diff --git a/llama_stack/providers/tests/vector_io/fixtures.py b/llama_stack/providers/tests/vector_io/fixtures.py index 30a2679d7..beb9b4ebd 100644 --- a/llama_stack/providers/tests/vector_io/fixtures.py +++ b/llama_stack/providers/tests/vector_io/fixtures.py @@ -17,6 +17,7 @@ from llama_stack.providers.inline.vector_io.faiss import FaissVectorIOConfig from llama_stack.providers.inline.vector_io.sqlite_vec import SQLiteVectorIOConfig from llama_stack.providers.remote.vector_io.chroma import ChromaVectorIOConfig from llama_stack.providers.remote.vector_io.pgvector import PGVectorVectorIOConfig +from llama_stack.providers.remote.vector_io.qdrant import QdrantConfig from llama_stack.providers.remote.vector_io.weaviate import WeaviateVectorIOConfig from llama_stack.providers.tests.resolver import construct_stack_for_test from llama_stack.providers.utils.kvstore.config import SqliteKVStoreConfig @@ -127,13 +128,26 @@ def vector_io_chroma() -> ProviderFixture: ) -VECTOR_IO_FIXTURES = [ - "faiss", - "pgvector", - "weaviate", - "chroma", - "sqlite_vec", -] +@pytest.fixture(scope="session") +def vector_io_qdrant() -> ProviderFixture: + url = os.getenv("QDRANT_URL") + if url: + config = QdrantConfig(url=url) + provider_type = "remote::qdrant" + else: + raise ValueError("QDRANT_URL must be set") + return ProviderFixture( + providers=[ + Provider( + provider_id="qdrant", + provider_type=provider_type, + config=config.model_dump(), + ) + ] + ) + + +VECTOR_IO_FIXTURES = ["faiss", "pgvector", "weaviate", "chroma", "qdrant", "sqlite_vec"] @pytest_asyncio.fixture(scope="session") From 225dd38e5ce8814c38c72c1769d7add584197354 Mon Sep 17 00:00:00 2001 From: ehhuang Date: Thu, 13 Feb 2025 16:17:50 -0800 Subject: [PATCH 04/37] test: add test for Agent.create_turn non-streaming response (#1078) Summary: This tests the fix to the SDK in https://github.com/meta-llama/llama-stack-client-python/pull/141 Test Plan: LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/client-sdk/ --safety-shield meta-llama/Llama-Guard-3-8B --- tests/client-sdk/agents/test_agents.py | 34 ++++++++++++++++++++++++-- 1 file changed, 32 insertions(+), 2 deletions(-) diff --git a/tests/client-sdk/agents/test_agents.py b/tests/client-sdk/agents/test_agents.py index f42341f72..e5c20c3a5 100644 --- a/tests/client-sdk/agents/test_agents.py +++ b/tests/client-sdk/agents/test_agents.py @@ -319,7 +319,7 @@ def test_custom_tool(llama_stack_client, agent_config): logs = [str(log) for log in EventLogger().log(response) if log is not None] logs_str = "".join(logs) assert "-100" in logs_str - assert "CustomTool" in logs_str + assert "get_boiling_point" in logs_str # TODO: fix this flaky test @@ -403,7 +403,7 @@ def xtest_override_system_message_behavior(llama_stack_client, agent_config): logs_str = "".join(logs) print(logs_str) assert "-100" in logs_str - assert "CustomTool" in logs_str + assert "get_boiling_point" in logs_str def 
test_rag_agent(llama_stack_client, agent_config): @@ -527,3 +527,33 @@ def test_rag_and_code_agent(llama_stack_client, agent_config): logs = [str(log) for log in EventLogger().log(response) if log is not None] logs_str = "".join(logs) assert f"Tool:{tool_name}" in logs_str + + +def test_create_turn_response(llama_stack_client, agent_config): + client_tool = TestClientTool() + agent_config = { + **agent_config, + "input_shields": [], + "output_shields": [], + "client_tools": [client_tool.get_tool_definition()], + } + + agent = Agent(llama_stack_client, agent_config, client_tools=(client_tool,)) + session_id = agent.create_session(f"test-session-{uuid4()}") + + response = agent.create_turn( + messages=[ + { + "role": "user", + "content": "What is the boiling point of polyjuice?", + }, + ], + session_id=session_id, + stream=False, + ) + steps = response.steps + assert len(steps) == 3 + assert steps[0].step_type == "inference" + assert steps[1].step_type == "tool_execution" + assert steps[1].tool_calls[0].tool_name == "get_boiling_point" + assert steps[2].step_type == "inference" From 8b655e3cd2cc32fb9a588fc0dfa498ec7757e4a0 Mon Sep 17 00:00:00 2001 From: Xi Yan Date: Thu, 13 Feb 2025 16:40:58 -0800 Subject: [PATCH 05/37] fix!: update eval-tasks -> benchmarks (#1032) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit # What does this PR do? - Update `/eval-tasks` to `/benchmarks` - ⚠️ Remove differentiation between `app` v.s. `benchmark` eval task config. Now we only have `BenchmarkConfig`. The overloaded `benchmark` is confusing and do not add any value. Backward compatibility is being kept as the "type" is not being used anywhere. [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan - This change is backward compatible - Run notebook test with ``` pytest -v -s --nbval-lax ./docs/getting_started.ipynb pytest -v -s --nbval-lax ./docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb ``` image [//]: # (## Documentation) [//]: # (- [ ] Added a Changelog entry if the change is significant) --------- Signed-off-by: Ihar Hrachyshka Signed-off-by: Ben Browning Signed-off-by: Sébastien Han Signed-off-by: reidliu Co-authored-by: Ihar Hrachyshka Co-authored-by: Ben Browning Co-authored-by: Sébastien Han Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com> Co-authored-by: reidliu Co-authored-by: Yuan Tang --- docs/_static/llama-stack-spec.html | 2286 ++++++++++------- docs/_static/llama-stack-spec.yaml | 1465 ++++++----- docs/getting_started.ipynb | 4 +- .../Llama_Stack_Benchmark_Evals.ipynb | 4 +- docs/openapi_generator/pyopenapi/generator.py | 2 + .../pyopenapi/specification.py | 1 + docs/source/building_applications/evals.md | 14 +- .../building_applications/evaluation.md | 8 +- docs/source/concepts/evaluation_concepts.md | 4 +- docs/source/concepts/index.md | 2 +- docs/source/playground/index.md | 4 +- .../references/evals_reference/index.md | 30 +- .../llama_stack_client_cli_reference.md | 10 +- .../references/python_sdk_reference/index.md | 24 +- .../{eval_tasks => benchmarks}/__init__.py | 2 +- llama_stack/apis/benchmarks/benchmarks.py | 86 + llama_stack/apis/datatypes.py | 2 +- llama_stack/apis/eval/eval.py | 57 +- llama_stack/apis/eval_tasks/eval_tasks.py | 66 - llama_stack/apis/resource.py | 2 +- llama_stack/distribution/datatypes.py | 8 +- llama_stack/distribution/distribution.py | 2 +- llama_stack/distribution/resolver.py | 8 +- llama_stack/distribution/routers/__init__.py | 4 +- 
llama_stack/distribution/routers/routers.py | 79 +- .../distribution/routers/routing_tables.py | 70 +- llama_stack/distribution/stack.py | 6 +- llama_stack/distribution/ui/README.md | 2 +- .../ui/page/distribution/eval_tasks.py | 14 +- .../ui/page/distribution/resources.py | 8 +- .../ui/page/evaluations/native_eval.py | 46 +- llama_stack/providers/datatypes.py | 6 +- .../inline/eval/meta_reference/eval.py | 94 +- llama_stack/providers/tests/eval/test_eval.py | 56 +- llama_stack/providers/tests/resolver.py | 6 +- llama_stack/templates/bedrock/run.yaml | 2 +- llama_stack/templates/cerebras/run.yaml | 2 +- .../templates/dell/run-with-safety.yaml | 2 +- llama_stack/templates/dell/run.yaml | 2 +- .../experimental-post-training/run.yaml | 2 +- .../templates/fireworks/run-with-safety.yaml | 2 +- llama_stack/templates/fireworks/run.yaml | 2 +- .../hf-endpoint/run-with-safety.yaml | 2 +- llama_stack/templates/hf-endpoint/run.yaml | 2 +- .../hf-serverless/run-with-safety.yaml | 2 +- llama_stack/templates/hf-serverless/run.yaml | 2 +- .../meta-reference-gpu/run-with-safety.yaml | 2 +- .../templates/meta-reference-gpu/run.yaml | 2 +- .../meta-reference-quantized-gpu/run.yaml | 2 +- llama_stack/templates/nvidia/run.yaml | 2 +- .../templates/ollama/run-with-safety.yaml | 2 +- llama_stack/templates/ollama/run.yaml | 2 +- .../remote-vllm/run-with-safety.yaml | 2 +- llama_stack/templates/remote-vllm/run.yaml | 2 +- llama_stack/templates/sambanova/run.yaml | 2 +- .../templates/tgi/run-with-safety.yaml | 2 +- llama_stack/templates/tgi/run.yaml | 2 +- .../templates/together/run-with-safety.yaml | 2 +- llama_stack/templates/together/run.yaml | 2 +- llama_stack/templates/vllm-gpu/run.yaml | 2 +- 60 files changed, 2622 insertions(+), 1910 deletions(-) rename llama_stack/apis/{eval_tasks => benchmarks}/__init__.py (81%) create mode 100644 llama_stack/apis/benchmarks/benchmarks.py delete mode 100644 llama_stack/apis/eval_tasks/eval_tasks.py diff --git a/docs/_static/llama-stack-spec.html b/docs/_static/llama-stack-spec.html index 98270f7b8..b93f6a380 100644 --- a/docs/_static/llama-stack-spec.html +++ b/docs/_static/llama-stack-spec.html @@ -40,6 +40,286 @@ } ], "paths": { + "/v1/eval/tasks/{task_id}/evaluations": { + "post": { + "responses": { + "200": { + "description": "OK", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/EvaluateResponse" + } + } + } + } + }, + "tags": [ + "Eval" + ], + "description": "", + "parameters": [ + { + "name": "task_id", + "in": "path", + "required": true, + "schema": { + "type": "string" + } + } + ], + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/DeprecatedEvaluateRowsRequest" + } + } + }, + "required": true + }, + "deprecated": true + } + }, + "/v1/eval-tasks/{task_id}": { + "get": { + "responses": { + "200": { + "description": "OK", + "content": { + "application/json": { + "schema": { + "oneOf": [ + { + "$ref": "#/components/schemas/Benchmark" + }, + { + "type": "null" + } + ] + } + } + } + } + }, + "tags": [ + "Benchmarks" + ], + "description": "", + "parameters": [ + { + "name": "eval_task_id", + "in": "query", + "required": true, + "schema": { + "type": "string" + } + } + ], + "deprecated": true + } + }, + "/v1/eval/tasks/{task_id}/jobs/{job_id}": { + "get": { + "responses": { + "200": { + "description": "OK", + "content": { + "application/json": { + "schema": { + "oneOf": [ + { + "$ref": "#/components/schemas/JobStatus" + }, + { + "type": "null" + } + ] + } + } + } + } + }, + "tags": 
[ + "Eval" + ], + "description": "", + "parameters": [ + { + "name": "task_id", + "in": "path", + "required": true, + "schema": { + "type": "string" + } + }, + { + "name": "job_id", + "in": "path", + "required": true, + "schema": { + "type": "string" + } + } + ], + "deprecated": true + }, + "delete": { + "responses": { + "200": { + "description": "OK" + } + }, + "tags": [ + "Eval" + ], + "description": "", + "parameters": [ + { + "name": "task_id", + "in": "path", + "required": true, + "schema": { + "type": "string" + } + }, + { + "name": "job_id", + "in": "path", + "required": true, + "schema": { + "type": "string" + } + } + ], + "deprecated": true + } + }, + "/v1/eval/tasks/{task_id}/jobs/{job_id}/result": { + "get": { + "responses": { + "200": { + "description": "OK", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/EvaluateResponse" + } + } + } + } + }, + "tags": [ + "Eval" + ], + "description": "", + "parameters": [ + { + "name": "task_id", + "in": "path", + "required": true, + "schema": { + "type": "string" + } + }, + { + "name": "job_id", + "in": "path", + "required": true, + "schema": { + "type": "string" + } + } + ], + "deprecated": true + } + }, + "/v1/eval-tasks": { + "get": { + "responses": { + "200": { + "description": "OK", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/ListBenchmarksResponse" + } + } + } + } + }, + "tags": [ + "Benchmarks" + ], + "description": "", + "parameters": [], + "deprecated": true + }, + "post": { + "responses": { + "200": { + "description": "OK" + } + }, + "tags": [ + "Benchmarks" + ], + "description": "", + "parameters": [], + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/DeprecatedRegisterEvalTaskRequest" + } + } + }, + "required": true + }, + "deprecated": true + } + }, + "/v1/eval/tasks/{task_id}/jobs": { + "post": { + "responses": { + "200": { + "description": "OK", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/Job" + } + } + } + } + }, + "tags": [ + "Eval" + ], + "description": "", + "parameters": [ + { + "name": "task_id", + "in": "path", + "required": true, + "schema": { + "type": "string" + } + } + ], + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/DeprecatedRunEvalRequest" + } + } + }, + "required": true + }, + "deprecated": true + } + }, "/v1/datasetio/rows": { "get": { "responses": { @@ -530,7 +810,7 @@ } } }, - "/v1/eval/tasks/{task_id}/evaluations": { + "/v1/eval/benchmarks/{benchmark_id}/evaluations": { "post": { "responses": { "200": { @@ -550,7 +830,7 @@ "description": "", "parameters": [ { - "name": "task_id", + "name": "benchmark_id", "in": "path", "required": true, "schema": { @@ -670,6 +950,43 @@ ] } }, + "/v1/eval/benchmarks/{benchmark_id}": { + "get": { + "responses": { + "200": { + "description": "OK", + "content": { + "application/json": { + "schema": { + "oneOf": [ + { + "$ref": "#/components/schemas/Benchmark" + }, + { + "type": "null" + } + ] + } + } + } + } + }, + "tags": [ + "Benchmarks" + ], + "description": "", + "parameters": [ + { + "name": "benchmark_id", + "in": "path", + "required": true, + "schema": { + "type": "string" + } + } + ] + } + }, "/v1/datasets/{dataset_id}": { "get": { "responses": { @@ -728,43 +1045,6 @@ ] } }, - "/v1/eval-tasks/{eval_task_id}": { - "get": { - "responses": { - "200": { - "description": "OK", - "content": { - "application/json": { - "schema": { - "oneOf": [ - { - 
"$ref": "#/components/schemas/EvalTask" - }, - { - "type": "null" - } - ] - } - } - } - } - }, - "tags": [ - "EvalTasks" - ], - "description": "", - "parameters": [ - { - "name": "eval_task_id", - "in": "path", - "required": true, - "schema": { - "type": "string" - } - } - ] - } - }, "/v1/models/{model_id}": { "get": { "responses": { @@ -1348,7 +1628,7 @@ } } }, - "/v1/eval/tasks/{task_id}/jobs/{job_id}": { + "/v1/eval/benchmarks/{benchmark_id}/jobs/{job_id}": { "get": { "responses": { "200": { @@ -1375,7 +1655,7 @@ "description": "", "parameters": [ { - "name": "task_id", + "name": "benchmark_id", "in": "path", "required": true, "schema": { @@ -1404,7 +1684,7 @@ "description": "", "parameters": [ { - "name": "task_id", + "name": "benchmark_id", "in": "path", "required": true, "schema": { @@ -1422,7 +1702,7 @@ ] } }, - "/v1/eval/tasks/{task_id}/jobs/{job_id}/result": { + "/v1/eval/benchmarks/{benchmark_id}/jobs/{job_id}/result": { "get": { "responses": { "200": { @@ -1442,7 +1722,7 @@ "description": "", "parameters": [ { - "name": "job_id", + "name": "benchmark_id", "in": "path", "required": true, "schema": { @@ -1450,7 +1730,7 @@ } }, { - "name": "task_id", + "name": "job_id", "in": "path", "required": true, "schema": { @@ -1460,6 +1740,49 @@ ] } }, + "/v1/eval/benchmarks": { + "get": { + "responses": { + "200": { + "description": "OK", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/ListBenchmarksResponse" + } + } + } + } + }, + "tags": [ + "Benchmarks" + ], + "description": "", + "parameters": [] + }, + "post": { + "responses": { + "200": { + "description": "OK" + } + }, + "tags": [ + "Benchmarks" + ], + "description": "", + "parameters": [], + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/RegisterBenchmarkRequest" + } + } + }, + "required": true + } + } + }, "/v1/datasets": { "get": { "responses": { @@ -1503,49 +1826,6 @@ } } }, - "/v1/eval-tasks": { - "get": { - "responses": { - "200": { - "description": "OK", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ListEvalTasksResponse" - } - } - } - } - }, - "tags": [ - "EvalTasks" - ], - "description": "", - "parameters": [] - }, - "post": { - "responses": { - "200": { - "description": "OK" - } - }, - "tags": [ - "EvalTasks" - ], - "description": "", - "parameters": [], - "requestBody": { - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/RegisterEvalTaskRequest" - } - } - }, - "required": true - } - } - }, "/v1/models": { "get": { "responses": { @@ -2121,7 +2401,7 @@ ] } }, - "/v1/eval/tasks/{task_id}/jobs": { + "/v1/eval/benchmarks/{benchmark_id}/jobs": { "post": { "responses": { "200": { @@ -2141,7 +2421,7 @@ "description": "", "parameters": [ { - "name": "task_id", + "name": "benchmark_id", "in": "path", "required": true, "schema": { @@ -2365,84 +2645,216 @@ "jsonSchemaDialect": "https://json-schema.org/draft/2020-12/schema", "components": { "schemas": { - "AppendRowsRequest": { + "AgentCandidate": { "type": "object", "properties": { - "dataset_id": { - "type": "string" + "type": { + "type": "string", + "const": "agent", + "default": "agent" }, - "rows": { + "config": { + "$ref": "#/components/schemas/AgentConfig" + } + }, + "additionalProperties": false, + "required": [ + "type", + "config" + ] + }, + "AgentConfig": { + "type": "object", + "properties": { + "sampling_params": { + "$ref": "#/components/schemas/SamplingParams" + }, + "input_shields": { "type": "array", 
"items": { - "type": "object", - "additionalProperties": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "boolean" - }, - { - "type": "number" - }, - { - "type": "string" - }, - { - "type": "array" - }, - { - "type": "object" - } - ] + "type": "string" + } + }, + "output_shields": { + "type": "array", + "items": { + "type": "string" + } + }, + "toolgroups": { + "type": "array", + "items": { + "$ref": "#/components/schemas/AgentTool" + } + }, + "client_tools": { + "type": "array", + "items": { + "$ref": "#/components/schemas/ToolDef" + } + }, + "tool_choice": { + "type": "string", + "enum": [ + "auto", + "required" + ], + "description": "Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities of the model." + }, + "tool_prompt_format": { + "type": "string", + "enum": [ + "json", + "function_tag", + "python_list" + ], + "description": "Prompt format for calling custom / zero shot tools." + }, + "tool_config": { + "$ref": "#/components/schemas/ToolConfig" + }, + "max_infer_iters": { + "type": "integer", + "default": 10 + }, + "model": { + "type": "string" + }, + "instructions": { + "type": "string" + }, + "enable_session_persistence": { + "type": "boolean" + }, + "response_format": { + "$ref": "#/components/schemas/ResponseFormat" + } + }, + "additionalProperties": false, + "required": [ + "model", + "instructions", + "enable_session_persistence" + ] + }, + "AgentTool": { + "oneOf": [ + { + "type": "string" + }, + { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "args": { + "type": "object", + "additionalProperties": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + { + "type": "object" + } + ] + } } + }, + "additionalProperties": false, + "required": [ + "name", + "args" + ] + } + ] + }, + "AggregationFunctionType": { + "type": "string", + "enum": [ + "average", + "median", + "categorical_count", + "accuracy" + ] + }, + "BasicScoringFnParams": { + "type": "object", + "properties": { + "type": { + "type": "string", + "const": "basic", + "default": "basic" + }, + "aggregation_functions": { + "type": "array", + "items": { + "$ref": "#/components/schemas/AggregationFunctionType" } } }, "additionalProperties": false, "required": [ - "dataset_id", - "rows" + "type" ] }, - "CompletionMessage": { + "BenchmarkConfig": { "type": "object", "properties": { - "role": { + "type": { "type": "string", - "const": "assistant", - "default": "assistant", - "description": "Must be \"assistant\" to identify this as the model's response" + "const": "benchmark", + "default": "benchmark" }, - "content": { - "$ref": "#/components/schemas/InterleavedContent", - "description": "The content of the model's response" + "eval_candidate": { + "$ref": "#/components/schemas/EvalCandidate" }, - "stop_reason": { - "type": "string", - "enum": [ - "end_of_turn", - "end_of_message", - "out_of_tokens" - ], - "description": "Reason why the model stopped generating. Options are: - `StopReason.end_of_turn`: The model finished generating the entire response. - `StopReason.end_of_message`: The model finished generating but generated a partial response -- usually, a tool call. The user may call the tool and continue the conversation with the tool's response. - `StopReason.out_of_tokens`: The model ran out of token budget." 
+ "scoring_params": { + "type": "object", + "additionalProperties": { + "$ref": "#/components/schemas/ScoringFnParams" + } }, - "tool_calls": { - "type": "array", - "items": { - "$ref": "#/components/schemas/ToolCall" - }, - "description": "List of tool calls. Each tool call is a ToolCall object." + "num_examples": { + "type": "integer" } }, "additionalProperties": false, "required": [ - "role", - "content", - "stop_reason" + "type", + "eval_candidate", + "scoring_params" + ] + }, + "EvalCandidate": { + "oneOf": [ + { + "$ref": "#/components/schemas/ModelCandidate" + }, + { + "$ref": "#/components/schemas/AgentCandidate" + } ], - "description": "A message containing the model's (assistant) response in a chat conversation." + "discriminator": { + "propertyName": "type", + "mapping": { + "model": "#/components/schemas/ModelCandidate", + "agent": "#/components/schemas/AgentCandidate" + } + } }, "GrammarResponseFormat": { "type": "object", @@ -2610,30 +3022,89 @@ ], "description": "Configuration for JSON schema-guided response generation." }, - "Message": { - "oneOf": [ - { - "$ref": "#/components/schemas/UserMessage" + "LLMAsJudgeScoringFnParams": { + "type": "object", + "properties": { + "type": { + "type": "string", + "const": "llm_as_judge", + "default": "llm_as_judge" }, - { + "judge_model": { + "type": "string" + }, + "prompt_template": { + "type": "string" + }, + "judge_score_regexes": { + "type": "array", + "items": { + "type": "string" + } + }, + "aggregation_functions": { + "type": "array", + "items": { + "$ref": "#/components/schemas/AggregationFunctionType" + } + } + }, + "additionalProperties": false, + "required": [ + "type", + "judge_model" + ] + }, + "ModelCandidate": { + "type": "object", + "properties": { + "type": { + "type": "string", + "const": "model", + "default": "model" + }, + "model": { + "type": "string" + }, + "sampling_params": { + "$ref": "#/components/schemas/SamplingParams" + }, + "system_message": { "$ref": "#/components/schemas/SystemMessage" - }, - { - "$ref": "#/components/schemas/ToolResponseMessage" - }, - { - "$ref": "#/components/schemas/CompletionMessage" } - ], - "discriminator": { - "propertyName": "role", - "mapping": { - "user": "#/components/schemas/UserMessage", - "system": "#/components/schemas/SystemMessage", - "tool": "#/components/schemas/ToolResponseMessage", - "assistant": "#/components/schemas/CompletionMessage" + }, + "additionalProperties": false, + "required": [ + "type", + "model", + "sampling_params" + ] + }, + "RegexParserScoringFnParams": { + "type": "object", + "properties": { + "type": { + "type": "string", + "const": "regex_parser", + "default": "regex_parser" + }, + "parsing_regexes": { + "type": "array", + "items": { + "type": "string" + } + }, + "aggregation_functions": { + "type": "array", + "items": { + "$ref": "#/components/schemas/AggregationFunctionType" + } } - } + }, + "additionalProperties": false, + "required": [ + "type" + ] }, "ResponseFormat": { "oneOf": [ @@ -2693,6 +3164,27 @@ } } }, + "ScoringFnParams": { + "oneOf": [ + { + "$ref": "#/components/schemas/LLMAsJudgeScoringFnParams" + }, + { + "$ref": "#/components/schemas/RegexParserScoringFnParams" + }, + { + "$ref": "#/components/schemas/BasicScoringFnParams" + } + ], + "discriminator": { + "propertyName": "type", + "mapping": { + "llm_as_judge": "#/components/schemas/LLMAsJudgeScoringFnParams", + "regex_parser": "#/components/schemas/RegexParserScoringFnParams", + "basic": "#/components/schemas/BasicScoringFnParams" + } + } + }, "SystemMessage": { "type": 
"object", "properties": { @@ -2735,6 +3227,611 @@ ], "description": "A text content item" }, + "ToolConfig": { + "type": "object", + "properties": { + "tool_choice": { + "type": "string", + "enum": [ + "auto", + "required" + ], + "description": "(Optional) Whether tool use is required or automatic. Defaults to ToolChoice.auto.", + "default": "auto" + }, + "tool_prompt_format": { + "type": "string", + "enum": [ + "json", + "function_tag", + "python_list" + ], + "description": "(Optional) Instructs the model how to format tool calls. By default, Llama Stack will attempt to use a format that is best adapted to the model. - `ToolPromptFormat.json`: The tool calls are formatted as a JSON object. - `ToolPromptFormat.function_tag`: The tool calls are enclosed in a tag. - `ToolPromptFormat.python_list`: The tool calls are output as Python syntax -- a list of function calls." + }, + "system_message_behavior": { + "type": "string", + "enum": [ + "append", + "replace" + ], + "description": "(Optional) Config for how to override the default system prompt. - `SystemMessageBehavior.append`: Appends the provided system message to the default system prompt. - `SystemMessageBehavior.replace`: Replaces the default system prompt with the provided system message. The system message can include the string '{{function_definitions}}' to indicate where the function definitions should be inserted.", + "default": "append" + } + }, + "additionalProperties": false, + "required": [ + "system_message_behavior" + ], + "description": "Configuration for tool use." + }, + "ToolDef": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "description": { + "type": "string" + }, + "parameters": { + "type": "array", + "items": { + "$ref": "#/components/schemas/ToolParameter" + } + }, + "metadata": { + "type": "object", + "additionalProperties": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + { + "type": "object" + } + ] + } + } + }, + "additionalProperties": false, + "required": [ + "name" + ] + }, + "ToolParameter": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "parameter_type": { + "type": "string" + }, + "description": { + "type": "string" + }, + "required": { + "type": "boolean", + "default": true + }, + "default": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + { + "type": "object" + } + ] + } + }, + "additionalProperties": false, + "required": [ + "name", + "parameter_type", + "description", + "required" + ] + }, + "TopKSamplingStrategy": { + "type": "object", + "properties": { + "type": { + "type": "string", + "const": "top_k", + "default": "top_k" + }, + "top_k": { + "type": "integer" + } + }, + "additionalProperties": false, + "required": [ + "type", + "top_k" + ] + }, + "TopPSamplingStrategy": { + "type": "object", + "properties": { + "type": { + "type": "string", + "const": "top_p", + "default": "top_p" + }, + "temperature": { + "type": "number" + }, + "top_p": { + "type": "number", + "default": 0.95 + } + }, + "additionalProperties": false, + "required": [ + "type" + ] + }, + "URL": { + "type": "object", + "properties": { + "uri": { + "type": "string" + } + }, + "additionalProperties": false, + "required": [ + "uri" + ] + }, + "DeprecatedEvaluateRowsRequest": { + "type": "object", + "properties": { + "input_rows": { + "type": "array", + "items": { + 
"type": "object", + "additionalProperties": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + { + "type": "object" + } + ] + } + } + }, + "scoring_functions": { + "type": "array", + "items": { + "type": "string" + } + }, + "task_config": { + "$ref": "#/components/schemas/BenchmarkConfig" + } + }, + "additionalProperties": false, + "required": [ + "input_rows", + "scoring_functions", + "task_config" + ] + }, + "EvaluateResponse": { + "type": "object", + "properties": { + "generations": { + "type": "array", + "items": { + "type": "object", + "additionalProperties": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + { + "type": "object" + } + ] + } + } + }, + "scores": { + "type": "object", + "additionalProperties": { + "$ref": "#/components/schemas/ScoringResult" + } + } + }, + "additionalProperties": false, + "required": [ + "generations", + "scores" + ] + }, + "ScoringResult": { + "type": "object", + "properties": { + "score_rows": { + "type": "array", + "items": { + "type": "object", + "additionalProperties": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + { + "type": "object" + } + ] + } + } + }, + "aggregated_results": { + "type": "object", + "additionalProperties": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + { + "type": "object" + } + ] + } + } + }, + "additionalProperties": false, + "required": [ + "score_rows", + "aggregated_results" + ] + }, + "Benchmark": { + "type": "object", + "properties": { + "identifier": { + "type": "string" + }, + "provider_resource_id": { + "type": "string" + }, + "provider_id": { + "type": "string" + }, + "type": { + "type": "string", + "const": "benchmark", + "default": "benchmark" + }, + "dataset_id": { + "type": "string" + }, + "scoring_functions": { + "type": "array", + "items": { + "type": "string" + } + }, + "metadata": { + "type": "object", + "additionalProperties": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + { + "type": "object" + } + ] + } + } + }, + "additionalProperties": false, + "required": [ + "identifier", + "provider_resource_id", + "provider_id", + "type", + "dataset_id", + "scoring_functions", + "metadata" + ] + }, + "JobStatus": { + "type": "string", + "enum": [ + "completed", + "in_progress", + "failed", + "scheduled" + ] + }, + "ListBenchmarksResponse": { + "type": "object", + "properties": { + "data": { + "type": "array", + "items": { + "$ref": "#/components/schemas/Benchmark" + } + } + }, + "additionalProperties": false, + "required": [ + "data" + ] + }, + "DeprecatedRegisterEvalTaskRequest": { + "type": "object", + "properties": { + "eval_task_id": { + "type": "string" + }, + "dataset_id": { + "type": "string" + }, + "scoring_functions": { + "type": "array", + "items": { + "type": "string" + } + }, + "provider_benchmark_id": { + "type": "string" + }, + "provider_id": { + "type": "string" + }, + "metadata": { + "type": "object", + "additionalProperties": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + 
{ + "type": "object" + } + ] + } + } + }, + "additionalProperties": false, + "required": [ + "eval_task_id", + "dataset_id", + "scoring_functions" + ] + }, + "DeprecatedRunEvalRequest": { + "type": "object", + "properties": { + "task_config": { + "$ref": "#/components/schemas/BenchmarkConfig" + } + }, + "additionalProperties": false, + "required": [ + "task_config" + ] + }, + "Job": { + "type": "object", + "properties": { + "job_id": { + "type": "string" + } + }, + "additionalProperties": false, + "required": [ + "job_id" + ] + }, + "AppendRowsRequest": { + "type": "object", + "properties": { + "dataset_id": { + "type": "string" + }, + "rows": { + "type": "array", + "items": { + "type": "object", + "additionalProperties": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + { + "type": "object" + } + ] + } + } + } + }, + "additionalProperties": false, + "required": [ + "dataset_id", + "rows" + ] + }, + "CompletionMessage": { + "type": "object", + "properties": { + "role": { + "type": "string", + "const": "assistant", + "default": "assistant", + "description": "Must be \"assistant\" to identify this as the model's response" + }, + "content": { + "$ref": "#/components/schemas/InterleavedContent", + "description": "The content of the model's response" + }, + "stop_reason": { + "type": "string", + "enum": [ + "end_of_turn", + "end_of_message", + "out_of_tokens" + ], + "description": "Reason why the model stopped generating. Options are: - `StopReason.end_of_turn`: The model finished generating the entire response. - `StopReason.end_of_message`: The model finished generating but generated a partial response -- usually, a tool call. The user may call the tool and continue the conversation with the tool's response. - `StopReason.out_of_tokens`: The model ran out of token budget." + }, + "tool_calls": { + "type": "array", + "items": { + "$ref": "#/components/schemas/ToolCall" + }, + "description": "List of tool calls. Each tool call is a ToolCall object." + } + }, + "additionalProperties": false, + "required": [ + "role", + "content", + "stop_reason" + ], + "description": "A message containing the model's (assistant) response in a chat conversation." + }, + "Message": { + "oneOf": [ + { + "$ref": "#/components/schemas/UserMessage" + }, + { + "$ref": "#/components/schemas/SystemMessage" + }, + { + "$ref": "#/components/schemas/ToolResponseMessage" + }, + { + "$ref": "#/components/schemas/CompletionMessage" + } + ], + "discriminator": { + "propertyName": "role", + "mapping": { + "user": "#/components/schemas/UserMessage", + "system": "#/components/schemas/SystemMessage", + "tool": "#/components/schemas/ToolResponseMessage", + "assistant": "#/components/schemas/CompletionMessage" + } + } + }, "ToolCall": { "type": "object", "properties": { @@ -2950,57 +4047,6 @@ ], "description": "A message representing the result of a tool invocation." 
}, - "TopKSamplingStrategy": { - "type": "object", - "properties": { - "type": { - "type": "string", - "const": "top_k", - "default": "top_k" - }, - "top_k": { - "type": "integer" - } - }, - "additionalProperties": false, - "required": [ - "type", - "top_k" - ] - }, - "TopPSamplingStrategy": { - "type": "object", - "properties": { - "type": { - "type": "string", - "const": "top_p", - "default": "top_p" - }, - "temperature": { - "type": "number" - }, - "top_p": { - "type": "number", - "default": 0.95 - } - }, - "additionalProperties": false, - "required": [ - "type" - ] - }, - "URL": { - "type": "object", - "properties": { - "uri": { - "type": "string" - } - }, - "additionalProperties": false, - "required": [ - "uri" - ] - }, "UserMessage": { "type": "object", "properties": { @@ -3309,43 +4355,6 @@ "job_uuid" ] }, - "ToolConfig": { - "type": "object", - "properties": { - "tool_choice": { - "type": "string", - "enum": [ - "auto", - "required" - ], - "description": "(Optional) Whether tool use is required or automatic. Defaults to ToolChoice.auto.", - "default": "auto" - }, - "tool_prompt_format": { - "type": "string", - "enum": [ - "json", - "function_tag", - "python_list" - ], - "description": "(Optional) Instructs the model how to format tool calls. By default, Llama Stack will attempt to use a format that is best adapted to the model. - `ToolPromptFormat.json`: The tool calls are formatted as a JSON object. - `ToolPromptFormat.function_tag`: The tool calls are enclosed in a tag. - `ToolPromptFormat.python_list`: The tool calls are output as Python syntax -- a list of function calls." - }, - "system_message_behavior": { - "type": "string", - "enum": [ - "append", - "replace" - ], - "description": "(Optional) Config for how to override the default system prompt. - `SystemMessageBehavior.append`: Appends the provided system message to the default system prompt. - `SystemMessageBehavior.replace`: Replaces the default system prompt with the provided system message. The system message can include the string '{{function_definitions}}' to indicate where the function definitions should be inserted.", - "default": "append" - } - }, - "additionalProperties": false, - "required": [ - "system_message_behavior" - ], - "description": "Configuration for tool use." - }, "ChatCompletionRequest": { "type": "object", "properties": { @@ -3644,218 +4653,6 @@ ], "description": "A chunk of a streamed completion response." }, - "AgentConfig": { - "type": "object", - "properties": { - "sampling_params": { - "$ref": "#/components/schemas/SamplingParams" - }, - "input_shields": { - "type": "array", - "items": { - "type": "string" - } - }, - "output_shields": { - "type": "array", - "items": { - "type": "string" - } - }, - "toolgroups": { - "type": "array", - "items": { - "$ref": "#/components/schemas/AgentTool" - } - }, - "client_tools": { - "type": "array", - "items": { - "$ref": "#/components/schemas/ToolDef" - } - }, - "tool_choice": { - "type": "string", - "enum": [ - "auto", - "required" - ], - "description": "Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities of the model." - }, - "tool_prompt_format": { - "type": "string", - "enum": [ - "json", - "function_tag", - "python_list" - ], - "description": "Prompt format for calling custom / zero shot tools." 
- }, - "tool_config": { - "$ref": "#/components/schemas/ToolConfig" - }, - "max_infer_iters": { - "type": "integer", - "default": 10 - }, - "model": { - "type": "string" - }, - "instructions": { - "type": "string" - }, - "enable_session_persistence": { - "type": "boolean" - }, - "response_format": { - "$ref": "#/components/schemas/ResponseFormat" - } - }, - "additionalProperties": false, - "required": [ - "model", - "instructions", - "enable_session_persistence" - ] - }, - "AgentTool": { - "oneOf": [ - { - "type": "string" - }, - { - "type": "object", - "properties": { - "name": { - "type": "string" - }, - "args": { - "type": "object", - "additionalProperties": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "boolean" - }, - { - "type": "number" - }, - { - "type": "string" - }, - { - "type": "array" - }, - { - "type": "object" - } - ] - } - } - }, - "additionalProperties": false, - "required": [ - "name", - "args" - ] - } - ] - }, - "ToolDef": { - "type": "object", - "properties": { - "name": { - "type": "string" - }, - "description": { - "type": "string" - }, - "parameters": { - "type": "array", - "items": { - "$ref": "#/components/schemas/ToolParameter" - } - }, - "metadata": { - "type": "object", - "additionalProperties": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "boolean" - }, - { - "type": "number" - }, - { - "type": "string" - }, - { - "type": "array" - }, - { - "type": "object" - } - ] - } - } - }, - "additionalProperties": false, - "required": [ - "name" - ] - }, - "ToolParameter": { - "type": "object", - "properties": { - "name": { - "type": "string" - }, - "parameter_type": { - "type": "string" - }, - "description": { - "type": "string" - }, - "required": { - "type": "boolean", - "default": true - }, - "default": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "boolean" - }, - { - "type": "number" - }, - { - "type": "string" - }, - { - "type": "array" - }, - { - "type": "object" - } - ] - } - }, - "additionalProperties": false, - "required": [ - "name", - "parameter_type", - "description", - "required" - ] - }, "CreateAgentRequest": { "type": "object", "properties": { @@ -4582,241 +5379,6 @@ ], "description": "Response containing generated embeddings." 
}, - "AgentCandidate": { - "type": "object", - "properties": { - "type": { - "type": "string", - "const": "agent", - "default": "agent" - }, - "config": { - "$ref": "#/components/schemas/AgentConfig" - } - }, - "additionalProperties": false, - "required": [ - "type", - "config" - ] - }, - "AggregationFunctionType": { - "type": "string", - "enum": [ - "average", - "median", - "categorical_count", - "accuracy" - ] - }, - "AppEvalTaskConfig": { - "type": "object", - "properties": { - "type": { - "type": "string", - "const": "app", - "default": "app" - }, - "eval_candidate": { - "$ref": "#/components/schemas/EvalCandidate" - }, - "scoring_params": { - "type": "object", - "additionalProperties": { - "$ref": "#/components/schemas/ScoringFnParams" - } - }, - "num_examples": { - "type": "integer" - } - }, - "additionalProperties": false, - "required": [ - "type", - "eval_candidate", - "scoring_params" - ] - }, - "BasicScoringFnParams": { - "type": "object", - "properties": { - "type": { - "type": "string", - "const": "basic", - "default": "basic" - }, - "aggregation_functions": { - "type": "array", - "items": { - "$ref": "#/components/schemas/AggregationFunctionType" - } - } - }, - "additionalProperties": false, - "required": [ - "type" - ] - }, - "BenchmarkEvalTaskConfig": { - "type": "object", - "properties": { - "type": { - "type": "string", - "const": "benchmark", - "default": "benchmark" - }, - "eval_candidate": { - "$ref": "#/components/schemas/EvalCandidate" - }, - "num_examples": { - "type": "integer" - } - }, - "additionalProperties": false, - "required": [ - "type", - "eval_candidate" - ] - }, - "EvalCandidate": { - "oneOf": [ - { - "$ref": "#/components/schemas/ModelCandidate" - }, - { - "$ref": "#/components/schemas/AgentCandidate" - } - ], - "discriminator": { - "propertyName": "type", - "mapping": { - "model": "#/components/schemas/ModelCandidate", - "agent": "#/components/schemas/AgentCandidate" - } - } - }, - "EvalTaskConfig": { - "oneOf": [ - { - "$ref": "#/components/schemas/BenchmarkEvalTaskConfig" - }, - { - "$ref": "#/components/schemas/AppEvalTaskConfig" - } - ], - "discriminator": { - "propertyName": "type", - "mapping": { - "benchmark": "#/components/schemas/BenchmarkEvalTaskConfig", - "app": "#/components/schemas/AppEvalTaskConfig" - } - } - }, - "LLMAsJudgeScoringFnParams": { - "type": "object", - "properties": { - "type": { - "type": "string", - "const": "llm_as_judge", - "default": "llm_as_judge" - }, - "judge_model": { - "type": "string" - }, - "prompt_template": { - "type": "string" - }, - "judge_score_regexes": { - "type": "array", - "items": { - "type": "string" - } - }, - "aggregation_functions": { - "type": "array", - "items": { - "$ref": "#/components/schemas/AggregationFunctionType" - } - } - }, - "additionalProperties": false, - "required": [ - "type", - "judge_model" - ] - }, - "ModelCandidate": { - "type": "object", - "properties": { - "type": { - "type": "string", - "const": "model", - "default": "model" - }, - "model": { - "type": "string" - }, - "sampling_params": { - "$ref": "#/components/schemas/SamplingParams" - }, - "system_message": { - "$ref": "#/components/schemas/SystemMessage" - } - }, - "additionalProperties": false, - "required": [ - "type", - "model", - "sampling_params" - ] - }, - "RegexParserScoringFnParams": { - "type": "object", - "properties": { - "type": { - "type": "string", - "const": "regex_parser", - "default": "regex_parser" - }, - "parsing_regexes": { - "type": "array", - "items": { - "type": "string" - } - }, - 
"aggregation_functions": { - "type": "array", - "items": { - "$ref": "#/components/schemas/AggregationFunctionType" - } - } - }, - "additionalProperties": false, - "required": [ - "type" - ] - }, - "ScoringFnParams": { - "oneOf": [ - { - "$ref": "#/components/schemas/LLMAsJudgeScoringFnParams" - }, - { - "$ref": "#/components/schemas/RegexParserScoringFnParams" - }, - { - "$ref": "#/components/schemas/BasicScoringFnParams" - } - ], - "discriminator": { - "propertyName": "type", - "mapping": { - "llm_as_judge": "#/components/schemas/LLMAsJudgeScoringFnParams", - "regex_parser": "#/components/schemas/RegexParserScoringFnParams", - "basic": "#/components/schemas/BasicScoringFnParams" - } - } - }, "EvaluateRowsRequest": { "type": "object", "properties": { @@ -4855,7 +5417,7 @@ } }, "task_config": { - "$ref": "#/components/schemas/EvalTaskConfig" + "$ref": "#/components/schemas/BenchmarkConfig" } }, "additionalProperties": false, @@ -4865,113 +5427,6 @@ "task_config" ] }, - "EvaluateResponse": { - "type": "object", - "properties": { - "generations": { - "type": "array", - "items": { - "type": "object", - "additionalProperties": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "boolean" - }, - { - "type": "number" - }, - { - "type": "string" - }, - { - "type": "array" - }, - { - "type": "object" - } - ] - } - } - }, - "scores": { - "type": "object", - "additionalProperties": { - "$ref": "#/components/schemas/ScoringResult" - } - } - }, - "additionalProperties": false, - "required": [ - "generations", - "scores" - ] - }, - "ScoringResult": { - "type": "object", - "properties": { - "score_rows": { - "type": "array", - "items": { - "type": "object", - "additionalProperties": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "boolean" - }, - { - "type": "number" - }, - { - "type": "string" - }, - { - "type": "array" - }, - { - "type": "object" - } - ] - } - } - }, - "aggregated_results": { - "type": "object", - "additionalProperties": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "boolean" - }, - { - "type": "number" - }, - { - "type": "string" - }, - { - "type": "array" - }, - { - "type": "object" - } - ] - } - } - }, - "additionalProperties": false, - "required": [ - "score_rows", - "aggregated_results" - ] - }, "Session": { "type": "object", "properties": { @@ -5287,69 +5742,6 @@ "type" ] }, - "EvalTask": { - "type": "object", - "properties": { - "identifier": { - "type": "string" - }, - "provider_resource_id": { - "type": "string" - }, - "provider_id": { - "type": "string" - }, - "type": { - "type": "string", - "const": "eval_task", - "default": "eval_task" - }, - "dataset_id": { - "type": "string" - }, - "scoring_functions": { - "type": "array", - "items": { - "type": "string" - } - }, - "metadata": { - "type": "object", - "additionalProperties": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "boolean" - }, - { - "type": "number" - }, - { - "type": "string" - }, - { - "type": "array" - }, - { - "type": "object" - } - ] - } - } - }, - "additionalProperties": false, - "required": [ - "identifier", - "provider_resource_id", - "provider_id", - "type", - "dataset_id", - "scoring_functions", - "metadata" - ] - }, "Model": { "type": "object", "properties": { @@ -5891,15 +6283,6 @@ ], "description": "Artifacts of a finetuning job." 
}, - "JobStatus": { - "type": "string", - "enum": [ - "completed", - "in_progress", - "failed", - "scheduled" - ] - }, "PostTrainingJobStatusResponse": { "type": "object", "properties": { @@ -6243,21 +6626,6 @@ "data" ] }, - "ListEvalTasksResponse": { - "type": "object", - "properties": { - "data": { - "type": "array", - "items": { - "$ref": "#/components/schemas/EvalTask" - } - } - }, - "additionalProperties": false, - "required": [ - "data" - ] - }, "ListModelsResponse": { "type": "object", "properties": { @@ -7169,6 +7537,60 @@ "data" ] }, + "RegisterBenchmarkRequest": { + "type": "object", + "properties": { + "benchmark_id": { + "type": "string" + }, + "dataset_id": { + "type": "string" + }, + "scoring_functions": { + "type": "array", + "items": { + "type": "string" + } + }, + "provider_benchmark_id": { + "type": "string" + }, + "provider_id": { + "type": "string" + }, + "metadata": { + "type": "object", + "additionalProperties": { + "oneOf": [ + { + "type": "null" + }, + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "type": "string" + }, + { + "type": "array" + }, + { + "type": "object" + } + ] + } + } + }, + "additionalProperties": false, + "required": [ + "benchmark_id", + "dataset_id", + "scoring_functions" + ] + }, "RegisterDatasetRequest": { "type": "object", "properties": { @@ -7223,60 +7645,6 @@ "url" ] }, - "RegisterEvalTaskRequest": { - "type": "object", - "properties": { - "eval_task_id": { - "type": "string" - }, - "dataset_id": { - "type": "string" - }, - "scoring_functions": { - "type": "array", - "items": { - "type": "string" - } - }, - "provider_eval_task_id": { - "type": "string" - }, - "provider_id": { - "type": "string" - }, - "metadata": { - "type": "object", - "additionalProperties": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "boolean" - }, - { - "type": "number" - }, - { - "type": "string" - }, - { - "type": "array" - }, - { - "type": "object" - } - ] - } - } - }, - "additionalProperties": false, - "required": [ - "eval_task_id", - "dataset_id", - "scoring_functions" - ] - }, "RegisterModelRequest": { "type": "object", "properties": { @@ -7468,7 +7836,7 @@ "type": "object", "properties": { "task_config": { - "$ref": "#/components/schemas/EvalTaskConfig" + "$ref": "#/components/schemas/BenchmarkConfig" } }, "additionalProperties": false, @@ -7476,18 +7844,6 @@ "task_config" ] }, - "Job": { - "type": "object", - "properties": { - "job_id": { - "type": "string" - } - }, - "additionalProperties": false, - "required": [ - "job_id" - ] - }, "RunShieldRequest": { "type": "object", "properties": { @@ -7970,6 +8326,9 @@ { "name": "BatchInference (Coming Soon)" }, + { + "name": "Benchmarks" + }, { "name": "DatasetIO" }, @@ -7979,9 +8338,6 @@ { "name": "Eval" }, - { - "name": "EvalTasks" - }, { "name": "Inference", "description": "This API provides the raw interface to the underlying models. 
Two kinds of models are supported:\n- LLM models: these models generate \"raw\" and \"chat\" (conversational) completions.\n- Embedding models: these models generate embeddings to be used for semantic search.", @@ -8033,10 +8389,10 @@ "tags": [ "Agents", "BatchInference (Coming Soon)", + "Benchmarks", "DatasetIO", "Datasets", "Eval", - "EvalTasks", "Inference", "Inspect", "Models", diff --git a/docs/_static/llama-stack-spec.yaml b/docs/_static/llama-stack-spec.yaml index a646d7e08..b30025020 100644 --- a/docs/_static/llama-stack-spec.yaml +++ b/docs/_static/llama-stack-spec.yaml @@ -10,6 +10,175 @@ info: servers: - url: http://any-hosted-llama-stack.com paths: + /v1/eval/tasks/{task_id}/evaluations: + post: + responses: + '200': + description: OK + content: + application/json: + schema: + $ref: '#/components/schemas/EvaluateResponse' + tags: + - Eval + description: '' + parameters: + - name: task_id + in: path + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/DeprecatedEvaluateRowsRequest' + required: true + deprecated: true + /v1/eval-tasks/{task_id}: + get: + responses: + '200': + description: OK + content: + application/json: + schema: + oneOf: + - $ref: '#/components/schemas/Benchmark' + - type: 'null' + tags: + - Benchmarks + description: '' + parameters: + - name: eval_task_id + in: query + required: true + schema: + type: string + deprecated: true + /v1/eval/tasks/{task_id}/jobs/{job_id}: + get: + responses: + '200': + description: OK + content: + application/json: + schema: + oneOf: + - $ref: '#/components/schemas/JobStatus' + - type: 'null' + tags: + - Eval + description: '' + parameters: + - name: task_id + in: path + required: true + schema: + type: string + - name: job_id + in: path + required: true + schema: + type: string + deprecated: true + delete: + responses: + '200': + description: OK + tags: + - Eval + description: '' + parameters: + - name: task_id + in: path + required: true + schema: + type: string + - name: job_id + in: path + required: true + schema: + type: string + deprecated: true + /v1/eval/tasks/{task_id}/jobs/{job_id}/result: + get: + responses: + '200': + description: OK + content: + application/json: + schema: + $ref: '#/components/schemas/EvaluateResponse' + tags: + - Eval + description: '' + parameters: + - name: task_id + in: path + required: true + schema: + type: string + - name: job_id + in: path + required: true + schema: + type: string + deprecated: true + /v1/eval-tasks: + get: + responses: + '200': + description: OK + content: + application/json: + schema: + $ref: '#/components/schemas/ListBenchmarksResponse' + tags: + - Benchmarks + description: '' + parameters: [] + deprecated: true + post: + responses: + '200': + description: OK + tags: + - Benchmarks + description: '' + parameters: [] + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/DeprecatedRegisterEvalTaskRequest' + required: true + deprecated: true + /v1/eval/tasks/{task_id}/jobs: + post: + responses: + '200': + description: OK + content: + application/json: + schema: + $ref: '#/components/schemas/Job' + tags: + - Eval + description: '' + parameters: + - name: task_id + in: path + required: true + schema: + type: string + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/DeprecatedRunEvalRequest' + required: true + deprecated: true /v1/datasetio/rows: get: responses: @@ -322,7 +491,7 @@ paths: schema: $ref: 
'#/components/schemas/EmbeddingsRequest' required: true - /v1/eval/tasks/{task_id}/evaluations: + /v1/eval/benchmarks/{benchmark_id}/evaluations: post: responses: '200': @@ -335,7 +504,7 @@ paths: - Eval description: '' parameters: - - name: task_id + - name: benchmark_id in: path required: true schema: @@ -407,6 +576,26 @@ paths: required: true schema: type: string + /v1/eval/benchmarks/{benchmark_id}: + get: + responses: + '200': + description: OK + content: + application/json: + schema: + oneOf: + - $ref: '#/components/schemas/Benchmark' + - type: 'null' + tags: + - Benchmarks + description: '' + parameters: + - name: benchmark_id + in: path + required: true + schema: + type: string /v1/datasets/{dataset_id}: get: responses: @@ -440,26 +629,6 @@ paths: required: true schema: type: string - /v1/eval-tasks/{eval_task_id}: - get: - responses: - '200': - description: OK - content: - application/json: - schema: - oneOf: - - $ref: '#/components/schemas/EvalTask' - - type: 'null' - tags: - - EvalTasks - description: '' - parameters: - - name: eval_task_id - in: path - required: true - schema: - type: string /v1/models/{model_id}: get: responses: @@ -802,7 +971,7 @@ paths: schema: $ref: '#/components/schemas/InvokeToolRequest' required: true - /v1/eval/tasks/{task_id}/jobs/{job_id}: + /v1/eval/benchmarks/{benchmark_id}/jobs/{job_id}: get: responses: '200': @@ -817,7 +986,7 @@ paths: - Eval description: '' parameters: - - name: task_id + - name: benchmark_id in: path required: true schema: @@ -835,7 +1004,7 @@ paths: - Eval description: '' parameters: - - name: task_id + - name: benchmark_id in: path required: true schema: @@ -845,7 +1014,7 @@ paths: required: true schema: type: string - /v1/eval/tasks/{task_id}/jobs/{job_id}/result: + /v1/eval/benchmarks/{benchmark_id}/jobs/{job_id}/result: get: responses: '200': @@ -858,16 +1027,43 @@ paths: - Eval description: '' parameters: + - name: benchmark_id + in: path + required: true + schema: + type: string - name: job_id in: path required: true schema: type: string - - name: task_id - in: path - required: true - schema: - type: string + /v1/eval/benchmarks: + get: + responses: + '200': + description: OK + content: + application/json: + schema: + $ref: '#/components/schemas/ListBenchmarksResponse' + tags: + - Benchmarks + description: '' + parameters: [] + post: + responses: + '200': + description: OK + tags: + - Benchmarks + description: '' + parameters: [] + requestBody: + content: + application/json: + schema: + $ref: '#/components/schemas/RegisterBenchmarkRequest' + required: true /v1/datasets: get: responses: @@ -895,33 +1091,6 @@ paths: schema: $ref: '#/components/schemas/RegisterDatasetRequest' required: true - /v1/eval-tasks: - get: - responses: - '200': - description: OK - content: - application/json: - schema: - $ref: '#/components/schemas/ListEvalTasksResponse' - tags: - - EvalTasks - description: '' - parameters: [] - post: - responses: - '200': - description: OK - tags: - - EvalTasks - description: '' - parameters: [] - requestBody: - content: - application/json: - schema: - $ref: '#/components/schemas/RegisterEvalTaskRequest' - required: true /v1/models: get: responses: @@ -1278,7 +1447,7 @@ paths: type: array items: type: string - /v1/eval/tasks/{task_id}/jobs: + /v1/eval/benchmarks/{benchmark_id}/jobs: post: responses: '200': @@ -1291,7 +1460,7 @@ paths: - Eval description: '' parameters: - - name: task_id + - name: benchmark_id in: path required: true schema: @@ -1429,65 +1598,146 @@ jsonSchemaDialect: >- 
https://json-schema.org/draft/2020-12/schema components: schemas: - AppendRowsRequest: + AgentCandidate: type: object properties: - dataset_id: + type: type: string - rows: - type: array - items: - type: object - additionalProperties: - oneOf: - - type: 'null' - - type: boolean - - type: number - - type: string - - type: array - - type: object + const: agent + default: agent + config: + $ref: '#/components/schemas/AgentConfig' additionalProperties: false required: - - dataset_id - - rows - CompletionMessage: + - type + - config + AgentConfig: type: object properties: - role: - type: string - const: assistant - default: assistant - description: >- - Must be "assistant" to identify this as the model's response - content: - $ref: '#/components/schemas/InterleavedContent' - description: The content of the model's response - stop_reason: + sampling_params: + $ref: '#/components/schemas/SamplingParams' + input_shields: + type: array + items: + type: string + output_shields: + type: array + items: + type: string + toolgroups: + type: array + items: + $ref: '#/components/schemas/AgentTool' + client_tools: + type: array + items: + $ref: '#/components/schemas/ToolDef' + tool_choice: type: string enum: - - end_of_turn - - end_of_message - - out_of_tokens + - auto + - required description: >- - Reason why the model stopped generating. Options are: - `StopReason.end_of_turn`: - The model finished generating the entire response. - `StopReason.end_of_message`: - The model finished generating but generated a partial response -- usually, - a tool call. The user may call the tool and continue the conversation - with the tool's response. - `StopReason.out_of_tokens`: The model ran - out of token budget. - tool_calls: - type: array - items: - $ref: '#/components/schemas/ToolCall' + Whether tool use is required or automatic. This is a hint to the model + which may not be followed. It depends on the Instruction Following capabilities + of the model. + tool_prompt_format: + type: string + enum: + - json + - function_tag + - python_list description: >- - List of tool calls. Each tool call is a ToolCall object. + Prompt format for calling custom / zero shot tools. + tool_config: + $ref: '#/components/schemas/ToolConfig' + max_infer_iters: + type: integer + default: 10 + model: + type: string + instructions: + type: string + enable_session_persistence: + type: boolean + response_format: + $ref: '#/components/schemas/ResponseFormat' additionalProperties: false required: - - role - - content - - stop_reason - description: >- - A message containing the model's (assistant) response in a chat conversation. 
+ - model + - instructions + - enable_session_persistence + AgentTool: + oneOf: + - type: string + - type: object + properties: + name: + type: string + args: + type: object + additionalProperties: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + additionalProperties: false + required: + - name + - args + AggregationFunctionType: + type: string + enum: + - average + - median + - categorical_count + - accuracy + BasicScoringFnParams: + type: object + properties: + type: + type: string + const: basic + default: basic + aggregation_functions: + type: array + items: + $ref: '#/components/schemas/AggregationFunctionType' + additionalProperties: false + required: + - type + BenchmarkConfig: + type: object + properties: + type: + type: string + const: benchmark + default: benchmark + eval_candidate: + $ref: '#/components/schemas/EvalCandidate' + scoring_params: + type: object + additionalProperties: + $ref: '#/components/schemas/ScoringFnParams' + num_examples: + type: integer + additionalProperties: false + required: + - type + - eval_candidate + - scoring_params + EvalCandidate: + oneOf: + - $ref: '#/components/schemas/ModelCandidate' + - $ref: '#/components/schemas/AgentCandidate' + discriminator: + propertyName: type + mapping: + model: '#/components/schemas/ModelCandidate' + agent: '#/components/schemas/AgentCandidate' GrammarResponseFormat: type: object properties: @@ -1598,19 +1848,65 @@ components: - json_schema description: >- Configuration for JSON schema-guided response generation. - Message: - oneOf: - - $ref: '#/components/schemas/UserMessage' - - $ref: '#/components/schemas/SystemMessage' - - $ref: '#/components/schemas/ToolResponseMessage' - - $ref: '#/components/schemas/CompletionMessage' - discriminator: - propertyName: role - mapping: - user: '#/components/schemas/UserMessage' - system: '#/components/schemas/SystemMessage' - tool: '#/components/schemas/ToolResponseMessage' - assistant: '#/components/schemas/CompletionMessage' + LLMAsJudgeScoringFnParams: + type: object + properties: + type: + type: string + const: llm_as_judge + default: llm_as_judge + judge_model: + type: string + prompt_template: + type: string + judge_score_regexes: + type: array + items: + type: string + aggregation_functions: + type: array + items: + $ref: '#/components/schemas/AggregationFunctionType' + additionalProperties: false + required: + - type + - judge_model + ModelCandidate: + type: object + properties: + type: + type: string + const: model + default: model + model: + type: string + sampling_params: + $ref: '#/components/schemas/SamplingParams' + system_message: + $ref: '#/components/schemas/SystemMessage' + additionalProperties: false + required: + - type + - model + - sampling_params + RegexParserScoringFnParams: + type: object + properties: + type: + type: string + const: regex_parser + default: regex_parser + parsing_regexes: + type: array + items: + type: string + aggregation_functions: + type: array + items: + $ref: '#/components/schemas/AggregationFunctionType' + additionalProperties: false + required: + - type ResponseFormat: oneOf: - $ref: '#/components/schemas/JsonSchemaResponseFormat' @@ -1645,6 +1941,17 @@ components: greedy: '#/components/schemas/GreedySamplingStrategy' top_p: '#/components/schemas/TopPSamplingStrategy' top_k: '#/components/schemas/TopKSamplingStrategy' + ScoringFnParams: + oneOf: + - $ref: '#/components/schemas/LLMAsJudgeScoringFnParams' + - $ref: '#/components/schemas/RegexParserScoringFnParams' + - 
$ref: '#/components/schemas/BasicScoringFnParams' + discriminator: + propertyName: type + mapping: + llm_as_judge: '#/components/schemas/LLMAsJudgeScoringFnParams' + regex_parser: '#/components/schemas/RegexParserScoringFnParams' + basic: '#/components/schemas/BasicScoringFnParams' SystemMessage: type: object properties: @@ -1683,6 +1990,383 @@ components: - type - text description: A text content item + ToolConfig: + type: object + properties: + tool_choice: + type: string + enum: + - auto + - required + description: >- + (Optional) Whether tool use is required or automatic. Defaults to ToolChoice.auto. + default: auto + tool_prompt_format: + type: string + enum: + - json + - function_tag + - python_list + description: >- + (Optional) Instructs the model how to format tool calls. By default, Llama + Stack will attempt to use a format that is best adapted to the model. + - `ToolPromptFormat.json`: The tool calls are formatted as a JSON object. + - `ToolPromptFormat.function_tag`: The tool calls are enclosed in a + tag. - `ToolPromptFormat.python_list`: The tool calls are output as Python + syntax -- a list of function calls. + system_message_behavior: + type: string + enum: + - append + - replace + description: >- + (Optional) Config for how to override the default system prompt. - `SystemMessageBehavior.append`: + Appends the provided system message to the default system prompt. - `SystemMessageBehavior.replace`: + Replaces the default system prompt with the provided system message. The + system message can include the string '{{function_definitions}}' to indicate + where the function definitions should be inserted. + default: append + additionalProperties: false + required: + - system_message_behavior + description: Configuration for tool use. + ToolDef: + type: object + properties: + name: + type: string + description: + type: string + parameters: + type: array + items: + $ref: '#/components/schemas/ToolParameter' + metadata: + type: object + additionalProperties: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + additionalProperties: false + required: + - name + ToolParameter: + type: object + properties: + name: + type: string + parameter_type: + type: string + description: + type: string + required: + type: boolean + default: true + default: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + additionalProperties: false + required: + - name + - parameter_type + - description + - required + TopKSamplingStrategy: + type: object + properties: + type: + type: string + const: top_k + default: top_k + top_k: + type: integer + additionalProperties: false + required: + - type + - top_k + TopPSamplingStrategy: + type: object + properties: + type: + type: string + const: top_p + default: top_p + temperature: + type: number + top_p: + type: number + default: 0.95 + additionalProperties: false + required: + - type + URL: + type: object + properties: + uri: + type: string + additionalProperties: false + required: + - uri + DeprecatedEvaluateRowsRequest: + type: object + properties: + input_rows: + type: array + items: + type: object + additionalProperties: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + scoring_functions: + type: array + items: + type: string + task_config: + $ref: '#/components/schemas/BenchmarkConfig' + additionalProperties: false + required: + - input_rows + - scoring_functions + - 
task_config + EvaluateResponse: + type: object + properties: + generations: + type: array + items: + type: object + additionalProperties: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + scores: + type: object + additionalProperties: + $ref: '#/components/schemas/ScoringResult' + additionalProperties: false + required: + - generations + - scores + ScoringResult: + type: object + properties: + score_rows: + type: array + items: + type: object + additionalProperties: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + aggregated_results: + type: object + additionalProperties: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + additionalProperties: false + required: + - score_rows + - aggregated_results + Benchmark: + type: object + properties: + identifier: + type: string + provider_resource_id: + type: string + provider_id: + type: string + type: + type: string + const: benchmark + default: benchmark + dataset_id: + type: string + scoring_functions: + type: array + items: + type: string + metadata: + type: object + additionalProperties: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + additionalProperties: false + required: + - identifier + - provider_resource_id + - provider_id + - type + - dataset_id + - scoring_functions + - metadata + JobStatus: + type: string + enum: + - completed + - in_progress + - failed + - scheduled + ListBenchmarksResponse: + type: object + properties: + data: + type: array + items: + $ref: '#/components/schemas/Benchmark' + additionalProperties: false + required: + - data + DeprecatedRegisterEvalTaskRequest: + type: object + properties: + eval_task_id: + type: string + dataset_id: + type: string + scoring_functions: + type: array + items: + type: string + provider_benchmark_id: + type: string + provider_id: + type: string + metadata: + type: object + additionalProperties: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + additionalProperties: false + required: + - eval_task_id + - dataset_id + - scoring_functions + DeprecatedRunEvalRequest: + type: object + properties: + task_config: + $ref: '#/components/schemas/BenchmarkConfig' + additionalProperties: false + required: + - task_config + Job: + type: object + properties: + job_id: + type: string + additionalProperties: false + required: + - job_id + AppendRowsRequest: + type: object + properties: + dataset_id: + type: string + rows: + type: array + items: + type: object + additionalProperties: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + additionalProperties: false + required: + - dataset_id + - rows + CompletionMessage: + type: object + properties: + role: + type: string + const: assistant + default: assistant + description: >- + Must be "assistant" to identify this as the model's response + content: + $ref: '#/components/schemas/InterleavedContent' + description: The content of the model's response + stop_reason: + type: string + enum: + - end_of_turn + - end_of_message + - out_of_tokens + description: >- + Reason why the model stopped generating. Options are: - `StopReason.end_of_turn`: + The model finished generating the entire response. 
- `StopReason.end_of_message`: + The model finished generating but generated a partial response -- usually, + a tool call. The user may call the tool and continue the conversation + with the tool's response. - `StopReason.out_of_tokens`: The model ran + out of token budget. + tool_calls: + type: array + items: + $ref: '#/components/schemas/ToolCall' + description: >- + List of tool calls. Each tool call is a ToolCall object. + additionalProperties: false + required: + - role + - content + - stop_reason + description: >- + A message containing the model's (assistant) response in a chat conversation. + Message: + oneOf: + - $ref: '#/components/schemas/UserMessage' + - $ref: '#/components/schemas/SystemMessage' + - $ref: '#/components/schemas/ToolResponseMessage' + - $ref: '#/components/schemas/CompletionMessage' + discriminator: + propertyName: role + mapping: + user: '#/components/schemas/UserMessage' + system: '#/components/schemas/SystemMessage' + tool: '#/components/schemas/ToolResponseMessage' + assistant: '#/components/schemas/CompletionMessage' ToolCall: type: object properties: @@ -1803,42 +2487,6 @@ components: - content description: >- A message representing the result of a tool invocation. - TopKSamplingStrategy: - type: object - properties: - type: - type: string - const: top_k - default: top_k - top_k: - type: integer - additionalProperties: false - required: - - type - - top_k - TopPSamplingStrategy: - type: object - properties: - type: - type: string - const: top_p - default: top_p - temperature: - type: number - top_p: - type: number - default: 0.95 - additionalProperties: false - required: - - type - URL: - type: object - properties: - uri: - type: string - additionalProperties: false - required: - - uri UserMessage: type: object properties: @@ -2063,46 +2711,6 @@ components: additionalProperties: false required: - job_uuid - ToolConfig: - type: object - properties: - tool_choice: - type: string - enum: - - auto - - required - description: >- - (Optional) Whether tool use is required or automatic. Defaults to ToolChoice.auto. - default: auto - tool_prompt_format: - type: string - enum: - - json - - function_tag - - python_list - description: >- - (Optional) Instructs the model how to format tool calls. By default, Llama - Stack will attempt to use a format that is best adapted to the model. - - `ToolPromptFormat.json`: The tool calls are formatted as a JSON object. - - `ToolPromptFormat.function_tag`: The tool calls are enclosed in a - tag. - `ToolPromptFormat.python_list`: The tool calls are output as Python - syntax -- a list of function calls. - system_message_behavior: - type: string - enum: - - append - - replace - description: >- - (Optional) Config for how to override the default system prompt. - `SystemMessageBehavior.append`: - Appends the provided system message to the default system prompt. - `SystemMessageBehavior.replace`: - Replaces the default system prompt with the provided system message. The - system message can include the string '{{function_definitions}}' to indicate - where the function definitions should be inserted. - default: append - additionalProperties: false - required: - - system_message_behavior - description: Configuration for tool use. ChatCompletionRequest: type: object properties: @@ -2356,133 +2964,6 @@ components: - delta description: >- A chunk of a streamed completion response. 
- AgentConfig: - type: object - properties: - sampling_params: - $ref: '#/components/schemas/SamplingParams' - input_shields: - type: array - items: - type: string - output_shields: - type: array - items: - type: string - toolgroups: - type: array - items: - $ref: '#/components/schemas/AgentTool' - client_tools: - type: array - items: - $ref: '#/components/schemas/ToolDef' - tool_choice: - type: string - enum: - - auto - - required - description: >- - Whether tool use is required or automatic. This is a hint to the model - which may not be followed. It depends on the Instruction Following capabilities - of the model. - tool_prompt_format: - type: string - enum: - - json - - function_tag - - python_list - description: >- - Prompt format for calling custom / zero shot tools. - tool_config: - $ref: '#/components/schemas/ToolConfig' - max_infer_iters: - type: integer - default: 10 - model: - type: string - instructions: - type: string - enable_session_persistence: - type: boolean - response_format: - $ref: '#/components/schemas/ResponseFormat' - additionalProperties: false - required: - - model - - instructions - - enable_session_persistence - AgentTool: - oneOf: - - type: string - - type: object - properties: - name: - type: string - args: - type: object - additionalProperties: - oneOf: - - type: 'null' - - type: boolean - - type: number - - type: string - - type: array - - type: object - additionalProperties: false - required: - - name - - args - ToolDef: - type: object - properties: - name: - type: string - description: - type: string - parameters: - type: array - items: - $ref: '#/components/schemas/ToolParameter' - metadata: - type: object - additionalProperties: - oneOf: - - type: 'null' - - type: boolean - - type: number - - type: string - - type: array - - type: object - additionalProperties: false - required: - - name - ToolParameter: - type: object - properties: - name: - type: string - parameter_type: - type: string - description: - type: string - required: - type: boolean - default: true - default: - oneOf: - - type: 'null' - - type: boolean - - type: number - - type: string - - type: array - - type: object - additionalProperties: false - required: - - name - - parameter_type - - description - - required CreateAgentRequest: type: object properties: @@ -2962,163 +3443,6 @@ components: - embeddings description: >- Response containing generated embeddings. 
- AgentCandidate: - type: object - properties: - type: - type: string - const: agent - default: agent - config: - $ref: '#/components/schemas/AgentConfig' - additionalProperties: false - required: - - type - - config - AggregationFunctionType: - type: string - enum: - - average - - median - - categorical_count - - accuracy - AppEvalTaskConfig: - type: object - properties: - type: - type: string - const: app - default: app - eval_candidate: - $ref: '#/components/schemas/EvalCandidate' - scoring_params: - type: object - additionalProperties: - $ref: '#/components/schemas/ScoringFnParams' - num_examples: - type: integer - additionalProperties: false - required: - - type - - eval_candidate - - scoring_params - BasicScoringFnParams: - type: object - properties: - type: - type: string - const: basic - default: basic - aggregation_functions: - type: array - items: - $ref: '#/components/schemas/AggregationFunctionType' - additionalProperties: false - required: - - type - BenchmarkEvalTaskConfig: - type: object - properties: - type: - type: string - const: benchmark - default: benchmark - eval_candidate: - $ref: '#/components/schemas/EvalCandidate' - num_examples: - type: integer - additionalProperties: false - required: - - type - - eval_candidate - EvalCandidate: - oneOf: - - $ref: '#/components/schemas/ModelCandidate' - - $ref: '#/components/schemas/AgentCandidate' - discriminator: - propertyName: type - mapping: - model: '#/components/schemas/ModelCandidate' - agent: '#/components/schemas/AgentCandidate' - EvalTaskConfig: - oneOf: - - $ref: '#/components/schemas/BenchmarkEvalTaskConfig' - - $ref: '#/components/schemas/AppEvalTaskConfig' - discriminator: - propertyName: type - mapping: - benchmark: '#/components/schemas/BenchmarkEvalTaskConfig' - app: '#/components/schemas/AppEvalTaskConfig' - LLMAsJudgeScoringFnParams: - type: object - properties: - type: - type: string - const: llm_as_judge - default: llm_as_judge - judge_model: - type: string - prompt_template: - type: string - judge_score_regexes: - type: array - items: - type: string - aggregation_functions: - type: array - items: - $ref: '#/components/schemas/AggregationFunctionType' - additionalProperties: false - required: - - type - - judge_model - ModelCandidate: - type: object - properties: - type: - type: string - const: model - default: model - model: - type: string - sampling_params: - $ref: '#/components/schemas/SamplingParams' - system_message: - $ref: '#/components/schemas/SystemMessage' - additionalProperties: false - required: - - type - - model - - sampling_params - RegexParserScoringFnParams: - type: object - properties: - type: - type: string - const: regex_parser - default: regex_parser - parsing_regexes: - type: array - items: - type: string - aggregation_functions: - type: array - items: - $ref: '#/components/schemas/AggregationFunctionType' - additionalProperties: false - required: - - type - ScoringFnParams: - oneOf: - - $ref: '#/components/schemas/LLMAsJudgeScoringFnParams' - - $ref: '#/components/schemas/RegexParserScoringFnParams' - - $ref: '#/components/schemas/BasicScoringFnParams' - discriminator: - propertyName: type - mapping: - llm_as_judge: '#/components/schemas/LLMAsJudgeScoringFnParams' - regex_parser: '#/components/schemas/RegexParserScoringFnParams' - basic: '#/components/schemas/BasicScoringFnParams' EvaluateRowsRequest: type: object properties: @@ -3139,64 +3463,12 @@ components: items: type: string task_config: - $ref: '#/components/schemas/EvalTaskConfig' + $ref: '#/components/schemas/BenchmarkConfig' 
additionalProperties: false required: - input_rows - scoring_functions - task_config - EvaluateResponse: - type: object - properties: - generations: - type: array - items: - type: object - additionalProperties: - oneOf: - - type: 'null' - - type: boolean - - type: number - - type: string - - type: array - - type: object - scores: - type: object - additionalProperties: - $ref: '#/components/schemas/ScoringResult' - additionalProperties: false - required: - - generations - - scores - ScoringResult: - type: object - properties: - score_rows: - type: array - items: - type: object - additionalProperties: - oneOf: - - type: 'null' - - type: boolean - - type: number - - type: string - - type: array - - type: object - aggregated_results: - type: object - additionalProperties: - oneOf: - - type: 'null' - - type: boolean - - type: number - - type: string - - type: array - - type: object - additionalProperties: false - required: - - score_rows - - aggregated_results Session: type: object properties: @@ -3401,44 +3673,6 @@ components: additionalProperties: false required: - type - EvalTask: - type: object - properties: - identifier: - type: string - provider_resource_id: - type: string - provider_id: - type: string - type: - type: string - const: eval_task - default: eval_task - dataset_id: - type: string - scoring_functions: - type: array - items: - type: string - metadata: - type: object - additionalProperties: - oneOf: - - type: 'null' - - type: boolean - - type: number - - type: string - - type: array - - type: object - additionalProperties: false - required: - - identifier - - provider_resource_id - - provider_id - - type - - dataset_id - - scoring_functions - - metadata Model: type: object properties: @@ -3766,13 +4000,6 @@ components: - job_uuid - checkpoints description: Artifacts of a finetuning job. 
- JobStatus: - type: string - enum: - - completed - - in_progress - - failed - - scheduled PostTrainingJobStatusResponse: type: object properties: @@ -3977,16 +4204,6 @@ components: additionalProperties: false required: - data - ListEvalTasksResponse: - type: object - properties: - data: - type: array - items: - $ref: '#/components/schemas/EvalTask' - additionalProperties: false - required: - - data ListModelsResponse: type: object properties: @@ -4569,6 +4786,36 @@ components: additionalProperties: false required: - data + RegisterBenchmarkRequest: + type: object + properties: + benchmark_id: + type: string + dataset_id: + type: string + scoring_functions: + type: array + items: + type: string + provider_benchmark_id: + type: string + provider_id: + type: string + metadata: + type: object + additionalProperties: + oneOf: + - type: 'null' + - type: boolean + - type: number + - type: string + - type: array + - type: object + additionalProperties: false + required: + - benchmark_id + - dataset_id + - scoring_functions RegisterDatasetRequest: type: object properties: @@ -4599,36 +4846,6 @@ components: - dataset_id - dataset_schema - url - RegisterEvalTaskRequest: - type: object - properties: - eval_task_id: - type: string - dataset_id: - type: string - scoring_functions: - type: array - items: - type: string - provider_eval_task_id: - type: string - provider_id: - type: string - metadata: - type: object - additionalProperties: - oneOf: - - type: 'null' - - type: boolean - - type: number - - type: string - - type: array - - type: object - additionalProperties: false - required: - - eval_task_id - - dataset_id - - scoring_functions RegisterModelRequest: type: object properties: @@ -4739,18 +4956,10 @@ components: type: object properties: task_config: - $ref: '#/components/schemas/EvalTaskConfig' + $ref: '#/components/schemas/BenchmarkConfig' additionalProperties: false required: - task_config - Job: - type: object - properties: - job_id: - type: string - additionalProperties: false - required: - - job_id RunShieldRequest: type: object properties: @@ -5049,10 +5258,10 @@ tags: x-displayName: >- Agents API for creating and interacting with agentic systems. - name: BatchInference (Coming Soon) + - name: Benchmarks - name: DatasetIO - name: Datasets - name: Eval - - name: EvalTasks - name: Inference description: >- This API provides the raw interface to the underlying models. 
Two kinds of models @@ -5083,10 +5292,10 @@ x-tagGroups: tags: - Agents - BatchInference (Coming Soon) + - Benchmarks - DatasetIO - Datasets - Eval - - EvalTasks - Inference - Inspect - Models diff --git a/docs/getting_started.ipynb b/docs/getting_started.ipynb index abe537c8e..ee616b471 100644 --- a/docs/getting_started.ipynb +++ b/docs/getting_started.ipynb @@ -324,7 +324,7 @@ "- vector_io\n", "container_image: null\n", "datasets: []\n", - "eval_tasks: []\n", + "benchmarks: []\n", "image_name: together\n", "metadata_store:\n", " db_path: /Users/ashwin/.llama/distributions/together/registry.db\n", @@ -508,7 +508,7 @@ "- vector_io\n", "container_image: null\n", "datasets: \u001b[1m[\u001b[0m\u001b[1m]\u001b[0m\n", - "eval_tasks: \u001b[1m[\u001b[0m\u001b[1m]\u001b[0m\n", + "benchmarks: \u001b[1m[\u001b[0m\u001b[1m]\u001b[0m\n", "image_name: together\n", "metadata_store:\n", " db_path: \u001b[35m/Users/ashwin/.llama/distributions/together/\u001b[0m\u001b[95mregistry.db\u001b[0m\n", diff --git a/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb b/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb index 84da25246..8eecf84ab 100644 --- a/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb +++ b/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb @@ -370,7 +370,7 @@ "- tool_runtime\n", "datasets: []\n", "container_image: null\n", - "eval_tasks: []\n", + "benchmarks: []\n", "image_name: together\n", "memory_banks: []\n", "metadata_store:\n", @@ -551,7 +551,7 @@ "- tool_runtime\n", "datasets: \u001b[1m[\u001b[0m\u001b[1m]\u001b[0m\n", "container_image: null\n", - "eval_tasks: \u001b[1m[\u001b[0m\u001b[1m]\u001b[0m\n", + "benchmarks: \u001b[1m[\u001b[0m\u001b[1m]\u001b[0m\n", "image_name: together\n", "memory_banks: \u001b[1m[\u001b[0m\u001b[1m]\u001b[0m\n", "metadata_store:\n", diff --git a/docs/openapi_generator/pyopenapi/generator.py b/docs/openapi_generator/pyopenapi/generator.py index a0385cae0..0f3b99784 100644 --- a/docs/openapi_generator/pyopenapi/generator.py +++ b/docs/openapi_generator/pyopenapi/generator.py @@ -647,6 +647,7 @@ class Generator: description = "\n".join( filter(None, [doc_string.short_description, doc_string.long_description]) ) + return Operation( tags=[op.defining_class.__name__], summary=None, @@ -656,6 +657,7 @@ class Generator: requestBody=requestBody, responses=responses, callbacks=callbacks, + deprecated=True if "DEPRECATED" in op.func_name else None, security=[] if op.public else None, ) diff --git a/docs/openapi_generator/pyopenapi/specification.py b/docs/openapi_generator/pyopenapi/specification.py index 4b54295c5..f96de58b6 100644 --- a/docs/openapi_generator/pyopenapi/specification.py +++ b/docs/openapi_generator/pyopenapi/specification.py @@ -117,6 +117,7 @@ class Operation: requestBody: Optional[RequestBody] = None callbacks: Optional[Dict[str, "Callback"]] = None security: Optional[List["SecurityRequirement"]] = None + deprecated: Optional[bool] = None @dataclass diff --git a/docs/source/building_applications/evals.md b/docs/source/building_applications/evals.md index c4cb476e4..f28e0d5fd 100644 --- a/docs/source/building_applications/evals.md +++ b/docs/source/building_applications/evals.md @@ -41,14 +41,14 @@ system_message = { "content": SYSTEM_PROMPT_TEMPLATE, } -client.eval_tasks.register( - eval_task_id="meta-reference::mmmu", +client.benchmarks.register( + benchmark_id="meta-reference::mmmu", dataset_id=f"mmmu-{subset}-{split}", scoring_functions=["basic::regex_parser_multiple_choice_answer"], ) response = client.eval.evaluate_rows( - 
task_id="meta-reference::mmmu", + benchmark_id="meta-reference::mmmu", input_rows=eval_rows, scoring_functions=["basic::regex_parser_multiple_choice_answer"], task_config={ @@ -99,14 +99,14 @@ eval_rows = client.datasetio.get_rows_paginated( ``` ```python -client.eval_tasks.register( - eval_task_id="meta-reference::simpleqa", +client.benchmarks.register( + benchmark_id="meta-reference::simpleqa", dataset_id=simpleqa_dataset_id, scoring_functions=["llm-as-judge::405b-simpleqa"], ) response = client.eval.evaluate_rows( - task_id="meta-reference::simpleqa", + benchmark_id="meta-reference::simpleqa", input_rows=eval_rows.rows, scoring_functions=["llm-as-judge::405b-simpleqa"], task_config={ @@ -156,7 +156,7 @@ agent_config = { } response = client.eval.evaluate_rows( - task_id="meta-reference::simpleqa", + benchmark_id="meta-reference::simpleqa", input_rows=eval_rows.rows, scoring_functions=["llm-as-judge::405b-simpleqa"], task_config={ diff --git a/docs/source/building_applications/evaluation.md b/docs/source/building_applications/evaluation.md index 91e5c552b..ad220f751 100644 --- a/docs/source/building_applications/evaluation.md +++ b/docs/source/building_applications/evaluation.md @@ -10,15 +10,15 @@ Here's how to set up basic evaluation: ```python # Create an evaluation task -response = client.eval_tasks.register( - eval_task_id="my_eval", +response = client.benchmarks.register( + benchmark_id="my_eval", dataset_id="my_dataset", scoring_functions=["accuracy", "relevance"], ) # Run evaluation job = client.eval.run_eval( - task_id="my_eval", + benchmark_id="my_eval", task_config={ "type": "app", "eval_candidate": {"type": "agent", "config": agent_config}, @@ -26,5 +26,5 @@ job = client.eval.run_eval( ) # Get results -result = client.eval.job_result(task_id="my_eval", job_id=job.job_id) +result = client.eval.job_result(benchmark_id="my_eval", job_id=job.job_id) ``` diff --git a/docs/source/concepts/evaluation_concepts.md b/docs/source/concepts/evaluation_concepts.md index 399d99d92..3ca4b0ac8 100644 --- a/docs/source/concepts/evaluation_concepts.md +++ b/docs/source/concepts/evaluation_concepts.md @@ -5,7 +5,7 @@ The Llama Stack Evaluation flow allows you to run evaluations on your GenAI appl We introduce a set of APIs in Llama Stack for supporting running evaluations of LLM applications. - `/datasetio` + `/datasets` API - `/scoring` + `/scoring_functions` API -- `/eval` + `/eval_tasks` API +- `/eval` + `/benchmarks` API This guide goes over the sets of APIs and developer experience flow of using Llama Stack to run evaluations for different use cases. Checkout our Colab notebook on working examples with evaluations [here](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing). @@ -21,7 +21,7 @@ The Evaluation APIs are associated with a set of Resources as shown in the follo - **Scoring**: evaluate outputs of the system. - Associated with `ScoringFunction` resource. We provide a suite of out-of-the box scoring functions and also the ability for you to add custom evaluators. These scoring functions are the core part of defining an evaluation task to output evaluation metrics. - **Eval**: generate outputs (via Inference or Agents) and perform scoring. - - Associated with `EvalTask` resource. + - Associated with `Benchmark` resource. Use the following decision tree to decide how to use LlamaStack Evaluation flow. 
diff --git a/docs/source/concepts/index.md b/docs/source/concepts/index.md index 1437ec623..403e47c48 100644 --- a/docs/source/concepts/index.md +++ b/docs/source/concepts/index.md @@ -42,7 +42,7 @@ Some of these APIs are associated with a set of **Resources**. Here is the mappi - **Tool Runtime** is associated with `ToolGroup` resources. - **DatasetIO** is associated with `Dataset` resources. - **Scoring** is associated with `ScoringFunction` resources. -- **Eval** is associated with `Model` and `EvalTask` resources. +- **Eval** is associated with `Model` and `Benchmark` resources. Furthermore, we allow these resources to be **federated** across multiple providers. For example, you may have some Llama models served by Fireworks while others are served by AWS Bedrock. Regardless, they will all work seamlessly with the same uniform Inference API provided by Llama Stack. diff --git a/docs/source/playground/index.md b/docs/source/playground/index.md index d74bf1a03..9691609ab 100644 --- a/docs/source/playground/index.md +++ b/docs/source/playground/index.md @@ -64,7 +64,7 @@ Interactive pages for users to play with and explore Llama Stack API capabilitie ``` ```bash - $ llama-stack-client eval_tasks register \ + $ llama-stack-client benchmarks register \ --eval-task-id meta-reference-mmlu \ --provider-id meta-reference \ --dataset-id mmlu \ @@ -86,7 +86,7 @@ Interactive pages for users to play with and explore Llama Stack API capabilitie - Under the hood, it uses Llama Stack's `/providers` API to get information about the providers. - **API Resources**: Inspect Llama Stack API resources - - This page allows you to inspect Llama Stack API resources (`models`, `datasets`, `memory_banks`, `eval_tasks`, `shields`). + - This page allows you to inspect Llama Stack API resources (`models`, `datasets`, `memory_banks`, `benchmarks`, `shields`). - Under the hood, it uses Llama Stack's `//list` API to get information about each resources. - Please visit [Core Concepts](https://llama-stack.readthedocs.io/en/latest/concepts/index.html) for more details about the resources. diff --git a/docs/source/references/evals_reference/index.md b/docs/source/references/evals_reference/index.md index 86f66208a..71dbb47e5 100644 --- a/docs/source/references/evals_reference/index.md +++ b/docs/source/references/evals_reference/index.md @@ -5,7 +5,7 @@ The Llama Stack Evaluation flow allows you to run evaluations on your GenAI appl We introduce a set of APIs in Llama Stack for supporting running evaluations of LLM applications. - `/datasetio` + `/datasets` API - `/scoring` + `/scoring_functions` API -- `/eval` + `/eval_tasks` API +- `/eval` + `/benchmarks` API This guide goes over the sets of APIs and developer experience flow of using Llama Stack to run evaluations for different use cases. Checkout our Colab notebook on working examples with evaluations [here](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing). @@ -21,7 +21,7 @@ The Evaluation APIs are associated with a set of Resources as shown in the follo - **Scoring**: evaluate outputs of the system. - Associated with `ScoringFunction` resource. We provide a suite of out-of-the box scoring functions and also the ability for you to add custom evaluators. These scoring functions are the core part of defining an evaluation task to output evaluation metrics. - **Eval**: generate outputs (via Inference or Agents) and perform scoring. - - Associated with `EvalTask` resource. + - Associated with `Benchmark` resource. 
Use the following decision tree to decide how to use LlamaStack Evaluation flow. @@ -77,14 +77,14 @@ system_message = { "content": SYSTEM_PROMPT_TEMPLATE, } -client.eval_tasks.register( - eval_task_id="meta-reference::mmmu", +client.benchmarks.register( + benchmark_id="meta-reference::mmmu", dataset_id=f"mmmu-{subset}-{split}", scoring_functions=["basic::regex_parser_multiple_choice_answer"], ) response = client.eval.evaluate_rows( - task_id="meta-reference::mmmu", + benchmark_id="meta-reference::mmmu", input_rows=eval_rows, scoring_functions=["basic::regex_parser_multiple_choice_answer"], task_config={ @@ -135,14 +135,14 @@ eval_rows = client.datasetio.get_rows_paginated( ``` ```python -client.eval_tasks.register( - eval_task_id="meta-reference::simpleqa", +client.benchmarks.register( + benchmark_id="meta-reference::simpleqa", dataset_id=simpleqa_dataset_id, scoring_functions=["llm-as-judge::405b-simpleqa"], ) response = client.eval.evaluate_rows( - task_id="meta-reference::simpleqa", + benchmark_id="meta-reference::simpleqa", input_rows=eval_rows.rows, scoring_functions=["llm-as-judge::405b-simpleqa"], task_config={ @@ -192,7 +192,7 @@ agent_config = { } response = client.eval.evaluate_rows( - task_id="meta-reference::simpleqa", + benchmark_id="meta-reference::simpleqa", input_rows=eval_rows.rows, scoring_functions=["llm-as-judge::405b-simpleqa"], task_config={ @@ -281,7 +281,7 @@ The following examples give the quick steps to start running evaluations using t #### Benchmark Evaluation CLI Usage: There are 2 inputs necessary for running a benchmark eval -- `eval-task-id`: the identifier associated with the eval task. Each `EvalTask` is parametrized by +- `eval-task-id`: the identifier associated with the eval task. Each `Benchmark` is parametrized by - `dataset_id`: the identifier associated with the dataset. - `List[scoring_function_id]`: list of scoring function identifiers. - `eval-task-config`: specifies the configuration of the model / agent to evaluate on. @@ -289,7 +289,7 @@ Usage: There are 2 inputs necessary for running a benchmark eval ``` llama-stack-client eval run_benchmark \ ---eval-task-config ~/eval_task_config.json \ +--eval-task-config ~/benchmark_config.json \ --visualize ``` @@ -309,15 +309,15 @@ llama-stack-client eval run_scoring ... --dataset-id --scoring-functions [ ...] [--provider-id ] [--provider-eval-task-id ] [--metadata ] +$ llama-stack-client benchmarks register --eval-task-id --dataset-id --scoring-functions [ ...] [--provider-id ] [--provider-eval-task-id ] [--metadata ] ``` Options: @@ -191,7 +191,7 @@ Options: - `--num-examples`: Optional. Number of examples to evaluate (useful for debugging) - `--visualize`: Optional flag. 
If set, visualizes evaluation results after completion -Example eval_task_config.json: +Example benchmark_config.json: ```json { "type": "benchmark", diff --git a/docs/source/references/python_sdk_reference/index.md b/docs/source/references/python_sdk_reference/index.md index 8a06e2244..9d1130422 100644 --- a/docs/source/references/python_sdk_reference/index.md +++ b/docs/source/references/python_sdk_reference/index.md @@ -181,8 +181,8 @@ from llama_stack_client.types import EvaluateResponse, Job Methods: -- client.eval.evaluate_rows(task_id, \*\*params) -> EvaluateResponse -- client.eval.run_eval(task_id, \*\*params) -> Job +- client.eval.evaluate_rows(benchmark_id, \*\*params) -> EvaluateResponse +- client.eval.run_eval(benchmark_id, \*\*params) -> Job ### Jobs @@ -194,9 +194,9 @@ from llama_stack_client.types.eval import JobStatusResponse Methods: -- client.eval.jobs.retrieve(job_id, \*, task_id) -> EvaluateResponse -- client.eval.jobs.cancel(job_id, \*, task_id) -> None -- client.eval.jobs.status(job_id, \*, task_id) -> Optional[JobStatusResponse] +- client.eval.jobs.retrieve(job_id, \*, benchmark_id) -> EvaluateResponse +- client.eval.jobs.cancel(job_id, \*, benchmark_id) -> None +- client.eval.jobs.status(job_id, \*, benchmark_id) -> Optional[JobStatusResponse] ## Inspect @@ -443,20 +443,20 @@ Methods: - client.scoring_functions.list() -> ScoringFunctionListResponse - client.scoring_functions.register(\*\*params) -> None -## EvalTasks +## Benchmarks Types: ```python from llama_stack_client.types import ( - EvalTask, - ListEvalTasksResponse, - EvalTaskListResponse, + Benchmark, + ListBenchmarksResponse, + BenchmarkListResponse, ) ``` Methods: -- client.eval_tasks.retrieve(eval_task_id) -> Optional[EvalTask] -- client.eval_tasks.list() -> EvalTaskListResponse -- client.eval_tasks.register(\*\*params) -> None +- client.benchmarks.retrieve(benchmark_id) -> Optional[Benchmark] +- client.benchmarks.list() -> BenchmarkListResponse +- client.benchmarks.register(\*\*params) -> None diff --git a/llama_stack/apis/eval_tasks/__init__.py b/llama_stack/apis/benchmarks/__init__.py similarity index 81% rename from llama_stack/apis/eval_tasks/__init__.py rename to llama_stack/apis/benchmarks/__init__.py index 7ca216706..f8f564957 100644 --- a/llama_stack/apis/eval_tasks/__init__.py +++ b/llama_stack/apis/benchmarks/__init__.py @@ -4,4 +4,4 @@ # This source code is licensed under the terms described in the LICENSE file in # the root directory of this source tree. -from .eval_tasks import * # noqa: F401 F403 +from .benchmarks import * # noqa: F401 F403 diff --git a/llama_stack/apis/benchmarks/benchmarks.py b/llama_stack/apis/benchmarks/benchmarks.py new file mode 100644 index 000000000..50019b18c --- /dev/null +++ b/llama_stack/apis/benchmarks/benchmarks.py @@ -0,0 +1,86 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. 
+from typing import Any, Dict, List, Literal, Optional, Protocol, runtime_checkable + +from llama_models.schema_utils import json_schema_type, webmethod +from pydantic import BaseModel, Field + +from llama_stack.apis.resource import Resource, ResourceType + + +class CommonBenchmarkFields(BaseModel): + dataset_id: str + scoring_functions: List[str] + metadata: Dict[str, Any] = Field( + default_factory=dict, + description="Metadata for this evaluation task", + ) + + +@json_schema_type +class Benchmark(CommonBenchmarkFields, Resource): + type: Literal[ResourceType.benchmark.value] = ResourceType.benchmark.value + + @property + def benchmark_id(self) -> str: + return self.identifier + + @property + def provider_benchmark_id(self) -> str: + return self.provider_resource_id + + +class BenchmarkInput(CommonBenchmarkFields, BaseModel): + benchmark_id: str + provider_id: Optional[str] = None + provider_benchmark_id: Optional[str] = None + + +class ListBenchmarksResponse(BaseModel): + data: List[Benchmark] + + +@runtime_checkable +class Benchmarks(Protocol): + @webmethod(route="/eval/benchmarks", method="GET") + async def list_benchmarks(self) -> ListBenchmarksResponse: ... + + @webmethod(route="/eval/benchmarks/{benchmark_id}", method="GET") + async def get_benchmark( + self, + benchmark_id: str, + ) -> Optional[Benchmark]: ... + + @webmethod(route="/eval/benchmarks", method="POST") + async def register_benchmark( + self, + benchmark_id: str, + dataset_id: str, + scoring_functions: List[str], + provider_benchmark_id: Optional[str] = None, + provider_id: Optional[str] = None, + metadata: Optional[Dict[str, Any]] = None, + ) -> None: ... + + @webmethod(route="/eval-tasks", method="GET") + async def DEPRECATED_list_eval_tasks(self) -> ListBenchmarksResponse: ... + + @webmethod(route="/eval-tasks/{task_id}", method="GET") + async def DEPRECATED_get_eval_task( + self, + eval_task_id: str, + ) -> Optional[Benchmark]: ... + + @webmethod(route="/eval-tasks", method="POST") + async def DEPRECATED_register_eval_task( + self, + eval_task_id: str, + dataset_id: str, + scoring_functions: List[str], + provider_benchmark_id: Optional[str] = None, + provider_id: Optional[str] = None, + metadata: Optional[Dict[str, Any]] = None, + ) -> None: ... 
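For orientation, a minimal client-side sketch of what the rename above means in practice. This is illustrative only: the base URL is an assumption about a locally running distribution, and the dataset / scoring-function identifiers are the ones used elsewhere in this patch series, so they must already be registered on the stack.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5001")  # assumed local distribution

# Deprecated spelling (still served through the /eval-tasks routes kept above):
# client.eval_tasks.register(eval_task_id="meta-reference-mmlu", ...)

# New spelling introduced by this patch series:
client.benchmarks.register(
    benchmark_id="meta-reference-mmlu",
    dataset_id="mmlu",
    scoring_functions=["basic::regex_parser_multiple_choice_answer"],
)

# List registered benchmarks and print their identifiers
print([b.identifier for b in client.benchmarks.list()])
```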
diff --git a/llama_stack/apis/datatypes.py b/llama_stack/apis/datatypes.py index ccc395b80..0751b2c9b 100644 --- a/llama_stack/apis/datatypes.py +++ b/llama_stack/apis/datatypes.py @@ -28,7 +28,7 @@ class Api(Enum): vector_dbs = "vector_dbs" datasets = "datasets" scoring_functions = "scoring_functions" - eval_tasks = "eval_tasks" + benchmarks = "benchmarks" tool_groups = "tool_groups" # built-in API diff --git a/llama_stack/apis/eval/eval.py b/llama_stack/apis/eval/eval.py index ae13a5bd9..e5c782150 100644 --- a/llama_stack/apis/eval/eval.py +++ b/llama_stack/apis/eval/eval.py @@ -38,19 +38,9 @@ EvalCandidate = register_schema( @json_schema_type -class BenchmarkEvalTaskConfig(BaseModel): +class BenchmarkConfig(BaseModel): type: Literal["benchmark"] = "benchmark" eval_candidate: EvalCandidate - num_examples: Optional[int] = Field( - description="Number of examples to evaluate (useful for testing), if not provided, all examples in the dataset will be evaluated", - default=None, - ) - - -@json_schema_type -class AppEvalTaskConfig(BaseModel): - type: Literal["app"] = "app" - eval_candidate: EvalCandidate scoring_params: Dict[str, ScoringFnParams] = Field( description="Map between scoring function id and parameters for each scoring function you want to run", default_factory=dict, @@ -62,12 +52,6 @@ class AppEvalTaskConfig(BaseModel): # we could optinally add any specific dataset config here -EvalTaskConfig = register_schema( - Annotated[Union[BenchmarkEvalTaskConfig, AppEvalTaskConfig], Field(discriminator="type")], - name="EvalTaskConfig", -) - - @json_schema_type class EvaluateResponse(BaseModel): generations: List[Dict[str, Any]] @@ -76,27 +60,52 @@ class EvaluateResponse(BaseModel): class Eval(Protocol): - @webmethod(route="/eval/tasks/{task_id}/jobs", method="POST") + @webmethod(route="/eval/benchmarks/{benchmark_id}/jobs", method="POST") async def run_eval( + self, + benchmark_id: str, + task_config: BenchmarkConfig, + ) -> Job: ... + + @webmethod(route="/eval/benchmarks/{benchmark_id}/evaluations", method="POST") + async def evaluate_rows( + self, + benchmark_id: str, + input_rows: List[Dict[str, Any]], + scoring_functions: List[str], + task_config: BenchmarkConfig, + ) -> EvaluateResponse: ... + + @webmethod(route="/eval/benchmarks/{benchmark_id}/jobs/{job_id}", method="GET") + async def job_status(self, benchmark_id: str, job_id: str) -> Optional[JobStatus]: ... + + @webmethod(route="/eval/benchmarks/{benchmark_id}/jobs/{job_id}", method="DELETE") + async def job_cancel(self, benchmark_id: str, job_id: str) -> None: ... + + @webmethod(route="/eval/benchmarks/{benchmark_id}/jobs/{job_id}/result", method="GET") + async def job_result(self, benchmark_id: str, job_id: str) -> EvaluateResponse: ... + + @webmethod(route="/eval/tasks/{task_id}/jobs", method="POST") + async def DEPRECATED_run_eval( self, task_id: str, - task_config: EvalTaskConfig, + task_config: BenchmarkConfig, ) -> Job: ... @webmethod(route="/eval/tasks/{task_id}/evaluations", method="POST") - async def evaluate_rows( + async def DEPRECATED_evaluate_rows( self, task_id: str, input_rows: List[Dict[str, Any]], scoring_functions: List[str], - task_config: EvalTaskConfig, + task_config: BenchmarkConfig, ) -> EvaluateResponse: ... @webmethod(route="/eval/tasks/{task_id}/jobs/{job_id}", method="GET") - async def job_status(self, task_id: str, job_id: str) -> Optional[JobStatus]: ... + async def DEPRECATED_job_status(self, task_id: str, job_id: str) -> Optional[JobStatus]: ... 
@webmethod(route="/eval/tasks/{task_id}/jobs/{job_id}", method="DELETE") - async def job_cancel(self, task_id: str, job_id: str) -> None: ... + async def DEPRECATED_job_cancel(self, task_id: str, job_id: str) -> None: ... @webmethod(route="/eval/tasks/{task_id}/jobs/{job_id}/result", method="GET") - async def job_result(self, job_id: str, task_id: str) -> EvaluateResponse: ... + async def DEPRECATED_job_result(self, task_id: str, job_id: str) -> EvaluateResponse: ... diff --git a/llama_stack/apis/eval_tasks/eval_tasks.py b/llama_stack/apis/eval_tasks/eval_tasks.py deleted file mode 100644 index a0a533055..000000000 --- a/llama_stack/apis/eval_tasks/eval_tasks.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. -from typing import Any, Dict, List, Literal, Optional, Protocol, runtime_checkable - -from llama_models.schema_utils import json_schema_type, webmethod -from pydantic import BaseModel, Field - -from llama_stack.apis.resource import Resource, ResourceType - - -class CommonEvalTaskFields(BaseModel): - dataset_id: str - scoring_functions: List[str] - metadata: Dict[str, Any] = Field( - default_factory=dict, - description="Metadata for this evaluation task", - ) - - -@json_schema_type -class EvalTask(CommonEvalTaskFields, Resource): - type: Literal[ResourceType.eval_task.value] = ResourceType.eval_task.value - - @property - def eval_task_id(self) -> str: - return self.identifier - - @property - def provider_eval_task_id(self) -> str: - return self.provider_resource_id - - -class EvalTaskInput(CommonEvalTaskFields, BaseModel): - eval_task_id: str - provider_id: Optional[str] = None - provider_eval_task_id: Optional[str] = None - - -class ListEvalTasksResponse(BaseModel): - data: List[EvalTask] - - -@runtime_checkable -class EvalTasks(Protocol): - @webmethod(route="/eval-tasks", method="GET") - async def list_eval_tasks(self) -> ListEvalTasksResponse: ... - - @webmethod(route="/eval-tasks/{eval_task_id}", method="GET") - async def get_eval_task( - self, - eval_task_id: str, - ) -> Optional[EvalTask]: ... - - @webmethod(route="/eval-tasks", method="POST") - async def register_eval_task( - self, - eval_task_id: str, - dataset_id: str, - scoring_functions: List[str], - provider_eval_task_id: Optional[str] = None, - provider_id: Optional[str] = None, - metadata: Optional[Dict[str, Any]] = None, - ) -> None: ... 
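Taken together, the new `Benchmarks` and `Eval` protocols above and the deleted `EvalTasks` module amount to the route rename summarized below. The mapping is only a reading aid assembled from the `@webmethod` decorators in this patch; it is not something the patch itself adds.

```python
# Old (deprecated) route -> new route, as declared in the @webmethod decorators above.
EVAL_ROUTE_RENAMES = {
    "GET /eval-tasks": "GET /eval/benchmarks",
    "GET /eval-tasks/{eval_task_id}": "GET /eval/benchmarks/{benchmark_id}",
    "POST /eval-tasks": "POST /eval/benchmarks",
    "POST /eval/tasks/{task_id}/jobs": "POST /eval/benchmarks/{benchmark_id}/jobs",
    "POST /eval/tasks/{task_id}/evaluations": "POST /eval/benchmarks/{benchmark_id}/evaluations",
    "GET /eval/tasks/{task_id}/jobs/{job_id}": "GET /eval/benchmarks/{benchmark_id}/jobs/{job_id}",
    # job cancel (DELETE) and job result (GET .../result) follow the same pattern.
}
```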
diff --git a/llama_stack/apis/resource.py b/llama_stack/apis/resource.py index 145113a5d..70ec63c55 100644 --- a/llama_stack/apis/resource.py +++ b/llama_stack/apis/resource.py @@ -15,7 +15,7 @@ class ResourceType(Enum): vector_db = "vector_db" dataset = "dataset" scoring_function = "scoring_function" - eval_task = "eval_task" + benchmark = "benchmark" tool = "tool" tool_group = "tool_group" diff --git a/llama_stack/distribution/datatypes.py b/llama_stack/distribution/datatypes.py index 97706f22a..f62996081 100644 --- a/llama_stack/distribution/datatypes.py +++ b/llama_stack/distribution/datatypes.py @@ -8,10 +8,10 @@ from typing import Annotated, Any, Dict, List, Optional, Union from pydantic import BaseModel, Field +from llama_stack.apis.benchmarks import Benchmark, BenchmarkInput from llama_stack.apis.datasetio import DatasetIO from llama_stack.apis.datasets import Dataset, DatasetInput from llama_stack.apis.eval import Eval -from llama_stack.apis.eval_tasks import EvalTask, EvalTaskInput from llama_stack.apis.inference import Inference from llama_stack.apis.models import Model, ModelInput from llama_stack.apis.safety import Safety @@ -37,7 +37,7 @@ RoutableObject = Union[ VectorDB, Dataset, ScoringFn, - EvalTask, + Benchmark, Tool, ToolGroup, ] @@ -50,7 +50,7 @@ RoutableObjectWithProvider = Annotated[ VectorDB, Dataset, ScoringFn, - EvalTask, + Benchmark, Tool, ToolGroup, ], @@ -173,7 +173,7 @@ a default SQLite store will be used.""", vector_dbs: List[VectorDBInput] = Field(default_factory=list) datasets: List[DatasetInput] = Field(default_factory=list) scoring_fns: List[ScoringFnInput] = Field(default_factory=list) - eval_tasks: List[EvalTaskInput] = Field(default_factory=list) + benchmarks: List[BenchmarkInput] = Field(default_factory=list) tool_groups: List[ToolGroupInput] = Field(default_factory=list) server: ServerConfig = Field( diff --git a/llama_stack/distribution/distribution.py b/llama_stack/distribution/distribution.py index 2dcf38463..384e2c3c8 100644 --- a/llama_stack/distribution/distribution.py +++ b/llama_stack/distribution/distribution.py @@ -44,7 +44,7 @@ def builtin_automatically_routed_apis() -> List[AutoRoutedApiInfo]: router_api=Api.scoring, ), AutoRoutedApiInfo( - routing_table_api=Api.eval_tasks, + routing_table_api=Api.benchmarks, router_api=Api.eval, ), AutoRoutedApiInfo( diff --git a/llama_stack/distribution/resolver.py b/llama_stack/distribution/resolver.py index 353c2971b..0bc2e774c 100644 --- a/llama_stack/distribution/resolver.py +++ b/llama_stack/distribution/resolver.py @@ -9,10 +9,10 @@ import logging from typing import Any, Dict, List, Set from llama_stack.apis.agents import Agents +from llama_stack.apis.benchmarks import Benchmarks from llama_stack.apis.datasetio import DatasetIO from llama_stack.apis.datasets import Datasets from llama_stack.apis.eval import Eval -from llama_stack.apis.eval_tasks import EvalTasks from llama_stack.apis.inference import Inference from llama_stack.apis.inspect import Inspect from llama_stack.apis.models import Models @@ -37,8 +37,8 @@ from llama_stack.distribution.store import DistributionRegistry from llama_stack.distribution.utils.dynamic import instantiate_class_type from llama_stack.providers.datatypes import ( Api, + BenchmarksProtocolPrivate, DatasetsProtocolPrivate, - EvalTasksProtocolPrivate, InlineProviderSpec, ModelsProtocolPrivate, ProviderSpec, @@ -73,7 +73,7 @@ def api_protocol_map() -> Dict[Api, Any]: Api.scoring: Scoring, Api.scoring_functions: ScoringFunctions, Api.eval: Eval, - Api.eval_tasks: 
EvalTasks, + Api.benchmarks: Benchmarks, Api.post_training: PostTraining, Api.tool_groups: ToolGroups, Api.tool_runtime: ToolRuntime, @@ -92,7 +92,7 @@ def additional_protocols_map() -> Dict[Api, Any]: ScoringFunctions, Api.scoring_functions, ), - Api.eval: (EvalTasksProtocolPrivate, EvalTasks, Api.eval_tasks), + Api.eval: (BenchmarksProtocolPrivate, Benchmarks, Api.benchmarks), } diff --git a/llama_stack/distribution/routers/__init__.py b/llama_stack/distribution/routers/__init__.py index 18197ca7f..a54f57fb3 100644 --- a/llama_stack/distribution/routers/__init__.py +++ b/llama_stack/distribution/routers/__init__.py @@ -11,8 +11,8 @@ from llama_stack.distribution.store import DistributionRegistry from llama_stack.providers.datatypes import Api, RoutingTable from .routing_tables import ( + BenchmarksRoutingTable, DatasetsRoutingTable, - EvalTasksRoutingTable, ModelsRoutingTable, ScoringFunctionsRoutingTable, ShieldsRoutingTable, @@ -33,7 +33,7 @@ async def get_routing_table_impl( "shields": ShieldsRoutingTable, "datasets": DatasetsRoutingTable, "scoring_functions": ScoringFunctionsRoutingTable, - "eval_tasks": EvalTasksRoutingTable, + "benchmarks": BenchmarksRoutingTable, "tool_groups": ToolGroupsRoutingTable, } diff --git a/llama_stack/distribution/routers/routers.py b/llama_stack/distribution/routers/routers.py index e716e44b0..f45975189 100644 --- a/llama_stack/distribution/routers/routers.py +++ b/llama_stack/distribution/routers/routers.py @@ -9,9 +9,8 @@ from typing import Any, AsyncGenerator, Dict, List, Optional from llama_stack.apis.common.content_types import URL, InterleavedContent from llama_stack.apis.datasetio import DatasetIO, PaginatedRowsResult from llama_stack.apis.eval import ( - AppEvalTaskConfig, + BenchmarkConfig, Eval, - EvalTaskConfig, EvaluateResponse, Job, JobStatus, @@ -347,23 +346,23 @@ class EvalRouter(Eval): async def run_eval( self, - task_id: str, - task_config: AppEvalTaskConfig, + benchmark_id: str, + task_config: BenchmarkConfig, ) -> Job: - return await self.routing_table.get_provider_impl(task_id).run_eval( - task_id=task_id, + return await self.routing_table.get_provider_impl(benchmark_id).run_eval( + benchmark_id=benchmark_id, task_config=task_config, ) async def evaluate_rows( self, - task_id: str, + benchmark_id: str, input_rows: List[Dict[str, Any]], scoring_functions: List[str], - task_config: EvalTaskConfig, + task_config: BenchmarkConfig, ) -> EvaluateResponse: - return await self.routing_table.get_provider_impl(task_id).evaluate_rows( - task_id=task_id, + return await self.routing_table.get_provider_impl(benchmark_id).evaluate_rows( + benchmark_id=benchmark_id, input_rows=input_rows, scoring_functions=scoring_functions, task_config=task_config, @@ -371,30 +370,72 @@ class EvalRouter(Eval): async def job_status( self, - task_id: str, + benchmark_id: str, job_id: str, ) -> Optional[JobStatus]: - return await self.routing_table.get_provider_impl(task_id).job_status(task_id, job_id) + return await self.routing_table.get_provider_impl(benchmark_id).job_status(benchmark_id, job_id) async def job_cancel( self, - task_id: str, + benchmark_id: str, job_id: str, ) -> None: - await self.routing_table.get_provider_impl(task_id).job_cancel( - task_id, + await self.routing_table.get_provider_impl(benchmark_id).job_cancel( + benchmark_id, job_id, ) async def job_result( + self, + benchmark_id: str, + job_id: str, + ) -> EvaluateResponse: + return await self.routing_table.get_provider_impl(benchmark_id).job_result( + benchmark_id, + job_id, + ) + + async def 
DEPRECATED_run_eval( + self, + task_id: str, + task_config: BenchmarkConfig, + ) -> Job: + return await self.run_eval(benchmark_id=task_id, task_config=task_config) + + async def DEPRECATED_evaluate_rows( + self, + task_id: str, + input_rows: List[Dict[str, Any]], + scoring_functions: List[str], + task_config: BenchmarkConfig, + ) -> EvaluateResponse: + return await self.evaluate_rows( + benchmark_id=task_id, + input_rows=input_rows, + scoring_functions=scoring_functions, + task_config=task_config, + ) + + async def DEPRECATED_job_status( + self, + task_id: str, + job_id: str, + ) -> Optional[JobStatus]: + return await self.job_status(benchmark_id=task_id, job_id=job_id) + + async def DEPRECATED_job_cancel( + self, + task_id: str, + job_id: str, + ) -> None: + return await self.job_cancel(benchmark_id=task_id, job_id=job_id) + + async def DEPRECATED_job_result( self, task_id: str, job_id: str, ) -> EvaluateResponse: - return await self.routing_table.get_provider_impl(task_id).job_result( - task_id, - job_id, - ) + return await self.job_result(benchmark_id=task_id, job_id=job_id) class ToolRuntimeRouter(ToolRuntime): diff --git a/llama_stack/distribution/routers/routing_tables.py b/llama_stack/distribution/routers/routing_tables.py index 009775ca5..2cddc3970 100644 --- a/llama_stack/distribution/routers/routing_tables.py +++ b/llama_stack/distribution/routers/routing_tables.py @@ -4,14 +4,15 @@ # This source code is licensed under the terms described in the LICENSE file in # the root directory of this source tree. +import logging from typing import Any, Dict, List, Optional from pydantic import TypeAdapter +from llama_stack.apis.benchmarks import Benchmark, Benchmarks, ListBenchmarksResponse from llama_stack.apis.common.content_types import URL from llama_stack.apis.common.type_system import ParamType from llama_stack.apis.datasets import Dataset, Datasets, ListDatasetsResponse -from llama_stack.apis.eval_tasks import EvalTask, EvalTasks, ListEvalTasksResponse from llama_stack.apis.models import ListModelsResponse, Model, Models, ModelType from llama_stack.apis.resource import ResourceType from llama_stack.apis.scoring_functions import ( @@ -38,6 +39,8 @@ from llama_stack.distribution.datatypes import ( from llama_stack.distribution.store import DistributionRegistry from llama_stack.providers.datatypes import Api, RoutingTable +logger = logging.getLogger(__name__) + def get_impl_api(p: Any) -> Api: return p.__provider_spec__.api @@ -60,7 +63,7 @@ async def register_object_with_provider(obj: RoutableObject, p: Any) -> Routable elif api == Api.scoring: return await p.register_scoring_function(obj) elif api == Api.eval: - return await p.register_eval_task(obj) + return await p.register_benchmark(obj) elif api == Api.tool_runtime: return await p.register_tool(obj) else: @@ -121,7 +124,7 @@ class CommonRoutingTableImpl(RoutingTable): scoring_functions = await p.list_scoring_functions() await add_objects(scoring_functions, pid, ScoringFn) elif api == Api.eval: - p.eval_task_store = self + p.benchmark_store = self elif api == Api.tool_runtime: p.tool_store = self @@ -141,8 +144,8 @@ class CommonRoutingTableImpl(RoutingTable): return ("DatasetIO", "dataset") elif isinstance(self, ScoringFunctionsRoutingTable): return ("Scoring", "scoring_function") - elif isinstance(self, EvalTasksRoutingTable): - return ("Eval", "eval_task") + elif isinstance(self, BenchmarksRoutingTable): + return ("Eval", "benchmark") elif isinstance(self, ToolGroupsRoutingTable): return ("Tools", "tool") else: @@ -428,20 
+431,20 @@ class ScoringFunctionsRoutingTable(CommonRoutingTableImpl, ScoringFunctions): await self.register_object(scoring_fn) -class EvalTasksRoutingTable(CommonRoutingTableImpl, EvalTasks): - async def list_eval_tasks(self) -> ListEvalTasksResponse: - return ListEvalTasksResponse(data=await self.get_all_with_type("eval_task")) +class BenchmarksRoutingTable(CommonRoutingTableImpl, Benchmarks): + async def list_benchmarks(self) -> ListBenchmarksResponse: + return ListBenchmarksResponse(data=await self.get_all_with_type("benchmark")) - async def get_eval_task(self, eval_task_id: str) -> Optional[EvalTask]: - return await self.get_object_by_identifier("eval_task", eval_task_id) + async def get_benchmark(self, benchmark_id: str) -> Optional[Benchmark]: + return await self.get_object_by_identifier("benchmark", benchmark_id) - async def register_eval_task( + async def register_benchmark( self, - eval_task_id: str, + benchmark_id: str, dataset_id: str, scoring_functions: List[str], metadata: Optional[Dict[str, Any]] = None, - provider_eval_task_id: Optional[str] = None, + provider_benchmark_id: Optional[str] = None, provider_id: Optional[str] = None, ) -> None: if metadata is None: @@ -453,17 +456,46 @@ class EvalTasksRoutingTable(CommonRoutingTableImpl, EvalTasks): raise ValueError( "No provider specified and multiple providers available. Please specify a provider_id." ) - if provider_eval_task_id is None: - provider_eval_task_id = eval_task_id - eval_task = EvalTask( - identifier=eval_task_id, + if provider_benchmark_id is None: + provider_benchmark_id = benchmark_id + benchmark = Benchmark( + identifier=benchmark_id, dataset_id=dataset_id, scoring_functions=scoring_functions, metadata=metadata, provider_id=provider_id, - provider_resource_id=provider_eval_task_id, + provider_resource_id=provider_benchmark_id, + ) + await self.register_object(benchmark) + + async def DEPRECATED_list_eval_tasks(self) -> ListBenchmarksResponse: + logger.warning("DEPRECATED: Use /eval/benchmarks instead") + return await self.list_benchmarks() + + async def DEPRECATED_get_eval_task( + self, + eval_task_id: str, + ) -> Optional[Benchmark]: + logger.warning("DEPRECATED: Use /eval/benchmarks instead") + return await self.get_benchmark(eval_task_id) + + async def DEPRECATED_register_eval_task( + self, + eval_task_id: str, + dataset_id: str, + scoring_functions: List[str], + provider_benchmark_id: Optional[str] = None, + provider_id: Optional[str] = None, + metadata: Optional[Dict[str, Any]] = None, + ) -> None: + logger.warning("DEPRECATED: Use /eval/benchmarks instead") + return await self.register_benchmark( + benchmark_id=eval_task_id, + dataset_id=dataset_id, + scoring_functions=scoring_functions, + metadata=metadata, + provider_benchmark_id=provider_benchmark_id, ) - await self.register_object(eval_task) class ToolGroupsRoutingTable(CommonRoutingTableImpl, ToolGroups): diff --git a/llama_stack/distribution/stack.py b/llama_stack/distribution/stack.py index 2baad8ac4..9335dc3a9 100644 --- a/llama_stack/distribution/stack.py +++ b/llama_stack/distribution/stack.py @@ -15,10 +15,10 @@ from termcolor import colored from llama_stack.apis.agents import Agents from llama_stack.apis.batch_inference import BatchInference +from llama_stack.apis.benchmarks import Benchmarks from llama_stack.apis.datasetio import DatasetIO from llama_stack.apis.datasets import Datasets from llama_stack.apis.eval import Eval -from llama_stack.apis.eval_tasks import EvalTasks from llama_stack.apis.inference import Inference from 
llama_stack.apis.inspect import Inspect from llama_stack.apis.models import Models @@ -53,7 +53,7 @@ class LlamaStack( PostTraining, VectorIO, Eval, - EvalTasks, + Benchmarks, Scoring, ScoringFunctions, DatasetIO, @@ -78,7 +78,7 @@ RESOURCES = [ "register_scoring_function", "list_scoring_functions", ), - ("eval_tasks", Api.eval_tasks, "register_eval_task", "list_eval_tasks"), + ("benchmarks", Api.benchmarks, "register_benchmark", "list_benchmarks"), ("tool_groups", Api.tool_groups, "register_tool_group", "list_tool_groups"), ] diff --git a/llama_stack/distribution/ui/README.md b/llama_stack/distribution/ui/README.md index c0a2597af..8fceb5c63 100644 --- a/llama_stack/distribution/ui/README.md +++ b/llama_stack/distribution/ui/README.md @@ -26,7 +26,7 @@ $ llama-stack-client datasets register \ ``` ```bash -$ llama-stack-client eval_tasks register \ +$ llama-stack-client benchmarks register \ --eval-task-id meta-reference-mmlu \ --provider-id meta-reference \ --dataset-id mmlu \ diff --git a/llama_stack/distribution/ui/page/distribution/eval_tasks.py b/llama_stack/distribution/ui/page/distribution/eval_tasks.py index f58969663..1428ae9ab 100644 --- a/llama_stack/distribution/ui/page/distribution/eval_tasks.py +++ b/llama_stack/distribution/ui/page/distribution/eval_tasks.py @@ -8,12 +8,12 @@ import streamlit as st from modules.api import llama_stack_api -def eval_tasks(): - # Eval Tasks Section - st.header("Eval Tasks") +def benchmarks(): + # Benchmarks Section + st.header("Benchmarks") - eval_tasks_info = {d.identifier: d.to_dict() for d in llama_stack_api.client.eval_tasks.list()} + benchmarks_info = {d.identifier: d.to_dict() for d in llama_stack_api.client.benchmarks.list()} - if len(eval_tasks_info) > 0: - selected_eval_task = st.selectbox("Select an eval task", list(eval_tasks_info.keys()), key="eval_task_inspect") - st.json(eval_tasks_info[selected_eval_task], expanded=True) + if len(benchmarks_info) > 0: + selected_benchmark = st.selectbox("Select an eval task", list(benchmarks_info.keys()), key="benchmark_inspect") + st.json(benchmarks_info[selected_benchmark], expanded=True) diff --git a/llama_stack/distribution/ui/page/distribution/resources.py b/llama_stack/distribution/ui/page/distribution/resources.py index 94b840bcb..684270d4d 100644 --- a/llama_stack/distribution/ui/page/distribution/resources.py +++ b/llama_stack/distribution/ui/page/distribution/resources.py @@ -4,8 +4,8 @@ # This source code is licensed under the terms described in the LICENSE file in # the root directory of this source tree. 
+from page.distribution.benchmarks import benchmarks from page.distribution.datasets import datasets -from page.distribution.eval_tasks import eval_tasks from page.distribution.models import models from page.distribution.scoring_functions import scoring_functions from page.distribution.shields import shields @@ -20,7 +20,7 @@ def resources_page(): "Shields", "Scoring Functions", "Datasets", - "Eval Tasks", + "Benchmarks", ] icons = ["magic", "memory", "shield", "file-bar-graph", "database", "list-task"] selected_resource = option_menu( @@ -34,8 +34,8 @@ def resources_page(): }, }, ) - if selected_resource == "Eval Tasks": - eval_tasks() + if selected_resource == "Benchmarks": + benchmarks() elif selected_resource == "Vector Databases": vector_dbs() elif selected_resource == "Datasets": diff --git a/llama_stack/distribution/ui/page/evaluations/native_eval.py b/llama_stack/distribution/ui/page/evaluations/native_eval.py index 112d9cff0..f1cae714a 100644 --- a/llama_stack/distribution/ui/page/evaluations/native_eval.py +++ b/llama_stack/distribution/ui/page/evaluations/native_eval.py @@ -11,28 +11,28 @@ import streamlit as st from modules.api import llama_stack_api -def select_eval_task_1(): - # Select Eval Tasks +def select_benchmark_1(): + # Select Benchmarks st.subheader("1. Choose An Eval Task") - eval_tasks = llama_stack_api.client.eval_tasks.list() - eval_tasks = {et.identifier: et for et in eval_tasks} - eval_tasks_names = list(eval_tasks.keys()) - selected_eval_task = st.selectbox( + benchmarks = llama_stack_api.client.benchmarks.list() + benchmarks = {et.identifier: et for et in benchmarks} + benchmarks_names = list(benchmarks.keys()) + selected_benchmark = st.selectbox( "Choose an eval task.", - options=eval_tasks_names, + options=benchmarks_names, help="Choose an eval task. Each eval task is parameterized by a dataset, and list of scoring functions.", ) with st.expander("View Eval Task"): - st.json(eval_tasks[selected_eval_task], expanded=True) + st.json(benchmarks[selected_benchmark], expanded=True) - st.session_state["selected_eval_task"] = selected_eval_task - st.session_state["eval_tasks"] = eval_tasks + st.session_state["selected_benchmark"] = selected_benchmark + st.session_state["benchmarks"] = benchmarks if st.button("Confirm", key="confirm_1"): - st.session_state["selected_eval_task_1_next"] = True + st.session_state["selected_benchmark_1_next"] = True def define_eval_candidate_2(): - if not st.session_state.get("selected_eval_task_1_next", None): + if not st.session_state.get("selected_benchmark_1_next", None): return st.subheader("2. Define Eval Candidate") @@ -161,11 +161,11 @@ def run_evaluation_3(): Review the configurations that will be used for this evaluation run, make any necessary changes, and then click the "Run Evaluation" button. """ ) - selected_eval_task = st.session_state["selected_eval_task"] - eval_tasks = st.session_state["eval_tasks"] + selected_benchmark = st.session_state["selected_benchmark"] + benchmarks = st.session_state["benchmarks"] eval_candidate = st.session_state["eval_candidate"] - dataset_id = eval_tasks[selected_eval_task].dataset_id + dataset_id = benchmarks[selected_benchmark].dataset_id rows = llama_stack_api.client.datasetio.get_rows_paginated( dataset_id=dataset_id, rows_in_page=-1, @@ -180,16 +180,16 @@ def run_evaluation_3(): help="Number of examples from the dataset to evaluate. 
", ) - eval_task_config = { + benchmark_config = { "type": "benchmark", "eval_candidate": eval_candidate, "scoring_params": {}, } with st.expander("View Evaluation Task", expanded=True): - st.json(eval_tasks[selected_eval_task], expanded=True) + st.json(benchmarks[selected_benchmark], expanded=True) with st.expander("View Evaluation Task Configuration", expanded=True): - st.json(eval_task_config, expanded=True) + st.json(benchmark_config, expanded=True) # Add run button and handle evaluation if st.button("Run Evaluation"): @@ -209,10 +209,10 @@ def run_evaluation_3(): progress_bar.progress(progress, text=progress_text) # Run evaluation for current row eval_res = llama_stack_api.client.eval.evaluate_rows( - task_id=selected_eval_task, + benchmark_id=selected_benchmark, input_rows=[r], - scoring_functions=eval_tasks[selected_eval_task].scoring_functions, - task_config=eval_task_config, + scoring_functions=benchmarks[selected_benchmark].scoring_functions, + task_config=benchmark_config, ) for k in r.keys(): @@ -225,7 +225,7 @@ def run_evaluation_3(): output_res[k] = [] output_res[k].append(eval_res.generations[0][k]) - for scoring_fn in eval_tasks[selected_eval_task].scoring_functions: + for scoring_fn in benchmarks[selected_benchmark].scoring_functions: if scoring_fn not in output_res: output_res[scoring_fn] = [] output_res[scoring_fn].append(eval_res.scores[scoring_fn].score_rows[0]) @@ -245,7 +245,7 @@ def native_evaluation_page(): st.set_page_config(page_title="Evaluations (Generation + Scoring)", page_icon="🦙") st.title("📊 Evaluations (Generation + Scoring)") - select_eval_task_1() + select_benchmark_1() define_eval_candidate_2() run_evaluation_3() diff --git a/llama_stack/providers/datatypes.py b/llama_stack/providers/datatypes.py index ccdaf76e7..b92f9dc0a 100644 --- a/llama_stack/providers/datatypes.py +++ b/llama_stack/providers/datatypes.py @@ -10,9 +10,9 @@ from urllib.parse import urlparse from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field +from llama_stack.apis.benchmarks import Benchmark from llama_stack.apis.datasets import Dataset from llama_stack.apis.datatypes import Api -from llama_stack.apis.eval_tasks import EvalTask from llama_stack.apis.models import Model from llama_stack.apis.scoring_functions import ScoringFn from llama_stack.apis.shields import Shield @@ -48,8 +48,8 @@ class ScoringFunctionsProtocolPrivate(Protocol): async def register_scoring_function(self, scoring_fn: ScoringFn) -> None: ... -class EvalTasksProtocolPrivate(Protocol): - async def register_eval_task(self, eval_task: EvalTask) -> None: ... +class BenchmarksProtocolPrivate(Protocol): + async def register_benchmark(self, benchmark: Benchmark) -> None: ... 
class ToolsProtocolPrivate(Protocol): diff --git a/llama_stack/providers/inline/eval/meta_reference/eval.py b/llama_stack/providers/inline/eval/meta_reference/eval.py index 1c44caf7f..cd99c9ad8 100644 --- a/llama_stack/providers/inline/eval/meta_reference/eval.py +++ b/llama_stack/providers/inline/eval/meta_reference/eval.py @@ -8,13 +8,13 @@ from typing import Any, Dict, List, Optional from tqdm import tqdm from llama_stack.apis.agents import Agents, StepType +from llama_stack.apis.benchmarks import Benchmark from llama_stack.apis.datasetio import DatasetIO from llama_stack.apis.datasets import Datasets -from llama_stack.apis.eval_tasks import EvalTask from llama_stack.apis.inference import Inference, UserMessage from llama_stack.apis.scoring import Scoring from llama_stack.distribution.datatypes import Api -from llama_stack.providers.datatypes import EvalTasksProtocolPrivate +from llama_stack.providers.datatypes import BenchmarksProtocolPrivate from llama_stack.providers.inline.agents.meta_reference.agent_instance import ( MEMORY_QUERY_TOOL, ) @@ -26,15 +26,15 @@ from llama_stack.providers.utils.common.data_schema_validator import ( from llama_stack.providers.utils.kvstore import kvstore_impl from .....apis.common.job_types import Job -from .....apis.eval.eval import Eval, EvalTaskConfig, EvaluateResponse, JobStatus +from .....apis.eval.eval import BenchmarkConfig, Eval, EvaluateResponse, JobStatus from .config import MetaReferenceEvalConfig -EVAL_TASKS_PREFIX = "eval_tasks:" +EVAL_TASKS_PREFIX = "benchmarks:" class MetaReferenceEvalImpl( Eval, - EvalTasksProtocolPrivate, + BenchmarksProtocolPrivate, ): def __init__( self, @@ -55,36 +55,36 @@ class MetaReferenceEvalImpl( # TODO: assume sync job, will need jobs API for async scheduling self.jobs = {} - self.eval_tasks = {} + self.benchmarks = {} async def initialize(self) -> None: self.kvstore = await kvstore_impl(self.config.kvstore) - # Load existing eval_tasks from kvstore + # Load existing benchmarks from kvstore start_key = EVAL_TASKS_PREFIX end_key = f"{EVAL_TASKS_PREFIX}\xff" - stored_eval_tasks = await self.kvstore.range(start_key, end_key) + stored_benchmarks = await self.kvstore.range(start_key, end_key) - for eval_task in stored_eval_tasks: - eval_task = EvalTask.model_validate_json(eval_task) - self.eval_tasks[eval_task.identifier] = eval_task + for benchmark in stored_benchmarks: + benchmark = Benchmark.model_validate_json(benchmark) + self.benchmarks[benchmark.identifier] = benchmark async def shutdown(self) -> None: ... 
- async def register_eval_task(self, task_def: EvalTask) -> None: + async def register_benchmark(self, task_def: Benchmark) -> None: # Store in kvstore key = f"{EVAL_TASKS_PREFIX}{task_def.identifier}" await self.kvstore.set( key=key, value=task_def.model_dump_json(), ) - self.eval_tasks[task_def.identifier] = task_def + self.benchmarks[task_def.identifier] = task_def async def run_eval( self, - task_id: str, - task_config: EvalTaskConfig, + benchmark_id: str, + task_config: BenchmarkConfig, ) -> Job: - task_def = self.eval_tasks[task_id] + task_def = self.benchmarks[benchmark_id] dataset_id = task_def.dataset_id candidate = task_config.eval_candidate scoring_functions = task_def.scoring_functions @@ -95,7 +95,7 @@ class MetaReferenceEvalImpl( rows_in_page=(-1 if task_config.num_examples is None else task_config.num_examples), ) res = await self.evaluate_rows( - task_id=task_id, + benchmark_id=benchmark_id, input_rows=all_rows.rows, scoring_functions=scoring_functions, task_config=task_config, @@ -108,7 +108,7 @@ class MetaReferenceEvalImpl( return Job(job_id=job_id) async def _run_agent_generation( - self, input_rows: List[Dict[str, Any]], task_config: EvalTaskConfig + self, input_rows: List[Dict[str, Any]], task_config: BenchmarkConfig ) -> List[Dict[str, Any]]: candidate = task_config.eval_candidate create_response = await self.agents_api.create_agent(candidate.config) @@ -151,7 +151,7 @@ class MetaReferenceEvalImpl( return generations async def _run_model_generation( - self, input_rows: List[Dict[str, Any]], task_config: EvalTaskConfig + self, input_rows: List[Dict[str, Any]], task_config: BenchmarkConfig ) -> List[Dict[str, Any]]: candidate = task_config.eval_candidate assert candidate.sampling_params.max_tokens is not None, "SamplingParams.max_tokens must be provided" @@ -187,10 +187,10 @@ class MetaReferenceEvalImpl( async def evaluate_rows( self, - task_id: str, + benchmark_id: str, input_rows: List[Dict[str, Any]], scoring_functions: List[str], - task_config: EvalTaskConfig, + task_config: BenchmarkConfig, ) -> EvaluateResponse: candidate = task_config.eval_candidate if candidate.type == "agent": @@ -203,7 +203,7 @@ class MetaReferenceEvalImpl( # scoring with generated_answer score_input_rows = [input_r | generated_r for input_r, generated_r in zip(input_rows, generations)] - if task_config.type == "app" and task_config.scoring_params is not None: + if task_config.scoring_params is not None: scoring_functions_dict = { scoring_fn_id: task_config.scoring_params.get(scoring_fn_id, None) for scoring_fn_id in scoring_functions @@ -217,18 +217,60 @@ class MetaReferenceEvalImpl( return EvaluateResponse(generations=generations, scores=score_response.results) - async def job_status(self, task_id: str, job_id: str) -> Optional[JobStatus]: + async def job_status(self, benchmark_id: str, job_id: str) -> Optional[JobStatus]: if job_id in self.jobs: return JobStatus.completed return None - async def job_cancel(self, task_id: str, job_id: str) -> None: + async def job_cancel(self, benchmark_id: str, job_id: str) -> None: raise NotImplementedError("Job cancel is not implemented yet") - async def job_result(self, task_id: str, job_id: str) -> EvaluateResponse: - status = await self.job_status(task_id, job_id) + async def job_result(self, benchmark_id: str, job_id: str) -> EvaluateResponse: + status = await self.job_status(benchmark_id, job_id) if not status or status != JobStatus.completed: raise ValueError(f"Job is not completed, Status: {status.value}") return self.jobs[job_id] + + async def 
DEPRECATED_run_eval( + self, + task_id: str, + task_config: BenchmarkConfig, + ) -> Job: + return await self.run_eval(benchmark_id=task_id, task_config=task_config) + + async def DEPRECATED_evaluate_rows( + self, + task_id: str, + input_rows: List[Dict[str, Any]], + scoring_functions: List[str], + task_config: BenchmarkConfig, + ) -> EvaluateResponse: + return await self.evaluate_rows( + benchmark_id=task_id, + input_rows=input_rows, + scoring_functions=scoring_functions, + task_config=task_config, + ) + + async def DEPRECATED_job_status( + self, + task_id: str, + job_id: str, + ) -> Optional[JobStatus]: + return await self.job_status(benchmark_id=task_id, job_id=job_id) + + async def DEPRECATED_job_cancel( + self, + task_id: str, + job_id: str, + ) -> None: + return await self.job_cancel(benchmark_id=task_id, job_id=job_id) + + async def DEPRECATED_job_result( + self, + task_id: str, + job_id: str, + ) -> EvaluateResponse: + return await self.job_result(benchmark_id=task_id, job_id=job_id) diff --git a/llama_stack/providers/tests/eval/test_eval.py b/llama_stack/providers/tests/eval/test_eval.py index ec3d08728..ad80b8601 100644 --- a/llama_stack/providers/tests/eval/test_eval.py +++ b/llama_stack/providers/tests/eval/test_eval.py @@ -10,8 +10,8 @@ import pytest from llama_stack.apis.common.content_types import URL from llama_stack.apis.common.type_system import ChatCompletionInputType, StringType from llama_stack.apis.eval.eval import ( - AppEvalTaskConfig, - BenchmarkEvalTaskConfig, + AppBenchmarkConfig, + BenchmarkBenchmarkConfig, ModelCandidate, ) from llama_stack.apis.inference import SamplingParams @@ -30,18 +30,18 @@ from .constants import JUDGE_PROMPT class Testeval: @pytest.mark.asyncio - async def test_eval_tasks_list(self, eval_stack): + async def test_benchmarks_list(self, eval_stack): # NOTE: this needs you to ensure that you are starting from a clean state # but so far we don't have an unregister API unfortunately, so be careful - eval_tasks_impl = eval_stack[Api.eval_tasks] - response = await eval_tasks_impl.list_eval_tasks() + benchmarks_impl = eval_stack[Api.benchmarks] + response = await benchmarks_impl.list_benchmarks() assert isinstance(response, list) @pytest.mark.asyncio async def test_eval_evaluate_rows(self, eval_stack, inference_model, judge_model): - eval_impl, eval_tasks_impl, datasetio_impl, datasets_impl, models_impl = ( + eval_impl, benchmarks_impl, datasetio_impl, datasets_impl, models_impl = ( eval_stack[Api.eval], - eval_stack[Api.eval_tasks], + eval_stack[Api.benchmarks], eval_stack[Api.datasetio], eval_stack[Api.datasets], eval_stack[Api.models], @@ -59,17 +59,17 @@ class Testeval: scoring_functions = [ "basic::equality", ] - task_id = "meta-reference::app_eval" - await eval_tasks_impl.register_eval_task( - eval_task_id=task_id, + benchmark_id = "meta-reference::app_eval" + await benchmarks_impl.register_benchmark( + benchmark_id=benchmark_id, dataset_id="test_dataset_for_eval", scoring_functions=scoring_functions, ) response = await eval_impl.evaluate_rows( - task_id=task_id, + benchmark_id=benchmark_id, input_rows=rows.rows, scoring_functions=scoring_functions, - task_config=AppEvalTaskConfig( + task_config=AppBenchmarkConfig( eval_candidate=ModelCandidate( model=inference_model, sampling_params=SamplingParams(), @@ -92,9 +92,9 @@ class Testeval: @pytest.mark.asyncio async def test_eval_run_eval(self, eval_stack, inference_model, judge_model): - eval_impl, eval_tasks_impl, datasets_impl, models_impl = ( + eval_impl, benchmarks_impl, datasets_impl, 
models_impl = ( eval_stack[Api.eval], - eval_stack[Api.eval_tasks], + eval_stack[Api.benchmarks], eval_stack[Api.datasets], eval_stack[Api.models], ) @@ -105,15 +105,15 @@ class Testeval: "basic::subset_of", ] - task_id = "meta-reference::app_eval-2" - await eval_tasks_impl.register_eval_task( - eval_task_id=task_id, + benchmark_id = "meta-reference::app_eval-2" + await benchmarks_impl.register_benchmark( + benchmark_id=benchmark_id, dataset_id="test_dataset_for_eval", scoring_functions=scoring_functions, ) response = await eval_impl.run_eval( - task_id=task_id, - task_config=AppEvalTaskConfig( + benchmark_id=benchmark_id, + task_config=AppBenchmarkConfig( eval_candidate=ModelCandidate( model=inference_model, sampling_params=SamplingParams(), @@ -121,9 +121,9 @@ class Testeval: ), ) assert response.job_id == "0" - job_status = await eval_impl.job_status(task_id, response.job_id) + job_status = await eval_impl.job_status(benchmark_id, response.job_id) assert job_status and job_status.value == "completed" - eval_response = await eval_impl.job_result(task_id, response.job_id) + eval_response = await eval_impl.job_result(benchmark_id, response.job_id) assert eval_response is not None assert len(eval_response.generations) == 5 @@ -131,9 +131,9 @@ class Testeval: @pytest.mark.asyncio async def test_eval_run_benchmark_eval(self, eval_stack, inference_model): - eval_impl, eval_tasks_impl, datasets_impl, models_impl = ( + eval_impl, benchmarks_impl, datasets_impl, models_impl = ( eval_stack[Api.eval], - eval_stack[Api.eval_tasks], + eval_stack[Api.benchmarks], eval_stack[Api.datasets], eval_stack[Api.models], ) @@ -159,20 +159,20 @@ class Testeval: ) # register eval task - await eval_tasks_impl.register_eval_task( - eval_task_id="meta-reference-mmlu", + await benchmarks_impl.register_benchmark( + benchmark_id="meta-reference-mmlu", dataset_id="mmlu", scoring_functions=["basic::regex_parser_multiple_choice_answer"], ) # list benchmarks - response = await eval_tasks_impl.list_eval_tasks() + response = await benchmarks_impl.list_benchmarks() assert len(response) > 0 benchmark_id = "meta-reference-mmlu" response = await eval_impl.run_eval( - task_id=benchmark_id, - task_config=BenchmarkEvalTaskConfig( + benchmark_id=benchmark_id, + task_config=BenchmarkBenchmarkConfig( eval_candidate=ModelCandidate( model=inference_model, sampling_params=SamplingParams(), diff --git a/llama_stack/providers/tests/resolver.py b/llama_stack/providers/tests/resolver.py index 0ff632717..76343b7f4 100644 --- a/llama_stack/providers/tests/resolver.py +++ b/llama_stack/providers/tests/resolver.py @@ -10,8 +10,8 @@ from typing import Any, Dict, List, Optional from pydantic import BaseModel +from llama_stack.apis.benchmarks import BenchmarkInput from llama_stack.apis.datasets import DatasetInput -from llama_stack.apis.eval_tasks import EvalTaskInput from llama_stack.apis.models import ModelInput from llama_stack.apis.scoring_functions import ScoringFnInput from llama_stack.apis.shields import ShieldInput @@ -42,7 +42,7 @@ async def construct_stack_for_test( vector_dbs: Optional[List[VectorDBInput]] = None, datasets: Optional[List[DatasetInput]] = None, scoring_fns: Optional[List[ScoringFnInput]] = None, - eval_tasks: Optional[List[EvalTaskInput]] = None, + benchmarks: Optional[List[BenchmarkInput]] = None, tool_groups: Optional[List[ToolGroupInput]] = None, ) -> TestStack: sqlite_file = tempfile.NamedTemporaryFile(delete=False, suffix=".db") @@ -56,7 +56,7 @@ async def construct_stack_for_test( vector_dbs=vector_dbs or [], 
datasets=datasets or [], scoring_fns=scoring_fns or [], - eval_tasks=eval_tasks or [], + benchmarks=benchmarks or [], tool_groups=tool_groups or [], ) run_config = parse_and_maybe_upgrade_config(run_config) diff --git a/llama_stack/templates/bedrock/run.yaml b/llama_stack/templates/bedrock/run.yaml index be6c9a928..7d03b7c29 100644 --- a/llama_stack/templates/bedrock/run.yaml +++ b/llama_stack/templates/bedrock/run.yaml @@ -107,7 +107,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/cerebras/run.yaml b/llama_stack/templates/cerebras/run.yaml index 05d3f4525..6afff2be2 100644 --- a/llama_stack/templates/cerebras/run.yaml +++ b/llama_stack/templates/cerebras/run.yaml @@ -109,7 +109,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/dell/run-with-safety.yaml b/llama_stack/templates/dell/run-with-safety.yaml index 04c5957d4..ddec3a715 100644 --- a/llama_stack/templates/dell/run-with-safety.yaml +++ b/llama_stack/templates/dell/run-with-safety.yaml @@ -108,7 +108,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: brave-search diff --git a/llama_stack/templates/dell/run.yaml b/llama_stack/templates/dell/run.yaml index 706444eb1..9394c94ef 100644 --- a/llama_stack/templates/dell/run.yaml +++ b/llama_stack/templates/dell/run.yaml @@ -99,7 +99,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: brave-search diff --git a/llama_stack/templates/experimental-post-training/run.yaml b/llama_stack/templates/experimental-post-training/run.yaml index 75d103c9f..e70ccdd2d 100644 --- a/llama_stack/templates/experimental-post-training/run.yaml +++ b/llama_stack/templates/experimental-post-training/run.yaml @@ -85,4 +85,4 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] diff --git a/llama_stack/templates/fireworks/run-with-safety.yaml b/llama_stack/templates/fireworks/run-with-safety.yaml index 0fbe14a5a..8f95e9d59 100644 --- a/llama_stack/templates/fireworks/run-with-safety.yaml +++ b/llama_stack/templates/fireworks/run-with-safety.yaml @@ -164,7 +164,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/fireworks/run.yaml b/llama_stack/templates/fireworks/run.yaml index ccf67dcbb..64229a5d8 100644 --- a/llama_stack/templates/fireworks/run.yaml +++ b/llama_stack/templates/fireworks/run.yaml @@ -153,7 +153,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/hf-endpoint/run-with-safety.yaml b/llama_stack/templates/hf-endpoint/run-with-safety.yaml index f520a2fda..867d7a076 100644 --- a/llama_stack/templates/hf-endpoint/run-with-safety.yaml +++ b/llama_stack/templates/hf-endpoint/run-with-safety.yaml @@ -116,7 +116,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff 
--git a/llama_stack/templates/hf-endpoint/run.yaml b/llama_stack/templates/hf-endpoint/run.yaml index 708cb1bcc..d60acdefd 100644 --- a/llama_stack/templates/hf-endpoint/run.yaml +++ b/llama_stack/templates/hf-endpoint/run.yaml @@ -106,7 +106,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/hf-serverless/run-with-safety.yaml b/llama_stack/templates/hf-serverless/run-with-safety.yaml index 7f0abf5be..e58ad15b3 100644 --- a/llama_stack/templates/hf-serverless/run-with-safety.yaml +++ b/llama_stack/templates/hf-serverless/run-with-safety.yaml @@ -116,7 +116,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/hf-serverless/run.yaml b/llama_stack/templates/hf-serverless/run.yaml index c0b7a4c60..5045e821a 100644 --- a/llama_stack/templates/hf-serverless/run.yaml +++ b/llama_stack/templates/hf-serverless/run.yaml @@ -106,7 +106,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/meta-reference-gpu/run-with-safety.yaml b/llama_stack/templates/meta-reference-gpu/run-with-safety.yaml index c5286fc6b..caac65c8c 100644 --- a/llama_stack/templates/meta-reference-gpu/run-with-safety.yaml +++ b/llama_stack/templates/meta-reference-gpu/run-with-safety.yaml @@ -118,7 +118,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/meta-reference-gpu/run.yaml b/llama_stack/templates/meta-reference-gpu/run.yaml index 310585f23..bade9a076 100644 --- a/llama_stack/templates/meta-reference-gpu/run.yaml +++ b/llama_stack/templates/meta-reference-gpu/run.yaml @@ -107,7 +107,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/meta-reference-quantized-gpu/run.yaml b/llama_stack/templates/meta-reference-quantized-gpu/run.yaml index d43cf3917..f131e8ea6 100644 --- a/llama_stack/templates/meta-reference-quantized-gpu/run.yaml +++ b/llama_stack/templates/meta-reference-quantized-gpu/run.yaml @@ -109,7 +109,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/nvidia/run.yaml b/llama_stack/templates/nvidia/run.yaml index c8ae362f5..14fb28354 100644 --- a/llama_stack/templates/nvidia/run.yaml +++ b/llama_stack/templates/nvidia/run.yaml @@ -139,7 +139,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/ollama/run-with-safety.yaml b/llama_stack/templates/ollama/run-with-safety.yaml index ac5dab755..9d5bfc7a0 100644 --- a/llama_stack/templates/ollama/run-with-safety.yaml +++ b/llama_stack/templates/ollama/run-with-safety.yaml @@ -113,7 +113,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: 
tavily-search diff --git a/llama_stack/templates/ollama/run.yaml b/llama_stack/templates/ollama/run.yaml index 3a60fe61f..9ac1f3267 100644 --- a/llama_stack/templates/ollama/run.yaml +++ b/llama_stack/templates/ollama/run.yaml @@ -110,7 +110,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/remote-vllm/run-with-safety.yaml b/llama_stack/templates/remote-vllm/run-with-safety.yaml index 1fe998a1f..dd43f21f6 100644 --- a/llama_stack/templates/remote-vllm/run-with-safety.yaml +++ b/llama_stack/templates/remote-vllm/run-with-safety.yaml @@ -118,7 +118,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/remote-vllm/run.yaml b/llama_stack/templates/remote-vllm/run.yaml index 9d3db8a31..24cd207c7 100644 --- a/llama_stack/templates/remote-vllm/run.yaml +++ b/llama_stack/templates/remote-vllm/run.yaml @@ -107,7 +107,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/sambanova/run.yaml b/llama_stack/templates/sambanova/run.yaml index 39b0f3c4e..26815dcd0 100644 --- a/llama_stack/templates/sambanova/run.yaml +++ b/llama_stack/templates/sambanova/run.yaml @@ -118,7 +118,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/tgi/run-with-safety.yaml b/llama_stack/templates/tgi/run-with-safety.yaml index ed6c9ef6f..e1d85f59a 100644 --- a/llama_stack/templates/tgi/run-with-safety.yaml +++ b/llama_stack/templates/tgi/run-with-safety.yaml @@ -106,7 +106,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/tgi/run.yaml b/llama_stack/templates/tgi/run.yaml index 8bf76f37b..fc73e0978 100644 --- a/llama_stack/templates/tgi/run.yaml +++ b/llama_stack/templates/tgi/run.yaml @@ -105,7 +105,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/together/run-with-safety.yaml b/llama_stack/templates/together/run-with-safety.yaml index 298926630..f101a5d60 100644 --- a/llama_stack/templates/together/run-with-safety.yaml +++ b/llama_stack/templates/together/run-with-safety.yaml @@ -159,7 +159,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/together/run.yaml b/llama_stack/templates/together/run.yaml index 920003759..8af85979d 100644 --- a/llama_stack/templates/together/run.yaml +++ b/llama_stack/templates/together/run.yaml @@ -148,7 +148,7 @@ shields: vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search diff --git a/llama_stack/templates/vllm-gpu/run.yaml b/llama_stack/templates/vllm-gpu/run.yaml index 41a545e1a..cdce5510d 100644 --- a/llama_stack/templates/vllm-gpu/run.yaml +++ 
b/llama_stack/templates/vllm-gpu/run.yaml @@ -109,7 +109,7 @@ shields: [] vector_dbs: [] datasets: [] scoring_fns: [] -eval_tasks: [] +benchmarks: [] tool_groups: - toolgroup_id: builtin::websearch provider_id: tavily-search From 2a8e199e1031862d6d60c86d14b16ab092e3cb7b Mon Sep 17 00:00:00 2001 From: Xi Yan Date: Thu, 13 Feb 2025 16:52:46 -0800 Subject: [PATCH 06/37] fix notebook --- docs/getting_started.ipynb | 32 +++++++++++++++++--------------- 1 file changed, 17 insertions(+), 15 deletions(-) diff --git a/docs/getting_started.ipynb b/docs/getting_started.ipynb index ee616b471..51ae945f4 100644 --- a/docs/getting_started.ipynb +++ b/docs/getting_started.ipynb @@ -3419,22 +3419,22 @@ }, { "cell_type": "code", - "execution_count": null, - "id": "865fc5a8", - "metadata": {}, - "outputs": [], - "source": [ - "!pip install llama-stack-client==0.1.0" - ] - }, - { - "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "44e05e16", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + " % Total % Received % Xferd Average Speed Time Time Time Current\n", + " Dload Upload Total Spent Left Speed\n", + "100 275k 100 275k 0 0 780k 0 --:--:-- --:--:-- --:--:-- 780k\n" + ] + } + ], "source": [ - "!wget https://raw.githubusercontent.com/meta-llama/llama-models/refs/heads/main/Llama_Repo.jpeg" + "!curl -O https://raw.githubusercontent.com/meta-llama/llama-models/refs/heads/main/Llama_Repo.jpeg" ] }, { @@ -3444,6 +3444,7 @@ "metadata": {}, "outputs": [], "source": [ + "# NBVAL_SKIP\n", "from PIL import Image\n", "import matplotlib.pyplot as plt\n", "\n", @@ -3580,6 +3581,7 @@ " model=LLAMA32_11B_INSTRUCT,\n", " instructions=\"You are a helpful assistant\",\n", " enable_session_persistence=False,\n", + " toolgroups=[],\n", " )\n", "\n", " agent = Agent(client, agent_config)\n", @@ -3630,7 +3632,7 @@ "provenance": [] }, "kernelspec": { - "display_name": "toolchain", + "display_name": "master", "language": "python", "name": "python3" }, @@ -3644,7 +3646,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.15" + "version": "3.10.16" }, "widgets": { "application/vnd.jupyter.widget-state+json": { From da53dc3f5fef7b214a80d1d450b62118532f263f Mon Sep 17 00:00:00 2001 From: Xi Yan Date: Thu, 13 Feb 2025 17:10:45 -0800 Subject: [PATCH 07/37] fix: openapi for eval-task (#1085) # What does this PR do? 
- as title ## Test Plan - the deprecated endpoint need to obey what it was before [//]: # (## Documentation) --- docs/_static/llama-stack-spec.html | 4 ++-- docs/_static/llama-stack-spec.yaml | 4 ++-- llama_stack/apis/benchmarks/benchmarks.py | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/_static/llama-stack-spec.html b/docs/_static/llama-stack-spec.html index b93f6a380..026a061c8 100644 --- a/docs/_static/llama-stack-spec.html +++ b/docs/_static/llama-stack-spec.html @@ -81,7 +81,7 @@ "deprecated": true } }, - "/v1/eval-tasks/{task_id}": { + "/v1/eval-tasks/{eval_task_id}": { "get": { "responses": { "200": { @@ -109,7 +109,7 @@ "parameters": [ { "name": "eval_task_id", - "in": "query", + "in": "path", "required": true, "schema": { "type": "string" diff --git a/docs/_static/llama-stack-spec.yaml b/docs/_static/llama-stack-spec.yaml index b30025020..e4f0398c0 100644 --- a/docs/_static/llama-stack-spec.yaml +++ b/docs/_static/llama-stack-spec.yaml @@ -35,7 +35,7 @@ paths: $ref: '#/components/schemas/DeprecatedEvaluateRowsRequest' required: true deprecated: true - /v1/eval-tasks/{task_id}: + /v1/eval-tasks/{eval_task_id}: get: responses: '200': @@ -51,7 +51,7 @@ paths: description: '' parameters: - name: eval_task_id - in: query + in: path required: true schema: type: string diff --git a/llama_stack/apis/benchmarks/benchmarks.py b/llama_stack/apis/benchmarks/benchmarks.py index 50019b18c..af5784bbc 100644 --- a/llama_stack/apis/benchmarks/benchmarks.py +++ b/llama_stack/apis/benchmarks/benchmarks.py @@ -68,7 +68,7 @@ class Benchmarks(Protocol): @webmethod(route="/eval-tasks", method="GET") async def DEPRECATED_list_eval_tasks(self) -> ListBenchmarksResponse: ... - @webmethod(route="/eval-tasks/{task_id}", method="GET") + @webmethod(route="/eval-tasks/{eval_task_id}", method="GET") async def DEPRECATED_get_eval_task( self, eval_task_id: str, From b0b696cb4ff0ac5277bc2f580790d594d48b9712 Mon Sep 17 00:00:00 2001 From: Hardik Shah Date: Thu, 13 Feb 2025 18:18:23 -0800 Subject: [PATCH 08/37] fix: regex pattern matching to support :path suffix in the routes (#1089) This PR fixes client sdk test failure -- https://github.com/meta-llama/llama-stack-ops/actions/runs/13320153693/job/37203122048 by updating the regex matching pattern to also consider `:path` in the routes --- llama_stack/distribution/library_client.py | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/llama_stack/distribution/library_client.py b/llama_stack/distribution/library_client.py index 55a15e5e9..a7ef753b9 100644 --- a/llama_stack/distribution/library_client.py +++ b/llama_stack/distribution/library_client.py @@ -231,7 +231,13 @@ class AsyncLlamaStackAsLibraryClient(AsyncLlamaStackClient): def _convert_path_to_regex(path: str) -> str: # Convert {param} to named capture groups - pattern = re.sub(r"{(\w+)}", r"(?P<\1>[^/]+)", path) + # handle {param:path} as well which allows for forward slashes in the param value + pattern = re.sub( + r"{(\w+)(?::path)?}", + lambda m: f"(?P<{m.group(1)}>{'[^/]+' if not m.group(0).endswith(':path') else '.+'})", + path, + ) + return f"^{pattern}$" for api, api_endpoints in endpoints.items(): From b27c41fe399c60c07dd04e6c7d8c68685f517014 Mon Sep 17 00:00:00 2001 From: Xi Yan Date: Thu, 13 Feb 2025 18:40:16 -0800 Subject: [PATCH 09/37] fix: disable sqlite-vec test (#1090) # What does this PR do? 
- sqlite_vec not added to all template yet, disable test for now to unblock release cut [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan image [//]: # (## Documentation) --- tests/client-sdk/vector_io/test_vector_io.py | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/tests/client-sdk/vector_io/test_vector_io.py b/tests/client-sdk/vector_io/test_vector_io.py index c5be4ab3f..c7e4040b6 100644 --- a/tests/client-sdk/vector_io/test_vector_io.py +++ b/tests/client-sdk/vector_io/test_vector_io.py @@ -8,7 +8,11 @@ import random import pytest -INLINE_VECTOR_DB_PROVIDERS = ["faiss", "sqlite_vec"] +INLINE_VECTOR_DB_PROVIDERS = [ + "faiss", + # TODO: add sqlite_vec to templates + # "sqlite_vec", +] @pytest.fixture(scope="function") From 2f7268b79030ba898ac7cac9376949471b970ddd Mon Sep 17 00:00:00 2001 From: Reid <61492567+reidliu41@users.noreply.github.com> Date: Fri, 14 Feb 2025 13:31:36 +0800 Subject: [PATCH 10/37] fix: add the missed help description info (#1096) --- llama_stack/cli/model/describe.py | 1 + 1 file changed, 1 insertion(+) diff --git a/llama_stack/cli/model/describe.py b/llama_stack/cli/model/describe.py index a25513633..3e55052c5 100644 --- a/llama_stack/cli/model/describe.py +++ b/llama_stack/cli/model/describe.py @@ -34,6 +34,7 @@ class ModelDescribe(Subcommand): "--model-id", type=str, required=True, + help="See `llama model list` or `llama model list --show-all` for the list of available models", ) def _run_model_describe_cmd(self, args: argparse.Namespace) -> None: From 406465622e65b9eeb55daeb98055d0206000143a Mon Sep 17 00:00:00 2001 From: Ben Browning Date: Fri, 14 Feb 2025 09:31:00 -0500 Subject: [PATCH 11/37] fix: Update QdrantConfig to QdrantVectorIOConfig (#1104) # What does this PR do? This fixes an import introduced due to merging #1079 before #1039, and thus the changes from #1039 needing to update `QdrantConfig` to `QdrantVectorIOConfig`. ## Test Plan I ran the remote vllm provider inference tests against the latest main: ``` VLLM_URL="http://localhost:8001/v1" python -m pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py --providers "inference=vllm_remote" ``` That failed with: ``` File "/home/bbrownin/src/llama-stack/llama_stack/providers/tests/vector_io/fixtures.py", line 20, in from llama_stack.providers.remote.vector_io.qdrant import QdrantConfig ImportError: Error importing plugin "llama_stack.providers.tests.vector_io.fixtures": cannot import name 'QdrantConfig' from 'llama_stack.providers.remote.vector_io.qdrant' (/home/bbrownin/src/llama-stack/llama_stack/providers/remote/vector_io/qdrant/__init__.py) ``` After this change, the import no longer fails and the tests pass. 
Signed-off-by: Ben Browning --- llama_stack/providers/tests/vector_io/fixtures.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/llama_stack/providers/tests/vector_io/fixtures.py b/llama_stack/providers/tests/vector_io/fixtures.py index beb9b4ebd..1797d47a5 100644 --- a/llama_stack/providers/tests/vector_io/fixtures.py +++ b/llama_stack/providers/tests/vector_io/fixtures.py @@ -17,7 +17,7 @@ from llama_stack.providers.inline.vector_io.faiss import FaissVectorIOConfig from llama_stack.providers.inline.vector_io.sqlite_vec import SQLiteVectorIOConfig from llama_stack.providers.remote.vector_io.chroma import ChromaVectorIOConfig from llama_stack.providers.remote.vector_io.pgvector import PGVectorVectorIOConfig -from llama_stack.providers.remote.vector_io.qdrant import QdrantConfig +from llama_stack.providers.remote.vector_io.qdrant import QdrantVectorIOConfig from llama_stack.providers.remote.vector_io.weaviate import WeaviateVectorIOConfig from llama_stack.providers.tests.resolver import construct_stack_for_test from llama_stack.providers.utils.kvstore.config import SqliteKVStoreConfig @@ -132,7 +132,7 @@ def vector_io_chroma() -> ProviderFixture: def vector_io_qdrant() -> ProviderFixture: url = os.getenv("QDRANT_URL") if url: - config = QdrantConfig(url=url) + config = QdrantVectorIOConfig(url=url) provider_type = "remote::qdrant" else: raise ValueError("QDRANT_URL must be set") From a3cb039e8334234d6db89fe8f92d4bcc449604da Mon Sep 17 00:00:00 2001 From: raghotham Date: Fri, 14 Feb 2025 08:55:22 -0800 Subject: [PATCH 12/37] docs: Add region parameter to Bedrock provider (#1103) # What does this PR do? [Provide a short summary of what this PR does and why. Link to relevant issues if applicable.] [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan [Describe the tests you ran to verify your changes with result summaries. *Provide clear instructions so the plan can be easily re-executed.*] [//]: # (## Documentation) --- llama_stack/templates/bedrock/doc_template.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/llama_stack/templates/bedrock/doc_template.md b/llama_stack/templates/bedrock/doc_template.md index 2121719b7..357638ea5 100644 --- a/llama_stack/templates/bedrock/doc_template.md +++ b/llama_stack/templates/bedrock/doc_template.md @@ -55,7 +55,8 @@ docker run \ --port $LLAMA_STACK_PORT \ --env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \ --env AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \ - --env AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN + --env AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \ + --env AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION ``` ### Via Conda @@ -66,5 +67,6 @@ llama stack run ./run.yaml \ --port $LLAMA_STACK_PORT \ --env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \ --env AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \ - --env AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN + --env AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \ + --env AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION ``` From c0ee5129803f06aece73aa1b161d44335b469dfd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?S=C3=A9bastien=20Han?= Date: Fri, 14 Feb 2025 18:01:57 +0100 Subject: [PATCH 13/37] build: configure ruff from pyproject.toml (#1100) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit # What does this PR do? - Remove hardcoded configurations from pre-commit. - Allow configuration to be set via pyproject.toml. - Merge .ruff.toml settings into pyproject.toml. 
- Ensure the linter and formatter use the defined configuration instead of being overridden by pre-commit. Signed-off-by: Sébastien Han [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan [Describe the tests you ran to verify your changes with result summaries. *Provide clear instructions so the plan can be easily re-executed.*] [//]: # (## Documentation) Signed-off-by: Sébastien Han --- .pre-commit-config.yaml | 6 -- .ruff.toml | 37 ----------- llama_stack/cli/table.py | 8 +-- .../inline/agents/meta_reference/safety.py | 2 +- .../inline/eval/meta_reference/eval.py | 4 +- .../code_interpreter/code_env_prefix.py | 6 -- .../inline/tool_runtime/rag/memory.py | 2 +- .../providers/inline/vector_io/faiss/faiss.py | 2 +- .../inline/vector_io/sqlite_vec/sqlite_vec.py | 2 +- .../remote/vector_io/chroma/chroma.py | 2 +- .../remote/vector_io/qdrant/qdrant.py | 2 +- .../tests/inference/test_vision_inference.py | 2 +- .../utils/inference/openai_compat.py | 2 +- pyproject.toml | 63 +++++++++++++++++++ 14 files changed, 78 insertions(+), 62 deletions(-) delete mode 100644 .ruff.toml diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index a7ece3b25..9bdb10d95 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -29,13 +29,7 @@ repos: - repo: https://github.com/astral-sh/ruff-pre-commit rev: v0.9.4 hooks: - # Run the linter with import sorting. - id: ruff - args: [ - --fix, - --exit-non-zero-on-fix, - --select, I, - ] - id: ruff-format - repo: https://github.com/adamchainz/blacken-docs diff --git a/.ruff.toml b/.ruff.toml deleted file mode 100644 index a913ae690..000000000 --- a/.ruff.toml +++ /dev/null @@ -1,37 +0,0 @@ -# Suggested config from pytorch that we can adapt -lint.select = ["B", "C", "E" , "F" , "N", "W", "B9"] - -line-length = 120 - -# C408 ignored because we like the dict keyword argument syntax -# E501 is not flexible enough, we're using B950 instead -# N812 ignored because import torch.nn.functional as F is PyTorch convention -# N817 ignored because importing using acronyms is convention (DistributedDataParallel as DDP) -# E731 allow usage of assigning lambda expressions -# E701 let black auto-format statements on one line -# E704 let black auto-format statements on one line -lint.ignore = [ - "E203", "E305", "E402", "E501", "E721", "E741", "F405", "F821", "F841", - "C408", "E302", "W291", "E303", "N812", "N817", "E731", "E701", - # These are the additional ones we started ignoring after moving to ruff. We should look into each one of them later. - "C901", "C405", "C414", "N803", "N999", "C403", "C416", "B028", "C419", "C401", "B023", - # shebang has extra meaning in fbcode lints, so I think it's not worth trying - # to line this up with executable bit - "EXE001", - # random naming hints don't need - "N802", - # these ignores are from flake8-bugbear; please fix! 
- "B007", "B008" -] - -exclude = [ - "./.git", - "./docs/*", - "./build", - "./scripts", - "./venv", - "*.pyi", - ".pre-commit-config.yaml", - "*.md", - ".flake8" -] diff --git a/llama_stack/cli/table.py b/llama_stack/cli/table.py index 847719f81..599749231 100644 --- a/llama_stack/cli/table.py +++ b/llama_stack/cli/table.py @@ -26,13 +26,13 @@ def format_row(row, col_widths): lines.extend(textwrap.wrap(line, width, break_long_words=False, replace_whitespace=False)) return lines - wrapped = [wrap(item, width) for item, width in zip(row, col_widths)] + wrapped = [wrap(item, width) for item, width in zip(row, col_widths, strict=False)] max_lines = max(len(subrow) for subrow in wrapped) lines = [] for i in range(max_lines): line = [] - for cell_lines, width in zip(wrapped, col_widths): + for cell_lines, width in zip(wrapped, col_widths, strict=False): value = cell_lines[i] if i < len(cell_lines) else "" line.append(value + " " * (width - len(strip_ansi_colors(value)))) lines.append("| " + (" | ".join(line)) + " |") @@ -50,14 +50,14 @@ def print_table(rows, headers=None, separate_rows: bool = False, sort_by: Iterab rows.sort(key=lambda x: tuple(x[i] for i in sort_by)) if not headers: - col_widths = [max(itemlen(item) for item in col) for col in zip(*rows)] + col_widths = [max(itemlen(item) for item in col) for col in zip(*rows, strict=False)] else: col_widths = [ max( itemlen(header), max(itemlen(item) for item in col), ) - for header, col in zip(headers, zip(*rows)) + for header, col in zip(headers, zip(*rows, strict=False), strict=False) ] col_widths = [min(w, 80) for w in col_widths] diff --git a/llama_stack/providers/inline/agents/meta_reference/safety.py b/llama_stack/providers/inline/agents/meta_reference/safety.py index 30ce52e3b..2497be070 100644 --- a/llama_stack/providers/inline/agents/meta_reference/safety.py +++ b/llama_stack/providers/inline/agents/meta_reference/safety.py @@ -41,7 +41,7 @@ class ShieldRunnerMixin: for identifier in identifiers ] ) - for identifier, response in zip(identifiers, responses): + for identifier, response in zip(identifiers, responses, strict=False): if not response.violation: continue diff --git a/llama_stack/providers/inline/eval/meta_reference/eval.py b/llama_stack/providers/inline/eval/meta_reference/eval.py index cd99c9ad8..0f77b7347 100644 --- a/llama_stack/providers/inline/eval/meta_reference/eval.py +++ b/llama_stack/providers/inline/eval/meta_reference/eval.py @@ -201,7 +201,9 @@ class MetaReferenceEvalImpl( raise ValueError(f"Invalid candidate type: {candidate.type}") # scoring with generated_answer - score_input_rows = [input_r | generated_r for input_r, generated_r in zip(input_rows, generations)] + score_input_rows = [ + input_r | generated_r for input_r, generated_r in zip(input_rows, generations, strict=False) + ] if task_config.scoring_params is not None: scoring_functions_dict = { diff --git a/llama_stack/providers/inline/tool_runtime/code_interpreter/code_env_prefix.py b/llama_stack/providers/inline/tool_runtime/code_interpreter/code_env_prefix.py index f28ae248c..1850d69f7 100644 --- a/llama_stack/providers/inline/tool_runtime/code_interpreter/code_env_prefix.py +++ b/llama_stack/providers/inline/tool_runtime/code_interpreter/code_env_prefix.py @@ -83,12 +83,6 @@ import sys as _sys from contextlib import ( # noqa contextmanager as _contextmanager, ) -from contextlib import ( - redirect_stderr as _redirect_stderr, -) -from contextlib import ( - redirect_stdout as _redirect_stdout, -) from multiprocessing.connection import Connection as 
_Connection # Mangle imports to avoid polluting model execution namespace. diff --git a/llama_stack/providers/inline/tool_runtime/rag/memory.py b/llama_stack/providers/inline/tool_runtime/rag/memory.py index 5695d4037..a6cd57923 100644 --- a/llama_stack/providers/inline/tool_runtime/rag/memory.py +++ b/llama_stack/providers/inline/tool_runtime/rag/memory.py @@ -118,7 +118,7 @@ class MemoryToolRuntimeImpl(ToolsProtocolPrivate, ToolRuntime, RAGToolRuntime): return RAGQueryResult(content=None) # sort by score - chunks, scores = zip(*sorted(zip(chunks, scores), key=lambda x: x[1], reverse=True)) + chunks, scores = zip(*sorted(zip(chunks, scores, strict=False), key=lambda x: x[1], reverse=True), strict=False) tokens = 0 picked = [] diff --git a/llama_stack/providers/inline/vector_io/faiss/faiss.py b/llama_stack/providers/inline/vector_io/faiss/faiss.py index b52fb074c..410d8bd8b 100644 --- a/llama_stack/providers/inline/vector_io/faiss/faiss.py +++ b/llama_stack/providers/inline/vector_io/faiss/faiss.py @@ -103,7 +103,7 @@ class FaissIndex(EmbeddingIndex): chunks = [] scores = [] - for d, i in zip(distances[0], indices[0]): + for d, i in zip(distances[0], indices[0], strict=False): if i < 0: continue chunks.append(self.chunk_by_index[int(i)]) diff --git a/llama_stack/providers/inline/vector_io/sqlite_vec/sqlite_vec.py b/llama_stack/providers/inline/vector_io/sqlite_vec/sqlite_vec.py index fcd7cd8f9..6c787bc29 100644 --- a/llama_stack/providers/inline/vector_io/sqlite_vec/sqlite_vec.py +++ b/llama_stack/providers/inline/vector_io/sqlite_vec/sqlite_vec.py @@ -80,7 +80,7 @@ class SQLiteVecIndex(EmbeddingIndex): try: # Start transaction cur.execute("BEGIN TRANSACTION") - for chunk, emb in zip(chunks, embeddings): + for chunk, emb in zip(chunks, embeddings, strict=False): # Serialize and insert the chunk metadata. 
chunk_json = chunk.model_dump_json() cur.execute(f"INSERT INTO {self.metadata_table} (chunk) VALUES (?)", (chunk_json,)) diff --git a/llama_stack/providers/remote/vector_io/chroma/chroma.py b/llama_stack/providers/remote/vector_io/chroma/chroma.py index bd684160a..3bf3a7740 100644 --- a/llama_stack/providers/remote/vector_io/chroma/chroma.py +++ b/llama_stack/providers/remote/vector_io/chroma/chroma.py @@ -69,7 +69,7 @@ class ChromaIndex(EmbeddingIndex): chunks = [] scores = [] - for dist, doc in zip(distances, documents): + for dist, doc in zip(distances, documents, strict=False): try: doc = json.loads(doc) chunk = Chunk(**doc) diff --git a/llama_stack/providers/remote/vector_io/qdrant/qdrant.py b/llama_stack/providers/remote/vector_io/qdrant/qdrant.py index e1091e2cf..586b8ca95 100644 --- a/llama_stack/providers/remote/vector_io/qdrant/qdrant.py +++ b/llama_stack/providers/remote/vector_io/qdrant/qdrant.py @@ -55,7 +55,7 @@ class QdrantIndex(EmbeddingIndex): ) points = [] - for i, (chunk, embedding) in enumerate(zip(chunks, embeddings)): + for i, (chunk, embedding) in enumerate(zip(chunks, embeddings, strict=False)): chunk_id = f"{chunk.metadata['document_id']}:chunk-{i}" points.append( PointStruct( diff --git a/llama_stack/providers/tests/inference/test_vision_inference.py b/llama_stack/providers/tests/inference/test_vision_inference.py index 2f96e66d4..4d7183c49 100644 --- a/llama_stack/providers/tests/inference/test_vision_inference.py +++ b/llama_stack/providers/tests/inference/test_vision_inference.py @@ -88,7 +88,7 @@ class TestVisionModelInference: expected_strings_to_check = [ ["puppy"], ] - for image, expected_strings in zip(images, expected_strings_to_check): + for image, expected_strings in zip(images, expected_strings_to_check, strict=False): response = [ r async for r in await inference_impl.chat_completion( diff --git a/llama_stack/providers/utils/inference/openai_compat.py b/llama_stack/providers/utils/inference/openai_compat.py index 00e291e8f..33f0f4e22 100644 --- a/llama_stack/providers/utils/inference/openai_compat.py +++ b/llama_stack/providers/utils/inference/openai_compat.py @@ -132,7 +132,7 @@ def convert_openai_completion_logprobs( if logprobs.tokens and logprobs.token_logprobs: return [ TokenLogProbs(logprobs_by_token={token: token_lp}) - for token, token_lp in zip(logprobs.tokens, logprobs.token_logprobs) + for token, token_lp in zip(logprobs.tokens, logprobs.token_logprobs, strict=False) ] return None diff --git a/pyproject.toml b/pyproject.toml index 2f40ceac9..feaae153b 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -76,3 +76,66 @@ license-files = [] name = "pytorch-cpu" url = "https://download.pytorch.org/whl/cpu" explicit = true + +[tool.ruff] +line-length = 120 +exclude = [ + "./.git", + "./docs/*", + "./build", + "./scripts", + "./venv", + "*.pyi", + ".pre-commit-config.yaml", + "*.md", + ".flake8", +] + +[tool.ruff.lint] +select = [ + "B", # flake8-bugbear + "B9", # flake8-bugbear subset + "C", # comprehensions + "E", # pycodestyle + "F", # Pyflakes + "N", # Naming + "W", # Warnings + "I", # isort +] +ignore = [ + "E203", + "E305", + "E402", + "E501", # line too long + "E721", + "E741", + "F405", + "F821", + "F841", + "C408", # ignored because we like the dict keyword argument syntax + "E302", + "W291", + "E303", + "N812", # ignored because import torch.nn.functional as F is PyTorch convention + "N817", # ignored because importing using acronyms is convention (DistributedDataParallel as DDP) + "E731", # allow usage of assigning lambda expressions + 
# These are the additional ones we started ignoring after moving to ruff. We should look into each one of them later. + "C901", + "C405", + "C414", + "N803", + "N999", + "C403", + "C416", + "B028", + "C419", + "C401", + "B023", + # shebang has extra meaning in fbcode lints, so I think it's not worth trying + # to line this up with executable bit + "EXE001", + "N802", # random naming hints don't need + # these ignores are from flake8-bugbear; please fix! + "B007", + "B008", +] From 314ee09ae326f0d3f9e0fbfc07c8dac8e1b4fce3 Mon Sep 17 00:00:00 2001 From: Ashwin Bharambe Date: Fri, 14 Feb 2025 09:10:59 -0800 Subject: [PATCH 14/37] chore: move all Llama Stack types from llama-models to llama-stack (#1098) llama-models should have extremely minimal cruft. Its sole purpose should be didactic -- show the simplest implementation of the llama models and document the prompt formats, etc. This PR is the complement to https://github.com/meta-llama/llama-models/pull/279 ## Test Plan Ensure all `llama` CLI `model` sub-commands work: ```bash llama model list llama model download --model-id ... llama model prompt-format -m ... ``` Ran tests: ```bash cd tests/client-sdk LLAMA_STACK_CONFIG=fireworks pytest -s -v inference/ LLAMA_STACK_CONFIG=fireworks pytest -s -v vector_io/ LLAMA_STACK_CONFIG=fireworks pytest -s -v agents/ ``` Create a fresh venv `uv venv && source .venv/bin/activate` and run `llama stack build --template fireworks --image-type venv` followed by `llama stack run together --image-type venv` <-- the server runs Also checked that the OpenAPI generator can run and there is no change in the generated files as a result. ```bash cd docs/openapi_generator sh run_openapi_generator.sh ``` --- .pre-commit-config.yaml | 9 +- docs/openapi_generator/generate.py | 12 - docs/openapi_generator/pyopenapi/generator.py | 12 +- .../openapi_generator/pyopenapi/operations.py | 2 +- .../pyopenapi/specification.py | 2 +- docs/openapi_generator/pyopenapi/utility.py | 2 +- llama_stack/apis/agents/agents.py | 2 +- llama_stack/apis/agents/event_logger.py | 206 ---- .../apis/batch_inference/batch_inference.py | 2 +- llama_stack/apis/benchmarks/benchmarks.py | 2 +- llama_stack/apis/common/content_types.py | 5 +- llama_stack/apis/common/deployment_types.py | 2 +- llama_stack/apis/common/job_types.py | 3 +- llama_stack/apis/common/training_types.py | 3 +- llama_stack/apis/common/type_system.py | 3 +- llama_stack/apis/datasetio/datasetio.py | 2 +- llama_stack/apis/datasets/datasets.py | 2 +- llama_stack/apis/datatypes.py | 2 +- llama_stack/apis/eval/eval.py | 2 +- llama_stack/apis/inference/inference.py | 16 +- llama_stack/apis/inspect/inspect.py | 3 +- llama_stack/apis/models/models.py | 2 +- .../apis/post_training/post_training.py | 2 +- llama_stack/apis/safety/safety.py | 2 +- llama_stack/apis/scoring/scoring.py | 2 +- .../scoring_functions/scoring_functions.py | 2 +- llama_stack/apis/shields/shields.py | 2 +- .../synthetic_data_generation.py | 2 +- llama_stack/apis/telemetry/telemetry.py | 5 +- llama_stack/apis/tools/rag_tool.py | 2 +- llama_stack/apis/tools/tools.py | 2 +- llama_stack/apis/vector_dbs/vector_dbs.py | 2 +- llama_stack/apis/vector_io/vector_io.py | 2 +- llama_stack/cli/download.py | 6 +- llama_stack/cli/model/describe.py | 2 +- llama_stack/cli/model/list.py | 3 +- llama_stack/cli/model/prompt_format.py | 3 +- llama_stack/cli/model/safety_models.py | 6 +- llama_stack/distribution/client.py | 30 - llama_stack/models/llama/datatypes.py | 277 +++++ llama_stack/models/llama/llama3/dog.jpg | Bin 0 -> 40215 
bytes llama_stack/models/llama/llama3/interface.py | 257 ++++ llama_stack/models/llama/llama3/pasta.jpeg | Bin 0 -> 448611 bytes .../llama/llama3/prompt_templates/__init__.py | 22 + .../llama/llama3/prompt_templates/base.py | 39 + .../llama3/prompt_templates/system_prompts.py | 311 +++++ .../llama3/prompt_templates/tool_response.py | 63 + .../models/llama/llama3/template_data.py | 120 ++ .../llama/llama3/test_system_prompts.py | 199 ++++ llama_stack/models/llama/llama3_1/__init__.py | 12 + llama_stack/models/llama/llama3_1/prompts.py | 259 +++++ llama_stack/models/llama/llama3_2/__init__.py | 12 + .../models/llama/llama3_2/prompts_text.py | 235 ++++ .../models/llama/llama3_2/prompts_vision.py | 133 +++ llama_stack/models/llama/llama3_3/prompts.py | 258 ++++ llama_stack/models/llama/prompt_format.py | 204 ++++ llama_stack/models/llama/sku_list.py | 1000 ++++++++++++++++ llama_stack/providers/datatypes.py | 2 +- .../agents/meta_reference/agent_instance.py | 2 +- .../meta_reference/tests/test_chat_agent.py | 2 +- .../inference/meta_reference/generation.py | 14 +- .../inference/meta_reference/inference.py | 15 +- .../meta_reference/model_parallel.py | 4 +- .../meta_reference/quantization/loader.py | 4 +- .../providers/inline/inference/vllm/config.py | 2 +- .../providers/inline/inference/vllm/vllm.py | 2 +- .../post_training/torchtune/common/utils.py | 4 +- .../post_training/torchtune/post_training.py | 3 +- .../recipes/lora_finetuning_single_device.py | 2 +- .../inline/safety/llama_guard/llama_guard.py | 4 +- .../inline/vector_io/faiss/config.py | 2 +- .../remote/inference/bedrock/bedrock.py | 2 +- .../remote/inference/cerebras/cerebras.py | 3 +- .../remote/inference/cerebras/config.py | 3 +- .../remote/inference/databricks/config.py | 3 +- .../remote/inference/databricks/databricks.py | 2 +- .../remote/inference/fireworks/config.py | 3 +- .../remote/inference/fireworks/fireworks.py | 2 +- .../providers/remote/inference/groq/config.py | 3 +- .../providers/remote/inference/groq/groq.py | 5 +- .../remote/inference/groq/groq_utils.py | 2 +- .../remote/inference/nvidia/config.py | 3 +- .../remote/inference/nvidia/nvidia.py | 4 +- .../remote/inference/nvidia/openai_utils.py | 20 +- .../remote/inference/ollama/ollama.py | 2 +- .../remote/inference/runpod/config.py | 3 +- .../remote/inference/runpod/runpod.py | 2 +- .../remote/inference/sambanova/config.py | 3 +- .../remote/inference/sambanova/sambanova.py | 12 +- .../providers/remote/inference/tgi/config.py | 3 +- .../providers/remote/inference/tgi/tgi.py | 2 +- .../remote/inference/together/config.py | 3 +- .../remote/inference/together/together.py | 2 +- .../providers/remote/inference/vllm/config.py | 3 +- .../providers/remote/inference/vllm/vllm.py | 4 +- .../providers/remote/safety/bedrock/config.py | 3 +- .../tool_runtime/brave_search/brave_search.py | 2 +- .../remote/vector_io/pgvector/config.py | 3 +- .../remote/vector_io/qdrant/config.py | 3 +- .../providers/tests/agents/test_agents.py | 3 +- .../tests/inference/groq/test_groq_utils.py | 3 +- .../tests/inference/test_prompt_adapter.py | 13 +- .../tests/inference/test_text_inference.py | 16 +- llama_stack/providers/tests/report.py | 5 +- .../providers/utils/inference/__init__.py | 4 +- .../utils/inference/model_registry.py | 3 +- .../utils/inference/openai_compat.py | 15 +- .../utils/inference/prompt_adapter.py | 37 +- .../providers/utils/kvstore/sqlite/config.py | 3 +- .../utils/telemetry/trace_protocol.py | 3 +- llama_stack/schema_utils.py | 50 + 
llama_stack/scripts/generate_prompt_format.py | 65 ++ llama_stack/strong_typing/__init__.py | 19 + llama_stack/strong_typing/auxiliary.py | 226 ++++ llama_stack/strong_typing/classdef.py | 440 +++++++ llama_stack/strong_typing/core.py | 46 + llama_stack/strong_typing/deserializer.py | 876 ++++++++++++++ llama_stack/strong_typing/docstring.py | 399 +++++++ llama_stack/strong_typing/exception.py | 23 + llama_stack/strong_typing/inspection.py | 1034 +++++++++++++++++ llama_stack/strong_typing/mapping.py | 40 + llama_stack/strong_typing/name.py | 182 +++ llama_stack/strong_typing/py.typed | 0 llama_stack/strong_typing/schema.py | 752 ++++++++++++ llama_stack/strong_typing/serialization.py | 97 ++ llama_stack/strong_typing/serializer.py | 497 ++++++++ llama_stack/strong_typing/slots.py | 27 + llama_stack/strong_typing/topological.py | 89 ++ llama_stack/templates/bedrock/bedrock.py | 3 +- llama_stack/templates/cerebras/cerebras.py | 3 +- llama_stack/templates/fireworks/fireworks.py | 3 +- llama_stack/templates/nvidia/nvidia.py | 3 +- llama_stack/templates/sambanova/sambanova.py | 3 +- llama_stack/templates/together/together.py | 3 +- pyproject.toml | 1 + requirements.txt | 9 +- tests/client-sdk/report.py | 12 +- uv.lock | 10 +- 138 files changed, 8491 insertions(+), 465 deletions(-) delete mode 100644 llama_stack/apis/agents/event_logger.py create mode 100644 llama_stack/models/llama/datatypes.py create mode 100644 llama_stack/models/llama/llama3/dog.jpg create mode 100644 llama_stack/models/llama/llama3/interface.py create mode 100644 llama_stack/models/llama/llama3/pasta.jpeg create mode 100644 llama_stack/models/llama/llama3/prompt_templates/__init__.py create mode 100644 llama_stack/models/llama/llama3/prompt_templates/base.py create mode 100644 llama_stack/models/llama/llama3/prompt_templates/system_prompts.py create mode 100644 llama_stack/models/llama/llama3/prompt_templates/tool_response.py create mode 100644 llama_stack/models/llama/llama3/template_data.py create mode 100644 llama_stack/models/llama/llama3/test_system_prompts.py create mode 100644 llama_stack/models/llama/llama3_1/__init__.py create mode 100644 llama_stack/models/llama/llama3_1/prompts.py create mode 100644 llama_stack/models/llama/llama3_2/__init__.py create mode 100644 llama_stack/models/llama/llama3_2/prompts_text.py create mode 100644 llama_stack/models/llama/llama3_2/prompts_vision.py create mode 100644 llama_stack/models/llama/llama3_3/prompts.py create mode 100644 llama_stack/models/llama/prompt_format.py create mode 100644 llama_stack/models/llama/sku_list.py create mode 100644 llama_stack/schema_utils.py create mode 100644 llama_stack/scripts/generate_prompt_format.py create mode 100644 llama_stack/strong_typing/__init__.py create mode 100644 llama_stack/strong_typing/auxiliary.py create mode 100644 llama_stack/strong_typing/classdef.py create mode 100644 llama_stack/strong_typing/core.py create mode 100644 llama_stack/strong_typing/deserializer.py create mode 100644 llama_stack/strong_typing/docstring.py create mode 100644 llama_stack/strong_typing/exception.py create mode 100644 llama_stack/strong_typing/inspection.py create mode 100644 llama_stack/strong_typing/mapping.py create mode 100644 llama_stack/strong_typing/name.py create mode 100644 llama_stack/strong_typing/py.typed create mode 100644 llama_stack/strong_typing/schema.py create mode 100644 llama_stack/strong_typing/serialization.py create mode 100644 llama_stack/strong_typing/serializer.py create mode 100644 llama_stack/strong_typing/slots.py 
create mode 100644 llama_stack/strong_typing/topological.py diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 9bdb10d95..9b8b9a8df 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -30,6 +30,7 @@ repos: rev: v0.9.4 hooks: - id: ruff + exclude: ^llama_stack/strong_typing/.*$ - id: ruff-format - repo: https://github.com/adamchainz/blacken-docs @@ -43,7 +44,13 @@ repos: rev: 0.5.26 hooks: - id: uv-export - args: ["--frozen", "--no-hashes", "--no-emit-project"] + args: [ + "--frozen", + "--no-hashes", + "--no-emit-project", + "--output-file=requirements.txt" + ] + files: ^pyproject\.toml$ - id: uv-sync # - repo: https://github.com/pre-commit/mirrors-mypy diff --git a/docs/openapi_generator/generate.py b/docs/openapi_generator/generate.py index 48109e5d8..dcbee7d2f 100644 --- a/docs/openapi_generator/generate.py +++ b/docs/openapi_generator/generate.py @@ -16,18 +16,6 @@ from pathlib import Path import fire import ruamel.yaml as yaml -from llama_models import schema_utils - -# We do some monkey-patching to ensure our definitions only use the minimal -# (json_schema_type, webmethod) definitions from the llama_models package. For -# generation though, we need the full definitions and implementations from the -# (json-strong-typing) package. - -from .strong_typing.schema import json_schema_type, register_schema - -schema_utils.json_schema_type = json_schema_type -schema_utils.register_schema = register_schema - from llama_stack.apis.version import LLAMA_STACK_API_VERSION # noqa: E402 from llama_stack.distribution.stack import LlamaStack # noqa: E402 diff --git a/docs/openapi_generator/pyopenapi/generator.py b/docs/openapi_generator/pyopenapi/generator.py index 0f3b99784..60cd7a242 100644 --- a/docs/openapi_generator/pyopenapi/generator.py +++ b/docs/openapi_generator/pyopenapi/generator.py @@ -10,9 +10,9 @@ import typing from dataclasses import make_dataclass from typing import Any, Dict, Set, Union -from ..strong_typing.core import JsonType -from ..strong_typing.docstring import Docstring, parse_type -from ..strong_typing.inspection import ( +from llama_stack.strong_typing.core import JsonType +from llama_stack.strong_typing.docstring import Docstring, parse_type +from llama_stack.strong_typing.inspection import ( is_generic_list, is_type_optional, is_type_union, @@ -20,15 +20,15 @@ from ..strong_typing.inspection import ( unwrap_optional_type, unwrap_union_types, ) -from ..strong_typing.name import python_type_to_name -from ..strong_typing.schema import ( +from llama_stack.strong_typing.name import python_type_to_name +from llama_stack.strong_typing.schema import ( get_schema_identifier, JsonSchemaGenerator, register_schema, Schema, SchemaOptions, ) -from ..strong_typing.serialization import json_dump_string, object_to_json +from llama_stack.strong_typing.serialization import json_dump_string, object_to_json from .operations import ( EndpointOperation, diff --git a/docs/openapi_generator/pyopenapi/operations.py b/docs/openapi_generator/pyopenapi/operations.py index bf4d35c87..88a403182 100644 --- a/docs/openapi_generator/pyopenapi/operations.py +++ b/docs/openapi_generator/pyopenapi/operations.py @@ -15,7 +15,7 @@ from llama_stack.apis.version import LLAMA_STACK_API_VERSION from termcolor import colored -from ..strong_typing.inspection import get_signature +from llama_stack.strong_typing.inspection import get_signature def split_prefix( diff --git a/docs/openapi_generator/pyopenapi/specification.py b/docs/openapi_generator/pyopenapi/specification.py index 
f96de58b6..9e5363b4a 100644 --- a/docs/openapi_generator/pyopenapi/specification.py +++ b/docs/openapi_generator/pyopenapi/specification.py @@ -9,7 +9,7 @@ import enum from dataclasses import dataclass from typing import Any, ClassVar, Dict, List, Optional, Union -from ..strong_typing.schema import JsonType, Schema, StrictJsonType +from llama_stack.strong_typing.schema import JsonType, Schema, StrictJsonType URL = str diff --git a/docs/openapi_generator/pyopenapi/utility.py b/docs/openapi_generator/pyopenapi/utility.py index 54f10d473..f134aab4b 100644 --- a/docs/openapi_generator/pyopenapi/utility.py +++ b/docs/openapi_generator/pyopenapi/utility.py @@ -9,7 +9,7 @@ import typing from pathlib import Path from typing import TextIO -from ..strong_typing.schema import object_to_json, StrictJsonType +from llama_stack.strong_typing.schema import object_to_json, StrictJsonType from .generator import Generator from .options import Options diff --git a/llama_stack/apis/agents/agents.py b/llama_stack/apis/agents/agents.py index 106d34584..ccd15c3d6 100644 --- a/llama_stack/apis/agents/agents.py +++ b/llama_stack/apis/agents/agents.py @@ -19,7 +19,6 @@ from typing import ( runtime_checkable, ) -from llama_models.schema_utils import json_schema_type, register_schema, webmethod from pydantic import BaseModel, ConfigDict, Field from llama_stack.apis.common.content_types import URL, ContentDelta, InterleavedContent @@ -38,6 +37,7 @@ from llama_stack.apis.inference import ( from llama_stack.apis.safety import SafetyViolation from llama_stack.apis.tools import ToolDef from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol +from llama_stack.schema_utils import json_schema_type, register_schema, webmethod class Attachment(BaseModel): diff --git a/llama_stack/apis/agents/event_logger.py b/llama_stack/apis/agents/event_logger.py deleted file mode 100644 index 835ce4cee..000000000 --- a/llama_stack/apis/agents/event_logger.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. 
- -from typing import Optional - -from llama_models.llama3.api.datatypes import ToolPromptFormat -from llama_models.llama3.api.tool_utils import ToolUtils -from termcolor import cprint - -from llama_stack.apis.agents import AgentTurnResponseEventType, StepType -from llama_stack.apis.common.content_types import ToolCallParseStatus -from llama_stack.apis.inference import ToolResponseMessage -from llama_stack.providers.utils.inference.prompt_adapter import ( - interleaved_content_as_str, -) - - -class LogEvent: - def __init__( - self, - role: Optional[str] = None, - content: str = "", - end: str = "\n", - color="white", - ): - self.role = role - self.content = content - self.color = color - self.end = "\n" if end is None else end - - def __str__(self): - if self.role is not None: - return f"{self.role}> {self.content}" - else: - return f"{self.content}" - - def print(self, flush=True): - cprint(f"{str(self)}", color=self.color, end=self.end, flush=flush) - - -EventType = AgentTurnResponseEventType - - -class EventLogger: - async def log( - self, - event_generator, - stream=True, - tool_prompt_format: ToolPromptFormat = ToolPromptFormat.json, - ): - previous_event_type = None - previous_step_type = None - - async for chunk in event_generator: - if not hasattr(chunk, "event"): - # Need to check for custom tool first - # since it does not produce event but instead - # a Message - if isinstance(chunk, ToolResponseMessage): - yield ( - chunk, - LogEvent(role="CustomTool", content=chunk.content, color="grey"), - ) - continue - - event = chunk.event - event_type = event.payload.event_type - if event_type in { - EventType.turn_start.value, - EventType.turn_complete.value, - }: - # Currently not logging any turn realted info - yield event, None - continue - - step_type = event.payload.step_type - # handle safety - if step_type == StepType.shield_call and event_type == EventType.step_complete.value: - violation = event.payload.step_details.violation - if not violation: - yield ( - event, - LogEvent(role=step_type, content="No Violation", color="magenta"), - ) - else: - yield ( - event, - LogEvent( - role=step_type, - content=f"{violation.metadata} {violation.user_message}", - color="red", - ), - ) - - # handle inference - if step_type == StepType.inference: - if stream: - if event_type == EventType.step_start.value: - # TODO: Currently this event is never received - yield ( - event, - LogEvent(role=step_type, content="", end="", color="yellow"), - ) - elif event_type == EventType.step_progress.value: - # HACK: if previous was not step/event was not inference's step_progress - # this is the first time we are getting model inference response - # aka equivalent to step_start for inference. Hence, - # start with "Model>". 
- if ( - previous_event_type != EventType.step_progress.value - and previous_step_type != StepType.inference - ): - yield ( - event, - LogEvent(role=step_type, content="", end="", color="yellow"), - ) - - delta = event.payload.delta - if delta.type == "tool_call": - if delta.parse_status == ToolCallParseStatus.succeeded: - yield ( - event, - LogEvent( - role=None, - content=delta.tool_call, - end="", - color="cyan", - ), - ) - else: - yield ( - event, - LogEvent( - role=None, - content=delta.text, - end="", - color="yellow", - ), - ) - else: - # step_complete - yield event, LogEvent(role=None, content="") - - else: - # Not streaming - if event_type == EventType.step_complete.value: - response = event.payload.step_details.model_response - if response.tool_calls: - content = ToolUtils.encode_tool_call(response.tool_calls[0], tool_prompt_format) - else: - content = response.content - yield ( - event, - LogEvent( - role=step_type, - content=content, - color="yellow", - ), - ) - - # handle tool_execution - if ( - step_type == StepType.tool_execution - and - # Only print tool calls and responses at the step_complete event - event_type == EventType.step_complete.value - ): - details = event.payload.step_details - for t in details.tool_calls: - yield ( - event, - LogEvent( - role=step_type, - content=f"Tool:{t.tool_name} Args:{t.arguments}", - color="green", - ), - ) - for r in details.tool_responses: - yield ( - event, - LogEvent( - role=step_type, - content=f"Tool:{r.tool_name} Response:{r.content}", - color="green", - ), - ) - - if step_type == StepType.memory_retrieval and event_type == EventType.step_complete.value: - details = event.payload.step_details - inserted_context = interleaved_content_as_str(details.inserted_context) - content = f"fetched {len(inserted_context)} bytes from {details.vector_db_ids}" - - yield ( - event, - LogEvent( - role=step_type, - content=content, - color="cyan", - ), - ) - - previous_event_type = event_type - previous_step_type = step_type diff --git a/llama_stack/apis/batch_inference/batch_inference.py b/llama_stack/apis/batch_inference/batch_inference.py index 413c81c5a..0fa5c78ce 100644 --- a/llama_stack/apis/batch_inference/batch_inference.py +++ b/llama_stack/apis/batch_inference/batch_inference.py @@ -6,7 +6,6 @@ from typing import List, Optional, Protocol, runtime_checkable -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel from llama_stack.apis.inference import ( @@ -21,6 +20,7 @@ from llama_stack.apis.inference import ( ToolDefinition, ToolPromptFormat, ) +from llama_stack.schema_utils import json_schema_type, webmethod @json_schema_type diff --git a/llama_stack/apis/benchmarks/benchmarks.py b/llama_stack/apis/benchmarks/benchmarks.py index af5784bbc..91b1ca927 100644 --- a/llama_stack/apis/benchmarks/benchmarks.py +++ b/llama_stack/apis/benchmarks/benchmarks.py @@ -5,10 +5,10 @@ # the root directory of this source tree. 
from typing import Any, Dict, List, Literal, Optional, Protocol, runtime_checkable -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel, Field from llama_stack.apis.resource import Resource, ResourceType +from llama_stack.schema_utils import json_schema_type, webmethod class CommonBenchmarkFields(BaseModel): diff --git a/llama_stack/apis/common/content_types.py b/llama_stack/apis/common/content_types.py index e648f9a19..0d0afa894 100644 --- a/llama_stack/apis/common/content_types.py +++ b/llama_stack/apis/common/content_types.py @@ -7,10 +7,11 @@ from enum import Enum from typing import Annotated, List, Literal, Optional, Union -from llama_models.llama3.api.datatypes import ToolCall -from llama_models.schema_utils import json_schema_type, register_schema from pydantic import BaseModel, Field, model_validator +from llama_stack.models.llama.datatypes import ToolCall +from llama_stack.schema_utils import json_schema_type, register_schema + @json_schema_type class URL(BaseModel): diff --git a/llama_stack/apis/common/deployment_types.py b/llama_stack/apis/common/deployment_types.py index 16a5c8ad6..83eea28a2 100644 --- a/llama_stack/apis/common/deployment_types.py +++ b/llama_stack/apis/common/deployment_types.py @@ -7,10 +7,10 @@ from enum import Enum from typing import Any, Dict, Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel from llama_stack.apis.common.content_types import URL +from llama_stack.schema_utils import json_schema_type @json_schema_type diff --git a/llama_stack/apis/common/job_types.py b/llama_stack/apis/common/job_types.py index c945bd8ff..bc070017b 100644 --- a/llama_stack/apis/common/job_types.py +++ b/llama_stack/apis/common/job_types.py @@ -5,9 +5,10 @@ # the root directory of this source tree. 
from enum import Enum -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel +from llama_stack.schema_utils import json_schema_type + @json_schema_type class Job(BaseModel): diff --git a/llama_stack/apis/common/training_types.py b/llama_stack/apis/common/training_types.py index b4bd1b0c6..d6c6c6919 100644 --- a/llama_stack/apis/common/training_types.py +++ b/llama_stack/apis/common/training_types.py @@ -7,9 +7,10 @@ from datetime import datetime from typing import Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel +from llama_stack.schema_utils import json_schema_type + @json_schema_type class PostTrainingMetric(BaseModel): diff --git a/llama_stack/apis/common/type_system.py b/llama_stack/apis/common/type_system.py index fa9c5e92e..139ae8875 100644 --- a/llama_stack/apis/common/type_system.py +++ b/llama_stack/apis/common/type_system.py @@ -6,10 +6,11 @@ from typing import Literal, Union -from llama_models.schema_utils import json_schema_type, register_schema from pydantic import BaseModel, Field from typing_extensions import Annotated +from llama_stack.schema_utils import json_schema_type, register_schema + @json_schema_type class StringType(BaseModel): diff --git a/llama_stack/apis/datasetio/datasetio.py b/llama_stack/apis/datasetio/datasetio.py index 2ad7aab73..d85d22876 100644 --- a/llama_stack/apis/datasetio/datasetio.py +++ b/llama_stack/apis/datasetio/datasetio.py @@ -6,10 +6,10 @@ from typing import Any, Dict, List, Optional, Protocol, runtime_checkable -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel from llama_stack.apis.datasets import Dataset +from llama_stack.schema_utils import json_schema_type, webmethod @json_schema_type diff --git a/llama_stack/apis/datasets/datasets.py b/llama_stack/apis/datasets/datasets.py index 5e2b38697..fe9d30e2a 100644 --- a/llama_stack/apis/datasets/datasets.py +++ b/llama_stack/apis/datasets/datasets.py @@ -6,12 +6,12 @@ from typing import Any, Dict, List, Literal, Optional, Protocol -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel, Field from llama_stack.apis.common.content_types import URL from llama_stack.apis.common.type_system import ParamType from llama_stack.apis.resource import Resource, ResourceType +from llama_stack.schema_utils import json_schema_type, webmethod class CommonDatasetFields(BaseModel): diff --git a/llama_stack/apis/datatypes.py b/llama_stack/apis/datatypes.py index 0751b2c9b..6df93052c 100644 --- a/llama_stack/apis/datatypes.py +++ b/llama_stack/apis/datatypes.py @@ -6,7 +6,7 @@ from enum import Enum -from llama_models.schema_utils import json_schema_type +from llama_stack.schema_utils import json_schema_type @json_schema_type diff --git a/llama_stack/apis/eval/eval.py b/llama_stack/apis/eval/eval.py index e5c782150..e2ff4458e 100644 --- a/llama_stack/apis/eval/eval.py +++ b/llama_stack/apis/eval/eval.py @@ -6,7 +6,6 @@ from typing import Any, Dict, List, Literal, Optional, Protocol, Union -from llama_models.schema_utils import json_schema_type, register_schema, webmethod from pydantic import BaseModel, Field from typing_extensions import Annotated @@ -15,6 +14,7 @@ from llama_stack.apis.common.job_types import Job, JobStatus from llama_stack.apis.inference import SamplingParams, SystemMessage from llama_stack.apis.scoring import ScoringResult from llama_stack.apis.scoring_functions import ScoringFnParams +from llama_stack.schema_utils import 
json_schema_type, register_schema, webmethod @json_schema_type diff --git a/llama_stack/apis/inference/inference.py b/llama_stack/apis/inference/inference.py index 9fccd3911..433ba3274 100644 --- a/llama_stack/apis/inference/inference.py +++ b/llama_stack/apis/inference/inference.py @@ -17,7 +17,13 @@ from typing import ( runtime_checkable, ) -from llama_models.llama3.api.datatypes import ( +from pydantic import BaseModel, Field, field_validator +from typing_extensions import Annotated + +from llama_stack.apis.common.content_types import ContentDelta, InterleavedContent +from llama_stack.apis.models import Model +from llama_stack.apis.telemetry.telemetry import MetricResponseMixin +from llama_stack.models.llama.datatypes import ( BuiltinTool, SamplingParams, StopReason, @@ -25,14 +31,8 @@ from llama_models.llama3.api.datatypes import ( ToolDefinition, ToolPromptFormat, ) -from llama_models.schema_utils import json_schema_type, register_schema, webmethod -from pydantic import BaseModel, Field, field_validator -from typing_extensions import Annotated - -from llama_stack.apis.common.content_types import ContentDelta, InterleavedContent -from llama_stack.apis.models import Model -from llama_stack.apis.telemetry.telemetry import MetricResponseMixin from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol +from llama_stack.schema_utils import json_schema_type, register_schema, webmethod class LogProbConfig(BaseModel): diff --git a/llama_stack/apis/inspect/inspect.py b/llama_stack/apis/inspect/inspect.py index cd51469c1..4a647a2d9 100644 --- a/llama_stack/apis/inspect/inspect.py +++ b/llama_stack/apis/inspect/inspect.py @@ -6,9 +6,10 @@ from typing import List, Protocol, runtime_checkable -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel +from llama_stack.schema_utils import json_schema_type, webmethod + @json_schema_type class ProviderInfo(BaseModel): diff --git a/llama_stack/apis/models/models.py b/llama_stack/apis/models/models.py index 7e6d9854f..64b9510ea 100644 --- a/llama_stack/apis/models/models.py +++ b/llama_stack/apis/models/models.py @@ -7,11 +7,11 @@ from enum import Enum from typing import Any, Dict, List, Literal, Optional, Protocol, runtime_checkable -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel, ConfigDict, Field from llama_stack.apis.resource import Resource, ResourceType from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol +from llama_stack.schema_utils import json_schema_type, webmethod class CommonModelFields(BaseModel): diff --git a/llama_stack/apis/post_training/post_training.py b/llama_stack/apis/post_training/post_training.py index 8cd2979a8..ed15c6de4 100644 --- a/llama_stack/apis/post_training/post_training.py +++ b/llama_stack/apis/post_training/post_training.py @@ -8,13 +8,13 @@ from datetime import datetime from enum import Enum from typing import Any, Dict, List, Literal, Optional, Protocol, Union -from llama_models.schema_utils import json_schema_type, register_schema, webmethod from pydantic import BaseModel, Field from typing_extensions import Annotated from llama_stack.apis.common.content_types import URL from llama_stack.apis.common.job_types import JobStatus from llama_stack.apis.common.training_types import Checkpoint +from llama_stack.schema_utils import json_schema_type, register_schema, webmethod @json_schema_type diff --git a/llama_stack/apis/safety/safety.py b/llama_stack/apis/safety/safety.py index 
513733d1e..fd2f0292c 100644 --- a/llama_stack/apis/safety/safety.py +++ b/llama_stack/apis/safety/safety.py @@ -7,12 +7,12 @@ from enum import Enum from typing import Any, Dict, List, Optional, Protocol, runtime_checkable -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel, Field from llama_stack.apis.inference import Message from llama_stack.apis.shields import Shield from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol +from llama_stack.schema_utils import json_schema_type, webmethod @json_schema_type diff --git a/llama_stack/apis/scoring/scoring.py b/llama_stack/apis/scoring/scoring.py index 5bacaaf66..960149476 100644 --- a/llama_stack/apis/scoring/scoring.py +++ b/llama_stack/apis/scoring/scoring.py @@ -6,10 +6,10 @@ from typing import Any, Dict, List, Optional, Protocol, runtime_checkable -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel from llama_stack.apis.scoring_functions import ScoringFn, ScoringFnParams +from llama_stack.schema_utils import json_schema_type, webmethod # mapping of metric to value ScoringResultRow = Dict[str, Any] diff --git a/llama_stack/apis/scoring_functions/scoring_functions.py b/llama_stack/apis/scoring_functions/scoring_functions.py index fece50fbd..52508d2ec 100644 --- a/llama_stack/apis/scoring_functions/scoring_functions.py +++ b/llama_stack/apis/scoring_functions/scoring_functions.py @@ -16,12 +16,12 @@ from typing import ( runtime_checkable, ) -from llama_models.schema_utils import json_schema_type, register_schema, webmethod from pydantic import BaseModel, Field from typing_extensions import Annotated from llama_stack.apis.common.type_system import ParamType from llama_stack.apis.resource import Resource, ResourceType +from llama_stack.schema_utils import json_schema_type, register_schema, webmethod # Perhaps more structure can be imposed on these functions. 
Maybe they could be associated diff --git a/llama_stack/apis/shields/shields.py b/llama_stack/apis/shields/shields.py index ae316ee53..ec1179ac4 100644 --- a/llama_stack/apis/shields/shields.py +++ b/llama_stack/apis/shields/shields.py @@ -6,11 +6,11 @@ from typing import Any, Dict, List, Literal, Optional, Protocol, runtime_checkable -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel from llama_stack.apis.resource import Resource, ResourceType from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol +from llama_stack.schema_utils import json_schema_type, webmethod class CommonShieldFields(BaseModel): diff --git a/llama_stack/apis/synthetic_data_generation/synthetic_data_generation.py b/llama_stack/apis/synthetic_data_generation/synthetic_data_generation.py index a61fb0cf2..7b41192af 100644 --- a/llama_stack/apis/synthetic_data_generation/synthetic_data_generation.py +++ b/llama_stack/apis/synthetic_data_generation/synthetic_data_generation.py @@ -7,10 +7,10 @@ from enum import Enum from typing import Any, Dict, List, Optional, Protocol, Union -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel from llama_stack.apis.inference import Message +from llama_stack.schema_utils import json_schema_type, webmethod class FilteringFunction(Enum): diff --git a/llama_stack/apis/telemetry/telemetry.py b/llama_stack/apis/telemetry/telemetry.py index 63ae1dc73..d010a7e3b 100644 --- a/llama_stack/apis/telemetry/telemetry.py +++ b/llama_stack/apis/telemetry/telemetry.py @@ -17,11 +17,12 @@ from typing import ( runtime_checkable, ) -from llama_models.llama3.api.datatypes import Primitive -from llama_models.schema_utils import json_schema_type, register_schema, webmethod from pydantic import BaseModel, Field from typing_extensions import Annotated +from llama_stack.models.llama.datatypes import Primitive +from llama_stack.schema_utils import json_schema_type, register_schema, webmethod + # Add this constant near the top of the file, after the imports DEFAULT_TTL_DAYS = 7 diff --git a/llama_stack/apis/tools/rag_tool.py b/llama_stack/apis/tools/rag_tool.py index 2e6b43eb8..cff8eeefe 100644 --- a/llama_stack/apis/tools/rag_tool.py +++ b/llama_stack/apis/tools/rag_tool.py @@ -7,12 +7,12 @@ from enum import Enum from typing import Any, Dict, List, Literal, Optional, Union -from llama_models.schema_utils import json_schema_type, register_schema, webmethod from pydantic import BaseModel, Field from typing_extensions import Annotated, Protocol, runtime_checkable from llama_stack.apis.common.content_types import URL, InterleavedContent from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol +from llama_stack.schema_utils import json_schema_type, register_schema, webmethod @json_schema_type diff --git a/llama_stack/apis/tools/tools.py b/llama_stack/apis/tools/tools.py index 2a407ca00..b83be127f 100644 --- a/llama_stack/apis/tools/tools.py +++ b/llama_stack/apis/tools/tools.py @@ -7,13 +7,13 @@ from enum import Enum from typing import Any, Dict, List, Literal, Optional -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel, Field from typing_extensions import Protocol, runtime_checkable from llama_stack.apis.common.content_types import URL, InterleavedContent from llama_stack.apis.resource import Resource, ResourceType from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol +from llama_stack.schema_utils import 
json_schema_type, webmethod from .rag_tool import RAGToolRuntime diff --git a/llama_stack/apis/vector_dbs/vector_dbs.py b/llama_stack/apis/vector_dbs/vector_dbs.py index 1da2c128c..9a4aa322f 100644 --- a/llama_stack/apis/vector_dbs/vector_dbs.py +++ b/llama_stack/apis/vector_dbs/vector_dbs.py @@ -6,11 +6,11 @@ from typing import List, Literal, Optional, Protocol, runtime_checkable -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel from llama_stack.apis.resource import Resource, ResourceType from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol +from llama_stack.schema_utils import json_schema_type, webmethod @json_schema_type diff --git a/llama_stack/apis/vector_io/vector_io.py b/llama_stack/apis/vector_io/vector_io.py index 8feeaa6d4..2bbb3bce8 100644 --- a/llama_stack/apis/vector_io/vector_io.py +++ b/llama_stack/apis/vector_io/vector_io.py @@ -10,12 +10,12 @@ # the root directory of this source tree. from typing import Any, Dict, List, Optional, Protocol, runtime_checkable -from llama_models.schema_utils import json_schema_type, webmethod from pydantic import BaseModel, Field from llama_stack.apis.inference import InterleavedContent from llama_stack.apis.vector_dbs import VectorDB from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol +from llama_stack.schema_utils import json_schema_type, webmethod class Chunk(BaseModel): diff --git a/llama_stack/cli/download.py b/llama_stack/cli/download.py index 3ea534277..6b0463c10 100644 --- a/llama_stack/cli/download.py +++ b/llama_stack/cli/download.py @@ -16,8 +16,6 @@ from pathlib import Path from typing import Dict, List, Optional import httpx -from llama_models.datatypes import Model -from llama_models.sku_list import LlamaDownloadInfo from pydantic import BaseModel, ConfigDict from rich.console import Console from rich.progress import ( @@ -31,6 +29,8 @@ from rich.progress import ( from termcolor import cprint from llama_stack.cli.subcommand import Subcommand +from llama_stack.models.llama.datatypes import Model +from llama_stack.models.llama.sku_list import LlamaDownloadInfo class Download(Subcommand): @@ -454,7 +454,7 @@ def run_download_cmd(args: argparse.Namespace, parser: argparse.ArgumentParser): # Handle comma-separated model IDs model_ids = [model_id.strip() for model_id in args.model_id.split(",")] - from llama_models.sku_list import llama_meta_net_info, resolve_model + from llama_stack.models.llama.sku_list import llama_meta_net_info, resolve_model from .model.safety_models import ( prompt_guard_download_info, diff --git a/llama_stack/cli/model/describe.py b/llama_stack/cli/model/describe.py index 3e55052c5..d8f4e035c 100644 --- a/llama_stack/cli/model/describe.py +++ b/llama_stack/cli/model/describe.py @@ -7,11 +7,11 @@ import argparse import json -from llama_models.sku_list import resolve_model from termcolor import colored from llama_stack.cli.subcommand import Subcommand from llama_stack.cli.table import print_table +from llama_stack.models.llama.sku_list import resolve_model class ModelDescribe(Subcommand): diff --git a/llama_stack/cli/model/list.py b/llama_stack/cli/model/list.py index 9b5ebb1a5..4fe28751e 100644 --- a/llama_stack/cli/model/list.py +++ b/llama_stack/cli/model/list.py @@ -6,10 +6,9 @@ import argparse -from llama_models.sku_list import all_registered_models - from llama_stack.cli.subcommand import Subcommand from llama_stack.cli.table import print_table +from llama_stack.models.llama.sku_list import 
all_registered_models class ModelList(Subcommand): diff --git a/llama_stack/cli/model/prompt_format.py b/llama_stack/cli/model/prompt_format.py index 2e1e1601e..ea9596ba5 100644 --- a/llama_stack/cli/model/prompt_format.py +++ b/llama_stack/cli/model/prompt_format.py @@ -8,9 +8,8 @@ import argparse import textwrap from io import StringIO -from llama_models.datatypes import CoreModelId, ModelFamily, is_multimodal, model_family - from llama_stack.cli.subcommand import Subcommand +from llama_stack.models.llama.datatypes import CoreModelId, ModelFamily, is_multimodal, model_family class ModelPromptFormat(Subcommand): diff --git a/llama_stack/cli/model/safety_models.py b/llama_stack/cli/model/safety_models.py index 2321c4615..c81783f60 100644 --- a/llama_stack/cli/model/safety_models.py +++ b/llama_stack/cli/model/safety_models.py @@ -6,11 +6,11 @@ from typing import Any, Dict, Optional -from llama_models.datatypes import CheckpointQuantizationFormat -from llama_models.llama3.api.datatypes import SamplingParams -from llama_models.sku_list import LlamaDownloadInfo from pydantic import BaseModel, ConfigDict, Field +from llama_stack.models.llama.datatypes import CheckpointQuantizationFormat, SamplingParams +from llama_stack.models.llama.sku_list import LlamaDownloadInfo + class PromptGuardModel(BaseModel): """Make a 'fake' Model-like object for Prompt Guard. Eventually this will be removed.""" diff --git a/llama_stack/distribution/client.py b/llama_stack/distribution/client.py index b1d174ede..1925b864f 100644 --- a/llama_stack/distribution/client.py +++ b/llama_stack/distribution/client.py @@ -186,33 +186,3 @@ def extract_async_iterator_type(type_hint): inner_args = get_args(arg) return inner_args[0] return None - - -async def example(model: str = None): - from llama_stack.apis.inference import Inference, UserMessage # noqa: F403 - from llama_stack.apis.inference.event_logger import EventLogger - - client_class = create_api_client_class(Inference) - client = client_class("http://localhost:5003") - - if not model: - model = "Llama3.2-3B-Instruct" - - message = UserMessage(content="hello world, write me a 2 sentence poem about the moon") - cprint(f"User>{message.content}", "green") - - stream = True - iterator = await client.chat_completion( - model=model, - messages=[message], - stream=stream, - ) - - async for log in EventLogger().log(iterator): - log.print() - - -if __name__ == "__main__": - import asyncio - - asyncio.run(example()) diff --git a/llama_stack/models/llama/datatypes.py b/llama_stack/models/llama/datatypes.py new file mode 100644 index 000000000..a5dc9ac4a --- /dev/null +++ b/llama_stack/models/llama/datatypes.py @@ -0,0 +1,277 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. 
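Every hunk above follows the same migration: the schema helpers (`json_schema_type`, `register_schema`, `webmethod`) now come from `llama_stack.schema_utils`, and the model SKU helpers move from `llama_models.datatypes` / `llama_models.sku_list` to `llama_stack.models.llama.*`. A minimal sketch of a downstream module after the migration, assuming only the import paths change and the helpers keep their current behavior; `ExampleShieldParams` and the model descriptor are illustrative and not part of this patch:

from pydantic import BaseModel

from llama_stack.models.llama.sku_list import resolve_model  # was: llama_models.sku_list
from llama_stack.schema_utils import json_schema_type, register_schema  # was: llama_models.schema_utils


@json_schema_type  # same bare decorator usage as in the hunks above
class ExampleShieldParams(BaseModel):
    model_id: str
    threshold: float = 0.5


register_schema(ExampleShieldParams)  # mirrors register_schema(ToolCall) in datatypes.py below

llama = resolve_model("Llama3.2-3B-Instruct")  # SKU lookup now comes from llama_stack.models.llama.sku_list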
+ +from enum import Enum +from typing import Any, Dict, Literal, Optional, Union + +# import all for backwards compatibility +from llama_models.datatypes import * # noqa: F403 +from pydantic import BaseModel, ConfigDict, Field, field_validator +from typing_extensions import Annotated + +from llama_stack.schema_utils import json_schema_type, register_schema + +register_schema(ToolCall) + + +@json_schema_type +class ToolParamDefinition(BaseModel): + param_type: str + description: Optional[str] = None + required: Optional[bool] = True + default: Optional[Any] = None + + +@json_schema_type +class ToolDefinition(BaseModel): + tool_name: Union[BuiltinTool, str] + description: Optional[str] = None + parameters: Optional[Dict[str, ToolParamDefinition]] = None + + @field_validator("tool_name", mode="before") + @classmethod + def validate_field(cls, v): + if isinstance(v, str): + try: + return BuiltinTool(v) + except ValueError: + return v + return v + + +@json_schema_type +class GreedySamplingStrategy(BaseModel): + type: Literal["greedy"] = "greedy" + + +@json_schema_type +class TopPSamplingStrategy(BaseModel): + type: Literal["top_p"] = "top_p" + temperature: Optional[float] = Field(..., gt=0.0) + top_p: Optional[float] = 0.95 + + +@json_schema_type +class TopKSamplingStrategy(BaseModel): + type: Literal["top_k"] = "top_k" + top_k: int = Field(..., ge=1) + + +SamplingStrategy = register_schema( + Annotated[ + Union[GreedySamplingStrategy, TopPSamplingStrategy, TopKSamplingStrategy], + Field(discriminator="type"), + ], + name="SamplingStrategy", +) + + +@json_schema_type +class SamplingParams(BaseModel): + strategy: SamplingStrategy = Field(default_factory=GreedySamplingStrategy) + + max_tokens: Optional[int] = 0 + repetition_penalty: Optional[float] = 1.0 + + +class CheckpointQuantizationFormat(Enum): + # default format + bf16 = "bf16" + + # used for enabling fp8_rowwise inference, some weights are bf16 + fp8_mixed = "fp8-mixed" + + int8 = "int8" + + int4 = "int4" + + +class ModelFamily(Enum): + llama2 = "llama2" + llama3 = "llama3" + llama3_1 = "llama3_1" + llama3_2 = "llama3_2" + llama3_3 = "llama3_3" + safety = "safety" + + +class CoreModelId(Enum): + """Each of these models is a unique "SKU". 
These root models can be served in various garbs (especially by quantizing them)""" + + # Llama 2 family + llama2_7b = "Llama-2-7b" + llama2_13b = "Llama-2-13b" + llama2_70b = "Llama-2-70b" + llama2_7b_chat = "Llama-2-7b-chat" + llama2_13b_chat = "Llama-2-13b-chat" + llama2_70b_chat = "Llama-2-70b-chat" + + # Llama 3 family + llama3_8b = "Llama-3-8B" + llama3_70b = "Llama-3-70B" + llama3_8b_instruct = "Llama-3-8B-Instruct" + llama3_70b_instruct = "Llama-3-70B-Instruct" + + # Llama 3.1 family + llama3_1_8b = "Llama3.1-8B" + llama3_1_70b = "Llama3.1-70B" + llama3_1_405b = "Llama3.1-405B" + llama3_1_8b_instruct = "Llama3.1-8B-Instruct" + llama3_1_70b_instruct = "Llama3.1-70B-Instruct" + llama3_1_405b_instruct = "Llama3.1-405B-Instruct" + + # Llama 3.2 family + llama3_2_1b = "Llama3.2-1B" + llama3_2_3b = "Llama3.2-3B" + llama3_2_1b_instruct = "Llama3.2-1B-Instruct" + llama3_2_3b_instruct = "Llama3.2-3B-Instruct" + llama3_2_11b_vision = "Llama3.2-11B-Vision" + llama3_2_90b_vision = "Llama3.2-90B-Vision" + llama3_2_11b_vision_instruct = "Llama3.2-11B-Vision-Instruct" + llama3_2_90b_vision_instruct = "Llama3.2-90B-Vision-Instruct" + + # Llama 3.3 family + llama3_3_70b_instruct = "Llama3.3-70B-Instruct" + + # Safety models + llama_guard_3_8b = "Llama-Guard-3-8B" + llama_guard_2_8b = "Llama-Guard-2-8B" + llama_guard_3_11b_vision = "Llama-Guard-3-11B-Vision" + llama_guard_3_1b = "Llama-Guard-3-1B" + + +def is_multimodal(model_id) -> bool: + if model_id in [ + CoreModelId.llama3_2_11b_vision, + CoreModelId.llama3_2_90b_vision, + CoreModelId.llama3_2_11b_vision_instruct, + CoreModelId.llama3_2_90b_vision_instruct, + ]: + return True + else: + return False + + +def model_family(model_id) -> ModelFamily: + if model_id in [ + CoreModelId.llama2_7b, + CoreModelId.llama2_13b, + CoreModelId.llama2_70b, + CoreModelId.llama2_7b_chat, + CoreModelId.llama2_13b_chat, + CoreModelId.llama2_70b_chat, + ]: + return ModelFamily.llama2 + elif model_id in [ + CoreModelId.llama3_8b, + CoreModelId.llama3_70b, + CoreModelId.llama3_8b_instruct, + CoreModelId.llama3_70b_instruct, + ]: + return ModelFamily.llama3 + elif model_id in [ + CoreModelId.llama3_1_8b, + CoreModelId.llama3_1_70b, + CoreModelId.llama3_1_405b, + CoreModelId.llama3_1_8b_instruct, + CoreModelId.llama3_1_70b_instruct, + CoreModelId.llama3_1_405b_instruct, + ]: + return ModelFamily.llama3_1 + elif model_id in [ + CoreModelId.llama3_2_1b, + CoreModelId.llama3_2_3b, + CoreModelId.llama3_2_1b_instruct, + CoreModelId.llama3_2_3b_instruct, + CoreModelId.llama3_2_11b_vision, + CoreModelId.llama3_2_90b_vision, + CoreModelId.llama3_2_11b_vision_instruct, + CoreModelId.llama3_2_90b_vision_instruct, + ]: + return ModelFamily.llama3_2 + elif model_id in [ + CoreModelId.llama3_3_70b_instruct, + ]: + return ModelFamily.llama3_3 + elif model_id in [ + CoreModelId.llama_guard_3_8b, + CoreModelId.llama_guard_2_8b, + CoreModelId.llama_guard_3_11b_vision, + CoreModelId.llama_guard_3_1b, + ]: + return ModelFamily.safety + else: + raise ValueError(f"Unknown model family for {model_id}") + + +class Model(BaseModel): + core_model_id: CoreModelId + description: str + huggingface_repo: Optional[str] = None + recommended_sampling_params: Optional[SamplingParams] = None + arch_args: Dict[str, Any] + variant: str = "" + + quantization_format: CheckpointQuantizationFormat = CheckpointQuantizationFormat.bf16 + pth_file_count: int + metadata: Optional[Dict[str, Any]] = Field(default_factory=dict) + + # silence pydantic until we remove the `model_` fields + model_config = 
ConfigDict(protected_namespaces=()) + + @property + def model_family(self) -> ModelFamily: + return model_family(self.core_model_id) + + # The SKU is uniquely identified by (model_id, variant) combo + def descriptor(self, shorten_default_variant: bool = True) -> str: + if not self.variant: + return self.core_model_id.value + return f"{self.core_model_id.value}:{self.variant}" + + @property + def is_instruct_model(self) -> bool: + return "instruct" in self.id.name + + # Featured models are shown in the non-exhaustive model list + @property + def is_featured(self) -> bool: + return self.model_family in [ + ModelFamily.llama3_1, + ModelFamily.llama3_2, + ModelFamily.llama3_3, + ModelFamily.safety, + ] + + @property + def max_seq_length(self) -> int: + if self.model_family == ModelFamily.llama2: + return 4096 + elif self.core_model_id == CoreModelId.llama_guard_2_8b: + return 4096 + elif self.model_family == ModelFamily.llama3: + return 8192 + elif self.model_family in [ModelFamily.llama3_1, ModelFamily.llama3_3]: + return 131072 + elif self.model_family == ModelFamily.llama3_2: + if self.quantization_format == CheckpointQuantizationFormat.int4: + return 8192 + return 131072 + elif self.core_model_id in [ + CoreModelId.llama_guard_3_8b, + CoreModelId.llama_guard_3_11b_vision, + CoreModelId.llama_guard_3_1b, + ]: + return 131072 + else: + raise ValueError(f"Unknown max_seq_len for {self.core_model_id}") diff --git a/llama_stack/models/llama/llama3/dog.jpg b/llama_stack/models/llama/llama3/dog.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f9a3a80571b41ba2a03cc35d29e4c03b21e23e23 GIT binary patch literal 40215 zcmbSybyS?(@u@zjJ?A054P&lobFdC;$M;(+lwTH$V=6g@J*I zfsTcViHVJkg@a3shx`0FF8NC$0%B?kS{iB!Dk?fA9#%RAE=DRUw%6=jeEdQ}LbR-+ zk|F{UJc2?3|0Y4f#>U2dj!TAzMRk7sA%XIm{{02&z}a=zW_W#K}CIrhKi1khW0c%OXl!b3`PJRi+t)uZI5agqGYdz|%`dDX*VZ>Sx3+h7PfpLyFD|dHZ*KqLLV3dZ zzuJER`#*3IKH++XhK7oU`41P$Gry-dDj^y=JwL`v8Es5!4vj00^pZ>p){cm9ZjcWmbi;D8Jc&LN`X~5Yyih^1OYI|VY zw{O~Jh|hHdKNO(cK3DNK?`;>#9X$LT%updUT5>b9;4Oy%HC=UOH5WNXNf+%a7oH2= z;|VmY;9341ZQkWFO%)Whus~?q;f%*c8S`CT z?_+w&N^GLFP_c>=baV*cd>weYA|dhsMX)S~EIU*S&A*U%Ux!dl?%#l?8>oyzQ=(;9 zP==4>Y2jPNrg2$%jk$pZjdw^WB0o>eI}fZqp{-}@5s{eV4Al~k7Mj-VWnn{!$OH%M zsWXBBMz&|n&&^3vlgVX%8>N!nnQXQm2+-M*QCEnzQj7Srx2s+6ClU@ktCZ(J?J9Fe zy7r*x)7N|K87<#sml@B$vN+}izU+gce|%^auc9ey zs%Qi?toG`8nC67(Bv-)e>x6DPC`RIaU@_>PYkNks*BbkRSCeIx-I+w^)f_*gVZ(8O zm(&IlbGbzU7Wt&NrVpURoRTt8&p~qSlF?Dg2s|UDM;Bv;Eg@Ao6Ss{A7ReDoQ-s0Z zWoUNAvp#KnK@+cC4x7FpeMMV}Z@Q57Z}0j^G@W7W9YOR!!iY?{=*qa_I#YQg(%6}n z7WA*PFeV$6^HQILQ;IQDG!0++Gm090-YqI&PGKvl!{A z$h1+B@(oe}Zj75Q5Z0OG*LV>@QL5~cL90cQbmHL*XAqY>UYfl^_D8)>vt^RL@QDt{zvu@ zOcFZco*i8jb5=k)v{#(CDG%#JodL>_M2tdi;!MIV1O`Ndok{44r`1vw!D_l+3qFu5 zgFT2NK2XT>x9ILvisa!A|7q7YjZ4=4gTq?2a#ES+3xPyiWfnYQIt7oXl`X+_ z<=dmjW81^&$-3Z=c7D6+Caq)`)=zLyVx=U1ziB%+8EJ1^|te$L|;siah=ECn{t|MH`n1;ap(FwJ_CO7e1wesGbn z@pq0BDm_^~N|y5HI`%cju44kuG?Mi~SI=LRt9oooo3tcFzk*JOXHNWh>1FughNDul zsUob~h~4YmyS3rG?Uw1{ySlmme4;#~g}|y+N@$lWqNYB`m5GB*G|nSjo~Q)lE+yA? 
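The new `llama_stack/models/llama/datatypes.py` above brings the tool and sampling schemas (plus the SKU enums and family/multimodal helpers) into llama-stack itself. A short sketch of how these types compose, based on the definitions in the hunk; `brave_search` is assumed to be a valid `BuiltinTool` value (the enum is star-imported from llama_models), and the concrete values are illustrative:

from llama_stack.models.llama.datatypes import (
    CoreModelId,
    SamplingParams,
    ToolDefinition,
    ToolParamDefinition,
    TopPSamplingStrategy,
    is_multimodal,
    model_family,
)

# "type" is the discriminator field: top_p selects TopPSamplingStrategy.
params = SamplingParams(
    strategy=TopPSamplingStrategy(temperature=0.7, top_p=0.95),
    max_tokens=512,
)

# The field validator coerces matching strings to BuiltinTool; unknown names stay as plain strings.
search_tool = ToolDefinition(
    tool_name="brave_search",
    description="Web search",
    parameters={"query": ToolParamDefinition(param_type="str", description="The search query")},
)

print(params.model_dump())                             # pydantic v2 serialization
print(model_family(CoreModelId.llama3_2_3b_instruct))  # ModelFamily.llama3_2
print(is_multimodal(CoreModelId.llama3_2_11b_vision))  # True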
[... GIT binary patch data for llama_stack/models/llama/llama3/dog.jpg omitted ...]
diff --git a/llama_stack/models/llama/llama3/interface.py b/llama_stack/models/llama/llama3/interface.py new file mode 100644 index 000000000..bc42228a5 --- /dev/null +++ b/llama_stack/models/llama/llama3/interface.py @@ -0,0 +1,257 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. + +from pathlib import Path +from typing import List, Optional + +from llama_models.datatypes import ( + BuiltinTool, + RawMessage, + StopReason, + ToolCall, + ToolPromptFormat, +) +from llama_models.llama3.api.chat_format import ChatFormat +from llama_models.llama3.api.tokenizer import Tokenizer +from termcolor import colored + +from llama_stack.models.llama.datatypes import ToolDefinition + +from .
import template_data +from .prompt_templates import ( + BuiltinToolGenerator, + FunctionTagCustomToolGenerator, + JsonCustomToolGenerator, + SystemDefaultGenerator, + ToolResponseGenerator, +) + +THIS_DIR = Path(__file__).parent + + +class Template: + def __init__( + self, + role, + template_name, + data_provider=None, + notes=None, + ): + self.role = role + self.template_name = template_name + self.data_provider = data_provider or "" + self._notes = notes or "" + + @property + def notes(self): + default = "↵ represents newline" + notes = default + if self._notes: + notes += "\n" + notes += self._notes + return notes + + +TEMPLATES = [ + Template( + "user", + "user-default", + "user_default", + ), + Template( + "user", + "user-images", + "user_images", + ), + Template("user", "user-interleaved-images", "user_interleaved_images"), + Template( + "assistant", + "assistant-builtin-tool-call", + "assistant_builtin_tool_call", + "Notice <|python_tag|>", + ), + Template( + "assistant", + "assistant-custom-tool-call", + "assistant_custom_tool_call", + "Notice format", + ), + Template( + "assistant", + "assistant-default", + "assistant_default", + ), + Template( + "system", + "system-builtin-and-custom-tools", + "system_message_builtin_and_custom_tools", + ), + Template( + "system", + "system-builtin-tools-only", + "system_message_builtin_tools_only", + ), + Template( + "system", + "system-custom-tools-only", + "system_message_custom_tools_only", + ), + Template( + "system", + "system-default", + "system_default", + ), + Template( + "tool", + "tool-success", + "tool_success", + "Note ipython header and [stdout]", + ), + Template( + "tool", + "tool-failure", + "tool_failure", + "Note ipython header and [stderr]", + ), +] + + +class LLama31Interface: + def __init__(self, tool_prompt_format: ToolPromptFormat = ToolPromptFormat.json): + self.tokenizer = Tokenizer.get_instance() + self.formatter = ChatFormat(self.tokenizer) + self.tool_prompt_format = tool_prompt_format + + def get_tokens(self, messages: List[RawMessage]) -> List[int]: + model_input = self.formatter.encode_dialog_prompt( + messages, + self.tool_prompt_format, + ) + return model_input.tokens + + def tool_response_messages(self, *args, **kwargs): + template = ToolResponseGenerator().gen(*args, **kwargs) + return [ + RawMessage( + role="tool", + content=template.render(), + ) + ] + + def system_messages( + self, + builtin_tools: List[BuiltinTool], + custom_tools: List[ToolDefinition], + instruction: Optional[str] = None, + ) -> List[RawMessage]: + messages = [] + + default_gen = SystemDefaultGenerator() + default_template = default_gen.gen() + + sys_content = "" + + tool_template = None + if builtin_tools or custom_tools: + tool_gen = BuiltinToolGenerator() + tool_template = tool_gen.gen(builtin_tools + custom_tools) + + sys_content += tool_template.render() + sys_content += "\n" + + sys_content += default_template.render() + + if instruction: + sys_content += "\n\n" + sys_content += instruction + + sys_content += "\n" + messages.append(RawMessage(role="system", content=sys_content)) + + if custom_tools: + if self.tool_prompt_format == ToolPromptFormat.json: + tool_gen = JsonCustomToolGenerator() + elif self.tool_prompt_format == ToolPromptFormat.function_tag: + tool_gen = FunctionTagCustomToolGenerator() + else: + raise ValueError(f"Non supported ToolPromptFormat {self.tool_prompt_format}") + + custom_template = tool_gen.gen(custom_tools) + messages.append(RawMessage(role="user", content=custom_template.render())) + + return messages + + 
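`LLama31Interface` above wraps the llama_models tokenizer and chat format so that system, tool, and user messages can be assembled and tokenized consistently. A rough usage sketch, assuming the tokenizer bundled with llama_models is available locally and that `BuiltinTool.brave_search` exists in the imported enum:

from llama_models.datatypes import BuiltinTool, ToolPromptFormat

from llama_stack.models.llama.llama3.interface import LLama31Interface

interface = LLama31Interface(tool_prompt_format=ToolPromptFormat.json)

# System prompt advertising one builtin tool plus an extra instruction.
messages = interface.system_messages(
    builtin_tools=[BuiltinTool.brave_search],
    custom_tools=[],
    instruction="Answer concisely.",
)
messages += interface.user_message("What is the capital of France?")

tokens = interface.get_tokens(messages)  # token ids for the encoded dialog prompt
print(len(tokens))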
def assistant_response_messages( + self, + content: str, + stop_reason: StopReason, + tool_call: Optional[ToolCall] = None, + ) -> List[RawMessage]: + tool_calls = [] + if tool_call: + tool_calls.append(tool_call) + return [ + RawMessage( + role="assistant", + content=content, + tool_calls=tool_calls, + stop_reason=stop_reason, + ) + ] + + def user_message(self, content: str) -> List[RawMessage]: + return [RawMessage(role="user", content=content)] + + def display_message_as_tokens(self, message: RawMessage) -> None: + """Util to print tokenized string to shell""" + tokens = self.formatter.encode_message(message, self.tool_prompt_format) + on_colors = [ + "on_red", + "on_green", + "on_yellow", + "on_blue", + "on_magenta", + "on_cyan", + ] + for i, t in enumerate(tokens): + on_col = on_colors[i % len(on_colors)] + print(colored(self.tokenizer.decode([t]), "white", on_col), end="") + print("\n", end="") + + +def list_jinja_templates() -> List[Template]: + return TEMPLATES + + +def render_jinja_template(name: str, tool_prompt_format: ToolPromptFormat): + by_name = {t.template_name: t for t in TEMPLATES} + if name not in by_name: + raise ValueError(f"No template found for `{name}`") + + template = by_name[name] + interface = LLama31Interface(tool_prompt_format) + + data_func = getattr(template_data, template.data_provider) + if template.role == "system": + messages = interface.system_messages(**data_func()) + elif template.role == "tool": + messages = interface.tool_response_messages(**data_func()) + elif template.role == "assistant": + messages = interface.assistant_response_messages(**data_func()) + elif template.role == "user": + messages = interface.user_message(**data_func()) + + tokens = interface.get_tokens(messages) + special_tokens = list(interface.tokenizer.special_tokens.values()) + tokens = [(interface.tokenizer.decode([t]), t in special_tokens) for t in tokens] + return template, tokens diff --git a/llama_stack/models/llama/llama3/pasta.jpeg b/llama_stack/models/llama/llama3/pasta.jpeg new file mode 100644 index 0000000000000000000000000000000000000000..e8299321c3cdf913817d3a331803facced10e40b GIT binary patch literal 448611 zcmeEtWmp_t)8^n3+})kvI=BRP7$ms64z59hO$N7wAR#cgJHd5;1PD&B0E1g_2m!J@ z@4owefA+`j{@u6w>T{jbU0r?O)id4Ib*ld^{oMc%s4J@}1CWpa0Ho&y@OKj{+u6_0 zTY`_z)0fxA-pkg3*Urm>FUZE5Pk@)74)_<#Db4h&vzLj%#a^1p zSVW6o%UjXG*+n%3>|hY04YCVyvjf^Q$;vQD1xW;XczZbb*)RloxO@6a1W7af%ecgI z|DVl#Obq{0@pF@AGSSjyQ1k*jFbMMs^YZg31-S$WGCkV>+dE3=zf%6Uh3Anp)4!nx z1_ts53h{b@o%jTRKp-E#AfKQh&$9-PZ?LDIO%RW#FZ2Jh@XEp04(#IX=i=qb@Q+0s zTQ7eJuy?Ux;(KPq_isx7 zbpOjkG7SHb|9Id(9{7(3{^NoFc;G)C`2WuX{~cpFcs>VKfzJ`t-xYu&02%2Y`KLwx zM^I7zY0*$oQBcv*(9!?3VqjsSV_;#Rqhr3n#KQhZo_jbiuyOtw{A1*Q9*T^Hf`W#P zfsXMnmH(^eZy$gV6Agd{KtW;xAQK{?5F-5@0?#=T;wvR(6;(BL1CXJSvB_&wJ9`I5CubK|UqAnVz@Xre zn76TU@d=5L^o-1`?40)>az7WBz)H)?D=HhCnp;}i+B?1s3=R#Cz(>c%XJ+T-7Z#U( zEN^Zhws&^-_74uvFD|dHZ*K2?-~Yo4>6z#MlK(9)!e?H{|6Bu%e|RAw2R^STgs5nY z0_a5YdKfl7#7u%wm?R2mMGgH}%tHESq_*H`Y%&(%P1f^&sQruC{~599|0ibuMeM(M zEdy{+ke)9d3L!ui@Ywl^J`Ab9j0i9(P|#%lMdO{z)yi?JxPl^V&OavBep}(PTt*1Gb72k*2cJ zfOY8R-+6R@_4HYP)&Cs~!gfhhLGt|ti7iPAGSKUuQ4Y9=z{st4v)W#6VuP0^6knc? 
z=N3fRx`m8mo{`vH)@ay6IgAh1d)m06#QIY19$Mczy|4x;3`n?2LXBElqda5K$SJpK zUQT~nF7c6@vrUN>ICcBw{eFKsx%#)e7Nubb!_(Kd;j8p8#)d@gRgmvpG^QfCKT5-d ze#Wq9M_&$bw~Ty+c-R;HtZG-d6Lzf}3dTZld#Lx0D$Q-R%foBz3hO~x)2}_} z(d9h5K1(Q8LeMQP7tfH?@k42MKVOSid7Qc`?(?r(KKhLUBK+Z&LER{1Vho{%0$Zc2 zX}c)b?#3B-j9ITQhV$LH8s}--7XopZtwVE@EgVS|zCB7GLH`2YLIu5y7&49wy~0!^ z8R?vM-44#gA@j_^lwGX?Mh%M=ce$kDJ&tm><>W*4+eANPEwlQy>Lw>aVwS(Fj3nNs zKi>3Tq4%_L<*Vr!6qF)y+yF9fJIpTqMy?R=iND&E-Oq{#x#Q>71h`S_#i#Sb!OnnZB@?Qm$L&!M|t zh+)*1ggUquV>Gvkq%YI*BL|X$>txUIA;%u8#p>ulfKmJ~9~DogZ$(a|II!+Rm;brt zvFizGcmE|4L%~vA>O@odX641uBXj%(c@k`bwc#WMp~2}euv%fceN&bF#5gmL zT!(QM(Xb`$6QAQI6#Zk=^weYPoUV}k1JB3D)y>T5m?7Jv&`D7hBW}Kd?}hAwl7Ak( zbY8iTdA;Y=5wArVp%ZK|5K?4hjpN5B%_u+2JvsjANFBF13Rhn|D;xMz<1_Es?}2~? z8V)YNc@0|Q=e$WCOV*~u7sFh@C|rF@^g~E{%N=7+98)fAn0@TfJR~a}#Xq9`zL-u8 z3NW8GFJP_4aA6~9W(S5kkY%IRZRb)O?VEtlZW;5+SMkO8;7qP6I*MLO`+kTTe9_sYR7 zf3t=wlVqmN@uh(B<-LkKyye|c)dKxMq!As{121>T^Cv#yx0UPovztwgCSUP@zmr@K za|qzBe~5wf`Qdqj;;bq*N9Ld&fs503)*E7V>N9s0 zVbguH$}1rc-Itc!;Rx>bQx@gS`YFYQUfwC#bKP3H^wgKJ#>${vE+gqhwY!2=6Qi^k zy7yOAIe!6EJ>Q1251vz8V(Pyi!wmgKDPK1wXnwLB^5}*!vM$`|J<7H& zKjyvx*?hCfkS*EZZALGKN_9Ouyp`_j3IG0m{OpSJHS_+Pc&dl=EVqgm4ncc)yWMVs zf#+&oyTp_0zR&LmlINMC2V@5k&*9&i+oJHB1X2Fs_%AQub*6u+KT;~hl6;h9LEzCB z@OlieH!iKevPS#JPS+wOmMR+I=?fRxT~-+~fxRU-RP>XqvV;!@^xuByL%s0Ly$=Cy z^SKqibzEgM%FN4^T%u7MsK;n=HmgtYE@p@9H1`rHdK>TQ;)g9f<(JDdyrh>pw~~*A z54hgS5L0FueLAsIXTkB_NP%e^P%wS)k`Rg{m-4Sl#Fp5mA0$0C$m8b0CxVG#RB{D# zVW9<2x%vehwG$iT043wq^*l|Y(Ttk}oNE~<;B1)+)XDSCavL;Tc?>i+h?znzGa>^5 zkA18PX5A1-V?Cn%q2epjbDXR9>LPXIJ*D}^?3Nr`g7WDWCU6+f1_mXx((^EscU!Tpd=d5v(#9ELHcu8 zXzm8n#Z3#kwgXkAp**Tvon4NkcLoeI^{sm?U7Uq+)4g%lvw%QW{cA$X^|ObUpRFA7 zizaDB9mT!WHzQ+^dC#>i&Y2A00k^M1Td$&1_ogs_Z^s*c?{srO}@%Xr-xm zLJN)MlOd~+@i)U=HtjL?JTUv@*QCXCvU2U$){<>L-t#JXC#`D=i=oddO6SWz6Fvjk zYLLt&)6s+QSgv@llK%i`U)UqVo-($$WYFUhFg%nP#eSY^z6*}(GJ-sgrvkXo68tK% zy@bd1VaPp2UGKTgQi75E_kPiT@J^2wcsA~BOTv0{qAL;*^dmL%H;FtY;!g+Owx8k+ zE?8Yo0V~1$Yx+9)rTaQ*8r(BOai)eD}%h@XrI_i#{U2dZ)CjCEhU-&PwtBJ`d5J5 z+h6IIEoZ4)JTf66mDGcu#=OedDbi=BLls968?DDOARG_YkRlQ?KUz}CMho=kiWlYp zt#Qf?XwB@?xZ9W&{_cJ13yAii8U1S5W&ri7*Fl|#<0IajCCaU1XH%NUWsvzu!0E+W zAOt6z)mu@uc_XJ4ZAL)@n$50h=xb7ImdO2fnsC`DdCf!>WX?_wDk9P%1AuZVi*ox& z3C&>`r0jcEvuL#xjRNpUzY3c+Qf}8CxIAQ3n}EeYN{j>iswuq2ZK}r|jZ%st z?Us?$oR7w)RovW>pIWJGrB@CCHDG+BB9#5qGIU^r018%e#~-Cf3Cd-;KGg4*2b$)T zu2Rtqku$iTPQH~HLR7Fl0H??YI47swsN$5iR~>qaBH1#P)rA)kp$nWGQZ$9}f-#zk zShqe?jE~BqGPGYYVS(2opEP1-qhd*v1~@ze`P63SBtopkbL&wp&CS)ow6wQrBxV~` zMr+f)7k9z5f6R&9Z#UY(48qSHsa(u<2msCw6!(m+bSV+kC&-(VTQ4D$D#2)8bA40Ax{) zza@L4XpjZLhaXCcZ59Bed3n#MuNx7XQ|8k{*r9^5*zlw9ddUt&sK=;UiM|!tu*kL; z9e!H&+no|D{m?%O&$!WIOo1QiToTS9PwgG60Oio}d!L0+Fv8j{bKPq)@8M0Td^NDo z`?c;8=rFK9EmXPi^jmj%f0(XGXY}s<<7nk_cF^!^?}awoPx?CWdy2-t_*Vl07TJEd zuYJDo>_;S^KZRso_--Oe2nTLOd6=FeqPJ(GM<;q+9uX(Pt8jUQXV_Dq_*ZHU1OyY; zBE6Dt4_n56)a3esQOBUgyDE#2dy4Xx5>e*0&c~ECv`3ri-wR*>e1QEHw=8}d+9Cqb zpT%p?bnOxn1|tLTt$SS-CtQ?M_*b__IHgi4+~jx}Q&zF&FYxBnsr}0IvBxai1Oc{9c)_*-a6VQL8@(yO<^n`a0H1fWv|ZVO;~*L2VIbi;?eE7FeyJ#@rlnSsKTPV}(f*0oxd@aafvkX*o@i zhQrmZK2%lKE@Fct81I_m{6XRv;w;MfJx&YpDpcftn~`Mm#?Uy0Yv@iLdX?tSe%wv=hBqqx;PVsW$01b4x#X?#~~@|9^xIj$2_@jdLWRTS~* zRjlRI=a3MNNjV47yy;fWD^yM~Z^+&_IuzpW?0Y|lbqjm8P)>3>*HdYx$9M)U#z4k# z*1WsGi>suHWG#+9mF#{LQ#2!gTydKIjqu)%HI@ zBhsu71K2(t;2ir`LmV+f23x2cii%B2IKUg1j@Yl>D8@A6u6}zua>O>Gv?I~HFK21A z#Rvzcaa*>YAGKJI?z7H%R&|!QXc!@JpI=JbvefOKAPlN~Ypxi$y&}#ExH`0!s6(Yk z5@lghG3iw9?c+ryV*~3~uJ6E7=NRpZ<-AGabhJnP;K1oz8J;p2YA$kTRs#)Ir9_?e zA9ryKCAVWI(A5iz`)I()yzLrG&umr`{6oH+5{yoI9M|EQ{{SB0s8y9X z?tYV(@by;(CVDmgw{D4*V10#R>Kg28lDQmq?OcfQ4dko7Nl5q2HceV)>mkDqO@4Wb 
z&+!?otrem56f)X$BdNJAwhLqmn$uQtdj3_*iI}!`k?GG`p5E}Ie9z4o9IG2vr!z*? zy_6$lcUBspO}zjKvs?hX3OX7Cf3~4!R9;zy!_<=m1$H_bsUUv-BISgHiJ;grec`ak!&fIp- zTHvja*OVNj+33;2Q;V`a4ZM6-Bj#KVy!uu)r|}ZqQH*y3J^8OM@h6QAnH#^Iv<1&K z!skgnV8-l`p60y|!QL}hjCrmzo(_7~yTbSyHL(kw5stvRH1tQVTU;O>y#8+&Jkk)7QnJx(mMHXSQ)s{o5UQULz6e`bSTh(1Dh_a!_BFqRY^|apZNl+hbMbS-alMp? z5pq=Hk81bpV5s{$MIWBf;(T@|7@XwwJ_hkGhHNzLV4+>OZ|>Kqd?xVjn(GJdRF>`s zO5^@G=~{-Lt25fgz;no2+xRWw*&w!!W;s35`;woKR;nk$T%+}+MpDk7JmyKa-bS2ku0F0h1v$fYO?;{GVt&z_a z^_WV~s`*dI^9tC^EnB9~BJo$lD{>6cz%%{rmE;~Z@PcTU4={yCBgx-98H`2UKsDgg)2a5b}KE+kQ*Klz^ zqqtKKhR!I)tXb{eH}Omwj7JPMlffLquN}n$|Jo=i9Y0A9Orbe1AkiqQIMHNBlKYK-l) zKZsG-uwcZMJYu?=4~u0`0W)VjbCF*^&#t_0k1*ptwJcsMxJ25s5C@^JNBbUWK3078 zKN*CZXVV%-#$csPmn;T;U=BF0%RujjmOAosZ+qUemrO-`eoE z3;-D5n#cP$8A)C&`AlXP1&O|h`?tdX02mf#1Wt@Q4A-XkKgVMBBqCwR9Adveyc6+; zXv}_P;X8V^?rYq>9eit0i5uSI80U3#;qyF37F@Lt&$I3rqd2)mA7bfx>b5fdx>rZ1 zYAW%Lhd#C7ehv7oYk3NfwkhCz#Mh%}UN1>iGd}IzFnU+l<+;jjYJNeRaP;AKW6@%e zFv$l8sH+K%6@4q8()Fa9X!GlK+TkT2s2wDVESUY_kEJKsc7cIf4GNaeuc)M!+!X;vK9zAD z?otstHb^yMJm4L*F}4htBcEDSwBwWX#W!S1A!!ITByu@?=ZtYpYgYp~6!?{i04Fp= zPC6E%n*b6j-MKRK7^+c1a5pD zgDe1d42qRwQpc`pvQ7(tGgGU%Jc_pT5=@wFJXDH^-!UEOu^=m){uK2>ILA4qaXkl! zm~=h63YB9707v7BSdo};-Ku!uf8wcRra}{N&J8ri6|zU;R~98d#gA%fw^>frIQrDx znkRAy@<lUs)>W)`>LXl9I)ZmbS-m9IDP&%!%RN(M%?+DSZxuXB#> zDS!m44*9COwu@_Z8C6h89As9|pJOS~>U<^fqxNFcHM?mcxYD7KN|BPB*TG-4Py7>a zOxC{Bf1+uDB*7}^m|$!6v&7#H?Qa!;wo{SQ733cfeg)fKd9wY85WwVZ&1mMNtj-Bx z?JLJde>dM3zh|F_KL~XfgH_VxlGryBs*(Wrt{6E%ji=v={n!1V{{Ujo4|u-Xc&>Cc zcPdHDLm02lkJ^X+3CFB>QhSYK!J2c!$#1jcU419j?nwTP} z`=Xjg-v0oV9&(B!vkh9QCC#IdQc16EJg_Ao+@-M_zrQ!V^O>4XtpO3Wpliq zX<7Dzm;7o4c0uyDdS5Zv5ZJ8KcN3RFP2}`!kF6h-M6!e!1MsQNcI0BUO|}v95nSgP z`qQ0(87x0KV-3S3(^@h|-u#UxDEBo(+Tnu{xNs;J5^&5%wJ^k_^GG8Kq}DQ%zKot! zOC7;d6aYqhRh4-dZY#m6*2;tu!12$DM#>`xI5?v!K?P4W984Dq ze=O4FUq00?EfFaON~bP3^%TvkoRT=F5E*uWdGx17xjPhe9OTrwG_(@26oNETRq%Gw zX<9Ysh;I{3v{Kv8EC))lVWnQ{x5=mJ@a}dvIP5Fv{{Vo004?kU7q_2V-Wj17(E+ z?(MHp(DX}dSilGevF}|Kag^}z_gUxu3A(`=Z8kAk7sqEo$X-ig)y6XA_C-;Ma&{l2MuV%T+ zV~&}w6Hxet5|XA|uW&n774O6_0oyh(dS<={8^+B$#h*il$r9Z3tDSnm#}b3|HJf$f z7*UGvQC?kn@fI)+W;}H@M&sf&xnZ_{sV2UDzZ+JPvFlUF;Rd4)h6o9ahlaAHr;<&A~@g9vFk*8*kivIv!NXcEpAoI;ri^OpG zg=8b1`K~8Z@g%aAF`nJ)EB+EWW0W(tJ;|?`sm1t+;u5}xuCg1`y^mzlwcBuj2Omt< zyc+xpyP8mJnz8wYWu9aiz`Z!S3C?>M58MnoBsd_6a<6j z=Z^Hz;uzh+d4TiSSDl{_v0bq<5_(rb;G0{$Qu!xkV4(LrSF?uC@p!C6X+v9_l`^_m zdR*2$8$h&c)DkerJoK)+N77M1!jDYisCZ*aF{li@ZS7rUm9%i6RZ-7s{d*>%Ds(sS_#C{+au&`M`$6VK$c-!I{TE`|B5;MqZ&iIM(5<7L3)@)-rY}bhRqsJH8wxH45 zOwFD$qk=2&ZZ_lS)0?A3-Muv2{VRs}UR5nprpVs$AH}4c!L{Wt96q} zY<-?_z;rP9MJrvOdE5L`H;b{Z2pH~b7sTEpH_@`eCQtLP4e+nUwY-EGF4d209@W-Z zc;edfC0Onhl5xqesmv=w8-?Y4QTgZgTENy#D_FaIZ8g=)NbCGu(#2u1Pf|Tcdb?vh zS5cW2)ILWir9SpZAde-$JYeR(GQ@CB8noq7O6{JMGuIE@qO?*XAUGgr7&O0XjX?pp z9dlS3=ZN97c2PJ}jybGrZx}4od17FDo@*Q*4Po&0lx%lH8Ctw2M!Jr6mp{{Rv)LkW}21P?=AOYsw5Hhy3;4Wt|bd)GE&N*Q{+k9#M>eQs$q zH(v4dS2LJpoP5KlrDxe{qUK>Dzg`DDYss~r9L@IjR*+y5_iIPP{xen)Zn`HYEV=1i zd44vYCs`k$z905fE|){y^i(D#B#a)oHCM#9DQ1M1*oU)xoW{R{ApaC7h2RxG1(CD8O zP0Z>Z;ktDvt$iLB7l@pcXQAa{GFniD(e)OE;z(zZvdNq;Kx?bfo9&xOa&y|eGvVio zmgpm+l1Tx+mF=Djybg-{@y8kMU#aAjm0C*Y=2`wL>aE5nq4;M*dE;58APxpib~ZXf zM;eH`@zS}E4r(#S09GL9t#%qk>{DVenLP##eU&Ud7{^od%K45Wb8YM)@R7YfUKM%f zyB`94Beo8cn3T5!ot)OUfV?U%U1gRWh65QD>%J85t+lLRG<<=_6{Ilq`L5CPIhK1? 
zDbtG=z@87Y({1Czjl;exytI}JQ~_9UI@T2J9-VFqfs9rbsp8`cm0a`NHQ17Z@;)Y{ zCy1-ecK5o(D7!?%-`==?8nSAUyjyde^HJGrCi${tI47rCz2w7inLJZdQ&uXRULJQm zH^!d_ZFRecGTSN6dZlq~@I|!cX>|AuSUAC2`hIN|+F3U<1A)+0JC6)n&I>B9PvWmC zrcsBjQ<)z_M~QK?Y5O$vK3vr1)_fwMzV=i&C!PgK;jh{b)Ljc}yRd~$56mmsJbCca z-%_)e5RO6Z*1kafp#BMXt3yx7x8d@=Zs9d!cuc}P z?mC}$!CY^N;ri)KzFx=VH-@}zs7rFH3lsF_y<5TFE7gK5#@1yR z=T_4+LoK;-WQ~`nJ*(6H9Q+{CriLMPrmSuT3~qx0zdFTmT=8;te60S0qvDPZwKdAF zc`c`nZ`D*ZPNyA8^{Dl&OHsbIX(fpKs(On0V0;eIn%t~fapE^r%nNEKJQ8nh%hdAP`>zXd3qF>9W>Nd?3v63AdsI04tc~J|pl0B>FAx-F- zK4!jg?Hg(666(&cglvxIH2Zx~$vfKzwNVO8qjW_34z;DGXm^)@xMNv-MR&LW3Ff|oEzO#04E)Q8xO>@N zQCc4RZz@j?Rk~Jy!BE8GisQUJ;uM+{$0YmLOK+&g$`y`t-!=C5OiU*Wv+|r?2CW*7 z%R=$PbtCbmSk=4KFv_mS=Ex(8u&6j-dsk9v?0M3r*Fo?xTw<&tW0DWyP(~LZVEz?t zA?Q!@s!5pHivsQ>k~pWNSr`NBOA@Mail zS`jdFky<2$dk`B&&Psk1M`%>%9lBMp6!J|-ktTNI>s1>EF;y+e^{JwOZsgUJwnDr& z;ZI9Z*-bfJ>_gOp$_J0>NFpV+sLfqVQae*j8($z0Dk7w$S|L~&8BQ@*6>tW4;-Yjv zE)GpgvTiUdTQNJ6p~B!EDe2Ul)pmIg%Zyak>e%^>8X|03gOQwKtj46A=YdtAU>g}V zW+(Zvp0u5U$gq*C4nXFg_Hc8U!Stph4ZLQoAzXE#BrTZ_*@i#^ntIw`6TYVq2O+m& zp0O&g3=cJ;wTs+^Exue6$9h9*LCQG%s=r&gNIAt|TL1dAs75O6@ zi24eIXx>v3x#yU4to<@##dHm za;AA_i~bf{-oq<-3!SHeIj;ryt@}CYy0xUX)7Y$mff!`3v***`yNredBz3N1#9jxq zyNAkX0o;1xuaxGUBk|+*s{a6jb?Ck(isE}+CN*t`YXs*t`Q7oq_D=E7!b{0o8!08V zLUWkT;4At`@%O>2EkfQ%e#96Po}DY^AKDxCIMKXUZ6&StjF}*i#xX*n8jn*2?P<&1 ze;gyC6qhVl9)6=DW>-$lE!87ml+pDcxK+%~=Y^jNHfnOSU{{Z2a zi#{6aQfa#WmnsEpM*29fEop9kn8z^M0X*ZiI7@?yf-kxW9lFz; z>;;tM*My^aGwK|dGbW5~<#WehYIu~gj(gN_6%k;uRwIx*(xKx8Nc?HZsG~jD=UwfP z(VPJ)wy;6pZ`#A|1VrXBOMBWYL5zmE`)=-eBGHH$8!>iMSBN)q@|-tgH3* zqN7n(E*Ea(&AD)WsUs`6?rdkKXm%%q_;XD;>@!^V)V4jDLk0{p-lYlhhF0&-HC`>o zZH*spIjaL0!yF%KZev;s!x^ODayrtsF~u7}9cvbodY5#NMhiAszLg*hax?f;O1N$X zPKuxoDzrg+jDRrCYEc`j?hVE`rVa_?IPFN2mU4t+(vE9{RoF{zD>>A&3BhzN1}eUu^n!migecf!)84B|egOqRB%al|@b5&EUDnaO zvf&&Oxa(QDM|MhsmWcIl+27%mB2lVe+wYQ5#z#u}ftu+emN?@cU>P2@ z&sf~YZa~}Dsr0Bd&lHPDNJr#x*1j7y;wo5ba&mf}<`*qs)rP@nSe#!Mfw!x>tyJpW_#myEjUti|h8sqG8?Ag`g}E0YJ=MC`Bdz>viU8YhLlM|NTKSJ!_}g_Wl}RN& zm?pWa{{V^_#pH%KqnPu-&3e$_{7p$-cP3cMPM?v*4l*u? 
zZsfNWWqv9|xITZ)>zd*h;kK`AxpiWF38?4rjs2kpS3LBu6Cc5NicxCEwS&YtG)xl3N(-n)rNQ3}RtY(ms}q zRYaNXvV2Cik`l=MxCHg8a(K|(5w(ENMJBvkOz|F}cQ=*hkUElDy%XW)g?p|_o1ZJ( z`qo%}5yfJvD8U_7>cn1UJ+-Boazcg{cH2w`!mF2FHVcx`Yf+1 zh8{{&IByZ#>N@<)u3MAxm1Bxe4}3gTm&}?xlh(UPbXY@e5f8$&^xZNzhSA*g#eAMy z!Za)5m25)1kE*MhPK6#wXo$Q!;kd0)6-L~8cdnY#O^tc^c6(m7%MM8E zUs;F4qtZL4x;B)fy6|t;qYbp{S>T8ej9vh2x$DdI!W` z5n0_Kl4cnLr=@uhigg>kR^;iLOb;WL0YC?w*G3;Lqc!eC`Y7Y7N|ma;4^i-U#_zBs zaJ}KrsXQ9=Zw-8U-0?)W0}S8ZuQT`;`z&~e#8)xj>m|fw0+En$UcZ0)KIvL6p3QBh zc})`KNF7M;T{FnBJg!IPS^jaAePz>wyFE+6-xclOPxBt*Jq|NkTGz!J1-A^-f^o;q zUnh8@;!dI9TgTHr(oWDwAn{lee1E&VQTB_zJDu3Yc{Tl|6yam%FxU@mTb?gRvG4vn z_`PhUoKM$|9;y64@!IOw=*b}?I5{=*KZt%XTU@@$Zp4gR0|vbt;ID}#y0}wl;I7fS zylNC{PZxH5{u=>D4}!bg`#Zs2Aht_XaNvw*t#99Wr5Y^62P3X)%KR01GFisQJ3`>| zTH3w+*|vEhCkirtmG-pjPK_URA3gObRK#45$G;Qfw6$>VqX!>*p|795EBsltxK;^r zR~ZsEc&B`N@m<}OjPD!BPb9LQg1$}h_r?2=@NcoKgTd`|I|r-w9DW|nsWursuJ z*XnQVulpoJ;r{>$S}o;@Jh&U2rF(oe!q~i4??fDP#d&WG;S^iEpF_f)FA!-~zI3rR z2;oTSUQ6+h;!&|llH%xs7X@>SSLi>*Pk`1|8YIq^YP(}NA#3vM_V2R0@UF5F*sw`t zWyUdHu5-iH@To;hMtpY(@m3dzYx3I79y#Ox02==QYG1t3$+rIMi5Ynx%XI%!j&0W$fUj`oW5FP z{?R9zaq(-!F&)lLY}tGS@vZwbzh-q%I6F;c_@m&*j5H{kWkV)9s8e32_9K*aiRRVf z8VaW(kdS!I#tYpJ8>VZu@HdQQp6ygNLtqYhuNhknMj3=RF*3gHgc|hkgc>c)#q?4T z#H)bW2a4jBRZ10Akr=$;X+%b(`%sh->`qzkj9`NPr z#k|GxU=`R7Yv_N6J`BFRSgqiJ*ta=ludBdw(x+&n@_gmSy+MBiSRWlPQ61D#dqFZBni54is5f>Cuz$w=LdjIYxt8}j&@*;)b-Cw#fA}bOPrIQ zc&-Uncd}=%hr-G*=8`n@T|uG%yL|;|j}k=^w&S!8`L07%zn4u=$h|t?R}14$iSp?7 z@-Z@Q2Y@S&5tw~hYI-W`K_DE`kHdtaNYJ7BH9JA&4~!+k1W4jApE4~4*H zCyMtk6KK)ied};XbBgod5b3aK(~=k{&T(4eDpH2AI&c^#U0Jie@SW^87V*HxIbH^9 zYWKnRI;F8$VO~c$6~XuyUWph7=sFDYYtwX{R?^}%3`Pk1q}9->s>f4{Fu=})?G$;Z zjDKgX4_8AYSt|k$@YkDazq1vEtcF`lPb}w>7QUzRZKPFSC>)O6s;%COXLL{Q!9JL- zhZ8R?VeIrZLmfqP`l7jqGyM&GCNKHqXJ2+oSehi^BdheREFJ%weAr0KI+d z!SvsRlV87)_EJe4k}LHK_TunWmaTJj9mT4UPC>^t^H+hs7Q!x`Ni6{ce;BWpt430; z(fV!&IjK_uoSE{{{?BsT+?k|9x40`?c7F=ZLA0vGO%hQ6)U`~hONE*iomAYgp8 z<=!gOENo?vs2E-fqXXH&W4^ML+2S4~@H`gw?I?)1d}F0~$BFc$vUV!WG1!Xt z9~WQRMQq^?dB)RTIpdp%?k-s(PQLvsjXBNO^=jf}4HG#DCo#!_bNJMd-OkbtqZsFd zOp{Ezc?%Ta^r`Ihi;Q4`PtvBOW6Pz@lgBV1_{MGeG^0t^vfJ&2^R^?BlZbtx>ui{{mz zqMicr-`XwFq?8l1XP&j_`cH@>xQIH)a((OLUxnWlOpz#x>GFCOJuBS&5#u}Qt$}$- z@6CFcEY@jid=6>D*KOLzxLIm(F$2x#zD;Ub-vf>3@vkt`d|am+iQ}jhxu*ELdk@~^ zdJ|s$A33LcHheZWAg>$pqtIfxR>s*o)#P<#1T}IRcZgB;oP(3=TUI)wNN~rD^#;AF zINEgbfW&;Q+Ow^$-Q*!gan2gA+sa>YDV{qT?WbBTYH`mA zRwbp2aQoS(eVC94T%6Ob!7ICH6=pd0fX9=ba%*_CC85bEIF)|R#0BJYRuTsX@TmlQ zoDeZn`E8$;wv2Q$BaSIH<8OMi9k?ASQX&@^H6SmJ1zK@Ptt>am$>;e~MYcZZsbk-a zvedChk&)J=$ewEzjkXR*9cj*`M-^?M^-31suo?Q)IWW0I#0zf68L5>y9DWsN?4^&& z2jxykik+>&q@Bdwh$CICk~phT#g*NVDN+_Ba6c-QVv$b+kEINov9YIZ-;GThV4R%N zDmWRysbWBSj2dy&YgSl*sXLu~;CfZ7dA6wI6xGX9Htg>h zu4;%Tx2=p3z-cW0lVy)$!l$ zY5xEO{qTQ{tfr2A2{KgqP8S3k{ZQ4sA#W-j@PZC|SDpCd;YF^a3r8iIx;975Gm6p; zDPBhoB9-Hz`6&{TqnuD%1J|D7oH#`UU}Bn9w^LGuU^qBE%|#>|4penDgtQ&SPFP@YD$+~8 z7+ig7D54B7Dr8U&HsE)xr0!gF0zn*7w_mTdN@I&B2x&}U@C8M#%~WoPytQB$DFk<< z-ayFDUO=g^$}j;WcBowM&KDk|6zrJ!Nn~IXceNrik&Rh31T80Xf!n*RXA zmm1aMLXw@tFZY1(0cW3RKim&1S010@9 z#S)ojSK3Dz9jmhVUGRDJ+0x?ZjP4w5;EMVqz<&g7Eh1ROq92#8-qrmnm1LNF4JD!Q zRUGTi?D^MD_zQP;5*C1g>x$aa{{Uyuw<{F3E>1e*iu#5RhPHM{5JE6I@mDPT6BGQg zLxK)K#e1=(MtWOPqk^23&m#DB@JhnSF6&h3fHqgF>Awx_?QK>wWB|vdbs7(X*4>8W z>U}%cUuofmWK-pljyWFH)d)wIl`W0qN))WyJabv_Tu~NJBezu*%WB>hf;gLE3=i)L z_icYbh8W$Yj4lc3TsE(y+FUC}h&uH(^Vy~k+UaBHa@od7HL>EF{)Pm5NE{w8E6)61 z5?ZUS(0J}TSJ3+Z0D^?k?v5~X?_Ou)&xefc>QEf1;=TtTlvGakKHDyIWh*v#uAkxy zOMB5Agaf+-cCU5#8?D855-VuKD!9%o$F)y~LL0r&e|wb`>z@QHqtR_u7BF9U&T2A< zI#`sPPfItz;ob?(D^uzJ00&E$B9cLYjs`1#R=>BmwcJ<^+}8=9cz`9kDo7UqeJP$Q 
z@r}fGXKz0B_So96h7r3To>R-}&a&mwa*B%B^A`Qsa$<8ZmOr%n#X>exOSjxz|8KG@U#D=CO1 zH)>mtl+^bgIG0cxWw|*1?rY^g1$DrNTEAPe$?nPG9t)IzLjApK+3>!Om9<}yZOtQO* zay}BRcqZ&I5FP@N?^dHvERDp}4=0r$&)3qc&8oa=c^<^q(P40kjNRLkPgY{W?F0PkE)=ZXH$7WEnJTxX5GB?|?1^Kg4@J*%R=X$(5l>BD4?5k5;E zxqb06D@h4!mG-P#?~3retQzDGOd9#;QTVkz)xkxTl}`bOTC=0;TCSqMX?Y15@7}p+ z=NODstxX>C;hHol-l6R}kHmvKB#Xu4r##m$;$MjHTSR1<;~5zFk6PkybvwwTVv2Yj z2T@*K@k`*(9CtUyUU_%-M;>LuLBG9ymlhiKIvgF*c=}- zXJc=oYaThiTWBM9+;Lw)d;|TFAi45ywM{2s-Ht=S9{?;giwIFH_KlzrquRO8i~j%* zq0{u)#8DjVJmcEFtF`dd7M2S8Kp=Fl75$?0sSc5H_PHM+>0aerJb4-T-WxH**Tvpa z-Jg^nwic@wg)iooLWs&wJE9u(I#O=^2fZOxWD zKwQ_1e$PL%HQuG-_%F5F2lC@!%B-B`z6TeFqdaFSKWfV{FvDO~M)gMb!ViPr5PWoy z>Nmc0USkcRvtM5P1pS@-S)@F2U#kFA1;DRN{h$5-S+$k0OF*hHaMArM>VE@#6oT5> zHp09wWv^Bl4j++CEl&>>#`u_3bk@h1cr)RB#kHhjWgwoQ*F|^mneQ##V^+%!fY-KZ z-wL3GQKA{Z^sS3uhd>?PO7!cB(m7lc<*~r{uCkTWJ}3CM`zz1lOJ;#!U}OXpLEGNG zbk@J%nFm_dB$njPpteFF;=fuZ_-$+{(6 zFn%Ovm|8rP=VRfI*+c#b%i)iQUMGuPI$?Q$;z>EL&|d(2KDEs;STnRbL4B zGSbTA9jlN#iuA2A*(0^uxcj_U)#Z39SSmb_N9X)g#5mmR5zk&to;~ra!-*b|IB0nC z&uaXy{j+`pP4OGWjR=96Bb1yd$6Eb%__<`)I&{#USfRo7ua3Sbd^U#b#0_%x=kImf z-n>30p*%%6+hlzwhuNdSR&u%U_rrhKj?FgxqfL-5K2x-EYt{4**#gH;g&u7`jksKK z`PbCm2Jj7}c4`D6&NBMB$rhZTWJM1au|<#?YY-wON%u}RX@+29-=7mDq|WwEIF=8r2d;;&^MMI+|_0EXWHrL&SNf3qH& zSgQ4}r+yCn5RTKxnrlYL0AYtp^dATO64UIi65*f=gTO1&y+=Usd=_XGBL^KguT^0q z30WiJGfpOzT70l&9|`z28wgM@8%IDo*H;7)TOc^bD2@fQh=H|n?O8hJzP3(JT8z{oL<_2zy%_=1ruZIlCp$)|X`#WT*u6^|YBUSs14E^b+s zWjwDNtHs4*D@{H_>GGVS5t@}#LO&dRrtk%uc_)w$0=o?Y`EKp-TmnaG@P!(srmDVB z!t^+=Uif%kXpy9RW?D5 zu{q*!b$0%DB3YbKK)~>uBOkvJ8>dyj$YD=GJywL_3=x@m|~FDOTba z2>4;ZTH&w!4gIlq=E@ba-3a2atxi=Hk8>);(ZkD{XO#Fx>sm>&$lzyyMS32Qt?H3% z;l0gtx?jTR6@d|=<2-GxC-5{@_T?2Zj^K5xqe7)P?sa2xOf`AgM5m$no_5EWdbSR0 zq|>#K{Id*tfn2TrmYQe`%(xzvm3iX$G|1aG8%LluPY|hYc-3%`#5=_G3td({z$>mh zl50-V{s}%%<382%wxjXR(@&qv`%;~!l3Nwk{6F}^X|GC(E;&6gYg|4)m04LcpA(Ye zDN*)~XnQj#!D6Eap~Z3k01;wIG|3CE9E#r3^?`S8HXd=D)-Q}>Ynvxm8B%&zqVG8B zeDqq4OC#Xl+S}pH-o0~iHK1b3N#Iw3>0SlC(?X*{#jtV@@Ymh{02RCw9;q~lBN$uUlsgk6JE=0 zXz9D2hQ7=A#o+ttblZql9E@PD+W4R1pMfv*uM^59ft48Lax28g)Qn`A_qj6XjpTX8 zvo@iB7VNPA^NQy#d@R>aWRf-{oB@jTsB{>E1tE@o2Nh*JHoj&=E7K$EUTm=qd?(UF z3g+(5EVl4bc`{=lj@xTfP4E=1!Y{Lz+;TFt=+=H4S!W_hQ~}O?Yta50d_1w#CttKg z9D)cK=~+f{t5VlGrG!+K+2cBQ!b_>t0N_9$zqqz+5%+fVt*txYwxDE9macsQuS)u} zL-;kP-&+3i$7W9jM_THc<8N~CPfaJY(e<8;pm_)qY2SG9PF zhCz}J3fI2)U&1=fHu0$^BZ3O!iuFGT{1UfFm&vvA!2kePoY!an01DF9%D|$aEQ{2y zTJo!9@}*AYD%9~cd0FEo##>TV#(F6QtnzHyEXe9!Rr z$Cj76(#IsBKqG?H^k;zeTW7XL5{zTkzLSTu->k;hK1(^_6^EUiPS(=Xaliy)zpY9p zj@Kl)9MqN?g{y$7<2deXJH)q~!|=h9y8B zl~`MC!ytjhSGAR-Ipd{Vwo1>M=lown8sXywPMPkiifn0!2M7_K%yIks)9~ zU}`}KIporkL2DuviF1HyL>uJX7wcAIw?opTTW*JM)~C?OtS!cK(wrdNdYZJh>;2kt z+l>9ns{&Umc~}@Cp0>&Nh$rb*R@uR&Ffiw6{3u#%TDl;LGL3=wRI#Ba0QRXOwq{+$ zaZ)9#D99A<#+{i1D-K8FPJnLeKD9f*#AFdm3KjdnRVQ-YtUfqSeqL$&Ksh3o5CJ6h z_Ng}Hf)75JqeNug(Bva-JJW##f<|c-NZ?|niZv^i1M;Cem|s#UBf@S6p{51gaJj2y z6?MflTL5?z($us=8W3@l_|%?SxA9bh05D2-r>skYe80w<(_&WTZzQR}=hCcOM#XYZ zewd&sFMzyKl43u0t~a)&G+K%9as^n1Ky%v_o|DSi<2=-{NZm8dJCSycC~g^>YxM6| z?dObPaoh5&m>rMI4QF{cY8;}4pO_er(5QpEr`JbKeD=lM^}4LxTv zBbMjoHLkK&B2h=-fNzrGBRL8{ zzX!>K_aPWiiEjdqBP8;fsRKM=q21qW&<56ISxnwbRB7f zuQe`KHMO-Y?%5##=bqJMDfJo0daDYqI&Nkq!XUi4EOw zrG8JJ;w4ftN9cKG5hRL!KJl8Vka^g~MswD@N5_8<6%o}X80U~{sPXmYrGD;WT=05V zk@$zgn%<)9s{?Y~>+z{&m2ugFZZ6k9L(8E;jU?pGGsV6ppHh{g-pmgqk4nGr*TWBg zsif0FO0Wk6S4-jFgZHvrB$pONn>>@#73*IQ{t%_UB^I4=o;Vfze+Xm`1qD_};?41t z>ciQ=op-{&3c~~z>>(WE01i!i&x5p>Olk5J-A zbtkQUix{^ny^okwl{k#P(&Jd1vacO0F3Qp|a1|JJr139{A%K{J6OOp7og>6b^9DSw z?_O&sR+_4kJq$h;s-tt#Gz}))pl4Oc^{wx)Y>k-^5x0R@J`KKG)J%@OGhOA4mAtU< 
z*{^#KN);n>=JEBN8n$Pcc)v)woV1$`-g2PV5p|`AHwj|zUen@>SmB0Ds2u)cyvxLC zZ56{Qf0Ue)is!~*BW`2a$}zL41fsM#m(!JyQE&2{tLKvbYO z&wN!&t7vZ5ZruF6>*uTCDb@E+sG*6gN=dWBJVD_E^JFP&&g_g;&j;uimhu83U^Dm{ z^o>Kq_KzbeUPo=Tntq*YW(y*KLFh$!(aI^pr!n+PH;r6BD%8K>?K*U~S9ZuFkb6`< zFu8`-#D4EM>+Mz_S`b?KeqbJqE0poKiR=xlq|=;*>}oR~WeP4gIq-OD*o;m2PdD+h zrRBt;Fue{zuRNQ=*Vnh_$aVmpqPlHs#1P4<6lpj-44$>O;aAgUd^u7_Q(vCPC&FeG zwMX7ORwpi_p>N?&fUfPPNvBeI9e%a+r^Bry%83C*1a#uOv%$j6CX6fNZ>M_pj|ER} z!HDS(3*>xz)s5;ld;sWo!T$}k25jyh9k z5hET>O?}-gu9`~7{MF(l~3D&#& z^;ELvk48C-Dr;7Go~$M?u#Av<0p7Sz6L@!1)tto?MTUR7Ugdw_xCh9m0QMuTPon%f zwY!-&z&YcAT$S>P;%Cb(k3yb(hrz>~`NL26RjC(sZcVw9OAie5%_M~Nn;`)10?4iE7;5OiZ}&x%fxYH^B;B^9&aauJiBe| zGT6ZcSDAcD_-h67JYQ$Q86@L1?YCFfv6d#|gX(%$FXA7tUT%1scNikQOhho1soA7w zhRiXrozITEckp%uxROYtKtp`#j%rVZehx*}BepPr5JJ47ENqZ7 z0sir?6Zlo5EtRZI9AprKc1L>k8(mdkg%Gwt#yPLEgZ6N;KLyU}&lQM$&qo@PO$U@8 zj)w-g4~u#t-)WOB(8FQi{VSpH_4C~Tp>ytO-YwF_)}1SD$tZu7b#l6t&znuss;+)_ zd}sJ0r+AmftoL?-kUmJqy>`C}e`k*gS_o!3e19+;vU6U6@h?ZVw($|3MNzQvfm=Th zrqkwl!?*=V#w+9JhmAaI=4a}7c~-BL#m8$Mm&0EHY1&Lt$7^h$0kmFSK;CarT(QWrvF0FKq2sKBa*MaD7BTGCQPgsjcZDKB6_5d-GP z?OQ8N5~@ZLv4dkOt;ri3uUhJ~$>n(PI&?LMW2X~1J+sofJu=L!dvYs!H;IleuQtas z<9mNP83PO)a!q*Wiu^5McXo4amOaYX&>EhEF4fh!&tfZ|z4&o$J9&aWN$xsPi>DgU zvo*^zDdA;%o-1SF2Dgt8N5KB?1k*Kd4Qf)vRx~-sO7>{J9i&ZwnfeN`{{RZHZ&k#s!xo_pcJH-X?a5_1Fp!!a+`<$_RBE`L{G|%E!2? za`-4-AoC&IalyrR`gepQRlz~|R@JYBt<`}8IrSCQM*%3Ubsi%;gk_=QvHT=ki=E$O zBhiI+9u)X;`g(wnl^M?!==NS1w@s>IxQ>O1@$HnK^s_Mz; zbDCd=01`sL~))}XWF$A(g@TB7{^0PG;yNkNX>OdF}0E9PZ27v?2gvP2(gkg zQcCf}4qRi?HAeSQOI0U-PfF+YUlyW#tvFwN)pJG%JEtkm=+o7&lG*HBJ#_nVsMt)1a1Q?imh zN*ptdoUh2wMe(nWZJ=F&!8zla^6wmYwdaMTN%kBa*fr+9B>32ME71fOB(Xi7w)`Ds zsCdPHeC3h5w`%gRxurb9vxFVbYPL@-whD7R)5o{oA=Gtv&WL4~bNu|`y`%PZ)4Wh7 zh3}QZ<0o}?J`?x>qq$k`ErBFwBv+~UOX2>RqJ{EfL9}q7SJ7dz=un?I;O5!vYvKKb zQPp@*&NTHBWE>8(z9W5+ecY2%!8A6J2+Hs~_NR+B$9yjS20{{RZiRqgVDjt(oDXHwgP1;cZi z`$OXwz^Uf*RV@O>%gO}j^RJgYUGPW77CO|B>Gn?Wmf-B@E5gQ7t3p;t`>d-n!eXiW z&r=se(WcfXSZv~VEZ@Rw-aZceBG*mAUCnOFn@0wz{0;a!abQ)oTV@XF^AJ1N(%uH} z+;%#8Lke^l1&1cPa_ki3l}Q(vW^q#Xaof2kf_@v?!Dxo(gWX!a6G8B%muiu;5J!&W zipsKh^!U|flNt3Dr*Gn0J8~k8@_)Nt%&Jm?)bSNb)Sis%EbXnK+2oW7-9W0oAt~gt zE2ukh(yM8@yGVhUa8CeMXN)!IBZRUNfPQLXOrsp^jmaXT6nHa9erfwSG02 zQkG$uxI<)q{gGi*aEeVHgW>-Giletl2k ztAQ~dX!gOa&k6in)%-g=X5RUT-OBn`vzOqi;9cN6eouwWADQ} zS25%N0FN3h0!Cd>1}n~7V!kN0{i3z8W;Wk^*u!o);<=9%d~4Np7Gf`n+5P zxx7Pj z^4M;Zc1Nvswoa}?1|2Kr{{VyE6Jpe@qdt1JEZ}1Y@m!wU=XKR+??< ze=BD>Y_QeZwT`AD$AEbE6(~WR;8Sgm=z#~Rs{%Z7JNwtTq?}KhmBuOp1x*xar|V9R z5>quaaR52$DjboW(AePwZZxi#0FFnsN|87Buqsgq1D2}? zcYA&nD#aoUE)U^HQX3;!!3THYig1Jz_l-)shu%zPm3HprVy#F^axoH)pqh7*gmgSq z>x^(eDwShm4nU?}?5c|xPX7Q(f0vP+*sCjUtVTsOV>xBp->o*R4HcO(+d-r;0nq2^ zS42ujEBaD-BOIkjuVrRdL|}STm5Rs)Dt{?K#^1_?w@|@>8Km9WBw@1H1NEyB0uUT@ zq*zEfSN$qPPB|j2El8AF6&vQk6(X>1tIujk;Z_-`7BQR?S9FlCrGx->2dzwys2MHz zQpE_!$2{~kIfUco7@{({3r0cqWIfF_tQ4M3eziGC<y5ygmL!lql=p}4)rv+Sf?@$>BO!n_mLi^AXJJufadX$^BnL489AevSKX57?oHCQ9a8x z08{zXi@aMtf%0MC=MtH9u7{0KYN57U(mpW%v;R|c|wxWi|8REMeUkqH`#kxcU zcdZ`?cxojHH00_l?q2Zq zx20kfxP_MoIPYA>hpE6OK<>OAE3(q=<9lW-z#MUpO4|)JQSy%$89|(f zi#$A-GCX{f>0FX&*AQLh6on1NdS<-}PY`lDo^f2A?YNFj>IMntH1QZ{r_6U@u@iKZ zPKUv|&)7gx#|w_N=vE#uwM$=^g(O#w!>A>qx9@N<#bz9$2YMr+8tcjKKt)RJZahXfX{D)^QAS!udkNpB6} zmc|PPub+Hz`%>QO*N=@zh1_}`weUPa#PkJI3N8%~CwE^FFChh!)e! 
zwXahhfUS#f8rgYy5&|*F74c4)`$0_+4BC!)<8xN+r|ly$$FOPx80QXa%LHhczpREu$V`Dh2@+(Z#KWA&W6M?_Ya|oj+!o`0S}q z+BTksma~S9)fD5X_pB)8mO09W>(Z>xt+dI|6P`stZmsfL1Eoc2QXqv~7Kc+Q zOPi&RA-_bm_BY2$LvRF$rIuA!zi`w=Wbx18kq*J<#Vz#WYo{~M&aqTJsU=PBcHR@v z($>XKmyRlJPtCQNV^tsmFa>XEHnz)UvuNX}#YK5x1;opN%KBH?^K(TXjnr}a%W_8l z0DuxpkdDJ671wH7J58m?q!aRI*zYi_-n}*CGxHtq>UxR)J z*y+>x$!57No)*4)G0W#)7N;GLyvg{Yh7&4uW{A8ArD`n$;VpsEgwy}0YekO z?^iTW2gPWKh1<9Ian`qB(_)BjE016+(ZkT>jih|0Gcx+cMpToV`{L0WcJXp4nTG2W`i7kD7#)%#fi$tSg1lC{t(lA5y~EK)&`riRkg1IO>sj+Y zgH;<=A&$-<+qMdsZKBQs0q!X_u<8v6I`U}uq{VhNNW%QNHCp0bs9PDwYPP~g%JJ=r zu56vTE6rCcTAM>%%#BWY=0YPF;<=4mN4>sR89Q63HQuh0tfb?gN{U?=Vy&@RS9T$?wR0)SqsRAuW?;oi{Yzyl}LnR*A=2HV~z8-9QUi%YM=lB_7qXVQ>O0C zaT$&xzg-BT(h?(|C+WpmhRQV~_gv%BuR;(m8;WeqBO8GRyKb#ya5*=)!ET7T*@}(s zQZPWz9<>$5sR|}!IK^XX8uh$+Wt1HHnn^CEcAI9s#-k7(m7ja4F<+A>J$bAvpAp7S z%=qW6NByv}9i~#l+Otx;tc_A@pql;l+J-7mPDOdL@$O7}|7G#R&6{&7PvOykC56Z3%4TgUxjkSgctacs>0q zhxn0vky=T+_(2sl;V8iDr%}|XkD9+}Z;euTI?C#Av=9+aSx-GH<8KiD+E*u1K4zfG zIt2`Q`q!-f+_&;w>DQ5@g;9Y62cgA&X?V+7)ij-QXc|{LT;Z7KjMwE{NlzJ8y{XBg z`bWdet1YdA=Bk%5_N|ZYi6q5h`&5OWIm7TPVjqrIUL;>G8@42$;XNzh%?HOf5nQN} z?fUfGYuo-3d_aZnVpp3yj(-~b^B+eupwe?~Gxb{8Mq5gUG$O8jY4F%wYZ|g#U9lh@ zq*v8H2_Uerh9^64z+^D5lKvI=uSvGIj_A%5vH6KL>d^dr()23;>-pk8#fAQ`sUBZRvLU`b07d?uS&HS#SI-8vfM7>KJYd055!;FvrN## z#RjgWvw$~_wU^)@+bdP^J>Fu}av0!q#eJ9h$x3xO7en)kycvp6_LI>cqEdWJwvsX< zxEyjmYNdDMDDPq*us=gzF!*EQ-lwZTv%KdXnXg6gr-}^wWO>*R;w@gCEXCp)K64kA z#|od3x$zCIFYTD2&s=A5Wmqyw zo=0lztAhqVkc2^Gw1IY_>cQ81{-i!JT-E@9P!k;wy)(z*dX8x*I)4qz*oAp!zflLaxw=3 zu{;&u7Z_V3!K#;S2Xb zS4JOQjvjpNohSDD00#ER{{U*BxYk-vndT?)tOI%%A2>gaMRBMW$XCT{PX&F^&5hzI zTFIK%8p|F-^sOHU{6N0&?yQ$E+>oJw+7A`QJ=O``FMfp7V(?=dyVp!|_{nU1)+0KF zWa-OP&(N>ghvW2js|CE*1-AtU=C8Ou75MHwO4>G#=cog%elPq7_?LCzooWkpyxO8c zv=d*Vf3tVSh;J-nj$7p=v4Tx><@M!+tH|DmrO^cc<=2>cAj?g^sWA}PTQ098TJ(evhLfOa@+6a z*Yc-AKpcviAf`* zl5&3SloU!}}q=bRr zp{ZGS9FxU3s~OttZadRi*?8d8fDRPXe4)Dl=j&QTNp})9W5B5p$&IEyDOD6VPW2!| zxBmdGM3$kk50ZzXnrV**<`rjPW#H6uL{R56uF-LKARCuFeJNl4TYtK0LcCy#WSH3B z?wZyQR4E^dXmhlleQENQ1Z)HFsN)1EIqgnZi+sRW$~*Hj>!hyR6z*_TH7ra^gy?(m zOBiKfr8w$MOuVtmjQZxYYq+eeHd0kGxIVvHcwNd19(#&tk~RfM@5L#Xj05eOo29TR zSaO!Z+SvRlvPqMMJ+nb@RFTF9b5NNX-2p#J&P}#y7TVN*nIiIBf!G=hio_iBG@etV z0i2A|PQe#&PIJdJDCw!CtR%TP3E$eAxxx9xBQaInxaY4l$*!0nCkLKuSn{Me$yiwd z``ZT<8oMt>#t#(9WGeV1^v|_NB*8$eUavy0eTyRQOb|dng+ndclaU`4R%?QTj;Hxk zBbi*|*WQ!R)(}tPFD2-aepMij)GOt7w4YH`BXAfjOjzd!6?P3fQoRN^ZH`YF#br9K z+8f52wUR5mtYq4tj^A3))AX6Ir9l!F&j*Ur&^{JxdW`7u#6TSddZ&T@6v1HYCEQ@J z86=A2l+xBmrGvoKt2Ln)!(WG!N|7?e7~`GG(AT(l6TnyT7)gf;I-FEq1@Qf>K2a9R zLjBwd_1_Fx#Tpzax6wI>D5cK-lkQj+*jKBpC7q46`(rudV2oPDh0&>ky3 z`^VREWC`b5tyCu5VA597T^}4RziS_&cw7mQ+D(B#$lfv5zR&On!-#bI zc%zCiM$yz(cYr<~*yzxRlGAGpV*iX^ zBhksJ%M~TeY0cw{cwC{l$F^%LQ248FXdO(TU~qR=jriZjdfuNq;s@iYt_NB1&be`L z0;GU{ym4O*Uq8gke70xm8Gj66>qDAN9^$roH-tiPVm0Bsd%b!Z5~Y;+`3HBSRsw+7sDIOC;mS=`30m*cSpugZxu zA;aKf6~L*|aXr{s9^%cvbsjO(iu5(m*6Kob2Is9@ zhrz+ns=zrNaaes>b6p+PbH8_%OyN$eac3S1W7JWmYZo_nV+cm$gOOdW-^00QjD&|I z()e@Xjoeo%S_6&U#cHEC&c}sc8%kGb)$nw31RH@JFGljV-5&)2A??q^(Si8O_tAZqTPr^jxAqe0D1V&s5>nvyV8c8|za zN^n}3DZPOrP=^c6HsEap9B0z4+e4Hak3Q8T_DLd=Yo2uz*xE8yvCiII$~aL~V!C!G zers;~PeoD!VErn9)6Py4ueEbtSJ2%~owg%FND2wXHdTvRz*uxV&5?Q>q^QTw@QKt85o>#SM8G|74+#>2+5dBnl8oIAq#>#ij+b( z0vC#GN?QjbBCG-xYfB~6j$7_W<$yj#rjd@Kp3Db4)Ndk#$@)^bH57y^k@rPJ%0nIp z(w!y3sK`H^LiYrMNEFnX8fYYKvS4wVbY(G?Zhh*kmjs@e^r%hwg+6u7T9kT}l$d1bA zRz!WPjAxwGwz`OH6_I+^m|FPOZ9?QEA-44SyVqyoxqS6Tbs*!M)-f2PC2UaT_UF#80%4ANMwwa&Isnalr*`+%jf&u(1 zjK1+h$n3btIs7Z3@fEq5nN*yL=H$_{F)iM_>eVVndTe@FXj5?JW_{MZa1(R6Aa<$~ 
z{6vpW`C8~9;Nx{(@eR7#*}h}I#|PHFUj3zhD?fn?&etM1fx$bu6~~Fh%MTuN=yIGE zA2Y0}So()T_?d61&9N>6WbiU8tkbnvZlNI#5BH6HKl?iTYCa;JXO8YkLm!p5b+4&> zFXHBw$sOEsoh!F4#Y+)Kn8!cLVV76-a60X0R$DlkgM!1W8u8DI{xgm{NmR%O2c>i# zGx3qPwUilwJ+OLL&3_p_BTME;k;u)EdLESVSh&JGr#4%IsZreVU)nzIKNCYZ2s)k* zTKJpd7s9mEwOQqkEytc%V-@x%ifwhD6i#-=`(M7=;rvVRlI9p%OPgaM!LOgGg^hW0 zG=8tc*}wK=R2+|kykp?4OTw2vYOn6W&jc#!hS4ydUDZE!|dG6pZ6(HS({)PZX?n$fbzJGxV=( z(>3^oxRx`!aL6FnwS=q4moG!|T*D0KQN{H>iqbq^Z}wFla;V2$)kDO-Ad2u6iGdq< z8Lt*!h*H=_e8(k1U~|yb8;^|E%tUhDU=f4!bgtRw)Tkt;^*Cto^-7NM)a!L^65CPK zT18QkbDVUpi^3Xg)*(zMYX;=k0paf+SYNt?y&Zav_31tu)h#9pRmMpB)#+vVtt$Ga zbLJT(dNI}~u6PSoxQaN^XD5$J^vx3LIqg;09(wW3c?W}T@Tv@;;~74c>K+@rO{z*5eJ<%A{f_cE}OQQH*0GJUjJ8@7=;@e4Xakwy3 z$t_(5k$Y)<2kzNUdy39ADO5c;%N0&Koc{odbci(jMjlsuVMj{&kM^Iwn%_*G7!-lD z;GXsM&xtJr_S;ZyN9SXMisT{qGp%24kL-8~a!xwe=zoa6XB!D*k`~iZgMHz*n&z%P4rzKk zrrs?X69C*ps@4L09QmE?*dta3ON@z=+n4qDjV%)!&T^LFiAwDmIQFj7!`7ehk4&@MCL6gG<>Klpa=P5@d2LKxE6pFF{{RO*E(@DT zLA?iX9D~Md-uxf&8Cno#y8}JJug#B!zZg!CvA}N1kbds$SG;%^_JM?=2`~A`&#|L$5Jo1zsl#dR-3CTT=bxmF)C|J)GZr@N>T!}N!BD~v6@!h4sS2L+S z>u*o-WD+jaI2quBUUc||)7;>x!|E+79kO19$t(^kMZH-VVg9wuTkDXz?q~e!!fR0} z1UDYk{{Yr6?;Ot}w#ezCxCG;H;-`w?3F}cACI&$T#O zkT)pc3Xa;~F#{YPhc#&*a3NQNUiBPhImGZPVO%80L@ap&6ov*TBoBJDk>P!PX{J6& z#}(3=XAGj#Dwwg=KT1z5kKz3)*o-m2G>s4=sjEquw3$!KBymV>w{kyPvMrBhG{0se zjDuGg>L|+0rna)LITbob$3dEvpeitFM0&gnyiWI((4=I`Q?WEPNDHZZHy0Q;Ozzi>q!Esv-_+r;+W{bDpBCRimES4hsK_A}7O5>EAv_78+griGFj$1{tipdp3WqmQ) zx_vuVhS^5xFaYaRHC;Ju6fjbF;;^UHr&lVm_2Ribtf|V#_vv7vQ7aw&wzmt&s*R9( zkz1B846%l6=g?Q0=r>ogFv7MEPkQfk1-o=taljmmcCPFlSyXzQ8I1~fcxcUutkBikLwN}p=U)pW@ve1s>Cwdu~2YUhJX5ozvG(vmgck&(_Z zTM*n^!hn{|QMS^gk83KN_u{H+SFNWj8IJ&WsZmtp%+aK!?Pjjadd8`1ZjB)zVNNT; zemi*b&rY+LCC1;EI1Sddz9xK6i%dsN`1Rwf!K%$6m~4EMC9|Gu!nOYZ7tJdXH2FEnWj!lL;#ZE& zsHy~v$DEcv_2$>QtdpJ0*cq>qz~w)@btJWH{m1LE%kc3=t?siV$0klabJnwOHFJWZ zOz~MaT8ZC))m~e+2mEX3>E-rUYeRy+5IZ*6ZtGQHkQ%e*)tLqv{{UL4d3N9y0GxNM z>8F-vzI)z~|PTZ+6m39~t(n*x+eKdH(JcwQPeGNcq_v;_V6GvE=FpV?u zn)s^qt6og=i8N2KKOA$?uUW~oD()j`?aMNejUL%$bT9dKzkBGly+Z|EK7GEJBGO+7hF01<(Tg5whR!4)8k~@S>Z{dpR!eA-j-J*^x$BHqT zls|ax8XdFS?fGl4(_)kb2N}m|%CfPHZOnN3*GHyWqK*{ey~wnpe3fdBY;>_R!r+me zy{nG#Zig(e<#`~S8t<9}Ab_1kQM}V(m39Op)MAxSeikyPE?FjdEw_c|mVMG^uUgnm z18jRh=rC)wJ|2l1a-jNRq+JdWa#IBL;<)36an$LSI%wrIuMaLWwC6o>T@{9a0RX7M z9Wh!q`bx&YsN=sCp>1xQ5;}WUss>eY5~|GTv>1jkpySk4n{6>8Y=A!jSN_hq1R7OT zN7`w$V&!G4AZvKXIphjsTcYEF1z2Va&ILy8z&Ho#Q_p=2n@L#}o&kbRX(LP>qbJ^_ zkb?7l0KBvNYPA)L`NHGO|Tq=>Ny6c0Z9g)?uYLhcTJSEdKDS~Pu-`&r8^Ei zX=arb_~}s#fI5-*Q*W^?u46(u0;t?+THyyq{c58~pk%jST7u%~6oAI2=^<~qYFor0 zazI)~a1kE2#+FQb}>R=OT!ANXH;jZAltJUh#>lT3pYZ5!1bLdbft= zoVihwdE>o$M3(V3a)5DJ_Zn@x0u={dFVN@5W-=-)@h5YJbpL#o3pd=e~*#kk|?f&7U^C);};s8!%ZrN8RP@f zxSa>%ZJ&s*%yHjf@90}K(_hXmwDUA+%W`&_@?}|8oz|zbhr#_`2_!)i*#!`X!Sn-> zORDMiH`c#1kGtFHUUlNH6Y9EM`HdB}?(JP4h5SYK`&E*0w3NXF*KIsqDMytNR|`6G zQ%w2$_N?$c2TZt{B+fEU2U_?S;w1k7VAmzH+D67W75Y>AN434ihjkLD$Wn7(oL?I3 zig>EwqCz<&99Neb$>JPY_IY+TaKc^ga%+0>oGW+r>07=Yu@}y)=bz_XY?m`yw<8(q zJ!_%xe~8u=jLRNQ;0W}s>tXKibFLY^c}tY^tp~x;-2m~$k<$cpuTJ=X;YF~JH~fzX=ZJW1l0)MsfTnYR(p*N2_cmLfNf zhqIE?bZS~lL)QEo`#|4l_hJbo3>n4_dRM>vGyS0~ZP8x8;& zJuAB^%^R!l>}j0v>}W#qN9a5rAhfu)*eBnQdZT@BXMS+-1_W}(ydS}zv=yGA7T>gE zAaYM%rFsvCJ|V-XbhsS!8Lyb&z`H-6K95#rHwdG1zRg$Dq3uNN1Jr(^7L-)nDqo)_`Q z;)b1LX)JeBs+G^m4SZYiugBK9h4kjz1hLOv4Q_mN_{Dv!>US{0%&X=RyLNlmi0b;R zw=UD%9ptgduR4__S^G(z)U#i!$CYkZyuN1{=cYy~u(<0%7Vp3Sfwd zH!NF6y?Znx1sx6vVkG0v_bywjmISvQz3Z>=o|_9Q%3qR3b6kyzs{E{XbDnFn@N!(; zNG+{mW-X1XqOru(YBooqmRIMfn>{PTe`glLFy|<^;MUxiD+nBnW0HE-UW?$b7RPJn zuA0i5bM9*c#CqnVrcT!PQMxcZ?ysK2Qkqwh`ZfYou@tS$u5>c%&2MqODZ-J^t!rpM 
z5_Nki6U(?P6z7s_%x(3%iDWF#)b};Hq}_>J08b;RubReDs|{{@RA|wc#C-?gFOAos z&`)qd$lx)rZ}1<+lPe)hox}inYzp}|!M19uk~st#@3fsz*(mbmAdWiXzE-@d#!c>H zUjs(Fv{Cj~hQ2RNDxpiwqtG>X-WvF@WC0Dx_dBcM-5=tW-j>Rqdk{DnHQe}H#7j9)2b(1H8>@ET#g@~BK7Kjry1q>CzW@nA z_P)i5rd1-~fJo_G#+C6B?1f=7ay@$2gDxq>Mpl$r;nc~a7JU(I;#iAdTx8ajr->1H zE!&_q;JSCjsn-DRIQOoHPWX!)$^%M4$6VK`f#R7-tLSi4%2l;jJ*vmW5r^8cWc!-j z({=dfoF7hW?y}A(dM3{UACkgLM0*aGt23&y zoQl?iQQN~Yr9Fapn$3t!voc1I|BM`+OcYH4=9|I}?MQH?eu(7~=p_9uNk13X&^@ zes=foQan53;=R~ZjFIG2qbR$S%3*zJ06FRS3bW;Bwmo>DKzfcVLyk+S3~-j==~^XL zz{sl+#24>#O_3kW;Bi#1)J1a%V}&@S5EWdURN+tzvUR0YR#G-l#06q zHzKEKO14<rX>B6rQ4nLO2G5 z3S`JV1v7E!@7kRlY{xku;*_mo;FZfUZ7a`O9u)GJf3j(@lAMpmrF@v!{{XUUNOT;J z!${P|gXPZ^HG?abN5|EuhS7diZ0m*|r0t=>=>7r{5iH1ZPfF&Z!JQVo)}Ygw<&LK@pcFvysqOjfl@- z7x5nM7a^%vq0s4eDrDV&a54ruS6|`lINedgV0W%d!*?;=71}f01J=9G4#z7hfyR1| zK~qODsaf7E`ivGw8Y=QfS>f9SL`HMF9M@5&UBZKFs)9WQbGl`<#BdKSbCZBYXR5Mq$7JpF(Rg z{?a(l0YUFw(95E}BZ4{1;`w58x*vrgig2D&VR6an_*ZA7X^}*LZ5XOHDA5p|{p)W} z@?w*J1h=nB?xTf;DCmzj7m2N1`^=QX7#l=mj=WS&VI0guMld^pTNZ0=a7sF!?YVtEYL@@N3>}0rlxQt3ifd0}S6TZl1NaiO4vv4co z&x=10r`B%fiZ>@Nazj_9{B`mEsp5<2p|*D0yx@+8ydzc6FQuL3W0G6uRj-AkgsY9O z?K&U0W%X&`C?(YAwOiMCzIjVGZ1xq;UtW!%2G2ZuR`#)JG$_%;Hyq)B?e(s9dkdKu zp+XLQI@i(C!N!`qL}K$-SzR)t%SaA}LAN(NYW;MaTq01E3=X%#JO*p})s(ym$j z8}Uu;!Wirula0lCQ>RLIvPXrE$5Z8ZX9uPDdG4EHqcG^rj+N;@3w{skJ}bLhds{bQ zkKta`>OTYj0A+nb`tB>8X3?87e9`UgUu1j>{gLdnlO5pG);+_hPHWx5;3XuRN9Q@V zcTSb!ncyD_zhhsuK^@KJmPsUi^{;jCm+Y&j>1vN2o)a996N6t+cst;gg@v40hjRgp zGH@%jwfJ{u8erNk0LMtFrWOuHz%fOm#zS)x119z*?>F`_16&Jm7wy9L+XFW+z8BL~iejj%cjErDb(-~xp z4dCy*Wye~?=rN9JWao(acYg0P6ld~q8_862iu>3pKS+DKpUAw}Xr+en^A)Ac# zs${TYdW`k0BUVpFVeF!d*J2k?agYZj)=k9g8~myl2VR*q9FoGVhma3y#~Q$rJ*hA2LS30TC-toqnZlH=Yb!$M_Y$m*np^ zxRNC@5DJ#0`$D0~2CF^JNhciCZ8+*!QO?K8TQ_CfBpPtITz&7u6`pQ?x3n0)NvOfryb(PG3HS%`jAhV$m>}$ zUVu1ZP)mNKZQa+sFK#yrnpYZ%mcV`uKy@Vy4r?xZ)c}@p_|!jdTw{1Oo2@1_3mcED zN3-w!b5(A&VCQIl-D?(2NFyJZR1#_#=)#9p+f!)5=+d831D(}YU0a@*{A)7nQwX^x zsN8CZa=`Sgm2_9Q&&vm zIXS7})e|`qW6)O{9=5<_zG{1YaaTKJ{4-RmYG&~Az0x}t)Ju@L;}oLd3WnRBeQT3X z5U3z;&*M_Bi1r>29r>-}R8ExJ(CPfu19txa!lRbzAiD-L+PTm8N#b9Zf8b3p_>8tf zxB|6R7VOfUDeTRSLgq6W3nmUYu5Vq`#lAs&@zd6p?^+u&t`B2er;7E-BUWW$w;Y!?cj$`5UByqvMgz#Z$u zd<(7XdXI|yqZI@x7@Gf&``ikFMZAQp2mvAf4*DK;n4?2DA z7YcGsbRH76N%ZBD%EIj!6rLfpU<)_^sLg#`=-Qe*MP4wh)K;e_d9S{erpOE9YW$+U zI{mtSI7y;tH;R{p5S)DY=N0tN#VvL8ohov&DQp~9<`4W7C*n=j@1J1T12R6+xvwu3 zQt4AWu=;-Wp}(`=?ICqN`ibwl+Sm9Xjh9f?6Lzc&oy&-GL#9 zF#|crCb>KPN&Gt*20#GlroAm>OWPG_fNZGAuOs-|8(CV)(MR)gLmt(vEYsEZ& ztwo7J0FB&%Tu_vySD7C}P1C1Kl=EF;-D7?E`evwGUrBD}Sfo9BRjqd988TZLJPO5o zq8NkgUrC3jDzLrJ3oyk>oc{oJ&fmj75vI~u11k;Pa(J(7_(kzywL6&M`(ENO3g?>n zXYEDY6&bGQ;YW#EO4U)F%AE0B`1~YYJvTJM=N06+B#*mvU0OXuP8*mEfH9tHobk=- zX^PS;r-9tpCE#y~Sku!pZYQo!BC$Ru`0&ebAU9db&OIx_r%qSpTb`6_OP84x_1_$k zAyIjcnB!@y9u?FMxpt~0Bpw?lt$4n-;}uJ$#Ncz#cdouIISul#2Ah0>+wky_CW2?=am!-CjSb>J4jfB*WE}s&0L6@OR^$y>T>Yadzho$IZokaqy4gRmP!gi8a9A zfODKz=C{LN8^febjPm2=&n;g_d@lI!XQoCx`^=u_HFW9D8ne&(#VlOkDfY&-<7G=m zk?wLf^d`P({iOA^*R=bH;gB~374)k5{{Za|r>2{jFE;dFRciT(=Qc!N=rNTP3@&N20_YgYZ2yhyG@Fflzy+t$9W_;cYj z*CkmRE#Ug&J!{$gJMd3UzJVCprtEN8a%)OB=tiF@**uv}r#e4BJO%qDYL~Y{T}swW z1H$8zUa_S9#dgx$g4!54U*cYC?XL~~&z3PrB^Hr}E!S^K?==4ag1T+760}NiIpd1- zDC6T8ku17D4Uf%zQ}!YaCKAb`JGmfq73tpwzhn&?!%(Enr?bh%U50DyKNEa9(`_Pl ziYUN2&IdJ(qgZLSN6Rd^UIO~pftcsYm2O*|)N<*DgVI}f)2DJbz<#kY_INjF1Vg0FmQKpzB5xN1i{Gz-%xzwvx zj8nPYmtm_>mG(X%mE^L%dufI?J~@ItiCCEVqD#Py}_@VtCdTiW9#s>Y4YCZX>;QrB4Pxrp*hKJwX@-0h@MsIMsAnp z>CJfk=ZM}?Ol_QX$*Pts=ygNb8(->57T)^$9D%A2s#n zz9OZDqbiZP%JYwE>9lQamGETg=xdsq$>BSS05gs$_IJKZkZXdb3V1nOYNRW|=c#CZ 
zA_}NP#Gc^xuD3+-X3_z1jCCfwb4}J=Hn!n_D{{xg1TN)}pFy6L^f^{$=(VZi<1*OR zTOQ}5_@XH=zF-`8u8&UEcaVJ0e;V-JAH}lAv`EA+T-Rse{{R(7Ax1Lt2Nn8WQ=djn zE1wUVWSptB8=kpqsjDa@zX4j-mtsW$m=RnCk>U?C7?2w2^u0_l8*B6o7GoJCd=@tV z;U<~Cvv&$cXb}}}nA0rogMH!n)t{A)pscXj}SL&78cqD!v@;+sCyz6ez&WPvs2ce=0C~@z$JHqG*R5`cqy|zbLDQl_P{w|=t!)toExZwm)3}K|v$*Z_ z#bjzaovo~4)$&2X=Duez%_-9OneSn+JnbU2&`Wd*09%JBEFFM-CFPFIIsphz7V)Go$`-;f+V}Qfq zu=t=EfkiQC=HUr26_iS{njSJz7jdL43Wp=k4 zSBmbTk>aXUPNt`=LZuij5uMKfLop$H zgIPM4iEX1|?>;&aT;ggsj}Y>ggU3DVBUNj;r!LIGs-vqdP~lFWHQUhaZMQyg7-Fjj%pL44am zkB?4jtlFHMubaCEo^ws|ZexVToiV}hRbkSuWoF4E&||H4@-&YjRa&-Sc=F}uvO6B5t29g2K1xNn?TE6(Y_lW&Z$Y ziq6#h7os#w9yJFC_-Wy=czRK?S{?Pc&a9e9{L=B)?8PmFZtJG(B=A7$Yld%ubHOk( zSg|Z|giPYUMSOMe($7#W5JX@JDaJ8et)IYnZe88z)(WbvLF~d;cK%GK{u*CTQbkWyfEXRbax31vCGg6_Plw8e&M>6)?~2vXyf*|650(HO{cEq&Y-2zQ@^Oxp>%uUn z8y_K7wOaGMna`(&p|^)>!GrHqr_-7#3PlmX;}zb?q?wQgZfdf4XEK=z^ISNZuKE{2 zO6MbIp-*v-kd8VU*3@I!0Frwe+p)5WEu_cDt_AzYspzLTEevf3 z!D$xJ4U4Sa*8+~iC6xFpn+(cxmS(;O{c{yeFtlOz=)mfQ>{x!>Zo5$&; z1Lw*V4|?MD4~LHj?yf_{zY?u&pfU+_@4KaNV=>f}E}n<6M+;JmS{+e+JWimU7pr9yJyZkG^HyfdeUD;crG)J#+fDJ5wv>x*PiL05^Szm z0dO|2U>n;;IKO3oLEiMikPsAknb!1GAO z4?~&3EfK9H(u0F1>rl&nq>-F~?^x375gf5^$2DB)!H#zy&akOc(Fj4dZhgEF)Agn` z@*bwKTUS$&oDtrmlT{^2GjJ=D+oNA*?rk;xM+9P-CHf8+)(`fu3O;YvqMKPg75Mtr zZxJQ2vVp#J2$gY@`P36?ae_}hJ6An6x$|)${3^R^gp3)pSUgi@stoAcRa_FMk3wn* zbyZ=&#y#twZB&N&SQ={TQcJ+>d8O^-%Z?b?y3~sgl~3YotLpd|ITfEh*UC=goOGeQG1B;NTNn{JN+u%jerQYfZRYi9ljc)}z`vyCz$u z7)IuZo@=>w;QJaSx41m+bm zoy4}|uOrg4Ked@j+ROF=#vioZ;0mwM| z)H3*xpc_kZ+M|{=X!2VXhpMwe*)n6A(bJtZb82#hO%3JoSRckg$E_oHz1O6t^R6as zSfdf<^Pcq+YZ4wj!JPD|je9-KVM~D4<0syb^zBib z*%P7sMO&*Y7T~S1-`se*9ltc3XM#;~o+o=tcc z#J}1BG_6zY@?BuD{`F%#!W6yMbYZgkwcX-+=ZG{&yjbio92O@Oz83g91(m?LOI1(~ zaLZU<0z7?V;+2MDIB?wvZ`Qtm(<0KN(|qP2oM$=hUe*SM3W-&*hAOrr3wmjs&5oGb zd@F7~PzT-itbY|5FwCHp3VvZ)-X`%*muoi+K_`J(66#j^ox2F6QV(33^(SdY#|;-z zOPWW|KeQFq>^fYcGDbn?@UP4t+MmK&$BKMYC`cYfSxLzm9`*X&@w>v2-dIQ@Ke(51+3b<(p=i-<|hDk=Du>i9mcmY^%!c@=Z0wbU&22JwX01{CtG$IY~(Kj zzQFyOt*$gChB=}jY#Ao3*?ck6?5+%NYN9n6UzeKeJUygo`dMGIIYO(`73X5;x{sN& z(97`=jXp$~?>-pSqI(F-N(SY}4P4*FZx8t0Ti6AX`Y%Ii|o~d2W@f ztN5!MER;N3iOf~O~g zUw(XU&~)8)-sc-gU~mUq*Tw$;8N3F%ZS* zd%*J+!Daw|D(UrGaQ1-h$P3cGebe@Jv_97bKWj+xy-M0doSo-}$jR$k-T~KNLe*Aq zcs z*T3y$CA3=-{rC6z>*On4a{37I9Hm#iLx19n$ccF-IosEDM6mL4(Tc5VaECKmpIpWJ zVrntYLthmc>%O~h2L9N~b1Fl1s4!EGS-7u&CGmyWBNHAwlUp7a@%E(^@<_698w6*H z<(>+)I5|a|;j?T#NOClOnOJ;T)4WRun70hu$9Cepx8qiuZLDe2Tx?vNn(|+T-xO~e z+BA|<2Lr8j{vz?e_Jxy0CBQ!?Ae!NEpDpAHRO&lNsq>G-EgowtIpDdveWU}PwZLjN{{S}R8TYR2DwLyR=5s0vl~&U#!Q$z(@r~ypxH-jZ zS^Q(wEe1u)t_DEe-nmUjPnFQOL!Q-^)^mr;IVaQBy+3A@ZqFTJu^4&tT=i>zj(Xk7 zBr;se*&o8)Yqt1x<6E6Z=}pXs3_v+*@lOnC6I?QvAPnGUy|dwmfoHf>K^X&Z!4>oQ zl_w}S6ZB3R%&|4(li2!u;NOUCt&B4nM@(Zi^gn@oPiYuao>*jp)$>=wuL@n-h@nWx zY!AFnE7W{G+P$QxvH~`oV>R&js<^5S@ki@;3KVN4&AIfag|+xzAkQkZ4wx0uF0~sj zTNq)-HRAph)~}KfvZ%m0Jl9D-i{`X|w2}ar;2&Cavl`K-d7YoxRHacx8a_DJZC2V? 
z;dKX`ob;~-@yEm)jUM_%x0@SJC0e>)h<^}mV?!KrY%UIPbCX^L@e9T9EUv=j?qkR% zz85;jSH|MiI3HVP=x!cPfI3)}*y(m2mmbZfa*vlhpKZxxGl~eGM7)2AHzOMmmBGQa8+!e8+b^1xYo$ z?4_fRkU9#fEEg&OWyU*!Qy4|FTKLJ;lA32v;O`jR>9=SyKn-6{d<*!#5X5dSHe&+} z-D~6vONO^#%0VNp;nKUm0{F`3PniSDj-5?>hY@7&5l@~AQ_!K0r;4TQrO^E~_*diW zOWQ>;T>?siKsD+*X0;UPq&`k7;Gctk8g4Cos73Aoz#w<8rF;wVPD|JkrJNDQIj?}H zi>-#Av#7Q_oL&wvcSpCA9Ptv%j(b%bi&evUZhh+$NAY}DDhQC|j+w1>)gzWX`HBgu zuPS($>~T}}keb&*B%Pru6fpNEt!QcbyxAL$06474@4lj;TsD zbUi0NXr0!PsArwWis>}1VB3)ECqCx5O$PUSF8Axun(6G=707JoulU!|V5K-K$8*BQ z(sOpP=z0%{%Q6PY$Gvxc9`OP`!@n5!uN1Mq@>r6@f$v>MhkQ}wl~$1P>TB*YjMi{z zO`jv0Wl^aH(ml&b)*_f-F|F%6xTH`9I(5x?Ux)lwj1&ohJZ8Hs8(c~Du$T@ySLs=H zH=QdVo#XQP(!J5v?FRr3Kb0?2%KkM<*7!P&vUTa2y$)Or!>xTRYDP%;nbMRVqF{Pd zN{4CVy;$SqZ@$$RYhHx?8dAh zjtTtfL?iX@QegKq_AqScwM*3wo4X+6 z9V$71F|c3mn!DvOIonZ8gJbdhCbWgL2_J?jE+Pj#xixCS+DqcQP~aY-s_9y6xALgR z%ig<>4ES;}7=$hYws1v$U5mwXw`26oHV#f&qThx*INKD3BL}GOUH*rsC_t(?+*axD$Q9Q z6{_jFo%t*C5Ds%zyfN_J{{U6Gh)ovZyST4V)BYbu@)l@LGs@$wcHR*1Ev#iE7PxHJ z&*Zpj`Gt6=vG%gVQl(EqPlX=}?R3bIWwTwuKqDL)_J0q2KehWR#cgJ*js_RMtyb`V zhoD<{o?(U^MS4$%1(Z>2Tnr9)ujm}7ETMo4Yqjp6#Nqk%lQaBl@e;#B zf!=vBgeN)aitwL_pS8}9r6<|ro+mpZXCaV+b6zv8e#wccYGO%k zq$59dyVratQgsq%n^qJk#_Z&DZ;YDn#9dp<(;$#WU#kON`S8!dx~0vOa$UrClboLQ zTj5{q-KO|k~vyJq1#^@SV#{cc}-^jw{r49X8eo@vsgvj8;on+p`xVze8F&RQnt>@%7rfXDQ*o z3C|qCwrvB{S8=82QYjWW{E;L4*Qam8=ieVJnm$2~Jt)i_5$`qfI3Gh@>k z3EK%gcdOP~4&WW5Bhv!5EMT@Py@)vKaaw6>62uu!0Q42hI<{p>a&g$_rqYraK{4kQ z$9Tr(-oaVCpa}-O8(FmTU@V+hlKfMS<|t(HPIx@mjhbc;4@Jo%uK|LFB6gAH9xw51 zHuAHX6cA6!4>iW>KM^cs$~%T&a0Y7!;%=p>TfDNI@tkqZc$bQNZ>`zg5SzB-?gG9) z2gEYQO{C6z{{ROpwR0t*?$4^htX&PDXFUO~=fIvGH!6~OBLMXRyeHuHzdpGtd4;!N z5I@GgnfPUCDk3z;?~WAo74;Y?RI4Q^9~Ugntwm^^?}dCVY|*CcVVrZtbTart*<&I& zIRny@!}m}`0%QZzBv$Nqo?>BeduG0x6%&n+Q&nX3IU(WbpmzPq&UoUlSZTsP`Dc#R zxb`Z*w$MpE#Uok)8jb<>6{Xx2(dfdLHidgxyoDgfI5-~lsi*33;3_R*U0Yn^YLlGv zS2SHNVBzGEx$Tbb&a;XIM91~eWZL6BB zKPx1b^5Cff$o8tOdi%Gl=bm|}u5}cRN^ay1b6D59yD=)fXQnD=RmGij=*n9ob#-8$ zeXxGDo3Hq)C~db#HVXC0u4hp3C)ojT=Y!8ZD~$0+jitNYYi@GK(uIgSspWI3)3%4G z_)o`iUCZS>yRbm((z@627Wl{vMi#zv@Lk5U7NO?I?iHIS2c>$7>+`{BiDWq=sr0X6 zgw&~{=QC`*Q^Z*_iv6JcX)c#zKanZkgD0B#8(aOMwMey@EZX78-gqsHb6-S$(6`@g z(r!s{%AQYCUjTet)kc-#nHeq^6Cudm(!V$2*It!XI5YH42+HvF@a{EZc6}S8{B@pK zf<4jCVoptEc=O{l@M-NM-kjr;y1XaDJ~k-|ZF1tXz9e|TTTCe>!N+VG`AV6cY0>4m z^)+%DF|TzUihmQnI$gc%M7ONPP6!7TgW+$8I<~ib6YWK?!<^TR_}|43KCJtgo__Xu z&3XsGYbVri;|%Q0!6VkX@|?1zda6wC$JU{jVC@yq`d{JY=BBW#y~A}KPIF${;7wan zU98hFEsk^2yvM*k8S>zdBX;Enr%Lr-3~7ySu`wla>)yXn!C*awHDrD>oaR!mRi=7g zi{d+>ZWLS+agEuk{wDa1bELzQaz+O}^^XT8`E z<9T=(&MFx-W|V=D4n1q$ty$j4_&C$Ptmr?rUEG5GM*^Mwtg)~e2irBy{{UzZPG3A! zQfj3};xpTd=9Jsq))ch4(LS-&yv%(ny!yuD#y>jeCf0yc^Ho>Ya@%~<_*Vr+@1fHQ zDXX0$YR5n8k4~AWrqxSyWA)8*hgrcpi4`20$P1q`uytgenX;s!j)HAUMgIVn-D!)d zCOACTJ0`Mr^3@A#Pm_loDpc>Os)Sk94OAK?`qu85e-gNFGp|9KygswKGp!C-*!4XlT81ldO8|Rf znXCAoAdJ2Jb6hw4ByS8PEB#tG)I zt#ug8g@f=NQ=004dK3&eLw z6NdxVvSsmG!i9X%)3K~wQ^Yb|lFJ~*ae#TJiLVC>1sW6P4Nm706B!^?RJ|)EGDz=H zi-9X0#PsJPqjHMdk#H-|P;XRvS4kSsTq`%;!1NU*#lkEmO!Ta)d!4O@ZZT3`U4*h? z7dRaaY`K&S`L2!|;{O1P6HU_RZ#w`+NHzJ<`&RsSpG4NBmK&m3Sm0-^eWCkGYprdh zPDGdrgy2`@_w7BXTmBI>z~>`O|8Wapt%pw zUlQtiUZV#29ax1pIqO~p@jpYb63cZAAoU=QwacgNM`(n&6wwBR$oPs&Yubss! 
z)X@7}1z5UviOKlCOFX&8e~1p1%tvlkZd7o3S8?Kil4N9H`&TC|s+msL>DX3KnvRUB z)K^ec+@(Zgky)C>j!=wX*1RPmLJ7dCI<%-kIqlZAg?GQX{~lXYOH;nwAl480{Dd3#;Ee+XFyGK z+J}x~`#VSGf=K5TUtD0kfloRb6;N7%=58l zW0HRNYR(d*z+`prSeJU?W;^6;_5{&stDi6l9eVNfuBupdOz`VwRO?lI?C9;ZOM95u z1qA!k?EDE0<-yqwanyyX+Agi;$|A_Y?kmxJ4{03N5vqK}_`nq8rOc8$u(YXSbZD*N zFNS<9(_4%N$QZA6_&edtF?SqK22b#*6-&c@5xGlHW3@OVCo9OV!^1YuqUsP7hX-&t z$3a~9j3gGaN9b8DUm6g+Pj&cr;G0LV63K23?o?x%@3h~CF-dGM=C&Z`k+-E|{3*MF z?@zZ`WBJL>#Yz8TY`z0*uD09JO-BeinRFQHnSvOT!ttKp|~&px>D^mfq}5Ea@w zjs7T@0jz>IZHs)%-`JUt2o)?3@AXq^4-xvyCGk?_C9`kT!(R+DYX z`K!mZ?}lC^u$hIzuo>jn%H`Q-vy-V>IdaX zJHAGp_QarZ^H#pIsjTmhG?IH7rvfaEfC)M2iu*bgBTkyR`JOJTuUFlA89&+cXLDfs zD6JhcLA<+F+YZd04Rrn&@Qiw#z{u6#OMdw)7O#s@3_u7CRq+7~!ao79T(C!9u| zOC#uTIB#te%!h;|P}z(UdJIsSE!DJuguxxN#d<}?lL-tq0&&<@KBJ|tL|>Hp4r{iK zXHOOGvuWVnH_rjS2Js#3gn?F47%%`6UqSpV_@OK*98zelPkQ(YlLwF#tbDz6ATCEHO)@Djb2u18ailu29G4dP(vO&S6`%0 zBQ6x4D}?ZdrkCJhht0@2=DXbr-S&w#mdWj3XP4nsNGB7+$5m9DZ%Z1UT)Ro#+kuK~ zH}_VJo!C5c>sqjB<)YoUf;t}6m9A;>K!K1p0PS9-Yz*i_o+ptCk*xVy(RhEwj-`Kj zz~dzMuKxhSo-EzB%#dXI8u|X)RGJyFxUbK^?Oor4wfD+MWEcY-YtzZInNymCj~6w| zAt-YnLg-rSPZDildFX4Ww7Y3keBg2GUQgj|eHbAx&!DdRMb%=RS8D=AeyNt?)mWz4 z`A%zxl{%j4I#6We=Rch$*%a>Qnu-ghkcGurMg(*C*Se(Sv^+-nU6}(1FOr;7qf@+b z-lUWgImQU%ifm&ITGCDKWhm)%ON{VPQymoa82o8L_5M@GH(^3JBm#IV={v>%!j(pb-Ha2Z7F4gIZq- zJ{`kvZjv-(1P%vE_02cK5nIFvT#dakU#DfcBq`8W-1y99aJkw%b6WUlBeThg5TD@! zymR6|g5$N8MlmS|Kf}_#uknVR6gH0|AS$WrUo-r1)~s~#Gx_-WPZ{lB1IJl@8xKRC zqq08BgmG2zRg_?O^J&o8%uog3^IN_ijs=xKAa}1T*ZwG5N2zR%YykKJJ!`7)M~M>g zcPylq!NISA;e2i$c!e5h{gV-wR;58oXR>JefU&m%LbxBDZ`o^5swpxr%sIj7Tqc(_ z#P-4DUUvR6sWB106wCz zwBHe1+m%K<FQjHD3qWtj#nVcfv=Zp&T8=o9U_K_XDpD}K8UXAdwF{rZK zNDnXGo&z*>XUh^ZOrz?QJ*3WxKMPN(!mRinnH3EG01mFf8R!= zagHkHqo>=fdq&mlE9q&?N*bR8@p6q@bUc0hKP`)qf}jqV>rlKC85L2o2k?%y*Z8h6 z0=bI}4_d&|VQZQ9d1Z0b`&TtoY9`dmI`CV8iu_U)t6+221@N=<3}#tF zFv_n|H_+?;7lV{|W@;p5%?ImNO(JZcKWmG&KE2*;4Ztdd99eUP3fvugLUPILJ zn(wUb_S>9{n)f))nU9c42U_p0_SZbqjP;!K=Di^X1BM$4bYvvY3@!#N_m@k4Lxi6e|o6I@cF9 zN{?NR+NzZ(SsND7svuIGM^RO;SZ4&_dYYaom6vAxel<^3lL$b^6`fY26NVbdeXHyM zfC%-bPkvV`<_uOz)r5FL2p+VtU8IL-1m~t}C{F6g>r}6}>*)a=X7GItVQV^~+p9=O z`h!{;rP{>-UA+Z)cg24ie_?6!Ik-f?&P#DzIP6NpQgVF{QvrsSBC3Q;pNW4IEi{3FUgmJ1^G&*Ep>ng@pM*JCV0fDT1P;=945+OaZp&Uvcd z81bE?x^!ka*Ek(bai1FeWeipmEOUaz2>>@!qu&f<&x@nzwI;e zrprarZkFOzBm>pB75U-uZ{x+5r{ZTrc&wsBoE-PBMg6(Fec~&~?o!5i6-GlUy?h_7 z_@eq9T1hT0BqwqCO?(DB5iH)bjrKoZ$ndmstTbmW+4aYRej+`@O6XVpl?R~pr+EJW z#mhFFer%AuEqwRz-{LgUM;vD0ft)Ze&3atAw0ga`V<{(YHiKRavdXmUE9`v~@f0e@ zojndl;qG-^dve2oTE3V4n!F?<5lrYq6&#V)yl=wZ6@u$fM%jW6S-q?3zkr$y7BIoM zg318{+P?n)B*UcAvH9;5-Dd&v zc-_=jSNlVJQGEi^<}|qq`!j&uv0oJUlg66Ik34L)ae0y{m*yd}ftvi^I>pX)y_9x8 zLE+32ryp%MvG!N&cdE^*-6h%18G$0c_V6CAZM3-8IQa#9W&0NRFxqLB@<|xn6;4X_ z#eIFEr}bNl5R)K#wgxxS+-!Bsd1|^4qu;Yqb%&Xtoy6Og5QwOTE?rgJ83~mCXL3f zp-QWDHC=U9Swk1d`qn(x+nePCW}+8tNX}2=Tutk#v?8MQHz(FZeaw$qfwfXQ@mWUf zfN%~^twA;B(o~%OH4P`Jw453>W!5?2v5ISI+!N-C!;<_j0Mr8O<}T`LV$#@zk+|BT z?k&EelT-xY$S`XXTlP>uV^yWpFt`Bw)^e%J$&^*i=hYW+!w-IGqgBTSQPZt+zuHZo zx_>&6Z%d{?=8~wMg|1|`Iv8~@8zuPe4Qkly5s1bdR~B#B5xGgHuCBo2k@(h>>bNr+ zisxx_;ub|D$_V<_9;4!PF3tLrSU0-M6u~Sx>sT7Lw)iftNCzgarY2jJ3_N6XcYY=Y zLKTl4NUD-}iFy6!>s)2dy*DR$9Pw2Th@T~Z=hrpLv8m{7Dh(sFn_tUhw;j2qwZ2Ik zEz><~h(04?<8y)Sn!91Is~j14&rH^S*-l!T)1-HB{OWx%>s0S%F-TW-M{a7RjtlJ%a{{Zbk zO)*j4y)x&*aB48I2np?h*15f3;rm<5<}4+6^y^;zDMC=T$DNC#E^c~y9k;>_R!fPa zZ08(g`d79<%rx&fzfsb>H{o8MEG|+fQn~Mn_S+kX-o!=@?mL?0sc5~AUoMR5!`?Z) zdtTISt`C)Z}vA1{Nt``$bKPflF7KHKqk7)PgiLmU}YHxvF%=e;}Z7_+dC7Q-j*Hn zT;R-V$-?nCJzG*~9hKOY&&$PRUdbTnGoM3SH=0z^Ig^vkQ=dy_8+Mf=uNCalOHB`w 
zl_^eFY|V|?TWPB+svBk6(vSh~NhPe%pOP$qI^whS1d-3pp1*Vo>9)MLIV(Bd$bIId zw*+Cc%|XMCPc;-a!Q5_q4@$C@p)t*GPSo^5l%vaO42nI+rC5am;I=6~Rlr`o>DVj> z80NN`ayTU>*|*`V)rR4D11Gh5Hif8(t-|DVuP4*yHq(L0IrQSX-7CaCOh9kX9jnc$ zMm*O?)nwR+Q(Cj7@dd(cEMJUsn&$2#Mq{yuwI-vkEW>Nad93NL6gKSSS2VE6%Ix(k zW8&tLEXzI#2caC*8`%|*?8hCAHcN#p7#^LoRp*?uob;+#Xx`@zCl;zll&bN{^WrcL zdYp8vZ64B6fE5Ehb5(5>T0l$v2(Fhxv4Uh*Y>}Gkj1*Pb=2eoPx<=MgsvW8U1A$q3 zhn*B`*&y~9uIfE883Sww9;T`49vMr(*v1WW%Bqa4b5+C5LdkAR9>Gh8Q$Xt{M9cwz@!WU7<3~^)w>D1QC5e{Bd&zZ#GYtfr=iLc>Hd#kB7Tk>)- zUrm0@-T{+Z)Z>o;=s*|0QMJfIF_G51Yv6x`u4eHB5MIK{9zZdQ{RjP;{tMehV-?IX z3>*^MYpVrCPNy?Fa2U#%Wz8gL+59K8O+3Vwf-q2Fz~-@Tz8|KSaIHFr+J5#wO7|}m z=`i?vOAnEOAKu4G@ZS^sOwt!wI&H<+;GfSmo@Hn)PoTo_eOxl-Z1vxREe6YY=7cYm z!em#zcxD-QtdKDB!g3EeHS>qU9~kLwV@U3hh6ArrUqJXn#Fh(csTIod1_`e+mLl37 zjUFS8CEc@rrr$B;yCYZ79&GtdClTeO@lmXP9fB0W`_1T?4o*6n`Iq*E@pXi@GD>!-ApEAjQ;60UjT^^Ceb)@*vAC>4RU5ND zOz}U15;SCnFraqDc@M^UO8OTl6(SE8=Z%yUey6NHn0Wc1&Mo+4MeMIU#${A=61EvlrJ zihRI*MSPRtj}=X&&OGK|2TJu%3jAKQhT1i2%mKj!^)=;YSX#1e#T;0Al;a;W+%+5f zWm3~j3X%6ut#FxF;#yeSIDHNNV9P?bX{x-Y4cA2CrlgBml6|(G2 zS9XuF$}2)D_dB`uG;S32>shzAh{qq+xf9~j?X{Sbj>e^rUuj%rzF)0%;jon`q}lIO zP~^F-I1J&7cgW_ov`eU>0Pn+f0~O3esH~6j@(;aRwAB(8B+5RO?Nx+o(6o;(Ny>U2 zvEe;??DHcABd03eE7N>2;`!PhSPsAw*1mbsd_W_ZFqa=wUC)O6UmB(0l zl5NEi%GT$w&*D=V$voBFBgU$;rdw&wIX!F6VDTJRHsCCT@sYW?6x)4q;gm?HJ^3~9 zwXn4m-Hz-%K4g7y@OQ>B-Mng&20R0b`WwL{UBwv~B(8ER;m?7-GiYz4ib=9b1SmMK zvc3`PQD0lz%o`^d;Mdr2y?$n)BjmV}jH$-y9f}9RmS{R2D$9!9%M`=4t5^ck06nK99p|}c* zw~S)CKNQ$3vbr$`o-gKsr#0yw z7}di71M+(VUN7PMwT$FvC#cPLejC62R)w`KnE{EcvWM!vGfswMoZrqeYrRdRn5U!dVD^CWy8CWj>5&EX((g(8&!T<*nJ zwUJl{1Rqc;z;$KaUYcp_aLHMY#VU+2G^#-~5(gkuZp_?nJe>BZ+Z(BT(mPV{PV9E8 zwTA@L)KwjgGDy{o>NJ8(THVThZb;Wf22wZdDUhc;TdlnDf#M>VHusqIoj-~sv9 zH=;v5)QYj^lfVMK7r_4j3-wnTeXrX<=c^txfS#0i@Xsxh_1`Y9_*ZsI*R^z@hdW^nABK$4*H+D zcz=hbms+({e7bD)?L${$A#<1G12yd*3A7DmB8{%&U>6*bU7v^k4kS|s`wB9SRD|Zc zO)tR*O@{#pC$`=zP7>kjv|_Op6`y1DOv8z(V*dbl6>fx9b4?6rHvl&CRP}vTZAo;L z^y%8IYcj2ZDi{x36X{$}jXX&Wm9ZvLoj_sIz0B`0_7vK^nl{QX%C$u7N8(?JZ>4VC zJ^&uI=YA;o`7{l%PtcM#9<|NO19CGSPK>%&t$*;*> zQ{qOa73gbw!lhd|4c@&!;NQmI5^6e>R)1-BH~?~M^&Ep2n_(1a>IFIJ1N@h*5Qw zwA}qpw)oDvqAXVTYT4i}?vUjEU2fXMWC%5tj_81<9 zj}dssPY`53BD{j*#n*auuyvPc9ZqYD@!!UobmT0zWEC0CYpS+mPYF)*K1&b7QpH#I zbv;*B@r|>)ub#L+-aTtJ)5lh}G7`+7W3GEw%-Wyr6>e@A$$mIF<+|4^;{O2Jiq^^o z-FdeI?+|Pc{h(!vLM?9z&Nv5wQ)xf6Rr$j; z(=Hc)uhzWmvo^(6ye4{?hYR88MM^08{{Z4IjBIsrGkKsX=W%T2yuv+p-rG>~R_Ep2 z+qX61x|i)DN)@@eAu-jPrEz-C?LA{ry-GaV?s+uodnaTT+I&!ec5|A1(?uSCd4MXV z*0?RmOnpsf>N@l?L|P-yLs2M2MA9_tR%Vo%u!N9}pM3PHx`wwGm*tl^Va0QIy7Qu) zj4}mtULyE~4xbq_%mMYye7C}ytViT{ifo!bl;5{OGYtpr1x#yR~}uNVXHrh9-b>F#M6e6XP|h?S%%KSMrnrt z;8({Vwim`&^gR~oQtW4G!tcd;m&JdM`g9t5qFjNHd)MdJ?St`M_0xA0S4>os)Ypx~ zc&}`|l_$CMd?%GomL*iIv^*onzY%Wr?-_JW`Q7-y|`^zc%{QjaP>M$R#@#NmC4*!=|1 zJ~*8|#Uq|`lB4*mGshnjm&CTL%OG+=1LovcirIL!Iju0#DL5SGnl1cfhCrz@^}*b0 zX=5=_v}LjRbNfbkM(bUM?n0oe0zd<$d>iqP!#6q>uPJpS=HmjtOngym zXW}b4Hxk8=f(3kq`%Qc#6Uj8tT9R^k#dT%M5X31rp>-#0JEwh5n0zy>+}hrNr#NBG zdRMyqHPxilphlHfQpax$xXB$C74-F$9#rCw9)>Ek zWa8QDUJ-^U2KS4Rj@9me3H)1Srm{SShS6l z+&(to1vsxg@vp{>V(`31>57k5=QZq9&mCGEi23+rRO45gTOGIU0r8SA5ZuU?(-&7I zbCaK?XMWCK4W-w-M-}bOtCK4OkUM+Vns^q=TKI+Hcw@~fyw-9X5u2uasL47s&_VQN6ehpojP)x zJ7Zf-%twfZ#yWe`Ol^)wZ%Wac&A>QM#-TRsgaW@>Usi0dZ11_7Ci1vKN2#jU5udyd zHL}*nZsFJ7tIK5WP8am7YEU+XZj8c~Cf-<{4@!)ZI1GO})|*LUK?n4zv+0v7Wd=I- ztlk;3Raz*_mAN@JPD0rsN&M?hTNzN`sVCf3JBv3%ou;vLHPo+Yq*h;);i`3G#~gO3 z9SRN35BSqsVpAY=r5ruvCpJFvz9f!RSL1Jf0fi;>Wpg4CyW*%^%I z*EN}X?g=fPosDW=FO?Z3J?k3s_B0I4y|6`d&kmx^p*13<#s0zu;s!X)CBC}QFlCcG z`qplzsKUAAeifA_r4o;lL0ru!l$%WS?E}QSfZY8DCb~;|^}2)+K6BE%b4k}Fgog7D 
z0qkqC(0ol4im1eWNv`}gTkdCwqj$`8^IJ9(5{47vS}mp2u;aab z%j0cs%GL(?)L?VXe1-c?c(X~kl30X_N`uaM73E{;MN0iopvvc3X(;G?i=}8X+ul4f zi~xDat!v){+uz%H@u&ojw3BMKqf~cRQVS9FtFN!Y_I8aKP6tfz4QEPq7RSHJxsGxT z5NP)}3)70}bj?m!?Sxnv>}#FWwL>Jp{o*=`n@_sjfHM^!_O6N4eNKugL1;I|SLm^^ zeDkQSI?-f%EN8K4azb{rE1zv$2B_d6&t;b7toQ; z3?!0z(n;Yoa7NGx^{#?=g{~WLJanif)NPc8WdMPIYqk#2O&)$G)a7H9)qF4^iwFlp zoL4ET=|(pr%FjJTdTrN{IxyMJE1TBzRVri&f_qg$6LPu3il&?tjsi#bujEO?~GDxr6ISY<^Rl9voU;^NQRc-AGV6 ziJJCenIxZjf=xzNZhKV5KbIXU7P8#=toiS`(^9IbX4w$Dc5ISynwQUK=bw62vf0#% za_8=aAC*np(kRYSN;f95n4^FP9rIllk*Ww|!kpmr&2uY|N62`oR_*QjvHDiFpD(;~ zSH=4^ba$4XB8~n~_#>}9D(!};6SmfnHcuF@JcnFmY$*Wuu7|^yk37pGHcFlbYARBb zh0)W6%%N4J&q>hyDKCg62^=8gdRKYj@7d?=Fs#PZJLi?hO3(N?F8Sb%(O42U2Ct|* zA8(~Yr$paoPoJqBtIU*KWp;fn929X>)SOQv@VCLMUk>U}Tz!?5!;W#ludx0CYBpMC zlNz%j!usR2crT5-O{D0OZUi>d_d%*U7wsFT=t<0LMsbA?+Yn={EF>^;PN z6Z=4XL({E%FEp{-Dzc#j=D#xjI)2a^#ix$tZ9eRUjzV~EdiH;h-?a2v-itkqH_XJL z!l|!{JZ-7Vs%tY|yo^kYN!?u-s-Ie`ZinU>o?lnna`Zl{_#^vhNo4Usdwikw=m*=c~KlbR;wM|S~$9{(&bcf_u<^=x$8+F;@C@y3N7$h9k zuLS%})@}7wx_h=fms7+( zM?4(h*Ry!1!VyZFmH<5fuQc&5g)VJ3NQ?J*z!8U~!Ug|ionVZ4SmTyyKFJ&a1 z*sFH;+am>~Ir`T=+M6igV0Reiq+L@9lo>vUyq@WsxlC9-?8adl-63nHUZhGBVZe9sb9 zgsA9!f8eY1me>$+k;QCkA2c~EIrXkjz|y~xpq@I8F_cgrB?WRACK0H zSe*_p#m^iv7TuhldFfnT=BI9x8CF~j^v!PkMAXf)$aio$5nfe!;-TmHZ713 zZ7Mj9R((EQLRBGi)U-`VO9;fEW1QDlrD{eM+_DqLTJo<7YwI9XJu$(q+QU^sff4!) z*QJ(L=8BciGcuvAk4W(Mh|W1-}n%i1taeMKg% zsq*u0oY0N1a1B4U3FfP$pd5k+Jt_Wp9Fd>KwP`ROLk@9G62I{paF8&_{HcsSV~SU0 zC(MTi`EmTI9mt8!?EI#YdT~fsF}Xd|)|A`2Gm}3WehYj)y4LTaXSHSkjNn(;-v_=9 zTidKmY=Rwt+<2dMR?4Prt zOWi&ytCd;m%CqVpW>b~}jw_Avrnp`=Rh;8IiodVwrUJnj^Uo%@j}q!_FlJ@|de_0{ zyh7s#Np5?YZdo2>IU|zs3O)V9M#@+pTBU8_s2)X>jGlt8+*|`7+QZzFS{faxyqE~r z9QUt@%kXO-i|3!b&#kG5cxpC|!sxyeR*_6`@r)dItzB!wF+pa-$>^CJS4rXh5UMUz z4&>CHGYf5|NLC=si;-Wi;Iin^!96-3Jx4N|#%S5|N5#!ZXz)9$1jns>?eQPtJW$(6 zWw_uT0k2y9sr*qTylo6cL%HAs&3upXV%NpjT65jlM$!U=gaGkhj^~&xZe^Itoi(C* zpVAyGsM5f~wRdZrx5VFzk!8?a07>VX@oy7&-}^#yvR%Xzj8-?qJKqyUe)jiPbA~6n zn&^H9d=}PxX{bectEpJagXJ(kFt4rQ?hTY-6&UJ$W;2TLcnVvlihdUG4~qP2dIp_g zBO|sv&AeCEzq0r2t38a-++W1zLaNTid-@9Qeh2=}x^|fztODB@(ekaA=AXmmlRH(~FC&Leg5a~K?zmsm@D9PQ_n(Orq z6j`F5v+^)FHPdMJ_O{ky80Hzr9MvBXUcj4T^H+90^ATTDQZS;qY<@o+Mk5nnRdO#0 z>2W;Sh?N-Xdgi+`X$`H8WNp|7lhoHG;XOvq7CvOkc)=jl-X!s~7LYneFdtBv>UUgEe-3lAC_6`lm-y*40x$!6CrN*0b(nF{m zf_C%Lynn_Yv>vmmD?)WQE1Wjg#%s^~b@2x0Rha(wAo`l(FZFhn^Pj{T@ba!5l#-QJ z+domSi^1ViNh_V^rST&2=2IlM4BYXSrH|q@(TqecS-8eC(!A*|nWP2Z13hzE8Zm2$ z!j>dwy?snF3S7G$CNGHNMDMIM%gshXBi%ZWt!)eYt9Zlt>@d6<$MBYqaV$tof5r0V zx(!;&OM63v&IfO$cVY6#Qhw?_Pcq^;){f^1<4+%Img^)fbU-};HRjs4jP)C-SsLNM z2aao?_=OzO%HaVcJmR>)rrfmTmE+f~d(^O$sKH$P=NXt)m0P!=r=n|@rr=DMt_t!p z#=X1Xuf$DeJEXTa@6I~4c`t;#6?dvij>&~50FlM4I3!o*KY+Y@bKsynxrqum z$>O`qPuts1@QZm5JBI@U71fvHo)ubJ1~)5~JK#T3^dB2rt+)wxg!+0`#GW&^p5F_2 zpyTgmzc#)f{{X>3q}S(;9WEIrX&5VH^fmO?!k>u#B=OQmEyNoUz&IJ=z1mr&ELXb( zeBLjI=Z5#H9@Ve-vv>;(**>dPwCOJH(3hEy<4n=~Gd;{gNf$XC>uS$PTNL?_3<};# z+p{Wir3muG<@NiiEMsRf5y2U+oc{o5ABuN64XnX#Hjo={HShlb6m__5G}%iy8-45H zFWRfeNpQ2mvZfbsCp_2XykU)0>9|Ge&(SzH0I_(5DJ>5(@i*;3;u{@Nnk_-_8}63k zuIRtD=CLKZq1Bml-Cn+J@$R`S*G(Z~v>v&xV$a1g+qqIYA6}L6nPw9Zprr{rv-Rw5 z0|2Od(OMr?c*FLRz16MFxzvemyp89Y`S;?tjHcG6npe4I-Jg)2E1$jbxmZ=1v)k!h zr-?jU9K@GmPBEPHudTx1YSfoCL-TywILk8YM(XI#`(QemE;tSOSE~Fr@#5&>AZOe= zgI*stw5=}a$i{y4X=z?1xR>+yeKD*Qa{0=|@uYF0vKVhbD$XVSD!iX)l= zT3|yC7P7y!(rJnrag$eBqA52dWNR8N?bJS6o_gd8;=D)TJL}7jE-21jun%hWyBH(7 z`3NVDYUCalv=YY67C>-LYAH$kM3I$x@uL>jr(ydYe$GPSt*#`6nac)$lY(pXSK!CO zDD+J|3AQRif(F{~U)Zxu)8mc^B!vecFE#gHgfy!dtPredNKwc&`i@zLrB4c_ZCUu{ zZ<%7TIH@PK%P1H^JIDb3Bvz@nR*VskrfMlI8T$(J-7aQyMn*i=SYi!A`gn2FiroIs=cxQ? 
zP>+3}n9s2`>P3>@_{~ag63<#wZonp?~r&8AYL2-PaU zanB-0*^{-~{>R5@Hd_WENsi+7T*~thBg?=H6Wf;P_im&8ao2(|Kv~#jGfYl+j8<_E z9I`)9dAU%~=)z1@!@^l!If$?ZF!HqfQn;1Gp9tEuTW-cExEuutT-^ z$my@clchZ3_WmZI7kPre5GztKoa$f^$}DG?d~p<2t*>jo`&2~n;p08WNQz;|%Owru zVRyBP{?^&srsk&1x-&75A-rR|$D>Z+JqGn-Ux}~J=$l=yTZzz@C2wVH66Oh!EPxXl z0(kGR)q9M!Ndl3ZYTD(wepo~d#1?-trD_~Ra&E#YJs-ff@BE9laeNJJmSnI$JY^Jn zcRy6iDadA3X`qwCQM#Y!1aHZxFTR56>2Jx`!Fd`C*chr3JZMi)7Xqu>TPZ=eNUu1cf0vk4-vp!vd-)@liIin+;rI9il zi*=o~5H{Z`N+t2x)1j$+-t4C@jY6uS4v(YP6=rYLt$PERMLKh+JH6@9b}kKX`#E%i z@K>4J9e-*)Za*xHJbpmDNm<7AIdl8G&tH#F^rh`hzMI>Prsk$uxD*5DkKY6I`3~KA z|I7r6*=FdgLbLRm+^3|CSrE;G0{TyP$t%E}ivYH;81?xdi34xWB+PTEwY+)nHnLXg zY4#U8sC+2q6xL2I?DpKiSSn6ZuHFpPuSU?LBcAvpU9;QEU<2$$fqwTo5FH{+7#c(C zfZvSzw_8Q3L&}dA+3(E?+?PfyZ-;7y;?IM6tMT& z4{_D=_Cu$xc1`q0n5Z91VyH!4fhCpSTO)rS*(X4r^7+d-hx1oE|Bs@x{%iVeqc{qR zk^<5xpeUoeQIQr95Jn>)IYJu70Hr$xq=s|}lkScY(miscdm~4FpFRHoukpj@zOVaS z=e&>RZ+@fD_@54b@Eo3MGB0Bm{z4zOFVrz6&|aV~I=UCigVjcbV!T6-{|T2qK#dKu zUke-?-M>OVMOP{2xZpQv8&r=mw+3mX>q*M!Q)5IO#+36#QfZ!B45f^f-80-zuRfD; zFEQSoynQ|G!l0@L*{2(P2fdwJ>8M%lj^5AaCbiO@fIj{Dsb5l;L^jqf5Ut_zJYYzop@(}ZynKE2BW2M*%6-X&zEp{==Kx}ol%j63G4I4I(F{iv2513*R zZpHxn#^|&lCu(Dl$_73>dm~8{IZm`npP@jOWziJ!t&V$_`m0~VVH)lvW9`6EQQtS) z{&1>1Xq6^TAv(SpjsxTH?V)>=4(;4kVwG`2kKb}cnd3a|VG`8Nzi|xP^4VJQhdUDp zMc&_zDLoG*L{B^&KKD*<`Q@mlS-g0kVLNPe**#(*%JGYg*xHy$FPpU!aW}HzrO#k; zxp9Upiz?_vzGTiJXXmf)@jkh^niu5Y%dh1GK9*zI)vxeF z5;I0;+I1RkH|VV0xtbNRzA2B2+M1wB-+KP~B^{vV-nt; z2Ne7hr^6Hcu7P{z2O~Gsw~}|zvnOZAPj@C3Pv|eWc>Gmi_7f2Pd^>&AutrTJ#pCDP zXB8k*F`WWmb3V`x`WFM;Ya?RaO>2I8IeYq{!D0fwMk1-Wl*mAgy5dx&*(7RDgmdk& zBbDAgpU>!A_jpBa;{s=~1D6{pZX$rp(C6BE=6ox8OVT_m1}Mf}r_5)-T#F<#uBpqyJEt~YckV#K!aB&deHS2o-&3~}-d1KRO+bXv!yg__aVoXaNODABLH9(o zH>$+R;^Ak9ZyWXHz;8DACtH``v`jSALw+@jgVjUnaomCntdH*(5jaSi`Z_f{@@(0{ zNFrO0_r%*1=5!Zz&!Sr7LwVWD0c`M7U&xErt`@u5BF#ZMFELi@mrq4kdTn5CV)G30FHck$kt)uK1ntC7H^)rpkXK8YIw)i>&MEB`&~|O-_>@~lbNf6>`zJ?@y~(9S4q*h6Zqfjp9IMQ0Yz_a6e)3((hbBs4=SHOBwmWV=;l9+k`$=!;b2;+B2w`KRD?i4A z?|^EL#g-DfGZ;Tw{%{rJ-9`J(8pC;6|7&r>Y``2}Hgc=g)*KVj9wE}3B6~B(90=nc z8meY#^#j_l@|DK?&W&iTC@S9N=NWK25RR1+j6L^KUPOo=YX643OE5Rt<9Cscp)9(C!firWpqnGe9R} z(ovd4L>IU^RBF^Jb)O3ki**0uv&4fDe+!)OEW!ML_W${FYkz0*ao_kiTVv(zJm54OhiUlD=@tj<0vr z1wed2C|Jxb70#Wc5i)`oJUlDCj*2TQ4lRaNLZa0pO6#A!u z6G&ZRSBVeqNkHagt+vR%)V=-fYMe&fE`E3l)~DMvsuwZ{gW*0&{E04WtVuL`*y9-4 z!n45AYaCmS;N-65mKImxB*!QntM~%*sQHigEahL4kvLOjSF@ZO&wyU&qq|nzQm{f9 z8Ss0;X#XjrM(!R;z-$)jYZln-8YKG^)+Ha>K_}yco13{&f(DMdbGf&t z{W$l}kUL@g@X^e%b@N_;FvRsP{hlY_pFADE90kO7yr^pW&lZ3rF68BqEiunm^bjZo z^Ek1zk3GfaOO8j}%pFg_qWs^>gEriyNQJef?ufEzFg06wh~STK95}7WWO9=`xna0% z!vJre?Oz0=`J=bc&?-lMkvO9%v+7SM?Rlx3=75wPt8a~=x*0*O=9bk^kuKD?rN$1o z0`Csp4gLk=Ua-?_8z^t}Eq=MhLPvDQbqjpGI~7V3NBk@KYXXfBD-zWzd=Y&Yd@pt1 z&yeARIR1EJlT=B(N7IX9_Y*Jl`46wp1rgr|-im5|lqsWKeVE>`%|X3B6l_KhY+yqyEdNswN^W!ti!vuWxK&4OHL;7O zn&=zeqS==LAH85?DHjh;HXcmk+TMJIP7T5jE`^z(QGn=XN|6>aa*2m+F)z8^;0Uy+ z0Dh0kOP}P>HKHWx?|YBk$1LLL8UUkZSH|Z-ZZ1|h7$dilJtyvj)3@p;E1_*2&It2 z(-2L8N6$(nYKW$(eP8KDyCwwB)!EG}@PvNie;K~6ZM+NPa8VrfZ`P0dnPb@&yG0u6 zm;eGNm-6Zg1eGfl^LRA;qOK|luxGyD zj#3VB@+hpDwhX0DmQH^+Mjd~nkHDInOdY{QCY~i*e4(26gvk|ZYOEBOGRc%F9bBmr zA8}eHZ8j(dPlG>KJS9;OEMq7{l(11gVM+ulYPxH%IC+>)$aa5K2Hn(gOtNII4tm0WTQj~C#Z=q%3wJsH3w|8@F#*oBpkd@_kP@!Pw8 zfSD(!9db}xqC&Gl`RCJz^6vu~jv+5D-j-#$2d}i|Yqe$YR+n2b@0W~wUga{f==5}m zuCJqS$D3`Ztn0WX&EVs!1v)c-3upA<6pF#t;d5 zm@u}_)g5JLGNdWs#<|U^w$FS2a>3zZQ~Y;l4_IG0!m0Mqa&a{7i;@%!JSY6mjGug1 zt#D%X`!FucIi@3KJ7gx0F7PX`Y-GbRTY)GiW~+qIfzCdE4vg{py3ntQ0$LEVIwX{* zD%AR0F#&G>%KExjbu7v8`g-9j0D09QlW$y?L}>tDsX9_8Nh)|?x>BBZweB_Qhuaoa 
z02O?Osc>Cq$LMoPHTYK31@(CHc&E^VSIfo4;M5gudU<}9?tJL%yEo3RQBfvkR4r}b zzmSP(EI7`jnL*X(0Rfr2u3{-cGTYy47C;WwXhkCKJx9+OV32i2pf5@lHSJ@fp{O$_ z)h7NaO9jTrY6+;Ba@lIGVmw*$7KQsbPY(>KodD|pR_Qf72y97s3uCC0181=Gfp5?L zeEScNV*QnH4U^}3ov&BHcgY7@M(6Pn&+ZQs){mh187KSumYkO7hikm=X>GT!mN@98 z>8n0knigN_Wzq`MEJ>V|a77SyQroJ>27j4zEGw-b&nYG({Aa)LVF;faho{LMue+>D zc;n{g57vLw^p!1n`i%PZ65}_n{iyYFM!4=&k0n)7(*X+!#%hTA`Kj3t`*LMhvJ2^I z+~pmNmAf~{)-w$w*O8hl^$GlBk2HHHvEdE-37>E_Vm15@f8$r4TNQZI&x6Nt)6=F} z*sLIwB$X8vZ$Xi>e8&6+z1WaZ0BxT+J^wq{w{+dg_ol8~ws>u;mPTdz?l!*A9|vVo zqX`o0fATJ;0+NWH(-8x@R&m)`WC7c;#(fL*dWMhRd&rja$cic}1T$*rm3n|7;t4em zp#R}zdz+CO6i4_v$l+kd0Bck-XGECE8X7Pgz`RQDH2Z6Rly{sXi+4u=QATI!G$j(%ndB**BqzGzPXj;%Tru+?Iqv#!@PmW65(Q?Hx`zrmYUMM90#t7$s zlewqAA2;%75W9xZ82!?*XbY2(3q-(;wqotSa5oN`$tPZqk@$q-@|!%4Fz4cRQ(WiU z4{N$HE<_t~Pq9|K@>QA_GP(A=RNiiN0f_QG7+zOK|A!YV5`M#|rkMIW`lfVFHXc^z zne!mf=0Cj14r;Syk2WH`ddmli2|wb^Qg*tN`v;}KIR#b{@q|3I^(h|`!`HOJ5aDsH zZzHW*gT9qEgi@cg;)Dq2>y^bpF~>ryVHIF0tT^y%;5=oIQqa_mvIgbQ!>eSWFGSbj zy@3xzSf65T-dPv3*Sg*vwt)`N5Q$VOs4WtgI*Y)hiqBh|%L}*SMs0rbg)48BKHZdL z>MlF|r^Wm1xSc@D#utgl?L*kEH9AlIq%Oyg->DL$fZwq#otJVCYZ_bb|0NLk_Xs8l z&jWigWX6L}UvnH_LDI2yT>3^(W}IlRGUHggQDfK{Clgd~c@9g*-NpNNW?spzR#|LD zmw~Jh4(tf#xKGPsEgpYGwW+!GV@dv;#IEUB4-CsWcK8x`2l|MSkE-t6$>jl)DjY_MAM$`D2&rTtH6B`%9SfT~e#Kf2BigsZZv|gkBX2`N01d zk>KhF%Gb;Z_urIaq02{Eu@iue5*(R*@qR##5$norU=Ma==NKaWZdn#j{)eZnLSu}T zy6+5=;9*yt)}uv_b?M3YAmvvQY8C9z5~oEyX?HCbjbXnRtBbTYo#%2<>mWZ^EMO)v z+e=Kdg;dV=BQE~K`;qflMjRRV^a^1bb=7%9w$3@%qD}$eL{!TGsED2?S9p8b--@Ci zK|>2VQ)f4>93)`Cs_L+HO=HP4J3zl5!xn^(&$04UhD`~+MOO;3M-9i)m?*RNhWffA z8e+W%w)sP@9M1Ojd+o*~Fst4E01ZsWKKlp7^rn}OANeTXQcEM#LM!AR7~CrP>7|@y z4rCxJlBM&lYLnwX&doFfFSvT89|Z${NpO2LI=b(h0Wj;PShrMgS~pIsgkS9Qs=ccB zfqpCtUS*j<*;MuP$IE3L?t%IFVOiQf#x00{SJG>NIBayVyN~|LrONDKs>W|dk1vcE z>X|*(TJO%);L~AL$8(>HWd;$R)C5)`r3*Z7*!#N-c^vh*MkGTk0-}x_?rluHpUICN z`GuN}D+2AFkpCW-^TTp)D$+mXw)m@|Lww{+jGp!1)yZeZ7hI7#rBK-GAa% zx)KZ52RTX6s64nDFneP*&(}Ek?O~~7lf&5^K8hw9S@FQhB8qbe^)^C0c|YTH55??L zR=o!ib4@^@B|D}C&K4oF%3Pj#9U;zZQ@If|nqPUqKx>@d&J<{~D_kSCT;7P;0mRIB zs-7>aX=o~>jtvM;>tTqNb`G6YNZA(WW?Q|(^)M9U_bme@W1&8xYBe=6{HIl0?Qbvz zD=%k?r}M|ga6!k5a&S&OPh2of((F~LkXU`dP6@k3#GvQ79cC#z+nIKpA;%QtnPD>`Z(TW+qQV3G;Q zl5{sAi!S2mYc?s7U#Xl@Y&R#~4ZB#`wmll$K!frGF3*+uSc%X7eYBedI_}7c`($dw z>&3sHgo?!xTy4f!p(`n6-;?I}eRDpXo+YcRSaL|yF_F2lXXTu#;2TKP7kawRIV)P| zyBEJ($uNXW-Td2gx0O5Tt25ownqubW-Qvrp4<)pfX?u> zIeIuLsS6=vzQzG)K>fc2_3;U#BKkgGCRRAgyfIF5ufuw~FkLqwxBzV5oj-5|8?!#= z`-6(*tKCveAj!h@E)?cO@|_jB+rQ9dBacrVzL7vfG?3Up`D@;drKMZ?` zQUBa9YpjZf763I9o^pAw1?RLFW+>GG=TboQB7w*ZNs~JQreybJPVkSGTzxopK4%eU>0Sji*#$LFaHJD)6vV8=ia3U zTAM#NzSTF6x|haq413xOYfwbJ9}#BSNCLq-thIszB-^q3d9X-a-@!a$anuHC+@kbB zg2_#}!hcXU6bGMO^9;6Qj8tH?Fk7v0Q`4367Jv_`c4tof!qxbYDxH3Wiex1Wx%HmN zmY-gtG^Xp#xLRw|?i~KP;lyw&RC|l(tDopcmc1Kw5El%7H+Z%Qb(8r^6oJ_NDbHoC!SXWK=P6VHH03 z$odZ6>qYaoAKrz1!k7cGE?q`?@gcNYVi+WwPSi8o-+y-f&L#8Wm1$!)3UbG%uust) zz-bX;S@T{>VPUbeG3x_aLBpK-j;_xb?{4EOj~n^37EkeK&&>>A4i{q&`$&s}D^llQ zvkU&%)xfYO`V|b?ODdCv8VC@4i2*jf99CA?V78ahO+>?c0vB9*8Th*z<@k;EQ7E$p z@HaC{lM@g*oGSW|dOhjO=VH7ZhZ}q(EaIMZhn4KOm9%7Ok)FLV_34^7-cQcC?heXx zoE_nq!of%*KP(K;L?cAbDBR~$x2rlLu?O~ewbyO3$m0I8By3CNM?cev-!#HGEF&gU z=y&o=tYV2qyDk7)lwaT0L=NPj28oKVM)(ruWqp@ z40w1mH;&_b2H#S1nq!P1igFJ9GSk6$05I^^tqeA5g>`)9AP?Z0b@d(mnHIO(GcDaa zHCr;%EQ7Q8hVXiSE>#Pa?iZIAL&>p?>{>E&lg%FZ%@p5Kg(EAgO5rZWKKHv{Jcrw# zrmarPR9q#2&g9+d&WkU zF}mMZUA5TFbylf@;1#P$F1 z4C7jXQP| z^|8pV$ga&a?A!Vg^*slomA_w(qHXG>hFm8-r%g9|OseFr={9^U8eA7b`mi;GUbvA3 z0O!Ism74AeI*7Sfk2oXOAt!Whw-R+2t%&o$s-j4lE&IrUcn;2uxeB7{ zLUc{%a8-J}Q~^{djeF-Nt1=X&;WhRt%D5%HahuXM2Ug)Pdc$MD8fCOl)f~n1;d|0@ 
zpFKnlr}+lnDq!>dNfCBobi)tTUy=G3-tggG7<}=e{D-j0TnhoV)~NYgfXb)xsFW+M z!hCY8WAWUru^X;spZ?beGOWa+!wHSVoetOe7pk~FXHgmD#5?aNbLqKmnhw&#%#>{wMDn-jA+^nGG*xqo9wlF32%_ zd-Sf9W;%)Xhu_?AIy|6?qjh6qe!UsFLR7LwRnpb0(s5KN2Da|qb-VyVR$TVMVxjzX zlOm>aLX%4-s;4TQt_)m2_j#hTBBX}HK|GPBUg6TrlnnPqxD7n$rkW!u@E>yps}me?Sr8VwF?b$F)rCZn>KuKpN$&#MtXZ~l&)-bf zQk?yAgz6TK@KWrv$YXO!SHrizZ~sgD7B11}v)yGLv`W%yg#mB(o72g$nC#Icy;irc zw7eT1?2q0zl;xOevLG1H3j)OB8=598DG)KrIt;Y_wtR~MFd2K&O^;p-X zG(Sq*<_MwN$T%sV>qcSts8XwdRp&l3kGTkc=oCMm&W6n|V{#7e(8en0f?27+(AvyJ zdYfesfsvasrz@uad_QWdE@we`O{>?1bOEQMhxgoxf6CNo6j)g3|J@it!sohc%!&sw z$8t#KHv;A8_MLC5rgm-$wmY*|rD80N6zwPoXbN_p~?o?=Cw-!Ffq7#grTF9AQA z(ra%gRT1f->$0fT7i*gO)BD2<+HtPh2Mc+N@qFEM6hKQbg}LCZGloCfgFmtZF`FvZ z%segl^ubBT48^Tgy}P;Mt=9CJ`iRZRx)MgQEqkPu9d)4iO(?%qZfUM3u%7l%6XNH` z7wS@~C=<;VCtZ?9`dwmAYO7r#vY7Uro3z^f?qwEhc)2k>|8m3Qwqs5|<@idGRfIQE zufC0}TrLtN+8Fo9&$laFH-YKlHda{I?0$6h$r*nT_!bvryS1xd!Fd-x1{uq0y>ZNR zr$cB-!bO!Y%q!{d2{}4^jpI>D&{_cU9LV>pMzdCoPLb!-CV-)wD#MX?&pl;qaE`m>N+AW zumF_H7*ClPtsIWouS)S}xKOe;sHYe4q@0zV7 z`10TOMuhFL;2AZ>@G)ik!z zmhDr$p}nMfPtBO~(+c+lFgBaM_dYdPcU_wejDQ#h`eX1vwPqaB)fe&~TRP^G!f|VU z8=npx3)is5M^VXL`(_o8raPy>iqU^%Z6quYDS{{a$e)2;DzFa-&f}Z(t(3C2t9vr( z>}7oC);!AAurxEL>`k64^q0fO%O`-W_(LU@71N5RRo?%HH=9XTa&lV^UpF{eeb#y; zbK>5|={@n?NAh~E>~GW}E4hNKTdB<`l=MSyx>d41a(WQomY!MIq^`Z-$<L-0z=mwgSdvZV1L4#Y(Y`26XFCz3UuY1|;_Dh|eaC>AC<%4;a|91 zY@shQG3A}!0W=dFs)w7b9nMcIW;;j&9brF=j=Nat+p_*vRW*5md>^9Q7hrXAbb(#E z2e({I8G-RmN3Yc3EFk-#6;%a6nEzpM0c8P7_!^lXTi z6^TYRbmxjXVciED@^=boIyO3A_oiZVnGVq=DKb2Knf>oqztqi)P4XtIAbXx)Z1|SM zSet*vYfYp4&2mf5%^^ACf`!E0@mamzNvf^2EQS`r$ZhmJq1`{fqcKHr9O-oTr+8|LfX;0teI0woy8prIvdE;mvo5&=@|EiYv zSiJiB4!|y&K+z!|YFZVh^sTFz#rF^QoT|ZLx5lfcIM|0e!>JJug5=40<`^ioSbI1=@KdC8XNVI$X!c>$WV`hS{;$XVJ`wlip& zKRI%=I1H~kLNYe4`Dz*5fBvz?h>7dWTC$m_SMjlzVfr>7q|G^JU|0C-x zX8GRTDAM!Ax~I;J+kQ(7s|tU>gd`WNjVQy2aZ1~@HS&um5fRJN zWBtQpWJs3hsDx-5!1pa*z%NlIcR@+~F|(i< z2AzrGJnc+HLB2o6gQXqX`b1c+rhs8xfzNyq1^WLMIDdq~!cbY&DIcYAokzv!I7o^9 zgnm3oZlt~gb;-Ecpf>S`q=O=Ds;SybZ{^PQ13)RUnytTc(8 zEAAU0qopo`ewg6uU1i@R38=q;=94{$Ab9v2zQAIf?1(qQam!e+hI2g^#xq_S^|1`R z9y9kF>vpD&$GvT5ruc0-pROglmYYjU7m7JiSsd68o^3@OiFrA$K|xck&Oc_-Gg;qs z!lYV5f)3*dbpR^$d5|J^Z=HVXUh}qo!L#xV$crq}oI9{bYVStSW95gHyY)KGOzSLl zbBe`O@XV<1o1yQIh^Oi_NEBFw?mb`>F>YE$5d%gVRmpR+QGd7M3Xn)HZG)w8Z@56` zM#k9~kvczswz$ndfc!@;gEw@$%SvJjF})cyenY)!tWnr_E-J!MS7kyB z4X7K1XwDkf;x>~Yh7MSO0gy8~ek7I}L%RY$pY=^e|q zk~ZW1To*~t*n2y(%+%#M2#Bn<%HW$(uXV&CfDsis#(sSqc;D%4TH2)hwS~$#o30s7 z-Ttk_N@ZO(`G)C51Z^hO$O8HV<~54bUs&I?*KIMx6{P~>8QEkoBSzD=o*ass&TLjp z4C{4m+Lh`WjL68{*(PkXtR5G`PZ1kKFZ0)ck3HC3v=$N$Kctbx@0Vf{SE&K?8u}^` z%COAr2I9a1HvD1GN8>z)wXu9pW?$>QRiwuQ(*HJTz04c)QoX4#OItJcxA7a`b>>@Q zzdKwOG`H_u=;Qv`glMs4rO@pwpU3Nw4~?y{WU1oNOV#j>^6ubIYq<~o&wS@ZKx<0`a)3AMfDbA-&S zd?*9yKNq<|#~5#lB-#C(X}L!b@ww%-nOf|Y>Px)*2k{ji9h9Cofn-Lbyp_GZ?cc}< zMLSnmY4;fd5gz4fkqh$PbX=vEuve+vG39QrPgy+`c;R|QMP+U*p}7!t?7Ou$?0&Lx zGYYSse!%~MfMIL;%;a8$cYS4ZT0e@*I;|W z9`0LArMJMk=!Jx2sg9~b3aey>^)$LMbGqV?bJVJ5mHgn=-|umFU$dT{ zvP5w~Xr%*H!LqHIfT(q>%Eeo=X;tWDVVkHU#gR|you?fM2jO&;Qa9_#^Y-52w8Gq3 zIlP8wT(6fW#%*6`_GNp<+kqjkS9@3{TtoR^tOJfPRES=EY-uYg5?=uk|GjfP8q0is zezfsnC?tTq&{K|S_K^3?-l?`FCxgw~N%RDVHp1mac%^X4&^%frGRR31^1x_?;x4~- zCH{S_xLah+{|}GwKRm}}@?JTnv`^@x z-zTzVta7~biy6gM7dde(T+xQp1$KG*lBr^y+xzAZAUxPYu8^X@YRR~#uG>z|b?4#T zS_np*e6|zmP3aWzE%wmlD($(S(equPRw`pQENC+YSPCSnOGU66U^%)kp!kz8OY~*b zd(QFacEjj*djWMA^1~}MTpjJ-vX`N*ETa30P|~S)BL{poe2f4oRh<@qPHXw~odFx< zi@F#2@qK*Frl|#!lpFXhzejux{+zo$-$i~q+dOK!X8f8?CePD5)TYV_y=Ear0Z4hI 
z)8e;AjX4V4ar{*bUpLaD7Zf$0y2!rWr49C@wd+H-WSJVe1F+HH`g3d^S7Nlsx!-TFqRy(_;#ZM3&8Gw1nxFmz>1xBZQ(2r|;J%;?hi~hG4lw&Wd{L z?vj6DIaEfzLaJ`+a&E<Ma}eF^9;pK&NEm+Pva0U@wP2$O=gaeanUs+tkC}D z>cZJ=BE?nItW6CH5Z!Smno`e2cc}O0m00TO2q2LsTL2`RVe7wke$!oFf=Ufv`PM;h zak6ux7QMn1xiC5_AzOBsA*I5}l}nvN5+pWAFIs3NVEO#a%FNT}vma~zcxB!kw_N#M z1}d$Og|)#uhSubf_PAx6E)eTB$K5Rvwr!>)0pq=kl?(_GCmC2k|sn-4uiw{Sk-V{VK*!pGuJ6iJ}OHS|?cN}>d#zmw)9dJ^^7ZaQH z8M-rZl3PvOC%f*$T?k$94rBy9wl*@?*Ad5SRboE)IE&xgRD)_QW%L_qK=v&w)&)I< z+n9y0*P>_BNK~pGsXie){4t%p(!tK(&h69C#Uh%lgRU>;5mIpPuxuDwIupGhKRCQ` za%cyjhZJC`REnV%^qsf>N;0{GPEV=Cj*kx_V5K(y@fhsZn)h$;Na(u+k~jE;zU?qn zy76e-##b^4>dgP}h})UL1ENh# zf1QKKP}z12TIG(~aFUTM9&vk_Av$loh35v+>d&~#?x!L6)R~oa0%BB~K_YZim6`A} zFl8d!zO0IBLdB${71BM;4h~N)Fo$}QOzx(Ap$G-bXJey+!6wuV5fyBREzMwpU?W{D zOf!rEV1f_F7u7ELT`TFJoozplQ{stziqy@?Up6(GlyXQO?&?95Q9CcDY|@YV-?O4% zEDhK?P!W}*^Uov4b?N3*IVyohT}0_c34J*eYO02l=R{jDUPn$==E*v-T3p|y=LGW&mzI`^U8#g6 zJMm{5;%bjVHM6$)%tD0wC&hYloB`%t|KX9fny)kK%Zm3%$;@ev#3+UQFgFYs?m%ml zA4hpzG|xqeJ}cX)W|SB|LildX;R4Y$a|e3pVdMu_!*_X$F<)v6ALDY0bFko^nNr5# z4{la_NOW54xjaBd0c{1x8MoybvB}Cm%b98j26T5^4fXYvjVI;}cZEeN=O1v0-q2!z zYw|tJ`md_m{?_~a+8$2Ce?z_A&W;9l@;|gs-c3cwW!gCXNKoVW#B{A49aYH6u_ky$ zei5qaqfwsoQ|>j^6;&a%KgOWZ6OMcvb6yGhJSl2SyXao5K?O4xe1d-p8^^&_Cgp3q zgE!M%aB&r@OFee7cho1BaB_q?vN)!F3WS6`l@=-9%wER}phj^gyH8o#eWmIFjC#u` z%_=D*^*)yrJ z!7H$#l&VFwr!&94W!c(aR_r17K8Lq7o%?Fi!@%GVnEehKhlXaUg;j-mMW$pOb@YCp z*!`1Clx)T19hCYdig&Qjued*O;Zgy8k!RK|wEvn-i8h~TXsZ1wXoVB~7v1Z9lsbvw zWMgev0*0%OrgEj)E)C3Ezr;TM zrn?1=$4a=bxK+#W;K1$bn%Q#ptWpqaoO1rjN4MvmC2q=ia#j^c@Nu_=YqG5aX?{z! ziVP33rX9z+;cYfJQ;?B*&E?d>zVqfnd)_tMJ8K8J6XgrKFBbA2*xxAN)(dG-p2y*= z6t`U7MSEVfuS`X^8@?K6x{t>$tH5+pK9V?;ES-SXs&e!_8B{qYTOXm--%lbWoJ1lf zlmy;nFh~!dKBFl;7C6`Wqwem|T_g?+ayMTuAb6Au-RXbC_SO=-6 zd=&0K(zY+!1?y!lnW($I)_;M^@%q}x;<41-VgkE91+Eq040ZBr`K(l8p;F309;zD} zgzv~NPvJR&WB)eXWC^Kz0%ij1R^c#24IBuSz%H)0>1r=~znH4Ucytr1yDb>Z5Ds^) zON{uyB0T1g2!!mOjk&wJzMB4FVVk5RgGa5*HQCNcpFKUo?Z@zD;lTgMxZ6{Jrq8lL z?c(s>_76@WlMS@wX9!r9uiZ*SPpMKFV{O%cnq$iiV+B%ET@%zcJQD{H`Kz+!msY6I zzJ}0RV{$^QyZ3zU@RLKQ{0yCcRe9fkCl9g#V*|UM;b2p|-WLXU(KZ;|kVkwY-wTy& z0_{huA{Rv4e&Z=HZYO}V{N`{R(CNI|)>OZk=6Y3mqf31n1Co{%GtByp_phyvWi5py zbmcD@L!1R<^-X$1DwhP^2$(-shT4wLF=_>O-{Z>&HVh5lX}HM~=4SY-`3F2-q7ka1 zmVH`T=5KYwp^>e# z#k}hd$<$Gk%2!`)B;XsMJ(mU88V;4T5V!pyx(`a2sW)Bc?H3(useJlk(&Qne71!3Y zd7}W)l9RqG&zret5#8_xd*wa)@x4aCV=aMD6PGu_9h3jzMV~P%Ux23|<|uoZNO`Q% z=HarZ&A-W|xJJUfBzdDNru+rq2uSbS?1e^7zUyN~s-VGd(_qm*`j`HO6Sq3pCX95i z_+PtKGWMh?&tVPHm4bqIQe*<;NVBgp?q$2?R0oa$FZNE%MnqghM^minf6~B-;qNkz z?kRn2*H?Qx=$2RZlkE}CKMI6jX6Z;Ik1UU|q*Sf9xoBcp%YKjF$`-1^9K=RA-J8g4 zzsS!dJ=Sl}sH6uLj!B9@MrrC~+MPpQ&`W#VnJa8ccPzB5ANMW?_BD^@BZH}bVupG2 z6UzupL=IlwrWbkoLL=BL!VVVfK~?^4y3#LyxC|B}X_2LkO1-+Rxk9SuwP) zH)>lAJ7RC1tvFV!D2G40diI)Qn~n**)gcrJGGDYZz#gJk-(mg#!+TPnR&2ae^yyNJ z+1U;LC|Nl`49XE&10>6DQxF1P}MYZDosu(hdAjnbA}JT|9~ zMiDJ^d!Vwc{J{AaaZGCv77Yzc3@F1$1Xd$>zHHjtEzGwj#n9V2i{JF1k(#EwL`lV&Bu`C-=+#0zmPPo}M@VE+AM354Z<-C>w zy^#F;aO5P(cSNu0-A2&U_1~h0Q)*c{%3U^wF=f->p9Rf{N|QcZg)i%deVdk7ziSA6 zhCF7F>yx89G1C`( zKfNUji9ZkQqRvFcOcR}#AP(QE6t2;#s~F&A^8&Pq5}wS z<2Qm;7Aw(h`hL|VO+>BFKpFK!?m3Bb>JmMZ1>0_b{aB^zgG7K6SA{jGdHF6odbiY< zy?33I<}`k&mQOUaHY7;k)0$PA8@a(O(tuWD~+e_vn4Cuf2A+A zPfD3sW=u}Lsd8p`jP^A+;X_?Ddo9E%!KY?@fZ`%}kDutUYk?9i+u6R6n83>ws1c=* z)K<+>IcYutI7*$@NJDTyaR{0(3gd>x^vCLr*@$abY4lQe!HxcbY+{=@_KjbynOCL! zoFOjIe$_QLSI6zKIbt#M$?flsj#%fIv%^gLeCJ~4s~X@xE;~2bKC|i|loFke&-%mK z3LMT#kw+%;Qa#ec7cV(jM|6;f+L)gCjYif4y!ivS%2^4PL_pfJ&v$9Um}Q0=+AkH! 
zAz)Iq-dIK}^!Fy65K)^0QXaCZvt8ycMQVJu9pU|mp|;SOV|0-T{i`Zb{$M~WJ611p zf-#P~O-=QuKdDrl_BblV6a647yj#9a%SG(1t*oY?p5M!BCo2IitST@7z)nk5 zCs#{zb0WX0jK&CP`nSD_veHhP?|aKUK!`6YyX;GLHHL zRDg9t$sfR|7PX`gwFGQ4Hk~OY(l+VfZSorJC&!ja%4-r^O;ta4fjvXK9p6gw$W{s@mocOJ-9+Zz~ z9qa$d-S^Kvfl<(?VC|QHy4(Nopq6DiKhZ<7T010#Q^}!zXAS$`@JQ%Cx(9D3r`xics_jrfCusxC(1FsF1DHH2)N39g^NI>A^ zIyt?NtCA%t)1Pq2T1M*7@tA3QM~^4lgMe3WHB?}j#*OiporF27t zk@0^DU#0fQs5oWz&F*%l3tl(5To&xY-I7rR%1(DDshnerB5d={oRdj zk$;UcG8<|xiOYKxcfnS$&XE|k?RiKCsM6ameONSEcll#hlFo`5_!lMg;H=1A+{vEpB__4YXuVw1 zS()d#eG9XE#394pDp)|q8>eXeu}|v!_e13z_O80Ip|xHwMM!yBG#;h#qF`x)jea{( z_xZ{-%PZ^fY)~WA_l;fTdtbO(l`ZFul9t4t^i0bLXXMA&XQNv=oI=6=q& z2CczwDXsdAT9o}z!2o{xM=I3kY2D`7SAoG0E)rhvqHe?I&dV&Ciq4Hz)aHgeas21y z}kjA<3s$CS?vK%;98WX^%ISn=la-s}lgzmfo8?~Z&eFj$5r#`15Z zOnc_rQodWO0{a#xCk{QlwL?^!BP-dLzwwt)%%+u3Ib(!=aJ5WJ4{q@B4hdlGO?Gz9ZYJrxFb$X;myXW-kITL zD%F6Sk9Nk8!7j(k@Qn0J_+c!@Tov6A$~V}m2cY0cG>?_1>#*b1JVuSI!0i3G7DBMy z@@B3X7qG?Ti_K=tQt~c?SZ*6DcP|Tc2J<+#bj;UdPUFp>WpDB*2o3gF&qOh8wT*_K zD-(a=N-os#%K-z=S) z{)Z67%pUi<4BreDPRXyltBCfTyOa(`;+!Yg>N)k+R-3Bqpi~qetEc& zr~F$u5}m%nnvt5UycY#x{NeSh-WLy=BAD)Mr6csT62FT@Fuqsnqf~b7Z#d7F;2dq! z*tp-M9b`|L0UIET6w|SwYpb5h2sGo0e@Z(=C1!V_@_qzK<3+ zx_s7_7Piaej0SG|YH!&;_FV8E!0!No3Zq?J#IkA*#1?5W$OQJTQ}&GbzodLWz7~2U z&pqst1&Y=KxsSF5ezB3|-`E*6aTvS_AErGKqknDlvi3_jj~SIIsq@&5qrPkm`FvuEIRu(UD!v(GjkZ*m9H zrSSg%?d_xdC-6+o;&|>Z?@!V9(1{70#(83`bsq*sQ1(BHMzo8p}%ps=}6tjA{Fa4Y2Rj(@d(h(0%b zK%d0=<}G25Ob>#BgoS$pI&+4cfjZGvk`o=Q?!$4u|P^UxidKu&0Q{ zFWy^JdS{G2F~_aMYz_I58}Aa>0=W%K;>0>+TN3+#yFd+IA>+@B64*-0lK>UC0Pt&# z)jliFcMjjQDap0c3yz~&jlQkB$C)Us@^1%Os*A41-Ib!&E2j{K<|Hnmo~{h{t2 zAgzCpy*aO#XYu9Pm6+UhJ&jb8##2b_?&SSiu7?NGlC+Pasf)|$u7|RGOZdZatip46 z%tLX8uN~GrS0h~)4s*M;X6hazxOK?j9;BM*ueDV!&aIzQn)u-qnRw`jDQY_yH}mtvtAX$4cX%+s_J@e^a^K_l_!(w zUeq!y9ckL6ZhTHNiLo?k-kq7~`j72T<2%?M+3v?Du6kFVYo0FEJVAVKFK(VfNjYQ1 zViF9TxC7Rn>hljR21l{4Zx4}Ra53hk1ex%ehZg2I>!nCK&RovIS*QA^(9*M^l7xtu*#$sR!_6DfWDij?D zv8p#$Qs-)PJ*!*BD#+!;S9O-f*``Hfh9GCPMAPmYVm<1dQclE%91grynDB}STxPRv zrg29{sczl|d=~5UtqnYsKQ=J6n+?M;12_Oyo|>`TsZ8Mi01vsWs<}w?aQKv(OLM*W zfANc2@JSKrBxHBS4SiSeGvk=?wuc$iZdgv=E22|P7E#hU;F`V?xSnw~z zU0cCA(_CryC1EEFH*t?@@~cyE)gASDYt!YC_sx%p{2|XgcKjWx^cAE&(Hw zGgYl(nZq#yB=KKDqXkV&@LQ=yvxK@6G;cF(hZ}R8SJ7Xv_lBfQU!GuHrD+aK^u{{Z+UXN*2Jd_|V;_K@&iv7mT2O|)re zH&L5;FDB@bAw2t&Uacy1@yS9fL(InF=;37LCU{nl<9o|SDA z62xHhUtoMr{{Vs(okPagGW-wt=?0sjzT+03u3DL-f&T!yT}aC3zg$<$UmpJeW-p0< zvOmOR_+#-h?@zdz=EW~8FZ9Wa#dQlaF5!?0kW}P;wK%*EBL=lA1CGODAG-BC{{X~V zn^}@e@=pZ!t`Ejv6j#FVw5v8{#|lk#UlDI@wEaEgM#}}`HS?#7Z?zqJRA>@5l*uN= zL>VNLfGf!EYb0cCkQJkNpdY*;hKZ_dmtXht> zcGgg`{np4e-~Povv7NVuKF8yKhzJ)`zmZo@wv}>_gWa+?HS{ONAKB~TPl@!+Qa^;= z4?3LIF_Uv4U5Iv$LZ0W?SE}%Fwcw2Kb=3*-)t2YWwtf%M{5Kz+tdKG(#0 zjCV%SoU4JA9V;?l3u-gE{fGbo&pqq5_dFzHjtY$28>W!hA8xczIx{D1pM*y=i0 z*c#_9SCk`xTzA2HpNAea_?2%5jw}V#63iAT_ibIo_cc!)h?C`nv@yfwcuZ9JtH)G* z3GmPMz4-U>gT|BUejD(lw;E-&yA+B36+Bz-pE%o-DlZkHDlCo&vGDhfR@(_NAOBDHC=%^sGM~{3Y-Q$3F(e;@dq6CcBd^ zWK3=&-1n(-oW3`9ZI8<7R;OwaNc{M^)2^ksGD=;`bGk#C)$r%TyPbDYx3-Qki5xCX zdKbmthG)X}rVD#x62{mn!=-bc9`QqHTD_?9!4b#~;A>}y{olLL`M>O_(Zs0LXVKpQ zzhv(S&kS+uZi{y-F2rNaeMg{tSMeY0$8T%kx6>lNvzgX7omd9R;0pQQ_H=I%c=GD! 
zQnA&%hO`1`q$8>HHM6M6Uxt>y5p_7NE^q9t z5j9J>#`l&&Gqx~z?buhL_zU9a!#~@Gpfv zF!;~GNvi4EG1qh6!m?0!c^SXot!`Kn)K*!xew{yEcieKbw2TANKm1q*X% z;{{lCz&%BE+CPdk_s+t&$0)-FycbmX^YCL=@lDi*031*n0eC@#+4s-b4SWP7|d}_@TB8@`X5AH_?yFWGo(%< zlNnZGa53J$1^)oxu3i|{ejEPTJ|efg@fNM9Szoxi(=W9k+hj)>D;p7ltJ4C%OZ-Rw z00kkj(zKPnxA1+G^2E5B-R0WMy~yLOeEa)je$n3xKW$F|r@=22d@hPP^(g%9CttSP z=jg1bacq&62N)o5L9ZV-#$YoVKFv>5dU+QOXBj>|HK#`L>fh*oS@>g5@h^`&QDxyB z76-bzzPd?PsbeQQ&1M*`zQXyKL-3GdueT;_@ev8zBRtNFQMNhr;%@`vi|@XJ;>(2NB#-?9=5W% zvApqywlzzgE(|Qq7%FGS%t7y4@YV*E58mB2X~p@iO3T@aJ4FJWJxiEyU53 zi^};UBN)eE1$no{+f63VPnc^rQb!}j`;|rKcfETp=C%EY8_Q^e9FokvE9c*hx}^Rz z@ft@G?+Dl%JDT_Ni8qJ0bL6sGf9)z(*q6atm9K=n9}Ve+QYAa4l2eWBYlQu!zA<0; zBfyuwEwDaXTiQD)Mb7iuwC=Sfx{4hF`7;C!nGyH|`d5zr*gqJ2G2#CJh8q5Zs@lf$ zYFbm;>1whJi3tHn9r5d5FN(zbJPT2J*!#Sn3o5ylIk@v`zT@Zr0Qe`D$886}pC5H2 z;-|L@;$6U9T*9pO@|G+LvHPHtgZNkI7ld?A6zjeZ)wMr}I#->lFPb!KUo#M`zc`sk zPct0mz6$+@{{Uhu{{Z+))8ik*IAS@{D)QdGCq-AN(-!j5q%P z9()I51YSO#$>&`LX}FYQ?jSuIxW#XYRd6_KQ>o1)?{6<7=KiyX#NxfZT0F6qm-F)| z=s&d{yYRo@R=4ASjzVoa#WHx7X*4Y+?0)KNQ2ziRjXEEc1B&?X_JsYlb_>ma5Ixbq*Om+SByJH7<>yj&v@rUeH z`)g~T1XK2uviLW1dEx8S64}RRa3s5EPyDqofEX=)dyK_rJV}i9G!(s@cj>C`ezC*Z zM-F6EDdV#mkyR>6@!HF>M}+v}Q?Yp7YjjlJ*^qUsu1l-t)ex$DDutm@YKK9>VaJkp4wg*h(7-~srW_A>mx*jf`; zXWZuby(NeCv6bV}+cUTL5&JZJNd2O`No8@R_(N5b!-=U2eOp|$gD~5Vyui0j*d0CV z^z&T*0D^OPpU2(_@Pqs)(|jqSd@=CBTbOh!q}i!>j^PLIJo!cntB`OGF<(Fb0Kp!7 z2l2=E3LCfU`#R#!OrexCL9^)WaM1_NCr{`2rm(zbD%=Df7k;}`fJlwY;( zlcnkZ03Uor`X_{LG+0{jsMGE)8dD2`Ln^ZpN{)iOg7)2sIpV&;{kC-NH{&;nZv21X z8)>ej)1hV7W;iI!58h059`*8UdOoMG>M?0reXui6GQ}FX+6ytj1oq^L`OFp~SgE&t zPr9p0o)-Za+SXdG$2)kXl%QPWrfQ2Rd=?q)&2^s|J{NfJ;8(;=L*b{yT|I3zT{uQA zt=0Ez-htHT+!0)ZU81;*H-44$(S)Fqk@?m?6;4vT9))?)mOvYf9+hfcscsnuJ;4p3HPuBX?9UFiB2v(Frs2*V2n6wpk$-XgM4w8IOPQo#C0qb9J?do( zafLzKn%=K0QP+p52NE^7m8ILe9-LMCX$qh${W?{2g=3RxkPbN+=}(ElJ5=>O4RN?R zy$-cg**<8RHdj$LR!({JtFvB1ZvheP8Rr^ zSHCR_x|XCKLRXdSD-!N!a#Tsz?+jI&tEd|aqcu`gVoL%uinv}+LxQDgVkVo}5EzE- z&NGinZ0OP>72^Y?P9T$EFVhs&k8s9&cNNy(vxUqmR#)XlV>GGqhQ}2oHqy$W$>@4h zt|A_8Huvj_crOBBkK>BQGL)W#a;kpyi6hJdZvIcVN~v(BX^WRXT6B}U6S(o%)oEwL z0lTQKs7lVqH0n-H_OW6G4A=w!4{=?;!<`z}Q`FMlTppa)3t%Rhz;9p9yB`bQ&!otg zamWE*O6RX;&g=_mUQG259Q-fRtvo(1?SKvC?(1GRc8Ltkfw63NE>Q1BAE*> zPSxE{4es+Ea+`AZiG(qNS*Z+)fbGU9zH+zAyEMsEk(KG5aaHUuFL>vbM5aiR6Osr7 z3Xt7Lw*=&o>r{5QB$7Zqs%Z9}gRN|-!kW<1okclalR+sbE%|!V&atxObsp6k@ZnhF z28S%9fOtJChY>z)h{RNGUh64eL>Tjd?NHjpM3s7Igq)QJnv2U+Q-$V=Jkl_!Zb3tU zDVGlsUBuBLA$T=W-JI=XUGSwDYK$nDbFMgS;78WY(9!%sOlzAx=65|8br?z;NlosRp)^+0NA9UK2ikh+Y zH|%3^J?VvMAJ!njF`Qz7oT2W93a{g)7bl zcvr<=hcMf~SuHX-10$tIzlqvPNRJyZ2a}L0v+SCTAf-)O#|BBn z7>QQplRg0W&EXSeaV&x`2IHIx@`tr}We9+V-S@q#>EDkY7Fur?qRt*tV&1N$QW&;Z@h7q%7*QY4~LWj%$SR6Ff7S2*A!hwclI7^VxywzMX4` z@vfURDpxpAbCX|vgRd83;y9ugrF3AMxFaRTc9YV&PYmkI<}6`|1aXsGYI(*@uAq-x zR^E$rw$d)g9XguyB`MzM_#7T3so3mwYgjHUnf3w4IPPnZ)L?6Q$N=CTwbf~tpJ}%! 
zKwx^-ez#(WmR?U_D3xeNI_`JYppMNMgw# zI2d7(UZL>I;FrZujoOIO{vUW!=JAdLOpzUn^)-ZPVlj0&X{(+dWsSq(DD%{kNb((W z)&ynnEZypj89-}q-EBhh;0KpBlty=Ea#UB@; zYnd_|?0#SF*Vy{k>JPvV*n8o(!i^dWdp!!?B!LGD5CB)!WxO-&*R;huc^{VMoJy-n z#|pk@SW{ zYZ}eedvc;u2R`-BUaZ==bfe9pHsbNE=D(=PYaE#pcg=C16SVD4&C*L-yp7*9Yt$N( zOt~5yzi5YtUfHbw02X+X>sfUYGVL8gj9%Os;9akSSvl23D02}xM^BUIM!H7VLz?pcZJ zlh(5uUoF3R$FQNz!)E~CdsTVTX%mG!bv60rI#GnVVt-iHx_WF_lH(F6h4L}Utz8Sn zm%2O0Bo}*jNx6NEVnJbY=PEOv^(=aIJ%wy297s6sgfOIPEE?04e5Tfn~*Z0_1pkxM92 zxxmeSv3M(59wYFbzN*(--9S^u$C&EYsxlq-1dkk(j)NH zmnLQ0VkG4AUp4sq;^w!e>({n=eyt=n%%l*mMr*3P@&2f`^61e!00s%~UK{bA`a1(8 z(aV-#N!)wa#OB$**Q!&eML63>W8UDrnlr}UvgSo|@zy;aTezmU-zL&L#$ldombCbY znzo|am-eBGTWLg*OTPtJ9D+X@@sAw%o>6%#iGFPQ*C^gR)L>=}AprjXb-LHHf#Ia$ zs=?Y_&*(fBQH7-o?R0(V;LqEa;}43aUjbcwOM7cbV8NGo-yy~aJq_0pJ!|Rd zVRO0SAt}{zQQGI9J`%{OdqqwWjAoj4TK(thC&%yjDi_0zOTdz7z6gX|-`v6Fq@h3% z~?)DpjZ3&OuYisv3`vE+#`8OS_J{tJ%!* z{C0C%6)_I{y^k*^;r<=ric?sAuKKgF)%<%TS8_VtGOuC}TE*2qCS0@(v0SijoqE>? zs`#0hoXOXxHI(|}u;ufS>t2Q*BBjckN0XWHCK{Dj7@K}2@g$mzr{{B#+}1eMGDckV z(Ng?BmRR+9AWApGk z6ICxKJHAdc*Cwl@2Ak$mYqFIH(^rx{D<7WMuG_wbt%js6-*G*9Vzi~yR^~7yAb`C+ zYZ4oIW{?YHC$Ug6YgP{l_@d@NGA$-Tq+}7C@cvwy{aJg2$8jR95!JL&pakRPjYRJ_~Ib^sAJiO=+t&ag3eY zB%j060$$k*`UUS>8VAGQ9o^cS`!5aM6i!PBnAchRJ0_cd@e<0*U97NV!8_(fpu(z<938IlncyJ7gdMNd$LX)j|9M!8w4^cRy^KRlzO4Yu@sI0NbKKycI zW}uSU42x+SZf74aHS`zlkML_<()EV%C6|YuD_99veYkDCiLajR%RxArM%+~KUTt-0 z`^T||oT|l0@BRw-;!QJE9vi>7A~raUnDL)V_S2$Cb#Ld3edoYtS`h*AU>W4du60M(3MR9*;)Wy|Nq_sNhX0$TO z4y9+JJ{bL>ehOXP=}F@~D@&8iiNAiRq?TV=@tZ3dLAQ zmZ!$zIHM}XRdY_v`QP>y{fRU$i0`L-T)Xi!I)%NggiEF^vCLBgz+?K?-S(djBhu|+ zf-5CmgQ%}i_*?ro_&?zvham8uf%G_R>|sy@QKu|G#^yi7ee0&Z@BoSvy4^|b$4d7h zm#9;FGJ_uOTd>Gf=JrPh3XHjdGC&X6-BDqwWGtaT#&>Jiu%nw0MjcO zBf$rbE6aaqoln6(1?;qqOUBxq7FPPkwwi6DI{~_9Qp51Vn(?vNCw&gg9y#BZI4%mhYFyVQOD_BcZ4)Z{?JxQPD$hDuTKEIg*1;JAJ1B=l+NQx)$c8Cf<*7-lk)~_ z=Qa76{{RIW__uBGH}<^L?BkeAZQ=bk4KgNPo6~bikN6Xs{VBV;9vkrW-m^P=mUhxi z%zwIyTl!bw_l^D~Pp|mnU-9MC`E5^HxVm6-pPkA701DvG>we)Y?t1(&g;g;UlIV<2 zjIv+eIEPOkyXZw({g{3T-}u|dP)B3o>y1lPl2rcBvx*|iyq+D1=m))LYCb3$?5S@R zn@PMAz#_jvf8dFqvM0n{d&SpZ8+=J?K9%9)VD{E8BzQM+XZSP71J=BpmXpL{Ia)rm zG=wl1Cthz}i1q&fgn#f)-;1_UX^rEpQrZ@dc&5CH-1&;}j&=js^cD5imGGeG{u`Fh zMABhUKIwYyb{tf(Q_&IYvmE)fSPZCC-EcTKu#DDkF5(nd7ufMZD>_zb7K-J%0 z(ez_&ZDyhgG)p#NVLZ$;=Is9P&2c}rkL=H+{5I6JZF9nUb4zPAvL)4^Lo6Yi*z!Bq z#Nun#z~NpZapsEbecleX4;znIXwJ>|?tU5SS6YUrc=lH7lH~E8YrXh$@K)>Wk$8ma z&Mpti zy=td{v`c@7izb0%JAIHbF+|6O9+l4cGgY*)y_WGNZ!Jy(1I9gSpN~E;Sy^7R_RKa! 
zpbVe9Yf6~9lJ{EF(vBI^lUF#;+JnI|_>axFw1goeAgJqHU+nGs7|-Ls8bf2^jY(pT zR0bHFf2?A78Lrz``0L?M5Jw~G_Yo)&@fbPgyRX)M0(4TvDdCXAWb9i=isLQSYA%{{UerJ{&Z#$sUVsc4T}(>zh(kA+q`j*oe4 zF75E#B~+(uk8V+cv}AePaYtiMro}9b(5*wc>=vB;-0^t+i8ui#V?z3t>)b$ zH_CYRub8erAbe2xyW+?Ad+?vYmwq7BF6Q0#3k!20Mb8C@;YZfI+(s&jm$QspIxFP( ze0BzK#M;ej*)PD~+GD^LzCE7P#y<|9QLwUxNZiWb^4N?a8TaPB1Hs=Cw7(U7uT77` z&^E7l!Z>4sJ+nwN@{{Uz?d;w{D@k8O9ejT>D)Yb2TOtRi>C-3cE zfLGK%1V3PZhQ0yUbk7yrTFtH9>9eG;Y7-{HfFJJrR${kPn|u<0o@DuZ4)yjXvGA)%yYPMYh;23d>zkR_wX7{5@|}Rr-J|JVf1`L>*TmO$ z{yMSvfp>4L*^egcN4S6VNr?XdmU~G1#~o@h*jyeKqN3+}tFKe^jJq|I+Qm~!E;>CA ziZvhE*TVk*1+8_X;@=+JO{rQ&CbPeeL*z06_Xl#^*BN!MzO|ykf8lLf@$YTJ%Q{Uy zQbg;3we{caAMg*vzY)BBrt4k|&@Jx%$jftgquVCv10TGSeNA|W$Di55#$OElOFzV& zZ{bwhWu@>|vx`mLouIn;cRyImF*!bEN(!Z0)R#>(eXM-j z;vbIDY7Ul@-`af4R!N#2tcTLJlf*t7w(wluIMcjI3|=DAw#5W-U_~?xaC7R_llw^g z3DEAMxA>#s4HEmCW`7arw&a$AP(J#Oz@D|ucw_cv__MEm&bmLv?+0nuI*0bRh-9;E zFKJeeCHt;(+M^3o6Nu*aR-T#~W|&N}6yfbB?9+CC*O}`70J6`HJ~ezq_@Nhr{0Dy4 zkTh)1c@$$1De6HTI@jqR!z(Rk!oC-_@YS{3T}>Q0xVv-r+qtitzu=!AvnBrk!4CV%5k7M4dR7*!cL{3afq*!#iCEZIuW2nWrN2}6I$Sk8zbS=O(ssO`+VV*Hm*D>ZhI}vK zC^gMS^G4UNwLL<`AbmIO-fVU;by-4Z3Bl{yyf4KXr-*(Ad}q@xHRGtmql?jT1+zN6#PHpZwBc{RM2#wmrp&M zf#yyB0P@d0h|M?Px9qE->pu{+-8;bk7?(rvx|lAk?z}@gnVD1qP!rHrwy*J0}TwFbWM_lBhY+lNe%8XJgXTqxK`1LYkBU|;-LmdesSK-S2Q);f$4?~?jLKH)jw4(7gN1IHLl*A@Q77YW|h zd&s;;4P$UP6&z*f1$LX&L*#s|{{RI#{f|FtAKKC_JNBFSDf=u02aXRGEH!Q@Ji44Cs)JIih5^_{w99c-VgCUhi9YPLi!c|0NEvh;GJ70UO>u> z4wd@h@bmUs@h^>Zdmj;a{{Z9Or#`2qsDgMj8z46P#orwKA@ObB z!taUNW{u-d0cuL?XAZ@Vdte9~Why>y_4lh8Lj_h9@xJ1Fx7_&|W_g}33oHf-eC{uz zYRl$)@B3Q(Ch;GEydi7x>)_|aeI6eU%b^eL4QdzLbFZ?fRZXf1X**}FeoA=n;ufQ- zYr2F+39TMWmT4lFA=Kxg&pGoYutEYUQI8;xA&~JQCy$8 z3LY>Cug9O;clKiOU&CE*#vc=ZYKye+w~Y0DdFRpmJ#S-+eL@#G^J9^6yK&n&&uV$Z zaM+ouwEgSs_f|dL3C?R_D$fsyjiue_{STdV---JEjRdh-YWFdQLb66=I~-%U?_Wdz z0KqALVT~vD*Z9$Q;*TC{6Iu9c!m=gKr5qeJNi*Rh<#P1bh#P^3i^T z)Y^Tzvy$f^dUdFr?kZGd>Rv-lv5@a02hx@7$xs4tYe*zGTxUN@ks?zczIu9BE5s-( zBiEJ@R7;fSFYNX&%5mDLh!9E2n(4J2JIn;_$gH>1$lOl`wxK?K4?3k8MouJF;Kun1 zagHlW9aP5{L|?|UVPm|8Joc*y2pL24t?I$UV~(8XN_LTQ84Su<$BOHpwb5wes%_SaNl%@b>OLLlraARI7#GgMPi~~%W(m++Kww#m_x#Z$1 z+NoJ)L~YHVtp@!_#(!E_p+LD9s_AS=$n~h4)Sb}|pS0ATIUZNR=~kkTZ&A%d3}P}0 z_VuZR#2!s&K6z{?){K4BIe-MQ&svUImy92!B(r1^4Mij54CAe0sNZmW%Tdv}bBrFe z*~t-=2RJ-ZLn6jL_vukja-Ly1`kLR}r_hOBOJP**;m_q!m4!(xJJc_5?89NDj7u*K z%~d(H%wtAwS7nIW5WtF>HfaC=9(q(Z*5I(*D_X+G%mW9%e$?*^-JSH|322C5(%x_e z(nfkx$)!k-=m%q61+BDEMT}z|PHLs>kr(+u_U58>B|Rc{Mv|tqTAhFF_phH0c-GmA zI1M1_$9nyJ{gAvvd#&gevbJ&LsK7m|@rS@yQ``8i!IK1=mGAB=_FwiCjyCXptH~w_ zfeFWI_*~x$6`j+S_ddf7UaYb?DRe#e#ZkD^3rKhg&TGTIBsx5dUjwfqz0=0`amA;! 
zW0A9**Uw)QFC#NHBrAe=29|$lP_g4=vX|B`(E01*ZkCq|KbHG|$s)d2@%4eX)TBEO z-PQGf#aWf2o966s&3wP{BX`=|a8B=9_>87aV%kUTSzUDLB+P9BGjVP}Y!ml=s~h4D zi;HG+5hHJ3dfV`mER%UEeqeGzu8YK;6O#V`Ni3){{8_Kjunx^R9ybtSy|g6Hoa9+n zVu+`XwQ|Ne#>U&b=H%ADi2OSyo2knZCJAhEDwof0fVnx(0Y_T)e`DX+ALEDZQ>@~^AMEj8w?FVtzxX4M!w-QPWwxPnpb76vDVpx;JiDL7abCU)BB@p?rFUqb zJ)H5j9}5RZ4e!tJN9WJ{6XX5}RpZZzclKU2_}^mi+sNwUN<+>#;f;RHe$AioPX7Rd zp9FLWbnPd?c5wM_$8t!@1q?6M^r;I#x;v3a7 zdGC@*JlC0co5q^N$YHsif`-S;)##e!mM2eQPDmLZ_2fP;X)fk)v5|w&Ry8Fgu8EB{ z*@dh8M8DGDmc$al_}m+%auVt*a~sPdVTt99ZR$Eloi_J49*la{F0F3`pl#p`bDZ&9 zwQ9+$Tk&AtMAl&&aj)2!cscLdhar?F)j<}`qABnBB^1^$C8Emf8&{qrMFN+#( zoc?5U0nRg@=Uh^$eGZ$&9Y&?Ay}hGI_c`0wCb%CI`0~m*og}$c!R=gMj(#|)^CE`m zJ~{bXyr0CM6z=s2j7u);oNZ1`WZb2oN~+l&r>}fXiZ^GDU{!nPt#F#LLBpOVXxDoMQ^~SOzz42TRJHIcuU{=w>)SA>PR+J&4 z)c8fbIcqd!;*4bel}2p_1N)o-&m(VI+p;WFp(K3`7qLDvxc9HiDNXxZv-<|ED6I@A zG`L`!Z;}Y@T2|JUssZu<=bDaPi6#lj^u;yqQBG9!uR?O?vE(+RRNT@pqtfpqy0uNM z9Du0@7_ZS!*#7|HP42niTX@5l0TdN2?O&P0QFi%M;Nv(o_80sU%i+MKB^53S%jQiFW^ zo*Cnd#A}CSB!EvowZv)~l%d<^0egCy>3nT6q>UQ4C$(^Tt=Z%r4>kIJRfk^1BlV14 zDbl2!_8mU36p$n}N?mEo5Dz)5yUWmrlT=>&EOxdsYw9RqFJ^ouXT+5#-&0QOS#agd zhN?cUT#z2J=esN)<|6}>RiLt5X~6dKhZ7m4CI)ONXk{^~_SS+ad|H1m0}4gUakt4naOj+}H0is_|=gk6t|$79t>?n*<_a^!h{<-hkGUaHYQIp79fwtbt)^oR%0qm2XebFLfJ*iq6@jaz+(LTEC=d6Y3X{!jQ8U zY-1Jl&+PB;SHoILTI=2|iS5Upp=K+P3F(^Ub8?x>C`x;tAK}l3elYOG`rqF8a!{Da zE}6*uYqQmUENPlPn$TJ3K`wl~uOkc)Yu>&xYr1})rZ$|GHqtx3EvL6l%m<(XxBmcS zPxvFp#qZjS#LMGfittb2uLsNf=x#y}KIMHCPfYsPMGQ4;O!;7ml}b1YKGw577uK#$ zxqmg}poeZw0mW32e$5bJ5Fm~-U%5XMf8dy(vd_cMgVA_9!ygQ^nPi3|=jytGt6a_q zBP4;G=D#+7X&>3E;O~KTiEKPK;z$6uEG}S;&SH~1FEy4ng5?u*NDbTgT9(sq7l z$(Lg%F1tzUF;(J%F(&P#f-Bv80r025n(f@Sx|9*mAWflXeX4!Ma~?JLTj9MMT$0mH zzWEjK$h_?yf~JJ!Jxj)ASc-FY)bpg#uI~cLZ*JHi;DBoV&xG|oCJ66!8#1v>xkAIO zdgp|^F9w;X*w12?_e%_+p4MI@Zhb1R$3v=X7yIVYe#f>Tq7t|`_7&ZFagQ@RXwYz- zrgQ1AyO{EAWccIGK=<>sYke0`_TZ=@w)fg-m?58-!+WLFo@B9f!j|V&o)A%F*00i~;_u{`4-i!YL1zc%XI!(32*LV7wfo@tX zk0>}iu{Z*|DPS=VUQ?bJj3lMX7E(9yj}kQ zYi+kTR-k3%g-_+pbK$7sG1c2>b>V}+QnaOHd_$+$Uuu&>Wu(~4adB}f#S9TdUEV@) z6e#4A(!W?g;ELBD9=s<8r~4>;YWQ_|quSfsORRW$ZB3A&lTuD z2EX8wehK}WwJ3G}01CrtKg2y7QpJqeT_d#AJ3Ky@Ex?NNCuuKT7tYN*QZa8S*)fDjB>U8jMr?&#f$e z7{zXzBZnXphuhYN@Ja??(G2>LUYYRM#Qy*fJ~(_mv-q3vuT_rsPr8f~8vXls$ITP3 zRbY4{+Lq_TGB)REa52z|_bJz>I4P%lpO{v{)uUCyakEcmc}Vbeybe*&5$FX});u8$ zgUpUH0R9|xuTK8}gk5~?%^peis@k4~6}$zO-z0U&tv#w@wX{5{-@~@xnC6HKoOG{% zf8f0qQ``7A;qHYL3Sv6V=ApeG46ht$i^tK8ewEy4(>2c!;dAp~bQSq`{{RK!@rqr1 za`1P666q0iZw$5Es;4Dno?eVU-B4@hxQiCB`FnS>XW4L06=O5FLQ3eb`v(5hek1WW zfm1;76-e>K)#l1Xs@j#$#n!^_OHnggufD{@MnkOb2Ccz-%opR%tl zdLK{4d@)*uM~I~#b-sna+DBd1CHM=k_+!I2irVh8qs?ihTX~yWd7NzsJ^J(7zZUi1 z4tSU0roVgOpNU!p)sCm9%458+xlB4Yzp?5MO8qeZ0D^~nP_^*(tEOw-4A)|kZ54jh zj3n9-&Hw`)Pc`y3o22PqwdciKUxov- z*Uz$92*wom8Sh=(zwl6fbK?eznlFqzA>rFy66yfcXimipryP9euWI@nEm}AXKeL3j zKPci45NPKZ%hRaR*)E0e7JtD?J~Q|y;si%Wj>cOCkTlwS@SHM#vPZUgudluX{?z{f z3jY9U9}T(imBSrg&ICH1np5|XvvnYMUwZh{!(Xrlk?~7Ro8s4w{vx)6t6xCo&%>zk z`BssUm_RusgZ*pc<@lHTR(u=%qOCj;`!ZPT7GDt}l1To+Yi;HUVN?FHMQo{$13dcu zE6jN8ejgLfHF&e;f^c!!U#yc^&@0%+F$Ce?`4bV;2rqdLI>VlLG&XW zRnHy%*ZL=lH5c(!)`x2)glYThc;(&pf<2F18t48z{{X=@^(Z5?(*6+qaXPp58sZHL zRftCMMtX%{K4a}(KjA-x9t+X@SvP?GF<)w0L~tE~6qrWL$E$)zBEK-sEAc)CUstTu zBQCu@XY2V!UBa1^ez!vIQQccphxnb~dG$?aQ248-c&-GWQ<0>xL6i`NKP*3h;=WJ# z)A7>M$6swe`CdE7NrGgMV6*h&+P_n@&)LiNboi0sl=w@f_@3h9N`ZdW7P)tGwpgxH zl|V)}1Jf1Y-w^)*;Ep;+?aA>T9}(MlvM&Yc_8`G^sH_>5?V~u%XmOH#YpaE+<`l6m zgOZ#czU#Uxh;uxqczIW|gy6Q4vumO7N5kI`d|&bZ0F2Y&Kf})iYTDMZa>L8hw2L>L z*dB!K;C>_u&iLiwAB-OjJas?Ae~Uf})Vw#WCAf94)-8-uGN0oeL0@|P^!x%bZ 
z82nN2e}Ha>j(#KErM0|k^GxlKAFdR2{HuJw35TH7NlNlPEYFE!j>3B^EIr59x-$MW zcymtCyb``2zeabt1!Gq{eJjmx^*@LH4Dg@W^$S}~K4_xfJdc5u6_xu&{6zhwzB}mf z{6zQ_@W$lm+I*ndCCmaltyt&gFvf67D&XV}Ys#d&Cv&!j9Z zy{E|E8*#^RS!UkWRTSd~ch<+9mHkr>L-)0pD%~fs&0KiDSNPTOI>*9(1kmp<^t-J= z6H@yku`g-6_kb`Q9l5X3Z`uC(sc=N{hUARrIrRG1x&F(4v}f%P@!#S{h_3ug z@ZofAGeU+Jv(&DkK2+{b4%hx7M>!RSRa%xE`#GlO_1N1VnP9VxF!r=8xBP5<6`*(v zU$fN3>A#BVdvpGq5Cygt{u~_DtM3fz-YAw$S4Yw=bc1Bg46{20jOV`5U2lXuS)*tg zT$dWQsc9|6)ykQg05VDGpK7V1=~o&(_OYhwn*5h>>bE9nWb-#e5Iq!UfnMh|CY{}n zGMBc5wpKjq+fcOd4wrSIYu+n9WwRExxFD2fW7vLms)*BeSd@6&ELB)KYeCI41t54&Gy7?b*kk-qoVih{v)~W8hcYTeI7Jkyc85EuM zUz6|K#_@v+h=54vL_k_Vq+=E>-8t!!ZWu#BIwq~8NP{rx-lUr$-LcWRk(2s9_w&pC zfcv%UzOVB*j`MvG^!!O;k&LtGQWG^pdAGR((m)o1IUq}+oV612SYZ;|Ue5k9&nlb> zoi3i))i!>PH-&8B`1c0;?2i0e-a%V|gTs0@@nxYfIa<t@jZuV?h-NeAAK2{ET6?5llj#?&_#$0~fO}W!Z#Vv*Oi6&mdEq5jt?DjMrWn2!8uU2E6M=QUiOza^)S|L8GwcFYy2b6&@4J( zDGnG^3uHSu@3PM9XkF6{y}3z06`pfG>CB^p+WP=?Oyk(s)wS*~HUsq^QH~Dv-#tZ` zkwtXt50>jA3f#+beQM^J&L2n8@tL@?X^psl$s=>U1&JrN<2gGH-AGR5+jxe=>))rEaZ+rZ|GiCEFZS>398YSO$0Z`K0hqq(7 zT!2v+N*hqp(})z!-Bomn)7vz2I+ukVhhGq3dFlZ)8eX|Om-q%A>X3qqdWRR^pP8o^ zTrgc6&Z=^oR$AI5+5T^R^~xwOV?(DP%GXm@!>3juTz<5gob~stov_QZv^=--{Q&#+ zD^W{=@|)^$g_qaX8SilWE%Rmz+afe_je2e@>Yz1}*h}~BBi#Rp&Rucu=jX*E34(HU z6YmymrFHr(|1pZMA(`+EDVXgKcv!`%Y}{n#25Ich^?Sac6Mxvt(?Vax0WH9DmOlCe4U> zw(ALr-6WS8^?dHT<&&EOU-J4GPMp4)&#U-)bm7pSU5HR$u;Z6Ei+Ax}(%9sCJiNt4 zId$52{TSVy%K-R!P(?ibh2F?YB+zQvIk*qolzuj9ht^y)+5$zl=?FF+uQcs?RNTz( zz=&ZJ02Wo)7y8%yy`Z?bKCJZ{u>$&CIEwoHpeY;0kIx(K()0 zc$xRxD3s0zGM38*G-;7!Dh*U z|C43|E-y+rPNV3RO=o<8ni6C98R9M;eLxPpK7(xMEfJOG>h>q!*&iurYn>p1pA1Qb zB{&XfrDyvzv=saqtznI=P2IfNTVf5l9%0YqZeV%iLoq-+j(Qim8V*PaGo1?3IIEAO zZar@P7otmLmh4-&EF~jsvx2;bTtjd{xa_iin(5hL9azbv1;dv!Jnvw?GMZZ zG37Pe8Nav=3-12rWFwN8EyE66fPgqopf8#gs(?36Xltsw%919hjAt{Z>Yv$4>-LiR zrvU@jSUcW){0Mf+iyAlH$q?`iEs<`X5b1jMO2H1Klhdv*F@B-?8$P>K9*t zmUs`lFCU3OK}rGw3`^(_eAzFsN$RY^O#PJ>Dz7cRjTd7DYV1+zx9KPhGI+HY*k-#DsDu}gHnLWn-y2$3-Dt;_JjjS zX>WBWT6hz0FW?xCaODNNNgrv}!pPpQpP_^1saq!Z;&crKpSWh{Wla}HM4N-z38e*g zRwvbrr}G`VGxX|NK|`oI3XQ0Q~&SPu2LL+mky6makTL zVcrY%POlf)=5u@qAB@LRY<&cHNA7O>$eAnOm-W!Aj>Iw<`Cv#qE_r2?(&(A0MUPtooQedP1eAgjY54j( z&_T)raxeq73S!Q5I&i9gUZanGiR6U1 zSpL-&ox$~~jnsTz{{!8cum82Sx)Jr?Z-fE2&Fr5KlfGZKa&WpL&Qkkr*!HQITY~~S zYn~6Xpx0)Zp;52k!vQf{zY!7vX!|;aFe#PE1Z=g|pZQk`Piw3tFurnB|L^-nZ#v2` zIOlsnQ@7Xx5W}I7?2pussOPyv`JUOWCT7RpsNJMP<)6o`F+TO>zpFlQX*4=xAS#{B zf2xSt4bdXzw?OOQZ|lLcnc{AK$KIBA@XxM>vzu+=v9V{U=*9!O>KNN?=}n-H@uKwj z1p`-jwnOY1dRMv_&cp8})*_dMu@O@g`OwAGC-1Vk5?wd|)gj?-Jx3HGpDJ=S)EunR zZDszO=}nJeFWp`JWF|K=Z0Dp*3+w5cF3%7XX_Fc$Sq@C5jz1G$X^au1Ok1}zt~r%= zK_A{#a(}vHoa{j>EPZJ-2Ch-QrkTMu9=xqI>vCV)tI&9=N|bznd6IUH7_Z50rGr-t z-OKOlZQe9WUSr)G;i<}j^#40tSlfzM5DI99%CCk}j)eG%9T-HAksrF80_*ixfNn}R zBc~H$g%wVq@5LVWr$;C#9`%b~vR#n^Zz2u_I_RBo*E5Q@UHAd0J`>MZyYDFGtGLeG zJAdUQozsI{<2}oZ$EZai`DRGQqACfB|iSxxvT`XH*r91oeTSX<6O-o9d6H8&NFs*v2 z##6bGc$}i&_|JTn+T3<6CxK@s&G?YEu|cNU--j;!xBQD_Rc*6&gIi@4K^M_8YC>EY z#ymVJmS$K~OvArGw<!usy`+dF>ALGL66EY8miU=?0?i&&^GPY3 z4Ns{aI-&U-6Hod;tQ+lt|5?y&5mgxF?kb2+`(D9ocgBVU8W~?_&xGJ+&IJfb*sGov z{EH!4*_yw1;%6pNWB>sGb%($_FOZ|u%gtX9+<%8_`uES4G|h;>R%60N2c%XD5L;vP z3bfq0>t0BRg?|;;5f014X@yccqUkPTY-k2P54URsD^+7wfXDGrus;$rEyiryNKA(X z(V$j;ShD|hqbP|w;!NuUaqJk$%zjj;%&h`F&q|NJRPM~9xMjQsPmC2Mv_#{`h^Tt& zfoP2rRn+o>_>Lk)*6}B+9CZIyFp&8b3`xQ>(W-hbQCXWIWzMH95C9y|GPP+UPF~{G z_i%V_B%o=OJO8yfg~^f5+T0}k<+*}drkDet@*DY2h3Oj)O(+Xv6VO*EkT}isyNIH-51^5g1V06+d6xjD z@sxr{>X->*uBuLhs`rMosK}HK}dSlla(xuJD8XiRoHt>>41F^agDe2Wrqpx?u zFIvt@-FV_fh8&)@#~QxBM$;suNu06HT$n{p|6y<684|1_lcSl;3}doMO-ea1;s{Xt 
zaNVOUcUqB#@Ci7pZZLWlYjrP45%TSdEMs*Dd8i*KvVKxTo9Pa zc{)8Ky(&p_^*giNClb@+U1>GL!;t$RiU4o@8=tW&7EWBxRHfTL#p5S};R03i=p3wc;4U3nc^4teX75nDVf`-#MR zrU+=@gh@l6+@#lihw`|!crbv4S_G;L(7Gynv1dE0r61M~MDj5Dc`W{-7f!V@v>IH3)SzD-U>y_*#5(dI=uY|o@Q3};g6w(GWPRW@-RfQ| zp|0n#me4_e*h}|!D;MzJzxcQG;LYKYH(FXGUB*FpUvHV|hP~>;Jl4?F0?`T<&_m}6 zF^f047}+*Db06}=dfZl@aI7odz2}Ab(hhFE?MHID>hQ>}vNyy63aH&-LQpmQLFH}Z z)f?z*z+6iNs(!HiAKcwJoe;&mljKKnU>AbvAVEibzeZV@r4nP5MK#?z@F#$e|hld~Q?`N6`3=Bk0rhHjQ8Dll7m3b7OQKQPa#fL5De{ngv z;O@e_atB!6K+VM&;D#GzrYlN!X*wi!GW)cTf10SQS1pJA3P7FPRpTtPTvml6FVnWp z?QZ@@loLV^^j1n?%~R5My=U0=3RAi+mAl|n5TL8!snidSD)zyUuf6T}aZ;n6=370W z1<;?z0rv+jK)ThF;%2R-ma*qZltt1|By@wFG!!iNTLC7| z+|b&!=|dxSX53~}S%+ZdIZF2ZyA#|CBTp8%hnS_S-4(?(X-gcijQOCV6rPMBuEAIewE*T_b>4F`WiqSe8cNF7 z(!^tiq1{xp0W=*j5;z2(ju5Yk@-KTQ1Y45KlK7qIMW=k>eQD^~Gv<{hbuhiCQpwGn z;n*9FHR_|V>1bfTOK>T2_LMoi9Yam!``P9MI?}( zIOS6*bMx$W-1Av}O4|1*#l9bip@6tpk=@7nqyON41N@^r)kM3Mk|9}q*U~qYNb%dg z+!rl`g4ICR=PwU*LkT$nFek0K!(Okz9okPmALN^NAhytlz=4^A0b&_b@(S?#$hYF? z&R$i%L&J>Fkg0&rKR0dHzWgT+<0xsQAR)JRughP1$%!#+o1Tvm+?rC_hjiJ5(##R_(kQcX9Bj#T_oEk0fW7yAbx2G2Y&I%5(6zfoN3zUlj^C1AymhJ_Q3(l2@wOjC#Rp;80ibAepRz)=ryb84~_UypF#rtw&m2(do`#;SLh?+x6QnC@^Q)lJ0PnHaV_u*UDjZs)5kw|uJ^QHQ>j8Lu+ z8r=UMUFM_WseHU#&F-aPQdN6nB=lX)!HKJtb*Yj9j)*c4VM8#&z#r_ur1JVw2!>>G zdfrrXNA*Ca1r8VV{4Md~KEE)@;)0VUOL&(lc*w=|V|DkJo27zMI&a}1lc zo3${iLUIi#>Ak{~L|%i|WJBK1EiH=1==vY`I;0>&Na~^0YdJO;_b~CeQmED1!JB|X z%n4`mR9nAY-$%-{!%(u1ZOYhhRxP;P9Pu_?)1#KUw%1=gXZ~xOmrm!v6z%^w4B0R* zPc40In1fwB0Fa)K2ao-m8^#V?=UuE1xZYkQu@^jRBgZ;?x~9XsysFK^e2giBaCgu% z{I({uVk_VZSq-@n2*qR)qB`e(1K`4RxLkhHoYv9(Rng4TP%_vcI}^ULsDO~m$MuG@ zgA~+!8qYNW%%=PX8eOdO2 zSw2DpkJwMa&0rD0(EJEClR47`?3V`gnb(=kLK#Yy$yL^qY)HABNR}39{Sm}Bo&y0z zXo@ds|11&t=pF@H{NtfAWg611+|ik-pj?hJ0(uqInfI^{!$Qzi-?oM9k4Q*+dU7u- zm=lkR#hydv`c}Q6W`6~5g>R?wsPp076v3?>w~PsmiZp<}FJuGW1IZ`&fr#rkafnV) z=UncaWI2+x;ty_BX8L-iEAvCwoWoiEJhLHMuX1xESF6lD###ewwzUtGpOvd!JoCm^ z$^#g++1|`8*EB>cS?jrmvr9wnu5#+ZT}hpRKH}Jm-L`km!O{X|%^QMo3ok2LA=P-E z=v&@vH9~@oaO|z{wfuw}52ig>J6$030b@s+V7Rrk*cfY&Q4y?X#b0_ws+|DzdZ)WSC5LNkL{-?o$eUG|8 z+f(n1e@u-EY`<{~5xvcr1=K-Zd!3NiIsIo%5?`HBm*F!I909(T7P(4ka?UJt{_S9 z#c0I^WyGT#Ljncs;>>j9{GXF^R?8Lq4F60p+A?XRwZ?w$x}(HCsIib4O3_*j%L7We z=mRy(Lrlxd6B{C{Zhi&z!NE@O{Ud4<0_R&icjai>$G?F_MvtwPl%KHXVR~JF(=-hY z!84p&;T4i_yW~+aiio4CjYhJ&4Qu|A$t%5tZAC|CfM10AfSTv*ngB+W>4HM>gX{Z4 z?*m-rzMJ^1%ZL)KKN5xHf$IGGzzJYVC`To15I{H|{pt(F9re7SU{S4ZA-{K?^CCkz z74iHP_I^sF0Et@No{@31);r8C!|Y~$WJRXElwIVHZMDam6A*^)tL*=e>yHttkDlpP z_1$6&>vlV761E!g_5(liWn<}~-fMvs_?MW!=}F(5xBl{2BuKAK1OH&(nI6eC$+~RO zAz{3f&>yN$zjZ5%A3;ag2cDT=5+yQ$jm=l(3ZIOI?V3XrpF4JC4BytI889o1wrVLh z=5IfO8YHfC<9C-i>on{^V03fX=9_y%4qyps@$YyBtXA|b$6n1|%gv4yyG7AY)L|tD z$d7lX2CjBOKfUR&vgvnL?jKb6Yc1|1&-5ZcY$H4>1Wf{p&c_$k)SSu)AQ%=IW&(V^z)<%`8Lyob&D1HIF)EoASQlxg0m2i+I(TqQIP|!M9|9 z)_0Gc@DHzk-cpN$B-U#@+v?PCkZWoi|EX8h9e34)O;b`LB9`9;5uRr!AOO$#J|Sgo z$$8qhb*#9?g*2P~lhQ_b(?P4>o%7OD6&Tg+AhChM3T}>c7cL9Y zRUK$|{|jR1`bMfAz4pzj?6_A50TG)AuV(&_h~*ZzQ2>jm!VK3MxT1rR!)nssw-XZ^ z_g?P#-EEaD-h2Wo@ATAW>zvyM>9DjLKzc^65qh71^yYZV~4Cg~P zbd~3b^X+tKy+iD22%j9ct3zpkw{A-f{d}8_%6rLDz?s(&3u)Kg7#z^ZK4B!*jRGJe3+~L%_B{a$M}y zq1zH6ou|z`dp&V!4?%!myKp%VEc=qHNSKdq$u%EY%DQ3%?&^tU zH_5Tu8ic&PTcmrSsKoe)k~8%fd`>yd{vX_ z8vRHTr`Su%qbGmW{ztm+BB|YcWy^t)H3-T+v8)17vD?m%om9^jubx$(f%pG^&~0nG zsX*BZv%%+ocYs!coS;hb#j72b0sKZf&B^N;wchci-Wrg7zH_8@Saqc<_w7psC3XeS&W?tI4 zY-sfl4)Gaz`OD4$B(cfEm9n@#gNjRMu9WfF>76#a06!~Gf7xX(6+0W-_$99e0R*&` z%vHU{htdbo=98kkq$$y}X-Ze!r=`Mf8nqZcV&awY+I8gP#v?NJ`oGFlJRnXqB5^aT zbu2@NQrP&{=M;igC0J$pNKU@uHHP9Tgda24#VHK&G;nBA{Sz1&!N>-7k|6OUph@fYE#W2coh1~JNG&W%Ct;$rD-9jL(F)&da 
z*(|qRviY58WuvXcIPCxZ4sdS%6|j3{+F)ifSsuxf79=@cbD343mn(?F>lGNXCNFj* zy+RL-I#+hNeGKK_5zixc_HYchO#G|S^nqQa*l;0FTbgdT?%kjY=sUuQVt{w0Y*qiq z>ft$Kbz(`S$_o(yZp_nvz^)rS?{=sA7T~GbOVTZhKR1FCHAw@^1=2+x!cMAar7VUndoOG#ylv3EFHt!S|6*J z`4Mf`+jnXV)Z;(T*TR%M?5s5d>GJve*lKDd4ja;gUC*eME~FG zcn$SJe(iOlKy;98&#QkV(v?4&{U6HP7g}T*BzSi4v`eh`*hW5C#b<`Gsyv{{rVPtH zwCSBhPtFLLc&qhU>!#jd{8JE6aMRIq7&v zOCIwba%pRkWPZ^c^-?B5$UBV>JQB1ZEoCf7`s5){k}2fq)$3L9lgEJT$dFY}IE#p= zi*cK8?6)Ep@u~Q|#jBNvqUQDK`ML<W>8CTb z>F&pZXZ5z+>+WePd%EN%wjWHCNGoD&zxC6eidI+^5`A8r)+}63QuOZpepkj+Hni+p(rbV4{OTQduHZky(YQy4dzo>bm%aOeY<|D1e&qG+? zwxr+hY+1mZ8E+Yp)v2|eea6(M?Q#Ib$HG5^ek2s@SL4sadBK4L9^f?36>7i$SOF7w zl{-~N%Ss$C-CCI?9{MpDTPwuh^Rkw5p%eGXSPro_>y-}SX4IHn3$@83LM2=WQ_Od5 zDzkpFsfQI1NIC|72Ee$e@BK%&cZx_wS>??Rore=7eOoc^UF26}a(_H*mZcPaBIncv zvRp;Tb0mEEo93;>F~5+NhMa@P% z?n}wLuynHvv`jK}M$N*u(Q4Q;mtwo8PD-*TmTCOd#Z{v+^&8^riWmx*#akX8>5T*r zBF|69g_`vEM`mehBLhBhiS0ern*jag#53vSKuoCBqH)V#nUJWG7rC#F z;r}6-@eaKZX2SB$p}3IshnmnKj^~Q{<0a4YH@vihjw?3PGF#+Zq@Xfts_X+;FdQil z9D(Frnbty0&kX^;Dtk6H`<8!&LWyM0e($^er}*_nQ2bOWx9an-TI!Tsfvsxe*eSOK z`F)`W>iUUj)yZd3w!b7_u*Nf=gCZ;iTs$A7TWSW%YXszefAMW#JJ{1A^m9GmLDgRo z&%D$Wh*-3LnIGEV3RkBy>#{;+-JTFR{J7m^v`q)+uUF^SQuA>2KR*mIUgo3jlRLCm zN#j4WxU{7rq*d(w>ufA6x? z!vJ%%gAMsezOhsOZSVYV#l+n`9|~Sp?C%jTe3ks#G1C>frH0wqOoWI}`R2^8x>(fk z6?qSeR>%hx=6Nkv6JNPBGJ!7bXV_RAHG_{LO$pzNha_`8X_9pyJZI`AyhC38iun9B zsgJIMu<|-K*cu=IpQfh3zYA(<0H3-cNFtfk zWm#>l8NgQY^uX@ntG;Iq2o6Q-l1c}QpSART&NuvNibe3A+Mw)#g8Q4Q>DKCgn4daa zC4w)4nOl7yA1yVOu<QEZP%PTq$RZb5kx(R}tY=F(pnl9iifUO+7+$17; zMI@;$BUZuWCJ-h!_rYJDC2nT}Mn--i_w}N@os9R8WC5IAFzzW!cbXMap@KI$S}bTQ zhz67-d8msa&cA5!Gi7e~xAXOnhzy^ybn>QyUqhtiHpJ~AoL7?Ob7;X!+6;D6Ev6Io zQ%*K92xs=DTWXq=<5ga{orxmp+LG#SD|-pv<@Nytlfcs{N-*n8AA3uyY*+XD`Gdyy znYebTYg?fCB-AE9;De=eP<)ld&tz)jGiksM4nGgRi9tEIKeANCBBH&`_hPnOH6Dn( z5p!5bfml1D?s5}+*JvkUq|oQ%KJDTM)m9s?D�u0(*g&Yt2U`7fo@zK~=^(D9p8+ zNtx0xNA4LDp8}#e{E7?q=Nv);qZHr&s#xYsQ=?GR68Er+kF`{9lFIpXgn#Hy9+j`G z)`Kp(_OfPoQ4v3ql(F9cW10F|YaIC)DswxBSNJtyQ7FLC1Y|n=jz15s`y#+Mc`$)p zR9$GaIEdb9UITS5UfJI@`rY;Be5*c0P_)Th-f_)}&UYQ=Z8qR1@>HP|j)bCQ2tu~T=4KY8{pE3-+9xqZ#|FX^`*i0|g=2R%5g zrYoPw%uJ8sru$SD?$Zno3>j6%q|UI5;mj|K6DqYn9xv4Cj<5!%ONZ-F7ai`YxF5rNiDgHSd$)6zLIe~1 z^WEvIE+kXt;2AMVaN57@%MgK8sUfjHXws~z)C|pEH_BauQf&%PwbSS^m$@LE7WOFa zmUdAgYYp|Ns4eqL(tK6}dD=t2b(#q##I!pVRJ6VoXP?pMpa7;^M4=?im-h61Bcmhw? 
z2b-y;yPuSLD%Zi!ANesJnKJ^B*noB@`bW?7@NMEvEzH{B1{AvGm^ zKCoD={5d_|fmIi3ND!|8Y}+5J$eWz1k*U#?we5pVpvCZjGG(M=KA4C~4ks{GhyDVh z{#X}q2J9It;(7}z2>=JsqB!&KVdI1Yrx3@=RKxlQ|0-#N_!8+k$@;%LlO@XJN~wH9 z7H2DZDLB-=YBAIdnSntakm22eQ&DYj%?TfQ;rGB?S4d<5=SigS53Qq$yk))?p8VD3 zjsf3!+0qx~I^EZ`DO+M@*DyEa%GQs~#Dc9>79hQukRAc}Jwkh)32w6$hIeWlayX7V zNnY^o%ztttX1R9dMF4d|*cI;W(chBb0^NGS6TR*XlqA%`S`^b=&)C2csVOA@cn?4d9KAi03g^MU>r?Pj8KU z-12Y>Zx2Ky$_0A5{<7p=PfRWxj^vTDeX<&UWX=!`N>?WaG ze&%p`{$GHc1SOelg=27N9E^j)%r57)k*%^Vo@Bo8Zs9v%P2$cZ<=MbqULCv#JZCx6 zgCa%JTsz`=TZ;&T^CXkd*K1dh(=3lW-8!wG{$BoO>j?ndT+UsF-G-W0*Vpj0%Nys& zm{l|z0ile9> zG{SoxqSf$ng%KFkj`mHQzy}bFRy0E>t*?oF3(h~?s_ z)ImdZtb#`DmMC?~h04WqO7%3`W;Tl2x2X#Q&}i}d+lxub zi<$dq&hTO9-be7Zx&_Ghsjq*By)MOR3mnxA0v$6lY#0M#N6(M;rxOpJ$lA$j(p~`( zM4ZbSBoR=~<1>x^z=^tfro@BuSq0tRklO4Xvz`HyaA*7diLSiHqXXee=e#>|lDHwg z$>NE#N1166k#TfwkA}47s{7hAa=`1*xSqeZJ=Nl16Rmyo zHPj&(GAFRM9EBrULwxyxjGUh;YMvS{7|GoNwK*9BTM7fjNP)jue_E)b4creaU73$Z z_Xpjqt>OYRj)aSqLQN5v98+vpV_e~mC2d=nQq6yiR$1h}JhDn_*^dJ+ADqA`ukRCf zaEPv;nP?~#hAYE-vLv{1Nb( zzE;i#5V(2U8%~ZYp&~vLkJa6YEm~@Z__u6+@9O7`Bh~4?du%etS77)pN2if}1?mw2 zQ8dv$yq(uW8#X7k=gYv*^2VGX0J+J!m2*j#5kiGu>&bNdn0sXv9E}w^cX|!_qkoQe zb3vm%xRcHYsCBft``}F;=W3Szx@}!rS{y1(rfbi&vcVRbg;;UNN=8Phw zt6swchJ#uga+RlHtat&l-GHXH;eh%3hiNO@rnbyg8SFo61(RYL2GF9pQz!H>Sf7^E zR*O>WKc7a>Y%bLVmn)3`qfT~#*r6WL!3O_E7MnD)4a)ZZ)5^72&s05!zxsnN`tEbR z`&Vz*X2Q;HjJmwdi!;X;%xWUPHnDt)s}`!^cFB0yCD ztN!{F`e}ns_Kb6yjO*ehq$g$-PQRV9jg$`S-4Mhin><#$m*F_TQ4vG1xC*`PL{itX zgPI9wo07q#H7wHpA1RlOpM1s1TSMSE{~=^y;Rm2qG^ulDswF*66wk$nSb8HVYZ5i| zozaV!R{(xq0l%q3IUj|gXu(c7%01vaa@2Mu1kFKF2P^sdnr>kxb^5f7T@U$B2p&-P zw|F6j#LkI?>Ga^*!y3>-QR&^Tq}Vi4 z#B9jpIksM9gG5^Kzgi%=5UuM1YMV$Hj-xd@e8VXruf>X8Msh|=Whh|ULyF{d^c^YZm zJ^))^JK%zq@J8X>3_)K+It0DyVvWz>_g|=PvCd>(KM7^KMVoe8a-k+1#t$6gW88n% zKaUi<@ap>H(nRRO2s2tjrSalz45lwLtKw`qbKWhRKYsQ%` zx5{tvH0LeA#U+ny!9{@+|4T;Dv_;NH`&mv~|M0gHN)q#v6ibdZa17+}X@XeSC8!$& zazf{C&mf+qbIfbz-x0*O#RhpOvnBG5@erbO$*WP**MN_AAouo8Q=KdSUpI*iu%MSa zh>w+cq0)1uP9mmiCzM!W1sh7a)55Y40er7~CCPc+9={?XAv%?>z&52QjD+M(^9t}f zU!!Z+mRb65t62rlQkUxeoo|xPcoN$DaVNbi53l9%!Ip#^@x+wEC86O4`g?LEO?nXt zO`~FyMq$^EH%*mLm(?W~^#){$+T?SGVRtL(J4t3l!L|?nJ#(M8(H?`*I)line2HET z0T%&{JL})2xWX5>jG_of*k5rhW+u|cR}$Dv0u-BgGW>6%D1%4UbUb7w>573YK=MqFP2j7cc$){Ew&+S+<5|Fui_Vi#z`+T({sKHb3?2Q$o0} zn~Y@s0Yu?;0>p|Xbl@LvS&l;`%Vu{OBo>a2CNB{oCgK0vz&+6PL$y^1hL4&BUfu zY0ApQ>+l*rh9L0teMYV3fEg9&=XQa8aY?|Yd*U>2oYigl2v48SDQIT2l_~du9ag^@ ziZcN4p*-t(wdu7bw|_%PPNiwqR^rwYH(Y<-%TQ_dUHo8^NSl4Y!_)&kDhjC3CurfV zk9tCDmrj=P%;J2;CRmYm;r0o&BfXa6Hwh&tEAhwjJVbs^-tT?awpmstrUhM$fSG55jz zhpY@bpA-Y+r*P{Wn@G#t$2w*9@MTX{Wo4QO0}dg^s~dHKSf00;j8W9l)Ftm~EE&}mDDGMIz!Lo}xyrB!E~f=ZHKfo>Sg&(^t*i(o%cNB?h9!{;WwouSTQ&={8%O#OZ8$x<`1EUV3lc zOO(}nUruUt_fz0jX_?~ib7g5vWb}(D=cx;a3>~D|*QHPg3HE6wPOC}TLr{dF1Jk_i zXt3U4*r8lf;u))Ufof)JSt0ysahi0D;qH!W=F9k6N!aQuw%?p!T`u~jh z)Q7B<<$@=YpM8vAb6w{=&mF(}?S8gYM*Jb4MOga!Sky%O#W3xn zg}sHyhQlv^8m~zDe_7&gw{#p*yOaVF(M$oc0Eh|RckfwwA8#0!f*xxPh zy0wyOrBwB-#)_iPri9q`o z?HYgWS$$dFvdYOcKKG@)u4Sg2OW1Zh{%V<(WXX9dkGR?Gr zl)2h)t{~>zVehh1Wk%0thXVcW_y@d>LLn>q?SA>_s*3ySV|jX0VGtVYp?#^w=m(k| zwBWh55X9|+uGVO#q*H*K>5UC8S!;uS2M9gpp}Q4_E6m|vT4LUBLuH1cray3gbmBlo zbbbze6gTe<%f!KqN`%;i#!uF_W+4q2q=KfT5lZNvxCb>R3ht9cEcHxW2aOx>A4`a|S zomY$dXDUjH$R4q}A5IfKNFfR`NJ{XC{_as$&l&h=_Kv2)n6T>Kcgr_EmIcWCq2v*y z7~PkB3*kZ1To-iAdmQi>Y5x2uw>at6?#b@JMy+0BCpeY_PKtRTbmZ=@_#7x@rU5*B z!tMlc`@T&U#^9v_P4?gfxFM$p@a?XM>x^aX-u%=`yP#e|MxQXi0eJSat1IiBy@Z_l zi~a=q>q2@050?&Ar>MdLzr`tWeaDnzEUz`>OPKSBiFzxX; zmEY+;g(E1sOw4SX-3%95(spiTFV8y($zL93$$SsfOi;3 zpBN2n_s{WCi!deahta0*k{EMMbF-qI#tj;on$j)!;LzHp{Qlm|A}w-S)_BprS-aG` 
z2zdSDv3QSo(%HthwaL=O7nC5y9H5wUxOM0kQD%_){(~GX)}g#pfFUZ*xwK$qO0mF; z3E?YW!xI_&?EfG{9n}+gI5QCJm+e2Kj;7gC#b?RI|F!9R#N2dJ(Tl^nTq{ONaS}m!yo~#o`4u zD%!T1ao?0Wk-jjSFhPpNr*Ht$pc+^yE)bE)xt+5Yj_Y%N^#m#?iAjn+Z zHr|Rc^<%zbkyqc>q!x+9p_a?om%Us=e%5WPE?Ca!c{A;Q0shF(m4;#kQv|$WjW%kw z=X{pDqbItW$BG|qECy}1#GfKC4hV03Bd_1se^-%XhGXYD^*DOX)&G%_*WHc~%H zNSFx{Ol0I5G!s?hi&Ye6|VqlZDkSMn?t4yfePn?qn;x0mCiGH zRIF94h=YM#Ji<*B^!terb-1cG%QzE|N|+h!zn~XOJyyn;>PKU4w@{>-gVP_}gmb(u z9PO!C2x$1GpEeH;tVQR)rVwWXc+-*B1q@MRu4O}%r@T=2M~7V0G?c^9K|89?2TuUP zM*Y#W<0s(}unJAO3AWzEdEi1ZLLV8F=Aa>iWuL8l;F#HCVagAOuFJZ@F@KzKCarM% zpF#=V^==C++Tcz7Zh0Wx!LDIT{aIuXUZ1{NRYP0`#@%;G=zHz`q}AJ zKPWz=pGl|NUGP}6q%^0D^L@A55Lp6ZYT(5ZmADbS!+Aa_!3kJJ3Oc5P$8TaUnPv`oXUQh@C83 zIx@H3Wq%r?bunr&Jac+C-ziCy$xLr%ZbD?i&2b)oMyyB5?Qxv6dY;F{?x9WHnIjnt zwfPam96KR}xe)oAY9Mv(BV|i6fzp)-oa`#PS9+Cot1J?6z1xot#!)Pv!f;NM1KVDw zRoo5)xDl2Hh;E^yMrzPv?CUVLx5`Tn2}Ar~ z9W#uU_#C`Kc5G>I&4U@GUf4O`T!SiL+V+WKH%E&%cZ#kM1vQsIf5~ks`#2@w>XUQnUnq*_InWQfeQ-9xk;^or0v&jSFp&vOq`6me8(caes3E9t6O41LjBZW~ zAs~G~>NVTu3tKT`L)}{FiLR#{_5)Au1>6fh1w8_$6%vCTet`X@8>>2=@vcfX0=1w1 zqB(?V-84_*q&}{WrkwMF`C@m(4&JWTK~pmUou_N=9q3FJ>C*xOu9O(cnjwPW`4jti ziBH|e=QO#c=V>(`)k6qb6eWlhDMtck**LvtVnpOZP>iXHe%JUM!-Yhh z&gH#OK-67ulvSKQxpRy0`f=*UT093C)=ffU@lqFs42rkdVS_GQ4jCO==cx~m2`)A{ zA^y_es_~CLtTpS;Ft9&4# zMz&tPZcokh9=7|;2g{cdOQb3oY+Y336RnKci+mufbD$oe4tt3fQHg6*Vy}nrn%^IU zC}SsdHRjv{uj3w5r0^e1QCM{W{7CNDa@*Ny^CRaf{gq+bX2 zxzNr2d_}0@5YNjaVMC#8m};fKq@NWR{z6Kd#@mNCX4O-TeXysJ9B;_0$*(i)fUx!J>qyR60Z0YPTKf!NF~+p8l5}n z2)324`~D|lueK-?hOaHapBotyz4s^UX-R$;Jg=YrwX=ZW%t*ncfy#;Y z#QJM#;+0V+a`FUOrGJUFO~j)|as7{t0{N;iy=bQsQ0FpZYZq5hV0MJiP0i7$=>T%B z1Ib(Qr!HOWS_+39KJkMthh@hz(CA@0w43Wez6D7b=V_vQ5|Kr5qb4!WeuBB0ig5|z zUG(hT{T}T)?jl>41Pkszi=!u4peG~^kXSq+aR?q!aeRT4eA^(5xlkhAd3jU7Jf|Wlyuxmz|y^!j(yv?XwjIYTi_%tLxyHaCqsXp z3td-U_AG}FcZ_-VT=byILByKMmEt-~! 
zcK|y1ps1iB280{~1q#xe6FjuJQm)xCJCGXdF^yp;-+btW!`pebNbwe3aN7S8+%VYE z2P8T&`1T1 zY$s4cPulZz#Twf#1^)N~pE|NMt6N7sQ$UVhPtn2dn(t4GhJR3m={9GY1KL{|xdcWq z;3|d$(I2=Q8`<8;uY8UTHY*;FFzp=~_5I#WhI55hfar$p+4%IzpfjHt}Qu2pM{ zSFZ^AX~R!Brn{K}bPzVZfkF(gZJp+gj3=0Z&1*bq?tJIBKhv5>hr}UMg(Te~s1~?W zY4yJV4pd;dUD?x)$92{Tz4mbNeH%R{=i3+Zm zNyo;9zUm}?vMDFqmL^4`V$-fY5#cEGLpS;(wpfLyid)OwxoXtfyi=yJKo2mzLIw{w zCv3c1VsW@1ZWk4mnJqQ5Fkgbv?BkM-;o1D~bcUWB*|=ImEkg452Sv z!CD2A$4jnZqV!jL=z4v^RLKrgE~noGF#p?mfHm8H7)>fS(w>QG89P^jC2!MIMmCVc z122|5pN+e$Vx_8|I`Au_@+*C8Uo_;K=>-c3ZpjG~?AjspffC@gCyu#*SCK3;Fse z2HD(MHZOWpT$;qs4+^BOONYywcP@Z*N~bRz&^Vl)`YyCR zXFm79UvA68sY`9_)$(dG^!3QEMi5BwWMPaQX6vuiq5ohbIB~vKO-JoApL2Yhs~Tu~ zP)l$@Jy5)h=8l^3-+z9PZ|#hAe7Ep3GoVSldlx@&80H*e=pFhXD!467C?;X*aYMli zw;<5RJcx(v0*CN*oH~Om*X$ZOXC&dbMg+1Z|0o|ECNuR$W0Wt&&g1(s*~Y5;sGl&G zq^4a~GX8!d_WOt~2wcEiopBQ^SIP_f+VDwMO!(EpR4Xi#PLgkzrba%E(<`l&Tv1`$ ztdw*to}7Nuj^DH0Mr*qYnA({g45d3%os4QCDtpTAbxnsX7p+)S)Apyq);ue@I*-}i zJcbWn<}bzRoMvR1o2!ctE8Fo_Z73~8aqKPyVOe6Au0v)Qe-shBZI7x0^W>JH4HR{R zD)Y~?lvR-Xj=46itl|08cS9=EM)Z3~8yF-^!UjoUi0laqU~q&c@PBRht+{j%4}ZTB zCHj}OIr!CO(m(#|SE!=ftBVFcukd!tZKf zN}}Z?FUjzc2>q<+f_u@37n`KzYFT7(@8*i)pp9(v!WHF}BZZz%cF~1~kN}`15032z zsm>Xg;PNOFe&@G!y`mJqe6f9GE_jQf)iq z-X?D_ws=R30Q?mF%BOr{s)}g5-iJtiutyKtPXB1XPBJUDZy}_(?#?ey5?yR~7p8Xm z^(8wlofl-10inl(cH~3~W$UcC-@EOb0Ql@N9eq}!;6IV_%+ELZ{BQ_2wol6N`h z(ij28txG|aEJ=b*TyMXT!+4{=cfT_8eog1zUbi{I-hbCNR7uOvW^zWWKB1I z*l66}a>OdhHWheQsP!bc)BX0bIEc}{{-3_j-k_deS6#mI_KLhe_swUz7w}OKAF2rc zPtZ5CAp-vBEj%)k=UXk~{QXN>K!npK;JKm#{vX zv2*^Uo6Nf%CM$SKy*CiNBL4qO3kS^k7?CP3iB@4)=4v~&iuqNo_=aA)-MU?FhWU|( z1y*m?T?gX+Z|wYCFG z4Vikziq4#g`zk|31(H6sbikq6efnq7y(xBHwR>xL3q1@+$(Ov_UbQ_@^xV;ZyLplpX^GJMTpJUx%kaaB7VfJi74I5fK|aNT+h{ zLQw5ycGp(T6XVl#srwVdLI54K=k+)Y-)-mAQy`f;+l)|Vb2IHR{{QuhlB;ZOV8ny&DHOZc}w}U`5NQ zLbDYha7i~(XER7ok0#zIJbE;xlr`t@GIwr&BR;qSF%Ziu*xNC*xag7S4fY?3B&B_1 zY5z9Bd$;I&X7oGN-XYi8B5%=WNhr;U-Ri^2{@t3T&51;cH=xa}jWzz9{ZJ{DG`!_N z=Uc2SjSN+Fa-RZB27_J+-JDRjeFw}eWS9TRZD3%rp{}o69xGrJ$YGeX`gsBW_*I3j z=NBh)J_X4nvV<4^mDA=zYgOsVS6|7y~5A}p|N_IdP-})b|XK<_dhu)`IR<{L#$I88}{V$IytJ6>F zN|{1R?x?%I*D35u`n{5MG@~_FqUtY%O0soZ_On;G;cHj6R7uhbeM6jS8)s0xXrL7o*_qp8DlX1j?7E>)uvcXxG z(-lHyRg|a<7}u)(rOny8jle9cS&A&DNFJFLy@gsZgqEq6 zjtc7xDH@iv=`5#Ls#k&It8FoY3&Z!>hsVBdx$fJaT}z}`NEfg%H~7r67crseG6At~%>c^;J%wh~3V=`l0$$ilrZS|n+i5{rhK zhOzR^$~1pm{(4AI*W*vvuFJ574{AqL^V0dk3S!nWuo#Ob*)&5p6^IWV4nU;U!+G(D zmq*&pW?#3-*xi#$w69+yu>|`=LITihj~n*SG>xXF09x{59F>WJ?5=INVIOPU+#xC2 zEZI6OP6vl#3yo_^4C15MmsQ@D%2Bw90ICDi2ojAMM7?0~2Yu)32)U@BN4B5^+RXTZ zJnO0^ykws{>YV5DlJbD}>0pWGe8CC9wJQngMl;4!?5oR)oDb6z_DUmRGVXC&@tQO3 zIt#mL;)G~iSO~v-A;>Vx$hd4j1S3h^fv10^hV6W0c`tq`5fn%&Ku~clL&17tZ zzedFxN!wY*IZ~E{m?TvHvV0j+65@AE815#!01g(0erqU zIoEGEyR>oXex6c&}#iZuNpHLi69xk`t|3tkkMKShTP z7ll*<3Ft8X-l`c^vz=;%S&9jqkKEdri(f3Y>ziks-2B>3YInd>ZAfJs`f91CQ+^V_ zNSxnwWbfwqBQ1URYE>Doxz&9755$j(bf`Fd^(`e2q<{_SShGFyC-=gNg9X@ZwS1Tu zz3F)MkJib}6r%$`V5f1A8DucF6H9wq%jWUnl!dyx%AHtRD$t>zeI6trj(lrMv|EV= zJ3c=;aJK2dcKfAsabEkyhUPdyP1jM%M9pk=Y{%zT&9nQ(5=(>a$!4-T9yJp5`VeBH zLtr!Yq&%&1DZ=V6!kAsho0R)@fj#z++dxqca>*rOpX!SkkLYcOEIH<8_^gLiE6)6SHyia4=uK+v~ zG~d%CzfoB=3hg04a~#q$`mQIxJ+Pv9$n(ETIKYkhM}!mh(4qlfNgjCL%M1bLFko&i z#F2W9u9h+mNO5zPCMufu(1;o@J}%-Ic?7&gJaxYh1DP1mDn)OUO6hNxSnpJmqdOyn!RRdj2KW4PTX zqeSr&&iiiX=RX0wX=#fEbyn$|&$;!{uLl!r8KI_MU;|< zX}mK&mi>Bdx4SGspvb9wcp*Wrn3FSZlg-|O|25CfSNYv6Z0e3Tm?73=U7`x?6SFIM z0-Od}67(GEWAd?~-oxc&nW+&B{_(ErEgfEYZx0HsGVc~hDUY=eQnx&b8$~lk#%`*q?PWKjSD#fRsZ`6}ZNDXYM(x@@SOS(QkoBYMF+gKZrzN z9ds1^YR>n4G>mz~b7hDyBY{vRonb_>AH*7A0B&Bds`2@I2EE18BY(CpKn(j 
z>Jb2>22@~o6JJ_r9vDq8O+gG{nO6q~g|2=L;fmB{^@{4vpDV>#2`gngL0HK*x;czXGkFPaDK?%x%i~ld?ac^31E*aNEJj-8jAd57JMCy0(>n z0?({OB!wU#BA%xl8o}|3!z7ZmzsjYK)%31E_Hp=!Z{j}>t31kFL>0E417f<}d83e! zIc~YH>Z~R%o@13lta_w>DCRgi*>-JGl(e%vZ{oIx7O|yW>DO`Yj%ZyRk)EQzDZgla zBUkXZi{{YvYet0G+_XTl<8UW}Jw_|_>&3cy+@tU6bJwMQbbrB7{ugTcCcCEmLbHX< zoxt*L*;j>>atZp3*TZ9Q3K@c^k@}wn=E@Z3hLiY~o~O;$H``ar>MA{caXbbb9#GbP zlXTA?C%y@*H)yKS&ksP-w#bwm)@{#((pdpcIp-$7K%-hS z-2C4Ylvaw-nCActoKh7S$j{+hsoQbWChnM}GidvYk zxZ9r8j)9v!g=x*<4Ne|RiI3+@e+_E|If-yOVzazqt;b58YI3xZlF{!xo;}4d#}YTq zCMwnDoqJ*ln%$Qt?-9jQ7XD#7r03qcY1C3%jA&L36C6Hth^C=REFV-m_=B-uRRek=NAHT*yNG{!e37m1muVah#v6dl+fQl_Sc= z^2$t|?p8DddY@cSCxTLZ$0`WnYLwzjcOpy!*k++?x7-z5uUhQoPebQrS<;=2OKUr4 zbuPt}^gi`@G)tS%7jMX083UzbS>1^KZ#aSoD_u5;;y6|#6o7AU;%=48i}uvvcpkxtUS?*!muf4ItKI5qVO+eHfZ$`^7!55V$?@(-oNhWR4@qJ5Ory zF$-N;^buasB$;`2MUG!G72T7LYeK~nQu%@fZuk|O4x1_4u!G-<={z;3JbB$1JC8W( zD~6^eL1=U1IVr*SE@=82#(!`dC)d4ox(|o$uT@~WKuaFgKZdkbC1WF=xvu9#@XX&U zLk4*4E155ABjqt!qoE_tJWcT8-~%PBL;^Y7E6TiW;eAThHd3v$k`B>dTrPrLHeErU zIL&i@E%;4ktJ|@P*ajmw0AN*L5rvwRb}^@&!n8SLd{up?%WokJry1@lq=--y^2yJ7 z_Rol)2d2|yca1WmIoh2o%e6ff?PYMl@Nv?t4)UswB@AtR)$efbgBHM?Hw`QBSueln`ha zj9eoepHexdPjHL~+%wYxx)dUCPNR&y?LiPXA#?au*zMyn9qKt;=N&6DIm#Y{`(gq*iJBB;3CvYw>`A@UliDyp%^Ao|t2hVvvl+niN%3NZ}LjCxm18g|j2 zKZ}$blu;rTgqQZ<9+e~)VY02y;Z-G8^GXLKQp8vS0IsJLqoKnJe6YD{>SU2IjC%@X zK@^PCG8qb-w_eoPyv2XL#w!;Yv=Nq@nh+QncS^huK}hmj!*Cumob(u|VnR6G!J>3A zo(L31jkh-TiYrmK7Yv#ISD5Y9jfi1fGCZcxFbBBg}T^iq3gh5rL1T5xk|dlZwqrZaQvpsBcSmI8U=Qs=ywg zRBtmV3eAy@NUX$_hJJi=nvzXOs_Xy^^*E@PD`#KUlqaF36~a4tK49c>Y6G<-A-U(R zP>M+y7WeH^Ad@_k^sFkqEOo3!TB_R@Bvh3+0D4tBs7`P{K~`fsR|M2^oVG#cvW4%V zwKU}$GAlGOki_*pD#+T;(2V!RLL+7wKD~`i3imwMN~D}_%ZwZ)rp1`uk%3;`l{_~R zM<}?DqV+XGMwU0)b4n&{r9mVf)r6=)=yXt$gR!fw+OD&w?uQ$;{Foe*Ts6h4k;qnD zf~1DeYV7p8+2Z2^9fdKx9jfd4!lsvOSWMY$FM5b#`@ZUzB?)v)RDKVid5@(qk(MMBXLi{uT0D^XW zQ~2qo>3UU-y|iu-Rc9k;RX^Uxr>%apf5ACF;Gf?PzhRF6nXIEr?-ObfulAO$Bw(JO z>d^HISVuR*PMhT1@v%8O!%0r0zDKG4!auS`zX;3l2$yTZ0GzM^f__02$_-*s?D z%t`K`^siLZH4u@PLEkwQo^@E~jLPz0^x92f>-zfI#KC3Y^uez_rejK-HzSc@>ei`w z3*s*kF&N%)-ni{=Su!vB@O`Sch`dCshc_8Mgx4Xec$CJcrVV~`o#$#zq|dF(GO0T| z7Cc?5z_t>*vC5$*CcL}$NYiXSBYxF-hl!`ThUVA9wrA|JMskKn7$fR4`d2giTYPvv z0PqdOwrmaM!WgZQPbH5P+ke3_yaB2BJK@j#Bzo1jOWk4q)hiyScOy0P+$onsGtU+_ zqPmOse^c$Tah6lVRi(6~tbM^_sllab1~p-v=ZfU~QLVMnVzQhQRqnh*T?#V{fzB%z zRn*JJfsVav_uOVCDwirh4W&j-cVDQ+|-NW;ZAJoGhMZ7)zHVVB>U z{Q?RuR!H!n3EuJ~6C<|w9A|-FQU1Um3}TYv=i*kOYs)v0VqJJ6_?TDDpAJ4AYJM~E zRkni^%eGe@YYujf_4*a?Z^Bj{6Yzzeho>k=t)mKoc;~sVJ3GayG(DUitoCxeBl(}iV51Sfl?wU0DhRLmRQUH7}VSI&?h+eTEMY z$w^r8u^6W+t=t@dKIcmDZ`wm#0K=i$G2G{LiajgYbqmOCETp@ZK&pyZ=e2mIw;X;e z@g&-W=omXbixpO_N*s}Tv$g%5?H0<`)-N!S%a-=9qP#z@jW^47 zZNYl(HS_nvm{&}iIQQUmTJ+sl#3^s4LYDFaERXYh3iGk;SDdF~(#zpx8h%HxUF-H5 zg{(K0mpGU&&l>is*B&K%nS`d26;6j8>oU&3+-fn~WDV{CIIR}8 zNTWMQz^h&o(@p1yA^9?)vU6W!O?wJW#UBBPr&Aj>IkV_*hE^v|(XH(5zzZB|0L6L@ zl!U@Jx6O_~JXZtYtG9;MG%Pv5BD(u&L4Bcs2d6dixW0OoTRxiwK`Ln)?GQ-((;>*~ zT(+%aGtBd-B|#<4XzLy!lEMRSg{e;MFbr7`c|Et!H?X5T53Xu-(@cks zD%Gk3*YvFmTV-LEh!1mF*2TE!OlT^I$Tkzi2w&w@u5F}N`Ii;e$EFDs5Tg}gJU4GB z{o)RL*F8LBAnzl(I0qtO>puc-~3g4KqNxy9#+Q zKv;qAishx4VX-wiVI#h`4(51RsXA@?9Ng}ar_0Y3QI)0{0E*l34}m;u;mg0U>%JPg zw3w#vF5&`*9=z3i9S2{!8*Q{%WXD~Jn)H3OIyCfa?>S|cV<7$Na{HN^2xv|@$F6JC z{sI2ipB_JEt!mR<_)X&XvX;W&&utsTB#j)55*4~+R}*RAuNGd*asBL-y-!mCmgcU~qe=b8>L2W%`(SwE_L%UkpTrLw z_`mI1xs{HnzGebRo(PN_0DY_UBldCe?b8S~tEietmOZnW&*jB`Jzov=e;@n@@q%hz z8P;AoBa{~i$7F-tbT#``{{RH-__uNKOTcYkF%*wYo`8Z-#WjA^lfv(arlhrl1uiex;W))<}Z!k z4rGwcEuD(uGEUZyk@(lxzYg_SJW=2u>?ofTSt*T~x;8u4j{H{9Q&f)b(!}Y_%y#o>VcIV*=LMIMk6Qb?_L2B--X7DJ#7%9-orzR-_pbrc 
zeiA3d&m0Hv?ujf@nU$_@AWg?;?pD7imTg|8cwn&+Zc))b=^xZw3y7tb<@KujMeL=k zUT3WML-r5&z57f4$I0;{#=b3v(&^PWib?_WUb{{XbNi##pheJ{iI*Y`SZqbo#AAeodbCLnnXgRcbF*JgRb;j$sOP!xU@IUqg+dE2%T{T=qY>XVmZ4bvYjI-;GHt5eF@Sc{@l$psK4z=9ra1FfzhckYyTz!lrTinlf+iUgUjFGo^{-#+OA#xkb5z2r z9_zA43w5X^t*MoOP^2ft&Fff@{v*=S!3 zGzjC5B4lkMCh5l|fhN6gLH&pS0AyHvWu|;hpWvs5?e+aGPcqW}*2!Wh{{Y_FqXgGZ z8V?fhEe{%|dqWE+7{=$~=fPk2Bwy|C;;lnW@jcJM>9nmnNn?~vs@)_`vGvK1JBJ>% z`h)vC{=%9s!rLoNd&Ksp_WNq1gnhilZu2z0~_6uZDafrD(Ed3sfpo0OO|>qaCaob-eEz z5Fb1fQ(N7|c?gCA17p2q>r+W<8zhov0MmH5B%C%l949vUw?1R|_wd~pOq0ak9fg&l zOx!>_j^tO)de)_ICD{<;`PbW?E7R4{k>d`fcDkO7r>%T@`#^X;ZvgnI;R{9_LZioWS5XQA(ten<2CuKrTC`vOVn+(4Kn8v+qiNMvFTrM{0sPvY2vR1 zTU;xy(-*dnWY)ubgQ=FXoiD3Rq+Vhd3PHf5%^>`s;;IYa%k#_gQw`}d zU=vzKoR6J_bmeIp0Erh2fw|~vcl1pAD!#87oO71vqbl~K6zPf=H5yN=-sZ5TasPK@UVcF6Wr<)o41clurB zgKYadFda=qs=$DF{*~(&{ur^(1j{SBCsVZV)7|hgAWAuM0K7 zsZZKK{U?F=imn=$)UglH>)R5JFfceI)S7OOaep?;MfrgFh&4vu=}KqJ1cAmX-kqw? zV=GAnq~&nodsmTpLu2(gRE1qin#P+gjO`TTET^2B&u}srum{qwYIlO}ZO-Mlbz*s} z*2A#?o(>Lc)WX$LR*~W65mJ}BSOzB{1c&MkO4FEAXjWiP-t?-m!WFlJf(1x#7t0Fy zEuOfp=gAeZ<P$zhI!;}uroRNMkSL8P5SkimKEY5;`fwn+zoD=i+U zO08O|_nB(ZKG4|`5Ig3rZF^u@B=*RvcCs=S+z1Awn(!6C2mEQqn{VDUqg_H;Gg{MB zXHP7)<^zM8w*-pkCAl@u+uSOh$&vmwa??^|JGW$x*{?qzg>7Ca`YawgohaT+?DTjG zI)J3-72bGf!_ps}V>_E1`&Sw8;{MxKyAW?ykbe$q+&n3v86%I%0CdkCYvS=9(iM4H zvCo=gsnmtyFNgdx@dX4Pam{zyJYQqBRB-Ku=%d=F(7Z936DyvbMR%GHgY@T!18(v; zAS$hEhMT$Yd4*WXHqK7cz};KT{ufPU-paa1p~ zOAC1l?8^>>3|D;XQ%jai@uL}Twmiec-T<-HtsOqXGTF{XGhPGnYv4YiYvx?(_FIl} z6-R3O3SAy4l&T?d*n8GSz2Hdo7fDprc- z*v7!~xOA>tP4B5_m=Fbk`>DJ zub|8GRGOz#MtYN-JY^ney-zw9N6gDG8R^ACCGsN`Tn>8r)YqDOL~u_dwPxJgx)%L; zucoIZCv)X7JnnAnvig#Pg*6iEG7?ZG0PR%cw|Nc;>F-lTYPjXGp7pv*spmTKQ?q2= zaGZnB;Zm-{@{$Sq(pxlk$8!+edQwR7xy)=7=7kx_=n}02v>t8Fav1T`6!9WPLNNo@ zqX{|;ti04S?_-mL$6D#-mo1UcJVSA^L++-Fou`_wX)LqK7AVUR)C!EDE0WkJv8gQ; z(C!l8{{XXEMv{zf&L3%MUPbhg;ZP+&8ysZxs`K14OY@xb+J#b2%^^PI)8Mu9WdnG| zd)GZl-PpH3X!LpuY)fMcQvtY;$N;IPw@Yxl80WP@UBJdhYnDyDjU#CFBbFs3ka|-- z>wr`8?N-tN!*g{sEv^YU1JF|Ej90vlzF5vSOsga%vPk35R6^7{INo*mFUA7yzJyiv)peBnk<74#0B z4@%^!8uc2USY*GId2!C7(N`0 z?$)PYuXv&<9ISGlJLb8KZ^Ueo2@K}|XOmpz?~4N?JnxP;=DfS(FUKnn1^8~`P?qs` zs*xI&`_0%_%GJkK%&MwXuH`Pr)ZntH(P?xtJb9trd};XWr)d{#WYe_6=FFM-OJ^Wg z-9HD?(?;;+wX`Ux79gQeU?}vjiT?m&+3);mYW^|QVEOLkBph>&)%FjCFJ2oJO!nr# zUf}#COcf-iwT_3%@s=~&+YKF}dJOuSg3cuUYd28UR4HOk2fbwtO&KumPt?|auctek56VYec%dlJj5@usJ@}_;veq{93#Cwej~; zze~23-qJ%2wi}ENzpZ|xe%K4*?N7s(*19#%m%b7Mo|&)7uN3?^)U>}HUs!9==20;v zkMU&ZHS=5#nbXE&>(#9*${Hox%==95C3t+EkFZ+J>iZlQ`d5|$hWgcwFT2wef+boOQ@@#J3}E+_o_0- z?8We64-3w>_sN;&1I>`~RC)^fgF%8jJ9P;Bbk9yJhVXWU1&m3w?ICz3yBj&$I7avB z?_WKP#LByJp69Dhj3AU?YF%E%3`LY@9+iWw>hY8$fDS(KH0$kBAgnyYpG?;q;yqEW zU6mvxDed0AM=_~&WP6wjE1K-kn_Gg^JcavwxjpNI@wbV---sEpmON*zSMe8!#hG zLDYqtDG!cvGHa-|@l1YjX`&7`{noEO)3r2Us%2g~X0|Q0cnJxRTRefuuO}5oe71~k zeJ&1gQ%N(e{{V!hNZdaCtKBnIHJ^#rRuu0CR_ zH-S+!s*GH}KGe^*qa0V+P{Go|r3fSQY*t}YF|Ewrgc8Wk$2<+ax0M|a zTIMuYjyORb0p!-4x}rxI5fXUhlhVFYyt$KT`aC}@ELQO*vn}fcBRhR74m+_M?swpv zu+2MJ)$QYyw6kE5(z#C%_?p_@5F(t8{55#AaMh|-x$M=i7}H6ypAa>`{#~<39PY`* zVp?7>KIT1QNMm>V)H_D|1HF(b+@EhR1rh2dP*}suZsG zJG(3Vw(>?kwW6Au5rW{Hax+|ouBeJY^S~X3YSZdoTAWNixvol-d70=!eCwL!{{Rwe z%l1Z9C#E>A7IhN4Gss6d6|bsXTue6>$OnqeXe5!#U}F?fj3V?gsTo(2zR0S!vP%0S zLP_)$de6hx;eZzJgWm?V{3CB_x8v;uXCEjv-p6NTVEH3a#y1hhYr0d4M^td}nmq0s z86i7lOmT|6Y66Tp)~ASLwuQ1WkZ?e()P^@;RgbtPxaZ3!sWGJ(PRzS*0V503y=mEP zW*G-P>oU}c*JwYjT!QJN!C{0Rf9bcimcfX^WZc*tYH_OF@ECaEo1`bP}l-B~4Lp!k3A-uuQHs>KYpLm9Z;sPdj!TOo50;2OiO2CTx3zs6@vGtoz@OR!;ytFd7m9AJZR`{6jWvJ> zZ)9(kU`1Za0IwedDj2#>r#HNw$orgSJWd9rX>vYRzAna3#4p%~;10hQpQ?E0$M+FO zaiisREl~qV^YeqpRW-M?&z{ 
zn7xA)i@#`3 z+0R1wMK!jw;r(|<@g{++T1eK@TY$F~xasDnRnB<&SAl7IBHcqGi~j&883`JWJ*)IX z_IdFC0K!S5niZg80W_XwrPm6o`ATVd`t>4+Q?i{{XY$@TRBnCePtdh1bNl zvp@D7)Qm-h(ICp<8;$I_&(^r`I7+#kTFM8(_?tz)qyMadE{V8jJV(u2Q}p1vme5bi(dpk zZM|>cr^Ou-Jr_{%4uf+w&8@QH29SR7S}-^&IRnz8h;lm6z;j9SrRB3aaJjw*H^AZ| zl7mS#cXuV%L-oh@U-2~hfttoyB9aO1)Wpfo8yxdiJ|uWX(@&mchTTw0xw4#O*BkpU z#NHP8bE8FX8T(97{{W;-)Hyh>P57hXw7!q~T1`e&D*z!t%93mDGJp22AExTNJ0H$@ zqfpOkdp#A;lfF6l9&Kkqp4s5u955xh0=`-OpFTNgAG5c_-xcZpDU#(i7%g$6l#CSJ zoc#Ii&3&cu5;>;xqTEW32c~QD+xGC$buR>bMY8e6lWnyw*UPzbnGE^BQ}wUNvsUnV z!mUYr%ISR1+PDkY=D2y~_=fE@-rFAcd3)h+kN*GzFT7RaZw=hui8V2MCDxMrS~*np zBd*?+^N+QIaOkus8gZh7liA0E~{H{r#;S82Jr3ZZ)7S5KhCZuNAvf->8ifNPKa zs`aF|)U@9V{hXkIJjoY5>(a<*PNp_5qCL#R7yIbbXOE`YKy@Sz=Dio-eb3poFD@WY zy>J1OU{qdDQR!Zz@LmfjG|BE}QNMS{-j((kt4bDU#^-+h7h~Q$F@GAwt|bAx0kZsAnCm#v2X0Aw2NbiGO&Z7NHbISB(3&r^;omGFd~dpr^`C77`%uO_^# zL}ckJJI}ubI3g{zz18h=#Lj4gQtn+zUSDU81WZ@ zQ%`$j)8Y{_A1)-vTAxYr46sElmAC@kxQSfkr_#JX!hR|O6-J+T9YN_`=857YWdWu9 zN$X!pOFLDhq}7j)#AULkn%wj~GJAVQGTTO~Pb^ectQz#P5PZ&_rnqP|$k*;koR3 zP91!x1Uj)D0W^nEYpFv!9DgeF^E#2o#w|-!bm3{s44O+ub(&_eBv>mU#yB-{?@*5N z)x6LS7p-#E`hu$bjz}Fzte=Q~8$KU=D)98)Bk|9Qtu*T>Bi$s|GHnEVcRz)C*?e4S zsJC;Dyd^2#DAVx;fGWs2`FP^Ii{q!mNIoI>I_pOse;5tq#miCoEp@6uZ4AcjWbxZcxHjh zN{l(k<3CFJBla%w`e`>y;cMMPf1G#33E{KYRnOZG;b(%iUjz>xM{4%E=AAf>=2(*{ zJPqB4s6Fe!{4uCr_(nTTb57NAZ6*y0$HDw-;c%3mN1Dc^^!*Road@g+UBj*^T3dY2 z(Rp)oY9#6pOja(Yiz1k|r(}T(FFS`? z{f>Ue8-rCT>ZFh7D)ey~UNz9Htm4lM{i1#fcvIrfg|7TB;w?TrO%WE?WqxaEe}+Ny zJ*)E%<0r$7f8d|QEkohoiJ2Z-R~}UH@{=JM3@i2*#5#0$F&N~GoPrzNSI8f?ckIVu z@dr;|5oz}2bvtO7MKX`RJ1Ox- z^TX3#g5fgX!oKz1YqomFh4m@4U1kFeurU$|2TWHnsy?BvPPVoK4()-EYwBoNgL2i;Rp93SM34F1EUiepwhGG3{Q5r0cLcXLd*f74;9m{{Z+VPr#C>Jv9kAOoV#4`sZp%uFJ~EYN2^Bq-=xW%<8c`Rqz~qRa5Ii^T&9hzEy9N0 z3CB-L*N;#VI7rwR{^IiAm3fn<)lO8db<>`(PgHrT_?5!+$)Na|f#e+Gu*b;c9QCN2yUBn~4SkQZgx%86 z{C>4uMLE4rk{=Y6`@oD<>t7k$?h%bnxyy4|x^Xnaxy=>;n_5%4-{^w<0^5AqOt(M z9Fx-(gTq0)pL3Sy6=6j}Pf~WeDg!ae&orWBbtSL~^r$7noMWGQh#jrB%pBIVu<(nz zJr$0PO<8FqW;ohLDeRdFJLalIbQQRK`tj1NM>@C7p43&l(8l*kI4fJSpooxHT(oPs)?Mx^G?Z-6@(FigQIjrSUFGEU?lAW6` z>3%2FJU@RPr>DwM(ZM4+>@=dh@RtN;V4^%c=|80d4Wa_OmS?IBdF zIL;~tz6uI}F;wnln7aZh0IbIV5n1y#jyKNRTNheSFoH!%9EKsn{V9;!r<4mnKT2|i zM(dxgHCLsKr|imUEtml?@6#rzC{|vzp>FFSRpPqArSg-;!fyQc2B&giTezmis zXmQ>JCSH3er`}jY12iRpDt>C?mL+U`1q`*g^B8RgbGbkjURVm9pab-+E69_bho7ZW z*bJD*;wz&MQ<<7hkFwRh3Nl-_-cUt3a0e~gtz1M5XKQjQgs{kE;Nra(Dx|(Y5gey+C$|`h_6-n zSF5y_DQO&_GAJV_9V^FUXN7@AJa#p+rR(=LZm$&Vjf`bZE5gk%4;N0y)nypCQGZASzU|Hh&uV3Y8XcIQ6g6aFprc>0JD4 zGp|z_igNQicGe?O0m=2nLwl{;2wb#_$lx)q3s~{>y{ub|*e8L_d1sG4HD6fCBo@*z z!1e228O0n$^=R@^k27bz!`Tf?RZeMWXnxcFE3;1zT=^3zaHlH0_^-?#8tNC?_r#q- z_VIDHO@MlHUtD}-_?@ZiT3pX{IF2BtnaSyaUmpFjwQmz$Y6bJ=Oh_ZR>}#8Vu$B3% zVroXtN7Zp9N_ZR%Iv2Y~XW;!HFkRe})RA79;XNwO*@UeUlk1xEzY5(Qpg%!f)`fnN zjfzhk^{>!Y8An!pB}`RHlD9*>wUHr>N);SpnRl*vJjUel&{sZn)IQCoDPH@ zwZnL?#!9XBXs)vF_r8_j<@r`47hm2z>Ekf?YA#Kh{vq*HnuWY9ij9P(T^`}0D zxmkQbk$lokoc@*0_^0BMY0$JfV~J+RI6N(WiG<4~N}5rL@Y#-W7Tnq1c(39nli`~N zp6cLlA95kpjd<^hd~tu`-AS+S+=K`BaR%W>+O9?8$!+71K$^8|@An&#z#e9WqK z=T0gydYax9lJ8NzjyYH53^QKO;TtHUfmA5U^#oU$_%}#cZWQ@se8V;BIxNw|ib2Lj zcp05oN!a?Fv)j%owsf|KV8#zU>S;G3IAffQiq3}nJY?W@#Xem`q-@X7bgvFqlx%%H zCg8PaBk>w@9ji1M_E&56xuiI=kJ;wj+W3N7ka$?$J)j+%46bA?cIYQf&g9r!#&tp?4pAf6zyC?b_@7jMd()#bKbtFY4R#LJvH9cbH@>1@kXQd^qvlavrYp1sGq!&MWKTg%Jr+hJ6(jvEvmObm5 zrY002=6bX+Ri&#(9|VCi?EH^fw`p}505Q&eYt%dq`zvaGH`VR!E;UuLww_@VUAhvH zvFKaSW~=z$_A>a*;BOKQ9kissf?3L2Y8rf=WD*SUcY0S6#l~sscT%T`r&pCDn()Ql zuzw+#0OycJdMAghouq=~9wX&ItKSX(0Amk}x|fLM(KQ_{Eo6ZdnWZJ7CBEq10U0&z 
zK0p10J~aFk@Z5eT)$QyxO*vsln&(EmS^S1S!scn1n|tIMTMqY>?8sbeE$GwmezoF00sW!xzAk^kLOc(y z4~Wz1I!(WurRjH?vW1yqnM&c$d~hr0doPP#A@KhI!fUUC_WD!=@885aZ;GvLymzO|C&_I;o$`YPc@DT`G*WU*Oqw3+UiX`reHv2B}(-b z`PO)#v5ryRF6Z5F<|eKil5uutmj2a$8@>emboe(mt9S4-R?&4`Ez-`%T(`3cZRRTA z#>~CCSH_q3H_k5;jYXqg=d3P+`Uc72MDGQ4WpU8_A3)cwbXXv-(drO)um-a5CZ&tm7ojS|~b zx&F~xU|KV`%LAXSct7ncr|Vk(0LGi`D^9d~$)O@BAb@4CImRpMuZO-K{{Vz+$Wk$q ze)#qk@rTCW6J1a7i&@q5`>7$3btm%Xh03zv;~j#k&5caLQc zo%oUBi+D9CG;?yG+otAR_cZSe_>R#)W|c_CUrOek7{;QzJ`uUf>gTX%_srueakzG_ zr$^M(GAl61;=Hp*@inMT(cNPMx$9ecUyG7P`K1ek>s)x5SqeB;Cv)1gUl$8m1Zw!e z?Onfy{AY6PyPGTSYrwRx7FpXi`L9<1WNu+waeRLGf8l5h+TV&Sp#W!mw<-euwdN^e zob43;TmA}D`#yMIHs{7x*D@%^ zErEaORV+W?rrrwi3JCl)X*8bXV!eDmA2Xe=2`@9jsR}K-GwrKAaRPxf!F^6@pNYOV zcn83iO?%=G5J3cJq_L7w{JPh|9yYUQsl|A2#9xVCEAd3XYVjt$ zb2~bc1d^OD;a^#T%PY99c6j){NUKGkU4GI30Pt6@4){va<3;#MpvkFT?I^c%osc2- z?r~oP{AK;2J~DpLTGQ*F5SyJf7%W^nWh=CsBvd;Jj$7E*$ltVIz^y_7HmRgpF=Pb1DtSJ*ugqzGZgA|L z8ms1~(Y}Z5xn66R@dY|?oUHth620-(x2wYyj9QBeOLt|r02@K%zJ*(qT%2|62Oo_m z#J>P&e-M5hX_M>G+07*C;Y45SAS!S(^{mM>M1+NdOrATekX+|zK8Mn}Yd-||%fen5 zwAMUNqNudL1+ATA&l&IOS=7Wz6IJNT-bd(D!C~DsO|+u=cRU-z{{XSakM-?)>^>aT zq1hqId50LUvp-@l*yG|a?62dq@n2ql33!r5(pogpb-OP%Cf$tTyD+W~!hS4^MAogO zip91C`NrHD_NxzyI=#k?ZFqERv8SZ_BDs{*$F6)*Ni#c+4 zqp1Wnew=H1KY_kA_rDr*3tsgoQTv6o!*1o^sWoVehTqFgEfgfKjN(l=Em_+Nw&F&qkzLB zAo`AL^_;sM8PZ9k@ct#sC4$AiNc*h8GT`#P>Oj~zEGZA`?3Y{eHlNCxh1G+mLv}uu$yN}kwMRw+7her7SE@dR z@W+Uj33nub1{fnb$gZXxS{b4X4Y*PV1Xs_V4)MkImnz%eNM1h7dW!XL4fvKDxMP)P z3%GjcHQ-_?PMoc653IxEsZAx+==IgR)?y2EGKjccssRzZZGpDqyzUr;O1EnM8D z^plfN>e?dRLS&Hx40=~9smjk%H7~K{lio|eV9hKH?)Yqj*qYVx7l+?Yb0~-9X7uS< zXy9ds9+|G1ZcS)*LlrtU)U&ZvjnV<@ikVdcmWohEr=Y4{aL>+%KT3K!D~xh8>smIp zJ>0u6RbMQNM3R4YXVw>(6STdxd{pt5!ao&3;*Sby5?jp^4d{)4>z;EpYIQ9{I>BPnf7!O4msqC=zanGZ1FF^uZXrDIq==nsDfoP#h;c`&nidK zy!_KAp_$>`^pfg)vdpSraU8MLAGpIuySlb57}`%t$-D5S+}A!+LOK2{b*mq;=f-aa ze1G_9W8*&yUZja}3i+wKEex3D2=%W^pTn`T2ZjJ~$vx}xyk`Q!rC(^C$F)m|+!`{ANaq9ExK{A&iy_|w4W#7uuLBmb z8D=R$lv+onMsUW_yVmAh1m|>W53OC@j-+F-H5JR*Sr$Vg@M^T0ovRPN4m}NhzD=Ca zsZB<6S3Z9qm(`4$iqNw>CURAfVD-g()BACL$#&lrv?=^q;j2`SS+j_cLjM4+QO-E8 zq$AX!E0AzKb5LJN7357D@~%0}eRdwaSHBmj=G>IXq=%UFzQ`IlrD!f$94koL5Z1$1~+=rrWD`+aq4Q;o~bLlIRk3( znzvpM*x{y>r>P7u%Nt>j8LRe)!d`k*Fx$y9?1naW?TWP4x0cQX*3PHUis!1mvcATy zjJfnJ+x(|)dCvl_wYo^2U;`NB)_t(Z4#Ov~tAchWCrnkA^i86EPNi31$3tK@dMwQehN!X_&1j@PQpxmVA(w6m^oYQTs&eC5s z(x=R|JX&!|cVvi30I2{}cMZp>=))T-cv$3aNAQVFS@H)FL%@aG>lN>G=(W?HKo2{eG5 zaZHZoq01@GX_Ll5IVTyYvN$SgQf*jtXHpa9i!>tDWJV3p4?|lPKN9Y=ae_-I>4RLq zn}L=lsp=D~nE9$@g~L_SYT=H~@%SiSl(ju3!{aWceEw8U6ON$sTly!6ZZ(MlUEVt> zJvY~eL8(t?F!DIC}VkoK- z)buIh>tdDLK9TV6iTsGy=Nu1FSMPjw(5e3bNY7jh*NIzr{@xT;H-|XjZmN3M#aD?* zLRcv2>t1#%7yG7>wjVE2tIs5QSBd;*aXG|SZSD!LKJk~2G=C4k`mcy}SRjE$UP+(% zR~zDAi#M`BS|lKlag$#jd}{cM;=dF4w&vem)ns^~n5=PmhiDn}ub#_zP3Jh@X;xn8 z>tlG187>m=j=LVG@%Q$kvhl0GgnR^z9f2ioF60<#y+HI8!gzA}IG8p-;}z$YwlK>fJ*9ab!J_Ls6f+Px;0o$AuMvS3 z9#PZQyh6{$s}IWZU}NuQuBXJO0~>LYX;i~cmqV>qg_2D6iM(czsY58i99AB! 
z;^+#nS@|QmuNu_;E=LeCTih|nU~^o~x$#?AxkiS`83E&YH89F5;v;vU>gH4^Lj2mJ z(Y$f-LJc?=ZRA#&fxrju*PH4dIYP}e*Bhi}Cjz;Pi>u4onp>NfEO{iJwFI#)-<0$9 zuVVp~(!*I>Q{!=Y{wEVt-PqIA{AZ}%69iKfL*Kn-6TkP4+@7aBYQDlVoS(*@8ZaC& zuX1sPHSX?l)vaE;_nBp6{oc~R{b~n{mfQ^|Ds!~cq9R7a0xBT~H+D;uDMVOUgS48p zA6`e}P(=!>WK^v@;75tsR51D>L~iyOb2X>Q#soYNg67zAUv&1og*C_o>GuL~6Yq_juSP;sRj zMChZ`8Adqi$2D6~){HjgnZleL);el<`8gnUte+F>%WG)pWFWO+PYj&XXJ@o=rspG; z@s6=2?x^LY`9pOz%<5O{r^|;V9`)_tvFGfutNzyC7UA%yxRTpSfEjgL#UFA%LGEk6 z{{Vu9{{X=^*o+>zI{qD!j;;3UP z)0QPh@jgD4q-=)=>rI9>-S?_u&v7BzIHw4i2+HygrF-rYK7Foe^GRw*k!vP z9XJ$=gTbXvCfm34sc=-)*~jf=HZa=)Fsf=9;3}k5=*VN!zoin|2?jDb=qm|UZbx+~ z#?~_sHte@*yKgW|Z6IQxp2^X$pz~2HJZ>_%1Fdu>r5mypRH&SHX<9v9p03rl?MmbyH5}7MU2Lv9P+@|o9Mdo zJ_v3|?_FkxuZ1AF00ZmRzJC`=*SYlgInHSxM0_#v@AgKB`c2)OH?gAbg$n-w5%u-2 zMzhv$HH{Vvczj83VR2xh=Gxt&%u_FPK8N1EYw)j&?bjc?$2HyQel>NL-ul_&k>PBF zlkT^%uLia%rX5}g`dkJUoT?=HpQ2x|ua3SQ_{%|#<5knFG`(QRTJ|#-xEF+QM$B+K z*Q0o&;FLcS?JqoA7l&iFv9P|FJ*Jx=jNjyZqDZ|x#eB8!-^czC((W`riJuMpFm(NP z3s~Cr;u$vEqhxL(F~f0@_}9J5@vp;w@UQht%}e49yX%q_xU{yjCLwUS4AHm*b?KVh z55>7|BMmAQq@1F9bvSD>!s8){pxU|c-|e~kKIs1d1pXo0w}dqtNo_SaO~m$A_X1O8 zDJOJnM?>viCt@ZNvRs^V*ERI_?Jc4BJL0ah9--hUj-7FNFP(jFB*m@bIOK0*>s|@r zKLu#*@qa_`pO5uPH2o_3SdPz7zC)338@Ao~^c))a*jK~mILA(dTC?{ouNhMhlc$K2 za*EeYN$qdbN40GuPh%~u&{uYehW`NfeXHF*4EXoJUM}&R{t=VIH=0g^;vE(A>#Z8z z>vRoq8WxULX6dxs!-;;yK8sqdynl= z@SnhbIfqa1@9it`155aE<1HIRh-owF+Gm|*;y6zNP3wTIj`j0b#-D?}D*c!B3lD=H zC-KdWr)%QS4~6_EruZ+!T5hYPAco%N+TRhw%Emmk82qc!ejr~J&>HEK z861UI&jD+U&~B1#6GfKB-C5z&?R>;@ox{prYj5HfrGDNPA7s;vu*)FxpFiq_eH0(n z@YQ{um(>1b#m4xkx|+1w{7)13g9LV$^DHQWA2SNdde;l$A1?mE;zr0^;0o*fX)F_5 zyg|NHCkH)ijK18Kjw9|pEAm>{ziWz=TRztdPE_f}NcaQxqws{V{8Q9tw26~*Zfp*| z^&i0td)t`hu$8z9b_3Y@*Q0*aaYtwICn_1Fcmcs28st1D1e#W(7NnNj9z{73`Wo;GA#=HN;qH`(7a5!~pl=rTEq2 zcytekPh({$^Y1Ydg>F@Uh}B@}&lieIq0Pit+fZsJg#1SM!Qzc`$G7^`oyhW}oUCh} zHmSg=bU%(BH?ZL?(eN?NW2TV|t%A8Z;;-p?G)7&ed)L|4!{Ac1ptL_Y!p9Ri-tK!|;s)fXN4;<9xg+GpbWvE8k_1Ev)Pbb1GcZ0<3b)rjf`YU*_Fftp8(5ldNu zPkpuJ&li;8YJ2pRkEV?Z*osS%rH>rE_`mT_T3z?PFumB{!j4Dstf?)v&2hJ1>K76g zJ!FuY`UdOvFuJ*AIwzQfjF%(ju2aRo0)8TR0ZfzldP|5D{{XLp=D_zA=~2xxdbajS zMV=;SfW=NWQsy|n34At+S<&Zgf_`k*ws;f4b~VtO>0_d#{rdaJu_aUDWtA=QkB&iMkB&cV-+~F@`}g>f;kXEfR{sEr zgi)^w-`=ky@E?vf9T!xB=F)eK;b6*RI47F>yT`g6kBD?>^xaM+k}0BPmONw-MSL^y zKjC(@@XyBv+87D4mCM`AqvZ>qyY;WmGc3Mz@f`HmTc5Y^-g!?j%bphy?$Ybj`)BrN z{im(Gb7L2Wb*V%*h(gA$+_)X<->oj~?sk=9^9}}U^6SDrCDr^ttXu2aP1B_I?idwp zSKB|cuk9OLXW;$3Fl4UxH7 z2A-%@Sy!<199P$I{0OzyZqRB3wi^VmJBEALrT91YRQP9ca}1iUn+wUXEMr-`(01Yhab5Xt_R_GsCW z6b?DXCb{$y?rt9SN3MUweYOo8Te!4 zRMX?R0q<>65X&N{=m{egb?$gi4zX&;NacZuvF(>xy9qiI&Z>aF!x^9*OyH(L2FE6aFmFQ%uL)wI`3 z^*+ahbDk^BFM$D%mDEf-^Z-~DI zz88F4yPx6?tD#zI+US2OOWWu+fS_Sl1A~Ez)9{Cf{AckK#X5DKn_!n&okVI>VR`Cv z+P<~$x50mhI#OF*n5+wz%e4@j+N+ehBr0G(vy`-CHfh0oHA^)7anSA zO}(ss68_p+CX?YGh*Rh`eh{~?cvy*KwU25@dt(*iQO5H=?$!GR`)K~fpA0@RLe~04 z{1RQXtk(B7$Ty;PS01(ScZ>f3;GX_H@V2ontdjw57?|bQdBuKaK$3o_yC<0h@kjiTe{Z9elb>Fwt5KU%XL zs6aPh)sH948^O*;VMx)mU>q9nrlV7H%BzQi)eRQxpqva=uZDa@eWFYT;?WcH0(w?* zZ#l~SX~9l1ss|Mqh{eXt=Z8HR^CQ~)7vjrZO4S)A;P5%F#s?Fx%6T1Y=bsIDlIO$s zX*wU>1M?uSYw%BsEp@9Do+cS?yz^f(h@$Dr=hR>^u&R_>)YQ7tVrTPA5y`iZvMyBp zMQ~m#@Wk;;EzB?k%NE%h;YZL{s$0brk%c_AY7I|9f<-ATVB}e4S6P!d9tL8thS;)ru3e zGM`)u%1v_^IXK4BGR2I~(P|rwEyi&l>0Mgr>LbXH@u=t0 zRK`%RZcZz%6|A&8y6Q4VocM=Jo$Z|&4?QcBm>6R*K6pJV*EL-X2uSkxam9Jhj5JoW zn9L#}NUpjy`Q@qcIkhJVxyN(eKj5DK0JhG*`z(Azj?Y!Ql5Y-blB!?Z7ykfN&-%sm zBEM;V7yL-@$HZ?8T6m+w+MHKbw~z@Sk`~BiBEOt?rjj9@WB%T{r@olK)HEn{3vMA?k=DLnG|1`Z6kMXU zI&e9)Tpey%Jr9{YRpD!^MUvpAR5v*5Ts_{b*0Omr40Fj9^jE}B1d(YXEO;%D2(O!b 
zb){eF+LVx#BLvss8P*#ak>k>imp@g?YvHpxCHqZLCIBfs_9m}~k)kX!jC$7%r|Oei zP5a}V3}&`s@lX)?W7Do{^e!0Bqh5-QN2%avIbS624TpHD&c{~M-opA2=-`5{&TI4c z_RRgA-@u+Dm&P6)vLjEO_e!d~BUj%y7vk-A6P(u@@lU||caFR@b)jk&=^{@VQ@wN7 zxUagyU^(Z z6LSrxlUD650BKy~94CWGS3VWJrB!4tr97+>QR*l2z+(C(~7)#k{_jPKp|fu-!? zsp`p6tr=aT%_fRSF5Wwc_AqXqhMO#@AlgPqu8+og-h-;a0%=ykWl~JC0r#t!5-Pc8 zz!~Pe#}=od>f!3U2b|HVMwg)JO*bf44A}aLuX#GnFYbo}+Msgy`=+;6j^{&jz1e!! zVdiz?HDXvvaO=|*LfkNJd8=)P5XXbq)K6~8q?4a)SE_tJ z_!Xr1>t56BEj&RLycYlqi6l5Nn(BT%e#@R0_+eodrJ?w>E<=s244-(PQBJg+SCP*I zx%pZ1@ri?DA6gn9h;l1t??&=p_hcS=`h#A8x~^PZTjV=i`Y?9^VR25`NqHef(t zY4UJL#Y4Q~1XowN6=N(#DDvG6o=~|N&$U;HBwzsp^QTQNB*QQ1PLAXc;K!v`w2F?0 zEjpDSx`rE4!#Voaef_q>v=RkupWehs<2~wzrEW@YR_3vUq^ez$iL|NOMo!-;M?Fui zShu?JSTGy|$j@4(J2Ynmcj-?oNy+c+UH;B8S7*vi&XrrExA2FLA5OVYBq#Bgx8ad%V^@=_Gi-Ib9G8S_NUX) ze$bk2qbl5LF^qA7+=_RIKWNVnX}U|@wcKt>E zxSWSZEvm!H^IhJ~n_BL9a+{XJ3o-3Yd07_&b;ZBA(VasRQw6w1eYPyPrj@*DxCAwEjr2ICF z5iRVSi2fC>97Z}Z(hTmSg{eP<9F4Z0b#Wj|c#D(hE1}W+0ef+_OXn)5_(gV_e}^or z(-!uDa5*4V*lmKvwR(5R>s%7WPHR)qr-0V-F|^+fS?TB&2$e`2lh&-t(MGI;9Q3PG z@J`j|9Whmw2WKq&3~^j?s;T@(eJmtnsZp%NxRH;3m0D|iyNl@b?K;X?=asg_6ewAo z`T#oC-Hwy0>$lu+%gi9uyBJ znE_d5J$8)r&#ims?Mwdv1gZFU;SYp*kBPn+Y8tJkwWL}!-c8KDVqB_>o-x6%(VaKJ z_BXQs0B-4##S=vySwexD^B;|x(btlF7^VY{E^kfOL;&f4)`_t z7vf)o-V^X^NvQaLOt*?gC8T(l1*-<{#1Ctr&uH)*Fk$9rk?midOT>E17X6$OiTY(e z2E$`;(NmK-fA}W%ggi0uZ@}^B-zn9vCRueSEAs+6FVeRE0N|uox=+TB1<&xlc{cfv z6v*SBDb!-T%i&JDtZ82rrZ#OV!15i0DX(GFz5skf__N}ebxRE@StFYV%#lb_?sHr< zv%JA(%A^}|e)6~Edl2E=uP^=8TXI_3v-2Cp-?D$iKMGtt+I_#ct@$mMdh=KRU_yoBI;zzYTPo zsWh2kytBE1mUtcjcOf6nzeCITuQ<|px*qR)>-QgPY3DI z1G3m04oRn-0@gPUkPMG%<#^s{d#Hz#xI4CS4W`~)AC}pO`c-!D-lEw)M;?{mwS;QC zF5Di&tu>;zB!eH*6;a03PegA|E~jl-=W_UGQS&zl!1m2UHj$`7mGX1%UX3M{tDa#Q zBdum?ws3{aapxRXvCJn{R+1F5oIORM#%a1;$deODN1)AX+gs0W0ZH(ED#Vc@KM;ZM zOppM(_VO!9LDbbVI9~}aLiM({sM!yZgL+l#ZystEsy87V_04ne&fAG$j(MxG2xL1- zXYj6h;Gp+C>@``+3zgXFbia$!+7Z9Z1HN-zE|L2{*}|c29zefSUT6KA9K;1)wI}ux z01AxP1#AuuS5rr&hs9#>^5*8b_pj`Q`$_m);y#ZbfR|Yj>UQ}y@j2RNPeJN&Uq;+s z#XX(7T_)Rr$0Yknp8!{E8K z4Nt_H581RsKM~uoTnSHY;0nhmp@N}Glb-L2$+hB8iYQ$BiS}DV&w>;zG7l1r_ z@k_yWU$icrCEmSnYJ*SlW!!dxIP~U)0>TT98z=|O)SArrci~NUSMZjn<7hAA)AUH; zeGxV+<&uC`(a<2(pz1LxNtNTsTMH}NBHaao&te*)$ z+Lw>CD+rtYcC~7``pUIaSh`OwrAe}DZ}w@h)I3-FM0h8|`gCg3T=`*iBT}9rZslAQXhaI4t zt(lQ0eilz`de=Ab`h9ZHUuo963}w^KiH9Ln4h3XKUn;1Wub2m-x~ z%Q2^c%c@qWz0`jm(ZJTm=4udkZJuf4JBOQ7x@Tb-UVO2})9GBqw+zk@unXR%hr~z3 z7SDUAF*h>Ecb{HrgtMq)fzabVmH7?=y*zd)*Mq!UKJK;}cxq~!-5)l7)mnYDzY(p9 z%ZGSywZ_`nO=ywJEM&gzo0qt*^Y)TkoBseG=GNU^ZQ&c6xXp7~Rgap`TfSS6I+}@zuwN^q=fIDL|U!&D z4bXB8Yv`Jim-Bb=o=0(AmFUz_a%Y=@sSbOy(|j#%5Q!B)$@S}A!Qc>qp+dU>0B7F3 zU%^-5l@9R3AdK-|$KY$oZe&BZmBHFOSLgY^I;hX2r-(`xNcYbH+FMHuaLC(+Imqi? 
zv*7R9<3jkGdu^h^v0KS_!Zw;fHjMF}Ysr2Yl(^WA**k$8SJB@HE|8eaipbHw%E0sZ zRpm4&*QmX&$Ij;1+W1(vN!j!{ACBMfM*jc<_-0#sv2T0kNOLBwrALGw?4yeK=k}oe zlm0Gz1kvv`_%utMTSA6Xv1*nOvjt!JXa_2NPfGow_fk(@Xz8#gK?&6))H)%3y8|M)OIR*f;(5Q zmgA`6mnv3kba3M+IJZ*wOzSKpiQ)+u&O6oELq_{~;;?jku@Wv=5_;g)c z_FGRxco5!riU~%pk;=Kdlf5z)ihWdp1j-#o}pt`HUoW^535`LAzd|B{~ z_Jygup$d*!TODhPy1u-!w}xxjSyEnA$>56qTZh(ijLV&9(?75()4}IidQ#Oh^si37 zv+)L%Z>eclD=ZVlB#nn~Mr$hZ?5?Dq3tOn9mfYf4(ehJ0jd@S(d;3A@-aXNq!qVJE zm%3yKs~=7A^vL>GsGkh@b41l|wf!pHl6^&ix&iVD9qaX+E>Vd-)%QW;t|n z%w<|t6IOZ{I>aJ4q!LHe9#3lV{{W1B3(~w*qRDF{XM5+%kPo=k?Rt%?U9&4rFn_&V ze~GlzYhwz8!EO#Xqa8tVmzCouMzWH2J|ew}{{X{UwE7;W6GqXw0FjO3v8{g&__JE@ z)$DqghxKQ+l}K&KdgHZvm&b2}Y2vLt?zH3ydk`%YpmoP;`OY17^GVcX(=~`%IpZZt zxW^-m*UQODoVzaP>KJ;|vm7<8b=4nm{?Gpawmy}tygC<(^+dG1$W=177DMZr`fo(> z6^6GAcUID)mK>-#ugHx`@ZVWHQC#ek8I9GjLG`af{g=Psqy8%R3*sw(0sLr+YwZoC zSnc%`8>P0%>VLeo*@nkrYeCX}^88Pd%{WbKUk$%w^)lJbyy95m=RK=($5Yg%ge9S1 z;P8DbfcRnYN5X#)^eb-?_+MCp{?6$GLo^ZstPk=&)#y5Hf}HdQzN;gt7o=T} z&2w56X-du<+&hafIFI@@xS0N#1D;{W#5VX6X1(IH%D38duut63xeZ=!RHyTsC1v% z&*Q(qohIAE-wTeNcNB9Oe#?1p^D=sF^sa&{kBAqrN2mDr!BA;(N*m6&(_-8ZeJkkl z?1GGPwT8uWDfM>i>UmhcE5y>yC62+_#e6>Y%ltjhj(!IJ0KqkW2y0&o?tE|iNMFKr z34=37YjNe91sUuh|a_ z{?2ykWiFv>;zzxS=4YBY;lN})REov$*TG+hUj?qNZagvJ^J2Hx4KvMZ#|w)q2?8ETS_Mz~!9ufGd;OiUPJ77hfldugIPDw@`4Rp}Y zXwa|hChqzlHD4~mSA?r5@~B zQnI|fR%x`2MjtlXB02#mHzwkL zdV#?KYn|2X1i{xC&2!M7vG*bmy;ZN}j-){m)TPqP6>4mO( zCA;wUuNJFskC2W30PPO-^dEw>Ei=WM1-7qZp^|990CqjadHJn+&|LJ-sL63OlsPJH zk0iDDe{gerOnIAjqegf334t~hnd;FD|r(3O!&y_S}Wbz2FySz91H28bOwy@f3 zGmCGueaF~29<}11w6E-^ci;)GHLniHmlhGW!y^6VHSFQ>*qA{|s?hN43& zyfDKgWJaK#+*DT@X9bF5k4jrLYXGS+JBwhC>snInc*m9j0bRAQF%+db*ctQhj96gKWO7~uOrRs zU}wnRQ{;adcwhTE>Ol-lyAM)2*DWpNuuHJFPJ+GT;%9{Sq|$829Oo6`ekaq7ozpM} z9f+^V=Yf`0i_=7YiANb6Z7+GZ&Map~H*+e^Y9Xny_cR@!-320x{J z-cgBIh-i3N+&{CO&m8@+z5-kPPw+gNo|R}nv^4c;AZ1^ffEgmb1ii4hww0i`Mp+Tc zvYy0nMSii}Mz=Q26u4mPf)Ay9J^NyO0DlGeo=+L*Hkq)zkhH3+#BbdFEAH~_Y^g72 zCU`iARC=T5DWf3p=Zbx}b|A4laaZTCRV45-dQ@@S73eYVUi2!atZ>dg?bhPM3W4Sy z#;!pkMo26(>+Mi1L@o*EG~)v|L;BQ8q}Gj_(tg&Riz4kv+qm)HHEKDeAP`9Q&jz#7 zAqOmTS3w3Wsq534QIo%M#d{~IKbeH#uydNPA# zMik*F^DW3Ck1hwbO4hq$3|MEpJ*+c~(ngM(;w?`}on+JR9zCZjRk|9xu6WPH{wX4DQ&XDZm2sCj zz^-?D2E*~sL0bAX<6C7B6qX!hX0e-GmuGA!Ml-TnkUh-70DirOD<@EfMwGpD7n*do z&azBE-_ISYxYN>hCjjxrYAM&`JAbo%&TgkP_D;-67^=5cQUu(;sIIsB5m~aa(ip4Q(K+}BMUq|T~&(^wj;v>gXvUtTdyn?L6hEzb##ao2chd(*O#VA3n|We8tk<< zq2gof#u_ciCHcN`4_-w!OQ1MaKa~w}y>P=Hg;r_6V#6JHu86osHascMo*wco!*Hx{ z*&Oy2C<^4@5%sDoF<+Qq)bQLLx#Km2r4;lzoShw0EC|aUSEo)WV5k8tpK7ly=|k6y zQrfhWJ{_5Aw)GvIrh=@RN#3j^C7 zE0YzN)5FH*m^`koBBiOV3{5M;mv`hBQWw2-S`UJwP!EyLKa5vRrg%{cbJsdlh`9^Lekz3an?^S0+PX{CUE4yQeXB0wAgFS^`quR8 z&9W*|gmr6E0{YZ-BlvwfS3Rt`*#mYokahwX|^}R zVx>5>(5Q6)Nf;;arG>o4`M)ZgPtdQfqzMcpljFdT_&bMPY0k__*k5D~2(hq<6p~v=Y!pM|^dt=eF;(9%^SoEv42;(w)~ULdZ(he*%j?vm?G##%euu1o!7+c}o!_*+z57q-UlV*` z@cQpb(KI`sJ>=9P07`Ih0QIldZxQ%<^G|bWsHPm`KH=7{{>n+>4~KdcwWo=+$84_Y zUPQ^=>s?>Q-ET|rB&l&>Yq5te;A1uI)^rwK&$B*JMzhEArFGo$c*MGL7~+wFw++p4 zelXA?l+7aBD>DK~Jxz8x!w6F=CNg-*t~bTrAp0wr^ODGUugWvbT`JUEYCE5Il;P@8 ztfdrqH^i$O`L16|)2!X4l;IV6lj&YrfAF{B-mBtUZwABRE5SYFp->p{fKD@?O8P&+ zR<<53__t+ia~=dmq&W2z^?$(cg<5X8;)t#0 z00Ca>;tva3Xf_brz(7+TLXS*W6{*7sjoXd^$Gv{>hvCd7SwTg`Id?x5#c?iSn_^t* zN!gxD;=L6t(acfdK0dN_X7R`Jb8 zx$Rv1IqWkUsx+l$&!Ns9D=wnk<9i>Sx@Y_nE939PuN*#?@Uur>?8nQ?dwAH25$T%o zU)sa=I{35uAnW#8FU0$6W{XlrHdl6XZjM5|N3pNyZ^K>(yZB>e6_u1q)?{X9z{zia zTKwq$0D|#+H1Ti8Uj%4V{5#X(6L@CwY40tOF(byqs{a6me!ca4zXODfYfjRAS@~=+ z8HRC>e#Rc^=#R-OtEfg}1A|%e+z9ehkx}2ns%e+)rs~#`K`NXWH`}_+ZhUwLGB1WDUCC`;Qs(R>7|C0ed!#K#wxOQdI@%SvlTe1%Wwk~JpFsq 
z?j&f*Wl(ZDR(#V+K@o%RU21;qk;RBy^SUmxqqkbIZ7vAg&oz^49L*+Jky@6~?Es8^ zbu}tmL#qc_Q_-_h(eutK{2OuWS@x^EWE>j5YOFXcE0(HUoZU}f4&6$Z=u0M^H2|?h zFj;^HHTzxv00gY~3oXyVzXg0E)OE$0(^1ktv~N-pGalwAZhIV86_EQ}65q~z@wfh2AwS4h-#@ositK(V>Fa-F;!~s{mJ67U zTgh&rzaG{3=lezpHt#AEWBa>OO=7WX`+*TVif@n4KBJR_lK($A~hHv20mfb$zCjAp*L)4m3LBk?D} zzYhEWli|Jhg+3=kbh2ID>Toig5Jh`~&f8!=5$qe~&b85S?ek z5JCNmsp%IRC0XTflb>v6zeD^1p!lmr&|~nwh_wqna^pqPlu4}WmO-zJvMKUehf$nY z-Q~CoP)>bMpW@ypQO7DYKF*Haetl8m-W>4Wz2KjKmp%;f4~BH#6lfYWl1H!VTHd10 z_l^%pP@M1&2EQ`?D%mZDwSH_oHu`0o-ALAU&c%vGSm0#xc(2(Vd*O7N?u`bqujtU& zUCIfX-r2JePTwK!Yv(VF{{XRvg8W%An^LkYT1Fy8UAP{cSI+SUa||9LH02wm*ze#x z#|ea;O0^xOKRrI<^4fEAA!W;T=qu^}0Qe+Nhcs^ze%l@l@ZPK8p?xg6fVhHUIkz#P zZNOH)jz45y3H(2~Ec#v4(n%>6>X<4=Y}c;;0Kpx87Fb9B00j!uwH-D#SJCu4%ZZRD z3dKSWex|wd{O*Q76&f_tO!|%~#ldj?BD$T|FXn!|{5`v}zSZ>`Su7ExNpeaIHr|;d z@UK1nq)hgjC_19s!lH5@BkwXFTJ=AMk)#^7v3Yf5kzEK1EZ}hC*1R9~r||)YT7uV8 zl^n%wk{EDnqm3~X@O(2-)auXYE*gzYR(l>!i5@+xYmwMpMv!D<8NlMMSYNA#X*u3A zn&v!NuG?KpYiO~oFDNXn#tmxdd*RaB>fy3uh>~;1O8jGlabDXN>BHei>p9jMFv7-7 zS+mD{d2t4*@ecXr1cs0xJ+K8*(yb!3p6gM#+JX>v5$Gz#(i!|l(~q3bI}pv#b69^G zd_=w&&_2s%RNPLCt&06Noh3Orv%>eKQWWBR`|*Foui`%)-)gckXf4-dGGl>?@|{}l z&e{u|Zbb@)GIO3luG8W#h~x0pf>=Ch=UlNfXP&j^dd1bQt)OZ!K#0)`M$0Ma*so76 zz|GB0Ncs$BDtK%qqo$TVdhyP`FN!=%aenT>S+I(F04pKBSTUUAxT!8=M3s=XLb)HU zQoXytWKsaf2ERw6?$e4#Q8Vr0ZAENWwWApSkT+9tJZ9 z83dj=>s=p)yf|f%XONAj9G=y!dXS|~?@?h@X;tRFhh5z$;yi&i9gf`7$V9C^kcQB$^Pd2@?yM*1QKPkK==K;gmEZ9KA2KOC_n zt~>$LU(zb%xinfJhazVTk~jGcHCN(@<%iJ#x?DXk~5A9{^ zqn=~35^uXV%+v+MIrX-^=N=f)CCEEaJEf$9Y;U%|#ECiw<+?L!)0;72QdW+hSHuhxoYvsZQVX+`WhleFB)S5j^}Unyq)p}b+A~8o zCI=liM(O*m+r@+P-t)*zyuJkdWpLJqws<>N=9jE|Y)>Sa+(9`(GH8cUKeprnU$Tp@ z0{{A3u6)mveT$MnCE8NTTQsBtPcn~Hk8TgmTxO_+qrrR=zJ z$-xAH(HCLrJZf5M^e(N1ZKZeNuWqAn?yi#2q|!#V|6$?dg>Ms-tcK_V62$$P{BOBT zl8E8SO(y{Wfr^z+%;LT7KUlv1XxDJUb z=4onOu_`PUO`mjKwuuquH<2{pR*yUZbSGzG)h#e*) zqeJ8-Ep-)Lz*zi1fRm6HSM+D!8p(7Ok6ygvIaVuyNTXiW9y)EU9f}bfQ|(vDkTk)j zNtOd1U&J6|5Dyg%Nr=(@srt5O0aAl2vBAqh5eI&-(b^yj`6Q^ku1Q5M$O&n`ICizv5|PcCNalM^&@U@w;IKMv|RKCWAebe&(7@) zql$q;FkPZP)So%7!2P|O&WNQoHU?v@3_qq{{yrsy!wGx|#x=91zI;~zg zM)_A8^r*sEgnSQO7=Q`+(i50sS=!HRWS+<`9hyrN7(^$O7a6^qY`YIGZzYqznBXxP zI=nn}IU10AU1$>@J1}7)u^s-d4BPYN$fDSk|uv1BcHFWQYr-Z85 zOX2kMaMAnXQ$17>Da&DOdQ{Ff( zJn#gA<(u~+dmYT^YMF?hanD{L+Zz$_*Q|=oWY&=GDY0$fG6EJYOB7ER89~IRNl_w1D@FU53TpADhG|-&O`mXZmizD!8g6Hz$uiSfhETI5WyMMMmvsU_C zv;LvtEQm@4!AEM};*#eY_h^&lpDdOdUqGxhQ-qd5QFR$_6xtK#c}fUaRe#SeyW8Gq zlx09u^||xrF@|}*t&I~kl_y9Pg*q&v-)%8P&k-05xYEmHNY;1t;>lvz*KTQ` zlyHwlv8vZ5&=*{LDo}7&cx5hLMv)@ChB@N17tv(TGp$@;uw>flFUn!sRN+>Jvr6 zhO6;o?x@>R27I2sXUyKFF3Y7r5K1zFl)~>HMP=DOVjFD$9;$ z2R%dQ_ZqaUQJqY$6WNSg8ESvqB^E4~qP@9Te0;8pd3)@lo#Cw!bFu^s1{d-yqXB{5 z?B+IyRqwj4Drs*5`ZzdqNYwDNid8)1lF*fWdAV|e>}y45Ni(&4mh3uy)FQX@+aNRx z)k$x_mU$6c#|+Th(jw2caJ`Ujqipy$%nP#PbqLCW(#lyuSE#lWJKcYyq40%#Z+lW( z4b_q5f+)n(`+J;k@s4}D)Sv39yf8l-4ZUVe-i~qD4^L~K_|%o43Fu3zo*-5b{;}te zn%3a8w1b4h6Cw`{cULL6H9=BOBW@|sK3NbyKQX_|n#Ls2fxSdeQ&7@+T-Edt>SUKj zI`PtQK<#SqDc~2{q*rv*^(x`DbeGgW7&O?1Bqj;n+Iv`dwAkg~?@l%myVepP%XI)F za1>ZKt!VI24ef{Q8tWYZc5?S?Pmd--+->y>meqs>dgNc=;?>)Ld*Q#qBO`&FS37ks z{PHrN{PDiE(R-gVVp#Y(A)*atJ@_EP-qnRl!-K>o!v%g7@ydhG`sFQ*!l>jt_U&fm z;wjf1gJTfRY(m^U<+-25PscKSP_CT&k!&j52T;*EoXkRHN{b5jmdfl055+17^U`z?|BD0mcxwq&_H{aQ$g=|TsU+;E=GKkh(HwV$fw+P0&MS7k+-+W9iUoN2R0 zYWl<7SL`h$QIhlv$(X)Q9C7xxM*Vr03O2h?FPVVFez?Hqa(%PW)j0vZHLqnAw6BCK zF&b;o(iGMgNT80|jNrX=97>(B8_DS{YPVq`mHsG-P?4S=@~*e~q6pqU1eu3TutwIU zNSEGN{XS`seTB~7%9_MN_Z;_LC!y$4$cbay%Q7+y&8z>zT7(sMIzYo?5YF7_TJ^74 zS=wnm4?D1o9ZjFqfsJ2_n_svT$mLaJv~tVTmh|isO{x+-1p3vVXp6bayhniMW)3-8T;31fWW+lg`nByoEI~HZjAhLW^_}<|BVYeR`fk% 
zij$PG;Yzn0+7?bKCW*lGs&xuA&l-~^?|r;VcA;jzEaPtmDv1E8%=(@@Gkp3K-YoOX zZbkKEBF=E`nwC%Aotl~r=TNs>*snMH#blV?>r91^b}AWr19!HtpQ-23d#N8Al&s%L z3QbMrVm|JVdv0?{`Px8NI9q6U=~YebTkLSGW8L@45^I9Qf?R2yDKcx4IR*vxiw|vJgoY0q-QTU9#=|XFroFwv8;AR z_(4Iz8Cl}Z;BJ`tCRs&1N)cQc!XSAyO{14wF}FBoQO%<_4Gj+M`k-eJ!T%o?$**~Z z(t*|}8OmnROSIVMX`TX=ikb+{Y``>#u!F(W!98Xe)90G6-^p;#qyZEvGG3wd*>WEi zsicxEQ7sKWxN$mv+hai&J)_hISTQXTdg3PZnSh}ZU#jeok1A*&h_^B9T(+0f%Icjl zhJ8wl7BWD(6yLg(xJPq@VGd=t-td`cI^v8rHr(Xgt{-rT-se4le<)#~wQm1q4;Ty-hA0%n#g zeKFZKzaGb<4Th=jIrByy_AEI)Fm*P@!WoYjRf83F#x1Q?YMvjZD8)h+k z&+6>RboL8}fxrG7NcQma(iPjT`Xjd>YYbcU#BC@Lt=VvExX9xq-PC*PJ*WVJ+BB7- zqbIcenF_4RNIrzb)l19ne^j8c`Oa57Xgn|{Jjwl%JV5~ntw8&ZTK>FspK1CJi`2eG z(Q`LhrWKH)TE03DRsNOj;;y?|ic3#+oLSJN^h54fg^b67A^yAmFF+8dlwx13FU9#6 z>vzg1Pf5LP+?hF(G(s`P=L}p-dd=TIs&-18*x%+>R$Jx!RC-+N1Pn_T<5=Zl!3uk4 zaw@N6ZGL-BRTPgb>Igd~YrW@E=nW=u;VU5D`1t+^yJ8X_E%($6#A`I5#K6^1HOm}M z@iF16K+EqA>*2Fc@icnJDiRfhGD6)8qF>%NcK>@)N2M4qG@GD0@96av3wAjUNc^7S z6&K&rOv$d}pMxCLSvD->&5kaJ>?BifY>dXoA`A+LnF37)vM@XSf_=-%SRXQAuLhc`ft-oP}@uA}6{Lkv9r{kg8G!!r}#jEmXd`!W*_6-9aGeSR;n zhR!pKgyCk3U%I5}|CRbavenf-)ext*t#HrfM(4OBwi!e6vXR7&9r0xI_KeO>8nm*P zJAn~*Oc24Dd;Pre@^LGeG^T)&Rpjg3j>M{7e)tn>P*H-`K^S z>-(45ur~9p`o^Xv?x`a-%nAz{fbcaeTtZ2VR+h%^I(aX&NI^p+AL%Eg`zVsD*1)CB;H_i35)EXGb3G|rf&^}J^Wk4nGwZk-8-cj7UO%S)U%jl*VU1@QL;5{aLZy%W$UksP)fPx%+@VBEdy#9vl4(8T!uhAC_M$ zpc}})AZ>zRP~A358FQ5wUJD4oSVkLRu{$%uF*5Hkxpj=qw~gKhmf>_C=vj~t zj}nSh4#o?bdX%Wg)U|pyVamLG069EUezRZwgGBxfhIQcIbD)XfF|ODU5oBskO)pdW z&dTZ^>(dNk!M?jiD!4178Q4KvChSIW;JNW>5Di3g1}6Olx7C--m#KwYiM{jla2dP= z4p-G#9EWD_yA2}T1xu3~^N>pSl;n^2QOIKq+xjZ+P6r}|bT|aE&;EPu3wu-Cr@Jb_ zoD=$0e!vaS_&&!kb=zQaJYFOC6BAMg6uCiJ|M25J&=x@#IbN{Akoh)E@kTx3EIweb zGlzp&c?b_Xa?tqpmV!ddgZE1ILp`4SyJ6$)R9p#(f5rFpfbGS?muAm@RC@Nm=M@{G zwv9dtqfKur(G=x{rN16h_n(WB?AyXl((jmhMUob#l(qMCp(gfA{VRz=onr-C_4HGY zou^+GsvE)S<2GBw8lDSOJT_9vlJo#2xH#QF5NkEE{=28zo4I2e<dYJOVaoAxv5!$=)}l6{ zdxbZSIBKZa)=I(?OwS$GZ7%0UnPWM8|RlKGX1cuK% zGyv@jUhRIn0Rh{)t%(==j<%StusxkC?Np4ER7u>}7$+^q*8}@24vhnH&1dS`!XM=4 zMY#Al?Ot2f8~QJXjKZ11pp<-k_Y9~(#4TF%(j;u^Y^O09Zrk+q@if)5mN`h|xPyHP z5{PJ5cqHwNc%*4w_$PD9MqpQ{DHOXu)9Sj+bj6B3-h!>L&tg2ub-rDO?8w}rLUqQy zE3co%ld+0u2HFnYaF2T*`&`M3xI>Okh12ThG*GIQXF~mMaaxqiq#0V_PQ)P{W6ZVc zoc~>g?8y7IVr*I&=6q#T$@}ZBwt2vhesQXU1|0~8ksJowMUNMxQTIIj+50f=){DlV z*C<>DxGEG$PF9C>cx2}|`zU-uUVYKmB0Y9bzu+lcei{30JSZ=rZ@PQhr$&wvJZ2!< zDLmG09b6~&P{}qa&J2(%L=${K@kKd%J?LQXNOxOJ78);jgAsSh`y30^yC}oi9ks|k z&XKN45_ix~L^W8&^3}0p$)%owpfJ6Ct89cPYVnE*NfIah&LfvJM9Q(Q;bJ;NNk{U* z?-(SDnRajmTZ5<|m_|6vBROmER=r`|)|Mzf$KyY&i)A+Fv60{{*1xclAnKqQASo&^ z1?&tn4>T5j_*XVsAIZ>ld}yW=p1G*^m(N9aSd8SZC!g)xtjHBvuT?6i?trQQ$3+O)fu&scSe6w??Orp89?okst9?ImPZ$}cpQwniy4OD*ut%p>?pv5M+VyE~8o zWjp9^SKW$|t(RRjF^c33NV$H0@3z5*k=FQDyq7n@O;$KVOkIUd*V0G$y9QqM33@lznE41+Pl zniTorb8;%vtQfa?&U9|k@c8B^?U5m_@D7k1!$c^V0;EDP_#F31t{(aNK8yG`=??m3 z?dOs6CNK$ZdsA11vRCu1W?L-d_cQYe>2JBs9BFN{XM5f(Q8ezy2HWkyKLM9zx<=SG zduK$9`NskpCskvrQF@09{gat11bOz!YfXOGL(*24m3ImaAuFrHR)p?<-HlcllM@-f zJ;y0wZ-5lOmxmB9!K5h6#+PnQMZ|kjGue|>o=@;UFJ~fk_^@oq<)-M1GgHr3|Mv8B z0EG6V=sV7AT(K4y5X9cWG1aqEEmX^1PP41}>GOL;tdXx+dF(waATN^;$Cr(c-UjVEY}> zM9&fp^9+av-Z0gz1cctH4lQ3Vt`<5F$JmP?IhjOJGPufjM49FJKG_>yEhkr+H}u+1@V|nFJ--#{%M6Pv zv-&i3Dcc4Z+w~H9jxeAsn}uZ@Sy)DtDXjMU(_%~^AHCGTXWs9ADJKzDuYXW;8PsR* zz<8iTTG>M};_j9cl%$|ut-@9wdZ(n+it&2SZ?h=RC%gyH`!61G#Fz8j+anR?^i#dB z4BDPgxb=Brwgh!F=#|hU=6F*JXjL(HuA$h$F><#i1Q2c8Tk!K=dpVuKqnFeNvP%%4 z_8K|5du)V;BfK%r3F4Zs#Rf>DBH(_5e?+VdTf!6%aARcK)*?^t#l&(fYX#94eXP`r!pWhPNw zQGs#aXEV_p+v&aQCfzL$^L14%%t>t+gVH$IXu`Nk zyhw*&-FC?>-kLs$Zh1bvtxhcH3B;)l+27|+3@gQr9d2MAOe6{v`vsoVm_G$LcLhO3 
zf_&GG(*3PRhOsRCpTZ3zmbMY#Po9duo&@CUka|7lxzpwXYM$z!H zQVsNt7pS6*r>ctt6SN8P3TLo>aYjC_&N=IR=D zjrq;uOfY!|1=tpWo9#7jF|PJiRXv|DVvhDk3z5(ZqPwS?8?9-d_&EkCt26y7LpQaU zuZ5>ZN8-I=me9~;dwHak_PbW{SS{^Eo^>|^Z97wUFnL~zCT2YZ7_Oc0zfSdkar5^R zIszw;(PeMTqUD?z{hKi6D~|ix;qHYrK#KJi+Pc8=UxF8N0<|%%=eFjM=xh~ zdE$_MI*Q_miAc`bbwwAhKHZSD?dL6}{~uPm+2@Q`3Ge-*Z+g{GYpU>_FSwG`m7Tr* z>$#QR(sq?^7r#-(cXik`Buls(`4G?$=!Yc#RoZ0258kSJ#BWosn~Y_A(Khp^i=@~H zDoPrteDo|(0oiz!jut?LYM7iq0blhXtDg2XVw(o){RJf|eXQ^JaEFs%96#8be#`i~ zB_hIGE#}=^xDn4EE9OVs_G~^Xi?@Ze5d)S@cyIe<{fW9en;x2@Pi?UgZc3)1E5DmH z?(-7^%Pt1twx_3e+7OmBF13n4h`c2ucBr0dmG+YOUdL}V55|$8g9dEl|DKg)JDddv zomMg0haU(%A`j7voeEKIhN<>aOsw%+PK!9_v!=s7vPtvOGBUks>i6q`$Gjxifxow+ z*tl}H3^qd_S-?VrX~)HAlfDJtD#>s_V~*jM2O_V9hJ{K6EV}71TV3{%TF&T#WZS5R zqMz2+;p4udvec_++g>13mTJ0&K_=Yo`bAr=)Laro6ghsnZRR}j7!!|Odetyd;Ah!b zXm|Km%cVP#HS>0bn!a(V=SN8y24dn)eG*W?*>-l~wE6T5vY4foVIV>JB_E@4oIO%Q zl)x&p-dXi=9?eGh^Z-};O4Wt|ikwK+)I($7E0o*Ja9w+-*QX}7OPEEj$@gq$8u449 zZ+jGn?XV+o{RRvL>UkNMey3^l(FNZqT`tbm5aJ)tRam>Qv&>S=!_4#!ri|K;sGEG% zwJ73cfA3m1QP05D*lR8x58G9FWC7PZV z0hHwvb`vy~{8{*Ub#AY;f4iOdWZbbrb%tlsG@%t}k6_`2b zO|lLq``dvOnY=gyqb)9$XOpAren%vQUY|ExYyjbaQE=%r3cyqjiI#;H|ek7!@ zdb7@?iOh^KiHKP5*sJOz$y!_v?iU`)<+hT82+%13n7JxdugYp~OH6W&pZ4SK>Rj;v zCisMbOCdG2R&n6ci@1Zm0*(z+9ZS1a!5I;jb&Ka4)$m}VW7^^91^CPP4hs@XQ)pb! zOM8IW8=Qiedup5t_d$}_o?#ibR>ng5qg4I7gQZ8e`eTw6f-6UKjClq482(uHYn-Kw zO$;f>c9^RyL))?Tkn_lQC`aaH0wP^9#b)~nh-B2Rp6PgesOtPeM0HhiWH)KifSb!la%%77BPdKI zxb4S50Yxz38#~gPrd?FAXc8-FA8MO`-WSsa2Wf!f&+q|zL0rcMENQ%yF8h-cVa6gd z>ywF{_LED=Jc@f-FYjgRP1Jr9jJhbk1i2qL?tsYb{9)nsM#6DE1163?hl!IbiGo^= zB)!8c94vD5znMO{N3?NVMLhgydyO&}P;?Z}Yl)Xu3Z#Y4-7;w#dXD0A4Z>xzMFmCd z?<7~R<)lAgdO0`Bj>yWpis>Iy7w^AdVotCS|KZs-tv{{CwH9>qI@%Ggd6m45ONBa? zU$h+l6(dqV7Q~j$1n?vXp;O_5YJ>C6x+&LP6vJJEE!mhlpU#RE1z)EP3*0sSp0>$8 zTJB7McCu7d995bCb<-lueisFPA&VSBx*0no;3NiuV8`VK{|6%st9nSFSw33INP_+T4 z(oa{7TtidrC_#$AAV=wC_i>dp`_r;5U=N8dMw@vlU(wueRmuEL@Ogw>W2P#Z< zi~A{~SY9_}>gMX`)n^qau|M`{vX(;Yy;d>jLm$4Qpqy1Y+kII$j`hSL1}e3i7PG{%$M>WN$AmEu8Hc=`3x9!M`bU(QN;+U};b?in->dMsr;BZ5T(Whja8~@umX`>9?C7I- zp=Di&Ur{aBA3{iUfAN4Z1GS4o(p;Q+p|8~=MS$9qPkemxt0G3$%1F9z2F%SHS>~T# zTnsK$%={R9+ajc%tOqU_-XZT-9V;Lk5YVnT@L!r?@AT5zONXSm zvR5J5q$cVu#{t)msjW@^mGwZ!>3b@8Ygq&U%X4DQ8-`&9Et^5z72DtHOr1Y6^?EHh zI=}kl1}4~O*X>%tP0t61Eb5?M;VO@Py^!=gH-h3q1YU0TUJbR?v7VD{v`*54y@`iT zn81S*N)CIYTZr~DrTvU6Svf;D%|*#h+M`X9Gnc%0I`N_pUj?11r2)ej>0iqLvliaT-2IDYeSRe~BT;+27LY@`H6-(Jy!-dXR32q+>@9c0k6mq*6R=z)56h?$ z%U5?(zQ{WC^v^BQPyT>NZXIUGShg=r1l8a3e&R_3zlsRleE*a^q1<#PM)_|il`!Q& zSJTj>IE)u1TBR|p<|hz32hc*1_gQ-OzRDNbr_x3x+1cKWBL@;a01EmH7H31i(~NH;uxD9(ev((pzy#h*7kC_Qie zXLtD8SGnb~m&T(n!d`U6H&+fU;F#O7TztjrkSKr9QhSpd`1Pe{KP3GsLb!46b~kNS zbC|xaMm5$b{A8Z4_2$JEGc|)dpk8^`h~a(LY4m8Il-``Fe9_V{*H*S-GkZHdkBS6P zU3b*fpr5YwpT0HEU&inA67@tyb}YrmLTWa9QpL+OCeKX2ED+}*o+<634uQLzg5~XV zkPQi)Ua|41qu@rCC}j?6`b0;9z79D@tu>R~%_)FDF*x&RtM2oCD|+3i%sY*TmD1@# zBcO?}0|UWAW;D30-Mp<(ldNnlbMOeyw4mtxPJ1Ye2YIm=BnIw=$Q6uK^%aQBk9x%g zx|>G5nLzT(rq&|=A{yg5UODE2kOox?vwRy^)`tFMYk)w1du@mKWNA|oNpAt|T%HXX ztp(R*lC8;a!B$$&@?0sE#4@66o4D#do&`ld4z-Ly$uDDqDU5*O)jx1ZIbIJ|1WRO| zpF&Z`2(N_|gLcGz^m2i|opT}!8K{#9sV?{OBlJm|fUUC!9j%o~^Lg}rLxVPmX96DC zH;w_P&F^PfH?4(uA!9H+)2j>FW#=ao0o2>^{656bVLWJfp@zA`L`fPJ>J-=rze+nA zlwB{2M-Ri*;?&fVU9{+F?<(24q&1Z|v;vsYt(wp?f4N_Y zw(KBp=UwB;(&5vaeGO+@#z=Wfu1eTnn+%b=Fnrr645C6>pZA+T7yXDQ^c5RX*+62@*dk$NwO{e`kTYZ0*_Gt!!2_ihcaDB4?Ewf8zNg=QM`gZ`+ zm8nHy_OaTp9~meB(hu(bEG@gpNM5t})M?YIorc-#sV9idPUF==_s^tx{hE#R4@%mmkr*EzkG>=NP)HgymE{ziw>48-ye+8lq5 zH(9)!t+vTD$lFpL8Gp}6csJcU+4(H4jrRwe=7?-#C43wAKQq7sykL0 
zDHIjn|GEhavNRprU<=vURf2xaFLcY?JQXMaS9UvJE>G)y8*=E8mHxZKJ&VhDvbsY_0ccu5J>j@knO%AaaXA?2_y$alH(5 z6zgN;a*WjEjn3U#tW^-nF?6-@n-^5>0n|^jJk)n^+alGp9K^V_4&p&Oa`C^lLXFhll}BB(>7XOXE9 zNTq1w{m_A%N4>?yGkVT_hMvU}3IXB#r--08(r;At>1pQ)V>E~r1N1|mbW`7eFDrrx z(F%PVZ(B{w2$VC;2I+bj@=-8*wBwz?L5tPY(dJ9;N^hgw_YJo^NrRb#zqnh2qE4mJ|hjT9w1}xmhzU+sg$CY`t2Ry1awax`iSoc zcU?a2S#j+LkrJqFk~{dQUVAt+G(V27Pb{^%FxH3O;XV!~Eu)i3m>|F@K(A;6ckO{J zA=$S*Ucons@g}j1?Z+oDI&dm%FN87{SO3pJnT&aQ;%9 ziOg&rt!oNZ)*@qTVH2vAf0`78R2@7I6avasI{sY)PaJDbJBN)5XEFSyc`?}Zq}}3f z%e9X$@{vCvM{vV%Tkm1ZTipB@*odUN(wUNFyjeM`cc}sWpLG|Ga|j~Jo$lWwzYJ#;}VR}77Ol-6@E`Q2EG`~#kwUxzc0AwDl4kTypX%Q~;92{r^ z`hiyU4K73>pAbXrB+p8JXmTdo9RkEIij+DgFk8i~Qe61K zUUHuPnN+|_758GOaCC?sF@5wNI@(}P`P><*QJd}Bl&D0TvX-qMOo=Yt7{IlMrcF}( z?Zmu@op@N`jr|(Dh>dB!t*Z8y>%XMG7C{H2(YN~haf6Bk`hkHY^&=PDikyO1suyYs zC$R8Mq}Bl19(5XQM?wnGOwloz=`pceK>bj*HXpX(*c<%Abtp%0>~VOJ)yg9-^e`Wn z&YbyobGdeK7&QnFOz4l~6T?hxZK^H!KZM3IdkhS^>^udW8qz;| zGgLL2hPT^%CE2;~MvL;Mh=JR^lg`9o2J$MMT)9faLw=dGyPGlfZ$Bb&Lcv91Lc@RX zlmkWW2z=)#MVHFI`tIXrFmBtoTqTPc`jkZqWVT9VIG3pV_saad^BrO9h|#&7^4$_+ z`yCu^eyk(Mz7A$^1_x^FSn7<7C~A@#W{L7Cb1pm(qe^Dz(EE6H&$TC&qiLr0Iqg;@ zRA@|Dwrznr$Y0bz=I8|l^S&k5;G=BA?B%%5wch(AKdW=|#L{*R$TN(im1c6AiNQgZ zHEnx0*>h)We}|rArlahkLE7B5;9jh`Iv&J{*m<5}ewB75JLr5S?q(zKUak!2LexB_Pa*;oMp zF3XcMGLnFet>a;HKeC$*&#$TK%gyair@hoXRNf6qr%`kg)sd!}g+3`=k!Shy z(1f=FQ!FlUZ;i( znEaVx*LEeBuUee}DCL(lqGg+lN8Q%Zh@xCr*-bxhK&JEpZLzWILoJV2ZM;xd-FpfQ zRG}zywkXtlwLjB*)Gzaix|k;clRgM>hv;=YWXRb+a2e*F(^7WwUD-6#SAO7Bejjh@ zH6ltJC}WEBNG=?{n*_}g%=Y_b=i=n7y#t#x(1KIes9~I0^1RBGF}?mwCneMO%W{W59j_U*oxTJ zI5goB(@geyUn%(oOk^GHs;@-`ChqOKGOjh%5@`u1TvL4IMN=l0j_C&JUu7|$kXl@w z^YY}DQZvclud3(THJsO$$+*cuk@rwNk}Sr&Vo_!rNds;zGL(a%fE42<6x~L6Q!Yr* z4TB?cJmFh8-#S`bm2EFtg|y0n0mZNgNT8!&7`3Hcoyqn|6IQs_!bZsoP5zo{!{tW|BX=Y}dp92ASWnlfP(6tQrF1Xlq%W zWq-F_cR{I6VLg3TpG9~0UFug>9VLCra4)^r{D!0#oh7Tvh^?D%!a1-_@Wp6YSq&Ov zUEJ`1TyZGbICBJn#AUm|uuZsp{crE5WM-8WIG7X-7ewdcca!}258<7ASn4xMA| z74j}8pQBnD;E1`DBAk30OPq;Tg#Au>+ARAp@kgq#rNLtRGb);w*aNclIgSC|&Mc$0 z9@CA1f9hdLTc|qYKi9%VedrV63S%*))eZ*KdihDNe%BC#&^NQ76a<&^hxf_YN1h-u zu;n&hrsxalTBHF+OiQNQD)ZpkqR$brb9d;ff5DQ~1ER1^~lP>Q(MFFf{9y}Ewx zOFYmfDFLy1*tMtLb{EeUhEe4u!0AAdP$Jl(0g@x;?8GV|CrN!(Yt9Sii^imLp0T2b zx7edc5sXok6)rzXJjs?{Gij#oilwJrxzkvsD&>`%^oIfxJjQ4|mjLP=&7lz@G@^Hm zZydzA`!~rncumQZ@Mc$5!-t2UE}#lqQHmgFcrDr7@N^ekgq+Z24>lk%3^pA?eJ#Nc) z7oDJ-vLn}s;5w>atte4#%>UZg&`8%MrSo3-Hu^5MGwNc@`3HyXUXbrkx%qp zdxBzkFwa*NpF}~<;kmQ?p969=ZtR9c|Omx)j?I$3$ zYWuDNJ=?(z?!Ud+sl0Vvt&B0FsE3VcKk z09P$O^<_?uJV`@tTINQN@6evbhs-L@;i0Z0M^YAT)K=)VismWE)QSH&fA2Pc?J{ZH zE|*+GpV0T^jjgoPzKHof4@HgcB!Lf)! 
zpU)Fj9*}xVdusiWG7nGrwF3ypAA2INe0_}rT(P}8bW#!66Qq1EvlHc%7+hvP;7CdG zYHh`}QQ$4s!&y(3ePw#9pqg7H{e=*4&6Dt%F>qALb0i**N)skhuapfQz0yEV#+{y> zVlb#Ef&`CkJa_ktraQ+lQWCM=MOFT5{7uSPxlH5W@lk^kMuqM^IgCo)dzFAe^4 zfZO-Els|iF(x~_AAy<|?&5BZ+nBT=$nF) zL<2acOHOiiR@&O!r?V-C-cpVW2L$0edDtQ$cfc_~DTO(?hw z+?`Q$Tdv<@xk$OnjqJBhm_6G8_gp=FI~jkpBfKqn!LT+=vz&9oh?mG%Za`6Wdx>$IR1=`~8Ey4K5yOUUmbxmb@sS8%SeIfN(#EO!_= z1!<`Hy1Po$&9T_+0BPvIM312ulNe#nUlo;@5glms=^ZPWX)sBp*x0bJVsEwq44}XJ z24aV(K0w<7kqbv!vi)bLuTato(gBOGZ(}pXDbwLB-yP*wZ6Fw6lidOpk;_q#!B%F} zSM%ijB8nMw#{f%*ZF<8e>ClDd2*Tz`hr1Wv=En3x=-41Kc$v^qCz_q()ZHBkLVSS8 zpeU}^nE;90);m4De3q9Yx0`v#S1TJ?3~OT8(^K@JNuWt?fS9Q6pr0ixTQ$FxwRDKY zcu-|Vb&>C{k>c zY&=0tu_neCl2Kuouiqeg_Nxpw_Pq2K5`EgNv~5DF4Egnw=EU*1CUe3~{Q~7ymsyQk zJf%SHY2ktk&rW}_Y;5o8#$#W?BtDc14ps!Fpw7E|I)sqkb3`v6@;=vbD@jE-pjUMo zvot%B2X^_VB`_*CY({Vm(3n4Rtu%>X&SQ~KqZ<16htk?Z{&*z}v+7;m87q$@ogM=V z-yWCjsQxbcjN=~S;m{|JANyCItY!(f{DbF|!K@0aYLd5waxZIm!Rv0C;PiHtMw}wJ zhR@KJMKdei$iXT;*zRk}+ofLtv>swV3Q|@pxK%5ETE?O>YwJm+rG$7(AG6M`hLHP{q(vt%7)#tK3*vHRzK2dyMcb_dBM)fhme@LoXTYDU7wwQ zB@53-&Qz3UMf4kPzMKw?i_mK9FpHOH25KLhmnm79m^6evDjmo;>=s2?H9FCVKcY?z zJ=VTtBc=YT{}uc8ZLr;3Cyo=Q=T9PB5*lrvn*t|c;8>*^`$@>S9|}^uXF?IxuW6@S zo7YkFL*#jO=j&SYQoqs1`c&@2Z3pGZ(YBwXhm@@ioZj+)1agq`>iUfSbd?d)*3!=5 z6(r_HMwfDGZP>+2H`>kb+k8ZpOogemqtf2H&)Hz+zI>-D^!N-7l{Vkz;`%OO0L+Er zR1S_QRg|nVbiyUXPcnfb5Q0MWXu{DCE0!tPzha9d z@|G#eDL-@KLKI*cS5E9$9=|Yi88u~#!{E5u+ca(m3F64jKD7*+JAIRw(pY>Ly(6sx zhr82pPZ4`Aw1?_VnO{VyyNOfuqtn&w*3RjPvDu@ilK~3(+V7aeAqHstLUiTQH!NcL zo9JlY6Pi29d(nj;Q6MyKfr={4V9Hi$b_?#pD=5%(OQL$sBh@3;!MR{D?KP@%p}Mph zzdjxqb;lVr9)v&THMqDN&X%?zeN#iMwp}kxEcL3I^ZT1-!!=P#Fu=HU)gq9!OHwqV z$i#lyUc1-zx#6$)S6!d+p0tc6;G(6pW;y0Wj#KN?#QzdYNDR_Y@I^^0{W&nuF@95{ z#V=tXgV*jy6$?=)b+AyeZVt|prbM*U1FaRXiM*2#@Tz(Nq1lht_W%s0v?4FHsnxe7 z`a@EkGCu}xiPP^cR4bq<=%3(!Ir-95wTi;$P4j`{7{uC7Sal4IBU_KmNV6r>yUF;s z97OD z4+k2BxcM`GYI`TGDV_WZKH(4W(NLL7E_w4?rsc=HBp)zTmN>Xu@{#3n%J>zKy8&g^ z9t$97V$HL_E4i#JyYj)@mV%)8k24WHpjR$i?Txc!wqJkUaB!UM1kgT^J>)*pAIOnN z%b4f$W|9^*rs?@xmcXDG9Rm|;nMA5>mm3dc+29^ia2~WlD*_+XJ$lUZMIoBxc-*4` zUKW+hXa8Z5{?ibG5B7R*3V(jSeKB7{q5r;XE0`0*LIzVKQYm3fQDVd z^#3S23%914s5C$SCOsX~|KeyOHh`7^7jZ5o6@< z-TM#h`d+)X=Q-bV&gZ^8VhfEl8D|&j3D(+`s6mR<(#acbXA$|h-o(*MYrHf%xUbE! 
z3)nk81OW`2=Z#l`mKh)6$Wa=y=Iqwd*zcsLxc z+|1X?p^P#QVX#IFZk*8iW1GtL^F&z%xUn22?RDZ%bu_nev|74c60gNclm2I)y|WTF z9OE4gc?*;Uqi8Ic@B1;!mdSUbHM!7E!3~)YhctVi>U^-=!RGNe2}=^9DmUK@e6ysR zS4{v-7*|`P>pj`E{DRkUDTq80o)Ht6qK)V3egzJdR!78Ne_E(%T!}yAb)jYN+4|*f zQ7zXjHtdQE6lhcz-4Id#V|2NuFDMC!M9WZovH9K{aO6H)f7(d%demB6HuX3lJ}b5g*!5m~IH0Ekfo{Y8 z8S4585KJaq$^WyTs&Cm4_~K%;@y388ED^3ZSn46&J=-3bPiVUHb6Xr2_f2_s6o>La z)39slD(Z0iPz`rzyCt~FL_0B}aG>cnoJ^cfzt<}3@jGT~FhiDe?u!3ABlSXK>#IS& z>=$$a&x&xmpI#ACSOp|wIJ14q(~&g`x*^l6ca4=g z8!ykG6&Cv5wO3RF3G?|;l7*%YJ(1Y(0}ZdN_+ME=nY&uga{Chx&FQxI>6$V~#afsH zlFh-t(Wi8GUZ35{mT7OReu(#dp?)!2$rktx80?;6?bgz4kb45WzB2$~Cul>m&Xhhh zU~@e81ONT+$!d1rBQHWMy2ge7(*L7v>IFWqwA%;L`D}Bb?kv?8=ZM+|ob%rk2a6T; z{ztyq(}H;~@fxU`JAsn;HEwmV%@94E7H)?0@N`6M`4&!wH`!fCQzO#&z%zo2`yKW9ls%QU)W<%8v10^Jt^szM??j#_&*m zM1CnL8jmfMDtlXYSB&8exeKPMtsguLe-}u79pM$g^g4Jg|CRR|6ZJ53O+&}<&@ZXD6tzWQfQ>3w5b2lkth4%z#N)xyHyY4Pk{ z5*fZhu`nkjmUe>m5^166$Ffa&^r~hoCpILYiyMSCcs0;gXDoD;N4c%CFX2Vri0MR3 zyQ627Wv#V-Nnsi0y+wyxMXxIN!a>{4((0@r{E;ApL;AJ$(a%xttF2@Ux`?p?HE?I$ zl~6cs>k1nGbZXex4BmOU#vZLo6kr2!dt{lOh^!Q;`CVc}Oc+n?BT@ooaN#F3v!fPl zi}1U9Hx-b-o&U26abdwvl0~Ku$7tqs^ zU^~k~y(TLfxwE|If%6q#yo*LDHw!LWs`$(zG)wd;_AYtRy1lggQNrjV&|hvJ`af~+ zCjz2+_@mDKx`*Tt8DF&zuNw{&T>z@BE$6FS`_vYu7 z6qy?}n6o~}gu=wZZ6RM0&>Z$B4|j47AK=2a3*PL#N&h|3);=Y6lX)Jn;-Vbqo%&5n zbAN0PsF-SnIL)Q;R7EXMq2&=aI_MJUgw&wq_lL-X0o-48JIdz?=^PO6vaN}-2*`^X zkurcvmoD|;71*?WURaL!u>2vObp1`x$WQ$z_()<5^@(iH(ttZGU7SwCW(cSp3ax5g zNCC6WwQlXu`0z^(LMXm=_n?UJ$DrM{#C11}h8pcpXzn#^(UD(mh|*kIro`bEGHCwo zQ?OOdpeyeh^$N0)5ptaDm&*SQIA|#}%Dgm+*r$i0)A>9wEQ^1PX%79V^+01C-%pXZ z+lHn5ob;D_tq&#!>i(kX44xpDR`Bzb@55!)9&KH8}*GyfoW>A7B0OPkr5_aVhOz|AyNTz`?^kB!S)$cHa&g@?E zmz_L}&(%Aiy~fOvNTe!gmG3q*JS%*wLKn^b{#yAYU<%&3IOtr(HrO)w7Q zMn#*koM;d~!}fm9jjfwe$lah~AWEU@$vwpnlos0RHxRpPkx9F0#h0OLe(thdeHPm#j94rv>*rnL)*^p0^sk@xW^MS9nQlOEomA80 zhp9ARaC60U-fea!KX%8T!gQ~GV0&S2qx#BLatY*eCAGqq{5-ebeWP-?LMr!U%Y6;v zdH6@=Y+p%}I3SHX>g#%Ptj`rVY*fe?Gs&3^Sic?b$I>Du(#Kr+s3hb(t#U+2Lt!$h zS$FzU4j`Ppz}cx3vOV=Zs)%W^I(P=to&Vj9j$w>?{YBjnLF5Mj(>(m{;^OE(iv+Ya z0_53~SSVUxXNiNs(c$G+t$)A@S*gVKFY;v`mqX|h;vQei##nh~tK)E{y3n*|ohMhR z)j7vCB=&GrdkP(i0osb2({<7uOzT*$Sa%i%bMREkcx^zvQK$Xg!1?Nh(hV zH1rMjL`h*Bvww^n+;ABL@{}CfA}h(ddUN*AzEmn#$`h$1Q(uXkZEY>^Cd5n~>&A0v zOxzeO(peTLw@lbg0e|IbO&e&IZ+qd3s{Qqx_N|JU(Ah(RipQTylb8|x2x!>l`ts{{ zwyVzmPoVNXb90lM*11TglM`zaZi;Q*O}S-VY~18iiBqnBo%J|`KYC9bVb7m?0k4cx z1t9(vWW+mHevK1RD3l&WsnyUd&TkWi86A6kB|m*LKf~>{`bhTN=k2?oMEI zfLm-+^QimNUx<0i!<Y{D?DnMUuC4YdDGQ@i3%$3Cz66KZiGD=(Aba83GpVzCK=-(mgvnsknvVxY4r|NO-o_h~ToL^aSB{n_pz6}j18?Lab|7($z-EE|rh z*5DY(@*Z%iSb;9Sr?JKT1D<1AT|=260P*)&eCh6`??-v;#|>g%6=Mv20rx3OF7{|S zB!Lt1i;@iT~LCmj41lt74NiO@99X0i>Q?2?#=Bt-fKXyg=m z=ZrYq8E|=E7WvXxD?3^I`{X{vIs2z{}Q{S!BzM32~i{A%&4%3km;@`=oQm52OWPaODqe zIt>jKS9A#?$BfU=C+a*@VxSHWYD{j#CR2NNiUzL3SNq!MojpVAUYp}q5yu1}&H3_u zGad+#7JJ9^5uIOlgp5qG^BEG9;gT^ifwX*BtS6G(Ip^LDAwb`m)~CC!E-Oh|)qya1 z>-&#XO@HlElACLeF5F3~qkPlK24O9M%gt}U1@P{HBUfkL<29YAjM^b-WGy=+BOKM1 zm>e4b8(GWBIV8ClvKr^CQEehgaGU7lRax-=1!gy#!WjUNK#kjkdBYSA@DDZW#0I(?yKaK)ga z3KsKI)AM@Fgxaf_RGb=@u@CG}$5ZCznQFBr(whLiC-iG}l4Duy*{3L8keocY%a(sA z#*pu-K5wW!u@*|(0u&u|y+bVgtC+AUS01*=2Hig*He+foZRkCddQ~JQohm|N+alWV zc=^nQ657BPG$wBbbSu9)CRD_}%g89xG>|=sF>CfEN~deO?A=-zx#FgaG6_PnA_#$I zn2>0L>6|Ue5swkMCF%N7jh_%I0kK&7Q{$B0`=vhx6m~vtCU*DN8T_Uws*LHoYQxIC z>hI_rK|yHLd;bY-#Afs(Gf;GXNeAKkldoz=X_REdErjSQ%m&G=IiJZ6+EV*{^>4N* z9}uM*Grmr3v%B`-v{$LEXE%YfeJ4YdrMD|Gsw1X67xJuotHN9ek<)a`G&sQzr#$%3 zc~7&B`zg`F?w;YH>X8iirO61REg3ViwZrcF^0fEACtgcC>_x(Y&%l)LFOB;n@64vX 
z6&LO~D$NZ4y(Xk@niZ@ap6#qE0=X}%zEuOLoC`cM*{xLX6MKAm*oMrrU2N$mO{qq@)I$sm3x!czmUuu&A{H)8V%7L|Du14D~<*dl=82ocMR^( zS*ndNJFX$Wo{<{OP3?HIYH|>OLB5jHt(YWdzlJdA3ms)=ZsBPrV!=UGOaF`GLEf|3 zdkvoI5b?ODf8lV$U@^p}F2Xp744u9h!HAdp2!+q4ehWm(Dm8_%MR+W`oG(muhld!J z1siwEc&W)SoHTptV_BC0qM-N(TfP|a4%CpxK)*;vR{rJ*I#Ci3Y&NOxe0o2Mk$fL8 zvLGg-L%!Ln0L!CPU)hLY_LztdfA!3+hbMUjWB+J!+WKlolRV6&Q*Mj0Ush*`h^JT# zce$fBr=EcvuIL0B$p$}t({|%EIPEpVyPCWIovuW6dz0y_ISu4h=uh-fSwW4<#Wp$Z zZstE9(Wg1BC~#-Lb^SU-QmCp#7`WEIeSr@uDxNpIk#;4egzy#NKEcwAW1ig37QOJL*1X`4!}}Uyd~6QS|StP9bYfgjb^rvR`{3^@E=k`de$pScxM)3K%<9nptTF1NH^AzI%{AIodlJ` z)NcuJsWcrdS4@Xna$InqgT0-mxM|)UA zeWMq*OBkLVZGfJJ&{1s*r(S}zh;sbB)59-mk!?V__H@qbItAN@pxNNm3)}&4XqN;6 z`_$EIlr`Kdl|0cRc%n)XqkoiLm_ma!XCZ-KK&TBd9U^(trJ{8Es{^&s4R*uz!gwy( z+Xgv_zc(7Ye_O1s!x*@`AJT)RotPTBhcv{70=&n9QpQ=jqk^#phH>T8uBJdt4N;Na zqpY2?@Wl-!*0fklK&)Rormx$WyJ^cU2xg9lTvMCw^AGf=c{JEhPDLRz{5^F6%JA-f z3x>mB8H9Iwv{z+G8bP~W8ar`*N-PNC?~@c6o^y3`D&@7^y+BozIvSSeXYzy)mM73h$W zm({HKV;p-jet#gxs=}S0>T905cyMvLgl8MEV3UmElyRpw0OO7VssC!KXA&=3$D`JV z)+6c3EZBo*8J{>~4PbT=hJe|Hsy$n@w_Qe9e0V)@Q;xyzr)z$5{q@U@Hskolg18dk zma#h0B7NDYALco>S@y$(qW!}zt;H7#Sl@9RH=~7P;^;7ZCqD#oL_Z2<#3~*X`imRR zIibcma6_*sT0!OflTUEnw{pyz-1+?3x`(2@(NN5rObQzAQ=@23Sl4=3G< zIK!6nHlIu_^f7+L~5Tv!e@;t>kxwH%C?B~2kqQVccJAPxR8oMK4Jcu_Qf zMrWXeJ&|YD#d<5Rm3baBeiO7|nbsXs5PkA1a9&JD`Q^9Y=K%)ppm2!TfPqfwP-jo3 z()(ux%1(h{?&J|hCS0D=^Eze}TYs43HPn+P=HwzArd$>F=8uNJf*5E{H`!K8G>`W| zGH2hUGJBiRy#^Ewb+AjMWNtMe9u0M?jtyBKDo}R??>O6}qZmViaR-!YY`hq*Z26->v)YYuT z{{Aki+U~gR@4wa$J%J<^x!r`_-=2Z5E|&3q^3U|d)|UUIbk zx_o`4WeoF94Hs%)EULkTRdPN6obw_r)LcgCaiigVO5>PBt%g5SY*mNqcCMnd*cP1K z`NQLGQ@^uIaRXCdC)GAMn%7I6#x_a21($xQJfv3+A{QkIh&lzD5N32C*>)nt1Z~%b z^F(g}V>=nX*|LXHu{u`Tv(xt!Vo}7i+ZMG7EbrA=KA&@}UJdVs?WO(Ctx^KMg zNY-ujO?mbEzC14Z$$iiD0PYYcKrdN&K&koZ`M?%|`lqbwN#o!EO2NtBoi~N?G z4uXOz6%zDI30ZrY`uJXl25Hr~M*2KMd3=T-0=jt9oxMt` z`K)nY;(orWk?IYmSdi8&=Z*a=9Mj=ely#?oD$ANe<c*i0f!<@kU(lcLP z?-4H;Gt3EvMh%GR^h+l(;=c@)0zuQTj`bf(*zUUaSLUYH-vPkLe7<{yhqSIy2aL?u zNve^Hpld?{4nqC+WErr{Do51CxMAp5Wa?ybf^vN$CXP*y1fhuO4gy35TwPxM{O8%s z;M9BXuoV*|x$WRQUD!_A<0TiNJ6@7Bm6caKn>2~om~5yUxpznZs6fvR0ch8$s{3o> zQczXhFM_9o#Sx~c$H0d!ZhtIR-8$IXn*yX_hxUkp_S4~w>l(pFzc;@sM+BAulf&Xe z3xjRVM?umaCsgw=wa;xTKTsLy{2HJGa?Y>a@WM)5;|= z%7YCv`VWD+a$dIwAt~odBjuFzeyL@#H3nl?_>=RfR!<-JHrlix-$r# zc;tLs38C;4(%kfSO~+yi?g&J=ECNjm{2nhppu?QlY6^t2(>V3OWAI~(V!%?H;#EJ} z?st_JcjK*KPINCKp|9&rTa4jcqqqTz=cFb6T@`Mr)2{nEgHoDxU5l#G2I?TF3ovw1 zo-R$qPGtbO&52$L_o=|L-@U4l8wh?40(?`QzFjPf^)fT*_xS;T4<1|WXc)+2QPKN8 zkmN6cktccm`7k#fOGGJBGQsfk-o~3i>95^$L$cUQ@UIYuGooRuZNGaB#|)60jt(vj zxCnx2**(;_zL5x=n+8qIygWLu)M`{I&ZQ9=WYFny+yd|n6CGN5{mA3c`07tktbXEIMAkGNX^|tsJ#zqyf5yy*hbZet*!g425mUiVPqw=XjnfU+vAUKF7RQwHr8LVsfQMLa$D*DJt4&yN@wMwX=_>; z!kY!s>ubM``eL>2gKm#n<1KQm&S**#5_)GR{Cuf%Qj_oEO%#wnNn9?XG{YCG7zN^p zU>?QIclw+6VweNk3HS+#eN5T8ZQ?{eiL>y<5!H>2sr}w`i1{;1P7P!)GQa$3JuDXu z&Pw|nQOWUlsw;{T>Yx~?yC+AmhHe<%VEfS@j5kv@FG3JGK)DFx*??Ti^EUtl@;_7T{Bfz>IM45Hl7Ka)I^q)!geS^pkrm09guyV6;w8YFm%r%C_}l^( z%_)+5H=Z}-R_UqD-X9T9Trg?`wgxMNhLa!VGvuRxfV>%a$iD$q5xS2rz=)~dO;X+~ zEvij3rz~5_BOi0ZinHFG0KnQK8QK9uLcYajWI&Wfo>alBd(O9;Lb+nSuK$4V-9_s> z?=rC+{qV(W_7xGV?qA)Gvxcf-=zpE+VNzt=8wJwO$rs2|6T*hJ8%!s_RjTb79*(YV zT_63ZkHhrAkk-p%3=!cugQuy*PgzkkX5UW8&tqfn-GRzWi)!0rCK8r*ZBjSwD@&(6 zPd>I@wznS&+hnS`Ba!YYb_RqNt(7oWDvRMSQ{?M{IZvGS|9e7*2APaq`|mT`ZjoVB zCehNLQOjmJupd2p9m#{nLAw*DBmGR-KaMTN%~`CGMx~N?qR{d?ExfG*ONVA&>MX{V zrrRn{vUVG%2+@u^FVn<{;R+L6^lTHnj|(BGDp+iiXv z1kq!!B_@n{rlQLf=yasB8eNZRHw;HvbK3xP+X>W$4vS~*{$6zvyD_rdndky!bAKLT zZCc>J_x_>4M9gQ$cgbODZcgOjN!qH}d>Sg3QrISq@#=EHNpEOrr{p^4##~DW+$mxk zKn0h-(a)B| 
[GIT binary patch data omitted: base85-encoded payload, not human-readable]
zyVW%E+kol~eMRuU_F1yBlG&e1Qptd%uS)lChrb6j%TXM0+9<&3n)QDO{5i6<3hfH6 z4+6bRE>#K}a(87-KS_cV3LzQoiqnQQSJ}55b?r*KRd``iiD4&~ z9XaNyLG~+^Ex|w>{{SOS1%V{~w1Q*@j!n_|iuvTZK6% zik=uj0g98)aZ;eSUKx~Q-k~=w3sJhb#I|O`2mCWr#cs(S PromptTemplate: + raise NotImplementedError() + + def data_examples(self) -> List[Any]: + raise NotImplementedError() diff --git a/llama_stack/models/llama/llama3/prompt_templates/system_prompts.py b/llama_stack/models/llama/llama3/prompt_templates/system_prompts.py new file mode 100644 index 000000000..27b1a3502 --- /dev/null +++ b/llama_stack/models/llama/llama3/prompt_templates/system_prompts.py @@ -0,0 +1,311 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. + +import textwrap +from datetime import datetime +from typing import Any, List, Optional + +from llama_models.datatypes import ( + BuiltinTool, +) + +from llama_stack.models.llama.datatypes import ( + ToolDefinition, + ToolParamDefinition, +) + +from .base import PromptTemplate, PromptTemplateGeneratorBase + + +class SystemDefaultGenerator(PromptTemplateGeneratorBase): + def gen(self, *args, **kwargs) -> PromptTemplate: + template_str = textwrap.dedent( + """ + Cutting Knowledge Date: December 2023 + Today Date: {{ today }} + """ + ) + return PromptTemplate( + template_str.lstrip("\n"), + {"today": datetime.now().strftime("%d %B %Y")}, + ) + + def data_examples(self) -> List[Any]: + return [None] + + +class BuiltinToolGenerator(PromptTemplateGeneratorBase): + def _tool_breakdown(self, tools: List[ToolDefinition]): + builtin_tools, custom_tools = [], [] + for dfn in tools: + if isinstance(dfn.tool_name, BuiltinTool): + builtin_tools.append(dfn) + else: + custom_tools.append(dfn) + + return builtin_tools, custom_tools + + def gen(self, tools: List[ToolDefinition]) -> PromptTemplate: + builtin_tools, custom_tools = self._tool_breakdown(tools) + template_str = textwrap.dedent( + """ + {% if builtin_tools or custom_tools -%} + Environment: ipython + {% endif -%} + {% set builtin_tools = builtin_tools | reject('equalto', 'code_interpreter') | list -%} + {% if builtin_tools -%} + Tools: {{ builtin_tools | join(", ") | trim -}} + {% endif %} + """ + ) + return PromptTemplate( + template_str.lstrip("\n"), + { + "builtin_tools": [t.tool_name.value for t in builtin_tools], + "custom_tools": custom_tools, + }, + ) + + def data_examples(self) -> List[List[ToolDefinition]]: + return [ + # builtin tools + [ + ToolDefinition(tool_name=BuiltinTool.code_interpreter), + ToolDefinition(tool_name=BuiltinTool.brave_search), + ToolDefinition(tool_name=BuiltinTool.wolfram_alpha), + ], + # only code interpretor + [ + ToolDefinition(tool_name=BuiltinTool.code_interpreter), + ], + ] + + +class JsonCustomToolGenerator(PromptTemplateGeneratorBase): + def gen(self, custom_tools: List[ToolDefinition]) -> PromptTemplate: + template_str = textwrap.dedent( + """ + Answer the user's question by making use of the following functions if needed. + If none of the function can be used, please say so. 
+ Here is a list of functions in JSON format: + {% for t in custom_tools -%} + {# manually setting up JSON because jinja sorts keys in unexpected ways -#} + {%- set tname = t.tool_name -%} + {%- set tdesc = t.description -%} + {%- set tparams = t.parameters -%} + {%- set required_params = [] -%} + {%- for name, param in tparams.items() if param.required == true -%} + {%- set _ = required_params.append(name) -%} + {%- endfor -%} + { + "type": "function", + "function": { + "name": "{{tname}}", + "description": "{{tdesc}}", + "parameters": { + "type": "object", + "properties": [ + {%- for name, param in tparams.items() %} + { + "{{name}}": { + "type": "object", + "description": "{{param.description}}" + } + }{% if not loop.last %},{% endif %} + {%- endfor %} + ], + "required": {{ required_params | tojson }} + } + } + } + {% endfor %} + Return function calls in JSON format. + """ + ) + + return PromptTemplate( + template_str.lstrip("\n"), + {"custom_tools": [t.model_dump() for t in custom_tools]}, + ) + + def data_examples(self) -> List[List[ToolDefinition]]: + return [ + [ + ToolDefinition( + tool_name="trending_songs", + description="Returns the trending songs on a Music site", + parameters={ + "n": ToolParamDefinition( + param_type="int", + description="The number of songs to return", + required=True, + ), + "genre": ToolParamDefinition( + param_type="str", + description="The genre of the songs to return", + required=False, + ), + }, + ), + ] + ] + + +class FunctionTagCustomToolGenerator(PromptTemplateGeneratorBase): + def gen(self, custom_tools: List[ToolDefinition]) -> PromptTemplate: + template_str = textwrap.dedent( + """ + You have access to the following functions: + + {% for t in custom_tools %} + {#- manually setting up JSON because jinja sorts keys in unexpected ways -#} + {%- set tname = t.tool_name -%} + {%- set tdesc = t.description -%} + {%- set modified_params = t.parameters.copy() -%} + {%- for key, value in modified_params.items() -%} + {%- if 'default' in value -%} + {%- set _ = value.pop('default', None) -%} + {%- endif -%} + {%- endfor -%} + {%- set tparams = modified_params | tojson -%} + Use the function '{{ tname }}' to '{{ tdesc }}': + {"name": "{{tname}}", "description": "{{tdesc}}", "parameters": {{tparams}}} + + {% endfor -%} + Think very carefully before calling functions. + If you choose to call a function ONLY reply in the following format with no prefix or suffix: + + {"example_name": "example_value"} + + Reminder: + - If looking for real time information use relevant functions before falling back to brave_search + - Function calls MUST follow the specified format, start with + - Required parameters MUST be specified + - Only call one function at a time + - Put the entire function call reply on one line + """ + ) + return PromptTemplate( + template_str.lstrip("\n"), + {"custom_tools": [t.model_dump() for t in custom_tools]}, + ) + + def data_examples(self) -> List[List[ToolDefinition]]: + return [ + [ + ToolDefinition( + tool_name="trending_songs", + description="Returns the trending songs on a Music site", + parameters={ + "n": ToolParamDefinition( + param_type="int", + description="The number of songs to return", + required=True, + ), + "genre": ToolParamDefinition( + param_type="str", + description="The genre of the songs to return", + required=False, + ), + }, + ), + ] + ] + + +class PythonListCustomToolGenerator(PromptTemplateGeneratorBase): # noqa: N801 + DEFAULT_PROMPT = textwrap.dedent( + """ + You are an expert in composing functions. 
You are given a question and a set of possible functions. + Based on the question, you will need to make one or more function/tool calls to achieve the purpose. + If none of the function can be used, point it out. If the given question lacks the parameters required by the function, + also point it out. You should only return the function call in tools call sections. + + {{ function_description }} + """.strip("\n") + ) + + def gen(self, custom_tools: List[ToolDefinition], system_prompt: Optional[str] = None) -> PromptTemplate: + system_prompt = system_prompt or self.DEFAULT_PROMPT + return PromptTemplate( + system_prompt, + {"function_description": self._gen_function_description(custom_tools)}, + ) + + def _gen_function_description(self, custom_tools: List[ToolDefinition]) -> PromptTemplate: + template_str = textwrap.dedent( + """ + If you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)] + You SHOULD NOT include any other text in the response. + + Here is a list of functions in JSON format that you can invoke. + + [ + {% for t in tools -%} + {# manually setting up JSON because jinja sorts keys in unexpected ways -#} + {%- set tname = t.tool_name -%} + {%- set tdesc = t.description -%} + {%- set tparams = t.parameters -%} + {%- set required_params = [] -%} + {%- for name, param in tparams.items() if param.required == true -%} + {%- set _ = required_params.append(name) -%} + {%- endfor -%} + { + "name": "{{tname}}", + "description": "{{tdesc}}", + "parameters": { + "type": "dict", + "required": {{ required_params | tojson }}, + "properties": { + {%- for name, param in tparams.items() %} + "{{name}}": { + "type": "{{param.param_type}}", + "description": "{{param.description}}"{% if param.default %}, + "default": "{{param.default}}"{% endif %} + }{% if not loop.last %},{% endif %} + {%- endfor %} + } + } + }{% if not loop.last %}, + {% endif -%} + {%- endfor %} + ] + """ + ) + return PromptTemplate( + template_str.strip("\n"), + {"tools": [t.model_dump() for t in custom_tools]}, + ).render() + + def data_examples(self) -> List[List[ToolDefinition]]: + return [ + [ + ToolDefinition( + tool_name="get_weather", + description="Get weather info for places", + parameters={ + "city": ToolParamDefinition( + param_type="string", + description="The name of the city to get the weather for", + required=True, + ), + "metric": ToolParamDefinition( + param_type="string", + description="The metric for weather. Options are: celsius, fahrenheit", + required=False, + default="celsius", + ), + }, + ), + ] + ] diff --git a/llama_stack/models/llama/llama3/prompt_templates/tool_response.py b/llama_stack/models/llama/llama3/prompt_templates/tool_response.py new file mode 100644 index 000000000..3df4dac14 --- /dev/null +++ b/llama_stack/models/llama/llama3/prompt_templates/tool_response.py @@ -0,0 +1,63 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. 
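For orientation, a minimal usage sketch of the generators added in system_prompts.py above (not part of the diff itself; it assumes the new modules are importable from the paths shown in the patch, and that PromptTemplate.render() behaves as used in _gen_function_description):

```python
# Minimal sketch, not part of the patch; assumes the modules added above are
# importable from the paths shown in the diff.
from llama_stack.models.llama.llama3.prompt_templates.system_prompts import (
    PythonListCustomToolGenerator,
)

generator = PythonListCustomToolGenerator()
tools = generator.data_examples()[0]   # the sample "get_weather" ToolDefinition above
template = generator.gen(tools)        # returns a PromptTemplate
print(template.render())               # Jinja-rendered function-calling system prompt
```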
+ +import textwrap +from typing import Optional + +from .base import PromptTemplate, PromptTemplateGeneratorBase + + +class ToolResponseGenerator(PromptTemplateGeneratorBase): + def gen( + self, + status: str, + stdout: Optional[str] = None, + stderr: Optional[str] = None, + ): + assert status in [ + "success", + "failure", + ], f"status must be 'success' or 'failure'; Got: {status}" + template_str = textwrap.dedent( + """ + {% if status == "success" %}completed{% else %}failed{% endif %} + {%- if stdout %} + [stdout]{{ stdout }}[/stdout] + {%- endif -%} + {%- if stderr %} + [stderr]{{ stderr }}[/stderr] + {%- endif -%} + """ + ) + return PromptTemplate( + template_str.lstrip("\n"), + { + "status": status, + "stdout": stdout, + "stderr": stderr, + }, + ) + + def data_examples(self): + return [ + # success + { + "status": "success", + "stdout": '{"results":["something something"]}', + }, + # failure + { + "status": "failure", + "stderr": "brave_search encounter an error: could not communicate with api.brave.com", + }, + ] diff --git a/llama_stack/models/llama/llama3/template_data.py b/llama_stack/models/llama/llama3/template_data.py new file mode 100644 index 000000000..620816ffc --- /dev/null +++ b/llama_stack/models/llama/llama3/template_data.py @@ -0,0 +1,120 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. + +from llama_models.datatypes import ( + BuiltinTool, + StopReason, + ToolCall, +) + +from .prompt_templates import ( + BuiltinToolGenerator, + JsonCustomToolGenerator, + ToolResponseGenerator, +) + +INSTRUCTION = "You are a helpful assistant." 
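+
+# Note (illustrative): the helpers below only assemble keyword-argument dicts; they are
+# consumed elsewhere in this patch (see prompt_format.py), roughly as:
+#
+#     interface = LLama31Interface(ToolPromptFormat.json)
+#     messages = interface.system_messages(**system_message_custom_tools_only())
+#
+# so each dict is expected to carry the builtin_tools / custom_tools / instruction keys
+# that LLama31Interface.system_messages() takes.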
+ + +def system_message_builtin_tools_only(): + return { + "builtin_tools": BuiltinToolGenerator().data_examples()[0], + "custom_tools": [], + "instruction": INSTRUCTION, + } + + +def system_message_builtin_code_only(): + return { + "builtin_tools": BuiltinToolGenerator().data_examples()[1], + "custom_tools": [], + "instruction": "", + } + + +def system_message_custom_tools_only(): + return { + "builtin_tools": [], + "custom_tools": JsonCustomToolGenerator().data_examples()[0], + "instruction": INSTRUCTION, + } + + +def system_message_builtin_and_custom_tools(): + return { + "builtin_tools": BuiltinToolGenerator().data_examples()[0], + "custom_tools": JsonCustomToolGenerator().data_examples()[0], + "instruction": INSTRUCTION, + } + + +def system_default(): + return { + "builtin_tools": [], + "custom_tools": [], + "instruction": INSTRUCTION, + } + + +def tool_success(): + return ToolResponseGenerator().data_examples()[0] + + +def tool_failure(): + return ToolResponseGenerator().data_examples()[1] + + +def assistant_builtin_tool_call(): + return { + "content": "", + "tool_call": ToolCall( + call_id="uuid", + tool_name=BuiltinTool.brave_search, + arguments={ + "query": "Who won NBA in 2024?", + }, + ), + "stop_reason": StopReason.end_of_message, + } + + +def assistant_custom_tool_call(): + return { + "content": "", + "tool_call": ToolCall( + call_id="uuid", + tool_name="trending_songs", + arguments={"country": "US", "n": 10}, + ), + "stop_reason": StopReason.end_of_turn, + } + + +def assistant_default(): + return { + "content": "Hi, I am a helpful assistant. What can I help you with today?", + "tool_call": None, + "stop_reason": StopReason.end_of_turn, + } + + +def user_default(): + return {"content": "Please tell me how to plan a trip to New York"} + + +def user_images(): + return {"content": "<|image|><|image|>What do these images depict?"} + + +def user_interleaved_images(): + return {"content": "<|image|>Describe the image in one sentence.<|image|>Write a haiku about these images"} diff --git a/llama_stack/models/llama/llama3/test_system_prompts.py b/llama_stack/models/llama/llama3/test_system_prompts.py new file mode 100644 index 000000000..b47b1ff2d --- /dev/null +++ b/llama_stack/models/llama/llama3/test_system_prompts.py @@ -0,0 +1,199 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. 
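+
+# Run note (illustrative): these are plain unittest.TestCase checks, so they can be run
+# with the standard runner, e.g.:
+#
+#     python -m unittest llama_stack.models.llama.llama3.test_system_prompts
+#
+# (pytest will also collect them, since it understands unittest-style tests.)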
+ +import textwrap +import unittest +from datetime import datetime + +from .prompt_templates import ( + BuiltinToolGenerator, + FunctionTagCustomToolGenerator, + JsonCustomToolGenerator, + PythonListCustomToolGenerator, + SystemDefaultGenerator, +) + + +class PromptTemplateTests(unittest.TestCase): + def check_generator_output(self, generator, expected_text): + example = generator.data_examples()[0] + + pt = generator.gen(example) + text = pt.render() + # print(text) # debugging + assert text == expected_text, f"Expected:\n{expected_text}\nActual:\n{text}" + + def test_system_default(self): + generator = SystemDefaultGenerator() + today = datetime.now().strftime("%d %B %Y") + expected_text = f"Cutting Knowledge Date: December 2023\nToday Date: {today}" + self.check_generator_output(generator, expected_text) + + def test_system_builtin_only(self): + generator = BuiltinToolGenerator() + expected_text = textwrap.dedent( + """ + Environment: ipython + Tools: brave_search, wolfram_alpha + """ + ) + self.check_generator_output(generator, expected_text.strip("\n")) + + def test_system_custom_only(self): + self.maxDiff = None + generator = JsonCustomToolGenerator() + expected_text = textwrap.dedent( + """ + Answer the user's question by making use of the following functions if needed. + If none of the function can be used, please say so. + Here is a list of functions in JSON format: + { + "type": "function", + "function": { + "name": "trending_songs", + "description": "Returns the trending songs on a Music site", + "parameters": { + "type": "object", + "properties": [ + { + "n": { + "type": "object", + "description": "The number of songs to return" + } + }, + { + "genre": { + "type": "object", + "description": "The genre of the songs to return" + } + } + ], + "required": ["n"] + } + } + } + + Return function calls in JSON format. + """ + ) + self.check_generator_output(generator, expected_text.strip("\n")) + + def test_system_custom_function_tag(self): + self.maxDiff = None + generator = FunctionTagCustomToolGenerator() + expected_text = textwrap.dedent( + """ + You have access to the following functions: + + Use the function 'trending_songs' to 'Returns the trending songs on a Music site': + {"name": "trending_songs", "description": "Returns the trending songs on a Music site", "parameters": {"genre": {"description": "The genre of the songs to return", "param_type": "str", "required": false}, "n": {"description": "The number of songs to return", "param_type": "int", "required": true}}} + + Think very carefully before calling functions. + If you choose to call a function ONLY reply in the following format with no prefix or suffix: + + {"example_name": "example_value"} + + Reminder: + - If looking for real time information use relevant functions before falling back to brave_search + - Function calls MUST follow the specified format, start with + - Required parameters MUST be specified + - Only call one function at a time + - Put the entire function call reply on one line + """ + ) + self.check_generator_output(generator, expected_text.strip("\n")) + + def test_llama_3_2_system_zero_shot(self): + generator = PythonListCustomToolGenerator() + expected_text = textwrap.dedent( + """ + You are an expert in composing functions. You are given a question and a set of possible functions. + Based on the question, you will need to make one or more function/tool calls to achieve the purpose. + If none of the function can be used, point it out. 
If the given question lacks the parameters required by the function, + also point it out. You should only return the function call in tools call sections. + + If you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)] + You SHOULD NOT include any other text in the response. + + Here is a list of functions in JSON format that you can invoke. + + [ + { + "name": "get_weather", + "description": "Get weather info for places", + "parameters": { + "type": "dict", + "required": ["city"], + "properties": { + "city": { + "type": "string", + "description": "The name of the city to get the weather for" + }, + "metric": { + "type": "string", + "description": "The metric for weather. Options are: celsius, fahrenheit", + "default": "celsius" + } + } + } + } + ] + """ + ) + self.check_generator_output(generator, expected_text.strip("\n")) + + def test_llama_3_2_provided_system_prompt(self): + generator = PythonListCustomToolGenerator() + expected_text = textwrap.dedent( + """ + Overriding message. + + If you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)] + You SHOULD NOT include any other text in the response. + + Here is a list of functions in JSON format that you can invoke. + + [ + { + "name": "get_weather", + "description": "Get weather info for places", + "parameters": { + "type": "dict", + "required": ["city"], + "properties": { + "city": { + "type": "string", + "description": "The name of the city to get the weather for" + }, + "metric": { + "type": "string", + "description": "The metric for weather. Options are: celsius, fahrenheit", + "default": "celsius" + } + } + } + } + ]""" + ) + user_system_prompt = textwrap.dedent( + """ + Overriding message. + + {{ function_description }} + """ + ) + example = generator.data_examples()[0] + + pt = generator.gen(example, user_system_prompt) + text = pt.render() + assert text == expected_text, f"Expected:\n{expected_text}\nActual:\n{text}" diff --git a/llama_stack/models/llama/llama3_1/__init__.py b/llama_stack/models/llama/llama3_1/__init__.py new file mode 100644 index 000000000..38ee47d66 --- /dev/null +++ b/llama_stack/models/llama/llama3_1/__init__.py @@ -0,0 +1,12 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. diff --git a/llama_stack/models/llama/llama3_1/prompts.py b/llama_stack/models/llama/llama3_1/prompts.py new file mode 100644 index 000000000..edbce3bc0 --- /dev/null +++ b/llama_stack/models/llama/llama3_1/prompts.py @@ -0,0 +1,259 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. 
+# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. + +import textwrap +from typing import List + +from llama_models.datatypes import ( + BuiltinTool, + RawMessage, + StopReason, + ToolCall, + ToolPromptFormat, +) + +from ..prompt_format import ( + # llama3_1_e2e_tool_call_dialog, + TextCompletionContent, + UseCase, + llama3_1_builtin_tool_call_dialog, + llama3_1_custom_tool_call_dialog, +) + + +def wolfram_alpha_response(): + return textwrap.dedent( + """ + { + "queryresult": { + "success": true, + "inputstring": "100th decimal of pi", + "pods": [ + { + "title": "Input interpretation", + "subpods": [ + { + "title": "", + "plaintext": "100th digit | \u03c0" + } + ] + }, + { + "title": "Nearby digits", + "subpods": [ + { + "title": "", + "plaintext": "...86208998628034825342117067982148086513282306647093..." + } + ] + }, + { + "title": "Result", + "primary": true, + "subpods": [ + { + "title": "", + "plaintext": "7" + } + ] + } + ] + } + } + """ + ) + + +def usecases() -> List[UseCase | str]: + return [ + textwrap.dedent( + """ + # Llama 3.1 - Prompt Formats + ## Tokens + Here is a list of special tokens that are supported by Llama 3.1: + - `<|begin_of_text|>`: Specifies the start of the prompt + - `<|end_of_text|>`: Model will cease to generate more tokens. This token is generated only by the base models. + - `<|finetune_right_pad_id|>`: This token is used for padding text sequences to the same length in a batch. + - `<|start_header_id|>` and `<|end_header_id|>`: These tokens enclose the role for a particular message. The possible roles are: [system, user, assistant and tool] + - `<|eom_id|>`: End of message. A message represents a possible stopping point for execution where the model can inform the executor that a tool call needs to be made. This is used for multi-step interactions between the model and any available tools. This token is emitted by the model when the Environment: ipython instruction is used in the system prompt, or if the model calls for a built-in tool. + - `<|eot_id|>`: End of turn. Represents when the model has determined that it has finished interacting with the user message that initiated its response. This is used in two scenarios: + - at the end of a direct interaction between the model and the user + - at the end of multiple interactions between the model and any available tools + This token signals to the executor that the model has finished generating a response. + - `<|python_tag|>`: Is a special tag used in the model's response to signify a tool call. + """ + ), + textwrap.dedent( + """ + There are 4 different roles that are supported by Llama 3.1 + - `system`: Sets the context in which to interact with the AI model. It typically includes rules, guidelines, or necessary information that helps the model respond effectively. + - `user`: Represents the human interacting with the model. It includes the inputs, commands, and questions to the model. + - `tool`: A new role introduced in Llama 3.1. This role is used to mark messages with the output of a tool call when sent back to the model from the executor. (The actual token used by the model for this role is "ipython".) + - `assistant`: Represents the response generated by the AI model based on the context provided in the `system`, `tool` and `user` prompts. 
+ """ + ), + UseCase( + title="Llama 3.1 Base Model", + description="Text completion for Llama 3.1 base model uses this format.", + dialogs=[TextCompletionContent(content="Color of sky is blue but sometimes can also be")], + notes="Note start special tag", + ), + "## Llama 3.1 Instruct Model", + UseCase( + title="User and assistant conversation", + description="Here is a regular multi-turn user assistant conversation and how its formatted.", + dialogs=[ + [ + RawMessage(role="system", content="You are a helpful assistant"), + RawMessage( + role="user", + content="Answer who are you in the form of jeopardy?", + ), + ] + ], + notes="", + ), + "## Tool Calling Formats", + textwrap.dedent( + """ + The three built-in tools (brave_search, wolfram_alpha, and code interpreter) can be turned on using the system prompt: + - Brave Search: Tool call to perform web searches. + - Wolfram Alpha: Tool call to perform complex mathematical calculations. + - Code Interpreter: Enables the model to output python code. + """ + ), + UseCase( + title="Builtin Tool Calling", + description=textwrap.dedent( + """ + Here is an example of a conversation using brave search + """ + ), + dialogs=[llama3_1_builtin_tool_call_dialog()], + notes=textwrap.dedent( + """ + - Just including Environment: ipython turns on code interpreter; therefore, you don't need to specify code interpretation on the Tools: line. The model can generate python code which is interpreted by the executor, with the result provided back to the model. + - The message body of the assistant response starts with a special tag <|python_tag|> + - As alluded to above, in such an environment, the model can generate <|eom_id|> instead of just the standard <|eot_id|> . The latter indicates the turn is finished, while the former indicates continued multi-step reasoning. That is, the model is expecting a continuation message with the output of the tool call. + - The model tool call response is of the form `tool.call(query="...")` wher tool is `brave_search` or `wolfram_alpha` + """ + ), + ), + UseCase( + title="Builtin Code Interpreter", + description="Here is an actual example of model responding with code", + dialogs=[ + [ + RawMessage(role="system", content="Environment: ipython"), + RawMessage( + role="user", + content="Write code to check if number is prime, use that to see if the number 7 is prime", + ), + ], + ], + notes=textwrap.dedent( + """ + - Model starts with <|python_tag|> and continues writing python code that it needs to be executed + - No explicit mention of code_interpreter in system prompt. `Environment: ipython` implicitly enables it. + """ + ), + ), + UseCase( + title="Built-in tools full interaction", + description="Here is a full interaction with the built-in tools including the tool response and the final assistant response.", + dialogs=[ + [ + RawMessage( + role="system", + content="Environment: ipython\nTools: brave_search, wolfram_alpha\n", + ), + RawMessage(role="user", content="What is the 100th decimal of pi?"), + RawMessage( + role="assistant", + content="", + stop_reason=StopReason.end_of_message, + tool_calls=[ + ToolCall( + call_id="tool_call_id", + tool_name=BuiltinTool.wolfram_alpha, + arguments={"query": "100th decimal of pi"}, + ) + ], + ), + RawMessage( + role="tool", + content=wolfram_alpha_response(), + ), + ], + ], + notes=textwrap.dedent( + """ + - Note the `<|python_tag|>` in the assistant response. + - Role is `tool` for the wolfram alpha response that is passed back to the model. 
+ - Final message from assistant has <|eot_id|> tag. + """ + ), + ), + "## Zero shot tool calling", + UseCase( + title="JSON based tool calling", + description=textwrap.dedent( + """ + Llama models can now output custom tool calls from a single message to allow easier tool calling. + The following prompts provide an example of how custom tools can be called from the output of the model. + It's important to note that the model itself does not execute the calls; it provides structured output to facilitate calling by an executor. + """ + ), + dialogs=[llama3_1_custom_tool_call_dialog()], + notes=textwrap.dedent( + """ + - JSON format for providing tools needs name, description and parameters + - Model responds with `<|python_tag|>` and `<|eom_id|>` as `Environment: ipython` was in the system prompt + - Instructions for tools added as a user message + - Only single tool calls are supported as of now + """ + ), + ), + # FIXME: This is not working yet as expected + # UseCase( + # title="E2E tool call example", + # description=textwrap.dedent( + # """ + # Here is an example showing the whole multi-step turn by taking custom tool outputs and passing back to the model. + # """ + # ), + # dialogs=[ + # llama3_1_e2e_tool_call_dialog( + # tool_prompt_format=ToolPromptFormat.function_tag + # ) + # ], + # notes="", + # ), + "## Example of a user defined tool calling", + UseCase( + title="`` based tool calling", + description=textwrap.dedent( + """ + Here is an example of how you could also write custom instructions for model to do zero shot tool calling. + In this example, we define a custom tool calling format using the `` tag. + """ + ), + dialogs=[llama3_1_custom_tool_call_dialog(ToolPromptFormat.function_tag)], + notes=textwrap.dedent( + """ + - In this case, model does NOT respond with `<|python_tag|>` and ends with `<|eot_id|>` + - Instructions for tools added as a user message + """ + ), + ), + ] diff --git a/llama_stack/models/llama/llama3_2/__init__.py b/llama_stack/models/llama/llama3_2/__init__.py new file mode 100644 index 000000000..38ee47d66 --- /dev/null +++ b/llama_stack/models/llama/llama3_2/__init__.py @@ -0,0 +1,12 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. diff --git a/llama_stack/models/llama/llama3_2/prompts_text.py b/llama_stack/models/llama/llama3_2/prompts_text.py new file mode 100644 index 000000000..29557f4be --- /dev/null +++ b/llama_stack/models/llama/llama3_2/prompts_text.py @@ -0,0 +1,235 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. 
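+
+# Format sketch (illustrative): the Llama 3.2 1B/3B instruct models documented in this
+# module emit zero-shot tool calls as a pythonic list rather than the JSON or
+# function-tag formats used with Llama 3.1. For the get_weather example defined below,
+# an assistant turn is expected to look roughly like:
+#
+#     [get_weather(city="San Francisco", metric="celsius")]
+#
+# and several calls may be returned in the same list.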
+import json +import textwrap + +from llama_models.datatypes import ( + RawMessage, + StopReason, + ToolCall, + ToolPromptFormat, +) + +from ..prompt_format import ( + TextCompletionContent, + UseCase, + llama3_1_builtin_code_interpreter_dialog, +) + + +def user_tool_call(): + content = textwrap.dedent( + """ + Questions: Can you retrieve the details for the user with the ID 7890, who has black as their special request? + Here is a list of functions in JSON format that you can invoke: + [ + { + "name": "get_user_info", + "description": "Retrieve details for a specific user by their unique identifier. Note that the provided function is in Python 3 syntax.", + "parameters": { + "type": "dict", + "required": [ + "user_id" + ], + "properties": { + "user_id": { + "type": "integer", + "description": "The unique identifier of the user. It is used to fetch the specific user details from the database." + }, + "special": { + "type": "string", + "description": "Any special information or parameters that need to be considered while fetching user details.", + "default": "none" + } + } + } + } + ] + + Should you decide to return the function call(s),Put it in the format of [func1(params_name=params_value, params_name2=params_value2...), func2(params)] + + NO other text MUST be included. + """ + ) + return content.strip() + + +def system_tool_call(): + content = textwrap.dedent( + """ + You are an expert in composing functions. You are given a question and a set of possible functions. + Based on the question, you will need to make one or more function/tool calls to achieve the purpose. + If none of the function can be used, point it out. If the given question lacks the parameters required by the function, + also point it out. You should only return the function call in tools call sections. + + If you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)] + You SHOULD NOT include any other text in the response. + + Here is a list of functions in JSON format that you can invoke. + + [ + { + "name": "get_weather", + "description": "Get weather info for places", + "parameters": { + "type": "dict", + "required": [ + "city" + ], + "properties": { + "city": { + "type": "string", + "description": "The name of the city to get the weather for" + }, + "metric": { + "type": "string", + "description": "The metric for weather. Options are: celsius, fahrenheit", + "default": "celsius" + } + } + } + } + ] + """ + ) + return content.strip() + + +def usecases(): + return [ + UseCase( + title="User and assistant conversation", + description="Here is a regular multi-turn user assistant conversation and how its formatted.", + dialogs=[ + [ + RawMessage(role="system", content="You are a helpful assistant"), + RawMessage(role="user", content="Who are you?"), + ] + ], + notes="This format is unchanged from Llama3.1", + ), + UseCase( + title="Zero shot function calling", + description=textwrap.dedent( + """ + For Llama3.2 1B and 3B instruct models, we are introducing a new format for zero shot function calling. + This new format is designed to be more flexible and powerful than the previous format. + All available functions can be provided in the system message. A key difference is in the format of how the assistant responds with function calls. + It is pythonic in the form of `[func1(params_name=params_value, params_name2=params_value2...), func2(params)]` instead of the `json` or `` tag that were defined in Llama3.1. 
+ Here is an example for the same, + """ + ), + dialogs=[ + # Zero shot tool calls as system message + [ + RawMessage(role="system", content=system_tool_call()), + RawMessage(role="user", content="What is the weather in SF and Seattle?"), + ], + ], + notes=textwrap.dedent( + """ + - The output supports multiple tool calls natively + - JSON format for defining the functions in the system prompt is similar to Llama3.1 + """ + ), + ), + UseCase( + title="Zero shot function calling with user message", + description=textwrap.dedent( + """ + While the default is to provide all function calls in a system message, in Llama3.2 text models you can also provide information for all the available tools in a user message. + """ + ), + dialogs=[ + # Zero shot tool call as user message + [ + RawMessage(role="user", content=user_tool_call()), + ], + ], + notes=textwrap.dedent( + """ + - The tool call format for the model is the same whether your function calls are provided in the system or user message. + - While builtin tool calls end with a <|eom_id|>, notice the <|eot_id|> for zero shot tool calls. + """ + ), + ), + UseCase( + title="Code Interpreter", + description=textwrap.dedent( + """ + Code Interpreter continues to work in 3.2 text models similar to Llama 3.1 model family. + Here is an example, + """ + ), + dialogs=[llama3_1_builtin_code_interpreter_dialog()], + notes=textwrap.dedent( + """ + - Note `Environment: ipython` in the system prompt. + - Note that the response starts with `<|python_tag|>` and ends with `<|eom_id|>` + """ + ), + ), + UseCase( + title="Zero shot function calling E2E format", + description=textwrap.dedent( + """ + Here is an example of the e2e cycle of tool calls with the model in a muti-step way. + """ + ), + dialogs=[ + [ + RawMessage(role="system", content=system_tool_call()), + RawMessage(role="user", content="What is the weather in SF?"), + RawMessage( + role="assistant", + content="", + stop_reason=StopReason.end_of_turn, + tool_calls=[ + ToolCall( + call_id="cc", + tool_name="get_weather", + arguments={ + "city": "San Francisco", + "metric": "celsius", + }, + ) + ], + ), + RawMessage( + role="tool", + content=json.dumps("25 C"), + ), + ], + ], + notes=textwrap.dedent( + """ + - The output of the function call is provided back to the model as a tool response ( in json format ). + - Notice `<|start_header_id|>ipython<|end_header_id|>` as the header message preceding the tool response. + - The model finally summarizes the information from the tool response and returns the result to the user. + """ + ), + tool_prompt_format=ToolPromptFormat.python_list, + ), + UseCase( + title="Prompt format for base models", + description=textwrap.dedent( + """ + For base models (Llama3.2-1B and Llama3.2-3B), the prompt format for a simple completion is as follows + """ + ), + dialogs=[ + TextCompletionContent(content="The color of the sky is blue but sometimes it can also be"), + ], + notes="Same as Llama3.1", + ), + ] diff --git a/llama_stack/models/llama/llama3_2/prompts_vision.py b/llama_stack/models/llama/llama3_2/prompts_vision.py new file mode 100644 index 000000000..c3cfe5e7b --- /dev/null +++ b/llama_stack/models/llama/llama3_2/prompts_vision.py @@ -0,0 +1,133 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. 
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# top-level folder for each specific model found within the models/ directory at
+# the top-level of this source tree.
+
+import textwrap
+from pathlib import Path
+
+from llama_models.datatypes import (
+    RawMediaItem,
+    RawMessage,
+    RawTextItem,
+)
+
+from ..prompt_format import (
+    TextCompletionContent,
+    UseCase,
+    llama3_1_builtin_tool_call_dialog,
+    # llama3_1_builtin_tool_call_with_image_dialog,
+    llama3_2_user_assistant_conversation,
+)
+
+
+def usecases():
+    this_dir = Path(__file__).parent.parent.resolve()
+    with open(this_dir / "scripts/resources/dog.jpg", "rb") as f:
+        img = f.read()
+
+    return [
+        llama3_2_user_assistant_conversation(),
+        UseCase(
+            title="User and assistant conversation with Images",
+            description="This example shows how to pass an image to the model as part of the messages.",
+            dialogs=[
+                [
+                    RawMessage(
+                        role="user",
+                        content=[
+                            RawMediaItem(data=img),
+                            RawTextItem(text="Describe this image in two sentences"),
+                        ],
+                    )
+                ],
+            ],
+            notes=textwrap.dedent(
+                """
+                - The `<|image|>` tag is used to indicate the presence of the image
+                - The model isn't an early fusion model so it doesn't actually translate an image into several tokens. Instead the cross-attention layers take input "on the side" from a vision encoder
+                ![Image](mm-model.png)
+                - It's important to position the <|image|> tag appropriately in the prompt. The image will only attend to the subsequent text tokens
+                - The <|image|> tag is part of the user message body, implying that it should only come after the header `<|start_header_id|>{role}<|end_header_id|>` in the message body
+                - We recommend using a single image in one prompt
+                """
+            ),
+        ),
+        UseCase(
+            title="Builtin and Zero Shot Tool Calling",
+            description=textwrap.dedent(
+                """
+                Llama3.2 vision models follow the same tool calling format as Llama3.1 models when inputs are text only.
+                Use `Environment: ipython` to enable tools.
+                Add `Tools: {{tool_name1}},{{tool_name2}}` for each of the builtin tools.
+                The same builtin tools as Llama3.1 are available,
+                - code_interpreter (for executing python code)
+                - brave_search (to search the web)
+                - wolfram_alpha (for querying wolfram alpha for mathematical questions)
+                """,
+            ),
+            dialogs=[llama3_1_builtin_tool_call_dialog()],
+            notes=textwrap.dedent(
+                """
+                - Note the `<|python_tag|>` before `brave_search` function call.
+                - The `<|eom_id|>` tag is used to indicate the end of the message.
+                - Similar to Llama3.1, code_interpreter is not explicitly mentioned but is enabled via `Environment: ipython`.
+                - Tool Calling does NOT work with images in the prompt as of now.
+                """
+            ),
+        ),
+        # UseCase(
+        #     title="Tool Calling for vision models",
+        #     description=textwrap.dedent(
+        #         """
+        #         While Llama3.2 vision models follow the same tool calling format as Llama3.1 models when inputs are text only,
+        #         they are not able to do tool calling when prompt contains image inputs (along with text).
+        #         The recommended way would be to separate out the image understanding from the tool calling in successive prompts.
+        #         Here is an example of how that could be done,
+        #         """,
+        #     ),
+        #     dialogs=[llama3_1_builtin_tool_call_with_image_dialog()],
+        #     notes=textwrap.dedent(
+        #         """
+        #         - Instead of a single prompt (image understanding + tool call), we split into two prompts to achieve the same result.
+ # """ + # ), + # ), + UseCase( + title="Prompt format for base models", + description=textwrap.dedent( + """ + For base models (Llama3.2-11B-Vision and Llama3.2-90B-Vision), the prompt format for a simple completion is as follows + """ + ), + dialogs=[ + TextCompletionContent(content="The color of the sky is blue but sometimes it can also be"), + ], + notes="- Same as Llama3.1", + ), + UseCase( + title="Prompt format for base models with Image", + description=textwrap.dedent( + """ + For base models (Llama3.2-11B-Vision and Llama3.2-90B-Vision), here is an example of how the text completion format looks with an image, + """ + ), + dialogs=[ + TextCompletionContent( + content=[ + RawMediaItem(data=img), + RawTextItem(text="If I had to write a haiku for this one"), + ] + ), + ], + notes="- Note the placement of the special tags <|begin_of_text|> and <|image|>", + ), + ] diff --git a/llama_stack/models/llama/llama3_3/prompts.py b/llama_stack/models/llama/llama3_3/prompts.py new file mode 100644 index 000000000..14fd86853 --- /dev/null +++ b/llama_stack/models/llama/llama3_3/prompts.py @@ -0,0 +1,258 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. + +import textwrap +from typing import List + +from llama_models.datatypes import ( + BuiltinTool, + RawMessage, + StopReason, + ToolCall, + ToolPromptFormat, +) + +from ..prompt_format import ( + # llama3_1_e2e_tool_call_dialog, + TextCompletionContent, + UseCase, + llama3_1_builtin_tool_call_dialog, + llama3_1_custom_tool_call_dialog, +) + + +def wolfram_alpha_response(): + return textwrap.dedent( + """ + { + "queryresult": { + "success": true, + "inputstring": "100th decimal of pi", + "pods": [ + { + "title": "Input interpretation", + "subpods": [ + { + "title": "", + "plaintext": "100th digit | \u03c0" + } + ] + }, + { + "title": "Nearby digits", + "subpods": [ + { + "title": "", + "plaintext": "...86208998628034825342117067982148086513282306647093..." + } + ] + }, + { + "title": "Result", + "primary": true, + "subpods": [ + { + "title": "", + "plaintext": "7" + } + ] + } + ] + } + } + """ + ) + + +def usecases() -> List[UseCase | str]: + return [ + textwrap.dedent( + """ + # Llama 3.1 - Prompt Formats + ## Tokens + Here is a list of special tokens that are supported by Llama 3.1: + - `<|begin_of_text|>`: Specifies the start of the prompt + - `<|end_of_text|>`: Model will cease to generate more tokens. This token is generated only by the base models. + - `<|finetune_right_pad_id|>`: This token is used for padding text sequences to the same length in a batch. + - `<|start_header_id|>` and `<|end_header_id|>`: These tokens enclose the role for a particular message. The possible roles are: [system, user, assistant and tool] + - `<|eom_id|>`: End of message. A message represents a possible stopping point for execution where the model can inform the executor that a tool call needs to be made. This is used for multi-step interactions between the model and any available tools. 
This token is emitted by the model when the Environment: ipython instruction is used in the system prompt, or if the model calls for a built-in tool. + - `<|eot_id|>`: End of turn. Represents when the model has determined that it has finished interacting with the user message that initiated its response. This is used in two scenarios: + - at the end of a direct interaction between the model and the user + - at the end of multiple interactions between the model and any available tools + This token signals to the executor that the model has finished generating a response. + - `<|python_tag|>`: Is a special tag used in the model's response to signify a tool call. + """ + ), + textwrap.dedent( + """ + There are 4 different roles that are supported by Llama 3.1 + - `system`: Sets the context in which to interact with the AI model. It typically includes rules, guidelines, or necessary information that helps the model respond effectively. + - `user`: Represents the human interacting with the model. It includes the inputs, commands, and questions to the model. + - `tool`: A new role introduced in Llama 3.1. This role is used to mark messages with the output of a tool call when sent back to the model from the executor. (The actual token used by the model for this role is "ipython".) + - `assistant`: Represents the response generated by the AI model based on the context provided in the `system`, `tool` and `user` prompts. + """ + ), + UseCase( + title="Llama 3.1 Base Model", + description="Text completion for Llama 3.1 base model uses this format.", + dialogs=[TextCompletionContent(content="Color of sky is blue but sometimes can also be")], + notes="Note start special tag", + ), + "## Llama 3.1 Instruct Model", + UseCase( + title="User and assistant conversation", + description="Here is a regular multi-turn user assistant conversation and how its formatted.", + dialogs=[ + [ + RawMessage(role="system", content="You are a helpful assistant"), + RawMessage( + role="user", + content="Answer who are you in the form of jeopardy?", + ), + ] + ], + notes="", + ), + "## Tool Calling Formats", + textwrap.dedent( + """ + The three built-in tools (brave_search, wolfram_alpha, and code interpreter) can be turned on using the system prompt: + - Brave Search: Tool call to perform web searches. + - Wolfram Alpha: Tool call to perform complex mathematical calculations. + - Code Interpreter: Enables the model to output python code. + """ + ), + UseCase( + title="Builtin Tool Calling", + description=textwrap.dedent( + """ + Here is an example of a conversation using brave search + """ + ), + dialogs=[llama3_1_builtin_tool_call_dialog()], + notes=textwrap.dedent( + """ + - Just including Environment: ipython turns on code interpreter; therefore, you don't need to specify code interpretation on the Tools: line. The model can generate python code which is interpreted by the executor, with the result provided back to the model. + - The message body of the assistant response starts with a special tag <|python_tag|> + - As alluded to above, in such an environment, the model can generate <|eom_id|> instead of just the standard <|eot_id|> . The latter indicates the turn is finished, while the former indicates continued multi-step reasoning. That is, the model is expecting a continuation message with the output of the tool call. 
+ - The model tool call response is of the form `tool.call(query="...")` wher tool is `brave_search` or `wolfram_alpha` + """ + ), + ), + UseCase( + title="Builtin Code Interpreter", + description="Here is an actual example of model responding with code", + dialogs=[ + [ + RawMessage(role="system", content="Environment: ipython"), + RawMessage( + role="user", + content="Write code to check if number is prime, use that to see if the number 7 is prime", + ), + ], + ], + notes=textwrap.dedent( + """ + - Model starts with <|python_tag|> and continues writing python code that it needs to be executed + - No explicit mention of code_interpreter in system prompt. `Environment: ipython` implicitly enables it. + """ + ), + ), + UseCase( + title="Built-in tools full interaction", + description="Here is a full interaction with the built-in tools including the tool response and the final assistant response.", + dialogs=[ + [ + RawMessage( + role="system", + content="Environment: ipython\nTools: brave_search, wolfram_alpha\n", + ), + RawMessage(role="user", content="What is the 100th decimal of pi?"), + RawMessage( + content="", + stop_reason=StopReason.end_of_message, + tool_calls=[ + ToolCall( + call_id="tool_call_id", + tool_name=BuiltinTool.wolfram_alpha, + arguments={"query": "100th decimal of pi"}, + ) + ], + ), + RawMessage( + role="tool", + content=wolfram_alpha_response(), + ), + ], + ], + notes=textwrap.dedent( + """ + - Note the `<|python_tag|>` in the assistant response. + - Role is `tool` for the wolfram alpha response that is passed back to the model. + - Final message from assistant has <|eot_id|> tag. + """ + ), + ), + "## Zero shot tool calling", + UseCase( + title="JSON based tool calling", + description=textwrap.dedent( + """ + Llama models can now output custom tool calls from a single message to allow easier tool calling. + The following prompts provide an example of how custom tools can be called from the output of the model. + It's important to note that the model itself does not execute the calls; it provides structured output to facilitate calling by an executor. + """ + ), + dialogs=[llama3_1_custom_tool_call_dialog()], + notes=textwrap.dedent( + """ + - JSON format for providing tools needs name, description and parameters + - Model responds with `<|python_tag|>` and `<|eom_id|>` as `Environment: ipython` was in the system prompt + - Instructions for tools added as a user message + - Only single tool calls are supported as of now + """ + ), + ), + # FIXME: This is not working yet as expected + # UseCase( + # title="E2E tool call example", + # description=textwrap.dedent( + # """ + # Here is an example showing the whole multi-step turn by taking custom tool outputs and passing back to the model. + # """ + # ), + # dialogs=[ + # llama3_1_e2e_tool_call_dialog( + # tool_prompt_format=ToolPromptFormat.function_tag + # ) + # ], + # notes="", + # ), + "## Example of a user defined tool calling", + UseCase( + title="`` based tool calling", + description=textwrap.dedent( + """ + Here is an example of how you could also write custom instructions for model to do zero shot tool calling. + In this example, we define a custom tool calling format using the `` tag. 
+ """ + ), + dialogs=[llama3_1_custom_tool_call_dialog(ToolPromptFormat.function_tag)], + notes=textwrap.dedent( + """ + - In this case, model does NOT respond with `<|python_tag|>` and ends with `<|eot_id|>` + - Instructions for tools added as a user message + """ + ), + ), + ] diff --git a/llama_stack/models/llama/prompt_format.py b/llama_stack/models/llama/prompt_format.py new file mode 100644 index 000000000..f42620d57 --- /dev/null +++ b/llama_stack/models/llama/prompt_format.py @@ -0,0 +1,204 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. + +import json +import textwrap +from pathlib import Path +from typing import List + +from llama_models.datatypes import ( + RawContent, + RawMediaItem, + RawMessage, + RawTextItem, + StopReason, + ToolCall, + ToolPromptFormat, +) +from pydantic import BaseModel, Field + +from .llama3.interface import LLama31Interface +from .llama3.template_data import ( + system_message_builtin_code_only, + system_message_builtin_tools_only, + system_message_custom_tools_only, +) + + +class TextCompletionContent(BaseModel): + content: RawContent = "" + + +class UseCase(BaseModel): + title: str = "" + description: str = "" + dialogs: List[List[RawMessage] | TextCompletionContent | str] = Field(default_factory=list) + notes: str = "" + tool_prompt_format: ToolPromptFormat = ToolPromptFormat.json + + def md_format(self): + section = textwrap.dedent( + """ + ## {title} + + {description} + + {dialogs_text} + {notes} + + """ + ) + return section.lstrip() + + def dialogs_to_text(self, generator) -> str: + def _code_block(text): + return f"```\n{text}\n```" + + text = "" + for dialog in self.dialogs: + if isinstance(dialog, str): + text += dialog + text += "\n\n" + continue + + elif isinstance(dialog, TextCompletionContent): + input_tokens, output_tokens = generator.text_completion_raw( + dialog.content, + max_gen_len=64, + temperature=0.1, + top_p=0.95, + ) + else: + input_tokens, output_tokens = generator.chat_completion_raw( + dialog, + max_gen_len=512, + temperature=0.0, + top_p=0.95, + tool_prompt_format=self.tool_prompt_format, + ) + text += "##### Input Prompt Format\n" + + # FIXME: This is added to undo the hack in chat_formatter where + # vision tokens are replaced with 128256. 
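+            # (128256 equals LLAMA3_VOCAB_SIZE in sku_list.py, i.e. an id just past the
+            # text vocabulary; mapping that sentinel back to the formatter's vision token
+            # lets the decoded prompt below display the image marker faithfully.)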
+ input_tokens = [generator.formatter.vision_token if t == 128256 else t for t in input_tokens] + + text += _code_block(generator.tokenizer.decode(input_tokens)) + # TODO: Figure out if "↵" needs to be added for newlines or end or some indication + text += "\n\n" + text += "##### Model Response Format\n" + text += _code_block(generator.tokenizer.decode(output_tokens)) + text += "\n\n" + + return text + + def to_text(self, generator): + section = self.md_format() + dialogs_text = self.dialogs_to_text(generator) + notes = f"##### Notes\n{self.notes}" if self.notes else "" + section = section.format( + title=self.title, + description=self.description, + dialogs_text=dialogs_text, + notes=notes, + ) + return section + + +def llama3_1_builtin_tool_call_dialog(tool_prompt_format=ToolPromptFormat.json): + interface = LLama31Interface(tool_prompt_format) + + messages = interface.system_messages(**system_message_builtin_tools_only()) + messages += interface.user_message(content="Search the web for the latest price of 1oz gold?") + + return messages + + +def llama3_1_builtin_code_interpreter_dialog(tool_prompt_format=ToolPromptFormat.json): + interface = LLama31Interface(tool_prompt_format) + + messages = interface.system_messages(**system_message_builtin_code_only()) + messages += interface.user_message( + content="Write code to check if number is prime. Use it to verify if number 7 is prime" + ) + + return messages + + +def llama3_1_builtin_tool_call_with_image_dialog( + tool_prompt_format=ToolPromptFormat.json, +): + this_dir = Path(__file__).parent + with open(this_dir / "llama3/dog.jpg", "rb") as f: + img = f.read() + + interface = LLama31Interface(tool_prompt_format) + + messages = interface.system_messages(**system_message_builtin_tools_only()) + messages += interface.user_message(content=[RawMediaItem(data=img), RawTextItem(text="What is this dog breed?")]) + messages += interface.assistant_response_messages( + "Based on the description of the dog in the image, it appears to be a small breed dog, possibly a terrier mix", + StopReason.end_of_turn, + ) + messages += interface.user_message("Search the web for some food recommendations for the indentified breed") + return messages + + +def llama3_1_custom_tool_call_dialog(tool_prompt_format=ToolPromptFormat.json): + interface = LLama31Interface(tool_prompt_format) + + messages = interface.system_messages(**system_message_custom_tools_only()) + messages += interface.user_message(content="Use tools to get latest trending songs") + return messages + + +def llama3_1_e2e_tool_call_dialog(tool_prompt_format=ToolPromptFormat.json): + tool_response = json.dumps(["great song1", "awesome song2", "cool song3"]) + interface = LLama31Interface(tool_prompt_format) + + messages = interface.system_messages(**system_message_custom_tools_only()) + messages += interface.user_message(content="Use tools to get latest trending songs") + messages.append( + RawMessage( + role="assistant", + content="", + stop_reason=StopReason.end_of_message, + tool_calls=[ + ToolCall( + call_id="call_id", + tool_name="trending_songs", + arguments={"n": "10", "genre": "latest"}, + ) + ], + ), + ) + messages.append( + RawMessage( + role="assistant", + content=tool_response, + ) + ) + return messages + + +def llama3_2_user_assistant_conversation(): + return UseCase( + title="User and assistant conversation", + description="Here is a regular multi-turn user assistant conversation and how its formatted.", + dialogs=[ + [ + RawMessage(role="system", content="You are a helpful assistant"), + 
RawMessage(role="user", content="Who are you?"), + ] + ], + notes="This format is unchanged from Llama3.1", + ) diff --git a/llama_stack/models/llama/sku_list.py b/llama_stack/models/llama/sku_list.py new file mode 100644 index 000000000..6f4a5a885 --- /dev/null +++ b/llama_stack/models/llama/sku_list.py @@ -0,0 +1,1000 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. + +from dataclasses import dataclass +from functools import lru_cache +from typing import List, Optional + +from .datatypes import ( + CheckpointQuantizationFormat, + CoreModelId, + Model, + SamplingParams, + TopPSamplingStrategy, +) + +LLAMA2_VOCAB_SIZE = 32000 +LLAMA3_VOCAB_SIZE = 128256 + + +def resolve_model(descriptor: str) -> Optional[Model]: + for m in all_registered_models(): + if descriptor in (m.descriptor(), m.huggingface_repo): + return m + return None + + +def all_registered_models() -> List[Model]: + return ( + llama2_family() + llama3_family() + llama3_1_family() + llama3_2_family() + llama3_3_family() + safety_models() + ) + + +def recommended_sampling_params() -> SamplingParams: + return SamplingParams( + strategy=TopPSamplingStrategy( + temperature=1.0, + top_p=0.9, + ) + ) + + +def llama2_family() -> List[Model]: + return [ + *llama2_base_models(), + *llama2_instruct_models(), + ] + + +def llama3_family() -> List[Model]: + return [ + *llama3_base_models(), + *llama3_instruct_models(), + ] + + +def llama3_1_family() -> List[Model]: + return [ + *llama3_1_base_models(), + *llama3_1_instruct_models(), + ] + + +def llama3_2_family() -> List[Model]: + return [ + *llama3_2_base_models(), + *llama3_2_instruct_models(), + ] + + +def llama3_3_family() -> List[Model]: + return [ + *llama3_3_instruct_models(), + ] + + +def llama2_base_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama2_7b, + description="Llama 2 7b model", + huggingface_repo="meta-llama/Llama-2-7b", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 4096, + "n_layers": 32, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA2_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama2_13b, + description="Llama 2 13b model", + huggingface_repo="meta-llama/Llama-2-13b", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 5120, + "n_layers": 40, + "n_heads": 40, + "n_kv_heads": 8, + "vocab_size": LLAMA2_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama2_70b, + description="Llama 2 70b model", + huggingface_repo="meta-llama/Llama-2-70b", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 8192, + "n_layers": 80, + "n_heads": 64, + "n_kv_heads": 8, + "vocab_size": LLAMA2_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + 
"use_scaled_rope": False, + }, + pth_file_count=8, + ), + ] + + +def llama3_base_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama3_8b, + description="Llama 3 8b model", + huggingface_repo="meta-llama/Llama-3-8B", + arch_args={ + "dim": 4096, + "n_layers": 32, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 1024, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_70b, + description="Llama 3 70b model", + huggingface_repo="meta-llama/Llama-3-70B", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 8192, + "n_layers": 80, + "n_heads": 64, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + }, + pth_file_count=8, + ), + ] + + +def llama3_1_base_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama3_1_8b, + description="Llama 3.1 8b model", + huggingface_repo="meta-llama/Llama-3.1-8B", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 4096, + "n_layers": 32, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 1024, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_1_70b, + description="Llama 3.1 70b model", + huggingface_repo="meta-llama/Llama-3.1-70B", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 8192, + "n_layers": 80, + "n_heads": 64, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=8, + ), + Model( + core_model_id=CoreModelId.llama3_1_405b, + variant="bf16-mp8", + description="Llama 3.1 405b model (BF16 weights)", + huggingface_repo="meta-llama/Llama-3.1-405B", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 16384, + "n_layers": 126, + "n_heads": 128, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.2, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=8, + ), + Model( + core_model_id=CoreModelId.llama3_1_405b, + description="Llama 3.1 405b model (FP8 quantized)", + huggingface_repo="meta-llama/Llama-3.1-405B-FP8", + quantization_format=CheckpointQuantizationFormat.fp8_mixed, + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 16384, + "n_layers": 126, + "n_heads": 128, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.2, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=8, + ), + Model( + core_model_id=CoreModelId.llama3_1_405b, + variant="bf16-mp16", + description="Llama 3.1 405b model (BF16 weights for mp16)", + huggingface_repo="meta-llama/Llama-3.1-405B", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 16384, + "n_layers": 126, + "n_heads": 128, + "n_kv_heads": 16, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.2, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + 
pth_file_count=16, + ), + ] + + +def llama3_2_base_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama3_2_1b, + description="Llama 3.2 1b model", + huggingface_repo="meta-llama/Llama-3.2-1B", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 2048, + "n_layers": 16, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.5, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_2_3b, + description="Llama 3.2 3b model", + huggingface_repo="meta-llama/Llama-3.2-3B", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 3072, + "n_layers": 28, + "n_heads": 24, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.0, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_2_11b_vision, + description="Llama 3.2 11b vision model", + huggingface_repo="meta-llama/Llama-3.2-11B-Vision", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 4096, + "n_layers": 32, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 1024, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + "vision_chunk_size": 448, + "vision_max_num_chunks": 4, + "vision_num_cross_attention_layers": 8, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_2_90b_vision, + description="Llama 3.2 90b vision model", + huggingface_repo="meta-llama/Llama-3.2-90B-Vision", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 8192, + "n_layers": 80, + "n_heads": 64, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + "vision_chunk_size": 560, + "vision_max_num_chunks": 4, + "vision_num_cross_attention_layers": 20, + }, + pth_file_count=8, + ), + ] + + +def llama2_instruct_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama2_7b_chat, + description="Llama 2 7b chat model", + huggingface_repo="meta-llama/Llama-2-7b-chat", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 4096, + "n_layers": 32, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA2_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama2_13b_chat, + description="Llama 2 13b chat model", + huggingface_repo="meta-llama/Llama-2-13b-chat", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 5120, + "n_layers": 40, + "n_heads": 40, + "n_kv_heads": 8, + "vocab_size": LLAMA2_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama2_70b_chat, + description="Llama 2 70b chat model", + huggingface_repo="meta-llama/Llama-2-70b-chat", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 8192, + "n_layers": 80, + "n_heads": 64, + "n_kv_heads": 8, + "vocab_size": LLAMA2_VOCAB_SIZE, + 
"ffn_dim_multiplier": 1.3, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + }, + pth_file_count=8, + ), + ] + + +def llama3_instruct_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama3_8b_instruct, + description="Llama 3 8b instruct model", + huggingface_repo="meta-llama/Llama-3-8B-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 4096, + "n_layers": 32, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 1024, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_70b_instruct, + description="Llama 3 70b instruct model", + huggingface_repo="meta-llama/Llama-3-70B-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 8192, + "n_layers": 80, + "n_heads": 64, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + }, + pth_file_count=8, + ), + ] + + +def llama3_1_instruct_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama3_1_8b_instruct, + description="Llama 3.1 8b instruct model", + huggingface_repo="meta-llama/Llama-3.1-8B-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 4096, + "n_layers": 32, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 1024, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_1_70b_instruct, + description="Llama 3.1 70b instruct model", + huggingface_repo="meta-llama/Llama-3.1-70B-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 8192, + "n_layers": 80, + "n_heads": 64, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=8, + ), + Model( + core_model_id=CoreModelId.llama3_1_405b_instruct, + variant="bf16-mp8", + description="Llama 3.1 405b instruct model (BF16 weights)", + huggingface_repo="meta-llama/Llama-3.1-405B-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 16384, + "n_layers": 126, + "n_heads": 128, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.2, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=8, + ), + Model( + core_model_id=CoreModelId.llama3_1_405b_instruct, + description="Llama 3.1 405b instruct model (FP8 quantized)", + huggingface_repo="meta-llama/Llama-3.1-405B-Instruct-FP8", + quantization_format=CheckpointQuantizationFormat.fp8_mixed, + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 16384, + "n_layers": 126, + "n_heads": 128, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.2, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=8, + ), + Model( + core_model_id=CoreModelId.llama3_1_405b_instruct, + variant="bf16-mp16", + description="Llama 3.1 405b instruct model (BF16 weights for mp16)", + 
huggingface_repo="meta-llama/Llama-3.1-405B-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 16384, + "n_layers": 126, + "n_heads": 128, + "n_kv_heads": 16, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.2, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=16, + ), + ] + + +def arch_args_1b() -> dict: + return { + "dim": 2048, + "n_layers": 16, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.5, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + } + + +def arch_args_3b() -> dict: + return { + "dim": 3072, + "n_layers": 28, + "n_heads": 24, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.0, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + } + + +def llama3_2_quantized_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama3_2_1b_instruct, + variant="int4-qlora-eo8", + quantization_format=CheckpointQuantizationFormat.int4, + description="Llama 3.2 1b INT4 quantized LoRA", + huggingface_repo="meta-llama/Llama-3.2-1B-Instruct-QLORA_INT4_EO8", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + **arch_args_1b(), + "quantization_args": { + "group_size": 256, + }, + "lora_args": { + "rank": 16, + "scale": 2.0, + }, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_2_1b_instruct, + variant="int4-spinquant-eo8", + quantization_format=CheckpointQuantizationFormat.int4, + description="Llama 3.2 1b INT4 quantized SpinQuant", + huggingface_repo="meta-llama/Llama-3.2-1B-Instruct-SpinQuant_INT4_EO8", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + **arch_args_1b(), + "quantization_args": { + "group_size": 256, + }, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_2_3b_instruct, + variant="int4-qlora-eo8", + quantization_format=CheckpointQuantizationFormat.int4, + description="Llama 3.2 3b INT4 quantized LoRA", + huggingface_repo="meta-llama/Llama-3.2-3B-Instruct-QLORA_INT4_EO8", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + **arch_args_3b(), + "quantization_args": { + "group_size": 256, + }, + "lora_args": { + "rank": 16, + "scale": 2.0, + }, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_2_3b_instruct, + variant="int4-spinquant-eo8", + quantization_format=CheckpointQuantizationFormat.int4, + description="Llama 3.2 3b INT4 quantized SpinQuant", + huggingface_repo="meta-llama/Llama-3.2-3B-Instruct-SpinQuant_INT4_EO8", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + **arch_args_3b(), + "quantization_args": { + "group_size": 256, + }, + }, + pth_file_count=1, + ), + ] + + +def llama3_2_instruct_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama3_2_1b_instruct, + description="Llama 3.2 1b instruct model", + huggingface_repo="meta-llama/Llama-3.2-1B-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args=arch_args_1b(), + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_2_3b_instruct, + description="Llama 3.2 3b instruct model", + huggingface_repo="meta-llama/Llama-3.2-3B-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args=arch_args_3b(), + pth_file_count=1, + ), + *llama3_2_quantized_models(), + 
Model( + core_model_id=CoreModelId.llama3_2_11b_vision_instruct, + description="Llama 3.2 11b vision instruct model", + huggingface_repo="meta-llama/Llama-3.2-11B-Vision-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 4096, + "n_layers": 32, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 1024, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + "vision_chunk_size": 560, + "vision_max_num_chunks": 4, + "vision_num_cross_attention_layers": 8, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama3_2_90b_vision_instruct, + description="Llama 3.2 90b vision instruct model", + huggingface_repo="meta-llama/Llama-3.2-90B-Vision-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 8192, + "n_layers": 80, + "n_heads": 64, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + "vision_chunk_size": 560, + "vision_max_num_chunks": 4, + "vision_num_cross_attention_layers": 20, + }, + pth_file_count=8, + ), + ] + + +def llama3_3_instruct_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama3_3_70b_instruct, + description="Llama 3.3 70b instruct", + huggingface_repo="meta-llama/Llama-3.3-70B-Instruct", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 8192, + "n_layers": 80, + "n_heads": 64, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 4096, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=8, + ), + ] + + +@lru_cache +def safety_models() -> List[Model]: + return [ + Model( + core_model_id=CoreModelId.llama_guard_3_11b_vision, + description="Llama Guard v3 11b vision system safety model", + huggingface_repo="meta-llama/Llama-Guard-3-11B-Vision", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 4096, + "n_layers": 32, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 1024, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + "vision_chunk_size": 560, + "vision_max_num_chunks": 4, + "vision_num_cross_attention_layers": 8, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama_guard_3_1b, + variant="int4", + description="Llama Guard v3 1b 'int4' quantized system safety model", + huggingface_repo="meta-llama/Llama-Guard-3-1B-INT4", + quantization_format=CheckpointQuantizationFormat.int4, + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 2048, + "n_layers": 12, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "rope_freq_base": 500000.0, + "norm_eps": 1e-05, + "hidden_dim": 6400, + "use_scaled_rope": True, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama_guard_3_1b, + description="Llama Guard v3 1b system safety model", + huggingface_repo="meta-llama/Llama-Guard-3-1B", + recommended_sampling_params=recommended_sampling_params(), + arch_args={ + "dim": 2048, + "n_layers": 16, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA3_VOCAB_SIZE, + "ffn_dim_multiplier": 1.5, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": True, + }, + pth_file_count=1, + ), + Model( + 
core_model_id=CoreModelId.llama_guard_3_8b, + description="Llama Guard v3 8b system safety model", + huggingface_repo="meta-llama/Llama-Guard-3-8B", + arch_args={ + "dim": 4096, + "ffn_dim_multiplier": 1.3, + "multiple_of": 1024, + "n_heads": 32, + "n_kv_heads": 8, + "n_layers": 32, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + "vocab_size": LLAMA3_VOCAB_SIZE, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama_guard_3_8b, + variant="int8", + description="Llama Guard v3 8b system safety model", + huggingface_repo="meta-llama/Llama-Guard-3-8B-INT8", + quantization_format=CheckpointQuantizationFormat.int8, + arch_args={ + "dim": 4096, + "ffn_dim_multiplier": 1.3, + "multiple_of": 1024, + "n_heads": 32, + "n_kv_heads": 8, + "n_layers": 32, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + "vocab_size": LLAMA3_VOCAB_SIZE, + }, + pth_file_count=1, + ), + Model( + core_model_id=CoreModelId.llama_guard_2_8b, + description="Llama Guard v2 8b system safety model", + huggingface_repo="meta-llama/Llama-Guard-2-8B", + arch_args={ + "dim": 4096, + "n_layers": 32, + "n_heads": 32, + "n_kv_heads": 8, + "vocab_size": LLAMA2_VOCAB_SIZE, + "ffn_dim_multiplier": 1.3, + "multiple_of": 256, + "norm_eps": 1e-05, + "rope_theta": 500000.0, + "use_scaled_rope": False, + }, + pth_file_count=1, + ), + ] + + +@dataclass +class LlamaDownloadInfo: + folder: str + files: List[str] + pth_size: int + + +def llama_meta_net_info(model: Model) -> LlamaDownloadInfo: + """Information needed to download model from llamameta.net""" + + pth_count = model.pth_file_count + if model.core_model_id == CoreModelId.llama3_1_405b: + if pth_count == 16: + folder = "Llama-3.1-405B-MP16" + elif model.quantization_format == CheckpointQuantizationFormat.fp8_mixed: + folder = "Llama-3.1-405B" + else: + folder = "Llama-3.1-405B-MP8" + elif model.core_model_id == CoreModelId.llama3_1_405b_instruct: + if pth_count == 16: + folder = "Llama-3.1-405B-Instruct-MP16" + elif model.quantization_format == CheckpointQuantizationFormat.fp8_mixed: + folder = "Llama-3.1-405B-Instruct" + else: + folder = "Llama-3.1-405B-Instruct-MP8" + elif model.core_model_id == CoreModelId.llama_guard_3_8b: + if model.quantization_format == CheckpointQuantizationFormat.int8: + folder = "Llama-Guard-3-8B-INT8-HF" + else: + folder = "Llama-Guard-3-8B" + elif model.core_model_id == CoreModelId.llama_guard_2_8b: + folder = "llama-guard-2" + else: + folder = model.huggingface_repo.split("/")[-1] + if "Llama-2" in folder: + folder = folder.lower() + + files = ["checklist.chk"] + if ( + model.core_model_id == CoreModelId.llama_guard_3_8b + and model.quantization_format == CheckpointQuantizationFormat.int8 + ): + files.extend( + [ + "generation_config.json", + "model-00001-of-00002.safetensors", + "model-00002-of-00002.safetensors", + "special_tokens_map.json", + "tokenizer.json", + "tokenizer_config.json", + "model.safetensors.index.json", + ] + ) + elif ( + model.core_model_id == CoreModelId.llama_guard_3_1b + and model.quantization_format == CheckpointQuantizationFormat.int4 + ): + files.extend( + [ + "llama_guard_3_1b_pruned_xnnpack.pte", + "example-prompt.txt", + "params.json", + "tokenizer.model", + ] + ) + else: + files.extend( + [ + "tokenizer.model", + "params.json", + ] + ) + if model.quantization_format == CheckpointQuantizationFormat.fp8_mixed: + files.extend([f"fp8_scales_{i}.pt" for i in range(pth_count)]) + files.extend([f"consolidated.{i:02d}.pth" for i in range(pth_count)]) + + return 
LlamaDownloadInfo( + folder=folder, + files=files, + pth_size=llama_meta_pth_size(model), + ) + + +# Sadness because Cloudfront rejects our HEAD requests to find Content-Length +def llama_meta_pth_size(model: Model) -> int: + if model.core_model_id not in ( + CoreModelId.llama3_1_405b, + CoreModelId.llama3_1_405b_instruct, + ): + return 0 + + if model.pth_file_count == 16: + return 51268302389 + elif model.quantization_format == CheckpointQuantizationFormat.fp8_mixed: + return 60903742309 + else: + return 101470976045 diff --git a/llama_stack/providers/datatypes.py b/llama_stack/providers/datatypes.py index b92f9dc0a..384582423 100644 --- a/llama_stack/providers/datatypes.py +++ b/llama_stack/providers/datatypes.py @@ -7,7 +7,6 @@ from typing import Any, List, Optional, Protocol from urllib.parse import urlparse -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field from llama_stack.apis.benchmarks import Benchmark @@ -18,6 +17,7 @@ from llama_stack.apis.scoring_functions import ScoringFn from llama_stack.apis.shields import Shield from llama_stack.apis.tools import Tool from llama_stack.apis.vector_dbs import VectorDB +from llama_stack.schema_utils import json_schema_type class ModelsProtocolPrivate(Protocol): diff --git a/llama_stack/providers/inline/agents/meta_reference/agent_instance.py b/llama_stack/providers/inline/agents/meta_reference/agent_instance.py index 8ba7885cd..fc597d0f7 100644 --- a/llama_stack/providers/inline/agents/meta_reference/agent_instance.py +++ b/llama_stack/providers/inline/agents/meta_reference/agent_instance.py @@ -17,7 +17,6 @@ from typing import Any, AsyncGenerator, Dict, List, Optional, Tuple from urllib.parse import urlparse import httpx -from llama_models.llama3.api.datatypes import BuiltinTool, ToolCall, ToolParamDefinition from pydantic import TypeAdapter from llama_stack.apis.agents import ( @@ -63,6 +62,7 @@ from llama_stack.apis.inference import ( from llama_stack.apis.safety import Safety from llama_stack.apis.tools import RAGDocument, RAGQueryConfig, ToolGroups, ToolRuntime from llama_stack.apis.vector_io import VectorIO +from llama_stack.models.llama.datatypes import BuiltinTool, ToolCall, ToolParamDefinition from llama_stack.providers.utils.kvstore import KVStore from llama_stack.providers.utils.memory.vector_store import concat_interleaved_content from llama_stack.providers.utils.telemetry import tracing diff --git a/llama_stack/providers/inline/agents/meta_reference/tests/test_chat_agent.py b/llama_stack/providers/inline/agents/meta_reference/tests/test_chat_agent.py index 4e3951ad3..b802937b6 100644 --- a/llama_stack/providers/inline/agents/meta_reference/tests/test_chat_agent.py +++ b/llama_stack/providers/inline/agents/meta_reference/tests/test_chat_agent.py @@ -8,7 +8,6 @@ import tempfile from typing import AsyncIterator, List, Optional, Union import pytest -from llama_models.llama3.api.datatypes import BuiltinTool from llama_stack.apis.agents import ( AgentConfig, @@ -41,6 +40,7 @@ from llama_stack.apis.tools import ( ToolInvocationResult, ) from llama_stack.apis.vector_io import QueryChunksResponse +from llama_stack.models.llama.datatypes import BuiltinTool from llama_stack.providers.inline.agents.meta_reference.agent_instance import ( MEMORY_QUERY_TOOL, ) diff --git a/llama_stack/providers/inline/inference/meta_reference/generation.py b/llama_stack/providers/inline/inference/meta_reference/generation.py index e60c3b1be..2d2ec5c8f 100644 --- 
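As a quick illustration of the download helpers above (not part of the patch itself; it assumes it runs inside the new `sku_list` module, so the functions defined above are already in scope):

```python
# Illustration only: exercises llama_meta_net_info() with one of the Model
# entries defined above. The values follow directly from the code in this file.
model = llama3_1_base_models()[0]   # the Llama 3.1 8b base entry
info = llama_meta_net_info(model)

print(info.folder)    # "Llama-3.1-8B" -- taken from the huggingface_repo name
print(info.files)     # ["checklist.chk", "tokenizer.model", "params.json", "consolidated.00.pth"]
print(info.pth_size)  # 0 -- llama_meta_pth_size() only hard-codes sizes for the 405B checkpoints
```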
a/llama_stack/providers/inline/inference/meta_reference/generation.py +++ b/llama_stack/providers/inline/inference/meta_reference/generation.py @@ -23,20 +23,13 @@ from fairscale.nn.model_parallel.initialize import ( initialize_model_parallel, model_parallel_is_initialized, ) -from llama_models.datatypes import ( - GreedySamplingStrategy, - SamplingParams, - TopPSamplingStrategy, -) from llama_models.llama3.api.args import ModelArgs from llama_models.llama3.api.chat_format import ChatFormat, LLMInput -from llama_models.llama3.api.datatypes import Model from llama_models.llama3.api.tokenizer import Tokenizer from llama_models.llama3.reference_impl.model import Transformer from llama_models.llama3.reference_impl.multimodal.model import ( CrossAttentionTransformer, ) -from llama_models.sku_list import resolve_model from lmformatenforcer import JsonSchemaParser, TokenEnforcer, TokenEnforcerTokenizerData from pydantic import BaseModel @@ -47,6 +40,13 @@ from llama_stack.apis.inference import ( ResponseFormatType, ) from llama_stack.distribution.utils.model_utils import model_local_dir +from llama_stack.models.llama.datatypes import ( + GreedySamplingStrategy, + Model, + SamplingParams, + TopPSamplingStrategy, +) +from llama_stack.models.llama.sku_list import resolve_model from llama_stack.providers.utils.inference.prompt_adapter import ( ChatCompletionRequestWithRawContent, CompletionRequestWithRawContent, diff --git a/llama_stack/providers/inline/inference/meta_reference/inference.py b/llama_stack/providers/inline/inference/meta_reference/inference.py index 61f0ee3f4..c79f97def 100644 --- a/llama_stack/providers/inline/inference/meta_reference/inference.py +++ b/llama_stack/providers/inline/inference/meta_reference/inference.py @@ -8,14 +8,6 @@ import asyncio import logging from typing import AsyncGenerator, List, Optional, Union -from llama_models.llama3.api.datatypes import ( - SamplingParams, - StopReason, - ToolDefinition, - ToolPromptFormat, -) -from llama_models.sku_list import resolve_model - from llama_stack.apis.common.content_types import ( TextDelta, ToolCallDelta, @@ -41,6 +33,13 @@ from llama_stack.apis.inference import ( ToolConfig, ) from llama_stack.apis.models import Model, ModelType +from llama_stack.models.llama.datatypes import ( + SamplingParams, + StopReason, + ToolDefinition, + ToolPromptFormat, +) +from llama_stack.models.llama.sku_list import resolve_model from llama_stack.providers.datatypes import ModelsProtocolPrivate from llama_stack.providers.utils.inference.embedding_mixin import ( SentenceTransformerEmbeddingMixin, diff --git a/llama_stack/providers/inline/inference/meta_reference/model_parallel.py b/llama_stack/providers/inline/inference/meta_reference/model_parallel.py index ef133274c..64f94a69d 100644 --- a/llama_stack/providers/inline/inference/meta_reference/model_parallel.py +++ b/llama_stack/providers/inline/inference/meta_reference/model_parallel.py @@ -10,10 +10,10 @@ from functools import partial from typing import Any, Generator from llama_models.llama3.api.chat_format import ChatFormat -from llama_models.llama3.api.datatypes import Model from llama_models.llama3.api.tokenizer import Tokenizer -from llama_models.sku_list import resolve_model +from llama_stack.models.llama.datatypes import Model +from llama_stack.models.llama.sku_list import resolve_model from llama_stack.providers.utils.inference.prompt_adapter import ( ChatCompletionRequestWithRawContent, CompletionRequestWithRawContent, diff --git 
a/llama_stack/providers/inline/inference/meta_reference/quantization/loader.py b/llama_stack/providers/inline/inference/meta_reference/quantization/loader.py index 9be35ae70..a2dc00916 100644 --- a/llama_stack/providers/inline/inference/meta_reference/quantization/loader.py +++ b/llama_stack/providers/inline/inference/meta_reference/quantization/loader.py @@ -14,14 +14,14 @@ from typing import Any, Dict, List, Optional import torch from fairscale.nn.model_parallel.layers import ColumnParallelLinear, RowParallelLinear from fairscale.nn.model_parallel.mappings import reduce_from_model_parallel_region -from llama_models.datatypes import CheckpointQuantizationFormat from llama_models.llama3.api.args import ModelArgs from llama_models.llama3.reference_impl.model import Transformer, TransformerBlock -from llama_models.sku_list import resolve_model from torch import Tensor, nn from torchao.quantization.GPTQ import Int8DynActInt4WeightLinear from llama_stack.apis.inference import QuantizationType +from llama_stack.models.llama.datatypes import CheckpointQuantizationFormat +from llama_stack.models.llama.sku_list import resolve_model from ..config import MetaReferenceQuantizedInferenceConfig diff --git a/llama_stack/providers/inline/inference/vllm/config.py b/llama_stack/providers/inline/inference/vllm/config.py index de2bae265..51ef2d273 100644 --- a/llama_stack/providers/inline/inference/vllm/config.py +++ b/llama_stack/providers/inline/inference/vllm/config.py @@ -4,10 +4,10 @@ # This source code is licensed under the terms described in the LICENSE file in # the root directory of this source tree. -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field, field_validator from llama_stack.providers.utils.inference import supported_inference_models +from llama_stack.schema_utils import json_schema_type @json_schema_type diff --git a/llama_stack/providers/inline/inference/vllm/vllm.py b/llama_stack/providers/inline/inference/vllm/vllm.py index e75a9aac3..5536ea3a5 100644 --- a/llama_stack/providers/inline/inference/vllm/vllm.py +++ b/llama_stack/providers/inline/inference/vllm/vllm.py @@ -11,7 +11,6 @@ from typing import AsyncGenerator, List, Optional from llama_models.llama3.api.chat_format import ChatFormat from llama_models.llama3.api.tokenizer import Tokenizer -from llama_models.sku_list import resolve_model from vllm.engine.arg_utils import AsyncEngineArgs from vllm.engine.async_llm_engine import AsyncLLMEngine from vllm.sampling_params import SamplingParams as VLLMSamplingParams @@ -35,6 +34,7 @@ from llama_stack.apis.inference import ( ToolPromptFormat, ) from llama_stack.apis.models import Model +from llama_stack.models.llama.sku_list import resolve_model from llama_stack.providers.datatypes import ModelsProtocolPrivate from llama_stack.providers.utils.inference.openai_compat import ( OpenAICompatCompletionChoice, diff --git a/llama_stack/providers/inline/post_training/torchtune/common/utils.py b/llama_stack/providers/inline/post_training/torchtune/common/utils.py index 735af8c79..98e16f9d7 100644 --- a/llama_stack/providers/inline/post_training/torchtune/common/utils.py +++ b/llama_stack/providers/inline/post_training/torchtune/common/utils.py @@ -13,8 +13,6 @@ from typing import Any, Callable, Dict import torch -from llama_models.datatypes import Model -from llama_models.sku_list import resolve_model from pydantic import BaseModel from torchtune.data._messages import InputOutputToMessages, ShareGPTToMessages from torchtune.models.llama3 import 
llama3_tokenizer @@ -24,6 +22,8 @@ from torchtune.models.llama3_2 import lora_llama3_2_3b from torchtune.modules.transforms import Transform from llama_stack.apis.post_training import DatasetFormat +from llama_stack.models.llama.datatypes import Model +from llama_stack.models.llama.sku_list import resolve_model class ModelConfig(BaseModel): diff --git a/llama_stack/providers/inline/post_training/torchtune/post_training.py b/llama_stack/providers/inline/post_training/torchtune/post_training.py index ba11736d6..c77d9305f 100644 --- a/llama_stack/providers/inline/post_training/torchtune/post_training.py +++ b/llama_stack/providers/inline/post_training/torchtune/post_training.py @@ -6,8 +6,6 @@ from datetime import datetime from typing import Any, Dict, Optional -from llama_models.schema_utils import webmethod - from llama_stack.apis.datasetio import DatasetIO from llama_stack.apis.datasets import Datasets from llama_stack.apis.post_training import ( @@ -27,6 +25,7 @@ from llama_stack.providers.inline.post_training.torchtune.config import ( from llama_stack.providers.inline.post_training.torchtune.recipes.lora_finetuning_single_device import ( LoraFinetuningSingleDevice, ) +from llama_stack.schema_utils import webmethod class TorchtunePostTrainingImpl: diff --git a/llama_stack/providers/inline/post_training/torchtune/recipes/lora_finetuning_single_device.py b/llama_stack/providers/inline/post_training/torchtune/recipes/lora_finetuning_single_device.py index ef379aff2..4ab59fec4 100644 --- a/llama_stack/providers/inline/post_training/torchtune/recipes/lora_finetuning_single_device.py +++ b/llama_stack/providers/inline/post_training/torchtune/recipes/lora_finetuning_single_device.py @@ -14,7 +14,6 @@ from pathlib import Path from typing import Any, Dict, List, Optional, Tuple import torch -from llama_models.sku_list import resolve_model from torch import nn from torch.optim import Optimizer from torch.utils.data import DataLoader, DistributedSampler @@ -46,6 +45,7 @@ from llama_stack.apis.post_training import ( ) from llama_stack.distribution.utils.config_dirs import DEFAULT_CHECKPOINT_DIR from llama_stack.distribution.utils.model_utils import model_local_dir +from llama_stack.models.llama.sku_list import resolve_model from llama_stack.providers.inline.post_training.common.validator import ( validate_input_dataset_schema, ) diff --git a/llama_stack/providers/inline/safety/llama_guard/llama_guard.py b/llama_stack/providers/inline/safety/llama_guard/llama_guard.py index 32d6d5100..af0987fa8 100644 --- a/llama_stack/providers/inline/safety/llama_guard/llama_guard.py +++ b/llama_stack/providers/inline/safety/llama_guard/llama_guard.py @@ -8,9 +8,6 @@ import re from string import Template from typing import Any, Dict, List, Optional -from llama_models.datatypes import CoreModelId -from llama_models.llama3.api.datatypes import Role - from llama_stack.apis.common.content_types import ImageContentItem, TextContentItem from llama_stack.apis.inference import ( ChatCompletionResponseEventType, @@ -26,6 +23,7 @@ from llama_stack.apis.safety import ( ) from llama_stack.apis.shields import Shield from llama_stack.distribution.datatypes import Api +from llama_stack.models.llama.datatypes import CoreModelId, Role from llama_stack.providers.datatypes import ShieldsProtocolPrivate from llama_stack.providers.utils.inference.prompt_adapter import ( interleaved_content_as_str, diff --git a/llama_stack/providers/inline/vector_io/faiss/config.py b/llama_stack/providers/inline/vector_io/faiss/config.py index 
ae859842d..9eae9ed67 100644 --- a/llama_stack/providers/inline/vector_io/faiss/config.py +++ b/llama_stack/providers/inline/vector_io/faiss/config.py @@ -6,13 +6,13 @@ from typing import Any, Dict -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel from llama_stack.providers.utils.kvstore.config import ( KVStoreConfig, SqliteKVStoreConfig, ) +from llama_stack.schema_utils import json_schema_type @json_schema_type diff --git a/llama_stack/providers/remote/inference/bedrock/bedrock.py b/llama_stack/providers/remote/inference/bedrock/bedrock.py index 917ac7a25..e896f0597 100644 --- a/llama_stack/providers/remote/inference/bedrock/bedrock.py +++ b/llama_stack/providers/remote/inference/bedrock/bedrock.py @@ -8,7 +8,6 @@ import json from typing import AsyncGenerator, AsyncIterator, Dict, List, Optional, Union from botocore.client import BaseClient -from llama_models.datatypes import CoreModelId from llama_models.llama3.api.chat_format import ChatFormat from llama_models.llama3.api.tokenizer import Tokenizer @@ -28,6 +27,7 @@ from llama_stack.apis.inference import ( ToolDefinition, ToolPromptFormat, ) +from llama_stack.models.llama.datatypes import CoreModelId from llama_stack.providers.remote.inference.bedrock.config import BedrockConfig from llama_stack.providers.utils.bedrock.client import create_bedrock_client from llama_stack.providers.utils.inference.model_registry import ( diff --git a/llama_stack/providers/remote/inference/cerebras/cerebras.py b/llama_stack/providers/remote/inference/cerebras/cerebras.py index 2158fc5b4..1ce267e8d 100644 --- a/llama_stack/providers/remote/inference/cerebras/cerebras.py +++ b/llama_stack/providers/remote/inference/cerebras/cerebras.py @@ -7,9 +7,7 @@ from typing import AsyncGenerator, List, Optional, Union from cerebras.cloud.sdk import AsyncCerebras -from llama_models.datatypes import CoreModelId from llama_models.llama3.api.chat_format import ChatFormat -from llama_models.llama3.api.datatypes import TopKSamplingStrategy from llama_models.llama3.api.tokenizer import Tokenizer from llama_stack.apis.common.content_types import InterleavedContent @@ -28,6 +26,7 @@ from llama_stack.apis.inference import ( ToolDefinition, ToolPromptFormat, ) +from llama_stack.models.llama.datatypes import CoreModelId, TopKSamplingStrategy from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, build_model_alias, diff --git a/llama_stack/providers/remote/inference/cerebras/config.py b/llama_stack/providers/remote/inference/cerebras/config.py index 6eb4dffec..81682c980 100644 --- a/llama_stack/providers/remote/inference/cerebras/config.py +++ b/llama_stack/providers/remote/inference/cerebras/config.py @@ -7,9 +7,10 @@ import os from typing import Any, Dict, Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field, SecretStr +from llama_stack.schema_utils import json_schema_type + DEFAULT_BASE_URL = "https://api.cerebras.ai" diff --git a/llama_stack/providers/remote/inference/databricks/config.py b/llama_stack/providers/remote/inference/databricks/config.py index ae2b056ea..6aaf7e594 100644 --- a/llama_stack/providers/remote/inference/databricks/config.py +++ b/llama_stack/providers/remote/inference/databricks/config.py @@ -5,9 +5,10 @@ # the root directory of this source tree. 
-from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field +from llama_stack.schema_utils import json_schema_type + @json_schema_type class DatabricksImplConfig(BaseModel): diff --git a/llama_stack/providers/remote/inference/databricks/databricks.py b/llama_stack/providers/remote/inference/databricks/databricks.py index d56be1465..3d306e61f 100644 --- a/llama_stack/providers/remote/inference/databricks/databricks.py +++ b/llama_stack/providers/remote/inference/databricks/databricks.py @@ -6,7 +6,6 @@ from typing import AsyncGenerator, List, Optional -from llama_models.datatypes import CoreModelId from llama_models.llama3.api.chat_format import ChatFormat from llama_models.llama3.api.tokenizer import Tokenizer from openai import OpenAI @@ -25,6 +24,7 @@ from llama_stack.apis.inference import ( ToolDefinition, ToolPromptFormat, ) +from llama_stack.models.llama.datatypes import CoreModelId from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, build_model_alias, diff --git a/llama_stack/providers/remote/inference/fireworks/config.py b/llama_stack/providers/remote/inference/fireworks/config.py index aa4c2d1de..005dfe829 100644 --- a/llama_stack/providers/remote/inference/fireworks/config.py +++ b/llama_stack/providers/remote/inference/fireworks/config.py @@ -6,9 +6,10 @@ from typing import Any, Dict, Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field, SecretStr +from llama_stack.schema_utils import json_schema_type + @json_schema_type class FireworksImplConfig(BaseModel): diff --git a/llama_stack/providers/remote/inference/fireworks/fireworks.py b/llama_stack/providers/remote/inference/fireworks/fireworks.py index 7e8f85313..acf37b248 100644 --- a/llama_stack/providers/remote/inference/fireworks/fireworks.py +++ b/llama_stack/providers/remote/inference/fireworks/fireworks.py @@ -7,7 +7,6 @@ from typing import AsyncGenerator, List, Optional, Union from fireworks.client import Fireworks -from llama_models.datatypes import CoreModelId from llama_models.llama3.api.chat_format import ChatFormat from llama_models.llama3.api.tokenizer import Tokenizer @@ -30,6 +29,7 @@ from llama_stack.apis.inference import ( ToolPromptFormat, ) from llama_stack.distribution.request_headers import NeedsRequestProviderData +from llama_stack.models.llama.datatypes import CoreModelId from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, build_model_alias, diff --git a/llama_stack/providers/remote/inference/groq/config.py b/llama_stack/providers/remote/inference/groq/config.py index 7c5023410..cb2619437 100644 --- a/llama_stack/providers/remote/inference/groq/config.py +++ b/llama_stack/providers/remote/inference/groq/config.py @@ -6,9 +6,10 @@ from typing import Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field +from llama_stack.schema_utils import json_schema_type + @json_schema_type class GroqConfig(BaseModel): diff --git a/llama_stack/providers/remote/inference/groq/groq.py b/llama_stack/providers/remote/inference/groq/groq.py index 59ec8b0d2..441b6af5c 100644 --- a/llama_stack/providers/remote/inference/groq/groq.py +++ b/llama_stack/providers/remote/inference/groq/groq.py @@ -9,9 +9,6 @@ from typing import AsyncIterator, List, Optional, Union import groq from groq import Groq -from llama_models.datatypes import SamplingParams -from llama_models.llama3.api.datatypes import ToolDefinition, ToolPromptFormat -from 
llama_models.sku_list import CoreModelId from llama_stack.apis.inference import ( ChatCompletionRequest, @@ -29,6 +26,8 @@ from llama_stack.apis.inference import ( ToolConfig, ) from llama_stack.distribution.request_headers import NeedsRequestProviderData +from llama_stack.models.llama.datatypes import SamplingParams, ToolDefinition, ToolPromptFormat +from llama_stack.models.llama.sku_list import CoreModelId from llama_stack.providers.remote.inference.groq.config import GroqConfig from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, diff --git a/llama_stack/providers/remote/inference/groq/groq_utils.py b/llama_stack/providers/remote/inference/groq/groq_utils.py index 2445c1b39..f1138e789 100644 --- a/llama_stack/providers/remote/inference/groq/groq_utils.py +++ b/llama_stack/providers/remote/inference/groq/groq_utils.py @@ -24,7 +24,6 @@ from groq.types.chat.chat_completion_user_message_param import ( ) from groq.types.chat.completion_create_params import CompletionCreateParams from groq.types.shared.function_definition import FunctionDefinition -from llama_models.llama3.api.datatypes import ToolParamDefinition from llama_stack.apis.common.content_types import ( TextDelta, @@ -44,6 +43,7 @@ from llama_stack.apis.inference import ( ToolDefinition, ToolPromptFormat, ) +from llama_stack.models.llama.datatypes import ToolParamDefinition from llama_stack.providers.utils.inference.openai_compat import ( UnparseableToolCall, convert_tool_call, diff --git a/llama_stack/providers/remote/inference/nvidia/config.py b/llama_stack/providers/remote/inference/nvidia/config.py index 9bf5eb469..abd34b498 100644 --- a/llama_stack/providers/remote/inference/nvidia/config.py +++ b/llama_stack/providers/remote/inference/nvidia/config.py @@ -7,9 +7,10 @@ import os from typing import Any, Dict, Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field, SecretStr +from llama_stack.schema_utils import json_schema_type + @json_schema_type class NVIDIAConfig(BaseModel): diff --git a/llama_stack/providers/remote/inference/nvidia/nvidia.py b/llama_stack/providers/remote/inference/nvidia/nvidia.py index 82343513f..0c5b7c454 100644 --- a/llama_stack/providers/remote/inference/nvidia/nvidia.py +++ b/llama_stack/providers/remote/inference/nvidia/nvidia.py @@ -7,9 +7,6 @@ import warnings from typing import AsyncIterator, List, Optional, Union -from llama_models.datatypes import SamplingParams -from llama_models.llama3.api.datatypes import ToolDefinition, ToolPromptFormat -from llama_models.sku_list import CoreModelId from openai import APIConnectionError, AsyncOpenAI from llama_stack.apis.inference import ( @@ -28,6 +25,7 @@ from llama_stack.apis.inference import ( ToolChoice, ToolConfig, ) +from llama_stack.models.llama.datatypes import CoreModelId, SamplingParams, ToolDefinition, ToolPromptFormat from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, build_model_alias, diff --git a/llama_stack/providers/remote/inference/nvidia/openai_utils.py b/llama_stack/providers/remote/inference/nvidia/openai_utils.py index c757c562c..9799eedcc 100644 --- a/llama_stack/providers/remote/inference/nvidia/openai_utils.py +++ b/llama_stack/providers/remote/inference/nvidia/openai_utils.py @@ -8,17 +8,6 @@ import json import warnings from typing import Any, AsyncGenerator, Dict, Generator, Iterable, List, Optional, Union -from llama_models.datatypes import ( - GreedySamplingStrategy, - TopKSamplingStrategy, - 
TopPSamplingStrategy, -) -from llama_models.llama3.api.datatypes import ( - BuiltinTool, - StopReason, - ToolCall, - ToolDefinition, -) from openai import AsyncStream from openai.types.chat import ( ChatCompletionAssistantMessageParam as OpenAIChatCompletionAssistantMessage, @@ -87,6 +76,15 @@ from llama_stack.apis.inference import ( ToolResponseMessage, UserMessage, ) +from llama_stack.models.llama.datatypes import ( + BuiltinTool, + GreedySamplingStrategy, + StopReason, + ToolCall, + ToolDefinition, + TopKSamplingStrategy, + TopPSamplingStrategy, +) from llama_stack.providers.utils.inference.prompt_adapter import ( convert_image_content_to_url, ) diff --git a/llama_stack/providers/remote/inference/ollama/ollama.py b/llama_stack/providers/remote/inference/ollama/ollama.py index 1c12d0d91..f524c0734 100644 --- a/llama_stack/providers/remote/inference/ollama/ollama.py +++ b/llama_stack/providers/remote/inference/ollama/ollama.py @@ -8,7 +8,6 @@ import logging from typing import AsyncGenerator, List, Optional, Union import httpx -from llama_models.datatypes import CoreModelId from llama_models.llama3.api.chat_format import ChatFormat from llama_models.llama3.api.tokenizer import Tokenizer from ollama import AsyncClient @@ -34,6 +33,7 @@ from llama_stack.apis.inference import ( ToolPromptFormat, ) from llama_stack.apis.models import Model, ModelType +from llama_stack.models.llama.datatypes import CoreModelId from llama_stack.providers.datatypes import ModelsProtocolPrivate from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, diff --git a/llama_stack/providers/remote/inference/runpod/config.py b/llama_stack/providers/remote/inference/runpod/config.py index 1a9582052..e59cfe59b 100644 --- a/llama_stack/providers/remote/inference/runpod/config.py +++ b/llama_stack/providers/remote/inference/runpod/config.py @@ -6,9 +6,10 @@ from typing import Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field +from llama_stack.schema_utils import json_schema_type + @json_schema_type class RunpodImplConfig(BaseModel): diff --git a/llama_stack/providers/remote/inference/runpod/runpod.py b/llama_stack/providers/remote/inference/runpod/runpod.py index a3c615418..1abb17336 100644 --- a/llama_stack/providers/remote/inference/runpod/runpod.py +++ b/llama_stack/providers/remote/inference/runpod/runpod.py @@ -6,11 +6,11 @@ from typing import AsyncGenerator from llama_models.llama3.api.chat_format import ChatFormat -from llama_models.llama3.api.datatypes import Message from llama_models.llama3.api.tokenizer import Tokenizer from openai import OpenAI from llama_stack.apis.inference import * # noqa: F403 +from llama_stack.models.llama.datatypes import Message # from llama_stack.providers.datatypes import ModelsProtocolPrivate from llama_stack.providers.utils.inference.model_registry import ModelRegistryHelper diff --git a/llama_stack/providers/remote/inference/sambanova/config.py b/llama_stack/providers/remote/inference/sambanova/config.py index 1798841df..a30c29b74 100644 --- a/llama_stack/providers/remote/inference/sambanova/config.py +++ b/llama_stack/providers/remote/inference/sambanova/config.py @@ -6,9 +6,10 @@ from typing import Any, Dict, Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field +from llama_stack.schema_utils import json_schema_type + @json_schema_type class SambaNovaImplConfig(BaseModel): diff --git a/llama_stack/providers/remote/inference/sambanova/sambanova.py 
b/llama_stack/providers/remote/inference/sambanova/sambanova.py index 3546ee977..b906e0dcb 100644 --- a/llama_stack/providers/remote/inference/sambanova/sambanova.py +++ b/llama_stack/providers/remote/inference/sambanova/sambanova.py @@ -7,12 +7,6 @@ import json from typing import AsyncGenerator -from llama_models.datatypes import ( - CoreModelId, - GreedySamplingStrategy, - TopKSamplingStrategy, - TopPSamplingStrategy, -) from llama_models.llama3.api.chat_format import ChatFormat from llama_models.llama3.api.tokenizer import Tokenizer from openai import OpenAI @@ -23,6 +17,12 @@ from llama_stack.apis.common.content_types import ( TextContentItem, ) from llama_stack.apis.inference import * # noqa: F403 +from llama_stack.models.llama.datatypes import ( + CoreModelId, + GreedySamplingStrategy, + TopKSamplingStrategy, + TopPSamplingStrategy, +) from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, build_model_alias, diff --git a/llama_stack/providers/remote/inference/tgi/config.py b/llama_stack/providers/remote/inference/tgi/config.py index 4f690dec6..6ad663662 100644 --- a/llama_stack/providers/remote/inference/tgi/config.py +++ b/llama_stack/providers/remote/inference/tgi/config.py @@ -6,9 +6,10 @@ from typing import Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field, SecretStr +from llama_stack.schema_utils import json_schema_type + @json_schema_type class TGIImplConfig(BaseModel): diff --git a/llama_stack/providers/remote/inference/tgi/tgi.py b/llama_stack/providers/remote/inference/tgi/tgi.py index 72eaa6c31..1909e01f8 100644 --- a/llama_stack/providers/remote/inference/tgi/tgi.py +++ b/llama_stack/providers/remote/inference/tgi/tgi.py @@ -11,7 +11,6 @@ from typing import AsyncGenerator, List, Optional from huggingface_hub import AsyncInferenceClient, HfApi from llama_models.llama3.api.chat_format import ChatFormat from llama_models.llama3.api.tokenizer import Tokenizer -from llama_models.sku_list import all_registered_models from llama_stack.apis.common.content_types import InterleavedContent from llama_stack.apis.inference import ( @@ -31,6 +30,7 @@ from llama_stack.apis.inference import ( ToolPromptFormat, ) from llama_stack.apis.models import Model +from llama_stack.models.llama.sku_list import all_registered_models from llama_stack.providers.datatypes import ModelsProtocolPrivate from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, diff --git a/llama_stack/providers/remote/inference/together/config.py b/llama_stack/providers/remote/inference/together/config.py index a56cb5bb8..fda3b8f43 100644 --- a/llama_stack/providers/remote/inference/together/config.py +++ b/llama_stack/providers/remote/inference/together/config.py @@ -6,9 +6,10 @@ from typing import Any, Dict, Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field, SecretStr +from llama_stack.schema_utils import json_schema_type + @json_schema_type class TogetherImplConfig(BaseModel): diff --git a/llama_stack/providers/remote/inference/together/together.py b/llama_stack/providers/remote/inference/together/together.py index 916e64ad4..054501da8 100644 --- a/llama_stack/providers/remote/inference/together/together.py +++ b/llama_stack/providers/remote/inference/together/together.py @@ -6,7 +6,6 @@ from typing import AsyncGenerator, List, Optional, Union -from llama_models.datatypes import CoreModelId from llama_models.llama3.api.chat_format import ChatFormat from 
llama_models.llama3.api.tokenizer import Tokenizer from together import Together @@ -29,6 +28,7 @@ from llama_stack.apis.inference import ( ToolPromptFormat, ) from llama_stack.distribution.request_headers import NeedsRequestProviderData +from llama_stack.models.llama.datatypes import CoreModelId from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, build_model_alias, diff --git a/llama_stack/providers/remote/inference/vllm/config.py b/llama_stack/providers/remote/inference/vllm/config.py index a3a4c6930..c75cc8926 100644 --- a/llama_stack/providers/remote/inference/vllm/config.py +++ b/llama_stack/providers/remote/inference/vllm/config.py @@ -6,9 +6,10 @@ from typing import Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field +from llama_stack.schema_utils import json_schema_type + @json_schema_type class VLLMInferenceAdapterConfig(BaseModel): diff --git a/llama_stack/providers/remote/inference/vllm/vllm.py b/llama_stack/providers/remote/inference/vllm/vllm.py index 8f9cf68a8..b22284302 100644 --- a/llama_stack/providers/remote/inference/vllm/vllm.py +++ b/llama_stack/providers/remote/inference/vllm/vllm.py @@ -7,10 +7,9 @@ import json import logging from typing import AsyncGenerator, List, Optional, Union -from llama_models.llama3.api import StopReason, ToolCall +from llama_models.datatypes import StopReason, ToolCall from llama_models.llama3.api.chat_format import ChatFormat from llama_models.llama3.api.tokenizer import Tokenizer -from llama_models.sku_list import all_registered_models from openai import OpenAI from llama_stack.apis.common.content_types import InterleavedContent, TextDelta, ToolCallDelta, ToolCallParseStatus @@ -37,6 +36,7 @@ from llama_stack.apis.inference import ( ToolPromptFormat, ) from llama_stack.apis.models import Model, ModelType +from llama_stack.models.llama.sku_list import all_registered_models from llama_stack.providers.datatypes import ModelsProtocolPrivate from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, diff --git a/llama_stack/providers/remote/safety/bedrock/config.py b/llama_stack/providers/remote/safety/bedrock/config.py index 8c61decf3..1ca8d95cb 100644 --- a/llama_stack/providers/remote/safety/bedrock/config.py +++ b/llama_stack/providers/remote/safety/bedrock/config.py @@ -5,9 +5,8 @@ # the root directory of this source tree. 
-from llama_models.schema_utils import json_schema_type - from llama_stack.providers.utils.bedrock.config import BedrockBaseConfig +from llama_stack.schema_utils import json_schema_type @json_schema_type diff --git a/llama_stack/providers/remote/tool_runtime/brave_search/brave_search.py b/llama_stack/providers/remote/tool_runtime/brave_search/brave_search.py index 564f76088..8ef9f5705 100644 --- a/llama_stack/providers/remote/tool_runtime/brave_search/brave_search.py +++ b/llama_stack/providers/remote/tool_runtime/brave_search/brave_search.py @@ -7,7 +7,6 @@ from typing import Any, Dict, List, Optional import requests -from llama_models.llama3.api.datatypes import BuiltinTool from llama_stack.apis.common.content_types import URL from llama_stack.apis.tools import ( @@ -18,6 +17,7 @@ from llama_stack.apis.tools import ( ToolRuntime, ) from llama_stack.distribution.request_headers import NeedsRequestProviderData +from llama_stack.models.llama.datatypes import BuiltinTool from llama_stack.providers.datatypes import ToolsProtocolPrivate from .config import BraveSearchToolConfig diff --git a/llama_stack/providers/remote/vector_io/pgvector/config.py b/llama_stack/providers/remote/vector_io/pgvector/config.py index 2a64d7c67..7811de1ca 100644 --- a/llama_stack/providers/remote/vector_io/pgvector/config.py +++ b/llama_stack/providers/remote/vector_io/pgvector/config.py @@ -4,9 +4,10 @@ # This source code is licensed under the terms described in the LICENSE file in # the root directory of this source tree. -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field +from llama_stack.schema_utils import json_schema_type + @json_schema_type class PGVectorVectorIOConfig(BaseModel): diff --git a/llama_stack/providers/remote/vector_io/qdrant/config.py b/llama_stack/providers/remote/vector_io/qdrant/config.py index 613cfa6e4..f212882d8 100644 --- a/llama_stack/providers/remote/vector_io/qdrant/config.py +++ b/llama_stack/providers/remote/vector_io/qdrant/config.py @@ -6,9 +6,10 @@ from typing import Optional -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel +from llama_stack.schema_utils import json_schema_type + @json_schema_type class QdrantVectorIOConfig(BaseModel): diff --git a/llama_stack/providers/tests/agents/test_agents.py b/llama_stack/providers/tests/agents/test_agents.py index 45b276cc3..2e7bd537f 100644 --- a/llama_stack/providers/tests/agents/test_agents.py +++ b/llama_stack/providers/tests/agents/test_agents.py @@ -7,8 +7,6 @@ import os import pytest -from llama_models.datatypes import SamplingParams, TopPSamplingStrategy -from llama_models.llama3.api.datatypes import BuiltinTool from llama_stack.apis.agents import ( AgentConfig, @@ -25,6 +23,7 @@ from llama_stack.apis.agents import ( ) from llama_stack.apis.inference import CompletionMessage, UserMessage from llama_stack.apis.safety import ViolationLevel +from llama_stack.models.llama.datatypes import BuiltinTool, SamplingParams, TopPSamplingStrategy from llama_stack.providers.datatypes import Api # How to run this test: diff --git a/llama_stack/providers/tests/inference/groq/test_groq_utils.py b/llama_stack/providers/tests/inference/groq/test_groq_utils.py index 3eba991c1..34725e957 100644 --- a/llama_stack/providers/tests/inference/groq/test_groq_utils.py +++ b/llama_stack/providers/tests/inference/groq/test_groq_utils.py @@ -23,8 +23,6 @@ from groq.types.chat.chat_completion_message_tool_call import ( Function, ) from groq.types.shared.function_definition import 
FunctionDefinition -from llama_models.datatypes import GreedySamplingStrategy, TopPSamplingStrategy -from llama_models.llama3.api.datatypes import ToolParamDefinition from llama_stack.apis.common.content_types import ToolCallParseStatus from llama_stack.apis.inference import ( @@ -38,6 +36,7 @@ from llama_stack.apis.inference import ( ToolDefinition, UserMessage, ) +from llama_stack.models.llama.datatypes import GreedySamplingStrategy, ToolParamDefinition, TopPSamplingStrategy from llama_stack.providers.remote.inference.groq.groq_utils import ( convert_chat_completion_request, convert_chat_completion_response, diff --git a/llama_stack/providers/tests/inference/test_prompt_adapter.py b/llama_stack/providers/tests/inference/test_prompt_adapter.py index c087c5df2..323c6cb6a 100644 --- a/llama_stack/providers/tests/inference/test_prompt_adapter.py +++ b/llama_stack/providers/tests/inference/test_prompt_adapter.py @@ -6,19 +6,18 @@ import unittest -from llama_models.llama3.api.datatypes import ( - BuiltinTool, - ToolDefinition, - ToolParamDefinition, - ToolPromptFormat, -) - from llama_stack.apis.inference import ( ChatCompletionRequest, SystemMessage, ToolConfig, UserMessage, ) +from llama_stack.models.llama.datatypes import ( + BuiltinTool, + ToolDefinition, + ToolParamDefinition, + ToolPromptFormat, +) from llama_stack.providers.utils.inference.prompt_adapter import ( chat_completion_request_to_messages, ) diff --git a/llama_stack/providers/tests/inference/test_text_inference.py b/llama_stack/providers/tests/inference/test_text_inference.py index 14ed2fc4b..f25b95004 100644 --- a/llama_stack/providers/tests/inference/test_text_inference.py +++ b/llama_stack/providers/tests/inference/test_text_inference.py @@ -6,14 +6,6 @@ import pytest -from llama_models.llama3.api.datatypes import ( - SamplingParams, - StopReason, - ToolCall, - ToolDefinition, - ToolParamDefinition, - ToolPromptFormat, -) from pydantic import BaseModel, ValidationError from llama_stack.apis.common.content_types import ToolCallParseStatus @@ -30,6 +22,14 @@ from llama_stack.apis.inference import ( UserMessage, ) from llama_stack.apis.models import ListModelsResponse, Model +from llama_stack.models.llama.datatypes import ( + SamplingParams, + StopReason, + ToolCall, + ToolDefinition, + ToolParamDefinition, + ToolPromptFormat, +) from .utils import group_chunks diff --git a/llama_stack/providers/tests/report.py b/llama_stack/providers/tests/report.py index 3901dc2e3..febd13045 100644 --- a/llama_stack/providers/tests/report.py +++ b/llama_stack/providers/tests/report.py @@ -9,11 +9,12 @@ from collections import defaultdict from pathlib import Path import pytest -from llama_models.datatypes import CoreModelId -from llama_models.sku_list import all_registered_models from pytest import ExitCode from pytest_html.basereport import _process_outcome +from llama_stack.models.llama.datatypes import CoreModelId +from llama_stack.models.llama.sku_list import all_registered_models + INFERENCE_APIS = ["chat_completion"] FUNCTIONALITIES = ["streaming", "structured_output", "tool_calling"] SUPPORTED_MODELS = { diff --git a/llama_stack/providers/utils/inference/__init__.py b/llama_stack/providers/utils/inference/__init__.py index 64fe30f55..cab3725da 100644 --- a/llama_stack/providers/utils/inference/__init__.py +++ b/llama_stack/providers/utils/inference/__init__.py @@ -6,8 +6,8 @@ from typing import List -from llama_models.datatypes import * # noqa: F403 -from llama_models.sku_list import all_registered_models +from 
llama_stack.models.llama.datatypes import * # noqa: F403 +from llama_stack.models.llama.sku_list import all_registered_models def is_supported_safety_model(model: Model) -> bool: diff --git a/llama_stack/providers/utils/inference/model_registry.py b/llama_stack/providers/utils/inference/model_registry.py index 9345da949..c5f6cd6b5 100644 --- a/llama_stack/providers/utils/inference/model_registry.py +++ b/llama_stack/providers/utils/inference/model_registry.py @@ -7,9 +7,8 @@ from collections import namedtuple from typing import List, Optional -from llama_models.sku_list import all_registered_models - from llama_stack.apis.models.models import ModelType +from llama_stack.models.llama.sku_list import all_registered_models from llama_stack.providers.datatypes import Model, ModelsProtocolPrivate from llama_stack.providers.utils.inference import ( ALL_HUGGINGFACE_REPOS_TO_MODEL_DESCRIPTOR, diff --git a/llama_stack/providers/utils/inference/openai_compat.py b/llama_stack/providers/utils/inference/openai_compat.py index 33f0f4e22..da8e3ce2d 100644 --- a/llama_stack/providers/utils/inference/openai_compat.py +++ b/llama_stack/providers/utils/inference/openai_compat.py @@ -7,14 +7,7 @@ import json import logging from typing import AsyncGenerator, Dict, List, Optional, Union -from llama_models.datatypes import ( - GreedySamplingStrategy, - SamplingParams, - TopKSamplingStrategy, - TopPSamplingStrategy, -) from llama_models.llama3.api.chat_format import ChatFormat -from llama_models.llama3.api.datatypes import StopReason, ToolCall from openai.types.chat import ChatCompletionMessageToolCall from pydantic import BaseModel @@ -37,6 +30,14 @@ from llama_stack.apis.inference import ( Message, TokenLogProbs, ) +from llama_stack.models.llama.datatypes import ( + GreedySamplingStrategy, + SamplingParams, + StopReason, + ToolCall, + TopKSamplingStrategy, + TopPSamplingStrategy, +) from llama_stack.providers.utils.inference.prompt_adapter import ( convert_image_content_to_url, ) diff --git a/llama_stack/providers/utils/inference/prompt_adapter.py b/llama_stack/providers/utils/inference/prompt_adapter.py index 15149e059..b7945dee7 100644 --- a/llama_stack/providers/utils/inference/prompt_adapter.py +++ b/llama_stack/providers/utils/inference/prompt_adapter.py @@ -13,25 +13,7 @@ import re from typing import List, Optional, Tuple, Union import httpx -from llama_models.datatypes import ModelFamily, is_multimodal from llama_models.llama3.api.chat_format import ChatFormat -from llama_models.llama3.api.datatypes import ( - RawContent, - RawContentItem, - RawMediaItem, - RawMessage, - RawTextItem, - Role, - ToolPromptFormat, -) -from llama_models.llama3.prompt_templates import ( - BuiltinToolGenerator, - FunctionTagCustomToolGenerator, - JsonCustomToolGenerator, - PythonListCustomToolGenerator, - SystemDefaultGenerator, -) -from llama_models.sku_list import resolve_model from PIL import Image as PIL_Image from llama_stack.apis.common.content_types import ( @@ -51,6 +33,25 @@ from llama_stack.apis.inference import ( ToolChoice, UserMessage, ) +from llama_stack.models.llama.datatypes import ( + ModelFamily, + RawContent, + RawContentItem, + RawMediaItem, + RawMessage, + RawTextItem, + Role, + ToolPromptFormat, + is_multimodal, +) +from llama_stack.models.llama.llama3.prompt_templates import ( + BuiltinToolGenerator, + FunctionTagCustomToolGenerator, + JsonCustomToolGenerator, + PythonListCustomToolGenerator, + SystemDefaultGenerator, +) +from llama_stack.models.llama.sku_list import resolve_model from 
llama_stack.providers.utils.inference import supported_inference_models log = logging.getLogger(__name__) diff --git a/llama_stack/providers/utils/kvstore/sqlite/config.py b/llama_stack/providers/utils/kvstore/sqlite/config.py index a616c90d0..6a8b0a7cf 100644 --- a/llama_stack/providers/utils/kvstore/sqlite/config.py +++ b/llama_stack/providers/utils/kvstore/sqlite/config.py @@ -4,9 +4,10 @@ # This source code is licensed under the terms described in the LICENSE file in # the root directory of this source tree. -from llama_models.schema_utils import json_schema_type from pydantic import BaseModel, Field +from llama_stack.schema_utils import json_schema_type + @json_schema_type class SqliteControlPlaneConfig(BaseModel): diff --git a/llama_stack/providers/utils/telemetry/trace_protocol.py b/llama_stack/providers/utils/telemetry/trace_protocol.py index 80c58a2c7..924274c42 100644 --- a/llama_stack/providers/utils/telemetry/trace_protocol.py +++ b/llama_stack/providers/utils/telemetry/trace_protocol.py @@ -9,9 +9,10 @@ import inspect from functools import wraps from typing import Any, AsyncGenerator, Callable, Type, TypeVar -from llama_models.llama3.api.datatypes import Primitive from pydantic import BaseModel +from llama_stack.models.llama.datatypes import Primitive + T = TypeVar("T") diff --git a/llama_stack/schema_utils.py b/llama_stack/schema_utils.py new file mode 100644 index 000000000..56b9e5e4c --- /dev/null +++ b/llama_stack/schema_utils.py @@ -0,0 +1,50 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +from dataclasses import dataclass +from typing import Any, Callable, List, Optional, TypeVar + +from .strong_typing.schema import json_schema_type, register_schema # noqa: F401 + +T = TypeVar("T") + + +@dataclass +class WebMethod: + route: Optional[str] = None + public: bool = False + request_examples: Optional[List[Any]] = None + response_examples: Optional[List[Any]] = None + method: Optional[str] = None + + +def webmethod( + route: Optional[str] = None, + method: Optional[str] = None, + public: Optional[bool] = False, + request_examples: Optional[List[Any]] = None, + response_examples: Optional[List[Any]] = None, +) -> Callable[[T], T]: + """ + Decorator that supplies additional metadata to an endpoint operation function. + + :param route: The URL path pattern associated with this operation which path parameters are substituted into. + :param public: True if the operation can be invoked without prior authentication. + :param request_examples: Sample requests that the operation might take. Pass a list of objects, not JSON. + :param response_examples: Sample responses that the operation might produce. Pass a list of objects, not JSON. + """ + + def wrap(cls: T) -> T: + cls.__webmethod__ = WebMethod( + route=route, + method=method, + public=public or False, + request_examples=request_examples, + response_examples=response_examples, + ) + return cls + + return wrap diff --git a/llama_stack/scripts/generate_prompt_format.py b/llama_stack/scripts/generate_prompt_format.py new file mode 100644 index 000000000..ecdde900f --- /dev/null +++ b/llama_stack/scripts/generate_prompt_format.py @@ -0,0 +1,65 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. 
+ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# top-level folder for each specific model found within the models/ directory at +# the top-level of this source tree. + +import importlib +from pathlib import Path +from typing import Optional + +import fire + +# from llama_stack.models.llama.datatypes import * # noqa: F403 +from llama_models.llama3.reference_impl.generation import Llama + +THIS_DIR = Path(__file__).parent.resolve() + + +def run_main( + ckpt_dir: str, + module_name: str, + output_path: str, + model_parallel_size: Optional[int] = None, +): + module = importlib.import_module(module_name) + assert hasattr(module, "usecases"), f"Module {module_name} missing usecases function" + tokenizer_path = str(THIS_DIR.parent / "llama3/api/tokenizer.model") + generator = Llama.build( + ckpt_dir=ckpt_dir, + tokenizer_path=tokenizer_path, + max_seq_len=512, + max_batch_size=1, + model_parallel_size=model_parallel_size, + ) + + use_cases = module.usecases() + text = "" + for u in use_cases: + if isinstance(u, str): + use_case_text = f"\n{u}\n" + else: + use_case_text = u.to_text(generator) + + text += use_case_text + print(use_case_text) + + text += "Thank You!\n" + + with open(output_path, "w") as f: + f.write(text) + + +def main(): + fire.Fire(run_main) + + +if __name__ == "__main__": + main() diff --git a/llama_stack/strong_typing/__init__.py b/llama_stack/strong_typing/__init__.py new file mode 100644 index 000000000..d832dcf6f --- /dev/null +++ b/llama_stack/strong_typing/__init__.py @@ -0,0 +1,19 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. + +Provides auxiliary services for working with Python type annotations, converting typed data to and from JSON, +and generating a JSON schema for a complex type. +""" + +__version__ = "0.3.4" +__author__ = "Levente Hunyadi" +__copyright__ = "Copyright 2021-2024, Levente Hunyadi" +__license__ = "MIT" +__maintainer__ = "Levente Hunyadi" +__status__ = "Production" diff --git a/llama_stack/strong_typing/auxiliary.py b/llama_stack/strong_typing/auxiliary.py new file mode 100644 index 000000000..fd183da18 --- /dev/null +++ b/llama_stack/strong_typing/auxiliary.py @@ -0,0 +1,226 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. 
+ +:see: https://github.com/hunyadi/strong_typing +""" + +import dataclasses +import sys +from dataclasses import is_dataclass +from typing import Callable, Dict, Optional, Type, TypeVar, Union, overload + +if sys.version_info >= (3, 9): + from typing import Annotated as Annotated +else: + from typing_extensions import Annotated as Annotated + +if sys.version_info >= (3, 10): + from typing import TypeAlias as TypeAlias +else: + from typing_extensions import TypeAlias as TypeAlias + +if sys.version_info >= (3, 11): + from typing import dataclass_transform as dataclass_transform +else: + from typing_extensions import dataclass_transform as dataclass_transform + +T = TypeVar("T") + + +def _compact_dataclass_repr(obj: object) -> str: + """ + Compact data-class representation where positional arguments are used instead of keyword arguments. + + :param obj: A data-class object. + :returns: A string that matches the pattern `Class(arg1, arg2, ...)`. + """ + + if is_dataclass(obj): + arglist = ", ".join(repr(getattr(obj, field.name)) for field in dataclasses.fields(obj)) + return f"{obj.__class__.__name__}({arglist})" + else: + return obj.__class__.__name__ + + +class CompactDataClass: + "A data class whose repr() uses positional rather than keyword arguments." + + def __repr__(self) -> str: + return _compact_dataclass_repr(self) + + +@overload +def typeannotation(cls: Type[T], /) -> Type[T]: ... + + +@overload +def typeannotation(cls: None, *, eq: bool = True, order: bool = False) -> Callable[[Type[T]], Type[T]]: ... + + +@dataclass_transform(eq_default=True, order_default=False) +def typeannotation( + cls: Optional[Type[T]] = None, *, eq: bool = True, order: bool = False +) -> Union[Type[T], Callable[[Type[T]], Type[T]]]: + """ + Returns the same class as was passed in, with dunder methods added based on the fields defined in the class. + + :param cls: The data-class type to transform into a type annotation. + :param eq: Whether to generate functions to support equality comparison. + :param order: Whether to generate functions to support ordering. + :returns: A data-class type, or a wrapper for data-class types. + """ + + def wrap(cls: Type[T]) -> Type[T]: + setattr(cls, "__repr__", _compact_dataclass_repr) + if not dataclasses.is_dataclass(cls): + cls = dataclasses.dataclass( # type: ignore[call-overload] + cls, + init=True, + repr=False, + eq=eq, + order=order, + unsafe_hash=False, + frozen=True, + ) + return cls + + # see if decorator is used as @typeannotation or @typeannotation() + if cls is None: + # called with parentheses + return wrap + else: + # called without parentheses + return wrap(cls) + + +@typeannotation +class Alias: + "Alternative name of a property, typically used in JSON serialization." + + name: str + + +@typeannotation +class Signed: + "Signedness of an integer type." + + is_signed: bool + + +@typeannotation +class Storage: + "Number of bytes the binary representation of an integer type takes, e.g. 4 bytes for an int32." + + bytes: int + + +@typeannotation +class IntegerRange: + "Minimum and maximum value of an integer. The range is inclusive." + + minimum: int + maximum: int + + +@typeannotation +class Precision: + "Precision of a floating-point value." + + significant_digits: int + decimal_digits: int = 0 + + @property + def integer_digits(self) -> int: + return self.significant_digits - self.decimal_digits + + +@typeannotation +class TimePrecision: + """ + Precision of a timestamp or time interval. 
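+
+    For example, decimal_digits=3 would correspond to millisecond resolution.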
+ + :param decimal_digits: Number of fractional digits retained in the sub-seconds field for a timestamp. + """ + + decimal_digits: int = 0 + + +@typeannotation +class Length: + "Exact length of a string." + + value: int + + +@typeannotation +class MinLength: + "Minimum length of a string." + + value: int + + +@typeannotation +class MaxLength: + "Maximum length of a string." + + value: int + + +@typeannotation +class SpecialConversion: + "Indicates that the annotated type is subject to custom conversion rules." + + +int8: TypeAlias = Annotated[int, Signed(True), Storage(1), IntegerRange(-128, 127)] +int16: TypeAlias = Annotated[int, Signed(True), Storage(2), IntegerRange(-32768, 32767)] +int32: TypeAlias = Annotated[ + int, + Signed(True), + Storage(4), + IntegerRange(-2147483648, 2147483647), +] +int64: TypeAlias = Annotated[ + int, + Signed(True), + Storage(8), + IntegerRange(-9223372036854775808, 9223372036854775807), +] + +uint8: TypeAlias = Annotated[int, Signed(False), Storage(1), IntegerRange(0, 255)] +uint16: TypeAlias = Annotated[int, Signed(False), Storage(2), IntegerRange(0, 65535)] +uint32: TypeAlias = Annotated[ + int, + Signed(False), + Storage(4), + IntegerRange(0, 4294967295), +] +uint64: TypeAlias = Annotated[ + int, + Signed(False), + Storage(8), + IntegerRange(0, 18446744073709551615), +] + +float32: TypeAlias = Annotated[float, Storage(4)] +float64: TypeAlias = Annotated[float, Storage(8)] + +# maps globals of type Annotated[T, ...] defined in this module to their string names +_auxiliary_types: Dict[object, str] = {} +module = sys.modules[__name__] +for var in dir(module): + typ = getattr(module, var) + if getattr(typ, "__metadata__", None) is not None: + # type is Annotated[T, ...] + _auxiliary_types[typ] = var + + +def get_auxiliary_format(data_type: object) -> Optional[str]: + "Returns the JSON format string corresponding to an auxiliary type." + + return _auxiliary_types.get(data_type) diff --git a/llama_stack/strong_typing/classdef.py b/llama_stack/strong_typing/classdef.py new file mode 100644 index 000000000..d2d8688e4 --- /dev/null +++ b/llama_stack/strong_typing/classdef.py @@ -0,0 +1,440 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. 
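+
+# JSON Schema -> Python type generation: this module turns schema documents into
+# dataclasses, enums and Annotated aliases (schema_to_type / node_to_typedef) and can
+# flatten nested object schemas (flatten_schema / SchemaFlattener).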
+ +import copy +import dataclasses +import datetime +import decimal +import enum +import ipaddress +import math +import re +import sys +import types +import typing +import uuid +from dataclasses import dataclass +from typing import Any, Dict, List, Literal, Optional, Tuple, Type, TypeVar, Union + +from .auxiliary import ( + Alias, + Annotated, + MaxLength, + Precision, + float32, + float64, + int16, + int32, + int64, +) +from .core import JsonType, Schema +from .docstring import Docstring, DocstringParam +from .inspection import TypeLike +from .serialization import json_to_object, object_to_json + +T = TypeVar("T") + + +@dataclass +class JsonSchemaNode: + title: Optional[str] + description: Optional[str] + + +@dataclass +class JsonSchemaType(JsonSchemaNode): + type: str + format: Optional[str] + + +@dataclass +class JsonSchemaBoolean(JsonSchemaType): + type: Literal["boolean"] + const: Optional[bool] + default: Optional[bool] + examples: Optional[List[bool]] + + +@dataclass +class JsonSchemaInteger(JsonSchemaType): + type: Literal["integer"] + const: Optional[int] + default: Optional[int] + examples: Optional[List[int]] + enum: Optional[List[int]] + minimum: Optional[int] + maximum: Optional[int] + + +@dataclass +class JsonSchemaNumber(JsonSchemaType): + type: Literal["number"] + const: Optional[float] + default: Optional[float] + examples: Optional[List[float]] + minimum: Optional[float] + maximum: Optional[float] + exclusiveMinimum: Optional[float] + exclusiveMaximum: Optional[float] + multipleOf: Optional[float] + + +@dataclass +class JsonSchemaString(JsonSchemaType): + type: Literal["string"] + const: Optional[str] + default: Optional[str] + examples: Optional[List[str]] + enum: Optional[List[str]] + minLength: Optional[int] + maxLength: Optional[int] + + +@dataclass +class JsonSchemaArray(JsonSchemaType): + type: Literal["array"] + items: "JsonSchemaAny" + + +@dataclass +class JsonSchemaObject(JsonSchemaType): + type: Literal["object"] + properties: Optional[Dict[str, "JsonSchemaAny"]] + additionalProperties: Optional[bool] + required: Optional[List[str]] + + +@dataclass +class JsonSchemaRef(JsonSchemaNode): + ref: Annotated[str, Alias("$ref")] + + +@dataclass +class JsonSchemaAllOf(JsonSchemaNode): + allOf: List["JsonSchemaAny"] + + +@dataclass +class JsonSchemaAnyOf(JsonSchemaNode): + anyOf: List["JsonSchemaAny"] + + +@dataclass +class Discriminator: + propertyName: str + mapping: Dict[str, str] + + +@dataclass +class JsonSchemaOneOf(JsonSchemaNode): + oneOf: List["JsonSchemaAny"] + discriminator: Optional[Discriminator] + + +JsonSchemaAny = Union[ + JsonSchemaRef, + JsonSchemaBoolean, + JsonSchemaInteger, + JsonSchemaNumber, + JsonSchemaString, + JsonSchemaArray, + JsonSchemaObject, + JsonSchemaOneOf, +] + + +@dataclass +class JsonSchemaTopLevelObject(JsonSchemaObject): + schema: Annotated[str, Alias("$schema")] + definitions: Optional[Dict[str, JsonSchemaAny]] + + +def integer_range_to_type(min_value: float, max_value: float) -> type: + if min_value >= -(2**15) and max_value < 2**15: + return int16 + elif min_value >= -(2**31) and max_value < 2**31: + return int32 + else: + return int64 + + +def enum_safe_name(name: str) -> str: + name = re.sub(r"\W", "_", name) + is_dunder = name.startswith("__") + is_sunder = name.startswith("_") and name.endswith("_") + if is_dunder or is_sunder: # provide an alternative for dunder and sunder names + name = f"v{name}" + return name + + +def enum_values_to_type( + module: types.ModuleType, + name: str, + values: Dict[str, Any], + title: 
Optional[str] = None, + description: Optional[str] = None, +) -> Type[enum.Enum]: + enum_class: Type[enum.Enum] = enum.Enum(name, values) # type: ignore + + # assign the newly created type to the same module where the defining class is + enum_class.__module__ = module.__name__ + enum_class.__doc__ = str(Docstring(short_description=title, long_description=description)) + setattr(module, name, enum_class) + + return enum.unique(enum_class) + + +def schema_to_type(schema: Schema, *, module: types.ModuleType, class_name: str) -> TypeLike: + """ + Creates a Python type from a JSON schema. + + :param schema: The JSON schema that the types would correspond to. + :param module: The module in which to create the new types. + :param class_name: The name assigned to the top-level class. + """ + + top_node = typing.cast(JsonSchemaTopLevelObject, json_to_object(JsonSchemaTopLevelObject, schema)) + if top_node.definitions is not None: + for type_name, type_node in top_node.definitions.items(): + type_def = node_to_typedef(module, type_name, type_node) + if type_def.default is not dataclasses.MISSING: + raise TypeError("disallowed: `default` for top-level type definitions") + + setattr(type_def.type, "__module__", module.__name__) + setattr(module, type_name, type_def.type) + + return node_to_typedef(module, class_name, top_node).type + + +@dataclass +class TypeDef: + type: TypeLike + default: Any = dataclasses.MISSING + + +def json_to_value(target_type: TypeLike, data: JsonType) -> Any: + if data is not None: + return json_to_object(target_type, data) + else: + return dataclasses.MISSING + + +def node_to_typedef(module: types.ModuleType, context: str, node: JsonSchemaNode) -> TypeDef: + if isinstance(node, JsonSchemaRef): + match_obj = re.match(r"^#/definitions/(\w+)$", node.ref) + if not match_obj: + raise ValueError(f"invalid reference: {node.ref}") + + type_name = match_obj.group(1) + return TypeDef(getattr(module, type_name), dataclasses.MISSING) + + elif isinstance(node, JsonSchemaBoolean): + if node.const is not None: + return TypeDef(Literal[node.const], dataclasses.MISSING) + + default = json_to_value(bool, node.default) + return TypeDef(bool, default) + + elif isinstance(node, JsonSchemaInteger): + if node.const is not None: + return TypeDef(Literal[node.const], dataclasses.MISSING) + + integer_type: TypeLike + if node.format == "int16": + integer_type = int16 + elif node.format == "int32": + integer_type = int32 + elif node.format == "int64": + integer_type = int64 + else: + if node.enum is not None: + integer_type = integer_range_to_type(min(node.enum), max(node.enum)) + elif node.minimum is not None and node.maximum is not None: + integer_type = integer_range_to_type(node.minimum, node.maximum) + else: + integer_type = int + + default = json_to_value(integer_type, node.default) + return TypeDef(integer_type, default) + + elif isinstance(node, JsonSchemaNumber): + if node.const is not None: + return TypeDef(Literal[node.const], dataclasses.MISSING) + + number_type: TypeLike + if node.format == "float32": + number_type = float32 + elif node.format == "float64": + number_type = float64 + else: + if ( + node.exclusiveMinimum is not None + and node.exclusiveMaximum is not None + and node.exclusiveMinimum == -node.exclusiveMaximum + ): + integer_digits = round(math.log10(node.exclusiveMaximum)) + else: + integer_digits = None + + if node.multipleOf is not None: + decimal_digits = -round(math.log10(node.multipleOf)) + else: + decimal_digits = None + + if integer_digits is not None and 
decimal_digits is not None: + number_type = Annotated[ + decimal.Decimal, + Precision(integer_digits + decimal_digits, decimal_digits), + ] + else: + number_type = float + + default = json_to_value(number_type, node.default) + return TypeDef(number_type, default) + + elif isinstance(node, JsonSchemaString): + if node.const is not None: + return TypeDef(Literal[node.const], dataclasses.MISSING) + + string_type: TypeLike + if node.format == "date-time": + string_type = datetime.datetime + elif node.format == "uuid": + string_type = uuid.UUID + elif node.format == "ipv4": + string_type = ipaddress.IPv4Address + elif node.format == "ipv6": + string_type = ipaddress.IPv6Address + + elif node.enum is not None: + string_type = enum_values_to_type( + module, + context, + {enum_safe_name(e): e for e in node.enum}, + title=node.title, + description=node.description, + ) + + elif node.maxLength is not None: + string_type = Annotated[str, MaxLength(node.maxLength)] + else: + string_type = str + + default = json_to_value(string_type, node.default) + return TypeDef(string_type, default) + + elif isinstance(node, JsonSchemaArray): + type_def = node_to_typedef(module, context, node.items) + if type_def.default is not dataclasses.MISSING: + raise TypeError("disallowed: `default` for array element type") + list_type = List[(type_def.type,)] # type: ignore + return TypeDef(list_type, dataclasses.MISSING) + + elif isinstance(node, JsonSchemaObject): + if node.properties is None: + return TypeDef(JsonType, dataclasses.MISSING) + + if node.additionalProperties is None or node.additionalProperties is not False: + raise TypeError("expected: `additionalProperties` equals `false`") + + required = node.required if node.required is not None else [] + + class_name = context + + fields: List[Tuple[str, Any, dataclasses.Field]] = [] + params: Dict[str, DocstringParam] = {} + for prop_name, prop_node in node.properties.items(): + type_def = node_to_typedef(module, f"{class_name}__{prop_name}", prop_node) + if prop_name in required: + prop_type = type_def.type + else: + prop_type = Union[(None, type_def.type)] + fields.append((prop_name, prop_type, dataclasses.field(default=type_def.default))) + prop_desc = prop_node.title or prop_node.description + if prop_desc is not None: + params[prop_name] = DocstringParam(prop_name, prop_desc) + + fields.sort(key=lambda t: t[2].default is not dataclasses.MISSING) + if sys.version_info >= (3, 12): + class_type = dataclasses.make_dataclass(class_name, fields, module=module.__name__) + else: + class_type = dataclasses.make_dataclass(class_name, fields, namespace={"__module__": module.__name__}) + class_type.__doc__ = str( + Docstring( + short_description=node.title, + long_description=node.description, + params=params, + ) + ) + setattr(module, class_name, class_type) + return TypeDef(class_type, dataclasses.MISSING) + + elif isinstance(node, JsonSchemaOneOf): + union_defs = tuple(node_to_typedef(module, context, n) for n in node.oneOf) + if any(d.default is not dataclasses.MISSING for d in union_defs): + raise TypeError("disallowed: `default` for union member type") + union_types = tuple(d.type for d in union_defs) + return TypeDef(Union[union_types], dataclasses.MISSING) + + raise NotImplementedError() + + +@dataclass +class SchemaFlatteningOptions: + qualified_names: bool = False + recursive: bool = False + + +def flatten_schema(schema: Schema, *, options: Optional[SchemaFlatteningOptions] = None) -> Schema: + top_node = typing.cast(JsonSchemaTopLevelObject, 
json_to_object(JsonSchemaTopLevelObject, schema)) + flattener = SchemaFlattener(options) + obj = flattener.flatten(top_node) + return typing.cast(Schema, object_to_json(obj)) + + +class SchemaFlattener: + options: SchemaFlatteningOptions + + def __init__(self, options: Optional[SchemaFlatteningOptions] = None) -> None: + self.options = options or SchemaFlatteningOptions() + + def flatten(self, source_node: JsonSchemaObject) -> JsonSchemaObject: + if source_node.type != "object": + return source_node + + source_props = source_node.properties or {} + target_props: Dict[str, JsonSchemaAny] = {} + + source_reqs = source_node.required or [] + target_reqs: List[str] = [] + + for name, prop in source_props.items(): + if not isinstance(prop, JsonSchemaObject): + target_props[name] = prop + if name in source_reqs: + target_reqs.append(name) + continue + + if self.options.recursive: + obj = self.flatten(prop) + else: + obj = prop + if obj.properties is not None: + if self.options.qualified_names: + target_props.update((f"{name}.{n}", p) for n, p in obj.properties.items()) + else: + target_props.update(obj.properties.items()) + if obj.required is not None: + if self.options.qualified_names: + target_reqs.extend(f"{name}.{n}" for n in obj.required) + else: + target_reqs.extend(obj.required) + + target_node = copy.copy(source_node) + target_node.properties = target_props or None + target_node.additionalProperties = False + target_node.required = target_reqs or None + return target_node diff --git a/llama_stack/strong_typing/core.py b/llama_stack/strong_typing/core.py new file mode 100644 index 000000000..501b6a5db --- /dev/null +++ b/llama_stack/strong_typing/core.py @@ -0,0 +1,46 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. + +:see: https://github.com/hunyadi/strong_typing +""" + +from typing import Dict, List, Union + + +class JsonObject: + "Placeholder type for an unrestricted JSON object." + + +class JsonArray: + "Placeholder type for an unrestricted JSON array." + + +# a JSON type with possible `null` values +JsonType = Union[ + None, + bool, + int, + float, + str, + Dict[str, "JsonType"], + List["JsonType"], +] + +# a JSON type that cannot contain `null` values +StrictJsonType = Union[ + bool, + int, + float, + str, + Dict[str, "StrictJsonType"], + List["StrictJsonType"], +] + +# a meta-type that captures the object type in a JSON schema +Schema = Dict[str, JsonType] diff --git a/llama_stack/strong_typing/deserializer.py b/llama_stack/strong_typing/deserializer.py new file mode 100644 index 000000000..4c4ee9d89 --- /dev/null +++ b/llama_stack/strong_typing/deserializer.py @@ -0,0 +1,876 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. 
+ +:see: https://github.com/hunyadi/strong_typing +""" + +import abc +import base64 +import dataclasses +import datetime +import enum +import inspect +import ipaddress +import sys +import typing +import uuid +from types import ModuleType +from typing import ( + Any, + Callable, + Dict, + Generic, + List, + Literal, + NamedTuple, + Optional, + Set, + Tuple, + Type, + TypeVar, + Union, +) + +from .core import JsonType +from .exception import JsonKeyError, JsonTypeError, JsonValueError +from .inspection import ( + TypeLike, + create_object, + enum_value_types, + evaluate_type, + get_class_properties, + get_class_property, + get_resolved_hints, + is_dataclass_instance, + is_dataclass_type, + is_named_tuple_type, + is_type_annotated, + is_type_literal, + is_type_optional, + unwrap_annotated_type, + unwrap_literal_values, + unwrap_optional_type, +) +from .mapping import python_field_to_json_property +from .name import python_type_to_str + +E = TypeVar("E", bound=enum.Enum) +T = TypeVar("T") +R = TypeVar("R") +K = TypeVar("K") +V = TypeVar("V") + + +class Deserializer(abc.ABC, Generic[T]): + "Parses a JSON value into a Python type." + + def build(self, context: Optional[ModuleType]) -> None: + """ + Creates auxiliary parsers that this parser is depending on. + + :param context: A module context for evaluating types specified as a string. + """ + + @abc.abstractmethod + def parse(self, data: JsonType) -> T: + """ + Parses a JSON value into a Python type. + + :param data: The JSON value to de-serialize. + :returns: The Python object that the JSON value de-serializes to. + """ + + +class NoneDeserializer(Deserializer[None]): + "Parses JSON `null` values into Python `None`." + + def parse(self, data: JsonType) -> None: + if data is not None: + raise JsonTypeError(f"`None` type expects JSON `null` but instead received: {data}") + return None + + +class BoolDeserializer(Deserializer[bool]): + "Parses JSON `boolean` values into Python `bool` type." + + def parse(self, data: JsonType) -> bool: + if not isinstance(data, bool): + raise JsonTypeError(f"`bool` type expects JSON `boolean` data but instead received: {data}") + return bool(data) + + +class IntDeserializer(Deserializer[int]): + "Parses JSON `number` values into Python `int` type." + + def parse(self, data: JsonType) -> int: + if not isinstance(data, int): + raise JsonTypeError(f"`int` type expects integer data as JSON `number` but instead received: {data}") + return int(data) + + +class FloatDeserializer(Deserializer[float]): + "Parses JSON `number` values into Python `float` type." + + def parse(self, data: JsonType) -> float: + if not isinstance(data, float) and not isinstance(data, int): + raise JsonTypeError(f"`int` type expects data as JSON `number` but instead received: {data}") + return float(data) + + +class StringDeserializer(Deserializer[str]): + "Parses JSON `string` values into Python `str` type." + + def parse(self, data: JsonType) -> str: + if not isinstance(data, str): + raise JsonTypeError(f"`str` type expects JSON `string` data but instead received: {data}") + return str(data) + + +class BytesDeserializer(Deserializer[bytes]): + "Parses JSON `string` values of Base64-encoded strings into Python `bytes` type." 
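+    # Illustrative sketch: parse("aGVsbG8=") returns b"hello"; non-string JSON raises
+    # JsonTypeError and invalid Base64 raises an error from base64.b64decode(..., validate=True).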
+ + def parse(self, data: JsonType) -> bytes: + if not isinstance(data, str): + raise JsonTypeError(f"`bytes` type expects JSON `string` data but instead received: {data}") + return base64.b64decode(data, validate=True) + + +class DateTimeDeserializer(Deserializer[datetime.datetime]): + "Parses JSON `string` values representing timestamps in ISO 8601 format to Python `datetime` with time zone." + + def parse(self, data: JsonType) -> datetime.datetime: + if not isinstance(data, str): + raise JsonTypeError(f"`datetime` type expects JSON `string` data but instead received: {data}") + + if data.endswith("Z"): + data = f"{data[:-1]}+00:00" # Python's isoformat() does not support military time zones like "Zulu" for UTC + timestamp = datetime.datetime.fromisoformat(data) + if timestamp.tzinfo is None: + raise JsonValueError(f"timestamp lacks explicit time zone designator: {data}") + return timestamp + + +class DateDeserializer(Deserializer[datetime.date]): + "Parses JSON `string` values representing dates in ISO 8601 format to Python `date` type." + + def parse(self, data: JsonType) -> datetime.date: + if not isinstance(data, str): + raise JsonTypeError(f"`date` type expects JSON `string` data but instead received: {data}") + + return datetime.date.fromisoformat(data) + + +class TimeDeserializer(Deserializer[datetime.time]): + "Parses JSON `string` values representing time instances in ISO 8601 format to Python `time` type with time zone." + + def parse(self, data: JsonType) -> datetime.time: + if not isinstance(data, str): + raise JsonTypeError(f"`time` type expects JSON `string` data but instead received: {data}") + + return datetime.time.fromisoformat(data) + + +class UUIDDeserializer(Deserializer[uuid.UUID]): + "Parses JSON `string` values of UUID strings into Python `uuid.UUID` type." + + def parse(self, data: JsonType) -> uuid.UUID: + if not isinstance(data, str): + raise JsonTypeError(f"`UUID` type expects JSON `string` data but instead received: {data}") + return uuid.UUID(data) + + +class IPv4Deserializer(Deserializer[ipaddress.IPv4Address]): + "Parses JSON `string` values of IPv4 address strings into Python `ipaddress.IPv4Address` type." + + def parse(self, data: JsonType) -> ipaddress.IPv4Address: + if not isinstance(data, str): + raise JsonTypeError(f"`IPv4Address` type expects JSON `string` data but instead received: {data}") + return ipaddress.IPv4Address(data) + + +class IPv6Deserializer(Deserializer[ipaddress.IPv6Address]): + "Parses JSON `string` values of IPv6 address strings into Python `ipaddress.IPv6Address` type." + + def parse(self, data: JsonType) -> ipaddress.IPv6Address: + if not isinstance(data, str): + raise JsonTypeError(f"`IPv6Address` type expects JSON `string` data but instead received: {data}") + return ipaddress.IPv6Address(data) + + +class ListDeserializer(Deserializer[List[T]]): + "Recursively de-serializes a JSON array into a Python `list`." 
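+    # Illustrative sketch: create_deserializer(List[int]).parse([1, 2, 3]) yields [1, 2, 3],
+    # with every element run through the item type's own deserializer.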
+ + item_type: Type[T] + item_parser: Deserializer + + def __init__(self, item_type: Type[T]) -> None: + self.item_type = item_type + + def build(self, context: Optional[ModuleType]) -> None: + self.item_parser = _get_deserializer(self.item_type, context) + + def parse(self, data: JsonType) -> List[T]: + if not isinstance(data, list): + type_name = python_type_to_str(self.item_type) + raise JsonTypeError(f"type `List[{type_name}]` expects JSON `array` data but instead received: {data}") + + return [self.item_parser.parse(item) for item in data] + + +class DictDeserializer(Deserializer[Dict[K, V]]): + "Recursively de-serializes a JSON object into a Python `dict`." + + key_type: Type[K] + value_type: Type[V] + value_parser: Deserializer[V] + + def __init__(self, key_type: Type[K], value_type: Type[V]) -> None: + self.key_type = key_type + self.value_type = value_type + self._check_key_type() + + def build(self, context: Optional[ModuleType]) -> None: + self.value_parser = _get_deserializer(self.value_type, context) + + def _check_key_type(self) -> None: + if self.key_type is str: + return + + if issubclass(self.key_type, enum.Enum): + value_types = enum_value_types(self.key_type) + if len(value_types) != 1: + raise JsonTypeError( + f"type `{self.container_type}` has invalid key type, " + f"enumerations must have a consistent member value type but several types found: {value_types}" + ) + value_type = value_types.pop() + if value_type is not str: + f"`type `{self.container_type}` has invalid enumeration key type, expected `enum.Enum` with string values" + return + + raise JsonTypeError( + f"`type `{self.container_type}` has invalid key type, expected `str` or `enum.Enum` with string values" + ) + + @property + def container_type(self) -> str: + key_type_name = python_type_to_str(self.key_type) + value_type_name = python_type_to_str(self.value_type) + return f"Dict[{key_type_name}, {value_type_name}]" + + def parse(self, data: JsonType) -> Dict[K, V]: + if not isinstance(data, dict): + raise JsonTypeError( + f"`type `{self.container_type}` expects JSON `object` data but instead received: {data}" + ) + + return dict( + (self.key_type(key), self.value_parser.parse(value)) # type: ignore[call-arg] + for key, value in data.items() + ) + + +class SetDeserializer(Deserializer[Set[T]]): + "Recursively de-serializes a JSON list into a Python `set`." + + member_type: Type[T] + member_parser: Deserializer + + def __init__(self, member_type: Type[T]) -> None: + self.member_type = member_type + + def build(self, context: Optional[ModuleType]) -> None: + self.member_parser = _get_deserializer(self.member_type, context) + + def parse(self, data: JsonType) -> Set[T]: + if not isinstance(data, list): + type_name = python_type_to_str(self.member_type) + raise JsonTypeError(f"type `Set[{type_name}]` expects JSON `array` data but instead received: {data}") + + return set(self.member_parser.parse(item) for item in data) + + +class TupleDeserializer(Deserializer[Tuple[Any, ...]]): + "Recursively de-serializes a JSON list into a Python `tuple`." + + item_types: Tuple[Type[Any], ...] + item_parsers: Tuple[Deserializer[Any], ...] 
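+    # Illustrative sketch: for Tuple[int, str], parse([1, "a"]) yields (1, "a"); a length
+    # mismatch raises JsonValueError and non-array input raises JsonTypeError.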
+ + def __init__(self, item_types: Tuple[Type[Any], ...]) -> None: + self.item_types = item_types + + def build(self, context: Optional[ModuleType]) -> None: + self.item_parsers = tuple(_get_deserializer(item_type, context) for item_type in self.item_types) + + @property + def container_type(self) -> str: + type_names = ", ".join(python_type_to_str(item_type) for item_type in self.item_types) + return f"Tuple[{type_names}]" + + def parse(self, data: JsonType) -> Tuple[Any, ...]: + if not isinstance(data, list) or len(data) != len(self.item_parsers): + if not isinstance(data, list): + raise JsonTypeError( + f"type `{self.container_type}` expects JSON `array` data but instead received: {data}" + ) + else: + count = len(self.item_parsers) + raise JsonValueError( + f"type `{self.container_type}` expects a JSON `array` of length {count} but received length {len(data)}" + ) + + return tuple(item_parser.parse(item) for item_parser, item in zip(self.item_parsers, data)) + + +class UnionDeserializer(Deserializer): + "De-serializes a JSON value (of any type) into a Python union type." + + member_types: Tuple[type, ...] + member_parsers: Tuple[Deserializer, ...] + + def __init__(self, member_types: Tuple[type, ...]) -> None: + self.member_types = member_types + + def build(self, context: Optional[ModuleType]) -> None: + self.member_parsers = tuple(_get_deserializer(member_type, context) for member_type in self.member_types) + + def parse(self, data: JsonType) -> Any: + for member_parser in self.member_parsers: + # iterate over potential types of discriminated union + try: + return member_parser.parse(data) + except (JsonKeyError, JsonTypeError): + # indicates a required field is missing from JSON dict -OR- the data cannot be cast to the expected type, + # i.e. we don't have the type that we are looking for + continue + + type_names = ", ".join(python_type_to_str(member_type) for member_type in self.member_types) + raise JsonKeyError(f"type `Union[{type_names}]` could not be instantiated from: {data}") + + +def get_literal_properties(typ: type) -> Set[str]: + "Returns the names of all properties in a class that are of a literal type." + + return set( + property_name for property_name, property_type in get_class_properties(typ) if is_type_literal(property_type) + ) + + +def get_discriminating_properties(types: Tuple[type, ...]) -> Set[str]: + "Returns a set of properties with literal type that are common across all specified classes." + + if not types or not all(isinstance(typ, type) for typ in types): + return set() + + props = get_literal_properties(types[0]) + for typ in types[1:]: + props = props & get_literal_properties(typ) + + return props + + +class TaggedUnionDeserializer(Deserializer): + "De-serializes a JSON value with one or more disambiguating properties into a Python union type." + + member_types: Tuple[type, ...] 
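+    # member_parsers is keyed by (literal property name, literal value), so e.g. a "type"
+    # property equal to "a" selects the member whose Literal annotation carries that value.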
+ disambiguating_properties: Set[str] + member_parsers: Dict[Tuple[str, Any], Deserializer] + + def __init__(self, member_types: Tuple[type, ...]) -> None: + self.member_types = member_types + self.disambiguating_properties = get_discriminating_properties(member_types) + + def build(self, context: Optional[ModuleType]) -> None: + self.member_parsers = {} + for member_type in self.member_types: + for property_name in self.disambiguating_properties: + literal_type = get_class_property(member_type, property_name) + if not literal_type: + continue + + for literal_value in unwrap_literal_values(literal_type): + tpl = (property_name, literal_value) + if tpl in self.member_parsers: + raise JsonTypeError( + f"disambiguating property `{property_name}` in type `{self.union_type}` has a duplicate value: {literal_value}" + ) + + self.member_parsers[tpl] = _get_deserializer(member_type, context) + + @property + def union_type(self) -> str: + type_names = ", ".join(python_type_to_str(member_type) for member_type in self.member_types) + return f"Union[{type_names}]" + + def parse(self, data: JsonType) -> Any: + if not isinstance(data, dict): + raise JsonTypeError( + f"tagged union type `{self.union_type}` expects JSON `object` data but instead received: {data}" + ) + + for property_name in self.disambiguating_properties: + disambiguating_value = data.get(property_name) + if disambiguating_value is None: + continue + + member_parser = self.member_parsers.get((property_name, disambiguating_value)) + if member_parser is None: + raise JsonTypeError( + f"disambiguating property value is invalid for tagged union type `{self.union_type}`: {data}" + ) + + return member_parser.parse(data) + + raise JsonTypeError( + f"disambiguating property value is missing for tagged union type `{self.union_type}`: {data}" + ) + + +class LiteralDeserializer(Deserializer): + "De-serializes a JSON value into a Python literal type." + + values: Tuple[Any, ...] + parser: Deserializer + + def __init__(self, values: Tuple[Any, ...]) -> None: + self.values = values + + def build(self, context: Optional[ModuleType]) -> None: + literal_type_tuple = tuple(type(value) for value in self.values) + literal_type_set = set(literal_type_tuple) + if len(literal_type_set) != 1: + value_names = ", ".join(repr(value) for value in self.values) + raise TypeError( + f"type `Literal[{value_names}]` expects consistent literal value types but got: {literal_type_tuple}" + ) + + literal_type = literal_type_set.pop() + self.parser = _get_deserializer(literal_type, context) + + def parse(self, data: JsonType) -> Any: + value = self.parser.parse(data) + if value not in self.values: + value_names = ", ".join(repr(value) for value in self.values) + raise JsonTypeError(f"type `Literal[{value_names}]` could not be instantiated from: {data}") + return value + + +class EnumDeserializer(Deserializer[E]): + "Returns an enumeration instance based on the enumeration value read from a JSON value." + + enum_type: Type[E] + + def __init__(self, enum_type: Type[E]) -> None: + self.enum_type = enum_type + + def parse(self, data: JsonType) -> E: + return self.enum_type(data) + + +class CustomDeserializer(Deserializer[T]): + "Uses the `from_json` class method in class to de-serialize the object from JSON." 
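+    # Classes exposing a `from_json` classmethod are deserialized by calling it directly,
+    # bypassing the reflection-based field parsing used for plain classes.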
+ + converter: Callable[[JsonType], T] + + def __init__(self, converter: Callable[[JsonType], T]) -> None: + self.converter = converter + + def parse(self, data: JsonType) -> T: + return self.converter(data) + + +class FieldDeserializer(abc.ABC, Generic[T, R]): + """ + Deserializes a JSON property into a Python object field. + + :param property_name: The name of the JSON property to read from a JSON `object`. + :param field_name: The name of the field in a Python class to write data to. + :param parser: A compatible deserializer that can handle the field's type. + """ + + property_name: str + field_name: str + parser: Deserializer[T] + + def __init__(self, property_name: str, field_name: str, parser: Deserializer[T]) -> None: + self.property_name = property_name + self.field_name = field_name + self.parser = parser + + @abc.abstractmethod + def parse_field(self, data: Dict[str, JsonType]) -> R: ... + + +class RequiredFieldDeserializer(FieldDeserializer[T, T]): + "Deserializes a JSON property into a mandatory Python object field." + + def parse_field(self, data: Dict[str, JsonType]) -> T: + if self.property_name not in data: + raise JsonKeyError(f"missing required property `{self.property_name}` from JSON object: {data}") + + return self.parser.parse(data[self.property_name]) + + +class OptionalFieldDeserializer(FieldDeserializer[T, Optional[T]]): + "Deserializes a JSON property into an optional Python object field with a default value of `None`." + + def parse_field(self, data: Dict[str, JsonType]) -> Optional[T]: + value = data.get(self.property_name) + if value is not None: + return self.parser.parse(value) + else: + return None + + +class DefaultFieldDeserializer(FieldDeserializer[T, T]): + "Deserializes a JSON property into a Python object field with an explicit default value." + + default_value: T + + def __init__( + self, + property_name: str, + field_name: str, + parser: Deserializer, + default_value: T, + ) -> None: + super().__init__(property_name, field_name, parser) + self.default_value = default_value + + def parse_field(self, data: Dict[str, JsonType]) -> T: + value = data.get(self.property_name) + if value is not None: + return self.parser.parse(value) + else: + return self.default_value + + +class DefaultFactoryFieldDeserializer(FieldDeserializer[T, T]): + "Deserializes a JSON property into an optional Python object field with an explicit default value factory." + + default_factory: Callable[[], T] + + def __init__( + self, + property_name: str, + field_name: str, + parser: Deserializer[T], + default_factory: Callable[[], T], + ) -> None: + super().__init__(property_name, field_name, parser) + self.default_factory = default_factory + + def parse_field(self, data: Dict[str, JsonType]) -> T: + value = data.get(self.property_name) + if value is not None: + return self.parser.parse(value) + else: + return self.default_factory() + + +class ClassDeserializer(Deserializer[T]): + "Base class for de-serializing class-like types such as data classes, named tuples and regular classes." 
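+    # Each declared field is read through its FieldDeserializer; JSON keys that match no
+    # known property raise JsonKeyError ("unrecognized fields").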
+ + class_type: type + property_parsers: List[FieldDeserializer] + property_fields: Set[str] + + def __init__(self, class_type: Type[T]) -> None: + self.class_type = class_type + + def assign(self, property_parsers: List[FieldDeserializer]) -> None: + self.property_parsers = property_parsers + self.property_fields = set(property_parser.property_name for property_parser in property_parsers) + + def parse(self, data: JsonType) -> T: + if not isinstance(data, dict): + type_name = python_type_to_str(self.class_type) + raise JsonTypeError(f"`type `{type_name}` expects JSON `object` data but instead received: {data}") + + object_data: Dict[str, JsonType] = typing.cast(Dict[str, JsonType], data) + + field_values = {} + for property_parser in self.property_parsers: + field_values[property_parser.field_name] = property_parser.parse_field(object_data) + + if not self.property_fields.issuperset(object_data): + unassigned_names = [name for name in object_data if name not in self.property_fields] + raise JsonKeyError(f"unrecognized fields in JSON object: {unassigned_names}") + + return self.create(**field_values) + + def create(self, **field_values: Any) -> T: + "Instantiates an object with a collection of property values." + + obj: T = create_object(self.class_type) + + # use `setattr` on newly created object instance + for field_name, field_value in field_values.items(): + setattr(obj, field_name, field_value) + return obj + + +class NamedTupleDeserializer(ClassDeserializer[NamedTuple]): + "De-serializes a named tuple from a JSON `object`." + + def build(self, context: Optional[ModuleType]) -> None: + property_parsers: List[FieldDeserializer] = [ + RequiredFieldDeserializer(field_name, field_name, _get_deserializer(field_type, context)) + for field_name, field_type in get_resolved_hints(self.class_type).items() + ] + super().assign(property_parsers) + + def create(self, **field_values: Any) -> NamedTuple: + return self.class_type(**field_values) + + +class DataclassDeserializer(ClassDeserializer[T]): + "De-serializes a data class from a JSON `object`." 
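+    # Fields with a dataclass default or default_factory fall back to that value, remaining
+    # Optional[...] fields default to None, and all other fields are required.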
+ + def __init__(self, class_type: Type[T]) -> None: + if not dataclasses.is_dataclass(class_type): + raise TypeError("expected: data-class type") + super().__init__(class_type) # type: ignore[arg-type] + + def build(self, context: Optional[ModuleType]) -> None: + property_parsers: List[FieldDeserializer] = [] + resolved_hints = get_resolved_hints(self.class_type) + for field in dataclasses.fields(self.class_type): + field_type = resolved_hints[field.name] + property_name = python_field_to_json_property(field.name, field_type) + + is_optional = is_type_optional(field_type) + has_default = field.default is not dataclasses.MISSING + has_default_factory = field.default_factory is not dataclasses.MISSING + + if is_optional: + required_type: Type[T] = unwrap_optional_type(field_type) + else: + required_type = field_type + + parser = _get_deserializer(required_type, context) + + if has_default: + field_parser: FieldDeserializer = DefaultFieldDeserializer( + property_name, field.name, parser, field.default + ) + elif has_default_factory: + default_factory = typing.cast(Callable[[], Any], field.default_factory) + field_parser = DefaultFactoryFieldDeserializer(property_name, field.name, parser, default_factory) + elif is_optional: + field_parser = OptionalFieldDeserializer(property_name, field.name, parser) + else: + field_parser = RequiredFieldDeserializer(property_name, field.name, parser) + + property_parsers.append(field_parser) + + super().assign(property_parsers) + + +class FrozenDataclassDeserializer(DataclassDeserializer[T]): + "De-serializes a frozen data class from a JSON `object`." + + def create(self, **field_values: Any) -> T: + "Instantiates an object with a collection of property values." + + # create object instance without calling `__init__` + obj: T = create_object(self.class_type) + + # can't use `setattr` on frozen dataclasses, pass member variable values to `__init__` + obj.__init__(**field_values) # type: ignore + return obj + + +class TypedClassDeserializer(ClassDeserializer[T]): + "De-serializes a class with type annotations from a JSON `object` by iterating over class properties." + + def build(self, context: Optional[ModuleType]) -> None: + property_parsers: List[FieldDeserializer] = [] + for field_name, field_type in get_resolved_hints(self.class_type).items(): + property_name = python_field_to_json_property(field_name, field_type) + + is_optional = is_type_optional(field_type) + + if is_optional: + required_type: Type[T] = unwrap_optional_type(field_type) + else: + required_type = field_type + + parser = _get_deserializer(required_type, context) + + if is_optional: + field_parser: FieldDeserializer = OptionalFieldDeserializer(property_name, field_name, parser) + else: + field_parser = RequiredFieldDeserializer(property_name, field_name, parser) + + property_parsers.append(field_parser) + + super().assign(property_parsers) + + +def create_deserializer(typ: TypeLike, context: Optional[ModuleType] = None) -> Deserializer: + """ + Creates a de-serializer engine to produce a Python object from an object obtained from a JSON string. + + When de-serializing a JSON object into a Python object, the following transformations are applied: + + * Fundamental types are parsed as `bool`, `int`, `float` or `str`. + * Date and time types are parsed from the ISO 8601 format with time zone into the corresponding Python type + `datetime`, `date` or `time`. + * Byte arrays are read from a string with Base64 encoding into a `bytes` instance. 
+ * UUIDs are extracted from a UUID string compliant with RFC 4122 into a `uuid.UUID` instance. + * Enumerations are instantiated with a lookup on enumeration value. + * Containers (e.g. `list`, `dict`, `set`, `tuple`) are parsed recursively. + * Complex objects with properties (including data class types) are populated from dictionaries of key-value pairs + using reflection (enumerating type annotations). + + :raises TypeError: A de-serializer engine cannot be constructed for the input type. + """ + + if context is None: + if isinstance(typ, type): + context = sys.modules[typ.__module__] + + return _get_deserializer(typ, context) + + +_CACHE: Dict[Tuple[str, str], Deserializer] = {} + + +def _get_deserializer(typ: TypeLike, context: Optional[ModuleType]) -> Deserializer: + "Creates or re-uses a de-serializer engine to parse an object obtained from a JSON string." + + cache_key = None + + if isinstance(typ, (str, typing.ForwardRef)): + if context is None: + raise TypeError(f"missing context for evaluating type: {typ}") + + if isinstance(typ, str): + if hasattr(context, typ): + cache_key = (context.__name__, typ) + elif isinstance(typ, typing.ForwardRef): + if hasattr(context, typ.__forward_arg__): + cache_key = (context.__name__, typ.__forward_arg__) + + typ = evaluate_type(typ, context) + + typ = unwrap_annotated_type(typ) if is_type_annotated(typ) else typ + + if isinstance(typ, type) and typing.get_origin(typ) is None: + cache_key = (typ.__module__, typ.__name__) + + if cache_key is not None: + deserializer = _CACHE.get(cache_key) + if deserializer is None: + deserializer = _create_deserializer(typ) + + # store de-serializer immediately in cache to avoid stack overflow for recursive types + _CACHE[cache_key] = deserializer + + if isinstance(typ, type): + # use type's own module as context for evaluating member types + context = sys.modules[typ.__module__] + + # create any de-serializers this de-serializer is depending on + deserializer.build(context) + else: + # special forms are not always hashable, create a new de-serializer every time + deserializer = _create_deserializer(typ) + deserializer.build(context) + + return deserializer + + +def _create_deserializer(typ: TypeLike) -> Deserializer: + "Creates a de-serializer engine to parse an object obtained from a JSON string." + + # check for well-known types + if typ is type(None): + return NoneDeserializer() + elif typ is bool: + return BoolDeserializer() + elif typ is int: + return IntDeserializer() + elif typ is float: + return FloatDeserializer() + elif typ is str: + return StringDeserializer() + elif typ is bytes: + return BytesDeserializer() + elif typ is datetime.datetime: + return DateTimeDeserializer() + elif typ is datetime.date: + return DateDeserializer() + elif typ is datetime.time: + return TimeDeserializer() + elif typ is uuid.UUID: + return UUIDDeserializer() + elif typ is ipaddress.IPv4Address: + return IPv4Deserializer() + elif typ is ipaddress.IPv6Address: + return IPv6Deserializer() + + # dynamically-typed collection types + if typ is list: + raise TypeError("explicit item type required: use `List[T]` instead of `list`") + if typ is dict: + raise TypeError("explicit key and value types required: use `Dict[K, V]` instead of `dict`") + if typ is set: + raise TypeError("explicit member type required: use `Set[T]` instead of `set`") + if typ is tuple: + raise TypeError("explicit item type list required: use `Tuple[T, ...]` instead of `tuple`") + + # generic types (e.g. list, dict, set, etc.) 
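+    # typing.get_origin()/get_args() split parametrized generics, e.g. Dict[str, int] into
+    # dict and (str, int); each branch below builds the matching recursive deserializer.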
+ origin_type = typing.get_origin(typ) + if origin_type is list: + (list_item_type,) = typing.get_args(typ) # unpack single tuple element + return ListDeserializer(list_item_type) + elif origin_type is dict: + key_type, value_type = typing.get_args(typ) + return DictDeserializer(key_type, value_type) + elif origin_type is set: + (set_member_type,) = typing.get_args(typ) # unpack single tuple element + return SetDeserializer(set_member_type) + elif origin_type is tuple: + return TupleDeserializer(typing.get_args(typ)) + elif origin_type is Union: + union_args = typing.get_args(typ) + if get_discriminating_properties(union_args): + return TaggedUnionDeserializer(union_args) + else: + return UnionDeserializer(union_args) + elif origin_type is Literal: + return LiteralDeserializer(typing.get_args(typ)) + + if not inspect.isclass(typ): + if is_dataclass_instance(typ): + raise TypeError(f"dataclass type expected but got instance: {typ}") + else: + raise TypeError(f"unable to de-serialize unrecognized type: {typ}") + + if issubclass(typ, enum.Enum): + return EnumDeserializer(typ) + + if is_named_tuple_type(typ): + return NamedTupleDeserializer(typ) + + # check if object has custom serialization method + convert_func = getattr(typ, "from_json", None) + if callable(convert_func): + return CustomDeserializer(convert_func) + + if is_dataclass_type(typ): + dataclass_params = getattr(typ, "__dataclass_params__", None) + if dataclass_params is not None and dataclass_params.frozen: + return FrozenDataclassDeserializer(typ) + else: + return DataclassDeserializer(typ) + + return TypedClassDeserializer(typ) diff --git a/llama_stack/strong_typing/docstring.py b/llama_stack/strong_typing/docstring.py new file mode 100644 index 000000000..9169aadfe --- /dev/null +++ b/llama_stack/strong_typing/docstring.py @@ -0,0 +1,399 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. + +:see: https://github.com/hunyadi/strong_typing +""" + +import builtins +import dataclasses +import inspect +import re +import sys +import types +import typing +from dataclasses import dataclass +from io import StringIO +from typing import Any, Callable, Dict, Optional, Protocol, Type, TypeVar + +if sys.version_info >= (3, 10): + from typing import TypeGuard +else: + from typing_extensions import TypeGuard + +from .inspection import ( + DataclassInstance, + get_class_properties, + get_signature, + is_dataclass_type, + is_type_enum, +) + +T = TypeVar("T") + + +@dataclass +class DocstringParam: + """ + A parameter declaration in a parameter block. + + :param name: The name of the parameter. + :param description: The description text for the parameter. + """ + + name: str + description: str + param_type: type = inspect.Signature.empty + + def __str__(self) -> str: + return f":param {self.name}: {self.description}" + + +@dataclass +class DocstringReturns: + """ + A `returns` declaration extracted from a docstring. + + :param description: The description text for the return value. + """ + + description: str + return_type: type = inspect.Signature.empty + + def __str__(self) -> str: + return f":returns: {self.description}" + + +@dataclass +class DocstringRaises: + """ + A `raises` declaration extracted from a docstring. + + :param typename: The type name of the exception raised. 
+ :param description: The description associated with the exception raised. + """ + + typename: str + description: str + raise_type: type = inspect.Signature.empty + + def __str__(self) -> str: + return f":raises {self.typename}: {self.description}" + + +@dataclass +class Docstring: + """ + Represents the documentation string (a.k.a. docstring) for a type such as a (data) class or function. + + A docstring is broken down into the following components: + * A short description, which is the first block of text in the documentation string, and ends with a double + newline or a parameter block. + * A long description, which is the optional block of text following the short description, and ends with + a parameter block. + * A parameter block of named parameter and description string pairs in ReST-style. + * A `returns` declaration, which adds explanation to the return value. + * A `raises` declaration, which adds explanation to the exception type raised by the function on error. + + When the docstring is attached to a data class, it is understood as the documentation string of the class + `__init__` method. + + :param short_description: The short description text parsed from a docstring. + :param long_description: The long description text parsed from a docstring. + :param params: The parameter block extracted from a docstring. + :param returns: The returns declaration extracted from a docstring. + """ + + short_description: Optional[str] = None + long_description: Optional[str] = None + params: Dict[str, DocstringParam] = dataclasses.field(default_factory=dict) + returns: Optional[DocstringReturns] = None + raises: Dict[str, DocstringRaises] = dataclasses.field(default_factory=dict) + + @property + def full_description(self) -> Optional[str]: + if self.short_description and self.long_description: + return f"{self.short_description}\n\n{self.long_description}" + elif self.short_description: + return self.short_description + else: + return None + + def __str__(self) -> str: + output = StringIO() + + has_description = self.short_description or self.long_description + has_blocks = self.params or self.returns or self.raises + + if has_description: + if self.short_description and self.long_description: + output.write(self.short_description) + output.write("\n\n") + output.write(self.long_description) + elif self.short_description: + output.write(self.short_description) + + if has_blocks: + if has_description: + output.write("\n") + + for param in self.params.values(): + output.write("\n") + output.write(str(param)) + if self.returns: + output.write("\n") + output.write(str(self.returns)) + for raises in self.raises.values(): + output.write("\n") + output.write(str(raises)) + + s = output.getvalue() + output.close() + return s + + +def is_exception(member: object) -> TypeGuard[Type[BaseException]]: + return isinstance(member, type) and issubclass(member, BaseException) + + +def get_exceptions(module: types.ModuleType) -> Dict[str, Type[BaseException]]: + "Returns all exception classes declared in a module." + + return {name: class_type for name, class_type in inspect.getmembers(module, is_exception)} + + +class SupportsDoc(Protocol): + __doc__: Optional[str] + + +def parse_type(typ: SupportsDoc) -> Docstring: + """ + Parse the docstring of a type into its components. + + :param typ: The type whose documentation string to parse. + :returns: Components of the documentation string. 
+ """ + + doc = get_docstring(typ) + if doc is None: + return Docstring() + + docstring = parse_text(doc) + check_docstring(typ, docstring) + + # assign parameter and return types + if is_dataclass_type(typ): + properties = dict(get_class_properties(typing.cast(type, typ))) + + for name, param in docstring.params.items(): + param.param_type = properties[name] + + elif inspect.isfunction(typ): + signature = get_signature(typ) + for name, param in docstring.params.items(): + param.param_type = signature.parameters[name].annotation + if docstring.returns: + docstring.returns.return_type = signature.return_annotation + + # assign exception types + defining_module = inspect.getmodule(typ) + if defining_module: + context: Dict[str, type] = {} + context.update(get_exceptions(builtins)) + context.update(get_exceptions(defining_module)) + for exc_name, exc in docstring.raises.items(): + raise_type = context.get(exc_name) + if raise_type is None: + type_name = getattr(typ, "__qualname__", None) or getattr(typ, "__name__", None) or None + raise TypeError( + f"doc-string exception type `{exc_name}` is not an exception defined in the context of `{type_name}`" + ) + + exc.raise_type = raise_type + + return docstring + + +def parse_text(text: str) -> Docstring: + """ + Parse a ReST-style docstring into its components. + + :param text: The documentation string to parse, typically acquired as `type.__doc__`. + :returns: Components of the documentation string. + """ + + if not text: + return Docstring() + + # find block that starts object metadata block (e.g. `:param p:` or `:returns:`) + text = inspect.cleandoc(text) + match = re.search("^:", text, flags=re.MULTILINE) + if match: + desc_chunk = text[: match.start()] + meta_chunk = text[match.start() :] # noqa: E203 + else: + desc_chunk = text + meta_chunk = "" + + # split description text into short and long description + parts = desc_chunk.split("\n\n", 1) + + # ensure short description has no newlines + short_description = parts[0].strip().replace("\n", " ") or None + + # ensure long description preserves its structure (e.g. preformatted text) + if len(parts) > 1: + long_description = parts[1].strip() or None + else: + long_description = None + + params: Dict[str, DocstringParam] = {} + raises: Dict[str, DocstringRaises] = {} + returns = None + for match in re.finditer(r"(^:.*?)(?=^:|\Z)", meta_chunk, flags=re.DOTALL | re.MULTILINE): + chunk = match.group(0) + if not chunk: + continue + + args_chunk, desc_chunk = chunk.lstrip(":").split(":", 1) + args = args_chunk.split() + desc = re.sub(r"\s+", " ", desc_chunk.strip()) + + if len(args) > 0: + kw = args[0] + if len(args) == 2: + if kw == "param": + params[args[1]] = DocstringParam( + name=args[1], + description=desc, + ) + elif kw == "raise" or kw == "raises": + raises[args[1]] = DocstringRaises( + typename=args[1], + description=desc, + ) + + elif len(args) == 1: + if kw == "return" or kw == "returns": + returns = DocstringReturns(description=desc) + + return Docstring( + long_description=long_description, + short_description=short_description, + params=params, + returns=returns, + raises=raises, + ) + + +def has_default_docstring(typ: SupportsDoc) -> bool: + "Check if class has the auto-generated string assigned by @dataclass." 
+ + if not isinstance(typ, type): + return False + + if is_dataclass_type(typ): + return typ.__doc__ is not None and re.match(f"^{re.escape(typ.__name__)}[(].*[)]$", typ.__doc__) is not None + + if is_type_enum(typ): + return typ.__doc__ is not None and typ.__doc__ == "An enumeration." + + return False + + +def has_docstring(typ: SupportsDoc) -> bool: + "Check if class has a documentation string other than the auto-generated string assigned by @dataclass." + + if has_default_docstring(typ): + return False + + return bool(typ.__doc__) + + +def get_docstring(typ: SupportsDoc) -> Optional[str]: + if typ.__doc__ is None: + return None + + if has_default_docstring(typ): + return None + + return typ.__doc__ + + +def check_docstring(typ: SupportsDoc, docstring: Docstring, strict: bool = False) -> None: + """ + Verifies the doc-string of a type. + + :raises TypeError: Raised on a mismatch between doc-string parameters, and function or type signature. + """ + + if is_dataclass_type(typ): + check_dataclass_docstring(typ, docstring, strict) + elif inspect.isfunction(typ): + check_function_docstring(typ, docstring, strict) + + +def check_dataclass_docstring(typ: Type[DataclassInstance], docstring: Docstring, strict: bool = False) -> None: + """ + Verifies the doc-string of a data-class type. + + :param strict: Whether to check if all data-class members have doc-strings. + :raises TypeError: Raised on a mismatch between doc-string parameters and data-class members. + """ + + if not is_dataclass_type(typ): + raise TypeError("not a data-class type") + + properties = dict(get_class_properties(typ)) + class_name = typ.__name__ + + for name in docstring.params: + if name not in properties: + raise TypeError(f"doc-string parameter `{name}` is not a member of the data-class `{class_name}`") + + if not strict: + return + + for name in properties: + if name not in docstring.params: + raise TypeError(f"member `{name}` in data-class `{class_name}` is missing its doc-string") + + +def check_function_docstring(fn: Callable[..., Any], docstring: Docstring, strict: bool = False) -> None: + """ + Verifies the doc-string of a function or member function. + + :param strict: Whether to check if all function parameters and the return type have doc-strings. + :raises TypeError: Raised on a mismatch between doc-string parameters and function signature. 
+ """ + + signature = get_signature(fn) + func_name = fn.__qualname__ + + for name in docstring.params: + if name not in signature.parameters: + raise TypeError(f"doc-string parameter `{name}` is absent from signature of function `{func_name}`") + + if docstring.returns is not None and signature.return_annotation is inspect.Signature.empty: + raise TypeError(f"doc-string has returns description in function `{func_name}` with no return type annotation") + + if not strict: + return + + for name, param in signature.parameters.items(): + # ignore `self` in member function signatures + if name == "self" and ( + param.kind is inspect.Parameter.POSITIONAL_ONLY or param.kind is inspect.Parameter.POSITIONAL_OR_KEYWORD + ): + continue + + if name not in docstring.params: + raise TypeError(f"function parameter `{name}` in `{func_name}` is missing its doc-string") + + if signature.return_annotation is not inspect.Signature.empty and docstring.returns is None: + raise TypeError(f"function `{func_name}` has no returns description in its doc-string") diff --git a/llama_stack/strong_typing/exception.py b/llama_stack/strong_typing/exception.py new file mode 100644 index 000000000..af037cc3c --- /dev/null +++ b/llama_stack/strong_typing/exception.py @@ -0,0 +1,23 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. + +:see: https://github.com/hunyadi/strong_typing +""" + + +class JsonKeyError(Exception): + "Raised when deserialization for a class or union type has failed because a matching member was not found." + + +class JsonValueError(Exception): + "Raised when (de)serialization of data has failed due to invalid value." + + +class JsonTypeError(Exception): + "Raised when deserialization of data has failed due to a type mismatch." diff --git a/llama_stack/strong_typing/inspection.py b/llama_stack/strong_typing/inspection.py new file mode 100644 index 000000000..69bc15597 --- /dev/null +++ b/llama_stack/strong_typing/inspection.py @@ -0,0 +1,1034 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. + +:see: https://github.com/hunyadi/strong_typing +""" + +import dataclasses +import datetime +import enum +import importlib +import importlib.machinery +import importlib.util +import inspect +import re +import sys +import types +import typing +import uuid +from typing import ( + Any, + Callable, + Dict, + Iterable, + List, + Literal, + NamedTuple, + Optional, + Protocol, + Set, + Tuple, + Type, + TypeVar, + Union, + runtime_checkable, +) + +if sys.version_info >= (3, 9): + from typing import Annotated +else: + from typing_extensions import Annotated + +if sys.version_info >= (3, 10): + from typing import TypeGuard +else: + from typing_extensions import TypeGuard + +S = TypeVar("S") +T = TypeVar("T") +K = TypeVar("K") +V = TypeVar("V") + + +def _is_type_like(data_type: object) -> bool: + """ + Checks if the object is a type or type-like object (e.g. generic type). + + :param data_type: The object to validate. + :returns: True if the object is a type or type-like object. 
+ """ + + if isinstance(data_type, type): + # a standard type + return True + elif typing.get_origin(data_type) is not None: + # a generic type such as `list`, `dict` or `set` + return True + elif hasattr(data_type, "__forward_arg__"): + # an instance of `ForwardRef` + return True + elif data_type is Any: + # the special form `Any` + return True + else: + return False + + +if sys.version_info >= (3, 9): + TypeLike = Union[type, types.GenericAlias, typing.ForwardRef, Any] + + def is_type_like( + data_type: object, + ) -> TypeGuard[TypeLike]: + """ + Checks if the object is a type or type-like object (e.g. generic type). + + :param data_type: The object to validate. + :returns: True if the object is a type or type-like object. + """ + + return _is_type_like(data_type) + +else: + TypeLike = object + + def is_type_like( + data_type: object, + ) -> bool: + return _is_type_like(data_type) + + +def evaluate_member_type(typ: Any, cls: type) -> Any: + """ + Evaluates a forward reference type in a dataclass member. + + :param typ: The dataclass member type to convert. + :param cls: The dataclass in which the member is defined. + :returns: The evaluated type. + """ + + return evaluate_type(typ, sys.modules[cls.__module__]) + + +def evaluate_type(typ: Any, module: types.ModuleType) -> Any: + """ + Evaluates a forward reference type. + + :param typ: The type to convert, typically a dataclass member type. + :param module: The context for the type, i.e. the module in which the member is defined. + :returns: The evaluated type. + """ + + if isinstance(typ, str): + # evaluate data-class field whose type annotation is a string + return eval(typ, module.__dict__, locals()) + if isinstance(typ, typing.ForwardRef): + if sys.version_info >= (3, 9): + return typ._evaluate(module.__dict__, locals(), recursive_guard=frozenset()) + else: + return typ._evaluate(module.__dict__, locals()) + else: + return typ + + +@runtime_checkable +class DataclassInstance(Protocol): + __dataclass_fields__: typing.ClassVar[Dict[str, dataclasses.Field]] + + +def is_dataclass_type(typ: Any) -> TypeGuard[Type[DataclassInstance]]: + "True if the argument corresponds to a data class type (but not an instance)." + + typ = unwrap_annotated_type(typ) + return isinstance(typ, type) and dataclasses.is_dataclass(typ) + + +def is_dataclass_instance(obj: Any) -> TypeGuard[DataclassInstance]: + "True if the argument corresponds to a data class instance (but not a type)." + + return not isinstance(obj, type) and dataclasses.is_dataclass(obj) + + +@dataclasses.dataclass +class DataclassField: + name: str + type: Any + default: Any + + def __init__(self, name: str, type: Any, default: Any = dataclasses.MISSING) -> None: + self.name = name + self.type = type + self.default = default + + +def dataclass_fields(cls: Type[DataclassInstance]) -> Iterable[DataclassField]: + "Generates the fields of a data-class resolving forward references." + + for field in dataclasses.fields(cls): + yield DataclassField(field.name, evaluate_member_type(field.type, cls), field.default) + + +def dataclass_field_by_name(cls: Type[DataclassInstance], name: str) -> DataclassField: + "Looks up a field in a data-class by its field name." 
+ + for field in dataclasses.fields(cls): + if field.name == name: + return DataclassField(field.name, evaluate_member_type(field.type, cls)) + + raise LookupError(f"field `{name}` missing from class `{cls.__name__}`") + + +def is_named_tuple_instance(obj: Any) -> TypeGuard[NamedTuple]: + "True if the argument corresponds to a named tuple instance." + + return is_named_tuple_type(type(obj)) + + +def is_named_tuple_type(typ: Any) -> TypeGuard[Type[NamedTuple]]: + """ + True if the argument corresponds to a named tuple type. + + Calling the function `collections.namedtuple` gives a new type that is a subclass of `tuple` (and no other classes) + with a member named `_fields` that is a tuple whose items are all strings. + """ + + if not isinstance(typ, type): + return False + + typ = unwrap_annotated_type(typ) + + b = getattr(typ, "__bases__", None) + if b is None: + return False + + if len(b) != 1 or b[0] != tuple: + return False + + f = getattr(typ, "_fields", None) + if not isinstance(f, tuple): + return False + + return all(isinstance(n, str) for n in f) + + +if sys.version_info >= (3, 11): + + def is_type_enum(typ: object) -> TypeGuard[Type[enum.Enum]]: + "True if the specified type is an enumeration type." + + typ = unwrap_annotated_type(typ) + return isinstance(typ, enum.EnumType) + +else: + + def is_type_enum(typ: object) -> TypeGuard[Type[enum.Enum]]: + "True if the specified type is an enumeration type." + + typ = unwrap_annotated_type(typ) + + # use an explicit isinstance(..., type) check to filter out special forms like generics + return isinstance(typ, type) and issubclass(typ, enum.Enum) + + +def enum_value_types(enum_type: Type[enum.Enum]) -> List[type]: + """ + Returns all unique value types of the `enum.Enum` type in definition order. + """ + + # filter unique enumeration value types by keeping definition order + return list(dict.fromkeys(type(e.value) for e in enum_type)) + + +def extend_enum( + source: Type[enum.Enum], +) -> Callable[[Type[enum.Enum]], Type[enum.Enum]]: + """ + Creates a new enumeration type extending the set of values in an existing type. + + :param source: The existing enumeration type to be extended with new values. + :returns: A new enumeration type with the extended set of values. + """ + + def wrap(extend: Type[enum.Enum]) -> Type[enum.Enum]: + # create new enumeration type combining the values from both types + values: Dict[str, Any] = {} + values.update((e.name, e.value) for e in source) + values.update((e.name, e.value) for e in extend) + enum_class: Type[enum.Enum] = enum.Enum(extend.__name__, values) # type: ignore + + # assign the newly created type to the same module where the extending class is defined + setattr(enum_class, "__module__", extend.__module__) + setattr(enum_class, "__doc__", extend.__doc__) + setattr(sys.modules[extend.__module__], extend.__name__, enum_class) + + return enum.unique(enum_class) + + return wrap + + +if sys.version_info >= (3, 10): + + def _is_union_like(typ: object) -> bool: + "True if type is a union such as `Union[T1, T2, ...]` or a union type `T1 | T2`." + + return typing.get_origin(typ) is Union or isinstance(typ, types.UnionType) + +else: + + def _is_union_like(typ: object) -> bool: + "True if type is a union such as `Union[T1, T2, ...]` or a union type `T1 | T2`." + + return typing.get_origin(typ) is Union + + +def is_type_optional(typ: object, strict: bool = False) -> TypeGuard[Type[Optional[Any]]]: + """ + True if the type annotation corresponds to an optional type (e.g. 
`Optional[T]` or `Union[T1,T2,None]`). + + `Optional[T]` is represented as `Union[T, None]` is classic style, and is equivalent to `T | None` in new style. + + :param strict: True if only `Optional[T]` qualifies as an optional type but `Union[T1, T2, None]` does not. + """ + + typ = unwrap_annotated_type(typ) + + if _is_union_like(typ): + args = typing.get_args(typ) + if strict and len(args) != 2: + return False + + return type(None) in args + + return False + + +def unwrap_optional_type(typ: Type[Optional[T]]) -> Type[T]: + """ + Extracts the inner type of an optional type. + + :param typ: The optional type `Optional[T]`. + :returns: The inner type `T`. + """ + + return rewrap_annotated_type(_unwrap_optional_type, typ) + + +def _unwrap_optional_type(typ: Type[Optional[T]]) -> Type[T]: + "Extracts the type qualified as optional (e.g. returns `T` for `Optional[T]`)." + + # Optional[T] is represented internally as Union[T, None] + if not _is_union_like(typ): + raise TypeError("optional type must have un-subscripted type of Union") + + # will automatically unwrap Union[T] into T + return Union[ + tuple(filter(lambda item: item is not type(None), typing.get_args(typ))) # type: ignore + ] + + +def is_type_union(typ: object) -> bool: + "True if the type annotation corresponds to a union type (e.g. `Union[T1,T2,T3]`)." + + typ = unwrap_annotated_type(typ) + if _is_union_like(typ): + args = typing.get_args(typ) + return len(args) > 2 or type(None) not in args + + return False + + +def unwrap_union_types(typ: object) -> Tuple[object, ...]: + """ + Extracts the inner types of a union type. + + :param typ: The union type `Union[T1, T2, ...]`. + :returns: The inner types `T1`, `T2`, etc. + """ + + typ = unwrap_annotated_type(typ) + return _unwrap_union_types(typ) + + +def _unwrap_union_types(typ: object) -> Tuple[object, ...]: + "Extracts the types in a union (e.g. returns a tuple of types `T1` and `T2` for `Union[T1, T2]`)." + + if not _is_union_like(typ): + raise TypeError("union type must have un-subscripted type of Union") + + return typing.get_args(typ) + + +def is_type_literal(typ: object) -> bool: + "True if the specified type is a literal of one or more constant values, e.g. `Literal['string']` or `Literal[42]`." + + typ = unwrap_annotated_type(typ) + return typing.get_origin(typ) is Literal + + +def unwrap_literal_value(typ: object) -> Any: + """ + Extracts the single constant value captured by a literal type. + + :param typ: The literal type `Literal[value]`. + :returns: The values captured by the literal type. + """ + + args = unwrap_literal_values(typ) + if len(args) != 1: + raise TypeError("too many values in literal type") + + return args[0] + + +def unwrap_literal_values(typ: object) -> Tuple[Any, ...]: + """ + Extracts the constant values captured by a literal type. + + :param typ: The literal type `Literal[value, ...]`. + :returns: A tuple of values captured by the literal type. + """ + + typ = unwrap_annotated_type(typ) + return typing.get_args(typ) + + +def unwrap_literal_types(typ: object) -> Tuple[type, ...]: + """ + Extracts the types of the constant values captured by a literal type. + + :param typ: The literal type `Literal[value, ...]`. + :returns: A tuple of item types `T` such that `type(value) == T`. + """ + + return tuple(type(t) for t in unwrap_literal_values(typ)) + + +def is_generic_list(typ: object) -> TypeGuard[Type[list]]: + "True if the specified type is a generic list, i.e. `List[T]`." 
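+    # annotations are stripped first so that `Annotated[List[T], ...]` is also recognized as a generic list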
+ + typ = unwrap_annotated_type(typ) + return typing.get_origin(typ) is list + + +def unwrap_generic_list(typ: Type[List[T]]) -> Type[T]: + """ + Extracts the item type of a list type. + + :param typ: The list type `List[T]`. + :returns: The item type `T`. + """ + + return rewrap_annotated_type(_unwrap_generic_list, typ) + + +def _unwrap_generic_list(typ: Type[List[T]]) -> Type[T]: + "Extracts the item type of a list type (e.g. returns `T` for `List[T]`)." + + (list_type,) = typing.get_args(typ) # unpack single tuple element + return list_type + + +def is_generic_set(typ: object) -> TypeGuard[Type[set]]: + "True if the specified type is a generic set, i.e. `Set[T]`." + + typ = unwrap_annotated_type(typ) + return typing.get_origin(typ) is set + + +def unwrap_generic_set(typ: Type[Set[T]]) -> Type[T]: + """ + Extracts the item type of a set type. + + :param typ: The set type `Set[T]`. + :returns: The item type `T`. + """ + + return rewrap_annotated_type(_unwrap_generic_set, typ) + + +def _unwrap_generic_set(typ: Type[Set[T]]) -> Type[T]: + "Extracts the item type of a set type (e.g. returns `T` for `Set[T]`)." + + (set_type,) = typing.get_args(typ) # unpack single tuple element + return set_type + + +def is_generic_dict(typ: object) -> TypeGuard[Type[dict]]: + "True if the specified type is a generic dictionary, i.e. `Dict[KeyType, ValueType]`." + + typ = unwrap_annotated_type(typ) + return typing.get_origin(typ) is dict + + +def unwrap_generic_dict(typ: Type[Dict[K, V]]) -> Tuple[Type[K], Type[V]]: + """ + Extracts the key and value types of a dictionary type as a tuple. + + :param typ: The dictionary type `Dict[K, V]`. + :returns: The key and value types `K` and `V`. + """ + + return _unwrap_generic_dict(unwrap_annotated_type(typ)) + + +def _unwrap_generic_dict(typ: Type[Dict[K, V]]) -> Tuple[Type[K], Type[V]]: + "Extracts the key and value types of a dict type (e.g. returns (`K`, `V`) for `Dict[K, V]`)." + + key_type, value_type = typing.get_args(typ) + return key_type, value_type + + +def is_type_annotated(typ: TypeLike) -> bool: + "True if the type annotation corresponds to an annotated type (i.e. `Annotated[T, ...]`)." + + return getattr(typ, "__metadata__", None) is not None + + +def get_annotation(data_type: TypeLike, annotation_type: Type[T]) -> Optional[T]: + """ + Returns the first annotation on a data type that matches the expected annotation type. + + :param data_type: The annotated type from which to extract the annotation. + :param annotation_type: The annotation class to look for. + :returns: The annotation class instance found (if any). + """ + + metadata = getattr(data_type, "__metadata__", None) + if metadata is not None: + for annotation in metadata: + if isinstance(annotation, annotation_type): + return annotation + + return None + + +def unwrap_annotated_type(typ: T) -> T: + "Extracts the wrapped type from an annotated type (e.g. returns `T` for `Annotated[T, ...]`)." + + if is_type_annotated(typ): + # type is Annotated[T, ...] + return typing.get_args(typ)[0] + else: + # type is a regular type + return typ + + +def rewrap_annotated_type(transform: Callable[[Type[S]], Type[T]], typ: Type[S]) -> Type[T]: + """ + Un-boxes, transforms and re-boxes an optionally annotated type. + + :param transform: A function that maps an un-annotated type to another type. + :param typ: A type to un-box (if necessary), transform, and re-box (if necessary). + """ + + metadata = getattr(typ, "__metadata__", None) + if metadata is not None: + # type is Annotated[T, ...] 
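+        # the first type argument of `Annotated[T, ...]` is the wrapped type; the remaining arguments are
+        # metadata, which is re-applied after the transformation below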
+ inner_type = typing.get_args(typ)[0] + else: + # type is a regular type + inner_type = typ + + transformed_type = transform(inner_type) + + if metadata is not None: + return Annotated[(transformed_type, *metadata)] # type: ignore + else: + return transformed_type + + +def get_module_classes(module: types.ModuleType) -> List[type]: + "Returns all classes declared directly in a module." + + def is_class_member(member: object) -> TypeGuard[type]: + return inspect.isclass(member) and member.__module__ == module.__name__ + + return [class_type for _, class_type in inspect.getmembers(module, is_class_member)] + + +if sys.version_info >= (3, 9): + + def get_resolved_hints(typ: type) -> Dict[str, type]: + return typing.get_type_hints(typ, include_extras=True) + +else: + + def get_resolved_hints(typ: type) -> Dict[str, type]: + return typing.get_type_hints(typ) + + +def get_class_properties(typ: type) -> Iterable[Tuple[str, type]]: + "Returns all properties of a class." + + if is_dataclass_type(typ): + return ((field.name, field.type) for field in dataclasses.fields(typ)) + else: + resolved_hints = get_resolved_hints(typ) + return resolved_hints.items() + + +def get_class_property(typ: type, name: str) -> Optional[type]: + "Looks up the annotated type of a property in a class by its property name." + + for property_name, property_type in get_class_properties(typ): + if name == property_name: + return property_type + return None + + +@dataclasses.dataclass +class _ROOT: + pass + + +def get_referenced_types(typ: TypeLike, module: Optional[types.ModuleType] = None) -> Set[type]: + """ + Extracts types directly or indirectly referenced by this type. + + For example, extract `T` from `List[T]`, `Optional[T]` or `Annotated[T, ...]`, `K` and `V` from `Dict[K,V]`, + `A` and `B` from `Union[A,B]`. + + :param typ: A type or special form. + :param module: The context in which types are evaluated. + :returns: Types referenced by the given type or special form. + """ + + collector = TypeCollector() + collector.run(typ, _ROOT, module) + return collector.references + + +class TypeCollector: + """ + Collects types directly or indirectly referenced by a type. + + :param graph: The type dependency graph, linking types to types they depend on. + """ + + graph: Dict[type, Set[type]] + + @property + def references(self) -> Set[type]: + "Types collected by the type collector." + + dependencies = set() + for edges in self.graph.values(): + dependencies.update(edges) + return dependencies + + def __init__(self) -> None: + self.graph = {_ROOT: set()} + + def traverse(self, typ: type) -> None: + "Finds all dependent types of a type." + + self.run(typ, _ROOT, sys.modules[typ.__module__]) + + def traverse_all(self, types: Iterable[type]) -> None: + "Finds all dependent types of a list of types." + + for typ in types: + self.traverse(typ) + + def run( + self, + typ: TypeLike, + cls: Type[DataclassInstance], + module: Optional[types.ModuleType], + ) -> None: + """ + Extracts types indirectly referenced by this type. + + For example, extract `T` from `List[T]`, `Optional[T]` or `Annotated[T, ...]`, `K` and `V` from `Dict[K,V]`, + `A` and `B` from `Union[A,B]`. + + :param typ: A type or special form. + :param cls: A dataclass type being expanded for dependent types. + :param module: The context in which types are evaluated. + :returns: Types referenced by the given type or special form. 
+ """ + + if typ is type(None) or typ is Any: + return + + if isinstance(typ, type): + self.graph[cls].add(typ) + + if typ in self.graph: + return + + self.graph[typ] = set() + + metadata = getattr(typ, "__metadata__", None) + if metadata is not None: + # type is Annotated[T, ...] + arg = typing.get_args(typ)[0] + return self.run(arg, cls, module) + + # type is a forward reference + if isinstance(typ, str) or isinstance(typ, typing.ForwardRef): + if module is None: + raise ValueError("missing context for evaluating types") + + evaluated_type = evaluate_type(typ, module) + return self.run(evaluated_type, cls, module) + + # type is a special form + origin = typing.get_origin(typ) + if origin in [list, dict, frozenset, set, tuple, Union]: + for arg in typing.get_args(typ): + self.run(arg, cls, module) + return + elif origin is Literal: + return + + # type is optional or a union type + if is_type_optional(typ): + return self.run(unwrap_optional_type(typ), cls, module) + if is_type_union(typ): + for union_type in unwrap_union_types(typ): + self.run(union_type, cls, module) + return + + # type is a regular type + elif is_dataclass_type(typ) or is_type_enum(typ) or isinstance(typ, type): + context = sys.modules[typ.__module__] + if is_dataclass_type(typ): + for field in dataclass_fields(typ): + self.run(field.type, typ, context) + else: + for field_name, field_type in get_resolved_hints(typ).items(): + self.run(field_type, typ, context) + return + + raise TypeError(f"expected: type-like; got: {typ}") + + +if sys.version_info >= (3, 10): + + def get_signature(fn: Callable[..., Any]) -> inspect.Signature: + "Extracts the signature of a function." + + return inspect.signature(fn, eval_str=True) + +else: + + def get_signature(fn: Callable[..., Any]) -> inspect.Signature: + "Extracts the signature of a function." + + return inspect.signature(fn) + + +def is_reserved_property(name: str) -> bool: + "True if the name stands for an internal property." + + # filter built-in and special properties + if re.match(r"^__.+__$", name): + return True + + # filter built-in special names + if name in ["_abc_impl"]: + return True + + return False + + +def create_module(name: str) -> types.ModuleType: + """ + Creates a new module dynamically at run-time. + + :param name: Fully qualified name of the new module (with dot notation). + """ + + if name in sys.modules: + raise KeyError(f"{name!r} already in sys.modules") + + spec = importlib.machinery.ModuleSpec(name, None) + module = importlib.util.module_from_spec(spec) + sys.modules[name] = module + if spec.loader is not None: + spec.loader.exec_module(module) + return module + + +if sys.version_info >= (3, 10): + + def create_data_type(class_name: str, fields: List[Tuple[str, type]]) -> type: + """ + Creates a new data-class type dynamically. + + :param class_name: The name of new data-class type. + :param fields: A list of fields (and their type) that the new data-class type is expected to have. + :returns: The newly created data-class type. + """ + + # has the `slots` parameter + return dataclasses.make_dataclass(class_name, fields, slots=True) + +else: + + def create_data_type(class_name: str, fields: List[Tuple[str, type]]) -> type: + """ + Creates a new data-class type dynamically. + + :param class_name: The name of new data-class type. + :param fields: A list of fields (and their type) that the new data-class type is expected to have. + :returns: The newly created data-class type. 
+ """ + + cls = dataclasses.make_dataclass(class_name, fields) + + cls_dict = dict(cls.__dict__) + field_names = tuple(field.name for field in dataclasses.fields(cls)) + + cls_dict["__slots__"] = field_names + + for field_name in field_names: + cls_dict.pop(field_name, None) + cls_dict.pop("__dict__", None) + + qualname = getattr(cls, "__qualname__", None) + cls = type(cls)(cls.__name__, (), cls_dict) + if qualname is not None: + cls.__qualname__ = qualname + + return cls + + +def create_object(typ: Type[T]) -> T: + "Creates an instance of a type." + + if issubclass(typ, Exception): + # exception types need special treatment + e = typ.__new__(typ) + return typing.cast(T, e) + else: + return object.__new__(typ) + + +if sys.version_info >= (3, 9): + TypeOrGeneric = Union[type, types.GenericAlias] + +else: + TypeOrGeneric = object + + +def is_generic_instance(obj: Any, typ: TypeLike) -> bool: + """ + Returns whether an object is an instance of a generic class, a standard class or of a subclass thereof. + + This function checks the following items recursively: + * items of a list + * keys and values of a dictionary + * members of a set + * items of a tuple + * members of a union type + + :param obj: The (possibly generic container) object to check recursively. + :param typ: The expected type of the object. + """ + + if isinstance(typ, typing.ForwardRef): + fwd: typing.ForwardRef = typ + identifier = fwd.__forward_arg__ + typ = eval(identifier) + if isinstance(typ, type): + return isinstance(obj, typ) + else: + return False + + # generic types (e.g. list, dict, set, etc.) + origin_type = typing.get_origin(typ) + if origin_type is list: + if not isinstance(obj, list): + return False + (list_item_type,) = typing.get_args(typ) # unpack single tuple element + list_obj: list = obj + return all(is_generic_instance(item, list_item_type) for item in list_obj) + elif origin_type is dict: + if not isinstance(obj, dict): + return False + key_type, value_type = typing.get_args(typ) + dict_obj: dict = obj + return all( + is_generic_instance(key, key_type) and is_generic_instance(value, value_type) + for key, value in dict_obj.items() + ) + elif origin_type is set: + if not isinstance(obj, set): + return False + (set_member_type,) = typing.get_args(typ) # unpack single tuple element + set_obj: set = obj + return all(is_generic_instance(item, set_member_type) for item in set_obj) + elif origin_type is tuple: + if not isinstance(obj, tuple): + return False + return all( + is_generic_instance(item, tuple_item_type) + for tuple_item_type, item in zip( + (tuple_item_type for tuple_item_type in typing.get_args(typ)), + (item for item in obj), + ) + ) + elif origin_type is Union: + return any(is_generic_instance(obj, member_type) for member_type in typing.get_args(typ)) + elif isinstance(typ, type): + return isinstance(obj, typ) + else: + raise TypeError(f"expected `type` but got: {typ}") + + +class RecursiveChecker: + _pred: Optional[Callable[[type, Any], bool]] + + def __init__(self, pred: Callable[[type, Any], bool]) -> None: + """ + Creates a checker to verify if a predicate applies to all nested member properties of an object recursively. + + :param pred: The predicate to test on member properties. Takes a property type and a property value. + """ + + self._pred = pred + + def pred(self, typ: type, obj: Any) -> bool: + "Acts as a workaround for the type checker mypy." 
+ + assert self._pred is not None + return self._pred(typ, obj) + + def check(self, typ: TypeLike, obj: Any) -> bool: + """ + Checks if a predicate applies to all nested member properties of an object recursively. + + :param typ: The type to recurse into. + :param obj: The object to inspect recursively. Must be an instance of the given type. + :returns: True if all member properties pass the filter predicate. + """ + + # check for well-known types + if ( + typ is type(None) + or typ is bool + or typ is int + or typ is float + or typ is str + or typ is bytes + or typ is datetime.datetime + or typ is datetime.date + or typ is datetime.time + or typ is uuid.UUID + ): + return self.pred(typing.cast(type, typ), obj) + + # generic types (e.g. list, dict, set, etc.) + origin_type = typing.get_origin(typ) + if origin_type is list: + if not isinstance(obj, list): + raise TypeError(f"expected `list` but got: {obj}") + (list_item_type,) = typing.get_args(typ) # unpack single tuple element + list_obj: list = obj + return all(self.check(list_item_type, item) for item in list_obj) + elif origin_type is dict: + if not isinstance(obj, dict): + raise TypeError(f"expected `dict` but got: {obj}") + key_type, value_type = typing.get_args(typ) + dict_obj: dict = obj + return all(self.check(value_type, item) for item in dict_obj.values()) + elif origin_type is set: + if not isinstance(obj, set): + raise TypeError(f"expected `set` but got: {obj}") + (set_member_type,) = typing.get_args(typ) # unpack single tuple element + set_obj: set = obj + return all(self.check(set_member_type, item) for item in set_obj) + elif origin_type is tuple: + if not isinstance(obj, tuple): + raise TypeError(f"expected `tuple` but got: {obj}") + return all( + self.check(tuple_item_type, item) + for tuple_item_type, item in zip( + (tuple_item_type for tuple_item_type in typing.get_args(typ)), + (item for item in obj), + ) + ) + elif origin_type is Union: + return self.pred(typ, obj) # type: ignore[arg-type] + + if not inspect.isclass(typ): + raise TypeError(f"expected `type` but got: {typ}") + + # enumeration type + if issubclass(typ, enum.Enum): + if not isinstance(obj, enum.Enum): + raise TypeError(f"expected `{typ}` but got: {obj}") + return self.pred(typ, obj) + + # class types with properties + if is_named_tuple_type(typ): + if not isinstance(obj, tuple): + raise TypeError(f"expected `NamedTuple` but got: {obj}") + return all( + self.check(field_type, getattr(obj, field_name)) + for field_name, field_type in typing.get_type_hints(typ).items() + ) + elif is_dataclass_type(typ): + if not isinstance(obj, typ): + raise TypeError(f"expected `{typ}` but got: {obj}") + resolved_hints = get_resolved_hints(typ) + return all( + self.check(resolved_hints[field.name], getattr(obj, field.name)) for field in dataclasses.fields(typ) + ) + else: + if not isinstance(obj, typ): + raise TypeError(f"expected `{typ}` but got: {obj}") + return all( + self.check(property_type, getattr(obj, property_name)) + for property_name, property_type in get_class_properties(typ) + ) + + +def check_recursive( + obj: object, + /, + *, + pred: Optional[Callable[[type, Any], bool]] = None, + type_pred: Optional[Callable[[type], bool]] = None, + value_pred: Optional[Callable[[Any], bool]] = None, +) -> bool: + """ + Checks if a predicate applies to all nested member properties of an object recursively. + + :param obj: The object to inspect recursively. + :param pred: The predicate to test on member properties. Takes a property type and a property value. 
+ :param type_pred: Constrains the check to properties of an expected type. Properties of other types pass automatically. + :param value_pred: Verifies a condition on member property values (of an expected type). + :returns: True if all member properties pass the filter predicate(s). + """ + + if type_pred is not None and value_pred is not None: + if pred is not None: + raise TypeError("filter predicate not permitted when type and value predicates are present") + + type_p: Callable[[Type[T]], bool] = type_pred + value_p: Callable[[T], bool] = value_pred + pred = lambda typ, obj: not type_p(typ) or value_p(obj) # noqa: E731 + + elif value_pred is not None: + if pred is not None: + raise TypeError("filter predicate not permitted when value predicate is present") + + value_only_p: Callable[[T], bool] = value_pred + pred = lambda typ, obj: value_only_p(obj) # noqa: E731 + + elif type_pred is not None: + raise TypeError("value predicate required when type predicate is present") + + elif pred is None: + pred = lambda typ, obj: True # noqa: E731 + + return RecursiveChecker(pred).check(type(obj), obj) diff --git a/llama_stack/strong_typing/mapping.py b/llama_stack/strong_typing/mapping.py new file mode 100644 index 000000000..408375a9f --- /dev/null +++ b/llama_stack/strong_typing/mapping.py @@ -0,0 +1,40 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. + +:see: https://github.com/hunyadi/strong_typing +""" + +import keyword +from typing import Optional + +from .auxiliary import Alias +from .inspection import get_annotation + + +def python_field_to_json_property(python_id: str, python_type: Optional[object] = None) -> str: + """ + Map a Python field identifier to a JSON property name. + + Authors may use an underscore appended at the end of a Python identifier as per PEP 8 if it clashes with a Python + keyword: e.g. `in` would become `in_` and `from` would become `from_`. Remove these suffixes when exporting to JSON. + + Authors may supply an explicit alias with the type annotation `Alias`, e.g. `Annotated[MyType, Alias("alias")]`. + """ + + if python_type is not None: + alias = get_annotation(python_type, Alias) + if alias: + return alias.name + + if python_id.endswith("_"): + id = python_id[:-1] + if keyword.iskeyword(id): + return id + + return python_id diff --git a/llama_stack/strong_typing/name.py b/llama_stack/strong_typing/name.py new file mode 100644 index 000000000..a1a2ae5f1 --- /dev/null +++ b/llama_stack/strong_typing/name.py @@ -0,0 +1,182 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. + +:see: https://github.com/hunyadi/strong_typing +""" + +import typing +from typing import Any, Literal, Optional, Tuple, Union + +from .auxiliary import _auxiliary_types +from .inspection import ( + TypeLike, + is_generic_dict, + is_generic_list, + is_type_optional, + is_type_union, + unwrap_generic_dict, + unwrap_generic_list, + unwrap_optional_type, + unwrap_union_types, +) + + +class TypeFormatter: + """ + Type formatter. + + :param use_union_operator: Whether to emit union types as `X | Y` as per PEP 604. 
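+
+    For example, `Union[int, str]` is rendered as `int | str` when the union operator is enabled and as
+    `Union[int, str]` otherwise; `Optional[int]` becomes `int | None` or `Optional[int]`, respectively.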
+ """ + + use_union_operator: bool + + def __init__(self, use_union_operator: bool = False) -> None: + self.use_union_operator = use_union_operator + + def union_to_str(self, data_type_args: Tuple[TypeLike, ...]) -> str: + if self.use_union_operator: + return " | ".join(self.python_type_to_str(t) for t in data_type_args) + else: + if len(data_type_args) == 2 and type(None) in data_type_args: + # Optional[T] is represented as Union[T, None] + origin_name = "Optional" + data_type_args = tuple(t for t in data_type_args if t is not type(None)) + else: + origin_name = "Union" + + args = ", ".join(self.python_type_to_str(t) for t in data_type_args) + return f"{origin_name}[{args}]" + + def plain_type_to_str(self, data_type: TypeLike) -> str: + "Returns the string representation of a Python type without metadata." + + # return forward references as the annotation string + if isinstance(data_type, typing.ForwardRef): + fwd: typing.ForwardRef = data_type + return fwd.__forward_arg__ + elif isinstance(data_type, str): + return data_type + + origin = typing.get_origin(data_type) + if origin is not None: + data_type_args = typing.get_args(data_type) + + if origin is dict: # Dict[T] + origin_name = "Dict" + elif origin is list: # List[T] + origin_name = "List" + elif origin is set: # Set[T] + origin_name = "Set" + elif origin is Union: + return self.union_to_str(data_type_args) + elif origin is Literal: + args = ", ".join(repr(arg) for arg in data_type_args) + return f"Literal[{args}]" + else: + origin_name = origin.__name__ + + args = ", ".join(self.python_type_to_str(t) for t in data_type_args) + return f"{origin_name}[{args}]" + + return data_type.__name__ + + def python_type_to_str(self, data_type: TypeLike) -> str: + "Returns the string representation of a Python type." + + if data_type is type(None): + return "None" + + # use compact name for alias types + name = _auxiliary_types.get(data_type) + if name is not None: + return name + + metadata = getattr(data_type, "__metadata__", None) + if metadata is not None: + # type is Annotated[T, ...] + metatuple: Tuple[Any, ...] = metadata + arg = typing.get_args(data_type)[0] + + # check for auxiliary types with user-defined annotations + metaset = set(metatuple) + for auxiliary_type, auxiliary_name in _auxiliary_types.items(): + auxiliary_arg = typing.get_args(auxiliary_type)[0] + if arg is not auxiliary_arg: + continue + + auxiliary_metatuple: Optional[Tuple[Any, ...]] = getattr(auxiliary_type, "__metadata__", None) + if auxiliary_metatuple is None: + continue + + if metaset.issuperset(auxiliary_metatuple): + # type is an auxiliary type with extra annotations + auxiliary_args = ", ".join(repr(m) for m in metatuple if m not in auxiliary_metatuple) + return f"Annotated[{auxiliary_name}, {auxiliary_args}]" + + # type is an annotated type + args = ", ".join(repr(m) for m in metatuple) + return f"Annotated[{self.plain_type_to_str(arg)}, {args}]" + else: + # type is a regular type + return self.plain_type_to_str(data_type) + + +def python_type_to_str(data_type: TypeLike, use_union_operator: bool = False) -> str: + """ + Returns the string representation of a Python type. + + :param use_union_operator: Whether to emit union types as `X | Y` as per PEP 604. + """ + + fmt = TypeFormatter(use_union_operator) + return fmt.python_type_to_str(data_type) + + +def python_type_to_name(data_type: TypeLike, force: bool = False) -> str: + """ + Returns the short name of a Python type. + + :param force: Whether to produce a name for composite types such as generics. 
+ """ + + # use compact name for alias types + name = _auxiliary_types.get(data_type) + if name is not None: + return name + + # unwrap annotated types + metadata = getattr(data_type, "__metadata__", None) + if metadata is not None: + # type is Annotated[T, ...] + arg = typing.get_args(data_type)[0] + return python_type_to_name(arg) + + if force: + # generic types + if is_type_optional(data_type, strict=True): + inner_name = python_type_to_name(unwrap_optional_type(data_type)) + return f"Optional__{inner_name}" + elif is_generic_list(data_type): + item_name = python_type_to_name(unwrap_generic_list(data_type)) + return f"List__{item_name}" + elif is_generic_dict(data_type): + key_type, value_type = unwrap_generic_dict(data_type) + key_name = python_type_to_name(key_type) + value_name = python_type_to_name(value_type) + return f"Dict__{key_name}__{value_name}" + elif is_type_union(data_type): + member_types = unwrap_union_types(data_type) + member_names = "__".join(python_type_to_name(member_type) for member_type in member_types) + return f"Union__{member_names}" + + # named system or user-defined type + if hasattr(data_type, "__name__") and not typing.get_args(data_type): + return data_type.__name__ + + raise TypeError(f"cannot assign a simple name to type: {data_type}") diff --git a/llama_stack/strong_typing/py.typed b/llama_stack/strong_typing/py.typed new file mode 100644 index 000000000..e69de29bb diff --git a/llama_stack/strong_typing/schema.py b/llama_stack/strong_typing/schema.py new file mode 100644 index 000000000..ddff7cf82 --- /dev/null +++ b/llama_stack/strong_typing/schema.py @@ -0,0 +1,752 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. + +:see: https://github.com/hunyadi/strong_typing +""" + +import dataclasses +import datetime +import decimal +import enum +import functools +import inspect +import json +import typing +import uuid +from copy import deepcopy +from typing import ( + Any, + Callable, + ClassVar, + Dict, + List, + Literal, + Optional, + Tuple, + Type, + TypeVar, + Union, + overload, +) + +import jsonschema +from typing_extensions import Annotated + +from . 
import docstring +from .auxiliary import ( + Alias, + IntegerRange, + MaxLength, + MinLength, + Precision, + get_auxiliary_format, +) +from .core import JsonArray, JsonObject, JsonType, Schema, StrictJsonType +from .inspection import ( + TypeLike, + enum_value_types, + get_annotation, + get_class_properties, + is_type_enum, + is_type_like, + is_type_optional, + unwrap_optional_type, +) +from .name import python_type_to_name +from .serialization import object_to_json + +# determines the maximum number of distinct enum members up to which a Dict[EnumType, Any] is converted into a JSON +# schema with explicitly listed properties (rather than employing a pattern constraint on property names) +OBJECT_ENUM_EXPANSION_LIMIT = 4 + + +T = TypeVar("T") + + +def get_class_docstrings(data_type: type) -> Tuple[Optional[str], Optional[str]]: + docstr = docstring.parse_type(data_type) + + # check if class has a doc-string other than the auto-generated string assigned by @dataclass + if docstring.has_default_docstring(data_type): + return None, None + + return docstr.short_description, docstr.long_description + + +def get_class_property_docstrings( + data_type: type, transform_fun: Optional[Callable[[type, str, str], str]] = None +) -> Dict[str, str]: + """ + Extracts the documentation strings associated with the properties of a composite type. + + :param data_type: The object whose properties to iterate over. + :param transform_fun: An optional function that maps a property documentation string to a custom tailored string. + :returns: A dictionary mapping property names to descriptions. + """ + + result = {} + for base in inspect.getmro(data_type): + docstr = docstring.parse_type(base) + for param in docstr.params.values(): + if param.name in result: + continue + + if transform_fun: + description = transform_fun(data_type, param.name, param.description) + else: + description = param.description + + result[param.name] = description + return result + + +def docstring_to_schema(data_type: type) -> Schema: + short_description, long_description = get_class_docstrings(data_type) + schema: Schema = {} + + description = "\n".join(filter(None, [short_description, long_description])) + if description: + schema["description"] = description + return schema + + +def id_from_ref(data_type: Union[typing.ForwardRef, str, type]) -> str: + "Extracts the name of a possibly forward-referenced type." + + if isinstance(data_type, typing.ForwardRef): + forward_type: typing.ForwardRef = data_type + return forward_type.__forward_arg__ + elif isinstance(data_type, str): + return data_type + else: + return data_type.__name__ + + +def type_from_ref(data_type: Union[typing.ForwardRef, str, type]) -> Tuple[str, type]: + "Creates a type from a forward reference." + + if isinstance(data_type, typing.ForwardRef): + forward_type: typing.ForwardRef = data_type + true_type = eval(forward_type.__forward_code__) + return forward_type.__forward_arg__, true_type + elif isinstance(data_type, str): + true_type = eval(data_type) + return data_type, true_type + else: + return data_type.__name__, data_type + + +@dataclasses.dataclass +class TypeCatalogEntry: + schema: Optional[Schema] + identifier: str + examples: Optional[JsonType] = None + + +class TypeCatalog: + "Maintains an association of well-known Python types to their JSON schema." 
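+    # entries are indexed both by type object and by identifier so that forward references, which only carry
+    # a name, can be looked up without being evaluated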
+ + _by_type: Dict[TypeLike, TypeCatalogEntry] + _by_name: Dict[str, TypeCatalogEntry] + + def __init__(self) -> None: + self._by_type = {} + self._by_name = {} + + def __contains__(self, data_type: TypeLike) -> bool: + if isinstance(data_type, typing.ForwardRef): + fwd: typing.ForwardRef = data_type + name = fwd.__forward_arg__ + return name in self._by_name + else: + return data_type in self._by_type + + def add( + self, + data_type: TypeLike, + schema: Optional[Schema], + identifier: str, + examples: Optional[List[JsonType]] = None, + ) -> None: + if isinstance(data_type, typing.ForwardRef): + raise TypeError("forward references cannot be used to register a type") + + if data_type in self._by_type: + raise ValueError(f"type {data_type} is already registered in the catalog") + + entry = TypeCatalogEntry(schema, identifier, examples) + self._by_type[data_type] = entry + self._by_name[identifier] = entry + + def get(self, data_type: TypeLike) -> TypeCatalogEntry: + if isinstance(data_type, typing.ForwardRef): + fwd: typing.ForwardRef = data_type + name = fwd.__forward_arg__ + return self._by_name[name] + else: + return self._by_type[data_type] + + +@dataclasses.dataclass +class SchemaOptions: + definitions_path: str = "#/definitions/" + use_descriptions: bool = True + use_examples: bool = True + property_description_fun: Optional[Callable[[type, str, str], str]] = None + + +class JsonSchemaGenerator: + "Creates a JSON schema with user-defined type definitions." + + type_catalog: ClassVar[TypeCatalog] = TypeCatalog() + types_used: Dict[str, TypeLike] + options: SchemaOptions + + def __init__(self, options: Optional[SchemaOptions] = None): + if options is None: + self.options = SchemaOptions() + else: + self.options = options + self.types_used = {} + + @functools.singledispatchmethod + def _metadata_to_schema(self, arg: object) -> Schema: + # unrecognized annotation + return {} + + @_metadata_to_schema.register + def _(self, arg: IntegerRange) -> Schema: + return {"minimum": arg.minimum, "maximum": arg.maximum} + + @_metadata_to_schema.register + def _(self, arg: Precision) -> Schema: + return { + "multipleOf": 10 ** (-arg.decimal_digits), + "exclusiveMinimum": -(10**arg.integer_digits), + "exclusiveMaximum": (10**arg.integer_digits), + } + + @_metadata_to_schema.register + def _(self, arg: MinLength) -> Schema: + return {"minLength": arg.value} + + @_metadata_to_schema.register + def _(self, arg: MaxLength) -> Schema: + return {"maxLength": arg.value} + + def _with_metadata(self, type_schema: Schema, metadata: Optional[Tuple[Any, ...]]) -> Schema: + if metadata: + for m in metadata: + type_schema.update(self._metadata_to_schema(m)) + return type_schema + + def _simple_type_to_schema(self, typ: TypeLike, json_schema_extra: Optional[dict] = None) -> Optional[Schema]: + """ + Returns the JSON schema associated with a simple, unrestricted type. + + :returns: The schema for a simple type, or `None`. 
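+
+        For example, `int` maps to `{"type": "integer"}` and `bytes` to a base64-encoded string schema;
+        composite types such as lists and data classes yield `None` and are handled in `type_to_schema`.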
+ """ + + if typ is type(None): + return {"type": "null"} + elif typ is bool: + return {"type": "boolean"} + elif typ is int: + return {"type": "integer"} + elif typ is float: + return {"type": "number"} + elif typ is str: + if json_schema_extra and "contentEncoding" in json_schema_extra: + return { + "type": "string", + "contentEncoding": json_schema_extra["contentEncoding"], + } + return {"type": "string"} + elif typ is bytes: + return {"type": "string", "contentEncoding": "base64"} + elif typ is datetime.datetime: + # 2018-11-13T20:20:39+00:00 + return { + "type": "string", + "format": "date-time", + } + elif typ is datetime.date: + # 2018-11-13 + return {"type": "string", "format": "date"} + elif typ is datetime.time: + # 20:20:39+00:00 + return {"type": "string", "format": "time"} + elif typ is decimal.Decimal: + return {"type": "number"} + elif typ is uuid.UUID: + # f81d4fae-7dec-11d0-a765-00a0c91e6bf6 + return {"type": "string", "format": "uuid"} + elif typ is Any: + return { + "oneOf": [ + {"type": "null"}, + {"type": "boolean"}, + {"type": "number"}, + {"type": "string"}, + {"type": "array"}, + {"type": "object"}, + ] + } + elif typ is JsonObject: + return {"type": "object"} + elif typ is JsonArray: + return {"type": "array"} + else: + # not a simple type + return None + + def type_to_schema( + self, + data_type: TypeLike, + force_expand: bool = False, + json_schema_extra: Optional[dict] = None, + ) -> Schema: + """ + Returns the JSON schema associated with a type. + + :param data_type: The Python type whose JSON schema to return. + :param force_expand: Forces a JSON schema to be returned even if the type is registered in the catalog of known types. + :returns: The JSON schema associated with the type. + """ + + # short-circuit for common simple types + schema = self._simple_type_to_schema(data_type, json_schema_extra) + if schema is not None: + return schema + + # types registered in the type catalog of well-known types + type_catalog = JsonSchemaGenerator.type_catalog + if not force_expand and data_type in type_catalog: + # user-defined type + identifier = type_catalog.get(data_type).identifier + self.types_used.setdefault(identifier, data_type) + return {"$ref": f"{self.options.definitions_path}{identifier}"} + + # unwrap annotated types + metadata = getattr(data_type, "__metadata__", None) + if metadata is not None: + # type is Annotated[T, ...] 
+ typ = typing.get_args(data_type)[0] + schema = self._simple_type_to_schema(typ) + if schema is not None: + # recognize well-known auxiliary types + fmt = get_auxiliary_format(data_type) + if fmt is not None: + schema.update({"format": fmt}) + return schema + else: + return self._with_metadata(schema, metadata) + + else: + # type is a regular type + typ = data_type + + if isinstance(typ, typing.ForwardRef) or isinstance(typ, str): + if force_expand: + identifier, true_type = type_from_ref(typ) + return self.type_to_schema(true_type, force_expand=True) + else: + try: + identifier, true_type = type_from_ref(typ) + self.types_used[identifier] = true_type + except NameError: + identifier = id_from_ref(typ) + + return {"$ref": f"{self.options.definitions_path}{identifier}"} + + if is_type_enum(typ): + enum_type: Type[enum.Enum] = typ + value_types = enum_value_types(enum_type) + if len(value_types) != 1: + raise ValueError( + f"enumerations must have a consistent member value type but several types found: {value_types}" + ) + enum_value_type = value_types.pop() + + enum_schema: Schema + if enum_value_type is bool or enum_value_type is int or enum_value_type is float or enum_value_type is str: + if enum_value_type is bool: + enum_schema_type = "boolean" + elif enum_value_type is int: + enum_schema_type = "integer" + elif enum_value_type is float: + enum_schema_type = "number" + elif enum_value_type is str: + enum_schema_type = "string" + + enum_schema = { + "type": enum_schema_type, + "enum": [object_to_json(e.value) for e in enum_type], + } + if self.options.use_descriptions: + enum_schema.update(docstring_to_schema(typ)) + return enum_schema + else: + enum_schema = self.type_to_schema(enum_value_type) + if self.options.use_descriptions: + enum_schema.update(docstring_to_schema(typ)) + return enum_schema + + origin_type = typing.get_origin(typ) + if origin_type is list: + (list_type,) = typing.get_args(typ) # unpack single tuple element + return {"type": "array", "items": self.type_to_schema(list_type)} + elif origin_type is dict: + key_type, value_type = typing.get_args(typ) + if not (key_type is str or key_type is int or is_type_enum(key_type)): + raise ValueError("`dict` with key type not coercible to `str` is not supported") + + dict_schema: Schema + value_schema = self.type_to_schema(value_type) + if is_type_enum(key_type): + enum_values = [str(e.value) for e in key_type] + if len(enum_values) > OBJECT_ENUM_EXPANSION_LIMIT: + dict_schema = { + "propertyNames": {"pattern": "^(" + "|".join(enum_values) + ")$"}, + "additionalProperties": value_schema, + } + else: + dict_schema = { + "properties": {value: value_schema for value in enum_values}, + "additionalProperties": False, + } + else: + dict_schema = {"additionalProperties": value_schema} + + schema = {"type": "object"} + schema.update(dict_schema) + return schema + elif origin_type is set: + (set_type,) = typing.get_args(typ) # unpack single tuple element + return { + "type": "array", + "items": self.type_to_schema(set_type), + "uniqueItems": True, + } + elif origin_type is tuple: + args = typing.get_args(typ) + return { + "type": "array", + "minItems": len(args), + "maxItems": len(args), + "prefixItems": [self.type_to_schema(member_type) for member_type in args], + } + elif origin_type is Union: + discriminator = None + if typing.get_origin(data_type) is Annotated: + discriminator = typing.get_args(data_type)[1].discriminator + ret = {"oneOf": [self.type_to_schema(union_type) for union_type in typing.get_args(typ)]} + if discriminator: 
+ # for each union type, we need to read the value of the discriminator + mapping = {} + for union_type in typing.get_args(typ): + props = self.type_to_schema(union_type, force_expand=True)["properties"] + mapping[props[discriminator]["default"]] = self.type_to_schema(union_type)["$ref"] + + ret["discriminator"] = { + "propertyName": discriminator, + "mapping": mapping, + } + return ret + elif origin_type is Literal: + (literal_value,) = typing.get_args(typ) # unpack value of literal type + schema = self.type_to_schema(type(literal_value)) + schema["const"] = literal_value + return schema + elif origin_type is type: + (concrete_type,) = typing.get_args(typ) # unpack single tuple element + return {"const": self.type_to_schema(concrete_type, force_expand=True)} + + # dictionary of class attributes + members = dict(inspect.getmembers(typ, lambda a: not inspect.isroutine(a))) + + property_docstrings = get_class_property_docstrings(typ, self.options.property_description_fun) + properties: Dict[str, Schema] = {} + required: List[str] = [] + for property_name, property_type in get_class_properties(typ): + # rename property if an alias name is specified + alias = get_annotation(property_type, Alias) + if alias: + output_name = alias.name + else: + output_name = property_name + + defaults = {} + json_schema_extra = None + if "model_fields" in members: + f = members["model_fields"] + defaults = {k: finfo.default for k, finfo in f.items()} + json_schema_extra = f.get(output_name, None).json_schema_extra + + if is_type_optional(property_type): + optional_type: type = unwrap_optional_type(property_type) + property_def = self.type_to_schema(optional_type, json_schema_extra=json_schema_extra) + else: + property_def = self.type_to_schema(property_type, json_schema_extra=json_schema_extra) + required.append(output_name) + + # check if attribute has a default value initializer + if defaults.get(property_name) is not None: + def_value = defaults[property_name] + # check if value can be directly represented in JSON + if isinstance( + def_value, + ( + bool, + int, + float, + str, + enum.Enum, + datetime.datetime, + datetime.date, + datetime.time, + ), + ): + property_def["default"] = object_to_json(def_value) + + # add property docstring if available + property_doc = property_docstrings.get(property_name) + if property_doc: + # print(output_name, property_doc) + property_def.pop("title", None) + property_def["description"] = property_doc + + properties[output_name] = property_def + + schema = {"type": "object"} + if len(properties) > 0: + schema["properties"] = typing.cast(JsonType, properties) + schema["additionalProperties"] = False + if len(required) > 0: + schema["required"] = typing.cast(JsonType, required) + if self.options.use_descriptions: + schema.update(docstring_to_schema(typ)) + return schema + + def _type_to_schema_with_lookup(self, data_type: TypeLike) -> Schema: + """ + Returns the JSON schema associated with a type that may be registered in the catalog of known types. + + :param data_type: The type whose JSON schema we seek. + :returns: The JSON schema associated with the type. 
+ """ + + entry = JsonSchemaGenerator.type_catalog.get(data_type) + if entry.schema is None: + type_schema = self.type_to_schema(data_type, force_expand=True) + else: + type_schema = deepcopy(entry.schema) + + # add descriptive text (if present) + if self.options.use_descriptions: + if isinstance(data_type, type) and not isinstance(data_type, typing.ForwardRef): + type_schema.update(docstring_to_schema(data_type)) + + # add example (if present) + if self.options.use_examples and entry.examples: + type_schema["examples"] = entry.examples + + return type_schema + + def classdef_to_schema(self, data_type: TypeLike, force_expand: bool = False) -> Tuple[Schema, Dict[str, Schema]]: + """ + Returns the JSON schema associated with a type and any nested types. + + :param data_type: The type whose JSON schema to return. + :param force_expand: True if a full JSON schema is to be returned even for well-known types; false if a schema + reference is to be used for well-known types. + :returns: A tuple of the JSON schema, and a mapping between nested type names and their corresponding schema. + """ + + if not is_type_like(data_type): + raise TypeError(f"expected a type-like object but got: {data_type}") + + self.types_used = {} + try: + type_schema = self.type_to_schema(data_type, force_expand=force_expand) + + types_defined: Dict[str, Schema] = {} + while len(self.types_used) > len(types_defined): + # make a snapshot copy; original collection is going to be modified + types_undefined = { + sub_name: sub_type + for sub_name, sub_type in self.types_used.items() + if sub_name not in types_defined + } + + # expand undefined types, which may lead to additional types to be defined + for sub_name, sub_type in types_undefined.items(): + types_defined[sub_name] = self._type_to_schema_with_lookup(sub_type) + + type_definitions = dict(sorted(types_defined.items())) + finally: + self.types_used = {} + + return type_schema, type_definitions + + +class Validator(enum.Enum): + "Defines constants for JSON schema standards." + + Draft7 = jsonschema.Draft7Validator + Draft201909 = jsonschema.Draft201909Validator + Draft202012 = jsonschema.Draft202012Validator + Latest = jsonschema.Draft202012Validator + + +def classdef_to_schema( + data_type: TypeLike, + options: Optional[SchemaOptions] = None, + validator: Validator = Validator.Latest, +) -> Schema: + """ + Returns the JSON schema corresponding to the given type. + + :param data_type: The Python type used to generate the JSON schema + :returns: A JSON object that you can serialize to a JSON string with json.dump or json.dumps + :raises TypeError: Indicates that the generated JSON schema does not validate against the desired meta-schema. 
+    """
+
+    # short-circuit with an error message when passing invalid data
+    if not is_type_like(data_type):
+        raise TypeError(f"expected a type-like object but got: {data_type}")
+
+    generator = JsonSchemaGenerator(options)
+    type_schema, type_definitions = generator.classdef_to_schema(data_type)
+
+    class_schema: Schema = {}
+    if type_definitions:
+        class_schema["definitions"] = typing.cast(JsonType, type_definitions)
+    class_schema.update(type_schema)
+
+    validator_id = validator.value.META_SCHEMA["$id"]
+    try:
+        validator.value.check_schema(class_schema)
+    except jsonschema.exceptions.SchemaError:
+        raise TypeError(f"schema does not validate against meta-schema <{validator_id}>")
+
+    schema = {"$schema": validator_id}
+    schema.update(class_schema)
+    return schema
+
+
+def validate_object(data_type: TypeLike, json_dict: JsonType) -> None:
+    """
+    Validates if the JSON dictionary object conforms to the expected type.
+
+    :param data_type: The type to match against.
+    :param json_dict: A JSON object obtained with `json.load` or `json.loads`.
+    :raises jsonschema.exceptions.ValidationError: Indicates that the JSON object cannot represent the type.
+    """
+
+    schema_dict = classdef_to_schema(data_type)
+    jsonschema.validate(json_dict, schema_dict, format_checker=jsonschema.FormatChecker())
+
+
+def print_schema(data_type: type) -> None:
+    """Pretty-prints the JSON schema corresponding to the type."""
+
+    s = classdef_to_schema(data_type)
+    print(json.dumps(s, indent=4))
+
+
+def get_schema_identifier(data_type: type) -> Optional[str]:
+    if data_type in JsonSchemaGenerator.type_catalog:
+        return JsonSchemaGenerator.type_catalog.get(data_type).identifier
+    else:
+        return None
+
+
+def register_schema(
+    data_type: T,
+    schema: Optional[Schema] = None,
+    name: Optional[str] = None,
+    examples: Optional[List[JsonType]] = None,
+) -> T:
+    """
+    Associates a type with a JSON schema definition.
+
+    :param data_type: The type to associate with a JSON schema.
+    :param schema: The schema to associate the type with. Derived automatically if omitted.
+    :param name: The name used for looking up the type. Determined automatically if omitted.
+    :returns: The input type.
+    """
+
+    JsonSchemaGenerator.type_catalog.add(
+        data_type,
+        schema,
+        name if name is not None else python_type_to_name(data_type),
+        examples,
+    )
+    return data_type
+
+
+@overload
+def json_schema_type(cls: Type[T], /) -> Type[T]: ...
+
+
+@overload
+def json_schema_type(cls: None, *, schema: Optional[Schema] = None) -> Callable[[Type[T]], Type[T]]: ...
+ + +def json_schema_type( + cls: Optional[Type[T]] = None, + *, + schema: Optional[Schema] = None, + examples: Optional[List[JsonType]] = None, +) -> Union[Type[T], Callable[[Type[T]], Type[T]]]: + """Decorator to add user-defined schema definition to a class.""" + + def wrap(cls: Type[T]) -> Type[T]: + return register_schema(cls, schema, examples=examples) + + # see if decorator is used as @json_schema_type or @json_schema_type() + if cls is None: + # called with parentheses + return wrap + else: + # called as @json_schema_type without parentheses + return wrap(cls) + + +register_schema(JsonObject, name="JsonObject") +register_schema(JsonArray, name="JsonArray") + +register_schema( + JsonType, + name="JsonType", + examples=[ + { + "property1": None, + "property2": True, + "property3": 64, + "property4": "string", + "property5": ["item"], + "property6": {"key": "value"}, + } + ], +) +register_schema( + StrictJsonType, + name="StrictJsonType", + examples=[ + { + "property1": True, + "property2": 64, + "property3": "string", + "property4": ["item"], + "property5": {"key": "value"}, + } + ], +) diff --git a/llama_stack/strong_typing/serialization.py b/llama_stack/strong_typing/serialization.py new file mode 100644 index 000000000..c00a0aad5 --- /dev/null +++ b/llama_stack/strong_typing/serialization.py @@ -0,0 +1,97 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. + +:see: https://github.com/hunyadi/strong_typing +""" + +import inspect +import json +import sys +from types import ModuleType +from typing import Any, Optional, TextIO, TypeVar + +from .core import JsonType +from .deserializer import create_deserializer +from .inspection import TypeLike +from .serializer import create_serializer + +T = TypeVar("T") + + +def object_to_json(obj: Any) -> JsonType: + """ + Converts a Python object to a representation that can be exported to JSON. + + * Fundamental types (e.g. numeric types) are written as is. + * Date and time types are serialized in the ISO 8601 format with time zone. + * A byte array is written as a string with Base64 encoding. + * UUIDs are written as a UUID string. + * Enumerations are written as their value. + * Containers (e.g. `list`, `dict`, `set`, `tuple`) are exported recursively. + * Objects with properties (including data class types) are converted to a dictionaries of key-value pairs. + """ + + typ: type = type(obj) + generator = create_serializer(typ) + return generator.generate(obj) + + +def json_to_object(typ: TypeLike, data: JsonType, *, context: Optional[ModuleType] = None) -> object: + """ + Creates an object from a representation that has been de-serialized from JSON. + + When de-serializing a JSON object into a Python object, the following transformations are applied: + + * Fundamental types are parsed as `bool`, `int`, `float` or `str`. + * Date and time types are parsed from the ISO 8601 format with time zone into the corresponding Python type + `datetime`, `date` or `time` + * A byte array is read from a string with Base64 encoding into a `bytes` instance. + * UUIDs are extracted from a UUID string into a `uuid.UUID` instance. + * Enumerations are instantiated with a lookup on enumeration value. + * Containers (e.g. `list`, `dict`, `set`, `tuple`) are parsed recursively. 
+ * Complex objects with properties (including data class types) are populated from dictionaries of key-value pairs + using reflection (enumerating type annotations). + + :raises TypeError: A de-serializing engine cannot be constructed for the input type. + :raises JsonKeyError: Deserialization for a class or union type has failed because a matching member was not found. + :raises JsonTypeError: Deserialization for data has failed due to a type mismatch. + """ + + # use caller context for evaluating types if no context is supplied + if context is None: + this_frame = inspect.currentframe() + if this_frame is not None: + caller_frame = this_frame.f_back + del this_frame + + if caller_frame is not None: + try: + context = sys.modules[caller_frame.f_globals["__name__"]] + finally: + del caller_frame + + parser = create_deserializer(typ, context) + return parser.parse(data) + + +def json_dump_string(json_object: JsonType) -> str: + "Dump an object as a JSON string with a compact representation." + + return json.dumps(json_object, ensure_ascii=False, check_circular=False, separators=(",", ":")) + + +def json_dump(json_object: JsonType, file: TextIO) -> None: + json.dump( + json_object, + file, + ensure_ascii=False, + check_circular=False, + separators=(",", ":"), + ) + file.write("\n") diff --git a/llama_stack/strong_typing/serializer.py b/llama_stack/strong_typing/serializer.py new file mode 100644 index 000000000..5e93e4c4d --- /dev/null +++ b/llama_stack/strong_typing/serializer.py @@ -0,0 +1,497 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# All rights reserved. +# +# This source code is licensed under the terms described in the LICENSE file in +# the root directory of this source tree. + +""" +Type-safe data interchange for Python data classes. + +:see: https://github.com/hunyadi/strong_typing +""" + +import abc +import base64 +import datetime +import enum +import functools +import inspect +import ipaddress +import sys +import typing +import uuid +from types import FunctionType, MethodType, ModuleType +from typing import ( + Any, + Callable, + Dict, + Generic, + List, + Literal, + NamedTuple, + Optional, + Set, + Tuple, + Type, + TypeVar, + Union, +) + +from .core import JsonType +from .exception import JsonTypeError, JsonValueError +from .inspection import ( + TypeLike, + enum_value_types, + evaluate_type, + get_class_properties, + get_resolved_hints, + is_dataclass_type, + is_named_tuple_type, + is_reserved_property, + is_type_annotated, + is_type_enum, + unwrap_annotated_type, +) +from .mapping import python_field_to_json_property + +T = TypeVar("T") + + +class Serializer(abc.ABC, Generic[T]): + @abc.abstractmethod + def generate(self, data: T) -> JsonType: ... 
+ + +class NoneSerializer(Serializer[None]): + def generate(self, data: None) -> None: + # can be directly represented in JSON + return None + + +class BoolSerializer(Serializer[bool]): + def generate(self, data: bool) -> bool: + # can be directly represented in JSON + return data + + +class IntSerializer(Serializer[int]): + def generate(self, data: int) -> int: + # can be directly represented in JSON + return data + + +class FloatSerializer(Serializer[float]): + def generate(self, data: float) -> float: + # can be directly represented in JSON + return data + + +class StringSerializer(Serializer[str]): + def generate(self, data: str) -> str: + # can be directly represented in JSON + return data + + +class BytesSerializer(Serializer[bytes]): + def generate(self, data: bytes) -> str: + return base64.b64encode(data).decode("ascii") + + +class DateTimeSerializer(Serializer[datetime.datetime]): + def generate(self, obj: datetime.datetime) -> str: + if obj.tzinfo is None: + raise JsonValueError(f"timestamp lacks explicit time zone designator: {obj}") + fmt = obj.isoformat() + if fmt.endswith("+00:00"): + fmt = f"{fmt[:-6]}Z" # Python's isoformat() does not support military time zones like "Zulu" for UTC + return fmt + + +class DateSerializer(Serializer[datetime.date]): + def generate(self, obj: datetime.date) -> str: + return obj.isoformat() + + +class TimeSerializer(Serializer[datetime.time]): + def generate(self, obj: datetime.time) -> str: + return obj.isoformat() + + +class UUIDSerializer(Serializer[uuid.UUID]): + def generate(self, obj: uuid.UUID) -> str: + return str(obj) + + +class IPv4Serializer(Serializer[ipaddress.IPv4Address]): + def generate(self, obj: ipaddress.IPv4Address) -> str: + return str(obj) + + +class IPv6Serializer(Serializer[ipaddress.IPv6Address]): + def generate(self, obj: ipaddress.IPv6Address) -> str: + return str(obj) + + +class EnumSerializer(Serializer[enum.Enum]): + def generate(self, obj: enum.Enum) -> Union[int, str]: + return obj.value + + +class UntypedListSerializer(Serializer[list]): + def generate(self, obj: list) -> List[JsonType]: + return [object_to_json(item) for item in obj] + + +class UntypedDictSerializer(Serializer[dict]): + def generate(self, obj: dict) -> Dict[str, JsonType]: + if obj and isinstance(next(iter(obj.keys())), enum.Enum): + iterator = ((key.value, object_to_json(value)) for key, value in obj.items()) + else: + iterator = ((str(key), object_to_json(value)) for key, value in obj.items()) + return dict(iterator) + + +class UntypedSetSerializer(Serializer[set]): + def generate(self, obj: set) -> List[JsonType]: + return [object_to_json(item) for item in obj] + + +class UntypedTupleSerializer(Serializer[tuple]): + def generate(self, obj: tuple) -> List[JsonType]: + return [object_to_json(item) for item in obj] + + +class TypedCollectionSerializer(Serializer, Generic[T]): + generator: Serializer[T] + + def __init__(self, item_type: Type[T], context: Optional[ModuleType]) -> None: + self.generator = _get_serializer(item_type, context) + + +class TypedListSerializer(TypedCollectionSerializer[T]): + def generate(self, obj: List[T]) -> List[JsonType]: + return [self.generator.generate(item) for item in obj] + + +class TypedStringDictSerializer(TypedCollectionSerializer[T]): + def __init__(self, value_type: Type[T], context: Optional[ModuleType]) -> None: + super().__init__(value_type, context) + + def generate(self, obj: Dict[str, T]) -> Dict[str, JsonType]: + return {key: self.generator.generate(value) for key, value in obj.items()} + + +class 
TypedEnumDictSerializer(TypedCollectionSerializer[T]): + def __init__( + self, + key_type: Type[enum.Enum], + value_type: Type[T], + context: Optional[ModuleType], + ) -> None: + super().__init__(value_type, context) + + value_types = enum_value_types(key_type) + if len(value_types) != 1: + raise JsonTypeError( + f"invalid key type, enumerations must have a consistent member value type but several types found: {value_types}" + ) + + value_type = value_types.pop() + if value_type is not str: + raise JsonTypeError("invalid enumeration key type, expected `enum.Enum` with string values") + + def generate(self, obj: Dict[enum.Enum, T]) -> Dict[str, JsonType]: + return {key.value: self.generator.generate(value) for key, value in obj.items()} + + +class TypedSetSerializer(TypedCollectionSerializer[T]): + def generate(self, obj: Set[T]) -> JsonType: + return [self.generator.generate(item) for item in obj] + + +class TypedTupleSerializer(Serializer[tuple]): + item_generators: Tuple[Serializer, ...] + + def __init__(self, item_types: Tuple[type, ...], context: Optional[ModuleType]) -> None: + self.item_generators = tuple(_get_serializer(item_type, context) for item_type in item_types) + + def generate(self, obj: tuple) -> List[JsonType]: + return [item_generator.generate(item) for item_generator, item in zip(self.item_generators, obj)] + + +class CustomSerializer(Serializer): + converter: Callable[[object], JsonType] + + def __init__(self, converter: Callable[[object], JsonType]) -> None: + self.converter = converter + + def generate(self, obj: object) -> JsonType: + return self.converter(obj) + + +class FieldSerializer(Generic[T]): + """ + Serializes a Python object field into a JSON property. + + :param field_name: The name of the field in a Python class to read data from. + :param property_name: The name of the JSON property to write to a JSON `object`. + :param generator: A compatible serializer that can handle the field's type. 
+ """ + + field_name: str + property_name: str + generator: Serializer + + def __init__(self, field_name: str, property_name: str, generator: Serializer[T]) -> None: + self.field_name = field_name + self.property_name = property_name + self.generator = generator + + def generate_field(self, obj: object, object_dict: Dict[str, JsonType]) -> None: + value = getattr(obj, self.field_name) + if value is not None: + object_dict[self.property_name] = self.generator.generate(value) + + +class TypedClassSerializer(Serializer[T]): + property_generators: List[FieldSerializer] + + def __init__(self, class_type: Type[T], context: Optional[ModuleType]) -> None: + self.property_generators = [ + FieldSerializer( + field_name, + python_field_to_json_property(field_name, field_type), + _get_serializer(field_type, context), + ) + for field_name, field_type in get_class_properties(class_type) + ] + + def generate(self, obj: T) -> Dict[str, JsonType]: + object_dict: Dict[str, JsonType] = {} + for property_generator in self.property_generators: + property_generator.generate_field(obj, object_dict) + + return object_dict + + +class TypedNamedTupleSerializer(TypedClassSerializer[NamedTuple]): + def __init__(self, class_type: Type[NamedTuple], context: Optional[ModuleType]) -> None: + super().__init__(class_type, context) + + +class DataclassSerializer(TypedClassSerializer[T]): + def __init__(self, class_type: Type[T], context: Optional[ModuleType]) -> None: + super().__init__(class_type, context) + + +class UnionSerializer(Serializer): + def generate(self, obj: Any) -> JsonType: + return object_to_json(obj) + + +class LiteralSerializer(Serializer): + generator: Serializer + + def __init__(self, values: Tuple[Any, ...], context: Optional[ModuleType]) -> None: + literal_type_tuple = tuple(type(value) for value in values) + literal_type_set = set(literal_type_tuple) + if len(literal_type_set) != 1: + value_names = ", ".join(repr(value) for value in values) + raise TypeError( + f"type `Literal[{value_names}]` expects consistent literal value types but got: {literal_type_tuple}" + ) + + literal_type = literal_type_set.pop() + self.generator = _get_serializer(literal_type, context) + + def generate(self, obj: Any) -> JsonType: + return self.generator.generate(obj) + + +class UntypedNamedTupleSerializer(Serializer): + fields: Dict[str, str] + + def __init__(self, class_type: Type[NamedTuple]) -> None: + # named tuples are also instances of tuple + self.fields = {} + field_names: Tuple[str, ...] 
= class_type._fields + for field_name in field_names: + self.fields[field_name] = python_field_to_json_property(field_name) + + def generate(self, obj: NamedTuple) -> JsonType: + object_dict = {} + for field_name, property_name in self.fields.items(): + value = getattr(obj, field_name) + object_dict[property_name] = object_to_json(value) + + return object_dict + + +class UntypedClassSerializer(Serializer): + def generate(self, obj: object) -> JsonType: + # iterate over object attributes to get a standard representation + object_dict = {} + for name in dir(obj): + if is_reserved_property(name): + continue + + value = getattr(obj, name) + if value is None: + continue + + # filter instance methods + if inspect.ismethod(value): + continue + + object_dict[python_field_to_json_property(name)] = object_to_json(value) + + return object_dict + + +def create_serializer(typ: TypeLike, context: Optional[ModuleType] = None) -> Serializer: + """ + Creates a serializer engine to produce an object that can be directly converted into a JSON string. + + When serializing a Python object into a JSON object, the following transformations are applied: + + * Fundamental types (`bool`, `int`, `float` or `str`) are returned as-is. + * Date and time types (`datetime`, `date` or `time`) produce an ISO 8601 format string with time zone + (ending with `Z` for UTC). + * Byte arrays (`bytes`) are written as a string with Base64 encoding. + * UUIDs (`uuid.UUID`) are written as a UUID string as per RFC 4122. + * Enumerations yield their enumeration value. + * Containers (e.g. `list`, `dict`, `set`, `tuple`) are processed recursively. + * Complex objects with properties (including data class types) generate dictionaries of key-value pairs. + + :raises TypeError: A serializer engine cannot be constructed for the input type. 
+ """ + + if context is None: + if isinstance(typ, type): + context = sys.modules[typ.__module__] + + return _get_serializer(typ, context) + + +def _get_serializer(typ: TypeLike, context: Optional[ModuleType]) -> Serializer: + if isinstance(typ, (str, typing.ForwardRef)): + if context is None: + raise TypeError(f"missing context for evaluating type: {typ}") + + typ = evaluate_type(typ, context) + + if isinstance(typ, type): + return _fetch_serializer(typ) + else: + # special forms are not always hashable + return _create_serializer(typ, context) + + +@functools.lru_cache(maxsize=None) +def _fetch_serializer(typ: type) -> Serializer: + context = sys.modules[typ.__module__] + return _create_serializer(typ, context) + + +def _create_serializer(typ: TypeLike, context: Optional[ModuleType]) -> Serializer: + # check for well-known types + if typ is type(None): + return NoneSerializer() + elif typ is bool: + return BoolSerializer() + elif typ is int: + return IntSerializer() + elif typ is float: + return FloatSerializer() + elif typ is str: + return StringSerializer() + elif typ is bytes: + return BytesSerializer() + elif typ is datetime.datetime: + return DateTimeSerializer() + elif typ is datetime.date: + return DateSerializer() + elif typ is datetime.time: + return TimeSerializer() + elif typ is uuid.UUID: + return UUIDSerializer() + elif typ is ipaddress.IPv4Address: + return IPv4Serializer() + elif typ is ipaddress.IPv6Address: + return IPv6Serializer() + + # dynamically-typed collection types + if typ is list: + return UntypedListSerializer() + elif typ is dict: + return UntypedDictSerializer() + elif typ is set: + return UntypedSetSerializer() + elif typ is tuple: + return UntypedTupleSerializer() + + # generic types (e.g. list, dict, set, etc.) + origin_type = typing.get_origin(typ) + if origin_type is list: + (list_item_type,) = typing.get_args(typ) # unpack single tuple element + return TypedListSerializer(list_item_type, context) + elif origin_type is dict: + key_type, value_type = typing.get_args(typ) + if key_type is str: + return TypedStringDictSerializer(value_type, context) + elif issubclass(key_type, enum.Enum): + return TypedEnumDictSerializer(key_type, value_type, context) + elif origin_type is set: + (set_member_type,) = typing.get_args(typ) # unpack single tuple element + return TypedSetSerializer(set_member_type, context) + elif origin_type is tuple: + return TypedTupleSerializer(typing.get_args(typ), context) + elif origin_type is Union: + return UnionSerializer() + elif origin_type is Literal: + return LiteralSerializer(typing.get_args(typ), context) + + if is_type_annotated(typ): + return create_serializer(unwrap_annotated_type(typ)) + + # check if object has custom serialization method + convert_func = getattr(typ, "to_json", None) + if callable(convert_func): + return CustomSerializer(convert_func) + + if is_type_enum(typ): + return EnumSerializer() + if is_dataclass_type(typ): + return DataclassSerializer(typ, context) + if is_named_tuple_type(typ): + if getattr(typ, "__annotations__", None): + return TypedNamedTupleSerializer(typ, context) + else: + return UntypedNamedTupleSerializer(typ) + + # fail early if caller passes an object with an exotic type + if not isinstance(typ, type) or typ is FunctionType or typ is MethodType or typ is type or typ is ModuleType: + raise TypeError(f"object of type {typ} cannot be represented in JSON") + + if get_resolved_hints(typ): + return TypedClassSerializer(typ, context) + else: + return UntypedClassSerializer() + + +def 
object_to_json(obj: Any) -> JsonType:
+    """
+    Converts a Python object to a representation that can be exported to JSON.
+
+    * Fundamental types (e.g. numeric types) are written as is.
+    * Date and time types are serialized in the ISO 8601 format with time zone.
+    * A byte array is written as a string with Base64 encoding.
+    * UUIDs are written as a UUID string.
+    * Enumerations are written as their value.
+    * Containers (e.g. `list`, `dict`, `set`, `tuple`) are exported recursively.
+    * Objects with properties (including data class types) are converted to dictionaries of key-value pairs.
+    """
+
+    typ: type = type(obj)
+    generator = create_serializer(typ)
+    return generator.generate(obj)
diff --git a/llama_stack/strong_typing/slots.py b/llama_stack/strong_typing/slots.py
new file mode 100644
index 000000000..c1a3293d8
--- /dev/null
+++ b/llama_stack/strong_typing/slots.py
@@ -0,0 +1,27 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.
+
+from typing import Any, Dict, Tuple, Type, TypeVar
+
+T = TypeVar("T")
+
+
+class SlotsMeta(type):
+    def __new__(cls: Type[T], name: str, bases: Tuple[type, ...], ns: Dict[str, Any]) -> T:
+        # caller may have already provided slots, in which case just retain them and keep going
+        slots: Tuple[str, ...] = ns.get("__slots__", ())
+
+        # add fields with type annotations to slots
+        annotations: Dict[str, Any] = ns.get("__annotations__", {})
+        members = tuple(member for member in annotations.keys() if member not in slots)
+
+        # assign slots
+        ns["__slots__"] = slots + tuple(members)
+        return super().__new__(cls, name, bases, ns)  # type: ignore
+
+
+class Slots(metaclass=SlotsMeta):
+    pass
diff --git a/llama_stack/strong_typing/topological.py b/llama_stack/strong_typing/topological.py
new file mode 100644
index 000000000..28bf4bd0f
--- /dev/null
+++ b/llama_stack/strong_typing/topological.py
@@ -0,0 +1,89 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.
+
+"""
+Type-safe data interchange for Python data classes.
+
+:see: https://github.com/hunyadi/strong_typing
+"""
+
+from typing import Callable, Dict, Iterable, List, Optional, Set, TypeVar
+
+from .inspection import TypeCollector
+
+T = TypeVar("T")
+
+
+def topological_sort(graph: Dict[T, Set[T]]) -> List[T]:
+    """
+    Performs a topological sort of a graph.
+
+    Nodes with no outgoing edges are first. Nodes with no incoming edges are last.
+    The topological ordering is not unique.
+
+    :param graph: A dictionary of mappings from nodes to adjacent nodes. Keys and set members must be hashable.
+    :returns: The list of nodes in topological order.
+ """ + + # empty list that will contain the sorted nodes (in reverse order) + ordered: List[T] = [] + + seen: Dict[T, bool] = {} + + def _visit(n: T) -> None: + status = seen.get(n) + if status is not None: + if status: # node has a permanent mark + return + else: # node has a temporary mark + raise RuntimeError(f"cycle detected in graph for node {n}") + + seen[n] = False # apply temporary mark + for m in graph[n]: # visit all adjacent nodes + if m != n: # ignore self-referencing nodes + _visit(m) + + seen[n] = True # apply permanent mark + ordered.append(n) + + for n in graph.keys(): + _visit(n) + + return ordered + + +def type_topological_sort( + types: Iterable[type], + dependency_fn: Optional[Callable[[type], Iterable[type]]] = None, +) -> List[type]: + """ + Performs a topological sort of a list of types. + + Types that don't depend on other types (i.e. fundamental types) are first. Types on which no other types depend + are last. The topological ordering is not unique. + + :param types: A list of types (simple or composite). + :param dependency_fn: Returns a list of additional dependencies for a class (e.g. classes referenced by a foreign key). + :returns: The list of types in topological order. + """ + + if not all(isinstance(typ, type) for typ in types): + raise TypeError("expected a list of types") + + collector = TypeCollector() + collector.traverse_all(types) + graph = collector.graph + + if dependency_fn: + new_types: Set[type] = set() + for source_type, references in graph.items(): + dependent_types = dependency_fn(source_type) + references.update(dependent_types) + new_types.update(dependent_types) + for new_type in new_types: + graph[new_type] = set() + + return topological_sort(graph) diff --git a/llama_stack/templates/bedrock/bedrock.py b/llama_stack/templates/bedrock/bedrock.py index af1d48b7f..0b294824d 100644 --- a/llama_stack/templates/bedrock/bedrock.py +++ b/llama_stack/templates/bedrock/bedrock.py @@ -6,10 +6,9 @@ from pathlib import Path -from llama_models.sku_list import all_registered_models - from llama_stack.apis.models import ModelInput from llama_stack.distribution.datatypes import Provider, ToolGroupInput +from llama_stack.models.llama.sku_list import all_registered_models from llama_stack.providers.inline.vector_io.faiss.config import FaissVectorIOConfig from llama_stack.providers.remote.inference.bedrock.bedrock import MODEL_ALIASES from llama_stack.templates.template import DistributionTemplate, RunConfigSettings diff --git a/llama_stack/templates/cerebras/cerebras.py b/llama_stack/templates/cerebras/cerebras.py index 870240feb..4f6d0c8f3 100644 --- a/llama_stack/templates/cerebras/cerebras.py +++ b/llama_stack/templates/cerebras/cerebras.py @@ -6,10 +6,9 @@ from pathlib import Path -from llama_models.sku_list import all_registered_models - from llama_stack.apis.models.models import ModelType from llama_stack.distribution.datatypes import ModelInput, Provider, ToolGroupInput +from llama_stack.models.llama.sku_list import all_registered_models from llama_stack.providers.inline.inference.sentence_transformers import ( SentenceTransformersInferenceConfig, ) diff --git a/llama_stack/templates/fireworks/fireworks.py b/llama_stack/templates/fireworks/fireworks.py index e2e2ca99c..a6809fef6 100644 --- a/llama_stack/templates/fireworks/fireworks.py +++ b/llama_stack/templates/fireworks/fireworks.py @@ -6,8 +6,6 @@ from pathlib import Path -from llama_models.sku_list import all_registered_models - from llama_stack.apis.models.models import ModelType from 
llama_stack.distribution.datatypes import ( ModelInput, @@ -15,6 +13,7 @@ from llama_stack.distribution.datatypes import ( ShieldInput, ToolGroupInput, ) +from llama_stack.models.llama.sku_list import all_registered_models from llama_stack.providers.inline.inference.sentence_transformers import ( SentenceTransformersInferenceConfig, ) diff --git a/llama_stack/templates/nvidia/nvidia.py b/llama_stack/templates/nvidia/nvidia.py index d24c9ed48..ee22b5555 100644 --- a/llama_stack/templates/nvidia/nvidia.py +++ b/llama_stack/templates/nvidia/nvidia.py @@ -6,9 +6,8 @@ from pathlib import Path -from llama_models.sku_list import all_registered_models - from llama_stack.distribution.datatypes import ModelInput, Provider, ToolGroupInput +from llama_stack.models.llama.sku_list import all_registered_models from llama_stack.providers.remote.inference.nvidia import NVIDIAConfig from llama_stack.providers.remote.inference.nvidia.nvidia import _MODEL_ALIASES from llama_stack.templates.template import DistributionTemplate, RunConfigSettings diff --git a/llama_stack/templates/sambanova/sambanova.py b/llama_stack/templates/sambanova/sambanova.py index 6d7477c8e..c7a9428af 100644 --- a/llama_stack/templates/sambanova/sambanova.py +++ b/llama_stack/templates/sambanova/sambanova.py @@ -6,14 +6,13 @@ from pathlib import Path -from llama_models.sku_list import all_registered_models - from llama_stack.distribution.datatypes import ( ModelInput, Provider, ShieldInput, ToolGroupInput, ) +from llama_stack.models.llama.sku_list import all_registered_models from llama_stack.providers.remote.inference.sambanova import SambaNovaImplConfig from llama_stack.providers.remote.inference.sambanova.sambanova import MODEL_ALIASES from llama_stack.templates.template import DistributionTemplate, RunConfigSettings diff --git a/llama_stack/templates/together/together.py b/llama_stack/templates/together/together.py index 9ec5b38ba..f7b18e32a 100644 --- a/llama_stack/templates/together/together.py +++ b/llama_stack/templates/together/together.py @@ -6,8 +6,6 @@ from pathlib import Path -from llama_models.sku_list import all_registered_models - from llama_stack.apis.models.models import ModelType from llama_stack.distribution.datatypes import ( ModelInput, @@ -15,6 +13,7 @@ from llama_stack.distribution.datatypes import ( ShieldInput, ToolGroupInput, ) +from llama_stack.models.llama.sku_list import all_registered_models from llama_stack.providers.inline.inference.sentence_transformers import ( SentenceTransformersInferenceConfig, ) diff --git a/pyproject.toml b/pyproject.toml index feaae153b..8b0135c70 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -25,6 +25,7 @@ dependencies = [ "fire", "httpx", "huggingface-hub", + "jsonschema", "llama-models>=0.1.2", "llama-stack-client>=0.1.2", "prompt-toolkit", diff --git a/requirements.txt b/requirements.txt index 497feb764..40431e446 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,10 +1,10 @@ # This file was autogenerated by uv via the following command: -# uv export --frozen --no-hashes --no-emit-project +# uv export --frozen --no-hashes --no-emit-project --output-file=requirements.txt annotated-types==0.7.0 anyio==4.8.0 +attrs==25.1.0 blobfile==3.0.0 certifi==2025.1.31 -chardet==5.2.0 charset-normalizer==3.4.1 click==8.1.8 colorama==0.4.6 ; sys_platform == 'win32' @@ -19,6 +19,8 @@ httpx==0.28.1 huggingface-hub==0.28.1 idna==3.10 jinja2==3.1.5 +jsonschema==4.23.0 +jsonschema-specifications==2024.10.1 llama-models==0.1.2 llama-stack-client==0.1.2 lxml==5.3.0 @@ -35,14 +37,15 @@ 
pycryptodomex==3.21.0 pydantic==2.10.6 pydantic-core==2.27.2 pygments==2.19.1 -pypdf==5.2.0 python-dateutil==2.9.0.post0 python-dotenv==1.0.1 pytz==2025.1 pyyaml==6.0.2 +referencing==0.36.2 regex==2024.11.6 requests==2.32.3 rich==13.9.4 +rpds-py==0.22.3 setuptools==75.8.0 six==1.17.0 sniffio==1.3.1 diff --git a/tests/client-sdk/report.py b/tests/client-sdk/report.py index 543562541..d36fa827f 100644 --- a/tests/client-sdk/report.py +++ b/tests/client-sdk/report.py @@ -13,8 +13,12 @@ from typing import Optional from urllib.parse import urlparse import pytest -from llama_models.datatypes import CoreModelId -from llama_models.sku_list import ( +from metadata import API_MAPS +from pytest import CollectReport +from termcolor import cprint + +from llama_stack.models.llama.datatypes import CoreModelId +from llama_stack.models.llama.sku_list import ( all_registered_models, llama3_1_instruct_models, llama3_2_instruct_models, @@ -22,10 +26,6 @@ from llama_models.sku_list import ( llama3_instruct_models, safety_models, ) -from metadata import API_MAPS -from pytest import CollectReport -from termcolor import cprint - from llama_stack.providers.datatypes import Api from llama_stack.providers.tests.env import get_env_or_fail diff --git a/uv.lock b/uv.lock index 97ae52124..ed1e4bc2d 100644 --- a/uv.lock +++ b/uv.lock @@ -265,7 +265,7 @@ name = "click" version = "8.1.8" source = { registry = "https://pypi.org/simple" } dependencies = [ - { name = "colorama", marker = "platform_system == 'Windows'" }, + { name = "colorama", marker = "sys_platform == 'win32'" }, ] sdist = { url = "https://files.pythonhosted.org/packages/b9/2e/0090cbf739cee7d23781ad4b89a9894a41538e4fcf4c31dcdd705b78eb8b/click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a", size = 226593 } wheels = [ @@ -577,7 +577,7 @@ name = "ipykernel" version = "6.29.5" source = { registry = "https://pypi.org/simple" } dependencies = [ - { name = "appnope", marker = "platform_system == 'Darwin'" }, + { name = "appnope", marker = "sys_platform == 'darwin'" }, { name = "comm" }, { name = "debugpy" }, { name = "ipython" }, @@ -724,6 +724,7 @@ dependencies = [ { name = "fire" }, { name = "httpx" }, { name = "huggingface-hub" }, + { name = "jsonschema" }, { name = "llama-models" }, { name = "llama-stack-client" }, { name = "prompt-toolkit" }, @@ -768,6 +769,7 @@ requires-dist = [ { name = "fire" }, { name = "httpx" }, { name = "huggingface-hub" }, + { name = "jsonschema" }, { name = "llama-models", specifier = ">=0.1.2" }, { name = "llama-stack-client", specifier = ">=0.1.2" }, { name = "myst-parser", marker = "extra == 'docs'" }, @@ -1412,8 +1414,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/61/74/49f5d20c514ccc631b940cc9dfec45dcce418dc84a98463a2e2ebec33904/pycryptodomex-3.21.0-cp36-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:52e23a0a6e61691134aa8c8beba89de420602541afaae70f66e16060fdcd677e", size = 2257982 }, { url = "https://files.pythonhosted.org/packages/92/4b/d33ef74e2cc0025a259936661bb53432c5bbbadc561c5f2e023bcd73ce4c/pycryptodomex-3.21.0-cp36-abi3-win32.whl", hash = "sha256:a3d77919e6ff56d89aada1bd009b727b874d464cb0e2e3f00a49f7d2e709d76e", size = 1779052 }, { url = "https://files.pythonhosted.org/packages/5b/be/7c991840af1184009fc86267160948350d1bf875f153c97bb471ad944e40/pycryptodomex-3.21.0-cp36-abi3-win_amd64.whl", hash = "sha256:b0e9765f93fe4890f39875e6c90c96cb341767833cfa767f41b490b506fa9ec0", size = 1816307 }, - { url = 
"https://files.pythonhosted.org/packages/af/ac/24125ad36778914a36f08d61ba5338cb9159382c638d9761ee19c8de822c/pycryptodomex-3.21.0-pp27-pypy_73-manylinux2010_x86_64.whl", hash = "sha256:feaecdce4e5c0045e7a287de0c4351284391fe170729aa9182f6bd967631b3a8", size = 1694999 }, - { url = "https://files.pythonhosted.org/packages/93/73/be7a54a5903508070e5508925ba94493a1f326cfeecfff750e3eb250ea28/pycryptodomex-3.21.0-pp27-pypy_73-win32.whl", hash = "sha256:365aa5a66d52fd1f9e0530ea97f392c48c409c2f01ff8b9a39c73ed6f527d36c", size = 1769437 }, { url = "https://files.pythonhosted.org/packages/e5/9f/39a6187f3986841fa6a9f35c6fdca5030ef73ff708b45a993813a51d7d10/pycryptodomex-3.21.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:3efddfc50ac0ca143364042324046800c126a1d63816d532f2e19e6f2d8c0c31", size = 1619607 }, { url = "https://files.pythonhosted.org/packages/f8/70/60bb08e9e9841b18d4669fb69d84b64ce900aacd7eb0ebebd4c7b9bdecd3/pycryptodomex-3.21.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0df2608682db8279a9ebbaf05a72f62a321433522ed0e499bc486a6889b96bf3", size = 1653571 }, { url = "https://files.pythonhosted.org/packages/c9/6f/191b73509291c5ff0dddec9cc54797b1d73303c12b2e4017b24678e57099/pycryptodomex-3.21.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5823d03e904ea3e53aebd6799d6b8ec63b7675b5d2f4a4bd5e3adcb512d03b37", size = 1691548 }, @@ -2305,7 +2305,7 @@ name = "tqdm" version = "4.67.1" source = { registry = "https://pypi.org/simple" } dependencies = [ - { name = "colorama", marker = "platform_system == 'Windows'" }, + { name = "colorama", marker = "sys_platform == 'win32'" }, ] sdist = { url = "https://files.pythonhosted.org/packages/a8/4b/29b4ef32e036bb34e4ab51796dd745cdba7ed47ad142a9f4a1eb8e0c744d/tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2", size = 169737 } wheels = [ From 64328bfe625647389ef1731ae7f090c88404f1a3 Mon Sep 17 00:00:00 2001 From: Yuan Tang Date: Fri, 14 Feb 2025 12:19:53 -0500 Subject: [PATCH 15/37] fix: enable_session_persistence in AgentConfig should be optional (#1012) # What does this PR do? This issue was discovered in https://github.com/meta-llama/llama-stack/pull/1009#discussion_r1947036518. ## Test Plan This field is no longer required after the change. 
[//]: # (## Documentation) [//]: # (- [ ] Added a Changelog entry if the change is significant) --------- Signed-off-by: Yuan Tang Co-authored-by: Ashwin Bharambe --- docs/_static/llama-stack-spec.html | 6 +++--- docs/_static/llama-stack-spec.yaml | 2 +- llama_stack/apis/agents/agents.py | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/_static/llama-stack-spec.html b/docs/_static/llama-stack-spec.html index 026a061c8..17cf92341 100644 --- a/docs/_static/llama-stack-spec.html +++ b/docs/_static/llama-stack-spec.html @@ -2724,7 +2724,8 @@ "type": "string" }, "enable_session_persistence": { - "type": "boolean" + "type": "boolean", + "default": false }, "response_format": { "$ref": "#/components/schemas/ResponseFormat" @@ -2733,8 +2734,7 @@ "additionalProperties": false, "required": [ "model", - "instructions", - "enable_session_persistence" + "instructions" ] }, "AgentTool": { diff --git a/docs/_static/llama-stack-spec.yaml b/docs/_static/llama-stack-spec.yaml index e4f0398c0..f63374406 100644 --- a/docs/_static/llama-stack-spec.yaml +++ b/docs/_static/llama-stack-spec.yaml @@ -1660,13 +1660,13 @@ components: type: string enable_session_persistence: type: boolean + default: false response_format: $ref: '#/components/schemas/ResponseFormat' additionalProperties: false required: - model - instructions - - enable_session_persistence AgentTool: oneOf: - type: string diff --git a/llama_stack/apis/agents/agents.py b/llama_stack/apis/agents/agents.py index ccd15c3d6..367648ded 100644 --- a/llama_stack/apis/agents/agents.py +++ b/llama_stack/apis/agents/agents.py @@ -179,7 +179,7 @@ class AgentConfigCommon(BaseModel): class AgentConfig(AgentConfigCommon): model: str instructions: str - enable_session_persistence: bool + enable_session_persistence: Optional[bool] = False response_format: Optional[ResponseFormat] = None From 369cc513cbab67c4196a058b76616d80381e55ea Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?S=C3=A9bastien=20Han?= Date: Fri, 14 Feb 2025 18:22:03 +0100 Subject: [PATCH 16/37] fix: improve stack build on venv (#980) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit # What does this PR do? Added a pre_run_checks function to ensure a smooth environment setup by verifying prerequisites. It checks for an existing virtual environment, ensures uv is installed, and deactivates any active environment if necessary. Run the full build inside a venv created by 'uv'. Improved string handling in printf statements and added shellcheck suppressions for expected word splitting in pip commands. These enhancements improve robustness, prevent conflicts, and ensure a seamless setup process. 
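For reference, a rough sketch of the kind of checks described above (illustrative only; the actual implementation lives in llama_stack/distribution/build_venv.sh and the exact behavior, messages, and helper names may differ):

```bash
pre_run_checks() {
  local env_name="$1"

  # ensure uv is available before attempting the build
  if ! command -v uv >/dev/null 2>&1; then
    printf "Error: uv is not installed, please install it first\n" >&2
    exit 1
  fi

  # reuse the target virtual environment if it already exists
  if [ -d "$env_name" ]; then
    printf "Virtual environment %s already exists, reusing it\n" "$env_name"
  fi

  # deactivate any active virtual environment to avoid mixing dependencies
  if [ -n "${VIRTUAL_ENV:-}" ]; then
    printf "Deactivating current virtual environment %s\n" "$VIRTUAL_ENV"
    deactivate 2>/dev/null || true
  fi
}
```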
Signed-off-by: Sébastien Han - [ ] Addresses issue (#issue) ## Test Plan Run the following command on either Linux or MacOS: ``` llama stack build --template ollama --image-type venv --image-name foo + build_name=foo + env_name=llamastack-foo + pip_dependencies='datasets matplotlib autoevals transformers blobfile opentelemetry-sdk sentencepiece opentelemetry-exporter-otlp-proto-http ollama nltk redis pillow psycopg2-binary scikit-learn pandas faiss-cpu chromadb-client numpy chardet scipy aiohttp aiosqlite requests tqdm pypdf openai aiosqlite fastapi fire httpx uvicorn' + RED='\033[0;31m' + NC='\033[0m' + ENVNAME= +++ readlink -f /Users/leseb/Documents/AI/llama-stack/llama_stack/distribution/build_venv.sh ++ dirname /Users/leseb/Documents/AI/llama-stack/llama_stack/distribution/build_venv.sh + SCRIPT_DIR=/Users/leseb/Documents/AI/llama-stack/llama_stack/distribution + source /Users/leseb/Documents/AI/llama-stack/llama_stack/distribution/common.sh + pre_run_checks llamastack-foo + local env_name=llamastack-foo + is_command_available uv + command -v uv + '[' -d llamastack-foo ']' + run llamastack-foo 'datasets matplotlib autoevals transformers blobfile opentelemetry-sdk sentencepiece opentelemetry-exporter-otlp-proto-http ollama nltk redis pillow psycopg2-binary scikit-learn pandas faiss-cpu chromadb-client numpy chardet scipy aiohttp aiosqlite requests tqdm pypdf openai aiosqlite fastapi fire httpx uvicorn' 'sentence-transformers --no-deps#torch torchvision --index-url https://download.pytorch.org/whl/cpu' + local env_name=llamastack-foo + local 'pip_dependencies=datasets matplotlib autoevals transformers blobfile opentelemetry-sdk sentencepiece opentelemetry-exporter-otlp-proto-http ollama nltk redis pillow psycopg2-binary scikit-learn pandas faiss-cpu chromadb-client numpy chardet scipy aiohttp aiosqlite requests tqdm pypdf openai aiosqlite fastapi fire httpx uvicorn' + local 'special_pip_deps=sentence-transformers --no-deps#torch torchvision --index-url https://download.pytorch.org/whl/cpu' + echo 'Creating new virtual environment llamastack-foo' Creating new virtual environment llamastack-foo + uv venv llamastack-foo Using CPython 3.13.1 interpreter at: /opt/homebrew/opt/python@3.13/bin/python3.13 Creating virtual environment at: llamastack-foo Activate with: source llamastack-foo/bin/activate + source llamastack-foo/bin/activate ++ '[' -n x ']' ++ SCRIPT_PATH=llamastack-foo/bin/activate ++ '[' llamastack-foo/bin/activate = /Users/leseb/Documents/AI/llama-stack/llama_stack/distribution/build_venv.sh ']' ++ deactivate nondestructive ++ unset -f pydoc ++ '[' -z '' ']' ++ '[' -z '' ']' ++ hash -r ++ '[' -z '' ']' ++ unset VIRTUAL_ENV ++ unset VIRTUAL_ENV_PROMPT ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/Users/leseb/Documents/AI/llama-stack/llamastack-foo ++ '[' darwin24 = cygwin ']' ++ '[' darwin24 = msys ']' ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH='/Users/leseb/Documents/AI/llama-stack/.venv/bin:/opt/homebrew/opt/protobuf@21/bin:/opt/homebrew/opt/gnu-sed/libexec/gnubin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/usr/local/munki:/opt/podman/bin:/opt/homebrew/opt/protobuf@21/bin:/opt/homebrew/opt/gnu-sed/libexec/gnubin:/Users/leseb/.local/share/zinit/plugins/so-fancy---diff-so-fancy:/Users/leseb/.local/share/zinit/polaris/bin:/Users/leseb/.cargo/bin:/Users/leseb/Library/Application Support/Code/User/globalStorage/github.copilot-chat/debugCommand' ++ PATH='/Users/leseb/Documents/AI/llama-stack/llamastack-foo/bin:/Users/leseb/Documents/AI/llama-stack/.venv/bin:/opt/homebrew/opt/protobuf@21/bin:/opt/homebrew/opt/gnu-sed/libexec/gnubin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/usr/local/munki:/opt/podman/bin:/opt/homebrew/opt/protobuf@21/bin:/opt/homebrew/opt/gnu-sed/libexec/gnubin:/Users/leseb/.local/share/zinit/plugins/so-fancy---diff-so-fancy:/Users/leseb/.local/share/zinit/polaris/bin:/Users/leseb/.cargo/bin:/Users/leseb/Library/Application Support/Code/User/globalStorage/github.copilot-chat/debugCommand' ++ export PATH ++ '[' x '!=' x ']' +++ basename /Users/leseb/Documents/AI/llama-stack/llamastack-foo ++ VIRTUAL_ENV_PROMPT='(llamastack-foo) ' ++ export VIRTUAL_ENV_PROMPT ++ '[' -z '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1= ++ PS1='(llamastack-foo) ' ++ export PS1 ++ alias pydoc ++ true ++ hash -r + '[' -n '' ']' + '[' -n '' ']' + uv pip install --no-cache-dir llama-stack Using Python 3.13.1 environment at: llamastack-foo Resolved 50 packages in 1.25s Built fire==0.7.0 Prepared 50 packages in 1.22s Installed 50 packages in 126ms + annotated-types==0.7.0 + anyio==4.8.0 + blobfile==3.0.0 + certifi==2025.1.31 + charset-normalizer==3.4.1 + click==8.1.8 + distro==1.9.0 + filelock==3.17.0 + fire==0.7.0 + fsspec==2025.2.0 + h11==0.14.0 + httpcore==1.0.7 + httpx==0.28.1 + huggingface-hub==0.28.1 + idna==3.10 + jinja2==3.1.5 + llama-models==0.1.2 + llama-stack==0.1.2 + llama-stack-client==0.1.2 + lxml==5.3.1 + markdown-it-py==3.0.0 + markupsafe==3.0.2 + mdurl==0.1.2 + numpy==2.2.2 + packaging==24.2 + pandas==2.2.3 + pillow==11.1.0 + prompt-toolkit==3.0.50 + pyaml==25.1.0 + pycryptodomex==3.21.0 + pydantic==2.10.6 + pydantic-core==2.27.2 + pygments==2.19.1 + python-dateutil==2.9.0.post0 + python-dotenv==1.0.1 + pytz==2025.1 + pyyaml==6.0.2 + regex==2024.11.6 + requests==2.32.3 + rich==13.9.4 + setuptools==75.8.0 + six==1.17.0 + sniffio==1.3.1 + termcolor==2.5.0 + tiktoken==0.8.0 + tqdm==4.67.1 + typing-extensions==4.12.2 + tzdata==2025.1 + urllib3==2.3.0 + wcwidth==0.2.13 + '[' -n '' ']' + printf 'Installing pip dependencies\n' Installing pip dependencies + uv pip install datasets matplotlib autoevals transformers blobfile opentelemetry-sdk sentencepiece 
opentelemetry-exporter-otlp-proto-http ollama nltk redis pillow psycopg2-binary scikit-learn pandas faiss-cpu chromadb-client numpy chardet scipy aiohttp aiosqlite requests tqdm pypdf openai aiosqlite fastapi fire httpx uvicorn Using Python 3.13.1 environment at: llamastack-foo Resolved 105 packages in 37ms Uninstalled 2 packages in 65ms Installed 72 packages in 195ms + aiohappyeyeballs==2.4.6 + aiohttp==3.11.12 + aiosignal==1.3.2 + aiosqlite==0.21.0 + attrs==25.1.0 + autoevals==0.0.119 + backoff==2.2.1 + braintrust-core==0.0.58 + chardet==5.2.0 + chevron==0.14.0 + chromadb-client==0.6.3 + contourpy==1.3.1 + cycler==0.12.1 + datasets==3.2.0 + deprecated==1.2.18 + dill==0.3.8 + faiss-cpu==1.10.0 + fastapi==0.115.8 + fonttools==4.56.0 + frozenlist==1.5.0 - fsspec==2025.2.0 + fsspec==2024.9.0 + googleapis-common-protos==1.66.0 + grpcio==1.70.0 + importlib-metadata==8.5.0 + jiter==0.8.2 + joblib==1.4.2 + jsonschema==4.23.0 + jsonschema-specifications==2024.10.1 + kiwisolver==1.4.8 + levenshtein==0.26.1 + matplotlib==3.10.0 + monotonic==1.6 + multidict==6.1.0 + multiprocess==0.70.16 + nltk==3.9.1 - numpy==2.2.2 + numpy==1.26.4 + ollama==0.4.7 + openai==1.61.1 + opentelemetry-api==1.30.0 + opentelemetry-exporter-otlp-proto-common==1.30.0 + opentelemetry-exporter-otlp-proto-grpc==1.30.0 + opentelemetry-exporter-otlp-proto-http==1.30.0 + opentelemetry-proto==1.30.0 + opentelemetry-sdk==1.30.0 + opentelemetry-semantic-conventions==0.51b0 + orjson==3.10.15 + overrides==7.7.0 + posthog==3.12.0 + propcache==0.2.1 + protobuf==5.29.3 + psycopg2-binary==2.9.10 + pyarrow==19.0.0 + pyparsing==3.2.1 + pypdf==5.3.0 + rapidfuzz==3.12.1 + redis==5.2.1 + referencing==0.36.2 + rpds-py==0.22.3 + safetensors==0.5.2 + scikit-learn==1.6.1 + scipy==1.15.1 + sentencepiece==0.2.0 + starlette==0.45.3 + tenacity==9.0.0 + threadpoolctl==3.5.0 + tokenizers==0.21.0 + transformers==4.48.3 + uvicorn==0.34.0 + wrapt==1.17.2 + xxhash==3.5.0 + yarl==1.18.3 + zipp==3.21.0 + '[' -n 'sentence-transformers --no-deps#torch torchvision --index-url https://download.pytorch.org/whl/cpu' ']' + IFS='#' + read -ra parts + for part in '"${parts[@]}"' + echo 'sentence-transformers --no-deps' sentence-transformers --no-deps + uv pip install sentence-transformers --no-deps Using Python 3.13.1 environment at: llamastack-foo Resolved 1 package in 141ms Installed 1 package in 6ms + sentence-transformers==3.4.1 + for part in '"${parts[@]}"' + echo 'torch torchvision --index-url https://download.pytorch.org/whl/cpu' torch torchvision --index-url https://download.pytorch.org/whl/cpu + uv pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu Using Python 3.13.1 environment at: llamastack-foo Resolved 13 packages in 2.15s Installed 5 packages in 324ms + mpmath==1.3.0 + networkx==3.3 + sympy==1.13.1 + torch==2.6.0 + torchvision==0.21.0 Build Successful! 
``` Run: ``` $ source llamastack-foo/bin/activate $ INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct" OLLAMA_INFERENCE_MODEL="llama3.2:3b-instruct-fp16" python -m llama_stack.distribution.server.server --yaml-config ./llama_stack/templates/ollama/run.yaml --port 5001 Using config file: llama_stack/templates/ollama/run.yaml Run configuration: apis: - agents - datasetio - eval - inference - safety - scoring - telemetry - tool_runtime - vector_io container_image: null datasets: [] eval_tasks: [] image_name: ollama metadata_store: db_path: /Users/leseb/.llama/distributions/ollama/registry.db namespace: null type: sqlite models: - metadata: {} model_id: meta-llama/Llama-3.2-3B-Instruct model_type: !!python/object/apply:llama_stack.apis.models.models.ModelType - llm provider_id: ollama provider_model_id: null - metadata: embedding_dimension: 384 model_id: all-MiniLM-L6-v2 model_type: !!python/object/apply:llama_stack.apis.models.models.ModelType - embedding provider_id: sentence-transformers provider_model_id: null providers: agents: - config: persistence_store: db_path: /Users/leseb/.llama/distributions/ollama/agents_store.db namespace: null type: sqlite provider_id: meta-reference provider_type: inline::meta-reference datasetio: - config: {} provider_id: huggingface provider_type: remote::huggingface - config: {} provider_id: localfs provider_type: inline::localfs eval: - config: {} provider_id: meta-reference provider_type: inline::meta-reference inference: - config: url: http://localhost:11434 provider_id: ollama provider_type: remote::ollama - config: {} provider_id: sentence-transformers provider_type: inline::sentence-transformers safety: - config: {} provider_id: llama-guard provider_type: inline::llama-guard scoring: - config: {} provider_id: basic provider_type: inline::basic - config: {} provider_id: llm-as-judge provider_type: inline::llm-as-judge - config: openai_api_key: '********' provider_id: braintrust provider_type: inline::braintrust telemetry: - config: service_name: llama-stack sinks: console,sqlite sqlite_db_path: /Users/leseb/.llama/distributions/ollama/trace_store.db provider_id: meta-reference provider_type: inline::meta-reference tool_runtime: - config: api_key: '********' max_results: 3 provider_id: brave-search provider_type: remote::brave-search - config: api_key: '********' max_results: 3 provider_id: tavily-search provider_type: remote::tavily-search - config: {} provider_id: code-interpreter provider_type: inline::code-interpreter - config: {} provider_id: rag-runtime provider_type: inline::rag-runtime vector_io: - config: kvstore: db_path: /Users/leseb/.llama/distributions/ollama/faiss_store.db namespace: null type: sqlite provider_id: faiss provider_type: inline::faiss scoring_fns: [] server: port: 8321 tls_certfile: null tls_keyfile: null shields: [] tool_groups: - args: null mcp_endpoint: null provider_id: tavily-search toolgroup_id: builtin::websearch - args: null mcp_endpoint: null provider_id: rag-runtime toolgroup_id: builtin::rag - args: null mcp_endpoint: null provider_id: code-interpreter toolgroup_id: builtin::code_interpreter vector_dbs: [] version: '2' Warning: `bwrap` is not available. Code interpreter tool will not work correctly. 
modules.json: 100%|███████████████████████████████████████████████████████████| 349/349 [00:00<00:00, 485kB/s] config_sentence_transformers.json: 100%|██████████████████████████████████████| 116/116 [00:00<00:00, 498kB/s] README.md: 100%|█████████████████████████████████████████████████████████| 10.7k/10.7k [00:00<00:00, 20.5MB/s] sentence_bert_config.json: 100%|████████████████████████████████████████████| 53.0/53.0 [00:00<00:00, 583kB/s] config.json: 100%|███████████████████████████████████████████████████████████| 612/612 [00:00<00:00, 4.63MB/s] model.safetensors: 100%|█████████████████████████████████████████████████| 90.9M/90.9M [00:02<00:00, 36.6MB/s] tokenizer_config.json: 100%|█████████████████████████████████████████████████| 350/350 [00:00<00:00, 4.27MB/s] vocab.txt: 100%|███████████████████████████████████████████████████████████| 232k/232k [00:00<00:00, 1.90MB/s] tokenizer.json: 100%|██████████████████████████████████████████████████████| 466k/466k [00:00<00:00, 2.23MB/s] special_tokens_map.json: 100%|███████████████████████████████████████████████| 112/112 [00:00<00:00, 1.47MB/s] 1_Pooling/config.json: 100%|██████████████████████████████████████████████████| 190/190 [00:00<00:00, 841kB/s] Serving API tool_groups GET /v1/tools/{tool_name} GET /v1/toolgroups/{toolgroup_id} GET /v1/toolgroups GET /v1/tools POST /v1/toolgroups DELETE /v1/toolgroups/{toolgroup_id} Serving API tool_runtime POST /v1/tool-runtime/invoke GET /v1/tool-runtime/list-tools POST /v1/tool-runtime/rag-tool/insert POST /v1/tool-runtime/rag-tool/query Serving API vector_io POST /v1/vector-io/insert POST /v1/vector-io/query Serving API telemetry GET /v1/telemetry/traces/{trace_id}/spans/{span_id} GET /v1/telemetry/spans/{span_id}/tree GET /v1/telemetry/traces/{trace_id} POST /v1/telemetry/events GET /v1/telemetry/spans GET /v1/telemetry/traces POST /v1/telemetry/spans/export Serving API models GET /v1/models/{model_id} GET /v1/models POST /v1/models DELETE /v1/models/{model_id} Serving API eval POST /v1/eval/tasks/{task_id}/evaluations DELETE /v1/eval/tasks/{task_id}/jobs/{job_id} GET /v1/eval/tasks/{task_id}/jobs/{job_id}/result GET /v1/eval/tasks/{task_id}/jobs/{job_id} POST /v1/eval/tasks/{task_id}/jobs Serving API datasets GET /v1/datasets/{dataset_id} GET /v1/datasets POST /v1/datasets DELETE /v1/datasets/{dataset_id} Serving API scoring_functions GET /v1/scoring-functions/{scoring_fn_id} GET /v1/scoring-functions POST /v1/scoring-functions Serving API inspect GET /v1/health GET /v1/inspect/providers GET /v1/inspect/routes GET /v1/version Serving API scoring POST /v1/scoring/score POST /v1/scoring/score-batch Serving API shields GET /v1/shields/{identifier} GET /v1/shields POST /v1/shields Serving API vector_dbs GET /v1/vector-dbs/{vector_db_id} GET /v1/vector-dbs POST /v1/vector-dbs DELETE /v1/vector-dbs/{vector_db_id} Serving API eval_tasks GET /v1/eval-tasks/{eval_task_id} GET /v1/eval-tasks POST /v1/eval-tasks Serving API agents POST /v1/agents POST /v1/agents/{agent_id}/session POST /v1/agents/{agent_id}/session/{session_id}/turn DELETE /v1/agents/{agent_id} DELETE /v1/agents/{agent_id}/session/{session_id} GET /v1/agents/{agent_id}/session/{session_id} GET /v1/agents/{agent_id}/session/{session_id}/turn/{turn_id}/step/{step_id} GET /v1/agents/{agent_id}/session/{session_id}/turn/{turn_id} Serving API inference POST /v1/inference/chat-completion POST /v1/inference/completion POST /v1/inference/embeddings Serving API datasetio POST /v1/datasetio/rows GET /v1/datasetio/rows Serving API safety POST 
/v1/safety/run-shield Listening on ['::', '0.0.0.0']:5001 INFO: Started server process [39145] INFO: Waiting for application startup. INFO: ASGI 'lifespan' protocol appears unsupported. INFO: Application startup complete. INFO: Uvicorn running on http://['::', '0.0.0.0']:5001 (Press CTRL+C to quit) ``` ## Sources Please link relevant resources if necessary. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Ran pre-commit to handle lint / formatting issues. - [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section? - [ ] Updated relevant documentation. - [ ] Wrote necessary unit or integration tests. Signed-off-by: Sébastien Han --- llama_stack/cli/stack/build.py | 5 +-- llama_stack/distribution/build.py | 1 - llama_stack/distribution/build_venv.sh | 56 ++++++++++++++++++++------ llama_stack/distribution/common.sh | 5 +++ 4 files changed, 51 insertions(+), 16 deletions(-) diff --git a/llama_stack/cli/stack/build.py b/llama_stack/cli/stack/build.py index 729bd3ff1..ca4c0d8ce 100644 --- a/llama_stack/cli/stack/build.py +++ b/llama_stack/cli/stack/build.py @@ -56,9 +56,8 @@ class StackBuild(Subcommand): "--image-name", type=str, help=textwrap.dedent( - """[for image-type=conda] Name of the conda environment to use for the build. If -not specified, currently active Conda environment will be used. If no Conda -environment is active, you must specify a name. + """[for image-type=conda|venv] Name of the conda or virtual environment to use for +the build. If not specified, currently active Conda environment will be used if found. """ ), default=None, diff --git a/llama_stack/distribution/build.py b/llama_stack/distribution/build.py index 9422c8457..511817de8 100644 --- a/llama_stack/distribution/build.py +++ b/llama_stack/distribution/build.py @@ -126,7 +126,6 @@ def build_image( args = [ script, str(image_name), - str(build_file_path), " ".join(normal_deps), ] diff --git a/llama_stack/distribution/build_venv.sh b/llama_stack/distribution/build_venv.sh index 3cb290bb7..0b0bffcfd 100755 --- a/llama_stack/distribution/build_venv.sh +++ b/llama_stack/distribution/build_venv.sh @@ -24,23 +24,21 @@ if [ -n "$LLAMA_MODELS_DIR" ]; then fi if [ "$#" -lt 3 ]; then - echo "Usage: $0 []" >&2 + echo "Usage: $0 []" >&2 echo "Example: $0 mybuild ./my-stack-build.yaml 'numpy pandas scipy'" >&2 exit 1 fi -special_pip_deps="$4" +special_pip_deps="$3" set -euo pipefail build_name="$1" env_name="llamastack-$build_name" -build_file_path="$2" -pip_dependencies="$3" +pip_dependencies="$2" # Define color codes RED='\033[0;31m' -GREEN='\033[0;32m' NC='\033[0m' # No Color # this is set if we actually create a new conda in which case we need to clean up @@ -49,34 +47,63 @@ ENVNAME="" SCRIPT_DIR=$(dirname "$(readlink -f "$0")") source "$SCRIPT_DIR/common.sh" +# pre-run checks to make sure we can proceed with the installation +pre_run_checks() { + local env_name="$1" + + if ! is_command_available uv; then + echo "uv is not installed, trying to install it." + if ! is_command_available pip; then + echo "pip is not installed, cannot automatically install 'uv'." + echo "Follow this link to install it:" + echo "https://docs.astral.sh/uv/getting-started/installation/" + exit 1 + else + pip install uv + fi + fi + + # checking if an environment with the same name already exists + if [ -d "$env_name" ]; then + echo "Environment '$env_name' already exists, re-using it." 
+ fi +} + run() { local env_name="$1" local pip_dependencies="$2" local special_pip_deps="$3" - pip install uv + echo "Using virtual environment $env_name" + uv venv "$env_name" + # shellcheck source=/dev/null + source "$env_name/bin/activate" if [ -n "$TEST_PYPI_VERSION" ]; then # these packages are damaged in test-pypi, so install them first uv pip install fastapi libcst + # shellcheck disable=SC2086 + # we are building a command line so word splitting is expected uv pip install --extra-index-url https://test.pypi.org/simple/ \ - llama-models==$TEST_PYPI_VERSION llama-stack==$TEST_PYPI_VERSION \ + llama-models=="$TEST_PYPI_VERSION" llama-stack=="$TEST_PYPI_VERSION" \ $pip_dependencies if [ -n "$special_pip_deps" ]; then IFS='#' read -ra parts <<<"$special_pip_deps" for part in "${parts[@]}"; do echo "$part" + # shellcheck disable=SC2086 + # we are building a command line so word splitting is expected uv pip install $part done fi else - # Re-installing llama-stack in the new conda environment + # Re-installing llama-stack in the new virtual environment if [ -n "$LLAMA_STACK_DIR" ]; then if [ ! -d "$LLAMA_STACK_DIR" ]; then - printf "${RED}Warning: LLAMA_STACK_DIR is set but directory does not exist: $LLAMA_STACK_DIR${NC}\n" >&2 + printf "${RED}Warning: LLAMA_STACK_DIR is set but directory does not exist: %s${NC}\n" "$LLAMA_STACK_DIR" >&2 exit 1 fi - printf "Installing from LLAMA_STACK_DIR: $LLAMA_STACK_DIR\n" + printf "Installing from LLAMA_STACK_DIR: %s\n" "$LLAMA_STACK_DIR" uv pip install --no-cache-dir -e "$LLAMA_STACK_DIR" else uv pip install --no-cache-dir llama-stack @@ -84,26 +111,31 @@ run() { if [ -n "$LLAMA_MODELS_DIR" ]; then if [ ! -d "$LLAMA_MODELS_DIR" ]; then - printf "${RED}Warning: LLAMA_MODELS_DIR is set but directory does not exist: $LLAMA_MODELS_DIR${NC}\n" >&2 + printf "${RED}Warning: LLAMA_MODELS_DIR is set but directory does not exist: %s${NC}\n" "$LLAMA_MODELS_DIR" >&2 exit 1 fi - printf "Installing from LLAMA_MODELS_DIR: $LLAMA_MODELS_DIR\n" + printf "Installing from LLAMA_MODELS_DIR: %s\n" "$LLAMA_MODELS_DIR" uv pip uninstall llama-models uv pip install --no-cache-dir -e "$LLAMA_MODELS_DIR" fi # Install pip dependencies printf "Installing pip dependencies\n" + # shellcheck disable=SC2086 + # we are building a command line so word splitting is expected uv pip install $pip_dependencies if [ -n "$special_pip_deps" ]; then IFS='#' read -ra parts <<<"$special_pip_deps" for part in "${parts[@]}"; do echo "$part" + # shellcheck disable=SC2086 + # we are building a command line so word splitting is expected uv pip install $part done fi fi } +pre_run_checks "$env_name" run "$env_name" "$pip_dependencies" "$special_pip_deps" diff --git a/llama_stack/distribution/common.sh b/llama_stack/distribution/common.sh index 963eb395b..171023389 100755 --- a/llama_stack/distribution/common.sh +++ b/llama_stack/distribution/common.sh @@ -38,3 +38,8 @@ setup_cleanup_handlers() { conda deactivate } + +# check if a command is present +is_command_available() { + command -v "$1" &>/dev/null +} From 3d88b81ccf0cd4f2d0fc2361ac48eb84f1d411a2 Mon Sep 17 00:00:00 2001 From: Reid <61492567+reidliu41@users.noreply.github.com> Date: Sat, 15 Feb 2025 01:33:20 +0800 Subject: [PATCH 17/37] fix: remove the empty line (#1097) # What does this PR do? [Provide a short summary of what this PR does and why. Link to relevant issues if applicable.] 
Remove the empty line from help ``` before: $ llama model download --help --max-parallel MAX_PARALLEL Maximum number of concurrent downloads --ignore-patterns IGNORE_PATTERNS <<<<<<<<>>>>>>>>> For source=huggingface, files matching any of the patterns are not downloaded. Defaults to ignoring safetensors files to avoid downloading duplicate weights. after: $ llama model download --help --max-parallel MAX_PARALLEL Maximum number of concurrent downloads --ignore-patterns IGNORE_PATTERNS For source=huggingface, files matching any of the patterns are not downloaded. Defaults to ignoring safetensors files to avoid downloading duplicate weights. ``` [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan [Describe the tests you ran to verify your changes with result summaries. *Provide clear instructions so the plan can be easily re-executed.*] [//]: # (## Documentation) Signed-off-by: reidliu Co-authored-by: reidliu --- llama_stack/cli/download.py | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/llama_stack/cli/download.py b/llama_stack/cli/download.py index 6b0463c10..8afc6d31d 100644 --- a/llama_stack/cli/download.py +++ b/llama_stack/cli/download.py @@ -83,8 +83,7 @@ def setup_download_parser(parser: argparse.ArgumentParser) -> None: type=str, required=False, default="*.safetensors", - help=""" -For source=huggingface, files matching any of the patterns are not downloaded. Defaults to ignoring + help="""For source=huggingface, files matching any of the patterns are not downloaded. Defaults to ignoring safetensors files to avoid downloading duplicate weights. """, ) From 9b2fe6beb14edf4d34e4ce7c8b6ea84e5768c86e Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Fri, 14 Feb 2025 19:57:18 +0000 Subject: [PATCH 18/37] Bump version to 0.1.3 --- pyproject.toml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pyproject.toml b/pyproject.toml index 8b0135c70..71af2cc99 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta" [project] name = "llama_stack" -version = "0.1.2" +version = "0.1.3" authors = [{ name = "Meta Llama", email = "llama-oss@meta.com" }] description = "Llama Stack" readme = "README.md" @@ -26,8 +26,8 @@ dependencies = [ "httpx", "huggingface-hub", "jsonschema", - "llama-models>=0.1.2", - "llama-stack-client>=0.1.2", + "llama-models>=0.1.3", + "llama-stack-client>=0.1.3", "prompt-toolkit", "python-dotenv", "pydantic>=2", From 00613d9014adbd66857a05d18f366e2fb2e40342 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?S=C3=A9bastien=20Han?= Date: Fri, 14 Feb 2025 21:26:04 +0100 Subject: [PATCH 19/37] build: resync uv and deps on 0.1.3 (#1108) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit # What does this PR do? The bot just updated the project to 0.1.3 in https://github.com/meta-llama/llama-stack/commits?author=github-actions%5Bbot%5D but the deps need to be synced. [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan [Describe the tests you ran to verify your changes with result summaries. 
*Provide clear instructions so the plan can be easily re-executed.*] [//]: # (## Documentation) Signed-off-by: Sébastien Han --- requirements.txt | 4 ++-- uv.lock | 26 ++++++++++++++------------ 2 files changed, 16 insertions(+), 14 deletions(-) diff --git a/requirements.txt b/requirements.txt index 40431e446..b72c240bc 100644 --- a/requirements.txt +++ b/requirements.txt @@ -21,8 +21,8 @@ idna==3.10 jinja2==3.1.5 jsonschema==4.23.0 jsonschema-specifications==2024.10.1 -llama-models==0.1.2 -llama-stack-client==0.1.2 +llama-models==0.1.3 +llama-stack-client==0.1.3 lxml==5.3.0 markdown-it-py==3.0.0 markupsafe==3.0.2 diff --git a/uv.lock b/uv.lock index ed1e4bc2d..336d67c0b 100644 --- a/uv.lock +++ b/uv.lock @@ -265,7 +265,7 @@ name = "click" version = "8.1.8" source = { registry = "https://pypi.org/simple" } dependencies = [ - { name = "colorama", marker = "sys_platform == 'win32'" }, + { name = "colorama", marker = "platform_system == 'Windows'" }, ] sdist = { url = "https://files.pythonhosted.org/packages/b9/2e/0090cbf739cee7d23781ad4b89a9894a41538e4fcf4c31dcdd705b78eb8b/click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a", size = 226593 } wheels = [ @@ -577,7 +577,7 @@ name = "ipykernel" version = "6.29.5" source = { registry = "https://pypi.org/simple" } dependencies = [ - { name = "appnope", marker = "sys_platform == 'darwin'" }, + { name = "appnope", marker = "platform_system == 'Darwin'" }, { name = "comm" }, { name = "debugpy" }, { name = "ipython" }, @@ -701,7 +701,7 @@ wheels = [ [[package]] name = "llama-models" -version = "0.1.2" +version = "0.1.3" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "jinja2" }, @@ -710,14 +710,14 @@ dependencies = [ { name = "pyyaml" }, { name = "tiktoken" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/b5/f2/ed8310d4677cd38ab45ffba45aea2a4e9882b640045ad9c3198ac69e5a85/llama_models-0.1.2.tar.gz", hash = "sha256:1266eaec7a8db336e4ed034d2b494189ccb7fd6d6b7aefe874eee749a4340b9b", size = 1608069 } +sdist = { url = "https://files.pythonhosted.org/packages/0b/39/b8e2c02bc5ce1c0ba4e249532e0eb384ad7dae54a8f53198c8ff9aded41e/llama_models-0.1.3.tar.gz", hash = "sha256:2f339e67b8bbd98729bd2052c2cb8a916ef8f7d8a05337febad8879c6718c24a", size = 1568353 } wheels = [ - { url = "https://files.pythonhosted.org/packages/55/a7/34b9e88ef4109759c8881f43b8006139e3d13d54c440b8c571b253655f54/llama_models-0.1.2-py3-none-any.whl", hash = "sha256:8aa5287d1c6325698991ff677e71148cac347e07493bb5b3ab891e614b89e1f8", size = 1651273 }, + { url = "https://files.pythonhosted.org/packages/8c/df/a39f85cce6fcab962f7a7113063a6b2b08d0f66ac8ba4b9b12f21f398885/llama_models-0.1.3-py3-none-any.whl", hash = "sha256:87d92027e27c6b3e905158751758bcb7dabbdca1d995592e8e46fd2160daa844", size = 1587292 }, ] [[package]] name = "llama-stack" -version = "0.1.2" +version = "0.1.3" source = { editable = "." 
} dependencies = [ { name = "blobfile" }, @@ -770,8 +770,8 @@ requires-dist = [ { name = "httpx" }, { name = "huggingface-hub" }, { name = "jsonschema" }, - { name = "llama-models", specifier = ">=0.1.2" }, - { name = "llama-stack-client", specifier = ">=0.1.2" }, + { name = "llama-models", specifier = ">=0.1.3" }, + { name = "llama-stack-client", specifier = ">=0.1.3" }, { name = "myst-parser", marker = "extra == 'docs'" }, { name = "nbval", marker = "extra == 'dev'" }, { name = "pre-commit", marker = "extra == 'dev'" }, @@ -800,7 +800,7 @@ requires-dist = [ [[package]] name = "llama-stack-client" -version = "0.1.2" +version = "0.1.3" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "anyio" }, @@ -817,9 +817,9 @@ dependencies = [ { name = "tqdm" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/9e/75/8b41a3026c871a8650cd8d2cfda9f891a9163458813574f36518bb40afe4/llama_stack_client-0.1.2.tar.gz", hash = "sha256:94277ddae52be557d771dcdc15d85af9012b5aa87439dd69ec1dc0ff486b0c8e", size = 188023 } +sdist = { url = "https://files.pythonhosted.org/packages/23/bb/f8b21745fcae811d75685202fe127c269f8387ff6374cf8f9b0be9b7eaa7/llama_stack_client-0.1.3.tar.gz", hash = "sha256:8ba46e199ac1a0e0bdcbe55fc776dd0b8f55771418c5f8bf7b419b7a0077fe7a", size = 191842 } wheels = [ - { url = "https://files.pythonhosted.org/packages/c4/32/3a3a97eecff1f1e3a1dc90e9b00681abea11ec4f43a7ca549981261e18b6/llama_stack_client-0.1.2-py3-none-any.whl", hash = "sha256:85ff0fb57a62d7d0470cfaa2b07a595c9fb3483297944d5e5a066db850d38ccd", size = 359415 }, + { url = "https://files.pythonhosted.org/packages/88/52/3ef8405daad5649f11b5708f1df9eca4fa229e499ac198a99c42f1075a08/llama_stack_client-0.1.3-py3-none-any.whl", hash = "sha256:e7b66051918bc0685dfee6103d3efbcec3ae193b3e67edf025cd088539463245", size = 366471 }, ] [[package]] @@ -1414,6 +1414,8 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/61/74/49f5d20c514ccc631b940cc9dfec45dcce418dc84a98463a2e2ebec33904/pycryptodomex-3.21.0-cp36-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:52e23a0a6e61691134aa8c8beba89de420602541afaae70f66e16060fdcd677e", size = 2257982 }, { url = "https://files.pythonhosted.org/packages/92/4b/d33ef74e2cc0025a259936661bb53432c5bbbadc561c5f2e023bcd73ce4c/pycryptodomex-3.21.0-cp36-abi3-win32.whl", hash = "sha256:a3d77919e6ff56d89aada1bd009b727b874d464cb0e2e3f00a49f7d2e709d76e", size = 1779052 }, { url = "https://files.pythonhosted.org/packages/5b/be/7c991840af1184009fc86267160948350d1bf875f153c97bb471ad944e40/pycryptodomex-3.21.0-cp36-abi3-win_amd64.whl", hash = "sha256:b0e9765f93fe4890f39875e6c90c96cb341767833cfa767f41b490b506fa9ec0", size = 1816307 }, + { url = "https://files.pythonhosted.org/packages/af/ac/24125ad36778914a36f08d61ba5338cb9159382c638d9761ee19c8de822c/pycryptodomex-3.21.0-pp27-pypy_73-manylinux2010_x86_64.whl", hash = "sha256:feaecdce4e5c0045e7a287de0c4351284391fe170729aa9182f6bd967631b3a8", size = 1694999 }, + { url = "https://files.pythonhosted.org/packages/93/73/be7a54a5903508070e5508925ba94493a1f326cfeecfff750e3eb250ea28/pycryptodomex-3.21.0-pp27-pypy_73-win32.whl", hash = "sha256:365aa5a66d52fd1f9e0530ea97f392c48c409c2f01ff8b9a39c73ed6f527d36c", size = 1769437 }, { url = "https://files.pythonhosted.org/packages/e5/9f/39a6187f3986841fa6a9f35c6fdca5030ef73ff708b45a993813a51d7d10/pycryptodomex-3.21.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:3efddfc50ac0ca143364042324046800c126a1d63816d532f2e19e6f2d8c0c31", size = 1619607 }, { url = 
"https://files.pythonhosted.org/packages/f8/70/60bb08e9e9841b18d4669fb69d84b64ce900aacd7eb0ebebd4c7b9bdecd3/pycryptodomex-3.21.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0df2608682db8279a9ebbaf05a72f62a321433522ed0e499bc486a6889b96bf3", size = 1653571 }, { url = "https://files.pythonhosted.org/packages/c9/6f/191b73509291c5ff0dddec9cc54797b1d73303c12b2e4017b24678e57099/pycryptodomex-3.21.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5823d03e904ea3e53aebd6799d6b8ec63b7675b5d2f4a4bd5e3adcb512d03b37", size = 1691548 }, @@ -2305,7 +2307,7 @@ name = "tqdm" version = "4.67.1" source = { registry = "https://pypi.org/simple" } dependencies = [ - { name = "colorama", marker = "sys_platform == 'win32'" }, + { name = "colorama", marker = "platform_system == 'Windows'" }, ] sdist = { url = "https://files.pythonhosted.org/packages/a8/4b/29b4ef32e036bb34e4ab51796dd745cdba7ed47ad142a9f4a1eb8e0c744d/tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2", size = 169737 } wheels = [ From df864ee575e0aa869bd30408893e275699a2c4ef Mon Sep 17 00:00:00 2001 From: Hardik Shah Date: Fri, 14 Feb 2025 14:29:17 -0800 Subject: [PATCH 20/37] Update index.md to refer to v0.1.3 --- docs/source/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/index.md b/docs/source/index.md index 2834f5641..cb2355bfd 100644 --- a/docs/source/index.md +++ b/docs/source/index.md @@ -2,7 +2,7 @@ ```{admonition} News :class: tip -Llama Stack 0.1.2 is now available! See the [release notes](https://github.com/meta-llama/llama-stack/releases/tag/v0.1.2) for more details. +Llama Stack 0.1.3 is now available! See the [release notes](https://github.com/meta-llama/llama-stack/releases/tag/v0.1.3) for more details. ``` # Llama Stack From ab210ec59e8da725229f8feb3448c5627e70a14e Mon Sep 17 00:00:00 2001 From: Hardik Shah Date: Fri, 14 Feb 2025 15:45:08 -0800 Subject: [PATCH 21/37] Update README.md --- tests/client-sdk/README.md | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/tests/client-sdk/README.md b/tests/client-sdk/README.md index d4d439d96..703d06a39 100644 --- a/tests/client-sdk/README.md +++ b/tests/client-sdk/README.md @@ -3,19 +3,16 @@ You can run llama stack integration tests on either a Llama Stack Library or a L To test on a Llama Stack library with certain configuration, run ```bash -LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml -pytest -s -v tests/client-sdk/inference/ +LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml pytest -s -v tests/client-sdk/inference/ ``` or just the template name ```bash -LLAMA_STACK_CONFIG=together -pytest -s -v tests/client-sdk/inference/ +LLAMA_STACK_CONFIG=together pytest -s -v tests/client-sdk/inference/ ``` To test on a Llama Stack endpoint, run ```bash -LLAMA_STACK_BASE_URL=http//localhost:8089 -pytest -s -v tests/client-sdk/inference +LLAMA_STACK_BASE_URL=http://localhost:8089 pytest -s -v tests/client-sdk/inference ``` ## Report Generation From 8dc1cac33371f18504223c9a07da3e299623f57c Mon Sep 17 00:00:00 2001 From: Reid <61492567+reidliu41@users.noreply.github.com> Date: Sat, 15 Feb 2025 09:16:26 +0800 Subject: [PATCH 22/37] style: fix the capitalization issue (#1117) # What does this PR do? [Provide a short summary of what this PR does and why. Link to relevant issues if applicable.] 
``` before: $ llama stack run --help usage: llama stack run [-h] [--port PORT] [--image-name IMAGE_NAME] [--disable-ipv6] [--env KEY=VALUE] [--tls-keyfile TLS_KEYFILE] [--tls-certfile TLS_CERTFILE] [--image-type {conda,container,venv}] config start <<<<<<---- the server for a Llama Stack Distribution. You should have already built (or downloaded) and configured the distribution. After: $ llama stack run --help usage: llama stack run [-h] [--port PORT] [--image-name IMAGE_NAME] [--disable-ipv6] [--env KEY=VALUE] [--tls-keyfile TLS_KEYFILE] [--tls-certfile TLS_CERTFILE] [--image-type {conda,container,venv}] config Start <<<<<<---- the server for a Llama Stack Distribution. You should have already built (or downloaded) and configured the distribution. ``` [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan [Describe the tests you ran to verify your changes with result summaries. *Provide clear instructions so the plan can be easily re-executed.*] [//]: # (## Documentation) Signed-off-by: reidliu Co-authored-by: reidliu --- llama_stack/cli/stack/configure.py | 2 +- llama_stack/cli/stack/run.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/llama_stack/cli/stack/configure.py b/llama_stack/cli/stack/configure.py index 56f4feceb..2bb3f7313 100644 --- a/llama_stack/cli/stack/configure.py +++ b/llama_stack/cli/stack/configure.py @@ -17,7 +17,7 @@ class StackConfigure(Subcommand): self.parser = subparsers.add_parser( "configure", prog="llama stack configure", - description="configure a llama stack distribution", + description="Configure a llama stack distribution", formatter_class=argparse.RawTextHelpFormatter, ) self._add_arguments() diff --git a/llama_stack/cli/stack/run.py b/llama_stack/cli/stack/run.py index c32e51fca..73536491b 100644 --- a/llama_stack/cli/stack/run.py +++ b/llama_stack/cli/stack/run.py @@ -19,7 +19,7 @@ class StackRun(Subcommand): self.parser = subparsers.add_parser( "run", prog="llama stack run", - description="""start the server for a Llama Stack Distribution. You should have already built (or downloaded) and configured the distribution.""", + description="""Start the server for a Llama Stack Distribution. 
You should have already built (or downloaded) and configured the distribution.""", formatter_class=argparse.RawTextHelpFormatter, ) self._add_arguments() From ab2b46e5289d2ccd41153ba87f535668de1825f5 Mon Sep 17 00:00:00 2001 From: ehhuang Date: Fri, 14 Feb 2025 17:48:06 -0800 Subject: [PATCH 23/37] feat: log start, complete time to Agent steps (#1116) --- .../inline/agents/meta_reference/agent_instance.py | 11 +++++++++++ tests/client-sdk/agents/test_agents.py | 11 ++++++++++- 2 files changed, 21 insertions(+), 1 deletion(-) diff --git a/llama_stack/providers/inline/agents/meta_reference/agent_instance.py b/llama_stack/providers/inline/agents/meta_reference/agent_instance.py index fc597d0f7..1c21df57f 100644 --- a/llama_stack/providers/inline/agents/meta_reference/agent_instance.py +++ b/llama_stack/providers/inline/agents/meta_reference/agent_instance.py @@ -301,6 +301,7 @@ class ChatAgent(ShieldRunnerMixin): return step_id = str(uuid.uuid4()) + shield_call_start_time = datetime.now() try: yield AgentTurnResponseStreamChunk( event=AgentTurnResponseEvent( @@ -323,6 +324,8 @@ class ChatAgent(ShieldRunnerMixin): step_id=step_id, turn_id=turn_id, violation=e.violation, + started_at=shield_call_start_time, + completed_at=datetime.now(), ), ) ) @@ -344,6 +347,8 @@ class ChatAgent(ShieldRunnerMixin): step_id=step_id, turn_id=turn_id, violation=None, + started_at=shield_call_start_time, + completed_at=datetime.now(), ), ) ) @@ -476,6 +481,7 @@ class ChatAgent(ShieldRunnerMixin): client_tools[tool.name] = tool while True: step_id = str(uuid.uuid4()) + inference_start_time = datetime.now() yield AgentTurnResponseStreamChunk( event=AgentTurnResponseEvent( payload=AgentTurnResponseStepStartPayload( @@ -574,6 +580,8 @@ class ChatAgent(ShieldRunnerMixin): step_id=step_id, turn_id=turn_id, model_response=copy.deepcopy(message), + started_at=inference_start_time, + completed_at=datetime.now(), ), ) ) @@ -641,6 +649,7 @@ class ChatAgent(ShieldRunnerMixin): "input": message.model_dump_json(), }, ) as span: + tool_execution_start_time = datetime.now() result_messages = await execute_tool_call_maybe( self.tool_runtime_api, session_id, @@ -668,6 +677,8 @@ class ChatAgent(ShieldRunnerMixin): content=result_message.content, ) ], + started_at=tool_execution_start_time, + completed_at=datetime.now(), ), ) ) diff --git a/tests/client-sdk/agents/test_agents.py b/tests/client-sdk/agents/test_agents.py index e5c20c3a5..0369f325b 100644 --- a/tests/client-sdk/agents/test_agents.py +++ b/tests/client-sdk/agents/test_agents.py @@ -545,7 +545,7 @@ def test_create_turn_response(llama_stack_client, agent_config): messages=[ { "role": "user", - "content": "What is the boiling point of polyjuice?", + "content": "Call get_boiling_point and answer What is the boiling point of polyjuice?", }, ], session_id=session_id, @@ -557,3 +557,12 @@ def test_create_turn_response(llama_stack_client, agent_config): assert steps[1].step_type == "tool_execution" assert steps[1].tool_calls[0].tool_name == "get_boiling_point" assert steps[2].step_type == "inference" + + last_step_completed_at = None + for step in steps: + if last_step_completed_at is None: + last_step_completed_at = step.completed_at + else: + assert last_step_completed_at < step.started_at + assert step.started_at < step.completed_at + last_step_completed_at = step.completed_at From 743f4348603d4934df5c2352a6e3304b462de33c Mon Sep 17 00:00:00 2001 From: Yuan Tang Date: Sat, 15 Feb 2025 00:19:16 -0500 Subject: [PATCH 24/37] fix: Ensure a tool call can be converted before adding 
to buffer (#1119) # What does this PR do? This fixes an issue when running the e2e agent example: https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/e2e_loop_with_client_tools.py ``` | File "/home/yutang/repos/llama-stack/llama_stack/providers/remote/inference/vllm/vllm.py", line 175, in _process_vllm_chat_completion_stream_response | tool_call = convert_tool_call(choice.delta.tool_calls[0]) | File "/home/yutang/repos/llama-stack/llama_stack/providers/utils/inference/openai_compat.py", line 441, in convert_tool_call | return ToolCall( | File "/home/yutang/.conda/envs/distribution-myenv/lib/python3.10/site-packages/pydantic/main.py", line 214, in __init__ | validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self) | pydantic_core._pydantic_core.ValidationError: 4 validation errors for ToolCall | call_id | Input should be a valid string [type=string_type, input_value=None, input_type=NoneType] | For further information visit https://errors.pydantic.dev/2.10/v/string_type | tool_name.enum[BuiltinTool] | Input should be 'brave_search', 'wolfram_alpha', 'photogen' or 'code_interpreter' [type=enum, input_value=None, input_type=NoneType] | For further information visit https://errors.pydantic.dev/2.10/v/enum | tool_name.str | Input should be a valid string [type=string_type, input_value=None, input_type=NoneType] | For further information visit https://errors.pydantic.dev/2.10/v/string_type | arguments | Input should be a valid dictionary [type=dict_type, input_value=202, input_type=int] | For further information visit https://errors.pydantic.dev/2.10/v/dict_type ``` This issue happened because not all arguments have been appended to the tool call buffer yet. The current code assumes that we are ready to convert the tool call whenever args can be converted to JSON successfully. In this case, `json.loads("202")` would succeed but the rest of the arguments have not been properly parsed yet. [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan The e2e example worked successfully (although note that I ran the script twice with each function call separately due to https://github.com/meta-llama/llama-stack/issues/1120): ``` tool_execution> Tool:get_ticker_data Args:{'ticker_symbol': 'GOOG', 'start': '2023-01-01', 'end': '2023-12-31'} tool_execution> Tool:get_ticker_data Response:"[{\"('Year', '')\":2023,\"('Close', 'GOOG')\":140.4254455566}]" tool_execution> Tool:web_search Args:{'query': '42nd president of the United States'} tool_execution> Tool:web_search Response:"{\"query\": \"42nd president of the United States\", \"top_k\": [{\"title\": \"William J. Clinton | whitehouse.gov\", \"url\": \"https://obamawhitehouse.archives.gov/1600/presidents/williamjclinton\", \"description\": \"Bill Clinton is an American politician from Arkansas who served as the 42nd President of the United States (1993-2001). He took office at the end of the Cold War, and was the first baby-boomer generation President.\", \"type\": \"search_result\"}, {\"title\": \"Bill Clinton - Wikipedia\", \"url\": \"https://en.wikipedia.org/wiki/Bill_Clinton\", \"description\": \"William Jefferson Clinton (n\\u00e9 Blythe; born August 19, 1946) is an American politician and lawyer who served as the 42nd president of the United States from 1993 to 2001. 
A member of the Democratic Party, he previously served as the attorney general of Arkansas from 1977 to 1979 and as the ...\", \"type\": \"search_result\"}, [{\"type\": \"video_result\", \"url\": \"https://www.youtube.com/watch?v=eR2z_1-v87Y\", \"title\": \"A Conversation with Bill Clinton, 42nd President of the United ...\", \"description\": \"William Jefferson Clinton, the first Democratic president in six decades to be elected twice, led the United States to the longest economic expansion in Amer...\"}, {\"type\": \"video_result\", \"url\": \"https://www.facebook.com/clintoncenter/videos/january-20-1993-president-clinton-was-sworn-in-as-the-42nd-president-of-the-unit/448417409677375/\", \"title\": \"January 20, 1993, President Clinton was sworn in as the 42nd ...\", \"description\": \"WATCH: On January 20, 1993, President Bill Clinton was sworn in as the 42nd President of the United States. #InaugurationDay Video courtesy of the...\"}, {\"type\": \"video_result\", \"url\": \"https://www.youtube.com/watch?v=vI0HGQqEJh0\", \"title\": \"42nd President of the United States, Bill Clinton, shared thoughts ...\", \"description\": \"AboutPressCopyrightContact usCreatorsAdvertiseDevelopersTermsPrivacyPolicy & SafetyHow YouTube worksTest new features \\u00b7 \\u00a9 2024 Google LLC\"}, {\"type\": \"video_result\", \"url\": \"https://www.youtube.com/shorts/vI0HGQqEJh0\", \"title\": \"42nd President of the United States, Bill Clinton, shared ...\", \"description\": \"Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube.\"}, {\"type\": \"video_result\", \"url\": \"https://www.youtube.com/watch?v=PHihhihVth0\", \"title\": \"Bill & Hillary Clinton returning to Little Rock for 20th ...\", \"description\": \"Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube.\"}]]}" ``` All text inference tests passed. [//]: # (## Documentation) Signed-off-by: Yuan Tang --- .../providers/utils/inference/openai_compat.py | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/llama_stack/providers/utils/inference/openai_compat.py b/llama_stack/providers/utils/inference/openai_compat.py index da8e3ce2d..def7e8f37 100644 --- a/llama_stack/providers/utils/inference/openai_compat.py +++ b/llama_stack/providers/utils/inference/openai_compat.py @@ -427,10 +427,14 @@ def convert_tool_call( """ Convert a ChatCompletionMessageToolCall tool call to either a ToolCall or UnparseableToolCall. Returns an UnparseableToolCall - if the tool call is not valid JSON. + if the tool call is not valid ToolCall. 
""" try: - arguments = json.loads(tool_call.function.arguments) + valid_tool_call = ToolCall( + call_id=tool_call.id, + tool_name=tool_call.function.name, + arguments=json.loads(tool_call.function.arguments), + ) except Exception as e: return UnparseableToolCall( call_id=tool_call.id or "", @@ -438,8 +442,4 @@ def convert_tool_call( arguments=tool_call.function.arguments or "", ) - return ToolCall( - call_id=tool_call.id, - tool_name=tool_call.function.name, - arguments=arguments, - ) + return valid_tool_call From 6b1773d530e9f168f86beb83a6ec6af73555efe4 Mon Sep 17 00:00:00 2001 From: Yuan Tang Date: Sat, 15 Feb 2025 22:05:23 -0500 Subject: [PATCH 25/37] docs: Fix incorrect link and command for generating API reference (#1124) --- docs/openapi_generator/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/openapi_generator/README.md b/docs/openapi_generator/README.md index 9d407905d..e98cfaf1b 100644 --- a/docs/openapi_generator/README.md +++ b/docs/openapi_generator/README.md @@ -1,4 +1,4 @@ -The RFC Specification (OpenAPI format) is generated from the set of API endpoints located in `llama_stack/[]/api/endpoints.py` using the `generate.py` utility. +The RFC Specification (OpenAPI format) is generated from the set of API endpoints located in `llama_stack/distribution/server/endpoints.py` using the `generate.py` utility. Please install the following packages before running the script: @@ -6,4 +6,4 @@ Please install the following packages before running the script: pip install python-openapi json-strong-typing fire PyYAML llama-models ``` -Then simply run `sh run_openapi_generator.sh ` +Then simply run `sh run_openapi_generator.sh` From 89d37687dd375eeee96bfd04e960bd328cc00f73 Mon Sep 17 00:00:00 2001 From: Reid <61492567+reidliu41@users.noreply.github.com> Date: Wed, 19 Feb 2025 02:13:46 +0800 Subject: [PATCH 26/37] chore: remove --no-list-templates option (#1121) # What does this PR do? [Provide a short summary of what this PR does and why. Link to relevant issues if applicable.] From the code and the usage, seems cannot see that need to use `--no-list-templates` to handle, and also make the user confused from the help text, so try to remove it. ``` $ llama stack build --no-list-templates > Enter a name for your Llama Stack (e.g. my-local-stack): $ llama stack build > Enter a name for your Llama Stack (e.g. my-local-stack): before: $ llama stack build --help --list-templates, --no-list-templates Show the available templates for building a Llama Stack distribution (default: False) after: --list-templates Show the available templates for building a Llama Stack distribution ``` [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan [Describe the tests you ran to verify your changes with result summaries. 
*Provide clear instructions so the plan can be easily re-executed.*] [//]: # (## Documentation) Signed-off-by: reidliu Co-authored-by: reidliu --- docs/source/distributions/building_distro.md | 7 ++++--- llama_stack/cli/stack/build.py | 3 +-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/source/distributions/building_distro.md b/docs/source/distributions/building_distro.md index 90239cb4e..9cb1a402f 100644 --- a/docs/source/distributions/building_distro.md +++ b/docs/source/distributions/building_distro.md @@ -23,7 +23,8 @@ The main points to consider are: ``` llama stack build -h -usage: llama stack build [-h] [--config CONFIG] [--template TEMPLATE] [--list-templates | --no-list-templates] [--image-type {conda,container,venv}] [--image-name IMAGE_NAME] +usage: llama stack build [-h] [--config CONFIG] [--template TEMPLATE] [--list-templates] + [--image-type {conda,container,venv}] [--image-name IMAGE_NAME] [--print-deps-only] Build a Llama stack container @@ -32,14 +33,14 @@ options: --config CONFIG Path to a config file to use for the build. You can find example configs in llama_stack/distribution/**/build.yaml. If this argument is not provided, you will be prompted to enter information interactively --template TEMPLATE Name of the example template config to use for build. You may use `llama stack build --list-templates` to check out the available templates - --list-templates, --no-list-templates - Show the available templates for building a Llama Stack distribution (default: False) + --list-templates Show the available templates for building a Llama Stack distribution --image-type {conda,container,venv} Image Type to use for the build. This can be either conda or container or venv. If not specified, will use the image type from the template config. --image-name IMAGE_NAME [for image-type=conda] Name of the conda environment to use for the build. If not specified, currently active Conda environment will be used. If no Conda environment is active, you must specify a name. + --print-deps-only Print the dependencies for the stack only, without building the stack ``` After this step is complete, a file named `-build.yaml` and template file `-run.yaml` will be generated and saved at the output file path specified at the end of the command. diff --git a/llama_stack/cli/stack/build.py b/llama_stack/cli/stack/build.py index ca4c0d8ce..7b17a960a 100644 --- a/llama_stack/cli/stack/build.py +++ b/llama_stack/cli/stack/build.py @@ -38,9 +38,8 @@ class StackBuild(Subcommand): self.parser.add_argument( "--list-templates", - type=bool, + action="store_true", default=False, - action=argparse.BooleanOptionalAction, help="Show the available templates for building a Llama Stack distribution", ) From 92aefec191b12715d5467541ff12659a3f434b11 Mon Sep 17 00:00:00 2001 From: Reid <61492567+reidliu41@users.noreply.github.com> Date: Wed, 19 Feb 2025 02:15:26 +0800 Subject: [PATCH 27/37] style: update verify-download help text (#1134) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit # What does this PR do? [Provide a short summary of what this PR does and why. Link to relevant issues if applicable.] Based on the code https://github.com/meta-llama/llama-stack/blob/6b1773d530e9f168f86beb83a6ec6af73555efe4/llama_stack/cli/download.py#L379 and test, `verify-download` should only use in `downloaded from Meta`. 
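For context on why the help text is scoped this way: the Meta download flow ships a `checklist.chk` next to the weights (visible in the download output elsewhere in this series), while the Hugging Face flow does not — see the test output below. A minimal sketch of the kind of MD5 pass such a file enables, assuming a `<md5> <filename>` line format and a hypothetical helper name; this is illustrative only, not the CLI's actual implementation:

```python
import hashlib
from pathlib import Path


def verify_checkpoint_dir(checkpoint_dir: str) -> bool:
    # Assumed layout: checklist.chk lives alongside the downloaded files and
    # lists "<md5> <filename>" pairs (illustration only).
    checklist = Path(checkpoint_dir) / "checklist.chk"
    if not checklist.exists():
        # Hugging Face downloads do not ship this file, hence the Meta-only hint.
        print("No checklist.chk found; nothing to verify.")
        return False

    ok = True
    for line in checklist.read_text().splitlines():
        if not line.strip():
            continue
        expected_md5, filename = line.split(maxsplit=1)
        target = Path(checkpoint_dir) / filename.strip()

        # Hash in chunks so multi-GB checkpoint shards are not read into memory at once.
        md5 = hashlib.md5()
        with open(target, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                md5.update(chunk)

        if md5.hexdigest() != expected_md5:
            print(f"Checksum mismatch: {filename.strip()}")
            ok = False
    return ok
```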
``` test: no checklist.chk file for hf download $ llama model download --source meta --model-id Llama3.2-1B Downloading checklist.chk ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 156/156 bytes - 0:00:00 Downloading tokenizer.model ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 2.2/2.2 MB - 0:00:00 Downloading params.json ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 220/220 bytes - 0:00:00 Downloading consolidated.00.pth ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 2.5/2.5 GB - 0:00:00 before: $ llama model verify-download --help usage: llama model verify-download [-h] --model-id MODEL_ID Verify the downloaded checkpoints' checksums options: -h, --help show this help message and exit --model-id MODEL_ID Model ID to verify after: $ llama model verify-download --help usage: llama model verify-download [-h] --model-id MODEL_ID Verify the downloaded checkpoints' checksums for models downloaded from Meta options: -h, --help show this help message and exit --model-id MODEL_ID Model ID to verify (only for models downloaded from Meta) ``` [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan [Describe the tests you ran to verify your changes with result summaries. *Provide clear instructions so the plan can be easily re-executed.*] [//]: # (## Documentation) Signed-off-by: reidliu Co-authored-by: reidliu --- llama_stack/cli/model/verify_download.py | 2 +- llama_stack/cli/verify_download.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/llama_stack/cli/model/verify_download.py b/llama_stack/cli/model/verify_download.py index b8e6bf173..e7159c0aa 100644 --- a/llama_stack/cli/model/verify_download.py +++ b/llama_stack/cli/model/verify_download.py @@ -15,7 +15,7 @@ class ModelVerifyDownload(Subcommand): self.parser = subparsers.add_parser( "verify-download", prog="llama model verify-download", - description="Verify the downloaded checkpoints' checksums", + description="Verify the downloaded checkpoints' checksums for models downloaded from Meta", formatter_class=argparse.RawTextHelpFormatter, ) diff --git a/llama_stack/cli/verify_download.py b/llama_stack/cli/verify_download.py index 47993c361..1229e8601 100644 --- a/llama_stack/cli/verify_download.py +++ b/llama_stack/cli/verify_download.py @@ -44,7 +44,7 @@ def setup_verify_download_parser(parser: argparse.ArgumentParser) -> None: parser.add_argument( "--model-id", required=True, - help="Model ID to verify", + help="Model ID to verify (only for models downloaded from Meta)", ) parser.set_defaults(func=partial(run_verify_cmd, parser=parser)) From d9f5beb15a2e05c27427c32f52a315947c54c4c9 Mon Sep 17 00:00:00 2001 From: Reid <61492567+reidliu41@users.noreply.github.com> Date: Wed, 19 Feb 2025 02:24:31 +0800 Subject: [PATCH 28/37] style: update download help text (#1135) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit # What does this PR do? [Provide a short summary of what this PR does and why. Link to relevant issues if applicable.] Based on the cade: https://github.com/meta-llama/llama-stack/blob/6b1773d530e9f168f86beb83a6ec6af73555efe4/llama_stack/cli/download.py#L454 and the test, it can use comma to specify multiple model ids. So update the usage. 
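A rough sketch of the comma handling the description refers to; the real logic lives at the `download.py` line linked above, and the helper below is a hypothetical name used only for illustration. The usage examples follow.

```python
def parse_model_ids(model_id_arg: str) -> list[str]:
    # e.g. "Llama3.2-1B,Llama3.2-3B" -> ["Llama3.2-1B", "Llama3.2-3B"]
    return [m.strip() for m in model_id_arg.split(",") if m.strip()]


for model_id in parse_model_ids("Llama3.2-1B,Llama3.2-3B"):
    print(f"would download: {model_id}")
```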
``` $ llama model download --source meta --model-id Llama3.2-1B,Llama3.2-3B Please provide the signed URL for model Llama3.2-1B you received via email after visiting https://www.llama.com/llama-downloads/ (e.g., https://llama3-1.llamameta.net/*?Policy...): Downloading checklist.chk ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 156/156 bytes - 0:00:00 Downloading tokenizer.model ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 2.2/2.2 MB - 0:00:00 Downloading params.json ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 220/220 bytes - 0:00:00 Downloading consolidated.00.pth ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 2.5/2.5 GB - 0:00:00 Successfully downloaded model to /Users/xx/.llama/checkpoints/Llama3.2-1B [Optionally] To run MD5 checksums, use the following command: llama model verify-download --model-id Llama3.2-1B Please provide the signed URL for model Llama3.2-3B you received via email after visiting https://www.llama.com/llama-downloads/ (e.g., https://llama3-1.llamameta.net/*?Policy...): Downloading checklist.chk ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 156/156 bytes - 0:00:00 Downloading tokenizer.model ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 2.2/2.2 MB - 0:00:00 Downloading params.json ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 220/220 bytes - 0:00:00 Downloading consolidated.00.pth ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0% 6.4/6.4 GB - 0:00:00 Successfully downloaded model to /Users/xx/.llama/checkpoints/Llama3.2-3B $ llama model download --source huggingface --model-id Llama3.2-1B,Llama3.2-3B original%2Fparams.json: 100%|██████████████████████████████████████████████████████████| 220/220 [00:00<00:00, 564kB/ Successfully downloaded model to /Users/xx/.llama/checkpoints/Llama3.2-1B ... tokenizer.json: 100%|█████████████████████████████████████████████████████████████| 9.09M/9.09M [00:00<00:00, 9.18MB/s] Successfully downloaded model to /Users/xxx/.llama/checkpoints/Llama3.2-3B before: $ llama model download --help --model-id MODEL_ID See `llama model list` or `llama model list --show-all` for the list of available models after: $ llama model download --help --model-id MODEL_ID See `llama model list` or `llama model list --show-all` for the list of available models. Specify multiple model IDs with commas, e.g. --model-id Llama3.2-1B,Llama3.2-3B ``` [//]: # (If resolving an issue, uncomment and update the line below) [//]: # (Closes #[issue-number]) ## Test Plan [Describe the tests you ran to verify your changes with result summaries. *Provide clear instructions so the plan can be easily re-executed.*] [//]: # (## Documentation) Signed-off-by: reidliu Co-authored-by: reidliu --- llama_stack/cli/download.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/llama_stack/cli/download.py b/llama_stack/cli/download.py index 8afc6d31d..af86f7243 100644 --- a/llama_stack/cli/download.py +++ b/llama_stack/cli/download.py @@ -56,7 +56,7 @@ def setup_download_parser(parser: argparse.ArgumentParser) -> None: parser.add_argument( "--model-id", required=False, - help="See `llama model list` or `llama model list --show-all` for the list of available models", + help="See `llama model list` or `llama model list --show-all` for the list of available models. Specify multiple model IDs with commas, e.g. 
--model-id Llama3.2-1B,Llama3.2-3B", ) parser.add_argument( "--hf-token", From 4e76d312fa4a8d436da349db8e941d2c7939a894 Mon Sep 17 00:00:00 2001 From: Reid <61492567+reidliu41@users.noreply.github.com> Date: Wed, 19 Feb 2025 02:26:41 +0800 Subject: [PATCH 29/37] fix: modify the model id title for model list (#1095) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit # What does this PR do? [Provide a short summary of what this PR does and why. Link to relevant issues if applicable.] Re-check and based on the doc, the download model id, actually is model descriptor(also without `meta-llama/`). https://llama-stack.readthedocs.io/en/latest/references/llama_cli_reference/index.html ``` $ llama download --source huggingface --model-id Llama-Guard-3-1B:int4 --hf-token xxx # model descriptor Fetching 8 files: 0%| | 0/8 [00:00 Co-authored-by: reidliu --- docs/source/references/llama_cli_reference/download_models.md | 2 +- docs/source/references/llama_cli_reference/index.md | 2 +- llama_stack/cli/model/list.py | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/source/references/llama_cli_reference/download_models.md b/docs/source/references/llama_cli_reference/download_models.md index 3c40f1392..6c791bcb7 100644 --- a/docs/source/references/llama_cli_reference/download_models.md +++ b/docs/source/references/llama_cli_reference/download_models.md @@ -39,7 +39,7 @@ You should see a table like this: ``` +----------------------------------+------------------------------------------+----------------+ -| Model Descriptor | Hugging Face Repo | Context Length | +| Model Descriptor(ID) | Hugging Face Repo | Context Length | +----------------------------------+------------------------------------------+----------------+ | Llama3.1-8B | meta-llama/Llama-3.1-8B | 128K | +----------------------------------+------------------------------------------+----------------+ diff --git a/docs/source/references/llama_cli_reference/index.md b/docs/source/references/llama_cli_reference/index.md index f7ac5fe36..76abce544 100644 --- a/docs/source/references/llama_cli_reference/index.md +++ b/docs/source/references/llama_cli_reference/index.md @@ -63,7 +63,7 @@ You should see a table like this: ``` +----------------------------------+------------------------------------------+----------------+ -| Model Descriptor | Hugging Face Repo | Context Length | +| Model Descriptor(ID) | Hugging Face Repo | Context Length | +----------------------------------+------------------------------------------+----------------+ | Llama3.1-8B | meta-llama/Llama-3.1-8B | 128K | +----------------------------------+------------------------------------------+----------------+ diff --git a/llama_stack/cli/model/list.py b/llama_stack/cli/model/list.py index 4fe28751e..e6bf2216a 100644 --- a/llama_stack/cli/model/list.py +++ b/llama_stack/cli/model/list.py @@ -36,8 +36,8 @@ class ModelList(Subcommand): from .safety_models import prompt_guard_model_sku headers = [ - "Model Descriptor", - "Model ID", + "Model Descriptor(ID)", + "Hugging Face Repo", "Context Length", ] From 8585b95a28b31d8bfe43fd13bd699ad0190fd1bc Mon Sep 17 00:00:00 2001 From: Xi Yan Date: Tue, 18 Feb 2025 16:02:44 -0800 Subject: [PATCH 30/37] rename --- ...nb => Tool_Calling101_Using_Together_Llama_Stack_Server.ipynb} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename docs/zero_to_hero_guide/{Tool_Calling101_Using_Together's_Llama_Stack_Server.ipynb => Tool_Calling101_Using_Together_Llama_Stack_Server.ipynb} (100%) 
diff --git a/docs/zero_to_hero_guide/Tool_Calling101_Using_Together's_Llama_Stack_Server.ipynb b/docs/zero_to_hero_guide/Tool_Calling101_Using_Together_Llama_Stack_Server.ipynb similarity index 100% rename from docs/zero_to_hero_guide/Tool_Calling101_Using_Together's_Llama_Stack_Server.ipynb rename to docs/zero_to_hero_guide/Tool_Calling101_Using_Together_Llama_Stack_Server.ipynb From e8cb9e0adba6485c438bb7cb1e311ac80a90a06c Mon Sep 17 00:00:00 2001 From: Xi Yan Date: Tue, 18 Feb 2025 16:07:54 -0800 Subject: [PATCH 31/37] fix: direct client pydantic type casting (#1145) # What does this PR do? - Closes #1142 - Root cause is due to having `Union[str, AgenToolGroupWithArgs]` ## Test Plan - Test with script described in issue. - Print out final converted pydantic object image [//]: # (## Documentation) --- llama_stack/distribution/library_client.py | 25 ++++++++++++++++------ llama_stack/strong_typing/auxiliary.py | 2 +- llama_stack/strong_typing/classdef.py | 2 +- llama_stack/strong_typing/deserializer.py | 2 +- llama_stack/strong_typing/inspection.py | 6 ++++-- llama_stack/strong_typing/serializer.py | 2 +- 6 files changed, 26 insertions(+), 13 deletions(-) diff --git a/llama_stack/distribution/library_client.py b/llama_stack/distribution/library_client.py index a7ef753b9..a40651551 100644 --- a/llama_stack/distribution/library_client.py +++ b/llama_stack/distribution/library_client.py @@ -13,7 +13,7 @@ import re from concurrent.futures import ThreadPoolExecutor from enum import Enum from pathlib import Path -from typing import Any, Optional, TypeVar, get_args, get_origin +from typing import Any, Optional, TypeVar, Union, get_args, get_origin import httpx import yaml @@ -81,12 +81,13 @@ def convert_to_pydantic(annotation: Any, value: Any) -> Any: return value origin = get_origin(annotation) + if origin is list: item_type = get_args(annotation)[0] try: return [convert_to_pydantic(item_type, item) for item in value] except Exception: - print(f"Error converting list {value}") + print(f"Error converting list {value} into {item_type}") return value elif origin is dict: @@ -94,17 +95,26 @@ def convert_to_pydantic(annotation: Any, value: Any) -> Any: try: return {k: convert_to_pydantic(val_type, v) for k, v in value.items()} except Exception: - print(f"Error converting dict {value}") + print(f"Error converting dict {value} into {val_type}") return value try: # Handle Pydantic models and discriminated unions return TypeAdapter(annotation).validate_python(value) + except Exception as e: - cprint( - f"Warning: direct client failed to convert parameter {value} into {annotation}: {e}", - "yellow", - ) + # TODO: this is workardound for having Union[str, AgentToolGroup] in API schema. + # We should get rid of any non-discriminated unions in the API schema. 
+ if origin is Union: + for union_type in get_args(annotation): + try: + return convert_to_pydantic(union_type, value) + except Exception: + continue + cprint( + f"Warning: direct client failed to convert parameter {value} into {annotation}: {e}", + "yellow", + ) return value @@ -421,4 +431,5 @@ class AsyncLlamaStackAsLibraryClient(AsyncLlamaStackClient): if param_name in body: value = body.get(param_name) converted_body[param_name] = convert_to_pydantic(param.annotation, value) + return converted_body diff --git a/llama_stack/strong_typing/auxiliary.py b/llama_stack/strong_typing/auxiliary.py index fd183da18..cf19d6083 100644 --- a/llama_stack/strong_typing/auxiliary.py +++ b/llama_stack/strong_typing/auxiliary.py @@ -77,7 +77,7 @@ def typeannotation( """ def wrap(cls: Type[T]) -> Type[T]: - setattr(cls, "__repr__", _compact_dataclass_repr) + cls.__repr__ = _compact_dataclass_repr if not dataclasses.is_dataclass(cls): cls = dataclasses.dataclass( # type: ignore[call-overload] cls, diff --git a/llama_stack/strong_typing/classdef.py b/llama_stack/strong_typing/classdef.py index d2d8688e4..5ead886d4 100644 --- a/llama_stack/strong_typing/classdef.py +++ b/llama_stack/strong_typing/classdef.py @@ -203,7 +203,7 @@ def schema_to_type(schema: Schema, *, module: types.ModuleType, class_name: str) if type_def.default is not dataclasses.MISSING: raise TypeError("disallowed: `default` for top-level type definitions") - setattr(type_def.type, "__module__", module.__name__) + type_def.type.__module__ = module.__name__ setattr(module, type_name, type_def.type) return node_to_typedef(module, class_name, top_node).type diff --git a/llama_stack/strong_typing/deserializer.py b/llama_stack/strong_typing/deserializer.py index 4c4ee9d89..fc0f40f83 100644 --- a/llama_stack/strong_typing/deserializer.py +++ b/llama_stack/strong_typing/deserializer.py @@ -325,7 +325,7 @@ class TupleDeserializer(Deserializer[Tuple[Any, ...]]): f"type `{self.container_type}` expects a JSON `array` of length {count} but received length {len(data)}" ) - return tuple(item_parser.parse(item) for item_parser, item in zip(self.item_parsers, data)) + return tuple(item_parser.parse(item) for item_parser, item in zip(self.item_parsers, data, strict=False)) class UnionDeserializer(Deserializer): diff --git a/llama_stack/strong_typing/inspection.py b/llama_stack/strong_typing/inspection.py index 69bc15597..8bc313021 100644 --- a/llama_stack/strong_typing/inspection.py +++ b/llama_stack/strong_typing/inspection.py @@ -263,8 +263,8 @@ def extend_enum( enum_class: Type[enum.Enum] = enum.Enum(extend.__name__, values) # type: ignore # assign the newly created type to the same module where the extending class is defined - setattr(enum_class, "__module__", extend.__module__) - setattr(enum_class, "__doc__", extend.__doc__) + enum_class.__module__ = extend.__module__ + enum_class.__doc__ = extend.__doc__ setattr(sys.modules[extend.__module__], extend.__name__, enum_class) return enum.unique(enum_class) @@ -874,6 +874,7 @@ def is_generic_instance(obj: Any, typ: TypeLike) -> bool: for tuple_item_type, item in zip( (tuple_item_type for tuple_item_type in typing.get_args(typ)), (item for item in obj), + strict=False, ) ) elif origin_type is Union: @@ -954,6 +955,7 @@ class RecursiveChecker: for tuple_item_type, item in zip( (tuple_item_type for tuple_item_type in typing.get_args(typ)), (item for item in obj), + strict=False, ) ) elif origin_type is Union: diff --git a/llama_stack/strong_typing/serializer.py b/llama_stack/strong_typing/serializer.py 
index 5e93e4c4d..4ca4a4119 100644 --- a/llama_stack/strong_typing/serializer.py +++ b/llama_stack/strong_typing/serializer.py @@ -216,7 +216,7 @@ class TypedTupleSerializer(Serializer[tuple]): self.item_generators = tuple(_get_serializer(item_type, context) for item_type in item_types) def generate(self, obj: tuple) -> List[JsonType]: - return [item_generator.generate(item) for item_generator, item in zip(self.item_generators, obj)] + return [item_generator.generate(item) for item_generator, item in zip(self.item_generators, obj, strict=False)] class CustomSerializer(Serializer): From 37cf60b73292468775dbfc876e7838fb1b7ccf96 Mon Sep 17 00:00:00 2001 From: Xi Yan Date: Tue, 18 Feb 2025 19:41:37 -0800 Subject: [PATCH 32/37] style: remove prints in codebase (#1146) # What does this PR do? - replace prints in codebase with logger - update print_table to use rich Table ## Test Plan - library client script in https://github.com/meta-llama/llama-stack/pull/1145 ``` llama stack list-providers ``` image [//]: # (## Documentation) --- llama_stack/cli/table.py | 75 +++++-------------- llama_stack/distribution/library_client.py | 11 +-- .../remote/inference/nvidia/nvidia.py | 12 ++- .../remote/inference/nvidia/utils.py | 5 +- 4 files changed, 38 insertions(+), 65 deletions(-) diff --git a/llama_stack/cli/table.py b/llama_stack/cli/table.py index 599749231..bf59e6103 100644 --- a/llama_stack/cli/table.py +++ b/llama_stack/cli/table.py @@ -4,75 +4,36 @@ # This source code is licensed under the terms described in the LICENSE file in # the root directory of this source tree. -import re -import textwrap from typing import Iterable -from termcolor import cprint - - -def strip_ansi_colors(text): - ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])") - return ansi_escape.sub("", text) - - -def format_row(row, col_widths): - def wrap(text, width): - lines = [] - for line in text.split("\n"): - if line.strip() == "": - lines.append("") - else: - lines.extend(textwrap.wrap(line, width, break_long_words=False, replace_whitespace=False)) - return lines - - wrapped = [wrap(item, width) for item, width in zip(row, col_widths, strict=False)] - max_lines = max(len(subrow) for subrow in wrapped) - - lines = [] - for i in range(max_lines): - line = [] - for cell_lines, width in zip(wrapped, col_widths, strict=False): - value = cell_lines[i] if i < len(cell_lines) else "" - line.append(value + " " * (width - len(strip_ansi_colors(value)))) - lines.append("| " + (" | ".join(line)) + " |") - - return "\n".join(lines) +from rich.console import Console +from rich.table import Table def print_table(rows, headers=None, separate_rows: bool = False, sort_by: Iterable[int] = tuple()): - def itemlen(item): - return max([len(line) for line in strip_ansi_colors(item).split("\n")]) - + # Convert rows and handle None values rows = [[x or "" for x in row] for row in rows] + # Sort rows if sort_by is specified if sort_by: rows.sort(key=lambda x: tuple(x[i] for i in sort_by)) - if not headers: - col_widths = [max(itemlen(item) for item in col) for col in zip(*rows, strict=False)] - else: - col_widths = [ - max( - itemlen(header), - max(itemlen(item) for item in col), - ) - for header, col in zip(headers, zip(*rows, strict=False), strict=False) - ] - col_widths = [min(w, 80) for w in col_widths] - - header_line = "+".join("-" * (width + 2) for width in col_widths) - header_line = f"+{header_line}+" + # Create Rich table + table = Table(show_lines=separate_rows) + # Add headers if provided if headers: - print(header_line) - 
cprint(format_row(headers, col_widths), "white", attrs=["bold"]) + for header in headers: + table.add_column(header, style="bold white") + else: + # Add unnamed columns based on first row + for _ in range(len(rows[0]) if rows else 0): + table.add_column() - print(header_line) + # Add rows for row in rows: - print(format_row(row, col_widths)) - if separate_rows: - print(header_line) + table.add_row(*row) - if not separate_rows: - print(header_line) + # Print table + console = Console() + console.print(table) diff --git a/llama_stack/distribution/library_client.py b/llama_stack/distribution/library_client.py index a40651551..639e5ee73 100644 --- a/llama_stack/distribution/library_client.py +++ b/llama_stack/distribution/library_client.py @@ -47,6 +47,8 @@ from llama_stack.providers.utils.telemetry.tracing import ( start_trace, ) +logger = logging.getLogger(__name__) + T = TypeVar("T") @@ -87,7 +89,7 @@ def convert_to_pydantic(annotation: Any, value: Any) -> Any: try: return [convert_to_pydantic(item_type, item) for item in value] except Exception: - print(f"Error converting list {value} into {item_type}") + logger.error(f"Error converting list {value} into {item_type}") return value elif origin is dict: @@ -95,7 +97,7 @@ def convert_to_pydantic(annotation: Any, value: Any) -> Any: try: return {k: convert_to_pydantic(val_type, v) for k, v in value.items()} except Exception: - print(f"Error converting dict {value} into {val_type}") + logger.error(f"Error converting dict {value} into {val_type}") return value try: @@ -111,9 +113,8 @@ def convert_to_pydantic(annotation: Any, value: Any) -> Any: return convert_to_pydantic(union_type, value) except Exception: continue - cprint( + logger.warning( f"Warning: direct client failed to convert parameter {value} into {annotation}: {e}", - "yellow", ) return value @@ -152,7 +153,7 @@ class LlamaStackAsLibraryClient(LlamaStackClient): for handler in root_logger.handlers[:]: root_logger.removeHandler(handler) - print(f"Removed handler {handler.__class__.__name__} from root logger") + logger.info(f"Removed handler {handler.__class__.__name__} from root logger") def request(self, *args, **kwargs): if kwargs.get("stream"): diff --git a/llama_stack/providers/remote/inference/nvidia/nvidia.py b/llama_stack/providers/remote/inference/nvidia/nvidia.py index 0c5b7c454..8e67333af 100644 --- a/llama_stack/providers/remote/inference/nvidia/nvidia.py +++ b/llama_stack/providers/remote/inference/nvidia/nvidia.py @@ -4,6 +4,7 @@ # This source code is licensed under the terms described in the LICENSE file in # the root directory of this source tree. 
+import logging import warnings from typing import AsyncIterator, List, Optional, Union @@ -25,7 +26,12 @@ from llama_stack.apis.inference import ( ToolChoice, ToolConfig, ) -from llama_stack.models.llama.datatypes import CoreModelId, SamplingParams, ToolDefinition, ToolPromptFormat +from llama_stack.models.llama.datatypes import ( + CoreModelId, + SamplingParams, + ToolDefinition, + ToolPromptFormat, +) from llama_stack.providers.utils.inference.model_registry import ( ModelRegistryHelper, build_model_alias, @@ -43,6 +49,8 @@ from .openai_utils import ( ) from .utils import _is_nvidia_hosted, check_health +logger = logging.getLogger(__name__) + _MODEL_ALIASES = [ build_model_alias( "meta/llama3-8b-instruct", @@ -90,7 +98,7 @@ class NVIDIAInferenceAdapter(Inference, ModelRegistryHelper): # TODO(mf): filter by available models ModelRegistryHelper.__init__(self, model_aliases=_MODEL_ALIASES) - print(f"Initializing NVIDIAInferenceAdapter({config.url})...") + logger.info(f"Initializing NVIDIAInferenceAdapter({config.url})...") if _is_nvidia_hosted(config): if not config.api_key: diff --git a/llama_stack/providers/remote/inference/nvidia/utils.py b/llama_stack/providers/remote/inference/nvidia/utils.py index 0ec80e9dd..7d3f3f27e 100644 --- a/llama_stack/providers/remote/inference/nvidia/utils.py +++ b/llama_stack/providers/remote/inference/nvidia/utils.py @@ -4,12 +4,15 @@ # This source code is licensed under the terms described in the LICENSE file in # the root directory of this source tree. +import logging from typing import Tuple import httpx from . import NVIDIAConfig +logger = logging.getLogger(__name__) + def _is_nvidia_hosted(config: NVIDIAConfig) -> bool: return "integrate.api.nvidia.com" in config.url @@ -42,7 +45,7 @@ async def check_health(config: NVIDIAConfig) -> None: RuntimeError: If the server is not running or ready """ if not _is_nvidia_hosted(config): - print("Checking NVIDIA NIM health...") + logger.info("Checking NVIDIA NIM health...") try: is_live, is_ready = await _get_health(config.url) if not is_live: From 8de7cf103b823596f268b19ee2142c6f399556e8 Mon Sep 17 00:00:00 2001 From: ehhuang Date: Tue, 18 Feb 2025 20:25:15 -0800 Subject: [PATCH 33/37] feat: support tool_choice = {required, none, } (#1059) Summary: titled Test Plan: added tests and LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/client-sdk/ --safety-shield meta-llama/Llama-Guard-3-8B --- docs/_static/llama-stack-spec.html | 33 ++++++++++------- docs/_static/llama-stack-spec.yaml | 25 ++++++++----- llama_stack/apis/inference/inference.py | 15 ++++++-- llama_stack/distribution/routers/routers.py | 32 ++++++++++++----- .../utils/inference/prompt_adapter.py | 31 +++++++++++----- tests/client-sdk/agents/test_agents.py | 33 ++++++++++++++++- .../inference/test_text_inference.py | 36 +++++++++++++++++++ 7 files changed, 164 insertions(+), 41 deletions(-) diff --git a/docs/_static/llama-stack-spec.html b/docs/_static/llama-stack-spec.html index 17cf92341..65a1bdd6b 100644 --- a/docs/_static/llama-stack-spec.html +++ b/docs/_static/llama-stack-spec.html @@ -2697,7 +2697,8 @@ "type": "string", "enum": [ "auto", - "required" + "required", + "none" ], "description": "Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities of the model." 
}, @@ -3231,13 +3232,22 @@ "type": "object", "properties": { "tool_choice": { - "type": "string", - "enum": [ - "auto", - "required" + "oneOf": [ + { + "type": "string", + "enum": [ + "auto", + "required", + "none" + ], + "description": "Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities of the model." + }, + { + "type": "string" + } ], - "description": "(Optional) Whether tool use is required or automatic. Defaults to ToolChoice.auto.", - "default": "auto" + "default": "auto", + "description": "(Optional) Whether tool use is automatic, required, or none. Can also specify a tool name to use a specific tool. Defaults to ToolChoice.auto." }, "tool_prompt_format": { "type": "string", @@ -3259,9 +3269,6 @@ } }, "additionalProperties": false, - "required": [ - "system_message_behavior" - ], "description": "Configuration for tool use." }, "ToolDef": { @@ -4100,7 +4107,8 @@ "type": "string", "enum": [ "auto", - "required" + "required", + "none" ], "description": "Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities of the model." }, @@ -4384,7 +4392,8 @@ "type": "string", "enum": [ "auto", - "required" + "required", + "none" ], "description": "(Optional) Whether tool use is required or automatic. Defaults to ToolChoice.auto. .. deprecated:: Use tool_config instead." }, diff --git a/docs/_static/llama-stack-spec.yaml b/docs/_static/llama-stack-spec.yaml index f63374406..60b777e91 100644 --- a/docs/_static/llama-stack-spec.yaml +++ b/docs/_static/llama-stack-spec.yaml @@ -1637,6 +1637,7 @@ components: enum: - auto - required + - none description: >- Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities @@ -1994,13 +1995,21 @@ components: type: object properties: tool_choice: - type: string - enum: - - auto - - required - description: >- - (Optional) Whether tool use is required or automatic. Defaults to ToolChoice.auto. + oneOf: + - type: string + enum: + - auto + - required + - none + description: >- + Whether tool use is required or automatic. This is a hint to the model + which may not be followed. It depends on the Instruction Following + capabilities of the model. + - type: string default: auto + description: >- + (Optional) Whether tool use is automatic, required, or none. Can also + specify a tool name to use a specific tool. Defaults to ToolChoice.auto. tool_prompt_format: type: string enum: @@ -2027,8 +2036,6 @@ components: where the function definitions should be inserted. default: append additionalProperties: false - required: - - system_message_behavior description: Configuration for tool use. ToolDef: type: object @@ -2533,6 +2540,7 @@ components: enum: - auto - required + - none description: >- Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities @@ -2739,6 +2747,7 @@ components: enum: - auto - required + - none description: >- (Optional) Whether tool use is required or automatic. Defaults to ToolChoice.auto. .. deprecated:: Use tool_config instead. 
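Before the schema changes above are wired into the Python API below, it helps to spell out what the three `tool_choice` values mean for a caller. A minimal sketch against the client SDK, mirroring the client-SDK tests added at the end of this patch; the base URL, model id, and the inline tool definition are illustrative assumptions, not fixtures from the repo:

```
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server

# Illustrative tool definition; shape mirrors the get_weather fixture used in the tests.
get_weather_tool = {
    "tool_name": "get_weather",
    "description": "Get the current weather for a location",
    "parameters": {
        "location": {"param_type": "string", "description": "City and state", "required": True},
    },
}

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather like in San Francisco?"},
]

# "required": the model must call one of the provided tools.
client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model id
    messages=messages,
    tools=[get_weather_tool],
    tool_config={"tool_choice": "required"},
)

# "none": the router strips the tools before inference, so no tool call is produced.
client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    messages=messages,
    tools=[get_weather_tool],
    tool_config={"tool_choice": "none"},
)

# A specific tool name is also accepted and forces that tool to be used.
client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    messages=messages,
    tools=[get_weather_tool],
    tool_config={"tool_choice": "get_weather"},
)
```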
diff --git a/llama_stack/apis/inference/inference.py b/llama_stack/apis/inference/inference.py index 433ba3274..a3fb69477 100644 --- a/llama_stack/apis/inference/inference.py +++ b/llama_stack/apis/inference/inference.py @@ -182,10 +182,12 @@ class ToolChoice(Enum): :cvar auto: The model may use tools if it determines that is appropriate. :cvar required: The model must use tools. + :cvar none: The model must not use tools. """ auto = "auto" required = "required" + none = "none" @json_schema_type @@ -326,7 +328,7 @@ class SystemMessageBehavior(Enum): class ToolConfig(BaseModel): """Configuration for tool use. - :param tool_choice: (Optional) Whether tool use is required or automatic. Defaults to ToolChoice.auto. + :param tool_choice: (Optional) Whether tool use is automatic, required, or none. Can also specify a tool name to use a specific tool. Defaults to ToolChoice.auto. :param tool_prompt_format: (Optional) Instructs the model how to format tool calls. By default, Llama Stack will attempt to use a format that is best adapted to the model. - `ToolPromptFormat.json`: The tool calls are formatted as a JSON object. - `ToolPromptFormat.function_tag`: The tool calls are enclosed in a tag. @@ -337,9 +339,16 @@ class ToolConfig(BaseModel): '{{function_definitions}}' to indicate where the function definitions should be inserted. """ - tool_choice: Optional[ToolChoice] = Field(default=ToolChoice.auto) + tool_choice: Optional[ToolChoice | str] = Field(default=ToolChoice.auto) tool_prompt_format: Optional[ToolPromptFormat] = Field(default=None) - system_message_behavior: SystemMessageBehavior = Field(default=SystemMessageBehavior.append) + system_message_behavior: Optional[SystemMessageBehavior] = Field(default=SystemMessageBehavior.append) + + def model_post_init(self, __context: Any) -> None: + if isinstance(self.tool_choice, str): + try: + self.tool_choice = ToolChoice[self.tool_choice] + except KeyError: + pass # This is an internally used class diff --git a/llama_stack/distribution/routers/routers.py b/llama_stack/distribution/routers/routers.py index f45975189..9d12c8a40 100644 --- a/llama_stack/distribution/routers/routers.py +++ b/llama_stack/distribution/routers/routers.py @@ -128,7 +128,7 @@ class InferenceRouter(Inference): sampling_params: Optional[SamplingParams] = SamplingParams(), response_format: Optional[ResponseFormat] = None, tools: Optional[List[ToolDefinition]] = None, - tool_choice: Optional[ToolChoice] = ToolChoice.auto, + tool_choice: Optional[ToolChoice] = None, tool_prompt_format: Optional[ToolPromptFormat] = None, stream: Optional[bool] = False, logprobs: Optional[LogProbConfig] = None, @@ -140,20 +140,36 @@ class InferenceRouter(Inference): if model.model_type == ModelType.embedding: raise ValueError(f"Model '{model_id}' is an embedding model and does not support chat completions") if tool_config: - if tool_choice != tool_config.tool_choice: + if tool_choice and tool_choice != tool_config.tool_choice: raise ValueError("tool_choice and tool_config.tool_choice must match") - if tool_prompt_format != tool_config.tool_prompt_format: + if tool_prompt_format and tool_prompt_format != tool_config.tool_prompt_format: raise ValueError("tool_prompt_format and tool_config.tool_prompt_format must match") else: - tool_config = ToolConfig( - tool_choice=tool_choice, - tool_prompt_format=tool_prompt_format, - ) + params = {} + if tool_choice: + params["tool_choice"] = tool_choice + if tool_prompt_format: + params["tool_prompt_format"] = tool_prompt_format + tool_config = 
ToolConfig(**params) + + tools = tools or [] + if tool_config.tool_choice == ToolChoice.none: + tools = [] + elif tool_config.tool_choice == ToolChoice.auto: + pass + elif tool_config.tool_choice == ToolChoice.required: + pass + else: + # verify tool_choice is one of the tools + tool_names = [t.tool_name if isinstance(t.tool_name, str) else t.tool_name.value for t in tools] + if tool_config.tool_choice not in tool_names: + raise ValueError(f"Tool choice {tool_config.tool_choice} is not one of the tools: {tool_names}") + params = dict( model_id=model_id, messages=messages, sampling_params=sampling_params, - tools=tools or [], + tools=tools, tool_choice=tool_choice, tool_prompt_format=tool_prompt_format, response_format=response_format, diff --git a/llama_stack/providers/utils/inference/prompt_adapter.py b/llama_stack/providers/utils/inference/prompt_adapter.py index b7945dee7..2782c661f 100644 --- a/llama_stack/providers/utils/inference/prompt_adapter.py +++ b/llama_stack/providers/utils/inference/prompt_adapter.py @@ -31,6 +31,7 @@ from llama_stack.apis.inference import ( SystemMessage, SystemMessageBehavior, ToolChoice, + ToolDefinition, UserMessage, ) from llama_stack.models.llama.datatypes import ( @@ -311,8 +312,6 @@ def response_format_prompt(fmt: Optional[ResponseFormat]): def augment_messages_for_tools_llama_3_1( request: ChatCompletionRequest, ) -> List[Message]: - assert request.tool_config.tool_choice == ToolChoice.auto, "Only `ToolChoice.auto` supported" - existing_messages = request.messages existing_system_message = None if existing_messages[0].role == Role.system.value: @@ -352,6 +351,10 @@ def augment_messages_for_tools_llama_3_1( elif isinstance(existing_system_message.content, list): sys_content += "\n".join([_process(c) for c in existing_system_message.content]) + tool_choice_prompt = _get_tool_choice_prompt(request.tool_config.tool_choice, request.tools) + if tool_choice_prompt: + sys_content += "\n" + tool_choice_prompt + messages.append(SystemMessage(content=sys_content)) has_custom_tools = any(isinstance(dfn.tool_name, str) for dfn in request.tools) @@ -377,8 +380,6 @@ def augment_messages_for_tools_llama_3_1( def augment_messages_for_tools_llama_3_2( request: ChatCompletionRequest, ) -> List[Message]: - assert request.tool_config.tool_choice == ToolChoice.auto, "Only `ToolChoice.auto` supported" - existing_messages = request.messages existing_system_message = None if existing_messages[0].role == Role.system.value: @@ -386,7 +387,6 @@ def augment_messages_for_tools_llama_3_2( assert existing_messages[0].role != Role.system.value, "Should only have 1 system message" - messages = [] sys_content = "" custom_tools, builtin_tools = [], [] for t in request.tools: @@ -395,7 +395,6 @@ def augment_messages_for_tools_llama_3_2( else: builtin_tools.append(t) - tool_template = None if builtin_tools: tool_gen = BuiltinToolGenerator() tool_template = tool_gen.gen(builtin_tools) @@ -423,8 +422,22 @@ def augment_messages_for_tools_llama_3_2( ): sys_content += interleaved_content_as_str(existing_system_message.content, sep="\n") - messages.append(SystemMessage(content=sys_content.strip("\n"))) + tool_choice_prompt = _get_tool_choice_prompt(request.tool_config.tool_choice, request.tools) + if tool_choice_prompt: + sys_content += "\n" + tool_choice_prompt - # Add back existing messages from the request - messages += existing_messages + messages = [SystemMessage(content=sys_content.strip("\n")), *existing_messages] return messages + + +def _get_tool_choice_prompt(tool_choice: ToolChoice 
| str, tools: List[ToolDefinition]) -> str: + if tool_choice == ToolChoice.auto: + return "" + elif tool_choice == ToolChoice.required: + return "You MUST use one of the provided functions/tools to answer the user query." + elif tool_choice == ToolChoice.none: + # tools are already not passed in + return "" + else: + # specific tool + return f"You MUST use the tool `{tool_choice}` to answer the user query." diff --git a/tests/client-sdk/agents/test_agents.py b/tests/client-sdk/agents/test_agents.py index 0369f325b..e5380d357 100644 --- a/tests/client-sdk/agents/test_agents.py +++ b/tests/client-sdk/agents/test_agents.py @@ -98,7 +98,6 @@ def agent_config(llama_stack_client, text_model_id): }, }, toolgroups=[], - tool_choice="auto", input_shields=available_shields, output_shields=available_shields, enable_session_persistence=False, @@ -322,6 +321,38 @@ def test_custom_tool(llama_stack_client, agent_config): assert "get_boiling_point" in logs_str +def test_tool_choice(llama_stack_client, agent_config): + data = [ + ("required", '{"type": "function"'), + ("none", None), + ("get_boiling_point", '{"type": "function", "name": "get_boiling_point"'), + ] + client_tool = TestClientTool() + for tool_choice, expected_tool in data: + agent_config["tool_config"] = {"tool_choice": tool_choice} + agent_config["client_tools"] = [client_tool.get_tool_definition()] + + agent = Agent(llama_stack_client, agent_config, client_tools=(client_tool,)) + session_id = agent.create_session(f"test-session-{uuid4()}") + + response = agent.create_turn( + messages=[ + { + "role": "user", + "content": "What is the boiling point of polyjuice?", + }, + ], + session_id=session_id, + ) + + logs = [str(log) for log in EventLogger().log(response) if log is not None] + logs_str = "".join(logs) + if expected_tool: + assert expected_tool in logs_str + else: + assert '{"type": "function"' not in logs_str + + # TODO: fix this flaky test def xtest_override_system_message_behavior(llama_stack_client, agent_config): client_tool = TestClientTool() diff --git a/tests/client-sdk/inference/test_text_inference.py b/tests/client-sdk/inference/test_text_inference.py index c931ca255..52d5a24f2 100644 --- a/tests/client-sdk/inference/test_text_inference.py +++ b/tests/client-sdk/inference/test_text_inference.py @@ -247,6 +247,42 @@ def test_text_chat_completion_with_tool_calling_and_streaming( assert tool_invocation_content == "[get_weather, {'location': 'San Francisco, CA'}]" +def test_text_chat_completion_with_tool_choice_required( + llama_stack_client, text_model_id, get_weather_tool_definition, provider_tool_format, inference_provider_type +): + if inference_provider_type == "remote::vllm": + pytest.xfail("vllm-project/vllm#13002") + response = llama_stack_client.inference.chat_completion( + model_id=text_model_id, + messages=[ + {"role": "system", "content": "You are a helpful assistant."}, + {"role": "user", "content": "What's the weather like in San Francisco?"}, + ], + tools=[get_weather_tool_definition], + tool_config={"tool_choice": "required", "tool_prompt_format": provider_tool_format}, + stream=True, + ) + tool_invocation_content = extract_tool_invocation_content(response) + assert tool_invocation_content == "[get_weather, {'location': 'San Francisco, CA'}]" + + +def test_text_chat_completion_with_tool_choice_none( + llama_stack_client, text_model_id, get_weather_tool_definition, provider_tool_format +): + response = llama_stack_client.inference.chat_completion( + model_id=text_model_id, + messages=[ + {"role": "system", "content": 
"You are a helpful assistant."}, + {"role": "user", "content": "What's the weather like in San Francisco?"}, + ], + tools=[get_weather_tool_definition], + tool_config={"tool_choice": "none", "tool_prompt_format": provider_tool_format}, + stream=True, + ) + tool_invocation_content = extract_tool_invocation_content(response) + assert tool_invocation_content == "" + + def test_text_chat_completion_structured_output(llama_stack_client, text_model_id, inference_provider_type): class AnswerFormat(BaseModel): first_name: str From a66b4c4c81eb2ea899cec0cd8cf3e1401b5c1b51 Mon Sep 17 00:00:00 2001 From: Yuan Tang Date: Tue, 18 Feb 2025 23:52:15 -0500 Subject: [PATCH 34/37] test: Enable test_text_chat_completion_with_tool_choice_required for remote::vllm (#1148) --- tests/client-sdk/inference/test_text_inference.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/tests/client-sdk/inference/test_text_inference.py b/tests/client-sdk/inference/test_text_inference.py index 52d5a24f2..6a113c463 100644 --- a/tests/client-sdk/inference/test_text_inference.py +++ b/tests/client-sdk/inference/test_text_inference.py @@ -250,8 +250,6 @@ def test_text_chat_completion_with_tool_calling_and_streaming( def test_text_chat_completion_with_tool_choice_required( llama_stack_client, text_model_id, get_weather_tool_definition, provider_tool_format, inference_provider_type ): - if inference_provider_type == "remote::vllm": - pytest.xfail("vllm-project/vllm#13002") response = llama_stack_client.inference.chat_completion( model_id=text_model_id, messages=[ From 5e7904ef6c5483bb3aaf82ec0d687ced4867ef86 Mon Sep 17 00:00:00 2001 From: Ashwin Bharambe Date: Wed, 19 Feb 2025 12:24:21 -0800 Subject: [PATCH 35/37] Kill the older strong_typing code --- .../strong_typing/__init__.py | 19 - .../strong_typing/auxiliary.py | 230 ---- .../strong_typing/classdef.py | 460 ------- docs/openapi_generator/strong_typing/core.py | 46 - .../strong_typing/deserializer.py | 959 --------------- .../strong_typing/docstring.py | 437 ------- .../strong_typing/exception.py | 23 - .../strong_typing/inspection.py | 1053 ----------------- .../strong_typing/mapping.py | 42 - docs/openapi_generator/strong_typing/name.py | 188 --- docs/openapi_generator/strong_typing/py.typed | 0 .../openapi_generator/strong_typing/schema.py | 792 ------------- .../strong_typing/serialization.py | 101 -- .../strong_typing/serializer.py | 522 -------- docs/openapi_generator/strong_typing/slots.py | 29 - .../strong_typing/topological.py | 89 -- 16 files changed, 4990 deletions(-) delete mode 100644 docs/openapi_generator/strong_typing/__init__.py delete mode 100644 docs/openapi_generator/strong_typing/auxiliary.py delete mode 100644 docs/openapi_generator/strong_typing/classdef.py delete mode 100644 docs/openapi_generator/strong_typing/core.py delete mode 100644 docs/openapi_generator/strong_typing/deserializer.py delete mode 100644 docs/openapi_generator/strong_typing/docstring.py delete mode 100644 docs/openapi_generator/strong_typing/exception.py delete mode 100644 docs/openapi_generator/strong_typing/inspection.py delete mode 100644 docs/openapi_generator/strong_typing/mapping.py delete mode 100644 docs/openapi_generator/strong_typing/name.py delete mode 100644 docs/openapi_generator/strong_typing/py.typed delete mode 100644 docs/openapi_generator/strong_typing/schema.py delete mode 100644 docs/openapi_generator/strong_typing/serialization.py delete mode 100644 docs/openapi_generator/strong_typing/serializer.py delete mode 100644 
docs/openapi_generator/strong_typing/slots.py delete mode 100644 docs/openapi_generator/strong_typing/topological.py diff --git a/docs/openapi_generator/strong_typing/__init__.py b/docs/openapi_generator/strong_typing/__init__.py deleted file mode 100644 index d832dcf6f..000000000 --- a/docs/openapi_generator/strong_typing/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -Provides auxiliary services for working with Python type annotations, converting typed data to and from JSON, -and generating a JSON schema for a complex type. -""" - -__version__ = "0.3.4" -__author__ = "Levente Hunyadi" -__copyright__ = "Copyright 2021-2024, Levente Hunyadi" -__license__ = "MIT" -__maintainer__ = "Levente Hunyadi" -__status__ = "Production" diff --git a/docs/openapi_generator/strong_typing/auxiliary.py b/docs/openapi_generator/strong_typing/auxiliary.py deleted file mode 100644 index bfaec0d29..000000000 --- a/docs/openapi_generator/strong_typing/auxiliary.py +++ /dev/null @@ -1,230 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -import dataclasses -import sys -from dataclasses import is_dataclass -from typing import Callable, Dict, Optional, overload, Type, TypeVar, Union - -if sys.version_info >= (3, 9): - from typing import Annotated as Annotated -else: - from typing_extensions import Annotated as Annotated - -if sys.version_info >= (3, 10): - from typing import TypeAlias as TypeAlias -else: - from typing_extensions import TypeAlias as TypeAlias - -if sys.version_info >= (3, 11): - from typing import dataclass_transform as dataclass_transform -else: - from typing_extensions import dataclass_transform as dataclass_transform - -T = TypeVar("T") - - -def _compact_dataclass_repr(obj: object) -> str: - """ - Compact data-class representation where positional arguments are used instead of keyword arguments. - - :param obj: A data-class object. - :returns: A string that matches the pattern `Class(arg1, arg2, ...)`. - """ - - if is_dataclass(obj): - arglist = ", ".join( - repr(getattr(obj, field.name)) for field in dataclasses.fields(obj) - ) - return f"{obj.__class__.__name__}({arglist})" - else: - return obj.__class__.__name__ - - -class CompactDataClass: - "A data class whose repr() uses positional rather than keyword arguments." - - def __repr__(self) -> str: - return _compact_dataclass_repr(self) - - -@overload -def typeannotation(cls: Type[T], /) -> Type[T]: ... - - -@overload -def typeannotation( - cls: None, *, eq: bool = True, order: bool = False -) -> Callable[[Type[T]], Type[T]]: ... - - -@dataclass_transform(eq_default=True, order_default=False) -def typeannotation( - cls: Optional[Type[T]] = None, *, eq: bool = True, order: bool = False -) -> Union[Type[T], Callable[[Type[T]], Type[T]]]: - """ - Returns the same class as was passed in, with dunder methods added based on the fields defined in the class. - - :param cls: The data-class type to transform into a type annotation. - :param eq: Whether to generate functions to support equality comparison. 
- :param order: Whether to generate functions to support ordering. - :returns: A data-class type, or a wrapper for data-class types. - """ - - def wrap(cls: Type[T]) -> Type[T]: - setattr(cls, "__repr__", _compact_dataclass_repr) - if not dataclasses.is_dataclass(cls): - cls = dataclasses.dataclass( # type: ignore[call-overload] - cls, - init=True, - repr=False, - eq=eq, - order=order, - unsafe_hash=False, - frozen=True, - ) - return cls - - # see if decorator is used as @typeannotation or @typeannotation() - if cls is None: - # called with parentheses - return wrap - else: - # called without parentheses - return wrap(cls) - - -@typeannotation -class Alias: - "Alternative name of a property, typically used in JSON serialization." - - name: str - - -@typeannotation -class Signed: - "Signedness of an integer type." - - is_signed: bool - - -@typeannotation -class Storage: - "Number of bytes the binary representation of an integer type takes, e.g. 4 bytes for an int32." - - bytes: int - - -@typeannotation -class IntegerRange: - "Minimum and maximum value of an integer. The range is inclusive." - - minimum: int - maximum: int - - -@typeannotation -class Precision: - "Precision of a floating-point value." - - significant_digits: int - decimal_digits: int = 0 - - @property - def integer_digits(self) -> int: - return self.significant_digits - self.decimal_digits - - -@typeannotation -class TimePrecision: - """ - Precision of a timestamp or time interval. - - :param decimal_digits: Number of fractional digits retained in the sub-seconds field for a timestamp. - """ - - decimal_digits: int = 0 - - -@typeannotation -class Length: - "Exact length of a string." - - value: int - - -@typeannotation -class MinLength: - "Minimum length of a string." - - value: int - - -@typeannotation -class MaxLength: - "Maximum length of a string." - - value: int - - -@typeannotation -class SpecialConversion: - "Indicates that the annotated type is subject to custom conversion rules." - - -int8: TypeAlias = Annotated[int, Signed(True), Storage(1), IntegerRange(-128, 127)] -int16: TypeAlias = Annotated[int, Signed(True), Storage(2), IntegerRange(-32768, 32767)] -int32: TypeAlias = Annotated[ - int, - Signed(True), - Storage(4), - IntegerRange(-2147483648, 2147483647), -] -int64: TypeAlias = Annotated[ - int, - Signed(True), - Storage(8), - IntegerRange(-9223372036854775808, 9223372036854775807), -] - -uint8: TypeAlias = Annotated[int, Signed(False), Storage(1), IntegerRange(0, 255)] -uint16: TypeAlias = Annotated[int, Signed(False), Storage(2), IntegerRange(0, 65535)] -uint32: TypeAlias = Annotated[ - int, - Signed(False), - Storage(4), - IntegerRange(0, 4294967295), -] -uint64: TypeAlias = Annotated[ - int, - Signed(False), - Storage(8), - IntegerRange(0, 18446744073709551615), -] - -float32: TypeAlias = Annotated[float, Storage(4)] -float64: TypeAlias = Annotated[float, Storage(8)] - -# maps globals of type Annotated[T, ...] defined in this module to their string names -_auxiliary_types: Dict[object, str] = {} -module = sys.modules[__name__] -for var in dir(module): - typ = getattr(module, var) - if getattr(typ, "__metadata__", None) is not None: - # type is Annotated[T, ...] - _auxiliary_types[typ] = var - - -def get_auxiliary_format(data_type: object) -> Optional[str]: - "Returns the JSON format string corresponding to an auxiliary type." 
- - return _auxiliary_types.get(data_type) diff --git a/docs/openapi_generator/strong_typing/classdef.py b/docs/openapi_generator/strong_typing/classdef.py deleted file mode 100644 index b86940420..000000000 --- a/docs/openapi_generator/strong_typing/classdef.py +++ /dev/null @@ -1,460 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -import copy -import dataclasses -import datetime -import decimal -import enum -import ipaddress -import math -import re -import sys -import types -import typing -import uuid -from dataclasses import dataclass -from typing import Any, Dict, List, Literal, Optional, Tuple, Type, TypeVar, Union - -from .auxiliary import ( - Alias, - Annotated, - float32, - float64, - int16, - int32, - int64, - MaxLength, - Precision, -) -from .core import JsonType, Schema -from .docstring import Docstring, DocstringParam -from .inspection import TypeLike -from .serialization import json_to_object, object_to_json - -T = TypeVar("T") - - -@dataclass -class JsonSchemaNode: - title: Optional[str] - description: Optional[str] - - -@dataclass -class JsonSchemaType(JsonSchemaNode): - type: str - format: Optional[str] - - -@dataclass -class JsonSchemaBoolean(JsonSchemaType): - type: Literal["boolean"] - const: Optional[bool] - default: Optional[bool] - examples: Optional[List[bool]] - - -@dataclass -class JsonSchemaInteger(JsonSchemaType): - type: Literal["integer"] - const: Optional[int] - default: Optional[int] - examples: Optional[List[int]] - enum: Optional[List[int]] - minimum: Optional[int] - maximum: Optional[int] - - -@dataclass -class JsonSchemaNumber(JsonSchemaType): - type: Literal["number"] - const: Optional[float] - default: Optional[float] - examples: Optional[List[float]] - minimum: Optional[float] - maximum: Optional[float] - exclusiveMinimum: Optional[float] - exclusiveMaximum: Optional[float] - multipleOf: Optional[float] - - -@dataclass -class JsonSchemaString(JsonSchemaType): - type: Literal["string"] - const: Optional[str] - default: Optional[str] - examples: Optional[List[str]] - enum: Optional[List[str]] - minLength: Optional[int] - maxLength: Optional[int] - - -@dataclass -class JsonSchemaArray(JsonSchemaType): - type: Literal["array"] - items: "JsonSchemaAny" - - -@dataclass -class JsonSchemaObject(JsonSchemaType): - type: Literal["object"] - properties: Optional[Dict[str, "JsonSchemaAny"]] - additionalProperties: Optional[bool] - required: Optional[List[str]] - - -@dataclass -class JsonSchemaRef(JsonSchemaNode): - ref: Annotated[str, Alias("$ref")] - - -@dataclass -class JsonSchemaAllOf(JsonSchemaNode): - allOf: List["JsonSchemaAny"] - - -@dataclass -class JsonSchemaAnyOf(JsonSchemaNode): - anyOf: List["JsonSchemaAny"] - - -@dataclass -class Discriminator: - propertyName: str - mapping: Dict[str, str] - - -@dataclass -class JsonSchemaOneOf(JsonSchemaNode): - oneOf: List["JsonSchemaAny"] - discriminator: Optional[Discriminator] - - -JsonSchemaAny = Union[ - JsonSchemaRef, - JsonSchemaBoolean, - JsonSchemaInteger, - JsonSchemaNumber, - JsonSchemaString, - JsonSchemaArray, - JsonSchemaObject, - JsonSchemaOneOf, -] - - -@dataclass -class JsonSchemaTopLevelObject(JsonSchemaObject): - schema: Annotated[str, Alias("$schema")] - definitions: Optional[Dict[str, JsonSchemaAny]] - - -def integer_range_to_type(min_value: float, max_value: float) -> type: - if min_value >= -(2**15) and max_value < 2**15: - 
return int16 - elif min_value >= -(2**31) and max_value < 2**31: - return int32 - else: - return int64 - - -def enum_safe_name(name: str) -> str: - name = re.sub(r"\W", "_", name) - is_dunder = name.startswith("__") - is_sunder = name.startswith("_") and name.endswith("_") - if is_dunder or is_sunder: # provide an alternative for dunder and sunder names - name = f"v{name}" - return name - - -def enum_values_to_type( - module: types.ModuleType, - name: str, - values: Dict[str, Any], - title: Optional[str] = None, - description: Optional[str] = None, -) -> Type[enum.Enum]: - enum_class: Type[enum.Enum] = enum.Enum(name, values) # type: ignore - - # assign the newly created type to the same module where the defining class is - enum_class.__module__ = module.__name__ - enum_class.__doc__ = str( - Docstring(short_description=title, long_description=description) - ) - setattr(module, name, enum_class) - - return enum.unique(enum_class) - - -def schema_to_type( - schema: Schema, *, module: types.ModuleType, class_name: str -) -> TypeLike: - """ - Creates a Python type from a JSON schema. - - :param schema: The JSON schema that the types would correspond to. - :param module: The module in which to create the new types. - :param class_name: The name assigned to the top-level class. - """ - - top_node = typing.cast( - JsonSchemaTopLevelObject, json_to_object(JsonSchemaTopLevelObject, schema) - ) - if top_node.definitions is not None: - for type_name, type_node in top_node.definitions.items(): - type_def = node_to_typedef(module, type_name, type_node) - if type_def.default is not dataclasses.MISSING: - raise TypeError("disallowed: `default` for top-level type definitions") - - setattr(type_def.type, "__module__", module.__name__) - setattr(module, type_name, type_def.type) - - return node_to_typedef(module, class_name, top_node).type - - -@dataclass -class TypeDef: - type: TypeLike - default: Any = dataclasses.MISSING - - -def json_to_value(target_type: TypeLike, data: JsonType) -> Any: - if data is not None: - return json_to_object(target_type, data) - else: - return dataclasses.MISSING - - -def node_to_typedef( - module: types.ModuleType, context: str, node: JsonSchemaNode -) -> TypeDef: - if isinstance(node, JsonSchemaRef): - match_obj = re.match(r"^#/definitions/(\w+)$", node.ref) - if not match_obj: - raise ValueError(f"invalid reference: {node.ref}") - - type_name = match_obj.group(1) - return TypeDef(getattr(module, type_name), dataclasses.MISSING) - - elif isinstance(node, JsonSchemaBoolean): - if node.const is not None: - return TypeDef(Literal[node.const], dataclasses.MISSING) - - default = json_to_value(bool, node.default) - return TypeDef(bool, default) - - elif isinstance(node, JsonSchemaInteger): - if node.const is not None: - return TypeDef(Literal[node.const], dataclasses.MISSING) - - integer_type: TypeLike - if node.format == "int16": - integer_type = int16 - elif node.format == "int32": - integer_type = int32 - elif node.format == "int64": - integer_type = int64 - else: - if node.enum is not None: - integer_type = integer_range_to_type(min(node.enum), max(node.enum)) - elif node.minimum is not None and node.maximum is not None: - integer_type = integer_range_to_type(node.minimum, node.maximum) - else: - integer_type = int - - default = json_to_value(integer_type, node.default) - return TypeDef(integer_type, default) - - elif isinstance(node, JsonSchemaNumber): - if node.const is not None: - return TypeDef(Literal[node.const], dataclasses.MISSING) - - number_type: TypeLike - if 
node.format == "float32": - number_type = float32 - elif node.format == "float64": - number_type = float64 - else: - if ( - node.exclusiveMinimum is not None - and node.exclusiveMaximum is not None - and node.exclusiveMinimum == -node.exclusiveMaximum - ): - integer_digits = round(math.log10(node.exclusiveMaximum)) - else: - integer_digits = None - - if node.multipleOf is not None: - decimal_digits = -round(math.log10(node.multipleOf)) - else: - decimal_digits = None - - if integer_digits is not None and decimal_digits is not None: - number_type = Annotated[ - decimal.Decimal, - Precision(integer_digits + decimal_digits, decimal_digits), - ] - else: - number_type = float - - default = json_to_value(number_type, node.default) - return TypeDef(number_type, default) - - elif isinstance(node, JsonSchemaString): - if node.const is not None: - return TypeDef(Literal[node.const], dataclasses.MISSING) - - string_type: TypeLike - if node.format == "date-time": - string_type = datetime.datetime - elif node.format == "uuid": - string_type = uuid.UUID - elif node.format == "ipv4": - string_type = ipaddress.IPv4Address - elif node.format == "ipv6": - string_type = ipaddress.IPv6Address - - elif node.enum is not None: - string_type = enum_values_to_type( - module, - context, - {enum_safe_name(e): e for e in node.enum}, - title=node.title, - description=node.description, - ) - - elif node.maxLength is not None: - string_type = Annotated[str, MaxLength(node.maxLength)] - else: - string_type = str - - default = json_to_value(string_type, node.default) - return TypeDef(string_type, default) - - elif isinstance(node, JsonSchemaArray): - type_def = node_to_typedef(module, context, node.items) - if type_def.default is not dataclasses.MISSING: - raise TypeError("disallowed: `default` for array element type") - list_type = List[(type_def.type,)] # type: ignore - return TypeDef(list_type, dataclasses.MISSING) - - elif isinstance(node, JsonSchemaObject): - if node.properties is None: - return TypeDef(JsonType, dataclasses.MISSING) - - if node.additionalProperties is None or node.additionalProperties is not False: - raise TypeError("expected: `additionalProperties` equals `false`") - - required = node.required if node.required is not None else [] - - class_name = context - - fields: List[Tuple[str, Any, dataclasses.Field]] = [] - params: Dict[str, DocstringParam] = {} - for prop_name, prop_node in node.properties.items(): - type_def = node_to_typedef(module, f"{class_name}__{prop_name}", prop_node) - if prop_name in required: - prop_type = type_def.type - else: - prop_type = Union[(None, type_def.type)] - fields.append( - (prop_name, prop_type, dataclasses.field(default=type_def.default)) - ) - prop_desc = prop_node.title or prop_node.description - if prop_desc is not None: - params[prop_name] = DocstringParam(prop_name, prop_desc) - - fields.sort(key=lambda t: t[2].default is not dataclasses.MISSING) - if sys.version_info >= (3, 12): - class_type = dataclasses.make_dataclass( - class_name, fields, module=module.__name__ - ) - else: - class_type = dataclasses.make_dataclass( - class_name, fields, namespace={"__module__": module.__name__} - ) - class_type.__doc__ = str( - Docstring( - short_description=node.title, - long_description=node.description, - params=params, - ) - ) - setattr(module, class_name, class_type) - return TypeDef(class_type, dataclasses.MISSING) - - elif isinstance(node, JsonSchemaOneOf): - union_defs = tuple(node_to_typedef(module, context, n) for n in node.oneOf) - if any(d.default is not 
dataclasses.MISSING for d in union_defs): - raise TypeError("disallowed: `default` for union member type") - union_types = tuple(d.type for d in union_defs) - return TypeDef(Union[union_types], dataclasses.MISSING) - - raise NotImplementedError() - - -@dataclass -class SchemaFlatteningOptions: - qualified_names: bool = False - recursive: bool = False - - -def flatten_schema( - schema: Schema, *, options: Optional[SchemaFlatteningOptions] = None -) -> Schema: - top_node = typing.cast( - JsonSchemaTopLevelObject, json_to_object(JsonSchemaTopLevelObject, schema) - ) - flattener = SchemaFlattener(options) - obj = flattener.flatten(top_node) - return typing.cast(Schema, object_to_json(obj)) - - -class SchemaFlattener: - options: SchemaFlatteningOptions - - def __init__(self, options: Optional[SchemaFlatteningOptions] = None) -> None: - self.options = options or SchemaFlatteningOptions() - - def flatten(self, source_node: JsonSchemaObject) -> JsonSchemaObject: - if source_node.type != "object": - return source_node - - source_props = source_node.properties or {} - target_props: Dict[str, JsonSchemaAny] = {} - - source_reqs = source_node.required or [] - target_reqs: List[str] = [] - - for name, prop in source_props.items(): - if not isinstance(prop, JsonSchemaObject): - target_props[name] = prop - if name in source_reqs: - target_reqs.append(name) - continue - - if self.options.recursive: - obj = self.flatten(prop) - else: - obj = prop - if obj.properties is not None: - if self.options.qualified_names: - target_props.update( - (f"{name}.{n}", p) for n, p in obj.properties.items() - ) - else: - target_props.update(obj.properties.items()) - if obj.required is not None: - if self.options.qualified_names: - target_reqs.extend(f"{name}.{n}" for n in obj.required) - else: - target_reqs.extend(obj.required) - - target_node = copy.copy(source_node) - target_node.properties = target_props or None - target_node.additionalProperties = False - target_node.required = target_reqs or None - return target_node diff --git a/docs/openapi_generator/strong_typing/core.py b/docs/openapi_generator/strong_typing/core.py deleted file mode 100644 index 501b6a5db..000000000 --- a/docs/openapi_generator/strong_typing/core.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -from typing import Dict, List, Union - - -class JsonObject: - "Placeholder type for an unrestricted JSON object." - - -class JsonArray: - "Placeholder type for an unrestricted JSON array." - - -# a JSON type with possible `null` values -JsonType = Union[ - None, - bool, - int, - float, - str, - Dict[str, "JsonType"], - List["JsonType"], -] - -# a JSON type that cannot contain `null` values -StrictJsonType = Union[ - bool, - int, - float, - str, - Dict[str, "StrictJsonType"], - List["StrictJsonType"], -] - -# a meta-type that captures the object type in a JSON schema -Schema = Dict[str, JsonType] diff --git a/docs/openapi_generator/strong_typing/deserializer.py b/docs/openapi_generator/strong_typing/deserializer.py deleted file mode 100644 index 5859d3bbe..000000000 --- a/docs/openapi_generator/strong_typing/deserializer.py +++ /dev/null @@ -1,959 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -import abc -import base64 -import dataclasses -import datetime -import enum -import inspect -import ipaddress -import sys -import typing -import uuid -from types import ModuleType -from typing import ( - Any, - Callable, - Dict, - Generic, - List, - Literal, - NamedTuple, - Optional, - Set, - Tuple, - Type, - TypeVar, - Union, -) - -from .core import JsonType -from .exception import JsonKeyError, JsonTypeError, JsonValueError -from .inspection import ( - create_object, - enum_value_types, - evaluate_type, - get_class_properties, - get_class_property, - get_resolved_hints, - is_dataclass_instance, - is_dataclass_type, - is_named_tuple_type, - is_type_annotated, - is_type_literal, - is_type_optional, - TypeLike, - unwrap_annotated_type, - unwrap_literal_values, - unwrap_optional_type, -) -from .mapping import python_field_to_json_property -from .name import python_type_to_str - -E = TypeVar("E", bound=enum.Enum) -T = TypeVar("T") -R = TypeVar("R") -K = TypeVar("K") -V = TypeVar("V") - - -class Deserializer(abc.ABC, Generic[T]): - "Parses a JSON value into a Python type." - - def build(self, context: Optional[ModuleType]) -> None: - """ - Creates auxiliary parsers that this parser is depending on. - - :param context: A module context for evaluating types specified as a string. - """ - - @abc.abstractmethod - def parse(self, data: JsonType) -> T: - """ - Parses a JSON value into a Python type. - - :param data: The JSON value to de-serialize. - :returns: The Python object that the JSON value de-serializes to. - """ - - -class NoneDeserializer(Deserializer[None]): - "Parses JSON `null` values into Python `None`." - - def parse(self, data: JsonType) -> None: - if data is not None: - raise JsonTypeError( - f"`None` type expects JSON `null` but instead received: {data}" - ) - return None - - -class BoolDeserializer(Deserializer[bool]): - "Parses JSON `boolean` values into Python `bool` type." - - def parse(self, data: JsonType) -> bool: - if not isinstance(data, bool): - raise JsonTypeError( - f"`bool` type expects JSON `boolean` data but instead received: {data}" - ) - return bool(data) - - -class IntDeserializer(Deserializer[int]): - "Parses JSON `number` values into Python `int` type." - - def parse(self, data: JsonType) -> int: - if not isinstance(data, int): - raise JsonTypeError( - f"`int` type expects integer data as JSON `number` but instead received: {data}" - ) - return int(data) - - -class FloatDeserializer(Deserializer[float]): - "Parses JSON `number` values into Python `float` type." - - def parse(self, data: JsonType) -> float: - if not isinstance(data, float) and not isinstance(data, int): - raise JsonTypeError( - f"`int` type expects data as JSON `number` but instead received: {data}" - ) - return float(data) - - -class StringDeserializer(Deserializer[str]): - "Parses JSON `string` values into Python `str` type." - - def parse(self, data: JsonType) -> str: - if not isinstance(data, str): - raise JsonTypeError( - f"`str` type expects JSON `string` data but instead received: {data}" - ) - return str(data) - - -class BytesDeserializer(Deserializer[bytes]): - "Parses JSON `string` values of Base64-encoded strings into Python `bytes` type." 
- - def parse(self, data: JsonType) -> bytes: - if not isinstance(data, str): - raise JsonTypeError( - f"`bytes` type expects JSON `string` data but instead received: {data}" - ) - return base64.b64decode(data, validate=True) - - -class DateTimeDeserializer(Deserializer[datetime.datetime]): - "Parses JSON `string` values representing timestamps in ISO 8601 format to Python `datetime` with time zone." - - def parse(self, data: JsonType) -> datetime.datetime: - if not isinstance(data, str): - raise JsonTypeError( - f"`datetime` type expects JSON `string` data but instead received: {data}" - ) - - if data.endswith("Z"): - data = f"{data[:-1]}+00:00" # Python's isoformat() does not support military time zones like "Zulu" for UTC - timestamp = datetime.datetime.fromisoformat(data) - if timestamp.tzinfo is None: - raise JsonValueError( - f"timestamp lacks explicit time zone designator: {data}" - ) - return timestamp - - -class DateDeserializer(Deserializer[datetime.date]): - "Parses JSON `string` values representing dates in ISO 8601 format to Python `date` type." - - def parse(self, data: JsonType) -> datetime.date: - if not isinstance(data, str): - raise JsonTypeError( - f"`date` type expects JSON `string` data but instead received: {data}" - ) - - return datetime.date.fromisoformat(data) - - -class TimeDeserializer(Deserializer[datetime.time]): - "Parses JSON `string` values representing time instances in ISO 8601 format to Python `time` type with time zone." - - def parse(self, data: JsonType) -> datetime.time: - if not isinstance(data, str): - raise JsonTypeError( - f"`time` type expects JSON `string` data but instead received: {data}" - ) - - return datetime.time.fromisoformat(data) - - -class UUIDDeserializer(Deserializer[uuid.UUID]): - "Parses JSON `string` values of UUID strings into Python `uuid.UUID` type." - - def parse(self, data: JsonType) -> uuid.UUID: - if not isinstance(data, str): - raise JsonTypeError( - f"`UUID` type expects JSON `string` data but instead received: {data}" - ) - return uuid.UUID(data) - - -class IPv4Deserializer(Deserializer[ipaddress.IPv4Address]): - "Parses JSON `string` values of IPv4 address strings into Python `ipaddress.IPv4Address` type." - - def parse(self, data: JsonType) -> ipaddress.IPv4Address: - if not isinstance(data, str): - raise JsonTypeError( - f"`IPv4Address` type expects JSON `string` data but instead received: {data}" - ) - return ipaddress.IPv4Address(data) - - -class IPv6Deserializer(Deserializer[ipaddress.IPv6Address]): - "Parses JSON `string` values of IPv6 address strings into Python `ipaddress.IPv6Address` type." - - def parse(self, data: JsonType) -> ipaddress.IPv6Address: - if not isinstance(data, str): - raise JsonTypeError( - f"`IPv6Address` type expects JSON `string` data but instead received: {data}" - ) - return ipaddress.IPv6Address(data) - - -class ListDeserializer(Deserializer[List[T]]): - "Recursively de-serializes a JSON array into a Python `list`." 
- - item_type: Type[T] - item_parser: Deserializer - - def __init__(self, item_type: Type[T]) -> None: - self.item_type = item_type - - def build(self, context: Optional[ModuleType]) -> None: - self.item_parser = _get_deserializer(self.item_type, context) - - def parse(self, data: JsonType) -> List[T]: - if not isinstance(data, list): - type_name = python_type_to_str(self.item_type) - raise JsonTypeError( - f"type `List[{type_name}]` expects JSON `array` data but instead received: {data}" - ) - - return [self.item_parser.parse(item) for item in data] - - -class DictDeserializer(Deserializer[Dict[K, V]]): - "Recursively de-serializes a JSON object into a Python `dict`." - - key_type: Type[K] - value_type: Type[V] - value_parser: Deserializer[V] - - def __init__(self, key_type: Type[K], value_type: Type[V]) -> None: - self.key_type = key_type - self.value_type = value_type - self._check_key_type() - - def build(self, context: Optional[ModuleType]) -> None: - self.value_parser = _get_deserializer(self.value_type, context) - - def _check_key_type(self) -> None: - if self.key_type is str: - return - - if issubclass(self.key_type, enum.Enum): - value_types = enum_value_types(self.key_type) - if len(value_types) != 1: - raise JsonTypeError( - f"type `{self.container_type}` has invalid key type, " - f"enumerations must have a consistent member value type but several types found: {value_types}" - ) - value_type = value_types.pop() - if value_type is not str: - f"`type `{self.container_type}` has invalid enumeration key type, expected `enum.Enum` with string values" - return - - raise JsonTypeError( - f"`type `{self.container_type}` has invalid key type, expected `str` or `enum.Enum` with string values" - ) - - @property - def container_type(self) -> str: - key_type_name = python_type_to_str(self.key_type) - value_type_name = python_type_to_str(self.value_type) - return f"Dict[{key_type_name}, {value_type_name}]" - - def parse(self, data: JsonType) -> Dict[K, V]: - if not isinstance(data, dict): - raise JsonTypeError( - f"`type `{self.container_type}` expects JSON `object` data but instead received: {data}" - ) - - return dict( - (self.key_type(key), self.value_parser.parse(value)) # type: ignore[call-arg] - for key, value in data.items() - ) - - -class SetDeserializer(Deserializer[Set[T]]): - "Recursively de-serializes a JSON list into a Python `set`." - - member_type: Type[T] - member_parser: Deserializer - - def __init__(self, member_type: Type[T]) -> None: - self.member_type = member_type - - def build(self, context: Optional[ModuleType]) -> None: - self.member_parser = _get_deserializer(self.member_type, context) - - def parse(self, data: JsonType) -> Set[T]: - if not isinstance(data, list): - type_name = python_type_to_str(self.member_type) - raise JsonTypeError( - f"type `Set[{type_name}]` expects JSON `array` data but instead received: {data}" - ) - - return set(self.member_parser.parse(item) for item in data) - - -class TupleDeserializer(Deserializer[Tuple[Any, ...]]): - "Recursively de-serializes a JSON list into a Python `tuple`." - - item_types: Tuple[Type[Any], ...] - item_parsers: Tuple[Deserializer[Any], ...] 
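# A minimal usage sketch (values illustrative):
#
#   >>> create_deserializer(Tuple[str, int]).parse(["answer", 42])
#   ('answer', 42)
#
# An array of the wrong length raises JsonValueError; a non-array value raises
# JsonTypeError, mirroring the checks in parse() below.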
- - def __init__(self, item_types: Tuple[Type[Any], ...]) -> None: - self.item_types = item_types - - def build(self, context: Optional[ModuleType]) -> None: - self.item_parsers = tuple( - _get_deserializer(item_type, context) for item_type in self.item_types - ) - - @property - def container_type(self) -> str: - type_names = ", ".join( - python_type_to_str(item_type) for item_type in self.item_types - ) - return f"Tuple[{type_names}]" - - def parse(self, data: JsonType) -> Tuple[Any, ...]: - if not isinstance(data, list) or len(data) != len(self.item_parsers): - if not isinstance(data, list): - raise JsonTypeError( - f"type `{self.container_type}` expects JSON `array` data but instead received: {data}" - ) - else: - count = len(self.item_parsers) - raise JsonValueError( - f"type `{self.container_type}` expects a JSON `array` of length {count} but received length {len(data)}" - ) - - return tuple( - item_parser.parse(item) - for item_parser, item in zip(self.item_parsers, data) - ) - - -class UnionDeserializer(Deserializer): - "De-serializes a JSON value (of any type) into a Python union type." - - member_types: Tuple[type, ...] - member_parsers: Tuple[Deserializer, ...] - - def __init__(self, member_types: Tuple[type, ...]) -> None: - self.member_types = member_types - - def build(self, context: Optional[ModuleType]) -> None: - self.member_parsers = tuple( - _get_deserializer(member_type, context) for member_type in self.member_types - ) - - def parse(self, data: JsonType) -> Any: - for member_parser in self.member_parsers: - # iterate over potential types of discriminated union - try: - return member_parser.parse(data) - except (JsonKeyError, JsonTypeError): - # indicates a required field is missing from JSON dict -OR- the data cannot be cast to the expected type, - # i.e. we don't have the type that we are looking for - continue - - type_names = ", ".join( - python_type_to_str(member_type) for member_type in self.member_types - ) - raise JsonKeyError( - f"type `Union[{type_names}]` could not be instantiated from: {data}" - ) - - -def get_literal_properties(typ: type) -> Set[str]: - "Returns the names of all properties in a class that are of a literal type." - - return set( - property_name - for property_name, property_type in get_class_properties(typ) - if is_type_literal(property_type) - ) - - -def get_discriminating_properties(types: Tuple[type, ...]) -> Set[str]: - "Returns a set of properties with literal type that are common across all specified classes." - - if not types or not all(isinstance(typ, type) for typ in types): - return set() - - props = get_literal_properties(types[0]) - for typ in types[1:]: - props = props & get_literal_properties(typ) - - return props - - -class TaggedUnionDeserializer(Deserializer): - "De-serializes a JSON value with one or more disambiguating properties into a Python union type." - - member_types: Tuple[type, ...] 
- disambiguating_properties: Set[str] - member_parsers: Dict[Tuple[str, Any], Deserializer] - - def __init__(self, member_types: Tuple[type, ...]) -> None: - self.member_types = member_types - self.disambiguating_properties = get_discriminating_properties(member_types) - - def build(self, context: Optional[ModuleType]) -> None: - self.member_parsers = {} - for member_type in self.member_types: - for property_name in self.disambiguating_properties: - literal_type = get_class_property(member_type, property_name) - if not literal_type: - continue - - for literal_value in unwrap_literal_values(literal_type): - tpl = (property_name, literal_value) - if tpl in self.member_parsers: - raise JsonTypeError( - f"disambiguating property `{property_name}` in type `{self.union_type}` has a duplicate value: {literal_value}" - ) - - self.member_parsers[tpl] = _get_deserializer(member_type, context) - - @property - def union_type(self) -> str: - type_names = ", ".join( - python_type_to_str(member_type) for member_type in self.member_types - ) - return f"Union[{type_names}]" - - def parse(self, data: JsonType) -> Any: - if not isinstance(data, dict): - raise JsonTypeError( - f"tagged union type `{self.union_type}` expects JSON `object` data but instead received: {data}" - ) - - for property_name in self.disambiguating_properties: - disambiguating_value = data.get(property_name) - if disambiguating_value is None: - continue - - member_parser = self.member_parsers.get( - (property_name, disambiguating_value) - ) - if member_parser is None: - raise JsonTypeError( - f"disambiguating property value is invalid for tagged union type `{self.union_type}`: {data}" - ) - - return member_parser.parse(data) - - raise JsonTypeError( - f"disambiguating property value is missing for tagged union type `{self.union_type}`: {data}" - ) - - -class LiteralDeserializer(Deserializer): - "De-serializes a JSON value into a Python literal type." - - values: Tuple[Any, ...] - parser: Deserializer - - def __init__(self, values: Tuple[Any, ...]) -> None: - self.values = values - - def build(self, context: Optional[ModuleType]) -> None: - literal_type_tuple = tuple(type(value) for value in self.values) - literal_type_set = set(literal_type_tuple) - if len(literal_type_set) != 1: - value_names = ", ".join(repr(value) for value in self.values) - raise TypeError( - f"type `Literal[{value_names}]` expects consistent literal value types but got: {literal_type_tuple}" - ) - - literal_type = literal_type_set.pop() - self.parser = _get_deserializer(literal_type, context) - - def parse(self, data: JsonType) -> Any: - value = self.parser.parse(data) - if value not in self.values: - value_names = ", ".join(repr(value) for value in self.values) - raise JsonTypeError( - f"type `Literal[{value_names}]` could not be instantiated from: {data}" - ) - return value - - -class EnumDeserializer(Deserializer[E]): - "Returns an enumeration instance based on the enumeration value read from a JSON value." - - enum_type: Type[E] - - def __init__(self, enum_type: Type[E]) -> None: - self.enum_type = enum_type - - def parse(self, data: JsonType) -> E: - return self.enum_type(data) - - -class CustomDeserializer(Deserializer[T]): - "Uses the `from_json` class method in class to de-serialize the object from JSON." 
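# A minimal sketch of the hook described above; `Color` is a hypothetical class
# that bypasses reflection-based parsing by exposing a `from_json` class method,
# which _create_deserializer() at the end of this module picks up:
#
#   class Color:
#       def __init__(self, rgb: str) -> None:
#           self.rgb = rgb
#
#       @classmethod
#       def from_json(cls, data: JsonType) -> "Color":
#           return cls(str(data))
#
#   create_deserializer(Color).parse("#ff0000")   # -> Color with rgb == "#ff0000"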
- - converter: Callable[[JsonType], T] - - def __init__(self, converter: Callable[[JsonType], T]) -> None: - self.converter = converter - - def parse(self, data: JsonType) -> T: - return self.converter(data) - - -class FieldDeserializer(abc.ABC, Generic[T, R]): - """ - Deserializes a JSON property into a Python object field. - - :param property_name: The name of the JSON property to read from a JSON `object`. - :param field_name: The name of the field in a Python class to write data to. - :param parser: A compatible deserializer that can handle the field's type. - """ - - property_name: str - field_name: str - parser: Deserializer[T] - - def __init__( - self, property_name: str, field_name: str, parser: Deserializer[T] - ) -> None: - self.property_name = property_name - self.field_name = field_name - self.parser = parser - - @abc.abstractmethod - def parse_field(self, data: Dict[str, JsonType]) -> R: ... - - -class RequiredFieldDeserializer(FieldDeserializer[T, T]): - "Deserializes a JSON property into a mandatory Python object field." - - def parse_field(self, data: Dict[str, JsonType]) -> T: - if self.property_name not in data: - raise JsonKeyError( - f"missing required property `{self.property_name}` from JSON object: {data}" - ) - - return self.parser.parse(data[self.property_name]) - - -class OptionalFieldDeserializer(FieldDeserializer[T, Optional[T]]): - "Deserializes a JSON property into an optional Python object field with a default value of `None`." - - def parse_field(self, data: Dict[str, JsonType]) -> Optional[T]: - value = data.get(self.property_name) - if value is not None: - return self.parser.parse(value) - else: - return None - - -class DefaultFieldDeserializer(FieldDeserializer[T, T]): - "Deserializes a JSON property into a Python object field with an explicit default value." - - default_value: T - - def __init__( - self, - property_name: str, - field_name: str, - parser: Deserializer, - default_value: T, - ) -> None: - super().__init__(property_name, field_name, parser) - self.default_value = default_value - - def parse_field(self, data: Dict[str, JsonType]) -> T: - value = data.get(self.property_name) - if value is not None: - return self.parser.parse(value) - else: - return self.default_value - - -class DefaultFactoryFieldDeserializer(FieldDeserializer[T, T]): - "Deserializes a JSON property into an optional Python object field with an explicit default value factory." - - default_factory: Callable[[], T] - - def __init__( - self, - property_name: str, - field_name: str, - parser: Deserializer[T], - default_factory: Callable[[], T], - ) -> None: - super().__init__(property_name, field_name, parser) - self.default_factory = default_factory - - def parse_field(self, data: Dict[str, JsonType]) -> T: - value = data.get(self.property_name) - if value is not None: - return self.parser.parse(value) - else: - return self.default_factory() - - -class ClassDeserializer(Deserializer[T]): - "Base class for de-serializing class-like types such as data classes, named tuples and regular classes." 
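# Note on strictness: parse() below applies every configured field de-serializer
# and then rejects JSON properties that no field claims. For a hypothetical
# dataclass `Task` with a single `name: str` member:
#
#   create_deserializer(Task).parse({"name": "train", "unexpected": 1})
#   # raises JsonKeyError: unrecognized fields in JSON object: ['unexpected']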
- - class_type: type - property_parsers: List[FieldDeserializer] - property_fields: Set[str] - - def __init__(self, class_type: Type[T]) -> None: - self.class_type = class_type - - def assign(self, property_parsers: List[FieldDeserializer]) -> None: - self.property_parsers = property_parsers - self.property_fields = set( - property_parser.property_name for property_parser in property_parsers - ) - - def parse(self, data: JsonType) -> T: - if not isinstance(data, dict): - type_name = python_type_to_str(self.class_type) - raise JsonTypeError( - f"`type `{type_name}` expects JSON `object` data but instead received: {data}" - ) - - object_data: Dict[str, JsonType] = typing.cast(Dict[str, JsonType], data) - - field_values = {} - for property_parser in self.property_parsers: - field_values[property_parser.field_name] = property_parser.parse_field( - object_data - ) - - if not self.property_fields.issuperset(object_data): - unassigned_names = [ - name for name in object_data if name not in self.property_fields - ] - raise JsonKeyError( - f"unrecognized fields in JSON object: {unassigned_names}" - ) - - return self.create(**field_values) - - def create(self, **field_values: Any) -> T: - "Instantiates an object with a collection of property values." - - obj: T = create_object(self.class_type) - - # use `setattr` on newly created object instance - for field_name, field_value in field_values.items(): - setattr(obj, field_name, field_value) - return obj - - -class NamedTupleDeserializer(ClassDeserializer[NamedTuple]): - "De-serializes a named tuple from a JSON `object`." - - def build(self, context: Optional[ModuleType]) -> None: - property_parsers: List[FieldDeserializer] = [ - RequiredFieldDeserializer( - field_name, field_name, _get_deserializer(field_type, context) - ) - for field_name, field_type in get_resolved_hints(self.class_type).items() - ] - super().assign(property_parsers) - - def create(self, **field_values: Any) -> NamedTuple: - return self.class_type(**field_values) - - -class DataclassDeserializer(ClassDeserializer[T]): - "De-serializes a data class from a JSON `object`." 
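# A minimal sketch (hypothetical dataclass `Job`) of how build() below maps member
# declarations to field de-serializers: members without a default are required,
# Optional members without a default fall back to None, and members with a default
# (or default_factory) fall back to that value when the property is absent:
#
#   @dataclasses.dataclass
#   class Job:
#       name: str                    # RequiredFieldDeserializer
#       note: Optional[str]          # OptionalFieldDeserializer (None when absent)
#       retries: int = 3             # DefaultFieldDeserializer
#
#   create_deserializer(Job).parse({"name": "train"})
#   # -> Job with name == "train", note is None, retries == 3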
- - def __init__(self, class_type: Type[T]) -> None: - if not dataclasses.is_dataclass(class_type): - raise TypeError("expected: data-class type") - super().__init__(class_type) # type: ignore[arg-type] - - def build(self, context: Optional[ModuleType]) -> None: - property_parsers: List[FieldDeserializer] = [] - resolved_hints = get_resolved_hints(self.class_type) - for field in dataclasses.fields(self.class_type): - field_type = resolved_hints[field.name] - property_name = python_field_to_json_property(field.name, field_type) - - is_optional = is_type_optional(field_type) - has_default = field.default is not dataclasses.MISSING - has_default_factory = field.default_factory is not dataclasses.MISSING - - if is_optional: - required_type: Type[T] = unwrap_optional_type(field_type) - else: - required_type = field_type - - parser = _get_deserializer(required_type, context) - - if has_default: - field_parser: FieldDeserializer = DefaultFieldDeserializer( - property_name, field.name, parser, field.default - ) - elif has_default_factory: - default_factory = typing.cast(Callable[[], Any], field.default_factory) - field_parser = DefaultFactoryFieldDeserializer( - property_name, field.name, parser, default_factory - ) - elif is_optional: - field_parser = OptionalFieldDeserializer( - property_name, field.name, parser - ) - else: - field_parser = RequiredFieldDeserializer( - property_name, field.name, parser - ) - - property_parsers.append(field_parser) - - super().assign(property_parsers) - - -class FrozenDataclassDeserializer(DataclassDeserializer[T]): - "De-serializes a frozen data class from a JSON `object`." - - def create(self, **field_values: Any) -> T: - "Instantiates an object with a collection of property values." - - # create object instance without calling `__init__` - obj: T = create_object(self.class_type) - - # can't use `setattr` on frozen dataclasses, pass member variable values to `__init__` - obj.__init__(**field_values) # type: ignore - return obj - - -class TypedClassDeserializer(ClassDeserializer[T]): - "De-serializes a class with type annotations from a JSON `object` by iterating over class properties." - - def build(self, context: Optional[ModuleType]) -> None: - property_parsers: List[FieldDeserializer] = [] - for field_name, field_type in get_resolved_hints(self.class_type).items(): - property_name = python_field_to_json_property(field_name, field_type) - - is_optional = is_type_optional(field_type) - - if is_optional: - required_type: Type[T] = unwrap_optional_type(field_type) - else: - required_type = field_type - - parser = _get_deserializer(required_type, context) - - if is_optional: - field_parser: FieldDeserializer = OptionalFieldDeserializer( - property_name, field_name, parser - ) - else: - field_parser = RequiredFieldDeserializer( - property_name, field_name, parser - ) - - property_parsers.append(field_parser) - - super().assign(property_parsers) - - -def create_deserializer( - typ: TypeLike, context: Optional[ModuleType] = None -) -> Deserializer: - """ - Creates a de-serializer engine to produce a Python object from an object obtained from a JSON string. - - When de-serializing a JSON object into a Python object, the following transformations are applied: - - * Fundamental types are parsed as `bool`, `int`, `float` or `str`. - * Date and time types are parsed from the ISO 8601 format with time zone into the corresponding Python type - `datetime`, `date` or `time`. - * Byte arrays are read from a string with Base64 encoding into a `bytes` instance. 
- * UUIDs are extracted from a UUID string compliant with RFC 4122 into a `uuid.UUID` instance. - * Enumerations are instantiated with a lookup on enumeration value. - * Containers (e.g. `list`, `dict`, `set`, `tuple`) are parsed recursively. - * Complex objects with properties (including data class types) are populated from dictionaries of key-value pairs - using reflection (enumerating type annotations). - - :raises TypeError: A de-serializer engine cannot be constructed for the input type. - """ - - if context is None: - if isinstance(typ, type): - context = sys.modules[typ.__module__] - - return _get_deserializer(typ, context) - - -_CACHE: Dict[Tuple[str, str], Deserializer] = {} - - -def _get_deserializer(typ: TypeLike, context: Optional[ModuleType]) -> Deserializer: - "Creates or re-uses a de-serializer engine to parse an object obtained from a JSON string." - - cache_key = None - - if isinstance(typ, (str, typing.ForwardRef)): - if context is None: - raise TypeError(f"missing context for evaluating type: {typ}") - - if isinstance(typ, str): - if hasattr(context, typ): - cache_key = (context.__name__, typ) - elif isinstance(typ, typing.ForwardRef): - if hasattr(context, typ.__forward_arg__): - cache_key = (context.__name__, typ.__forward_arg__) - - typ = evaluate_type(typ, context) - - typ = unwrap_annotated_type(typ) if is_type_annotated(typ) else typ - - if isinstance(typ, type) and typing.get_origin(typ) is None: - cache_key = (typ.__module__, typ.__name__) - - if cache_key is not None: - deserializer = _CACHE.get(cache_key) - if deserializer is None: - deserializer = _create_deserializer(typ) - - # store de-serializer immediately in cache to avoid stack overflow for recursive types - _CACHE[cache_key] = deserializer - - if isinstance(typ, type): - # use type's own module as context for evaluating member types - context = sys.modules[typ.__module__] - - # create any de-serializers this de-serializer is depending on - deserializer.build(context) - else: - # special forms are not always hashable, create a new de-serializer every time - deserializer = _create_deserializer(typ) - deserializer.build(context) - - return deserializer - - -def _create_deserializer(typ: TypeLike) -> Deserializer: - "Creates a de-serializer engine to parse an object obtained from a JSON string." - - # check for well-known types - if typ is type(None): - return NoneDeserializer() - elif typ is bool: - return BoolDeserializer() - elif typ is int: - return IntDeserializer() - elif typ is float: - return FloatDeserializer() - elif typ is str: - return StringDeserializer() - elif typ is bytes: - return BytesDeserializer() - elif typ is datetime.datetime: - return DateTimeDeserializer() - elif typ is datetime.date: - return DateDeserializer() - elif typ is datetime.time: - return TimeDeserializer() - elif typ is uuid.UUID: - return UUIDDeserializer() - elif typ is ipaddress.IPv4Address: - return IPv4Deserializer() - elif typ is ipaddress.IPv6Address: - return IPv6Deserializer() - - # dynamically-typed collection types - if typ is list: - raise TypeError("explicit item type required: use `List[T]` instead of `list`") - if typ is dict: - raise TypeError( - "explicit key and value types required: use `Dict[K, V]` instead of `dict`" - ) - if typ is set: - raise TypeError("explicit member type required: use `Set[T]` instead of `set`") - if typ is tuple: - raise TypeError( - "explicit item type list required: use `Tuple[T, ...]` instead of `tuple`" - ) - - # generic types (e.g. list, dict, set, etc.) 
- origin_type = typing.get_origin(typ) - if origin_type is list: - (list_item_type,) = typing.get_args(typ) # unpack single tuple element - return ListDeserializer(list_item_type) - elif origin_type is dict: - key_type, value_type = typing.get_args(typ) - return DictDeserializer(key_type, value_type) - elif origin_type is set: - (set_member_type,) = typing.get_args(typ) # unpack single tuple element - return SetDeserializer(set_member_type) - elif origin_type is tuple: - return TupleDeserializer(typing.get_args(typ)) - elif origin_type is Union: - union_args = typing.get_args(typ) - if get_discriminating_properties(union_args): - return TaggedUnionDeserializer(union_args) - else: - return UnionDeserializer(union_args) - elif origin_type is Literal: - return LiteralDeserializer(typing.get_args(typ)) - - if not inspect.isclass(typ): - if is_dataclass_instance(typ): - raise TypeError(f"dataclass type expected but got instance: {typ}") - else: - raise TypeError(f"unable to de-serialize unrecognized type: {typ}") - - if issubclass(typ, enum.Enum): - return EnumDeserializer(typ) - - if is_named_tuple_type(typ): - return NamedTupleDeserializer(typ) - - # check if object has custom serialization method - convert_func = getattr(typ, "from_json", None) - if callable(convert_func): - return CustomDeserializer(convert_func) - - if is_dataclass_type(typ): - dataclass_params = getattr(typ, "__dataclass_params__", None) - if dataclass_params is not None and dataclass_params.frozen: - return FrozenDataclassDeserializer(typ) - else: - return DataclassDeserializer(typ) - - return TypedClassDeserializer(typ) diff --git a/docs/openapi_generator/strong_typing/docstring.py b/docs/openapi_generator/strong_typing/docstring.py deleted file mode 100644 index 3ef1e5e7a..000000000 --- a/docs/openapi_generator/strong_typing/docstring.py +++ /dev/null @@ -1,437 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -import builtins -import dataclasses -import inspect -import re -import sys -import types -import typing -from dataclasses import dataclass -from io import StringIO -from typing import Any, Callable, Dict, Optional, Protocol, Type, TypeVar - -if sys.version_info >= (3, 10): - from typing import TypeGuard -else: - from typing_extensions import TypeGuard - -from .inspection import ( - DataclassInstance, - get_class_properties, - get_signature, - is_dataclass_type, - is_type_enum, -) - -T = TypeVar("T") - - -@dataclass -class DocstringParam: - """ - A parameter declaration in a parameter block. - - :param name: The name of the parameter. - :param description: The description text for the parameter. - """ - - name: str - description: str - param_type: type = inspect.Signature.empty - - def __str__(self) -> str: - return f":param {self.name}: {self.description}" - - -@dataclass -class DocstringReturns: - """ - A `returns` declaration extracted from a docstring. - - :param description: The description text for the return value. - """ - - description: str - return_type: type = inspect.Signature.empty - - def __str__(self) -> str: - return f":returns: {self.description}" - - -@dataclass -class DocstringRaises: - """ - A `raises` declaration extracted from a docstring. - - :param typename: The type name of the exception raised. 
- :param description: The description associated with the exception raised. - """ - - typename: str - description: str - raise_type: type = inspect.Signature.empty - - def __str__(self) -> str: - return f":raises {self.typename}: {self.description}" - - -@dataclass -class Docstring: - """ - Represents the documentation string (a.k.a. docstring) for a type such as a (data) class or function. - - A docstring is broken down into the following components: - * A short description, which is the first block of text in the documentation string, and ends with a double - newline or a parameter block. - * A long description, which is the optional block of text following the short description, and ends with - a parameter block. - * A parameter block of named parameter and description string pairs in ReST-style. - * A `returns` declaration, which adds explanation to the return value. - * A `raises` declaration, which adds explanation to the exception type raised by the function on error. - - When the docstring is attached to a data class, it is understood as the documentation string of the class - `__init__` method. - - :param short_description: The short description text parsed from a docstring. - :param long_description: The long description text parsed from a docstring. - :param params: The parameter block extracted from a docstring. - :param returns: The returns declaration extracted from a docstring. - """ - - short_description: Optional[str] = None - long_description: Optional[str] = None - params: Dict[str, DocstringParam] = dataclasses.field(default_factory=dict) - returns: Optional[DocstringReturns] = None - raises: Dict[str, DocstringRaises] = dataclasses.field(default_factory=dict) - - @property - def full_description(self) -> Optional[str]: - if self.short_description and self.long_description: - return f"{self.short_description}\n\n{self.long_description}" - elif self.short_description: - return self.short_description - else: - return None - - def __str__(self) -> str: - output = StringIO() - - has_description = self.short_description or self.long_description - has_blocks = self.params or self.returns or self.raises - - if has_description: - if self.short_description and self.long_description: - output.write(self.short_description) - output.write("\n\n") - output.write(self.long_description) - elif self.short_description: - output.write(self.short_description) - - if has_blocks: - if has_description: - output.write("\n") - - for param in self.params.values(): - output.write("\n") - output.write(str(param)) - if self.returns: - output.write("\n") - output.write(str(self.returns)) - for raises in self.raises.values(): - output.write("\n") - output.write(str(raises)) - - s = output.getvalue() - output.close() - return s - - -def is_exception(member: object) -> TypeGuard[Type[BaseException]]: - return isinstance(member, type) and issubclass(member, BaseException) - - -def get_exceptions(module: types.ModuleType) -> Dict[str, Type[BaseException]]: - "Returns all exception classes declared in a module." - - return { - name: class_type - for name, class_type in inspect.getmembers(module, is_exception) - } - - -class SupportsDoc(Protocol): - __doc__: Optional[str] - - -def parse_type(typ: SupportsDoc) -> Docstring: - """ - Parse the docstring of a type into its components. - - :param typ: The type whose documentation string to parse. - :returns: Components of the documentation string. 
- """ - - doc = get_docstring(typ) - if doc is None: - return Docstring() - - docstring = parse_text(doc) - check_docstring(typ, docstring) - - # assign parameter and return types - if is_dataclass_type(typ): - properties = dict(get_class_properties(typing.cast(type, typ))) - - for name, param in docstring.params.items(): - param.param_type = properties[name] - - elif inspect.isfunction(typ): - signature = get_signature(typ) - for name, param in docstring.params.items(): - param.param_type = signature.parameters[name].annotation - if docstring.returns: - docstring.returns.return_type = signature.return_annotation - - # assign exception types - defining_module = inspect.getmodule(typ) - if defining_module: - context: Dict[str, type] = {} - context.update(get_exceptions(builtins)) - context.update(get_exceptions(defining_module)) - for exc_name, exc in docstring.raises.items(): - raise_type = context.get(exc_name) - if raise_type is None: - type_name = ( - getattr(typ, "__qualname__", None) - or getattr(typ, "__name__", None) - or None - ) - raise TypeError( - f"doc-string exception type `{exc_name}` is not an exception defined in the context of `{type_name}`" - ) - - exc.raise_type = raise_type - - return docstring - - -def parse_text(text: str) -> Docstring: - """ - Parse a ReST-style docstring into its components. - - :param text: The documentation string to parse, typically acquired as `type.__doc__`. - :returns: Components of the documentation string. - """ - - if not text: - return Docstring() - - # find block that starts object metadata block (e.g. `:param p:` or `:returns:`) - text = inspect.cleandoc(text) - match = re.search("^:", text, flags=re.MULTILINE) - if match: - desc_chunk = text[: match.start()] - meta_chunk = text[match.start() :] # noqa: E203 - else: - desc_chunk = text - meta_chunk = "" - - # split description text into short and long description - parts = desc_chunk.split("\n\n", 1) - - # ensure short description has no newlines - short_description = parts[0].strip().replace("\n", " ") or None - - # ensure long description preserves its structure (e.g. preformatted text) - if len(parts) > 1: - long_description = parts[1].strip() or None - else: - long_description = None - - params: Dict[str, DocstringParam] = {} - raises: Dict[str, DocstringRaises] = {} - returns = None - for match in re.finditer( - r"(^:.*?)(?=^:|\Z)", meta_chunk, flags=re.DOTALL | re.MULTILINE - ): - chunk = match.group(0) - if not chunk: - continue - - args_chunk, desc_chunk = chunk.lstrip(":").split(":", 1) - args = args_chunk.split() - desc = re.sub(r"\s+", " ", desc_chunk.strip()) - - if len(args) > 0: - kw = args[0] - if len(args) == 2: - if kw == "param": - params[args[1]] = DocstringParam( - name=args[1], - description=desc, - ) - elif kw == "raise" or kw == "raises": - raises[args[1]] = DocstringRaises( - typename=args[1], - description=desc, - ) - - elif len(args) == 1: - if kw == "return" or kw == "returns": - returns = DocstringReturns(description=desc) - - return Docstring( - long_description=long_description, - short_description=short_description, - params=params, - returns=returns, - raises=raises, - ) - - -def has_default_docstring(typ: SupportsDoc) -> bool: - "Check if class has the auto-generated string assigned by @dataclass." 
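# A minimal sketch (hypothetical dataclass): @dataclass synthesizes a docstring of
# the form "Point(x: int, y: int)" when none is written by hand; the regex check
# below treats exactly that pattern as auto-generated:
#
#   @dataclass
#   class Point:
#       x: int
#       y: int
#
#   has_default_docstring(Point)   # -> True
#   has_docstring(Point)           # -> False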
- - if not isinstance(typ, type): - return False - - if is_dataclass_type(typ): - return ( - typ.__doc__ is not None - and re.match(f"^{re.escape(typ.__name__)}[(].*[)]$", typ.__doc__) - is not None - ) - - if is_type_enum(typ): - return typ.__doc__ is not None and typ.__doc__ == "An enumeration." - - return False - - -def has_docstring(typ: SupportsDoc) -> bool: - "Check if class has a documentation string other than the auto-generated string assigned by @dataclass." - - if has_default_docstring(typ): - return False - - return bool(typ.__doc__) - - -def get_docstring(typ: SupportsDoc) -> Optional[str]: - if typ.__doc__ is None: - return None - - if has_default_docstring(typ): - return None - - return typ.__doc__ - - -def check_docstring( - typ: SupportsDoc, docstring: Docstring, strict: bool = False -) -> None: - """ - Verifies the doc-string of a type. - - :raises TypeError: Raised on a mismatch between doc-string parameters, and function or type signature. - """ - - if is_dataclass_type(typ): - check_dataclass_docstring(typ, docstring, strict) - elif inspect.isfunction(typ): - check_function_docstring(typ, docstring, strict) - - -def check_dataclass_docstring( - typ: Type[DataclassInstance], docstring: Docstring, strict: bool = False -) -> None: - """ - Verifies the doc-string of a data-class type. - - :param strict: Whether to check if all data-class members have doc-strings. - :raises TypeError: Raised on a mismatch between doc-string parameters and data-class members. - """ - - if not is_dataclass_type(typ): - raise TypeError("not a data-class type") - - properties = dict(get_class_properties(typ)) - class_name = typ.__name__ - - for name in docstring.params: - if name not in properties: - raise TypeError( - f"doc-string parameter `{name}` is not a member of the data-class `{class_name}`" - ) - - if not strict: - return - - for name in properties: - if name not in docstring.params: - raise TypeError( - f"member `{name}` in data-class `{class_name}` is missing its doc-string" - ) - - -def check_function_docstring( - fn: Callable[..., Any], docstring: Docstring, strict: bool = False -) -> None: - """ - Verifies the doc-string of a function or member function. - - :param strict: Whether to check if all function parameters and the return type have doc-strings. - :raises TypeError: Raised on a mismatch between doc-string parameters and function signature. 
- """ - - signature = get_signature(fn) - func_name = fn.__qualname__ - - for name in docstring.params: - if name not in signature.parameters: - raise TypeError( - f"doc-string parameter `{name}` is absent from signature of function `{func_name}`" - ) - - if ( - docstring.returns is not None - and signature.return_annotation is inspect.Signature.empty - ): - raise TypeError( - f"doc-string has returns description in function `{func_name}` with no return type annotation" - ) - - if not strict: - return - - for name, param in signature.parameters.items(): - # ignore `self` in member function signatures - if name == "self" and ( - param.kind is inspect.Parameter.POSITIONAL_ONLY - or param.kind is inspect.Parameter.POSITIONAL_OR_KEYWORD - ): - continue - - if name not in docstring.params: - raise TypeError( - f"function parameter `{name}` in `{func_name}` is missing its doc-string" - ) - - if ( - signature.return_annotation is not inspect.Signature.empty - and docstring.returns is None - ): - raise TypeError( - f"function `{func_name}` has no returns description in its doc-string" - ) diff --git a/docs/openapi_generator/strong_typing/exception.py b/docs/openapi_generator/strong_typing/exception.py deleted file mode 100644 index af037cc3c..000000000 --- a/docs/openapi_generator/strong_typing/exception.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - - -class JsonKeyError(Exception): - "Raised when deserialization for a class or union type has failed because a matching member was not found." - - -class JsonValueError(Exception): - "Raised when (de)serialization of data has failed due to invalid value." - - -class JsonTypeError(Exception): - "Raised when deserialization of data has failed due to a type mismatch." diff --git a/docs/openapi_generator/strong_typing/inspection.py b/docs/openapi_generator/strong_typing/inspection.py deleted file mode 100644 index 41804f12c..000000000 --- a/docs/openapi_generator/strong_typing/inspection.py +++ /dev/null @@ -1,1053 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -import dataclasses -import datetime -import enum -import importlib -import importlib.machinery -import importlib.util -import inspect -import re -import sys -import types -import typing -import uuid -from typing import ( - Any, - Callable, - Dict, - Iterable, - List, - Literal, - NamedTuple, - Optional, - Protocol, - runtime_checkable, - Set, - Tuple, - Type, - TypeVar, - Union, -) - -if sys.version_info >= (3, 9): - from typing import Annotated -else: - from typing_extensions import Annotated - -if sys.version_info >= (3, 10): - from typing import TypeGuard -else: - from typing_extensions import TypeGuard - -S = TypeVar("S") -T = TypeVar("T") -K = TypeVar("K") -V = TypeVar("V") - - -def _is_type_like(data_type: object) -> bool: - """ - Checks if the object is a type or type-like object (e.g. generic type). - - :param data_type: The object to validate. - :returns: True if the object is a type or type-like object. 
- """ - - if isinstance(data_type, type): - # a standard type - return True - elif typing.get_origin(data_type) is not None: - # a generic type such as `list`, `dict` or `set` - return True - elif hasattr(data_type, "__forward_arg__"): - # an instance of `ForwardRef` - return True - elif data_type is Any: - # the special form `Any` - return True - else: - return False - - -if sys.version_info >= (3, 9): - TypeLike = Union[type, types.GenericAlias, typing.ForwardRef, Any] - - def is_type_like( - data_type: object, - ) -> TypeGuard[TypeLike]: - """ - Checks if the object is a type or type-like object (e.g. generic type). - - :param data_type: The object to validate. - :returns: True if the object is a type or type-like object. - """ - - return _is_type_like(data_type) - -else: - TypeLike = object - - def is_type_like( - data_type: object, - ) -> bool: - return _is_type_like(data_type) - - -def evaluate_member_type(typ: Any, cls: type) -> Any: - """ - Evaluates a forward reference type in a dataclass member. - - :param typ: The dataclass member type to convert. - :param cls: The dataclass in which the member is defined. - :returns: The evaluated type. - """ - - return evaluate_type(typ, sys.modules[cls.__module__]) - - -def evaluate_type(typ: Any, module: types.ModuleType) -> Any: - """ - Evaluates a forward reference type. - - :param typ: The type to convert, typically a dataclass member type. - :param module: The context for the type, i.e. the module in which the member is defined. - :returns: The evaluated type. - """ - - if isinstance(typ, str): - # evaluate data-class field whose type annotation is a string - return eval(typ, module.__dict__, locals()) - if isinstance(typ, typing.ForwardRef): - if sys.version_info >= (3, 9): - return typ._evaluate(module.__dict__, locals(), recursive_guard=frozenset()) - else: - return typ._evaluate(module.__dict__, locals()) - else: - return typ - - -@runtime_checkable -class DataclassInstance(Protocol): - __dataclass_fields__: typing.ClassVar[Dict[str, dataclasses.Field]] - - -def is_dataclass_type(typ: Any) -> TypeGuard[Type[DataclassInstance]]: - "True if the argument corresponds to a data class type (but not an instance)." - - typ = unwrap_annotated_type(typ) - return isinstance(typ, type) and dataclasses.is_dataclass(typ) - - -def is_dataclass_instance(obj: Any) -> TypeGuard[DataclassInstance]: - "True if the argument corresponds to a data class instance (but not a type)." - - return not isinstance(obj, type) and dataclasses.is_dataclass(obj) - - -@dataclasses.dataclass -class DataclassField: - name: str - type: Any - default: Any - - def __init__( - self, name: str, type: Any, default: Any = dataclasses.MISSING - ) -> None: - self.name = name - self.type = type - self.default = default - - -def dataclass_fields(cls: Type[DataclassInstance]) -> Iterable[DataclassField]: - "Generates the fields of a data-class resolving forward references." - - for field in dataclasses.fields(cls): - yield DataclassField( - field.name, evaluate_member_type(field.type, cls), field.default - ) - - -def dataclass_field_by_name(cls: Type[DataclassInstance], name: str) -> DataclassField: - "Looks up a field in a data-class by its field name." 
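# A minimal usage sketch (hypothetical dataclass `Pixel`):
#
#   @dataclasses.dataclass
#   class Pixel:
#       x: int
#       y: int
#
#   dataclass_field_by_name(Pixel, "x").type   # -> <class 'int'>
#   dataclass_field_by_name(Pixel, "z")        # raises LookupError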
- - for field in dataclasses.fields(cls): - if field.name == name: - return DataclassField(field.name, evaluate_member_type(field.type, cls)) - - raise LookupError(f"field `{name}` missing from class `{cls.__name__}`") - - -def is_named_tuple_instance(obj: Any) -> TypeGuard[NamedTuple]: - "True if the argument corresponds to a named tuple instance." - - return is_named_tuple_type(type(obj)) - - -def is_named_tuple_type(typ: Any) -> TypeGuard[Type[NamedTuple]]: - """ - True if the argument corresponds to a named tuple type. - - Calling the function `collections.namedtuple` gives a new type that is a subclass of `tuple` (and no other classes) - with a member named `_fields` that is a tuple whose items are all strings. - """ - - if not isinstance(typ, type): - return False - - typ = unwrap_annotated_type(typ) - - b = getattr(typ, "__bases__", None) - if b is None: - return False - - if len(b) != 1 or b[0] != tuple: - return False - - f = getattr(typ, "_fields", None) - if not isinstance(f, tuple): - return False - - return all(isinstance(n, str) for n in f) - - -if sys.version_info >= (3, 11): - - def is_type_enum(typ: object) -> TypeGuard[Type[enum.Enum]]: - "True if the specified type is an enumeration type." - - typ = unwrap_annotated_type(typ) - return isinstance(typ, enum.EnumType) - -else: - - def is_type_enum(typ: object) -> TypeGuard[Type[enum.Enum]]: - "True if the specified type is an enumeration type." - - typ = unwrap_annotated_type(typ) - - # use an explicit isinstance(..., type) check to filter out special forms like generics - return isinstance(typ, type) and issubclass(typ, enum.Enum) - - -def enum_value_types(enum_type: Type[enum.Enum]) -> List[type]: - """ - Returns all unique value types of the `enum.Enum` type in definition order. - """ - - # filter unique enumeration value types by keeping definition order - return list(dict.fromkeys(type(e.value) for e in enum_type)) - - -def extend_enum( - source: Type[enum.Enum], -) -> Callable[[Type[enum.Enum]], Type[enum.Enum]]: - """ - Creates a new enumeration type extending the set of values in an existing type. - - :param source: The existing enumeration type to be extended with new values. - :returns: A new enumeration type with the extended set of values. - """ - - def wrap(extend: Type[enum.Enum]) -> Type[enum.Enum]: - # create new enumeration type combining the values from both types - values: Dict[str, Any] = {} - values.update((e.name, e.value) for e in source) - values.update((e.name, e.value) for e in extend) - enum_class: Type[enum.Enum] = enum.Enum(extend.__name__, values) # type: ignore - - # assign the newly created type to the same module where the extending class is defined - setattr(enum_class, "__module__", extend.__module__) - setattr(enum_class, "__doc__", extend.__doc__) - setattr(sys.modules[extend.__module__], extend.__name__, enum_class) - - return enum.unique(enum_class) - - return wrap - - -if sys.version_info >= (3, 10): - - def _is_union_like(typ: object) -> bool: - "True if type is a union such as `Union[T1, T2, ...]` or a union type `T1 | T2`." - - return typing.get_origin(typ) is Union or isinstance(typ, types.UnionType) - -else: - - def _is_union_like(typ: object) -> bool: - "True if type is a union such as `Union[T1, T2, ...]` or a union type `T1 | T2`." - - return typing.get_origin(typ) is Union - - -def is_type_optional( - typ: object, strict: bool = False -) -> TypeGuard[Type[Optional[Any]]]: - """ - True if the type annotation corresponds to an optional type (e.g. 
`Optional[T]` or `Union[T1,T2,None]`). - - `Optional[T]` is represented as `Union[T, None]` is classic style, and is equivalent to `T | None` in new style. - - :param strict: True if only `Optional[T]` qualifies as an optional type but `Union[T1, T2, None]` does not. - """ - - typ = unwrap_annotated_type(typ) - - if _is_union_like(typ): - args = typing.get_args(typ) - if strict and len(args) != 2: - return False - - return type(None) in args - - return False - - -def unwrap_optional_type(typ: Type[Optional[T]]) -> Type[T]: - """ - Extracts the inner type of an optional type. - - :param typ: The optional type `Optional[T]`. - :returns: The inner type `T`. - """ - - return rewrap_annotated_type(_unwrap_optional_type, typ) - - -def _unwrap_optional_type(typ: Type[Optional[T]]) -> Type[T]: - "Extracts the type qualified as optional (e.g. returns `T` for `Optional[T]`)." - - # Optional[T] is represented internally as Union[T, None] - if not _is_union_like(typ): - raise TypeError("optional type must have un-subscripted type of Union") - - # will automatically unwrap Union[T] into T - return Union[ - tuple(filter(lambda item: item is not type(None), typing.get_args(typ))) # type: ignore - ] - - -def is_type_union(typ: object) -> bool: - "True if the type annotation corresponds to a union type (e.g. `Union[T1,T2,T3]`)." - - typ = unwrap_annotated_type(typ) - if _is_union_like(typ): - args = typing.get_args(typ) - return len(args) > 2 or type(None) not in args - - return False - - -def unwrap_union_types(typ: object) -> Tuple[object, ...]: - """ - Extracts the inner types of a union type. - - :param typ: The union type `Union[T1, T2, ...]`. - :returns: The inner types `T1`, `T2`, etc. - """ - - typ = unwrap_annotated_type(typ) - return _unwrap_union_types(typ) - - -def _unwrap_union_types(typ: object) -> Tuple[object, ...]: - "Extracts the types in a union (e.g. returns a tuple of types `T1` and `T2` for `Union[T1, T2]`)." - - if not _is_union_like(typ): - raise TypeError("union type must have un-subscripted type of Union") - - return typing.get_args(typ) - - -def is_type_literal(typ: object) -> bool: - "True if the specified type is a literal of one or more constant values, e.g. `Literal['string']` or `Literal[42]`." - - typ = unwrap_annotated_type(typ) - return typing.get_origin(typ) is Literal - - -def unwrap_literal_value(typ: object) -> Any: - """ - Extracts the single constant value captured by a literal type. - - :param typ: The literal type `Literal[value]`. - :returns: The values captured by the literal type. - """ - - args = unwrap_literal_values(typ) - if len(args) != 1: - raise TypeError("too many values in literal type") - - return args[0] - - -def unwrap_literal_values(typ: object) -> Tuple[Any, ...]: - """ - Extracts the constant values captured by a literal type. - - :param typ: The literal type `Literal[value, ...]`. - :returns: A tuple of values captured by the literal type. - """ - - typ = unwrap_annotated_type(typ) - return typing.get_args(typ) - - -def unwrap_literal_types(typ: object) -> Tuple[type, ...]: - """ - Extracts the types of the constant values captured by a literal type. - - :param typ: The literal type `Literal[value, ...]`. - :returns: A tuple of item types `T` such that `type(value) == T`. - """ - - return tuple(type(t) for t in unwrap_literal_values(typ)) - - -def is_generic_list(typ: object) -> TypeGuard[Type[list]]: - "True if the specified type is a generic list, i.e. `List[T]`." 
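# A minimal sketch of this predicate together with unwrap_generic_list() below:
#
#   is_generic_list(List[int])         # -> True
#   is_generic_list(list)              # -> False (typing.get_origin(list) is None)
#   unwrap_generic_list(List[int])     # -> <class 'int'>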
- - typ = unwrap_annotated_type(typ) - return typing.get_origin(typ) is list - - -def unwrap_generic_list(typ: Type[List[T]]) -> Type[T]: - """ - Extracts the item type of a list type. - - :param typ: The list type `List[T]`. - :returns: The item type `T`. - """ - - return rewrap_annotated_type(_unwrap_generic_list, typ) - - -def _unwrap_generic_list(typ: Type[List[T]]) -> Type[T]: - "Extracts the item type of a list type (e.g. returns `T` for `List[T]`)." - - (list_type,) = typing.get_args(typ) # unpack single tuple element - return list_type - - -def is_generic_set(typ: object) -> TypeGuard[Type[set]]: - "True if the specified type is a generic set, i.e. `Set[T]`." - - typ = unwrap_annotated_type(typ) - return typing.get_origin(typ) is set - - -def unwrap_generic_set(typ: Type[Set[T]]) -> Type[T]: - """ - Extracts the item type of a set type. - - :param typ: The set type `Set[T]`. - :returns: The item type `T`. - """ - - return rewrap_annotated_type(_unwrap_generic_set, typ) - - -def _unwrap_generic_set(typ: Type[Set[T]]) -> Type[T]: - "Extracts the item type of a set type (e.g. returns `T` for `Set[T]`)." - - (set_type,) = typing.get_args(typ) # unpack single tuple element - return set_type - - -def is_generic_dict(typ: object) -> TypeGuard[Type[dict]]: - "True if the specified type is a generic dictionary, i.e. `Dict[KeyType, ValueType]`." - - typ = unwrap_annotated_type(typ) - return typing.get_origin(typ) is dict - - -def unwrap_generic_dict(typ: Type[Dict[K, V]]) -> Tuple[Type[K], Type[V]]: - """ - Extracts the key and value types of a dictionary type as a tuple. - - :param typ: The dictionary type `Dict[K, V]`. - :returns: The key and value types `K` and `V`. - """ - - return _unwrap_generic_dict(unwrap_annotated_type(typ)) - - -def _unwrap_generic_dict(typ: Type[Dict[K, V]]) -> Tuple[Type[K], Type[V]]: - "Extracts the key and value types of a dict type (e.g. returns (`K`, `V`) for `Dict[K, V]`)." - - key_type, value_type = typing.get_args(typ) - return key_type, value_type - - -def is_type_annotated(typ: TypeLike) -> bool: - "True if the type annotation corresponds to an annotated type (i.e. `Annotated[T, ...]`)." - - return getattr(typ, "__metadata__", None) is not None - - -def get_annotation(data_type: TypeLike, annotation_type: Type[T]) -> Optional[T]: - """ - Returns the first annotation on a data type that matches the expected annotation type. - - :param data_type: The annotated type from which to extract the annotation. - :param annotation_type: The annotation class to look for. - :returns: The annotation class instance found (if any). - """ - - metadata = getattr(data_type, "__metadata__", None) - if metadata is not None: - for annotation in metadata: - if isinstance(annotation, annotation_type): - return annotation - - return None - - -def unwrap_annotated_type(typ: T) -> T: - "Extracts the wrapped type from an annotated type (e.g. returns `T` for `Annotated[T, ...]`)." - - if is_type_annotated(typ): - # type is Annotated[T, ...] - return typing.get_args(typ)[0] - else: - # type is a regular type - return typ - - -def rewrap_annotated_type( - transform: Callable[[Type[S]], Type[T]], typ: Type[S] -) -> Type[T]: - """ - Un-boxes, transforms and re-boxes an optionally annotated type. - - :param transform: A function that maps an un-annotated type to another type. - :param typ: A type to un-box (if necessary), transform, and re-box (if necessary). - """ - - metadata = getattr(typ, "__metadata__", None) - if metadata is not None: - # type is Annotated[T, ...] 
- inner_type = typing.get_args(typ)[0] - else: - # type is a regular type - inner_type = typ - - transformed_type = transform(inner_type) - - if metadata is not None: - return Annotated[(transformed_type, *metadata)] # type: ignore - else: - return transformed_type - - -def get_module_classes(module: types.ModuleType) -> List[type]: - "Returns all classes declared directly in a module." - - def is_class_member(member: object) -> TypeGuard[type]: - return inspect.isclass(member) and member.__module__ == module.__name__ - - return [class_type for _, class_type in inspect.getmembers(module, is_class_member)] - - -if sys.version_info >= (3, 9): - - def get_resolved_hints(typ: type) -> Dict[str, type]: - return typing.get_type_hints(typ, include_extras=True) - -else: - - def get_resolved_hints(typ: type) -> Dict[str, type]: - return typing.get_type_hints(typ) - - -def get_class_properties(typ: type) -> Iterable[Tuple[str, type]]: - "Returns all properties of a class." - - if is_dataclass_type(typ): - return ((field.name, field.type) for field in dataclasses.fields(typ)) - else: - resolved_hints = get_resolved_hints(typ) - return resolved_hints.items() - - -def get_class_property(typ: type, name: str) -> Optional[type]: - "Looks up the annotated type of a property in a class by its property name." - - for property_name, property_type in get_class_properties(typ): - if name == property_name: - return property_type - return None - - -@dataclasses.dataclass -class _ROOT: - pass - - -def get_referenced_types( - typ: TypeLike, module: Optional[types.ModuleType] = None -) -> Set[type]: - """ - Extracts types directly or indirectly referenced by this type. - - For example, extract `T` from `List[T]`, `Optional[T]` or `Annotated[T, ...]`, `K` and `V` from `Dict[K,V]`, - `A` and `B` from `Union[A,B]`. - - :param typ: A type or special form. - :param module: The context in which types are evaluated. - :returns: Types referenced by the given type or special form. - """ - - collector = TypeCollector() - collector.run(typ, _ROOT, module) - return collector.references - - -class TypeCollector: - """ - Collects types directly or indirectly referenced by a type. - - :param graph: The type dependency graph, linking types to types they depend on. - """ - - graph: Dict[type, Set[type]] - - @property - def references(self) -> Set[type]: - "Types collected by the type collector." - - dependencies = set() - for edges in self.graph.values(): - dependencies.update(edges) - return dependencies - - def __init__(self) -> None: - self.graph = {_ROOT: set()} - - def traverse(self, typ: type) -> None: - "Finds all dependent types of a type." - - self.run(typ, _ROOT, sys.modules[typ.__module__]) - - def traverse_all(self, types: Iterable[type]) -> None: - "Finds all dependent types of a list of types." - - for typ in types: - self.traverse(typ) - - def run( - self, - typ: TypeLike, - cls: Type[DataclassInstance], - module: Optional[types.ModuleType], - ) -> None: - """ - Extracts types indirectly referenced by this type. - - For example, extract `T` from `List[T]`, `Optional[T]` or `Annotated[T, ...]`, `K` and `V` from `Dict[K,V]`, - `A` and `B` from `Union[A,B]`. - - :param typ: A type or special form. - :param cls: A dataclass type being expanded for dependent types. - :param module: The context in which types are evaluated. - :returns: Types referenced by the given type or special form. 
- """ - - if typ is type(None) or typ is Any: - return - - if isinstance(typ, type): - self.graph[cls].add(typ) - - if typ in self.graph: - return - - self.graph[typ] = set() - - metadata = getattr(typ, "__metadata__", None) - if metadata is not None: - # type is Annotated[T, ...] - arg = typing.get_args(typ)[0] - return self.run(arg, cls, module) - - # type is a forward reference - if isinstance(typ, str) or isinstance(typ, typing.ForwardRef): - if module is None: - raise ValueError("missing context for evaluating types") - - evaluated_type = evaluate_type(typ, module) - return self.run(evaluated_type, cls, module) - - # type is a special form - origin = typing.get_origin(typ) - if origin in [list, dict, frozenset, set, tuple, Union]: - for arg in typing.get_args(typ): - self.run(arg, cls, module) - return - elif origin is Literal: - return - - # type is optional or a union type - if is_type_optional(typ): - return self.run(unwrap_optional_type(typ), cls, module) - if is_type_union(typ): - for union_type in unwrap_union_types(typ): - self.run(union_type, cls, module) - return - - # type is a regular type - elif is_dataclass_type(typ) or is_type_enum(typ) or isinstance(typ, type): - context = sys.modules[typ.__module__] - if is_dataclass_type(typ): - for field in dataclass_fields(typ): - self.run(field.type, typ, context) - else: - for field_name, field_type in get_resolved_hints(typ).items(): - self.run(field_type, typ, context) - return - - raise TypeError(f"expected: type-like; got: {typ}") - - -if sys.version_info >= (3, 10): - - def get_signature(fn: Callable[..., Any]) -> inspect.Signature: - "Extracts the signature of a function." - - return inspect.signature(fn, eval_str=True) - -else: - - def get_signature(fn: Callable[..., Any]) -> inspect.Signature: - "Extracts the signature of a function." - - return inspect.signature(fn) - - -def is_reserved_property(name: str) -> bool: - "True if the name stands for an internal property." - - # filter built-in and special properties - if re.match(r"^__.+__$", name): - return True - - # filter built-in special names - if name in ["_abc_impl"]: - return True - - return False - - -def create_module(name: str) -> types.ModuleType: - """ - Creates a new module dynamically at run-time. - - :param name: Fully qualified name of the new module (with dot notation). - """ - - if name in sys.modules: - raise KeyError(f"{name!r} already in sys.modules") - - spec = importlib.machinery.ModuleSpec(name, None) - module = importlib.util.module_from_spec(spec) - sys.modules[name] = module - if spec.loader is not None: - spec.loader.exec_module(module) - return module - - -if sys.version_info >= (3, 10): - - def create_data_type(class_name: str, fields: List[Tuple[str, type]]) -> type: - """ - Creates a new data-class type dynamically. - - :param class_name: The name of new data-class type. - :param fields: A list of fields (and their type) that the new data-class type is expected to have. - :returns: The newly created data-class type. - """ - - # has the `slots` parameter - return dataclasses.make_dataclass(class_name, fields, slots=True) - -else: - - def create_data_type(class_name: str, fields: List[Tuple[str, type]]) -> type: - """ - Creates a new data-class type dynamically. - - :param class_name: The name of new data-class type. - :param fields: A list of fields (and their type) that the new data-class type is expected to have. - :returns: The newly created data-class type. 
- """ - - cls = dataclasses.make_dataclass(class_name, fields) - - cls_dict = dict(cls.__dict__) - field_names = tuple(field.name for field in dataclasses.fields(cls)) - - cls_dict["__slots__"] = field_names - - for field_name in field_names: - cls_dict.pop(field_name, None) - cls_dict.pop("__dict__", None) - - qualname = getattr(cls, "__qualname__", None) - cls = type(cls)(cls.__name__, (), cls_dict) - if qualname is not None: - cls.__qualname__ = qualname - - return cls - - -def create_object(typ: Type[T]) -> T: - "Creates an instance of a type." - - if issubclass(typ, Exception): - # exception types need special treatment - e = typ.__new__(typ) - return typing.cast(T, e) - else: - return object.__new__(typ) - - -if sys.version_info >= (3, 9): - TypeOrGeneric = Union[type, types.GenericAlias] - -else: - TypeOrGeneric = object - - -def is_generic_instance(obj: Any, typ: TypeLike) -> bool: - """ - Returns whether an object is an instance of a generic class, a standard class or of a subclass thereof. - - This function checks the following items recursively: - * items of a list - * keys and values of a dictionary - * members of a set - * items of a tuple - * members of a union type - - :param obj: The (possibly generic container) object to check recursively. - :param typ: The expected type of the object. - """ - - if isinstance(typ, typing.ForwardRef): - fwd: typing.ForwardRef = typ - identifier = fwd.__forward_arg__ - typ = eval(identifier) - if isinstance(typ, type): - return isinstance(obj, typ) - else: - return False - - # generic types (e.g. list, dict, set, etc.) - origin_type = typing.get_origin(typ) - if origin_type is list: - if not isinstance(obj, list): - return False - (list_item_type,) = typing.get_args(typ) # unpack single tuple element - list_obj: list = obj - return all(is_generic_instance(item, list_item_type) for item in list_obj) - elif origin_type is dict: - if not isinstance(obj, dict): - return False - key_type, value_type = typing.get_args(typ) - dict_obj: dict = obj - return all( - is_generic_instance(key, key_type) - and is_generic_instance(value, value_type) - for key, value in dict_obj.items() - ) - elif origin_type is set: - if not isinstance(obj, set): - return False - (set_member_type,) = typing.get_args(typ) # unpack single tuple element - set_obj: set = obj - return all(is_generic_instance(item, set_member_type) for item in set_obj) - elif origin_type is tuple: - if not isinstance(obj, tuple): - return False - return all( - is_generic_instance(item, tuple_item_type) - for tuple_item_type, item in zip( - (tuple_item_type for tuple_item_type in typing.get_args(typ)), - (item for item in obj), - ) - ) - elif origin_type is Union: - return any( - is_generic_instance(obj, member_type) - for member_type in typing.get_args(typ) - ) - elif isinstance(typ, type): - return isinstance(obj, typ) - else: - raise TypeError(f"expected `type` but got: {typ}") - - -class RecursiveChecker: - _pred: Optional[Callable[[type, Any], bool]] - - def __init__(self, pred: Callable[[type, Any], bool]) -> None: - """ - Creates a checker to verify if a predicate applies to all nested member properties of an object recursively. - - :param pred: The predicate to test on member properties. Takes a property type and a property value. - """ - - self._pred = pred - - def pred(self, typ: type, obj: Any) -> bool: - "Acts as a workaround for the type checker mypy." 
- - assert self._pred is not None - return self._pred(typ, obj) - - def check(self, typ: TypeLike, obj: Any) -> bool: - """ - Checks if a predicate applies to all nested member properties of an object recursively. - - :param typ: The type to recurse into. - :param obj: The object to inspect recursively. Must be an instance of the given type. - :returns: True if all member properties pass the filter predicate. - """ - - # check for well-known types - if ( - typ is type(None) - or typ is bool - or typ is int - or typ is float - or typ is str - or typ is bytes - or typ is datetime.datetime - or typ is datetime.date - or typ is datetime.time - or typ is uuid.UUID - ): - return self.pred(typing.cast(type, typ), obj) - - # generic types (e.g. list, dict, set, etc.) - origin_type = typing.get_origin(typ) - if origin_type is list: - if not isinstance(obj, list): - raise TypeError(f"expected `list` but got: {obj}") - (list_item_type,) = typing.get_args(typ) # unpack single tuple element - list_obj: list = obj - return all(self.check(list_item_type, item) for item in list_obj) - elif origin_type is dict: - if not isinstance(obj, dict): - raise TypeError(f"expected `dict` but got: {obj}") - key_type, value_type = typing.get_args(typ) - dict_obj: dict = obj - return all(self.check(value_type, item) for item in dict_obj.values()) - elif origin_type is set: - if not isinstance(obj, set): - raise TypeError(f"expected `set` but got: {obj}") - (set_member_type,) = typing.get_args(typ) # unpack single tuple element - set_obj: set = obj - return all(self.check(set_member_type, item) for item in set_obj) - elif origin_type is tuple: - if not isinstance(obj, tuple): - raise TypeError(f"expected `tuple` but got: {obj}") - return all( - self.check(tuple_item_type, item) - for tuple_item_type, item in zip( - (tuple_item_type for tuple_item_type in typing.get_args(typ)), - (item for item in obj), - ) - ) - elif origin_type is Union: - return self.pred(typ, obj) # type: ignore[arg-type] - - if not inspect.isclass(typ): - raise TypeError(f"expected `type` but got: {typ}") - - # enumeration type - if issubclass(typ, enum.Enum): - if not isinstance(obj, enum.Enum): - raise TypeError(f"expected `{typ}` but got: {obj}") - return self.pred(typ, obj) - - # class types with properties - if is_named_tuple_type(typ): - if not isinstance(obj, tuple): - raise TypeError(f"expected `NamedTuple` but got: {obj}") - return all( - self.check(field_type, getattr(obj, field_name)) - for field_name, field_type in typing.get_type_hints(typ).items() - ) - elif is_dataclass_type(typ): - if not isinstance(obj, typ): - raise TypeError(f"expected `{typ}` but got: {obj}") - resolved_hints = get_resolved_hints(typ) - return all( - self.check(resolved_hints[field.name], getattr(obj, field.name)) - for field in dataclasses.fields(typ) - ) - else: - if not isinstance(obj, typ): - raise TypeError(f"expected `{typ}` but got: {obj}") - return all( - self.check(property_type, getattr(obj, property_name)) - for property_name, property_type in get_class_properties(typ) - ) - - -def check_recursive( - obj: object, - /, - *, - pred: Optional[Callable[[type, Any], bool]] = None, - type_pred: Optional[Callable[[type], bool]] = None, - value_pred: Optional[Callable[[Any], bool]] = None, -) -> bool: - """ - Checks if a predicate applies to all nested member properties of an object recursively. - - :param obj: The object to inspect recursively. - :param pred: The predicate to test on member properties. Takes a property type and a property value. 
- :param type_pred: Constrains the check to properties of an expected type. Properties of other types pass automatically. - :param value_pred: Verifies a condition on member property values (of an expected type). - :returns: True if all member properties pass the filter predicate(s). - """ - - if type_pred is not None and value_pred is not None: - if pred is not None: - raise TypeError( - "filter predicate not permitted when type and value predicates are present" - ) - - type_p: Callable[[Type[T]], bool] = type_pred - value_p: Callable[[T], bool] = value_pred - pred = lambda typ, obj: not type_p(typ) or value_p(obj) # noqa: E731 - - elif value_pred is not None: - if pred is not None: - raise TypeError( - "filter predicate not permitted when value predicate is present" - ) - - value_only_p: Callable[[T], bool] = value_pred - pred = lambda typ, obj: value_only_p(obj) # noqa: E731 - - elif type_pred is not None: - raise TypeError("value predicate required when type predicate is present") - - elif pred is None: - pred = lambda typ, obj: True # noqa: E731 - - return RecursiveChecker(pred).check(type(obj), obj) diff --git a/docs/openapi_generator/strong_typing/mapping.py b/docs/openapi_generator/strong_typing/mapping.py deleted file mode 100644 index 2bc68bb63..000000000 --- a/docs/openapi_generator/strong_typing/mapping.py +++ /dev/null @@ -1,42 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -import keyword -from typing import Optional - -from .auxiliary import Alias -from .inspection import get_annotation - - -def python_field_to_json_property( - python_id: str, python_type: Optional[object] = None -) -> str: - """ - Map a Python field identifier to a JSON property name. - - Authors may use an underscore appended at the end of a Python identifier as per PEP 8 if it clashes with a Python - keyword: e.g. `in` would become `in_` and `from` would become `from_`. Remove these suffixes when exporting to JSON. - - Authors may supply an explicit alias with the type annotation `Alias`, e.g. `Annotated[MyType, Alias("alias")]`. - """ - - if python_type is not None: - alias = get_annotation(python_type, Alias) - if alias: - return alias.name - - if python_id.endswith("_"): - id = python_id[:-1] - if keyword.iskeyword(id): - return id - - return python_id diff --git a/docs/openapi_generator/strong_typing/name.py b/docs/openapi_generator/strong_typing/name.py deleted file mode 100644 index c883794c0..000000000 --- a/docs/openapi_generator/strong_typing/name.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -import typing -from typing import Any, Literal, Optional, Tuple, Union - -from .auxiliary import _auxiliary_types -from .inspection import ( - is_generic_dict, - is_generic_list, - is_type_optional, - is_type_union, - TypeLike, - unwrap_generic_dict, - unwrap_generic_list, - unwrap_optional_type, - unwrap_union_types, -) - - -class TypeFormatter: - """ - Type formatter. 
- - :param use_union_operator: Whether to emit union types as `X | Y` as per PEP 604. - """ - - use_union_operator: bool - - def __init__(self, use_union_operator: bool = False) -> None: - self.use_union_operator = use_union_operator - - def union_to_str(self, data_type_args: Tuple[TypeLike, ...]) -> str: - if self.use_union_operator: - return " | ".join(self.python_type_to_str(t) for t in data_type_args) - else: - if len(data_type_args) == 2 and type(None) in data_type_args: - # Optional[T] is represented as Union[T, None] - origin_name = "Optional" - data_type_args = tuple(t for t in data_type_args if t is not type(None)) - else: - origin_name = "Union" - - args = ", ".join(self.python_type_to_str(t) for t in data_type_args) - return f"{origin_name}[{args}]" - - def plain_type_to_str(self, data_type: TypeLike) -> str: - "Returns the string representation of a Python type without metadata." - - # return forward references as the annotation string - if isinstance(data_type, typing.ForwardRef): - fwd: typing.ForwardRef = data_type - return fwd.__forward_arg__ - elif isinstance(data_type, str): - return data_type - - origin = typing.get_origin(data_type) - if origin is not None: - data_type_args = typing.get_args(data_type) - - if origin is dict: # Dict[T] - origin_name = "Dict" - elif origin is list: # List[T] - origin_name = "List" - elif origin is set: # Set[T] - origin_name = "Set" - elif origin is Union: - return self.union_to_str(data_type_args) - elif origin is Literal: - args = ", ".join(repr(arg) for arg in data_type_args) - return f"Literal[{args}]" - else: - origin_name = origin.__name__ - - args = ", ".join(self.python_type_to_str(t) for t in data_type_args) - return f"{origin_name}[{args}]" - - return data_type.__name__ - - def python_type_to_str(self, data_type: TypeLike) -> str: - "Returns the string representation of a Python type." - - if data_type is type(None): - return "None" - - # use compact name for alias types - name = _auxiliary_types.get(data_type) - if name is not None: - return name - - metadata = getattr(data_type, "__metadata__", None) - if metadata is not None: - # type is Annotated[T, ...] - metatuple: Tuple[Any, ...] = metadata - arg = typing.get_args(data_type)[0] - - # check for auxiliary types with user-defined annotations - metaset = set(metatuple) - for auxiliary_type, auxiliary_name in _auxiliary_types.items(): - auxiliary_arg = typing.get_args(auxiliary_type)[0] - if arg is not auxiliary_arg: - continue - - auxiliary_metatuple: Optional[Tuple[Any, ...]] = getattr( - auxiliary_type, "__metadata__", None - ) - if auxiliary_metatuple is None: - continue - - if metaset.issuperset(auxiliary_metatuple): - # type is an auxiliary type with extra annotations - auxiliary_args = ", ".join( - repr(m) for m in metatuple if m not in auxiliary_metatuple - ) - return f"Annotated[{auxiliary_name}, {auxiliary_args}]" - - # type is an annotated type - args = ", ".join(repr(m) for m in metatuple) - return f"Annotated[{self.plain_type_to_str(arg)}, {args}]" - else: - # type is a regular type - return self.plain_type_to_str(data_type) - - -def python_type_to_str(data_type: TypeLike, use_union_operator: bool = False) -> str: - """ - Returns the string representation of a Python type. - - :param use_union_operator: Whether to emit union types as `X | Y` as per PEP 604. 
- """ - - fmt = TypeFormatter(use_union_operator) - return fmt.python_type_to_str(data_type) - - -def python_type_to_name(data_type: TypeLike, force: bool = False) -> str: - """ - Returns the short name of a Python type. - - :param force: Whether to produce a name for composite types such as generics. - """ - - # use compact name for alias types - name = _auxiliary_types.get(data_type) - if name is not None: - return name - - # unwrap annotated types - metadata = getattr(data_type, "__metadata__", None) - if metadata is not None: - # type is Annotated[T, ...] - arg = typing.get_args(data_type)[0] - return python_type_to_name(arg) - - if force: - # generic types - if is_type_optional(data_type, strict=True): - inner_name = python_type_to_name(unwrap_optional_type(data_type)) - return f"Optional__{inner_name}" - elif is_generic_list(data_type): - item_name = python_type_to_name(unwrap_generic_list(data_type)) - return f"List__{item_name}" - elif is_generic_dict(data_type): - key_type, value_type = unwrap_generic_dict(data_type) - key_name = python_type_to_name(key_type) - value_name = python_type_to_name(value_type) - return f"Dict__{key_name}__{value_name}" - elif is_type_union(data_type): - member_types = unwrap_union_types(data_type) - member_names = "__".join( - python_type_to_name(member_type) for member_type in member_types - ) - return f"Union__{member_names}" - - # named system or user-defined type - if hasattr(data_type, "__name__") and not typing.get_args(data_type): - return data_type.__name__ - - raise TypeError(f"cannot assign a simple name to type: {data_type}") diff --git a/docs/openapi_generator/strong_typing/py.typed b/docs/openapi_generator/strong_typing/py.typed deleted file mode 100644 index e69de29bb..000000000 diff --git a/docs/openapi_generator/strong_typing/schema.py b/docs/openapi_generator/strong_typing/schema.py deleted file mode 100644 index 7f44435b8..000000000 --- a/docs/openapi_generator/strong_typing/schema.py +++ /dev/null @@ -1,792 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -import dataclasses -import datetime -import decimal -import enum -import functools -import inspect -import json -import typing -import uuid -from copy import deepcopy -from typing import ( - Any, - Callable, - ClassVar, - Dict, - List, - Literal, - Optional, - overload, - Tuple, - Type, - TypeVar, - Union, -) - -import jsonschema -from typing_extensions import Annotated - -from . 
import docstring -from .auxiliary import ( - Alias, - get_auxiliary_format, - IntegerRange, - MaxLength, - MinLength, - Precision, -) -from .core import JsonArray, JsonObject, JsonType, Schema, StrictJsonType -from .inspection import ( - enum_value_types, - get_annotation, - get_class_properties, - is_type_enum, - is_type_like, - is_type_optional, - TypeLike, - unwrap_optional_type, -) -from .name import python_type_to_name -from .serialization import object_to_json - -# determines the maximum number of distinct enum members up to which a Dict[EnumType, Any] is converted into a JSON -# schema with explicitly listed properties (rather than employing a pattern constraint on property names) -OBJECT_ENUM_EXPANSION_LIMIT = 4 - - -T = TypeVar("T") - - -def get_class_docstrings(data_type: type) -> Tuple[Optional[str], Optional[str]]: - docstr = docstring.parse_type(data_type) - - # check if class has a doc-string other than the auto-generated string assigned by @dataclass - if docstring.has_default_docstring(data_type): - return None, None - - return docstr.short_description, docstr.long_description - - -def get_class_property_docstrings( - data_type: type, transform_fun: Optional[Callable[[type, str, str], str]] = None -) -> Dict[str, str]: - """ - Extracts the documentation strings associated with the properties of a composite type. - - :param data_type: The object whose properties to iterate over. - :param transform_fun: An optional function that maps a property documentation string to a custom tailored string. - :returns: A dictionary mapping property names to descriptions. - """ - - result = {} - for base in inspect.getmro(data_type): - docstr = docstring.parse_type(base) - for param in docstr.params.values(): - if param.name in result: - continue - - if transform_fun: - description = transform_fun(data_type, param.name, param.description) - else: - description = param.description - - result[param.name] = description - return result - - -def docstring_to_schema(data_type: type) -> Schema: - short_description, long_description = get_class_docstrings(data_type) - schema: Schema = {} - - description = "\n".join(filter(None, [short_description, long_description])) - if description: - schema["description"] = description - return schema - - -def id_from_ref(data_type: Union[typing.ForwardRef, str, type]) -> str: - "Extracts the name of a possibly forward-referenced type." - - if isinstance(data_type, typing.ForwardRef): - forward_type: typing.ForwardRef = data_type - return forward_type.__forward_arg__ - elif isinstance(data_type, str): - return data_type - else: - return data_type.__name__ - - -def type_from_ref(data_type: Union[typing.ForwardRef, str, type]) -> Tuple[str, type]: - "Creates a type from a forward reference." - - if isinstance(data_type, typing.ForwardRef): - forward_type: typing.ForwardRef = data_type - true_type = eval(forward_type.__forward_code__) - return forward_type.__forward_arg__, true_type - elif isinstance(data_type, str): - true_type = eval(data_type) - return data_type, true_type - else: - return data_type.__name__, data_type - - -@dataclasses.dataclass -class TypeCatalogEntry: - schema: Optional[Schema] - identifier: str - examples: Optional[JsonType] = None - - -class TypeCatalog: - "Maintains an association of well-known Python types to their JSON schema." 
- - _by_type: Dict[TypeLike, TypeCatalogEntry] - _by_name: Dict[str, TypeCatalogEntry] - - def __init__(self) -> None: - self._by_type = {} - self._by_name = {} - - def __contains__(self, data_type: TypeLike) -> bool: - if isinstance(data_type, typing.ForwardRef): - fwd: typing.ForwardRef = data_type - name = fwd.__forward_arg__ - return name in self._by_name - else: - return data_type in self._by_type - - def add( - self, - data_type: TypeLike, - schema: Optional[Schema], - identifier: str, - examples: Optional[List[JsonType]] = None, - ) -> None: - if isinstance(data_type, typing.ForwardRef): - raise TypeError("forward references cannot be used to register a type") - - if data_type in self._by_type: - raise ValueError(f"type {data_type} is already registered in the catalog") - - entry = TypeCatalogEntry(schema, identifier, examples) - self._by_type[data_type] = entry - self._by_name[identifier] = entry - - def get(self, data_type: TypeLike) -> TypeCatalogEntry: - if isinstance(data_type, typing.ForwardRef): - fwd: typing.ForwardRef = data_type - name = fwd.__forward_arg__ - return self._by_name[name] - else: - return self._by_type[data_type] - - -@dataclasses.dataclass -class SchemaOptions: - definitions_path: str = "#/definitions/" - use_descriptions: bool = True - use_examples: bool = True - property_description_fun: Optional[Callable[[type, str, str], str]] = None - - -class JsonSchemaGenerator: - "Creates a JSON schema with user-defined type definitions." - - type_catalog: ClassVar[TypeCatalog] = TypeCatalog() - types_used: Dict[str, TypeLike] - options: SchemaOptions - - def __init__(self, options: Optional[SchemaOptions] = None): - if options is None: - self.options = SchemaOptions() - else: - self.options = options - self.types_used = {} - - @functools.singledispatchmethod - def _metadata_to_schema(self, arg: object) -> Schema: - # unrecognized annotation - return {} - - @_metadata_to_schema.register - def _(self, arg: IntegerRange) -> Schema: - return {"minimum": arg.minimum, "maximum": arg.maximum} - - @_metadata_to_schema.register - def _(self, arg: Precision) -> Schema: - return { - "multipleOf": 10 ** (-arg.decimal_digits), - "exclusiveMinimum": -(10**arg.integer_digits), - "exclusiveMaximum": (10**arg.integer_digits), - } - - @_metadata_to_schema.register - def _(self, arg: MinLength) -> Schema: - return {"minLength": arg.value} - - @_metadata_to_schema.register - def _(self, arg: MaxLength) -> Schema: - return {"maxLength": arg.value} - - def _with_metadata( - self, type_schema: Schema, metadata: Optional[Tuple[Any, ...]] - ) -> Schema: - if metadata: - for m in metadata: - type_schema.update(self._metadata_to_schema(m)) - return type_schema - - def _simple_type_to_schema( - self, typ: TypeLike, json_schema_extra: Optional[dict] = None - ) -> Optional[Schema]: - """ - Returns the JSON schema associated with a simple, unrestricted type. - - :returns: The schema for a simple type, or `None`. 
- """ - - if typ is type(None): - return {"type": "null"} - elif typ is bool: - return {"type": "boolean"} - elif typ is int: - return {"type": "integer"} - elif typ is float: - return {"type": "number"} - elif typ is str: - if json_schema_extra and "contentEncoding" in json_schema_extra: - return { - "type": "string", - "contentEncoding": json_schema_extra["contentEncoding"], - } - return {"type": "string"} - elif typ is bytes: - return {"type": "string", "contentEncoding": "base64"} - elif typ is datetime.datetime: - # 2018-11-13T20:20:39+00:00 - return { - "type": "string", - "format": "date-time", - } - elif typ is datetime.date: - # 2018-11-13 - return {"type": "string", "format": "date"} - elif typ is datetime.time: - # 20:20:39+00:00 - return {"type": "string", "format": "time"} - elif typ is decimal.Decimal: - return {"type": "number"} - elif typ is uuid.UUID: - # f81d4fae-7dec-11d0-a765-00a0c91e6bf6 - return {"type": "string", "format": "uuid"} - elif typ is Any: - return { - "oneOf": [ - {"type": "null"}, - {"type": "boolean"}, - {"type": "number"}, - {"type": "string"}, - {"type": "array"}, - {"type": "object"}, - ] - } - elif typ is JsonObject: - return {"type": "object"} - elif typ is JsonArray: - return {"type": "array"} - else: - # not a simple type - return None - - def type_to_schema( - self, - data_type: TypeLike, - force_expand: bool = False, - json_schema_extra: Optional[dict] = None, - ) -> Schema: - """ - Returns the JSON schema associated with a type. - - :param data_type: The Python type whose JSON schema to return. - :param force_expand: Forces a JSON schema to be returned even if the type is registered in the catalog of known types. - :returns: The JSON schema associated with the type. - """ - - # short-circuit for common simple types - schema = self._simple_type_to_schema(data_type, json_schema_extra) - if schema is not None: - return schema - - # types registered in the type catalog of well-known types - type_catalog = JsonSchemaGenerator.type_catalog - if not force_expand and data_type in type_catalog: - # user-defined type - identifier = type_catalog.get(data_type).identifier - self.types_used.setdefault(identifier, data_type) - return {"$ref": f"{self.options.definitions_path}{identifier}"} - - # unwrap annotated types - metadata = getattr(data_type, "__metadata__", None) - if metadata is not None: - # type is Annotated[T, ...] 
- typ = typing.get_args(data_type)[0] - schema = self._simple_type_to_schema(typ) - if schema is not None: - # recognize well-known auxiliary types - fmt = get_auxiliary_format(data_type) - if fmt is not None: - schema.update({"format": fmt}) - return schema - else: - return self._with_metadata(schema, metadata) - - else: - # type is a regular type - typ = data_type - - if isinstance(typ, typing.ForwardRef) or isinstance(typ, str): - if force_expand: - identifier, true_type = type_from_ref(typ) - return self.type_to_schema(true_type, force_expand=True) - else: - try: - identifier, true_type = type_from_ref(typ) - self.types_used[identifier] = true_type - except NameError: - identifier = id_from_ref(typ) - - return {"$ref": f"{self.options.definitions_path}{identifier}"} - - if is_type_enum(typ): - enum_type: Type[enum.Enum] = typ - value_types = enum_value_types(enum_type) - if len(value_types) != 1: - raise ValueError( - f"enumerations must have a consistent member value type but several types found: {value_types}" - ) - enum_value_type = value_types.pop() - - enum_schema: Schema - if ( - enum_value_type is bool - or enum_value_type is int - or enum_value_type is float - or enum_value_type is str - ): - if enum_value_type is bool: - enum_schema_type = "boolean" - elif enum_value_type is int: - enum_schema_type = "integer" - elif enum_value_type is float: - enum_schema_type = "number" - elif enum_value_type is str: - enum_schema_type = "string" - - enum_schema = { - "type": enum_schema_type, - "enum": [object_to_json(e.value) for e in enum_type], - } - if self.options.use_descriptions: - enum_schema.update(docstring_to_schema(typ)) - return enum_schema - else: - enum_schema = self.type_to_schema(enum_value_type) - if self.options.use_descriptions: - enum_schema.update(docstring_to_schema(typ)) - return enum_schema - - origin_type = typing.get_origin(typ) - if origin_type is list: - (list_type,) = typing.get_args(typ) # unpack single tuple element - return {"type": "array", "items": self.type_to_schema(list_type)} - elif origin_type is dict: - key_type, value_type = typing.get_args(typ) - if not (key_type is str or key_type is int or is_type_enum(key_type)): - raise ValueError( - "`dict` with key type not coercible to `str` is not supported" - ) - - dict_schema: Schema - value_schema = self.type_to_schema(value_type) - if is_type_enum(key_type): - enum_values = [str(e.value) for e in key_type] - if len(enum_values) > OBJECT_ENUM_EXPANSION_LIMIT: - dict_schema = { - "propertyNames": { - "pattern": "^(" + "|".join(enum_values) + ")$" - }, - "additionalProperties": value_schema, - } - else: - dict_schema = { - "properties": {value: value_schema for value in enum_values}, - "additionalProperties": False, - } - else: - dict_schema = {"additionalProperties": value_schema} - - schema = {"type": "object"} - schema.update(dict_schema) - return schema - elif origin_type is set: - (set_type,) = typing.get_args(typ) # unpack single tuple element - return { - "type": "array", - "items": self.type_to_schema(set_type), - "uniqueItems": True, - } - elif origin_type is tuple: - args = typing.get_args(typ) - return { - "type": "array", - "minItems": len(args), - "maxItems": len(args), - "prefixItems": [ - self.type_to_schema(member_type) for member_type in args - ], - } - elif origin_type is Union: - discriminator = None - if typing.get_origin(data_type) is Annotated: - discriminator = typing.get_args(data_type)[1].discriminator - ret = { - "oneOf": [ - self.type_to_schema(union_type) - for union_type in 
typing.get_args(typ) - ] - } - if discriminator: - # for each union type, we need to read the value of the discriminator - mapping = {} - for union_type in typing.get_args(typ): - props = self.type_to_schema(union_type, force_expand=True)[ - "properties" - ] - mapping[props[discriminator]["default"]] = self.type_to_schema( - union_type - )["$ref"] - - ret["discriminator"] = { - "propertyName": discriminator, - "mapping": mapping, - } - return ret - elif origin_type is Literal: - (literal_value,) = typing.get_args(typ) # unpack value of literal type - schema = self.type_to_schema(type(literal_value)) - schema["const"] = literal_value - return schema - elif origin_type is type: - (concrete_type,) = typing.get_args(typ) # unpack single tuple element - return {"const": self.type_to_schema(concrete_type, force_expand=True)} - - # dictionary of class attributes - members = dict(inspect.getmembers(typ, lambda a: not inspect.isroutine(a))) - - property_docstrings = get_class_property_docstrings( - typ, self.options.property_description_fun - ) - properties: Dict[str, Schema] = {} - required: List[str] = [] - for property_name, property_type in get_class_properties(typ): - # rename property if an alias name is specified - alias = get_annotation(property_type, Alias) - if alias: - output_name = alias.name - else: - output_name = property_name - - defaults = {} - json_schema_extra = None - if "model_fields" in members: - f = members["model_fields"] - defaults = {k: finfo.default for k, finfo in f.items()} - json_schema_extra = f.get(output_name, None).json_schema_extra - - if is_type_optional(property_type): - optional_type: type = unwrap_optional_type(property_type) - property_def = self.type_to_schema( - optional_type, json_schema_extra=json_schema_extra - ) - else: - property_def = self.type_to_schema( - property_type, json_schema_extra=json_schema_extra - ) - required.append(output_name) - - # check if attribute has a default value initializer - if defaults.get(property_name) is not None: - def_value = defaults[property_name] - # check if value can be directly represented in JSON - if isinstance( - def_value, - ( - bool, - int, - float, - str, - enum.Enum, - datetime.datetime, - datetime.date, - datetime.time, - ), - ): - property_def["default"] = object_to_json(def_value) - - # add property docstring if available - property_doc = property_docstrings.get(property_name) - if property_doc: - # print(output_name, property_doc) - property_def.pop("title", None) - property_def["description"] = property_doc - - properties[output_name] = property_def - - schema = {"type": "object"} - if len(properties) > 0: - schema["properties"] = typing.cast(JsonType, properties) - schema["additionalProperties"] = False - if len(required) > 0: - schema["required"] = typing.cast(JsonType, required) - if self.options.use_descriptions: - schema.update(docstring_to_schema(typ)) - return schema - - def _type_to_schema_with_lookup(self, data_type: TypeLike) -> Schema: - """ - Returns the JSON schema associated with a type that may be registered in the catalog of known types. - - :param data_type: The type whose JSON schema we seek. - :returns: The JSON schema associated with the type. 
- """ - - entry = JsonSchemaGenerator.type_catalog.get(data_type) - if entry.schema is None: - type_schema = self.type_to_schema(data_type, force_expand=True) - else: - type_schema = deepcopy(entry.schema) - - # add descriptive text (if present) - if self.options.use_descriptions: - if isinstance(data_type, type) and not isinstance( - data_type, typing.ForwardRef - ): - type_schema.update(docstring_to_schema(data_type)) - - # add example (if present) - if self.options.use_examples and entry.examples: - type_schema["examples"] = entry.examples - - return type_schema - - def classdef_to_schema( - self, data_type: TypeLike, force_expand: bool = False - ) -> Tuple[Schema, Dict[str, Schema]]: - """ - Returns the JSON schema associated with a type and any nested types. - - :param data_type: The type whose JSON schema to return. - :param force_expand: True if a full JSON schema is to be returned even for well-known types; false if a schema - reference is to be used for well-known types. - :returns: A tuple of the JSON schema, and a mapping between nested type names and their corresponding schema. - """ - - if not is_type_like(data_type): - raise TypeError(f"expected a type-like object but got: {data_type}") - - self.types_used = {} - try: - type_schema = self.type_to_schema(data_type, force_expand=force_expand) - - types_defined: Dict[str, Schema] = {} - while len(self.types_used) > len(types_defined): - # make a snapshot copy; original collection is going to be modified - types_undefined = { - sub_name: sub_type - for sub_name, sub_type in self.types_used.items() - if sub_name not in types_defined - } - - # expand undefined types, which may lead to additional types to be defined - for sub_name, sub_type in types_undefined.items(): - types_defined[sub_name] = self._type_to_schema_with_lookup(sub_type) - - type_definitions = dict(sorted(types_defined.items())) - finally: - self.types_used = {} - - return type_schema, type_definitions - - -class Validator(enum.Enum): - "Defines constants for JSON schema standards." - - Draft7 = jsonschema.Draft7Validator - Draft201909 = jsonschema.Draft201909Validator - Draft202012 = jsonschema.Draft202012Validator - Latest = jsonschema.Draft202012Validator - - -def classdef_to_schema( - data_type: TypeLike, - options: Optional[SchemaOptions] = None, - validator: Validator = Validator.Latest, -) -> Schema: - """ - Returns the JSON schema corresponding to the given type. - - :param data_type: The Python type used to generate the JSON schema - :returns: A JSON object that you can serialize to a JSON string with json.dump or json.dumps - :raises TypeError: Indicates that the generated JSON schema does not validate against the desired meta-schema. 
- """ - - # short-circuit with an error message when passing invalid data - if not is_type_like(data_type): - raise TypeError(f"expected a type-like object but got: {data_type}") - - generator = JsonSchemaGenerator(options) - type_schema, type_definitions = generator.classdef_to_schema(data_type) - - class_schema: Schema = {} - if type_definitions: - class_schema["definitions"] = typing.cast(JsonType, type_definitions) - class_schema.update(type_schema) - - validator_id = validator.value.META_SCHEMA["$id"] - try: - validator.value.check_schema(class_schema) - except jsonschema.exceptions.SchemaError: - raise TypeError( - f"schema does not validate against meta-schema <{validator_id}>" - ) - - schema = {"$schema": validator_id} - schema.update(class_schema) - return schema - - -def validate_object(data_type: TypeLike, json_dict: JsonType) -> None: - """ - Validates if the JSON dictionary object conforms to the expected type. - - :param data_type: The type to match against. - :param json_dict: A JSON object obtained with `json.load` or `json.loads`. - :raises jsonschema.exceptions.ValidationError: Indicates that the JSON object cannot represent the type. - """ - - schema_dict = classdef_to_schema(data_type) - jsonschema.validate( - json_dict, schema_dict, format_checker=jsonschema.FormatChecker() - ) - - -def print_schema(data_type: type) -> None: - """Pretty-prints the JSON schema corresponding to the type.""" - - s = classdef_to_schema(data_type) - print(json.dumps(s, indent=4)) - - -def get_schema_identifier(data_type: type) -> Optional[str]: - if data_type in JsonSchemaGenerator.type_catalog: - return JsonSchemaGenerator.type_catalog.get(data_type).identifier - else: - return None - - -def register_schema( - data_type: T, - schema: Optional[Schema] = None, - name: Optional[str] = None, - examples: Optional[List[JsonType]] = None, -) -> T: - """ - Associates a type with a JSON schema definition. - - :param data_type: The type to associate with a JSON schema. - :param schema: The schema to associate the type with. Derived automatically if omitted. - :param name: The name used for looking uo the type. Determined automatically if omitted. - :returns: The input type. - """ - - JsonSchemaGenerator.type_catalog.add( - data_type, - schema, - name if name is not None else python_type_to_name(data_type), - examples, - ) - return data_type - - -@overload -def json_schema_type(cls: Type[T], /) -> Type[T]: ... - - -@overload -def json_schema_type( - cls: None, *, schema: Optional[Schema] = None -) -> Callable[[Type[T]], Type[T]]: ... 
- - -def json_schema_type( - cls: Optional[Type[T]] = None, - *, - schema: Optional[Schema] = None, - examples: Optional[List[JsonType]] = None, -) -> Union[Type[T], Callable[[Type[T]], Type[T]]]: - """Decorator to add user-defined schema definition to a class.""" - - def wrap(cls: Type[T]) -> Type[T]: - return register_schema(cls, schema, examples=examples) - - # see if decorator is used as @json_schema_type or @json_schema_type() - if cls is None: - # called with parentheses - return wrap - else: - # called as @json_schema_type without parentheses - return wrap(cls) - - -register_schema(JsonObject, name="JsonObject") -register_schema(JsonArray, name="JsonArray") - -register_schema( - JsonType, - name="JsonType", - examples=[ - { - "property1": None, - "property2": True, - "property3": 64, - "property4": "string", - "property5": ["item"], - "property6": {"key": "value"}, - } - ], -) -register_schema( - StrictJsonType, - name="StrictJsonType", - examples=[ - { - "property1": True, - "property2": 64, - "property3": "string", - "property4": ["item"], - "property5": {"key": "value"}, - } - ], -) diff --git a/docs/openapi_generator/strong_typing/serialization.py b/docs/openapi_generator/strong_typing/serialization.py deleted file mode 100644 index 88d8fccad..000000000 --- a/docs/openapi_generator/strong_typing/serialization.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -import inspect -import json -import sys -from types import ModuleType -from typing import Any, Optional, TextIO, TypeVar - -from .core import JsonType -from .deserializer import create_deserializer -from .inspection import TypeLike -from .serializer import create_serializer - -T = TypeVar("T") - - -def object_to_json(obj: Any) -> JsonType: - """ - Converts a Python object to a representation that can be exported to JSON. - - * Fundamental types (e.g. numeric types) are written as is. - * Date and time types are serialized in the ISO 8601 format with time zone. - * A byte array is written as a string with Base64 encoding. - * UUIDs are written as a UUID string. - * Enumerations are written as their value. - * Containers (e.g. `list`, `dict`, `set`, `tuple`) are exported recursively. - * Objects with properties (including data class types) are converted to a dictionaries of key-value pairs. - """ - - typ: type = type(obj) - generator = create_serializer(typ) - return generator.generate(obj) - - -def json_to_object( - typ: TypeLike, data: JsonType, *, context: Optional[ModuleType] = None -) -> object: - """ - Creates an object from a representation that has been de-serialized from JSON. - - When de-serializing a JSON object into a Python object, the following transformations are applied: - - * Fundamental types are parsed as `bool`, `int`, `float` or `str`. - * Date and time types are parsed from the ISO 8601 format with time zone into the corresponding Python type - `datetime`, `date` or `time` - * A byte array is read from a string with Base64 encoding into a `bytes` instance. - * UUIDs are extracted from a UUID string into a `uuid.UUID` instance. - * Enumerations are instantiated with a lookup on enumeration value. - * Containers (e.g. `list`, `dict`, `set`, `tuple`) are parsed recursively. 
- * Complex objects with properties (including data class types) are populated from dictionaries of key-value pairs - using reflection (enumerating type annotations). - - :raises TypeError: A de-serializing engine cannot be constructed for the input type. - :raises JsonKeyError: Deserialization for a class or union type has failed because a matching member was not found. - :raises JsonTypeError: Deserialization for data has failed due to a type mismatch. - """ - - # use caller context for evaluating types if no context is supplied - if context is None: - this_frame = inspect.currentframe() - if this_frame is not None: - caller_frame = this_frame.f_back - del this_frame - - if caller_frame is not None: - try: - context = sys.modules[caller_frame.f_globals["__name__"]] - finally: - del caller_frame - - parser = create_deserializer(typ, context) - return parser.parse(data) - - -def json_dump_string(json_object: JsonType) -> str: - "Dump an object as a JSON string with a compact representation." - - return json.dumps( - json_object, ensure_ascii=False, check_circular=False, separators=(",", ":") - ) - - -def json_dump(json_object: JsonType, file: TextIO) -> None: - json.dump( - json_object, - file, - ensure_ascii=False, - check_circular=False, - separators=(",", ":"), - ) - file.write("\n") diff --git a/docs/openapi_generator/strong_typing/serializer.py b/docs/openapi_generator/strong_typing/serializer.py deleted file mode 100644 index f1252e374..000000000 --- a/docs/openapi_generator/strong_typing/serializer.py +++ /dev/null @@ -1,522 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -import abc -import base64 -import datetime -import enum -import functools -import inspect -import ipaddress -import sys -import typing -import uuid -from types import FunctionType, MethodType, ModuleType -from typing import ( - Any, - Callable, - Dict, - Generic, - List, - Literal, - NamedTuple, - Optional, - Set, - Tuple, - Type, - TypeVar, - Union, -) - -from .core import JsonType -from .exception import JsonTypeError, JsonValueError -from .inspection import ( - enum_value_types, - evaluate_type, - get_class_properties, - get_resolved_hints, - is_dataclass_type, - is_named_tuple_type, - is_reserved_property, - is_type_annotated, - is_type_enum, - TypeLike, - unwrap_annotated_type, -) -from .mapping import python_field_to_json_property - -T = TypeVar("T") - - -class Serializer(abc.ABC, Generic[T]): - @abc.abstractmethod - def generate(self, data: T) -> JsonType: ... 
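For orientation while reviewing this removal: the sketch below shows how the `object_to_json` / `json_to_object` entry points deleted above are typically used, with the `Serializer` hierarchy that follows doing the per-type work. It is illustrative only and assumes the package is importable as `strong_typing` (per https://github.com/hunyadi/strong_typing); the `Sample` dataclass is made up for the example.

```python
# Illustrative sketch, not part of this patch.
import dataclasses

from strong_typing.serialization import json_to_object, object_to_json


@dataclasses.dataclass
class Sample:
    name: str
    count: int


sample = Sample(name="release", count=3)

# Dataclasses are converted into plain JSON-compatible dictionaries.
payload = object_to_json(sample)
print(payload)  # {'name': 'release', 'count': 3}

# The reverse direction rebuilds the dataclass from the JSON representation.
restored = json_to_object(Sample, payload)
assert restored == sample
```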
- - -class NoneSerializer(Serializer[None]): - def generate(self, data: None) -> None: - # can be directly represented in JSON - return None - - -class BoolSerializer(Serializer[bool]): - def generate(self, data: bool) -> bool: - # can be directly represented in JSON - return data - - -class IntSerializer(Serializer[int]): - def generate(self, data: int) -> int: - # can be directly represented in JSON - return data - - -class FloatSerializer(Serializer[float]): - def generate(self, data: float) -> float: - # can be directly represented in JSON - return data - - -class StringSerializer(Serializer[str]): - def generate(self, data: str) -> str: - # can be directly represented in JSON - return data - - -class BytesSerializer(Serializer[bytes]): - def generate(self, data: bytes) -> str: - return base64.b64encode(data).decode("ascii") - - -class DateTimeSerializer(Serializer[datetime.datetime]): - def generate(self, obj: datetime.datetime) -> str: - if obj.tzinfo is None: - raise JsonValueError( - f"timestamp lacks explicit time zone designator: {obj}" - ) - fmt = obj.isoformat() - if fmt.endswith("+00:00"): - fmt = f"{fmt[:-6]}Z" # Python's isoformat() does not support military time zones like "Zulu" for UTC - return fmt - - -class DateSerializer(Serializer[datetime.date]): - def generate(self, obj: datetime.date) -> str: - return obj.isoformat() - - -class TimeSerializer(Serializer[datetime.time]): - def generate(self, obj: datetime.time) -> str: - return obj.isoformat() - - -class UUIDSerializer(Serializer[uuid.UUID]): - def generate(self, obj: uuid.UUID) -> str: - return str(obj) - - -class IPv4Serializer(Serializer[ipaddress.IPv4Address]): - def generate(self, obj: ipaddress.IPv4Address) -> str: - return str(obj) - - -class IPv6Serializer(Serializer[ipaddress.IPv6Address]): - def generate(self, obj: ipaddress.IPv6Address) -> str: - return str(obj) - - -class EnumSerializer(Serializer[enum.Enum]): - def generate(self, obj: enum.Enum) -> Union[int, str]: - return obj.value - - -class UntypedListSerializer(Serializer[list]): - def generate(self, obj: list) -> List[JsonType]: - return [object_to_json(item) for item in obj] - - -class UntypedDictSerializer(Serializer[dict]): - def generate(self, obj: dict) -> Dict[str, JsonType]: - if obj and isinstance(next(iter(obj.keys())), enum.Enum): - iterator = ( - (key.value, object_to_json(value)) for key, value in obj.items() - ) - else: - iterator = ((str(key), object_to_json(value)) for key, value in obj.items()) - return dict(iterator) - - -class UntypedSetSerializer(Serializer[set]): - def generate(self, obj: set) -> List[JsonType]: - return [object_to_json(item) for item in obj] - - -class UntypedTupleSerializer(Serializer[tuple]): - def generate(self, obj: tuple) -> List[JsonType]: - return [object_to_json(item) for item in obj] - - -class TypedCollectionSerializer(Serializer, Generic[T]): - generator: Serializer[T] - - def __init__(self, item_type: Type[T], context: Optional[ModuleType]) -> None: - self.generator = _get_serializer(item_type, context) - - -class TypedListSerializer(TypedCollectionSerializer[T]): - def generate(self, obj: List[T]) -> List[JsonType]: - return [self.generator.generate(item) for item in obj] - - -class TypedStringDictSerializer(TypedCollectionSerializer[T]): - def __init__(self, value_type: Type[T], context: Optional[ModuleType]) -> None: - super().__init__(value_type, context) - - def generate(self, obj: Dict[str, T]) -> Dict[str, JsonType]: - return {key: self.generator.generate(value) for key, value in 
obj.items()} - - -class TypedEnumDictSerializer(TypedCollectionSerializer[T]): - def __init__( - self, - key_type: Type[enum.Enum], - value_type: Type[T], - context: Optional[ModuleType], - ) -> None: - super().__init__(value_type, context) - - value_types = enum_value_types(key_type) - if len(value_types) != 1: - raise JsonTypeError( - f"invalid key type, enumerations must have a consistent member value type but several types found: {value_types}" - ) - - value_type = value_types.pop() - if value_type is not str: - raise JsonTypeError( - "invalid enumeration key type, expected `enum.Enum` with string values" - ) - - def generate(self, obj: Dict[enum.Enum, T]) -> Dict[str, JsonType]: - return {key.value: self.generator.generate(value) for key, value in obj.items()} - - -class TypedSetSerializer(TypedCollectionSerializer[T]): - def generate(self, obj: Set[T]) -> JsonType: - return [self.generator.generate(item) for item in obj] - - -class TypedTupleSerializer(Serializer[tuple]): - item_generators: Tuple[Serializer, ...] - - def __init__( - self, item_types: Tuple[type, ...], context: Optional[ModuleType] - ) -> None: - self.item_generators = tuple( - _get_serializer(item_type, context) for item_type in item_types - ) - - def generate(self, obj: tuple) -> List[JsonType]: - return [ - item_generator.generate(item) - for item_generator, item in zip(self.item_generators, obj) - ] - - -class CustomSerializer(Serializer): - converter: Callable[[object], JsonType] - - def __init__(self, converter: Callable[[object], JsonType]) -> None: - self.converter = converter - - def generate(self, obj: object) -> JsonType: - return self.converter(obj) - - -class FieldSerializer(Generic[T]): - """ - Serializes a Python object field into a JSON property. - - :param field_name: The name of the field in a Python class to read data from. - :param property_name: The name of the JSON property to write to a JSON `object`. - :param generator: A compatible serializer that can handle the field's type. 
- """ - - field_name: str - property_name: str - generator: Serializer - - def __init__( - self, field_name: str, property_name: str, generator: Serializer[T] - ) -> None: - self.field_name = field_name - self.property_name = property_name - self.generator = generator - - def generate_field(self, obj: object, object_dict: Dict[str, JsonType]) -> None: - value = getattr(obj, self.field_name) - if value is not None: - object_dict[self.property_name] = self.generator.generate(value) - - -class TypedClassSerializer(Serializer[T]): - property_generators: List[FieldSerializer] - - def __init__(self, class_type: Type[T], context: Optional[ModuleType]) -> None: - self.property_generators = [ - FieldSerializer( - field_name, - python_field_to_json_property(field_name, field_type), - _get_serializer(field_type, context), - ) - for field_name, field_type in get_class_properties(class_type) - ] - - def generate(self, obj: T) -> Dict[str, JsonType]: - object_dict: Dict[str, JsonType] = {} - for property_generator in self.property_generators: - property_generator.generate_field(obj, object_dict) - - return object_dict - - -class TypedNamedTupleSerializer(TypedClassSerializer[NamedTuple]): - def __init__( - self, class_type: Type[NamedTuple], context: Optional[ModuleType] - ) -> None: - super().__init__(class_type, context) - - -class DataclassSerializer(TypedClassSerializer[T]): - def __init__(self, class_type: Type[T], context: Optional[ModuleType]) -> None: - super().__init__(class_type, context) - - -class UnionSerializer(Serializer): - def generate(self, obj: Any) -> JsonType: - return object_to_json(obj) - - -class LiteralSerializer(Serializer): - generator: Serializer - - def __init__(self, values: Tuple[Any, ...], context: Optional[ModuleType]) -> None: - literal_type_tuple = tuple(type(value) for value in values) - literal_type_set = set(literal_type_tuple) - if len(literal_type_set) != 1: - value_names = ", ".join(repr(value) for value in values) - raise TypeError( - f"type `Literal[{value_names}]` expects consistent literal value types but got: {literal_type_tuple}" - ) - - literal_type = literal_type_set.pop() - self.generator = _get_serializer(literal_type, context) - - def generate(self, obj: Any) -> JsonType: - return self.generator.generate(obj) - - -class UntypedNamedTupleSerializer(Serializer): - fields: Dict[str, str] - - def __init__(self, class_type: Type[NamedTuple]) -> None: - # named tuples are also instances of tuple - self.fields = {} - field_names: Tuple[str, ...] 
= class_type._fields - for field_name in field_names: - self.fields[field_name] = python_field_to_json_property(field_name) - - def generate(self, obj: NamedTuple) -> JsonType: - object_dict = {} - for field_name, property_name in self.fields.items(): - value = getattr(obj, field_name) - object_dict[property_name] = object_to_json(value) - - return object_dict - - -class UntypedClassSerializer(Serializer): - def generate(self, obj: object) -> JsonType: - # iterate over object attributes to get a standard representation - object_dict = {} - for name in dir(obj): - if is_reserved_property(name): - continue - - value = getattr(obj, name) - if value is None: - continue - - # filter instance methods - if inspect.ismethod(value): - continue - - object_dict[python_field_to_json_property(name)] = object_to_json(value) - - return object_dict - - -def create_serializer( - typ: TypeLike, context: Optional[ModuleType] = None -) -> Serializer: - """ - Creates a serializer engine to produce an object that can be directly converted into a JSON string. - - When serializing a Python object into a JSON object, the following transformations are applied: - - * Fundamental types (`bool`, `int`, `float` or `str`) are returned as-is. - * Date and time types (`datetime`, `date` or `time`) produce an ISO 8601 format string with time zone - (ending with `Z` for UTC). - * Byte arrays (`bytes`) are written as a string with Base64 encoding. - * UUIDs (`uuid.UUID`) are written as a UUID string as per RFC 4122. - * Enumerations yield their enumeration value. - * Containers (e.g. `list`, `dict`, `set`, `tuple`) are processed recursively. - * Complex objects with properties (including data class types) generate dictionaries of key-value pairs. - - :raises TypeError: A serializer engine cannot be constructed for the input type. 
- """ - - if context is None: - if isinstance(typ, type): - context = sys.modules[typ.__module__] - - return _get_serializer(typ, context) - - -def _get_serializer(typ: TypeLike, context: Optional[ModuleType]) -> Serializer: - if isinstance(typ, (str, typing.ForwardRef)): - if context is None: - raise TypeError(f"missing context for evaluating type: {typ}") - - typ = evaluate_type(typ, context) - - if isinstance(typ, type): - return _fetch_serializer(typ) - else: - # special forms are not always hashable - return _create_serializer(typ, context) - - -@functools.lru_cache(maxsize=None) -def _fetch_serializer(typ: type) -> Serializer: - context = sys.modules[typ.__module__] - return _create_serializer(typ, context) - - -def _create_serializer(typ: TypeLike, context: Optional[ModuleType]) -> Serializer: - # check for well-known types - if typ is type(None): - return NoneSerializer() - elif typ is bool: - return BoolSerializer() - elif typ is int: - return IntSerializer() - elif typ is float: - return FloatSerializer() - elif typ is str: - return StringSerializer() - elif typ is bytes: - return BytesSerializer() - elif typ is datetime.datetime: - return DateTimeSerializer() - elif typ is datetime.date: - return DateSerializer() - elif typ is datetime.time: - return TimeSerializer() - elif typ is uuid.UUID: - return UUIDSerializer() - elif typ is ipaddress.IPv4Address: - return IPv4Serializer() - elif typ is ipaddress.IPv6Address: - return IPv6Serializer() - - # dynamically-typed collection types - if typ is list: - return UntypedListSerializer() - elif typ is dict: - return UntypedDictSerializer() - elif typ is set: - return UntypedSetSerializer() - elif typ is tuple: - return UntypedTupleSerializer() - - # generic types (e.g. list, dict, set, etc.) - origin_type = typing.get_origin(typ) - if origin_type is list: - (list_item_type,) = typing.get_args(typ) # unpack single tuple element - return TypedListSerializer(list_item_type, context) - elif origin_type is dict: - key_type, value_type = typing.get_args(typ) - if key_type is str: - return TypedStringDictSerializer(value_type, context) - elif issubclass(key_type, enum.Enum): - return TypedEnumDictSerializer(key_type, value_type, context) - elif origin_type is set: - (set_member_type,) = typing.get_args(typ) # unpack single tuple element - return TypedSetSerializer(set_member_type, context) - elif origin_type is tuple: - return TypedTupleSerializer(typing.get_args(typ), context) - elif origin_type is Union: - return UnionSerializer() - elif origin_type is Literal: - return LiteralSerializer(typing.get_args(typ), context) - - if is_type_annotated(typ): - return create_serializer(unwrap_annotated_type(typ)) - - # check if object has custom serialization method - convert_func = getattr(typ, "to_json", None) - if callable(convert_func): - return CustomSerializer(convert_func) - - if is_type_enum(typ): - return EnumSerializer() - if is_dataclass_type(typ): - return DataclassSerializer(typ, context) - if is_named_tuple_type(typ): - if getattr(typ, "__annotations__", None): - return TypedNamedTupleSerializer(typ, context) - else: - return UntypedNamedTupleSerializer(typ) - - # fail early if caller passes an object with an exotic type - if ( - not isinstance(typ, type) - or typ is FunctionType - or typ is MethodType - or typ is type - or typ is ModuleType - ): - raise TypeError(f"object of type {typ} cannot be represented in JSON") - - if get_resolved_hints(typ): - return TypedClassSerializer(typ, context) - else: - return UntypedClassSerializer() - - 
-def object_to_json(obj: Any) -> JsonType: - """ - Converts a Python object to a representation that can be exported to JSON. - - * Fundamental types (e.g. numeric types) are written as is. - * Date and time types are serialized in the ISO 8601 format with time zone. - * A byte array is written as a string with Base64 encoding. - * UUIDs are written as a UUID string. - * Enumerations are written as their value. - * Containers (e.g. `list`, `dict`, `set`, `tuple`) are exported recursively. - * Objects with properties (including data class types) are converted to a dictionaries of key-value pairs. - """ - - typ: type = type(obj) - generator = create_serializer(typ) - return generator.generate(obj) diff --git a/docs/openapi_generator/strong_typing/slots.py b/docs/openapi_generator/strong_typing/slots.py deleted file mode 100644 index 564ffa11f..000000000 --- a/docs/openapi_generator/strong_typing/slots.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -from typing import Any, Dict, Tuple, Type, TypeVar - -T = TypeVar("T") - - -class SlotsMeta(type): - def __new__( - cls: Type[T], name: str, bases: Tuple[type, ...], ns: Dict[str, Any] - ) -> T: - # caller may have already provided slots, in which case just retain them and keep going - slots: Tuple[str, ...] = ns.get("__slots__", ()) - - # add fields with type annotations to slots - annotations: Dict[str, Any] = ns.get("__annotations__", {}) - members = tuple(member for member in annotations.keys() if member not in slots) - - # assign slots - ns["__slots__"] = slots + tuple(members) - return super().__new__(cls, name, bases, ns) # type: ignore - - -class Slots(metaclass=SlotsMeta): - pass diff --git a/docs/openapi_generator/strong_typing/topological.py b/docs/openapi_generator/strong_typing/topological.py deleted file mode 100644 index 28bf4bd0f..000000000 --- a/docs/openapi_generator/strong_typing/topological.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the terms described in the LICENSE file in -# the root directory of this source tree. - -""" -Type-safe data interchange for Python data classes. - -:see: https://github.com/hunyadi/strong_typing -""" - -from typing import Callable, Dict, Iterable, List, Optional, Set, TypeVar - -from .inspection import TypeCollector - -T = TypeVar("T") - - -def topological_sort(graph: Dict[T, Set[T]]) -> List[T]: - """ - Performs a topological sort of a graph. - - Nodes with no outgoing edges are first. Nodes with no incoming edges are last. - The topological ordering is not unique. - - :param graph: A dictionary of mappings from nodes to adjacent nodes. Keys and set members must be hashable. - :returns: The list of nodes in topological order. 
- """ - - # empty list that will contain the sorted nodes (in reverse order) - ordered: List[T] = [] - - seen: Dict[T, bool] = {} - - def _visit(n: T) -> None: - status = seen.get(n) - if status is not None: - if status: # node has a permanent mark - return - else: # node has a temporary mark - raise RuntimeError(f"cycle detected in graph for node {n}") - - seen[n] = False # apply temporary mark - for m in graph[n]: # visit all adjacent nodes - if m != n: # ignore self-referencing nodes - _visit(m) - - seen[n] = True # apply permanent mark - ordered.append(n) - - for n in graph.keys(): - _visit(n) - - return ordered - - -def type_topological_sort( - types: Iterable[type], - dependency_fn: Optional[Callable[[type], Iterable[type]]] = None, -) -> List[type]: - """ - Performs a topological sort of a list of types. - - Types that don't depend on other types (i.e. fundamental types) are first. Types on which no other types depend - are last. The topological ordering is not unique. - - :param types: A list of types (simple or composite). - :param dependency_fn: Returns a list of additional dependencies for a class (e.g. classes referenced by a foreign key). - :returns: The list of types in topological order. - """ - - if not all(isinstance(typ, type) for typ in types): - raise TypeError("expected a list of types") - - collector = TypeCollector() - collector.traverse_all(types) - graph = collector.graph - - if dependency_fn: - new_types: Set[type] = set() - for source_type, references in graph.items(): - dependent_types = dependency_fn(source_type) - references.update(dependent_types) - new_types.update(dependent_types) - for new_type in new_types: - graph[new_type] = set() - - return topological_sort(graph) From 31a5ba52683a8ca50ec22e4ce2c93242320078c2 Mon Sep 17 00:00:00 2001 From: Ashwin Bharambe Date: Wed, 19 Feb 2025 13:26:39 -0800 Subject: [PATCH 36/37] Add title to the json schemas --- docs/_static/llama-stack-spec.html | 524 +++++++++++++++++++--------- docs/_static/llama-stack-spec.yaml | 194 ++++++++++ llama_stack/strong_typing/schema.py | 4 +- 3 files changed, 556 insertions(+), 166 deletions(-) diff --git a/docs/_static/llama-stack-spec.html b/docs/_static/llama-stack-spec.html index 65a1bdd6b..82abc947b 100644 --- a/docs/_static/llama-stack-spec.html +++ b/docs/_static/llama-stack-spec.html @@ -2661,7 +2661,8 @@ "required": [ "type", "config" - ] + ], + "title": "AgentCandidate" }, "AgentConfig": { "type": "object", @@ -2700,6 +2701,7 @@ "required", "none" ], + "title": "ToolChoice", "description": "Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities of the model." }, "tool_prompt_format": { @@ -2709,6 +2711,7 @@ "function_tag", "python_list" ], + "title": "ToolPromptFormat", "description": "Prompt format for calling custom / zero shot tools." 
}, "tool_config": { @@ -2736,7 +2739,8 @@ "required": [ "model", "instructions" - ] + ], + "title": "AgentConfig" }, "AgentTool": { "oneOf": [ @@ -2779,7 +2783,8 @@ "required": [ "name", "args" - ] + ], + "title": "AgentToolGroupWithArgs" } ] }, @@ -2790,7 +2795,8 @@ "median", "categorical_count", "accuracy" - ] + ], + "title": "AggregationFunctionType" }, "BasicScoringFnParams": { "type": "object", @@ -2810,7 +2816,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "BasicScoringFnParams" }, "BenchmarkConfig": { "type": "object", @@ -2838,7 +2845,8 @@ "type", "eval_candidate", "scoring_params" - ] + ], + "title": "BenchmarkConfig" }, "EvalCandidate": { "oneOf": [ @@ -2898,6 +2906,7 @@ "type", "bnf" ], + "title": "GrammarResponseFormat", "description": "Configuration for grammar-guided response generation." }, "GreedySamplingStrategy": { @@ -2912,7 +2921,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "GreedySamplingStrategy" }, "ImageContentItem": { "type": "object", @@ -2945,6 +2955,7 @@ "type", "image" ], + "title": "ImageContentItem", "description": "A image content item" }, "InterleavedContent": { @@ -3021,6 +3032,7 @@ "type", "json_schema" ], + "title": "JsonSchemaResponseFormat", "description": "Configuration for JSON schema-guided response generation." }, "LLMAsJudgeScoringFnParams": { @@ -3054,7 +3066,8 @@ "required": [ "type", "judge_model" - ] + ], + "title": "LLMAsJudgeScoringFnParams" }, "ModelCandidate": { "type": "object", @@ -3079,7 +3092,8 @@ "type", "model", "sampling_params" - ] + ], + "title": "ModelCandidate" }, "RegexParserScoringFnParams": { "type": "object", @@ -3105,7 +3119,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "RegexParserScoringFnParams" }, "ResponseFormat": { "oneOf": [ @@ -3142,7 +3157,8 @@ "additionalProperties": false, "required": [ "strategy" - ] + ], + "title": "SamplingParams" }, "SamplingStrategy": { "oneOf": [ @@ -3205,6 +3221,7 @@ "role", "content" ], + "title": "SystemMessage", "description": "A system message providing instructions or context to the model." }, "TextContentItem": { @@ -3226,6 +3243,7 @@ "type", "text" ], + "title": "TextContentItem", "description": "A text content item" }, "ToolConfig": { @@ -3240,6 +3258,7 @@ "required", "none" ], + "title": "ToolChoice", "description": "Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities of the model." }, { @@ -3269,6 +3288,7 @@ } }, "additionalProperties": false, + "title": "ToolConfig", "description": "Configuration for tool use." 
}, "ToolDef": { @@ -3315,7 +3335,8 @@ "additionalProperties": false, "required": [ "name" - ] + ], + "title": "ToolDef" }, "ToolParameter": { "type": "object", @@ -3362,7 +3383,8 @@ "parameter_type", "description", "required" - ] + ], + "title": "ToolParameter" }, "TopKSamplingStrategy": { "type": "object", @@ -3380,7 +3402,8 @@ "required": [ "type", "top_k" - ] + ], + "title": "TopKSamplingStrategy" }, "TopPSamplingStrategy": { "type": "object", @@ -3401,7 +3424,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "TopPSamplingStrategy" }, "URL": { "type": "object", @@ -3413,7 +3437,8 @@ "additionalProperties": false, "required": [ "uri" - ] + ], + "title": "URL" }, "DeprecatedEvaluateRowsRequest": { "type": "object", @@ -3461,7 +3486,8 @@ "input_rows", "scoring_functions", "task_config" - ] + ], + "title": "DeprecatedEvaluateRowsRequest" }, "EvaluateResponse": { "type": "object", @@ -3505,7 +3531,8 @@ "required": [ "generations", "scores" - ] + ], + "title": "EvaluateResponse" }, "ScoringResult": { "type": "object", @@ -3568,7 +3595,8 @@ "required": [ "score_rows", "aggregated_results" - ] + ], + "title": "ScoringResult" }, "Benchmark": { "type": "object", @@ -3631,7 +3659,8 @@ "dataset_id", "scoring_functions", "metadata" - ] + ], + "title": "Benchmark" }, "JobStatus": { "type": "string", @@ -3640,7 +3669,8 @@ "in_progress", "failed", "scheduled" - ] + ], + "title": "JobStatus" }, "ListBenchmarksResponse": { "type": "object", @@ -3655,7 +3685,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListBenchmarksResponse" }, "DeprecatedRegisterEvalTaskRequest": { "type": "object", @@ -3709,7 +3740,8 @@ "eval_task_id", "dataset_id", "scoring_functions" - ] + ], + "title": "DeprecatedRegisterEvalTaskRequest" }, "DeprecatedRunEvalRequest": { "type": "object", @@ -3721,7 +3753,8 @@ "additionalProperties": false, "required": [ "task_config" - ] + ], + "title": "DeprecatedRunEvalRequest" }, "Job": { "type": "object", @@ -3733,7 +3766,8 @@ "additionalProperties": false, "required": [ "job_id" - ] + ], + "title": "Job" }, "AppendRowsRequest": { "type": "object", @@ -3774,7 +3808,8 @@ "required": [ "dataset_id", "rows" - ] + ], + "title": "AppendRowsRequest" }, "CompletionMessage": { "type": "object", @@ -3812,6 +3847,7 @@ "content", "stop_reason" ], + "title": "CompletionMessage", "description": "A message containing the model's (assistant) response in a chat conversation." }, "Message": { @@ -3854,7 +3890,8 @@ "wolfram_alpha", "photogen", "code_interpreter" - ] + ], + "title": "BuiltinTool" }, { "type": "string" @@ -3933,7 +3970,8 @@ "call_id", "tool_name", "arguments" - ] + ], + "title": "ToolCall" }, "ToolDefinition": { "type": "object", @@ -3947,7 +3985,8 @@ "wolfram_alpha", "photogen", "code_interpreter" - ] + ], + "title": "BuiltinTool" }, { "type": "string" @@ -3967,7 +4006,8 @@ "additionalProperties": false, "required": [ "tool_name" - ] + ], + "title": "ToolDefinition" }, "ToolParamDefinition": { "type": "object", @@ -4008,7 +4048,8 @@ "additionalProperties": false, "required": [ "param_type" - ] + ], + "title": "ToolParamDefinition" }, "ToolResponseMessage": { "type": "object", @@ -4032,7 +4073,8 @@ "wolfram_alpha", "photogen", "code_interpreter" - ] + ], + "title": "BuiltinTool" }, { "type": "string" @@ -4052,6 +4094,7 @@ "tool_name", "content" ], + "title": "ToolResponseMessage", "description": "A message representing the result of a tool invocation." 
}, "UserMessage": { @@ -4077,6 +4120,7 @@ "role", "content" ], + "title": "UserMessage", "description": "A message from the user in a chat conversation." }, "BatchChatCompletionRequest": { @@ -4110,6 +4154,7 @@ "required", "none" ], + "title": "ToolChoice", "description": "Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities of the model." }, "tool_prompt_format": { @@ -4119,6 +4164,7 @@ "function_tag", "python_list" ], + "title": "ToolPromptFormat", "description": "Prompt format for calling custom / zero shot tools." }, "response_format": { @@ -4133,14 +4179,16 @@ "description": "How many tokens (for each position) to return log probabilities for." } }, - "additionalProperties": false + "additionalProperties": false, + "title": "LogProbConfig" } }, "additionalProperties": false, "required": [ "model", "messages_batch" - ] + ], + "title": "BatchChatCompletionRequest" }, "BatchChatCompletionResponse": { "type": "object", @@ -4155,7 +4203,8 @@ "additionalProperties": false, "required": [ "batch" - ] + ], + "title": "BatchChatCompletionResponse" }, "ChatCompletionResponse": { "type": "object", @@ -4182,6 +4231,7 @@ "required": [ "completion_message" ], + "title": "ChatCompletionResponse", "description": "Response from a chat completion request." }, "MetricEvent": { @@ -4250,7 +4300,8 @@ "metric", "value", "unit" - ] + ], + "title": "MetricEvent" }, "TokenLogProbs": { "type": "object", @@ -4267,6 +4318,7 @@ "required": [ "logprobs_by_token" ], + "title": "TokenLogProbs", "description": "Log probabilities for generated tokens." }, "BatchCompletionRequest": { @@ -4296,14 +4348,16 @@ "description": "How many tokens (for each position) to return log probabilities for." } }, - "additionalProperties": false + "additionalProperties": false, + "title": "LogProbConfig" } }, "additionalProperties": false, "required": [ "model", "content_batch" - ] + ], + "title": "BatchCompletionRequest" }, "BatchCompletionResponse": { "type": "object", @@ -4318,7 +4372,8 @@ "additionalProperties": false, "required": [ "batch" - ] + ], + "title": "BatchCompletionResponse" }, "CompletionResponse": { "type": "object", @@ -4349,6 +4404,7 @@ "content", "stop_reason" ], + "title": "CompletionResponse", "description": "Response from a completion request." }, "CancelTrainingJobRequest": { @@ -4361,7 +4417,8 @@ "additionalProperties": false, "required": [ "job_uuid" - ] + ], + "title": "CancelTrainingJobRequest" }, "ChatCompletionRequest": { "type": "object", @@ -4435,7 +4492,8 @@ "required": [ "model_id", "messages" - ] + ], + "title": "ChatCompletionRequest" }, "ChatCompletionResponseEvent": { "type": "object", @@ -4475,6 +4533,7 @@ "event_type", "delta" ], + "title": "ChatCompletionResponseEvent", "description": "An event during chat completion generation." }, "ChatCompletionResponseStreamChunk": { @@ -4495,6 +4554,7 @@ "required": [ "event" ], + "title": "ChatCompletionResponseStreamChunk", "description": "A chunk of a streamed chat completion response." 
}, "ContentDelta": { @@ -4535,7 +4595,8 @@ "required": [ "type", "image" - ] + ], + "title": "ImageDelta" }, "TextDelta": { "type": "object", @@ -4553,7 +4614,8 @@ "required": [ "type", "text" - ] + ], + "title": "TextDelta" }, "ToolCallDelta": { "type": "object", @@ -4580,7 +4642,8 @@ "in_progress", "failed", "succeeded" - ] + ], + "title": "ToolCallParseStatus" } }, "additionalProperties": false, @@ -4588,7 +4651,8 @@ "type", "tool_call", "parse_status" - ] + ], + "title": "ToolCallDelta" }, "CompletionRequest": { "type": "object", @@ -4630,7 +4694,8 @@ "required": [ "model_id", "content" - ] + ], + "title": "CompletionRequest" }, "CompletionResponseStreamChunk": { "type": "object", @@ -4660,6 +4725,7 @@ "required": [ "delta" ], + "title": "CompletionResponseStreamChunk", "description": "A chunk of a streamed completion response." }, "CreateAgentRequest": { @@ -4672,7 +4738,8 @@ "additionalProperties": false, "required": [ "agent_config" - ] + ], + "title": "CreateAgentRequest" }, "AgentCreateResponse": { "type": "object", @@ -4684,7 +4751,8 @@ "additionalProperties": false, "required": [ "agent_id" - ] + ], + "title": "AgentCreateResponse" }, "CreateAgentSessionRequest": { "type": "object", @@ -4696,7 +4764,8 @@ "additionalProperties": false, "required": [ "session_name" - ] + ], + "title": "CreateAgentSessionRequest" }, "AgentSessionCreateResponse": { "type": "object", @@ -4708,7 +4777,8 @@ "additionalProperties": false, "required": [ "session_id" - ] + ], + "title": "AgentSessionCreateResponse" }, "CreateAgentTurnRequest": { "type": "object", @@ -4761,7 +4831,8 @@ "required": [ "content", "mime_type" - ] + ], + "title": "Document" } }, "toolgroups": { @@ -4777,7 +4848,8 @@ "additionalProperties": false, "required": [ "messages" - ] + ], + "title": "CreateAgentTurnRequest" }, "InferenceStep": { "type": "object", @@ -4811,7 +4883,8 @@ "step_id", "step_type", "model_response" - ] + ], + "title": "InferenceStep" }, "MemoryRetrievalStep": { "type": "object", @@ -4849,7 +4922,8 @@ "step_type", "vector_db_ids", "inserted_context" - ] + ], + "title": "MemoryRetrievalStep" }, "SafetyViolation": { "type": "object", @@ -4890,7 +4964,8 @@ "required": [ "violation_level", "metadata" - ] + ], + "title": "SafetyViolation" }, "ShieldCallStep": { "type": "object", @@ -4923,7 +4998,8 @@ "turn_id", "step_id", "step_type" - ] + ], + "title": "ShieldCallStep" }, "ToolExecutionStep": { "type": "object", @@ -4967,7 +5043,8 @@ "step_type", "tool_calls", "tool_responses" - ] + ], + "title": "ToolExecutionStep" }, "ToolResponse": { "type": "object", @@ -4984,7 +5061,8 @@ "wolfram_alpha", "photogen", "code_interpreter" - ] + ], + "title": "BuiltinTool" }, { "type": "string" @@ -5000,7 +5078,8 @@ "call_id", "tool_name", "content" - ] + ], + "title": "ToolResponse" }, "Turn": { "type": "object", @@ -5087,7 +5166,8 @@ "required": [ "content", "mime_type" - ] + ], + "title": "Attachment" } }, "started_at": { @@ -5108,6 +5188,7 @@ "output_message", "started_at" ], + "title": "Turn", "description": "A single turn in an interaction with an Agentic System." 
}, "ViolationLevel": { @@ -5116,7 +5197,8 @@ "info", "warn", "error" - ] + ], + "title": "ViolationLevel" }, "AgentTurnResponseEvent": { "type": "object", @@ -5128,7 +5210,8 @@ "additionalProperties": false, "required": [ "payload" - ] + ], + "title": "AgentTurnResponseEvent" }, "AgentTurnResponseEventPayload": { "oneOf": [ @@ -5174,7 +5257,8 @@ "tool_execution", "shield_call", "memory_retrieval" - ] + ], + "title": "StepType" }, "step_id": { "type": "string" @@ -5211,7 +5295,8 @@ "step_type", "step_id", "step_details" - ] + ], + "title": "AgentTurnResponseStepCompletePayload" }, "AgentTurnResponseStepProgressPayload": { "type": "object", @@ -5228,7 +5313,8 @@ "tool_execution", "shield_call", "memory_retrieval" - ] + ], + "title": "StepType" }, "step_id": { "type": "string" @@ -5243,7 +5329,8 @@ "step_type", "step_id", "delta" - ] + ], + "title": "AgentTurnResponseStepProgressPayload" }, "AgentTurnResponseStepStartPayload": { "type": "object", @@ -5260,7 +5347,8 @@ "tool_execution", "shield_call", "memory_retrieval" - ] + ], + "title": "StepType" }, "step_id": { "type": "string" @@ -5296,7 +5384,8 @@ "event_type", "step_type", "step_id" - ] + ], + "title": "AgentTurnResponseStepStartPayload" }, "AgentTurnResponseStreamChunk": { "type": "object", @@ -5309,6 +5398,7 @@ "required": [ "event" ], + "title": "AgentTurnResponseStreamChunk", "description": "streamed agent turn completion response." }, "AgentTurnResponseTurnCompletePayload": { @@ -5327,7 +5417,8 @@ "required": [ "event_type", "turn" - ] + ], + "title": "AgentTurnResponseTurnCompletePayload" }, "AgentTurnResponseTurnStartPayload": { "type": "object", @@ -5345,7 +5436,8 @@ "required": [ "event_type", "turn_id" - ] + ], + "title": "AgentTurnResponseTurnStartPayload" }, "EmbeddingsRequest": { "type": "object", @@ -5366,7 +5458,8 @@ "required": [ "model_id", "contents" - ] + ], + "title": "EmbeddingsRequest" }, "EmbeddingsResponse": { "type": "object", @@ -5386,6 +5479,7 @@ "required": [ "embeddings" ], + "title": "EmbeddingsResponse", "description": "Response containing generated embeddings." }, "EvaluateRowsRequest": { @@ -5434,7 +5528,8 @@ "input_rows", "scoring_functions", "task_config" - ] + ], + "title": "EvaluateRowsRequest" }, "Session": { "type": "object", @@ -5463,6 +5558,7 @@ "turns", "started_at" ], + "title": "Session", "description": "A single session of an interaction with an Agentic System." 
}, "AgentStepResponse": { @@ -5497,7 +5593,8 @@ "additionalProperties": false, "required": [ "step" - ] + ], + "title": "AgentStepResponse" }, "AgentTurnInputType": { "type": "object", @@ -5511,7 +5608,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "AgentTurnInputType" }, "ArrayType": { "type": "object", @@ -5525,7 +5623,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "ArrayType" }, "BooleanType": { "type": "object", @@ -5539,7 +5638,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "BooleanType" }, "ChatCompletionInputType": { "type": "object", @@ -5553,7 +5653,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "ChatCompletionInputType" }, "CompletionInputType": { "type": "object", @@ -5567,7 +5668,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "CompletionInputType" }, "Dataset": { "type": "object", @@ -5630,7 +5732,8 @@ "dataset_schema", "url", "metadata" - ] + ], + "title": "Dataset" }, "JsonType": { "type": "object", @@ -5644,7 +5747,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "JsonType" }, "NumberType": { "type": "object", @@ -5658,7 +5762,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "NumberType" }, "ObjectType": { "type": "object", @@ -5672,7 +5777,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "ObjectType" }, "ParamType": { "oneOf": [ @@ -5735,7 +5841,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "StringType" }, "UnionType": { "type": "object", @@ -5749,7 +5856,8 @@ "additionalProperties": false, "required": [ "type" - ] + ], + "title": "UnionType" }, "Model": { "type": "object", @@ -5806,14 +5914,16 @@ "type", "metadata", "model_type" - ] + ], + "title": "Model" }, "ModelType": { "type": "string", "enum": [ "llm", "embedding" - ] + ], + "title": "ModelType" }, "PaginatedRowsResult": { "type": "object", @@ -5857,7 +5967,8 @@ "required": [ "rows", "total_count" - ] + ], + "title": "PaginatedRowsResult" }, "ScoringFn": { "type": "object", @@ -5919,7 +6030,8 @@ "type", "metadata", "return_type" - ] + ], + "title": "ScoringFn" }, "Shield": { "type": "object", @@ -5971,6 +6083,7 @@ "provider_id", "type" ], + "title": "Shield", "description": "A safety shield resource that can be used to check content" }, "Span": { @@ -6028,14 +6141,16 @@ "trace_id", "name", "start_time" - ] + ], + "title": "Span" }, "SpanStatus": { "type": "string", "enum": [ "ok", "error" - ] + ], + "title": "SpanStatus" }, "SpanWithStatus": { "type": "object", @@ -6095,7 +6210,8 @@ "trace_id", "name", "start_time" - ] + ], + "title": "SpanWithStatus" }, "QuerySpanTreeResponse": { "type": "object", @@ -6110,7 +6226,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "QuerySpanTreeResponse" }, "Tool": { "type": "object", @@ -6180,7 +6297,8 @@ "tool_host", "description", "parameters" - ] + ], + "title": "Tool" }, "ToolHost": { "type": "string", @@ -6188,7 +6306,8 @@ "distribution", "client", "model_context_protocol" - ] + ], + "title": "ToolHost" }, "ToolGroup": { "type": "object", @@ -6242,7 +6361,8 @@ "provider_resource_id", "provider_id", "type" - ] + ], + "title": "ToolGroup" }, "Trace": { "type": "object", @@ -6267,10 +6387,12 @@ "trace_id", "root_span_id", "start_time" - ] + ], + "title": "Trace" }, "Checkpoint": { - "description": "Checkpoint created during training runs" + "description": "Checkpoint created during training 
runs", + "title": "Checkpoint" }, "PostTrainingJobArtifactsResponse": { "type": "object", @@ -6290,6 +6412,7 @@ "job_uuid", "checkpoints" ], + "title": "PostTrainingJobArtifactsResponse", "description": "Artifacts of a finetuning job." }, "PostTrainingJobStatusResponse": { @@ -6351,6 +6474,7 @@ "status", "checkpoints" ], + "title": "PostTrainingJobStatusResponse", "description": "Status of a finetuning job." }, "ListPostTrainingJobsResponse": { @@ -6368,14 +6492,16 @@ "additionalProperties": false, "required": [ "job_uuid" - ] + ], + "title": "PostTrainingJob" } } }, "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListPostTrainingJobsResponse" }, "VectorDB": { "type": "object", @@ -6409,7 +6535,8 @@ "type", "embedding_model", "embedding_dimension" - ] + ], + "title": "VectorDB" }, "HealthInfo": { "type": "object", @@ -6421,7 +6548,8 @@ "additionalProperties": false, "required": [ "status" - ] + ], + "title": "HealthInfo" }, "RAGDocument": { "type": "object", @@ -6482,7 +6610,8 @@ "document_id", "content", "metadata" - ] + ], + "title": "RAGDocument" }, "InsertRequest": { "type": "object", @@ -6505,7 +6634,8 @@ "documents", "vector_db_id", "chunk_size_in_tokens" - ] + ], + "title": "InsertRequest" }, "InsertChunksRequest": { "type": "object", @@ -6551,7 +6681,8 @@ "required": [ "content", "metadata" - ] + ], + "title": "Chunk" } }, "ttl_seconds": { @@ -6562,7 +6693,8 @@ "required": [ "vector_db_id", "chunks" - ] + ], + "title": "InsertChunksRequest" }, "InvokeToolRequest": { "type": "object", @@ -6600,7 +6732,8 @@ "required": [ "tool_name", "kwargs" - ] + ], + "title": "InvokeToolRequest" }, "ToolInvocationResult": { "type": "object", @@ -6618,7 +6751,8 @@ "additionalProperties": false, "required": [ "content" - ] + ], + "title": "ToolInvocationResult" }, "ListDatasetsResponse": { "type": "object", @@ -6633,7 +6767,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListDatasetsResponse" }, "ListModelsResponse": { "type": "object", @@ -6648,7 +6783,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListModelsResponse" }, "ProviderInfo": { "type": "object", @@ -6668,7 +6804,8 @@ "api", "provider_id", "provider_type" - ] + ], + "title": "ProviderInfo" }, "ListProvidersResponse": { "type": "object", @@ -6683,7 +6820,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListProvidersResponse" }, "RouteInfo": { "type": "object", @@ -6706,7 +6844,8 @@ "route", "method", "provider_types" - ] + ], + "title": "RouteInfo" }, "ListRoutesResponse": { "type": "object", @@ -6721,7 +6860,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListRoutesResponse" }, "ListScoringFunctionsResponse": { "type": "object", @@ -6736,7 +6876,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListScoringFunctionsResponse" }, "ListShieldsResponse": { "type": "object", @@ -6751,7 +6892,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListShieldsResponse" }, "ListToolGroupsResponse": { "type": "object", @@ -6766,7 +6908,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListToolGroupsResponse" }, "ListToolsResponse": { "type": "object", @@ -6781,7 +6924,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListToolsResponse" }, "ListVectorDBsResponse": { "type": "object", @@ -6796,7 +6940,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "ListVectorDBsResponse" }, 
"Event": { "oneOf": [ @@ -6828,7 +6973,8 @@ "warn", "error", "critical" - ] + ], + "title": "LogSeverity" }, "SpanEndPayload": { "type": "object", @@ -6846,7 +6992,8 @@ "required": [ "type", "status" - ] + ], + "title": "SpanEndPayload" }, "SpanStartPayload": { "type": "object", @@ -6867,7 +7014,8 @@ "required": [ "type", "name" - ] + ], + "title": "SpanStartPayload" }, "StructuredLogEvent": { "type": "object", @@ -6920,7 +7068,8 @@ "timestamp", "type", "payload" - ] + ], + "title": "StructuredLogEvent" }, "StructuredLogPayload": { "oneOf": [ @@ -6994,7 +7143,8 @@ "type", "message", "severity" - ] + ], + "title": "UnstructuredLogEvent" }, "LogEventRequest": { "type": "object", @@ -7010,7 +7160,8 @@ "required": [ "event", "ttl_seconds" - ] + ], + "title": "LogEventRequest" }, "DPOAlignmentConfig": { "type": "object", @@ -7034,7 +7185,8 @@ "reward_clip", "epsilon", "gamma" - ] + ], + "title": "DPOAlignmentConfig" }, "DataConfig": { "type": "object", @@ -7069,14 +7221,16 @@ "batch_size", "shuffle", "data_format" - ] + ], + "title": "DataConfig" }, "DatasetFormat": { "type": "string", "enum": [ "instruct", "dialog" - ] + ], + "title": "DatasetFormat" }, "EfficiencyConfig": { "type": "object", @@ -7098,7 +7252,8 @@ "default": false } }, - "additionalProperties": false + "additionalProperties": false, + "title": "EfficiencyConfig" }, "OptimizerConfig": { "type": "object", @@ -7122,7 +7277,8 @@ "lr", "weight_decay", "num_warmup_steps" - ] + ], + "title": "OptimizerConfig" }, "OptimizerType": { "type": "string", @@ -7130,7 +7286,8 @@ "adam", "adamw", "sgd" - ] + ], + "title": "OptimizerType" }, "TrainingConfig": { "type": "object", @@ -7169,7 +7326,8 @@ "max_validation_steps", "data_config", "optimizer_config" - ] + ], + "title": "TrainingConfig" }, "PreferenceOptimizeRequest": { "type": "object", @@ -7245,7 +7403,8 @@ "training_config", "hyperparam_search_config", "logger_config" - ] + ], + "title": "PreferenceOptimizeRequest" }, "PostTrainingJob": { "type": "object", @@ -7257,7 +7416,8 @@ "additionalProperties": false, "required": [ "job_uuid" - ] + ], + "title": "PostTrainingJob" }, "DefaultRAGQueryGeneratorConfig": { "type": "object", @@ -7276,7 +7436,8 @@ "required": [ "type", "separator" - ] + ], + "title": "DefaultRAGQueryGeneratorConfig" }, "LLMRAGQueryGeneratorConfig": { "type": "object", @@ -7298,7 +7459,8 @@ "type", "model", "template" - ] + ], + "title": "LLMRAGQueryGeneratorConfig" }, "RAGQueryConfig": { "type": "object", @@ -7320,7 +7482,8 @@ "query_generator_config", "max_tokens_in_context", "max_chunks" - ] + ], + "title": "RAGQueryConfig" }, "RAGQueryGeneratorConfig": { "oneOf": [ @@ -7359,7 +7522,8 @@ "required": [ "content", "vector_db_ids" - ] + ], + "title": "QueryRequest" }, "RAGQueryResult": { "type": "object", @@ -7368,7 +7532,8 @@ "$ref": "#/components/schemas/InterleavedContent" } }, - "additionalProperties": false + "additionalProperties": false, + "title": "RAGQueryResult" }, "QueryChunksRequest": { "type": "object", @@ -7409,7 +7574,8 @@ "required": [ "vector_db_id", "query" - ] + ], + "title": "QueryChunksRequest" }, "QueryChunksResponse": { "type": "object", @@ -7452,7 +7618,8 @@ "required": [ "content", "metadata" - ] + ], + "title": "Chunk" } }, "scores": { @@ -7466,7 +7633,8 @@ "required": [ "chunks", "scores" - ] + ], + "title": "QueryChunksResponse" }, "QueryCondition": { "type": "object", @@ -7505,7 +7673,8 @@ "key", "op", "value" - ] + ], + "title": "QueryCondition" }, "QueryConditionOp": { "type": "string", @@ -7514,7 +7683,8 @@ "ne", "gt", "lt" - ] + ], + 
"title": "QueryConditionOp" }, "QuerySpansResponse": { "type": "object", @@ -7529,7 +7699,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "QuerySpansResponse" }, "QueryTracesResponse": { "type": "object", @@ -7544,7 +7715,8 @@ "additionalProperties": false, "required": [ "data" - ] + ], + "title": "QueryTracesResponse" }, "RegisterBenchmarkRequest": { "type": "object", @@ -7598,7 +7770,8 @@ "benchmark_id", "dataset_id", "scoring_functions" - ] + ], + "title": "RegisterBenchmarkRequest" }, "RegisterDatasetRequest": { "type": "object", @@ -7652,7 +7825,8 @@ "dataset_id", "dataset_schema", "url" - ] + ], + "title": "RegisterDatasetRequest" }, "RegisterModelRequest": { "type": "object", @@ -7698,7 +7872,8 @@ "additionalProperties": false, "required": [ "model_id" - ] + ], + "title": "RegisterModelRequest" }, "RegisterScoringFunctionRequest": { "type": "object", @@ -7727,7 +7902,8 @@ "scoring_fn_id", "description", "return_type" - ] + ], + "title": "RegisterScoringFunctionRequest" }, "RegisterShieldRequest": { "type": "object", @@ -7770,7 +7946,8 @@ "additionalProperties": false, "required": [ "shield_id" - ] + ], + "title": "RegisterShieldRequest" }, "RegisterToolGroupRequest": { "type": "object", @@ -7814,7 +7991,8 @@ "required": [ "toolgroup_id", "provider_id" - ] + ], + "title": "RegisterToolGroupRequest" }, "RegisterVectorDbRequest": { "type": "object", @@ -7839,7 +8017,8 @@ "required": [ "vector_db_id", "embedding_model" - ] + ], + "title": "RegisterVectorDbRequest" }, "RunEvalRequest": { "type": "object", @@ -7851,7 +8030,8 @@ "additionalProperties": false, "required": [ "task_config" - ] + ], + "title": "RunEvalRequest" }, "RunShieldRequest": { "type": "object", @@ -7896,7 +8076,8 @@ "shield_id", "messages", "params" - ] + ], + "title": "RunShieldRequest" }, "RunShieldResponse": { "type": "object", @@ -7905,7 +8086,8 @@ "$ref": "#/components/schemas/SafetyViolation" } }, - "additionalProperties": false + "additionalProperties": false, + "title": "RunShieldResponse" }, "SaveSpansToDatasetRequest": { "type": "object", @@ -7934,7 +8116,8 @@ "attribute_filters", "attributes_to_save", "dataset_id" - ] + ], + "title": "SaveSpansToDatasetRequest" }, "ScoreRequest": { "type": "object", @@ -7985,7 +8168,8 @@ "required": [ "input_rows", "scoring_functions" - ] + ], + "title": "ScoreRequest" }, "ScoreResponse": { "type": "object", @@ -8000,7 +8184,8 @@ "additionalProperties": false, "required": [ "results" - ] + ], + "title": "ScoreResponse" }, "ScoreBatchRequest": { "type": "object", @@ -8030,7 +8215,8 @@ "dataset_id", "scoring_functions", "save_results_dataset" - ] + ], + "title": "ScoreBatchRequest" }, "ScoreBatchResponse": { "type": "object", @@ -8048,7 +8234,8 @@ "additionalProperties": false, "required": [ "results" - ] + ], + "title": "ScoreBatchResponse" }, "AlgorithmConfig": { "oneOf": [ @@ -8110,7 +8297,8 @@ "apply_lora_to_output", "rank", "alpha" - ] + ], + "title": "LoraFinetuningConfig" }, "QATFinetuningConfig": { "type": "object", @@ -8132,7 +8320,8 @@ "type", "quantizer_name", "group_size" - ] + ], + "title": "QATFinetuningConfig" }, "SupervisedFineTuneRequest": { "type": "object", @@ -8210,7 +8399,8 @@ "hyperparam_search_config", "logger_config", "model" - ] + ], + "title": "SupervisedFineTuneRequest" }, "SyntheticDataGenerateRequest": { "type": "object", @@ -8231,6 +8421,7 @@ "top_k_top_p", "sigmoid" ], + "title": "FilteringFunction", "description": "The type of filtering function." 
}, "model": { @@ -8241,7 +8432,8 @@ "required": [ "dialogs", "filtering_function" - ] + ], + "title": "SyntheticDataGenerateRequest" }, "SyntheticDataGenerationResponse": { "type": "object", @@ -8304,6 +8496,7 @@ "required": [ "synthetic_data" ], + "title": "SyntheticDataGenerationResponse", "description": "Response from the synthetic data generation. Batch of (prompt, response, score) tuples that pass the threshold." }, "VersionInfo": { @@ -8316,7 +8509,8 @@ "additionalProperties": false, "required": [ "version" - ] + ], + "title": "VersionInfo" } }, "responses": {} diff --git a/docs/_static/llama-stack-spec.yaml b/docs/_static/llama-stack-spec.yaml index 60b777e91..4d13ca565 100644 --- a/docs/_static/llama-stack-spec.yaml +++ b/docs/_static/llama-stack-spec.yaml @@ -1611,6 +1611,7 @@ components: required: - type - config + title: AgentCandidate AgentConfig: type: object properties: @@ -1638,6 +1639,7 @@ components: - auto - required - none + title: ToolChoice description: >- Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities @@ -1648,6 +1650,7 @@ components: - json - function_tag - python_list + title: ToolPromptFormat description: >- Prompt format for calling custom / zero shot tools. tool_config: @@ -1668,6 +1671,7 @@ components: required: - model - instructions + title: AgentConfig AgentTool: oneOf: - type: string @@ -1689,6 +1693,7 @@ components: required: - name - args + title: AgentToolGroupWithArgs AggregationFunctionType: type: string enum: @@ -1696,6 +1701,7 @@ components: - median - categorical_count - accuracy + title: AggregationFunctionType BasicScoringFnParams: type: object properties: @@ -1710,6 +1716,7 @@ components: additionalProperties: false required: - type + title: BasicScoringFnParams BenchmarkConfig: type: object properties: @@ -1730,6 +1737,7 @@ components: - type - eval_candidate - scoring_params + title: BenchmarkConfig EvalCandidate: oneOf: - $ref: '#/components/schemas/ModelCandidate' @@ -1764,6 +1772,7 @@ components: required: - type - bnf + title: GrammarResponseFormat description: >- Configuration for grammar-guided response generation. GreedySamplingStrategy: @@ -1776,6 +1785,7 @@ components: additionalProperties: false required: - type + title: GreedySamplingStrategy ImageContentItem: type: object properties: @@ -1804,6 +1814,7 @@ components: required: - type - image + title: ImageContentItem description: A image content item InterleavedContent: oneOf: @@ -1847,6 +1858,7 @@ components: required: - type - json_schema + title: JsonSchemaResponseFormat description: >- Configuration for JSON schema-guided response generation. 
LLMAsJudgeScoringFnParams: @@ -1872,6 +1884,7 @@ components: required: - type - judge_model + title: LLMAsJudgeScoringFnParams ModelCandidate: type: object properties: @@ -1890,6 +1903,7 @@ components: - type - model - sampling_params + title: ModelCandidate RegexParserScoringFnParams: type: object properties: @@ -1908,6 +1922,7 @@ components: additionalProperties: false required: - type + title: RegexParserScoringFnParams ResponseFormat: oneOf: - $ref: '#/components/schemas/JsonSchemaResponseFormat' @@ -1931,6 +1946,7 @@ components: additionalProperties: false required: - strategy + title: SamplingParams SamplingStrategy: oneOf: - $ref: '#/components/schemas/GreedySamplingStrategy' @@ -1972,6 +1988,7 @@ components: required: - role - content + title: SystemMessage description: >- A system message providing instructions or context to the model. TextContentItem: @@ -1990,6 +2007,7 @@ components: required: - type - text + title: TextContentItem description: A text content item ToolConfig: type: object @@ -2001,6 +2019,7 @@ components: - auto - required - none + title: ToolChoice description: >- Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following @@ -2036,6 +2055,7 @@ components: where the function definitions should be inserted. default: append additionalProperties: false + title: ToolConfig description: Configuration for tool use. ToolDef: type: object @@ -2061,6 +2081,7 @@ components: additionalProperties: false required: - name + title: ToolDef ToolParameter: type: object properties: @@ -2087,6 +2108,7 @@ components: - parameter_type - description - required + title: ToolParameter TopKSamplingStrategy: type: object properties: @@ -2100,6 +2122,7 @@ components: required: - type - top_k + title: TopKSamplingStrategy TopPSamplingStrategy: type: object properties: @@ -2115,6 +2138,7 @@ components: additionalProperties: false required: - type + title: TopPSamplingStrategy URL: type: object properties: @@ -2123,6 +2147,7 @@ components: additionalProperties: false required: - uri + title: URL DeprecatedEvaluateRowsRequest: type: object properties: @@ -2149,6 +2174,7 @@ components: - input_rows - scoring_functions - task_config + title: DeprecatedEvaluateRowsRequest EvaluateResponse: type: object properties: @@ -2172,6 +2198,7 @@ components: required: - generations - scores + title: EvaluateResponse ScoringResult: type: object properties: @@ -2201,6 +2228,7 @@ components: required: - score_rows - aggregated_results + title: ScoringResult Benchmark: type: object properties: @@ -2239,6 +2267,7 @@ components: - dataset_id - scoring_functions - metadata + title: Benchmark JobStatus: type: string enum: @@ -2246,6 +2275,7 @@ components: - in_progress - failed - scheduled + title: JobStatus ListBenchmarksResponse: type: object properties: @@ -2256,6 +2286,7 @@ components: additionalProperties: false required: - data + title: ListBenchmarksResponse DeprecatedRegisterEvalTaskRequest: type: object properties: @@ -2286,6 +2317,7 @@ components: - eval_task_id - dataset_id - scoring_functions + title: DeprecatedRegisterEvalTaskRequest DeprecatedRunEvalRequest: type: object properties: @@ -2294,6 +2326,7 @@ components: additionalProperties: false required: - task_config + title: DeprecatedRunEvalRequest Job: type: object properties: @@ -2302,6 +2335,7 @@ components: additionalProperties: false required: - job_id + title: Job AppendRowsRequest: type: object properties: @@ -2323,6 +2357,7 @@ components: required: - dataset_id - 
rows + title: AppendRowsRequest CompletionMessage: type: object properties: @@ -2359,6 +2394,7 @@ components: - role - content - stop_reason + title: CompletionMessage description: >- A message containing the model's (assistant) response in a chat conversation. Message: @@ -2387,6 +2423,7 @@ components: - wolfram_alpha - photogen - code_interpreter + title: BuiltinTool - type: string arguments: type: object @@ -2418,6 +2455,7 @@ components: - call_id - tool_name - arguments + title: ToolCall ToolDefinition: type: object properties: @@ -2429,6 +2467,7 @@ components: - wolfram_alpha - photogen - code_interpreter + title: BuiltinTool - type: string description: type: string @@ -2439,6 +2478,7 @@ components: additionalProperties: false required: - tool_name + title: ToolDefinition ToolParamDefinition: type: object properties: @@ -2460,6 +2500,7 @@ components: additionalProperties: false required: - param_type + title: ToolParamDefinition ToolResponseMessage: type: object properties: @@ -2481,6 +2522,7 @@ components: - wolfram_alpha - photogen - code_interpreter + title: BuiltinTool - type: string description: Name of the tool that was called content: @@ -2492,6 +2534,7 @@ components: - call_id - tool_name - content + title: ToolResponseMessage description: >- A message representing the result of a tool invocation. UserMessage: @@ -2516,6 +2559,7 @@ components: required: - role - content + title: UserMessage description: >- A message from the user in a chat conversation. BatchChatCompletionRequest: @@ -2541,6 +2585,7 @@ components: - auto - required - none + title: ToolChoice description: >- Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities @@ -2551,6 +2596,7 @@ components: - json - function_tag - python_list + title: ToolPromptFormat description: >- Prompt format for calling custom / zero shot tools. response_format: @@ -2564,10 +2610,12 @@ components: description: >- How many tokens (for each position) to return log probabilities for. additionalProperties: false + title: LogProbConfig additionalProperties: false required: - model - messages_batch + title: BatchChatCompletionRequest BatchChatCompletionResponse: type: object properties: @@ -2578,6 +2626,7 @@ components: additionalProperties: false required: - batch + title: BatchChatCompletionResponse ChatCompletionResponse: type: object properties: @@ -2597,6 +2646,7 @@ components: additionalProperties: false required: - completion_message + title: ChatCompletionResponse description: Response from a chat completion request. MetricEvent: type: object @@ -2638,6 +2688,7 @@ components: - metric - value - unit + title: MetricEvent TokenLogProbs: type: object properties: @@ -2650,6 +2701,7 @@ components: additionalProperties: false required: - logprobs_by_token + title: TokenLogProbs description: Log probabilities for generated tokens. BatchCompletionRequest: type: object @@ -2673,10 +2725,12 @@ components: description: >- How many tokens (for each position) to return log probabilities for. 
additionalProperties: false + title: LogProbConfig additionalProperties: false required: - model - content_batch + title: BatchCompletionRequest BatchCompletionResponse: type: object properties: @@ -2687,6 +2741,7 @@ components: additionalProperties: false required: - batch + title: BatchCompletionResponse CompletionResponse: type: object properties: @@ -2710,6 +2765,7 @@ components: required: - content - stop_reason + title: CompletionResponse description: Response from a completion request. CancelTrainingJobRequest: type: object @@ -2719,6 +2775,7 @@ components: additionalProperties: false required: - job_uuid + title: CancelTrainingJobRequest ChatCompletionRequest: type: object properties: @@ -2796,6 +2853,7 @@ components: required: - model_id - messages + title: ChatCompletionRequest ChatCompletionResponseEvent: type: object properties: @@ -2829,6 +2887,7 @@ components: required: - event_type - delta + title: ChatCompletionResponseEvent description: >- An event during chat completion generation. ChatCompletionResponseStreamChunk: @@ -2844,6 +2903,7 @@ components: additionalProperties: false required: - event + title: ChatCompletionResponseStreamChunk description: >- A chunk of a streamed chat completion response. ContentDelta: @@ -2871,6 +2931,7 @@ components: required: - type - image + title: ImageDelta TextDelta: type: object properties: @@ -2884,6 +2945,7 @@ components: required: - type - text + title: TextDelta ToolCallDelta: type: object properties: @@ -2902,11 +2964,13 @@ components: - in_progress - failed - succeeded + title: ToolCallParseStatus additionalProperties: false required: - type - tool_call - parse_status + title: ToolCallDelta CompletionRequest: type: object properties: @@ -2947,6 +3011,7 @@ components: required: - model_id - content + title: CompletionRequest CompletionResponseStreamChunk: type: object properties: @@ -2971,6 +3036,7 @@ components: additionalProperties: false required: - delta + title: CompletionResponseStreamChunk description: >- A chunk of a streamed completion response. 
CreateAgentRequest: @@ -2981,6 +3047,7 @@ components: additionalProperties: false required: - agent_config + title: CreateAgentRequest AgentCreateResponse: type: object properties: @@ -2989,6 +3056,7 @@ components: additionalProperties: false required: - agent_id + title: AgentCreateResponse CreateAgentSessionRequest: type: object properties: @@ -2997,6 +3065,7 @@ components: additionalProperties: false required: - session_name + title: CreateAgentSessionRequest AgentSessionCreateResponse: type: object properties: @@ -3005,6 +3074,7 @@ components: additionalProperties: false required: - session_id + title: AgentSessionCreateResponse CreateAgentTurnRequest: type: object properties: @@ -3035,6 +3105,7 @@ components: required: - content - mime_type + title: Document toolgroups: type: array items: @@ -3044,6 +3115,7 @@ components: additionalProperties: false required: - messages + title: CreateAgentTurnRequest InferenceStep: type: object properties: @@ -3069,6 +3141,7 @@ components: - step_id - step_type - model_response + title: InferenceStep MemoryRetrievalStep: type: object properties: @@ -3097,6 +3170,7 @@ components: - step_type - vector_db_ids - inserted_context + title: MemoryRetrievalStep SafetyViolation: type: object properties: @@ -3118,6 +3192,7 @@ components: required: - violation_level - metadata + title: SafetyViolation ShieldCallStep: type: object properties: @@ -3142,6 +3217,7 @@ components: - turn_id - step_id - step_type + title: ShieldCallStep ToolExecutionStep: type: object properties: @@ -3174,6 +3250,7 @@ components: - step_type - tool_calls - tool_responses + title: ToolExecutionStep ToolResponse: type: object properties: @@ -3187,6 +3264,7 @@ components: - wolfram_alpha - photogen - code_interpreter + title: BuiltinTool - type: string content: $ref: '#/components/schemas/InterleavedContent' @@ -3195,6 +3273,7 @@ components: - call_id - tool_name - content + title: ToolResponse Turn: type: object properties: @@ -3244,6 +3323,7 @@ components: required: - content - mime_type + title: Attachment started_at: type: string format: date-time @@ -3258,6 +3338,7 @@ components: - steps - output_message - started_at + title: Turn description: >- A single turn in an interaction with an Agentic System. 
ViolationLevel: @@ -3266,6 +3347,7 @@ components: - info - warn - error + title: ViolationLevel AgentTurnResponseEvent: type: object properties: @@ -3274,6 +3356,7 @@ components: additionalProperties: false required: - payload + title: AgentTurnResponseEvent AgentTurnResponseEventPayload: oneOf: - $ref: '#/components/schemas/AgentTurnResponseStepStartPayload' @@ -3303,6 +3386,7 @@ components: - tool_execution - shield_call - memory_retrieval + title: StepType step_id: type: string step_details: @@ -3324,6 +3408,7 @@ components: - step_type - step_id - step_details + title: AgentTurnResponseStepCompletePayload AgentTurnResponseStepProgressPayload: type: object properties: @@ -3338,6 +3423,7 @@ components: - tool_execution - shield_call - memory_retrieval + title: StepType step_id: type: string delta: @@ -3348,6 +3434,7 @@ components: - step_type - step_id - delta + title: AgentTurnResponseStepProgressPayload AgentTurnResponseStepStartPayload: type: object properties: @@ -3362,6 +3449,7 @@ components: - tool_execution - shield_call - memory_retrieval + title: StepType step_id: type: string metadata: @@ -3379,6 +3467,7 @@ components: - event_type - step_type - step_id + title: AgentTurnResponseStepStartPayload AgentTurnResponseStreamChunk: type: object properties: @@ -3387,6 +3476,7 @@ components: additionalProperties: false required: - event + title: AgentTurnResponseStreamChunk description: streamed agent turn completion response. AgentTurnResponseTurnCompletePayload: type: object @@ -3401,6 +3491,7 @@ components: required: - event_type - turn + title: AgentTurnResponseTurnCompletePayload AgentTurnResponseTurnStartPayload: type: object properties: @@ -3414,6 +3505,7 @@ components: required: - event_type - turn_id + title: AgentTurnResponseTurnStartPayload EmbeddingsRequest: type: object properties: @@ -3434,6 +3526,7 @@ components: required: - model_id - contents + title: EmbeddingsRequest EmbeddingsResponse: type: object properties: @@ -3450,6 +3543,7 @@ components: additionalProperties: false required: - embeddings + title: EmbeddingsResponse description: >- Response containing generated embeddings. EvaluateRowsRequest: @@ -3478,6 +3572,7 @@ components: - input_rows - scoring_functions - task_config + title: EvaluateRowsRequest Session: type: object properties: @@ -3498,6 +3593,7 @@ components: - session_name - turns - started_at + title: Session description: >- A single session of an interaction with an Agentic System. 
AgentStepResponse: @@ -3519,6 +3615,7 @@ components: additionalProperties: false required: - step + title: AgentStepResponse AgentTurnInputType: type: object properties: @@ -3529,6 +3626,7 @@ components: additionalProperties: false required: - type + title: AgentTurnInputType ArrayType: type: object properties: @@ -3539,6 +3637,7 @@ components: additionalProperties: false required: - type + title: ArrayType BooleanType: type: object properties: @@ -3549,6 +3648,7 @@ components: additionalProperties: false required: - type + title: BooleanType ChatCompletionInputType: type: object properties: @@ -3559,6 +3659,7 @@ components: additionalProperties: false required: - type + title: ChatCompletionInputType CompletionInputType: type: object properties: @@ -3569,6 +3670,7 @@ components: additionalProperties: false required: - type + title: CompletionInputType Dataset: type: object properties: @@ -3607,6 +3709,7 @@ components: - dataset_schema - url - metadata + title: Dataset JsonType: type: object properties: @@ -3617,6 +3720,7 @@ components: additionalProperties: false required: - type + title: JsonType NumberType: type: object properties: @@ -3627,6 +3731,7 @@ components: additionalProperties: false required: - type + title: NumberType ObjectType: type: object properties: @@ -3637,6 +3742,7 @@ components: additionalProperties: false required: - type + title: ObjectType ParamType: oneOf: - $ref: '#/components/schemas/StringType' @@ -3672,6 +3778,7 @@ components: additionalProperties: false required: - type + title: StringType UnionType: type: object properties: @@ -3682,6 +3789,7 @@ components: additionalProperties: false required: - type + title: UnionType Model: type: object properties: @@ -3716,11 +3824,13 @@ components: - type - metadata - model_type + title: Model ModelType: type: string enum: - llm - embedding + title: ModelType PaginatedRowsResult: type: object properties: @@ -3744,6 +3854,7 @@ components: required: - rows - total_count + title: PaginatedRowsResult ScoringFn: type: object properties: @@ -3781,6 +3892,7 @@ components: - type - metadata - return_type + title: ScoringFn Shield: type: object properties: @@ -3810,6 +3922,7 @@ components: - provider_resource_id - provider_id - type + title: Shield description: >- A safety shield resource that can be used to check content Span: @@ -3845,11 +3958,13 @@ components: - trace_id - name - start_time + title: Span SpanStatus: type: string enum: - ok - error + title: SpanStatus SpanWithStatus: type: object properties: @@ -3885,6 +4000,7 @@ components: - trace_id - name - start_time + title: SpanWithStatus QuerySpanTreeResponse: type: object properties: @@ -3895,6 +4011,7 @@ components: additionalProperties: false required: - data + title: QuerySpanTreeResponse Tool: type: object properties: @@ -3938,12 +4055,14 @@ components: - tool_host - description - parameters + title: Tool ToolHost: type: string enum: - distribution - client - model_context_protocol + title: ToolHost ToolGroup: type: object properties: @@ -3975,6 +4094,7 @@ components: - provider_resource_id - provider_id - type + title: ToolGroup Trace: type: object properties: @@ -3993,8 +4113,10 @@ components: - trace_id - root_span_id - start_time + title: Trace Checkpoint: description: Checkpoint created during training runs + title: Checkpoint PostTrainingJobArtifactsResponse: type: object properties: @@ -4008,6 +4130,7 @@ components: required: - job_uuid - checkpoints + title: PostTrainingJobArtifactsResponse description: Artifacts of a finetuning job. 
     PostTrainingJobStatusResponse:
       type: object
@@ -4044,6 +4167,7 @@ components:
       - job_uuid
       - status
       - checkpoints
+      title: PostTrainingJobStatusResponse
       description: Status of a finetuning job.
     ListPostTrainingJobsResponse:
       type: object
@@ -4058,9 +4182,11 @@ components:
             additionalProperties: false
             required:
             - job_uuid
+            title: PostTrainingJob
       additionalProperties: false
       required:
       - data
+      title: ListPostTrainingJobsResponse
     VectorDB:
       type: object
       properties:
@@ -4086,6 +4212,7 @@ components:
       - type
       - embedding_model
       - embedding_dimension
+      title: VectorDB
     HealthInfo:
       type: object
       properties:
@@ -4094,6 +4221,7 @@ components:
       additionalProperties: false
       required:
       - status
+      title: HealthInfo
     RAGDocument:
       type: object
       properties:
@@ -4124,6 +4252,7 @@ components:
       - document_id
       - content
       - metadata
+      title: RAGDocument
     InsertRequest:
       type: object
       properties:
@@ -4140,6 +4269,7 @@ components:
       - documents
       - vector_db_id
       - chunk_size_in_tokens
+      title: InsertRequest
     InsertChunksRequest:
       type: object
       properties:
@@ -4166,12 +4296,14 @@ components:
             required:
             - content
             - metadata
+            title: Chunk
         ttl_seconds:
           type: integer
       additionalProperties: false
       required:
       - vector_db_id
       - chunks
+      title: InsertChunksRequest
     InvokeToolRequest:
       type: object
       properties:
@@ -4191,6 +4323,7 @@ components:
       required:
       - tool_name
       - kwargs
+      title: InvokeToolRequest
     ToolInvocationResult:
       type: object
       properties:
@@ -4203,6 +4336,7 @@ components:
       additionalProperties: false
       required:
       - content
+      title: ToolInvocationResult
     ListDatasetsResponse:
       type: object
       properties:
@@ -4213,6 +4347,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: ListDatasetsResponse
     ListModelsResponse:
       type: object
       properties:
@@ -4223,6 +4358,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: ListModelsResponse
     ProviderInfo:
       type: object
       properties:
@@ -4237,6 +4373,7 @@ components:
       - api
       - provider_id
       - provider_type
+      title: ProviderInfo
     ListProvidersResponse:
       type: object
       properties:
@@ -4247,6 +4384,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: ListProvidersResponse
     RouteInfo:
       type: object
       properties:
@@ -4263,6 +4401,7 @@ components:
       - route
       - method
       - provider_types
+      title: RouteInfo
     ListRoutesResponse:
       type: object
       properties:
@@ -4273,6 +4412,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: ListRoutesResponse
     ListScoringFunctionsResponse:
       type: object
       properties:
@@ -4283,6 +4423,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: ListScoringFunctionsResponse
     ListShieldsResponse:
       type: object
       properties:
@@ -4293,6 +4434,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: ListShieldsResponse
     ListToolGroupsResponse:
       type: object
       properties:
@@ -4303,6 +4445,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: ListToolGroupsResponse
     ListToolsResponse:
       type: object
       properties:
@@ -4313,6 +4456,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: ListToolsResponse
     ListVectorDBsResponse:
       type: object
       properties:
@@ -4323,6 +4467,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: ListVectorDBsResponse
     Event:
       oneOf:
       - $ref: '#/components/schemas/UnstructuredLogEvent'
@@ -4343,6 +4488,7 @@ components:
       - warn
       - error
       - critical
+      title: LogSeverity
     SpanEndPayload:
       type: object
       properties:
@@ -4356,6 +4502,7 @@ components:
       required:
       - type
       - status
+      title: SpanEndPayload
     SpanStartPayload:
       type: object
       properties:
@@ -4371,6 +4518,7 @@ components:
       required:
       - type
       - name
+      title: SpanStartPayload
     StructuredLogEvent:
       type: object
       properties:
@@ -4403,6 +4551,7 @@ components:
       - timestamp
       - type
       - payload
+      title: StructuredLogEvent
     StructuredLogPayload:
       oneOf:
       - $ref: '#/components/schemas/SpanStartPayload'
@@ -4447,6 +4596,7 @@ components:
       - type
       - message
       - severity
+      title: UnstructuredLogEvent
     LogEventRequest:
       type: object
       properties:
@@ -4458,6 +4608,7 @@ components:
       required:
       - event
       - ttl_seconds
+      title: LogEventRequest
     DPOAlignmentConfig:
       type: object
       properties:
@@ -4475,6 +4626,7 @@ components:
       - reward_clip
       - epsilon
       - gamma
+      title: DPOAlignmentConfig
     DataConfig:
       type: object
       properties:
@@ -4500,11 +4652,13 @@ components:
       - batch_size
       - shuffle
       - data_format
+      title: DataConfig
     DatasetFormat:
       type: string
       enum:
       - instruct
       - dialog
+      title: DatasetFormat
     EfficiencyConfig:
       type: object
       properties:
@@ -4521,6 +4675,7 @@ components:
           type: boolean
           default: false
       additionalProperties: false
+      title: EfficiencyConfig
     OptimizerConfig:
       type: object
       properties:
@@ -4538,12 +4693,14 @@ components:
       - lr
       - weight_decay
       - num_warmup_steps
+      title: OptimizerConfig
     OptimizerType:
       type: string
       enum:
       - adam
       - adamw
       - sgd
+      title: OptimizerType
     TrainingConfig:
       type: object
       properties:
@@ -4572,6 +4729,7 @@ components:
       - max_validation_steps
       - data_config
       - optimizer_config
+      title: TrainingConfig
     PreferenceOptimizeRequest:
       type: object
       properties:
@@ -4611,6 +4769,7 @@ components:
       - training_config
       - hyperparam_search_config
       - logger_config
+      title: PreferenceOptimizeRequest
     PostTrainingJob:
       type: object
       properties:
@@ -4619,6 +4778,7 @@ components:
       additionalProperties: false
       required:
       - job_uuid
+      title: PostTrainingJob
     DefaultRAGQueryGeneratorConfig:
       type: object
       properties:
@@ -4633,6 +4793,7 @@ components:
       required:
       - type
       - separator
+      title: DefaultRAGQueryGeneratorConfig
     LLMRAGQueryGeneratorConfig:
       type: object
       properties:
@@ -4649,6 +4810,7 @@ components:
       required:
       - type
       - model
       - template
+      title: LLMRAGQueryGeneratorConfig
     RAGQueryConfig:
       type: object
       properties:
@@ -4665,6 +4827,7 @@ components:
       - query_generator_config
       - max_tokens_in_context
       - max_chunks
+      title: RAGQueryConfig
     RAGQueryGeneratorConfig:
       oneOf:
       - $ref: '#/components/schemas/DefaultRAGQueryGeneratorConfig'
       - $ref: '#/components/schemas/LLMRAGQueryGeneratorConfig'
@@ -4689,12 +4852,14 @@ components:
       required:
       - content
       - vector_db_ids
+      title: QueryRequest
     RAGQueryResult:
       type: object
       properties:
         content:
           $ref: '#/components/schemas/InterleavedContent'
       additionalProperties: false
+      title: RAGQueryResult
     QueryChunksRequest:
       type: object
       properties:
@@ -4716,6 +4881,7 @@ components:
       required:
       - vector_db_id
       - query
+      title: QueryChunksRequest
     QueryChunksResponse:
       type: object
       properties:
@@ -4740,6 +4906,7 @@ components:
             required:
             - content
            - metadata
+            title: Chunk
         scores:
           type: array
           items:
@@ -4748,6 +4915,7 @@ components:
       required:
       - chunks
       - scores
+      title: QueryChunksResponse
     QueryCondition:
       type: object
       properties:
@@ -4768,6 +4936,7 @@ components:
       - key
       - op
       - value
+      title: QueryCondition
     QueryConditionOp:
       type: string
       enum:
@@ -4775,6 +4944,7 @@ components:
       - ne
       - gt
       - lt
+      title: QueryConditionOp
     QuerySpansResponse:
       type: object
       properties:
@@ -4785,6 +4955,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: QuerySpansResponse
     QueryTracesResponse:
       type: object
       properties:
@@ -4795,6 +4966,7 @@ components:
       additionalProperties: false
       required:
       - data
+      title: QueryTracesResponse
     RegisterBenchmarkRequest:
       type: object
       properties:
@@ -4825,6 +4997,7 @@ components:
       - benchmark_id
       - dataset_id
       - scoring_functions
+      title: RegisterBenchmarkRequest
     RegisterDatasetRequest:
       type: object
       properties:
@@ -4855,6 +5028,7 @@ components:
       - dataset_id
       - dataset_schema
       - url
+      title: RegisterDatasetRequest
     RegisterModelRequest:
       type: object
       properties:
@@ -4879,6 +5053,7 @@ components:
       additionalProperties: false
       required:
       - model_id
+      title: RegisterModelRequest
     RegisterScoringFunctionRequest:
       type: object
       properties:
@@ -4899,6 +5074,7 @@ components:
       - scoring_fn_id
       - description
       - return_type
+      title: RegisterScoringFunctionRequest
     RegisterShieldRequest:
       type: object
       properties:
@@ -4921,6 +5097,7 @@ components:
       additionalProperties: false
       required:
       - shield_id
+      title: RegisterShieldRequest
     RegisterToolGroupRequest:
       type: object
       properties:
@@ -4944,6 +5121,7 @@ components:
       required:
       - toolgroup_id
       - provider_id
+      title: RegisterToolGroupRequest
     RegisterVectorDbRequest:
       type: object
       properties:
@@ -4961,6 +5139,7 @@ components:
       required:
       - vector_db_id
       - embedding_model
+      title: RegisterVectorDbRequest
     RunEvalRequest:
       type: object
       properties:
@@ -4969,6 +5148,7 @@ components:
       additionalProperties: false
       required:
       - task_config
+      title: RunEvalRequest
     RunShieldRequest:
       type: object
       properties:
@@ -4993,12 +5173,14 @@ components:
       - shield_id
       - messages
       - params
+      title: RunShieldRequest
     RunShieldResponse:
       type: object
       properties:
         violation:
           $ref: '#/components/schemas/SafetyViolation'
       additionalProperties: false
+      title: RunShieldResponse
     SaveSpansToDatasetRequest:
       type: object
       properties:
@@ -5019,6 +5201,7 @@ components:
       - attribute_filters
       - attributes_to_save
       - dataset_id
+      title: SaveSpansToDatasetRequest
     ScoreRequest:
       type: object
       properties:
@@ -5044,6 +5227,7 @@ components:
       required:
       - input_rows
       - scoring_functions
+      title: ScoreRequest
     ScoreResponse:
       type: object
       properties:
@@ -5054,6 +5238,7 @@ components:
       additionalProperties: false
       required:
       - results
+      title: ScoreResponse
     ScoreBatchRequest:
       type: object
       properties:
@@ -5072,6 +5257,7 @@ components:
       - dataset_id
       - scoring_functions
       - save_results_dataset
+      title: ScoreBatchRequest
     ScoreBatchResponse:
       type: object
       properties:
@@ -5084,6 +5270,7 @@ components:
       additionalProperties: false
       required:
       - results
+      title: ScoreBatchResponse
     AlgorithmConfig:
       oneOf:
       - $ref: '#/components/schemas/LoraFinetuningConfig'
@@ -5126,6 +5313,7 @@ components:
       - apply_lora_to_output
       - rank
       - alpha
+      title: LoraFinetuningConfig
     QATFinetuningConfig:
       type: object
       properties:
@@ -5142,6 +5330,7 @@ components:
       - type
       - quantizer_name
       - group_size
+      title: QATFinetuningConfig
     SupervisedFineTuneRequest:
       type: object
       properties:
@@ -5182,6 +5371,7 @@ components:
       - hyperparam_search_config
       - logger_config
       - model
+      title: SupervisedFineTuneRequest
     SyntheticDataGenerateRequest:
       type: object
       properties:
@@ -5198,6 +5388,7 @@ components:
           - top_p
           - top_k_top_p
          - sigmoid
+          title: FilteringFunction
           description: The type of filtering function.
         model:
           type: string
@@ -5205,6 +5396,7 @@ components:
       additionalProperties: false
       required:
       - dialogs
       - filtering_function
+      title: SyntheticDataGenerateRequest
     SyntheticDataGenerationResponse:
       type: object
       properties:
@@ -5233,6 +5425,7 @@ components:
       additionalProperties: false
       required:
       - synthetic_data
+      title: SyntheticDataGenerationResponse
       description: >-
         Response from the synthetic data generation. Batch of (prompt, response,
         score) tuples that pass the threshold.
@@ -5244,6 +5437,7 @@ components:
       additionalProperties: false
       required:
       - version
+      title: VersionInfo
   responses: {}
 security:
 - Default: []
diff --git a/llama_stack/strong_typing/schema.py b/llama_stack/strong_typing/schema.py
index ddff7cf82..45c7130ba 100644
--- a/llama_stack/strong_typing/schema.py
+++ b/llama_stack/strong_typing/schema.py
@@ -108,7 +108,9 @@ def get_class_property_docstrings(

 def docstring_to_schema(data_type: type) -> Schema:
     short_description, long_description = get_class_docstrings(data_type)
-    schema: Schema = {}
+    schema: Schema = {
+        "title": python_type_to_name(data_type),
+    }

     description = "\n".join(filter(None, [short_description, long_description]))
     if description:

From 034ece0011ece62d905a2b8a163127a365dd6a8c Mon Sep 17 00:00:00 2001
From: Ashwin Bharambe
Date: Wed, 19 Feb 2025 13:54:04 -0800
Subject: [PATCH 37/37] Ensure that deprecations for fields follow through to OpenAPI

---
 docs/_static/llama-stack-spec.html  |  6 ++++--
 docs/_static/llama-stack-spec.yaml  |  2 ++
 llama_stack/strong_typing/schema.py | 17 ++++++++++++++++-
 3 files changed, 22 insertions(+), 3 deletions(-)

diff --git a/docs/_static/llama-stack-spec.html b/docs/_static/llama-stack-spec.html
index 82abc947b..2b6e1d11c 100644
--- a/docs/_static/llama-stack-spec.html
+++ b/docs/_static/llama-stack-spec.html
@@ -2702,7 +2702,8 @@
                         "none"
                     ],
                     "title": "ToolChoice",
-                    "description": "Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities of the model."
+                    "description": "Whether tool use is required or automatic. This is a hint to the model which may not be followed. It depends on the Instruction Following capabilities of the model.",
+                    "deprecated": true
                 },
                 "tool_prompt_format": {
                     "type": "string",
@@ -2712,7 +2713,8 @@
                         "python_list"
                     ],
                     "title": "ToolPromptFormat",
-                    "description": "Prompt format for calling custom / zero shot tools."
+                    "description": "Prompt format for calling custom / zero shot tools.",
+                    "deprecated": true
                 },
                 "tool_config": {
                     "$ref": "#/components/schemas/ToolConfig"
diff --git a/docs/_static/llama-stack-spec.yaml b/docs/_static/llama-stack-spec.yaml
index 4d13ca565..99300fedf 100644
--- a/docs/_static/llama-stack-spec.yaml
+++ b/docs/_static/llama-stack-spec.yaml
@@ -1644,6 +1644,7 @@ components:
             Whether tool use is required or automatic. This is a hint to the model
             which may not be followed. It depends on the Instruction Following capabilities
             of the model.
+          deprecated: true
         tool_prompt_format:
           type: string
           enum:
           - json
           - function_tag
           - python_list
           title: ToolPromptFormat
           description: >-
             Prompt format for calling custom / zero shot tools.
+          deprecated: true
         tool_config:
           $ref: '#/components/schemas/ToolConfig'
         max_infer_iters:
diff --git a/llama_stack/strong_typing/schema.py b/llama_stack/strong_typing/schema.py
index 45c7130ba..dfc51ea78 100644
--- a/llama_stack/strong_typing/schema.py
+++ b/llama_stack/strong_typing/schema.py
@@ -313,6 +313,17 @@ class JsonSchemaGenerator:
         data_type: TypeLike,
         force_expand: bool = False,
         json_schema_extra: Optional[dict] = None,
+    ) -> Schema:
+        common_info = {}
+        if json_schema_extra and "deprecated" in json_schema_extra:
+            common_info["deprecated"] = json_schema_extra["deprecated"]
+        return self._type_to_schema(data_type, force_expand, json_schema_extra) | common_info
+
+    def _type_to_schema(
+        self,
+        data_type: TypeLike,
+        force_expand: bool = False,
+        json_schema_extra: Optional[dict] = None,
     ) -> Schema:
         """
         Returns the JSON schema associated with a type.
@@ -489,7 +500,11 @@ class JsonSchemaGenerator:
         if "model_fields" in members:
             f = members["model_fields"]
             defaults = {k: finfo.default for k, finfo in f.items()}
-            json_schema_extra = f.get(output_name, None).json_schema_extra
+            if output_name in f:
+                finfo = f[output_name]
+                json_schema_extra = finfo.json_schema_extra or {}
+                if finfo.deprecated:
+                    json_schema_extra["deprecated"] = True

         if is_type_optional(property_type):
             optional_type: type = unwrap_optional_type(property_type)
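
Editor's note: the large block of `title:` additions to the generated spec above comes from a one-line change to the schema generator, which now seeds every schema with a title derived from the Python type's name (`python_type_to_name(data_type)`). The sketch below is a minimal, self-contained illustration of that idea, not the actual `llama_stack.strong_typing` code; `type_name`, `object_schema`, and the `HealthInfo` docstring are illustrative.

```python
# Illustrative sketch only: shows how a schema builder can attach a "title"
# derived from the Python class name, which is what produces entries such as
# `title: HealthInfo` in the regenerated llama-stack-spec.yaml above.
import json
from dataclasses import dataclass, fields
from typing import Any, Dict


def type_name(data_type: type) -> str:
    """Best-effort human-readable name for a type, e.g. 'HealthInfo'."""
    return getattr(data_type, "__name__", str(data_type))


def object_schema(data_type: type) -> Dict[str, Any]:
    """Build a minimal JSON-schema-like dict for a dataclass, titled after the class."""
    schema: Dict[str, Any] = {
        "title": type_name(data_type),  # the title is now always present
        "type": "object",
        "properties": {f.name: {} for f in fields(data_type)},
        "required": [f.name for f in fields(data_type)],
        "additionalProperties": False,
    }
    if data_type.__doc__:
        schema["description"] = data_type.__doc__.strip()
    return schema


@dataclass
class HealthInfo:
    """Illustrative docstring; the real schema's description may differ."""

    status: str


if __name__ == "__main__":
    print(json.dumps(object_schema(HealthInfo), indent=2))
```

Deriving the title from the type name keeps generated and hand-written schemas consistent without requiring a per-class annotation.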
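
The deprecation pass-through in PATCH 37/37 can be sketched the same way: a field-level deprecation flag is copied into that property's emitted schema, so the OpenAPI output gains `deprecated: true`, as seen for `tool_choice` and `tool_prompt_format` above. The sketch assumes Pydantic v2.7 or newer (where `Field(deprecated=...)` and `FieldInfo.deprecated` exist); the model and helper names are illustrative, not llama-stack types.

```python
# Hedged sketch of the deprecation pass-through: a Pydantic field declared
# deprecated ends up as `deprecated: true` on the corresponding property schema.
from typing import Any, Dict, Optional

from pydantic import BaseModel, Field


class ChatCompletionParams(BaseModel):
    """Illustrative stand-in for a request model with deprecated fields."""

    tool_choice: Optional[str] = Field(default=None, deprecated=True)
    tool_config: Optional[dict] = None


def property_schema(model: type[BaseModel], name: str) -> Dict[str, Any]:
    """Build a tiny per-property schema, copying the field's deprecation flag."""
    finfo = model.model_fields[name]
    # json_schema_extra is assumed to be a plain dict (or None) in this sketch.
    extra: Dict[str, Any] = dict(finfo.json_schema_extra or {})
    if finfo.deprecated:  # mirrors the JsonSchemaGenerator change above
        extra["deprecated"] = True
    return {"title": name, **extra}


if __name__ == "__main__":
    print(property_schema(ChatCompletionParams, "tool_choice"))
    # expected output: {'title': 'tool_choice', 'deprecated': True}
```

Propagating the flag at spec-generation time means API consumers see the deprecation in the published OpenAPI document rather than only as a runtime warning.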