llama-stack-mirror/llama_stack/providers/inline/inference/meta_reference
Ihar Hrachyshka c3d7d17bc4
chore: fix typing hints for get_provider_impl deps arguments (#1544)
# What does this PR do?

The `deps` argument is a dict that may contain values of different types, as per
the resolver's `instantiate_provider` implementation. (As far as I understand, it also never
contains `ProviderSpec`s, but *instances* of provider implementations.)
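A minimal sketch of the typing change described above, assuming illustrative names (`Api`, `InferenceImpl`, the `get_provider_impl` signature are stand-ins, not the actual llama-stack definitions): the point is that `deps` maps each API to an already-instantiated implementation of some arbitrary type, so the value type is `Any` rather than `ProviderSpec`.

```python
from enum import Enum
from typing import Any


class Api(Enum):
    # Hypothetical subset of the API enum used as dict keys.
    INFERENCE = "inference"
    SAFETY = "safety"


class InferenceImpl:
    """Stand-in for an instantiated provider implementation."""


async def get_provider_impl(config: dict, deps: dict[Api, Any]) -> InferenceImpl:
    # `deps` holds provider *instances* of varying concrete types,
    # which is why the hint is dict[Api, Any], not dict[Api, ProviderSpec].
    return InferenceImpl()
```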


## Test Plan

mypy passes when type checks are enabled for these modules. (See #1543.)


Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-11 10:07:28 -07:00
| Name | Last commit message | Date |
| --- | --- | --- |
| llama3 | refactor: move generation.py to llama3 | 2025-03-03 13:46:50 -08:00 |
| quantization | refactor: move llama3 impl to meta_reference provider (#1364) | 2025-03-03 13:22:57 -08:00 |
| __init__.py | chore: fix typing hints for get_provider_impl deps arguments (#1544) | 2025-03-11 10:07:28 -07:00 |
| common.py | refactor: move generation.py to llama3 | 2025-03-03 13:46:50 -08:00 |
| config.py | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| inference.py | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| model_parallel.py | refactor: move generation.py to llama3 | 2025-03-03 13:50:19 -08:00 |
| parallel_utils.py | refactor: move generation.py to llama3 | 2025-03-03 13:46:50 -08:00 |