llama-stack-mirror/llama_stack/providers/remote/inference/vllm
Matthew Farrellee 3d119a86d4
chore: indicate to mypy that InferenceProvider.batch_completion/batch_chat_completion is concrete (#3239)
# What does this PR do?

closes https://github.com/llamastack/llama-stack/issues/3236

mypy treats a default implementation whose body is only
`raise NotImplementedError` as a trivial stub, so every provider ended
up re-implementing the same stubs.

This change puts enough into the default implementations for mypy to
consider them non-trivial, which lets us remove the duplicate
implementations from the providers. A sketch of the idea follows.
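
For illustration only, a minimal sketch of the before/after pattern; the class name, method signatures, and message are hypothetical, not the actual `InferenceProvider` definitions:

```python
from typing import Any, Protocol


class InferenceProvider(Protocol):
    async def batch_completion(self, model_id: str, **kwargs: Any) -> Any:
        # Before: a body that is only `raise NotImplementedError`
        # (optionally preceded by a docstring) is a "trivial" stub to
        # mypy, so it behaves like an abstract method and providers
        # were expected to override it.
        raise NotImplementedError

    async def batch_chat_completion(self, model_id: str, **kwargs: Any) -> Any:
        # After: an extra statement makes the body non-trivial, so mypy
        # accepts it as a concrete default and the duplicated provider
        # stubs can be deleted.
        message = f"{type(self).__name__} does not implement batch_chat_completion"
        raise NotImplementedError(message)
```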
2025-08-22 14:17:30 -07:00
__init__.py Fix precommit check after moving to ruff (#927) 2025-02-02 06:46:45 -08:00
config.py feat(registry): make the Stack query providers for model listing (#2862) 2025-07-24 10:39:53 -07:00
vllm.py chore: indicate to mypy that InferenceProvider.batch_completion/batch_chat_completion is concrete (#3239) 2025-08-22 14:17:30 -07:00