# What does this PR do?

Remove unused `chat_completion` implementations.

vLLM features ported:

- require `max_tokens` to be set, falling back to the config value when the request omits it
- set `tool_choice` to `none` if no tools are provided

## Test Plan

ci
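Both ported behaviors are simple parameter normalizations applied before handing the request to vLLM. Below is a minimal sketch of that logic; the `CompletionRequest` and `VLLMConfig` shapes are hypothetical stand-ins, not the actual llama-stack types.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class VLLMConfig:
    """Hypothetical provider config; only the default token budget matters here."""
    max_tokens: int = 512


@dataclass
class CompletionRequest:
    """Hypothetical request shape covering the fields this PR touches."""
    tools: list = field(default_factory=list)
    tool_choice: Optional[str] = "auto"
    max_tokens: Optional[int] = None


def resolve_params(request: CompletionRequest, config: VLLMConfig) -> dict:
    # vLLM requires max_tokens: use the request value when present,
    # otherwise fall back to the configured default.
    max_tokens = (
        request.max_tokens if request.max_tokens is not None else config.max_tokens
    )

    # With no tools supplied, force tool_choice to "none" so the engine
    # never attempts tool-call decoding.
    tool_choice = request.tool_choice if request.tools else "none"

    return {"max_tokens": max_tokens, "tool_choice": tool_choice}


print(resolve_params(CompletionRequest(), VLLMConfig()))
# -> {'max_tokens': 512, 'tool_choice': 'none'}
```

The point of the fallback is that vLLM rejects requests without a token budget, so the provider must always supply one; the `tool_choice` override avoids sending a tool-selection directive when there are no tools to select from.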
- `__init__.py`
- `common.py`
- `config.py`
- `generators.py`
- `inference.py`
- `model_parallel.py`
- `parallel_utils.py`