# What does this PR do?

Remove the unused chat_completion implementations.

vLLM features ported:
- require `max_tokens` to be set, falling back to the config value when the request does not provide one
- set `tool_choice` to `none` if no tools are provided

A hedged sketch of these two behaviors follows below.

## Test Plan

CI
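The following is a minimal sketch of the two ported behaviors, not the actual llama-stack implementation; the names `VLLMConfig`, `max_tokens` as a config field, and `build_request_params` are hypothetical and used only for illustration.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class VLLMConfig:
    # Hypothetical config field: default token budget applied when a request
    # does not set max_tokens explicitly.
    max_tokens: int = 4096


def build_request_params(
    config: VLLMConfig,
    max_tokens: int | None = None,
    tools: list[dict[str, Any]] | None = None,
) -> dict[str, Any]:
    """Sketch of the two behaviors described in the PR summary.

    - max_tokens is always set; the config value is used when the caller
      does not provide one.
    - tool_choice is forced to "none" when no tools are supplied, so the
      backend never attempts a tool call that cannot exist.
    """
    params: dict[str, Any] = {
        "max_tokens": max_tokens if max_tokens is not None else config.max_tokens,
    }
    if tools:
        params["tools"] = tools
    else:
        params["tool_choice"] = "none"
    return params


if __name__ == "__main__":
    cfg = VLLMConfig(max_tokens=1024)
    # No explicit max_tokens and no tools: config default is used and
    # tool calling is disabled.
    print(build_request_params(cfg))
    # Explicit max_tokens and a tool definition: both pass through unchanged.
    print(build_request_params(cfg, max_tokens=256, tools=[{"type": "function"}]))
```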
Files in this directory:

- __init__.py
- embedding_mixin.py
- inference_store.py
- litellm_openai_mixin.py
- model_registry.py
- openai_compat.py
- openai_mixin.py
- prompt_adapter.py