llama-stack-mirror/llama_stack/providers/inline/inference
Latest commit: f754e1b65b by Matthew Farrellee, 2025-10-02 10:39:30 -04:00
chore: remove deprecated inference.chat_completion implementations

vllm:
 - requires max_tokens to be set; use the config value as the default
 - set tool_choice to "none" if no tools are provided
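The two vllm notes describe defensive parameter handling before calling the engine. Below is a minimal sketch of that logic; all names (VLLMConfig, ChatRequest, prepare_vllm_params) are hypothetical illustrations, not the actual llama-stack provider code.

```python
# Hypothetical sketch of the two vllm fixes noted in the commit message.
# VLLMConfig, ChatRequest, and prepare_vllm_params are illustrative names,
# not the real llama-stack API.
from dataclasses import dataclass


@dataclass
class VLLMConfig:
    # Provider-level default applied when a request omits max_tokens.
    max_tokens: int = 4096


@dataclass
class ChatRequest:
    messages: list
    tools: list | None = None
    tool_choice: str = "auto"
    max_tokens: int | None = None


def prepare_vllm_params(request: ChatRequest, config: VLLMConfig) -> dict:
    # vLLM requires max_tokens to be set, so fall back to the config value
    # when the request does not specify one.
    if request.max_tokens is not None:
        max_tokens = request.max_tokens
    else:
        max_tokens = config.max_tokens

    # vLLM rejects tool_choice="auto" when the request carries no tools,
    # so force it to "none" in that case.
    tool_choice = request.tool_choice if request.tools else "none"

    return {"max_tokens": max_tokens, "tool_choice": tool_choice}


if __name__ == "__main__":
    cfg = VLLMConfig(max_tokens=512)
    req = ChatRequest(messages=[{"role": "user", "content": "hi"}])
    # Prints {'max_tokens': 512, 'tool_choice': 'none'}
    print(prepare_vllm_params(req, cfg))
```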
meta_reference          chore: remove deprecated inference.chat_completion implementations   2025-10-02 10:39:30 -04:00
sentence_transformers   chore: remove deprecated inference.chat_completion implementations   2025-10-02 10:39:30 -04:00
__init__.py             precommit                                                             2024-11-08 17:58:58 -08:00