llama-stack-mirror/llama_stack/providers
Matthew Farrellee f754e1b65b chore: remove deprecated inference.chat_completion implementations
vllm:
 - vLLM requires max_tokens to be set, so fall back to the config value
 - set tool_choice to "none" when no tools are provided
2025-10-02 10:39:30 -04:00
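The two vllm notes above describe small request-shaping behaviors in the remote vllm provider. As a minimal sketch of what they imply (the names VLLMConfig and build_request_params are hypothetical illustrations, not the actual llama-stack API):

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class VLLMConfig:
        # vLLM requires max_tokens on every request; the provider falls
        # back to this configured default when the caller omits it.
        max_tokens: int = 4096

    def build_request_params(config: VLLMConfig, params: dict[str, Any]) -> dict[str, Any]:
        out = dict(params)
        # "requires max_tokens be set, use config value"
        if not out.get("max_tokens"):
            out["max_tokens"] = config.max_tokens
        # "set tool_choice to none if no tools provided"
        if not out.get("tools"):
            out["tool_choice"] = "none"
        return out

For example, build_request_params(VLLMConfig(max_tokens=2048), {"messages": [...]}) would return the params with max_tokens=2048 filled in and tool_choice set to "none".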
inline         chore: remove deprecated inference.chat_completion implementations   2025-10-02 10:39:30 -04:00
registry       docs: provider and distro codegen migration (#3531)                  2025-09-24 14:01:29 -07:00
remote         chore: remove deprecated inference.chat_completion implementations   2025-10-02 10:39:30 -04:00
utils          chore: remove deprecated inference.chat_completion implementations   2025-10-02 10:39:30 -04:00
__init__.py    API Updates (#73)                                                    2024-09-17 19:51:35 -07:00
datatypes.py   feat: combine ProviderSpec datatypes (#3378)                         2025-09-18 16:10:00 +02:00