llama-stack-mirror/tests/unit/providers/inference
Matthew Farrellee d266c59c2a
chore: remove deprecated inference.chat_completion implementations (#3654)
# What does this PR do?

Remove the unused `chat_completion` implementations.

vLLM features ported (see the sketch after this list):
 - require `max_tokens` to be set, falling back to the config value
 - set `tool_choice` to `none` when no tools are provided
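A minimal sketch of the two ported behaviors, for illustration only; the function and parameter names (`build_vllm_request_params`, `config_max_tokens`, etc.) are assumptions, not identifiers from this PR or the vLLM provider code:

```python
# Sketch (assumed names, not the actual provider code) of the two
# vLLM behaviors described above.

def build_vllm_request_params(
    config_max_tokens: int,
    max_tokens: int | None = None,
    tools: list[dict] | None = None,
) -> dict:
    """Assemble OpenAI-compatible request params with the ported defaults."""
    params: dict = {}

    # 1. max_tokens is required; fall back to the provider config value.
    params["max_tokens"] = max_tokens if max_tokens is not None else config_max_tokens

    # 2. Without tool definitions, force tool_choice to "none" so the
    #    server never attempts a tool call it has no tools for.
    if tools:
        params["tools"] = tools
    else:
        params["tool_choice"] = "none"

    return params


# Example: no explicit max_tokens and no tools ->
# {"max_tokens": 4096, "tool_choice": "none"}
print(build_vllm_request_params(config_max_tokens=4096))
```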


## Test Plan

CI
2025-10-03 07:55:34 -04:00
| Name | Last commit | Date |
| --- | --- | --- |
| bedrock | fix: use lambda pattern for bedrock config env vars (#3307) | 2025-09-05 10:45:11 +02:00 |
| test_inference_client_caching.py | chore: update the groq inference impl to use openai-python for openai-compat functions (#3348) | 2025-09-06 15:36:27 -07:00 |
| test_litellm_openai_mixin.py | feat: add static embedding metadata to dynamic model listings for providers using OpenAIMixin (#3547) | 2025-09-25 17:17:00 -04:00 |
| test_openai_base_url_config.py | chore: add provider-data-api-key support to openaimixin (#3639) | 2025-10-01 13:44:59 -07:00 |
| test_remote_vllm.py | chore: remove deprecated inference.chat_completion implementations (#3654) | 2025-10-03 07:55:34 -04:00 |