Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-04 04:04:14 +00:00)
Latest commit: vllm: require `max_tokens` to be set, using the config value as the default; set `tool_choice` to `none` when no tools are provided (a sketch of this logic follows the listing).
Contents:

- bedrock
- test_inference_client_caching.py
- test_litellm_openai_mixin.py
- test_openai_base_url_config.py
- test_remote_vllm.py
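
The commit message above names two normalization rules for requests sent to a vLLM backend: `max_tokens` must always be set (falling back to a configured default), and `tool_choice` must be `none` when the caller supplies no tools. Below is a minimal, hypothetical sketch of that logic; `VLLMConfig`, `default_max_tokens`, and `prepare_request` are illustrative names, not the repository's actual API.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Any


@dataclass
class VLLMConfig:
    # Hypothetical config field: the server-wide default applied when
    # the caller does not set max_tokens explicitly.
    default_max_tokens: int = 512


def prepare_request(
    config: VLLMConfig,
    params: dict[str, Any],
    tools: list[dict[str, Any]] | None = None,
) -> dict[str, Any]:
    """Apply the two rules from the commit message.

    - max_tokens must always be set, so fall back to the config value.
    - tool_choice must be "none" when no tools are provided.
    """
    request = dict(params)

    # vLLM requires max_tokens; default it from the config when absent.
    if request.get("max_tokens") is None:
        request["max_tokens"] = config.default_max_tokens

    if tools:
        request["tools"] = tools
        request.setdefault("tool_choice", "auto")
    else:
        # No tools supplied: explicitly disable tool calling.
        request["tool_choice"] = "none"

    return request


if __name__ == "__main__":
    cfg = VLLMConfig(default_max_tokens=256)
    print(prepare_request(cfg, {"messages": [{"role": "user", "content": "hi"}]}))
    # -> {'messages': [...], 'max_tokens': 256, 'tool_choice': 'none'}
```

A unit test such as `test_remote_vllm.py` could then assert both branches: that a request with no `max_tokens` picks up the configured default, and that a request without tools ends up with `tool_choice == "none"`.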