llama-stack/llama_stack/providers/inline/inference/vllm
Dinesh Yeduguru 8af6951106
remove conflicting default for tool prompt format in chat completion (#742)
# What does this PR do?
We were setting a default value of `json` for the tool prompt format, which
conflicts with Llama 3.2/3.3 models since they use `python_list`. This PR
changes the default to `None`; the code then infers an appropriate format
from the model.
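
A minimal sketch of the inference described above (the enum members and helper names here are assumptions for illustration, not the actual llama-stack API):

```python
from enum import Enum
from typing import Optional


class ToolPromptFormat(Enum):
    # Formats referenced in the PR; member names are assumed.
    json = "json"
    python_list = "python_list"


def infer_tool_prompt_format(model_id: str) -> ToolPromptFormat:
    # Hypothetical helper: Llama 3.2/3.3 expect python_list,
    # while earlier Llama 3.x models default to json.
    if "3.2" in model_id or "3.3" in model_id:
        return ToolPromptFormat.python_list
    return ToolPromptFormat.json


def resolve_tool_prompt_format(
    requested: Optional[ToolPromptFormat], model_id: str
) -> ToolPromptFormat:
    # With the default now None, only infer when the caller did not
    # pass an explicit format, so user choices are never overridden.
    if requested is not None:
        return requested
    return infer_tool_prompt_format(model_id)
```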

Addresses: #695 

Tests:

❯ LLAMA_STACK_BASE_URL=http://localhost:5000 pytest -v tests/client-sdk/inference/test_inference.py -k "test_text_chat_completion"

❯ pytest llama_stack/providers/tests/inference/test_prompt_adapter.py
2025-01-10 10:41:53 -08:00
__init__.py Add provider deprecation support; change directory structure (#397) 2024-11-07 13:04:53 -08:00
config.py Update more distribution docs to be simpler and partially codegen'ed 2024-11-20 22:03:44 -08:00
vllm.py remove conflicting default for tool prompt format in chat completion (#742) 2025-01-10 10:41:53 -08:00