llama-stack-mirror/llama_stack/providers/inline/inference/meta_reference
Dinesh Yeduguru 8af6951106
remove conflicting default for tool prompt format in chat completion (#742)
# What does this PR do?
We were setting a default value of `json` for the tool prompt format, which
conflicts with Llama 3.2/3.3 models since they use the `python_list` format.
This PR changes the default to `None`, and in the code we infer the default
based on the model.
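
Below is a minimal sketch of the inference behavior described above. The helper name `infer_tool_prompt_format`, the trimmed-down enum, and the substring-based model check are illustrative assumptions for this sketch, not the PR's actual code:

```python
from enum import Enum
from typing import Optional


class ToolPromptFormat(Enum):
    # Trimmed-down stand-in for llama-stack's ToolPromptFormat enum.
    json = "json"
    python_list = "python_list"


def infer_tool_prompt_format(
    model_id: str, requested: Optional[ToolPromptFormat] = None
) -> ToolPromptFormat:
    """Return an explicitly requested format, else infer one per model.

    The substring check below is a stand-in for however the real code
    identifies Llama 3.2/3.3 models.
    """
    if requested is not None:
        return requested
    if "3.2" in model_id or "3.3" in model_id:
        # Llama 3.2/3.3 models expect tool definitions as a Python list.
        return ToolPromptFormat.python_list
    # Older model families keep the previous json default.
    return ToolPromptFormat.json


# Usage: with the default of None, the format follows the model.
assert (
    infer_tool_prompt_format("meta-llama/Llama-3.2-3B-Instruct")
    is ToolPromptFormat.python_list
)
assert (
    infer_tool_prompt_format("meta-llama/Llama-3.1-8B-Instruct")
    is ToolPromptFormat.json
)
```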

Addresses: #695 

Tests:

❯ LLAMA_STACK_BASE_URL=http://localhost:5000 pytest -v tests/client-sdk/inference/test_inference.py -k "test_text_chat_completion"

❯ pytest llama_stack/providers/tests/inference/test_prompt_adapter.py
2025-01-10 10:41:53 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| `quantization/` | use logging instead of prints (#499) | 2024-11-21 11:32:53 -08:00 |
| `__init__.py` | Add provider deprecation support; change directory structure (#397) | 2024-11-07 13:04:53 -08:00 |
| `config.py` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `generation.py` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `inference.py` | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| `model_parallel.py` | Fix Meta reference GPU implementation (#663) | 2024-12-19 14:09:45 -08:00 |
| `parallel_utils.py` | Update types in parallel_utils for meta-refernece-gpu impl | 2024-12-19 13:58:41 -08:00 |