llama-stack-mirror/llama_stack/providers
ehhuang ee5e9b935a
feat: better using get_default_tool_prompt_format (#1360)
Summary:
https://github.com/meta-llama/llama-stack/pull/1214 introduced
`get_default_tool_prompt_format` but called it on the raw model
identifier, which cannot be resolved to a Llama model.

Here we move the call to this function later in the stack and rely on
the inference provider to resolve the raw identifier into a Llama model
before calling `get_default_tool_prompt_format`.
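
A minimal sketch of the intended flow, in Python: the provider first maps its raw identifier to a canonical Llama model, and only then derives the default tool prompt format. All names here (`resolve_model`, `LlamaModel`, the alias table, the fallback) are illustrative stand-ins for the llama-stack internals, not the exact API.

```python
# Hypothetical sketch of the fix; helper names are illustrative, not
# the actual llama-stack signatures.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ToolPromptFormat(Enum):
    json = "json"
    function_tag = "function_tag"
    python_list = "python_list"


@dataclass
class LlamaModel:
    # Canonical Llama descriptor, e.g. "Llama3.2-3B-Instruct"
    descriptor: str


def resolve_model(raw_identifier: str) -> Optional[LlamaModel]:
    """Map a provider-specific identifier (e.g. Ollama's
    "llama3.2:3b-instruct-fp16") to a canonical Llama model.
    The alias table is a made-up stand-in for the provider's registry."""
    aliases = {"llama3.2:3b-instruct-fp16": LlamaModel("Llama3.2-3B-Instruct")}
    return aliases.get(raw_identifier)


def get_default_tool_prompt_format(model: LlamaModel) -> ToolPromptFormat:
    """Pick the tool prompt format from the resolved model family
    (assumed mapping for illustration)."""
    if model.descriptor.startswith("Llama3.2"):
        return ToolPromptFormat.python_list
    return ToolPromptFormat.json


def choose_tool_prompt_format(raw_identifier: str) -> ToolPromptFormat:
    # Before #1360 (buggy): the format was derived directly from the raw
    # identifier, which the resolver-style lookup does not understand.
    # After #1360: resolve first, then derive the format.
    llama_model = resolve_model(raw_identifier)
    if llama_model is None:
        return ToolPromptFormat.json  # safe default for unknown models
    return get_default_tool_prompt_format(llama_model)
```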

Test Plan:
```
LLAMA_STACK_CONFIG=ollama pytest -s -v tests/client-sdk/inference/test_text_inference.py::test_text_chat_completion_with_tool_calling_and_non_streaming --inference-model=llama3.2:3b-instruct-fp16 --vision-inference-model=""
```

Before:

<img width="1288" alt="image" src="https://github.com/user-attachments/assets/918c7839-1f45-4540-864e-4b842cc367df" />

After:
<img width="1522" alt="image" src="https://github.com/user-attachments/assets/447d78af-b3b9-4837-8cb7-6ac549005efe" />
2025-03-03 14:50:06 -08:00
| Path | Last commit | Date |
|---|---|---|
| inline | refactor: move generation.py to llama3 | 2025-03-03 13:50:19 -08:00 |
| registry | fix: groq now depends on litellm | 2025-02-27 14:07:12 -08:00 |
| remote | feat: add a configurable category-based logger (#1352) | 2025-03-02 18:51:14 -08:00 |
| tests | refactor: move generation.py to llama3 | 2025-03-03 13:50:19 -08:00 |
| utils | feat: better using get_default_tool_prompt_format (#1360) | 2025-03-03 14:50:06 -08:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |