llama-stack-mirror/llama_stack/providers/utils/inference
ehhuang 14c38acf97
fix: set default tool_prompt_format in inference api (#1214)
Summary:
Currently we don't set the best tool_prompt_format according to the
model, as promised.

Test Plan:
Added print statements around the raw model input and inspected the
output manually.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with
[ReviewStack](https://reviewstack.dev/meta-llama/llama-stack/pull/1214).
* #1234
* __->__ #1214
2025-02-24 12:38:37 -08:00
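The fix in #1214 amounts to picking a sensible default `tool_prompt_format` based on the model when the caller does not supply one. The sketch below illustrates the idea only; the enum values mirror llama-stack's `ToolPromptFormat`, but the helper name and the model-to-format mapping are assumptions for illustration, not the actual implementation in `prompt_adapter.py`.

```python
from enum import Enum


class ToolPromptFormat(Enum):
    # Stand-in for llama-stack's ToolPromptFormat enum (moved into
    # llama-stack from llama-models in #1098).
    json = "json"
    function_tag = "function_tag"
    python_list = "python_list"


def default_tool_prompt_format(model_id: str) -> ToolPromptFormat:
    """Hypothetical helper: choose a default tool prompt format per model.

    Assumed mapping for illustration: Llama 3.2 models are prompted
    with a Python-list tool syntax, while other models fall back to
    JSON-formatted tool calls.
    """
    if "Llama-3.2" in model_id:
        return ToolPromptFormat.python_list
    return ToolPromptFormat.json
```

A caller would only consult this default when the request leaves `tool_prompt_format` unset, so explicit user choices still win.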
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| embedding_mixin.py | Kill noise from test output | 2025-02-21 15:37:23 -08:00 |
| model_registry.py | feat(providers): support non-llama models for inference providers (#1200) | 2025-02-21 13:21:28 -08:00 |
| openai_compat.py | chore: remove llama_models.llama3.api imports from providers (#1107) | 2025-02-19 19:01:29 -08:00 |
| prompt_adapter.py | fix: set default tool_prompt_format in inference api (#1214) | 2025-02-24 12:38:37 -08:00 |