llama-stack-mirror/llama_stack/providers/utils/inference
ehhuang c23a7af5d6
fix: agents with non-llama model (#1550)
# Summary:
Includes fixes to get test_agents working with an OpenAI model, e.g. tool
parsing and message conversion.
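
For context, the message-conversion side of this means mapping agent messages (including tool calls and tool results) onto the OpenAI chat-completions schema. The sketch below is a minimal illustration with stand-in types; `Message`, `ToolCall`, and `to_openai_messages` are hypothetical names, not the actual helpers in `litellm_openai_mixin.py` or `openai_compat.py`.

```python
# Hypothetical sketch: map internal agent messages onto the OpenAI chat format.
# The dataclasses below are stand-ins, not llama-stack's real message types.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    call_id: str
    tool_name: str
    arguments: str  # JSON-encoded arguments string


@dataclass
class Message:
    role: str                        # "user", "assistant", or "tool"
    content: str = ""
    tool_calls: list[ToolCall] = field(default_factory=list)
    call_id: str | None = None       # set when role == "tool"


def to_openai_messages(messages: list[Message]) -> list[dict]:
    """Convert internal messages to OpenAI chat-completions message dicts."""
    out: list[dict] = []
    for m in messages:
        if m.role == "tool":
            # Tool results become role="tool" entries keyed by tool_call_id.
            out.append({"role": "tool", "tool_call_id": m.call_id, "content": m.content})
        elif m.role == "assistant" and m.tool_calls:
            # Assistant tool invocations go into the structured tool_calls field.
            out.append({
                "role": "assistant",
                "content": m.content or None,
                "tool_calls": [
                    {
                        "id": tc.call_id,
                        "type": "function",
                        "function": {"name": tc.tool_name, "arguments": tc.arguments},
                    }
                    for tc in m.tool_calls
                ],
            })
        else:
            out.append({"role": m.role, "content": m.content})
    return out
```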

# Test Plan:
```
LLAMA_STACK_CONFIG=dev pytest -s -v tests/integration/agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B --text-model openai/gpt-4o-mini
```

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/meta-llama/llama-stack/pull/1550).
* #1556
* __->__ #1550
Committed: 2025-03-17 22:11:06 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| `embedding_mixin.py` | fix: dont assume SentenceTransformer is imported | 2025-02-25 16:53:01 -08:00 |
| `litellm_openai_mixin.py` | fix: agents with non-llama model (#1550) | 2025-03-17 22:11:06 -07:00 |
| `model_registry.py` | feat(providers): support non-llama models for inference providers (#1200) | 2025-02-21 13:21:28 -08:00 |
| `openai_compat.py` | fix: agents with non-llama model (#1550) | 2025-03-17 22:11:06 -07:00 |
| `prompt_adapter.py` | feat(logging): implement category-based logging (#1362) | 2025-03-07 11:34:30 -08:00 |
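
As context for the `openai_compat.py` entry above, one plausible reading of the tool-parsing side of #1550 is extracting structured `tool_calls` from an OpenAI-style response message rather than Llama's text-based tool syntax. The sketch below is a hypothetical illustration; `parse_openai_tool_calls` and its return shape are assumptions, not the actual helpers in that file.

```python
import json


def parse_openai_tool_calls(response_message: dict) -> list[dict]:
    """Extract tool invocations from an OpenAI-style assistant message.

    `response_message` is assumed to be the `choices[0]["message"]` dict of a
    chat-completions response; the returned dicts are illustrative only.
    """
    calls: list[dict] = []
    for tc in response_message.get("tool_calls") or []:
        fn = tc.get("function", {})
        try:
            arguments = json.loads(fn.get("arguments") or "{}")
        except json.JSONDecodeError:
            # Some non-llama models emit malformed argument JSON; keep the raw string.
            arguments = fn.get("arguments")
        calls.append(
            {
                "call_id": tc.get("id"),
                "tool_name": fn.get("name"),
                "arguments": arguments,
            }
        )
    return calls
```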