llama-stack-mirror/llama_stack/providers/utils/inference
Derek Higgins 6fe64ee169 Including tool call in chat
Include the tool call details with the chat when doing
RAG with remote vLLM (see the sketch below).

Fixes: #1929

Signed-off-by: Derek Higgins <derekh@redhat.com>
2025-04-22 15:20:33 +01:00
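As a rough illustration of what "including the tool call in the chat" means for an OpenAI-compatible backend such as remote vLLM, the sketch below replays the assistant's tool_calls entry together with the corresponding tool result in the message history of the follow-up request. This is a minimal, hypothetical example: the endpoint URL, model name, and `knowledge_search` tool schema are placeholders, not code from this repository or from openai_compat.py.

```python
# Hypothetical sketch: sending a prior tool call and its result back to an
# OpenAI-compatible endpoint (e.g. a remote vLLM server). The URL, model
# name, and tool schema are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "knowledge_search",
        "description": "Search the RAG store for relevant documents.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [
    {"role": "user", "content": "What does the deployment guide say about GPUs?"},
    # The assistant's tool call is kept in the chat history ...
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "knowledge_search",
                "arguments": '{"query": "GPU requirements"}',
            },
        }],
    },
    # ... alongside the tool's result, so the model sees the full exchange.
    {
        "role": "tool",
        "tool_call_id": "call_1",
        "content": "Deployment guide: at least one 24 GB GPU is recommended.",
    },
]

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=messages,
    tools=tools,
)
print(response.choices[0].message.content)
```

Omitting the assistant/tool message pair would leave the model without the retrieved context, which is the failure mode the commit above addresses for the RAG-with-remote-vLLM path.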
__init__.py refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
embedding_mixin.py fix: don't assume SentenceTransformer is imported 2025-02-25 16:53:01 -08:00
litellm_openai_mixin.py fix: 100% OpenAI API verification for together and fireworks (#1946) 2025-04-14 08:56:29 -07:00
model_registry.py test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
openai_compat.py Including tool call in chat 2025-04-22 15:20:33 +01:00
prompt_adapter.py refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00