llama-stack/llama_stack/providers/utils/inference
Latest commit f5c36c47ed by varunfb: Added support for llama 3.3 model (#601)
# What does this PR do?

Llama-Stack did not support the Llama 3.3 model, so this PR adds that support so
llama-stack can run inference with the 3.3 model.
2024-12-10 20:03:31 -08:00
__init__.py Added support for llama 3.3 model (#601) 2024-12-10 20:03:31 -08:00
model_registry.py add NVIDIA NIM inference adapter (#355) 2024-11-23 15:59:00 -08:00
openai_compat.py Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) 2024-11-05 16:22:33 -08:00
prompt_adapter.py use logging instead of prints (#499) 2024-11-21 11:32:53 -08:00