# What does this PR do?

Llama-Stack does not currently support the Llama 3.3 model, so this PR adds that support so llama-stack can run inference with the 3.3 model.
Files changed:

- `__init__.py`
- `model_registry.py`
- `openai_compat.py`
- `prompt_adapter.py`
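
To illustrate the kind of change involved, here is a minimal, self-contained sketch of registering a 3.3 alias in a provider model registry. The `MODEL_ALIASES` mapping, `resolve_model` helper, and alias strings are hypothetical illustrations, not the actual llama-stack API touched by this PR.

```python
# Illustrative sketch only -- the mapping and helper below are hypothetical,
# not the llama-stack registry code modified in this PR.
MODEL_ALIASES: dict[str, str] = {
    # ... existing Llama 3.1 / 3.2 aliases would appear here ...
    # New 3.3 entry: map the stack-facing alias to the provider-side model id.
    "Llama3.3-70B-Instruct": "meta-llama/Llama-3.3-70B-Instruct",
}


def resolve_model(requested: str) -> str:
    """Resolve a requested model alias to the provider-side model identifier."""
    try:
        return MODEL_ALIASES[requested]
    except KeyError:
        # Without an entry like the one above, a request for a 3.3 model fails here.
        raise ValueError(f"unsupported model: {requested}") from None
```

In the PR itself, `model_registry.py` presumably carries the real registry entry, while `openai_compat.py` and `prompt_adapter.py` would cover request/response conversion and prompt formatting for the new model.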