Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-28)
Handle the recursive nature of structured `response_format`s. Update the test to include one nested model.

```
LLAMA_STACK_CONFIG=dev pytest -s -v tests/client-sdk/inference/test_text_inference.py --inference-model "openai/gpt-4o-mini" -k test_text_chat_completion_structured_output
```

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
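The recursion presumably comes from nested Pydantic models: when a model used as a structured-output schema contains another model, the generated JSON schema carries `$defs`/`$ref` entries instead of a flat set of properties, and the provider adapter has to follow those references. A minimal sketch of that behavior, using illustrative model names rather than the actual test fixtures from this change:

```python
from pydantic import BaseModel


class Address(BaseModel):
    street: str
    city: str


class Person(BaseModel):
    # Nested model: this is what makes the emitted schema recursive.
    name: str
    address: Address


# Pydantic v2 places nested models under "$defs" and points at them with
# "$ref"; code that builds a structured response_format has to resolve these
# references rather than assume a single flat object schema.
schema = Person.model_json_schema()
print(schema["$defs"].keys())           # dict_keys(['Address'])
print(schema["properties"]["address"])  # {'$ref': '#/$defs/Address'}
```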
Files in this directory:

- `__init__.py`
- `embedding_mixin.py`
- `litellm_openai_mixin.py`
- `model_registry.py`
- `openai_compat.py`
- `prompt_adapter.py`