`ChatCompletionResponseEventType: start` is ignored and not yielded in `agent_instance`, since we expect it to carry no content. However, litellm sends its first event as `ChatCompletionResponseEventType: start` *with* content: that content is the first token, which we were therefore dropping.

```
LLAMA_STACK_CONFIG=dev pytest -s -v tests/client-sdk/agents/test_agents.py --inference-model "openai/gpt-4o-mini" -k test_agent_simple
```

This test was failing before the fix, since the word "hello" was not present in the final response.
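The fix amounts to checking whether a `start` event actually carries a delta before discarding it. Below is a minimal sketch of that pattern; the event and delta types here are simplified stand-ins, not the actual llama-stack classes:

```python
from dataclasses import dataclass
from enum import Enum
from typing import AsyncIterator


# Simplified stand-ins for the streaming event types (assumption: the
# real llama-stack types carry richer deltas, but the shape is similar).
class ChatCompletionResponseEventType(Enum):
    start = "start"
    progress = "progress"
    complete = "complete"


@dataclass
class ChatCompletionResponseEvent:
    event_type: ChatCompletionResponseEventType
    delta: str = ""  # text carried by this event, possibly empty


async def stream_text(
    events: AsyncIterator[ChatCompletionResponseEvent],
) -> AsyncIterator[str]:
    async for event in events:
        if event.event_type == ChatCompletionResponseEventType.start:
            # Previously, start events were skipped unconditionally, which
            # silently dropped the first token when litellm attached
            # content to the start event.
            if event.delta:
                yield event.delta
            continue
        if event.event_type == ChatCompletionResponseEventType.progress:
            yield event.delta
```

An alternative would be to normalize the litellm stream so the first token arrives on a `progress` event instead; handling it at the consumer, as sketched above, leaves the adapter untouched.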