`ChatCompletionResponseEventType: start` events are ignored and not yielded in `agent_instance`, since we expect them to carry no content. However, litellm sends its first event as `ChatCompletionResponseEventType: start` *with* content: the first token, which we were therefore dropping.

To verify, run:

```
LLAMA_STACK_CONFIG=dev pytest -s -v tests/client-sdk/agents/test_agents.py --inference-model "openai/gpt-4o-mini" -k test_agent_simple
```

This test was failing before the fix, since the word "hello" was missing from the final response.
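The gist of the fix, as a minimal sketch: instead of unconditionally skipping `start` events, keep any content they carry. The event and field names below are simplified stand-ins for the real llama-stack types (in particular, `delta` as a plain string is a hypothetical simplification), not the actual implementation:

```python
from dataclasses import dataclass
from enum import Enum


class ChatCompletionResponseEventType(Enum):
    start = "start"
    progress = "progress"
    complete = "complete"


@dataclass
class ChatCompletionResponseEvent:
    event_type: ChatCompletionResponseEventType
    delta: str = ""  # hypothetical field; real events may nest content deeper


def accumulate_content(events) -> str:
    """Collect streamed deltas into the final response text."""
    chunks = []
    for event in events:
        if event.event_type == ChatCompletionResponseEventType.start:
            # Previously the loop skipped `start` unconditionally, assuming
            # it carries no content. litellm puts the first token in the
            # `start` event, so keep whatever content it carries.
            if event.delta:
                chunks.append(event.delta)
        elif event.event_type == ChatCompletionResponseEventType.progress:
            chunks.append(event.delta)
    return "".join(chunks)


# With litellm-style streaming, the first token arrives on the `start` event:
events = [
    ChatCompletionResponseEvent(ChatCompletionResponseEventType.start, "Hello"),
    ChatCompletionResponseEvent(ChatCompletionResponseEventType.progress, ", world!"),
    ChatCompletionResponseEvent(ChatCompletionResponseEventType.complete),
]
assert accumulate_content(events) == "Hello, world!"
```

With the old behavior, the assertion above would fail with `", world!"` as the result, which is exactly why the word "hello" was missing from the agent's final response.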