# What does this PR do?

The current passthrough implementation returns `chatcompletion_message.content` as a `TextItem()` rather than a plain string, which is incompatible with other providers and causes parsing errors downstream. This change moves away from the generic pydantic conversion and explicitly parses out `content.text`.

## Test Plan

Set up a llama server with passthrough, then run:

```
llama-stack-client eval run-benchmark "MMMU_Pro_standard" --model-id meta-llama/Llama-3-8B --output-dir /tmp/ --num-examples 20
```

The benchmark completes without parsing errors.
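A minimal sketch of the kind of explicit conversion described above. The `TextItem` model and `normalize_content` helper here are simplified stand-ins for illustration, not the provider's actual classes:

```python
from typing import Union

from pydantic import BaseModel


class TextItem(BaseModel):
    """Simplified stand-in for the text content item returned by passthrough."""

    type: str = "text"
    text: str


def normalize_content(content: Union[str, TextItem]) -> str:
    """Hypothetical helper: normalize message content to a plain string.

    A generic pydantic conversion would leave `content` as a TextItem;
    pulling out `.text` explicitly keeps downstream parsers that expect
    a string (as other providers return) working.
    """
    if isinstance(content, TextItem):
        return content.text
    return content
```

The point of the change is that downstream consumers never see the intermediate pydantic object, only the string form that the other providers already produce.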