llama-stack-mirror/llama_stack/providers/remote/inference/passthrough
yyymeta fb418813fc
fix: passthrough impl response.content.text (#1665)
# What does this PR do?
The current passthrough implementation returns chatcompletion_message.content as a
TextItem(), not a plain string, so it is not compatible with other
providers and causes parsing errors downstream.

Change away from the generic pydantic conversion and explicitly parse
out content.text.
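
The unwrapping described above can be sketched roughly as follows. This is a minimal illustration, not the provider's actual code: `TextItem` here is a stdlib dataclass stand-in for the real llama-stack content type, and `content_to_str` is a hypothetical helper name.

```python
from dataclasses import dataclass


@dataclass
class TextItem:
    # Stand-in for the llama-stack content item type (assumed shape)
    text: str
    type: str = "text"


def content_to_str(content):
    """Unwrap TextItem-style content into a plain string,
    instead of passing the pydantic object straight through."""
    if isinstance(content, TextItem):
        return content.text
    if isinstance(content, list):
        # Content may arrive as a list of items; concatenate the text fields
        return "".join(
            c.text if isinstance(c, TextItem) else str(c) for c in content
        )
    return str(content)
```

With this, downstream consumers always receive a `str`, matching what other inference providers return.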

## Test Plan

Set up a llama server with the passthrough provider:

```
llama-stack-client eval run-benchmark "MMMU_Pro_standard"   --model-id    meta-llama/Llama-3-8B   --output-dir /tmp/   --num-examples 20
```
Works without parsing errors.
2025-03-17 13:42:08 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | feat: inference passthrough provider (#1166) | 2025-02-19 21:47:00 -08:00 |
| config.py | feat: inference passthrough provider (#1166) | 2025-02-19 21:47:00 -08:00 |
| passthrough.py | fix: passthrough impl response.content.text (#1665) | 2025-03-17 13:42:08 -07:00 |