llama-stack/llama_stack/providers/remote/inference
yyymeta · fb418813fc · 2025-03-17 13:42:08 -07:00
fix: passthrough impl response.content.text (#1665)
# What does this PR do?
The current passthrough impl returns `chatcompletion_message.content` as a
`TextItem()` rather than a plain string, so it is incompatible with other
providers and causes parsing errors downstream.
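
Roughly, the mismatch looks like this (illustrative content values; the `TextItem` is shown by its JSON shape):

```python
# Illustrative only: downstream parsers expect a plain string...
expected_content = "Paris is the capital of France."
# ...but the passthrough impl returned a structured TextItem, i.e. roughly:
actual_content = {"type": "text", "text": "Paris is the capital of France."}
```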

Move away from the generic Pydantic conversion and explicitly parse out
`content.text`.
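
A minimal sketch of what that explicit parsing could look like, assuming a Pydantic `TextItem` with a `text` field (the helper name is illustrative, not the provider's actual code):

```python
from pydantic import BaseModel


class TextItem(BaseModel):
    """Structured content item as returned by the passthrough server."""
    type: str = "text"
    text: str


def content_to_str(content) -> str:
    """Flatten str | TextItem | list of TextItems into a plain string,
    matching what other inference providers return."""
    if isinstance(content, str):
        return content
    if isinstance(content, TextItem):
        return content.text
    if isinstance(content, list):
        return "".join(content_to_str(item) for item in content)
    raise TypeError(f"unexpected content type: {type(content)!r}")
```

With something like this in place, the message content is always a plain string regardless of how the passthrough server structured it.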

## Test Plan

Set up a Llama Stack server with the passthrough provider, then run:

```
llama-stack-client eval run-benchmark "MMMU_Pro_standard" \
  --model-id meta-llama/Llama-3-8B \
  --output-dir /tmp/ \
  --num-examples 20
```
The benchmark completes without the parsing error.
| Name | Last commit | Date |
| --- | --- | --- |
| anthropic | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| bedrock | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| cerebras | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| databricks | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| fireworks | fix: remove Llama-3.2-1B-Instruct for fireworks (#1558) | 2025-03-11 11:19:29 -07:00 |
| gemini | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| groq | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| nvidia | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| ollama | feat(logging): implement category-based logging (#1362) | 2025-03-07 11:34:30 -08:00 |
| openai | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| passthrough | fix: passthrough impl response.content.text (#1665) | 2025-03-17 13:42:08 -07:00 |
| runpod | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| sambanova | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| tgi | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| together | feat: Add open benchmark template codegen (#1579) | 2025-03-12 11:12:08 -07:00 |
| vllm | fix: Swap to AsyncOpenAI client in remote vllm provider (#1459) | 2025-03-07 14:48:00 -05:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |