Fixes the provider to use the `stream` var correctly.
Before
```
curl --request POST \
  --url http://localhost:8321/v1/openai/v1/chat/completions \
  --header 'content-type: application/json' \
  --data '{
    "model": "meta-llama/Llama-4-Scout-17B-16E-Instruct",
    "messages": [
      {
        "role": "user",
        "content": "Who are you?"
      }
    ]
  }'
{"detail":"Internal server error: An unexpected error occurred."}
```
After
```
curl --request POST \
  --url http://localhost:8321/v1/openai/v1/chat/completions \
  --header 'content-type: application/json' \
  --data '{
    "model": "accounts/fireworks/models/llama4-scout-instruct-basic",
    "messages": [
      {
        "role": "user",
        "content": "Who are you?"
      }
    ]
  }'
{"id":"chatcmpl-97978538-271d-4c73-8d4d-c509bfb6c87e","choices":[{"message":{"role":"assistant","content":"I'm an AI assistant designed by Meta. I'm here to answer your questions, share interesting ideas and maybe even surprise you with a fresh perspective. What's on your mind?","name":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"object":"chat.completion","created":1748896403,"model":"accounts/fireworks/models/llama4-scout-instruct-basic"}
```
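For context, the failure mode here is the provider accepting the caller's `stream` flag but not forwarding it to the upstream client, so the handler gets back a response object of the wrong shape and the server surfaces the generic internal error shown above. Below is a minimal sketch of the corrected pattern, assuming an OpenAI-compatible provider built on the `openai` Python client; the function and parameter names are illustrative, not the actual diff.

```python
# Minimal sketch (assumed names): forward the caller's `stream` flag to the
# upstream client instead of dropping or hardcoding it.
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # illustrative endpoint
    api_key="YOUR_API_KEY",
)

async def openai_chat_completion(model: str, messages: list, stream: bool = False):
    # The fix: pass the stream var through. With stream=True the client
    # returns an async iterator of chunks; with stream=False it returns a
    # single completion object, so the flag must reflect the actual request.
    return await client.chat.completions.create(
        model=model,
        messages=messages,
        stream=stream,
    )
```

Because the two return shapes differ, any code downstream of this call has to branch on the flag rather than assume one response type.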