Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-06-27 18:50:41 +00:00
fixes provider to use stream var correctly

Before:

```
curl --request POST \
  --url http://localhost:8321/v1/openai/v1/chat/completions \
  --header 'content-type: application/json' \
  --data '{
    "model": "meta-llama/Llama-4-Scout-17B-16E-Instruct",
    "messages": [
      { "role": "user", "content": "Who are you?" }
    ]
  }'

{"detail":"Internal server error: An unexpected error occurred."}
```

After:

```
llama-stack % curl --request POST \
  --url http://localhost:8321/v1/openai/v1/chat/completions \
  --header 'content-type: application/json' \
  --data '{
    "model": "accounts/fireworks/models/llama4-scout-instruct-basic",
    "messages": [
      { "role": "user", "content": "Who are you?" }
    ]
  }'

{"id":"chatcmpl-97978538-271d-4c73-8d4d-c509bfb6c87e","choices":[{"message":{"role":"assistant","content":"I'm an AI assistant designed by Meta. I'm here to answer your questions, share interesting ideas and maybe even surprise you with a fresh perspective. What's on your mind?","name":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"object":"chat.completion","created":1748896403,"model":"accounts/fireworks/models/llama4-scout-instruct-basic"}%
```
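For context, here is a minimal sketch of the bug class the title names: a provider method that must branch on the caller's `stream` flag instead of always taking one code path. This is an illustrative assumption, not the actual llama-stack provider code; `ProviderAdapter`, `chat_completion`, and `_client` are made-up names, and the client is assumed to follow the async OpenAI-compatible SDK shape.

```python
# Hypothetical sketch only; names are illustrative, not llama-stack code.
from typing import Any, AsyncIterator


class ProviderAdapter:
    def __init__(self, client: Any) -> None:
        # Assumed: an OpenAI-compatible async SDK client (e.g. AsyncOpenAI).
        self._client = client

    async def chat_completion(self, request: dict, stream: bool = False):
        # The fix described by the title: honor the caller's `stream` flag.
        if stream:
            # Streaming callers receive an async iterator of chunks.
            return self._stream_chunks(request)
        # Non-streaming callers receive one completed response object.
        return await self._client.chat.completions.create(**request, stream=False)

    async def _stream_chunks(self, request: dict) -> AsyncIterator[Any]:
        response = await self._client.chat.completions.create(**request, stream=True)
        async for chunk in response:
            yield chunk
```

The "Before" failure above is consistent with a non-streaming request being pushed down a path that expects the other response shape, which would surface as a generic 500 rather than a completed chat response.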
Top-level repository contents:

- apis
- cli
- distribution
- models
- providers
- strong_typing
- templates
- ui
- __init__.py
- env.py
- log.py
- schema_utils.py