fix: fireworks provider for openai compat inference endpoint (#2335)

Fixes the Fireworks provider to pass the `stream` parameter as a proper `bool`, so the OpenAI-compatible chat completions endpoint no longer returns an internal server error.

Before 
```
curl --request POST \
    --url http://localhost:8321/v1/openai/v1/chat/completions \
    --header 'content-type: application/json' \
    --data '{
      "model": "meta-llama/Llama-4-Scout-17B-16E-Instruct",
      "messages": [
        {
          "role": "user",
          "content": "Who are you?"
        }
      ]
    }'
{"detail":"Internal server error: An unexpected error occurred."}
```

After 
```
curl --request POST \
    --url http://localhost:8321/v1/openai/v1/chat/completions \
    --header 'content-type: application/json' \
    --data '{
      "model": "accounts/fireworks/models/llama4-scout-instruct-basic",
      "messages": [
        {
          "role": "user",
          "content": "Who are you?"
        }
      ]
    }'
{"id":"chatcmpl-97978538-271d-4c73-8d4d-c509bfb6c87e","choices":[{"message":{"role":"assistant","content":"I'm an AI assistant designed by Meta. I'm here to answer your questions, share interesting ideas and maybe even surprise you with a fresh perspective. What's on your mind?","name":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"object":"chat.completion","created":1748896403,"model":"accounts/fireworks/models/llama4-scout-instruct-basic"}%
```
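For reference, the same request can also be exercised through the OpenAI Python client pointed at the Llama Stack server. This is a minimal sketch, assuming the server from the example above is running on `localhost:8321` with the Fireworks model registered and the `openai` package installed; the placeholder API key is an assumption, not part of this PR.

```python
# Minimal sketch: exercising the OpenAI-compatible endpoint with the openai client.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # Llama Stack OpenAI-compat route from the example
    api_key="not-needed",  # placeholder; the local server does not validate it here
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/llama4-scout-instruct-basic",
    messages=[{"role": "user", "content": "Who are you?"}],
    stream=False,  # the non-streaming path that previously returned a 500
)
print(response.choices[0].message.content)
```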
Author: Hardik Shah, 2025-06-02 14:11:15 -07:00 (committed by GitHub)
Parent: 7fb4bdabea
Commit: 3511af7c33

```diff
@@ -255,7 +255,7 @@ class FireworksInferenceAdapter(ModelRegistryHelper, Inference, NeedsRequestProv
         params = {
             "model": request.model,
             **input_dict,
-            "stream": request.stream,
+            "stream": bool(request.stream),
             **self._build_options(request.sampling_params, request.response_format, request.logprobs),
         }
         logger.debug(f"params to fireworks: {params}")
```
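The likely failure mode, inferred from the error rather than stated in the diff: on the OpenAI-compat path `request.stream` can arrive as `None` instead of `True`/`False`, and forwarding `None` downstream breaks the Fireworks call. A small illustrative sketch of what the coercion changes (the variable names here are hypothetical):

```python
# Hypothetical illustration: request.stream may be None on the OpenAI-compat path.
request_stream = None

params_before = {"stream": request_stream}        # {"stream": None}  -> downstream error
params_after = {"stream": bool(request_stream)}   # {"stream": False} -> plain non-streaming call

print(params_before, params_after)
```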