llama-stack-mirror/llama_stack/providers

Commit 3511af7c33 by Hardik Shah
fix: fireworks provider for openai compat inference endpoint (#2335)

Fixes the provider to use the `stream` variable correctly.
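
The bug class this commit fixes is a provider that ignores or mishandles the caller's `stream` flag when forwarding an OpenAI-compatible chat completion request, so non-streaming callers get an internal server error. The sketch below is illustrative only, with hypothetical names (`chat_completion`, `FakeClient`) that are not the actual Llama Stack code:

```python
def chat_completion(client, model: str, messages: list, stream: bool = False):
    # The fix: forward the caller's `stream` flag to the backend client
    # instead of hard-coding it. A buggy provider that always requests
    # (or always assumes) streaming breaks non-streaming callers.
    response = client.create(model=model, messages=messages, stream=stream)
    if stream:
        # Streaming path: hand back the iterator of chunks as-is.
        return response
    # Non-streaming path: a single completed response object.
    return response


class FakeClient:
    """Stand-in backend that records the kwargs it was called with."""

    def __init__(self):
        self.last_kwargs = None

    def create(self, **kwargs):
        self.last_kwargs = kwargs
        if kwargs["stream"]:
            return iter(["chunk"])  # streaming: iterator of chunks
        return {"object": "chat.completion"}  # non-streaming: one object


client = FakeClient()
result = chat_completion(
    client, "some-model", [{"role": "user", "content": "Who are you?"}]
)
print(client.last_kwargs["stream"])  # False: the flag was forwarded, not hard-coded
```

With the flag forwarded, the non-streaming request in the "Before" transcript below returns a normal completion instead of a 500.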

Before the fix:
```
curl --request POST \
    --url http://localhost:8321/v1/openai/v1/chat/completions \
    --header 'content-type: application/json' \
    --data '{
      "model": "meta-llama/Llama-4-Scout-17B-16E-Instruct",
      "messages": [
        {
          "role": "user",
          "content": "Who are you?"
        }
      ]
    }'
{"detail":"Internal server error: An unexpected error occurred."}
```

After the fix:
```
curl --request POST \
    --url http://localhost:8321/v1/openai/v1/chat/completions \
    --header 'content-type: application/json' \
    --data '{
      "model": "accounts/fireworks/models/llama4-scout-instruct-basic",
      "messages": [
        {
          "role": "user",
          "content": "Who are you?"
        }
      ]
    }'
{
  "id": "chatcmpl-97978538-271d-4c73-8d4d-c509bfb6c87e",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "I'm an AI assistant designed by Meta. I'm here to answer your questions, share interesting ideas and maybe even surprise you with a fresh perspective. What's on your mind?",
        "name": null,
        "tool_calls": null
      },
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null
    }
  ],
  "object": "chat.completion",
  "created": 1748896403,
  "model": "accounts/fireworks/models/llama4-scout-instruct-basic"
}
```
Committed: 2025-06-02 14:11:15 -07:00
Directory contents of `llama_stack/providers`:

| Entry | Last commit | Date |
|---|---|---|
| `inline` | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| `registry` | chore: remove usage of load_tiktoken_bpe (#2276) | 2025-06-02 07:33:37 -07:00 |
| `remote` | fix: fireworks provider for openai compat inference endpoint (#2335) | 2025-06-02 14:11:15 -07:00 |
| `utils` | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | fix(tools): do not index tools, only index toolgroups (#2261) | 2025-05-25 13:27:52 -07:00 |