(azure): Enable stream_options for Azure OpenAI. (#6024)

* LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023)
* feat(together_ai/completion): handle Together AI completion calls
* fix: handle list of int / list of list of int for text completion calls
* fix(utils.py): check if the base model is in the Bedrock Converse model list
  Fixes https://github.com/BerriAI/litellm/issues/6003
* test(test_optional_params.py): add unit tests for Bedrock optional param mapping
  Fixes https://github.com/BerriAI/litellm/issues/6003
* feat(utils.py): enable passing a dummy tool call for Anthropic/Bedrock calls if tool_use blocks exist
  Fixes https://github.com/BerriAI/litellm/issues/5388
* fix: an issue with tool use of Claude models with Anthropic and Bedrock (#6013)
* fix(utils.py): handle empty schema for Anthropic/Bedrock
  Fixes https://github.com/BerriAI/litellm/issues/6012
* fix: fix linting errors
* fix(proxy_cli.py): fix import route for app + health checks path (#6026)
* (testing): Enable testing us.anthropic.claude-3-haiku-20240307-v1:0. (#6018)
* fix(proxy_cli.py): fix import route for app + health checks gettsburg.wav
  Fixes https://github.com/BerriAI/litellm/issues/5999

Co-authored-by: Ved Patwardhan <54766411+vedpatwardhan@users.noreply.github.com>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
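For context, here is a minimal sketch of what this change enables from the caller's side. It assumes the usual AZURE_API_KEY / AZURE_API_BASE / AZURE_API_VERSION environment variables are configured; the deployment name "my-gpt-4o" is a placeholder:

```python
import litellm

# Stream a completion from an Azure OpenAI deployment. With this commit,
# stream_options is forwarded for azure/ models instead of being dropped.
response = litellm.completion(
    model="azure/my-gpt-4o",  # placeholder deployment name
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
    # OpenAI-compatible option: request a final chunk carrying token usage.
    stream_options={"include_usage": True},
)

for chunk in response:
    print(chunk)
```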
commit f8d9be1301
parent 74647a5227
1 changed file with 3 additions and 0 deletions
@@ -135,6 +135,7 @@ class AzureOpenAIConfig:
             "temperature",
             "n",
             "stream",
+            "stream_options",
             "stop",
             "max_tokens",
             "max_completion_tokens",
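The list above is AzureOpenAIConfig's inventory of OpenAI parameters the Azure provider accepts; adding "stream_options" here is what stops the parameter from being filtered out before the request is built. A hedged sanity check, assuming litellm's public get_supported_openai_params helper consults this config for the azure provider:

```python
from litellm import get_supported_openai_params

# After this change, "stream_options" should be reported as supported
# for the azure provider.
params = get_supported_openai_params(model="gpt-4o", custom_llm_provider="azure")
print("stream_options" in params)  # expected: True
```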
@@ -938,6 +939,7 @@ class AzureChatCompletion(BaseLLM):
                 model=model,
                 custom_llm_provider="azure",
                 logging_obj=logging_obj,
+                stream_options=data.get("stream_options", None),
                 _response_headers=process_azure_headers(headers),
             )
             return streamwrapper
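This hunk (and its async twin below) forwards the caller's stream_options into the stream wrapper via data.get, so requests that omit the field keep working rather than raising KeyError. A toy sketch of the pattern, not litellm's actual CustomStreamWrapper:

```python
from typing import Any, Iterator, Optional


class ToyStreamWrapper:
    """Hypothetical stand-in for a streaming wrapper that honors stream_options."""

    def __init__(self, stream: Iterator[Any], stream_options: Optional[dict] = None):
        self.stream = stream
        # A real wrapper would emit a final usage chunk when the caller
        # passed stream_options={"include_usage": True}.
        self.stream_options = stream_options or {}

    def __iter__(self) -> Iterator[Any]:
        return iter(self.stream)


data = {"stream": True}  # request body; "stream_options" may be absent
wrapper = ToyStreamWrapper(iter(["a", "b"]), stream_options=data.get("stream_options", None))
```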
@@ -1006,6 +1008,7 @@ class AzureChatCompletion(BaseLLM):
                 model=model,
                 custom_llm_provider="azure",
                 logging_obj=logging_obj,
+                stream_options=data.get("stream_options", None),
                 _response_headers=headers,
             )
             return streamwrapper  ## DO NOT make this into an async for ... loop, it will yield an async generator, which won't raise errors if the response fails
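The ## DO NOT comment in the last hunk is worth unpacking: rewriting the return as an `async for ... yield` loop would turn the enclosing function into an async generator, whose body (including the code that can fail) runs only when the caller iterates, not when the function is awaited, so request errors surface late or not at all. A small self-contained demonstration of that Python behavior:

```python
import asyncio


async def failing_call():
    # Stands in for a streaming request that errors out.
    raise RuntimeError("response failed")


async def returns_directly():
    # Pattern used in the diff: the error raises as soon as this is awaited.
    return await failing_call()


async def yields_instead():
    # The discouraged rewrite: this is an async generator, so calling it
    # executes none of the body; the error is deferred until iteration.
    yield await failing_call()


async def main():
    try:
        await returns_directly()
    except RuntimeError as e:
        print("caught at await:", e)

    gen = yields_instead()  # no exception here: just creates the generator
    try:
        await gen.__anext__()  # the error only appears on first iteration
    except RuntimeError as e:
        print("caught at first iteration:", e)


asyncio.run(main())
```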