LiteLLM Minor Fixes and Improvements (08/06/2024) (#5567)
* fix(utils.py): return citations for perplexity streaming
  Fixes https://github.com/BerriAI/litellm/issues/5535
* fix(anthropic/chat.py): support fallbacks for anthropic streaming (#5542)
  * fix(anthropic/chat.py): support fallbacks for anthropic streaming
    Fixes https://github.com/BerriAI/litellm/issues/5512
  * fix(anthropic/chat.py): use module-level http client if none given (prevents early client closure)
  * fix: fix linting errors
  * fix(http_handler.py): fix raise_for_status error handling
  * test: retry flaky test
  * fix: fix otel type
  * fix(bedrock/embed): fix error raising
  * test(test_openai_batches_and_files.py): skip azure batches test (for now) - quota exceeded
  * fix(test_router.py): skip azure batch route test (for now) - hit batch quota limits
* All `model_group_alias` entries should show up in `/models`, `/model/info`, and `/model_group/info` (#5539)
  * fix(router.py): support returning model alias names in `/v1/models`
  * fix(proxy_server.py): support returning model aliases on `/model/info`
  * feat(router.py): support returning model group aliases for `/model_group/info`
  * fix(proxy_server.py): fix linting errors
* build(model_prices_and_context_window.json): add amazon titan text premier pricing information
  Closes https://github.com/BerriAI/litellm/issues/5560
* feat(litellm_logging.py): log the standard logging response object for pass-through endpoints, so bedrock /invoke agent calls are correctly logged to langfuse + s3
* fix(success_handler.py): fix linting errors
* fix(team_endpoints.py): allow admins to update team member budgets

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
parent e4dcd6f745
commit 72e961af3c

25 changed files with 509 additions and 99 deletions
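Of the changes above, the `model_group_alias` work (#5539) is the most user-visible. As a rough sketch of the behavior the commit message describes (not code from this commit; the model names and key are placeholders), an alias configured on the Router should now be listed by the proxy's model-discovery endpoints:

```python
# Minimal sketch (assumed usage, not from this diff): register an alias via
# the Router's model_group_alias parameter. After #5539, the alias "my-gpt"
# should appear in /models, /model/info, and /model_group/info responses
# alongside the real "gpt-4o" model group.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",  # the real model group
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "sk-..."},
        }
    ],
    model_group_alias={"my-gpt": "gpt-4o"},  # "my-gpt" is a hypothetical alias
)
```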
```diff
@@ -534,6 +534,15 @@ def mock_completion(
             model=model,  # type: ignore
             request=httpx.Request(method="POST", url="https://api.openai.com/v1/"),
         )
+    elif isinstance(mock_response, str) and mock_response.startswith(
+        "Exception: mock_streaming_error"
+    ):
+        mock_response = litellm.MockException(
+            message="This is a mock error raised mid-stream",
+            llm_provider="anthropic",
+            model=model,
+            status_code=529,
+        )
     time_delay = kwargs.get("mock_delay", None)
     if time_delay is not None:
         time.sleep(time_delay)
```
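The context lines at the end of this hunk show the `mock_delay` handling the new branch sits next to: `mock_completion` reads `mock_delay` from kwargs and sleeps before answering. A minimal usage sketch, assuming `mock_delay` is forwarded through `litellm.completion` kwargs (all values are illustrative):

```python
# Sketch of the mock_delay path shown above: the mocked response is delayed
# by the given number of seconds before being returned. Assumes mock_delay
# is passed through completion() kwargs into mock_completion().
import litellm

resp = litellm.completion(
    model="gpt-4o",  # illustrative; no real API call happens on the mock path
    messages=[{"role": "user", "content": "hello"}],
    mock_response="This is a mocked reply",
    mock_delay=1.5,  # seconds to sleep before the mock response is returned
)
print(resp.choices[0].message.content)
```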
```diff
@@ -561,6 +570,8 @@ def mock_completion(
         custom_llm_provider="openai",
         logging_obj=logging,
     )
+    if isinstance(mock_response, litellm.MockException):
+        raise mock_response
     if n is None:
         model_response.choices[0].message.content = mock_response  # type: ignore
     else:
```
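Taken together, the two hunks let a caller force a mock error: the sentinel string converts `mock_response` into a `litellm.MockException`, and the second hunk raises it before any mock content is assigned. A hedged sketch of triggering it (the model name is illustrative, and the attributes read from the exception are assumptions based on the constructor arguments in the first hunk):

```python
# Sketch: trigger the new mock-error path. The sentinel string below is the
# one matched in the first hunk; mock_completion replaces it with a
# litellm.MockException(status_code=529), which the second hunk raises.
import litellm

try:
    litellm.completion(
        model="anthropic/claude-3-haiku-20240307",  # illustrative model name
        messages=[{"role": "user", "content": "hello"}],
        mock_response="Exception: mock_streaming_error",
    )
except litellm.MockException as e:
    # attribute names assumed from the constructor args in the diff
    print(e.status_code, e.message)
```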