# +-----------------------------------------------+
# |                                               |
# |           Give Feedback / Get Help            |
# | https://github.com/BerriAI/litellm/issues/new |
# |                                               |
# +-----------------------------------------------+
#
#  Thank you users! We ❤️ you! - Krrish & Ishaan

## LiteLLM versions of the OpenAI Exception Types

from typing import Optional

import httpx
import openai

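# Most of the exception classes below mirror an `openai` SDK exception: they
# prefix the message with "litellm.<ExceptionName>: ", record the originating
# `llm_provider` / `model` plus optional `litellm_debug_info`, and append
# "LiteLLM Retried" / "LiteLLM Max Retries" details in __str__ / __repr__ when
# retry counts are set.
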
class AuthenticationError(openai.AuthenticationError):  # type: ignore
    def __init__(
        self,
        message,
        llm_provider,
        model,
        response: Optional[httpx.Response] = None,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = 401
        self.message = "litellm.AuthenticationError: {}".format(message)
        self.llm_provider = llm_provider
        self.model = model
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        self.response = response or httpx.Response(
            status_code=self.status_code,
            request=httpx.Request(
                method="GET", url="https://litellm.ai"
            ),  # mock request object
        )
        super().__init__(
            self.message, response=self.response, body=None
        )  # Call the base class constructor with the parameters it needs

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


# raise when invalid models passed, example gpt-8
class NotFoundError(openai.NotFoundError):  # type: ignore
    def __init__(
        self,
        message,
        model,
        llm_provider,
        response: Optional[httpx.Response] = None,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = 404
        self.message = "litellm.NotFoundError: {}".format(message)
        self.model = model
        self.llm_provider = llm_provider
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        self.response = response or httpx.Response(
            status_code=self.status_code,
            request=httpx.Request(
                method="GET", url="https://litellm.ai"
            ),  # mock request object
        )
        super().__init__(
            self.message, response=self.response, body=None
        )  # Call the base class constructor with the parameters it needs

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


class BadRequestError(openai.BadRequestError):  # type: ignore
    def __init__(
        self,
        message,
        model,
        llm_provider,
        response: Optional[httpx.Response] = None,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = 400
        self.message = "litellm.BadRequestError: {}".format(message)
        self.model = model
        self.llm_provider = llm_provider
        self.litellm_debug_info = litellm_debug_info
        response = httpx.Response(
            status_code=self.status_code,
            request=httpx.Request(
                method="GET", url="https://litellm.ai"
            ),  # mock request object
        )
        self.max_retries = max_retries
        self.num_retries = num_retries
        super().__init__(
            self.message, response=response, body=None
        )  # Call the base class constructor with the parameters it needs

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


class UnprocessableEntityError(openai.UnprocessableEntityError):  # type: ignore
    def __init__(
        self,
        message,
        model,
        llm_provider,
        response: httpx.Response,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = 422
        self.message = "litellm.UnprocessableEntityError: {}".format(message)
        self.model = model
        self.llm_provider = llm_provider
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        super().__init__(
            self.message, response=response, body=None
        )  # Call the base class constructor with the parameters it needs

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


class Timeout(openai.APITimeoutError):  # type: ignore
    def __init__(
        self,
        message,
        model,
        llm_provider,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
        headers: Optional[dict] = None,
    ):
        request = httpx.Request(
            method="POST",
            url="https://api.openai.com/v1",
        )
        super().__init__(
            request=request
        )  # Call the base class constructor with the parameters it needs
        self.status_code = 408
        self.message = "litellm.Timeout: {}".format(message)
        self.model = model
        self.llm_provider = llm_provider
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        self.headers = headers

    # custom function to convert to str
    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


class PermissionDeniedError(openai.PermissionDeniedError):  # type:ignore
    def __init__(
        self,
        message,
        llm_provider,
        model,
        response: httpx.Response,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = 403
        self.message = "litellm.PermissionDeniedError: {}".format(message)
        self.llm_provider = llm_provider
        self.model = model
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        super().__init__(
            self.message, response=response, body=None
        )  # Call the base class constructor with the parameters it needs

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


class RateLimitError(openai.RateLimitError):  # type: ignore
    def __init__(
        self,
        message,
        llm_provider,
        model,
        response: Optional[httpx.Response] = None,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = 429
        self.message = "litellm.RateLimitError: {}".format(message)
        self.llm_provider = llm_provider
        self.model = model
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        _response_headers = (
            getattr(response, "headers", None) if response is not None else None
        )
        self.response = httpx.Response(
            status_code=429,
            headers=_response_headers,
            request=httpx.Request(
                method="POST",
                url=" https://cloud.google.com/vertex-ai/",
            ),
        )
        super().__init__(
            self.message, response=self.response, body=None
        )  # Call the base class constructor with the parameters it needs
        self.code = "429"
        self.type = "throttling_error"

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

# sub class of bad request error - meant to give more granularity for error handling context window exceeded errors
class ContextWindowExceededError(BadRequestError):  # type: ignore
    def __init__(
        self,
        message,
        model,
        llm_provider,
        response: Optional[httpx.Response] = None,
        litellm_debug_info: Optional[str] = None,
    ):
        self.status_code = 400
        self.message = "litellm.ContextWindowExceededError: {}".format(message)
        self.model = model
        self.llm_provider = llm_provider
        self.litellm_debug_info = litellm_debug_info
        request = httpx.Request(method="POST", url="https://api.openai.com/v1")
        self.response = httpx.Response(status_code=400, request=request)
        super().__init__(
            message=self.message,
            model=self.model,  # type: ignore
            llm_provider=self.llm_provider,  # type: ignore
            response=self.response,
            litellm_debug_info=self.litellm_debug_info,
        )  # Call the base class constructor with the parameters it needs

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


# sub class of bad request error - meant to help us catch guardrails-related errors on proxy.
class RejectedRequestError(BadRequestError):  # type: ignore
    def __init__(
        self,
        message,
        model,
        llm_provider,
        request_data: dict,
        litellm_debug_info: Optional[str] = None,
    ):
        self.status_code = 400
        self.message = "litellm.RejectedRequestError: {}".format(message)
        self.model = model
        self.llm_provider = llm_provider
        self.litellm_debug_info = litellm_debug_info
        self.request_data = request_data
        request = httpx.Request(method="POST", url="https://api.openai.com/v1")
        response = httpx.Response(status_code=400, request=request)
        super().__init__(
            message=self.message,
            model=self.model,  # type: ignore
            llm_provider=self.llm_provider,  # type: ignore
            response=response,
            litellm_debug_info=self.litellm_debug_info,
        )  # Call the base class constructor with the parameters it needs

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


class ContentPolicyViolationError(BadRequestError):  # type: ignore
    #  Error code: 400 - {'error': {'code': 'content_policy_violation', 'message': 'Your request was rejected as a result of our safety system. Image descriptions generated from your prompt may contain text that is not allowed by our safety system. If you believe this was done in error, your request may succeed if retried, or by adjusting your prompt.', 'param': None, 'type': 'invalid_request_error'}}
    def __init__(
        self,
        message,
        model,
        llm_provider,
        response: Optional[httpx.Response] = None,
        litellm_debug_info: Optional[str] = None,
    ):
        self.status_code = 400
        self.message = "litellm.ContentPolicyViolationError: {}".format(message)
        self.model = model
        self.llm_provider = llm_provider
        self.litellm_debug_info = litellm_debug_info
        request = httpx.Request(method="POST", url="https://api.openai.com/v1")
        self.response = httpx.Response(status_code=400, request=request)
        super().__init__(
            message=self.message,
            model=self.model,  # type: ignore
            llm_provider=self.llm_provider,  # type: ignore
            response=self.response,
            litellm_debug_info=self.litellm_debug_info,
        )  # Call the base class constructor with the parameters it needs

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


class ServiceUnavailableError(openai.APIStatusError):  # type: ignore
    def __init__(
        self,
        message,
        llm_provider,
        model,
        response: Optional[httpx.Response] = None,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = 503
        self.message = "litellm.ServiceUnavailableError: {}".format(message)
        self.llm_provider = llm_provider
        self.model = model
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        self.response = httpx.Response(
            status_code=self.status_code,
            request=httpx.Request(
                method="POST",
                url=" https://cloud.google.com/vertex-ai/",
            ),
        )
        super().__init__(
            self.message, response=self.response, body=None
        )  # Call the base class constructor with the parameters it needs

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


class InternalServerError(openai.InternalServerError):  # type: ignore
    def __init__(
        self,
        message,
        llm_provider,
        model,
        response: Optional[httpx.Response] = None,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = 500
        self.message = "litellm.InternalServerError: {}".format(message)
        self.llm_provider = llm_provider
        self.model = model
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        self.response = httpx.Response(
            status_code=self.status_code,
            request=httpx.Request(
                method="POST",
                url=" https://cloud.google.com/vertex-ai/",
            ),
        )
        super().__init__(
            self.message, response=self.response, body=None
        )  # Call the base class constructor with the parameters it needs

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


# raise this when the API returns an invalid response object - https://github.com/openai/openai-python/blob/1be14ee34a0f8e42d3f9aa5451aa4cb161f1781f/openai/api_requestor.py#L401
class APIError(openai.APIError):  # type: ignore
    def __init__(
        self,
        status_code: int,
        message,
        llm_provider,
        model,
        request: Optional[httpx.Request] = None,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = status_code
        self.message = "litellm.APIError: {}".format(message)
        self.llm_provider = llm_provider
        self.model = model
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        if request is None:
            request = httpx.Request(method="POST", url="https://api.openai.com/v1")
        super().__init__(self.message, request=request, body=None)  # type: ignore

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

# raised when a request to the LLM API fails at the connection level (no valid response is received)
class APIConnectionError(openai.APIConnectionError):  # type: ignore
    def __init__(
        self,
        message,
        llm_provider,
        model,
        request: Optional[httpx.Request] = None,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.message = "litellm.APIConnectionError: {}".format(message)
        self.llm_provider = llm_provider
        self.model = model
        self.status_code = 500
        self.litellm_debug_info = litellm_debug_info
        self.request = httpx.Request(method="POST", url="https://api.openai.com/v1")
        self.max_retries = max_retries
        self.num_retries = num_retries
        super().__init__(message=self.message, request=self.request)

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

# raised when the API returns a response that cannot be parsed or validated against the expected format
class APIResponseValidationError(openai.APIResponseValidationError):  # type: ignore
    def __init__(
        self,
        message,
        llm_provider,
        model,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.message = "litellm.APIResponseValidationError: {}".format(message)
        self.llm_provider = llm_provider
        self.model = model
        request = httpx.Request(method="POST", url="https://api.openai.com/v1")
        response = httpx.Response(status_code=500, request=request)
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        super().__init__(response=response, body=None, message=message)

    def __str__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message

    def __repr__(self):
        _message = self.message
        if self.num_retries:
            _message += f" LiteLLM Retried: {self.num_retries} times"
        if self.max_retries:
            _message += f", LiteLLM Max Retries: {self.max_retries}"
        return _message


class JSONSchemaValidationError(APIResponseValidationError):
    def __init__(
        self, model: str, llm_provider: str, raw_response: str, schema: str
    ) -> None:
        self.raw_response = raw_response
        self.schema = schema
        self.model = model
        message = "litellm.JSONSchemaValidationError: model={}, returned an invalid response={}, for schema={}.\nAccess raw response with `e.raw_response`".format(
            model, raw_response, schema
        )
        self.message = message
        super().__init__(model=model, message=message, llm_provider=llm_provider)


class OpenAIError(openai.OpenAIError):  # type: ignore
    def __init__(self, original_exception=None):
        super().__init__()
        self.llm_provider = "openai"


class UnsupportedParamsError(BadRequestError):
    def __init__(
        self,
        message,
        llm_provider: Optional[str] = None,
        model: Optional[str] = None,
        status_code: int = 400,
        response: Optional[httpx.Response] = None,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = 400
        self.message = "litellm.UnsupportedParamsError: {}".format(message)
        self.model = model
        self.llm_provider = llm_provider
        self.litellm_debug_info = litellm_debug_info
        response = response or httpx.Response(
            status_code=self.status_code,
            request=httpx.Request(
                method="GET", url="https://litellm.ai"
            ),  # mock request object
        )
        self.max_retries = max_retries
        self.num_retries = num_retries

LITELLM_EXCEPTION_TYPES = [
    AuthenticationError,
    NotFoundError,
    BadRequestError,
    UnprocessableEntityError,
    UnsupportedParamsError,
    Timeout,
    PermissionDeniedError,
    RateLimitError,
    ContextWindowExceededError,
    RejectedRequestError,
    ContentPolicyViolationError,
    InternalServerError,
    ServiceUnavailableError,
    APIError,
    APIConnectionError,
    APIResponseValidationError,
    OpenAIError,
    JSONSchemaValidationError,
]
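# Illustrative note: this list supports broad checks such as
#     isinstance(err, tuple(LITELLM_EXCEPTION_TYPES))
# to tell LiteLLM-mapped errors apart from unexpected exceptions.

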
class BudgetExceededError(Exception):
    def __init__(
        self, current_cost: float, max_budget: float, message: Optional[str] = None
    ):
        self.current_cost = current_cost
        self.max_budget = max_budget
        message = (
            message
            or f"Budget has been exceeded! Current cost: {current_cost}, Max budget: {max_budget}"
        )
        self.message = message
        super().__init__(message)


## DEPRECATED ##
class InvalidRequestError(openai.BadRequestError):  # type: ignore
    def __init__(self, message, model, llm_provider):
        self.status_code = 400
        self.message = message
        self.model = model
        self.llm_provider = llm_provider
        self.response = httpx.Response(
            status_code=400,
            request=httpx.Request(
                method="GET", url="https://litellm.ai"
            ),  # mock request object
        )
        super().__init__(
            message=self.message, response=self.response, body=None
        )  # Call the base class constructor with the parameters it needs


class MockException(openai.APIError):
    # used for testing
    def __init__(
        self,
        status_code: int,
        message,
        llm_provider,
        model,
        request: Optional[httpx.Request] = None,
        litellm_debug_info: Optional[str] = None,
        max_retries: Optional[int] = None,
        num_retries: Optional[int] = None,
    ):
        self.status_code = status_code
        self.message = "litellm.MockException: {}".format(message)
        self.llm_provider = llm_provider
        self.model = model
        self.litellm_debug_info = litellm_debug_info
        self.max_retries = max_retries
        self.num_retries = num_retries
        if request is None:
            request = httpx.Request(method="POST", url="https://api.openai.com/v1")
        super().__init__(self.message, request=request, body=None)  # type: ignore
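

if __name__ == "__main__":
    # Minimal, illustrative self-check sketch (not part of the library API):
    # it shows the metadata these mapped exceptions carry and how __str__
    # appends retry information. The provider / model strings below are
    # placeholders, not real requests.
    try:
        raise RateLimitError(
            message="quota exceeded",
            llm_provider="openai",
            model="gpt-4o",
            num_retries=2,
            max_retries=3,
        )
    except RateLimitError as e:
        print(e.status_code, e.code, e.type)  # -> 429 429 throttling_error
        print(str(e))  # message + " LiteLLM Retried: 2 times, LiteLLM Max Retries: 3"

    # ContextWindowExceededError stays catchable as a BadRequestError (HTTP 400).
    err = ContextWindowExceededError(
        message="prompt is too long",
        model="gpt-4o",
        llm_provider="openai",
    )
    print(isinstance(err, BadRequestError), err.status_code)  # -> True 400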