
# Exception Mapping

LiteLLM maps exceptions across all providers to their OpenAI counterparts.

| Status Code | Error Type                   |
|-------------|------------------------------|
| 400         | BadRequestError              |
| 401         | AuthenticationError          |
| 403         | PermissionDeniedError        |
| 404         | NotFoundError                |
| 422         | UnprocessableEntityError     |
| 429         | RateLimitError               |
| >=500       | InternalServerError          |
| N/A         | ContextWindowExceededError   |
| N/A         | APIConnectionError           |

In the base case, we return an `APIConnectionError`.
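
For example, here is a minimal sketch of routing a few of the mapped types, assuming the exception classes are importable from `litellm.exceptions` (the prompt and handler bodies are placeholders):

```python
import litellm
from litellm.exceptions import (
    ContextWindowExceededError,
    RateLimitError,
    APIConnectionError,
)

try:
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hello"}],
    )
except ContextWindowExceededError as e:
    # prompt + completion exceeded the model's context window
    print(f"Context window exceeded on {e.llm_provider}: {e.message}")
except RateLimitError as e:
    # provider returned a 429
    print(f"Rate limited (status {e.status_code}), back off and retry")
except APIConnectionError as e:
    # base case when nothing more specific applies
    print(f"Connection error: {e}")
```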

All our exceptions inherit from OpenAI's exception types, so any error handling you have for OpenAI exceptions should work out of the box with LiteLLM.

For all cases, the exception returned inherits from the original OpenAI exception but contains three additional attributes:

- `status_code` - the HTTP status code of the exception
- `message` - the error message
- `llm_provider` - the provider raising the exception
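
As a quick illustration of those attributes, here is a minimal sketch that forces an `AuthenticationError` by passing a deliberately invalid placeholder `api_key`:

```python
import openai
import litellm

try:
    litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hello"}],
        api_key="sk-invalid",  # placeholder key, used here only to force an auth failure
    )
except openai.AuthenticationError as e:
    print(e.status_code)   # 401
    print(e.message)       # the provider's error message
    print(e.llm_provider)  # "openai"
```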

## Usage

```python
import litellm
import openai

try:
    response = litellm.completion(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": "hello, write a 20 page essay"
            }
        ],
        timeout=0.01, # this will raise a timeout exception
    )
except openai.APITimeoutError as e:
    print("Passed: Raised correct exception. Got openai.APITimeoutError\nGood Job", e)
    print(type(e))
    pass
```

## Usage - Catching Streaming Exceptions

```python
import litellm
import openai

try:
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "hello, write a 20 page essay"
            }
        ],
        timeout=0.0001, # this will raise an exception
        stream=True,
    )
    for chunk in response:
        print(chunk)
except openai.APITimeoutError as e:
    print("Passed: Raised correct exception. Got openai.APITimeoutError\nGood Job", e)
    print(type(e))
    pass
except Exception as e:
    print(f"Did not raise error `openai.APITimeoutError`. Instead raised error type: {type(e)}, Error: {e}")
```

## Details

To see how it's implemented, check out the code.

Create an issue or make a PR if you want to improve the exception mapping.

**Note:** For OpenAI and Azure we return the original exception (since they're already of the OpenAI error type), but we add the `llm_provider` attribute to them. See code.

## Custom mapping list

In the base case, we return the original exception.

Exception types covered by the custom mapping: `ContextWindowExceededError`, `AuthenticationError`, `InvalidRequestError`, `RateLimitError`, `ServiceUnavailableError`.

Providers with custom mappings:

- Anthropic
- OpenAI
- Azure OpenAI
- Replicate
- Cohere
- Huggingface
- Openrouter
- AI21
- VertexAI
- Bedrock
- Sagemaker
- TogetherAI
- AlephAlpha

For a deeper understanding of these exceptions, check out this implementation.

`ContextWindowExceededError` is a sub-class of `InvalidRequestError`. It was introduced to provide more granularity for exception-handling scenarios. Please refer to this issue to learn more.
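
As a hedged sketch of what that hierarchy enables, assuming both classes are importable from `litellm.exceptions` and using a deliberately oversized placeholder prompt:

```python
import litellm
from litellm.exceptions import ContextWindowExceededError, InvalidRequestError

# placeholder prompt intended to overflow the model's context window
huge_messages = [{"role": "user", "content": "word " * 100_000}]

try:
    litellm.completion(model="gpt-3.5-turbo", messages=huge_messages)
except ContextWindowExceededError:
    # handle the specific case first, e.g. truncate the prompt and retry
    print("Prompt too long, truncating")
except InvalidRequestError as e:
    # any other bad request still lands here, since the
    # context-window error is a sub-class of InvalidRequestError
    print(f"Bad request: {e.message}")
```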

Contributions to improve exception mapping are welcome.