diff --git a/docs/my-website/docs/getting_started.md b/docs/my-website/docs/getting_started.md
index 14f097c5d..f0cef0313 100644
--- a/docs/my-website/docs/getting_started.md
+++ b/docs/my-website/docs/getting_started.md
@@ -52,15 +52,15 @@ More details 👉
 LiteLLM maps exceptions across all supported providers to the OpenAI exceptions. All our exceptions inherit from OpenAI's exception types, so any error-handling you have for that, should work out of the box with LiteLLM.
 
 ```python
-from openai.errors import AuthenticationError
+from openai.errors import OpenAIError
 from litellm import completion
 
 os.environ["ANTHROPIC_API_KEY"] = "bad-key"
 try:
     # some code
     completion(model="claude-instant-1", messages=[{"role": "user", "content": "Hey, how's it going?"}])
-except AuthenticationError as e:
-    print(e.llm_provider)
+except OpenAIError as e:
+    print(e)
 ```
 
 More details 👉
diff --git a/docs/my-website/docs/tutorials/azure_openai.md b/docs/my-website/docs/tutorials/azure_openai.md
index 9b05fd861..16436550a 100644
--- a/docs/my-website/docs/tutorials/azure_openai.md
+++ b/docs/my-website/docs/tutorials/azure_openai.md
@@ -1,4 +1,4 @@
-# Use Completion() for OpenAI, Azure
+# Replacing OpenAI ChatCompletion with Completion()
 
 * [Supported OpenAI LLMs](https://docs.litellm.ai/docs/providers/openai)
 * [Supported Azure OpenAI LLMs](https://docs.litellm.ai/docs/providers/azure)
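
The getting_started.md change above relies on the stated inheritance property: because every mapped exception subclasses OpenAI's base error type, a single `except` on the base class catches all of them. A minimal self-contained sketch of that pattern follows; the class and function names are stand-ins defined here for illustration, not the real `openai`/`litellm` imports (exception module paths vary across `openai` library versions).

```python
# Sketch of the exception-mapping pattern the docs describe: a provider-specific
# error inherits from one base class, so catching the base catches the subclass.
# All names below are local stand-ins, not the real openai/litellm API.

class OpenAIError(Exception):
    """Stand-in for OpenAI's base exception type."""


class AuthenticationError(OpenAIError):
    """Stand-in for a mapped, provider-specific subclass (e.g. bad API key)."""

    def __init__(self, message, llm_provider=None):
        super().__init__(message)
        self.llm_provider = llm_provider


def completion(model, messages):
    # Hypothetical stub that always fails, mimicking a call with a bad key.
    raise AuthenticationError("invalid api key", llm_provider="anthropic")


try:
    completion(model="claude-instant-1",
               messages=[{"role": "user", "content": "Hey, how's it going?"}])
except OpenAIError as e:  # also catches AuthenticationError, via inheritance
    print(e)              # prints: invalid api key
```

This is why the doc edit can broaden the handler from `AuthenticationError` to the base error type without losing coverage: the narrower exception still lands in the broader `except` clause.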