add new tutorial
This commit is contained in: parent 94a1f4bd0e, commit 2dc1c35a05
2 changed files with 47 additions and 15 deletions
32  docs/my-website/docs/tutorials/model_fallbacks.md  Normal file
@@ -0,0 +1,32 @@
# Basic Model Fallbacks w/ LiteLLM

Here's how you can implement model fallbacks across 3 LLM providers (OpenAI, Anthropic, Azure) using LiteLLM.

## 1. Install LiteLLM

```python
!pip install litellm
```
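
If you want to double-check the install before moving on, a quick sanity check is to import the package and print the installed version. This is a minimal, optional sketch and not part of the tutorial's original steps:

```python
# optional: confirm litellm is importable and show which version was installed
from importlib.metadata import version

import litellm  # raises ImportError if the install did not succeed

print(version("litellm"))
```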

## 2. Complete Code

```python
import os
import traceback

from litellm import completion

# set ENV variables for each provider
os.environ["OPENAI_API_KEY"] = ""
os.environ["ANTHROPIC_API_KEY"] = ""
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""

# models to try, in order of preference
model_fallback_list = ["claude-instant-1", "gpt-3.5-turbo", "chatgpt-test"]

user_message = "Hello, how are you?"
messages = [{"content": user_message, "role": "user"}]

for model in model_fallback_list:
    try:
        response = completion(model=model, messages=messages)
        break  # stop at the first model that responds successfully
    except Exception:
        # log the failure and fall back to the next model in the list
        print(f"error occurred: {traceback.format_exc()}")
```
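
If you plan to reuse this fallback pattern elsewhere, one option is to wrap the loop in a small helper that returns the first successful response and raises only after every provider has failed. The sketch below makes the same assumptions as the tutorial code; the `completion_with_fallbacks_demo` name is illustrative and not part of LiteLLM's API:

```python
import traceback

from litellm import completion


def completion_with_fallbacks_demo(models, messages):
    """Hypothetical helper: try each model in order, return the first successful response."""
    errors = {}
    for model in models:
        try:
            return completion(model=model, messages=messages)
        except Exception:
            # record the failure and fall back to the next model in the list
            errors[model] = traceback.format_exc()
    raise RuntimeError(f"all models failed: {list(errors)}")


# usage (assumes the same ENV variables as above are set):
# response = completion_with_fallbacks_demo(model_fallback_list, messages)
```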