# Getting Started
import QuickStart from '../src/components/QuickStart.js'
LiteLLM simplifies LLM API calls by mapping them all to the OpenAI ChatCompletion format.
## basic usage
By default, we provide a free $10 community key so you can try all the providers supported on LiteLLM.
```python
import os
from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["COHERE_API_KEY"] = "your-api-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
```
Need a dedicated key? Email us @ krrish@berri.ai
Next Steps 👉 Call all supported models - e.g. Claude-2, Llama2-70b, etc.
More details 👉
- `completion()` function details
- All supported models / providers on LiteLLM
- Build your own OpenAI proxy
## streaming
Same example as before, just pass `stream=True` in the completion args.
```python
import os
from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)

# cohere call
response = completion(model="command-nightly", messages=messages, stream=True)

print(response)
```
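With `stream=True`, `completion()` returns an iterator of OpenAI-style chunks rather than a single response. A minimal sketch of consuming the stream, assuming each chunk carries its new text in the standard `delta` field:

```python
# print the generated text as it arrives, chunk by chunk
for chunk in response:
    content = chunk["choices"][0]["delta"].get("content") or ""
    print(content, end="")
```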
## exception handling
LiteLLM maps exceptions across all supported providers to the OpenAI exception types. All our exceptions inherit from OpenAI's exception types, so any error handling you have written for OpenAI should work out of the box with LiteLLM.
```python
import os
from openai.error import OpenAIError
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "bad-key"

try:
    # some code
    completion(model="claude-instant-1", messages=[{"role": "user", "content": "Hey, how's it going?"}])
except OpenAIError as e:
    print(e)
```
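Because the mapped exceptions inherit from OpenAI's types, you can also catch specific classes, e.g. to handle auth failures and rate limits differently. A minimal sketch, assuming the pre-1.0 `openai` SDK layout used above:

```python
import os
from openai.error import AuthenticationError, RateLimitError, OpenAIError
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "bad-key"
messages = [{"role": "user", "content": "Hey, how's it going?"}]

try:
    completion(model="claude-instant-1", messages=messages)
except AuthenticationError as e:
    print(f"bad or missing key: {e}")  # the "bad-key" above lands here
except RateLimitError as e:
    print(f"rate limited, retry later: {e}")
except OpenAIError as e:
    print(f"other provider error: {e}")
```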