🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, etc.]

OpenAI Proxy Server | Hosted Proxy (Preview) | Enterprise Tier


LiteLLM manages:

  • Translate inputs to provider's completion, embedding, and image_generation endpoints
  • Consistent output, text responses will always be available at ['choices'][0]['message']['content']
  • Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router (see the sketch below)
  • Set Budgets & Rate limits per project, api key, model (OpenAI Proxy Server)
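
A minimal sketch of the Router, assuming two deployments registered under one model name (the endpoints and keys below are placeholders):

from litellm import Router

# Two deployments share the model name "gpt-3.5-turbo"; the Router retries and
# falls back between them. The api_base / api_key values are placeholders.
router = Router(model_list=[
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "azure/<your-azure-deployment>",
            "api_base": "<your-azure-endpoint>",
            "api_key": "<your-azure-key>",
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "<your-openai-key>"},
    },
])

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response)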

Jump to OpenAI Proxy Docs
Jump to Supported LLM Providers

🚨 Stable Release: Use docker images with the main-stable tag. These run through 12 hr load tests (1k req./min).
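
For example (assuming the image published at ghcr.io/berriai/litellm, as in the Docker deployment docs):

docker pull ghcr.io/berriai/litellm:main-stable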

Support for more providers. Missing a provider or LLM platform? Raise a feature request.

Usage (Docs)

Important

LiteLLM v1.0.0 now requires openai>=1.0.0. Migration guide here

Open In Colab
pip install litellm
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)
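
Whichever provider served the call, the reply text lives in the same place; a minimal sketch using the response objects from the example above:

# same access path for the OpenAI and Cohere responses above
print(response.choices[0].message.content)
# dict-style indexing on the response object works as well
print(response["choices"][0]["message"]["content"])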

Call any model supported by a provider with model=<provider_name>/<model_name>. There may be provider-specific details, so refer to the provider docs for more information.
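
For example (a sketch; exact model identifiers vary by provider, see the provider docs):

from litellm import completion

# Hugging Face Inference API
response = completion(model="huggingface/bigcode/starcoder", messages=messages)

# AWS Bedrock
response = completion(model="bedrock/anthropic.claude-v2", messages=messages)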

Async (Docs)

from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="gpt-3.5-turbo", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)

Streaming (Docs)

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator as the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

from litellm import completion
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# claude 2
response = completion('claude-2', messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")
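
Streaming also composes with the async API; a minimal sketch, assuming the same messages list as above:

from litellm import acompletion
import asyncio

async def stream_response():
    response = await acompletion(model="gpt-3.5-turbo", messages=messages, stream=True)
    # the streaming response is an async iterator of chunks
    async for part in response:
        print(part.choices[0].delta.content or "")

asyncio.run(stream_response())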

Logging Observability (Docs)

LiteLLM exposes pre defined callbacks to send data to Lunary, Langfuse, DynamoDB, s3 Buckets, Helicone, Promptlayer, Traceloop, Athina, Slack

import os
import litellm
from litellm import completion

## set env variables for logging tools
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"

os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["lunary", "langfuse", "athina"] # log input/output to lunary, langfuse, athina, etc.

# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])

OpenAI Proxy - (Docs)

Track spend + Load Balance across multiple projects

Hosted Proxy (Preview)

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging
  3. Cost tracking
  4. Rate Limiting

📖 Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

pip install 'litellm[proxy]'

Step 1: Start litellm proxy

$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:4000

Step 2: Make ChatCompletions Request to Proxy

import openai # openai v1.0.0+
client = openai.OpenAI(api_key="anything",base_url="http://0.0.0.0:4000") # set proxy to base_url
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)
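
The proxy exposes an OpenAI-compatible /chat/completions endpoint, so the same request works without the SDK (a sketch using curl):

curl http://0.0.0.0:4000/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "this is a test request, write a short poem"}]
  }'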

Proxy Key Management (Docs)

UI on /ui on your proxy server

Set budgets and rate limits across multiple projects POST /key/generate

Request

curl 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'

Expected Response

{
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token
    "expires": "2023-11-19T01:38:25.838000+00:00" # datetime object
}
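
The returned key is then used as the Bearer token for OpenAI-compatible requests to the proxy, so spend and rate limits are enforced per key (a sketch; the key below is the example value from the response above):

import openai

client = openai.OpenAI(api_key="sk-kdEXbIqZRwEeEiHwdg7sFA", base_url="http://0.0.0.0:4000")
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from a budget-limited key"}],
)
print(response)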

Supported Providers (Docs)

For each provider, the docs list support for Completion, Streaming, Async Completion, Async Streaming, Async Embedding, and Async Image Generation.

  • openai
  • azure
  • aws - sagemaker
  • aws - bedrock
  • google - vertex_ai [Gemini]
  • google - palm
  • google AI Studio - gemini
  • mistral ai api
  • cloudflare AI Workers
  • cohere
  • anthropic
  • huggingface
  • replicate
  • together_ai
  • openrouter
  • ai21
  • baseten
  • vllm
  • nlp_cloud
  • aleph alpha
  • petals
  • ollama
  • deepinfra
  • perplexity-ai
  • Groq AI
  • anyscale
  • IBM - watsonx.ai
  • voyage ai
  • xinference [Xorbits Inference]

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally: Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
poetry run flake8
poetry run pytest .

Step 4: Submit a PR with your changes! 🚀

  • Push your fork to your GitHub repo
  • Submit a PR from there

Enterprise

For companies that need better security, user management and professional support

Talk to founders

This covers:

  • Features under the LiteLLM Commercial License:
  • Feature Prioritization
  • Custom Integrations
  • Professional Support - Dedicated discord + slack
  • Custom SLAs
  • Secure access with Single Sign-On

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors