🚅 LiteLLM

Deploy to Render | Deploy on Railway

Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, Groq etc.]

LiteLLM Proxy Server (LLM Gateway) | Hosted Proxy (Preview) | Enterprise Tier


LiteLLM manages:

  • Translates inputs to the provider's completion, embedding, and image_generation endpoints
  • Consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router
  • Set budgets & rate limits per project, API key, and model - LiteLLM Proxy Server (LLM Gateway)

Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers

🚨 Stable Release: Use docker images with the -stable tag. These have undergone 12-hour load tests before being published. More information about the release cycle here

Support for more providers. Missing a provider or LLM platform? Raise a feature request.

Usage (Docs)

Important

LiteLLM v1.0.0 now requires openai>=1.0.0. Migration guide here
LiteLLM v1.40.14+ now requires pydantic>=2.0.0. No changes required.

Open In Colab
pip install litellm
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="openai/gpt-4o", messages=messages)

# anthropic call
response = completion(model="anthropic/claude-3-sonnet-20240229", messages=messages)
print(response)

Response (OpenAI Format)

{
    "id": "chatcmpl-565d891b-a42e-4c39-8d14-82a1f5208885",
    "created": 1734366691,
    "model": "claude-3-sonnet-20240229",
    "object": "chat.completion",
    "system_fingerprint": null,
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Hello! As an AI language model, I don't have feelings, but I'm operating properly and ready to assist you with any questions or tasks you may have. How can I help you today?",
                "role": "assistant",
                "tool_calls": null,
                "function_call": null
            }
        }
    ],
    "usage": {
        "completion_tokens": 43,
        "prompt_tokens": 13,
        "total_tokens": 56,
        "completion_tokens_details": null,
        "prompt_tokens_details": {
            "audio_tokens": null,
            "cached_tokens": 0
        },
        "cache_creation_input_tokens": 0,
        "cache_read_input_tokens": 0
    }
}
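
The completion text is accessed the same way for every provider, as noted above; a minimal sketch, assuming the response object from the calls above:

# both attribute-style and dict-style access work on the response
print(response.choices[0].message.content)
print(response["choices"][0]["message"]["content"])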

Call any model supported by a provider with model=<provider_name>/<model_name>. There may be provider-specific details, so refer to the provider docs for more information.
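
For example, switching providers is just a change of model prefix; a minimal sketch (the model names below are illustrative, and each provider's credentials must be set via its own environment variables):

from litellm import completion

messages = [{"role": "user", "content": "Hello, how are you?"}]

# the provider prefix selects the backend; model names are illustrative
response = completion(model="groq/llama3-8b-8192", messages=messages)
response = completion(model="ollama/llama2", messages=messages)
response = completion(model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0", messages=messages)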

Async (Docs)

from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="openai/gpt-4o", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)
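
Because acompletion is awaitable, several requests can be fired concurrently with asyncio.gather; a minimal sketch building on the example above:

import asyncio
from litellm import acompletion

async def fan_out():
    messages = [{"content": "Hello, how are you?", "role": "user"}]
    # run both provider calls concurrently and wait for both results
    return await asyncio.gather(
        acompletion(model="openai/gpt-4o", messages=messages),
        acompletion(model="anthropic/claude-3-sonnet-20240229", messages=messages),
    )

openai_response, anthropic_response = asyncio.run(fan_out())
print(openai_response, anthropic_response)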

Streaming (Docs)

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator in the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]
response = completion(model="openai/gpt-4o", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# anthropic call
response = completion('anthropic/claude-3-sonnet-20240229', messages, stream=True)
for part in response:
    print(part)
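
Each chunk follows the OpenAI delta format shown below, so the full message can be rebuilt by concatenating the deltas; a minimal sketch, assuming a fresh streaming response:

# accumulate the streamed deltas into the final message text
full_text = ""
for part in response:
    full_text += part.choices[0].delta.content or ""
print(full_text)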

Response chunk (OpenAI Format)

{
    "id": "chatcmpl-2be06597-eb60-4c70-9ec5-8cd2ab1b4697",
    "created": 1734366925,
    "model": "claude-3-sonnet-20240229",
    "object": "chat.completion.chunk",
    "system_fingerprint": null,
    "choices": [
        {
            "finish_reason": null,
            "index": 0,
            "delta": {
                "content": "Hello",
                "role": "assistant",
                "function_call": null,
                "tool_calls": null,
                "audio": null
            },
            "logprobs": null
        }
    ]
}

Logging Observability (Docs)

LiteLLM exposes pre-defined callbacks to send data to Lunary, MLflow, Langfuse, DynamoDB, S3 buckets, Helicone, Promptlayer, Traceloop, Athina, and Slack

import os
import litellm
from litellm import completion

## set env variables for logging tools (when using MLflow, no API key set up is required)
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["HELICONE_API_KEY"] = "your-helicone-auth-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"

os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["lunary", "mlflow", "langfuse", "athina", "helicone"] # log input/output to lunary, mlflow, langfuse, athina, helicone

#openai call
response = completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
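
Alongside the named integrations, plain Python functions can also be registered as callbacks; a hedged sketch of a custom success callback (the function name and printed fields are illustrative):

import litellm
from litellm import completion

def log_success(kwargs, completion_response, start_time, end_time):
    # illustrative custom callback: inspect the call and its latency
    print("model:", kwargs.get("model"))
    print("latency (s):", (end_time - start_time).total_seconds())

litellm.success_callback = [log_success]  # callables and integration names can be mixed

response = completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hi 👋"}])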

LiteLLM Proxy Server (LLM Gateway) - (Docs)

Track spend + Load Balance across multiple projects

Hosted Proxy (Preview)

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging
  3. Cost tracking
  4. Rate Limiting

📖 Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

pip install 'litellm[proxy]'

Step 1: Start litellm proxy

$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:4000
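
To serve more than one model, the proxy is typically started with a config file instead of a single --model flag; a minimal sketch of a config (the deployment names and env var references are illustrative):

# config.yaml
model_list:
  - model_name: gpt-3.5-turbo            # name clients will request
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-3-sonnet
    litellm_params:
      model: anthropic/claude-3-sonnet-20240229
      api_key: os.environ/ANTHROPIC_API_KEY

$ litellm --config config.yaml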

Step 2: Make ChatCompletions Request to Proxy

Important

💡 Use LiteLLM Proxy with Langchain (Python, JS), OpenAI SDK (Python, JS), Anthropic SDK, Mistral SDK, LlamaIndex, Instructor, or curl

import openai  # openai v1.0.0+
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")  # set proxy to base_url
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)
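
The proxy also exposes OpenAI-compatible REST endpoints, so the same request works with plain curl; a minimal sketch (the Bearer token can be any string in this no-auth quick start, matching the api_key above):

curl 'http://0.0.0.0:4000/chat/completions' \
--header 'Authorization: Bearer anything' \
--header 'Content-Type: application/json' \
--data-raw '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "this is a test request, write a short poem"}]}'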

Proxy Key Management (Docs)

Connect the proxy with a Postgres DB to create proxy keys

# Get the code
git clone https://github.com/BerriAI/litellm

# Go to folder
cd litellm

# Add the master key - you can change this after setup
echo 'LITELLM_MASTER_KEY="sk-1234"' > .env

# Add the litellm salt key - you cannot change this after adding a model
# It is used to encrypt / decrypt your LLM API Key credentials
# We recommend - https://1password.com/password-generator/
# password generator to get a random hash for litellm salt key
echo 'LITELLM_SALT_KEY="sk-1234"' >> .env

source .env

# Start
docker-compose up

The UI is available at /ui on your proxy server.

Set budgets and rate limits across multiple projects POST /key/generate

Request

curl 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'

Expected Response

{
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token
    "expires": "2023-11-19T01:38:25.838000+00:00" # datetime object
}
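
The generated key is then used as the Bearer token for requests routed through the proxy; a minimal sketch reusing the key from the response above:

curl 'http://0.0.0.0:4000/chat/completions' \
--header 'Authorization: Bearer sk-kdEXbIqZRwEeEiHwdg7sFA' \
--header 'Content-Type: application/json' \
--data-raw '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello with a generated key"}]}'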

Supported Providers (Docs)

Per-provider support for Completion, Streaming, Async Completion, Async Streaming, Async Embedding, and Async Image Generation is detailed in the provider docs. Supported providers include:

  • openai
  • azure
  • AI/ML API
  • aws - sagemaker
  • aws - bedrock
  • google - vertex_ai
  • google - palm
  • google AI Studio - gemini
  • mistral ai api
  • cloudflare AI Workers
  • cohere
  • anthropic
  • empower
  • huggingface
  • replicate
  • together_ai
  • openrouter
  • ai21
  • baseten
  • vllm
  • nlp_cloud
  • aleph alpha
  • petals
  • ollama
  • deepinfra
  • perplexity-ai
  • Groq AI
  • Deepseek
  • anyscale
  • IBM - watsonx.ai
  • voyage ai
  • xinference [Xorbits Inference]
  • FriendliAI
  • Galadriel
Read the Docs

Contributing

Interested in contributing? Contributions to the LiteLLM Python SDK, Proxy Server, and LLM integrations are all accepted and highly encouraged! See our Contribution Guide for more details

Enterprise

For companies that need better security, user management and professional support

Talk to founders

This covers:

  • Features under the LiteLLM Commercial License:
  • Feature Prioritization
  • Custom Integrations
  • Professional Support - Dedicated discord + slack
  • Custom SLAs
  • Secure access with Single Sign-On

Code Quality / Linting

LiteLLM follows the Google Python Style Guide.

We run:

  • Ruff for linting (ruff.toml)
  • MyPy for type checking (mypy.ini)
  • Black for formatting

If you have suggestions on how to improve the code quality feel free to open an issue or a PR.

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors

Run in Developer mode

Services

  1. Set up a .env file in the repo root
  2. Run dependent services: docker-compose up db prometheus

Backend

  1. (In root) create a virtual environment: python -m venv .venv
  2. Activate the virtual environment: source .venv/bin/activate
  3. Install dependencies: pip install -e ".[all]"
  4. Start the proxy backend: uvicorn litellm.proxy.proxy_server:app --host localhost --port 4000 --reload

Frontend

  1. Navigate to ui/litellm-dashboard
  2. Install dependencies npm install
  3. Run npm run dev to start the dashboard