🚅 LiteLLM

Deploy on Railway

Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, etc.]

OpenAI Proxy Server | Hosted Proxy (Preview) | Enterprise Tier

PyPI Version CircleCI Y Combinator W23 Whatsapp Discord

LiteLLM manages:

  • Translate inputs to the provider's completion, embedding, and image_generation endpoints
  • Consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router (see the sketch after this list)
  • Set budgets & rate limits per project, API key, and model - OpenAI Proxy Server

Jump to OpenAI Proxy Docs
Jump to Supported LLM Providers

🚨 Stable Release: Use docker images with the -stable tag. These have undergone 12-hour load tests before being published.

Support for more providers. Missing a provider or LLM platform? Raise a feature request.

Usage (Docs)

Important

LiteLLM v1.0.0 now requires openai>=1.0.0. Migration guide here

Open In Colab
pip install litellm
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)

Call any model supported by a provider with model=<provider_name>/<model_name>. There may be provider-specific details, so refer to the provider docs for more information.
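
For example, the Hugging Face model used in the proxy quick start below can also be called directly with the provider prefix (a sketch, assuming you have a valid Hugging Face API key):

from litellm import completion
import os

os.environ["HUGGINGFACE_API_KEY"] = "your-huggingface-key"

# "<provider_name>/<model_name>" tells LiteLLM which provider to route the call to
response = completion(
    model="huggingface/bigcode/starcoder",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response)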

Async (Docs)

from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="gpt-3.5-turbo", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)

Streaming (Docs)

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator in the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# claude 2 call
response = completion(model="claude-2", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")
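
Async and streaming can also be combined; a minimal sketch using acompletion with stream=True:

import asyncio
from litellm import acompletion

async def stream_response():
    response = await acompletion(
        model="gpt-3.5-turbo",
        messages=[{"content": "Hello, how are you?", "role": "user"}],
        stream=True,
    )
    # with stream=True, the awaited response is an async iterator of chunks
    async for part in response:
        print(part.choices[0].delta.content or "", end="")

asyncio.run(stream_response())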

Logging Observability (Docs)

LiteLLM exposes predefined callbacks to send data to Lunary, Langfuse, DynamoDB, S3 buckets, Helicone, Promptlayer, Traceloop, Athina, and Slack.

import os
import litellm
from litellm import completion

## set env variables for logging tools
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"

os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["lunary", "langfuse", "athina"] # log input/output to lunary, langfuse, athina, etc.

# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])

OpenAI Proxy - (Docs)

Track spend + Load Balance across multiple projects

Hosted Proxy (Preview)

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging
  3. Cost tracking
  4. Rate Limiting

📖 Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

pip install 'litellm[proxy]'

Step 1: Start litellm proxy

$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:4000

Step 2: Make ChatCompletions Request to Proxy

import openai # openai v1.0.0+
client = openai.OpenAI(api_key="anything",base_url="http://0.0.0.0:4000") # set proxy to base_url
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)
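
Because the proxy is OpenAI-compatible, streaming works through the same client; a sketch reusing the client from Step 2:

# streaming request through the proxy, reusing the client above
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")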

Proxy Key Management (Docs)

UI on /ui on your proxy server

Set budgets and rate limits across multiple projects with POST /key/generate.

Request

curl 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'

Expected Response

{
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token
    "expires": "2023-11-19T01:38:25.838000+00:00" # ISO 8601 expiry timestamp
}
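
The returned key is then used as the api_key for requests to the proxy; a sketch using the example value from the response above:

import openai

client = openai.OpenAI(api_key="sk-kdEXbIqZRwEeEiHwdg7sFA", base_url="http://0.0.0.0:4000")
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # must be one of the models allowed for this key
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)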

Supported Providers (Docs)

Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation
openai
azure
aws - sagemaker
aws - bedrock
google - vertex_ai [Gemini]
google - palm
google AI Studio - gemini
mistral ai api
cloudflare AI Workers
cohere
anthropic
huggingface
replicate
together_ai
openrouter
ai21
baseten
vllm
nlp_cloud
aleph alpha
petals
ollama
deepinfra
perplexity-ai
Groq AI
Deepseek
anyscale
IBM - watsonx.ai
voyage ai
xinference [Xorbits Inference]

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally:

Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install -E extra_proxy -E proxy

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
poetry run flake8
poetry run pytest .

Step 4: Submit a PR with your changes! 🚀

  • push your fork to your GitHub repo
  • submit a PR from there

Enterprise

For companies that need better security, user management and professional support

Talk to founders

This covers:

  • Features under the LiteLLM Commercial License:
  • Feature Prioritization
  • Custom Integrations
  • Professional Support - Dedicated discord + slack
  • Custom SLAs
  • Secure access with Single Sign-On

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors