litellm-mirror/litellm/proxy
Krish Dholakia 27e18358ab
fix(pattern_match_deployments.py): default to user input if unable to… (#6632)
* fix(pattern_match_deployments.py): default to user input if unable to map based on wildcards

* test: fix test

* test: reset test name

* test: update conftest to reload proxy server module between tests

* ci(config.yml): move langfuse out of local_testing

reduce ci/cd time

* ci(config.yml): cleanup langfuse ci/cd tests

* fix: update test to not use global proxy_server app module

* ci: move caching to a separate test pipeline

speed up ci pipeline

* test: update conftest to check if proxy_server attr exists before reloading

* build(conftest.py): don't block on inability to reload proxy_server

* ci(config.yml): update caching unit test filter to work on 'cache' keyword as well

* fix(encrypt_decrypt_utils.py): use function to get salt key

* test: mark flaky test

* test: handle anthropic overloaded errors

* refactor: create separate ci/cd pipeline for proxy unit tests

make ci/cd faster

* ci(config.yml): add litellm_proxy_unit_testing to build_and_test jobs

* ci(config.yml): generate prisma binaries for proxy unit tests

* test: readd vertex_key.json

* ci(config.yml): remove `-s` from proxy_unit_test cmd

speed up test

* ci: remove any 'debug' logging flag

speed up ci pipeline

* test: fix test

* test(test_braintrust.py): rerun

* test: add delay for braintrust test
2024-11-08 00:55:57 +05:30
_experimental Litellm dev 11 02 2024 (#6561) 2024-11-04 07:48:20 +05:30
analytics_endpoints Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
auth fix code quality check 2024-11-06 20:50:52 -08:00
common_utils fix(pattern_match_deployments.py): default to user input if unable to… (#6632) 2024-11-08 00:55:57 +05:30
config_management_endpoints feat(ui): for adding pass-through endpoints 2024-08-15 21:58:11 -07:00
db (code quality) add ruff check PLR0915 for too-many-statements (#6309) 2024-10-18 15:36:49 +05:30
example_config_yaml Litellm router max depth (#6501) 2024-10-29 22:05:41 -07:00
fine_tuning_endpoints Add pyright to ci/cd + Fix remaining type-checking errors (#6082) 2024-10-05 17:04:00 -04:00
guardrails (code quality) add ruff check PLR0915 for too-many-statements (#6309) 2024-10-18 15:36:49 +05:30
health_endpoints (fix) Langfuse key based logging (#6372) 2024-10-23 18:24:22 +05:30
hooks Litellm dev 11 02 2024 (#6561) 2024-11-04 07:48:20 +05:30
management_endpoints (UI) Fix viewing members, keys in a team + added testing (#6514) 2024-10-30 23:51:13 +05:30
management_helpers fix create_audit_log_for_update 2024-10-25 16:48:25 +04:00
openai_files_endpoints Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
pass_through_endpoints LiteLLM Minor Fixes & Improvements (10/30/2024) (#6519) 2024-11-02 00:44:32 +05:30
proxy_load_test Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
rerank_endpoints LiteLLM Minor Fixes & Improvements (09/26/2024) (#5925) (#5937) 2024-09-27 17:54:13 -07:00
spend_tracking (code quality) add ruff check PLR0915 for too-many-statements (#6309) 2024-10-18 15:36:49 +05:30
ui_crud_endpoints ui - add Create, get, delete endpoints for IP Addresses 2024-07-09 15:12:08 -07:00
vertex_ai_endpoints feat(custom_logger.py): expose new async_dataset_hook for modifying… (#6331) 2024-10-20 09:00:04 -07:00
.gitignore
__init__.py
_logging.py fix(_logging.py): fix timestamp format for json logs 2024-06-20 15:20:21 -07:00
_new_secret_config.yaml LiteLLM Minor Fixes & Improvements (11/06/2024) (#6624) 2024-11-07 04:37:32 +05:30
_super_secret_config.yaml docs(enterprise.md): cleanup docs 2024-07-15 14:52:08 -07:00
_types.py LiteLLM Minor Fixes & Improvements (11/06/2024) (#6624) 2024-11-07 04:37:32 +05:30
cached_logo.jpg (feat) use hosted images for custom branding 2024-02-22 14:51:40 -08:00
caching_routes.py (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) 2024-10-14 16:34:01 +05:30
custom_sso.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
enterprise feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint 2024-02-17 19:13:04 -08:00
health_check.py LiteLLM Minor Fixes and Improvements (09/14/2024) (#5697) 2024-09-14 10:32:39 -07:00
lambda.py
litellm_pre_call_utils.py Litellm perf improvements 3 (#6573) 2024-11-05 03:51:26 +05:30
llamaguard_prompt.txt feat(llama_guard.py): allow user to define custom unsafe content categories 2024-02-17 17:42:47 -08:00
logo.jpg (feat) admin ui custom branding 2024-02-21 17:34:42 -08:00
openapi.json
post_call_rules.py (docs) add example post call rules to proxy 2024-01-15 20:58:50 -08:00
prisma_migration.py Litellm expose disable schema update flag (#6085) 2024-10-05 21:26:51 -04:00
proxy_cli.py LiteLLM Minor Fixes & Improvements (11/06/2024) (#6624) 2024-11-07 04:37:32 +05:30
proxy_config.yaml (feat) Allow failed DB connection requests to allow virtual keys with allow_failed_db_requests (#6605) 2024-11-06 20:04:41 -08:00
proxy_server.py (fix) ProxyStartup - Check that prisma connection is healthy when starting an instance of LiteLLM (#6627) 2024-11-06 17:36:48 -08:00
README.md [Feat-Proxy] Allow using custom sso handler (#5809) 2024-09-20 19:14:33 -07:00
route_llm_request.py LiteLLM Minor Fixes & Improvements (11/06/2024) (#6624) 2024-11-07 04:37:32 +05:30
schema.prisma track created, updated at virtual keys 2024-10-25 07:19:29 +04:00
start.sh
utils.py (fix) ProxyStartup - Check that prisma connection is healthy when starting an instance of LiteLLM (#6627) 2024-11-06 17:36:48 -08:00

litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

usage

$ pip install litellm
$ litellm --model ollama/codellama 

#INFO: Ollama running on http://0.0.0.0:8000

replace openai base

import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")  # point the client at the proxy via base_url

# request is sent to the model set on the litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem",
        }
    ],
)

print(response)

See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc. in the LiteLLM provider docs.
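The client code above stays the same regardless of which provider the proxy is started against; only the `litellm --model ...` value changes. As a minimal sketch (assuming the proxy from the usage section is still running on http://0.0.0.0:8000), the same OpenAI client can also list what the proxy is currently serving:

import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# the proxy exposes an OpenAI-compatible /v1/models route,
# so the stock SDK call works unchanged
for model in client.models.list():
    print(model.id)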


Folder Structure

Routes

  • proxy_server.py - all OpenAI-compatible routes - /v1/chat/completions, /v1/embeddings + model info routes - /v1/models, /v1/model/info, /v1/model_group_info
  • health_endpoints/ - /health, /health/liveliness, /health/readiness
  • management_endpoints/key_management_endpoints.py - all /key/* routes (see the example call sketched after this list)
  • management_endpoints/team_endpoints.py - all /team/* routes
  • management_endpoints/internal_user_endpoints.py - all /user/* routes
  • management_endpoints/ui_sso.py - all /sso/* routes
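
A rough sketch of hitting these routes directly (assuming the proxy is running on http://0.0.0.0:8000 with a master key of sk-1234; both values and the request body below are placeholders, not the exact contract):

import requests

PROXY_BASE = "http://0.0.0.0:8000"  # where the proxy is listening (placeholder)
MASTER_KEY = "sk-1234"              # proxy master key (placeholder)

# health_endpoints/ - readiness probe for the proxy
print(requests.get(f"{PROXY_BASE}/health/readiness").json())

# management_endpoints/key_management_endpoints.py - create a virtual key
resp = requests.post(
    f"{PROXY_BASE}/key/generate",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    json={"models": ["gpt-3.5-turbo"], "duration": "1h"},
)
print(resp.json())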