# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```

## replace openai base

```python
import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")  # set proxy to base_url
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem",
        }
    ],
)

print(response)
```
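Because the proxy speaks the OpenAI wire protocol, streaming works through the same client unchanged. A minimal sketch, assuming the proxy started above is still listening on `http://0.0.0.0:8000`:

```python
import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# stream=True yields incremental chunks instead of one full response
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)
for chunk in stream:
    # each chunk carries a delta with the next slice of generated text
    print(chunk.choices[0].delta.content or "", end="")
print()
```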

See the docs for how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.


## Folder Structure

**Routes**

- `proxy_server.py` - all OpenAI-compatible routes - `/v1/chat/completions`, `/v1/embeddings` + model info routes - `/v1/models`, `/v1/model/info`, `/v1/model_group_info`
- `health_endpoints/` - `/health`, `/health/liveliness`, `/health/readiness` (see the probe sketch after this list)
- `management_endpoints/key_management_endpoints.py` - all `/key/*` routes
- `management_endpoints/team_endpoints.py` - all `/team/*` routes
- `management_endpoints/internal_user_endpoints.py` - all `/user/*` routes
- `management_endpoints/ui_sso.py` - all `/sso/*` routes
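To sanity-check the health routes above, here is a small probe sketch. It assumes a proxy listening on `http://0.0.0.0:8000` (the default shown in the usage section) and that the liveliness/readiness routes are reachable without an API key, which depends on your config:

```python
import requests

BASE_URL = "http://0.0.0.0:8000"  # assumption: host/port from the startup log above

for path in ("/health/liveliness", "/health/readiness"):
    resp = requests.get(f"{BASE_URL}{path}")
    # a 200 status with a small JSON body means the route is up
    print(path, resp.status_code, resp.text)
```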