
litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

Usage

$ pip install litellm
$ litellm --model ollama/codellama 

#INFO: Ollama running on http://0.0.0.0:8000

Replace the OpenAI base URL

import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")  # point the client at the proxy

# the request is sent to whatever model the proxy was started with (`litellm --model`)
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)

See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.
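The same pattern works for any provider litellm supports: pass a provider-prefixed model name to `litellm --model` and the client code above stays unchanged. A minimal sketch (the model identifiers below are illustrative and assume the relevant provider credentials are already set in your environment; check the provider docs for exact names):

$ litellm --model huggingface/bigcode/starcoder
$ litellm --model bedrock/anthropic.claude-v2
$ litellm --model together_ai/togethercomputer/llama-2-70b-chat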


Folder Structure

Routes

  • proxy_server.py - all OpenAI-compatible routes - /v1/chat/completions, /v1/embeddings - plus model info routes - /v1/models, /v1/model/info, /v1/model_group_info (see the example requests after this list).
  • health_endpoints/ - /health, /health/liveliness, /health/readiness
  • management_endpoints/key_management_endpoints.py - all /key/* routes
  • management_endpoints/team_endpoints.py - all /team/* routes
  • management_endpoints/internal_user_endpoints.py - all /user/* routes
  • management_endpoints/ui_sso.py - all /sso/* routes
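As a quick sanity check of the routes above, you can curl a running proxy directly. A minimal sketch, assuming the proxy is listening on port 8000 and `sk-1234` stands in for your master key (the Authorization header is only needed if the proxy was started with a master key):

$ curl http://0.0.0.0:8000/health -H "Authorization: Bearer sk-1234"
$ curl http://0.0.0.0:8000/v1/models -H "Authorization: Bearer sk-1234"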