# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## Usage

```shell
$ pip install 'litellm[proxy]'
$ litellm --model ollama/codellama

# INFO: Ollama running on http://0.0.0.0:8000
```
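Once the server is up, you can sanity-check it before wiring up a client. A minimal sketch, assuming the proxy above is running on port 8000 and `requests` is installed (if you started the proxy with a master key, add an `Authorization: Bearer <key>` header):

```python
import requests

# list the model(s) the proxy was started with, via the OpenAI-compatible /v1/models route
resp = requests.get("http://0.0.0.0:8000/v1/models")
print(resp.json())
```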

## Replace the OpenAI base URL

```python
import openai  # openai v1.0.0+

# point the client at the proxy; requests go to the model set via `litellm --model`
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem",
        }
    ],
)

print(response)
```
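Streaming works through the proxy as well, since it follows the OpenAI SDK interface. A minimal sketch reusing the same client setup (the model name resolves to whatever the proxy was started with):

```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# stream=True yields OpenAI-style ChatCompletionChunk objects
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)
for chunk in stream:
    # delta.content can be None (e.g. on the final chunk), so guard before printing
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
print()
```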

[See how to call Hugging Face, Bedrock, TogetherAI, Anthropic, and more.](https://docs.litellm.ai/docs/simple_proxy)


## Folder Structure

**Routes**

- `proxy_server.py` - all OpenAI-compatible routes - `/v1/chat/completions`, `/v1/embeddings` + model info routes - `/v1/models`, `/v1/model/info`, `/v1/model_group_info`.
- `health_endpoints/` - `/health`, `/health/liveliness`, `/health/readiness` (see the sketch after this list)
- `management_endpoints/key_management_endpoints.py` - all `/key/*` routes
- `management_endpoints/team_endpoints.py` - all `/team/*` routes
- `management_endpoints/internal_user_endpoints.py` - all `/user/*` routes
- `management_endpoints/ui_sso.py` - all `/sso/*` routes
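
The health routes above are handy for container orchestration probes. A minimal sketch, assuming a locally running proxy (the `Authorization` header is only needed if a master key is configured; `sk-1234` is a placeholder):

```python
import requests

BASE_URL = "http://0.0.0.0:8000"
headers = {"Authorization": "Bearer sk-1234"}  # placeholder key; omit if no master key is set

for route in ("/health/liveliness", "/health/readiness"):
    resp = requests.get(f"{BASE_URL}{route}", headers=headers)
    print(route, resp.status_code)
```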