# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```
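
Before wiring in a client, you can sanity-check the server via its model info route, `/v1/models` (listed under Folder Structure below). A minimal sketch using only the Python standard library, assuming the proxy is running on `http://0.0.0.0:8000` as above and that no auth header is required in this setup:

```python
# Quick smoke test for a running litellm proxy, standard library only.
# Assumes the proxy started above is listening on http://0.0.0.0:8000
# and does not require an Authorization header (assumption).
import json
import urllib.request

with urllib.request.urlopen("http://0.0.0.0:8000/v1/models") as resp:
    models = json.loads(resp.read())

# Expected to mirror the OpenAI response shape: {"data": [{"id": ...}, ...]}
for m in models.get("data", []):
    print(m["id"])
```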

## replace openai base

```python
import openai  # openai v1.0.0+

# point the client at the proxy; requests go to the model set via `litellm --model`
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem",
        }
    ],
)

print(response)
```
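
Since the proxy speaks the OpenAI protocol, streaming works through the same client unchanged. A short sketch, reusing the `client` from the snippet above and assuming the proxied model supports streaming:

```python
# Stream tokens through the proxy with the same OpenAI client.
# Assumption: the model behind the proxy supports streaming responses.
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta is not None:
        print(delta, end="", flush=True)
print()
```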

See the docs for how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.


## Folder Structure

### Routes

- `proxy_server.py` - all OpenAI-compatible routes - `/v1/chat/completions`, `/v1/embeddings` + model info routes - `/v1/models`, `/v1/model/info`, `/v1/model_group_info`.
- `health_endpoints/` - `/health`, `/health/liveliness`, `/health/readiness` (see the sketch after this list)
- `management_endpoints/key_management_endpoints.py` - all `/key/*` routes
- `management_endpoints/team_endpoints.py` - all `/team/*` routes
- `management_endpoints/internal_user_endpoints.py` - all `/user/*` routes
- `management_endpoints/ui_sso.py` - all `/sso/*` routes
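
The health routes above make for an easy liveness/readiness probe. A minimal sketch, assuming a proxy on `http://0.0.0.0:8000`; depending on your config, some health routes may require an `Authorization` header, which this sketch omits:

```python
# Probe the health endpoints listed above (standard library only).
# Assumption: these routes are reachable without an auth header in this setup.
import urllib.request

for route in ("/health/liveliness", "/health/readiness"):
    with urllib.request.urlopen(f"http://0.0.0.0:8000{route}") as resp:
        print(route, resp.status, resp.read().decode())
```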