# litellm-proxy

A local, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM APIs.

## Usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

# INFO: Ollama running on http://0.0.0.0:8000
```

## Replace the OpenAI base URL

```python
import openai  # openai v1.0.0+

# point the client at the proxy by setting base_url
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# the request is sent to the model set on the litellm proxy (`litellm --model`)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem",
        }
    ],
)

print(response)
```
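
Streaming works through the same client. A minimal sketch, assuming the proxy from above is still running on `http://0.0.0.0:8000`:

```python
import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# stream=True yields chunks as the proxied model generates tokens
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)

for chunk in stream:
    # each chunk carries an incremental delta; content may be None on the final chunk
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
print()
```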

See the LiteLLM docs for how to call Huggingface, Bedrock, TogetherAI, Anthropic, and more.


## Folder Structure

**Routes**

- `proxy_server.py` - all OpenAI-compatible routes (`/v1/chat/completions`, `/v1/embeddings`) plus the model info routes (`/v1/models`, `/v1/model/info`, `/v1/model_group_info`); see the sketch after this list
- `health_endpoints/` - `/health`, `/health/liveliness`, `/health/readiness`
- `management_endpoints/key_management_endpoints.py` - all `/key/*` routes
- `management_endpoints/team_endpoints.py` - all `/team/*` routes
- `management_endpoints/internal_user_endpoints.py` - all `/user/*` routes
- `management_endpoints/ui_sso.py` - all `/sso/*` routes
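
To make these route groups concrete, here is a minimal sketch that probes a few of them with `requests`. It assumes the proxy from the usage section is running on `http://0.0.0.0:8000`; the `sk-1234` master key and the empty `/key/generate` body are illustrative placeholders, so adjust both to match your deployment:

```python
import requests

BASE_URL = "http://0.0.0.0:8000"
# hypothetical master key; only needed if the proxy was started with one
HEADERS = {"Authorization": "Bearer sk-1234"}

# health_endpoints/ - liveliness probe
print(requests.get(f"{BASE_URL}/health/liveliness").text)

# proxy_server.py - list the models configured on the proxy
print(requests.get(f"{BASE_URL}/v1/models", headers=HEADERS).json())

# management_endpoints/key_management_endpoints.py - mint a virtual key
# (an empty body falls back to the proxy's defaults)
resp = requests.post(f"{BASE_URL}/key/generate", headers=HEADERS, json={})
print(resp.json())
```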