litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

usage

$ pip install litellm
$ litellm --model ollama/codellama 

#INFO: Ollama running on http://0.0.0.0:8000
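
Once the server is up, you can sanity-check it by listing the models the proxy exposes. A minimal sketch, assuming the default host/port printed above:

import openai  # openai v1.0.0+

# any string works as the api key here; the proxy handles the real provider auth
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# /v1/models returns whatever the proxy is configured to serve
for model in client.models.list():
    print(model.id)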

replace openai base

import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")  # set proxy as base_url
# request is sent to the model set on the litellm proxy, via `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
)

print(response)
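
Streaming works through the same client. A minimal sketch, assuming the proxy from the usage section is still running:

import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# stream=True makes the proxy forward chunks as the model generates them
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)
for chunk in stream:
    # each chunk carries an incremental piece of the reply; content can be None
    print(chunk.choices[0].delta.content or "", end="")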

See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.


Folder Structure

Routes

  • proxy_server.py - all openai-compatible routes - /v1/chat/completions, /v1/embeddings + model info routes - /v1/models, /v1/model/info, /v1/model_group_info.
  • health_endpoints/ - /health, /health/liveliness, /health/readiness
  • management_endpoints/key_management_endpoints.py - all /key/* routes (a request sketch follows this list)
  • management_endpoints/team_endpoints.py - all /team/* routes
  • management_endpoints/internal_user_endpoints.py - all /user/* routes
  • management_endpoints/ui_sso.py - all /sso/* routes
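
As a sketch of how the management routes are called: the snippet below generates a virtual key via /key/generate. It assumes the proxy was started with a master key (sk-1234 is a placeholder) and is listening on the default host/port from the usage section:

import requests

MASTER_KEY = "sk-1234"  # placeholder; use the master key the proxy was started with

response = requests.post(
    "http://0.0.0.0:8000/key/generate",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    json={
        "models": ["gpt-3.5-turbo"],  # models this key is allowed to call
        "duration": "30d",            # key expires after 30 days
    },
)
print(response.json())  # the generated key is in the "key" field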