| Name | Last commit message | Last commit date |
| --- | --- | --- |
| _experimental | Litellm dev 12 06 2024 (#7067) | 2024-12-06 22:44:18 -08:00 |
| analytics_endpoints | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| auth | (proxy) - Auth fix, ensure re-using safe request body for checking model field (#7222) | 2024-12-14 12:01:25 -08:00 |
| common_utils | (proxy) - Auth fix, ensure re-using safe request body for checking model field (#7222) | 2024-12-14 12:01:25 -08:00 |
| config_management_endpoints | feat(ui): for adding pass-through endpoints | 2024-08-15 21:58:11 -07:00 |
| db | (feat) log error class, function_name on prometheus service failure hook + only log DB related failures on DB service hook (#6650) | 2024-11-07 17:01:18 -08:00 |
| example_config_yaml | (fix) don't block proxy startup if license check fails & using prometheus (#6839) | 2024-11-20 17:55:39 -08:00 |
| fine_tuning_endpoints | Add pyright to ci/cd + Fix remaining type-checking errors (#6082) | 2024-10-05 17:04:00 -04:00 |
| guardrails | (Refactor) Code Quality improvement - remove /prompt_templates/, base_aws_llm.py from /llms folder (#7164) | 2024-12-11 00:02:46 -08:00 |
| health_endpoints | (fix) Langfuse key based logging (#6372) | 2024-10-23 18:24:22 +05:30 |
| hooks | (minor fix proxy) Clarify Proxy Rate limit errors are showing hash of litellm virtual key (#7210) | 2024-12-12 20:13:14 -08:00 |
| management_endpoints | fix(main.py): fix retries being multiplied when using openai sdk (#7221) | 2024-12-14 11:56:55 -08:00 |
| management_helpers | fix create_audit_log_for_update | 2024-10-25 16:48:25 +04:00 |
| openai_files_endpoints | (feat) add Vertex Batches API support in OpenAI format (#7032) | 2024-12-04 19:40:28 -08:00 |
| pass_through_endpoints | Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) | 2024-12-11 00:32:41 -08:00 |
| proxy_load_test | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| rerank_endpoints | LiteLLM Minor Fixes & Improvements (09/26/2024) (#5925) (#5937) | 2024-09-27 17:54:13 -07:00 |
| spend_tracking | (feat - Router / Proxy) Allow setting budget limits per LLM deployment (#7220) | 2024-12-13 19:15:51 -08:00 |
| ui_crud_endpoints | ui - add Create, get, delete endpoints for IP Addresses | 2024-07-09 15:12:08 -07:00 |
| vertex_ai_endpoints | (feat) pass through llm endpoints - add PATCH support (vertex context caching requires for update ops) (#6924) | 2024-11-26 14:39:13 -08:00 |
| .gitignore | | |
| __init__.py | | |
| _logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00 |
| _new_secret_config.yaml | Litellm dev 12 13 2024 p1 (#7219) | 2024-12-13 19:01:28 -08:00 |
| _super_secret_config.yaml | docs(enterprise.md): cleanup docs | 2024-07-15 14:52:08 -07:00 |
| _types.py | fix(main.py): fix retries being multiplied when using openai sdk (#7221) | 2024-12-14 11:56:55 -08:00 |
| cached_logo.jpg | (feat) use hosted images for custom branding | 2024-02-22 14:51:40 -08:00 |
| caching_routes.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| custom_sso.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| enterprise | feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint | 2024-02-17 19:13:04 -08:00 |
| health_check.py | LiteLLM Minor Fixes and Improvements (09/14/2024) (#5697) | 2024-09-14 10:32:39 -07:00 |
| lambda.py | | |
| litellm_pre_call_utils.py | fix(key_management_endpoints.py): override metadata field value on up… (#7008) | 2024-12-03 23:03:50 -08:00 |
| llamaguard_prompt.txt | feat(llama_guard.py): allow user to define custom unsafe content categories | 2024-02-17 17:42:47 -08:00 |
| logo.jpg | (feat) admin ui custom branding | 2024-02-21 17:34:42 -08:00 |
| model_config.yaml | Revert "Revert "(feat) Allow using include to include external YAML files in a config.yaml (#6922)"" | 2024-11-27 16:08:59 -08:00 |
| openapi.json | | |
| post_call_rules.py | | |
| prisma_migration.py | Litellm expose disable schema update flag (#6085) | 2024-10-05 21:26:51 -04:00 |
| proxy_cli.py | (Feat) Add support for storing virtual keys in AWS SecretManager (#6728) | 2024-11-14 09:25:07 -08:00 |
| proxy_config.yaml | (feat - Router / Proxy) Allow setting budget limits per LLM deployment (#7220) | 2024-12-13 19:15:51 -08:00 |
| proxy_server.py | fix(main.py): fix retries being multiplied when using openai sdk (#7221) | 2024-12-14 11:56:55 -08:00 |
| README.md | [Feat-Proxy] Allow using custom sso handler (#5809) | 2024-09-20 19:14:33 -07:00 |
| route_llm_request.py | (docs + fix) Add docs on Moderations endpoint, Text Completion (#6947) | 2024-11-27 16:30:48 -08:00 |
| schema.prisma | litellm db fixes LiteLLM_UserTable (#7089) | 2024-12-07 19:08:37 -08:00 |
| start.sh | | |
| utils.py | fix(main.py): fix retries being multiplied when using openai sdk (#7221) | 2024-12-14 11:56:55 -08:00 |