_experimental
feat(health_check.py): set upperbound for api when making health check call ( #7865 )
2025-01-18 19:47:43 -08:00
analytics_endpoints
(code quality) run ruff rule to ban unused imports ( #7313 )
2024-12-19 12:33:42 -08:00
auth
(e2e testing + minor refactor) - Virtual Key Max budget check ( #7888 )
2025-01-21 06:47:26 -08:00
batches_endpoints
(Feat) add "/v1/batches/{batch_id:path}/cancel" endpoint ( #7406 )
2024-12-24 20:23:50 -08:00
common_utils
fix http parsing utils ( #7753 )
2025-01-13 19:58:26 -08:00
config_management_endpoints
(code quality) run ruff rule to ban unused imports ( #7313 )
2024-12-19 12:33:42 -08:00
db
use asyncio tasks for logging db metrics ( #7663 )
2025-01-09 19:59:32 -08:00
example_config_yaml
Litellm dev 12 25 2024 p3 ( #7421 )
2024-12-25 18:54:24 -08:00
fine_tuning_endpoints
(Feat) - new endpoint GET /v1/fine_tuning/jobs/{fine_tuning_job_id:path} ( #7427 )
2024-12-27 17:01:14 -08:00
guardrails
(fix) BaseAWSLLM - cache IAM role credentials when used ( #7775 )
2025-01-14 20:16:22 -08:00
health_endpoints
(code quality) run ruff rule to ban unused imports ( #7313 )
2024-12-19 12:33:42 -08:00
hooks
Litellm dev 01 13 2025 p2 ( #7758 )
2025-01-14 17:04:01 -08:00
management_endpoints
(Bug fix) - Allow setting null for max_budget, rpm_limit, tpm_limit when updating values on a team ( #7912 )
2025-01-21 19:19:36 -08:00
management_helpers
(code quality) run ruff rule to ban unused imports ( #7313 )
2024-12-19 12:33:42 -08:00
openai_files_endpoints
(feat) /batches - Add support for using /batches endpoints in OAI format ( #7402 )
2024-12-24 16:58:05 -08:00
pass_through_endpoints
test: initial commit enforcing testing on all anthropic pass through … ( #7794 )
2025-01-15 22:02:35 -08:00
rerank_endpoints
(code quality) run ruff rule to ban unused imports ( #7313 )
2024-12-19 12:33:42 -08:00
spend_tracking
(UI Logs) - add pagination + filtering by key name/team name ( #7860 )
2025-01-18 12:47:01 -08:00
ui_crud_endpoints
(code quality) run ruff rule to ban unused imports ( #7313 )
2024-12-19 12:33:42 -08:00
vertex_ai_endpoints
Litellm dev 12 24 2024 p2 ( #7400 )
2024-12-24 20:33:41 -08:00
.gitignore
__init__.py
_logging.py
fix(_logging.py): fix timestamp format for json logs
2024-06-20 15:20:21 -07:00
_new_secret_config.yaml
fix(proxy_server.py): fix get model info when litellm_model_id is set + move model analytics to free ( #7886 )
2025-01-21 08:19:07 -08:00
_super_secret_config.yaml
docs(enterprise.md): cleanup docs
2024-07-15 14:52:08 -07:00
_types.py
JWT Auth - enforce_rbac support + UI team view, spend calc fix ( #7863 )
2025-01-19 21:28:55 -08:00
cached_logo.jpg
(feat) use hosted images for custom branding
2024-02-22 14:51:40 -08:00
caching_routes.py
(code quality) run ruff rule to ban unused imports ( #7313 )
2024-12-19 12:33:42 -08:00
custom_sso.py
(code quality) run ruff rule to ban unused imports ( #7313 )
2024-12-19 12:33:42 -08:00
enterprise
feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint
2024-02-17 19:13:04 -08:00
health_check.py
feat(health_check.py): set upperbound for api when making health check call ( #7865 )
2025-01-18 19:47:43 -08:00
lambda.py
litellm_pre_call_utils.py
(proxy perf improvement) - remove redundant .copy() operation ( #7564 )
2025-01-06 20:36:47 -08:00
llamaguard_prompt.txt
feat(llama_guard.py): allow user to define custom unsafe content categories
2024-02-17 17:42:47 -08:00
logo.jpg
(feat) admin ui custom branding
2024-02-21 17:34:42 -08:00
model_config.yaml
Revert "Revert "(feat) Allow using include to include external YAML files in a config.yaml ( #6922 )""
2024-11-27 16:08:59 -08:00
openapi.json
post_call_rules.py
(docs) add example post call rules to proxy
2024-01-15 20:58:50 -08:00
prisma_migration.py
Litellm expose disable schema update flag ( #6085 )
2024-10-05 21:26:51 -04:00
proxy_cli.py
uvicorn allow setting num workers ( #7681 )
2025-01-10 19:03:14 -08:00
proxy_config.yaml
(e2e testing + minor refactor) - Virtual Key Max budget check ( #7888 )
2025-01-21 06:47:26 -08:00
proxy_server.py
fix(proxy_server.py): fix get model info when litellm_model_id is set + move model analytics to free ( #7886 )
2025-01-21 08:19:07 -08:00
README.md
[Feat-Proxy] Allow using custom sso handler ( #5809 )
2024-09-20 19:14:33 -07:00
route_llm_request.py
Auth checks on invalid fallback models ( #7871 )
2025-01-19 21:28:10 -08:00
schema.prisma
(UI - View SpendLogs Table) ( #7842 )
2025-01-17 18:53:45 -08:00
start.sh
utils.py
fix proxy pre call hook - only use if user is using alerting ( #7683 )
2025-01-10 19:07:05 -08:00