_experimental
build: ui updates (#8206)
2025-02-03 07:26:58 -08:00
analytics_endpoints
(code quality) run ruff rule to ban unused imports (#7313)
2024-12-19 12:33:42 -08:00
auth
[Bug Fix] - /vertex_ai/ was not detected as llm_api_route on pass through but vertex-ai was (#8186)
2025-02-01 17:26:08 -08:00
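The fix above amounts to treating both spellings of the vertex pass-through prefix as LLM API routes. A minimal sketch of that check, with hypothetical names (the real helper lives in litellm's proxy auth code):

```python
# Illustrative sketch only; the prefix tuple and function name are assumptions.
LLM_API_ROUTE_PREFIXES = ("/vertex_ai/", "/vertex-ai/")

def is_llm_api_route(path: str) -> bool:
    # Before the fix only the hyphenated form matched; both spellings
    # of the pass-through prefix should count as LLM API routes.
    return path.startswith(LLM_API_ROUTE_PREFIXES)

assert is_llm_api_route("/vertex_ai/v1/projects/demo")
assert is_llm_api_route("/vertex-ai/v1/projects/demo")
```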
batches_endpoints
(Feat) add `/v1/batches/{batch_id:path}/cancel` endpoint (#7406)
2024-12-24 20:23:50 -08:00
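Because this endpoint follows the OpenAI batches API shape, the stock SDK can call it. A minimal client-side sketch, assuming a proxy on localhost:4000 and a placeholder key:

```python
from openai import OpenAI

# Placeholders: point the OpenAI SDK at a locally running proxy.
client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-1234")

# The SDK issues POST /v1/batches/{batch_id}/cancel for this call.
cancelled = client.batches.cancel("batch_abc123")
print(cancelled.status)
```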
common_utils
fix http parsing utils (#7753)
2025-01-13 19:58:26 -08:00
config_management_endpoints
(code quality) run ruff rule to ban unused imports (#7313)
2024-12-19 12:33:42 -08:00
db
Litellm dev contributor prs 01 31 2025 (#8168)
2025-02-01 09:05:20 -08:00
example_config_yaml
(QA / testing) - Add e2e tests for key model access auth checks (#8000)
2025-01-25 17:15:11 -08:00
fine_tuning_endpoints
(Feat) - new endpoint GET /v1/fine_tuning/jobs/{fine_tuning_job_id:path} (#7427)
2024-12-27 17:01:14 -08:00
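A raw-HTTP sketch of retrieving a job through this route, assuming a local proxy and placeholder credentials:

```python
import requests

base_url = "http://localhost:4000"  # assumed local proxy address
headers = {"Authorization": "Bearer sk-1234"}  # placeholder proxy key

# GET /v1/fine_tuning/jobs/{fine_tuning_job_id}
resp = requests.get(f"{base_url}/v1/fine_tuning/jobs/ftjob-abc123", headers=headers)
print(resp.json())
```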
guardrails
(Refactor / QA) - Use LoggingCallbackManager to append callbacks and ensure no duplicate callbacks are added (#8112)
2025-01-30 19:35:50 -08:00
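The de-duplication idea behind that refactor fits in a few lines; the class shape below is illustrative, not litellm's actual implementation:

```python
class LoggingCallbackManager:
    """Illustrative sketch: append callbacks without duplicates."""

    def __init__(self) -> None:
        self._callbacks: list = []

    def add_callback(self, cb) -> None:
        # Append only if an identical callback isn't already registered,
        # so repeated setup runs don't emit duplicate logs.
        if cb not in self._callbacks:
            self._callbacks.append(cb)
```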
health_endpoints
Add datadog health check support + fix bedrock converse cost tracking w/ region name specified (#7958)
2025-01-23 22:17:09 -08:00
hooks
Litellm dev 01 13 2025 p2 (#7758)
2025-01-14 17:04:01 -08:00
management_endpoints
Easier user onboarding via SSO (#8187)
2025-02-02 23:02:33 -08:00
management_helpers
(code quality) run ruff rule to ban unused imports (#7313)
2024-12-19 12:33:42 -08:00
openai_files_endpoints
(feat) /batches - Add support for using /batches endpoints in OAI format (#7402)
2024-12-24 16:58:05 -08:00
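With the proxy speaking the OpenAI batches format, standard SDK calls work unchanged; a sketch assuming the same placeholder proxy as above:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-1234")

# List recent batches through the proxy exactly as against the OpenAI API.
for batch in client.batches.list(limit=5):
    print(batch.id, batch.status)
```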
pass_through_endpoints
test: initial commit enforcing testing on all anthropic pass through … (#7794)
2025-01-15 22:02:35 -08:00
rerank_endpoints
(code quality) run ruff rule to ban unused imports (#7313)
2024-12-19 12:33:42 -08:00
spend_tracking
(UI + SpendLogs) - Store SpendLogs in UTC Timezone, Fix filtering logs by start/end time (#8190)
2025-02-01 17:26:18 -08:00
ui_crud_endpoints
(code quality) run ruff rule to ban unused imports (#7313)
2024-12-19 12:33:42 -08:00
vertex_ai_endpoints
(Feat) pass through vertex - allow using credentials defined on litellm router for vertex pass through (#8100)
2025-01-29 17:54:02 -08:00
.gitignore
fix(gitmodules): remapping to new proxy
2023-10-12 21:23:53 -07:00
__init__.py
refactor: add black formatting
2023-12-25 14:11:20 +05:30
_logging.py
fix(_logging.py): fix timestamp format for json logs
2024-06-20 15:20:21 -07:00
_new_new_secret_config.yaml
Ollama ssl verify = False + Spend Logs reliability fixes (#7931)
2025-01-23 23:05:41 -08:00
_new_secret_config.yaml
LiteLLM Minor Fixes & Improvements (01/16/2025) - p2 (#7828)
2025-02-02 23:17:50 -08:00
_super_secret_config.yaml
docs(enterprise.md): cleanup docs
2024-07-15 14:52:08 -07:00
_types.py
[Bug Fix] - /vertex_ai/ was not detected as llm_api_route on pass through but vertex-ai was (#8186)
2025-02-01 17:26:08 -08:00
cached_logo.jpg
Litellm dev 01 23 2025 p2 (#7962)
2025-01-23 21:02:15 -08:00
caching_routes.py
(code quality) run ruff rule to ban unused imports (#7313)
2024-12-19 12:33:42 -08:00
custom_sso.py
(code quality) run ruff rule to ban unused imports (#7313)
2024-12-19 12:33:42 -08:00
enterprise
feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint
2024-02-17 19:13:04 -08:00
health_check.py
feat(health_check.py): set upperbound for api when making health check call (#7865)
2025-01-18 19:47:43 -08:00
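An upper bound on health-check calls maps naturally onto a timeout wrapper; a sketch assuming asyncio and an arbitrary 120-second cap (not the repo's actual value):

```python
import asyncio

async def health_check_with_upperbound(check_coro, upperbound_s: float = 120.0):
    # Cap the provider health-check call so a hung API can't stall the
    # health endpoint indefinitely; names and values here are illustrative.
    try:
        return await asyncio.wait_for(check_coro, timeout=upperbound_s)
    except asyncio.TimeoutError:
        return {"status": "unhealthy", "error": f"timed out after {upperbound_s}s"}
```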
lambda.py
Add mangum.
2023-11-23 00:04:47 -05:00
litellm_pre_call_utils.py
Litellm dev 01 27 2025 p3 (#8047)
2025-01-28 18:01:27 -08:00
llamaguard_prompt.txt
feat(llama_guard.py): allow user to define custom unsafe content categories
2024-02-17 17:42:47 -08:00
logo.jpg
(feat) admin ui custom branding
2024-02-21 17:34:42 -08:00
model_config.yaml
Revert "Revert "(feat) Allow using include to include external YAML files in a config.yaml ( #6922 )""
2024-11-27 16:08:59 -08:00
openapi.json
(feat) add swagger.json for litellm proxy
2023-10-13 20:41:04 -07:00
post_call_rules.py
(docs) add example post call rules to proxy
2024-01-15 20:58:50 -08:00
prisma_migration.py
Litellm expose disable schema update flag (#6085)
2024-10-05 21:26:51 -04:00
proxy_cli.py
uvicorn allow setting num workers (#7681)
2025-01-10 19:03:14 -08:00
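A num-workers flag like this typically maps onto uvicorn's workers parameter; a sketch, assuming the app's import path and arbitrary port/worker values:

```python
import uvicorn

# Multiple worker processes require passing the app as an import string.
uvicorn.run(
    "litellm.proxy.proxy_server:app",  # assumed import path
    host="0.0.0.0",
    port=4000,   # illustrative port
    workers=4,   # what a --num_workers style flag would control
)
```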
proxy_config.yaml
(UI) Fix SpendLogs page - truncate bedrock models + show end_user (#8118)
2025-01-30 13:59:13 -08:00
proxy_server.py
(Refactor / QA) - Use LoggingCallbackManager to append callbacks and ensure no duplicate callbacks are added (#8112)
2025-01-30 19:35:50 -08:00
README.md
[Feat-Proxy] Allow using custom sso handler (#5809)
2024-09-20 19:14:33 -07:00
route_llm_request.py
Auth checks on invalid fallback models (#7871)
2025-01-19 21:28:10 -08:00
schema.prisma
build(schema.prisma): add new sso_user_id to LiteLLM_UserTable (#8167)
2025-01-31 23:04:05 -08:00
start.sh
fix(factory.py): fixing llama-2 non-chat models prompt templating
2023-11-07 21:33:54 -08:00
utils.py
(Refactor / QA) - Use LoggingCallbackManager to append callbacks and ensure no duplicate callbacks are added (#8112)
2025-01-30 19:35:50 -08:00