Name | Last commit message | Last commit date
assistants | Revert "fix: add missing parameters order, limit, before, and after in get_as…" (#7542) | 2025-01-03 16:32:12 -08:00
batch_completion | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
batches | Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077) | 2025-03-08 11:47:25 -08:00
caching | Support caching on reasoning content + other fixes (#8973) | 2025-03-04 21:12:16 -08:00
files | Litellm dev 03 04 2025 p3 (#8997) | 2025-03-04 21:58:03 -08:00
fine_tuning | fix linting | 2025-02-14 21:42:51 -08:00
integrations | Merge pull request #4 from BerriAI/main | 2025-03-10 11:13:21 +05:30
litellm_core_utils | feat(azure.py): add azure bad request error support | 2025-03-10 15:59:06 -07:00
llms | Merge pull request #9109 from BerriAI/litellm_dev_03_10_2025_p1_v2 | 2025-03-10 22:38:16 -07:00
proxy | ui new build | 2025-03-11 12:22:12 -07:00
realtime_api | (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) | 2024-12-28 18:38:54 -08:00
rerank_api | Add new gpt-4.5-preview model + other updates (#8879) | 2025-02-27 15:27:14 -08:00
router_strategy | fix code quality | 2025-02-18 21:29:23 -08:00
router_utils | feat: prioritize api_key over tenant_id for more Azure AD token provi… (#8701) | 2025-03-09 18:59:37 -07:00
secret_managers | fix if | 2025-03-11 09:27:31 +00:00
types | fix(utils.py): fix linting error | 2025-03-09 20:47:12 -07:00
__init__.py | [Feat] - Display thinking tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) (#9029) | 2025-03-06 18:32:58 -08:00
_logging.py | (sdk perf fix) - only print args passed to litellm when debugging mode is on (#7708) | 2025-01-11 22:56:20 -08:00
_redis.py | (Redis Cluster) - Fixes for using redis cluster + pipeline (#8442) | 2025-02-12 18:01:32 -08:00
_service_logger.py | fix svc logger (#7727) | 2025-01-12 22:00:25 -08:00
_version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
budget_manager.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
constants.py | Fix calling claude via invoke route + response_format support for claude on invoke route (#8908) | 2025-02-28 17:56:26 -08:00
cost.json | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00
cost_calculator.py | Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077) | 2025-03-08 11:47:25 -08:00
exceptions.py | feat(openai.py): bubble all error information back to client | 2025-03-10 15:27:43 -07:00
main.py | fix linting error | 2025-03-10 13:57:50 -07:00
model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): add gemini/gemini-2.0-pro-exp pricing | 2025-03-11 12:00:26 -07:00
py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00
router.py | fix: fix type | 2025-03-10 18:38:40 -07:00
scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30
timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
utils.py | Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077) | 2025-03-08 11:47:25 -08:00