| Name | Last commit message | Last commit date |
| --- | --- | --- |
| assistants | refactor(azure.py): refactor to have client init work across all endpoints | 2025-03-11 17:27:24 -07:00 |
| batch_completion | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| batches | refactor(batches/main.py): working refactored azure client init on batches | 2025-03-11 14:36:38 -07:00 |
| caching | Merge pull request #9330 from BerriAI/litellm_dev_03_17_2025_p1 | 2025-03-17 19:57:25 -07:00 |
| files | refactor(azure.py): refactor to have client init work across all endpoints | 2025-03-11 17:27:24 -07:00 |
| fine_tuning | fix linting | 2025-02-14 21:42:51 -08:00 |
| integrations | fix(aim.py): fix linting error | 2025-03-13 15:32:42 -07:00 |
| litellm_core_utils | Merge pull request #9333 from BerriAI/litellm_dev_03_17_2025_p2 | 2025-03-17 21:48:30 -07:00 |
| llms | test_azure_embedding_max_retries_0 | 2025-03-18 12:35:34 -07:00 |
| proxy | fix common utils | 2025-03-18 11:04:02 -07:00 |
| realtime_api | fix(aim.py): fix linting error | 2025-03-13 15:32:42 -07:00 |
| rerank_api | Add new gpt-4.5-preview model + other updates (#8879) | 2025-02-27 15:27:14 -08:00 |
| responses | Add exception mapping for responses API | 2025-03-13 15:57:58 -07:00 |
| router_strategy | fix code quality | 2025-02-18 21:29:23 -08:00 |
| router_utils | test_openai_responses_litellm_router | 2025-03-12 16:13:48 -07:00 |
| secret_managers | fix if | 2025-03-11 09:27:31 +00:00 |
| types | fix(http_handler.py): fix typing error | 2025-03-17 16:42:32 -07:00 |
| __init__.py | Merge branch 'main' of https://github.com/SunnyWan59/litellm | 2025-03-13 19:42:25 -04:00 |
| _logging.py | (sdk perf fix) - only print args passed to litellm when debugging mode is on (#7708) | 2025-01-11 22:56:20 -08:00 |
| _redis.py | fix(redis_cache.py): add 5s default timeout | 2025-03-17 14:27:36 -07:00 |
| _service_logger.py | fix svc logger (#7727) | 2025-01-12 22:00:25 -08:00 |
| _version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| budget_manager.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| constants.py | STREAM_SSE_DONE_STRING | 2025-03-12 09:33:28 -07:00 |
| cost.json | | |
| cost_calculator.py | feat(cost_calculator.py): support reading litellm response cost header in client sdk | 2025-03-17 15:12:01 -07:00 |
| exceptions.py | feat(openai.py): bubble all error information back to client | 2025-03-10 15:27:43 -07:00 |
| main.py | fix typing errors | 2025-03-18 12:31:44 -07:00 |
| model_prices_and_context_window_backup.json | Merge branch 'BerriAI:main' into main | 2025-03-13 19:37:22 -04:00 |
| py.typed | | |
| router.py | feat(endpoints.py): support adding credentials by model id | 2025-03-14 12:32:32 -07:00 |
| scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| utils.py | Merge pull request #9274 from BerriAI/litellm_contributor_rebase_branch | 2025-03-14 21:57:49 -07:00 |