litellm-mirror/litellm
Krish Dholakia 1a441def03 fix(logging): add json formatting for uncaught exceptions (#9615) (#9619)
* fix(logging): add json formatting for uncaught exceptions (#9615)

* fix(_logging.py): cleanup logging to catch unhandled exceptions

* fix(_logging.py): avoid using 'print'

---------

Co-authored-by: Henrique Cavarsan <hcavarsan@yahoo.com.br>
2025-03-28 15:16:15 -07:00
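The fix above presumably installs an exception hook so uncaught exceptions are routed through the logging module with a JSON formatter instead of being printed as a raw traceback. A minimal sketch of that general pattern follows; the names (JsonFormatter, _handle_uncaught_exception) are hypothetical and this is not litellm's actual _logging.py code.

```python
# Hypothetical sketch: log uncaught exceptions as JSON via sys.excepthook.
# Not litellm's implementation; illustrates the standard library pattern only.
import json
import logging
import sys
import traceback


class JsonFormatter(logging.Formatter):
    """Render log records as single-line JSON objects."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
        }
        if record.exc_info:
            payload["stacktrace"] = "".join(
                traceback.format_exception(*record.exc_info)
            )
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("example_logger")
logger.addHandler(handler)
logger.setLevel(logging.INFO)


def _handle_uncaught_exception(exc_type, exc_value, exc_tb):
    # Let Ctrl-C fall through to the default handler.
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_tb)
        return
    # Emit the traceback as a structured JSON log line instead of print().
    logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_tb))


sys.excepthook = _handle_uncaught_exception
```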
assistants refactor(azure.py): refactor to have client init work across all endpoints 2025-03-11 17:27:24 -07:00
batch_completion (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
batches refactor(batches/main.py): working refactored azure client init on batches 2025-03-11 14:36:38 -07:00
caching update redisvl dependency 2025-03-24 08:42:11 -04:00
experimental_mcp_client fix mcp client 2025-03-21 18:18:23 -07:00
files refactor(azure.py): refactor to have client init work across all endpoints 2025-03-11 17:27:24 -07:00
fine_tuning fix linting 2025-02-14 21:42:51 -08:00
integrations default to use SLP for GCS PubSub 2025-03-24 15:21:59 -07:00
litellm_core_utils Add recursion depth to convert_anyof_null_to_nullable, constants.py. Fix recursive_detector.py raise error state 2025-03-28 13:11:19 -07:00
llms Add recursion depth to convert_anyof_null_to_nullable, constants.py. Fix recursive_detector.py raise error state 2025-03-28 13:11:19 -07:00
proxy fix(logging): add json formatting for uncaught exceptions (#9615) (#9619) 2025-03-28 15:16:15 -07:00
realtime_api fix(aim.py): fix linting error 2025-03-13 15:32:42 -07:00
rerank_api Add new gpt-4.5-preview model + other updates (#8879) 2025-02-27 15:27:14 -08:00
responses MockResponsesAPIStreamingIterator 2025-03-20 12:25:58 -07:00
router_strategy Revert "Fix latency redis" 2025-03-19 18:11:22 -07:00
router_utils fix(handle_error.py): make cooldown error more descriptive 2025-03-21 10:46:35 -07:00
secret_managers fix if 2025-03-11 09:27:31 +00:00
types Add OpenAI gpt-4o-transcribe support (#9517) 2025-03-26 23:10:25 -07:00
__init__.py Support discovering gemini, anthropic, xai models by calling their /v1/model endpoint (#9530) 2025-03-27 22:50:48 -07:00
_logging.py fix(logging): add json formatting for uncaught exceptions (#9615) (#9619) 2025-03-28 15:16:15 -07:00
_redis.py fix(redis_cache.py): add 5s default timeout 2025-03-17 14:27:36 -07:00
_service_logger.py fix svc logger (#7727) 2025-01-12 22:00:25 -08:00
_version.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
budget_manager.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
constants.py Add recursion depth to convert_anyof_null_to_nullable, constants.py. Fix recursive_detector.py raise error state 2025-03-28 13:11:19 -07:00
cost.json store llm costs in budget manager 2023-09-09 19:11:35 -07:00
cost_calculator.py Support Gemini audio token cost tracking + fix openai audio input token cost tracking (#9535) 2025-03-26 17:26:25 -07:00
exceptions.py feat(openai.py): bubble all error information back to client 2025-03-10 15:27:43 -07:00
main.py Add OpenAI gpt-4o-transcribe support (#9517) 2025-03-26 23:10:25 -07:00
model_prices_and_context_window_backup.json Add OpenAI gpt-4o-transcribe support (#9517) 2025-03-26 23:10:25 -07:00
py.typed feature - Types for mypy - #360 2024-05-30 14:14:41 -04:00
router.py Merge pull request #9473 from BerriAI/litellm_dev_03_22_2025_p2 2025-03-22 21:57:15 -07:00
scheduler.py (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) 2024-10-14 16:34:01 +05:30
timeout.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
utils.py Support discovering gemini, anthropic, xai models by calling their /v1/model endpoint (#9530) 2025-03-27 22:50:48 -07:00