litellm-mirror/tests/litellm/proxy/spend_tracking
Ishaan Jaff f9ce754817
[Feat] Add litellm.supports_reasoning() util to track if an llm supports reasoning (#9923)
* add supports_reasoning for xai models
* add "supports_reasoning": true for o1 series models
* add supports_reasoning util
* add litellm.supports_reasoning
* add supports_reasoning for claude 3-7 models
* add deepseek as supports_reasoning
* test_supports_reasoning
* add supports_reasoning to model group info
* add supports_reasoning
* docs supports_reasoning
* fix supports_reasoning test
* "supports_reasoning": false
* fix test
* supports_reasoning
2025-04-11 17:56:04 -07:00
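For context, a minimal usage sketch of the helper introduced by this commit, assuming litellm.supports_reasoning() follows the same pattern as the other litellm.supports_* utilities (model name in, boolean out, backed by the model cost map). The model names below are illustrative examples, not an exhaustive list from this PR.

import litellm

# Illustrative model names only; the commit adds the flag for xai, o1-series,
# claude 3-7, and deepseek models, but the exact entries live in litellm's model map.
models = [
    "xai/grok-3-mini-beta",
    "o1",
    "claude-3-7-sonnet-20250219",
    "deepseek/deepseek-reasoner",
    "gpt-3.5-turbo",
]

for model in models:
    # Assumed behavior: returns True when the model's entry in the model map
    # sets "supports_reasoning": true, False otherwise.
    print(model, litellm.supports_reasoning(model=model))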
test_spend_management_endpoints.py | [Feat] Add litellm.supports_reasoning() util to track if an llm supports reasoning (#9923) | 2025-04-11 17:56:04 -07:00
test_spend_tracking_utils.py | Fix anthropic prompt caching cost calc + trim logged message in db (#9838) | 2025-04-09 21:26:43 -07:00