litellm-mirror/tests/litellm/litellm_core_utils

Latest commit 03b5399f86 by Krish Dholakia, 2025-04-19 12:32:38 -07:00:
test(utils.py): handle scenario where text tokens + reasoning tokens … (#10165)

* test(utils.py): handle scenario where text tokens + reasoning tokens set, but reasoning tokens not charged separately

  Addresses https://github.com/BerriAI/litellm/pull/10141#discussion_r2051555332

* fix(vertex_and_google_ai_studio.py): only set content if non-empty str
llm_cost_calc test(utils.py): handle scenario where text tokens + reasoning tokens … (#10165) 2025-04-19 12:32:38 -07:00
prompt_templates fix(factory.py): correct indentation for message index increment in ollama; this fixes bug #9822 (#9943) 2025-04-12 09:50:40 -07:00
test_core_helpers.py fix(triton/completion/transformation.py): remove bad_words / stop wor… (#10163) 2025-04-19 11:23:37 -07:00
test_dd_tracing.py (bug fix) - dd tracer, only send traces when user opts into sending dd-trace (#8928) 2025-03-01 10:53:36 -08:00
test_litellm_logging.py test: add unit test 2025-03-24 14:45:20 -07:00
test_realtime_streaming.py Realtime API Cost tracking (#9795) 2025-04-07 16:43:12 -07:00
test_safe_json_dumps.py (Bug fix) - Cache Health not working when configured with prometheus service logger (#8687) 2025-02-20 13:41:56 -08:00
test_streaming_chunk_builder_utils.py fix(stream_chunk_builder_utils.py): don't set index on modelresponse (#10063) 2025-04-16 10:11:47 -07:00
test_streaming_handler.py test: fix flaky test 2025-04-07 19:42:58 -07:00