litellm/tests/llm_translation
Krish Dholakia 1cd1d23fdf
LiteLLM Minor Fixes & Improvements (10/23/2024) (#6407)
* docs(bedrock.md): clarify bedrock auth in litellm docs

* fix(convert_dict_to_response.py): Fixes https://github.com/BerriAI/litellm/issues/6387

* feat(pattern_match_deployments.py): more robust handling for wildcard routes (model_name: custom_route/* -> openai/*)

Enables proxy admins to expose custom routes to users, with dynamic handling
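
The wildcard mapping described above (`custom_route/* -> openai/*`) can be sketched roughly as follows. This is a hypothetical illustration, not the actual code in `pattern_match_deployments.py`; helper names here are made up:

```python
import re


def route_to_regex(route: str) -> re.Pattern:
    # Translate a wildcard route like "custom_route/*" into a regex,
    # escaping everything except "*", which captures the model suffix.
    pattern = re.escape(route).replace(r"\*", "(.*)")
    return re.compile(f"^{pattern}$")


def map_wildcard(request_model: str, route: str, target: str):
    # If request_model matches the wildcard route, substitute the captured
    # suffix into the target (e.g. custom_route/* -> openai/*); else None.
    match = route_to_regex(route).match(request_model)
    if match is None:
        return None
    return target.replace("*", match.group(1), 1)
```

For example, `map_wildcard("custom_route/gpt-4o", "custom_route/*", "openai/*")` yields `"openai/gpt-4o"`, while a non-matching model name yields `None`.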

* test: add more testing

* docs(custom_pricing.md): add debug tutorial for custom pricing

* test: skip codestral test - unreachable backend

* test: fix test

* fix(pattern_matching_deployments.py): fix typing

* test: cleanup codestral tests - backend api unavailable

* (refactor) prometheus async_log_success_event to be under 100 LOC  (#6416)

* unit testing for prometheus

* unit testing for success metrics

* use 1 helper for _increment_token_metrics

* use helper for _increment_remaining_budget_metrics

* use _increment_remaining_budget_metrics

* use _increment_top_level_request_and_spend_metrics

* use helper for _set_latency_metrics

* remove noqa violation

* fix test prometheus

* test prometheus

* unit testing for all prometheus helper functions

* fix prom unit tests

* fix unit tests prometheus

* fix unit test prom
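
The refactor bullets above describe splitting a large `async_log_success_event` into small, unit-testable helpers. A minimal sketch of that decomposition pattern, with stand-in metric storage and hypothetical payload keys (the real implementation uses prometheus client metrics):

```python
import asyncio


class PrometheusLogger:
    # Stand-in for real prometheus Counter/Histogram objects.
    def __init__(self):
        self.metrics = {"total_tokens": 0, "requests": 0, "spend": 0.0, "latency": []}

    async def async_log_success_event(self, payload: dict) -> None:
        # Top-level method stays short by delegating to focused helpers.
        self._increment_token_metrics(payload)
        self._increment_top_level_request_and_spend_metrics(payload)
        self._set_latency_metrics(payload)

    def _increment_token_metrics(self, payload: dict) -> None:
        self.metrics["total_tokens"] += payload.get("total_tokens", 0)

    def _increment_top_level_request_and_spend_metrics(self, payload: dict) -> None:
        self.metrics["requests"] += 1
        self.metrics["spend"] += payload.get("spend", 0.0)

    def _set_latency_metrics(self, payload: dict) -> None:
        self.metrics["latency"].append(payload.get("latency_s", 0.0))
```

Each helper can then be exercised directly in unit tests, which is what the "unit testing for all prometheus helper functions" bullets refer to.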

* (refactor) router - use static methods for client init utils  (#6420)

* use InitalizeOpenAISDKClient

* use InitalizeOpenAISDKClient static method

* fix # noqa: PLR0915

* (code cleanup) remove unused and undocumented logging integrations - litedebugger, berrispend  (#6406)

* code cleanup: remove unused and undocumented code files

* fix unused logging integrations cleanup

* bump: version 1.50.3 → 1.50.4

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-10-24 19:01:41 -07:00
test_llm_response_utils LiteLLM Minor Fixes & Improvements (10/23/2024) (#6407) 2024-10-24 19:01:41 -07:00
conftest.py [Feat] Add max_completion_tokens param (#5691) 2024-09-14 14:57:01 -07:00
dog.wav (feat) Support audio param in responses streaming (#6312) 2024-10-18 19:16:14 +05:30
Readme.md LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) (#5731) 2024-09-17 08:05:52 -07:00
test_anthropic_completion.py (fix) get_response_headers for Azure OpenAI (#6344) 2024-10-21 20:41:35 +05:30
test_azure_openai.py (fix) get_response_headers for Azure OpenAI (#6344) 2024-10-21 20:41:35 +05:30
test_databricks.py (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) 2024-10-03 23:31:10 +05:30
test_fireworks_ai_translation.py LiteLLM Minor Fixes & Improvements (09/18/2024) (#5772) 2024-09-19 13:25:29 -07:00
test_gpt4o_audio.py (feat) Support audio param in responses streaming (#6312) 2024-10-18 19:16:14 +05:30
test_max_completion_tokens.py Litellm dev 10 22 2024 (#6384) 2024-10-22 21:18:54 -07:00
test_nvidia_nim.py (feat) add nvidia nim embeddings (#6032) 2024-10-03 17:12:14 +05:30
test_openai_o1.py [Fix] o1-mini causes pydantic warnings on reasoning_tokens (#5754) 2024-09-17 20:23:14 -07:00
test_optional_params.py Litellm dev 10 22 2024 (#6384) 2024-10-22 21:18:54 -07:00
test_prompt_caching.py (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) 2024-10-03 23:31:10 +05:30
test_supports_vision.py [Feat] Allow setting supports_vision for Custom OpenAI endpoints + Added testing (#5821) 2024-09-21 11:35:55 -07:00
test_text_completion_unit_tests.py def test_text_completion_with_echo(stream): (#6401) 2024-10-23 23:27:19 +05:30
test_vertex.py LiteLLM Minor Fixes & Improvements (10/09/2024) (#6139) 2024-10-10 00:42:11 -07:00

More tests under litellm/litellm/tests/*.