litellm/litellm/litellm_core_utils
Latest commit: 5851a8f901 by Ishaan Jaff (2024-08-29 14:25:05 -07:00)
Merge pull request #5431 from BerriAI/litellm_Add_fireworks_ai_health_check
[Fix-Proxy] /health check for provider wildcard models (fireworks/*)
File                          Last commit message  (commit date)
llm_cost_calc/                llm_cost_calc: use cost per token for jamba  (2024-08-27 14:18:04 -07:00)
asyncify.py                   build(config.yml): bump anyio version  (2024-08-27 07:37:06 -07:00)  (see sketch below)
core_helpers.py               fix: use get_file_check_sum  (2024-08-08 08:03:08 -07:00)
exception_mapping_utils.py    fix: error str in OpenAI, Azure exception  (2024-06-29 13:11:55 -07:00)
json_validation_rule.py       feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls  (2024-07-18 16:57:38 -07:00)
litellm_logging.py            fix(utils.py): correctly log streaming cache hits (#5417) (#5426)  (2024-08-28 22:50:33 -07:00)
llm_request_utils.py          add util to pick_cheapest_model_from_llm_provider  (2024-08-29 09:27:20 -07:00)
logging_utils.py              feat: run aporia as post call success hook  (2024-08-19 11:25:31 -07:00)
redact_messages.py            feat(redact_messages.py): allow removing sensitive key information before passing to logging integration  (2024-07-22 20:58:02 -07:00)
streaming_utils.py            fix(streaming_utils.py): fix generic_chunk_has_all_required_fields  (2024-08-26 21:13:02 -07:00)
token_counter.py              fix(token_counter.py): new `get_modified_max_tokens` helper func  (2024-06-27 15:38:09 -07:00)  (see sketch below)
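
The contents of asyncify.py are not shown in this listing; as a rough illustration of what an anyio-based "asyncify" helper typically looks like (anyio being the dependency bumped in the commit above), here is a minimal sketch. The decorator name matches the module, but the body is an assumption, not litellm's actual implementation:

```python
# Minimal sketch of an "asyncify" helper, assuming it offloads a blocking
# call to a worker thread via anyio. Illustrative only; not litellm's code.
import functools
from typing import Any, Awaitable, Callable, TypeVar

import anyio

T = TypeVar("T")


def asyncify(func: Callable[..., T]) -> Callable[..., Awaitable[T]]:
    """Wrap a blocking function so it can be awaited without blocking the event loop."""

    async def wrapper(*args: Any, **kwargs: Any) -> T:
        # anyio.to_thread.run_sync only forwards positional arguments,
        # so bind everything with functools.partial first.
        bound = functools.partial(func, *args, **kwargs)
        return await anyio.to_thread.run_sync(bound)

    return wrapper
```

Inside an async function this would be used as `result = await asyncify(some_blocking_fn)(arg)`, keeping the event loop responsive while the blocking call runs in a worker thread.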
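
Similarly, the `get_modified_max_tokens` helper named in the token_counter.py commit is not reproduced here. The following sketch shows one plausible reading of the name, namely clamping a requested completion budget to the space left in the model's context window; the signature and parameter names are hypothetical:

```python
# Hypothetical sketch inferred from the helper's name alone: cap the
# requested max_tokens so prompt tokens + completion tokens fit the
# model's context window. Not litellm's actual API.
from typing import Optional


def get_modified_max_tokens(
    prompt_tokens: int,
    context_window: int,
    requested_max_tokens: Optional[int],
) -> Optional[int]:
    if requested_max_tokens is None:
        return None  # no explicit budget; let the provider apply its default
    remaining = context_window - prompt_tokens
    if remaining <= 0:
        return 0  # the prompt already fills (or overflows) the window
    return min(requested_max_tokens, remaining)
```

For example, with a 4096-token window and a 4000-token prompt, a request for 512 completion tokens would be clamped to 96.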