litellm-mirror/litellm
Latest commit: 97f714d2b0 by miraclebakelaser, 2024-08-27 19:38:37 +09:00
fix(factory.py): handle missing 'content' in cohere assistant messages
Update cohere_messages_pt_v2 function to check for 'content' existence
adapters fix(anthropic_adapter.py): fix sync streaming 2024-08-03 20:52:29 -07:00
assistants add async assistants delete support 2024-07-10 11:14:40 -07:00
batches fix: fix linting errors 2024-08-22 15:51:59 -07:00
deprecated_litellm_server
files fix: fix linting errors 2024-08-22 15:51:59 -07:00
fine_tuning test translating to vertex ai params 2024-08-03 08:44:54 -07:00
integrations fix use guardrail for pre call hook 2024-08-23 09:34:08 -07:00
litellm_core_utils fix(streaming_utils.py): fix generic_chunk_has_all_required_fields 2024-08-26 21:13:02 -07:00
llms fix(factory.py): handle missing 'content' in cohere assistant messages 2024-08-27 19:38:37 +09:00
proxy fix created_at and updated_at not existing error 2024-08-26 21:04:39 -07:00
router_strategy refactor: replace .error() with .exception() logging for better debugging on sentry 2024-08-16 09:22:47 -07:00
router_utils fix azure_ad_token_provider 2024-08-22 16:15:53 -07:00
tests fix: fix imports 2024-08-26 22:24:30 -07:00
types feat(vertex_ai_context_caching.py): support making context caching calls to vertex ai in a normal chat completion call (anthropic caching format) 2024-08-26 22:19:01 -07:00
__init__.py fix: fix imports 2024-08-26 22:24:30 -07:00
_logging.py
_redis.py feat(caching.py): redis cluster support 2024-08-21 15:01:52 -07:00
_service_logger.py fix handle case when service logger has no attribute prometheusServicesLogger 2024-08-08 17:19:12 -07:00
_version.py
budget_manager.py
caching.py feat(vertex_ai_context_caching.py): check gemini cache, if key already exists 2024-08-26 22:19:01 -07:00
cost.json
cost_calculator.py feat(cost_calculator.py): only override base model if custom pricing is set 2024-08-19 16:05:49 -07:00
exceptions.py fix: fix tests 2024-08-07 15:02:04 -07:00
main.py feat(vertex_ai_context_caching.py): support making context caching calls to vertex ai in a normal chat completion call (anthropic caching format) 2024-08-26 22:19:01 -07:00
model_prices_and_context_window_backup.json ui new build 2024-08-26 19:01:35 -07:00
py.typed
requirements.txt
router.py fix(router.py): don't cooldown on apiconnectionerrors 2024-08-24 09:53:05 -07:00
scheduler.py
timeout.py
utils.py fix: fix imports 2024-08-26 22:19:01 -07:00
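The commit at the head of this listing guards against assistant messages that omit the 'content' key. A minimal hypothetical sketch of that defensive pattern is below; the helper name and message shapes are assumptions for illustration, not the actual cohere_messages_pt_v2 implementation:

```python
# Hypothetical sketch: tolerate assistant messages with no 'content' key,
# as in the fix(factory.py) commit at the top of this listing.
def get_assistant_content(message: dict) -> str:
    # An assistant message may carry only tool calls and no 'content' key,
    # or an explicit None; fall back to "" instead of raising KeyError.
    return message.get("content") or ""

messages = [
    {"role": "assistant", "tool_calls": [{"name": "lookup"}]},  # no 'content'
    {"role": "assistant", "content": "Here is the answer."},
]
print([get_assistant_content(m) for m in messages])
```

Using `.get()` with a falsy fallback covers both a missing key and an explicit `"content": None`, which is why it is preferable to direct indexing here.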