litellm-mirror/litellm (latest commit: 2024-08-17 13:20:55 -07:00)
| Name | Last commit | Date |
| --- | --- | --- |
| `adapters` | fix(anthropic_adapter.py): fix sync streaming | 2024-08-03 20:52:29 -07:00 |
| `assistants` | add async assistants delete support | 2024-07-10 11:14:40 -07:00 |
| `batches` | test batches endpoint on proxy | 2024-07-30 09:46:30 -07:00 |
| `deprecated_litellm_server` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `files` | fix linting checks | 2024-07-30 16:55:17 -07:00 |
| `fine_tuning` | test translating to vertex ai params | 2024-08-03 08:44:54 -07:00 |
| `integrations` | Merge pull request #5259 from BerriAI/litellm_return_remaining_tokens_in_header | 2024-08-17 12:41:16 -07:00 |
| `litellm_core_utils` | Merge branch 'main' into litellm_log_model_price_information | 2024-08-16 19:34:16 -07:00 |
| `llms` | Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry | 2024-08-16 19:16:20 -07:00 |
| `proxy` | add tpm limits per api key per model | 2024-08-17 13:20:55 -07:00 |
| `router_strategy` | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00 |
| `router_utils` | Use AZURE_API_VERSION as default azure openai version | 2024-08-14 15:47:57 -07:00 |
| `tests` | add tpm limits per api key per model | 2024-08-17 13:20:55 -07:00 |
| `types` | feat(litellm_logging.py): support logging model price information to s3 logs | 2024-08-16 16:21:34 -07:00 |
| `__init__.py` | fix(__init__.py): fix models_by_provider to include cohere_chat models | 2024-08-16 11:33:23 -07:00 |
| `_logging.py` | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00 |
| `_redis.py` | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| `_service_logger.py` | fix handle case when service logger has no attribute prometheusServicesLogger | 2024-08-08 17:19:12 -07:00 |
| `_version.py` | (fix) ci/cd don't let importing litellm._version block starting proxy | 2024-02-01 16:23:16 -08:00 |
| `budget_manager.py` | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| `caching.py` | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00 |
| `cost.json` | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| `cost_calculator.py` | fix(litellm_logging.py): fix price information logging to s3 | 2024-08-16 16:42:38 -07:00 |
| `exceptions.py` | fix: fix tests | 2024-08-07 15:02:04 -07:00 |
| `main.py` | Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry | 2024-08-16 19:16:20 -07:00 |
| `model_prices_and_context_window_backup.json` | build(model_prices_and_context_window.json): add 'supports_assistant_prefill' to all vertex ai anthropic models | 2024-08-14 14:08:12 -07:00 |
| `py.typed` | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| `requirements.txt` | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| `router.py` | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00 |
| `scheduler.py` | feat(scheduler.py): support redis caching for req. prioritization | 2024-06-06 14:19:21 -07:00 |
| `timeout.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `utils.py` | Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry | 2024-08-16 19:16:20 -07:00 |