Name                                        | Last commit message                                                                                                     | Last commit date
--------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|---------------------------
assistants                                  | feat(assistants/main.py): support arun_thread_stream                                                                    | 2024-06-04 16:47:51 -07:00
batches                                     | docs(customers.md): add customer cost tracking to docs                                                                  | 2024-05-29 14:55:33 -07:00
deprecated_litellm_server                   |                                                                                                                         |
integrations                                | Merge pull request #4503 from BerriAI/litellm_log_remaining_rate_limit_prometheus                                       | 2024-07-01 21:11:42 -07:00
litellm_core_utils                          | fix(core_helpers.py): map vertex ai 'RECITATION' finish reason to 'content_filter'                                      | 2024-07-01 12:48:56 -07:00
llms                                        | feat - return headers for openai audio transcriptions                                                                   | 2024-07-01 20:27:27 -07:00
proxy                                       | fix(aws_secret_manager.py): fix string replace                                                                          | 2024-07-02 00:42:12 -07:00
router_strategy                             | refactor: replace 'traceback.print_exc()' with logging library                                                          | 2024-06-06 13:47:43 -07:00
router_utils                                | fix use safe access for router alerting                                                                                 | 2024-06-14 15:17:32 -07:00
tests                                       | ci/cd run again                                                                                                         | 2024-07-01 21:36:30 -07:00
types                                       | fix(vertex_ai_anthropic.py): support pre-filling "{" for json mode                                                      | 2024-06-29 18:54:10 -07:00
__init__.py                                 | feat - return response headers for async openai requests                                                                | 2024-07-01 17:01:42 -07:00
_logging.py                                 | fix(_logging.py): fix timestamp format for json logs                                                                    | 2024-06-20 15:20:21 -07:00
_redis.py                                   | feat(proxy_server.py): return litellm version in response headers                                                       | 2024-05-08 16:00:08 -07:00
_service_logger.py                          | feat(dynamic_rate_limiter.py): update cache with active project                                                         | 2024-06-21 20:25:40 -07:00
_version.py                                 |                                                                                                                         |
budget_manager.py                           | feat(proxy_server.py): return litellm version in response headers                                                       | 2024-05-08 16:00:08 -07:00
caching.py                                  | remove debug print statement                                                                                            | 2024-06-27 20:58:29 -07:00
cost.json                                   |                                                                                                                         |
cost_calculator.py                          | fix(cost_calculator.py): handle unexpected error in cost_calculator.py                                                  | 2024-06-28 14:53:00 -07:00
exceptions.py                               | fix(utils.py): support json schema validation                                                                           | 2024-06-29 15:05:52 -07:00
main.py                                     | fix(router.py): disable cooldowns                                                                                       | 2024-07-01 15:03:10 -07:00
model_prices_and_context_window_backup.json | feat(vertex_httpx.py): support the 'response_schema' param for older vertex ai models - pass as prompt (user-controlled) | 2024-06-29 13:25:27 -07:00
py.typed                                    | feature - Types for mypy - #360                                                                                         | 2024-05-30 14:14:41 -04:00
requirements.txt                            |                                                                                                                         |
router.py                                   | fix(router.py): disable cooldowns                                                                                       | 2024-07-01 15:03:10 -07:00
scheduler.py                                | feat(scheduler.py): support redis caching for req. prioritization                                                       | 2024-06-06 14:19:21 -07:00
timeout.py                                  |                                                                                                                         |
utils.py                                    | fix exception provider not known                                                                                        | 2024-07-01 21:05:37 -07:00