| Name | Last commit message | Last commit date |
| --- | --- | --- |
| assistants | feat(proxy_server.py): add assistants api endpoints to proxy server | 2024-05-30 22:44:43 -07:00 |
| batches | docs(customers.md): add customer cost tracking to docs | 2024-05-29 14:55:33 -07:00 |
| deprecated_litellm_server | | |
| integrations | fix - move email templates | 2024-05-31 10:37:56 -07:00 |
| llms | Merge pull request #3956 from BerriAI/litellm_cache_openai_clients | 2024-06-01 09:46:42 -07:00 |
| proxy | fix(proxy_server.py): security fix - fix sql injection attack on global spend logs | 2024-06-01 14:16:41 -07:00 |
| router_strategy | fix(lowest_latency.py): set default none value for time_to_first_token in sync log success event | 2024-05-21 18:42:15 -07:00 |
| tests | fix(test_scheduler.py): fix test | 2024-06-01 11:30:26 -07:00 |
| types | Merge pull request #3954 from BerriAI/litellm_simple_request_prioritization | 2024-05-31 23:29:09 -07:00 |
| __init__.py | Merge pull request #3954 from BerriAI/litellm_simple_request_prioritization | 2024-05-31 23:29:09 -07:00 |
| _logging.py | fix(proxy_cli.py): enable json logging via litellm_settings param on config | 2024-05-29 21:41:20 -07:00 |
| _redis.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| _service_logger.py | fix(test_lowest_tpm_rpm_routing_v2.py): unit testing for usage-based-routing-v2 | 2024-04-18 21:38:00 -07:00 |
| _version.py | | |
| budget_manager.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| caching.py | fix(proxy_server.py): allow user_api_key_cache_ttl to be a controllable param | 2024-05-25 12:07:28 -07:00 |
| cost.json | | |
| exceptions.py | fix(proxy_server.py): fix end user object check when master key used | 2024-05-29 17:20:59 -07:00 |
| main.py | docs(assistants.md): add assistants api to docs | 2024-06-01 10:30:07 -07:00 |
| model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): add azure gpt-4o pricing | 2024-05-31 15:44:19 -07:00 |
| requirements.txt | | |
| router.py | Merge pull request #3954 from BerriAI/litellm_simple_request_prioritization | 2024-05-31 23:29:09 -07:00 |
| scheduler.py | docs(scheduler.md): add request prioritization to docs | 2024-05-31 19:35:47 -07:00 |
| timeout.py | | |
| utils.py | Merge pull request #3944 from BerriAI/litellm_fix_parallel_streaming | 2024-05-31 21:42:37 -07:00 |