| Name | Last commit message | Last commit date |
| --- | --- | --- |
| deprecated_litellm_server | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| integrations | fix: support info level logging on pkg + proxy | 2024-01-20 17:45:47 -08:00 |
| llms | fix(ollama_chat.py): fix default token counting for ollama chat | 2024-01-24 20:09:17 -08:00 |
| proxy | (feat) view spend/logs by user_id, view spend/user by user | 2024-01-25 16:12:28 -08:00 |
| router_strategy | (feat) router - usage based routing - consider input_tokens | 2024-01-19 13:59:49 -08:00 |
| tests | fix(utils.py): completion_cost support for image gen models | 2024-01-25 18:08:18 -08:00 |
| types | (types) routerConfig | 2024-01-02 14:14:29 +05:30 |
| __init__.py | feat(proxy_server.py): support global budget and resets | 2024-01-24 14:27:13 -08:00 |
| _logging.py | (fix) alerting - show timestamps in alert | 2024-01-24 15:25:40 -08:00 |
| _redis.py | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| _version.py | formatting improvements | 2023-08-28 09:20:50 -07:00 |
| budget_manager.py | add headers to budget manager | 2024-01-18 16:10:45 -08:00 |
| caching.py | fix(caching.py): add logging module support for caching | 2024-01-20 17:34:29 -08:00 |
| cost.json | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| exceptions.py | fix(bedrock.py): add support for sts based boto3 initialization | 2024-01-17 12:08:59 -08:00 |
| main.py | feat(main.py): support auto-infering mode if not set | 2024-01-25 20:07:31 -08:00 |
| model_prices_and_context_window_backup.json | (fix) update back model prices with latest llms | 2023-12-11 10:56:01 -08:00 |
| requirements.txt | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| router.py | Merge pull request #1534 from BerriAI/litellm_custom_cooldown_times | 2024-01-23 08:05:59 -08:00 |
| timeout.py | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| utils.py | fix(utils.py): completion_cost support for image gen models | 2024-01-25 18:08:18 -08:00 |