| Name | Latest commit | Date |
| --- | --- | --- |
| deprecated_litellm_server | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| integrations | fix: fix merge issues | 2024-02-13 23:04:12 -08:00 |
| llms | fix(vertex_ai.py): map finish reason | 2024-02-14 11:42:13 -08:00 |
| proxy | Merge pull request #1971 from BerriAI/litellm_fix_team_id | 2024-02-13 23:24:38 -08:00 |
| router_strategy | (feat) router - usage based routing - consider input_tokens | 2024-01-19 13:59:49 -08:00 |
| tests | fix(vertex_ai.py): map finish reason | 2024-02-14 11:42:13 -08:00 |
| types | (types) routerConfig | 2024-01-02 14:14:29 +05:30 |
| __init__.py | fix(vertex_ai.py): map finish reason | 2024-02-14 11:42:13 -08:00 |
| _logging.py | fix(proxy_server.py): update user cache to with new spend | 2024-02-06 23:06:05 -08:00 |
| _redis.py | fix(caching.py): use bulk writes and blockconnectionpooling for reads from Redis | 2024-01-13 11:50:50 +05:30 |
| _version.py | (fix) ci/cd don't let importing litellm._version block starting proxy | 2024-02-01 16:23:16 -08:00 |
| budget_manager.py | feat(utils.py): support region based pricing for bedrock + use bedrock's token counts if given | 2024-01-26 14:53:58 -08:00 |
| caching.py | (fix) s3 cache proxy - fix notImplemented error | 2024-02-13 16:34:43 -08:00 |
| cost.json | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| exceptions.py | fix(bedrock.py): add support for sts based boto3 initialization | 2024-01-17 12:08:59 -08:00 |
| main.py | refactor(main.py): trigger rebuild | 2024-02-14 11:55:54 -08:00 |
| model_prices_and_context_window_backup.json | (feat) add text-moderation OpenAI models | 2024-02-14 10:34:20 -08:00 |
| requirements.txt | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| router.py | (feat) support timeout on bedrock | 2024-02-09 17:42:17 -08:00 |
| timeout.py | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| utils.py | fix(utils.py): fix streaming rule calling | 2024-02-12 22:36:32 -08:00 |