Name | Last commit message | Last commit date
adapters | fix(anthropic_adapter.py): fix sync streaming | 2024-08-03 20:52:29 -07:00
assistants | add async assistants delete support | 2024-07-10 11:14:40 -07:00
batches | fix: fix linting errors | 2024-08-22 15:51:59 -07:00
deprecated_litellm_server | refactor: add black formatting | 2023-12-25 14:11:20 +05:30
files | fix: fix linting errors | 2024-08-22 15:51:59 -07:00
fine_tuning | test translating to vertex ai params | 2024-08-03 08:44:54 -07:00
integrations | prometheus - safe update start / end time | 2024-08-28 16:13:56 -07:00
litellm_core_utils | fix(utils.py): correctly log streaming cache hits (#5417) (#5426) | 2024-08-28 22:50:33 -07:00
llms | fix(google_ai_studio): working context caching (#5421) | 2024-08-29 07:00:30 -07:00
proxy | add set / update tags for a team | 2024-08-29 13:05:00 -07:00
rerank_api | add test for rerank on custom api base | 2024-08-27 18:25:51 -07:00
router_strategy | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00
router_utils | fix(cooldown_cache.py): fix linting errors | 2024-08-27 07:40:28 -07:00
tests | add test_team_tags to set / update tags | 2024-08-29 13:02:57 -07:00
types | fix(google_ai_studio): working context caching (#5421) | 2024-08-29 07:00:30 -07:00
__init__.py | Merge pull request #5393 from BerriAI/litellm_gemini_embedding_support | 2024-08-28 13:46:28 -07:00
_logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00
_redis.py | feat(caching.py): redis cluster support | 2024-08-21 15:01:52 -07:00
_service_logger.py | fix handle case when service logger has no attribute prometheusServicesLogger | 2024-08-08 17:19:12 -07:00
_version.py | (fix) ci/cd don't let importing litellm._version block starting proxy | 2024-02-01 16:23:16 -08:00
budget_manager.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00
caching.py | feat(vertex_ai_context_caching.py): check gemini cache, if key already exists | 2024-08-26 22:19:01 -07:00
cost.json | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00
cost_calculator.py | feat(cost_calculator.py): only override base model if custom pricing is set | 2024-08-19 16:05:49 -07:00
exceptions.py | fix: fix tests | 2024-08-07 15:02:04 -07:00
main.py | Merge pull request #5393 from BerriAI/litellm_gemini_embedding_support | 2024-08-28 13:46:28 -07:00
model_prices_and_context_window_backup.json | Merge branch 'main' into litellm_main_staging | 2024-08-28 18:05:27 -07:00
py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00
requirements.txt | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00
router.py | fix(router.py): fix cooldown check | 2024-08-28 16:38:42 -07:00
scheduler.py | feat(scheduler.py): support redis caching for req. prioritization | 2024-06-06 14:19:21 -07:00
timeout.py | refactor: add black formatting | 2023-12-25 14:11:20 +05:30
utils.py | fix(utils.py): correctly log streaming cache hits (#5417) (#5426) | 2024-08-28 22:50:33 -07:00