| Name | Last commit message | Last commit date |
| --- | --- | --- |
| adapters | fix(anthropic_adapter.py): fix sync streaming | 2024-08-03 20:52:29 -07:00 |
| assistants | add async assistants delete support | 2024-07-10 11:14:40 -07:00 |
| batches | test batches endpoint on proxy | 2024-07-30 09:46:30 -07:00 |
| deprecated_litellm_server | | |
| files | fix linting checks | 2024-07-30 16:55:17 -07:00 |
| fine_tuning | test translating to vertex ai params | 2024-08-03 08:44:54 -07:00 |
| integrations | show warning about prometheus moving to enterprise | 2024-08-07 12:46:26 -07:00 |
| litellm_core_utils | fix use get_file_check_sum | 2024-08-08 17:19:12 -07:00 |
| llms | fix(huggingface_restapi.py): support passing 'wait_for_model' param on completion calls | 2024-08-09 09:25:19 -07:00 |
| proxy | feat(vertex_httpx.py): return vertex grounding, citation, and safety results | 2024-08-09 08:54:55 -07:00 |
| router_strategy | control using enable_tag_filtering | 2024-07-18 22:40:51 -07:00 |
| router_utils | fix logging cool down deployment | 2024-08-07 11:27:05 -07:00 |
| tests | fix(huggingface_restapi.py): support passing 'wait_for_model' param on completion calls | 2024-08-09 09:25:19 -07:00 |
| types | fix(anthropic.py): handle scenario where anthropic returns invalid json string for tool call while streaming | 2024-08-07 09:24:11 -07:00 |
| __init__.py | fix(internal_user_endpoints.py): expose new 'internal_user_budget_duration' flag | 2024-08-08 17:19:05 -07:00 |
| _logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00 |
| _redis.py | | |
| _service_logger.py | fix handle case when service logger has no attribute prometheusServicesLogger | 2024-08-08 17:19:12 -07:00 |
| _version.py | | |
| budget_manager.py | | |
| caching.py | feat: hash prompt when caching | 2024-08-08 23:25:27 -07:00 |
| cost.json | | |
| cost_calculator.py | fix(cost_calculator.py): respect litellm.suppress_debug_info for cost calc | 2024-08-01 18:07:38 -07:00 |
| exceptions.py | fix: fix tests | 2024-08-07 15:02:04 -07:00 |
| main.py | Merge pull request #5079 from BerriAI/litellm_add_pydantic_model_support | 2024-08-07 14:43:05 -07:00 |
| model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): Fixes https://github.com/BerriAI/litellm/issues/5113 | 2024-08-08 09:11:59 -07:00 |
| py.typed | | |
| requirements.txt | | |
| router.py | feat(router.py): allow using .acompletion() for request prioritization | 2024-08-07 16:43:12 -07:00 |
| scheduler.py | feat(scheduler.py): support redis caching for req. prioritization | 2024-06-06 14:19:21 -07:00 |
| timeout.py | | |
| utils.py | fix(utils.py): handle anthropic overloaded error | 2024-08-08 17:19:12 -07:00 |