litellm/litellm (latest commit: 2024-08-05 16:34:37 -07:00)
| Name | Last commit message | Date |
| --- | --- | --- |
| adapters | fix(anthropic_adapter.py): fix sync streaming | 2024-08-03 20:52:29 -07:00 |
| assistants | add async assistants delete support | 2024-07-10 11:14:40 -07:00 |
| batches | test batches endpoint on proxy | 2024-07-30 09:46:30 -07:00 |
| deprecated_litellm_server | | |
| files | fix linting checks | 2024-07-30 16:55:17 -07:00 |
| fine_tuning | test translating to vertex ai params | 2024-08-03 08:44:54 -07:00 |
| integrations | Merge pull request #5047 from BerriAI/litellm_log_request_response_gcs | 2024-08-05 09:05:56 -07:00 |
| litellm_core_utils | fix linting errors | 2024-08-05 08:54:04 -07:00 |
| llms | fix(ollama_chat.py): fix passing auth headers to ollama | 2024-08-05 09:33:09 -07:00 |
| proxy | build ui on custom path | 2024-08-05 16:34:37 -07:00 |
| router_strategy | control using enable_tag_filtering | 2024-07-18 22:40:51 -07:00 |
| router_utils | Revert "[Ui] add together AI, Mistral, PerplexityAI, OpenRouter models on Admin UI " | 2024-07-20 19:04:22 -07:00 |
| tests | fix test fine tuning api azure | 2024-08-05 11:08:13 -07:00 |
| types | fix(types/router.py): remove model_info pydantic field | 2024-08-05 09:58:44 -07:00 |
| __init__.py | fix linting errors | 2024-08-05 08:54:04 -07:00 |
| _logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00 |
| _redis.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| _service_logger.py | use common helpers for writing to otel | 2024-07-27 11:40:39 -07:00 |
| _version.py | | |
| budget_manager.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| caching.py | use file name when getting cache key | 2024-08-02 14:52:08 -07:00 |
| cost.json | | |
| cost_calculator.py | fix(cost_calculator.py): respect litellm.suppress_debug_info for cost calc | 2024-08-01 18:07:38 -07:00 |
| exceptions.py | fix: add type hints for APIError and AnthropicError status codes | 2024-08-01 18:07:38 -07:00 |
| main.py | Merge branch 'main' into litellm_anthropic_api_streaming | 2024-08-03 21:16:50 -07:00 |
| model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): update gpt-4o-mini max_output_tokens | 2024-08-05 09:30:18 -07:00 |
| py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| requirements.txt | | |
| router.py | fix(router.py): move deployment cooldown list message to error log, not client-side | 2024-08-03 12:49:39 -07:00 |
| scheduler.py | feat(scheduler.py): support redis caching for req. prioritization | 2024-06-06 14:19:21 -07:00 |
| timeout.py | | |
| utils.py | fix(utils.py): parse out aws specific params from openai call | 2024-08-03 12:04:44 -07:00 |