Name | Last commit message | Last commit date
adapters | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
assistants | Revert "fix: add missing parameters order, limit, before, and after in get_as…" (#7542) | 2025-01-03 16:32:12 -08:00
batch_completion | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
batches | (Feat - Batches API) add support for retrieving vertex api batch jobs (#7661) | 2025-01-09 18:35:03 -08:00
caching | (Refactor / QA) - Use LoggingCallbackManager to append callbacks and ensure no duplicate callbacks are added (#8112) | 2025-01-30 19:35:50 -08:00
files | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
fine_tuning | (feat) POST /fine_tuning/jobs support passing vertex specific hyper params (#7490) | 2025-01-01 07:44:48 -08:00
integrations | (Bug Fix - Langfuse) - fix for when model response has choices=[] (#8339) | 2025-02-06 18:02:26 -08:00
litellm_core_utils | (Bug Fix - Langfuse) - fix for when model response has choices=[] (#8339) | 2025-02-06 18:02:26 -08:00
llms | Azure OpenAI improvements - o3 native streaming, improved tool call + response format handling (#8292) | 2025-02-05 19:38:58 -08:00
proxy | fix(utils.py): handle key error in msg validation (#8325) | 2025-02-06 18:13:46 -08:00
realtime_api | (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) | 2024-12-28 18:38:54 -08:00
rerank_api | (feat) /batches - track user_api_key_alias, user_api_key_team_alias etc for /batch requests (#7401) | 2024-12-24 17:44:28 -08:00
router_strategy | Litellm dev 01 30 2025 p2 (#8134) | 2025-01-30 22:18:53 -08:00
router_utils | Add attempted-retries and timeout values to response headers + more testing (#7926) | 2025-01-22 22:19:44 -08:00
secret_managers | fix: add default credential for azure (#7095) (#7891) | 2025-01-21 09:01:49 -08:00
types | (Feat) - Add support for structured output on bedrock/nova models + add util litellm.supports_tool_choice (#8264) | 2025-02-04 21:47:16 -08:00
__init__.py | fix test_models_by_provider | 2025-02-05 19:01:00 -08:00
_logging.py | (sdk perf fix) - only print args passed to litellm when debugging mode is on (#7708) | 2025-01-11 22:56:20 -08:00
_redis.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
_service_logger.py | fix svc logger (#7727) | 2025-01-12 22:00:25 -08:00
_version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
budget_manager.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
constants.py | Complete o3 model support (#8183) | 2025-02-02 22:36:37 -08:00
cost.json | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00
cost_calculator.py | Fix custom pricing - separate provider info from model info (#7990) | 2025-01-25 21:49:28 -08:00
exceptions.py | LiteLLM Minor Fixes & Improvements (12/27/2024) - p1 (#7448) | 2024-12-27 19:04:39 -08:00
main.py | (Refactor) - migrate bedrock invoke to BaseLLMHTTPHandler class (#8290) | 2025-02-05 18:58:55 -08:00
model_prices_and_context_window_backup.json | Fixed meta llama 3.3 key for Databricks API (#8093) | 2025-02-06 18:05:49 -08:00
py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00
router.py | (Bug Fix - Langfuse) - fix for when model response has choices=[] (#8339) | 2025-02-06 18:02:26 -08:00
scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30
timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
utils.py | fix(utils.py): handle key error in msg validation (#8325) | 2025-02-06 18:13:46 -08:00