litellm-mirror/litellm

Latest commit: 1d31e25816 by Ishaan Jaff — Merge pull request #9183 from BerriAI/litellm_router_responses_api_2 ("[Feat] - Add Responses API on LiteLLM Proxy"), 2025-03-12 21:28:16 -07:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| assistants | refactor(azure.py): refactor to have client init work across all endpoints | 2025-03-11 17:27:24 -07:00 |
| batch_completion | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| batches | refactor(batches/main.py): working refactored azure client init on batches | 2025-03-11 14:36:38 -07:00 |
| caching | fix(llm_caching_handler.py): handle no current event loop error | 2025-03-12 12:29:25 -07:00 |
| files | refactor(azure.py): refactor to have client init work across all endpoints | 2025-03-11 17:27:24 -07:00 |
| fine_tuning | fix linting | 2025-02-14 21:42:51 -08:00 |
| integrations | use ProxyBaseLLMRequestProcessing | 2025-03-12 16:54:33 -07:00 |
| litellm_core_utils | Merge branch 'main' into litellm_dev_03_10_2025_p3 | 2025-03-12 14:56:01 -07:00 |
| llms | Merge branch 'main' into litellm_dev_03_10_2025_p3 | 2025-03-12 14:56:01 -07:00 |
| proxy | responses_api | 2025-03-12 20:38:05 -07:00 |
| realtime_api | (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) | 2024-12-28 18:38:54 -08:00 |
| rerank_api | Add new gpt-4.5-preview model + other updates (#8879) | 2025-02-27 15:27:14 -08:00 |
| responses | fix mypy linting errors | 2025-03-12 12:13:19 -07:00 |
| router_strategy | fix code quality | 2025-02-18 21:29:23 -08:00 |
| router_utils | test_openai_responses_litellm_router | 2025-03-12 16:13:48 -07:00 |
| secret_managers | fix if | 2025-03-11 09:27:31 +00:00 |
| types | working spend tracking + logging for response api | 2025-03-12 17:29:25 -07:00 |
| __init__.py | chore(init): update Azure default API version to 2025-02-01-preview | 2025-03-12 22:02:48 -06:00 |
| _logging.py | (sdk perf fix) - only print args passed to litellm when debugging mode is on (#7708) | 2025-01-11 22:56:20 -08:00 |
| _redis.py | (Redis Cluster) - Fixes for using redis cluster + pipeline (#8442) | 2025-02-12 18:01:32 -08:00 |
| _service_logger.py | fix svc logger (#7727) | 2025-01-12 22:00:25 -08:00 |
| _version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| budget_manager.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| constants.py | STREAM_SSE_DONE_STRING | 2025-03-12 09:33:28 -07:00 |
| cost.json | | |
| cost_calculator.py | Merge branch 'main' into litellm_responses_api_support | 2025-03-12 12:04:12 -07:00 |
| exceptions.py | feat(openai.py): bubble all error information back to client | 2025-03-10 15:27:43 -07:00 |
| main.py | Merge branch 'main' into litellm_dev_03_10_2025_p3 | 2025-03-12 14:56:01 -07:00 |
| model_prices_and_context_window_backup.json | Merge branch 'main' into litellm_dev_contributor_prs_03_10_2025_p1 | 2025-03-11 22:50:02 -07:00 |
| py.typed | | |
| router.py | _update_kwargs_with_default_litellm_params | 2025-03-12 19:26:12 -07:00 |
| scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| utils.py | Merge branch 'main' into litellm_dev_03_10_2025_p3 | 2025-03-12 14:56:01 -07:00 |