| Name | Last commit message | Last commit date |
| --- | --- | --- |
| assistants | refactor(azure.py): refactor to have client init work across all endpoints | 2025-03-11 17:27:24 -07:00 |
| batch_completion | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| batches | refactor(batches/main.py): working refactored azure client init on batches | 2025-03-11 14:36:38 -07:00 |
| caching | update redisvl dependency | 2025-03-24 08:42:11 -04:00 |
| experimental_mcp_client | Merge pull request #9642 from BerriAI/litellm_mcp_improvements_expose_sse_urls | 2025-03-29 20:04:43 -07:00 |
| files | refactor(azure.py): refactor to have client init work across all endpoints | 2025-03-11 17:27:24 -07:00 |
| fine_tuning | fix linting | 2025-02-14 21:42:51 -08:00 |
| integrations | default to use SLP for GCS PubSub | 2025-03-24 15:21:59 -07:00 |
| litellm_core_utils | Merge pull request #9642 from BerriAI/litellm_mcp_improvements_expose_sse_urls | 2025-03-29 20:04:43 -07:00 |
| llms | fix(proxy_server.py): get master key from environment, if not set in … (#9617) | 2025-03-28 12:32:04 -07:00 |
| proxy | ui new build | 2025-03-29 20:05:20 -07:00 |
| realtime_api | fix(aim.py): fix linting error | 2025-03-13 15:32:42 -07:00 |
| rerank_api | Add new gpt-4.5-preview model + other updates (#8879) | 2025-02-27 15:27:14 -08:00 |
| responses | MockResponsesAPIStreamingIterator | 2025-03-20 12:25:58 -07:00 |
| router_strategy | Revert "Fix latency redis" | 2025-03-19 18:11:22 -07:00 |
| router_utils | fix(handle_error.py): make cooldown error more descriptive | 2025-03-21 10:46:35 -07:00 |
| secret_managers | fix if | 2025-03-11 09:27:31 +00:00 |
| types | Merge pull request #9642 from BerriAI/litellm_mcp_improvements_expose_sse_urls | 2025-03-29 20:04:43 -07:00 |
| __init__.py | run ci/cd again | 2025-03-29 20:34:59 -07:00 |
| _logging.py | (sdk perf fix) - only print args passed to litellm when debugging mode is on (#7708) | 2025-01-11 22:56:20 -08:00 |
| _redis.py | fix(redis_cache.py): add 5s default timeout | 2025-03-17 14:27:36 -07:00 |
| _service_logger.py | fix svc logger (#7727) | 2025-01-12 22:00:25 -08:00 |
| _version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| budget_manager.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| constants.py | Merge pull request #9642 from BerriAI/litellm_mcp_improvements_expose_sse_urls | 2025-03-29 20:04:43 -07:00 |
| cost.json | | |
| cost_calculator.py | Support Gemini audio token cost tracking + fix openai audio input token cost tracking (#9535) | 2025-03-26 17:26:25 -07:00 |
| exceptions.py | feat(openai.py): bubble all error information back to client | 2025-03-10 15:27:43 -07:00 |
| main.py | Add OpenAI gpt-4o-transcribe support (#9517) | 2025-03-26 23:10:25 -07:00 |
| model_prices_and_context_window_backup.json | Add OpenAI gpt-4o-transcribe support (#9517) | 2025-03-26 23:10:25 -07:00 |
| py.typed | | |
| router.py | Merge pull request #9473 from BerriAI/litellm_dev_03_22_2025_p2 | 2025-03-22 21:57:15 -07:00 |
| scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| utils.py | Support discovering gemini, anthropic, xai models by calling their /v1/model endpoint (#9530) | 2025-03-27 22:50:48 -07:00 |