litellm-mirror/litellm
Commit e00d4fb18c by Krish Dholakia
Litellm dev 03 08 2025 p3 (#9089)
* feat(ollama_chat.py): pass down http client to ollama_chat

enables easier testing

* fix(factory.py): fix passing images to ollama's `/api/generate` endpoint

Fixes https://github.com/BerriAI/litellm/issues/6683

* fix(factory.py): fix ollama pt to handle templating correctly
2025-03-09 18:20:56 -07:00
Name | Last commit | Date
assistants | Revert "fix: add missing parameters order, limit, before, and after in get_as…" (#7542) | 2025-01-03 16:32:12 -08:00
batch_completion | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
batches | Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077) | 2025-03-08 11:47:25 -08:00
caching | Support caching on reasoning content + other fixes (#8973) | 2025-03-04 21:12:16 -08:00
files | Litellm dev 03 04 2025 p3 (#8997) | 2025-03-04 21:58:03 -08:00
fine_tuning | fix linting | 2025-02-14 21:42:51 -08:00
integrations | build: merge litellm_dev_03_01_2025_p2 | 2025-03-03 23:05:41 -08:00
litellm_core_utils | Litellm dev 03 08 2025 p3 (#9089) | 2025-03-09 18:20:56 -07:00
llms | Litellm dev 03 08 2025 p3 (#9089) | 2025-03-09 18:20:56 -07:00
proxy | Litellm dev 03 08 2025 p3 (#9089) | 2025-03-09 18:20:56 -07:00
realtime_api | (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) | 2024-12-28 18:38:54 -08:00
rerank_api | Add new gpt-4.5-preview model + other updates (#8879) | 2025-02-27 15:27:14 -08:00
router_strategy | fix code quality | 2025-02-18 21:29:23 -08:00
router_utils | fix(route_llm_request.py): move to using common router, even for clie… (#8966) | 2025-03-03 22:57:08 -08:00
secret_managers | (AWS Secret Manager) - Using K/V pairs in 1 AWS Secret (#9039) | 2025-03-06 19:30:18 -08:00
types | Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077) | 2025-03-08 11:47:25 -08:00
__init__.py | [Feat] - Display thinking tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) (#9029) | 2025-03-06 18:32:58 -08:00
_logging.py | (sdk perf fix) - only print args passed to litellm when debugging mode is on (#7708) | 2025-01-11 22:56:20 -08:00
_redis.py | (Redis Cluster) - Fixes for using redis cluster + pipeline (#8442) | 2025-02-12 18:01:32 -08:00
_service_logger.py | fix svc logger (#7727) | 2025-01-12 22:00:25 -08:00
_version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
budget_manager.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
constants.py | Fix calling claude via invoke route + response_format support for claude on invoke route (#8908) | 2025-02-28 17:56:26 -08:00
cost.json | |
cost_calculator.py | Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077) | 2025-03-08 11:47:25 -08:00
exceptions.py | fix(main.py): fix key leak error when unknown provider given (#8556) | 2025-02-15 14:02:55 -08:00
main.py | Litellm dev 03 08 2025 p3 (#9089) | 2025-03-09 18:20:56 -07:00
model_prices_and_context_window_backup.json | Revert "experimental - track anthropic messages as mode" | 2025-03-08 17:38:24 -08:00
py.typed | |
router.py | Support master key rotations (#9041) | 2025-03-06 23:13:30 -08:00
scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30
timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
utils.py | Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077) | 2025-03-08 11:47:25 -08:00