Mirror of https://github.com/BerriAI/litellm.git, last synced 2025-04-26 11:14:04 +00:00.
Latest commit (squashed):

* build(model_prices_and_context_window.json): add google/gemini-2.0-flash-lite-001 versioned pricing. Closes https://github.com/BerriAI/litellm/issues/9829
* build(model_prices_and_context_window.json): add initial support for the 'supported_output_modalities' param
* build(model_prices_and_context_window.json): add supported endpoints to gemini-2.5-pro
* build(model_prices_and_context_window.json): add gemini 200k+ pricing
* feat(utils.py): support cost calculation for gemini-2.5-pro above 200k tokens (see the sketch after this list). Fixes https://github.com/BerriAI/litellm/issues/9807
* build: test dockerfile change
* build: revert apk change
* ci(config.yml): pip install wheel
* ci: test problematic package first
* ci(config.yml): pip install only binary
* ci: try more things
* ci: test different ml_dtypes version
* ci(config.yml): check ml_dtypes==0.4.0
* ci: test
* ci: clean up config.yml
* ci: specify ml_dtypes in requirements.txt
* ci: remove redisvl dependency (temporary)
* fix: fix linting errors
* test: update test
* test: fix test
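The "cost calculation for gemini-2.5-pro above 200k tokens" item adds tiered pricing once a request crosses the 200k-token threshold. Below is a minimal sketch of how such tiered pricing could be represented and applied; the dict key names (e.g. `input_cost_per_token_above_200k_tokens`), the prices, and the `estimate_cost` helper are illustrative assumptions for this example, not values or APIs copied from litellm's model_prices_and_context_window.json or utils.py.

```python
# Illustrative sketch of tiered (above-200k-token) pricing.
# Key names and prices below are assumptions for the example only.
ILLUSTRATIVE_PRICING = {
    "gemini-2.5-pro": {
        "input_cost_per_token": 1.25e-06,
        "output_cost_per_token": 1.0e-05,
        # Assumed keys for the higher rate used once the prompt exceeds 200k tokens.
        "input_cost_per_token_above_200k_tokens": 2.5e-06,
        "output_cost_per_token_above_200k_tokens": 1.5e-05,
    }
}


def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Hypothetical helper: apply the above-200k rates to the whole request
    when the prompt crosses 200k tokens, otherwise use the base rates."""
    prices = ILLUSTRATIVE_PRICING[model]
    if prompt_tokens > 200_000:
        input_rate = prices["input_cost_per_token_above_200k_tokens"]
        output_rate = prices["output_cost_per_token_above_200k_tokens"]
    else:
        input_rate = prices["input_cost_per_token"]
        output_rate = prices["output_cost_per_token"]
    return prompt_tokens * input_rate + completion_tokens * output_rate


# Example: a 250k-token prompt with a 2k-token completion.
print(f"{estimate_cost('gemini-2.5-pro', 250_000, 2_000):.4f} USD")
```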
Directory listing (trailing slashes mark subdirectories):

- integrations/
- llms/
- mcp_server/
- passthrough_endpoints/
- proxy/management_endpoints/
- adapter.py
- caching.py
- completion.py
- embedding.py
- files.py
- fine_tuning.py
- guardrails.py
- rerank.py
- router.py
- scheduler.py
- services.py
- tag_management.py
- utils.py