litellm-mirror/litellm/llms/openai
Krish Dholakia 03eef5a2a0
Fix custom pricing - separate provider info from model info (#7990)
* fix(utils.py): initial commit fixing custom cost tracking

refactors provider-specific model info out of `get_model_info` - mixing the two was causing custom costs to be registered incorrectly

* fix(utils.py): clean up `_supports_factory` to fall back to provider info when model info is None

some providers support features like vision across all models

* fix(utils.py): refactor to use _supports_factory

* test: update testing

* fix: fix linting errors

* test: fix testing
2025-01-25 21:49:28 -08:00
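
A minimal sketch of the custom-pricing flow the commit above touches, using litellm's documented helpers `register_model` and `get_model_info`. The model name `my-custom-gpt` and the cost values are made-up placeholders for illustration, not anything taken from the commit.

```python
import litellm

# Register custom per-token pricing for a model.
# NOTE: "my-custom-gpt" and the cost values are illustrative placeholders.
litellm.register_model(
    {
        "my-custom-gpt": {
            "litellm_provider": "openai",
            "mode": "chat",
            "input_cost_per_token": 0.0000015,
            "output_cost_per_token": 0.000002,
        }
    }
)

# Read the entry back. Per the commit message, provider-level info is now kept
# separate from this per-model entry, so the registered custom costs are the
# ones used for cost tracking.
info = litellm.get_model_info(model="my-custom-gpt", custom_llm_provider="openai")
print(info["input_cost_per_token"], info["output_cost_per_token"])
```

Feature checks such as `litellm.supports_vision(model="gpt-4o")` work along the same lines; per the `_supports_factory` bullet above, they fall back to provider-level info when no per-model entry exists.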
chat                  Fix custom pricing - separate provider info from model info (#7990)   2025-01-25 21:49:28 -08:00
completion            Litellm dev 12 25 2025 p2 (#7420)   2024-12-25 18:35:34 -08:00
fine_tuning           (Feat) - new endpoint GET /v1/fine_tuning/jobs/{fine_tuning_job_id:path} (#7427)   2024-12-27 17:01:14 -08:00
image_variations      [BETA] Add OpenAI /images/variations + Topaz API support (#7700)   2025-01-11 23:27:46 -08:00
realtime              (code quality) run ruff rule to ban unused imports (#7313)   2024-12-19 12:33:42 -08:00
transcriptions        Litellm dev 12 23 2024 p1 (#7383)   2024-12-23 16:33:31 -08:00
common_utils.py       Litellm dev 12 30 2024 p2 (#7495)   2025-01-01 18:57:29 -08:00
cost_calculation.py   LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263)   2024-12-17 15:33:36 -08:00
openai.py             (Feat) Add x-litellm-overhead-duration-ms and "x-litellm-response-duration-ms" in response from LiteLLM (#7899)   2025-01-21 20:27:55 -08:00