fix(cost_calculator.py): handle custom pricing at deployment level for router (#9855)

* fix(cost_calculator.py): handle custom pricing at deployment level for router (see the pricing sketch after this changelog)

* test: add unit tests

* fix(router.py): show custom pricing on UI

check correct model str

* fix: fix linting error

* docs(custom_pricing.md): clarify custom pricing for proxy

Fixes https://github.com/BerriAI/litellm/issues/8573#issuecomment-2790420740

* test: update code qa test

* fix: cleanup traceback

* fix: handle litellm param custom pricing

* test: update test

* fix(cost_calculator.py): add router model id to list of potential model names

* fix(cost_calculator.py): fix router model id check

* fix: router.py - maintain older model registry approach

* fix: fix ruff check

* fix(router.py): router get deployment info

add custom values to mapped dict

* test: update test

* fix(utils.py): update only if value is non-null

* test: add unit test
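
For context, the fix lets per-token prices set on a single deployment's litellm_params flow through to cost calculation when requests go through the Router. Below is a minimal sketch of that setup; the model alias, endpoint, key, and prices are hypothetical placeholders, not values from this commit:

from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "llama-3-8b",
            "litellm_params": {
                "model": "openai/meta-llama/Meta-Llama-3-8B-Instruct",
                "api_base": "https://example.com/v1",  # placeholder endpoint
                "api_key": "sk-placeholder",           # placeholder key
                # deployment-level custom pricing (USD per token)
                "input_cost_per_token": 0.000001,
                "output_cost_per_token": 0.000002,
            },
        }
    ]
)

response = router.completion(
    model="llama-3-8b",
    messages=[{"role": "user", "content": "hello"}],
)
# with the fix, the tracked cost should reflect the deployment-level
# prices instead of the default pricing for the underlying model
print(response._hidden_params.get("response_cost"))
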
Author: Krish Dholakia
Date: 2025-04-09 22:13:10 -07:00
Commit: e1eb5e32c1 (parent: baa9bd6338)
16 changed files with 193 additions and 37 deletions


@@ -314,12 +314,14 @@ def test_get_model_info_custom_model_router():
                 "input_cost_per_token": 1,
                 "output_cost_per_token": 1,
                 "model": "openai/meta-llama/Meta-Llama-3-8B-Instruct",
+                "model_id": "c20d603e-1166-4e0f-aa65-ed9c476ad4ca",
             },
             "model_info": {
                 "id": "c20d603e-1166-4e0f-aa65-ed9c476ad4ca",
             }
         }
     ]
 )
 info = get_model_info("openai/meta-llama/Meta-Llama-3-8B-Instruct")
+info = get_model_info("c20d603e-1166-4e0f-aa65-ed9c476ad4ca")
 print("info", info)
 assert info is not None
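
As a usage note, the updated test implies two working lookup paths once the deployment is registered: by the litellm model string and, with this commit, by the router model id. A condensed sketch of the same flow, reusing the fixture values from the test above:

from litellm import Router, get_model_info

Router(
    model_list=[
        {
            "model_name": "meta-llama/Meta-Llama-3-8B-Instruct",
            "litellm_params": {
                "model": "openai/meta-llama/Meta-Llama-3-8B-Instruct",
                "input_cost_per_token": 1,
                "output_cost_per_token": 1,
            },
            "model_info": {"id": "c20d603e-1166-4e0f-aa65-ed9c476ad4ca"},
        }
    ]
)

# both lookups should now resolve the same deployment info,
# including its custom pricing
assert get_model_info("openai/meta-llama/Meta-Llama-3-8B-Instruct") is not None
assert get_model_info("c20d603e-1166-4e0f-aa65-ed9c476ad4ca") is not None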