* fix(cost_calculator.py): handle custom pricing at deployment level for router
* test: add unit tests
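
  As context for the two commits above, a minimal sketch of deployment-level custom pricing on the router (model name, keys, and prices are placeholders, not values from this PR): custom `input_cost_per_token` / `output_cost_per_token` set in a deployment's `litellm_params` should be what the cost calculation picks up.

  ```python
  # Sketch only: deployment-level custom pricing via litellm_params.
  import litellm
  from litellm import Router

  router = Router(
      model_list=[
          {
              "model_name": "my-gpt-4o",  # alias callers use
              "litellm_params": {
                  "model": "azure/my-gpt-4o-deployment",           # hypothetical deployment
                  "api_key": "sk-...",                             # placeholder
                  "api_base": "https://example.openai.azure.com",  # placeholder
                  # deployment-level custom pricing (per token)
                  "input_cost_per_token": 0.000002,
                  "output_cost_per_token": 0.000006,
              },
          }
      ]
  )

  response = router.completion(
      model="my-gpt-4o",
      messages=[{"role": "user", "content": "hello"}],
  )

  # With this fix, the computed cost should reflect the deployment-level
  # pricing above rather than the default price-table entry for the model.
  print(litellm.completion_cost(completion_response=response))
  ```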
* fix(router.py): show custom pricing on UI
check the correct model string
* fix: fix linting error
* docs(custom_pricing.md): clarify custom pricing for proxy
Fixes https://github.com/BerriAI/litellm/issues/8573#issuecomment-2790420740
* test: update code qa test
* fix: cleanup traceback
* fix: handle litellm param custom pricing
* test: update test
* fix(cost_calculator.py): add router model id to list of potential model names
* fix(cost_calculator.py): fix router model id check
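
  To illustrate the idea behind the two cost_calculator.py commits above (the helper below is hypothetical, not the code in this PR): the router's model id is treated as one more candidate key when resolving pricing, so overrides registered under that id are found.

  ```python
  from typing import Optional

  import litellm


  def resolve_model_cost_key(model: str, router_model_id: Optional[str] = None) -> Optional[str]:
      """Return the first candidate name that has an entry in litellm.model_cost."""
      candidates = [model]
      if router_model_id:
          candidates.append(router_model_id)  # the deployment's model_info.id
      for candidate in candidates:
          if candidate in litellm.model_cost:
              return candidate
      return None
  ```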
* fix: router.py - maintain older model registry approach
* fix: fix ruff check
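
  For reference, a hedged sketch of what a model-registry style registration looks like (the model name and prices are made up); this is illustrative context for the commit above, not the router code changed here.

  ```python
  import litellm

  # Register custom pricing into litellm's model cost map, so later cost
  # lookups for this model name resolve to these values.
  litellm.register_model(
      {
          "my-custom-model": {
              "input_cost_per_token": 0.000002,
              "output_cost_per_token": 0.000006,
              "litellm_provider": "openai",  # assumed provider for the example
              "mode": "chat",
          }
      }
  )

  print(litellm.model_cost["my-custom-model"]["input_cost_per_token"])
  ```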
* fix(router.py): router get deployment info
add custom pricing values to the mapped dict
* test: update test
* fix(utils.py): update only if value is non-null
* test: add unit test
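
  A small sketch of the "update only if non-null" behavior from the router/utils commits above (function and variable names are illustrative, not the exact code): custom values are merged into the mapped model-info dict without overwriting known values with None.

  ```python
  from typing import Any, Dict, Optional


  def update_if_non_null(existing: Dict[str, Any], updates: Dict[str, Optional[Any]]) -> Dict[str, Any]:
      """Copy values from `updates` into `existing`, skipping None values."""
      for key, value in updates.items():
          if value is not None:
              existing[key] = value
      return existing


  model_info = {"input_cost_per_token": 2e-06, "output_cost_per_token": 6e-06}
  update_if_non_null(model_info, {"input_cost_per_token": None, "max_tokens": 4096})
  # input_cost_per_token keeps its value; max_tokens is added.
  ```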
* fix(caching_routes.py): mask redis password on `/cache/ping` route
* fix(caching_routes.py): fix linting error
* fix(caching_routes.py): fix linting error on caching routes
* fix: fix test - ignore mask_dict, which contains a breakpoint
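
  An illustrative sketch of the masking behavior for `/cache/ping` (the key names and helper are assumptions, not the exact caching_routes.py code): sensitive cache parameters such as the redis password are redacted before being echoed back.

  ```python
  from typing import Any, Dict

  SENSITIVE_KEYS = {"password", "redis_password", "api_key"}


  def mask_sensitive_values(params: Dict[str, Any]) -> Dict[str, Any]:
      """Return a copy of `params` with sensitive values replaced by asterisks."""
      return {
          key: "********" if key.lower() in SENSITIVE_KEYS and value else value
          for key, value in params.items()
      }


  print(mask_sensitive_values({"host": "localhost", "port": 6379, "password": "hunter2"}))
  # -> {'host': 'localhost', 'port': 6379, 'password': '********'}
  ```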
* fix(azure.py): add timeout param + elapsed time in azure timeout error
* fix(http_handler.py): add elapsed time to http timeout request
makes it easier to debug how long the request took before failing
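
  A sketch of the elapsed-time idea from the last two commits (not the actual azure.py / http_handler.py code): time the request and include how long it ran, plus the configured timeout, in the timeout error.

  ```python
  import time

  import httpx


  def post_with_timing(url: str, payload: dict, timeout: float = 10.0) -> httpx.Response:
      """POST with a timeout; on timeout, report the elapsed time in the error."""
      start = time.time()
      try:
          with httpx.Client(timeout=timeout) as client:
              return client.post(url, json=payload)
      except httpx.TimeoutException as e:
          elapsed = time.time() - start
          raise httpx.TimeoutException(
              f"Request to {url} timed out after {elapsed:.2f}s (timeout={timeout}s)"
          ) from e
  ```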