* feat(view_logs.tsx): show model id + api base in request logs
easier debugging
* fix(index.tsx): fix length of api base
easier viewing
* refactor(leftnav.tsx): show models tab to team admin
* feat(model_dashboard.tsx): add explainer for what the 'models' page is for team admin
helps them understand how they can use it
* feat(model_management_endpoints.py): restrict model add by team to just team admin
allow team admin to add models via non-team keys (e.g. ui token)
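Roughly, the permission rule these two commits imply is: proxy admins can add any model, and a team admin can add a model scoped to a team they administer, even when calling with a non-team key such as their UI session token. A minimal sketch of that check is below; the class and field names are assumptions for illustration, not the actual `model_management_endpoints.py` code.

```python
# Illustrative sketch only -- the class and field names are assumptions,
# not the actual model_management_endpoints.py implementation.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CallerInfo:
    user_id: str
    user_role: str                        # e.g. "proxy_admin" or "internal_user"
    teams_admin_of: List[str] = field(default_factory=list)


def can_add_model(caller: CallerInfo, team_id: Optional[str]) -> bool:
    """Proxy admins may add any model; team admins may add models to their own team."""
    if caller.user_role == "proxy_admin":
        return True
    # Team admins qualify even when calling with a non-team key (e.g. a UI token),
    # as long as the model is being added to a team they administer.
    return team_id is not None and team_id in caller.teams_admin_of


if __name__ == "__main__":
    admin = CallerInfo("u1", "internal_user", teams_admin_of=["team-a"])
    print(can_add_model(admin, "team-a"))   # True  - admin of that team
    print(can_add_model(admin, "team-b"))   # False - not an admin of team-b
```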
* test(test_add_update_models.py): update unit testing for new behaviour
* fix(model_dashboard.tsx): show the user their models
* feat(proxy_server.py): add new query param 'user_models_only' to `/v2/model/info`
Allows user to retrieve just the models they've added
Used in UI to show internal users just the models they've added
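As a usage sketch (assuming a locally running proxy and a valid virtual key; only the endpoint and the `user_models_only` parameter come from the commit above):

```python
# Hedged usage sketch: assumes a LiteLLM proxy on localhost:4000 and a valid
# virtual key; the endpoint and query param are the ones named above.
import requests

resp = requests.get(
    "http://localhost:4000/v2/model/info",
    params={"user_models_only": "true"},    # return only models this user added
    headers={"Authorization": "Bearer sk-my-virtual-key"},
    timeout=10,
)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model.get("model_name"))
```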
* feat(model_dashboard.tsx): allow team admins to view their own models
* fix: allow ui user to fetch model cost map
* feat(add_model_tab.tsx): require team admins to specify team when onboarding models
* fix(_types.py): add `/v1/model/info` to info route
`/model/info` was already there
* fix(model_info_view.tsx): allow user to edit a model they created
* fix(model_management_endpoints.py): allow team admin to update team model
* feat(model_management_endpoints.py): allow team admin to delete team models
* fix(model_management_endpoints.py): don't require team id to be set when adding a model
* fix(proxy_server.py): fix linting error
* fix: fix ui linting error
* fix(model_management_endpoints.py): ensure consistent auth checks on all model calls
* test: remove old test - function no longer exists in same form
* test: add updated mock testing
* build(ui/): initial commit allowing user to click into key from request logs
allows easier debugging of 'what key is this?'
* refactor: introduce new transformation config for gpt-4o-transcribe models
* refactor: expose new transformation configs for audio transcription
* ci: fix config yml
* feat(openai/transcriptions): support provider config transformation on openai audio transcriptions
allows gpt-4o and whisper audio transcription to work as expected
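The transformation-config pattern referred to here keeps the provider-specific shaping of the `/audio/transcriptions` request in a dedicated config class instead of inline branching. A minimal sketch of the idea, with the base-class and method names as assumptions rather than LiteLLM's real interface:

```python
# Minimal sketch of a provider transcription config; the base-class name and
# method signature are assumptions, not LiteLLM's real interface.
from abc import ABC, abstractmethod
from typing import Any, Dict


class BaseAudioTranscriptionConfig(ABC):
    @abstractmethod
    def transform_request(self, model: str, optional_params: Dict[str, Any]) -> Dict[str, Any]:
        """Map generic params onto the provider's /audio/transcriptions payload."""


class OpenAIGPTTranscriptionConfig(BaseAudioTranscriptionConfig):
    # gpt-4o transcription models accept a narrower parameter set than whisper-1,
    # so unsupported keys are dropped here instead of being forwarded verbatim.
    SUPPORTED = {"language", "prompt", "response_format", "temperature"}

    def transform_request(self, model: str, optional_params: Dict[str, Any]) -> Dict[str, Any]:
        data = {k: v for k, v in optional_params.items() if k in self.SUPPORTED}
        data["model"] = model
        return data


if __name__ == "__main__":
    cfg = OpenAIGPTTranscriptionConfig()
    print(cfg.transform_request("gpt-4o-transcribe", {"temperature": 0, "timestamp_granularities": ["word"]}))
```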
* refactor: migrate fireworks ai + deepgram to new transform request pattern
* feat(openai/): working support for gpt-4o-transcribe
* build(model_prices_and_context_window.json): add gpt-4o-transcribe to model cost map
* build(model_prices_and_context_window.json): specify what endpoints are supported for `/audio/transcriptions`
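For orientation, a cost-map entry of this kind roughly looks like the following; the structure mirrors what the two commits describe, but the field names and prices below are placeholders, not the published values:

```python
# Placeholder showing the general shape implied by the two commits above; the
# field names and numbers are illustrative, not the published cost-map values.
cost_map_entry = {
    "gpt-4o-transcribe": {
        "litellm_provider": "openai",
        "mode": "audio_transcription",
        "supported_endpoints": ["/v1/audio/transcriptions"],
        "input_cost_per_token": 2.5e-06,       # placeholder text-token rate
        "input_cost_per_audio_token": 6e-06,   # placeholder audio-token rate
        "output_cost_per_token": 1e-05,        # placeholder output rate
    }
}

print(cost_map_entry["gpt-4o-transcribe"]["supported_endpoints"])
```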
* fix(get_supported_openai_params.py): fix return
* refactor(deepgram/): migrate unit test to deepgram handler
* refactor: cleanup unused imports
* fix(get_supported_openai_params.py): fix linting error
* test: update test
* fix(vertex_and_google_ai_studio_gemini.py): log gemini audio tokens in usage object
enables accurate cost tracking
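Cost tracking only becomes accurate if the usage object records how many of the prompt tokens were audio rather than text. A minimal sketch of that split, assuming a Gemini-style `usageMetadata` payload and OpenAI-style `prompt_tokens_details` fields (the exact names are assumptions):

```python
# Minimal sketch: split Gemini's prompt-token count into text vs. audio so the
# cost calculator can price each modality separately. The usageMetadata shape
# and the prompt_tokens_details field names are assumptions here.
from dataclasses import dataclass


@dataclass
class PromptTokensDetails:
    text_tokens: int = 0
    audio_tokens: int = 0


@dataclass
class Usage:
    prompt_tokens: int
    completion_tokens: int
    prompt_tokens_details: PromptTokensDetails


def usage_from_gemini_metadata(metadata: dict) -> Usage:
    """Build a usage object from Gemini-style usageMetadata, keeping audio tokens."""
    audio = sum(
        d.get("tokenCount", 0)
        for d in metadata.get("promptTokensDetails", [])
        if d.get("modality") == "AUDIO"
    )
    prompt = metadata.get("promptTokenCount", 0)
    return Usage(
        prompt_tokens=prompt,
        completion_tokens=metadata.get("candidatesTokenCount", 0),
        prompt_tokens_details=PromptTokensDetails(
            text_tokens=prompt - audio, audio_tokens=audio
        ),
    )


if __name__ == "__main__":
    meta = {
        "promptTokenCount": 1200,
        "candidatesTokenCount": 80,
        "promptTokensDetails": [{"modality": "AUDIO", "tokenCount": 900}],
    }
    print(usage_from_gemini_metadata(meta))
```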
* refactor(vertex_ai/cost_calculator.py): refactor 128k+ token cost calculation to only run if model info has it
Google has moved away from this for gemini-2.0 models
* refactor(vertex_ai/cost_calculator.py): migrate to usage object for more flexible data passthrough
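A condensed sketch of the gating described above: the 128k pricing tier is applied only when the model-info entry actually defines it, so gemini-2.0 entries that omit the key fall through to flat pricing. The key name below is an assumption modelled on the cost map, not a guaranteed field:

```python
# Sketch of gating the above-128k pricing tier on the model-info entry itself;
# the "above_128k" key name is an assumption modelled on the cost map.
def prompt_cost(prompt_tokens: int, model_info: dict) -> float:
    above_128k_rate = model_info.get("input_cost_per_token_above_128k_tokens")
    # Only apply the tiered rate when the cost map defines it; gemini-2.0
    # entries simply omit the key and fall back to flat pricing.
    if above_128k_rate is not None and prompt_tokens > 128_000:
        return prompt_tokens * above_128k_rate
    return prompt_tokens * model_info["input_cost_per_token"]


print(prompt_cost(200_000, {"input_cost_per_token": 1e-06}))   # flat pricing
print(prompt_cost(200_000, {"input_cost_per_token": 1e-06,
                            "input_cost_per_token_above_128k_tokens": 2e-06}))  # tiered
```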
* fix(llm_cost_calc/utils.py): support audio token cost tracking in generic cost per token
enables vertex ai cost tracking to work with audio tokens
* fix(llm_cost_calc/utils.py): default to total prompt tokens if text tokens field not set
* refactor(llm_cost_calc/utils.py): move openai cost tracking to generic cost per token
more consistent behaviour across providers
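Putting the last three items together, the generic per-token calculation prices text and audio prompt tokens separately and, when no text-token field is present, falls back to the total prompt-token count. An illustrative sketch (the usage fields and cost-map keys are assumptions, not the exact litellm helpers):

```python
# Illustrative generic cost-per-token helper; the usage fields and cost-map
# keys are assumptions modelled on the commits above.
def generic_cost_per_token(usage: dict, model_info: dict) -> tuple:
    text_tokens = usage.get("text_tokens")
    # Fall back to the total prompt-token count when no text-token field is set.
    if text_tokens is None:
        text_tokens = usage["prompt_tokens"]
    audio_tokens = usage.get("audio_tokens", 0)

    prompt_cost = (
        text_tokens * model_info["input_cost_per_token"]
        + audio_tokens * model_info.get("input_cost_per_audio_token", 0.0)
    )
    completion_cost = usage["completion_tokens"] * model_info["output_cost_per_token"]
    return prompt_cost, completion_cost


if __name__ == "__main__":
    info = {
        "input_cost_per_token": 1e-06,
        "input_cost_per_audio_token": 4e-06,   # placeholder rates
        "output_cost_per_token": 2e-06,
    }
    # 400 text + 600 audio prompt tokens: 400*1e-6 + 600*4e-6 = 0.0028
    usage = {"prompt_tokens": 1000, "text_tokens": 400, "audio_tokens": 600, "completion_tokens": 100}
    print(generic_cost_per_token(usage, info))
    # No text-token breakdown: the full 1,000 prompt tokens are priced as text.
    print(generic_cost_per_token({"prompt_tokens": 1000, "completion_tokens": 100}, info))
```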
* test: add unit test for gemini audio token cost calculation
* ci: bump ci config
* test: fix test