* fix(proxy_server.py): get master key from the environment if it's not set in general settings, or if general settings isn't set at all
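A minimal sketch of that fallback order (`LITELLM_MASTER_KEY` is litellm's documented env var; the helper name and `general_settings` shape here are illustrative):

```python
import os
from typing import Optional

def resolve_master_key(general_settings: Optional[dict]) -> Optional[str]:
    # Prefer the key from general_settings when a config was provided
    if general_settings and general_settings.get("master_key"):
        return general_settings["master_key"]
    # Fall back to the environment when settings are absent or unset
    return os.getenv("LITELLM_MASTER_KEY")
```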
* test: mark flaky test
* test(test_proxy_server.py): mock prisma client
* ci: add new github workflow for testing just the mock tests
* fix: fix linting error
* ci(conftest.py): add conftest.py to isolate proxy tests
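One common shape for such a conftest.py, assuming module-level state in `litellm` is what leaks between proxy tests (the reload approach is an assumption, not necessarily the exact fixture added here):

```python
import importlib

import pytest

@pytest.fixture(autouse=True)
def reset_litellm_state():
    # Reload litellm so module-level settings from earlier tests don't leak
    import litellm
    importlib.reload(litellm)
    yield
```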
* build(pyproject.toml): add respx to dev dependencies
* build(pyproject.toml): add prisma to dev dependencies
* test: fix mock prompt management tests to use a mock Anthropic key
* ci(test-litellm.yml): parallelize mock testing
make it run faster
* build(pyproject.toml): add hypercorn as dev dep
* build(pyproject.toml): separate proxy vs. core dev dependencies
make it easier for non-proxy contributors to run tests locally - e.g. no need to install hypercorn
* ci(test-litellm.yml): pin python version
* test(test_rerank.py): move test - cannot be mocked, requires aws credentials for e2e testing
* ci: add thank you message to ci
* test: add mock env var to test
* test: add autouse to tests
* test: test mock env vars for e2e tests
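A sketch of what an autouse fixture for mock provider credentials can look like; the env var values are placeholders, not the keys used in CI:

```python
import pytest

@pytest.fixture(autouse=True)
def mock_provider_env(monkeypatch):
    # Fake credentials so e2e-style tests never need real provider keys
    monkeypatch.setenv("ANTHROPIC_API_KEY", "sk-ant-mock-key")
    monkeypatch.setenv("OPENAI_API_KEY", "sk-mock-key")
```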
* build: new ui build
* fix(proxy_server.py): only show user models their key can access on `/models`
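The gist of that filtering, as a hedged sketch (the helper name is illustrative; litellm's actual access check is more involved):

```python
def filter_models_for_key(all_models: list[str], key_models: list[str]) -> list[str]:
    # An empty key_models list conventionally means "no restriction"
    if not key_models:
        return all_models
    return [m for m in all_models if m in key_models]
```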
* fix(model_management_endpoints.py): ensure team admin can add models
* test: update unit testing to reflect changes
* fix(model_dashboard.tsx): fix sizing on models page
* build: fix ui
* feat(view_logs.tsx): show model id + api base in request logs
easier debugging
* fix(index.tsx): fix length of api base
easier viewing
* refactor(leftnav.tsx): show models tab to team admin
* feat(model_dashboard.tsx): add explainer for what the 'models' page is for team admin
helps them understand how they can use it
* feat(model_management_endpoints.py): restrict model add by team to just team admin
allow team admin to add models via non-team keys (e.g. ui token)
* test(test_add_update_models.py): update unit testing for new behaviour
* fix(model_dashboard.tsx): show the user their models
* feat(proxy_server.py): add new query param 'user_models_only' to `/v2/model/info`
Allows user to retrieve just the models they've added
Used in UI to show internal users just the models they've added
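Example call against the new query param (assumes a locally running proxy and a placeholder user key):

```python
import requests

resp = requests.get(
    "http://localhost:4000/v2/model/info",
    params={"user_models_only": "true"},
    headers={"Authorization": "Bearer sk-user-key"},  # placeholder key
)
print(resp.json())  # only the models this user added
```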
* feat(model_dashboard.tsx): allow team admins to view their own models
* fix: allow ui user to fetch model cost map
* feat(add_model_tab.tsx): require team admins to specify team when onboarding models
* fix(_types.py): add `/v1/model/info` to info route
`/model/info` was already there
* fix(model_info_view.tsx): allow user to edit a model they created
* fix(model_management_endpoints.py): allow team admin to update team model
* feat(model_management_endpoints.py): allow team admin to delete team models
* fix(model_management_endpoints.py): don't require team id to be set when adding a model
* fix(proxy_server.py): fix linting error
* fix: fix ui linting error
* fix(model_management_endpoints.py): ensure consistent auth checks on all model calls
* test: remove old test - function no longer exists in same form
* test: add updated mock testing
* fix(route_llm_request.py): move to using common router, even for client-side credentials
ensures fallbacks / cooldown logic still works
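Roughly, the change routes every call through the shared router instance, even when the request carries its own credentials; `acompletion` is the router's entrypoint, but the wiring below is assumed:

```python
async def route_request(data: dict, llm_router):
    # Going through the router (instead of calling the provider directly)
    # keeps fallback and cooldown handling in the request path, even when
    # data includes a client-supplied api_key / api_base.
    return await llm_router.acompletion(**data)
```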
* test(test_route_llm_request.py): add unit test for route request
* feat(router.py): generate unique model id when clientside credential passed in
Prevents cooldowns for api key 1 from impacting api key 2
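One way to derive such an id, as a sketch (the hashing scheme is an assumption; the point is that the id is stable per credential, so a cooldown stays scoped to one key):

```python
import hashlib

def clientside_model_id(model: str, api_key: str, api_base: str | None) -> str:
    # Same model + same credentials -> same id; different key -> different id
    digest = hashlib.sha256(f"{model}:{api_key}:{api_base}".encode()).hexdigest()
    return f"{model}-{digest[:12]}"
```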
* test(test_router.py): update testing to ensure original litellm params not mutated
* fix(router.py): upsert clientside call into llm router model list
enables cooldown logic to work accurately
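A hedged sketch of the upsert, assuming a router-like object whose `model_list` entries carry a `model_info.id` (litellm's Router tracks deployments this way, but the helper below is illustrative, not its exact API):

```python
def upsert_clientside_deployment(router, model_id: str, litellm_params: dict) -> None:
    existing_ids = {d["model_info"]["id"] for d in router.model_list}
    if model_id not in existing_ids:
        # Register the client-side deployment so cooldown bookkeeping
        # has a concrete entry to attach to
        router.model_list.append(
            {
                "model_name": litellm_params["model"],
                "litellm_params": litellm_params,
                "model_info": {"id": model_id},
            }
        )
```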
* fix: fix linting error
* test(test_router_utils.py): add direct test for new util on router
* ui: fix leftnav; allow internal users to view their own logs
* pass user_id in uiSpendLogs call
* ui filter logs for internal user
* fix internal users page
* ui: show correct message when storing prompts is disabled
* fix internal user logs
* test_ui_view_spend_logs_with_user_id
* test spend management endpoint
* fix test_moderations_bad_model
* use async_post_call_failure_hook
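A minimal custom-logger sketch around that hook; the signature below matches litellm's `CustomLogger` as I understand it, but treat the exact parameters as an assumption:

```python
from litellm.integrations.custom_logger import CustomLogger

class DBErrorLogger(CustomLogger):
    async def async_post_call_failure_hook(
        self, request_data, original_exception, user_api_key_dict
    ):
        # Persist the failure so it can surface on the UI logs page
        print(f"key={user_api_key_dict} error={original_exception!r}")
```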
* basic logging errors in DB
* show status on ui
* ui show request / response side by side
* stash fixes
* working, track raw request
* track error info in metadata
* fix showing error / request / response logs
* show traceback on error viewer
* ui with traceback of error
* fix async_post_call_failure_hook
* fix(http_parsing_utils.py): orjson can throw errors on some emojis in text, default to json.loads
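The shape of that fix: orjson is strict about some inputs (e.g. lone surrogates from certain emoji sequences) that the stdlib parser tolerates, so fall back on failure:

```python
import json

import orjson

def safe_json_loads(body: bytes):
    try:
        return orjson.loads(body)
    except orjson.JSONDecodeError:
        # stdlib json is more permissive about odd unicode in the payload
        return json.loads(body)
```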
* test_get_error_information
* fix code quality
* rename proxy track cost callback test
* _should_store_errors_in_spend_logs
* feature flag error logs
* Revert "_should_store_errors_in_spend_logs"
This reverts commit 7f345df477.
* Revert "feature flag error logs"
This reverts commit 0e90c022bb.
* test_spend_logs_payload
* fix OTEL log_db_metrics
* fix import json
* fix ui linting error
* test_async_post_call_failure_hook
* test_chat_completion_bad_model_with_spend_logs
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* test_openai_assistants_e2e_operations
* test openai assistants pass through
* fix GET request on pass through handler
* _make_non_streaming_http_request
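A hedged sketch of that helper's core rule when forwarding pass-through requests (`httpx` is litellm's HTTP client; the internals here are assumed):

```python
import httpx

async def _make_non_streaming_http_request(
    method: str, url: str, headers: dict, body: dict | None
) -> httpx.Response:
    async with httpx.AsyncClient() as client:
        if method.upper() == "GET":
            # GET requests must not carry a JSON body
            return await client.get(url, headers=headers)
        return await client.request(method, url, headers=headers, json=body)
```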
* _is_assistants_api_request
* test_openai_assistants_e2e_operations
* openai_proxy_route
* docs openai pass through
* test pass through handler
* Potential fix for code scanning alert no. 2240: Incomplete URL substring sanitization
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
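The standard fix for this alert class: compare the parsed hostname exactly instead of substring-matching the URL, which `https://api.openai.com.evil.com/` would pass (the helper name is illustrative):

```python
from urllib.parse import urlparse

def is_openai_url(url: str) -> bool:
    hostname = urlparse(url).hostname or ""
    return hostname == "api.openai.com"
```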
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* fix running litellm on Windows
* fix importing litellm
* _init_hypercorn_server
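A sketch of what that helper can look like; Hypercorn's `Config`/`serve` API is real, but the wiring below is assumed rather than the exact implementation:

```python
import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config

def _init_hypercorn_server(app, host: str, port: int) -> None:
    config = Config()
    config.bind = [f"{host}:{port}"]
    asyncio.run(serve(app, config))
```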
* linting fix
* TestProxyInitializationHelpers
* ci/cd run again