litellm-mirror/ui/litellm-dashboard
Krish Dholakia 0a2a51a5a5 UI - Allow admin to control default model access for internal users (#8912)
* fix(create_user_button.tsx): allow admin to set the models a user has access to, on invite

Enables controlling model access on invite

* feat(auth_checks.py): enforce 'no-model-access' special model name on backend

prevent users from calling models if the default key has no model access

* fix(chat_ui.tsx): allow user to input custom model

* fix(chat_ui.tsx): pull available models based on the models the key has access to (a sketch of this pattern follows the commit log)

* style(create_user_button.tsx): move default model inside 'personal key creation' accordion

* fix(chat_ui.tsx): fix linting error

* test(test_auth_checks.py): add unit-test for special model name

* docs(internal_user_endpoints.py): update docstring

* fix test_moderations_bad_model

* Litellm dev 02 27 2025 p6 (#8891)

* fix(http_parsing_utils.py): orjson can throw errors on some emojis in text, default to json.loads

* fix(sagemaker/handler.py): support passing model id on async streaming

* fix(litellm_pre_call_utils.py): Fixes https://github.com/BerriAI/litellm/issues/7237

* Fix calling claude via invoke route + response_format support for claude on invoke route (#8908)

* fix(anthropic_claude3_transformation.py): fix amazon anthropic claude 3 tool calling transformation on invoke route

move to using anthropic config as base

* fix(utils.py): expose anthropic config via providerconfigmanager

* fix(llm_http_handler.py): support json mode on async completion calls

* fix(invoke_handler/make_call): support json mode for anthropic called via bedrock invoke

* fix(anthropic/): handle `response_format: {"type": "text"}` + migrate amazon claude 3 invoke config to inherit from anthropic config

Prevents error when passing in `response_format: {"type": "text"}`

* test: fix test

* fix(utils.py): fix base invoke provider check

* fix(anthropic_claude3_transformation.py): don't pass 'stream' param

* fix: fix linting errors

* fix(converse_transformation.py): handle response_format type=text for converse

* converse_transformation: pass 'description' if set in response_format (#8907)

* test(test_bedrock_completion.py): e2e test ensuring tool description is passed in

* fix(converse_transformation.py): pass description, if set

* fix(transformation.py): Fixes https://github.com/BerriAI/litellm/issues/8767#issuecomment-2689887663

* Fix bedrock passing `response_format: {"type": "text"}` (#8900)

* fix(converse_transformation.py): ignore `type: text` value in response_format

no-op for bedrock

* fix(converse_transformation.py): handle adding response format value to tools

* fix(base_invoke_transformation.py): fix 'get_bedrock_invoke_provider' to handle cross-region-inferencing models

* test(test_bedrock_completion.py): add unit testing for bedrock invoke provider logic

* test: update test

* fix(exception_mapping_utils.py): add context window exceeded error handling for databricks provider route

* fix(fireworks_ai/): support passing tools + response_format together

* fix: cleanup

* fix(base_invoke_transformation.py): fix imports

* (Feat) - Show Error Logs on LiteLLM UI (#8904)

* fix test_moderations_bad_model

* use async_post_call_failure_hook

* basic logging errors in DB

* show status on ui

* show status on ui

* ui show request / response side by side

* stash fixes

* working, track raw request

* track error info in metadata

* fix showing error / request / response logs

* show traceback on error viewer

* ui with traceback of error

* fix async_post_call_failure_hook

* fix(http_parsing_utils.py): orjson can throw errors on some emojis in text, default to json.loads

* test_get_error_information

* fix code quality

* rename proxy track cost callback test

* _should_store_errors_in_spend_logs

* feature flag error logs

* Revert "_should_store_errors_in_spend_logs"

This reverts commit 7f345df477.

* Revert "feature flag error logs"

This reverts commit 0e90c022bb.

* test_spend_logs_payload

* fix OTEL log_db_metrics

* fix import json

* fix ui linting error

* test_async_post_call_failure_hook

* test_chat_completion_bad_model_with_spend_logs

---------

Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>

* ui new build

* test_chat_completion_bad_model_with_spend_logs

* docs(release_cycle.md): document release cycle

* bump: version 1.62.0 → 1.62.1

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2025-02-28 23:23:03 -08:00
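
As referenced in the chat_ui.tsx entry above, here is a hedged sketch of how a dashboard can pull the models a key has access to, assuming the proxy exposes the OpenAI-compatible /v1/models endpoint; the helper name and response shape are illustrative, not the dashboard's actual code:

```ts
// Hypothetical helper (illustrative only): list the models the given key
// can access via the OpenAI-compatible model-listing endpoint.
export async function fetchAvailableModels(
  proxyBaseUrl: string,
  apiKey: string,
): Promise<string[]> {
  const res = await fetch(`${proxyBaseUrl}/v1/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`Failed to list models: ${res.status}`);
  }
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}
```

The chat UI can populate its model dropdown from this list while still letting the user type a custom model name, per the commits above.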
| Name | Last commit message | Date |
|------|---------------------|------|
| out | ui new build | 2025-02-28 20:12:06 -08:00 |
| public | build(ui/litellm-dashboard): initial commit of litellm dashboard | 2024-01-27 12:12:48 -08:00 |
| src | UI - Allow admin to control default model access for internal users (#8912) | 2025-02-28 23:23:03 -08:00 |
| .eslintrc.json | build(ui/litellm-dashboard): initial commit of litellm dashboard | 2024-01-27 12:12:48 -08:00 |
| build_ui.sh | (ui) fix build command | 2024-02-21 21:02:46 -08:00 |
| build_ui_custom_path.sh | build ui on custom path | 2024-08-05 16:34:37 -07:00 |
| next.config.mjs | use correct build paths | 2024-08-05 15:59:50 -07:00 |
| package-lock.json | (UI) Fixes for managing Internal Users (#8786) | 2025-02-24 23:40:13 -08:00 |
| package.json | (UI) Fixes for managing Internal Users (#8786) | 2025-02-24 23:40:13 -08:00 |
| postcss.config.js | build(ui/litellm-dashboard): initial commit of litellm dashboard | 2024-01-27 12:12:48 -08:00 |
| README.md | build(ui/litellm-dashboard): initial commit of litellm dashboard | 2024-01-27 12:12:48 -08:00 |
| tailwind.config.js | (ui) adjust size | 2024-03-28 23:27:23 -07:00 |
| tailwind.config.ts | (ui) use indigo theme | 2024-02-03 18:35:32 -08:00 |
| tsconfig.json | build(ui/litellm-dashboard): initial commit of litellm dashboard | 2024-01-27 12:12:48 -08:00 |
| ui_colors.json | ui - fix filter by color scheme | 2024-06-03 18:39:32 -07:00 |

This is a Next.js project bootstrapped with create-next-app.

Getting Started

First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

Open http://localhost:3000 with your browser to see the result.

You can start editing the page by modifying app/page.tsx. The page auto-updates as you edit the file.
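
For orientation, here is a minimal sketch of an App Router page component; the heading content is a made-up placeholder, not the dashboard's actual app/page.tsx:

```tsx
// Minimal App Router page: Next.js renders the default export of
// app/page.tsx at the "/" route. Placeholder content only.
export default function Home() {
  return (
    <main>
      <h1>LiteLLM Dashboard</h1>
    </main>
  );
}
```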

This project uses next/font to automatically optimize and load Inter, a custom Google Font.
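
A typical next/font setup loads Inter once in the root layout and applies it app-wide; this is a sketch assuming the standard next/font/google API, not this project's exact app/layout.tsx:

```tsx
// Sketch of an app/layout.tsx using next/font: the font files are
// self-hosted at build time, so no runtime request goes to Google.
import { Inter } from "next/font/google";
import type { ReactNode } from "react";

const inter = Inter({ subsets: ["latin"] });

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
```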

Learn More

To learn more about Next.js, take a look at the following resources:

Next.js Documentation (https://nextjs.org/docs) - learn about Next.js features and API.
Learn Next.js (https://nextjs.org/learn) - an interactive Next.js tutorial.

You can check out the Next.js GitHub repository (https://github.com/vercel/next.js) - your feedback and contributions are welcome!

Deploy on Vercel

The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.

Check out our Next.js deployment documentation for more details.