Commit graph (18 commits)

Each entry lists the author, short commit SHA, commit message, and date.
Krish Dholakia
a9b64037a6 LiteLLM Minor Fixes & Improvements (10/17/2024) (#6293)
* fix(ui_sso.py): fix faulty admin only check

Fixes https://github.com/BerriAI/litellm/issues/6286

* refactor(sso_helper_utils.py): refactor /sso/callback to use helper utils, covered by unit testing

Prevent future regressions

* feat(prompt_factory): support 'ensure_alternating_roles' param

Closes https://github.com/BerriAI/litellm/issues/6257

* fix(proxy/utils.py): add dailytagspend to expected views

* feat(auth_utils.py): support setting regex for clientside auth credentials

Fixes https://github.com/BerriAI/litellm/issues/6203

* build(cookbook): add tutorial for mlflow + langchain + litellm proxy tracing

* feat(argilla.py): add argilla logging integration

Closes https://github.com/BerriAI/litellm/issues/6201

* fix: fix linting errors

* fix: fix ruff error

* test: fix test

* fix: update vertex ai assumption - parts not always guaranteed (#6296)

* docs(configs.md): add argilla env var to docs
2024-10-17 22:09:11 -07:00
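The 'ensure_alternating_roles' param referenced in #6257 above is, at a high level, about coercing a chat history into strict user/assistant alternation, which some providers require. Below is a minimal, hypothetical sketch of that idea; the filler messages and function name are assumptions, not LiteLLM's actual prompt_factory code.

```python
from typing import Dict, List

# Hypothetical filler contents; the real implementation may use configurable
# user/assistant "continue" messages instead.
DEFAULT_USER_FILLER = {"role": "user", "content": "Please continue."}
DEFAULT_ASSISTANT_FILLER = {"role": "assistant", "content": "Ok."}


def ensure_alternating_roles(messages: List[Dict]) -> List[Dict]:
    """Return a copy of `messages` where user/assistant roles strictly alternate.

    Consecutive messages with the same role get a filler inserted between them,
    and a leading assistant message gets a user filler placed in front.
    Illustrative sketch only.
    """
    fixed: List[Dict] = []
    for msg in messages:
        if fixed and fixed[-1]["role"] == msg["role"]:
            filler = (
                DEFAULT_ASSISTANT_FILLER if msg["role"] == "user" else DEFAULT_USER_FILLER
            )
            fixed.append(dict(filler))
        fixed.append(msg)
    if fixed and fixed[0]["role"] == "assistant":
        fixed.insert(0, dict(DEFAULT_USER_FILLER))
    return fixed


if __name__ == "__main__":
    history = [
        {"role": "user", "content": "hi"},
        {"role": "user", "content": "are you there?"},
        {"role": "assistant", "content": "yes"},
    ]
    print(ensure_alternating_roles(history))
```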
Krish Dholakia
730171536f LiteLLM Minor Fixes & Improvements (09/23/2024) (#5842) (#5858)
* LiteLLM Minor Fixes & Improvements (09/23/2024)  (#5842)

* feat(auth_utils.py): enable admin to allow client-side credentials to be passed

Makes it easier for devs to experiment with finetuned fireworks ai models

* feat(router.py): allow setting configurable_clientside_auth_params for a model

Closes https://github.com/BerriAI/litellm/issues/5843

* build(model_prices_and_context_window.json): fix anthropic claude-3-5-sonnet max output token limit

Fixes https://github.com/BerriAI/litellm/issues/5850

* fix(azure_ai/): support content list for azure ai

Fixes https://github.com/BerriAI/litellm/issues/4237

* fix(litellm_logging.py): always set saved_cache_cost

Set to 0 by default

* fix(fireworks_ai/cost_calculator.py): add fireworks ai default pricing

handles calling 405b+ size models

* fix(slack_alerting.py): fix error alerting for failed spend tracking

Fixes regression with slack alerting error monitoring

* fix(vertex_and_google_ai_studio_gemini.py): handle gemini no candidates in streaming chunk error

* docs(bedrock.md): add llama3-1 models

* test: fix tests

* fix(azure_ai/chat): fix transformation for azure ai calls
2024-09-24 15:01:31 -07:00
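The fireworks_ai cost-calculator fix above falls back to size-based default pricing when a fine-tuned or unlisted model (such as a 405b+ deployment) has no entry in the price map. A rough sketch of that idea follows; the prices, threshold, and helper names are made up for illustration, and the real values live in model_prices_and_context_window.json.

```python
import re
from typing import Dict, Optional

# Made-up example prices (USD per token), used only to illustrate the fallback.
ABOVE_128B_PRICING = {"input_cost_per_token": 3e-06, "output_cost_per_token": 3e-06}
BELOW_128B_PRICING = {"input_cost_per_token": 9e-07, "output_cost_per_token": 9e-07}


def _model_size_in_b(model: str) -> Optional[float]:
    """Pull a parameter count like '405b' or '70b' out of the model name."""
    match = re.search(r"(\d+(?:\.\d+)?)b", model.lower())
    return float(match.group(1)) if match else None


def default_fireworks_pricing(model: str) -> Dict[str, float]:
    """Pick a size-based default price when the model has no explicit entry."""
    size = _model_size_in_b(model)
    if size is not None and size > 128:
        return ABOVE_128B_PRICING
    return BELOW_128B_PRICING


print(default_fireworks_pricing("accounts/fireworks/models/llama-v3p1-405b-instruct"))
```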
Ishaan Jaff
c226c55d8b fix re-add virtual key auth checks on vertex ai pass thru endpoints (#5827) 2024-09-21 17:34:10 -07:00
Krish Dholakia
501b6f5bac Allow client-side credentials to be sent to proxy (accept only if complete credentials are given) (#5575)
* feat: initial commit

* fix(proxy/auth/auth_utils.py): Allow client-side credentials to be given to the proxy (accept only if complete credentials are given)
2024-09-06 19:21:54 -07:00
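The #5575 change above gates client-side credentials behind an "all or nothing" rule: the proxy only honours request-supplied credentials when the caller provides the complete set, so a partial override cannot silently mix with server-side secrets. A hypothetical version of that check, with an assumed field list and function name:

```python
from typing import Dict

# Hypothetical required set; the real list depends on the provider/deployment.
REQUIRED_CLIENTSIDE_FIELDS = ("api_base", "api_key", "api_version")


def has_complete_clientside_credentials(request_body: Dict) -> bool:
    """Accept client-supplied credentials only if every required field is present.

    Returns True when the caller passed all required fields, False when they
    passed none, and raises if only a partial set was given.
    """
    provided = [f for f in REQUIRED_CLIENTSIDE_FIELDS if request_body.get(f)]
    if not provided:
        return False
    if len(provided) != len(REQUIRED_CLIENTSIDE_FIELDS):
        missing = set(REQUIRED_CLIENTSIDE_FIELDS) - set(provided)
        raise ValueError(f"Partial client-side credentials; missing: {missing}")
    return True
```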
Ishaan Jaff
bfb0aceeae add check for admin only routes 2024-09-03 15:03:32 -07:00
Ishaan Jaff
cf66ca89b9 allow setting allowed routes on proxy 2024-09-03 13:59:31 -07:00
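The two commits above add route-level gating: certain routes are reserved for admin keys, and the proxy can optionally restrict callers to a configured allow-list of routes. Conceptually that check might look like the sketch below; the route names and role string are assumptions, not LiteLLM's actual definitions.

```python
from fnmatch import fnmatch
from typing import Iterable

# Hypothetical examples; the real admin-only routes are defined in the proxy.
ADMIN_ONLY_ROUTES = ["/key/generate", "/user/new", "/team/new"]


def is_route_allowed(route: str, user_role: str, allowed_routes: Iterable[str]) -> bool:
    """Reject non-admin access to admin-only routes and enforce an allow-list."""
    if route in ADMIN_ONLY_ROUTES and user_role != "proxy_admin":
        return False
    # An empty allow-list means "no restriction" in this sketch.
    allowed = list(allowed_routes)
    if allowed and not any(fnmatch(route, pattern) for pattern in allowed):
        return False
    return True


print(is_route_allowed("/key/generate", "internal_user", ["/chat/*", "/key/*"]))  # False
```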
Ishaan Jaff
15f1ead87f allow pass through routes as LLM API routes 2024-08-30 16:08:44 -07:00
Ishaan Jaff
c30fd9a775 fix auth checks for provider routes 2024-08-29 16:40:46 -07:00
Ishaan Jaff
a62277a6aa feat - use common helper for getting model group 2024-08-17 10:46:04 -07:00
Krrish Dholakia
4ba576724c test: improve debugging for test 2024-08-05 19:41:08 -07:00
Ishaan Jaff
3d8befa655 fix get_request_route 2024-08-05 10:33:40 -07:00
Ishaan Jaff
9abe55fa86 add get_request_route 2024-08-05 10:12:34 -07:00
Ishaan Jaff
5f07afa268 feat - check max response size 2024-07-27 16:53:00 -07:00
Ishaan Jaff
a18f5bd5c8 security - check max request size 2024-07-27 16:08:41 -07:00
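The two size-limit commits above bound how large a request body (and response) the proxy will handle. One common way to enforce the request-size half is to compare the Content-Length header, or the read body length, against a configured maximum before doing any work. A rough sketch under those assumptions, using a plain function rather than LiteLLM's actual middleware:

```python
from typing import Optional

MAX_REQUEST_SIZE_MB = 32  # hypothetical default; the real limit is configurable


def check_request_size(content_length_header: Optional[str], body: bytes) -> None:
    """Raise if the incoming request exceeds the configured size limit.

    Prefers the Content-Length header when present (cheap to check) and
    falls back to the actual body length otherwise.
    """
    limit_bytes = MAX_REQUEST_SIZE_MB * 1024 * 1024
    declared = int(content_length_header) if content_length_header else len(body)
    if declared > limit_bytes:
        raise ValueError(
            f"Request size {declared} bytes exceeds limit of {limit_bytes} bytes"
        )


check_request_size("1024", b"")  # within the limit, no error
```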
Ishaan Jaff
5f238f2857 check is_llm_api_route 2024-07-22 14:43:30 -07:00
Ishaan Jaff
c8a15ab83e add helper to check is_openai_route 2024-07-09 11:50:12 -07:00
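The is_openai_route helper added here (later generalised to is_llm_api_route, and extended so pass-through routes count as LLM API routes in the commits further up) is essentially membership/prefix matching of the request path against the known OpenAI-compatible endpoints. A simplified sketch, with an assumed subset of routes and prefixes:

```python
# Hypothetical subset; the real list covers every OpenAI-compatible and
# pass-through endpoint the proxy exposes.
OPENAI_ROUTES = {"/chat/completions", "/completions", "/embeddings", "/models"}
PASS_THROUGH_PREFIXES = ("/vertex-ai/", "/gemini/", "/cohere/")


def is_llm_api_route(route: str) -> bool:
    """Return True if the path is an LLM API call rather than a management route."""
    # Strip an optional versioned prefix such as /v1 before matching.
    normalized = route[len("/v1"):] if route.startswith("/v1/") else route
    if normalized in OPENAI_ROUTES:
        return True
    return any(normalized.startswith(prefix) for prefix in PASS_THROUGH_PREFIXES)


assert is_llm_api_route("/v1/chat/completions")
assert not is_llm_api_route("/key/generate")
```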
Ishaan Jaff
90ad55416b fix importing litellm 2024-06-24 19:58:53 -07:00
Ishaan Jaff
7ea4c7b328 add helper to check route_in_additonal_public_routes 2024-06-24 19:50:35 -07:00