Commit graph

13364 commits

Author SHA1 Message Date
Ishaan Jaff
aa5ac6ba3d can_team_access_model 2025-03-10 20:03:19 -07:00
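The team-model access check named in the commit above can be sketched generically. This is a hypothetical illustration, not litellm's actual implementation; the wildcard value `"all-proxy-models"` and the empty-list-means-unrestricted rule are assumptions:

```python
def can_team_access_model(model: str, team_models: list[str]) -> bool:
    """Return True if a team may call `model`.

    An empty list or a wildcard entry (assumed here to be
    "all-proxy-models") grants access to every model; otherwise the
    model must be explicitly listed for the team.
    """
    if not team_models or "all-proxy-models" in team_models:
        return True
    return model in team_models
```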
Utkash Dubey
412289a0d0 update backup json as well so test passes 2025-03-10 19:58:47 -07:00
Krrish Dholakia
f56c5ca380 feat: working e2e credential management - support reusing existing credentials 2025-03-10 19:29:24 -07:00
Emerson Gomes
27de9669c2 Merge branch 'main' into azure_models 2025-03-10 21:23:59 -05:00
Emerson Gomes
3d85aa63a7 sync model db copy 2025-03-10 21:19:04 -05:00
Ishaan Jaff
0d6df360bf test_can_team_access_model fix 2025-03-10 19:09:50 -07:00
Ishaan Jaff
9dcc25d63b Merge branch 'main' into litellm_fix_team_model_access_checks 2025-03-10 19:05:11 -07:00
Krrish Dholakia
2ec7830b66 feat: complete crud endpoints for credential management on proxy 2025-03-10 18:46:35 -07:00
Krish Dholakia
c58941d49c Merge branch 'main' into litellm_dev_03_06_2025_p4 2025-03-10 18:41:10 -07:00
Krrish Dholakia
0b5deb2756 fix: fix type 2025-03-10 18:38:40 -07:00
Krrish Dholakia
507640bc8f fix(endpoints.py): encrypt credentials before storing in db 2025-03-10 18:37:59 -07:00
Krrish Dholakia
a962a97fcb feat(endpoints.py): support writing credentials to db 2025-03-10 18:27:43 -07:00
Ishaan Jaff
6dcf64918c Merge pull request #9102 from BerriAI/litellm_add_atext_completion_on_ui
(Feat) - Allow adding Text-Completion OpenAI models through UI
2025-03-10 18:11:00 -07:00
Krrish Dholakia
f1cdc26967 feat(endpoints.py): initial set of crud endpoints for reusable credentials on proxy 2025-03-10 17:48:02 -07:00
Krrish Dholakia
fdd5ba3084 feat(credential_accessor.py): support loading in credentials from credential_list
Resolves https://github.com/BerriAI/litellm/issues/9114
2025-03-10 17:15:58 -07:00
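The credential_accessor commit above describes loading reusable credentials from a top-level `credential_list`. A minimal lookup sketch, where the field names (`credential_name`, `credential_values`) are illustrative assumptions rather than the actual schema:

```python
def resolve_credential(credential_list: list[dict], name: str) -> dict:
    """Find a named reusable credential and return its values.

    Field names here are assumptions for illustration only.
    """
    for cred in credential_list:
        if cred.get("credential_name") == name:
            return cred.get("credential_values", {})
    raise KeyError(f"credential {name!r} not found")
```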
Krrish Dholakia
4bd4bb16fd feat(proxy_server.py): move credential list to being a top-level param 2025-03-10 17:04:05 -07:00
Krrish Dholakia
5458b08425 fix(router.py): comment out azure/openai client init - not necessary 2025-03-10 16:47:43 -07:00
Ishaan Jaff
ce35240273 Pre-Submission checklist 2025-03-10 16:21:53 -07:00
Krrish Dholakia
68bd05ac24 fix(base_invoke_transformation.py): support extra_headers on bedrock invoke route
Fixes https://github.com/BerriAI/litellm/issues/9106
2025-03-10 16:13:11 -07:00
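Supporting `extra_headers` on a request route, as in the bedrock invoke fix above, generally reduces to merging caller-supplied headers over the defaults. A hedged sketch of that pattern, not the actual transformation code:

```python
from typing import Optional


def build_request_headers(
    default_headers: dict, extra_headers: Optional[dict] = None
) -> dict:
    """Merge caller-supplied extra_headers over default headers.

    Caller-supplied values win on key collisions.
    """
    headers = dict(default_headers)
    if extra_headers:
        headers.update(extra_headers)
    return headers
```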
Ishaan Jaff
94667e1cf0 Merge pull request #8386 from minwhoo/triton-completions-streaming-fix
Fix triton streaming completions bug
2025-03-10 16:07:19 -07:00
Krrish Dholakia
bfbe26b91d feat(azure.py): add azure bad request error support 2025-03-10 15:59:06 -07:00
Ishaan Jaff
ea058ab4ea Merge pull request #8746 from niinpatel/patch-1
fix missing comma
2025-03-10 15:57:32 -07:00
Ishaan Jaff
929f3424dc Merge pull request #8845 from vivek-athina/main
Added tags, user_feedback and model_options to additional_keys which can be sent to athina
2025-03-10 15:53:55 -07:00
Krrish Dholakia
f688fc8138 feat(proxy_server.py): check code before defaulting to status code 2025-03-10 15:34:06 -07:00
Krrish Dholakia
5f87dc229a feat(openai.py): bubble all error information back to client 2025-03-10 15:27:43 -07:00
Krrish Dholakia
c1ec82fbd5 refactor: instrument body param to bubble up on exception 2025-03-10 15:21:04 -07:00
Ishaan Jaff
c1a3cb82a9 docs on contributing 2025-03-10 14:49:27 -07:00
Ishaan Jaff
0fcce63852 Merge pull request #9032 from themrzmaster/feat/jamba_1.6
pricing for jamba new models
2025-03-10 13:59:37 -07:00
Ishaan Jaff
7319fef29d fix linting error 2025-03-10 13:57:50 -07:00
Ishaan Jaff
05ad7a67a7 Revert "ui new build"
This reverts commit 34694d3057.
2025-03-10 13:56:10 -07:00
Ishaan Jaff
34694d3057 ui new build 2025-03-10 12:32:18 -07:00
Ishaan Jaff
666690c31c fix atext_completion 2025-03-10 10:18:03 -07:00
omrishiv
0674491386 add support for Amazon Nova Canvas model (#7838)
* add initial support for Amazon Nova Canvas model

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* adjust name to AmazonNovaCanvas and map function variables to config

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* tighten model name check

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* fix quality mapping

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add premium quality in config

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* support all Amazon Nova Canvas tasks

* remove unused import

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add tests for image generation tasks and fix payload

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add missing util file

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* update model prices backup file

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* remove image tasks other than text->image

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

---------

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
2025-03-10 08:02:00 -07:00
vivek-athina
cd4a53d6f2 Merge pull request #4 from BerriAI/main
Update main
2025-03-10 11:13:21 +05:30
Krrish Dholakia
574f5056c8 fix(utils.py): fix linting error 2025-03-09 20:47:12 -07:00
Krish Dholakia
f899b828cf Support openrouter reasoning_content on streaming (#9094)
* feat(convert_dict_to_response.py): support openrouter format of reasoning content

* fix(transformation.py): fix openrouter streaming with reasoning content

Fixes https://github.com/BerriAI/litellm/issues/8193#issuecomment-270892962

* fix: fix type error
2025-03-09 20:03:59 -07:00
5aaee9
42b7921ca1 fix: perplexity return both delta and message cause OpenWebUI repect text (#9081) 2025-03-09 19:46:31 -07:00
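The perplexity fix above points at a general rule for OpenAI-style streaming: a chunk should carry incremental text only in `delta`; if an upstream provider also returns a full `message`, dropping it prevents clients (such as OpenWebUI) from rendering the text twice. A hedged sketch of that normalization, not litellm's actual code:

```python
def to_stream_choice(raw_choice: dict) -> dict:
    """Normalize a provider's streaming choice to delta-only form.

    Drops any full "message" field so clients that concatenate deltas
    don't see the same text twice; guarantees a "delta" key exists.
    """
    choice = {k: v for k, v in raw_choice.items() if k != "message"}
    choice.setdefault("delta", {})
    return choice
```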
Krish Dholakia
65ef65d360 feat: prioritize api_key over tenant_id for more Azure AD token provi… (#8701)
* feat: prioritize api_key over tenant_id for more Azure AD token provider (#8318)

* fix: prioritize api_key over tenant_id for Azure AD token provider

* test: Add test for Azure AD token provider in router

* fix: fix linting error

---------

Co-authored-by: you-n-g <you-n-g@users.noreply.github.com>
2025-03-09 18:59:37 -07:00
Krrish Dholakia
2c5b2da955 fix: make type object subscriptable 2025-03-09 18:35:10 -07:00
Krish Dholakia
e00d4fb18c Litellm dev 03 08 2025 p3 (#9089)
* feat(ollama_chat.py): pass down http client to ollama_chat

enables easier testing

* fix(factory.py): fix passing images to ollama's `/api/generate` endpoint

Fixes https://github.com/BerriAI/litellm/issues/6683

* fix(factory.py): fix ollama pt to handle templating correctly
2025-03-09 18:20:56 -07:00
Ishaan Jaff
b6eee01381 Revert "experimental - track anthropic messages as mode"
This reverts commit 22b3862e0d.
2025-03-08 17:38:24 -08:00
Ishaan Jaff
22b3862e0d experimental - track anthropic messages as mode 2025-03-08 17:33:35 -08:00
Ishaan Jaff
b41311bb21 (UI) - Fix show correct count of internal user keys on Users Page (#9082)
* get_user_key_counts

* fix get_user_key_counts

* fix get_user_key_counts

* test_get_users_filters_dashboard_keys

* remove unused func
2025-03-08 16:13:18 -08:00
Ishaan Jaff
73df319f4e (Clean up) - Allow switching off storing Error Logs in DB (#9084)
* fix - cleanup, dont store ErrorLogs in 2 tables

* async_post_call_failure_hook

* docs disable error logs

* disable_error_logs
2025-03-08 16:12:03 -08:00
Krish Dholakia
4330ef8e81 Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077)
* feat(batches/): fix batch cost calculation - ensure it's accurate

use the correct cost value - prev. defaulting to non-batch cost

* feat(batch_utils.py): log batch models to spend logs + standard logging payload

makes it easy to understand how cost was calculated

* fix: fix stored payload for test

* test: fix test
2025-03-08 11:47:25 -08:00
Teja Vishwanadha
8c049dfffc support bytes.IO for audio transcription (#9071) 2025-03-08 08:47:15 -08:00
Ishaan Jaff
e2d612efd9 Bug fix - String data: stripped from entire content in streamed Gemini responses (#9070)
* _strip_sse_data_from_chunk

* use _strip_sse_data_from_chunk

* use _strip_sse_data_from_chunk

* use _strip_sse_data_from_chunk

* _strip_sse_data_from_chunk

* test_strip_sse_data_from_chunk

* _strip_sse_data_from_chunk

* testing

* _strip_sse_data_from_chunk
2025-03-07 21:06:39 -08:00
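The `_strip_sse_data_from_chunk` work above addresses a classic server-sent-events parsing bug: stripping every occurrence of `data: ` instead of only the leading field marker mangles any model output that happens to contain that string. A minimal sketch of the safe behavior (an illustration, not litellm's actual helper):

```python
def strip_sse_data_prefix(chunk: str) -> str:
    """Remove only a leading SSE "data: " field marker from a chunk.

    Occurrences of "data: " inside the payload itself are preserved,
    so streamed content containing that literal string is not altered.
    """
    prefix = "data: "
    if chunk.startswith(prefix):
        return chunk[len(prefix):]
    return chunk
```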
Krish Dholakia
0e3caf92b9 UI - new API Playground for testing LiteLLM translation (#9073)
* feat: initial commit - enable dev to see translated request

* feat(utils.py): expose new endpoint - `/utils/transform_request` to see the raw request sent by litellm

* feat(transform_request.tsx): allow user to see their transformed request

* refactor(litellm_logging.py): return raw request in 3 parts - api_base, headers, request body

easier to render each individually on UI vs. extracting from combined string

* feat: transform_request.tsx

working e2e raw request viewing

* fix(litellm_logging.py): fix transform viewing for bedrock models

* fix(litellm_logging.py): don't return sensitive headers in raw request headers

prevent accidental leak

* feat(transform_request.tsx): style improvements
2025-03-07 19:39:31 -08:00
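The "don't return sensitive headers in raw request headers" commit above reflects a standard redaction pattern before echoing a request back to a UI. A hedged sketch; the header names below are common examples, not the proxy's actual deny-list:

```python
# Assumed deny-list of header names whose values must never be echoed.
SENSITIVE_HEADERS = {"authorization", "api-key", "x-api-key"}


def redact_headers(headers: dict) -> dict:
    """Mask values of known-sensitive headers, case-insensitively."""
    return {
        k: "*****" if k.lower() in SENSITIVE_HEADERS else v
        for k, v in headers.items()
    }
```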
Ishaan Jaff
b5eeafdd72 (Docs) OpenWeb x LiteLLM Docker compose + Instructions on spend tracking + logging (#9059)
* docs improve open web ui litellm doc

* docs openweb show teams + keys

* docs open web ui litellm
2025-03-07 17:01:39 -08:00
Krrish Dholakia
36f3276d8c docs: update docs
2025-03-07 11:00:12 -08:00