Commit graph

21497 commits

Author SHA1 Message Date
Krrish Dholakia
8be8022914 docs(vertex_ai.md): document new vertex passthrough route
Some checks failed
Read Version from pyproject.toml / read-version (push) Successful in 44s
Helm unit test / unit-test (push) Successful in 51s
Publish Prisma Migrations / publish-migrations (push) Failing after 2m16s
2025-04-16 22:25:16 -07:00
Krrish Dholakia
ff81f48af3 bump: version 1.66.2 → 1.66.3 2025-04-16 22:20:10 -07:00
Krrish Dholakia
78c6d73dea build: new ui build 2025-04-16 22:11:53 -07:00
Ishaan Jaff
257e78ffb5 test fix vertex_ai/mistral-large@2407 2025-04-16 21:52:52 -07:00
Krish Dholakia
8ddaf3dfbc
fix(o_series_transformation.py): correctly map o4 to openai o_series model (#10079)
Fixes https://github.com/BerriAI/litellm/issues/10066
2025-04-16 21:51:31 -07:00
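The o_series fix above amounts to recognizing `o4*` model names as OpenAI o-series models. A minimal sketch of that kind of name check (the helper name and regex are illustrative, not LiteLLM's actual `o_series_transformation.py` implementation):

```python
import re

def is_openai_o_series(model: str) -> bool:
    """Heuristic sketch: o-series models are named o1, o3, o4-mini, etc."""
    # Strip an optional provider prefix like "openai/".
    name = model.split("/")[-1]
    # "o" followed by digits, then either end-of-name or a "-variant" suffix.
    return re.match(r"^o\d+(-|$)", name) is not None

print(is_openai_o_series("openai/o4-mini"))  # o4 models now map to o-series
print(is_openai_o_series("gpt-4o"))          # 4o is not an o-series model
```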
Krish Dholakia
c73a6a8d1e
Add new /vertex_ai/discovery route - enables calling AgentBuilder API routes (#10084)
* feat(llm_passthrough_endpoints.py): expose new `/vertex_ai/discovery/` endpoint

Allows calling Vertex AI discovery endpoints via passthrough, for AgentBuilder API calls

* refactor(llm_passthrough_endpoints.py): use common _base_vertex_proxy_route

Prevents duplicate code

* feat(llm_passthrough_endpoints.py): add vertex endpoint specific passthrough handlers
2025-04-16 21:45:51 -07:00
Ishaan Jaff
198922b26f test fixes for vertex mistral; this model was deprecated on vertex 2025-04-16 20:51:45 -07:00
Ishaan Jaff
c38146e180 test fix 2025-04-16 20:13:31 -07:00
Ishaan Jaff
cf801f9642 test fix vertex_ai/codestral 2025-04-16 20:01:36 -07:00
Ishaan Jaff
0ced13aec8
Virtual Keys: Filter by key alias (#10035) (#10085)
Co-authored-by: Christian Owusu <36159205+crisshaker@users.noreply.github.com>
2025-04-16 19:46:05 -07:00
Ishaan Jaff
2b14978d9d bump litellm proxy extras 2025-04-16 19:28:16 -07:00
Ishaan Jaff
12ccb954a6 ui new build 2025-04-16 19:23:04 -07:00
Ishaan Jaff
6220f3e7b8
[Feat SSO] Add LiteLLM SCIM Integration for Team and User management (#10072)
* fix NewUser response type

* add scim router

* add v0 scim v2 endpoints

* working scim transformation

* use 1 file for types

* fix scim firstname and givenName storage

* working SCIMErrorResponse

* working team / group provisioning on SCIM

* add SCIMPatchOp

* move scim folder

* fix import scim_router

* fix dont auto create scim keys

* add auth on all scim endpoints

* add is_virtual_key_allowed_to_call_route

* fix allowed routes

* fix for key management

* fix allowed routes check

* clean up error message

* fix code check

* fix for route checks

* ui SCIM support

* add UI tab for SCIM

* fixes SCIM

* fixes for SCIM settings on ui

* scim settings

* clean up scim view

* add migration for allowed_routes in keys table

* refactor scim transform

* fix SCIM linting error

* fix code quality check

* fix ui linting

* test_scim_transformations.py
2025-04-16 19:21:47 -07:00
Krish Dholakia
7ca553b235
Add team based usage dashboard at 1m+ spend logs (+ new /team/daily/activity API) (#10081)
* feat(ui/): add team based usage to dashboard

allows admin to see spend across teams + within teams at 1m+ spend logs

* fix(entity_usage.tsx): add activity page to entity usage

* style(entity_usage.tsx): place filter above tab switcher
2025-04-16 18:10:14 -07:00
Krish Dholakia
c0d7e9f16d
Add new /tag/daily/activity endpoint + Add tag dashboard to UI (#10073)
* feat: initial commit adding daily tag spend table to db

* feat(db_spend_update_writer.py): correctly log tag spend transactions

* build(schema.prisma): add new tag table to root

* build: add new migration file

* feat(common_daily_activity.py): add `/tag/daily/activity` API endpoint

allows viewing daily spend by tag

* feat(tag_management_endpoints.py): support comma separated list of tags + tag breakdown metric

allows querying multiple tags + knowing what tags are driving spend

* feat(entity_usage.tsx): initial commit adding tag based usage to litellm dashboard

brings back tag based usage tracking to UI at 1m+ spend logs

* feat(entity_usage.tsx): add top api key view to ui

* feat(entity_usage.tsx): add tag table to ui

* feat(entity_usage.tsx): allow filtering by tag

* refactor(entity_usage.tsx): reorder components

* build(ui/): fix linting error

* fix: fix ruff checks

* fix(schema.prisma): drop uniqueness requirement on tag

allows dailytagspend to have multiple rows with the same tag

* build(schema.prisma): drop uniqueness requirement on tag in dailytagspend

allows tag agg. view to work on multiple rows with same tag

* build(schema.prisma): drop tag uniqueness requirement
2025-04-16 15:24:44 -07:00
Peter Dave Hello
5c078af738
Add OpenAI o3 & o4-mini (#10065)
Reference:
- https://platform.openai.com/docs/models/o3
- https://platform.openai.com/docs/models/o4-mini
2025-04-16 12:40:13 -07:00
Krish Dholakia
d8a1071bc4
Add aggregate spend by tag (#10071)
* feat: initial commit adding daily tag spend table to db

* feat(db_spend_update_writer.py): correctly log tag spend transactions

* build(schema.prisma): add new tag table to root

* build: add new migration file
2025-04-16 12:26:21 -07:00
Krish Dholakia
47e811d6ce
fix(llm_http_handler.py): fix fake streaming (#10061)
* fix(llm_http_handler.py): fix fake streaming

allows groq to work with llm_http_handler

* fix(groq.py): migrate groq to openai like config

ensures json mode handling works correctly
2025-04-16 10:15:11 -07:00
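"Fake streaming" in the commit above refers to serving a provider's non-streaming response through a streaming interface by yielding it as a single chunk. A rough sketch of the idea (names are illustrative, not the actual `llm_http_handler` API):

```python
import asyncio
from typing import AsyncIterator, Awaitable, Callable

async def fake_stream(
    get_full_response: Callable[[], Awaitable[str]],
) -> AsyncIterator[str]:
    """Wrap a one-shot completion call in an async generator so callers
    that expect a token stream still work (a single-chunk "fake" stream)."""
    yield await get_full_response()

async def main() -> None:
    async def complete() -> str:
        # Stand-in for a provider call that has no native streaming mode.
        return "hello from a non-streaming provider"

    async for chunk in fake_stream(complete):
        print(chunk)

asyncio.run(main())
```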
Krish Dholakia
c603680d2a
fix(stream_chunk_builder_utils.py): don't set index on modelresponse (#10063)
* fix(stream_chunk_builder_utils.py): don't set index on modelresponse

* test: update tests
2025-04-16 10:11:47 -07:00
dependabot[bot]
7b7b43e1a7
build(deps): bump http-proxy-middleware in /docs/my-website (#10064)
Bumps [http-proxy-middleware](https://github.com/chimurai/http-proxy-middleware) from 2.0.7 to 2.0.9.
- [Release notes](https://github.com/chimurai/http-proxy-middleware/releases)
- [Changelog](https://github.com/chimurai/http-proxy-middleware/blob/v2.0.9/CHANGELOG.md)
- [Commits](https://github.com/chimurai/http-proxy-middleware/compare/v2.0.7...v2.0.9)

---
updated-dependencies:
- dependency-name: http-proxy-middleware
  dependency-version: 2.0.9
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-16 09:55:44 -07:00
Ishaan Jaff
a9e8a36f89
[Bug Fix] Azure Blob Storage fixes (#10059)
* Simple fix for #9339 - upgrade the underlying library and cache the azure storage client (#9965)

* fix -  use constants for caching azure storage client

---------

Co-authored-by: Adrian Lyjak <adrian@chatmeter.com>
2025-04-16 09:47:10 -07:00
Krrish Dholakia
a743b6fc1f fix(bedrock/common_utils.py): add us-west-1 to us regions 2025-04-16 08:00:39 -07:00
Krrish Dholakia
6b7d20c911 test: fix test 2025-04-16 07:57:10 -07:00
ChaoFu Yang
c07eea864e
/utils/token_counter: get model_info from deployment directly (#10047) 2025-04-16 07:53:18 -07:00
Michael Leshchinsky
e19d05980c
Add litellm call id passing to Aim guardrails on pre and post-hooks calls (#10021)
* Add litellm_call_id passing to aim guardrails on pre and post-hooks

* Add test that ensures that pre_call_hook receives litellm call id when common_request_processing called
2025-04-16 07:41:28 -07:00
Ishaan Jaff
ca593e003a bump litellm-proxy-extras==0.1.9
2025-04-15 22:49:24 -07:00
Ishaan Jaff
1d4fea509d ui new build 2025-04-15 22:36:44 -07:00
Ishaan Jaff
ad09d250ef test fix azure deprecated mistral 2025-04-15 22:32:14 -07:00
Ishaan Jaff
dcc43e797a
[Docs] Auto prompt caching (#10044)
* docs prompt cache controls

* doc fix auto prompt caching
2025-04-15 22:29:47 -07:00
Krish Dholakia
fdfa1108a6
Add property ordering for vertex ai schema (#9828) + Fix combining multiple tool calls (#10040)
* fix #9783: Retain schema field ordering for google gemini and vertex (#9828)

* test: update test

* refactor(groq.py): initial commit migrating groq to base_llm_http_handler

* fix(streaming_chunk_builder_utils.py): fix how tool content is combined

Fixes https://github.com/BerriAI/litellm/issues/10034

* fix(vertex_ai/common_utils.py): prevent infinite loop in helper function

* fix(groq/chat/transformation.py): handle groq streaming errors correctly

* fix(groq/chat/transformation.py): handle max_retries

---------

Co-authored-by: Adrian Lyjak <adrian@chatmeter.com>
2025-04-15 22:29:25 -07:00
Krish Dholakia
1b9b745cae
Fix gcs pub sub logging with env var GCS_PROJECT_ID (#10042)
* fix(pub_sub.py): fix passing project id in pub sub call

Fixes issue where GCS_PUBSUB_PROJECT_ID was not being used

* test(test_pub_sub.py): add unit test to prevent future regressions

* test: fix test
2025-04-15 21:50:48 -07:00
Ishaan Jaff
b3f37b860d test fix azure deprecated mistral ai 2025-04-15 21:42:40 -07:00
Ishaan Jaff
bd88263b29
[Feat - Cost Tracking improvement] Track prompt caching metrics in DailyUserSpendTransactions (#10029)
* stash changes

* emit cache read/write tokens to daily spend update

* emit cache read/write tokens on daily activity

* update types.ts

* docs prompt caching

* undo ui change

* fix activity metrics

* fix prompt caching metrics

* fix typed dict fields

* fix get_aggregated_daily_spend_update_transactions

* fix aggregating cache tokens

* test_cache_token_fields_aggregation

* daily_transaction

* add cache_creation_input_tokens and cache_read_input_tokens to LiteLLM_DailyUserSpend

* test_daily_spend_update_queue.py
2025-04-15 21:40:57 -07:00
Ishaan Jaff
d32d6fe03e
[UI] Bug Fix - Show created_at and updated_at for Users Page (#10033)
* add created_at and updated_at as fields for internal user table

* test_get_users_includes_timestamps
2025-04-15 21:15:44 -07:00
Ishaan Jaff
70d740332f
[UI Polish] UI fixes for cache control injection settings (#10031)
* ui fixes for cache control

* docs inject cache control settings
2025-04-15 21:10:08 -07:00
Ishaan Jaff
65f8015221 test fix - azure deprecated azure ai mistral 2025-04-15 21:08:55 -07:00
Krish Dholakia
9b77559ccf
Add aggregate team based usage logging (#10039)
* feat(schema.prisma): initial commit adding aggregate table for team spend

allows team spend to be visible at 1m+ logs

* feat(db_spend_update_writer.py): support logging aggregate team spend

allows usage dashboard to work at 1m+ logs

* feat(litellm-proxy-extras/): add new migration file

* fix(db_spend_update_writer.py): fix return type

* build: bump requirements

* fix: fix ruff error
2025-04-15 20:58:48 -07:00
Krish Dholakia
d3e7a137ad
Revert "fix #9783: Retain schema field ordering for google gemini and vertex …" (#10038)
This reverts commit e3729f9855.
2025-04-15 19:21:33 -07:00
Adrian Lyjak
e3729f9855
fix #9783: Retain schema field ordering for google gemini and vertex (#9828) 2025-04-15 19:12:02 -07:00
Marc Abramowitz
837a6948d8
Fix typo: Entrata -> Entra in code (#9922)
* Fix typo: Entrata -> Entra

* Fix a few more
2025-04-15 17:31:18 -07:00
dependabot[bot]
81e7741107
build(deps): bump @babel/runtime in /ui/litellm-dashboard (#10001)
Bumps [@babel/runtime](https://github.com/babel/babel/tree/HEAD/packages/babel-runtime) from 7.23.9 to 7.27.0.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.27.0/packages/babel-runtime)

---
updated-dependencies:
- dependency-name: "@babel/runtime"
  dependency-version: 7.27.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-15 16:35:26 -07:00
Joakim Lorentz
c9cf43df5b
chore(docs): Update logging.md (#10006)
Fixes a missing slash in OTEL_ENDPOINT example
2025-04-15 16:34:55 -07:00
Ishaan Jaff
09df3815b8 docs cache control injection points 2025-04-15 15:43:58 -07:00
Krrish Dholakia
ef80d25f16 bump: version 1.66.1 → 1.66.2
2025-04-15 13:52:46 -07:00
Krrish Dholakia
8424171c2a fix(config_settings.md): cleanup 2025-04-15 13:41:22 -07:00
Krish Dholakia
6b5f093087
Revert "Fix case where only system messages are passed to Gemini (#9992)" (#10027)
This reverts commit 2afd922f8c.
2025-04-15 13:34:03 -07:00
Nolan Tremelling
2afd922f8c
Fix case where only system messages are passed to Gemini (#9992) 2025-04-15 13:30:49 -07:00
Michael Schmid
14bcc9a6c9
feat: update region configuration in AmazonBedrockGlobalConfig (#9430) 2025-04-15 09:59:32 -07:00
Krrish Dholakia
aff0d1a18c docs(cohere.md): add cohere cost tracking support to docs
2025-04-14 23:46:58 -07:00
Krish Dholakia
33ead69c0a
Support checking provider /models endpoints on proxy /v1/models endpoint (#9958)
* feat(utils.py): support global flag for 'check_provider_endpoints'

enables setting this for `/models` on proxy

* feat(utils.py): add caching to 'get_valid_models'

Prevents checking endpoint repeatedly

* fix(utils.py): ensure mutations don't impact cached results

* test(test_utils.py): add unit test to confirm cache invalidation logic

* feat(utils.py): get_valid_models - support passing litellm params dynamically

Allows for checking endpoints based on received credentials

* test: update test

* feat(model_checks.py): pass router credentials to get_valid_models - ensures it checks correct credentials

* refactor(utils.py): refactor for simpler functions

* fix: fix linting errors

* fix(utils.py): fix test

* fix(utils.py): set valid providers to custom_llm_provider, if given

* test: update test

* fix: fix ruff check error
2025-04-14 23:23:20 -07:00
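Two of the commit bullets above pair caching with mutation safety: cache the expensive provider `/models` lookup, but hand each caller a copy so in-place edits can't poison the cached result. A small sketch of that pattern (hypothetical names, not litellm's `utils.py`):

```python
import copy
import time

_CACHE: dict[str, tuple[float, list[str]]] = {}
_TTL_SECONDS = 300.0

def fetch_models(provider: str) -> list[str]:
    """Stand-in for the expensive provider /models endpoint call."""
    return [f"{provider}/model-a", f"{provider}/model-b"]

def get_valid_models_cached(provider: str) -> list[str]:
    """Cache per-provider model lists with a TTL, returning a deep copy
    so caller-side mutations never leak back into the cache."""
    now = time.monotonic()
    hit = _CACHE.get(provider)
    if hit is None or now - hit[0] > _TTL_SECONDS:
        _CACHE[provider] = (now, fetch_models(provider))
    return copy.deepcopy(_CACHE[provider][1])

models = get_valid_models_cached("openai")
models.append("mutated")  # caller-side mutation...
assert "mutated" not in get_valid_models_cached("openai")  # ...does not leak back
```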