Commit graph

3604 commits

Author SHA1 Message Date
Ishaan Jaff
c8fe600dbf fix case when gemini is used 2024-09-10 17:06:45 -07:00
Ishaan Jaff
7891b3742c fix vertex use async func to set auth creds 2024-09-10 16:12:18 -07:00
Ishaan Jaff
21c462cf56 fix vertex ai use _get_async_client 2024-09-10 10:33:19 -07:00
Ishaan Jaff
02325f33d7 Merge branch 'main' into litellm_allow_turning_off_message_logging_for_callbacks 2024-09-09 21:59:36 -07:00
Krish Dholakia
09ca581620 LiteLLM Minor Fixes and Improvements (09/09/2024) (#5602)
* fix(main.py): pass default azure api version as alternative in completion call

Fixes api error caused due to api version

Closes https://github.com/BerriAI/litellm/issues/5584

* Fixed gemini-1.5-flash pricing (#5590)

* add /key/list endpoint

* bump: version 1.44.21 → 1.44.22

* docs architecture

* Fixed gemini-1.5-flash pricing

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* fix(bedrock/chat.py): fix converse api stop sequence param mapping

Fixes https://github.com/BerriAI/litellm/issues/5592

* fix(databricks/cost_calculator.py): handle databricks model name changes

Fixes https://github.com/BerriAI/litellm/issues/5597

* fix(azure.py): support azure api version 2024-08-01-preview

Closes https://github.com/BerriAI/litellm/issues/5377

* fix(proxy/_types.py): allow dev keys to call cohere /rerank endpoint

Fixes issue where only admin could call rerank endpoint

* fix(azure.py): check if model is gpt-4o

* fix(proxy/_types.py): support /v1/rerank on non-admin routes as well

* fix(cost_calculator.py): fix split on `/` logic in cost calculator

---------

Co-authored-by: F1bos <44951186+F1bos@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-09-09 21:56:12 -07:00
Krish Dholakia
52849e6422 LiteLLM Minor Fixes and Improvements (09/07/2024) (#5580)
* fix(litellm_logging.py): set completion_start_time_float to end_time_float if none

Fixes https://github.com/BerriAI/litellm/issues/5500

* feat(__init__.py): add new 'openai_text_completion_compatible_providers' list

Fixes https://github.com/BerriAI/litellm/issues/5558

Handles correctly routing fireworks ai calls when done via text completions

* fix: fix linting errors

* fix: fix linting errors

* fix(openai.py): fix exception raised

* fix(openai.py): fix error handling

* fix(_redis.py): allow all supported arguments for redis cluster (#5554)

* Revert "fix(_redis.py): allow all supported arguments for redis cluster (#5554)" (#5583)

This reverts commit f2191ef4cb.

* fix(router.py): return model alias w/ underlying deployment on router.get_model_list()

Fixes https://github.com/BerriAI/litellm/issues/5524#issuecomment-2336410666

* test: handle flaky tests

---------

Co-authored-by: Jonas Dittrich <58814480+Kakadus@users.noreply.github.com>
2024-09-09 18:54:17 -07:00
Ishaan Jaff
3278da17cf Merge branch 'main' into litellm_tag_routing_fixes 2024-09-09 17:45:18 -07:00
Ishaan Jaff
d303a3d03c fix log failures for key based logging 2024-09-09 16:33:06 -07:00
Ishaan Jaff
7e8af27527 fix otel test 2024-09-09 16:20:47 -07:00
Ishaan Jaff
e07b2ce6ea use callback_settings when initializing otel 2024-09-09 16:05:48 -07:00
Ishaan Jaff
176397cfca Merge pull request #5599 from BerriAI/litellm_allow_mounting_prom_callbacks
[Feat] support using "callbacks" for prometheus
2024-09-09 15:00:43 -07:00
Ishaan Jaff
f49fdab804 fix debug statements 2024-09-09 14:00:17 -07:00
Ishaan Jaff
0f2b8e511c fix create script for pre-creating views 2024-09-09 11:03:27 -07:00
Ishaan Jaff
3369b4e41a support using "callbacks" for prometheus 2024-09-09 08:26:03 -07:00
Ishaan Jaff
0e0decd6b9 add /key/list endpoint 2024-09-07 16:52:28 -07:00
Ishaan Jaff
185579a8ef ui new build 2024-09-07 16:24:06 -07:00
Ishaan Jaff
e912d81b0c add doc on spend report frequency 2024-09-07 11:54:33 -07:00
Ishaan Jaff
15820c6b7b add spend_report_frequency as a general setting 2024-09-07 11:44:58 -07:00
Krish Dholakia
501b6f5bac Allow client-side credentials to be sent to proxy (accept only if complete credentials are given) (#5575)
* feat: initial commit

* fix(proxy/auth/auth_utils.py): Allow client-side credentials to be given to the proxy (accept only if complete credentials are given)
2024-09-06 19:21:54 -07:00
Ishaan Jaff
2b7580916e ui new build 2024-09-06 18:10:46 -07:00
Ishaan Jaff
4db821897d Merge pull request #5566 from BerriAI/litellm_ui_regen_keys
[Feat] Allow setting duration time when regenerating key
2024-09-06 18:05:51 -07:00
Ishaan Jaff
164d8696ca Merge pull request #5574 from BerriAI/litellm_tags_use_views
[Feat-Proxy] Use DB Views to Get spend per Tag (Usage endpoints)
2024-09-06 17:33:06 -07:00
Krish Dholakia
2cab33b061 LiteLLM Minor Fixes and Improvements (08/06/2024) (#5567)
* fix(utils.py): return citations for perplexity streaming

Fixes https://github.com/BerriAI/litellm/issues/5535

* fix(anthropic/chat.py): support fallbacks for anthropic streaming (#5542)

* fix(anthropic/chat.py): support fallbacks for anthropic streaming

Fixes https://github.com/BerriAI/litellm/issues/5512

* fix(anthropic/chat.py): use module level http client if none given (prevents early client closure)

* fix: fix linting errors

* fix(http_handler.py): fix raise_for_status error handling

* test: retry flaky test

* fix otel type

* fix(bedrock/embed): fix error raising

* test(test_openai_batches_and_files.py): skip azure batches test (for now) quota exceeded

* fix(test_router.py): skip azure batch route test (for now) - hit batch quota limits

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* All `model_group_alias` should show up in `/models`, `/model/info` , `/model_group/info` (#5539)

* fix(router.py): support returning model_alias model names in `/v1/models`

* fix(proxy_server.py): support returning model alias'es on `/model/info`

* feat(router.py): support returning model group alias for `/model_group/info`

* fix(proxy_server.py): fix linting errors

* fix(proxy_server.py): fix linting errors

* build(model_prices_and_context_window.json): add amazon titan text premier pricing information

Closes https://github.com/BerriAI/litellm/issues/5560

* feat(litellm_logging.py): log standard logging response object for pass through endpoints. Allows bedrock /invoke agent calls to be correctly logged to langfuse + s3

* fix(success_handler.py): fix linting error

* fix(success_handler.py): fix linting errors

* fix(team_endpoints.py): Allows admin to update team member budgets

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-09-06 17:16:24 -07:00
Ishaan Jaff
16a3223474 fix linting 2024-09-06 16:54:43 -07:00
Ishaan Jaff
0c4022d848 fix use view for getting tag usage 2024-09-06 16:28:24 -07:00
Ishaan Jaff
b3629ebdc5 allow passing expiry time to /key/regenerate 2024-09-06 08:36:34 -07:00
Krish Dholakia
355f4a7c90 LiteLLM Minor Fixes and Improvements (#5537)
* fix(vertex_ai): Fixes issue where multimodal message without text was failing vertex calls

Fixes https://github.com/BerriAI/litellm/issues/5515

* fix(azure.py): move to using httphandler for oidc token calls

Fixes issue where ssl certificates weren't being picked up as expected

Closes https://github.com/BerriAI/litellm/issues/5522

* feat: Allows admin to set a default_max_internal_user_budget in config, and allow setting more specific values as env vars

* fix(proxy_server.py): fix read for max_internal_user_budget

* build(model_prices_and_context_window.json): add regional gpt-4o-2024-08-06 pricing

Closes https://github.com/BerriAI/litellm/issues/5540

* test: skip re-test
2024-09-05 18:03:34 -07:00
Ishaan Jaff
18e2169c40 ui new build 2024-09-05 17:05:39 -07:00
Ishaan Jaff
dd7d93fd54 Merge branch 'main' into litellm_allow_internal_user_view_usage 2024-09-05 16:46:06 -07:00
Ishaan Jaff
56835f77aa fix on /user/info show all keys - even expired ones 2024-09-05 15:31:41 -07:00
Ishaan Jaff
7ef1ac7996 fix allow internal user to view their own usage 2024-09-05 12:53:44 -07:00
Ishaan Jaff
3a48776720 fix /global/spend/provider 2024-09-05 12:48:58 -07:00
Ishaan Jaff
b4d6efd454 add global/spend/provider 2024-09-05 12:44:44 -07:00
Ishaan Jaff
6d656983c6 allow internal user to view global/spend/models 2024-09-05 12:38:48 -07:00
Ishaan Jaff
bb0fc2504b allow internal user to view their own spend 2024-09-05 12:35:04 -07:00
Ishaan Jaff
14ba077bf9 add usage endpoints for internal user 2024-09-05 12:34:41 -07:00
Ishaan Jaff
6ab47703b8 show /spend/logs for internal users 2024-09-05 12:14:03 -07:00
Ishaan Jaff
38890a731d fix create view - MonthlyGlobalSpendPerUserPerKey 2024-09-05 12:11:59 -07:00
Ishaan Jaff
5d808f488e add /spend/tags as allowed route for internal user 2024-09-05 10:41:43 -07:00
Krish Dholakia
6f354ecac6 fix(pass_through_endpoints): support bedrock agents via pass through (#5527) 2024-09-04 22:22:22 -07:00
Krish Dholakia
6fdee99632 LiteLLM Minor fixes + improvements (08/04/2024) (#5505)
* Minor IAM AWS OIDC Improvements (#5246)

* AWS IAM: Temporary tokens are valid across all regions after being issued, so it is wasteful to request one for each region.

* AWS IAM: Include an inline policy, to help reduce misuse of overly permissive IAM roles.

* (test_bedrock_completion.py): Ensure we are testing cross AWS region OIDC flow.

* fix(router.py): log rejected requests

Fixes https://github.com/BerriAI/litellm/issues/5498

* refactor: don't use verbose_logger.exception, if exception is raised

User might already have handling for this. But alerting systems in prod will raise this as an unhandled error.

* fix(datadog.py): support setting datadog source as an env var

Fixes https://github.com/BerriAI/litellm/issues/5508

* docs(logging.md): add dd_source to datadog docs

* fix(proxy_server.py): expose `/customer/list` endpoint for showing all customers

* (bedrock): Fix usage with Cloudflare AI Gateway, and proxies in general. (#5509)

* feat(anthropic.py): support 'cache_control' param for content when it is a string

* Revert "(bedrock): Fix usage with Cloudflare AI Gateway, and proxies in gener…" (#5519)

This reverts commit 3fac0349c2.

* refactor: ci/cd run again

---------

Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
2024-09-04 22:16:55 -07:00
Ishaan Jaff
15ac8f4ebe fix allow general guardrails on free tier 2024-09-04 19:59:32 -07:00
Ishaan Jaff
770fc45ec1 Merge pull request #5518 from BerriAI/litellm_log_request_response
[Feat] log request / response on pass through endpoints
2024-09-04 17:57:47 -07:00
Ishaan Jaff
3e1ff425de return error from /global/spend endpoint 2024-09-04 17:26:34 -07:00
Ishaan Jaff
8426d0e3e0 return error client side from spend endpoints 2024-09-04 17:20:47 -07:00
Ishaan Jaff
94ecb4e480 show error from /spend/tags 2024-09-04 17:14:49 -07:00
Ishaan Jaff
784ceaad0d rename type 2024-09-04 16:33:36 -07:00
Ishaan Jaff
b336977ff6 add doc on PassthroughStandardLoggingObject 2024-09-04 16:30:47 -07:00
Ishaan Jaff
5e121660d5 feat log request / response on pass through endpoints 2024-09-04 16:26:32 -07:00
Ishaan Jaff
b468ccbb77 Merge pull request #5514 from BerriAI/litellm_add_presidio
[Fix-Refactor] support presidio on new guardrails config
2024-09-04 16:09:54 -07:00