Commit graph

936 commits

Author SHA1 Message Date
Ishaan Jaff
a9e8a36f89
[Bug Fix] Azure Blob Storage fixes (#10059)
* Simple fix for #9339 - upgrade the underlying library and cache the azure storage client (#9965)

* fix -  use constants for caching azure storage client

---------

Co-authored-by: Adrian Lyjak <adrian@chatmeter.com>
2025-04-16 09:47:10 -07:00
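The fix above pairs a library upgrade with caching the Azure storage client rather than constructing one per request. A minimal sketch of that caching pattern, assuming `azure-storage-blob` is installed; the constant and helper names are illustrative, not litellm's actual ones:

```python
import time
from typing import Dict, Tuple

from azure.storage.blob import BlobServiceClient

# Hypothetical constant, echoing the "use constants for caching" follow-up commit.
AZURE_STORAGE_CLIENT_CACHE_TTL_SECONDS = 3600

_client_cache: Dict[str, Tuple[BlobServiceClient, float]] = {}


def get_azure_blob_client(connection_string: str) -> BlobServiceClient:
    """Return a cached BlobServiceClient, rebuilding only after the TTL expires."""
    cached = _client_cache.get(connection_string)
    if cached is not None:
        client, created_at = cached
        if time.time() - created_at < AZURE_STORAGE_CLIENT_CACHE_TTL_SECONDS:
            return client
    client = BlobServiceClient.from_connection_string(connection_string)
    _client_cache[connection_string] = (client, time.time())
    return client
```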
Krish Dholakia
1b9b745cae
Fix gcs pub sub logging with env var GCS_PROJECT_ID (#10042)
* fix(pub_sub.py): fix passing project id in pub sub call

Fixes issue where GCS_PUBSUB_PROJECT_ID was not being used

* test(test_pub_sub.py): add unit test to prevent future regressions

* test: fix test
2025-04-15 21:50:48 -07:00
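The fix makes the pub/sub call honor GCS_PUBSUB_PROJECT_ID when no project id is passed explicitly. A rough sketch of that fallback, assuming `google-cloud-pubsub`; the function name is hypothetical:

```python
import os
from typing import Optional

from google.cloud import pubsub_v1


def publish_log(topic: str, payload: bytes, project_id: Optional[str] = None) -> None:
    """Publish a payload, falling back to the GCS_PUBSUB_PROJECT_ID env var."""
    project_id = project_id or os.environ.get("GCS_PUBSUB_PROJECT_ID")
    if project_id is None:
        raise ValueError("no project id passed and GCS_PUBSUB_PROJECT_ID is unset")
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic)
    publisher.publish(topic_path, payload)
```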
Marc Abramowitz
837a6948d8
Fix typo: Entrata -> Entra in code (#9922)
* Fix typo: Entrata -> Entra

* Fix a few more
2025-04-15 17:31:18 -07:00
Ishaan Jaff
c1a642ce20
[UI] Allow setting prompt cache_control_injection_points (#10000)
* test_anthropic_cache_control_hook_system_message

* test_anthropic_cache_control_hook.py

* should_run_prompt_management_hooks

* fix should_run_prompt_management_hooks

* test_anthropic_cache_control_hook_specific_index

* fix test

* fix linting errors

* ChatCompletionCachedContent

* initial commit for cache control

* fixes ui design

* fix inserting cache_control_injection_points

* fix entering cache control points

* fixes for using cache control on ui + backend

* update cache control settings on edit model page

* fix init custom logger compatible class

* fix linting errors

* fix linting errors

* fix get_chat_completion_prompt
2025-04-14 21:17:42 -07:00
Ishaan Jaff
6cfa50d278
[Feat] Add support for cache_control_injection_points for Anthropic API, Bedrock API (#9996)
* test_anthropic_cache_control_hook_system_message

* test_anthropic_cache_control_hook.py

* should_run_prompt_management_hooks

* fix should_run_prompt_management_hooks

* test_anthropic_cache_control_hook_specific_index

* fix test

* fix linting errors

* ChatCompletionCachedContent
2025-04-14 20:50:13 -07:00
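An injection point tells the hook which message should carry Anthropic's `cache_control` marker before the request goes out. A minimal sketch of that rewrite, matching by role or index as the test names above suggest; the helper itself is hypothetical:

```python
from typing import Any, Dict, List


def apply_cache_control_injection_points(
    messages: List[Dict[str, Any]],
    injection_points: List[Dict[str, Any]],
) -> List[Dict[str, Any]]:
    """Attach cache_control to every message matched by an injection point."""
    for point in injection_points:
        for idx, message in enumerate(messages):
            if point.get("role") == message.get("role") or point.get("index") == idx:
                message["cache_control"] = {"type": "ephemeral"}
    return messages


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this long document..."},
]
# Cache the system prompt, as in test_anthropic_cache_control_hook_system_message.
apply_cache_control_injection_points(messages, [{"location": "message", "role": "system"}])
```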
Krish Dholakia
3ca82c22b6
Support CRUD endpoints for Managed Files (#9924)
* fix(openai.py): ensure openai file object shows up on logs

* fix(managed_files.py): return unified file id as b64 str

allows retrieving a file by id to work as expected

* fix(managed_files.py): apply decoded file id transformation

* fix: add unit test for file id + decode logic

* fix: initial commit for litellm_proxy support with CRUD Endpoints

* fix(managed_files.py): support retrieve file operation

* fix(managed_files.py): support for DELETE endpoint for files

* fix(managed_files.py): retrieve file content support

supports OpenAI's retrieve file content API

* fix: fix linting error

* test: update tests

* fix: fix linting error

* fix(files/main.py): pass litellm params to azure route

* test: fix test
2025-04-11 21:48:27 -07:00
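The "unified file id as b64 str" commit implies the proxy round-trips its internal file id through base64 so the same token works across the retrieve, DELETE, and content routes. A sketch of one such encode/decode pair; the provider-prefixed format here is an assumption:

```python
import base64


def encode_unified_file_id(provider: str, provider_file_id: str) -> str:
    """Pack a provider name and native file id into one URL-safe token."""
    raw = f"{provider};{provider_file_id}".encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii")


def decode_unified_file_id(unified_id: str) -> tuple:
    """Invert encode_unified_file_id for the retrieve/DELETE/content routes."""
    raw = base64.urlsafe_b64decode(unified_id.encode("ascii")).decode("utf-8")
    provider, provider_file_id = raw.split(";", 1)
    return provider, provider_file_id


token = encode_unified_file_id("azure", "assistant-abc123")
assert decode_unified_file_id(token) == ("azure", "assistant-abc123")
```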
Ishaan Jaff
94a553dbb2
[Feat] Emit Key, Team Budget metrics on a cron job schedule (#9528)
* _initialize_remaining_budget_metrics

* initialize_budget_metrics_cron_job

* initialize_budget_metrics_cron_job

* initialize_budget_metrics_cron_job

* test_initialize_budget_metrics_cron_job

* LITELLM_PROXY_ADMIN_NAME

* fix code qa checks

* test_initialize_budget_metrics_cron_job

* test_initialize_budget_metrics_cron_job

* pod lock manager allow dynamic cron job ID

* fix pod lock manager

* require cronjobid for PodLockManager

* fix DB_SPEND_UPDATE_JOB_NAME acquire / release lock

* add comment on prometheus logger

* add debug statements for emitting key, team budget metrics

* test_pod_lock_manager.py

* test_initialize_budget_metrics_cron_job

* initialize_budget_metrics_cron_job

* initialize_remaining_budget_metrics

* remove outdated test
2025-04-10 16:59:14 -07:00
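Emitting budget metrics on a schedule across many proxy pods needs exactly one emitter per cycle, which is what the pod lock manager provides; the commits also make the lock key a dynamic cron job ID. A toy sketch of that acquire/emit/release shape using Redis as a stand-in backend (the real PodLockManager's storage and signatures are not shown here):

```python
import uuid

import redis


class PodLockManager:
    """Toy distributed lock keyed by a cron job ID, so one pod emits per cycle."""

    def __init__(self, cron_job_id: str, client: redis.Redis, ttl_seconds: int = 60):
        self.cron_job_id = cron_job_id  # required, per the "require cronjobid" commit
        self.pod_id = str(uuid.uuid4())
        self.client = client
        self.ttl_seconds = ttl_seconds

    def acquire(self) -> bool:
        # SET NX succeeds only when no other pod holds the lock.
        return bool(self.client.set(self.cron_job_id, self.pod_id, nx=True, ex=self.ttl_seconds))

    def release(self) -> None:
        if self.client.get(self.cron_job_id) == self.pod_id.encode():
            self.client.delete(self.cron_job_id)


def emit_key_and_team_budget_metrics() -> None:
    """Placeholder for the real key/team remaining-budget gauge emitter."""
    print("emitting budget metrics")


def initialize_budget_metrics_cron_job(lock: PodLockManager) -> None:
    if not lock.acquire():
        return  # another pod owns this cycle
    try:
        emit_key_and_team_budget_metrics()
    finally:
        lock.release()
```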
Ishaan Jaff
f0f2f819bd
Merge pull request #9760 from BerriAI/litellm_prometheus_error_monitoring
[Reliability] Prometheus emit llm provider on failure metric - make it easy to differentiate litellm error vs llm api error
2025-04-04 21:37:28 -07:00
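Carrying the provider and exception class as labels on the failure counter is what lets a dashboard separate litellm-side errors from upstream LLM API errors. A hedged sketch with `prometheus_client`; the metric and label names are illustrative, not litellm's actual ones:

```python
from prometheus_client import Counter

llm_api_failures = Counter(
    "llm_api_failures",
    "LLM request failures, labeled to separate provider errors from litellm errors",
    labelnames=["llm_provider", "exception_class"],
)


def _get_exception_class_name(exc: Exception) -> str:
    # Mirrors the helper named in the commit below: label by exception type.
    return type(exc).__name__


def record_failure(provider: str, exc: Exception) -> None:
    llm_api_failures.labels(
        llm_provider=provider,
        exception_class=_get_exception_class_name(exc),
    ).inc()


record_failure("azure", TimeoutError("upstream timed out"))
```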
Ishaan Jaff
b89ed69257
Merge branch 'main' into litellm_add_auth_metrics_endpoint 2025-04-04 21:28:06 -07:00
Ishaan Jaff
f402e9bbd1 _get_exception_class_name 2025-04-04 21:23:21 -07:00
Ishaan Jaff
f16c531002 _mount_metrics_endpoint 2025-04-04 19:54:20 -07:00
Ishaan Jaff
253060cb09 allow requiring auth for /metrics endpoint 2025-04-04 17:35:02 -07:00
Ishaan Jaff
c402db9057 prometheus emit llm provider on failure metric 2025-04-04 17:07:43 -07:00
Ishaan Jaff
d3018a4c28 Merge branch 'main' into litellm_metrics_pod_lock_manager 2025-04-04 16:46:32 -07:00
Ishaan Jaff
901d6fe7b7 add operational metrics for pod lock manager v2 arch 2025-04-04 16:41:07 -07:00
Krish Dholakia
e1f7bcb47d
Fix VertexAI Credential Caching issue (#9756)
* refactor(vertex_llm_base.py): Prevent credential misrouting for projects

Fixes https://github.com/BerriAI/litellm/issues/7904

* fix: passing unit tests

* fix(vertex_llm_base.py): common auth logic across sync + async vertex ai calls

prevents credential caching issue across both flows

* test: fix test

* fix(vertex_llm_base.py): handle project id in default case

* fix(factory.py): don't pass cache control if not set

bedrock invoke does not support this

* test: fix test

* fix(vertex_llm_base.py): add .exception message in load_auth

* fix: fix ruff error
2025-04-04 16:38:08 -07:00
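The misrouting bug (issue #7904) reads like a credential cache keyed too loosely, so one project's credentials could be served for another. A sketch of the safer pattern, keying the cache on both the credential source and the project id; names here are hypothetical:

```python
from typing import Dict, Optional, Tuple

# The key includes the project id, so project A never gets project B's token.
_credential_cache: Dict[Tuple[Optional[str], Optional[str]], str] = {}


def load_auth(credentials_path: Optional[str], project_id: Optional[str]) -> str:
    """Return a (stand-in) access token cached per (credentials, project) pair."""
    key = (credentials_path, project_id)
    if key not in _credential_cache:
        # The real flow would mint a token via google.auth here.
        _credential_cache[key] = f"token:{credentials_path}:{project_id}"
    return _credential_cache[key]


assert load_auth("sa.json", "project-a") != load_auth("sa.json", "project-b")
```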
Ishaan Jaff
bde88b3ba6 fix type error 2025-04-04 16:34:43 -07:00
Ishaan Jaff
e3b788ea29 fix test 2025-04-02 21:58:35 -07:00
Ishaan Jaff
dd2d1dc2f4 Merge branch 'main' into litellm_metrics_pod_lock_manager 2025-04-02 21:35:55 -07:00
Krish Dholakia
8ee32291e0
Squashed commit of the following: (#9709)
commit b12a9892b7
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Wed Apr 2 08:09:56 2025 -0700

    fix(utils.py): don't modify openai_token_counter

commit 294de31803
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 21:22:40 2025 -0700

    fix: fix linting error

commit cb6e9fbe40
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 19:52:45 2025 -0700

    refactor: complete migration

commit bfc159172d
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 19:09:59 2025 -0700

    refactor: refactor more constants

commit 43ffb6a558
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 18:45:24 2025 -0700

    fix: test

commit 04dbe4310c
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 18:28:58 2025 -0700

    refactor: move more constants into constants.py

commit 3c26284aff
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 18:14:46 2025 -0700

    refactor: migrate hardcoded constants out of __init__.py

commit c11e0de69d
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 18:11:21 2025 -0700

    build: migrate all constants into constants.py

commit 7882bdc787
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Mar 24 18:07:37 2025 -0700

    build: initial test banning hardcoded numbers in repo
2025-04-02 21:24:54 -07:00
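The squashed series above replaces hardcoded numbers with named constants in constants.py, then adds a test that bans new magic numbers. A before/after sketch of the pattern; the constant name is illustrative:

```python
# Before: a magic number buried at the call site.
#   timeout = 600

# After: one named constant (conceptually living in constants.py),
# imported wherever the value is needed.
DEFAULT_REQUEST_TIMEOUT_SECONDS = 600


def make_request(timeout: int = DEFAULT_REQUEST_TIMEOUT_SECONDS) -> None:
    print(f"requesting with timeout={timeout}s")
```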
Ishaan Jaff
bcf42fd82d linting fix prometheus services 2025-04-02 21:19:05 -07:00
Ishaan Jaff
80fb4ece97 prom emit size of DB TX queues for observability 2025-04-02 18:39:29 -07:00
Ishaan Jaff
05b30e28db clean up service metrics 2025-04-02 17:50:41 -07:00
Krish Dholakia
9b7ebb6a7d
build(pyproject.toml): add new dev dependencies - for type checking (#9631)
* build(pyproject.toml): add new dev dependencies - for type checking

* build: reformat files to fit black

* ci: reformat to fit black

* ci(test-litellm.yml): make tests run clear

* build(pyproject.toml): add ruff

* fix: fix ruff checks

* build(mypy/): fix mypy linting errors

* fix(hashicorp_secret_manager.py): fix passing cert for tls auth

* build(mypy/): resolve all mypy errors

* test: update test

* fix: fix black formatting

* build(pre-commit-config.yaml): use poetry run black

* fix(proxy_server.py): fix linting error

* fix: fix ruff safe representation error
2025-03-29 11:02:13 -07:00
Ishaan Jaff
fca5926600 default to use SLP for GCS PubSub 2025-03-24 15:21:59 -07:00
Ishaan Jaff
5d3bb86f07 define CustomPromptManagement 2025-03-19 16:22:23 -07:00
Ishaan Jaff
f5ef0c3cb7 fix code quality checks 2025-03-18 22:34:43 -07:00
Ishaan Jaff
0f2e095b6b _arize_otel_logger 2025-03-18 22:19:51 -07:00
Ishaan Jaff
57e5c94360 Merge branch 'main' into litellm_arize_dynamic_logging 2025-03-18 22:13:35 -07:00
Ishaan Jaff
78a5dde31f fix code qa 2025-03-18 17:07:44 -07:00
Ishaan Jaff
bd122f631e fix arize config 2025-03-18 16:54:31 -07:00
Ishaan Jaff
de97cda445 refactor create_litellm_proxy_request_started_span 2025-03-18 16:12:16 -07:00
Ishaan Jaff
7a5726fc88 fix - Arize - only log LLM I/O 2025-03-18 15:50:38 -07:00
Ishaan Jaff
f8c49175ec fix _get_span_processor 2025-03-18 14:59:13 -07:00
Ishaan Jaff
b940c969fd use _get_headers_dictionary 2025-03-18 14:55:39 -07:00
Ishaan Jaff
48663a0920 use safe dumps for arize ai 2025-03-18 14:30:00 -07:00
Nate Mar
a1d188ba5e Fix test and add comments 2025-03-18 03:46:53 -07:00
Nate Mar
434e262b8c revert space_key change and add tests for arize integration 2025-03-18 01:40:10 -07:00
Nate Mar
35e0856f11 Fix wrong import and use space_id instead of space_key for Arize integration 2025-03-17 20:37:28 -07:00
Krrish Dholakia
997f2f0b3e fix(aim.py): fix linting error 2025-03-13 15:32:42 -07:00
Tomer Bin
4a31b32a88 Support post-call guards for stream and non-stream responses 2025-03-13 08:53:54 +02:00
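Supporting post-call guards on both response shapes means the guard must see the output whether it arrives whole or chunk by chunk. A hedged sketch of one buffering strategy, checking the accumulated text before releasing each chunk; the guard interface is hypothetical:

```python
from typing import AsyncIterator, Callable


async def guard_stream(
    chunks: AsyncIterator[str],
    check: Callable[[str], None],
) -> AsyncIterator[str]:
    """Streaming case: re-check the accumulated text before yielding each chunk."""
    seen = ""
    async for chunk in chunks:
        seen += chunk
        check(seen)  # expected to raise if the output violates policy
        yield chunk


def guard_response(text: str, check: Callable[[str], None]) -> str:
    """Non-streaming case: one check over the complete response body."""
    check(text)
    return text
```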
Ishaan Jaff
b2d9935567 use ProxyBaseLLMRequestProcessing 2025-03-12 16:54:33 -07:00
vivek-athina
cd4a53d6f2
Merge pull request #4 from BerriAI/main
Update main
2025-03-10 11:13:21 +05:30
Krrish Dholakia
8ea3d4c046 build: merge litellm_dev_03_01_2025_p2 2025-03-03 23:05:41 -08:00
Krrish Dholakia
4418e6dd14 build: merge branch 2025-03-02 08:31:57 -08:00
Ishaan Jaff
428ed1360c fix overly verbose non blocking error on dd get_request_response_payload 2025-03-01 10:09:18 -08:00
Vivek Aditya
ed75dd61c2 Removed prints and added unit tests 2025-02-28 21:48:13 +05:30
Vivek Aditya
c40d45ae09 Added tags to additional keys that can be sent to athina 2025-02-26 21:00:56 +05:30
Krish Dholakia
9914c166b7
Litellm contributor prs 02 24 2025 (#8775)
* Adding VertexAI Claude 3.7 Sonnet (#8774)

Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>

* build(model_prices_and_context_window.json): add anthropic 3-7 models on vertex ai and bedrock

* Support video_url (#8743)

* Support video_url

Support VLMs that work with video.
Example implementation in vllm: https://github.com/vllm-project/vllm/pull/10020

* llms openai.py: Add ChatCompletionVideoObject

Add data structures to support `video_url` in chat completion

* test test_completion.py: add test for video_url

* Arize Phoenix - ensure correct endpoint/protocol are used; and default to phoenix cloud (#8750)

* minor fixes to default to http and to ensure that the correct endpoint is used

* Update test_arize_phoenix.py

* prioritize http over grpc

---------

Co-authored-by: Emerson Gomes <emerson.gomes@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Pang Wu <104795337+pang-wu@users.noreply.github.com>
Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>
2025-02-24 18:55:48 -08:00
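The `video_url` support adds a message content part for video, analogous to `image_url`, backed by the ChatCompletionVideoObject type named above. A sketch of the message shape the commit implies; treat the exact field names as illustrative:

```python
# A chat message mixing text with a video content part.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what happens in this clip."},
            {"type": "video_url", "video_url": {"url": "https://example.com/clip.mp4"}},
        ],
    }
]
```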
Krish Dholakia
21ea52105a
Support arize phoenix on litellm proxy (#7756) (#8715)
* Update opentelemetry.py

wip

* Update test_opentelemetry_unit_tests.py

* fix a few paths and tests

* fix path

* Update litellm_logging.py

* accidentally removed code

* Add type for protocol

* Add and update tests

* minor changes

* update and add additional arize phoenix test

* update existing test

* address feedback

* use standard_logging_object

* address feedback

Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>
2025-02-22 20:55:11 -08:00