Commit graph

3744 commits

Author SHA1 Message Date
Ishaan Jaff
daaca2760e add test for admin only routes 2024-09-03 15:26:42 -07:00
Ishaan Jaff
bfb0aceeae add check for admin only routes 2024-09-03 15:03:32 -07:00
Ishaan Jaff
dd9ae9ccae Merge pull request #5489 from BerriAI/litellm_Add_secret_managers
[Feat] Add Google Secret Manager Support
2024-09-03 14:51:32 -07:00
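The merged PR above adds Google Secret Manager as a secret backend, and the commits below it refactor `get_secret` and add a `.env` fallback. A minimal sketch of that resolution order (environment first, then a secret-manager backend); the `access()` interface on the client is a hypothetical stand-in, not litellm's actual API:

```python
import os

def get_secret(name: str, secret_manager_client=None):
    """Resolve a secret: environment variable first, then an optional
    secret-manager backend (hypothetical interface, for illustration)."""
    value = os.environ.get(name)
    if value is not None:
        return value
    if secret_manager_client is not None:
        # `access(name)` is an assumed method name, not the real GSM client API
        return secret_manager_client.access(name)
    raise KeyError(f"secret {name!r} not found")
```

The environment-first order matters: it lets a local `.env` override the remote manager during development without touching the deployed secrets.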
Ishaan Jaff
cf66ca89b9 allow setting allowed routes on proxy 2024-09-03 13:59:31 -07:00
Ishaan Jaff
b5d1d93c14 refactor secret managers 2024-09-03 10:58:02 -07:00
Ishaan Jaff
47bfa77e3b read from .env for secret manager 2024-09-03 10:53:52 -07:00
Ishaan Jaff
09519b74db refactor get_secret 2024-09-03 10:42:12 -07:00
Krrish Dholakia
030567b886 fix(proxy/_types.py): add lago 'charge_by' env var to proxy ui 2024-09-03 08:19:40 -07:00
Krish Dholakia
18da7adce9 feat(router.py): Support Loadbalancing batch azure api endpoints (#5469)
* feat(router.py): initial commit for loadbalancing azure batch api endpoints

Closes https://github.com/BerriAI/litellm/issues/5396

* fix(router.py): working `router.acreate_file()`

* feat(router.py): working router.acreate_batch endpoint

* feat(router.py): expose router.aretrieve_batch function

Make it easy for user to retrieve the batch information

* feat(router.py): support 'router.alist_batches' endpoint

Adds support for getting all batches across all endpoints

* feat(router.py): working loadbalancing on `/v1/files`

* feat(proxy_server.py): working loadbalancing on `/v1/batches`

* feat(proxy_server.py): working loadbalancing on Retrieve + List batch
2024-09-02 21:32:55 -07:00
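The PR above spreads `/v1/files` and `/v1/batches` calls across multiple Azure deployments. litellm's router is considerably more sophisticated, but the core idea can be sketched as a round-robin picker over configured deployments (names below are invented for illustration):

```python
import itertools

class BatchEndpointBalancer:
    """Round-robin over Azure deployments for batch/file calls (illustrative;
    not litellm's actual Router implementation)."""
    def __init__(self, deployments):
        self._cycle = itertools.cycle(deployments)

    def pick(self) -> str:
        # Each call returns the next deployment in rotation
        return next(self._cycle)

balancer = BatchEndpointBalancer(["azure-eastus", "azure-westus"])
```

Per the commit body, `router.aretrieve_batch` and `router.alist_batches` then fan out across all deployments so a batch can be found regardless of which endpoint created it.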
Ishaan Jaff
9c14d63697 Merge branch 'main' into litellm_track_imagen_spend_logs 2024-09-02 21:21:15 -07:00
Ishaan Jaff
b6009233ac fix always read redis 2024-09-02 21:08:32 -07:00
Ishaan Jaff
7f8b43542b fix success handler typing 2024-09-02 19:42:36 -07:00
Ishaan Jaff
778cba702e fix linting errors 2024-09-02 19:39:10 -07:00
Ishaan Jaff
e882219b7d Merge pull request #5480 from BerriAI/litellm_track_streaming_spendLogs
[Feat] Track Usage for `/streamGenerateContent` endpoint
2024-09-02 19:25:52 -07:00
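Tracking usage for the streaming `/streamGenerateContent` pass-through (the chunk-processor commits below) amounts to scanning the streamed chunks for usage metadata. A sketch, assuming Gemini's `usageMetadata` field shape, where the last chunk carrying the field holds the cumulative totals:

```python
def extract_stream_usage(chunks):
    """Return usage metadata from the final chunk that carries it.
    Field names follow Gemini's `usageMetadata` shape; illustrative only."""
    usage = None
    for chunk in chunks:
        if "usageMetadata" in chunk:
            usage = chunk["usageMetadata"]  # later chunks overwrite earlier ones
    return usage
```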
Ishaan Jaff
de3fab70bd fix linting error 2024-09-02 18:14:15 -07:00
Ishaan Jaff
e77133f3e1 fix linting error 2024-09-02 18:13:32 -07:00
Ishaan Jaff
dc042d1a00 add cost tracking for pass through imagen 2024-09-02 18:10:46 -07:00
Ishaan Jaff
54fbea1a82 track image gen in spend logs 2024-09-02 17:36:25 -07:00
Ishaan Jaff
e3becc6514 refactor vtx image gen 2024-09-02 17:35:51 -07:00
Ishaan Jaff
a9c9967b6d fix linting 2024-09-02 17:08:30 -07:00
a9c9967b6d fix linting 2024-09-02 17:08:30 -07:00
Ishaan Jaff
8b4ba3ccb8 fix linting error 2024-09-02 17:08:03 -07:00
Ishaan Jaff
e60c7a3b85 track /embedding in spendLogs 2024-09-02 17:05:53 -07:00
Ishaan Jaff
5876d043b4 code cleanup 2024-09-02 16:36:19 -07:00
Ishaan Jaff
3f9c58507e pass through track usage for streaming endpoints 2024-09-02 16:11:20 -07:00
Ishaan Jaff
ef6b90a657 use chunk_processor 2024-09-02 15:51:52 -07:00
Ishaan Jaff
fbeb6941f1 new streaming handler fn 2024-09-02 15:51:21 -07:00
Krish Dholakia
11f85d883f LiteLLM Minor Fixes + Improvements (#5474)
* feat(proxy/_types.py): add lago billing to callbacks ui

Closes https://github.com/BerriAI/litellm/issues/5472

* fix(anthropic.py): return anthropic prompt caching information

Fixes https://github.com/BerriAI/litellm/issues/5364

* feat(bedrock/chat.py): support 'json_schema' for bedrock models

Closes https://github.com/BerriAI/litellm/issues/5434

* fix(bedrock/embed/embeddings.py): support async embeddings for amazon titan models

* fix: linting fixes

* fix: handle key errors

* fix(bedrock/chat.py): fix bedrock ai21 streaming object

* feat(bedrock/embed): support bedrock embedding optional params

* fix(databricks.py): fix usage chunk

* fix(internal_user_endpoints.py): apply internal user defaults, if user role updated

Fixes issue where user update wouldn't apply defaults

* feat(slack_alerting.py): provide multiple slack channels for a given alert type

multiple channels might be interested in receiving an alert for a given type

* docs(alerting.md): add multiple channel alerting to docs
2024-09-02 14:29:57 -07:00
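Among the fixes above, the slack_alerting change lets multiple channels subscribe to one alert type. The routing can be sketched as a simple alert-type-to-webhooks map (URLs below are placeholders, and this is not litellm's actual config schema):

```python
# Map each alert type to one or more webhook URLs (illustrative shape only).
ALERT_CHANNELS = {
    "budget_alerts": ["https://hooks.slack.com/services/T0/B0/budget"],
    "llm_exceptions": [
        "https://hooks.slack.com/services/T0/B1/oncall",
        "https://hooks.slack.com/services/T0/B2/eng",
    ],
}

def channels_for(alert_type: str) -> list:
    """Every listed webhook receives the alert; unknown types get none."""
    return ALERT_CHANNELS.get(alert_type, [])
```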
Krish Dholakia
3fbb4f8fac Azure Service Principal with Secret authentication workflow. (#5131) (#5437)
* Azure Service Principal with Secret authentication workflow. (#5131)

* Implement Azure Service Principal with Secret authentication workflow.

* Use `ClientSecretCredential` instead of `DefaultAzureCredential`.

* Move imports into the function.

* Add type hint for `azure_ad_token_provider`.

* Add unit test for router initialization and sample completion using Azure Service Principal with Secret authentication workflow.

* Add unit test for router initialization with neither API key nor using Azure Service Principal with Secret authentication workflow.

* fix(client_initialization_utils.py): fix typing + overrides

* test: fix linting errors

* fix(client_initialization_utils.py): fix client init azure ad token logic

* fix(router_client_initialization.py): add flag check for reading azure ad token from environment

* test(test_streaming.py): skip end of life bedrock model

* test(test_router_client_init.py): add correct flag to test

---------

Co-authored-by: kzych-inpost <142029278+kzych-inpost@users.noreply.github.com>
2024-09-02 14:29:00 -07:00
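The Service Principal workflow above authenticates to Azure with a tenant/client/secret triple via `ClientSecretCredential` instead of an API key. A sketch of the proxy config shape, with credentials pulled from the environment (field names are assumptions based on azure-identity conventions; check litellm's docs for the exact keys and the flag added in the last commit):

```yaml
# Sketch only: resource/deployment names are placeholders.
model_list:
  - model_name: azure-gpt
    litellm_params:
      model: azure/my-deployment
      api_base: https://my-resource.openai.azure.com/
      tenant_id: os.environ/AZURE_TENANT_ID
      client_id: os.environ/AZURE_CLIENT_ID
      client_secret: os.environ/AZURE_CLIENT_SECRET
```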
Ishaan Jaff
ea30be2d91 fix pass through construct_target_url when vertex_proj is None 2024-09-02 12:51:30 -07:00
Krish Dholakia
ca4e746545 LiteLLM minor fixes + improvements (31/08/2024) (#5464)
* fix(vertex_endpoints.py): fix vertex ai pass through endpoints

* test(test_streaming.py): skip model due to end of life

* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits

Closes https://github.com/BerriAI/litellm/issues/4096
2024-09-01 13:31:42 -07:00
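The custom_logger change above adds a dedicated callback for a deployment hitting its tpm/rpm limit. A minimal sketch of such a hook; litellm's real hook lives on its `CustomLogger` class and the method name here is assumed:

```python
class RateLimitCallback:
    """Illustrative callback fired when a deployment hits its tpm/rpm limit
    (hypothetical method name; see litellm's CustomLogger for the real hook)."""
    def __init__(self):
        self.events = []

    def on_deployment_rate_limited(self, model: str, limit_type: str):
        # Record the event; a real handler might alert or cool down the deployment
        self.events.append((model, limit_type))
```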
Krish Dholakia
e474c3665a Bedrock Embeddings refactor + model support (#5462)
* refactor(bedrock): initial commit to refactor bedrock to a folder

Improve code readability + maintainability

* refactor: more refactor work

* fix: fix imports

* feat(bedrock/embeddings.py): support translating embedding into amazon embedding formats

* fix: fix linting errors

* test: skip test on end of life model

* fix(cohere/embed.py): fix linting error

* fix(cohere/embed.py): fix typing

* fix(cohere/embed.py): fix post-call logging for cohere embedding call

* test(test_embeddings.py): fix error message assertion in test
2024-09-01 13:29:58 -07:00
Ishaan Jaff
6b1cfcba5a Merge pull request #5463 from BerriAI/litellm_track_error_per_model
[Feat - Prometheus] - Track error_code, model metric
2024-08-31 16:36:04 -07:00
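The Prometheus PR above labels failures by error code and model. The same shape, sketched with a plain `Counter` instead of `prometheus_client` so the idea stands alone (a real exporter would use a labelled `Counter` metric):

```python
from collections import Counter

class ErrorMetrics:
    """Counts failures keyed by (error_code, model), mirroring the labelled
    Prometheus counter the PR adds (illustrative only)."""
    def __init__(self):
        self.failures = Counter()

    def record_failure(self, error_code: int, model: str):
        self.failures[(error_code, model)] += 1
```

Labelling by both dimensions lets a dashboard answer "which model is throwing 429s" rather than only "how many errors total".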
Ishaan Jaff
638e6291f0 Merge pull request #5457 from BerriAI/litellm_track_spend_logs_for_vertex_pass_through_endpoints
[Feat-Proxy] track spend logs for vertex pass through endpoints
2024-08-31 16:30:15 -07:00
Krish Dholakia
f88ca9a1fe anthropic prompt caching cost tracking (#5453)
* fix(utils.py): support 'drop_params' for embedding requests

Fixes https://github.com/BerriAI/litellm/issues/5444

* feat(anthropic/cost_calculation.py): Support calculating cost for prompt caching on anthropic

* feat(types/utils.py): allows us to migrate to openai's equivalent, once that comes out

* fix: fix linting errors

* test: mark flaky test
2024-08-31 14:50:52 -07:00
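Prompt-caching cost tracking splits Anthropic usage into normal input, cache writes, and cache reads. A sketch of the arithmetic, using Anthropic's published multipliers at the time (cache writes billed at 1.25x the input rate, cache reads at 0.1x; verify against current pricing, and note litellm's actual `cost_calculation.py` may differ):

```python
def anthropic_cost(usage: dict, input_rate: float, output_rate: float) -> float:
    """Illustrative cost split for Anthropic prompt caching.
    Rates are per-token; field names follow Anthropic's usage object."""
    return (
        usage.get("input_tokens", 0) * input_rate
        + usage.get("cache_creation_input_tokens", 0) * input_rate * 1.25
        + usage.get("cache_read_input_tokens", 0) * input_rate * 0.1
        + usage.get("output_tokens", 0) * output_rate
    )
```

For example, 1000 cache-read tokens cost the same as 100 ordinary input tokens, which is what makes caching large system prompts worthwhile.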
Krish Dholakia
5f993f46a0 anthropic prompt caching cost tracking (#5453)
* fix(utils.py): support 'drop_params' for embedding requests

Fixes https://github.com/BerriAI/litellm/issues/5444

* feat(anthropic/cost_calculation.py): Support calculating cost for prompt caching on anthropic

* feat(types/utils.py): allows us to migrate to openai's equivalent, once that comes out

* fix: fix linting errors

* test: mark flaky test
2024-08-31 14:09:35 -07:00
Ishaan Jaff
3fae5eb94e feat prometheus add metric for failure / model 2024-08-31 10:05:23 -07:00
Ishaan Jaff
2474400796 fix cost tracking for vertex ai native 2024-08-31 08:22:27 -07:00
Ishaan Jaff
2619dbfa57 call spend logs endpoint 2024-08-30 16:35:07 -07:00
Ishaan Jaff
6e02df9ac2 add test for vertex basic pass through 2024-08-30 16:26:00 -07:00
Ishaan Jaff
386214302a fix use existing custom_auth.py 2024-08-30 16:22:28 -07:00
Ishaan Jaff
15f1ead87f allow pass through routes as LLM API routes 2024-08-30 16:08:44 -07:00
Ishaan Jaff
dd2aaf33fa use helper class for pass through success handler 2024-08-30 15:52:47 -07:00
Ishaan Jaff
b4573c14c2 add example custom 2024-08-30 15:46:45 -07:00
Ishaan Jaff
f303f8daa8 Merge pull request #5450 from BerriAI/litellm_load_config_from_gcs
[Feat-Proxy] Load config.yaml from GCS Bucket
2024-08-30 12:08:54 -07:00
Ishaan Jaff
1b9aa7b357 vertex forward all headers from vertex 2024-08-30 11:05:23 -07:00
Ishaan Jaff
c60125d7be add gcs bucket base 2024-08-30 10:41:39 -07:00
Ishaan Jaff
ce1a0a93b5 use helper to get_config_file_contents_from_gcs 2024-08-30 10:26:42 -07:00
Ishaan Jaff
67a8907b1e Merge pull request #5438 from BerriAI/litellm_show_error_types_swagger
[Feat-Proxy] Show all exception types on swagger for LiteLLM Proxy
2024-08-30 07:21:23 -07:00
Krish Dholakia
321b0961b5 fix: Minor LiteLLM Fixes + Improvements (29/08/2024) (#5436)
* fix(model_checks.py): support returning wildcard models on `/v1/models`

Fixes https://github.com/BerriAI/litellm/issues/4903

* fix(bedrock_httpx.py): support calling bedrock via api_base

Closes https://github.com/BerriAI/litellm/pull/4587

* fix(litellm_logging.py): only leave last 4 char of gemini key unmasked

Fixes https://github.com/BerriAI/litellm/issues/5433

* feat(router.py): support setting 'weight' param for models on router

Closes https://github.com/BerriAI/litellm/issues/5410

* test(test_bedrock_completion.py): add unit test for custom api base

* fix(model_checks.py): handle no "/" in model
2024-08-29 22:40:25 -07:00
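The model_checks fix above returns wildcard models (e.g. a configured `openai/*`) on `/v1/models` and handles model names with no `/`. The matching can be sketched with `fnmatch`-style globbing (litellm's actual matching logic may differ):

```python
import fnmatch

def matches_wildcard_route(requested: str, configured: list) -> bool:
    """True if the requested model matches any configured route,
    including glob patterns like 'openai/*' (illustrative)."""
    return any(fnmatch.fnmatch(requested, pattern) for pattern in configured)
```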
Ishaan Jaff
378182cba2 show all error types on swagger 2024-08-29 18:50:41 -07:00