Commit graph

17789 commits

Ishaan Jaff
1973ae8fb8
[Feat] Allow setting supports_vision for Custom OpenAI endpoints + Added testing (#5821)
* add test for using images with custom openai endpoints

* run all otel tests

* update name of test

* add custom openai model to test config

* add test for setting supports_vision=True for model

* fix test guardrails aporia

* docs supports vision

* fix yaml

* fix yaml

* docs supports vision

* fix bedrock guardrail test

* fix cohere rerank test

* update model_group doc string

* add better prints on test
2024-09-21 11:35:55 -07:00
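
Editor's note: for the supports_vision commit above, a minimal SDK-side sketch of sending an image to a custom OpenAI-compatible endpoint. The endpoint URL, key, and model name are placeholders; on the proxy, the equivalent is marking the deployment with `supports_vision: true` under its `model_info` (per the docs added in #5821).

```python
import litellm

# Hypothetical OpenAI-compatible endpoint that accepts image inputs.
response = litellm.completion(
    model="openai/my-custom-vision-model",          # "openai/" prefix routes via the OpenAI-compatible client
    api_base="https://my-endpoint.example.com/v1",  # placeholder endpoint
    api_key="sk-placeholder",                       # placeholder key
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```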
Yurii Kostyukov
4069942dd8
Fixed DeepSeek input and output tokens (#5718)
* Fixed deepseek input and output tokens

See https://platform.deepseek.com/api-docs/quick_start/pricing/

* Returned 4096
2024-09-21 08:22:27 -07:00
superpoussin22
acfb060bf1
Correct casing (#5817)
* Update Dockerfile

correct casing

* Update Dockerfile.database

correct casing

* Update Dockerfile.alpine

correct casing

* Update Dockerfile.non_root

correct casing
2024-09-21 08:21:11 -07:00
Ishaan Jaff
1d630b61ad
[Feat] Add fireworks AI embedding (#5812)
* add fireworks embedding models

* add fireworks ai

* fireworks ai embeddings support

* is_fireworks_embedding_model

* working fireworks embeddings

* fix health check * models

* fix embedding get optional params

* fix linting errors

* fix pick_cheapest_chat_model_from_llm_provider

* add fireworks ai litellm provider

* docs fireworks embedding models

* fixes for when azure ad token is passed
2024-09-20 22:23:28 -07:00
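
Editor's note: a minimal sketch of the Fireworks AI embedding support added above, from the SDK side. The model name is an assumption (one of the Fireworks-hosted embedding models); the key is a placeholder.

```python
import os
import litellm

os.environ["FIREWORKS_AI_API_KEY"] = "fw-placeholder"  # placeholder key

# "fireworks_ai/" prefix routes to the Fireworks AI provider added in #5812.
response = litellm.embedding(
    model="fireworks_ai/nomic-ai/nomic-embed-text-v1.5",  # assumed model name
    input=["hello from litellm"],
)
print(len(response.data[0]["embedding"]))
```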
Krrish Dholakia
d349d501c8 docs(proxy/configs.md): add CONFIG_FILE_PATH tutorial to docs 2024-09-20 22:04:16 -07:00
Krrish Dholakia
7ca9165d59 bump: version 1.46.8 → 1.47.0 2024-09-20 21:51:18 -07:00
Krish Dholakia
7ed6938a3f
LiteLLM Minor Fixes & Improvements (09/20/2024) (#5807)
* fix(vertex_llm_base.py): Handle api_base = ""

Fixes https://github.com/BerriAI/litellm/issues/5798

* fix(o1_transformation.py): handle stream_options not being supported

https://github.com/BerriAI/litellm/issues/5803

* docs(routing.md): fix docs

Closes https://github.com/BerriAI/litellm/issues/5808

* perf(internal_user_endpoints.py): reduce db calls for getting team_alias for a key

Use the list gotten earlier in `/user/info` endpoint

Reduces ui keys tab load time to 800ms (prev. 28s+)

* feat(proxy_server.py): support CONFIG_FILE_PATH as env var

Closes https://github.com/BerriAI/litellm/issues/5744

* feat(get_llm_provider_logic.py): add `litellm_proxy/` as a known openai-compatible route

simplifies calling litellm proxy

Reduces confusion when calling models on litellm proxy from litellm sdk

* docs(litellm_proxy.md): cleanup docs

* fix(internal_user_endpoints.py): fix pydantic obj

* test(test_key_generate_prisma.py): fix test
2024-09-20 20:21:32 -07:00
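
Editor's note: one piece of #5807 worth illustrating is the `litellm_proxy/` provider prefix. A rough sketch of calling models on a running LiteLLM proxy from the SDK; the proxy URL and virtual key are placeholders. The CONFIG_FILE_PATH change in the same PR lets the proxy pick up its config.yaml from an environment variable instead of the `--config` flag.

```python
import litellm

response = litellm.completion(
    model="litellm_proxy/gpt-4o",       # everything after the prefix is the model name on the proxy
    api_base="http://localhost:4000",   # placeholder proxy URL
    api_key="sk-1234",                  # placeholder virtual key
    messages=[{"role": "user", "content": "hi"}],
)
print(response.choices[0].message.content)
```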
Krish Dholakia
c9ceab0f1e
refactor: cleanup root of repo (#5813) 2024-09-20 20:17:35 -07:00
Krrish Dholakia
dad3964207 build(schema.prisma): add column 'blocked' for litellm keys
enables blocking/unblocking litellm keys
2024-09-20 19:40:45 -07:00
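
Editor's note: the `blocked` column enables key blocking. A hypothetical admin-call sketch, assuming the proxy exposes companion `/key/block` and `/key/unblock` routes (route names assumed, not part of this commit); the proxy URL and master key are placeholders.

```python
import requests

PROXY = "http://localhost:4000"                 # placeholder proxy URL
HEADERS = {"Authorization": "Bearer sk-1234"}   # placeholder master key

# Assumed endpoint: block a virtual key so requests made with it are rejected.
resp = requests.post(f"{PROXY}/key/block", headers=HEADERS, json={"key": "sk-user-key"})
print(resp.status_code, resp.json())
```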
Ishaan Jaff
cf7dcd9168
[Feat-Proxy] Allow using custom sso handler (#5809)
* update internal user doc string

* add readme on location of /sso routes

* add custom_sso_handler

* docs custom sso

* use secure=True for cookies
2024-09-20 19:14:33 -07:00
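
Editor's note: a rough sketch of what a custom SSO handler can look like, following the docs added in #5809. The import paths, `SSOUserDefinedValues` fields, and the `general_settings.custom_sso` config key are taken from the LiteLLM docs and may differ slightly in your version.

```python
from fastapi_sso.sso.base import OpenID
from litellm.proxy._types import LitellmUserRoles, SSOUserDefinedValues

# Referenced from config.yaml as:  general_settings: { custom_sso: custom_sso.custom_sso_handler }
async def custom_sso_handler(userIDPInfo: OpenID) -> SSOUserDefinedValues:
    # Map the identity provider's user info onto a LiteLLM user.
    return SSOUserDefinedValues(
        models=[],                                       # no model restrictions
        user_id=userIDPInfo.id,
        user_email=userIDPInfo.email,
        user_role=LitellmUserRoles.INTERNAL_USER.value,
        max_budget=10,
        budget_duration="1mo",
    )
```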
Ishaan Jaff
0a18b6539c
use .debug for update_database() (#5810) 2024-09-20 18:52:51 -07:00
Ishaan Jaff
b98b5abfb0 fix model cost map fireworks embeddings 2024-09-20 18:33:22 -07:00
Ishaan Jaff
be3fec8bfb add fireworks_ai-embedding-models 2024-09-20 17:56:58 -07:00
Ishaan Jaff
9558cbd115 add fireworks embedding pricing 2024-09-20 17:41:28 -07:00
Ishaan Jaff
036fce8f18
[Fix] Tag Based Routing not work with wildcard routing (#5805)
* allow using tag routing for free

* only enforce tags for teams / keys
2024-09-20 14:05:56 -07:00
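
Editor's note: for the tag-routing fix above, a request-side sketch using the OpenAI client against the proxy. Per the tag-routing docs, deployments carry `tags` under `litellm_params` and `router_settings` has `enable_tag_filtering: true`; both are assumptions about your config rather than something this commit changes. Proxy URL and key are placeholders.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # placeholder proxy + key

# Tags in request metadata steer the router to deployments tagged "free"
# (including wildcard deployments after this fix).
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hello"}],
    extra_body={"metadata": {"tags": ["free"]}},
)
print(response.choices[0].message.content)
```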
Krish Dholakia
3933fba41f
LiteLLM Minor Fixes & Improvements (09/19/2024) (#5793)
* fix(model_prices_and_context_window.json): add cost tracking for more vertex llama3.1 model

8b and 70b models

* fix(proxy/utils.py): handle data being none on pre-call hooks

* fix(proxy/): create views on initial proxy startup

fixes base case, where user starts proxy for first time

Fixes https://github.com/BerriAI/litellm/issues/5756

* build(config.yml): fix vertex version for test

* feat(ui/): support enabling/disabling slack alerting

Allows admin to turn on/off slack alerting through ui

* feat(rerank/main.py): support langfuse logging

* fix(proxy/utils.py): fix linting errors

* fix(langfuse.py): log clean metadata

* test(tests): replace deprecated openai model
2024-09-20 08:19:52 -07:00
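
Editor's note: the Langfuse-for-rerank piece of #5793, sketched from the SDK side. The Cohere model name, Langfuse keys, and documents are placeholders.

```python
import os
import litellm

os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-placeholder"
os.environ["LANGFUSE_SECRET_KEY"] = "sk-placeholder"
os.environ["COHERE_API_KEY"] = "cohere-placeholder"

litellm.success_callback = ["langfuse"]  # rerank calls now show up in Langfuse too

response = litellm.rerank(
    model="cohere/rerank-english-v3.0",
    query="What is the capital of France?",
    documents=["Paris is the capital of France.", "Berlin is in Germany."],
    top_n=1,
)
print(response.results)
```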
Ishaan Jaff
696fc387d2 ui new build 2024-09-20 08:11:05 -07:00
Ishaan Jaff
a6100d7ea9
ui fix correct team not loading (#5804)
* ui fix correct team not loading

* ui fix
2024-09-20 08:08:56 -07:00
Ishaan Jaff
a3d4bf6c27 bump: version 1.46.7 → 1.46.8 2024-09-19 17:19:17 -07:00
Ishaan Jaff
8dbb1f59d7 ui new build 2024-09-19 17:18:49 -07:00
Ishaan Jaff
186db292ae
[Feat] Add Error Handling for /key/list endpoint (#5787)
* raise error from unsupported param

* add testing for key list endpoint

* add testing for key list error handling

* fix key list test
2024-09-19 17:14:12 -07:00
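
Editor's note: a small sketch of hitting the `/key/list` endpoint whose error handling the commit above covers. The proxy URL and master key are placeholders; accepted query parameters aren't shown since they depend on your version.

```python
import requests

resp = requests.get(
    "http://localhost:4000/key/list",             # placeholder proxy URL
    headers={"Authorization": "Bearer sk-1234"},  # placeholder master key
)
# Unsupported parameters now return a clear error instead of being silently ignored.
print(resp.status_code, resp.json())
```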
Ishaan Jaff
e6018a464f
[ Proxy - User Management]: If user assigned to a team don't show Default Team (#5791)
* rename endpoint to ui_settings

* ui allow DEFAULT_TEAM_DISABLED

* fix logic

* docs Set `default_team_disabled: true` on your litellm config.yaml
2024-09-19 17:13:58 -07:00
Ishaan Jaff
91e58d9049
[Feat] Add proxy level prometheus metrics (#5789)
* add Proxy Level Tracking Metrics doc

* update service logger

* prometheus - track litellm_proxy_failed_requests_metric

* use REQUESTED_MODEL

* fix prom request_data
2024-09-19 17:13:07 -07:00
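
Editor's note: with the prometheus callback enabled on the proxy, the proxy-level metrics added above (e.g. `litellm_proxy_failed_requests_metric`) are exposed on the standard `/metrics` endpoint. A quick scrape sketch with a placeholder proxy URL.

```python
import requests

metrics = requests.get("http://localhost:4000/metrics").text  # placeholder proxy URL

# Look for the proxy-level failure counter added in #5789.
for line in metrics.splitlines():
    if line.startswith("litellm_proxy_failed_requests_metric"):
        print(line)
```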
Ishaan Jaff
ae41c0df82 test fix test_multiple_deployments_sync 2024-09-19 16:23:13 -07:00
Ishaan Jaff
b54bbf510e fix azure gpt-4o test 2024-09-19 16:20:43 -07:00
Ishaan Jaff
b022247168
fix curl on /get team info (#5792) 2024-09-19 16:14:01 -07:00
Krish Dholakia
6051086322
test: replace gpt-3.5-turbo-0613 (deprecated model) (#5794) 2024-09-19 15:39:37 -07:00
Ishaan Jaff
4e03e1509f docs docker quick start 2024-09-19 15:10:59 -07:00
Ishaan Jaff
bea9a89ea8 docs fix link on root page 2024-09-19 15:00:30 -07:00
Ishaan Jaff
f971409888 docs add docker quickstart to litellm proxy getting started 2024-09-19 14:57:13 -07:00
Krrish Dholakia
5d67c5436b bump: version 1.46.6 → 1.46.7 2024-09-19 14:48:12 -07:00
Krrish Dholakia
0bdb17eca8 docs(vertex.md): fix example with GOOGLE_APPLICATION_CREDENTIALS 2024-09-19 14:47:52 -07:00
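
Editor's note: the Vertex docs fix above concerns credentials. A minimal sketch of the pattern, with project, location, and credentials path as placeholders.

```python
import os
import litellm

# Point the Google SDK at a service-account JSON (placeholder path).
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"

response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[{"role": "user", "content": "hi"}],
    vertex_project="my-gcp-project",   # placeholder project
    vertex_location="us-central1",     # placeholder region
)
print(response.choices[0].message.content)
```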
Ishaan Jaff
1e7839377c fix root of docs page 2024-09-19 14:36:21 -07:00
Ishaan Jaff
7e30bcc128
[Feat] Add Azure gpt-35-turbo-0301 pricing (#5790)
* add gpt-35-turbo-0301 pricing

* add azure gpt-35-turbo-0613 pricing

* add gpt-35-turbo-instruct-0914 pricing
2024-09-19 13:32:07 -07:00
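
Editor's note: with the Azure gpt-35-turbo prices in the cost map, `completion_cost` can price those calls. A small sketch using prompt/completion strings (litellm tokenizes them against the model's pricing entry).

```python
from litellm import completion_cost

cost = completion_cost(
    model="azure/gpt-35-turbo-0301",
    prompt="Write a haiku about proxies.",
    completion="Requests flow through glass\nrouting keys and budgets hum\nlogs fall like spring rain",
)
print(f"${cost:.6f}")
```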
Krish Dholakia
d46660ea0f
LiteLLM Minor Fixes & Improvements (09/18/2024) (#5772)
* fix(proxy_server.py): fix azure key vault logic to not require client id/secret

* feat(cost_calculator.py): support fireworks ai cost tracking

* build(docker-compose.yml): add lines for mounting config.yaml to docker compose

Closes https://github.com/BerriAI/litellm/issues/5739

* fix(input.md): update docs to clarify litellm supports content as a list of dictionaries

Fixes https://github.com/BerriAI/litellm/issues/5755

* fix(input.md): update input.md to include all message values

* fix(image_handling.py): follow image url redirects

Fixes https://github.com/BerriAI/litellm/issues/5763

* fix(router.py): Fix model key/base leak in error message

Fixes https://github.com/BerriAI/litellm/issues/5762

* fix(http_handler.py): fix linting error

* fix(azure.py): fix logging to show azure_ad_token being used

Fixes https://github.com/BerriAI/litellm/issues/5767

* fix(_redis.py): add redis sentinel support

Closes https://github.com/BerriAI/litellm/issues/4381

* feat(_redis.py): add redis sentinel support

Closes https://github.com/BerriAI/litellm/issues/4381

* test(test_completion_cost.py): fix test

* Databricks Integration: Integrate Databricks SDK as optional mechanism for fetching API base and token, if unspecified (#5746)

* LiteLLM Minor Fixes & Improvements (09/16/2024)  (#5723)

* coverage (#5713)

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* Move (#5714)

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* fix(litellm_logging.py): fix logging client re-init (#5710)

Fixes https://github.com/BerriAI/litellm/issues/5695

* fix(presidio.py): Fix logging_hook response and add support for additional presidio variables in guardrails config

Fixes https://github.com/BerriAI/litellm/issues/5682

* feat(o1_handler.py): fake streaming for openai o1 models

Fixes https://github.com/BerriAI/litellm/issues/5694

* docs: deprecated traceloop integration in favor of native otel (#5249)

* fix: fix linting errors

* fix: fix linting errors

* fix(main.py): fix o1 import

---------

Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>

* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating materialized view (#5730)

* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating materialized view

Supports having the `MonthlyGlobalSpend` view be a materialized view, and exposes an endpoint to refresh it

* fix(custom_logger.py): reset calltype

* fix: fix linting errors

* fix: fix linting error

* fix

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* fix: fix import

* Fix

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* fix

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* DB test

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* Coverage

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* progress

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* fix

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* fix

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* fix

Signed-off-by: dbczumar <corey.zumar@databricks.com>

* fix test name

Signed-off-by: dbczumar <corey.zumar@databricks.com>

---------

Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>

* test: fix test

* test(test_databricks.py): fix test

* fix(databricks/chat.py): handle custom endpoint (e.g. sagemaker)

* Apply code scanning fix for clear-text logging of sensitive information

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* fix(__init__.py): fix known fireworks ai models

---------

Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2024-09-19 13:25:29 -07:00
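
Editor's note: one piece of #5772 that lends itself to an example is the Databricks SDK fallback merged in via #5746: if `api_base`/`api_key` are not passed, litellm can pull them from the Databricks SDK's ambient workspace auth. A sketch assuming `databricks-sdk` is installed and configured; the model name is a placeholder.

```python
import litellm

# No api_base / api_key passed: litellm falls back to the Databricks SDK's
# workspace credentials (assumes a Databricks-authenticated environment).
response = litellm.completion(
    model="databricks/databricks-dbrx-instruct",   # placeholder Databricks-served model
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```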
Ishaan Jaff
49b2766723
add gemma2 9b it (#5788) 2024-09-19 13:03:33 -07:00
Ishaan Jaff
cd90807807
fix use converse for all llama3 models (#5729) 2024-09-19 09:31:52 -07:00
Krish Dholakia
8497e2aa36
feat(prometheus_api.py): support querying prometheus metrics for all-up + key-level spend on UI (#5782)
enables getting aggregated view from prometheus api

Makes proxy UI reliable in prod
2024-09-18 22:39:15 -07:00
Ishaan Jaff
a22e473636 set timeout on predibase test 2024-09-18 17:13:13 -07:00
Ishaan Jaff
c60f6f496a bump: version 1.46.5 → 1.46.6 2024-09-18 16:45:46 -07:00
Ishaan Jaff
4399deab2e docs fallback/login 2024-09-18 16:43:19 -07:00
Ishaan Jaff
5480563281 docs add info on /fallback/login 2024-09-18 16:41:19 -07:00
Ishaan Jaff
eba76377ca
[Chore-Proxy] enforce jwt auth as enterprise feature (#5770)
* enforce prometheus as enterprise feature

* show correct error on prometheus metric when not an enterprise user

* docs prometheus metrics enforced

* docs enforce JWT auth

* enforce JWT auth as enterprise feature

* fix merge conflicts
2024-09-18 16:28:37 -07:00
Ishaan Jaff
50cc7c0353
[Chore LiteLLM Proxy] enforce prometheus metrics as enterprise feature (#5769)
* enforce prometheus as enterprise feature

* show correct error on prometheus metric when not an enterprise user

* docs prometheus metrics enforced

* fix enforcing
2024-09-18 16:28:12 -07:00
Ishaan Jaff
7e07c37be7
[Feat-Proxy] Add Azure Assistants API - Create Assistant, Delete Assistant Support (#5777)
* update docs to show providers

* azure - move assistants into its own file

* create new azure assistants file

* add azure create assistants

* add test for create / delete assistants

* azure add delete assistants support

* docs add Azure to support providers for assistants api

* fix linting errors

* fix standard logging merge conflict

* docs azure create assistants

* fix doc
2024-09-18 16:27:33 -07:00
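
Editor's note: a rough sketch of the new Azure Assistants support from the SDK, assuming the `create_assistants` / `delete_assistant` helpers and the usual Azure env vars (`AZURE_API_KEY`, `AZURE_API_BASE`, `AZURE_API_VERSION`); exact parameter names may differ in your version, and the deployment name is a placeholder.

```python
import litellm

# Create an assistant on Azure OpenAI (assumes the Azure env vars above are set).
assistant = litellm.create_assistants(
    custom_llm_provider="azure",
    model="gpt-4o",                     # placeholder Azure deployment name
    instructions="You are a math tutor.",
    name="math-tutor",
)
print(assistant.id)

# ...and delete it again.
litellm.delete_assistant(custom_llm_provider="azure", assistant_id=assistant.id)
```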
Ishaan Jaff
a109853d21
[Prometheus] track requested model (#5774)
* enforce prometheus as enterprise feature

* show correct error on prometheus metric when not an enterprise user

* docs prometheus metrics enforced

* track requested model on prometheus

* docs prom metrics

* fix prom tracking failures
2024-09-18 12:46:58 -07:00
Ishaan Jaff
5aad3e6ea4
[Feat - GCS Bucket Logger] Use StandardLoggingPayload (#5771)
* docs update standard logging object

* GCSBucketLogger

* test gcs bucket logger
2024-09-18 11:37:52 -07:00
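
Editor's note: a minimal sketch of the GCS bucket logger, which after this PR emits the `StandardLoggingPayload`. Bucket name, service-account path, and API key are placeholders; env var names follow the GCS logging docs.

```python
import os
import litellm

os.environ["GCS_BUCKET_NAME"] = "my-litellm-logs"                          # placeholder bucket
os.environ["GCS_PATH_SERVICE_ACCOUNT"] = "/path/to/service-account.json"   # placeholder credentials
os.environ["OPENAI_API_KEY"] = "sk-placeholder"

litellm.callbacks = ["gcs_bucket"]  # each call is written to the bucket as a StandardLoggingPayload JSON object

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hi"}],
)
print(response.choices[0].message.content)
```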
Krrish Dholakia
8600ec7704 fix(litellm_logging.py): fix merge conflict 2024-09-18 10:49:57 -07:00
Ishaan Jaff
84e813b0f4 update gcs bucket to use standard logging payload 2024-09-18 10:34:21 -07:00
Ishaan Jaff
a4549b5b6c docs update what gets logged on gcs buckets 2024-09-18 10:18:57 -07:00