Commit graph

3421 commits

Author SHA1 Message Date
Ishaan Jaff
baa9fda9ce docs - Custom Retention Policies 2025-01-20 07:29:48 -08:00
Ishaan Jaff
803da333bf docs Data Retention Policy 2025-01-20 07:00:38 -08:00
Krish Dholakia
dca6904937
JWT Auth - enforce_rbac support + UI team view, spend calc fix (#7863)
* fix(user_dashboard.tsx): fix spend calculation when team selected

sum all team keys, not user keys

* docs(admin_ui_sso.md): fix docs tabbing

* feat(user_api_key_auth.py): introduce new 'enforce_rbac' param on jwt auth

allows the proxy admin to block authenticated but unmapped JWT tokens from calling the proxy (see the sketch after this entry)

Fixes https://github.com/BerriAI/litellm/issues/6793

* test: more unit testing + refactoring

* fix: fix returning id when obj not found in db

* fix(user_api_key_auth.py): add end user id tracking from jwt auth

* docs(token_auth.md): add doc on rbac with JWTs

* fix: fix unused params

* test: remove old test
2025-01-19 21:28:55 -08:00
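
A minimal sketch of the `enforce_rbac` behavior described above: when enabled, an authenticated JWT whose role claim maps to no known litellm role is rejected. The `JWTAuthConfig` holder, role names, and `authorize_jwt` helper here are hypothetical; the real logic lives in `user_api_key_auth.py`.

```python
# Hypothetical sketch of 'enforce_rbac' on JWT auth: an authenticated token
# whose role maps to no known litellm role is rejected outright.
from dataclasses import dataclass, field
from typing import Optional

KNOWN_ROLES = {"proxy_admin", "internal_user", "team"}  # illustrative names

@dataclass
class JWTAuthConfig:
    enforce_rbac: bool = False
    role_mappings: dict = field(default_factory=dict)  # jwt claim -> litellm role

def authorize_jwt(claims: dict, config: JWTAuthConfig) -> Optional[str]:
    """Return the mapped litellm role, or None to reject the request."""
    mapped = config.role_mappings.get(claims.get("role"))
    if mapped in KNOWN_ROLES:
        return mapped
    if config.enforce_rbac:
        return None  # authenticated but unmapped -> blocked
    return "internal_user"  # permissive default when enforce_rbac is off

assert authorize_jwt({"role": "dev"}, JWTAuthConfig(enforce_rbac=True)) is None
```
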
Krish Dholakia
3a7b13efa2
feat(health_check.py): set upperbound for api when making health check call (#7865)
* feat(health_check.py): set upperbound for api when making health check call

prevents a bad model in the health check from hanging and causing pod restarts (see the sketch after this entry)

* fix(health_check.py): cleanup task once completed

* fix(constants.py): bump default health check timeout to 1min

* docs(health.md): add 'health_check_timeout' to health docs on litellm

* build(proxy_server_config.yaml): add bad model to health check
2025-01-18 19:47:43 -08:00
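
A minimal sketch of the pattern above: bound each health-check call with a timeout and cancel the task on expiry, so one bad model cannot hang the probe and trigger pod restarts. `check_model` is a stand-in for the real call; the 60s default mirrors the "1min" bump in the commit.

```python
import asyncio

HEALTH_CHECK_TIMEOUT = 60  # seconds; the commit bumps the default to 1 min

async def check_model(model: str) -> dict:
    """Stand-in for the real per-model health check call."""
    await asyncio.sleep(0.1)
    return {"model": model, "healthy": True}

async def bounded_health_check(model: str) -> dict:
    task = asyncio.ensure_future(check_model(model))
    done, pending = await asyncio.wait({task}, timeout=HEALTH_CHECK_TIMEOUT)
    if pending:
        task.cancel()  # clean up the task so a hung call cannot leak
        return {"model": model, "healthy": False, "error": "health check timed out"}
    return done.pop().result()

print(asyncio.run(bounded_health_check("some-model")))
```
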
Ishaan Jaff
f8b059bfa1 docs data sec 2025-01-18 17:44:02 -08:00
Ishaan Jaff
c458c7c801 litellm security page 2025-01-18 17:24:39 -08:00
Ishaan Jaff
c0253e17af docs Security Certifications 2025-01-18 17:12:42 -08:00
Ishaan Jaff
40a78253a0 docs data privacy 2025-01-18 17:01:57 -08:00
Ishaan Jaff
a2762fb273 ui release note 2025-01-17 20:27:53 -08:00
Ishaan Jaff
bc311b7a47 ui logs - view messages / responses 2025-01-17 20:20:49 -08:00
Ishaan Jaff
2c117264a2
[Hashicorp - secret manager] - use vault namespace for tls auth (#7834)
* hcorp - use x-vault-namespace

* _get_tls_cert_auth_body

* HCP_VAULT_CERT_ROLE

* test_hashicorp_secret_manager_tls_cert_auth

* HCP_VAULT_CERT_ROLE
2025-01-17 19:27:56 -08:00
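
A hedged sketch of Vault TLS cert auth scoped to a namespace, per the commit above. `HCP_VAULT_CERT_ROLE` comes from the commit; the other env var names and the exact request shape are assumptions modeled on Vault's documented `/v1/auth/cert/login` endpoint.

```python
import os
import requests  # assumes the 'requests' package is installed

def vault_tls_cert_login() -> str:
    """Log in to Vault via TLS cert auth, scoped to a namespace."""
    addr = os.environ["HCP_VAULT_ADDR"]                # e.g. https://vault.example.com:8200
    namespace = os.environ.get("HCP_VAULT_NAMESPACE")  # sent as X-Vault-Namespace
    role = os.environ.get("HCP_VAULT_CERT_ROLE")       # optional named cert role
    headers = {"X-Vault-Namespace": namespace} if namespace else {}
    body = {"name": role} if role else {}
    resp = requests.post(
        f"{addr}/v1/auth/cert/login",
        json=body,
        headers=headers,
        # client cert/key presented during the TLS handshake
        cert=(os.environ["HCP_VAULT_CLIENT_CERT"], os.environ["HCP_VAULT_CLIENT_KEY"]),
    )
    resp.raise_for_status()
    return resp.json()["auth"]["client_token"]
```
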
Nikolaiev Dmytro
7b45349145
Update instructor tutorial (#7784)
2025-01-15 15:10:50 -08:00
Hugues Chocart
6fff77d131
[integrations/lunary] Improve Lunary documentation (#7770)
* update lunary doc

* better title

* tweaks

* Update langchain.md

* Update lunary_integration.md
2025-01-15 15:00:25 -08:00
Ishaan Jaff
df7d500d42
docs iam role based access for bedrock (#7774) 2025-01-14 19:02:02 -08:00
Krish Dholakia
7b27cfb0ae
Support temporary budget increases on keys (#7754)
* fix(gpt_transformation.py): fix response_format translation check for 4o models

Fixes https://github.com/BerriAI/litellm/issues/7616

* feat(key_management_endpoints.py): support 'temp_budget_increase' and 'temp_budget_expiry' fields

Allow proxy admin to grant temporary budget increases to keys

* fix(proxy/_types.py): enforce temp_budget_increase and temp_budget_expiry are always passed together

* feat(user_api_key_auth.py): initial working temp budget increase logic

ensures the key's budget-exceeded check accounts for any temp budget in key metadata

* feat(proxy_server.py): return the key max budget and key spend in the response headers

Allows the client-side user to know their remaining limits

* test: add unit testing for new proxy utils

Ensures new key budget is correctly handled

* docs(temporary_budget_increase.md): add doc on temporary budget increase

* fix(utils.py): remove 3.5 from response_format check for now

not all Azure 3.5 models support response_format

* fix(user_api_key_auth.py): return valid user api key auth object on all paths
2025-01-14 17:03:11 -08:00
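
A sketch of how a temporary budget increase could be honored at budget-check time. The `temp_budget_increase` / `temp_budget_expiry` field names come from the commit above; the check logic itself is an assumption.

```python
from datetime import datetime, timedelta, timezone

def effective_max_budget(key: dict) -> float:
    """Max budget for a key, honoring an unexpired temporary increase."""
    budget = key["max_budget"]
    meta = key.get("metadata", {})
    increase = meta.get("temp_budget_increase")
    expiry = meta.get("temp_budget_expiry")  # must be set together with increase
    if increase and expiry and datetime.fromisoformat(expiry) > datetime.now(timezone.utc):
        budget += increase
    return budget

soon = (datetime.now(timezone.utc) + timedelta(days=1)).isoformat()
key = {"max_budget": 100.0,
       "metadata": {"temp_budget_increase": 50.0, "temp_budget_expiry": soon}}
assert effective_max_budget(key) == 150.0  # reverts to 100.0 once expired
```
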
Krish Dholakia
29663c2db5
Litellm dev 01 14 2025 p1 (#7771)
* First-class Aim Guardrails support (#7738)

* initial aim support

* add tests

* docs(langsmith_integration.md): cleanup

* style: cleanup unused imports

---------

Co-authored-by: Tomer Bin <117278227+hxtomer@users.noreply.github.com>
2025-01-14 16:18:21 -08:00
Ishaan Jaff
8c016d0184 docs benchmark 2025-01-14 10:48:43 -08:00
Ishaan Jaff
eb2770fee2 update benchmarks 2025-01-14 10:45:28 -08:00
Ishaan Jaff
d510f1d517
(fix) health check - allow setting health_check_model (#7752)
* use _update_litellm_params_for_health_check

* fix Wildcard Routes

* test_update_litellm_params_for_health_check

* test_perform_health_check_with_health_check_model

* fix doc string

* huggingface/mistralai/Mistral-7B-Instruct-v0.3
2025-01-13 20:16:44 -08:00
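
A sketch of the `health_check_model` override above: wildcard routes cannot be probed directly, so a concrete model from `model_info` is swapped in before the check runs. The helper name follows the commit; its body is an assumption.

```python
def update_litellm_params_for_health_check(model_info: dict, litellm_params: dict) -> dict:
    """If the deployment sets 'health_check_model', probe that model instead
    of the (possibly wildcard) production model. Sketch of the commit's helper."""
    params = dict(litellm_params)
    override = model_info.get("health_check_model")
    if override:
        params["model"] = override
    return params

params = update_litellm_params_for_health_check(
    {"health_check_model": "huggingface/mistralai/Mistral-7B-Instruct-v0.3"},
    {"model": "huggingface/*"},  # wildcard route cannot be probed directly
)
assert params["model"] == "huggingface/mistralai/Mistral-7B-Instruct-v0.3"
```
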
ProphetJeremy
ecf6d22dc2
(docs) Update vertex.md old code example
Complete imports
Remove invalid parameter `disable_atributon`
2025-01-13 14:27:27 +01:00
Krish Dholakia
ec5a354eac
add azure o1 pricing (#7715)
* build(model_prices_and_context_window.json): add azure o1 pricing

Closes https://github.com/BerriAI/litellm/issues/7712

* refactor: replace regex with string method for whitespace check in stop-sequences handling (#7713)

* Allows overriding keep_alive time in ollama (#7079)

* Allows overriding keep_alive time in ollama

* Also adds to ollama_chat

* Adds some info on the docs about this parameter

* fix: together ai warning (#7688)

Co-authored-by: Carl Senze <carl.senze@aleph-alpha.com>

* fix(proxy_server.py): handle config containing thread locked objects when using get_config_state

* fix(proxy_server.py): add exception to debug

* build(model_prices_and_context_window.json): update 'supports_vision' for azure o1

---------

Co-authored-by: Wolfram Ravenwolf <52386626+WolframRavenwolf@users.noreply.github.com>
Co-authored-by: Regis David Souza Mesquita <github@rdsm.dev>
Co-authored-by: Carl <45709281+capsenz@users.noreply.github.com>
Co-authored-by: Carl Senze <carl.senze@aleph-alpha.com>
2025-01-12 18:15:35 -08:00
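
The azure o1 pricing PR above also merges the ollama `keep_alive` override. A hedged usage sketch, assuming `keep_alive` is forwarded to ollama as a provider-specific param and a local ollama server is running:

```python
import litellm

# keep_alive controls how long ollama keeps the model loaded after the call
# (e.g. "5m", "24h", or -1 to pin it in memory), per the merged PR above.
response = litellm.completion(
    model="ollama/llama3",  # assumes a local ollama server with this model pulled
    messages=[{"role": "user", "content": "hello"}],
    keep_alive="24h",
)
print(response.choices[0].message.content)
```
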
Krrish Dholakia
3062564488 docs(enterprise.md): cleanup docs and add faq
2025-01-11 10:46:55 -08:00
Krrish Dholakia
d988bfb6f8 docs(enterprise.md): clarify sla for patching vulnerabilities 2025-01-11 10:42:32 -08:00
Krish Dholakia
5e537fbdb1
fix(model_hub.tsx): clarify cost in model hub is per 1m tokens (#7687)
* fix(model_hub.tsx): clarify cost in model hub is per 1m tokens

* docs: test blog

* docs: improve release note docs

* docs(docs/): new stable release doc

* docs(docs/): specify date in all posts

* docs(docs/): add git diff to stable release docs
2025-01-11 09:57:09 -08:00
Krrish Dholakia
9a1c050cf7 docs: new release notes
2025-01-10 22:49:20 -08:00
Krrish Dholakia
f2ca244766 docs(logging.md): add docs on s3 bucket logging with team alias prefix 2025-01-10 22:28:05 -08:00
Krish Dholakia
27892acdfc
Litellm dev 01 10 2025 p3 (#7682)
* feat(langfuse.py): log the used prompt when prompt management used

* test: fix test

* docs(self_serve.md): add doc on restricting personal key creation on ui

* feat(s3.py): support s3 logging with team alias prefixes (if available)

New preview feature

* fix(main.py): remove old if block - simplify to just await if coroutine returned

fixes lm_studio async embedding error

* fix(langfuse.py): handle get prompt check
2025-01-10 21:56:42 -08:00
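
A sketch of the team-alias prefixing for s3 logs described above (a preview feature). The `user_api_key_team_alias` metadata field and the object-key layout here are assumptions:

```python
from datetime import datetime, timezone

def s3_object_key(payload: dict) -> str:
    """Sketch: prefix s3 log keys with the team alias when one is present."""
    ts = datetime.now(timezone.utc)
    base = f"{ts:%Y-%m-%d}/{payload['request_id']}.json"
    team_alias = payload.get("metadata", {}).get("user_api_key_team_alias")
    return f"{team_alias}/{base}" if team_alias else base

print(s3_object_key({"request_id": "req-123",
                     "metadata": {"user_api_key_team_alias": "ml-team"}}))
# -> ml-team/2025-01-10/req-123.json (date varies)
```
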
Krish Dholakia
c4780479a9
Litellm dev 01 10 2025 p2 (#7679)
* test(test_basic_python_version.py): assert all optional dependencies are marked as extras on poetry

Fixes https://github.com/BerriAI/litellm/issues/7677

* docs(secret.md): clarify 'read_and_write' secret manager usage on aws

* docs(secret.md): fix doc

* build(ui/teams.tsx): add edit/delete button for updating user / team membership on ui

allows updating user role to admin on ui

* build(ui/teams.tsx): display edit member component on ui, when edit button on member clicked

* feat(team_endpoints.py): support updating team member role to admin via api endpoints

allows team member to become admin post-add

* build(ui/user_dashboard.tsx): if team admin - show all team keys

Fixes https://github.com/BerriAI/litellm/issues/7650

* test(config.yml): add tomli to ci/cd

* test: don't call python_basic_testing in local testing (covered by python 3.13 testing)
2025-01-10 21:50:53 -08:00
Ishaan Jaff
49d74748b0 fix showing release notes 2025-01-10 20:40:50 -08:00
Krish Dholakia
a3e65c9bcb
LiteLLM Minor Fixes & Improvements (01/10/2025) - p1 (#7670)
* test(test_get_model_info.py): add unit test confirming router deployment updates global 'get_model_info'

* fix(get_supported_openai_params.py): fix custom llm provider 'get_supported_openai_params'

Fixes https://github.com/BerriAI/litellm/issues/7668

* docs(azure.md): clarify how azure ad token refresh on proxy works

Closes https://github.com/BerriAI/litellm/issues/7665
2025-01-10 17:49:05 -08:00
Krrish Dholakia
e98c1b86f4 docs(config_settings.md): update docs to include new athina env var 2025-01-10 10:46:12 -08:00
vivek-athina
8e2653c609
Use environment variable for Athina logging URL (#7628)
* Use environment variable for Athina logging URL

* Added to docs as well

* Changed the env var name
2025-01-10 07:47:12 -08:00
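
A minimal sketch of the env-var-driven Athina URL above. The variable was renamed during review, so `ATHINA_BASE_URL` and the fallback URL here are assumptions; check `config_settings.md` for the final name.

```python
import os

# Resolve the Athina ingestion endpoint from an env var with a hard-coded
# fallback. 'ATHINA_BASE_URL' and the default are assumptions, not the
# confirmed final names from the PR.
ATHINA_BASE_URL = os.getenv("ATHINA_BASE_URL", "https://log.athina.ai")
LOG_ENDPOINT = f"{ATHINA_BASE_URL}/api/v1/log/inference"
print(LOG_ENDPOINT)
```
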
Krish Dholakia
c10ae8879e
fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini p… (#7660)
* fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini process url

* refactor(router.py): refactor '_prompt_management_factory' to use logging obj get_chat_completion logic

deduplicates code

* fix(litellm_logging.py): update 'get_chat_completion_prompt' to update logging object messages

* docs(prompt_management.md): update prompt management to be in beta

per feedback, this still needs revision (e.g., pass in the user message rather than ignoring it)

* refactor(prompt_management_base.py): introduce base class for prompt management

allows consistent behaviour across prompt management integrations

* feat(prompt_management_base.py): support adding client message to template message + refactor langfuse prompt management to use prompt management base

* fix(litellm_logging.py): log prompt id + prompt variables to langfuse if set

allows tracking what prompt was used for what purpose

* feat(litellm_logging.py): log prompt management metadata in standard logging payload + use in langfuse

allows logging prompt id / prompt variables to langfuse

* test: fix test

* fix(router.py): cleanup unused imports

* fix: fix linting error

* fix: fix trace param typing

* fix: fix linting errors

* fix: fix code qa check
2025-01-10 07:31:59 -08:00
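
A sketch of what a prompt-management base class can look like, per the refactor above: one shared contract so langfuse and future integrations resolve prompts identically. Method names are illustrative, not litellm's exact API.

```python
from abc import ABC, abstractmethod

class PromptManagementBase(ABC):
    """Sketch: shared contract for prompt-management integrations, so each
    backend resolves templates the same way."""

    @abstractmethod
    def get_prompt_template(self, prompt_id: str, prompt_variables: dict) -> list:
        """Fetch the stored template and render it to chat messages."""

    def get_chat_completion_prompt(self, prompt_id: str, prompt_variables: dict,
                                   client_messages: list) -> list:
        # supports adding the client's message on top of the template,
        # per the commit above
        return self.get_prompt_template(prompt_id, prompt_variables) + client_messages
```
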
Krish Dholakia
865e6d5bda
fix(main.py): fix lm_studio/ embedding routing (#7658)
* fix(main.py): fix lm_studio/ embedding routing

adds the mapping + updates docs with example

* docs(self_serve.md): update doc to show how to auto-add sso users to teams

* fix(streaming_handler.py): simplify async iterator check, to just check if streaming response is an async iterable
2025-01-09 23:03:24 -08:00
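
A hedged usage sketch of the fixed `lm_studio/` embedding routing, assuming LM Studio's OpenAI-compatible local server on its default port; the model name is a placeholder.

```python
import litellm

# LM Studio exposes an OpenAI-compatible server locally; after the routing
# fix above, 'lm_studio/' models map to it for embeddings too.
response = litellm.embedding(
    model="lm_studio/text-embedding-nomic-embed-text-v1.5",  # placeholder model name
    input=["hello world"],
    api_base="http://localhost:1234/v1",  # LM Studio's default local endpoint
)
print(len(response.data[0]["embedding"]))
```
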
Ishaan Jaff
13f364682d
(Feat - Batches API) add support for retrieving vertex api batch jobs (#7661)
* add _async_retrieve_batch

* fix aretrieve_batch

* fix _get_batch_id_from_vertex_ai_batch_response

* fix batches docs
2025-01-09 18:35:03 -08:00
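
A hedged sketch of retrieving a vertex ai batch job via the support added above, assuming the sync wrapper mirrors `aretrieve_batch`; the batch id is a placeholder.

```python
import litellm

# Retrieve a vertex ai batch job through the Batches API support added above.
# The batch id format is vertex-specific; this one is a placeholder.
batch = litellm.retrieve_batch(
    batch_id="projects/my-proj/locations/us-central1/batchPredictionJobs/123",
    custom_llm_provider="vertex_ai",
)
print(batch.status)
```
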
Krrish Dholakia
39ee4c6bb4 docs(intro.md): add a section on 'why pass through endpoints'
helps the proxy admin understand when these are useful
2025-01-08 19:15:41 -08:00
Ishaan Jaff
fd0a03f719
(feat) - allow building litellm proxy from pip package (#7633)
* fix working build from pip

* add tests for proxy_build_from_pip_tests

* doc clean up for deployment

* docs cleanup

* docs build from pip

* fix cd docker/build_from_pip
2025-01-08 16:36:57 -08:00
Ishaan Jaff
43566e9842 fix docs
2025-01-08 12:51:59 -08:00
Ishaan Jaff
e5717d2cb0 update load test docs 2025-01-08 12:48:21 -08:00
Ishaan Jaff
74b41d29d3 sort release notes 2025-01-08 12:16:01 -08:00
Ishaan Jaff
f95439af26 docs v1.57.3 2025-01-08 12:08:19 -08:00
Krish Dholakia
a187cee538
Litellm dev 01 07 2025 p3 (#7635)
* fix(__init__.py): fix mistral large tool calling

map bedrock mistral large to converse endpoint

Fixes https://github.com/BerriAI/litellm/issues/7521

* braintrust logging: respect project_id, add more metrics + more (#7613)

* braintrust logging: respect project_id, add more metrics

* braintrust logger: improve json formatting

* braintrust logger: add test for passing specific project_id

* rm unneeded import

* braintrust logging: rm unneeded var in tests

* add project_name

* update docs

---------

Co-authored-by: H <no@email.com>

---------

Co-authored-by: hi019 <65871571+hi019@users.noreply.github.com>
Co-authored-by: H <no@email.com>
2025-01-08 11:46:24 -08:00
Ishaan Jaff
04eb718f7a update docs 2025-01-07 22:35:07 -08:00
Krrish Dholakia
d5a288e29e docs: cleanup keys 2025-01-06 21:57:18 -08:00
Krrish Dholakia
16f13dd55c docs(prompt_management.md): update docs to show how to point to load balanced model name 2025-01-06 21:09:09 -08:00
Ishaan Jaff
6125ba1e2b
(Feat) - allow including dd-trace in litellm base image (#7587)
* introduce USE_DDTRACE=true

* update dd tracer

* update

* bump dd trace

* use og slim image

* DD tracing

* fix _init_dd_tracer
2025-01-06 17:27:09 -08:00
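
A sketch of a `USE_DDTRACE`-style toggle, per the feature above: ship the image with `ddtrace` installed but only initialize the tracer when the flag is set. litellm's actual wiring (`_init_dd_tracer`) may differ.

```python
import os

def init_dd_tracer_if_enabled() -> bool:
    """Only initialize the Datadog tracer when USE_DDTRACE is set, so the
    base image can include ddtrace but keep it inert by default."""
    if os.getenv("USE_DDTRACE", "false").lower() != "true":
        return False
    import ddtrace          # requires the 'ddtrace' package
    ddtrace.patch_all()     # auto-instrument supported libraries
    return True
```
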
minpeter
f7931b659b
FriendliAI: Documentation Updates (#7517)
* docs(friendliai.md): update FriendliAI documentation and model details

* docs(friendliai.md): remove unused imports for cleaner documentation

* feat: add support for parallel function calling, system messages, and response schema in model configuration
2025-01-04 22:44:24 -08:00
Ishaan Jaff
46d9d29bff
(Feat) Hashicorp Secret Manager - Allow storing virtual keys in secret manager (#7549)
* use a base abstract class

* async_write_secret for hcorp

* fix hcorp

* async_write_secret for hashicorp secret manager

* store virtual keys in hcorp

* add delete secret

* test_hashicorp_secret_manager_write_secret

* test_hashicorp_secret_manager_delete_secret

* docs Supported Secret Managers

* docs storing keys in hcorp

* docs hcorp

* docs secret managers

* test_key_generate_with_secret_manager_call

* fix unused imports
2025-01-04 11:35:59 -08:00
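
A sketch of the abstract base class the commit above introduces, so virtual keys can be written, read, and deleted against any secret-manager backend. Method names follow the commit messages; signatures are assumptions.

```python
from abc import ABC, abstractmethod
from typing import Optional

class BaseSecretManager(ABC):
    """Sketch: common async contract for secret-manager backends
    (hashicorp vault, aws, etc.) used to store virtual keys."""

    @abstractmethod
    async def async_write_secret(self, secret_name: str, secret_value: str) -> dict:
        ...

    @abstractmethod
    async def async_read_secret(self, secret_name: str) -> Optional[str]:
        ...

    @abstractmethod
    async def async_delete_secret(self, secret_name: str) -> dict:
        ...
```
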
Krish Dholakia
d43d83f9ef
feat(router.py): support request prioritization for text completion c… (#7540)
* feat(router.py): support request prioritization for text completion calls

* fix(internal_user_endpoints.py): fix sql query to return all keys, including null team id keys on `/user/info`

Fixes https://github.com/BerriAI/litellm/issues/7485

* fix: fix linting errors

* fix: fix linting error

* test(test_router_helper_utils.py): add direct test for '_schedule_factory'

Fixes code qa test
2025-01-03 19:35:44 -08:00
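
A hedged usage sketch of request prioritization on text completion, assuming the same `priority` semantics litellm's scheduler uses for chat completions (0 = highest):

```python
import asyncio
from litellm import Router

# assumes OPENAI_API_KEY is set in the environment
router = Router(model_list=[{
    "model_name": "gpt-3.5-turbo-instruct",
    "litellm_params": {"model": "text-completion-openai/gpt-3.5-turbo-instruct"},
}])

async def main():
    # priority=0 is the highest; lower-priority requests queue behind it.
    # The PR above extends scheduler support to text completion calls.
    resp = await router.atext_completion(
        model="gpt-3.5-turbo-instruct",
        prompt="Say hi",
        priority=0,
    )
    print(resp.choices[0].text)

asyncio.run(main())
```
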
Krish Dholakia
f770dd0c95
Support checking provider-specific /models endpoints for available models based on key (#7538)
* test(test_utils.py): initial test for valid models

Addresses https://github.com/BerriAI/litellm/issues/7525

* fix: test

* feat(fireworks_ai/transformation.py): support retrieving valid models from fireworks ai endpoint

* refactor(fireworks_ai/): support checking model info on `/v1/models` route

* docs(set_keys.md): update docs to clarify check llm provider api usage

* fix(watsonx/common_utils.py): support 'WATSONX_ZENAPIKEY' for iam auth

* fix(watsonx): read in watsonx token from env var

* fix: fix linting errors

* fix(utils.py): fix provider config check

* style: cleanup unused imports
2025-01-03 19:29:59 -08:00
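
A sketch of the underlying idea above: ask the provider's own `/v1/models` route which models the key can actually use. The fireworks URL and response shape here are assumptions modeled on OpenAI-style APIs.

```python
import os
import requests

# List models available to a fireworks ai key by querying the provider's
# OpenAI-style /v1/models route, the approach the PR above wires into
# litellm's valid-model check.
resp = requests.get(
    "https://api.fireworks.ai/inference/v1/models",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_AI_API_KEY']}"},
)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])
```
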