Ishaan Jaff
2a7e1e970d
(docs) prometheus metrics - document all prometheus metrics ( #5989 )
...
* fix doc on prometheus
* (docs) clean up prometheus docs
* docs show what metrics are deprecated
* doc clarify labels used for budget metrics
* add litellm_remaining_api_key_requests_for_model
2024-09-30 16:38:38 -07:00
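The proxy-level Prometheus metrics documented above are scraped from the proxy's `/metrics` endpoint; a minimal scrape config might look like this (the job name and `localhost:4000` target are assumptions of this sketch, not taken from the log):

```yaml
scrape_configs:
  - job_name: "litellm-proxy"
    static_configs:
      - targets: ["localhost:4000"]   # assumed proxy host:port exposing /metrics
```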
Ishaan Jaff
ca9c437021
add Azure OpenAI Entra ID docs ( #5985 )
2024-09-30 12:17:58 -07:00
Ishaan Jaff
30aa04b8c2
add docs on privacy policy
2024-09-30 11:53:52 -07:00
Ishaan Jaff
50d1c864f2
fix grammar on health check docs ( #5984 )
2024-09-30 09:21:42 -07:00
Krrish Dholakia
7630680690
docs(response_headers.md): add response headers to docs
2024-09-28 23:33:50 -07:00
DAOUDI Soufian
bfa9553819
Fixed minor typo in bash command to prevent overwriting .env file ( #5902 )
...
Changed '>' to '>>' in the bash command to append the environment variable to the .env file instead of overwriting it.
2024-09-28 23:12:19 -07:00
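The fix above is the standard shell redirection distinction; a minimal demo (`demo.env` is a placeholder filename for this example):

```shell
# '>' truncates the target file; '>>' appends to it.
echo 'FIRST_VAR="a"' > demo.env     # creates (or overwrites) demo.env
echo 'SECOND_VAR="b"' >> demo.env   # appends; FIRST_VAR is preserved
cat demo.env                        # prints both lines
```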
Krrish Dholakia
c9d6925a42
docs(reliability.md): add tutorial on setting wildcard models as fallbacks
2024-09-28 21:08:15 -07:00
Ishaan Jaff
b817974c8e
docs clean up langfuse.md
2024-09-28 18:59:02 -07:00
Ishaan Jaff
0d0f46a826
[Feat Proxy] Allow using hypercorn for http v2 ( #5950 )
...
* use run_hypercorn
* add docs on using hypercorn
2024-09-28 15:03:50 -07:00
Ishaan Jaff
fd87ae69b8
[Vertex Multimodal embeddings] Fixes to work with Langchain OpenAI Embedding ( #5949 )
...
* fix parallel request limiter - use one cache update call
* ci/cd run again
* run ci/cd again
* use docker username password
* fix config.yml
* fix config
* fix config
* fix config.yml
* ci/cd run again
* use correct typing for batch set cache
* fix async_set_cache_pipeline
* fix only check user id tpm / rpm limits when limits set
* fix test_openai_azure_embedding_with_oidc_and_cf
* add InstanceImage type
* fix vertex image transform
* add langchain vertex test request
* add new vertex test
* update multimodal embedding tests
* add test_vertexai_multimodal_embedding_base64image_in_input
* simplify langchain mm embedding usage
* add langchain example for multimodal embeddings on vertex
* fix linting error
2024-09-27 18:04:03 -07:00
Khanh Le
71f68ac185
docs(vertex.md): fix codestral fim placement ( #5946 )
2024-09-27 17:21:34 -07:00
Ishaan Jaff
bbf4db79c1
docs - show correct rpm -> tpm conversion for Azure
2024-09-27 17:18:55 -07:00
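The rpm -> tpm conversion in that doc follows Azure OpenAI's quota convention of granting roughly 6 RPM per 1,000 TPM; a hedged sketch of the arithmetic (the 6:1000 ratio is this example's assumption, not stated in the log):

```python
def azure_rpm_from_tpm(tpm: int) -> int:
    """Derive the RPM limit implied by an Azure TPM quota (assumed 6 RPM per 1000 TPM)."""
    return (tpm // 1000) * 6
```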
Krrish Dholakia
70df474e64
docs: resolve imports
2024-09-27 13:36:29 -07:00
Krrish Dholakia
2e9dca135e
docs(data_security.md): add legal/compliance faq's
...
Make it easier for companies to use litellm
2024-09-27 13:33:27 -07:00
Jannik Maierhöfer
52e971155a
[docs] updated langfuse integration guide ( #5921 )
2024-09-27 07:49:47 -07:00
Krish Dholakia
a1d9e96b31
LiteLLM Minor Fixes & Improvements (09/25/2024) ( #5893 )
...
* fix(langfuse.py): support new langfuse prompt_chat class init params
* fix(langfuse.py): handle new init values on prompt chat + prompt text templates
fixes error caused during langfuse logging
* docs(openai_compatible.md): clarify `openai/` handles correct routing for `/v1/completions` route
Fixes https://github.com/BerriAI/litellm/issues/5876
* fix(utils.py): handle unmapped gemini model optional param translation
Fixes https://github.com/BerriAI/litellm/issues/5888
* fix(o1_transformation.py): fix o-1 validation, to not raise error if temperature=1
Fixes https://github.com/BerriAI/litellm/issues/5884
* fix(prisma_client.py): refresh iam token
Fixes https://github.com/BerriAI/litellm/issues/5896
* fix: pass drop params where required
* fix(utils.py): pass drop_params correctly
* fix(types/vertex_ai.py): fix generation config
* test(test_max_completion_tokens.py): fix test
* fix(vertex_and_google_ai_studio_gemini.py): fix map openai params
2024-09-26 16:41:44 -07:00
Ishaan Jaff
a8dd495eae
[Feat] add fireworks llama 3.2 models + cost tracking ( #5905 )
...
* add fireworks llama 3.2 vision models
* add new llama3.2 models
* docs add new llama 3.2 vision models
2024-09-25 17:59:46 -07:00
Ishaan Jaff
4bdeefd7e4
docs service accounts ( #5900 )
2024-09-25 15:46:13 -07:00
Ishaan Jaff
4ec4d02474
[Feat-Router] Allow setting which environment to use a model on ( #5892 )
...
* add check deployment_is_active_for_environment
* add test for test_init_router_with_supported_environments
* show good example config for environments
* docs clean up config.yaml
* docs cleanup
* docs configs
* docs specific env
2024-09-25 10:12:06 -07:00
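A sketch of what such an environment-scoped model config might look like (the `supported_environments` key and `LITELLM_ENVIRONMENT` variable are assumptions for illustration):

```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/gpt-4
    model_info:
      supported_environments: ["production"]  # assumed key: deployment only active in production
# The proxy would compare this list against e.g. LITELLM_ENVIRONMENT=production at startup.
```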
Ishaan Jaff
2516360ceb
docs show all configs
2024-09-25 06:37:38 -07:00
Ishaan Jaff
a8bb2f476c
docs show relevant litellm_settings
2024-09-25 06:36:10 -07:00
Krrish Dholakia
b2e80ecb8e
docs(user_keys.md): add docs on configurable clientside auth credentials
...
Allow easy switching of finetuned models
2024-09-24 22:44:39 -07:00
Krish Dholakia
d37c8b5c6b
LiteLLM Minor Fixes & Improvements (09/23/2024) ( #5842 ) ( #5858 )
...
* LiteLLM Minor Fixes & Improvements (09/23/2024) (#5842 )
* feat(auth_utils.py): enable admin to allow client-side credentials to be passed
Makes it easier for devs to experiment with finetuned fireworks ai models
* feat(router.py): allow setting configurable_clientside_auth_params for a model
Closes https://github.com/BerriAI/litellm/issues/5843
* build(model_prices_and_context_window.json): fix anthropic claude-3-5-sonnet max output token limit
Fixes https://github.com/BerriAI/litellm/issues/5850
* fix(azure_ai/): support content list for azure ai
Fixes https://github.com/BerriAI/litellm/issues/4237
* fix(litellm_logging.py): always set saved_cache_cost
Set to 0 by default
* fix(fireworks_ai/cost_calculator.py): add fireworks ai default pricing
handles calling 405b+ size models
* fix(slack_alerting.py): fix error alerting for failed spend tracking
Fixes regression with slack alerting error monitoring
* fix(vertex_and_google_ai_studio_gemini.py): handle gemini no candidates in streaming chunk error
* docs(bedrock.md): add llama3-1 models
* test: fix tests
* fix(azure_ai/chat): fix transformation for azure ai calls
2024-09-24 15:01:31 -07:00
Ishaan Jaff
5337440ff9
[Feat] SSO - add provider in the OpenID field for custom sso ( #5849 )
...
* service_account_settings on config
* include provider in OpenID for custom sso
* add GENERIC_PROVIDER_ATTRIBUTE to docs
* use correct naming scheme
2024-09-23 16:34:30 -07:00
Krrish Dholakia
16c8549b77
docs(virtual_keys.md): add enable/disable virtual keys to docs + refactor sidebar
2024-09-21 22:20:39 -07:00
Krish Dholakia
8039b95aaf
LiteLLM Minor Fixes & Improvements (09/21/2024) ( #5819 )
...
* fix(router.py): fix error message
* Litellm disable keys (#5814 )
* build(schema.prisma): allow blocking/unblocking keys
Fixes https://github.com/BerriAI/litellm/issues/5328
* fix(key_management_endpoints.py): fix pop
* feat(auth_checks.py): allow admin to enable/disable virtual keys
Closes https://github.com/BerriAI/litellm/issues/5328
* docs(vertex.md): add auth section for vertex ai
Addresses - https://github.com/BerriAI/litellm/issues/5768#issuecomment-2365284223
* build(model_prices_and_context_window.json): show which models support prompt_caching
Closes https://github.com/BerriAI/litellm/issues/5776
* fix(router.py): allow setting default priority for requests
* fix(router.py): add 'retry-after' header for concurrent request limit errors
Fixes https://github.com/BerriAI/litellm/issues/5783
* fix(router.py): correctly raise and use retry-after header from azure+openai
Fixes https://github.com/BerriAI/litellm/issues/5783
* fix(user_api_key_auth.py): fix valid token being none
* fix(auth_checks.py): fix model dump for cache management object
* fix(user_api_key_auth.py): pass prisma_client to obj
* test(test_otel.py): update test for new key check
* test: fix test
2024-09-21 18:51:53 -07:00
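The 'retry-after' fixes above amount to preferring the server-provided header over a computed backoff; a minimal, hedged sketch (the function name and defaults are this example's own, not litellm's API):

```python
def backoff_seconds(headers: dict, attempt: int, base: float = 1.0) -> float:
    """Prefer the server's Retry-After header; fall back to exponential backoff."""
    retry_after = headers.get("retry-after")
    if retry_after is not None:
        try:
            return float(retry_after)
        except ValueError:
            pass  # non-numeric values (e.g. an HTTP-date) fall through to backoff
    return base * (2 ** attempt)
```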
Ishaan Jaff
16b0d38c11
fix re-add virtual key auth checks on vertex ai pass thru endpoints ( #5827 )
2024-09-21 17:34:10 -07:00
Ishaan Jaff
d100b32573
[SSO-UI] Set new sso users as internal_view role users ( #5824 )
...
* use /user/list endpoint on admin ui
* sso insert user with role when user does not exist
* add sso sign in test
* linting fix
* rename self serve doc
* add doc for self serve flow
* test - sso sign in default values
* add test for /user/list endpoint
2024-09-21 16:43:52 -07:00
Ishaan Jaff
a9caba33ef
[Feat] Allow setting custom arize endpoint ( #5709 )
...
* set arize endpoint
* docs arize endpoint
* fix arize endpoint
2024-09-21 13:12:00 -07:00
Ishaan Jaff
1973ae8fb8
[Feat] Allow setting supports_vision for Custom OpenAI endpoints + Added testing ( #5821 )
...
* add test for using images with custom openai endpoints
* run all otel tests
* update name of test
* add custom openai model to test config
* add test for setting supports_vision=True for model
* fix test guardrails aporia
* docs supports vision
* fix yaml
* fix yaml
* docs supports vision
* fix bedrock guardrail test
* fix cohere rerank test
* update model_group doc string
* add better prints on test
2024-09-21 11:35:55 -07:00
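For the `supports_vision` change, the config shape is presumably a `model_info` flag on the custom deployment (the endpoint URL and model names below are placeholders):

```yaml
model_list:
  - model_name: my-custom-vision-model
    litellm_params:
      model: openai/my-model            # placeholder custom OpenAI-compatible model
      api_base: https://example.com/v1  # placeholder endpoint
    model_info:
      supports_vision: true             # assumed flag marking the model as vision-capable
```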
Ishaan Jaff
1d630b61ad
[Feat] Add fireworks AI embedding ( #5812 )
...
* add fireworks embedding models
* add fireworks ai
* fireworks ai embeddings support
* is_fireworks_embedding_model
* working fireworks embeddings
* fix health check * models
* fix embedding get optional params
* fix linting errors
* fix pick_cheapest_chat_model_from_llm_provider
* add fireworks ai litellm provider
* docs fireworks embedding models
* fixes for when azure ad token is passed
2024-09-20 22:23:28 -07:00
Krrish Dholakia
d349d501c8
docs(proxy/configs.md): add CONFIG_FILE_PATH tutorial to docs
2024-09-20 22:04:16 -07:00
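The CONFIG_FILE_PATH tutorial boils down to env-var config resolution; a hedged sketch of the precedence (CLI arg over env var over default is this example's assumption):

```python
import os

def resolve_config_path(cli_arg=None, default="config.yaml"):
    """Pick the proxy config path: explicit CLI arg, then CONFIG_FILE_PATH, then default."""
    return cli_arg or os.environ.get("CONFIG_FILE_PATH") or default
```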
Krish Dholakia
7ed6938a3f
LiteLLM Minor Fixes & Improvements (09/20/2024) ( #5807 )
...
* fix(vertex_llm_base.py): Handle api_base = ""
Fixes https://github.com/BerriAI/litellm/issues/5798
* fix(o1_transformation.py): handle stream_options not being supported
https://github.com/BerriAI/litellm/issues/5803
* docs(routing.md): fix docs
Closes https://github.com/BerriAI/litellm/issues/5808
* perf(internal_user_endpoints.py): reduce db calls for getting team_alias for a key
Use the list gotten earlier in `/user/info` endpoint
Reduces ui keys tab load time to 800ms (prev. 28s+)
* feat(proxy_server.py): support CONFIG_FILE_PATH as env var
Closes https://github.com/BerriAI/litellm/issues/5744
* feat(get_llm_provider_logic.py): add `litellm_proxy/` as a known openai-compatible route
simplifies calling litellm proxy
Reduces confusion when calling models on litellm proxy from litellm sdk
* docs(litellm_proxy.md): cleanup docs
* fix(internal_user_endpoints.py): fix pydantic obj
* test(test_key_generate_prisma.py): fix test
2024-09-20 20:21:32 -07:00
Ishaan Jaff
cf7dcd9168
[Feat-Proxy] Allow using custom sso handler ( #5809 )
...
* update internal user doc string
* add readme on location of /sso routes
* add custom_sso_handler
* docs custom sso
* use secure=True for cookies
2024-09-20 19:14:33 -07:00
Ishaan Jaff
e6018a464f
[Proxy - User Management]: If user is assigned to a team, don't show Default Team ( #5791 )
...
* rename endpoint to ui_settings
* ui allow DEFAULT_TEAM_DISABLED
* fix logic
* docs Set `default_team_disabled: true` on your litellm config.yaml
2024-09-19 17:13:58 -07:00
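The `default_team_disabled` setting mentioned above would sit in config.yaml; a minimal fragment (its placement under `general_settings` is assumed for this sketch):

```yaml
general_settings:
  default_team_disabled: true  # hide the Default Team for users already assigned to a team
```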
Ishaan Jaff
91e58d9049
[Feat] Add proxy level prometheus metrics ( #5789 )
...
* add Proxy Level Tracking Metrics doc
* update service logger
* prometheus - track litellm_proxy_failed_requests_metric
* use REQUESTED_MODEL
* fix prom request_data
2024-09-19 17:13:07 -07:00
Ishaan Jaff
4e03e1509f
docs docker quick start
2024-09-19 15:10:59 -07:00
Ishaan Jaff
bea9a89ea8
docs fix link on root page
2024-09-19 15:00:30 -07:00
Ishaan Jaff
f971409888
docs add docker quickstart to litellm proxy getting started
2024-09-19 14:57:13 -07:00
Krrish Dholakia
0bdb17eca8
docs(vertex.md): fix example with GOOGLE_APPLICATION_CREDENTIALS
2024-09-19 14:47:52 -07:00
Ishaan Jaff
1e7839377c
fix root of docs page
2024-09-19 14:36:21 -07:00
Krish Dholakia
d46660ea0f
LiteLLM Minor Fixes & Improvements (09/18/2024) ( #5772 )
...
* fix(proxy_server.py): fix azure key vault logic to not require client id/secret
* feat(cost_calculator.py): support fireworks ai cost tracking
* build(docker-compose.yml): add lines for mounting config.yaml to docker compose
Closes https://github.com/BerriAI/litellm/issues/5739
* fix(input.md): update docs to clarify litellm supports content as a list of dictionaries
Fixes https://github.com/BerriAI/litellm/issues/5755
* fix(input.md): update input.md to include all message values
* fix(image_handling.py): follow image url redirects
Fixes https://github.com/BerriAI/litellm/issues/5763
* fix(router.py): Fix model key/base leak in error message
Fixes https://github.com/BerriAI/litellm/issues/5762
* fix(http_handler.py): fix linting error
* fix(azure.py): fix logging to show azure_ad_token being used
Fixes https://github.com/BerriAI/litellm/issues/5767
* fix(_redis.py): add redis sentinel support
Closes https://github.com/BerriAI/litellm/issues/4381
* feat(_redis.py): add redis sentinel support
Closes https://github.com/BerriAI/litellm/issues/4381
* test(test_completion_cost.py): fix test
* Databricks Integration: Integrate Databricks SDK as optional mechanism for fetching API base and token, if unspecified (#5746 )
* LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723 )
* coverage (#5713 )
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* Move (#5714 )
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix(litellm_logging.py): fix logging client re-init (#5710 )
Fixes https://github.com/BerriAI/litellm/issues/5695
* fix(presidio.py): Fix logging_hook response and add support for additional presidio variables in guardrails config
Fixes https://github.com/BerriAI/litellm/issues/5682
* feat(o1_handler.py): fake streaming for openai o1 models
Fixes https://github.com/BerriAI/litellm/issues/5694
* docs: deprecated traceloop integration in favor of native otel (#5249 )
* fix: fix linting errors
* fix: fix linting errors
* fix(main.py): fix o1 import
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating materialized view (#5730 )
* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating materialized view
Supports having the `MonthlyGlobalSpend` view be a materialized view, and exposes an endpoint to refresh it
* fix(custom_logger.py): reset calltype
* fix: fix linting errors
* fix: fix linting error
* fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix: fix import
* Fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* DB test
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* Coverage
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* progress
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix test name
Signed-off-by: dbczumar <corey.zumar@databricks.com>
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
* test: fix test
* test(test_databricks.py): fix test
* fix(databricks/chat.py): handle custom endpoint (e.g. sagemaker)
* Apply code scanning fix for clear-text logging of sensitive information
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* fix(__init__.py): fix known fireworks ai models
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2024-09-19 13:25:29 -07:00
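Among the changes above, "fake streaming for openai o1 models" describes yielding an already-complete response in chunks; a hedged, generic sketch (chunking by character count is this example's simplification, not litellm's implementation):

```python
def fake_stream(text: str, chunk_size: int = 5):
    """Simulate streaming by yielding slices of a fully materialized response."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]
```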
Ishaan Jaff
4399deab2e
docs fallback/login
2024-09-18 16:43:19 -07:00
Ishaan Jaff
5480563281
docs add info on /fallback/login
2024-09-18 16:41:19 -07:00
Ishaan Jaff
eba76377ca
[Chore-Proxy] enforce jwt auth as enterprise feature ( #5770 )
...
* enforce prometheus as enterprise feature
* show correct error on prometheus metric when not an enterprise user
* docs prometheus metrics enforced
* docs enforce JWT auth
* enforce JWT auth as enterprise feature
* fix merge conflicts
2024-09-18 16:28:37 -07:00
Ishaan Jaff
50cc7c0353
[Chore LiteLLM Proxy] enforce prometheus metrics as enterprise feature ( #5769 )
...
* enforce prometheus as enterprise feature
* show correct error on prometheus metric when not an enterprise user
* docs prometheus metrics enforced
* fix enforcing
2024-09-18 16:28:12 -07:00
Ishaan Jaff
7e07c37be7
[Feat-Proxy] Add Azure Assistants API - Create Assistant, Delete Assistant Support ( #5777 )
...
* update docs to show providers
* azure - move assistants into its own file
* create new azure assistants file
* add azure create assistants
* add test for create / delete assistants
* azure add delete assistants support
* docs add Azure to support providers for assistants api
* fix linting errors
* fix standard logging merge conflict
* docs azure create assistants
* fix doc
2024-09-18 16:27:33 -07:00
Ishaan Jaff
a109853d21
[Prometheus] track requested model ( #5774 )
...
* enforce prometheus as enterprise feature
* show correct error on prometheus metric when not an enterprise user
* docs prometheus metrics enforced
* track requested model on prometheus
* docs prom metrics
* fix prom tracking failures
2024-09-18 12:46:58 -07:00
Ishaan Jaff
a4549b5b6c
docs update what gets logged on gcs buckets
2024-09-18 10:18:57 -07:00
Ishaan Jaff
aa84bcebaf
docs update standard logging object
2024-09-18 10:17:09 -07:00