| Author | Commit | Message | Date |
|--------|--------|---------|------|
| Krrish Dholakia | ac8c0ecd85 | docs(user_keys.md): cleanup docs | 2024-08-13 13:14:27 -07:00 |
| Ishaan Jaff | 81680d6b1a | docs control langfuse specific tags | 2024-08-13 12:48:42 -07:00 |
| Krrish Dholakia | 718c2cfa4e | docs(team_logging.md): cleanup docs | 2024-08-12 19:53:05 -07:00 |
| Krrish Dholakia | fdd9a07051 | fix(utils.py): Break out of infinite streaming loop (Fixes https://github.com/BerriAI/litellm/issues/5158) | 2024-08-12 14:00:43 -07:00 |
| Ishaan Jaff | dc8f9e7241 | docs mark oidc as beta | 2024-08-12 09:01:36 -07:00 |
| Krrish Dholakia | 8cbf8d5671 | docs(perplexity.md): show how to get 'return_citations' | 2024-08-12 09:01:14 -07:00 |
| Ishaan Jaff | e46009f3d2 | Merge pull request #5154 from BerriAI/litellm_send_prometheus_fallbacks_from_slack ([Feat-Proxy] send prometheus fallbacks stats to slack) | 2024-08-10 17:14:01 -07:00 |
| Ishaan Jaff | cc3316104f | doc new prometheus metrics | 2024-08-10 17:13:36 -07:00 |
| Ishaan Jaff | ffb7f9f280 | add fallback_reports as slack alert | 2024-08-10 15:26:32 -07:00 |
| Krrish Dholakia | 0ea056971c | docs(prefix.md): add prefix support to docs | 2024-08-10 13:55:47 -07:00 |
| Krrish Dholakia | f10970f1b1 | docs(custom_llm_server.md): clarify what to use for modifying incoming/outgoing calls | 2024-08-10 12:58:43 -07:00 |
| Ishaan Jaff | 0acc6efa8f | docs clean sidebar | 2024-08-09 18:09:11 -07:00 |
| Ishaan Jaff | 4c08d1a21d | docs migration policy | 2024-08-09 18:06:37 -07:00 |
| Ishaan Jaff | 09000d4b66 | docs add migration policy | 2024-08-09 18:03:37 -07:00 |
| Ishaan Jaff | fc9086759d | docs prometheus metrics | 2024-08-09 09:07:31 -07:00 |
| Ishaan Jaff | a1c3167853 | doc Grounding vertex ai | 2024-08-09 08:31:38 -07:00 |
| Ishaan Jaff | baaa444c8f | docs fix typo | 2024-08-09 08:17:36 -07:00 |
| Krrish Dholakia | dde477494f | docs(self_serve.md): add internal_user_budget_duration to docs | 2024-08-08 23:54:26 -07:00 |
| Ishaan Jaff | 369ddfb49e | docs vertex context caching | 2024-08-08 17:18:12 -07:00 |
| Ishaan Jaff | 84c05a57d6 | docs use (LLM Gateway) in some places | 2024-08-08 17:00:52 -07:00 |
| Ishaan Jaff | f179759672 | docs vertex ai | 2024-08-08 16:12:36 -07:00 |
| Ishaan Jaff | e671ae58e3 | Merge pull request #5119 from BerriAI/litellm_add_gemini_context_caching_litellm ([Feat-Proxy] Add Support for VertexAI context caching) | 2024-08-08 16:08:58 -07:00 |
| Ishaan Jaff | d78c38f8e7 | docs vertex | 2024-08-08 16:07:14 -07:00 |
| Ishaan Jaff | a3dd3a19fa | docs cachedContent endpoint | 2024-08-08 16:06:23 -07:00 |
| Ishaan Jaff | 8ad5a40283 | doc on using litellm proxy with vertex ai content caching | 2024-08-08 11:45:46 -07:00 |
| Krrish Dholakia | 2710bec02d | docs(scheduler.md): cleanup docs to use /chat/completion endpoint | 2024-08-07 21:49:06 -07:00 |
| Krish Dholakia | e1610d37b9 | Merge pull request #5099 from BerriAI/litellm_personal_user_budgets (fix(user_api_key_auth.py): respect team budgets over user budget, if key belongs to team) | 2024-08-07 20:00:16 -07:00 |
| Krish Dholakia | baf01b47d8 | Merge branch 'main' into litellm_personal_user_budgets | 2024-08-07 19:59:50 -07:00 |
| Krrish Dholakia | 7e1f296981 | docs(self_serve.md): cleanup docs on how to onboard new users + teams | 2024-08-07 19:58:36 -07:00 |
| Krrish Dholakia | 400653992c | feat(router.py): allow using .acompletion() for request prioritization (allows /chat/completion endpoint to work for request prioritization calls) | 2024-08-07 16:43:12 -07:00 |
| Ishaan Jaff | e585dfba92 | docs prom | 2024-08-07 16:03:11 -07:00 |
| Ishaan Jaff | 04b201efed | Merge pull request #5098 from BerriAI/litellm_provider_wildcard_routing ([Feat-Router + Proxy] Add provider wildcard routing) | 2024-08-07 14:51:42 -07:00 |
| Ishaan Jaff | a367f97eb2 | docs provider specific wildcard routing | 2024-08-07 14:49:45 -07:00 |
| Krish Dholakia | 3605e873a1 | Merge branch 'main' into litellm_add_pydantic_model_support | 2024-08-07 13:07:46 -07:00 |
| Ishaan Jaff | 1bf36cd7a4 | docs prom metrics | 2024-08-07 12:50:03 -07:00 |
| Ishaan Jaff | 61ccd5354b | docs prometheus | 2024-08-07 12:47:06 -07:00 |
| Ishaan Jaff | 958e0fdfab | show warning about prometheus moving to enterprise | 2024-08-07 12:46:26 -07:00 |
| Ishaan Jaff | 72aebe5e59 | docs link to enteprise pricing | 2024-08-07 12:10:47 -07:00 |
| Ishaan Jaff | 8d1f051d8c | docs prometheus | 2024-08-07 11:37:05 -07:00 |
| Krrish Dholakia | a0bb89a372 | docs(ui.md): add restrict email subdomains w/ sso | 2024-08-06 22:54:33 -07:00 |
| Krish Dholakia | c82fc0cac2 | Merge branch 'main' into litellm_support_lakera_config_thresholds | 2024-08-06 22:47:13 -07:00 |
| Ishaan Jaff | 0d76f49ea6 | docs run ui on custom server root path | 2024-08-06 21:27:47 -07:00 |
| Krrish Dholakia | 2dd27a4e12 | feat(utils.py): support validating json schema client-side if user opts in | 2024-08-06 19:35:33 -07:00 |
| Krrish Dholakia | 0c88cc4153 | docs(json_mode.md): add example of calling openai with pydantic model via litellm | 2024-08-06 18:27:06 -07:00 |
| Krrish Dholakia | cf44d1e069 | docs(sidebars.js): cleanup sidebar title | 2024-08-06 18:24:54 -07:00 |
| Krrish Dholakia | f3a0eb8eb9 | docs(json_mode.md): update json mode docs to show structured output responses (Relevant issue - https://github.com/BerriAI/litellm/issues/5074) | 2024-08-06 17:01:41 -07:00 |
| Krrish Dholakia | 0e222cf76b | feat(lakera_ai.py): support lakera custom thresholds + custom api base (Allows user to configure thresholds to trigger prompt injection rejections) | 2024-08-06 15:21:45 -07:00 |
| Krrish Dholakia | 2a95484a83 | docs(deploy.md): add iam-based auth to rds | 2024-08-06 14:29:39 -07:00 |
| Ishaan Jaff | 645d3ae09d | Merge pull request #5062 from BerriAI/litellm_forward_headers ([Fix-Proxy] allow forwarding headers from request) | 2024-08-06 12:34:25 -07:00 |
| Ishaan Jaff | 0cd2435aff | doc forward_headers | 2024-08-06 12:07:21 -07:00 |