| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Krrish Dholakia | b8e4ef0abf | docs(json_mode.md): add azure openai models to doc | 2024-08-19 07:19:23 -07:00 |
| Ishaan Jaff | 0bc67761dc | docs access groups | 2024-08-17 17:38:28 -07:00 |
| Ishaan Jaff | 3cba235109 | docs virtual key access groups | 2024-08-17 17:37:23 -07:00 |
| Ishaan Jaff | d9c91838ce | docs cleanup | 2024-08-17 15:59:23 -07:00 |
| Ishaan Jaff | 78d30990a3 | docs clean up virtual key access | 2024-08-17 15:39:50 -07:00 |
| Ishaan Jaff | 671663abe6 | docs rate limits per model per api key | 2024-08-17 14:50:17 -07:00 |
| Krish Dholakia | ff6ff133ee | Merge pull request #5260 from BerriAI/google_ai_studio_pass_through (Pass-through endpoints for Gemini - Google AI Studio) | 2024-08-17 13:51:51 -07:00 |
| Krrish Dholakia | 0df41653f3 | docs(google_ai_studio.md): add docs on google ai studio pass through endpoints | 2024-08-17 13:47:05 -07:00 |
| Ishaan Jaff | b35b09ea93 | docs clean up emojis | 2024-08-17 13:30:11 -07:00 |
| Ishaan Jaff | 9b0bd54571 | docs cleanup - reduce emojis | 2024-08-17 13:28:34 -07:00 |
| Krrish Dholakia | 08411f37b4 | docs(vertex_ai.md): cleanup docs | 2024-08-17 08:38:01 -07:00 |
| Krish Dholakia | 6b1be4783a | Merge pull request #5251 from Manouchehri/oidc-improvements-20240816 ((oidc): Add support for loading tokens via a file, env var, and path in env var) | 2024-08-16 19:15:31 -07:00 |
| Krrish Dholakia | d991e1320c | docs(langfuse_integration.md): add disable logging for specific calls to docs | 2024-08-16 17:36:13 -07:00 |
| Ishaan Jaff | 84925264d8 | docs oauth2 | 2024-08-16 14:03:56 -07:00 |
| Ishaan Jaff | 3d06b51e4e | docs correct link to oauth 2.0 | 2024-08-16 14:02:58 -07:00 |
| Ishaan Jaff | ac833f415d | docs oauth 2.0 enterprise feature | 2024-08-16 14:00:24 -07:00 |
| Ishaan Jaff | 0d41e2972b | docs on oauth 2.0 | 2024-08-16 13:55:28 -07:00 |
| David Manouchehri | bef8568cb3 | (oidc): Improve docs for unofficial provider. | 2024-08-16 20:30:41 +00:00 |
| Ishaan Jaff | bd7bf7f6b0 | fix endpoint name on router | 2024-08-16 12:46:43 -07:00 |
| Ishaan Jaff | dcd8ff44df | docs add example on setting temp=0 for sagemaker | 2024-08-16 12:04:35 -07:00 |
| Ishaan Jaff | 63b4e11bfd | docs sagemaker - add example using with proxy | 2024-08-16 11:47:13 -07:00 |
| Ishaan Jaff | 7f39f9f97d | docs cleanup | 2024-08-16 11:38:53 -07:00 |
| Krrish Dholakia | eb6a0a32f1 | docs(bedrock.md): add guardrails on config.yaml to docs | 2024-08-14 22:11:19 -07:00 |
| Krrish Dholakia | c7fd626805 | docs(team_logging.md): add key-based logging to docs | 2024-08-14 21:49:55 -07:00 |
| Krrish Dholakia | 3487d84fcc | docs(pass_through.md): add doc on using langfuse client sdk w/ litellm proxy | 2024-08-14 21:43:31 -07:00 |
| Ishaan Jaff | 1f631606a5 | Merge pull request #5210 from BerriAI/litellm_add_prompt_caching_support ([Feat] Add Anthropic API Prompt Caching Support) | 2024-08-14 17:43:01 -07:00 |
| Ishaan Jaff | 912acb1cae | docs using proxy with context caching anthropic | 2024-08-14 17:42:48 -07:00 |
| Ishaan Jaff | 2267b8a59f | docs add examples with litellm proxy | 2024-08-14 17:13:26 -07:00 |
| Ishaan Jaff | fd122aa7a3 | docs add examples doing context caching anthropic sdk | 2024-08-14 17:07:51 -07:00 |
| Ishaan Jaff | e0ff4823d0 | add test for caching tool calls | 2024-08-14 16:19:14 -07:00 |
| Ishaan Jaff | 45e367d4d4 | docs Caching - Continuing Multi-Turn Convo | 2024-08-14 15:26:25 -07:00 |
| Ishaan Jaff | 69a640e9c4 | test anthropic prompt caching | 2024-08-14 14:59:46 -07:00 |
| Krrish Dholakia | 179dd7b893 | docs(model_management.md): add section on adding additional model information to proxy config | 2024-08-14 14:39:48 -07:00 |
| Ishaan Jaff | acadabe6c9 | use litellm_ prefix for new deployment metrics | 2024-08-14 09:08:14 -07:00 |
| Krrish Dholakia | 4cef6df4cf | docs(sidebar.js): cleanup docs | 2024-08-14 09:04:52 -07:00 |
| Zbigniew Łukasiak | 963c921c5a | Mismatch in example fixed | 2024-08-14 15:07:10 +02:00 |
| Ishaan Jaff | 4d2cedfdb6 | Merge pull request #5191 from BerriAI/litellm_load_config_from_s3 ([Feat] Allow loading LiteLLM config from s3 buckets) | 2024-08-13 21:19:16 -07:00 |
| Ishaan Jaff | 6f7b204294 | docs - set litellm config as s3 object | 2024-08-13 20:26:29 -07:00 |
| Keith Stevens | 17c6a4e532 | Improving the proxy docs for configuring with vllm | 2024-08-13 16:07:41 -07:00 |
| Ishaan Jaff | b24da18d2d | Merge pull request #5180 from BerriAI/litellm_allow_controlling_logged_tags_langfuse ([Feat-Proxy+langfuse] LiteLLM-specific Tags on Langfuse - `cache_hit`, `cache_key`) | 2024-08-13 13:50:01 -07:00 |
| Krrish Dholakia | 7e99cfe938 | docs(user_keys.md): cleanup instructor docs | 2024-08-13 13:15:46 -07:00 |
| Krrish Dholakia | ac8c0ecd85 | docs(user_keys.md): cleanup docs | 2024-08-13 13:14:27 -07:00 |
| Ishaan Jaff | 81680d6b1a | docs control langfuse specific tags | 2024-08-13 12:48:42 -07:00 |
| Krrish Dholakia | 718c2cfa4e | docs(team_logging.md): cleanup docs | 2024-08-12 19:53:05 -07:00 |
| Krrish Dholakia | fdd9a07051 | fix(utils.py): Break out of infinite streaming loop (Fixes https://github.com/BerriAI/litellm/issues/5158) | 2024-08-12 14:00:43 -07:00 |
| Ishaan Jaff | dc8f9e7241 | docs mark oidc as beta | 2024-08-12 09:01:36 -07:00 |
| Krrish Dholakia | 8cbf8d5671 | docs(perplexity.md): show how to get 'return_citations' | 2024-08-12 09:01:14 -07:00 |
| Ishaan Jaff | e46009f3d2 | Merge pull request #5154 from BerriAI/litellm_send_prometheus_fallbacks_from_slack ([Feat-Proxy] send prometheus fallbacks stats to slack) | 2024-08-10 17:14:01 -07:00 |
| Ishaan Jaff | cc3316104f | doc new prometheus metrics | 2024-08-10 17:13:36 -07:00 |
| Ishaan Jaff | ffb7f9f280 | add fallback_reports as slack alert | 2024-08-10 15:26:32 -07:00 |