Ishaan Jaff | 319690ab5e | feat - guardrails v2 | 2024-08-19 18:24:20 -07:00
Krrish Dholakia | a9025280bd | feat(langfuse_endpoints.py): support team based logging for langfuse pass-through endpoints | 2024-08-19 17:58:39 -07:00
Krrish Dholakia | 4b15f5bc83 | feat(langfuse_endpoints.py): support langfuse pass through endpoints by default | 2024-08-19 17:28:34 -07:00
Krrish Dholakia | 727035913b | fix(proxy_cli.py): support database_host, database_username, database_password, database_name | 2024-08-19 16:17:45 -07:00
Ishaan Jaff | 181be05cb5 | doc aporia_w_litellm | 2024-08-19 14:36:55 -07:00
Krrish Dholakia | d08479b52c | feat(azure.py): support dynamic api versions (Closes https://github.com/BerriAI/litellm/issues/5228) | 2024-08-19 12:17:43 -07:00
Ishaan Jaff | 249df0a78e | run during_call_hook | 2024-08-19 12:07:46 -07:00
Ishaan Jaff | f16e0472c2 | feat - return applied guardrails in response headers | 2024-08-19 11:56:20 -07:00
Ishaan Jaff | b4bca8db82 | feat - allow accessing data post success call | 2024-08-19 11:35:33 -07:00
Ishaan Jaff | 6af497e383 | feat run aporia as post call success hook | 2024-08-19 11:25:31 -07:00
Krrish Dholakia | 5e8a2ced04 | fix(user_api_key_auth.py): log requester ip address to logs on request rejection (Closes https://github.com/BerriAI/litellm/issues/5220) | 2024-08-19 11:03:58 -07:00
Krrish Dholakia | 0d82089136 | test(test_caching.py): re-introduce testing for s3 cache w/ streaming (Closes https://github.com/BerriAI/litellm/issues/3268) | 2024-08-19 10:56:48 -07:00
Krrish Dholakia | c0b7f56fc2 | fix(ollama_chat.py): fix sync tool calling (Fixes https://github.com/BerriAI/litellm/issues/5245) | 2024-08-19 08:31:46 -07:00
Ishaan Jaff | 94e74b9ede | only write model tpm/rpm tracking when user sets it | 2024-08-18 09:58:09 -07:00
Krish Dholakia | db30aa6382 | Merge pull request #5264 from BerriAI/litellm_bedrock_pass_through; feat: Bedrock pass-through endpoint support (All endpoints) | 2024-08-18 09:55:22 -07:00
Krrish Dholakia | c5d1899940 | feat(Support-pass-through-for-bedrock-endpoints): Allows pass-through support for bedrock endpoints | 2024-08-17 17:57:43 -07:00
Ishaan Jaff | 888afa2d08 | Merge pull request #5263 from BerriAI/litellm_support_access_groups; [Feat-Proxy] Use model access groups for teams | 2024-08-17 17:11:11 -07:00
Krrish Dholakia | 1856ac585d | feat(pass_through_endpoints.py): add pass-through support for all cohere endpoints | 2024-08-17 16:57:55 -07:00
Ishaan Jaff | 7171efc729 | use model access groups for teams | 2024-08-17 16:45:53 -07:00
Ishaan Jaff | ec671b491d | fix proxy all models test | 2024-08-17 15:54:51 -07:00
Ishaan Jaff | a2178c026b | update tpm / rpm limit per model | 2024-08-17 15:26:12 -07:00
Krrish Dholakia | 5dc52aedc9 | style(vertex_httpx.py): make vertex error string more helpful | 2024-08-17 15:09:55 -07:00
Ishaan Jaff | 7854652696 | Merge pull request #5261 from BerriAI/litellm_set_model_rpm_tpm_limit; [Feat-Proxy] set rpm/tpm limits per api key per model | 2024-08-17 14:30:54 -07:00
Krish Dholakia | 5e6700f985 | Merge pull request #5260 from BerriAI/google_ai_studio_pass_through; Pass-through endpoints for Gemini - Google AI Studio | 2024-08-17 13:51:51 -07:00
Ishaan Jaff | 2c5f5996f3 | add tpm limits per api key per model | 2024-08-17 13:20:55 -07:00
Krrish Dholakia | b2ffa564d1 | feat(pass_through_endpoints.py): support streaming requests | 2024-08-17 12:46:57 -07:00
Ishaan Jaff | 8578301116 | fix async_pre_call_hook in parallel request limiter | 2024-08-17 12:42:28 -07:00
Ishaan Jaff | db8f789318 | Merge pull request #5259 from BerriAI/litellm_return_remaining_tokens_in_header; [Feat] return `x-litellm-key-remaining-requests-{model}`: 1, `x-litellm-key-remaining-tokens-{model}: None` in response headers | 2024-08-17 12:41:16 -07:00
Ishaan Jaff | 9f6630912d | feat return remaining tokens for model for api key | 2024-08-17 12:35:10 -07:00
Krrish Dholakia | 29bedae79f | feat(google_ai_studio_endpoints.py): support pass-through endpoint for all google ai studio requests | 2024-08-17 10:46:59 -07:00
Ishaan Jaff | a62277a6aa | feat - use common helper for getting model group | 2024-08-17 10:46:04 -07:00
Ishaan Jaff | 03196742d2 | add litellm-key-remaining-tokens on prometheus | 2024-08-17 10:02:20 -07:00
Ishaan Jaff | 8ae626b31f | feat add settings for rpm/tpm limits for a model | 2024-08-17 09:16:01 -07:00
Krrish Dholakia | 668ea6cbc7 | fix(pass_through_endpoints.py): fix returned response headers for pass-through endpoints | 2024-08-17 09:00:00 -07:00
Krrish Dholakia | 3b9eb7ca1e | docs(vertex_ai.md): cleanup docs | 2024-08-17 08:38:01 -07:00
Krish Dholakia | 88fccb2427 | Merge branch 'main' into litellm_log_model_price_information | 2024-08-16 19:34:16 -07:00
Krish Dholakia | 0916197c9d | Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry; refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 19:16:20 -07:00
Ishaan Jaff | 824ea32452 | track rpm/tpm usage per key+model | 2024-08-16 18:28:58 -07:00
Ishaan Jaff | dbc9c9e8d8 | user api key auth rpm_limit_per_model | 2024-08-16 18:22:35 -07:00
Krrish Dholakia | 9efe9982f5 | fix(health_check.py): return 'missing mode' error message if health check errors and mode is missing | 2024-08-16 17:24:29 -07:00
Krrish Dholakia | ef51f8600d | feat(litellm_logging.py): support logging model price information to s3 logs | 2024-08-16 16:21:34 -07:00
Ishaan Jaff | 55df861291 | docs oauth 2.0 enterprise feature | 2024-08-16 14:00:24 -07:00
Ishaan Jaff | 9a9710b8a1 | add debugging for oauth2.0 | 2024-08-16 13:40:32 -07:00
Ishaan Jaff | 8745e1608a | allow using oauth2 checks for logging into proxy | 2024-08-16 13:36:29 -07:00
Ishaan Jaff | d2be2d6e23 | add init commit for oauth 2 checks | 2024-08-16 13:30:22 -07:00
Ishaan Jaff | f2569740fa | ui new build | 2024-08-16 12:53:23 -07:00
Krrish Dholakia | 83ed174059 | fix(__init__.py): fix models_by_provider to include cohere_chat models (Fixes https://github.com/BerriAI/litellm/issues/5201) | 2024-08-16 11:33:23 -07:00
Krrish Dholakia | 2874b94fb1 | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00
Krrish Dholakia | 62365835f3 | bump: version 1.43.15 → 1.43.16 | 2024-08-15 23:04:30 -07:00
Krish Dholakia | ca07898fbb | Merge pull request #5235 from BerriAI/litellm_fix_s3_logs; fix(s3.py): fix s3 logging payload to have valid json values | 2024-08-15 23:00:18 -07:00