Ishaan Jaff | c9d9c6444e | fix aporia typo | 2024-08-19 21:03:37 -07:00
Ishaan Jaff | eb9da06033 | feat - guardrails v2 | 2024-08-19 21:03:37 -07:00
Ishaan Jaff | 49a27320dc | doc aporia_w_litellm | 2024-08-19 21:03:37 -07:00
Ishaan Jaff | 507fab06f8 | run during_call_hook | 2024-08-19 21:03:37 -07:00
Ishaan Jaff | b0f546d551 | feat - return applied guardrails in response headers | 2024-08-19 21:03:37 -07:00
Ishaan Jaff | edf0507399 | feat - allow accessing data post success call | 2024-08-19 21:03:37 -07:00
Ishaan Jaff | 1998809f18 | feat run aporia as post call success hook | 2024-08-19 21:03:37 -07:00
Krrish Dholakia | 2aba1f17cc | feat(langfuse_endpoints.py): support team based logging for langfuse pass-through endpoints | 2024-08-19 21:03:37 -07:00
Krrish Dholakia | c1a3d86fe4 | feat(langfuse_endpoints.py): support langfuse pass-through endpoints by default | 2024-08-19 21:03:37 -07:00
Krrish Dholakia | 5bde7b0c70 | fix(proxy_cli.py): support database_host, database_username, database_password, database_name | 2024-08-19 21:03:37 -07:00
Krrish Dholakia | 49416e121c | feat(azure.py): support dynamic api versions (Closes https://github.com/BerriAI/litellm/issues/5228) | 2024-08-19 12:17:43 -07:00
Krrish Dholakia | 417547b6f9 | fix(user_api_key_auth.py): log requester ip address to logs on request rejection (Closes https://github.com/BerriAI/litellm/issues/5220) | 2024-08-19 11:03:58 -07:00
Krrish Dholakia | 3cafebbc65 | test(test_caching.py): re-introduce testing for s3 cache w/ streaming (Closes https://github.com/BerriAI/litellm/issues/3268) | 2024-08-19 10:56:48 -07:00
Krrish Dholakia | cc42f96d6a | fix(ollama_chat.py): fix sync tool calling (Fixes https://github.com/BerriAI/litellm/issues/5245) | 2024-08-19 08:31:46 -07:00
Ishaan Jaff | 398295116f | only write model tpm/rpm tracking when user set it | 2024-08-18 09:58:09 -07:00
Krish Dholakia | f42ac2c9d8 | Merge pull request #5264 from BerriAI/litellm_bedrock_pass_through; feat: Bedrock pass-through endpoint support (All endpoints) | 2024-08-18 09:55:22 -07:00
Krrish Dholakia | 663a0c1b83 | feat(Support-pass-through-for-bedrock-endpoints): Allows pass-through support for bedrock endpoints | 2024-08-17 17:57:43 -07:00
Ishaan Jaff | 83515e88ce | Merge pull request #5263 from BerriAI/litellm_support_access_groups; [Feat-Proxy] Use model access groups for teams | 2024-08-17 17:11:11 -07:00
Krrish Dholakia | f7a2e04426 | feat(pass_through_endpoints.py): add pass-through support for all cohere endpoints | 2024-08-17 16:57:55 -07:00
Ishaan Jaff | 08db691dec | use model access groups for teams | 2024-08-17 16:45:53 -07:00
Ishaan Jaff | eff874bf05 | fix proxy all models test | 2024-08-17 15:54:51 -07:00
Ishaan Jaff | b83fa87880 | update tpm / rpm limit per model | 2024-08-17 15:26:12 -07:00
Krrish Dholakia | db54b66457 | style(vertex_httpx.py): make vertex error string more helpful | 2024-08-17 15:09:55 -07:00
Ishaan Jaff | a60fc3ad70 | Merge pull request #5261 from BerriAI/litellm_set_model_rpm_tpm_limit; [Feat-Proxy] set rpm/tpm limits per api key per model | 2024-08-17 14:30:54 -07:00
Krish Dholakia | ff6ff133ee | Merge pull request #5260 from BerriAI/google_ai_studio_pass_through; Pass-through endpoints for Gemini - Google AI Studio | 2024-08-17 13:51:51 -07:00
Ishaan Jaff | 68b54bed85 | add tpm limits per api key per model | 2024-08-17 13:20:55 -07:00
Krrish Dholakia | fd44cf8d26 | feat(pass_through_endpoints.py): support streaming requests | 2024-08-17 12:46:57 -07:00
Ishaan Jaff | fa96610bbc | fix async_pre_call_hook in parallel request limiter | 2024-08-17 12:42:28 -07:00
Ishaan Jaff | feb8c3c5b4 | Merge pull request #5259 from BerriAI/litellm_return_remaining_tokens_in_header; [Feat] return `x-litellm-key-remaining-requests-{model}`: 1, `x-litellm-key-remaining-tokens-{model}: None` in response headers | 2024-08-17 12:41:16 -07:00
Ishaan Jaff | ee0f772b5c | feat return remaining tokens for model for api key | 2024-08-17 12:35:10 -07:00
Krrish Dholakia | bc0023a409 | feat(google_ai_studio_endpoints.py): support pass-through endpoint for all google ai studio requests | 2024-08-17 10:46:59 -07:00
Ishaan Jaff | 5985c7e933 | feat - use common helper for getting model group | 2024-08-17 10:46:04 -07:00
Ishaan Jaff | 412d30d362 | add litellm-key-remaining-tokens on prometheus | 2024-08-17 10:02:20 -07:00
Ishaan Jaff | 785482f023 | feat add settings for rpm/tpm limits for a model | 2024-08-17 09:16:01 -07:00
Krrish Dholakia | b56ecd7e02 | fix(pass_through_endpoints.py): fix returned response headers for pass-through endpoints | 2024-08-17 09:00:00 -07:00
Krrish Dholakia | 08411f37b4 | docs(vertex_ai.md): cleanup docs | 2024-08-17 08:38:01 -07:00
Krish Dholakia | f3e17cd692 | Merge branch 'main' into litellm_log_model_price_information | 2024-08-16 19:34:16 -07:00
Krish Dholakia | a8dd2b6910 | Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry; refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 19:16:20 -07:00
Ishaan Jaff | 1ee33478c9 | track rpm/tpm usage per key+model | 2024-08-16 18:28:58 -07:00
Ishaan Jaff | a6a4b944ad | user api key auth rpm_limit_per_model | 2024-08-16 18:22:35 -07:00
Krrish Dholakia | 7fce6b0163 | fix(health_check.py): return 'missing mode' error message, if error with health check, and mode is missing | 2024-08-16 17:24:29 -07:00
Krrish Dholakia | 178139f18d | feat(litellm_logging.py): support logging model price information to s3 logs | 2024-08-16 16:21:34 -07:00
Ishaan Jaff | ac833f415d | docs oauth 2.0 enterprise feature | 2024-08-16 14:00:24 -07:00
Ishaan Jaff | cd28b6607e | add debugging for oauth2.0 | 2024-08-16 13:40:32 -07:00
Ishaan Jaff | d4b33cf87c | allow using oauth2 checks for logging into proxy | 2024-08-16 13:36:29 -07:00
Ishaan Jaff | 0c0b835c3f | add init commit for oauth 2 checks | 2024-08-16 13:30:22 -07:00
Ishaan Jaff | 9c3124c5a7 | ui new build | 2024-08-16 12:53:23 -07:00
Krrish Dholakia | cbdaecb5a8 | fix(__init__.py): fix models_by_provider to include cohere_chat models (Fixes https://github.com/BerriAI/litellm/issues/5201) | 2024-08-16 11:33:23 -07:00
Krrish Dholakia | 61f4b71ef7 | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00
Krrish Dholakia | 1510daba4f | bump: version 1.43.15 → 1.43.16 | 2024-08-15 23:04:30 -07:00
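
Several commits in this log (ee0f772b5c, feb8c3c5b4) add per-model rate-limit response headers of the form `x-litellm-key-remaining-requests-{model}` and `x-litellm-key-remaining-tokens-{model}`. The header names come from the commit messages above; everything else here (the helper name, the sample values, the "None means no limit set" reading) is a hypothetical sketch of how a client might collect those headers, not LiteLLM's own code.

```python
# Hypothetical client-side helper: collect the per-model rate-limit headers
# named in commits ee0f772b5c / feb8c3c5b4 from a response-headers mapping.
# The header name prefixes come from the log above; the helper itself is an
# assumption for illustration.

def parse_remaining_limits(headers: dict) -> dict:
    """Map model name -> remaining 'requests'/'tokens' counts from headers."""
    limits: dict = {}
    for name, value in headers.items():
        name = name.lower()
        for kind, prefix in (
            ("requests", "x-litellm-key-remaining-requests-"),
            ("tokens", "x-litellm-key-remaining-tokens-"),
        ):
            if name.startswith(prefix):
                model = name[len(prefix):]
                # Per commit feb8c3c5b4, a value of "None" can be returned;
                # treat it as "no limit configured for this model".
                limits.setdefault(model, {})[kind] = (
                    None if value == "None" else int(value)
                )
    return limits


# Example with sample (made-up) header values:
sample = {
    "x-litellm-key-remaining-requests-gpt-4": "1",
    "x-litellm-key-remaining-tokens-gpt-4": "None",
    "content-type": "application/json",
}
print(parse_remaining_limits(sample))
# {'gpt-4': {'requests': 1, 'tokens': None}}
```

In practice the `headers` dict would come from the proxy's HTTP response (e.g. `response.headers` in an HTTP client), but the exact header casing and value formats should be confirmed against the running proxy.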