| Author | Commit | Message | Date |
|--------|--------|---------|------|
| Krrish Dholakia | 83ed174059 | fix(`__init__.py`): fix models_by_provider to include cohere_chat models (Fixes https://github.com/BerriAI/litellm/issues/5201) | 2024-08-16 11:33:23 -07:00 |
| Ishaan Jaff | c5515513a9 | feat allow controlling logged tags on langfuse | 2024-08-13 12:24:01 -07:00 |
| Krrish Dholakia | f4c984878d | fix(`utils.py`): Break out of infinite streaming loop (Fixes https://github.com/BerriAI/litellm/issues/5158) | 2024-08-12 14:00:43 -07:00 |
| Ishaan Jaff | 8e90139377 | refactor prometheus to be a customLogger class | 2024-08-10 09:28:46 -07:00 |
| Ishaan Jaff | e82656d59a | init bedrock_tool_name_mappings | 2024-08-09 17:09:19 -07:00 |
| Krrish Dholakia | 36c37bcc8b | fix(`internal_user_endpoints.py`): expose new 'internal_user_budget_duration' flag (Relevant to https://github.com/BerriAI/litellm/issues/5106) | 2024-08-08 13:05:03 -07:00 |
| Krish Dholakia | 7d28b6ebc3 | Merge branch 'main' into litellm_personal_user_budgets | 2024-08-07 19:59:50 -07:00 |
| Krrish Dholakia | 182d63853b | fix: use more descriptive flag | 2024-08-07 18:59:46 -07:00 |
| Krrish Dholakia | 8b028d41aa | feat(`utils.py`): support validating json schema client-side if user opts in | 2024-08-06 19:35:33 -07:00 |
| Krrish Dholakia | adec69ef2f | fix(`__init__.py`): bump default allowed fails | 2024-08-05 16:50:26 -07:00 |
| Krrish Dholakia | b5e22bde06 | fix: bump default allowed_fails + reduce default db pool limit (Fixes issues with running proxy server in production) | 2024-08-05 15:07:46 -07:00 |
| Ishaan Jaff | c9856d91c7 | fix linting errors | 2024-08-05 08:54:04 -07:00 |
| Ishaan Jaff | 4604233408 | add ALL_LITELLM_RESPONSE_TYPES | 2024-08-05 08:41:13 -07:00 |
| Krrish Dholakia | acbc2917b8 | feat(`utils.py`): Add github as a provider (Closes https://github.com/BerriAI/litellm/issues/4922#issuecomment-2266564469) | 2024-08-03 09:11:22 -07:00 |
| Ishaan Jaff | e3cf86702a | init gcs using gcs_bucket | 2024-08-01 15:25:19 -07:00 |
| Ishaan Jaff | 866519b659 | use itellm.forward_traceparent_to_llm_provider | 2024-08-01 09:05:13 -07:00 |
| Ishaan Jaff | 77389ac577 | add create_fine_tuning | 2024-07-29 18:57:29 -07:00 |
| Krrish Dholakia | ce7257ec5e | feat(`vertex_ai_partner.py`): initial working commit for calling vertex ai mistral (Closes https://github.com/BerriAI/litellm/issues/4874) | 2024-07-27 12:54:14 -07:00 |
| Krrish Dholakia | bf23aac11d | feat(`utils.py`): support sync streaming for custom llm provider | 2024-07-25 16:47:32 -07:00 |
| Krrish Dholakia | 54e1ca29b7 | feat(`custom_llm.py`): initial working commit for writing your own custom LLM handler (Fixes https://github.com/BerriAI/litellm/issues/4675; also addresses https://github.com/BerriAI/litellm/discussions/4677) | 2024-07-25 15:33:05 -07:00 |
| Ishaan Jaff | 2dc6367dc9 | feat use UnsupportedParamsError as litellm error type | 2024-07-24 12:19:10 -07:00 |
| Krrish Dholakia | fe8b22ee9d | fix(`__init__.py`): update init | 2024-07-23 17:55:28 -07:00 |
| Krrish Dholakia | 23a3be184b | build(`model_prices_and_context_window.json`): add model pricing for vertex ai llama 3.1 api | 2024-07-23 17:36:07 -07:00 |
| Krish Dholakia | 7cf9620b12 | Merge branch 'main' into litellm_braintrust_integration | 2024-07-22 22:40:39 -07:00 |
| Krrish Dholakia | 8c005d8134 | feat(`redact_messages.py`): allow remove sensitive key information before passing to logging integration | 2024-07-22 20:58:02 -07:00 |
| Krrish Dholakia | d4c72f913c | feat(`braintrust_logging.py`): working braintrust logging for successful calls | 2024-07-22 17:04:55 -07:00 |
| Ishaan Jaff | c7c65696fd | set _known_custom_logger_compatible_callbacks in _init | 2024-07-22 15:38:46 -07:00 |
| Ishaan Jaff | 464a4edfb2 | otel - log to arize ai | 2024-07-22 13:40:42 -07:00 |
| Ishaan Jaff | d3ee7a947c | use langsmith as a custom callback class | 2024-07-17 15:35:13 -07:00 |
| Ishaan Jaff | 6d23b78a92 | fix remove index from tool calls cohere error | 2024-07-16 21:49:45 -07:00 |
| Krrish Dholakia | 8240cf8997 | test: test fixes | 2024-07-13 15:04:13 -07:00 |
| Krrish Dholakia | 6641683d66 | feat(`guardrails.py`): allow setting logging_only in guardrails_config for presidio pii masking integration | 2024-07-13 12:22:17 -07:00 |
| Ishaan Jaff | c43948545f | feat add safe_memory_mode | 2024-07-12 18:18:39 -07:00 |
| Krish Dholakia | 35a17b7d99 | Merge pull request #4669 from BerriAI/litellm_logging_only_masking (Flag for PII masking on Logging only) | 2024-07-11 22:03:37 -07:00 |
| Krrish Dholakia | abd682323c | feat(guardrails): Flag for PII Masking on Logging (Fixes https://github.com/BerriAI/litellm/issues/4580) | 2024-07-11 16:09:34 -07:00 |
| Krrish Dholakia | 948fd6fc33 | fix: fix linting errors | 2024-07-11 13:36:55 -07:00 |
| Krish Dholakia | f4d140efec | Merge pull request #4635 from BerriAI/litellm_anthropic_adapter (Anthropic `/v1/messages` endpoint support) | 2024-07-10 22:41:53 -07:00 |
| Ishaan Jaff | 84c172f9fc | add retrive file to litellm SDK | 2024-07-10 14:51:48 -07:00 |
| Krrish Dholakia | 01a335b4c3 | feat(`anthropic_adapter.py`): support for translating anthropic params to openai format | 2024-07-10 00:32:28 -07:00 |
| Yulong Liu | 10212b3da8 | Merge branch 'main' into empower-functions-v1 | 2024-07-08 17:01:15 -07:00 |
| Krrish Dholakia | d68ab2a8bc | fix(whisper---handle-openai/azure-vtt-response-format): Fixes https://github.com/BerriAI/litellm/issues/4595 | 2024-07-08 09:10:40 -07:00 |
| Krish Dholakia | 2fa9a1f3cf | Merge pull request #4461 from t968914/litellm-fix-vertexaibeta (fix: Include vertex_ai_beta in vertex_ai param mapping/Do not use google auth project_id) | 2024-07-04 15:27:20 -07:00 |
| Krrish Dholakia | b17fe3e0d2 | fix(`router.py`): bump azure default api version (Allows 'tool_choice' to be passed to azure) | 2024-07-03 12:00:00 -07:00 |
| Krish Dholakia | b2f2560e54 | Merge branch 'main' into litellm_support_dynamic_rpm_limiting | 2024-07-02 17:51:18 -07:00 |
| Tiger Yu | 58bd8a4afb | Merge branch 'main' into litellm-fix-vertexaibeta | 2024-07-02 09:49:44 -07:00 |
| Krrish Dholakia | 0bc08063e1 | fix(`dynamic_rate_limiter.py`): support setting priority + reserving tpm/rpm | 2024-07-01 23:08:54 -07:00 |
| Ishaan Jaff | ae7f39417d | feat - return response headers for async openai requests | 2024-07-01 17:01:42 -07:00 |
| Krish Dholakia | 90e97b9917 | Merge pull request #4478 from BerriAI/litellm_support_response_schema_param_vertex_ai_old (feat(`vertex_httpx.py`): support the 'response_schema' param for older vertex ai models) | 2024-06-29 20:17:39 -07:00 |
| Ishaan Jaff | 695fb5118c | fix bedrock claude test | 2024-06-29 18:46:06 -07:00 |
| Krrish Dholakia | 8ba78aae77 | fix(`utils.py`): support json schema validation | 2024-06-29 15:05:52 -07:00 |