Krrish Dholakia | 3c4c78a71f | 2024-08-05 11:18:59 -07:00
feat(caching.py): enable caching on provider-specific optional params
Closes https://github.com/BerriAI/litellm/issues/5049

Krrish Dholakia | cd94c3adc1 | 2024-08-05 09:58:44 -07:00
fix(types/router.py): remove model_info pydantic field
Fixes https://github.com/BerriAI/litellm/issues/5042

Krrish Dholakia | ac6c39c283 | 2024-08-03 20:16:19 -07:00
feat(anthropic_adapter.py): support streaming requests for /v1/messages endpoint
Fixes https://github.com/BerriAI/litellm/issues/5011

Krrish Dholakia | 5add6687cc | 2024-08-03 11:48:33 -07:00
fix(types/utils.py): fix linting errors

Krrish Dholakia | c982ec88d8 | 2024-08-03 09:46:49 -07:00
fix(bedrock.py): fix response format for bedrock image generation response
Fixes https://github.com/BerriAI/litellm/issues/5010
Ishaan Jaff | 4917aaefab | 2024-08-03 08:40:35 -07:00
fix vertex credentials

Ishaan Jaff | 9dffe23108 | 2024-08-03 08:29:11 -07:00
Merge pull request #5030 from BerriAI/litellm_add_vertex_ft_proxy
[Feat] Add support for Vertex AI Fine tuning on LiteLLM Proxy

Ishaan Jaff | f840a5f6b4 | 2024-08-03 08:22:55 -07:00
Merge pull request #5028 from BerriAI/litellm_create_ft_job_gemini
[Feat] Add support for Vertex AI fine tuning endpoints

Ishaan Jaff | 4fc27e87c5 | 2024-08-02 18:26:36 -07:00
add vertex ai ft on proxy

Ishaan Jaff | ac6224c2b1 | 2024-08-02 18:02:24 -07:00
translate response from vertex to openai

Krrish Dholakia | 5d96ff6694 | 2024-08-02 17:48:53 -07:00
fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure"
Fixes https://github.com/BerriAI/litellm/issues/4912
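The `azure/*` fix above concerns wildcard model names on router deployments. A minimal sketch of the idea, using hypothetical helper names (`resolve_provider`, `matches_deployment` are not litellm's real functions) and `fnmatch` for the wildcard match:

```python
from fnmatch import fnmatch
from typing import Optional

def resolve_provider(model: str, custom_llm_provider: Optional[str] = None) -> str:
    """Pick the provider for a request: an explicit custom_llm_provider wins,
    otherwise the "provider/model" prefix is used (hypothetical helper)."""
    if custom_llm_provider:
        return custom_llm_provider
    if "/" in model:
        return model.split("/", 1)[0]
    return "openai"  # assumed default when no prefix is given

def matches_deployment(request_model: str, deployment_model: str) -> bool:
    # A wildcard deployment like "azure/*" should match "azure/gpt-4o" etc.
    return fnmatch(request_model, deployment_model)
```

The point of the fix: `model="azure/*"` together with `custom_llm_provider="azure"` must resolve consistently instead of being treated as a literal model name.
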
Ishaan Jaff | d364d76bd0 | 2024-08-02 17:24:25 -07:00
add vertex FT spec

Ishaan Jaff | cff7050147 | 2024-08-02 16:58:17 -07:00
add vertex_credentials in router param

Krrish Dholakia | 0a30ba9674 | 2024-08-02 09:30:50 -07:00
fix(types/utils.py): support passing prompt cache usage stats in usage object
Passes deepseek prompt caching values through to end user
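The prompt-cache commit above passes provider cache stats through the usage object. A standalone sketch of that shape (the `Usage` dataclass here is illustrative, not litellm's actual type; the `prompt_cache_hit_tokens`/`prompt_cache_miss_tokens` field names follow deepseek's response format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Usage:
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int
    # Provider-specific prompt-cache stats (e.g. deepseek); None when absent.
    prompt_cache_hit_tokens: Optional[int] = None
    prompt_cache_miss_tokens: Optional[int] = None

def usage_from_raw(raw: dict) -> Usage:
    """Map a raw provider usage dict onto the Usage object, passing
    cache stats through untouched when the provider supplies them."""
    return Usage(
        prompt_tokens=raw.get("prompt_tokens", 0),
        completion_tokens=raw.get("completion_tokens", 0),
        total_tokens=raw.get("total_tokens", 0),
        prompt_cache_hit_tokens=raw.get("prompt_cache_hit_tokens"),
        prompt_cache_miss_tokens=raw.get("prompt_cache_miss_tokens"),
    )
```

Optional fields with a `None` default keep the object compatible with providers that report no cache stats at all.
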
Ishaan Jaff | d833c69acb | 2024-07-31 16:49:59 -07:00
Merge pull request #4987 from BerriAI/litellm_add_ft_endpoints
[Feat-Proxy] Add List fine-tuning jobs

Ishaan Jaff | e4c73036fc | 2024-07-31 13:32:18 -07:00
validation for passing config file

Ishaan Jaff | ef5aeb17a1 | 2024-07-31 12:41:39 -07:00
fix pydantic obj for FT endpoints

Krrish Dholakia | 46634af06f | 2024-07-30 18:15:00 -07:00
fix(utils.py): fix model registration to model cost map
Fixes https://github.com/BerriAI/litellm/issues/4972
Ishaan Jaff | f18827cbc0 | 2024-07-29 20:10:03 -07:00
fix type errors

Ishaan Jaff | 6abc49c611 | 2024-07-29 20:01:12 -07:00
fix linting

Ishaan Jaff | 5123bf4e75 | 2024-07-29 18:59:44 -07:00
add types for FineTuningJobCreate OpenAI

Ishaan Jaff | e031359e67 | 2024-07-29 15:09:56 -07:00
Merge pull request #4946 from BerriAI/litellm_Add_bedrock_guardrail_config
[Feat] Bedrock add support for Bedrock Guardrails

Ishaan Jaff | 3eaa1fa217 | 2024-07-29 13:14:53 -07:00
types add GuardrailConfigBlock

Krrish Dholakia | ae4bcd8a41 | 2024-07-29 13:04:41 -07:00
fix(utils.py): fix trim_messages to handle tool calling
Fixes https://github.com/BerriAI/litellm/issues/4931
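The trim_messages fix above is about not orphaning tool results when trimming a conversation to a token budget: an assistant message carrying `tool_calls` and its following `tool` responses must be kept or dropped as one unit. A naive sketch of that constraint (illustrative only; litellm's real `trim_messages` uses proper token counting and differs in detail):

```python
def trim_messages_sketch(messages, max_tokens,
                         count=lambda m: len(str(m)) // 4):
    """Drop the oldest messages until the conversation fits max_tokens,
    keeping the system prompt and keeping tool_calls/tool pairs together."""
    # Group an assistant tool_calls message with its following tool results.
    groups, i = [], 0
    while i < len(messages):
        group = [messages[i]]
        if messages[i].get("tool_calls"):
            while i + 1 < len(messages) and messages[i + 1].get("role") == "tool":
                i += 1
                group.append(messages[i])
        groups.append(group)
        i += 1

    def total():
        return sum(count(m) for g in groups for m in g)

    # Drop whole groups from the front (never the system prompt) until we fit.
    idx = 0
    while total() > max_tokens and idx < len(groups):
        if groups[idx] and groups[idx][0].get("role") != "system":
            groups[idx] = []
        idx += 1
    return [m for g in groups for m in g]
```

If the assistant's `tool_calls` message were trimmed while its `tool` result survived (or vice versa), most chat APIs would reject the request, which is the failure mode the linked issue describes.
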
Ishaan Jaff | 32eb3bd719 | 2024-07-27 11:36:51 -07:00
add new BATCH_WRITE_TO_DB type for service logger

Krrish Dholakia | b25d4a8cb3 | 2024-07-26 21:51:54 -07:00
feat(ollama_chat.py): support ollama tool calling
Closes https://github.com/BerriAI/litellm/issues/4812
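For the ollama tool-calling commit: one common adapter step is detecting when a model replied with a JSON function call as plain text and converting it to OpenAI-style `tool_calls`. A hedged sketch of that conversion (the exact response format ollama emits, and litellm's actual handling in `ollama_chat.py`, may differ):

```python
import json
import uuid

def ollama_text_to_tool_calls(text: str):
    """If the reply parses as {"name": ..., "arguments": {...}},
    convert it into an OpenAI-style tool_calls list; else return None."""
    try:
        payload = json.loads(text)
    except json.JSONDecodeError:
        return None
    if not isinstance(payload, dict) or "name" not in payload:
        return None
    return [{
        "id": f"call_{uuid.uuid4().hex[:8]}",  # synthetic id, since ollama provides none
        "type": "function",
        "function": {
            "name": payload["name"],
            # OpenAI clients expect arguments as a JSON *string*.
            "arguments": json.dumps(payload.get("arguments", {})),
        },
    }]
```

Returning `None` for non-JSON text lets the caller fall back to treating the reply as ordinary assistant content.
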
Ishaan Jaff | 079a41fbe1 | 2024-07-25 20:15:37 -07:00
Merge branch 'main' into litellm_proxy_support_all_providers

Ishaan Jaff | e67daf79be | 2024-07-25 18:22:35 -07:00
router support setting pass_through_all_models

Krrish Dholakia | 6bf1b9353b | 2024-07-25 15:33:05 -07:00
feat(custom_llm.py): initial working commit for writing your own custom LLM handler
Fixes https://github.com/BerriAI/litellm/issues/4675
Also addresses https://github.com/BerriAI/litellm/discussions/4677
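The custom LLM handler commit introduces the pattern of subclassing a base class and registering it under a provider name, so `"my-provider/my-model"` routes to user code. A standalone sketch of that registry pattern (names mirror litellm's docs, but this is not litellm's actual interface; consult `custom_llm.py` for the real one):

```python
class CustomLLM:
    """Base class a user subclasses to plug in their own provider."""
    def completion(self, model: str, messages: list) -> dict:
        raise NotImplementedError

custom_provider_map: dict = {}

def register_custom_provider(name: str, handler: CustomLLM) -> None:
    custom_provider_map[name] = handler

def completion(model: str, messages: list) -> dict:
    # "my-provider/my-model" routes to the registered custom handler.
    provider, _, short_model = model.partition("/")
    if provider in custom_provider_map:
        return custom_provider_map[provider].completion(short_model, messages)
    raise ValueError(f"unknown provider: {provider}")

# Usage: a toy provider that echoes the last user message in upper case.
class EchoLLM(CustomLLM):
    def completion(self, model, messages):
        return {"model": model, "content": messages[-1]["content"].upper()}

register_custom_provider("echo", EchoLLM())
```

The design choice worth noting: keeping the handler lookup in front of the normal provider dispatch means custom providers get the same call signature as built-in ones.
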
Ishaan Jaff | a71b60d005 | 2024-07-23 15:31:30 -07:00
Pass litellm proxy specific metadata

Krrish Dholakia | 1a83935aa4 | 2024-07-22 21:31:39 -07:00
fix(proxy/utils.py): add stronger typing for litellm params in failure call logging

Ishaan Jaff | 5e4d291244 | 2024-07-20 17:31:16 -07:00
rename to _response_headers

Ishaan Jaff | ca8012090c | 2024-07-20 14:58:14 -07:00
return response_headers in response

Ishaan Jaff | f3ac6493e8 | 2024-07-19 16:18:53 -07:00
fix typing errors

Ishaan Jaff | 4d0fbfea83 | 2024-07-18 19:22:09 -07:00
router - refactor to tag based routing
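The tag-based routing refactor above (which also generalizes the free/paid tier split in the next commit) selects a deployment whose tags cover the request's tags. A minimal sketch of the selection rule, with a hypothetical `pick_deployment` helper (not the router's real code):

```python
import random

def pick_deployment(deployments: list, request_tags: list) -> dict:
    """Prefer deployments whose tags cover all the request's tags;
    fall back to untagged deployments; error if nothing matches."""
    tagged = [d for d in deployments
              if set(request_tags) <= set(d.get("tags", []))]
    pool = tagged or [d for d in deployments if not d.get("tags")]
    if not pool:
        raise ValueError(f"no deployment matches tags {request_tags}")
    return random.choice(pool)
```

Modeling "free" and "paid" as ordinary tags is what lets one mechanism serve both the tier split and arbitrary team/feature routing.
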
Ishaan Jaff | 007b959460 | 2024-07-18 16:55:50 -07:00
litellm router - use free / paid tier

Ishaan Jaff | 75ca53fab5 | 2024-07-18 13:32:48 -07:00
fix linting errors on main

Krish Dholakia | 57f6923ab6 | 2024-07-17 22:24:35 -07:00
Merge pull request #4729 from vingiarrusso/vgiarrusso/guardrails
Add enabled_roles to Guardrails configuration, Update Lakera guardrail moderation hook

Krish Dholakia | 0fb88e527c | 2024-07-16 07:21:31 -07:00
Merge pull request #4716 from pamelafox/countfuncs
Add token counting for OpenAI tools/tool_choice

Vinnie Giarrusso | 6ff863ee00 | 2024-07-16 01:52:08 -07:00
Add enabled_roles to Guardrails configuration, Update Lakera guardrail moderation hook
Ishaan Jaff | 254ac37f65 | 2024-07-15 20:42:24 -07:00
Merge pull request #4724 from BerriAI/litellm_Set_max_file_size_transc
[Feat] - set max file size on /audio/transcriptions

Ishaan Jaff | 0bd747ef7e | 2024-07-15 19:58:41 -07:00
max_file_size_mb in float

Krrish Dholakia | 023f10cf1c | 2024-07-15 19:43:37 -07:00
fix(vertex_httpx.py): return grounding metadata

Ishaan Jaff | 865469e43f | 2024-07-15 19:25:24 -07:00
allow setting max_file_size_mb
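The `max_file_size_mb` commits above add a size guard on `/audio/transcriptions` uploads, stored as a float so fractional limits work. A sketch of that guard (hypothetical helper, not the proxy's exact code):

```python
from typing import Optional

def check_file_size(file_bytes: bytes, max_file_size_mb: Optional[float]) -> None:
    """Reject an upload larger than the configured limit; a limit of
    None means unlimited. Raises ValueError when the file is too big."""
    if max_file_size_mb is None:
        return
    size_mb = len(file_bytes) / (1024 * 1024)
    if size_mb > max_file_size_mb:
        raise ValueError(
            f"File size {size_mb:.2f} MB exceeds limit of {max_file_size_mb} MB"
        )
```

Checking before the provider call fails fast instead of paying for an upload the upstream API would reject anyway.
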
Pamela Fox | d43dbc756b | 2024-07-15 11:07:52 -07:00
Count tokens for tools
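The token-counting commit above (from PR #4716) accounts for the tokens an OpenAI `tools` array adds to the prompt. A rough sketch of the idea: serialize each function definition and add a per-function overhead. The overhead constant and the fallback tokenizer here are illustrative assumptions; the real counting follows OpenAI/tiktoken conventions:

```python
import json

def count_tool_tokens(tools: list,
                      count_text=lambda s: max(1, len(s) // 4)) -> int:
    """Approximate token cost of an OpenAI `tools` array."""
    PER_FUNCTION_OVERHEAD = 7  # assumed constant, not OpenAI's exact value
    total = 0
    for tool in tools:
        fn = tool.get("function", {})
        total += PER_FUNCTION_OVERHEAD
        total += count_text(fn.get("name", ""))
        total += count_text(fn.get("description", ""))
        total += count_text(json.dumps(fn.get("parameters", {})))
    return total
```

In practice `count_text` would be a tiktoken encoder for the target model; the `len(s) // 4` fallback is only a characters-per-token heuristic.
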
Krrish Dholakia | 6b78e39600 | 2024-07-13 12:22:17 -07:00
feat(guardrails.py): allow setting logging_only in guardrails_config for presidio pii masking integration

Krrish Dholakia | f2522867ed | 2024-07-13 11:44:37 -07:00
fix(types/guardrails.py): add 'logging_only' param support

Krish Dholakia | f0b8c0e7fb | 2024-07-11 22:02:13 -07:00
Merge pull request #4588 from Manouchehri/vertex-seed-2973
feat(vertex_httpx.py): Add seed parameter

Krrish Dholakia | b2e46086dd | 2024-07-11 21:01:12 -07:00
fix(utils.py): fix recreating model response object when stream usage is true

Ishaan Jaff | 8bf50ac5db | 2024-07-11 15:03:37 -07:00
Merge pull request #4661 from BerriAI/litellm_fix_mh
[Fix] Model Hub - Show supports vision correctly