Krrish Dholakia | 04d69464e2 | fix(ollama.py): fix ollama embeddings - pass optional params (Fixes https://github.com/BerriAI/litellm/issues/5267) | 2024-08-19 08:45:26 -07:00
Krrish Dholakia | cc42f96d6a | fix(ollama_chat.py): fix sync tool calling (Fixes https://github.com/BerriAI/litellm/issues/5245) | 2024-08-19 08:31:46 -07:00
Krrish Dholakia | db54b66457 | style(vertex_httpx.py): make vertex error string more helpful | 2024-08-17 15:09:55 -07:00
Krish Dholakia | be37310e94 | Merge pull request #5232 from Penagwin/fix_anthropic_tool_streaming_index (Fixes the `tool_use` indexes not being correctly mapped) | 2024-08-17 14:33:50 -07:00
Krish Dholakia | a8dd2b6910 | Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry (refactor: replace .error() with .exception() logging for better debugging on sentry) | 2024-08-16 19:16:20 -07:00
Ishaan Jaff | 5d1fcc545b | fix predictions image generation response | 2024-08-16 15:06:12 -07:00
Krrish Dholakia | 61f4b71ef7 | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00
Ishaan Jaff | 89ba7b3e11 | pass trace through for bedrock guardrails | 2024-08-16 09:10:56 -07:00
Paul Lang | b0d13f42ed | Merge remote-tracking branch 'origin/main' into fix_anthropic_tool_streaming_index | 2024-08-16 09:26:04 -04:00
Ishaan Jaff | e1839c8da2 | allow index to not exist in sagemaker chunks | 2024-08-15 19:45:59 -07:00
Ishaan Jaff | b93152e978 | assume index is not always in stream chunk | 2024-08-15 19:42:03 -07:00
Ishaan Jaff | fa569aaf6f | feat add support for aws_region_name | 2024-08-15 19:32:59 -07:00
Ishaan Jaff | b4ba12e22c | show bedrock, sagemaker creds in verbose mode | 2024-08-15 19:10:11 -07:00
Ishaan Jaff | 0d374fb7c0 | fix sagemaker test | 2024-08-15 18:23:41 -07:00
Ishaan Jaff | df4ea8fba6 | refactor sagemaker to be async | 2024-08-15 18:18:02 -07:00
Ishaan Jaff | e217eda303 | use BaseAWSLLM for bedrock getcredentials | 2024-08-15 13:58:47 -07:00
Paul Lang | 97fce022e4 | Fixes the tool_use chunk mapping | 2024-08-15 15:44:09 -04:00
Ishaan Jaff | 1f631606a5 | Merge pull request #5210 from BerriAI/litellm_add_prompt_caching_support ([Feat] Add Anthropic API Prompt Caching Support) | 2024-08-14 17:43:01 -07:00
Ishaan Jaff | 78a2013e51 | add test for large context in system message for anthropic | 2024-08-14 17:03:10 -07:00
Krrish Dholakia | 6333b04be3 | fix(factory.py): handle assistant null content | 2024-08-14 15:44:38 -07:00
Ishaan Jaff | 54102a660d | pass cache_control in tool call | 2024-08-14 15:18:11 -07:00
Ishaan Jaff | 96f9655a02 | test test_anthropic_api_prompt_caching_basic | 2024-08-14 15:06:10 -07:00
Ishaan Jaff | b0651bd481 | add anthropic cache controls | 2024-08-14 14:56:49 -07:00
Krrish Dholakia | 9b46ec05b0 | fix(factory.py): support assistant messages as a list of dictionaries - cohere messages api (Fixes https://github.com/BerriAI/litellm/pull/5121) | 2024-08-14 14:19:59 -07:00
Krish Dholakia | eb5f0a8da2 | Merge pull request #5148 from fabceolin/main (Enable follow redirects in ollama_chat) | 2024-08-14 14:08:45 -07:00
Ishaan Jaff | 209c91ac03 | feat - anthropic api context caching v0 | 2024-08-14 13:08:03 -07:00
Krish Dholakia | a6bf763dd9 | Merge pull request #5116 from thiswillbeyourgithub/fix_ollama_argument_order (fix: wrong order of arguments for ollama) | 2024-08-14 08:36:14 -07:00
Krish Dholakia | c8f4cd373d | Merge pull request #5189 from ArtyomZemlyak/patch-1 (Fix not sended json_data_for_triton) | 2024-08-13 20:44:26 -07:00
Krrish Dholakia | c1279ed809 | fix(bedrock_httpx.py): fix error code for not found provider/model combo to be 404 | 2024-08-13 20:36:12 -07:00
Artem Zemliak | 5290490106 | Fix not sended json_data_for_triton | 2024-08-14 09:57:48 +07:00
Krrish Dholakia | 66d77f177f | fix(bedrock_httpx.py): raise bad request error if invalid bedrock model given | 2024-08-13 19:27:06 -07:00
Krrish Dholakia | 526b196f83 | fix(bedrock_httpx.py): handle empty stop string | 2024-08-13 07:30:30 -07:00
Krrish Dholakia | 6e8d2856b0 | fix(bedrock_httpx.py): handle bedrock empty system message | 2024-08-13 07:17:17 -07:00
Krrish Dholakia | 96d4466a8f | fix(azure.py): return response headers acompletion + acompletion w/ streaming (Fixes https://github.com/BerriAI/litellm/issues/2440#issuecomment-2283259187) | 2024-08-12 11:29:49 -07:00
Krrish Dholakia | dd2ea72cb4 | fix(openai.py): fix position of invalid_params param | 2024-08-10 09:52:27 -07:00
Fabrício Ceolin | 936b76662f | Follow redirects | 2024-08-10 12:12:55 -03:00
Ishaan Jaff | d445f3967d | correctly add modified tool names to cache | 2024-08-09 17:39:49 -07:00
Ishaan Jaff | 43ff70348d | init bedrock_tool_name_mappings | 2024-08-09 17:09:19 -07:00
Ishaan Jaff | 8b98193a51 | bedrock make_valid_bedrock_tool_name | 2024-08-09 13:16:38 -07:00
Krrish Dholakia | 0cf81eba62 | fix(huggingface_restapi.py): support passing 'wait_for_model' param on completion calls | 2024-08-09 09:25:19 -07:00
Krrish Dholakia | 466dc9f32a | fix(huggingface_restapi.py): fix hf embeddings optional param processing | 2024-08-09 09:10:56 -07:00
Krrish Dholakia | d471f7e840 | feat(vertex_httpx.py): return vertex grounding, citation, and safety results | 2024-08-09 08:54:55 -07:00
Krrish Dholakia | 51ccfa9e77 | fix(huggingface_restapi.py): fixes issue where 'wait_for_model' was not being passed as expected | 2024-08-09 08:36:35 -07:00
Krrish Dholakia | 9839a7858c | fix(anthropic.py): fix translation | 2024-08-08 17:33:48 -07:00
Krrish Dholakia | da0413f0a4 | fix(factory.py): handle openai function message having tool call id | 2024-08-08 17:06:50 -07:00
Ishaan Jaff | e671ae58e3 | Merge pull request #5119 from BerriAI/litellm_add_gemini_context_caching_litellm ([Feat-Proxy] Add Support for VertexAI context caching) | 2024-08-08 16:08:58 -07:00
Ishaan Jaff | cae941f4c0 | auto update the model in cachedContents request | 2024-08-08 15:56:46 -07:00
Krrish Dholakia | 1bf8af5cb1 | fix(anthropic.py): fix translation from /v1/messages format to openai format | 2024-08-08 15:55:01 -07:00
Ishaan Jaff | 0d662368ac | add native cachedContents endpoint | 2024-08-08 15:32:07 -07:00
Ishaan Jaff | d237e625ca | use v1beta1 when using cached_content | 2024-08-08 11:09:31 -07:00