Krish Dholakia | b6ede4eb1b | Merge pull request #4716 from pamelafox/countfuncs | 2024-07-16 07:21:31 -07:00
  Add token counting for OpenAI tools/tool_choice
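Usage sketch for the token counting added in #4716, assuming litellm.token_counter accepts the new tools/tool_choice arguments (tool definition and model name below are illustrative):

    import litellm

    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ]
    messages = [{"role": "user", "content": "What's the weather in Boston?"}]

    # the count now includes the serialized tool schema, not just the messages
    n_tokens = litellm.token_counter(model="gpt-3.5-turbo", messages=messages, tools=tools)
    print(n_tokens)
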
Ishaan Jaff | 7944450074 | Merge pull request #4724 from BerriAI/litellm_Set_max_file_size_transc | 2024-07-15 20:42:24 -07:00
  [Feat] - set max file size on /audio/transcriptions
Ishaan Jaff | 4ec6d3847d | max_file_size_mb in float | 2024-07-15 19:58:41 -07:00
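Illustrative sketch of the limit these commits add for /audio/transcriptions; the MB value and the client-side pre-check below are examples, not the proxy's own code:

    import os
    import litellm

    MAX_FILE_SIZE_MB = 1.5  # float limit, as in 4ec6d3847d (value illustrative)

    def transcribe_if_small_enough(path: str):
        # mirror the kind of size check the proxy now applies before transcribing
        size_mb = os.path.getsize(path) / (1024 * 1024)
        if size_mb > MAX_FILE_SIZE_MB:
            raise ValueError(f"file is {size_mb:.2f} MB, over the {MAX_FILE_SIZE_MB} MB limit")
        with open(path, "rb") as audio_file:
            return litellm.transcription(model="whisper-1", file=audio_file)
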
Krrish Dholakia | 58ed852a25 | fix(vertex_httpx.py): return grounding metadata | 2024-07-15 19:43:37 -07:00
Ishaan Jaff | 11ed40be80 | allow setting max_file_size_mb | 2024-07-15 19:25:24 -07:00
Pamela Fox | ae6b8450c1 | Count tokens for tools | 2024-07-15 11:07:52 -07:00
Krrish Dholakia | 6641683d66 | feat(guardrails.py): allow setting logging_only in guardrails_config for presidio pii masking integration | 2024-07-13 12:22:17 -07:00
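Rough sketch of the guardrails_config shape this change enables, written as a Python dict; the keys mirror the GuardrailItem fields named in these commits, but the exact proxy config layout is an assumption:

    # illustrative only: a guardrail that runs presidio PII masking in logging-only mode,
    # i.e. requests pass through unmodified and only the logged copy is masked
    guardrails_config = [
        {
            "pii_masking": {
                "callbacks": ["presidio"],  # which guardrail integration to run
                "default_on": True,         # apply to every request by default
                "logging_only": True,       # the param added in these commits
            }
        }
    ]
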
Krrish Dholakia | d5f5415add | fix(types/guardrails.py): add 'logging_only' param support | 2024-07-13 11:44:37 -07:00
Krish Dholakia | f01298bec9 | Merge pull request #4588 from Manouchehri/vertex-seed-2973 | 2024-07-11 22:02:13 -07:00
  feat(vertex_httpx.py): Add seed parameter
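Usage sketch for the seed parameter from #4588, passed through litellm.completion to Vertex AI / Gemini (model name is illustrative):

    import litellm

    # the same seed should make sampling more repeatable across calls
    response = litellm.completion(
        model="vertex_ai/gemini-1.5-pro",
        messages=[{"role": "user", "content": "Pick a random fruit."}],
        seed=1337,
        temperature=0.7,
    )
    print(response.choices[0].message.content)
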
Krrish Dholakia | d85f24a80b | fix(utils.py): fix recreating model response object when stream usage is true | 2024-07-11 21:01:12 -07:00
Ishaan Jaff | bf50c8e087 | Merge pull request #4661 from BerriAI/litellm_fix_mh | 2024-07-11 15:03:37 -07:00
  [Fix] Model Hub - Show supports vision correctly
Krrish Dholakia | 26a2ae76ab | fix(types/utils.py): message role is always 'assistant' | 2024-07-11 14:14:38 -07:00
Ishaan Jaff | a16cd02cd9 | fix supports vision | 2024-07-11 12:59:42 -07:00
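The supports-vision fixes above feed a per-model capability flag; a quick check of that flag, assuming it is exposed as litellm.supports_vision (model names illustrative):

    import litellm

    for model in ["gpt-4o", "gpt-3.5-turbo"]:
        print(model, litellm.supports_vision(model=model))
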
Krrish Dholakia | 91c1d7bfa8 | fix(watsonx.py): fix watson process response | 2024-07-11 09:34:46 -07:00
  Fixes https://github.com/BerriAI/litellm/issues/4654
Krrish Dholakia | af1064941a | fix(types/utils.py): fix streaming function name | 2024-07-10 21:56:47 -07:00
Krrish Dholakia | 48be4ce805 | feat(proxy_server.py): working /v1/messages with config.yaml | 2024-07-10 18:53:54 -07:00
  Adds async router support for adapter_completion call
Krrish Dholakia | 4ba30abb63 | feat(proxy_server.py): working /v1/messages endpoint | 2024-07-10 18:15:38 -07:00
  Works with claude engineer
Krrish Dholakia | 01a335b4c3 | feat(anthropic_adapter.py): support for translating anthropic params to openai format | 2024-07-10 00:32:28 -07:00
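Sketch of calling the new /v1/messages endpoint on the proxy with an Anthropic-style payload; base URL, API key, and model name are illustrative:

    import requests

    resp = requests.post(
        "http://localhost:4000/v1/messages",
        headers={"Authorization": "Bearer sk-1234", "content-type": "application/json"},
        json={
            "model": "claude-3-opus-20240229",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Hello, world"}],
        },
    )
    print(resp.json())
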
Krrish Dholakia | 71ad281c0a | feat(vertex_httpx.py): add support for gemini 'grounding' | 2024-07-08 21:37:07 -07:00
  Adds support for https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/grounding#rest
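Sketch of requesting Gemini grounding through litellm.completion; passing a googleSearchRetrieval tool follows the Vertex AI REST field in the docs linked above, though the exact spelling accepted here is an assumption (model name illustrative):

    import litellm

    response = litellm.completion(
        model="vertex_ai/gemini-1.5-flash",
        messages=[{"role": "user", "content": "Who won the most recent Euro final?"}],
        tools=[{"googleSearchRetrieval": {}}],  # assumed passthrough of the REST field
    )
    # 58ed852a25 returns the grounding metadata alongside the response
    print(response)
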
David Manouchehri | 3f993699c0 | feat(vertex_httpx.py): Add undocumented seed parameter. | 2024-07-07 23:32:04 +00:00
Krish Dholakia | c643be0c0c | Merge branch 'main' into litellm_gemini_stream_tool_calling | 2024-07-06 19:07:31 -07:00
Krish Dholakia | 5640ed4c8c | Merge branch 'main' into litellm_proxy_tts_pricing | 2024-07-06 14:56:16 -07:00
Krrish Dholakia | 9f900a1bed | fix(vertex_httpx.py): support tool calling w/ streaming for vertex ai + gemini | 2024-07-06 14:02:25 -07:00
Ishaan Jaff | b9ab94a6bb | allow async_only_mode on router | 2024-07-06 12:50:57 -07:00
Krrish Dholakia | 356c18c929 | feat(litellm_logging.py): support cost tracking for tts calls | 2024-07-05 22:09:08 -07:00
Krrish Dholakia | 56410cfcd0 | fix(proxy_server.py): support langfuse logging for rejected requests on /v1/chat/completions | 2024-07-05 13:07:09 -07:00
Krrish Dholakia | 8a8dde6622 | fix(vertex_httpx.py): fix assumptions on usagemetadata | 2024-07-05 11:01:51 -07:00
Krish Dholakia | 5cd4461273 | Merge branch 'main' into feature/return-output-vector-size-in-modelinfo | 2024-07-04 17:03:31 -07:00
Krrish Dholakia | 8625770010 | fix(types/router.py): add custom pricing info to 'model_info' | 2024-07-04 16:07:58 -07:00
  Fixes https://github.com/BerriAI/litellm/issues/4542
Krrish Dholakia | cceb7b59db | fix(cohere.py): fix message parsing to handle tool calling correctly | 2024-07-04 11:13:07 -07:00
Krish Dholakia | 06c6c65d2a | Merge branch 'main' into litellm_anthropic_tool_calling_streaming_fix | 2024-07-03 20:43:51 -07:00
Krrish Dholakia | eae049d059 | fix(anthropic.py): support *real* anthropic tool calling + streaming | 2024-07-03 19:48:35 -07:00
  Parses each chunk and translates to openai format
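Usage sketch for streamed Anthropic tool calling: per this commit each chunk is translated to the OpenAI delta format, so tool-call fragments arrive under delta.tool_calls (tool schema and model name illustrative):

    import litellm

    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_stock_price",
                "description": "Look up the latest price for a ticker",
                "parameters": {
                    "type": "object",
                    "properties": {"ticker": {"type": "string"}},
                    "required": ["ticker"],
                },
            },
        }
    ]

    stream = litellm.completion(
        model="claude-3-5-sonnet-20240620",
        messages=[{"role": "user", "content": "What is NVDA trading at?"}],
        tools=tools,
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta
        tool_calls = getattr(delta, "tool_calls", None)
        if tool_calls:
            print(tool_calls)
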
Ishaan Jaff | bf00204700 | add new GuardrailItem type | 2024-07-03 14:03:34 -07:00
Krrish Dholakia | a6faa5161c | refactor(azure.py): replaces the custom transport logic for just using our httpx client | 2024-07-02 15:32:53 -07:00
  Done to fix all the http/https proxy issues people are facing with proxy.
Krrish Dholakia | 39833762cf | fix(vertex_ai_anthropic.py): support pre-filling "{" for json mode | 2024-06-29 18:54:10 -07:00
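Sketch of the "{" pre-fill trick for JSON mode with Claude on Vertex AI: the trailing assistant turn seeds the reply so the model continues with raw JSON (model name and prompt illustrative):

    import litellm

    response = litellm.completion(
        model="vertex_ai/claude-3-5-sonnet@20240620",
        messages=[
            {"role": "user", "content": "Return a JSON object with keys 'city' and 'country' for Paris."},
            {"role": "assistant", "content": "{"},  # pre-fill; the reply continues after this brace
        ],
    )
    print("{" + response.choices[0].message.content)
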
Krrish Dholakia | 106f25625a | fix(utils.py): new helper function to check if provider/model supports 'response_schema' param | 2024-06-29 12:40:29 -07:00
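Sketch of checking for response_schema support before sending a schema-constrained request; the helper name and signature below are assumptions about what this commit exposes:

    import litellm

    # assumed helper name; verify against utils.py before relying on it
    if litellm.supports_response_schema(model="gemini-1.5-pro", custom_llm_provider="vertex_ai"):
        print("ok to send a JSON schema with this model")
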
Krrish Dholakia | e616158748 | fix(utils.py): handle arguments being None | 2024-06-27 08:56:52 -07:00
  Fixes https://github.com/BerriAI/litellm/issues/4440
Ishaan Jaff | eedbf8a016 | Revert "Add return type annotations to util types" | 2024-06-26 15:59:38 -07:00
  This reverts commit faef56fe69.
Josh Learn | 946599b8cf | Add return type annotations to util types | 2024-06-26 12:46:59 -04:00
Krrish Dholakia | 62ff12c0b6 | fix(vertex_httpx.py): cover gemini content violation (on prompt) | 2024-06-24 19:13:56 -07:00
Krish Dholakia | 39c2fe511c | Merge branch 'main' into litellm_azure_content_filter_fallbacks | 2024-06-22 21:28:29 -07:00
Krrish Dholakia | f9ce6472d7 | fix(router.py): check if azure returns 'content_filter' response + fallback available -> fallback | 2024-06-22 19:10:15 -07:00
  Exception maps azure content filter response exceptions
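Sketch of the routing this enables: with a fallback deployment configured on the Router, an Azure content_filter response (now mapped to an exception) can fail over to another deployment (deployment names and credentials illustrative):

    from litellm import Router

    router = Router(
        model_list=[
            {"model_name": "azure-gpt", "litellm_params": {"model": "azure/gpt-4", "api_key": "...", "api_base": "..."}},
            {"model_name": "openai-gpt", "litellm_params": {"model": "gpt-4o", "api_key": "..."}},
        ],
        # if azure-gpt fails (e.g. a content_filter response), retry on openai-gpt
        fallbacks=[{"azure-gpt": ["openai-gpt"]}],
    )
    response = router.completion(model="azure-gpt", messages=[{"role": "user", "content": "hello"}])
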
Krrish Dholakia | 89dba82be9 | feat(dynamic_rate_limiter.py): initial commit for dynamic rate limiting | 2024-06-21 18:41:31 -07:00
  Closes https://github.com/BerriAI/litellm/issues/4124
Ishaan Jaff | 6186d40823 | router - add doc string | 2024-06-20 14:36:51 -07:00
Ishaan Jaff | 7ce2aa83c1 | feat - set custom routing strategy | 2024-06-20 13:49:44 -07:00
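Rough sketch of plugging in a custom routing strategy; the method and setter names below are assumptions about the hook this commit adds, not a confirmed API (see the router docstring from 6186d40823):

    from litellm import Router

    class FirstDeploymentStrategy:
        async def async_get_available_deployment(self, model, messages=None, **kwargs):
            # trivial strategy: always pick the first configured deployment
            return router.model_list[0]

    router = Router(model_list=[{"model_name": "gpt-4o", "litellm_params": {"model": "gpt-4o"}}])
    router.set_custom_routing_strategy(FirstDeploymentStrategy())  # assumed setter name
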
Krrish Dholakia | 9a4e51c858 | fix(types/utils.py): fix linting error | 2024-06-19 18:58:12 -07:00
Krrish Dholakia | edfe550165 | feat(llm_cost_calc/google.py): do character based cost calculation for vertex ai | 2024-06-19 17:18:42 -07:00
  Calculate cost for vertex ai responses using characters in query/response
  Closes https://github.com/BerriAI/litellm/issues/4165
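Worked sketch of character-based pricing for Vertex AI responses; the per-character rates below are placeholders, not real prices:

    input_cost_per_character = 0.000000125   # placeholder rate
    output_cost_per_character = 0.000000375  # placeholder rate

    prompt = "Explain the difference between TCP and UDP in one paragraph."
    completion_text = "TCP is connection-oriented and reliable; UDP is connectionless..."

    cost = (
        len(prompt) * input_cost_per_character
        + len(completion_text) * output_cost_per_character
    )
    print(f"${cost:.8f}")
    # per edfe550165, litellm.completion_cost(completion_response=...) applies the
    # real per-character rates for vertex ai responses automatically
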
Tom Usher | f9b5e56d46 | Return output_vector_size in get_model_info | 2024-06-19 14:09:20 +01:00
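Usage sketch for the new output_vector_size field returned by get_model_info (model name illustrative):

    import litellm

    info = litellm.get_model_info("text-embedding-3-small")
    print(info.get("output_vector_size"))  # embedding dimension for the model, if known
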
Krish Dholakia | 4c5eb58fc9 | Merge pull request #4266 from BerriAI/litellm_gemini_image_url | 2024-06-18 20:39:25 -07:00
  Support 'image url' to vertex ai / google ai studio gemini models
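Usage sketch for sending an OpenAI-style image_url content block to a Gemini model (model name and image URL illustrative):

    import litellm

    response = litellm.completion(
        model="gemini/gemini-1.5-flash",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is in this picture?"},
                    {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                ],
            }
        ],
    )
    print(response.choices[0].message.content)
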
Krrish Dholakia | 4f467bd6bf | fix(types/utils.py): fix linting errors | 2024-06-18 20:19:06 -07:00