Krrish Dholakia | bc66ef9d5c | fix(utils.py): fix aws secret manager + support key_management_settings | 2024-03-16 16:47:50 -07:00
    fixes the aws secret manager implementation and allows the user to set which keys they want to check through it
Krrish Dholakia | d8956e9255 | fix(utils.py): initial commit for aws secret manager support | 2024-03-16 14:37:46 -07:00
Krrish Dholakia | 909341c4f2 | fix(utils.py): async add to cache - for streaming | 2024-03-15 18:25:40 -07:00
Krrish Dholakia | 226953e1d8 | feat(batch_redis_get.py): batch redis GET requests for a given key + call type | 2024-03-15 14:40:11 -07:00
    reduces the number of GET requests we're making in high-throughput scenarios
Krrish Dholakia | 9909f44015 | feat(utils.py): add native fireworks ai support | 2024-03-15 09:09:59 -07:00
    addresses - https://github.com/BerriAI/litellm/issues/777, https://github.com/BerriAI/litellm/issues/2486
Krrish Dholakia | a634424fb2 | fix(utils.py): move to using litellm.modify_params to enable max output token trimming fix | 2024-03-14 12:17:56 -07:00
Krrish Dholakia | 234cdbbfef | feat(prompt_injection_detection.py): support simple heuristic similarity check for prompt injection attacks | 2024-03-13 10:32:21 -07:00
Krish Dholakia | 9f2d540ebf | Merge pull request #2472 from BerriAI/litellm_anthropic_streaming_tool_calling | 2024-03-12 21:36:01 -07:00
    fix(anthropic.py): support claude-3 streaming with function calling
Ishaan Jaff | 7b4f9691c7 | Merge pull request #2479 from BerriAI/litellm_cohere_tool_call | 2024-03-12 21:20:59 -07:00
    [FEAT] Cohere/command-r tool calling
Krrish Dholakia | d620b4dc5d | fix(anthropic.py): bug fix | 2024-03-12 19:32:42 -07:00
ishaan-jaff | b9bfc7c36c | (fix) use cohere_chat optional params | 2024-03-12 14:31:43 -07:00
Krish Dholakia | 0d18f3c0ca | Merge pull request #2473 from BerriAI/litellm_fix_compatible_provider_model_name | 2024-03-12 12:58:29 -07:00
    fix(openai.py): return model name with custom llm provider for openai-compatible endpoints (e.g. mistral, together ai, etc.)
Krish Dholakia | 1ba102c618 | Merge pull request #2475 from BerriAI/litellm_azure_dall_e_3_cost_tracking | 2024-03-12 12:57:31 -07:00
    fix(azure.py): support cost tracking for azure/dall-e-3
ishaan-jaff | a18c941621 | (fix) failing cohere test | 2024-03-12 12:44:19 -07:00
ishaan-jaff | d136238f6f | (v0) tool calling | 2024-03-12 12:35:52 -07:00
Krrish Dholakia | d07c813ef9 | test: add more logging for failing test | 2024-03-12 11:15:14 -07:00
Ishaan Jaff | 5172fb1de9 | Merge pull request #2474 from BerriAI/litellm_support_command_r | 2024-03-12 11:11:56 -07:00
    [New-Model] Cohere/command-r
Krrish Dholakia | 7dd94c802e | fix(azure.py): support cost tracking for azure/dall-e-3 | 2024-03-12 10:55:54 -07:00
ishaan-jaff | e5bb65669d | (feat) exception mapping for cohere_chat | 2024-03-12 10:45:42 -07:00
Krrish Dholakia | 0033613b9e | fix(openai.py): return model name with custom llm provider for openai compatible endpoints | 2024-03-12 10:30:10 -07:00
Krrish Dholakia | 86ed0aaba8 | fix(anthropic.py): support streaming with function calling | 2024-03-12 09:52:11 -07:00
ishaan-jaff | 223ac464d7 | (fix) support streaming for azure/instruct models | 2024-03-12 09:50:43 -07:00
Krrish Dholakia | 312a9d8c26 | fix(utils.py): support response_format for mistral ai api | 2024-03-11 10:23:41 -07:00
Vince Loewe | 7c38f992dc | Merge branch 'main' into main | 2024-03-11 12:36:41 +09:00
Krish Dholakia | c7d0af0a2e | Merge pull request #2426 from BerriAI/litellm_whisper_cost_tracking | 2024-03-09 19:12:06 -08:00
    feat: add cost tracking + caching for `/audio/transcription` calls
Krrish Dholakia | 1d15dde6de | fix(utils.py): fix model setting in completion cost | 2024-03-09 19:11:37 -08:00
Krrish Dholakia | 8d2d51b625 | fix(utils.py): fix model name checking | 2024-03-09 18:22:26 -08:00
Krrish Dholakia | fa45c569fd | feat: add cost tracking + caching for transcription calls | 2024-03-09 15:43:38 -08:00
Krrish Dholakia | 8b24ddcbbd | fix(bedrock.py): enable claude-3 streaming | 2024-03-09 14:02:27 -08:00
Krish Dholakia | caa99f43bf | Merge branch 'main' into litellm_load_balancing_transcription_endpoints | 2024-03-08 23:08:47 -08:00
Krish Dholakia | e245b1c98a | Merge pull request #2401 from BerriAI/litellm_transcription_endpoints | 2024-03-08 23:07:48 -08:00
    feat(main.py): support openai transcription endpoints
Krrish Dholakia | fd52b502a6 | fix(utils.py): *new* get_supported_openai_params() function | 2024-03-08 23:06:40 -08:00
    Returns the supported openai params for a given model + provider
Krrish Dholakia | aeb3cbc9b6 | fix(utils.py): add additional providers to get_supported_openai_params | 2024-03-08 23:06:40 -08:00
Krrish Dholakia | daa371ade9 | fix(utils.py): add support for anthropic params in get_supported_openai_params | 2024-03-08 23:06:40 -08:00
Krrish Dholakia | fac01f8481 | fix(azure.py): add pre call logging for transcription calls | 2024-03-08 22:23:21 -08:00
Krrish Dholakia | 0fb7afe820 | feat(proxy_server.py): working /audio/transcription endpoint | 2024-03-08 18:20:27 -08:00
ishaan-jaff | 0a538fe679 | (feat) use no-log to disable per request logging | 2024-03-08 16:56:20 -08:00
ishaan-jaff | ddd231a8c2 | (feat) use no-log as a litellm param | 2024-03-08 16:46:38 -08:00
ishaan-jaff | 986a526790 | (feat) disable logging per request | 2024-03-08 16:25:54 -08:00
Krrish Dholakia | 696eb54455 | feat(main.py): support openai transcription endpoints | 2024-03-08 10:25:19 -08:00
    enable user to load balance between openai + azure transcription endpoints
Krrish Dholakia | 0e7b30bec9 | fix(utils.py): return function name for ollama_chat function calls | 2024-03-08 08:01:10 -08:00
Krrish Dholakia | ec79482612 | fix(utils.py): fix google ai studio timeout error raising | 2024-03-06 21:12:04 -08:00
Krish Dholakia | 38612ddd34 | Merge pull request #2377 from BerriAI/litellm_team_level_model_groups | 2024-03-06 21:03:53 -08:00
    feat(proxy_server.py): team based model aliases
Krrish Dholakia | c0c3117dec | fix(utils.py): fix get optional param embeddings | 2024-03-06 20:47:05 -08:00
ishaan-jaff | 0ee02e1ab9 | (fix) vertex_ai test_vertex_projects optional params embedding | 2024-03-06 20:33:25 -08:00
Krish Dholakia | cb8b30970b | Merge pull request #2347 from BerriAI/litellm_retry_rate_limited_requests | 2024-03-06 19:23:11 -08:00
    feat(proxy_server.py): retry if virtual key is rate limited
Krrish Dholakia | 7e3734d037 | test(test_completion.py): handle gemini timeout error | 2024-03-06 19:05:39 -08:00
ishaan-jaff | d3818713ad | (fix) dict changed size during iteration | 2024-03-06 17:53:01 -08:00
Krrish Dholakia | 7d824225a5 | fix(utils.py): set status code for api error | 2024-03-05 21:37:59 -08:00
Krrish Dholakia | a3a751ce62 | fix(utils.py): fix mistral api exception mapping | 2024-03-05 20:45:16 -08:00