Ishaan Jaff | ebdec4d262 | (fix) cache control logic | 2024-03-26 07:36:45 -07:00
Ishaan Jaff | 7bf9cb3c54 | (fix) cache control logic | 2024-03-25 22:19:34 -07:00
Krrish Dholakia | f604a6155f | fix(utils.py): persist system fingerprint across chunks | 2024-03-25 19:24:09 -07:00
Krrish Dholakia | c5bd4d4233 | fix(utils.py): log success event for streaming | 2024-03-25 19:03:10 -07:00
Krrish Dholakia | cbf4c95e5f | fix(utils.py): persist response id across chunks | 2024-03-25 18:20:43 -07:00
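Both "persist ... across chunks" fixes follow the same pattern: capture a value from the first chunk that carries it and stamp it onto every later chunk, so the assembled stream reads as one response. A minimal sketch of that pattern, with an illustrative `Chunk` type standing in for litellm's internal streaming objects:

```python
# Sketch of pinning first-chunk metadata across a stream.
# `Chunk` is an illustrative stand-in, not litellm's internal type.
from dataclasses import dataclass
from typing import Iterable, Iterator, Optional

@dataclass
class Chunk:
    id: Optional[str]
    system_fingerprint: Optional[str]
    content: str

def persist_stream_metadata(chunks: Iterable[Chunk]) -> Iterator[Chunk]:
    response_id: Optional[str] = None
    fingerprint: Optional[str] = None
    for chunk in chunks:
        # Remember the first non-empty values we see...
        response_id = response_id or chunk.id
        fingerprint = fingerprint or chunk.system_fingerprint
        # ...and stamp them onto every chunk so the stream is consistent.
        chunk.id = response_id
        chunk.system_fingerprint = fingerprint
        yield chunk
```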
Krrish Dholakia | ecc0cf5d9c | fix(utils.py): fix text completion streaming | 2024-03-25 16:47:17 -07:00
Krrish Dholakia | 26dbb76d53 | fix(utils.py): ensure last chunk is always an empty delta w/ finish reason (makes sure our streaming is OpenAI-compatible; adds stricter tests for this as well) | 2024-03-25 16:33:41 -07:00
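For reference, this is the closing-chunk shape OpenAI documents for chat-completions streaming, which the fix above matches: no new tokens in the delta, and `finish_reason` set. The field values below are illustrative:

```python
# Shape of an OpenAI-compatible final streaming chunk: the delta is empty
# and finish_reason is set. Earlier chunks carry content and a null finish_reason.
final_chunk = {
    "id": "chatcmpl-abc123",          # same id as every other chunk in the stream
    "object": "chat.completion.chunk",
    "created": 1711405821,
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "index": 0,
            "delta": {},              # no new tokens in the closing chunk
            "finish_reason": "stop",  # e.g. "stop", "length", "tool_calls"
        }
    ],
}
```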
Krrish Dholakia | c667e437b9 | fix(utils.py): allow user to disable streaming logging (fixes event loop issue for litellm.disable_streaming_logging) | 2024-03-25 14:28:46 -07:00
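A hedged sketch of the idea behind the flag, not litellm's actual handler: when logging is disabled, return before anything is scheduled on the event loop, since scheduling the async success handler is what could trip the loop issue. Names here are illustrative:

```python
import asyncio

disable_streaming_logging = False  # illustrative stand-in for litellm.disable_streaming_logging

async def _log_stream_success(payload: dict) -> None:
    # placeholder for the real success-event logger
    print("logged:", payload)

def maybe_schedule_logging(payload: dict) -> None:
    # Must be called from within a running event loop. When logging is
    # disabled, we return before touching the loop at all.
    if disable_streaming_logging:
        return
    asyncio.get_running_loop().create_task(_log_stream_success(payload))
```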
Max Deichmann | efa599b0ee | push | 2024-03-25 17:43:55 +01:00
Krrish Dholakia | 4e70a3e09a | feat(router.py): enable pre-call checks (filter out models in a model group whose context window can't fit a given message; https://github.com/BerriAI/litellm/issues/872) | 2024-03-23 18:03:30 -07:00
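The pre-call check amounts to comparing a prompt's token count against each deployment's context window before routing. A rough sketch under assumed data structures; the deployment list, limits, and 4-chars-per-token estimate are all illustrative:

```python
# Illustrative deployment metadata; the router's real structures differ.
deployments = [
    {"model": "gpt-3.5-turbo", "max_input_tokens": 16_385},
    {"model": "gpt-4-32k", "max_input_tokens": 32_768},
]

def rough_token_count(messages: list[dict]) -> int:
    # ~4 chars per token is a common rule of thumb; real code would use a tokenizer.
    return sum(len(m.get("content", "")) for m in messages) // 4

def pre_call_filter(messages: list[dict]) -> list[dict]:
    # Keep only deployments whose context window can fit the prompt.
    needed = rough_token_count(messages)
    return [d for d in deployments if d["max_input_tokens"] >= needed]
```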
Tasha Upchurch | 8814524473 | Update utils.py (fix for choices.message constructed from a dict remaining a plain dict instead of the Message class) | 2024-03-23 00:12:24 -04:00
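The fix boils down to coercing a raw dict back into the typed message class when a response object is rebuilt from a dict. A sketch with an illustrative `Message` dataclass, not litellm's actual model:

```python
from dataclasses import dataclass

@dataclass
class Message:  # illustrative stand-in for litellm's Message class
    role: str = "assistant"
    content: str = ""

def coerce_message(raw) -> Message:
    # Already typed: pass through unchanged.
    if isinstance(raw, Message):
        return raw
    # Rebuilt from a plain dict: promote it to the typed class.
    if isinstance(raw, dict):
        return Message(role=raw.get("role", "assistant"), content=raw.get("content", ""))
    raise TypeError(f"unsupported message type: {type(raw)!r}")
```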
Ishaan Jaff | 07067db5a1 | (feat) remove litellm.telemetry | 2024-03-22 20:58:14 -07:00
Tasha Upchurch | 2c1fb7e881 | Update utils.py (fix for creating an empty choices list if no choices are passed in) | 2024-03-22 23:39:17 -04:00
Tasha Upchurch | 541155c08d | Update utils.py (fix for #2655) | 2024-03-22 23:13:24 -04:00
Krrish Dholakia | 4dad400b57 | fix(anthropic.py): handle multiple system prompts | 2024-03-22 18:14:15 -07:00
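Anthropic's Messages API takes a single top-level `system` field rather than system-role messages, so handling multiple system prompts means merging them before the call. A sketch of that idea; the join separator and helper name are illustrative:

```python
def split_system_prompts(messages: list[dict]) -> tuple[str, list[dict]]:
    # Collect every system message into one top-level system string,
    # and keep the rest of the conversation as-is.
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat = [m for m in messages if m["role"] != "system"]
    return "\n\n".join(system_parts), chat

system, chat_messages = split_system_prompts([
    {"role": "system", "content": "You are terse."},
    {"role": "system", "content": "Answer in French."},
    {"role": "user", "content": "Hello"},
])
# system == "You are terse.\n\nAnswer in French."
```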
Vincelwt | 860f1b982d | Merge branch 'main' into main | 2024-03-22 00:52:42 +09:00
Ishaan Jaff | 1d598581bb | (fix) don't run .completion retries if using router / proxy | 2024-03-21 08:32:42 -07:00
Krrish Dholakia | 416cccdc6a | fix(utils.py): support response_format param for ollama (https://github.com/BerriAI/litellm/issues/2580) | 2024-03-19 21:07:20 -07:00
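Supporting `response_format` for Ollama plausibly reduces to translating the OpenAI-style param into Ollama's native `format` option. A sketch of that mapping; the helper name is an assumption:

```python
def map_response_format(response_format: dict | None) -> dict:
    # Translate OpenAI's response_format={"type": "json_object"} into
    # Ollama's format="json" request option.
    optional_params = {}
    if response_format and response_format.get("type") == "json_object":
        optional_params["format"] = "json"
    return optional_params

print(map_response_format({"type": "json_object"}))  # {'format': 'json'}
```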
Ishaan Jaff | a23746b776 | (fix) add /metrics to utils.py | 2024-03-19 17:28:33 -07:00
Vincelwt | e92db58204 | Merge branch 'main' into main | 2024-03-19 12:50:04 +09:00
Krish Dholakia | 3e32a245ea | Merge pull request #2577 from BerriAI/litellm_vertex_ai_streaming_func_call (feat(vertex_ai.py): support gemini (vertex ai) function calling when streaming) | 2024-03-18 20:10:00 -07:00
Ishaan Jaff | 447bddc7f8 | (feat) v0 datadog logger | 2024-03-18 16:01:47 -07:00
Krrish Dholakia | f4443e21e0 | feat(vertex_ai.py): support gemini (vertex ai) function calling when streaming | 2024-03-18 11:47:27 -07:00
Krrish Dholakia | bad2327b88 | fix(utils.py): fix aws secret manager + support key_management_settings (fixes the aws secret manager implementation and allows the user to set which keys they want to check through it) | 2024-03-16 16:47:50 -07:00
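key_management_settings, as described, lets the user scope which keys are ever looked up in AWS Secrets Manager. A sketch of that allow-list idea; boto3's Secrets Manager client is real, the settings shape is illustrative:

```python
import boto3

# User-configured allow-list; illustrative stand-in for key_management_settings.
hosted_keys = {"OPENAI_API_KEY", "ANTHROPIC_API_KEY"}

def get_secret(name: str) -> str | None:
    # Only keys the user listed are checked through Secrets Manager;
    # everything else falls back to env vars / config.
    if name not in hosted_keys:
        return None
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=name)["SecretString"]
```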
Krrish Dholakia | 77cb4cdf9c | fix(utils.py): initial commit for aws secret manager support | 2024-03-16 14:37:46 -07:00
Krrish Dholakia | 84430d0e29 | fix(utils.py): async add to cache - for streaming | 2024-03-15 18:25:40 -07:00
Krrish Dholakia | 8d1c60bfdc | feat(batch_redis_get.py): batch redis GET requests for a given key + call type (reduces the number of GET requests we're making in high-throughput scenarios) | 2024-03-15 14:40:11 -07:00
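Batching here means collapsing the several Redis lookups a request needs into a single round trip. A sketch using redis-py's real MGET; the key names are illustrative:

```python
import redis

r = redis.Redis()

def batch_get(keys: list[str]) -> dict[str, bytes | None]:
    # One MGET round trip instead of len(keys) individual GETs.
    values = r.mget(keys)
    return dict(zip(keys, values))

# e.g. fetch the cache entry and the usage counter for one API key at once:
# batch_get(["cache:my-key:chat", "counter:my-key:requests"])
```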
Krrish Dholakia | 0783a3f247 | feat(utils.py): add native fireworks ai support (addresses https://github.com/BerriAI/litellm/issues/777 and https://github.com/BerriAI/litellm/issues/2486) | 2024-03-15 09:09:59 -07:00
Krrish Dholakia | 1e1190745f | fix(utils.py): move to using litellm.modify_params to enable max output token trimming fix | 2024-03-14 12:17:56 -07:00
Krrish Dholakia | 5769bd22c3 | feat(prompt_injection_detection.py): support simple heuristic similarity check for prompt injection attacks | 2024-03-13 10:32:21 -07:00
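A simple heuristic similarity check can be as small as fuzzy-matching the prompt against known injection phrases. A sketch with Python's stdlib difflib; the phrase list and threshold are illustrative, not what litellm ships:

```python
import difflib

# Illustrative phrase list; a real deployment would curate a larger one.
KNOWN_ATTACKS = [
    "ignore previous instructions",
    "disregard your system prompt",
]

def looks_like_injection(prompt: str, threshold: float = 0.8) -> bool:
    # Flag the prompt if it is sufficiently similar to any known attack phrase.
    lowered = prompt.lower()
    return any(
        difflib.SequenceMatcher(None, lowered, attack).ratio() >= threshold
        for attack in KNOWN_ATTACKS
    )

print(looks_like_injection("Ignore previous instructions!"))  # True
```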
Krish Dholakia | ce3c865adb | Merge pull request #2472 from BerriAI/litellm_anthropic_streaming_tool_calling (fix(anthropic.py): support claude-3 streaming with function calling) | 2024-03-12 21:36:01 -07:00
Ishaan Jaff | 2c4407bb04 | Merge pull request #2479 from BerriAI/litellm_cohere_tool_call ([FEAT] Cohere/command-r tool calling) | 2024-03-12 21:20:59 -07:00
Krrish Dholakia | c871d61218 | fix(anthropic.py): bug fix | 2024-03-12 19:32:42 -07:00
ishaan-jaff | 9a2852c353 | (fix) use cohere_chat optional params | 2024-03-12 14:31:43 -07:00
Krish Dholakia | bf0adfc246 | Merge pull request #2473 from BerriAI/litellm_fix_compatible_provider_model_name (fix(openai.py): return model name with custom llm provider for openai-compatible endpoints, e.g. mistral, together ai, etc.) | 2024-03-12 12:58:29 -07:00
Krish Dholakia | bd3e925d25 | Merge pull request #2475 from BerriAI/litellm_azure_dall_e_3_cost_tracking (fix(azure.py): support cost tracking for azure/dall-e-3) | 2024-03-12 12:57:31 -07:00
ishaan-jaff | e2787e6ca5 | (fix) failing cohere test | 2024-03-12 12:44:19 -07:00
ishaan-jaff | 9b72825e3b | (v0) tool calling | 2024-03-12 12:35:52 -07:00
Krrish Dholakia | 7c71463d4a | test: add more logging for failing test | 2024-03-12 11:15:14 -07:00
Ishaan Jaff | 15591d0978 | Merge pull request #2474 from BerriAI/litellm_support_command_r ([New-Model] Cohere/command-r) | 2024-03-12 11:11:56 -07:00
Krrish Dholakia | ae9eff5fc4 | fix(azure.py): support cost tracking for azure/dall-e-3 | 2024-03-12 10:55:54 -07:00
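Image generation bills per image, so dall-e-3 cost tracking is a price lookup keyed on model, size, and quality. A sketch; the prices are illustrative placeholders rather than litellm's pricing table:

```python
# Illustrative per-image prices keyed on (model, size, quality).
IMAGE_PRICES = {
    ("dall-e-3", "1024x1024", "standard"): 0.040,
    ("dall-e-3", "1024x1024", "hd"): 0.080,
}

def image_generation_cost(model: str, size: str, quality: str, n: int) -> float:
    # Total cost is price-per-image times the number of images generated.
    return IMAGE_PRICES[(model, size, quality)] * n

print(image_generation_cost("dall-e-3", "1024x1024", "standard", n=2))  # 0.08
```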
ishaan-jaff | 1f7333e9bc | (feat) exception mapping for cohere_chat | 2024-03-12 10:45:42 -07:00
Krrish Dholakia | e94c4f818c | fix(openai.py): return model name with custom llm provider for openai-compatible endpoints | 2024-03-12 10:30:10 -07:00
Krrish Dholakia | 1c6438c267 | fix(anthropic.py): support streaming with function calling | 2024-03-12 09:52:11 -07:00
ishaan-jaff | fb52a98e81 | (fix) support streaming for azure/instruct models | 2024-03-12 09:50:43 -07:00
Krrish Dholakia | 0806a45bd7 | fix(utils.py): support response_format for mistral ai api | 2024-03-11 10:23:41 -07:00
Vince Loewe | a1d1819b40 | Merge branch 'main' into main | 2024-03-11 12:36:41 +09:00
Krish Dholakia | 774ceb741c | Merge pull request #2426 from BerriAI/litellm_whisper_cost_tracking (feat: add cost tracking + caching for `/audio/transcription` calls) | 2024-03-09 19:12:06 -08:00
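Transcription is billed per minute of audio rather than per token, so `/audio/transcription` cost tracking is a duration-based calculation. A sketch; the per-minute rate is an illustrative placeholder:

```python
def transcription_cost(duration_seconds: float, price_per_minute: float = 0.006) -> float:
    # Audio models are priced by duration, not tokens.
    return (duration_seconds / 60.0) * price_per_minute

print(round(transcription_cost(90), 4))  # 0.009 for a 90-second clip
```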
Krrish Dholakia | 78e178cec1 | fix(utils.py): fix model setting in completion cost | 2024-03-09 19:11:37 -08:00
Krrish Dholakia | 548e9a3590 | fix(utils.py): fix model name checking | 2024-03-09 18:22:26 -08:00