Krish Dholakia | 6501fdb76e | Merge branch 'main' into litellm_global_spend_updates | 2024-01-24 20:20:15 -08:00
Krrish Dholakia | 43f139fafd | fix(ollama_chat.py): fix default token counting for ollama chat | 2024-01-24 20:09:17 -08:00
Krrish Dholakia | 574208f005 | fix(proxy_server.py): track cost for global proxy | 2024-01-24 16:06:10 -08:00
Krish Dholakia | 9784d03d65 | Merge branch 'main' into litellm_reset_key_budget | 2024-01-23 18:10:32 -08:00
Krrish Dholakia | d6844f43c8 | test(test_keys.py): use correct model name for token counting | 2024-01-23 17:46:14 -08:00
Krish Dholakia | 4ca4913468 | Merge pull request #1574 from BerriAI/litellm_fix_streaming_spend_tracking: [WIP] fix(utils.py): fix proxy streaming spend tracking | 2024-01-23 17:07:40 -08:00
Krrish Dholakia | d52f5234b4 | fix(utils.py): fix double hashing issue on spend logs, streaming usage metadata logging issue for spend logs | 2024-01-23 16:14:01 -08:00
Krrish Dholakia | f8870fb48e | fix(utils.py): fix proxy streaming spend tracking | 2024-01-23 15:59:03 -08:00
Krrish Dholakia | afada01ffc | fix(utils.py): fix streaming cost tracking | 2024-01-23 14:39:45 -08:00
ishaan-jaff | 39b4f19bd8 | (fix) same response_id across chunk | 2024-01-23 12:56:03 -08:00
ishaan-jaff | e8cd27f2b7 | (fix) sagemaker streaming support | 2024-01-23 12:31:16 -08:00
Krrish Dholakia | 23b59ac9b8 | fix(utils.py): fix content policy violation check for streaming | 2024-01-23 06:55:04 -08:00
Ishaan Jaff | 6d105754d7 | Merge pull request #1561 from BerriAI/litellm_sagemaker_streaming: [Feat] Add REAL Sagemaker streaming | 2024-01-22 22:10:20 -08:00
ishaan-jaff | c8084bb9d9 | v0 sagemaker_stream | 2024-01-22 21:53:16 -08:00
Krrish Dholakia | 5e0d99b2ef | fix(router.py): fix order of dereferenced dictionaries | 2024-01-22 21:42:25 -08:00
Krish Dholakia | bedb08bdef | Merge pull request #1557 from BerriAI/litellm_emit_spend_logs: feat(utils.py): emit response cost as part of logs | 2024-01-22 21:02:40 -08:00
Krrish Dholakia | 11e3ee4411 | test: fix tests | 2024-01-22 20:20:17 -08:00
Krrish Dholakia | 3e8c8ef507 | fix(openai.py): fix linting issue | 2024-01-22 18:20:15 -08:00
Krish Dholakia | b1cced16fc | Merge pull request #1556 from BerriAI/litellm_importlib_issue: fix(utils.py): move from pkg_resources to importlib | 2024-01-22 15:56:07 -08:00
Krrish Dholakia | e917d0eee6 | feat(utils.py): emit response cost as part of logs | 2024-01-22 15:53:04 -08:00
Krrish Dholakia | 36c6d3cd90 | fix(utils.py): fix debug log | 2024-01-22 15:15:34 -08:00
Krrish Dholakia | a343c4d22f | refactor(utils.py): fix linting errors | 2024-01-22 15:15:34 -08:00
Krrish Dholakia | 074ea17325 | fix: support streaming custom cost completion tracking | 2024-01-22 15:15:34 -08:00
Krrish Dholakia | 2ce4258cc0 | fix(main.py): support custom pricing for embedding calls | 2024-01-22 15:15:34 -08:00
Krrish Dholakia | 276a685a59 | feat(utils.py): support custom cost tracking per second (https://github.com/BerriAI/litellm/issues/1374) | 2024-01-22 15:15:34 -08:00
Krrish Dholakia | 128cf4a81d | fix(utils.py): move from pkg_resources to importlib | 2024-01-22 15:05:09 -08:00
Krrish Dholakia | 6c39b2855f | fix(utils.py): fix async/sync streaming logging | 2024-01-22 13:54:51 -08:00
Krrish Dholakia | 2165dcf6fb | fix(utils.py): fix callback logging | 2024-01-21 00:56:30 -08:00
Krrish Dholakia | e2831e9c80 | fix: fix proxy logging | 2024-01-20 18:22:45 -08:00
Krrish Dholakia | 09b7235b31 | fix: support info level logging on pkg + proxy | 2024-01-20 17:45:47 -08:00
Krrish Dholakia | b07677c6be | fix(gemini.py): support streaming | 2024-01-19 20:21:34 -08:00
Krrish Dholakia | f2a8ceddc2 | fix(utils.py): revert exception mapping change | 2024-01-19 17:39:35 -08:00
Krrish Dholakia | f05aba1f85 | fix(utils.py): add metadata to logging obj on setup, if exists | 2024-01-19 17:29:47 -08:00
ishaan-jaff | 6a695477ba | (fix) async langfuse logger | 2024-01-19 10:44:51 -08:00
ishaan-jaff | f2cfb76920 | (fix) use asyncio run_in_executor | 2024-01-19 09:52:51 -08:00
ishaan-jaff | a9c5b02303 | (v0) fix | 2024-01-19 08:51:14 -08:00
ishaan-jaff | 697c511e76 | (feat) support user param for all providers | 2024-01-18 17:45:59 -08:00
ishaan-jaff | debef7544d | (feat) return Azure enahncements used | 2024-01-17 18:46:41 -08:00
Krrish Dholakia | 08b409bae8 | fix(utils.py): fix if check | 2024-01-17 17:17:58 -08:00
Krrish Dholakia | 7ed4d9b4d1 | fix(utils.py): allow dynamically setting boto3 init and switching between bedrock and openai | 2024-01-17 15:56:30 -08:00
Krrish Dholakia | 8e9dc09955 | fix(bedrock.py): add support for sts based boto3 initialization (https://github.com/BerriAI/litellm/issues/1476) | 2024-01-17 12:08:59 -08:00
Krrish Dholakia | 7b39aacadf | fix(utils.py): mistral optional param mapping | 2024-01-17 09:44:21 -08:00
ishaan-jaff | 00ac18e8b7 | (feat) improve bedrock, sagemaker exception mapping | 2024-01-15 21:22:22 -08:00
ishaan-jaff | fcc1e23a05 | (fix) post_call rules | 2024-01-15 20:56:25 -08:00
ishaan-jaff | e864c78d15 | (feat) post call rules - fail with error message | 2024-01-15 17:13:13 -08:00
ishaan-jaff | 79ad63009e | (feat) support extra body for Azure, OpenAI | 2024-01-13 14:32:11 -08:00
ishaan-jaff | 6bae534968 | (fix) check if custom_llm_provider is not None | 2024-01-13 12:54:03 -08:00
ishaan-jaff | 53fd62b0cd | (feat) use custom_llm_provider in completion_cost | 2024-01-13 12:29:51 -08:00
ishaan-jaff | 6b2a4714a6 | (feat) return custom_llm_provider in streaming response | 2024-01-12 17:14:43 -08:00
David Leen | a674de8f36 | improve bedrock exception granularity | 2024-01-12 16:38:55 +01:00
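A listing in this "author | hash | subject | date" shape can be regenerated from any local clone with `git log` and a custom pretty-format. The sketch below builds a throwaway repository with one empty commit so it is self-contained; the author name and commit subject are taken from the log above, while the temp directory and example email are assumptions for the demo.

```shell
set -e
tmp=$(mktemp -d)          # throwaway repo so the demo does not need the litellm clone
cd "$tmp"
git init -q
git -c user.name='Krrish Dholakia' -c user.email='krrish@example.com' \
    commit -q --allow-empty -m 'fix(utils.py): fix debug log'
# %an = author name, %h = abbreviated hash, %s = subject line, %ai = author date (ISO-like)
git log --pretty=format:'%an | %h | %s | %ai'
```

Against a real clone, the final `git log` command alone reproduces the one-line-per-commit format used in this listing.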