Author | Commit | Message | Date
ishaan-jaff | 506c14b896 | (ci/cd) run again | 2024-02-06 12:22:24 -08:00
ishaan-jaff | 405a44727c | (ci/cd) run in verbose mode | 2024-02-06 10:57:20 -08:00
Krish Dholakia | 640572647a | Merge pull request #1805 from BerriAI/litellm_cost_tracking_image_gen: feat(utils.py): support cost tracking for openai/azure image gen models | 2024-02-03 22:23:22 -08:00
Krrish Dholakia | 66565f96b1 | test(test_completion.py): fix test | 2024-02-03 21:44:57 -08:00
Krrish Dholakia | 3a19c8b600 | test(test_completion.py): fix test | 2024-02-03 21:30:45 -08:00
Krish Dholakia | 28df60b609 | Merge pull request #1809 from BerriAI/litellm_embedding_caching_updates: Support caching individual items in embedding list (Async embedding only) | 2024-02-03 21:04:23 -08:00
ishaan-jaff | 1155025e6a | (ci/cd) run again | 2024-02-03 20:36:35 -08:00
ishaan-jaff | 774cbbde52 | (test) tgai is unstable | 2024-02-03 20:00:40 -08:00
Krrish Dholakia | efb6123d28 | fix(utils.py): support get_secret("TOGETHER_AI_TOKEN") | 2024-02-03 19:35:09 -08:00
Krrish Dholakia | c49c88c8e5 | fix(utils.py): route together ai calls to openai client (together ai is now openai-compatible) | 2024-02-03 19:22:48 -08:00
Krrish Dholakia | 2d59331148 | test(test_completion.py): skip flaky test | 2024-02-02 18:44:17 -08:00
Krrish Dholakia | 62ad6f19b7 | fix(main.py): for health checks, don't use cached responses | 2024-02-02 16:51:42 -08:00
Krrish Dholakia | 245ec2430e | fix(utils.py): fix azure exception mapping | 2024-02-01 19:05:20 -08:00
ishaan-jaff | d90e04b531 | (ci/cd) run again | 2024-01-31 21:04:03 -08:00
ishaan-jaff | 9446fbd7cc | (ci/cd) run again | 2024-01-31 20:58:14 -08:00
ishaan-jaff | b61204c1ca | (ci/cd) run once more for good luck | 2024-01-31 20:31:54 -08:00
ishaan-jaff | 8fdff3beec | (test) setting organization on litellm.completion | 2024-01-30 10:36:10 -08:00
ishaan-jaff | 36f1e4196f | (ci/cd) run again | 2024-01-29 21:09:13 -08:00
ishaan-jaff | fd6b0ff42e | (ci/cd) run again | 2024-01-29 21:03:42 -08:00
ishaan-jaff | 1b9e992fef | (ci/cd) run again | 2024-01-29 18:24:39 -08:00
ishaan-jaff | f263677459 | (ci/cd) cohere api is flaky - run again | 2024-01-29 12:08:23 -08:00
ishaan-jaff | c304117caa | (fix) cohere test | 2024-01-29 11:36:00 -08:00
ishaan-jaff | 3c44a54e7a | (ci/cd) run again | 2024-01-29 11:29:32 -08:00
ishaan-jaff | eb3cab85fb | (ci/cd) run again | 2024-01-29 10:36:25 -08:00
ishaan-jaff | 6de59a1744 | (ci/cd) run again | 2024-01-29 09:46:44 -08:00
ishaan-jaff | d858f4c027 | (ci/cd) run again | 2024-01-29 09:15:53 -08:00
ishaan-jaff | e2d9e40886 | (test) gpt-4-0125-preview | 2024-01-25 14:42:10 -08:00
ishaan-jaff | d455833dfb | (test) same response id across chunks | 2024-01-23 12:57:04 -08:00
ishaan-jaff | 5ad0971c5a | (test) fix incorrect test lol | 2024-01-23 08:21:14 -08:00
Krrish Dholakia | fd4d65adcd | fix(__init__.py): enable logging.debug to true if set verbose is true | 2024-01-23 07:32:30 -08:00
ishaan-jaff | bccbb0852d | (test) test_completion_sagemaker_stream | 2024-01-22 21:57:26 -08:00
Krrish Dholakia | 276a685a59 | feat(utils.py): support custom cost tracking per second (https://github.com/BerriAI/litellm/issues/1374) | 2024-01-22 15:15:34 -08:00
ishaan-jaff | 9988a39169 | (ci/cd) deploy again | 2024-01-22 08:25:17 -08:00
ishaan-jaff | 6bc7cc46b4 | (docs) router debugging | 2024-01-19 15:18:00 -08:00
Krrish Dholakia | f7694bc193 | Merge branch 'main' into litellm_tpm_rpm_rate_limits | 2024-01-18 19:10:07 -08:00
Krrish Dholakia | 94ce524c63 | test(test_completion.py): handle together ai timeout | 2024-01-18 17:54:16 -08:00
Krrish Dholakia | e0aaa94f28 | fix(main.py): read azure ad token from optional params extra body | 2024-01-18 17:14:03 -08:00
ishaan-jaff | 5a8a5fa0fd | (fix) using base_url Azure | 2024-01-17 10:12:55 -08:00
ishaan-jaff | b95d6ec207 | (v0) fixes for Azure GPT Vision enhancements | 2024-01-17 09:57:16 -08:00
ishaan-jaff | d6f0cb8756 | (ci/cd) fix olama hosted testing | 2024-01-16 12:27:16 -08:00
ishaan-jaff | a8f2550c25 | (ci/cd) openrouter unstable - use other model | 2024-01-15 17:43:56 -08:00
ishaan-jaff | f62dbd0e08 | (test) litellm.completion_cost mistral, anyscale | 2024-01-13 12:35:09 -08:00
ishaan-jaff | 4b3e9c6b38 | (ci/cd) run testing again | 2024-01-13 11:50:43 -08:00
ishaan-jaff | 70899521ae | (test) custom_llm_provider in hidden params | 2024-01-12 17:09:59 -08:00
Krrish Dholakia | 954d1b071c | test: remove invalid arg | 2024-01-10 21:53:29 +05:30
ishaan-jaff | 9c7a4fde87 | (test) hosted - ollama catch timeouts | 2024-01-09 10:35:29 +05:30
Krrish Dholakia | e99a41307a | test: testing fixes | 2024-01-09 10:23:34 +05:30
ishaan-jaff | 9be7e34cb0 | (ci/cd) pytest skip slow replicate test | 2024-01-09 09:57:06 +05:30
ishaan-jaff | 6263103680 | (ci/cd) run again | 2024-01-08 22:42:31 +05:30
Ishaan Jaff | a70626d6e9 | Merge pull request #1356 from BerriAI/litellm_improve_proxy_logs: [Feat] Improve Proxy Logging | 2024-01-08 14:41:01 +05:30