e54056f0ed  fix(azure.py): use openai client sdk for handling sync+async calling  (Krrish Dholakia, 2023-11-16 12:08:12 -08:00)
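The commit above moves Azure calls onto the openai client SDK so one code path can serve both sync and async callers. As context, a minimal sketch of the sync/async client pair the openai v1 SDK exposes for Azure; the key, API version, endpoint, and deployment name are placeholders, not values from this repo:

```python
# Sketch only: openai>=1.0.0 ships separate sync and async Azure clients
# with the same request shape, which is what makes a shared code path easy.
import asyncio
from openai import AzureOpenAI, AsyncAzureOpenAI

MESSAGES = [{"role": "user", "content": "Hello"}]

# Synchronous call
sync_client = AzureOpenAI(
    api_key="my-azure-key",                                 # placeholder
    api_version="2023-07-01-preview",                       # placeholder
    azure_endpoint="https://my-endpoint.openai.azure.com",  # placeholder
)
sync_response = sync_client.chat.completions.create(
    model="my-deployment",  # Azure deployment name, placeholder
    messages=MESSAGES,
)

# Asynchronous call with the identical request shape
async def acall():
    async_client = AsyncAzureOpenAI(
        api_key="my-azure-key",
        api_version="2023-07-01-preview",
        azure_endpoint="https://my-endpoint.openai.azure.com",
    )
    return await async_client.chat.completions.create(
        model="my-deployment", messages=MESSAGES
    )

async_response = asyncio.run(acall())
```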
aa84ca04d8  (fix) HF api + streaming  (ishaan-jaff, 2023-11-16 11:59:56 -08:00)
fb2d398d2c  (fix) langfuse logging + openai streaming when chunk = [DONE]  (ishaan-jaff, 2023-11-16 10:45:35 -08:00)
9c7cc84eb0  fix(openai.py): supporting openai client sdk for handling sync + async calls (incl. for openai-compatible apis)  (Krrish Dholakia, 2023-11-16 10:35:03 -08:00)
da9a0ab928  Merge pull request #811 from dchristian3188/bedrock-llama  (Ishaan Jaff, 2023-11-16 07:57:50 -08:00)
    Bedrock llama
d6d0cbd63c  Merge pull request #826 from rodneyxr/ollama-fixes  (Ishaan Jaff, 2023-11-16 07:55:53 -08:00)
    Fix typo for initial_prompt_value and too many values to unpack error
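PR #826 above fixes a typo around initial_prompt_value, a parameter of LiteLLM's custom prompt-template registration. A hedged usage sketch of that API; the model name and template strings are illustrative, not taken from the PR:

```python
# Illustrative sketch of litellm.register_prompt_template:
# initial_prompt_value is prepended before the role-formatted messages,
# final_prompt_value is appended after them.
import litellm

litellm.register_prompt_template(
    model="ollama/llama2",  # placeholder model name
    initial_prompt_value="You are a helpful assistant.\n",
    roles={
        "system": {"pre_message": "[INST] <<SYS>>\n", "post_message": "\n<</SYS>>\n[/INST]"},
        "user": {"pre_message": "[INST] ", "post_message": " [/INST]"},
        "assistant": {"pre_message": "", "post_message": "\n"},
    },
    final_prompt_value="\nAnswer concisely:",
)
```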
461115330b  updated utils for bedrock.meta streaming  (David Christian, 2023-11-16 07:12:27 -08:00)
ef4e5b9636  test: set request timeout at request level  (Krrish Dholakia, 2023-11-15 17:42:31 -08:00)
f2d8bfd40d  bugfixes for ollama  (Rodney Rodriguez, 2023-11-15 19:27:06 -06:00)
b42cf80585  fix(utils): fixing exception mapping  (Krrish Dholakia, 2023-11-15 15:51:17 -08:00)
0ede0e836e  feat(get_max_tokens): get max tokens for huggingface hub models  (Krrish Dholakia, 2023-11-15 15:25:40 -08:00)
e35ce15a89  refactor(huggingface_restapi.py): moving async completion + streaming to real async calls  (Krrish Dholakia, 2023-11-15 15:14:21 -08:00)
04ce14e404  fix(utils.py): fix langfuse integration  (Krrish Dholakia, 2023-11-15 14:05:40 -08:00)
e324388520  fix(utils.py): check for none params  (Krrish Dholakia, 2023-11-15 13:39:09 -08:00)
8eaa1eb37f  fix(utils.py): azure streaming initial format  (Krrish Dholakia, 2023-11-15 13:30:08 -08:00)
e5929f2f7e  fix(azure.py + proxy_server.py): fix function calling response object + support router on proxy  (Krrish Dholakia, 2023-11-15 13:15:16 -08:00)
29a0c29eb3  fix(utils.py): await async function in client wrapper  (Krrish Dholakia, 2023-11-14 22:07:28 -08:00)
0f6713993d  fix(router.py): enabling retrying with expo backoff (without tenacity) for router  (Krrish Dholakia, 2023-11-14 20:57:51 -08:00)
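The router commit above drops the tenacity dependency in favor of a hand-rolled retry loop. A generic sketch of exponential backoff with jitter, illustrative rather than the router's actual code:

```python
# Generic exponential backoff with jitter (what dropping tenacity implies
# writing by hand). Delay grows as base * 2^attempt; the random jitter
# spreads out retries from concurrent callers.
import random
import time

def retry_with_expo_backoff(fn, max_retries=3, base_delay=1.0):
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries, surface the last error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))
```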
9585856b9f  (feat) debug POST logs  (ishaan-jaff, 2023-11-14 18:16:45 -08:00)
838cb3e20b  (fix) debugging with POST request  (ishaan-jaff, 2023-11-14 18:05:34 -08:00)
e0f7120459  (feat) improve logging of raw POST curl command  (ishaan-jaff, 2023-11-14 17:54:09 -08:00)
c7fbbe8764  (feat) add ability to view POST requests from litellm.completion()  (ishaan-jaff, 2023-11-14 17:27:20 -08:00)
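These four commits build out request debugging. A hedged usage sketch: at the time, LiteLLM's debug output was toggled with the set_verbose flag, after which completion() logs the raw outbound request (these commits add a curl-style view of the POST):

```python
# Hedged sketch: enable LiteLLM's verbose debug logging, then make a call.
# Requires OPENAI_API_KEY in the environment for this model.
import litellm

litellm.set_verbose = True  # print raw POST requests / debug logs

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi"}],
)
```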
9b582b2c85  fix(main.py): keep client consistent across calls + exponential backoff retry on ratelimit errors  (Krrish Dholakia, 2023-11-14 16:26:05 -08:00)
526eb99ade  fix(palm.py): exception mapping bad requests / filtered responses  (Krrish Dholakia, 2023-11-14 11:53:13 -08:00)
abbf19bce2  Provide response to ServiceUnavailableError where needed  (Jack Collins, 2023-11-13 21:20:40 -08:00)
34fea89fb0  fix(utils.py): streaming  (Krrish Dholakia, 2023-11-13 18:15:14 -08:00)
c86be7665d  test(utils.py): adding logging for azure streaming  (Krrish Dholakia, 2023-11-13 17:53:15 -08:00)
40f5805386  test(utils.py): test logging  (Krrish Dholakia, 2023-11-13 17:41:45 -08:00)
b572e9fe3a  test(utils.py): add logging and fix azure streaming  (Krrish Dholakia, 2023-11-13 17:24:13 -08:00)
63daffb91b  test(utils.py): additional logging  (Krrish Dholakia, 2023-11-13 17:13:41 -08:00)
97e8fc640c  test(utils.py): additional logging  (Krrish Dholakia, 2023-11-13 17:06:24 -08:00)
e984122117  test(utils.py): additional logging  (Krrish Dholakia, 2023-11-13 16:59:04 -08:00)
39e784fb8b  test(utils.py): adding more logging for streaming test  (Krrish Dholakia, 2023-11-13 16:54:16 -08:00)
777a924e6b  fix(utils.py): fix response object mapping  (Krrish Dholakia, 2023-11-13 15:58:25 -08:00)
9c4afd87ed  added support for bedrock llama models  (David Christian, 2023-11-13 15:41:21 -08:00)
11b63bfba7  fix(promptlayer.py): fixing promptlayer logging integration  (Krrish Dholakia, 2023-11-13 15:04:15 -08:00)
6ca8528c25  fix(main.py): fix linting errors  (Krrish Dholakia, 2023-11-13 14:52:37 -08:00)
330708e7ef  fix(tests): fixing response objects for testing  (Krrish Dholakia, 2023-11-13 14:39:30 -08:00)
bdf801d987  fix(together_ai.py): exception mapping for tgai  (Krrish Dholakia, 2023-11-13 13:17:15 -08:00)
d8121737d6  test(test_completion.py): cleanup tests  (Krrish Dholakia, 2023-11-13 11:23:38 -08:00)
a1f1262d18  (fix) text completion response  (ishaan-jaff, 2023-11-13 10:29:23 -08:00)
f388000566  (fix) deepinfra with openai v1.0.0  (ishaan-jaff, 2023-11-13 09:51:22 -08:00)
8656a6aff7  (fix) token_counter - use openai token counter only for chat completion  (ishaan-jaff, 2023-11-13 08:00:27 -08:00)
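For the token_counter commit above, a hedged usage sketch of the helper; exact counts depend on the model's tokenizer, and the message content is illustrative:

```python
# Hedged sketch: count the tokens a chat-completion payload will consume.
import litellm

n_tokens = litellm.token_counter(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(n_tokens)
```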
8db19d0af4  fix(utils.py): replacing openai.error import statements  (Krrish Dholakia, 2023-11-11 19:25:21 -08:00)
4b74ddcb17  refactor: fixing linting issues  (Krrish Dholakia, 2023-11-11 18:52:28 -08:00)
c0a757a25f  refactor(azure.py): working azure completion calls with openai v1 sdk  (Krrish Dholakia, 2023-11-11 16:44:39 -08:00)
1ec07c0aba  refactor(openai.py): working openai chat + text completion for openai v1 sdk  (Krrish Dholakia, 2023-11-11 16:25:10 -08:00)
a5ec85b1f2  refactor(openai.py): making it compatible for openai v1  (Krrish Dholakia, 2023-11-11 15:33:02 -08:00)
    BREAKING CHANGE:
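The refactors above track the openai v1 migration. Its breaking change replaced module-level calls (and openai.error exception imports, per commit 8db19d0af4) with client objects; a minimal before/after against the public OpenAI SDK:

```python
# openai < 1.0.0 (pre-migration style, shown for contrast):
#   import openai
#   openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[...])
#   # errors lived under openai.error, e.g. openai.error.RateLimitError

# openai >= 1.0.0 (the interface these refactors target):
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi"}],
)
# Exceptions moved to the top level, e.g. openai.RateLimitError.
```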
4408a1f806  (fix) completion gpt-4 vision check finish_details or finish_reason  (ishaan-jaff, 2023-11-11 10:28:20 -08:00)
292b12b191  Merge pull request #787 from duc-phamh/improve_message_trimming  (Ishaan Jaff, 2023-11-11 09:39:43 -08:00)
    Improve message trimming
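PR #787 improves message trimming: shrinking a conversation to fit a model's context window before the request is sent. A hedged sketch, assuming the trim_messages helper defined in litellm/utils.py:

```python
# Hedged sketch: trim a long conversation to fit the model's context window.
from litellm.utils import trim_messages

messages = [{"role": "user", "content": "a very long conversation ..."}]
trimmed = trim_messages(messages, model="gpt-3.5-turbo")
```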