Author | Commit | Message | Date
ishaan-jaff | f4a7760ea1 | (feat+test) use passed OpenAI client | 2023-11-28 16:09:10 -08:00
ishaan-jaff | 400a268934 | (feat) completion: Azure allow users to pass client to router | 2023-11-28 15:56:52 -08:00
ishaan-jaff | 7914623fbc | (feat) allow users to pass azure client for acompletion | 2023-11-28 15:44:56 -08:00
Krrish Dholakia | e8331a4647 | fix(utils.py): azure tool calling streaming | 2023-11-27 19:07:38 -08:00
ishaan-jaff | 9d259d08e7 | (linting) fix | 2023-11-27 10:27:51 -08:00
ishaan-jaff | 26938f697e | (feat) completion: debugging - show raw POST request | 2023-11-27 10:13:37 -08:00
ishaan-jaff | f7ae01da8a | (feat) completion: sagemaker - support chat models | 2023-11-27 10:11:10 -08:00
ishaan-jaff | e407b185ee | (feat) completion: sagemaker - better debugging | 2023-11-27 09:08:20 -08:00
ishaan-jaff | afaca3f819 | (fix) acompletion: Raise same error as completion | 2023-11-25 15:33:46 -08:00
Krrish Dholakia | 6d9f7b8f9d | fix: fix nlp cloud streaming | 2023-11-25 13:45:23 -08:00
Krrish Dholakia | 30f47d3169 | bump: version 1.7.0 → 1.7.1 | 2023-11-25 12:34:28 -08:00
Krrish Dholakia | 620633ec28 | fix(openai.py): fix linting issues | 2023-11-25 12:21:29 -08:00
Krrish Dholakia | dac76a4861 | fix(utils.py): fix embedding response output parsing | 2023-11-25 12:06:57 -08:00
ishaan-jaff | 2e08acba93 | (feat) embedding: better logging | 2023-11-25 11:10:06 -08:00
ishaan-jaff | 77a9eb8a77 | (feat) logging for embedding in openai.py | 2023-11-25 11:10:06 -08:00
ishaan-jaff | 23466107a7 | (feat) 10x faster embeddings | 2023-11-24 17:02:57 -08:00
ishaan-jaff | 824136667f | (fix) add azure/ to model. TY Krrish! | 2023-11-23 21:44:08 -08:00
ishaan-jaff | 19fb24cd15 | (feat) cost tracking for azure llms | 2023-11-23 21:41:38 -08:00
Krrish Dholakia | f24786095a | fix(vertex_ai.py): fix exception mapping for vertex ai | 2023-11-23 17:35:33 -08:00
ishaan-jaff | e8b844abae | (fix) azure: better debugging | 2023-11-23 16:08:59 -08:00
Krish Dholakia | 6ba4eeb961 | Merge pull request #885 from Codium-ai/bugfix/hf_timeout: Do not timeout when calling HF through acomplete | 2023-11-23 07:48:59 -08:00
Ori Kotek | e74ac03169 | Do not timeout when calling HF through acomplete | 2023-11-23 15:56:59 +02:00
maqsoodshaik | 0f89c3375a | this commit fixes #883 | 2023-11-23 12:45:38 +01:00
ishaan-jaff | 4260e0c1f0 | (fix) linting error | 2023-11-22 16:22:05 -08:00
ishaan-jaff | 5abd566b7c | (feat) embedding() support for timeouts | 2023-11-22 14:25:55 -08:00
ishaan-jaff | 4247df02c7 | (fix) Azure - only use ad_token when api_key is None | 2023-11-22 14:25:55 -08:00
Krish Dholakia | e4f1e2b138 | Merge pull request #845 from canada4663/upstream-main: Added support for multiple embeddings via Bedrock | 2023-11-21 14:00:06 -08:00
Krrish Dholakia | 1218121e47 | fix(huggingface_restapi.py): fix linting errors | 2023-11-21 10:05:35 -08:00
Krrish Dholakia | 41483d2660 | feat(factory.py): add support for anthropic system prompts for claude 2.1 | 2023-11-21 09:57:26 -08:00
Krrish Dholakia | 846a32ca87 | fix(huggingface_restapi.py): fixing formatting | 2023-11-21 09:57:26 -08:00
Krrish Dholakia | 6892fd8b51 | fix(huggingface_restapi.py): fix huggingface response format | 2023-11-21 09:57:26 -08:00
Krrish Dholakia | a89b8f55e3 | fix(huggingface_restapi.py): handle generate text output | 2023-11-21 09:57:26 -08:00
Krrish Dholakia | 1306addfe8 | fix(openai.py + azure.py): fix linting issues | 2023-11-20 19:29:23 -08:00
Krrish Dholakia | 855964ed45 | fix(utils.py): adding support for rules + mythomax/alpaca prompt template | 2023-11-20 18:58:15 -08:00
ishaan-jaff | 50f883a2fb | (fix) pydantic errors with response.time | 2023-11-20 18:28:19 -08:00
ishaan-jaff | 99515c2e25 | (fix) linting | 2023-11-20 17:00:20 -08:00
ishaan-jaff | 11ec2710c6 | (fix) completion: max_retries using OpenAI client | 2023-11-20 16:57:37 -08:00
canada4663 | 74ed37c4f2 | bedrock embedding changes pre-testing | 2023-11-18 21:00:06 -08:00
ishaan-jaff | 32f22adf8b | (feat) openai improve logging post_call | 2023-11-17 15:51:27 -08:00
Krrish Dholakia | 17bb1184bd | fix(main.py): fix linting issue | 2023-11-17 15:45:00 -08:00
Krrish Dholakia | 0ab6b2451d | fix(acompletion): support client side timeouts + raise exceptions correctly for async calls | 2023-11-17 15:39:47 -08:00
Krrish Dholakia | 7ef1014e59 | fix(factory.py): for ollama models check if it's instruct or not before applying prompt template | 2023-11-16 15:45:08 -08:00
Krrish Dholakia | 51bf637656 | feat: global client for sync + async calls (openai + Azure only) | 2023-11-16 14:44:13 -08:00
Krrish Dholakia | d7f7694848 | fix(openai.py): fix linting issues | 2023-11-16 12:57:53 -08:00
Krrish Dholakia | a94c09c13c | fix(openai.py): handling extra headers | 2023-11-16 12:48:21 -08:00
Krrish Dholakia | f99a161d98 | fix(azure.py): fix linting errors | 2023-11-16 12:15:50 -08:00
Krrish Dholakia | bf0f8b824c | fix(azure.py): use openai client sdk for handling sync+async calling | 2023-11-16 12:08:12 -08:00
Krrish Dholakia | a23c0a2599 | fix(openai.py): fix linting issues | 2023-11-16 11:01:28 -08:00
Krrish Dholakia | bb51216846 | fix(openai.py): supporting openai client sdk for handling sync + async calls (incl. for openai-compatible apis) | 2023-11-16 10:35:03 -08:00
Ishaan Jaff | d6ad62d793 | Merge pull request #811 from dchristian3188/bedrock-llama: Bedrock llama | 2023-11-16 07:57:50 -08:00