ishaan-jaff | 4bcddec2ce | (feat) pass vertex_ai/ as custom_llm_provider | 2023-12-13 19:02:24 +03:00
zeeland | ac30309909 | refactor: add CustomStreamWrapper return type for completion | 2023-12-13 22:57:19 +08:00
Krrish Dholakia | e396fcb55c | fix(main.py): pass user_id + encoding_format for logging + to openai/azure | 2023-12-12 15:46:44 -08:00
Krrish Dholakia | 93c7393ae8 | fix(utils.py): add more logging | 2023-12-12 15:46:12 -08:00
Krrish Dholakia | 0a3320ed6b | fix(utils.py): add more logging | 2023-12-12 15:46:00 -08:00
ishaan-jaff | d540bdf824 | (fix) remove junk import `from re import T` | 2023-12-12 12:26:15 -08:00
Krrish Dholakia | 0cf0c2d6dd | fix(router.py): deepcopy initial model list, don't mutate it | 2023-12-12 09:54:06 -08:00
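The deepcopy fix above (0cf0c2d6dd) addresses a general Python pitfall: storing a caller's list of dicts without copying lets internal bookkeeping leak back into the caller's data. A minimal sketch of the pattern — the `Router` class and field names here are illustrative, not LiteLLM's actual implementation:

```python
import copy

class Router:
    def __init__(self, model_list):
        # Deep-copy so internal bookkeeping never mutates the caller's list.
        self.model_list = copy.deepcopy(model_list)
        for m in self.model_list:
            m["in_flight"] = 0  # internal field added only to our copy

original = [{"model_name": "gpt-3.5-turbo"}]
router = Router(original)

# The caller's list is untouched; a plain assignment (or a shallow
# copy.copy) would have leaked the "in_flight" key into `original`.
print("in_flight" in original[0])  # False
```

With a shallow copy, the inner dicts would still be shared, so the deep copy is what actually isolates the router's state.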
Krrish Dholakia | b80a81b419 | fix(ollama.py): enable parallel ollama completion calls | 2023-12-11 23:18:37 -08:00
Krrish Dholakia | 4fd400015f | test(test_custom_callback_router.py): add async azure testing for router | 2023-12-11 16:40:35 -08:00
Krrish Dholakia | 02cfefa257 | test(test_custom_callback_input.py): embedding callback tests for azure, openai, bedrock | 2023-12-11 15:32:46 -08:00
Krrish Dholakia | 72591a2165 | test(test_custom_callback_input.py): add bedrock testing | 2023-12-11 13:00:01 -08:00
Krrish Dholakia | 47d0884c0c | test(test_custom_callback_unit.py): add unit tests for custom callbacks + fix related bugs | 2023-12-11 11:44:09 -08:00
ishaan-jaff | 0e3f7ea28f | (feat) access metadata in embedding kwargs() | 2023-12-11 09:39:25 -08:00
ishaan-jaff | 33d6b5206d | (feat) caching + stream - bedrock | 2023-12-11 08:43:50 -08:00
Krish Dholakia | 3ebb7c7dc7 | Merge pull request #985 from estill01/patch-1: Enable setting default `model` value for `LiteLLM`, `Chat`, `Completions` | 2023-12-09 13:59:00 -08:00
ishaan-jaff | e056696831 | (feat) custom logger: async stream, assemble chunks | 2023-12-09 10:10:48 -08:00
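The custom-logger commit above (e056696831) assembles streamed chunks into one final message before logging. The general shape of that step, sketched with a simplified stand-in for OpenAI-style content deltas rather than LiteLLM's exact chunk objects:

```python
def assemble_chunks(chunks):
    """Join streamed content deltas into the full response text.

    Deltas with no content (e.g. the final stop chunk) contribute nothing.
    """
    return "".join(c.get("content") or "" for c in chunks)

stream = [{"content": "Hel"}, {"content": "lo"}, {"content": None}, {"content": "!"}]
print(assemble_chunks(stream))  # Hello!
```

Logging the assembled text once at stream end, rather than per chunk, gives callbacks a complete response to record.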
ishaan-jaff | ee8b23cf91 | (feat) proxy: log model_info + proxy_server request | 2023-12-08 14:26:18 -08:00
ishaan-jaff | d7af5f1d85 | (feat) embedding - pass model_info, proxy_server request | 2023-12-08 14:26:18 -08:00
ishaan-jaff | de6880dc09 | (feat) pass model_info, proxy_server_request to callback | 2023-12-08 14:26:18 -08:00
ishaan-jaff | f744445db4 | (fix) make print_verbose non-blocking | 2023-12-07 17:31:32 -08:00
Krrish Dholakia | 583de6ab92 | fix(bedrock.py): fix output format for cohere embeddings | 2023-12-06 22:47:01 -08:00
ishaan-jaff | 40faeb6cd7 | (feat) aembedding - add custom logging support | 2023-12-06 19:09:06 -08:00
Krrish Dholakia | 45e9c3eb31 | feat(sagemaker.py): support huggingface embedding models | 2023-12-06 11:41:38 -08:00
Krrish Dholakia | f9b74e54a3 | fix(sagemaker.py): enable passing hf model name for prompt template | 2023-12-05 16:31:59 -08:00
Krrish Dholakia | 7e42c64cc5 | fix(utils.py): support sagemaker llama2 custom endpoints | 2023-12-05 16:05:15 -08:00
Krrish Dholakia | 1e1fae3d58 | fix(main.py): accept user in embedding() | 2023-12-02 21:49:23 -08:00
estill01 | 2b20d78ce8 | fix | 2023-12-03 05:37:57 +00:00
estill01 | 27ebbaa8f1 | Fix: persistent 'model' default value | 2023-12-03 05:34:24 +00:00
Krrish Dholakia | 95df9ea50d | fix(main.py): only send user if set | 2023-12-02 20:36:30 -08:00
Krrish Dholakia | 00a9150f54 | fix(main.py): set user to None if not passed in | 2023-12-02 20:08:25 -08:00
Krrish Dholakia | 285f0b72c3 | fix(main.py): fix pydantic warning for usage dict | 2023-12-02 20:02:55 -08:00
estill01 | 145da887af | Enable setting default model value for Completions (add `model` arg to the `Completions` class; if you provide a value, it is used when you create new completions from an instance of the class) | 2023-12-02 19:50:18 -08:00
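The estill01 commits above (145da887af, 27ebbaa8f1) add a class-level default for `model`. The usual shape of such an instance default with a per-call override is sketched below; the class and method names are hypothetical, not LiteLLM's actual API:

```python
class Completions:
    def __init__(self, model=None):
        # Instance-level default, used whenever a call omits `model`.
        self.model = model

    def create(self, messages, model=None):
        # A per-call `model` overrides the instance default.
        chosen = model or self.model
        if chosen is None:
            raise ValueError("no model set on the instance or the call")
        return {"model": chosen, "messages": messages}

client = Completions(model="gpt-3.5-turbo")
resp = client.create(messages=[{"role": "user", "content": "hi"}])
print(resp["model"])  # gpt-3.5-turbo
```

The "persistent" part of the fix is that the default lives on the instance, so every `create()` call from that instance inherits it.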
Krrish Dholakia | 49684e9eb4 | fix(azure.py): fix linting errors | 2023-11-30 13:32:29 -08:00
Krrish Dholakia | 526cc3a9c4 | feat(main.py): add support for azure-openai via cloudflare ai gateway | 2023-11-30 13:19:49 -08:00
Krrish Dholakia | 68e0eca6b8 | fix(utils.py): include system fingerprint in streaming response object | 2023-11-30 08:45:52 -08:00
Krrish Dholakia | 0341b0cd07 | feat(main.py): allow updating model cost via completion() | 2023-11-29 20:14:39 -08:00
Krrish Dholakia | a1ea893a73 | fix(main.py): don't pass stream to petals | 2023-11-29 19:58:04 -08:00
Krrish Dholakia | a05722571b | fix(replicate.py): fix custom prompt formatting | 2023-11-29 19:44:09 -08:00
ishaan-jaff | dc9bafa959 | (feat) embedding: async Azure | 2023-11-29 19:43:47 -08:00
ishaan-jaff | ca03f83597 | (feat) async embeddings: OpenAI | 2023-11-29 19:35:08 -08:00
Krrish Dholakia | 67df0804eb | fix(main.py): fix null finish reason issue for ollama | 2023-11-29 16:50:11 -08:00
Krrish Dholakia | 2a5592abe7 | fix(bedrock.py): support ai21 / bedrock streaming | 2023-11-29 16:35:06 -08:00
ishaan-jaff | 0fbbefca94 | (feat) completion: add rpm, tpm as litellm params | 2023-11-29 16:19:05 -08:00
Krrish Dholakia | 18afb51d72 | bump: version 1.7.13 → 1.7.14 | 2023-11-29 15:19:18 -08:00
Krrish Dholakia | 00b55e4389 | fix(main.py): have stream_chunk_builder return successful response even if token_counter fails | 2023-11-29 15:19:11 -08:00
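Commit 00b55e4389 above makes token counting best-effort so a counter failure cannot discard an otherwise complete response. A hedged sketch of that defensive pattern — the function signature and result fields here are illustrative, not LiteLLM's actual code:

```python
def stream_chunk_builder(chunks, token_counter):
    """Build a response from streamed chunks; usage accounting is optional."""
    response = {"content": "".join(chunks)}
    try:
        # Best-effort: if the tokenizer raises, we still return the response,
        # just without usage numbers.
        response["usage"] = {"completion_tokens": token_counter(response["content"])}
    except Exception:
        response["usage"] = None
    return response

def broken_counter(text):
    raise RuntimeError("tokenizer unavailable")

resp = stream_chunk_builder(["Hel", "lo"], broken_counter)
print(resp["content"])  # Hello
```

The trade-off is deliberate: missing usage metadata is recoverable, but dropping the user's completed response is not.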
Krrish Dholakia | 622eaf4a21 | fix(utils.py): fix parallel tool calling when streaming | 2023-11-29 10:56:21 -08:00
ishaan-jaff | e88f498201 | (fix) embedding: pop client out of params | 2023-11-28 21:22:01 -08:00
Krrish Dholakia | ceadb1547d | fix(main.py): pass client as a litellm-specific kwarg | 2023-11-28 21:20:05 -08:00
ishaan-jaff | 9d69ea5b12 | (fix) router: pass client | 2023-11-28 16:34:16 -08:00
ishaan-jaff | efd5dbccd5 | (feat) completion: pass OpenAI client to openai | 2023-11-28 16:05:01 -08:00