Graham Neubig
6e9267ca66
Make vertex_chat work with generate_content
2023-12-20 15:32:44 -05:00
Graham Neubig
482b3b5bc3
Add a default for safety settings in vertex AI
2023-12-20 13:12:50 -05:00
Krrish Dholakia
04bbd0649f
fix(router.py): only do sync image gen fallbacks for now
...
The CustomHTTPTransport we use for dall-e-2 only works for sync httpx calls, not async. We will need to spend some time writing the async version (a sketch of the sync/async transport split follows this entry).
2023-12-20 19:10:59 +05:30
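For context, the httpx transport split that forces this sync-only fallback looks roughly like the sketch below. This is a minimal illustration of httpx's public transport API, not litellm's actual CustomHTTPTransport; the class names are made up.

```python
# Minimal sketch of why a sync httpx transport cannot serve async calls.
# Class names are illustrative; this is not litellm's CustomHTTPTransport.
import httpx


class SyncImageGenTransport(httpx.BaseTransport):
    """Sync transports implement handle_request() and only serve httpx.Client."""

    def __init__(self) -> None:
        self._inner = httpx.HTTPTransport()

    def handle_request(self, request: httpx.Request) -> httpx.Response:
        # Any custom request/response rewriting would go here.
        return self._inner.handle_request(request)


class AsyncImageGenTransport(httpx.AsyncBaseTransport):
    """An async client needs a separate class implementing handle_async_request()."""

    def __init__(self) -> None:
        self._inner = httpx.AsyncHTTPTransport()

    async def handle_async_request(self, request: httpx.Request) -> httpx.Response:
        return await self._inner.handle_async_request(request)


# httpx.Client(transport=SyncImageGenTransport()) works, but
# httpx.AsyncClient() expects an AsyncBaseTransport, so the async variant
# must be written separately -- hence the sync-only image gen fallback.
```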
Krrish Dholakia
f355e03515
feat(main.py): add async image generation support
2023-12-20 16:58:40 +05:30
Krrish Dholakia
b3962e483f
feat(azure.py): add support for azure image generations endpoint
2023-12-20 16:37:21 +05:30
Krrish Dholakia
f0df28362a
feat(ollama.py): add support for ollama function calling
2023-12-20 14:59:55 +05:30
Graham Neubig
2d15e5384b
Add partial support of vertexai safety settings
2023-12-19 22:26:55 -05:00
ishaan-jaff
9995229b97
(fix) proxy + ollama - raise exception correctly
2023-12-19 18:48:34 +05:30
Krish Dholakia
408f232bd7
Merge branch 'main' into main
2023-12-18 17:54:34 -08:00
Krrish Dholakia
34509d8dda
fix(main.py): return async completion calls
2023-12-18 17:41:54 -08:00
ishaan-jaff
6b272076d7
(feat) openrouter set transforms=[] default
2023-12-18 09:16:33 +05:30
ishaan-jaff
b15682bc1f
(feat) set default openrouter configs
2023-12-18 08:55:51 +05:30
Joel Eriksson
e214e6ab47
Fix bug when iterating over lines in ollama response
...
async for line in resp.content.iter_any() will return
incomplete lines when the lines are long, and that
results in an exception being thrown by json.loads()
when it tries to parse the incomplete JSON.
The default behavior of the stream reader for aiohttp
response objects is to iterate over lines, so simply
removing .iter_any() fixes the bug (a sketch follows
this entry).
2023-12-17 20:23:26 +02:00
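The chunk-vs-line distinction this fix relies on, as a minimal sketch; the URL and payload here are illustrative placeholders, not taken from the commit.

```python
# Minimal sketch of the iter_any() vs. line-iteration bug described above.
# The endpoint and payload are illustrative placeholders.
import json
import aiohttp


async def stream_json_lines(url: str, payload: dict):
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json=payload) as resp:
            # Buggy version:
            #   async for chunk in resp.content.iter_any():
            # iter_any() yields whatever bytes are currently buffered, so a
            # long JSON line can arrive split across chunks, and json.loads()
            # raises when handed the fragment.
            #
            # Fixed version: iterating the StreamReader directly yields one
            # complete newline-terminated line per step.
            async for line in resp.content:
                if line.strip():
                    yield json.loads(line)
```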
Krrish Dholakia
a3c7a340a5
fix(ollama.py): fix sync ollama streaming
2023-12-16 21:23:21 -08:00
Krrish Dholakia
13d088b72e
feat(main.py): add support for image generation endpoint
2023-12-16 21:07:29 -08:00
Krrish Dholakia
7c2fad2d57
fix(azure.py): fix azure streaming logging
2023-12-16 18:06:08 -08:00
Krrish Dholakia
3923c389fd
build(Dockerfile): fixing build requirements
2023-12-16 17:52:30 -08:00
Krrish Dholakia
4e828ff541
fix(health.md): add background health check details to docs
2023-12-16 10:31:59 -08:00
ishaan-jaff
5ee6b87f2e
(fix) vertexai - gemini
2023-12-16 22:15:41 +05:30
ishaan-jaff
764f31c970
(feat) add async, async+stream for gemini
2023-12-16 18:58:12 +05:30
ishaan-jaff
efe8b75200
(fix) use litellm.vertex_vision_models
2023-12-16 18:39:40 +05:30
ishaan-jaff
774a725ccb
(feat) add vertex ai gemini-pro-vision
2023-12-16 18:31:03 +05:30
ishaan-jaff
20b5505476
(feat) show POST request for HF embeddings
2023-12-16 13:09:49 +05:30
ishaan-jaff
287633887e
(feat) add ollama/llava
2023-12-16 10:35:27 +05:30
Krrish Dholakia
add153d110
fix(huggingface_restapi.py): add support for additional hf embedding formats
2023-12-15 21:02:41 -08:00
Krrish Dholakia
4791dda66f
feat(proxy_server.py): enable infinite retries on rate limited requests
2023-12-15 20:03:41 -08:00
Krrish Dholakia
e5268fa6bc
fix(router.py): support openai-compatible endpoints
2023-12-15 14:47:54 -08:00
Krrish Dholakia
edb88e31e4
fix(together_ai.py): return empty tgai responses
2023-12-15 10:46:35 -08:00
Krrish Dholakia
a09a6f24a4
fix(together_ai.py): additional logging for together ai encoding prompt
2023-12-15 10:39:23 -08:00
Krrish Dholakia
cab870f73a
fix(ollama.py): fix ollama async streaming for /completions calls
2023-12-15 09:28:32 -08:00
ishaan-jaff
85a3c67574
(feat) - acompletion, correct exception mapping
2023-12-15 08:28:12 +05:30
Krrish Dholakia
804d58eb20
bump: version 1.14.4 → 1.14.5.dev1
2023-12-14 15:23:52 -08:00
Krrish Dholakia
1608dd7e0b
fix(main.py): support async streaming for text completions endpoint
2023-12-14 13:56:32 -08:00
Krish Dholakia
a6e78497b5
Merge pull request #1122 from emsi/main
...
Fix #1119, no content when streaming.
2023-12-14 10:01:00 -08:00
Krrish Dholakia
e678009695
fix(vertex_ai.py): add exception mapping for acompletion calls
2023-12-13 16:35:50 -08:00
Krrish Dholakia
7b8851cce5
fix(ollama.py): fix async completion calls for ollama
2023-12-13 13:10:25 -08:00
Mariusz Woloszyn
1feb6317f6
Fix #1119, no content when streaming.
2023-12-13 21:42:35 +01:00
Krrish Dholakia
75bcb37cb2
fix(factory.py): fix tgai rendering template
2023-12-13 12:27:31 -08:00
Krrish Dholakia
69c29f8f86
fix(vertex_ai.py): add support for real async streaming + completion calls
2023-12-13 11:53:55 -08:00
Krrish Dholakia
07015843ac
fix(vertex_ai.py): support optional params + enable async calls for gemini
2023-12-13 11:01:23 -08:00
Krrish Dholakia
ef7a6e3ae1
feat(vertex_ai.py): adds support for gemini-pro on vertex ai
2023-12-13 10:26:30 -08:00
ishaan-jaff
86e626edab
(feat) pass vertex_ai/ as custom_llm_provider
2023-12-13 19:02:24 +03:00
Krrish Dholakia
a64bd2ca1e
fix(sagemaker.py): filter out templated prompt if in model response
2023-12-13 07:43:33 -08:00
Krrish Dholakia
82d28a8825
fix(factory.py): safely fail prompt template get requests for together ai
2023-12-12 17:28:22 -08:00
Krrish Dholakia
8e7116635f
fix(ollama.py): add support for async streaming
2023-12-12 16:44:20 -08:00
Krrish Dholakia
8b07a6c046
fix(main.py): pass user_id + encoding_format for logging + to openai/azure
2023-12-12 15:46:44 -08:00
ishaan-jaff
a251a52717
(chore) remove junk tkinter import
2023-12-12 13:54:50 -08:00
ishaan-jaff
99b48eff17
(fix) tkinter import
2023-12-12 12:18:25 -08:00
Krrish Dholakia
9cf5ab468f
fix(router.py): deepcopy initial model list, don't mutate it
2023-12-12 09:54:06 -08:00
Krrish Dholakia
2c1c75fdf0
fix(ollama.py): enable parallel ollama completion calls
2023-12-11 23:18:37 -08:00