ishaan-jaff | 4b717810ec | (feat) endpoint router | 2024-03-09 16:10:04 -08:00
Ishaan Jaff | e10991e02b | Merge pull request #2420 from debdutdeb/redis-cache (feat(helm-chart): redis as cache managed by chart) | 2024-03-09 13:49:15 -08:00
Ishaan Jaff | 494d0f824a | Merge pull request #2423 from BerriAI/litellm_docs_on_deploy_litellm ([Docs] Deploying litellm - litellm, litellm-database, litellm with redis) | 2024-03-09 13:46:00 -08:00
ishaan-jaff | a004240109 | (docs) deploy litellm | 2024-03-09 13:45:07 -08:00
ishaan-jaff | ba271e3e87 | (docs) deploying litell | 2024-03-09 13:41:20 -08:00
Debdut Chakraborty | 5777aeb36e | chore: set simpler redis architecture as default | 2024-03-10 03:09:00 +05:30
Debdut Chakraborty | eea803cae4 | chore: better handling redis deployment architecture and connection | 2024-03-10 03:06:17 +05:30
Debdut Chakraborty | b3646f6644 | fix: redis context | 2024-03-10 02:37:10 +05:30
Krrish Dholakia | 5692481515 | bump: version 1.30.5 → 1.30.6 | 2024-03-09 13:06:37 -08:00
Krrish Dholakia | 4b60bea975 | fix(proxy/utils.py): add more logging for prisma client get_data error | 2024-03-09 13:06:30 -08:00
ishaan-jaff | 7ae7e95da1 | (docs) litellm getting started clarify sdk vs proxy | 2024-03-09 13:04:52 -08:00
Debdut Chakraborty | 7a1b3ca30d | feat(helm-chart): redis as cache managed by chart | 2024-03-10 01:53:28 +05:30
Krrish Dholakia | d8a6b8216d | docs(input.md): add docs on 'get_supported_openai_params' | 2024-03-08 23:54:13 -08:00
Krrish Dholakia | b0fa25dfbd | docs(audio_transcription.md): add docs on audio transcription | 2024-03-08 23:51:24 -08:00
Krrish Dholakia | 775997b283 | fix(openai.py): fix async audio transcription | 2024-03-08 23:33:54 -08:00
Krrish Dholakia | fef47618a7 | bump: version 1.30.4 → 1.30.5 | 2024-03-08 23:31:18 -08:00
Krish Dholakia | 15446ee6aa | Merge pull request #2405 from BerriAI/litellm_load_balancing_transcription_endpoints (load balancing transcription endpoints) | 2024-03-08 23:08:57 -08:00
Krish Dholakia | caa99f43bf | Merge branch 'main' into litellm_load_balancing_transcription_endpoints | 2024-03-08 23:08:47 -08:00
Krish Dholakia | e245b1c98a | Merge pull request #2401 from BerriAI/litellm_transcription_endpoints (feat(main.py): support openai transcription endpoints) | 2024-03-08 23:07:48 -08:00
Krrish Dholakia | c15c05e460 | build(config.yml): fix config.yml | 2024-03-08 23:07:26 -08:00
Krrish Dholakia | fd52b502a6 | fix(utils.py): *new* get_supported_openai_params() function (returns the supported openai params for a given model + provider) | 2024-03-08 23:06:40 -08:00
Krrish Dholakia | aeb3cbc9b6 | fix(utils.py): add additional providers to get_supported_openai_params | 2024-03-08 23:06:40 -08:00
Krrish Dholakia | daa371ade9 | fix(utils.py): add support for anthropic params in get_supported_openai_params | 2024-03-08 23:06:40 -08:00
Krrish Dholakia | 7ff8fa09d6 | test(test_whisper.py): hardcode api base | 2024-03-08 22:51:17 -08:00
Krrish Dholakia | c0c76707a1 | test(test_whisper.py): cleanup test | 2024-03-08 22:44:22 -08:00
Krrish Dholakia | 0432c85bf7 | test(test_whisper.py): add debugging for circle ci error | 2024-03-08 22:43:07 -08:00
Krrish Dholakia | 5f15047a03 | build(config.yml): test specific whisper endpoint | 2024-03-08 22:27:36 -08:00
Krrish Dholakia | fac01f8481 | fix(azure.py): add pre call logging for transcription calls | 2024-03-08 22:23:21 -08:00
ishaan-jaff | eb53136448 | (ci/cd) run again | 2024-03-08 22:05:39 -08:00
Krish Dholakia | 2e66267128 | Updated config.yml | 2024-03-08 21:47:11 -08:00
Ishaan Jaff | 22e9d1073f | Merge pull request #2413 from H0llyW00dzZ/web-docs (Fix Docs Formatting in Website) | 2024-03-08 21:39:28 -08:00
H0llyW00dzZ | 33ba57a1b5 | Fix Docs Formatting in Website (docs(deploy.md): move tip about versioning inside the tab item) | 2024-03-09 12:37:18 +07:00
ishaan-jaff | 7891830136 | (bump) 1.30.4 | 2024-03-08 21:20:02 -08:00
Ishaan Jaff | 8036b48f14 | Merge pull request #2408 from BerriAI/litellm_no_store_reqs ([FEAT-liteLLM Proxy] Incognito Requests - Don't log anything) | 2024-03-08 21:11:43 -08:00
Ishaan Jaff | 9b94b0e591 | Merge pull request #2411 from H0llyW00dzZ/docs (Update Docs for Kubernetes) | 2024-03-08 20:57:46 -08:00
H0llyW00dzZ | 7004940ce4 | Update Docs for Kubernetes (docs(deploy.md): add tip about using versioning or SHA digests instead of latest tag) | 2024-03-09 11:55:31 +07:00
ishaan-jaff | 5850ff470f | (feat) disable/enable logging | 2024-03-08 20:42:12 -08:00
Ishaan Jaff | 54114c7e9d | Merge pull request #2409 from GuillermoBlasco/patch-2 (Add quickstart deploy with k8s) | 2024-03-08 20:37:21 -08:00
Ishaan Jaff | 0fc7b273e5 | Merge pull request #2403 from BerriAI/litellm_api_version_client_side ([FEAT] AzureOpenAI - Pass `api_version` to litellm per request) | 2024-03-08 20:33:45 -08:00
Krrish Dholakia | 0fb7afe820 | feat(proxy_server.py): working /audio/transcription endpoint | 2024-03-08 18:20:27 -08:00
Guillermo | bb427b4659 | Update deploy.md | 2024-03-09 02:30:17 +01:00
Guillermo | 37b4dde7fd | Add quickstart deploy with k8s | 2024-03-09 02:24:07 +01:00
ishaan-jaff | 0a538fe679 | (feat) use no-log to disable per request logging | 2024-03-08 16:56:20 -08:00
ishaan-jaff | ddd231a8c2 | (feat) use no-log as a litellm param | 2024-03-08 16:46:38 -08:00
ishaan-jaff | d6dc28f0ed | (fix) proxy setting success callbacks | 2024-03-08 16:27:53 -08:00
ishaan-jaff | 4ff68c8562 | (docs) no log requests | 2024-03-08 16:26:25 -08:00
ishaan-jaff | 986a526790 | (feat) disable logging per request | 2024-03-08 16:25:54 -08:00
Ishaan Jaff | ad4bfee3ee | Merge pull request #2406 from BerriAI/litellm_locust_load_test ([Feat] LiteLLM - use cpu_count for default num_workers, run locust load test) | 2024-03-08 15:41:40 -08:00
ishaan-jaff | 9ed51e791b | (fix) default num workers | 2024-03-08 15:24:08 -08:00
ishaan-jaff | 2d71f54afb | (docs) load test litellm | 2024-03-08 15:18:06 -08:00