mirror of https://github.com/BerriAI/litellm.git
synced 2025-04-27 11:43:54 +00:00
docs(load_test.md): formatting
This commit is contained in:
parent 5b56a0856e
commit 0bae8911f8

1 changed file with 1 addition and 1 deletion
@@ -220,7 +220,7 @@ Test if your defined tpm/rpm limits are respected across multiple instances.

 The quickest way to do this is by testing the [proxy](./proxy/quick_start.md). The proxy uses the [router](./routing.md) under the hood, so if you're using either of them, this test should work for you.

 In our test:

-- Max RPM per deployment is 100 requests per minute
+- Max RPM per deployment is = 100 requests per minute
 - Max Throughput / min on proxy = 200 requests per minute (2 deployments)
 - Load we'll send to proxy = 600 requests per minute
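The test setup in the diff implies a simple capacity calculation: with a per-deployment RPM cap and two deployments, the proxy's maximum throughput is the product of the two, and any load beyond that should be rate limited. A minimal sketch of that arithmetic, assuming limits are enforced per deployment and the router spreads load evenly (the function names here are illustrative, not part of litellm's API):

```python
# Capacity math implied by the load test described above.
# Assumptions: per-deployment RPM limits are enforced independently,
# and the router distributes requests evenly across deployments.

def max_throughput_rpm(rpm_per_deployment: int, num_deployments: int) -> int:
    """Total requests/minute the proxy can serve before hitting limits."""
    return rpm_per_deployment * num_deployments

def expected_rate_limited_rpm(load_rpm: int, rpm_per_deployment: int,
                              num_deployments: int) -> int:
    """Requests/minute expected to be rejected at steady state."""
    capacity = max_throughput_rpm(rpm_per_deployment, num_deployments)
    return max(0, load_rpm - capacity)

# The scenario from the diff: 100 RPM/deployment, 2 deployments, 600 RPM load.
print(max_throughput_rpm(100, 2))            # 200
print(expected_rate_limited_rpm(600, 100, 2))  # 400
```

Under these numbers, a well-behaved multi-instance setup should serve roughly 200 requests per minute and reject the remaining 400, which is what the load test is checking.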