# LiteLLM Proxy Performance

### Throughput - 30% Increase

LiteLLM proxy + Load Balancer gives a **30% increase** in throughput compared to calling the raw OpenAI API directly.

<Image img={require('../../img/throughput.png')} />
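To get a rough sense of how a number like this is measured, the sketch below fires a batch of concurrent chat-completion requests at a locally running proxy and reports requests per second. It is a minimal illustration, not the benchmark behind the chart: the proxy address (`http://0.0.0.0:4000`), the key (`sk-1234`), and the model alias (`gpt-3.5-turbo`) are assumptions, so substitute whatever your own proxy config exposes.

```python
# Hedged sketch: measure requests/second against a LiteLLM proxy.
# The proxy address, key, and model alias below are assumptions --
# adjust them to match your own proxy configuration.
import asyncio
import time

from openai import AsyncOpenAI  # the proxy speaks the OpenAI API format

client = AsyncOpenAI(
    base_url="http://0.0.0.0:4000",  # assumed local proxy address
    api_key="sk-1234",               # assumed proxy key
)

async def one_request() -> None:
    await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )

async def main(n_requests: int = 100) -> None:
    start = time.perf_counter()
    await asyncio.gather(*(one_request() for _ in range(n_requests)))
    elapsed = time.perf_counter() - start
    print(f"{n_requests} requests in {elapsed:.2f}s "
          f"-> {n_requests / elapsed:.1f} req/s")

if __name__ == "__main__":
    asyncio.run(main())
```

Running the same loop against `https://api.openai.com/v1` with an OpenAI key gives the raw-API baseline that a comparison like the one above is made against.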
### Latency Added - 0.00325 seconds

LiteLLM proxy adds **0.00325 seconds** of latency compared to using the raw OpenAI API directly.

<Image img={require('../../img/latency.png')} />
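A number like this comes from timing the same request both directly against OpenAI and through the proxy, then comparing the averages. The sketch below illustrates that idea under stated assumptions; the endpoint URL, keys, request count, and model name are placeholders, not the setup used for the chart.

```python
# Hedged sketch: estimate the latency the proxy adds on top of the raw API.
# All URLs, keys, and the model name are placeholders -- adjust to your setup.
import time

from openai import OpenAI

N = 20  # number of timed requests per target

raw = OpenAI(api_key="sk-your-openai-key")                            # direct to OpenAI
proxied = OpenAI(base_url="http://0.0.0.0:4000", api_key="sk-1234")   # via LiteLLM proxy

def mean_latency(client: OpenAI) -> float:
    """Average wall-clock time for N identical chat completions."""
    timings = []
    for _ in range(N):
        start = time.perf_counter()
        client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "ping"}],
        )
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

added = mean_latency(proxied) - mean_latency(raw)
print(f"Latency added by the proxy: {added:.5f} s per request")
```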