# LiteLLM Proxy Performance
## Throughput - 30% Increase

LiteLLM proxy with a load balancer gives a 30% increase in throughput compared to calling the raw OpenAI API directly.

<Image img={require('../../img/throughput.png')} />
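A throughput comparison like the one above can be sketched by firing concurrent requests and counting completions per second. The snippet below is a minimal, hedged sketch: `fake_completion` is a stand-in stub, not the LiteLLM client, and the request count and concurrency level are arbitrary assumptions; in a real benchmark you would replace the stub with a chat-completion call against the proxy or the raw API.

```python
import asyncio
import time

async def fake_completion():
    # Stand-in for a real chat-completion call (hypothetical stub);
    # swap in a call to the proxy or the raw API to benchmark them.
    await asyncio.sleep(0.01)

async def measure_throughput(n_requests: int, concurrency: int) -> float:
    """Run n_requests with bounded concurrency; return requests/sec."""
    sem = asyncio.Semaphore(concurrency)

    async def one():
        async with sem:
            await fake_completion()

    start = time.perf_counter()
    await asyncio.gather(*(one() for _ in range(n_requests)))
    return n_requests / (time.perf_counter() - start)

rps = asyncio.run(measure_throughput(n_requests=200, concurrency=50))
print(f"{rps:.0f} requests/sec")
```

Running the same harness twice, once pointed at the proxy and once at the raw endpoint, gives the two throughput figures being compared.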
## Latency Added - 0.00325 seconds

LiteLLM proxy adds 0.00325 seconds of latency compared to using the raw OpenAI API.

<Image img={require('../../img/latency.png')} />
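The added-latency figure above is the difference between the per-request latency of the proxied path and the direct path. A minimal sketch of that measurement, assuming hypothetical `raw_call` / `proxied_call` stubs in place of real API calls and using the median to dampen outliers:

```python
import statistics
import time

def median_latency(fn, n: int = 20) -> float:
    # Time n calls of fn and return the median latency in seconds.
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical stand-ins for the two code paths being compared; in a
# real test these would call the raw OpenAI API and the LiteLLM proxy.
def raw_call():
    time.sleep(0.001)

def proxied_call():
    time.sleep(0.001)

added = median_latency(proxied_call) - median_latency(raw_call)
print(f"added latency: {added:.5f} s")
```

With identical stubs the difference is near zero; pointing the two functions at the real endpoints yields the added-latency number reported here.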