litellm-proxy
A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.
usage
$ pip install litellm
$ litellm --model ollama/codellama
#INFO: Ollama running on http://0.0.0.0:8000
replace the openai base url
import openai

openai.base_url = "http://0.0.0.0:8000"  # point the SDK at the local proxy
openai.api_key = "anything"              # the proxy does not validate the key

print(openai.chat.completions.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))
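Because the proxy speaks the OpenAI wire format, the SDK is optional: any HTTP client can POST the same request body. A minimal stdlib sketch, assuming the proxy started above is listening on port 8000 and exposes the standard /chat/completions route:

```python
import json
import urllib.request

# Assumed endpoint: the proxy serves the OpenAI-style
# /chat/completions route on the port printed at startup.
url = "http://0.0.0.0:8000/chat/completions"

# Same request body the OpenAI SDK would send.
payload = {
    "model": "test",
    "messages": [{"role": "user", "content": "Hey!"}],
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the proxy is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

This is the same call the SDK example makes, just spelled out by hand, which is handy for debugging or for clients in other languages.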
See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.