# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```
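Because the proxy speaks the OpenAI chat-completions wire format, you can also call it directly over HTTP. A minimal sketch, assuming the proxy from above is running locally on port 8000:

```shell
# Send an OpenAI-style chat completion request to the local proxy.
# The proxy forwards it to the model it was started with (ollama/codellama).
curl http://0.0.0.0:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ollama/codellama",
    "messages": [{"role": "user", "content": "Hey!"}]
  }'
```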

## replace openai base

```python
import openai

# Point the OpenAI client at the local proxy instead of api.openai.com.
# The proxy does not check the API key, so any placeholder value works.
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

response = client.chat.completions.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
)
print(response)
```

See how to call Hugging Face, Bedrock, TogetherAI, Anthropic, and more.