litellm-proxy
A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.
usage
$ pip install litellm
$ litellm --model ollama/codellama
#INFO: Ollama running on http://0.0.0.0:8000
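If port 8000 is already in use, the CLI can bind elsewhere; a minimal sketch, assuming the standard --port flag:

$ litellm --model ollama/codellama --port 8001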
replace openai base
import openai

# point the client at the local proxy; any placeholder api key works
client = openai.OpenAI(base_url="http://0.0.0.0:8000", api_key="anything")
print(client.chat.completions.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))
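Because the proxy speaks the OpenAI wire format, any HTTP client can call it directly. A minimal sketch, assuming the proxy exposes the standard /chat/completions route:

$ curl http://0.0.0.0:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "test", "messages": [{"role": "user", "content": "Hey!"}]}'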
See how to call Hugging Face, Bedrock, TogetherAI, Anthropic, and more.
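Switching providers only changes the string passed to --model. A hedged sketch using litellm's provider prefixes (the specific model names here are illustrative; check the litellm docs for current ones):

$ litellm --model huggingface/bigcode/starcoder
$ litellm --model bedrock/anthropic.claude-v2
$ litellm --model together_ai/togethercomputer/llama-2-70b-chat
$ litellm --model claude-instant-1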