# litellm-proxy

A local, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
```

```shell
$ litellm --model ollama/codellama

# INFO: Ollama running on http://0.0.0.0:8000
```

## replace openai base

```python
import openai

# point the openai SDK at the local litellm proxy
openai.api_base = "http://0.0.0.0:8000"

print(openai.ChatCompletion.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))
```

[**See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.**](https://docs.litellm.ai/docs/simple_proxy)
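The snippet above targets the legacy `openai<1.0` Python SDK (`openai.api_base` and `openai.ChatCompletion` were removed in v1.0). A minimal sketch of the same call with the `openai>=1.0` client, assuming the local proxy accepts any placeholder API key:

```python
from openai import OpenAI

# base_url points at the litellm proxy; the key is a placeholder --
# assumption: the local proxy does not validate it
client = OpenAI(base_url="http://0.0.0.0:8000", api_key="sk-anything")

response = client.chat.completions.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
)
print(response)
```

Swapping `base_url` is the only change needed; application code that already uses the OpenAI client should keep working against the proxy unchanged.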