diff --git a/README.md b/README.md
index caccfe5ad..223e632bc 100644
--- a/README.md
+++ b/README.md
@@ -91,9 +91,17 @@ $ litellm --model huggingface/bigcode/starcoder
 ### Step 2: Replace openai base
 ```python
-import openai
-client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")
-print(openai.chat.completions.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))
+import openai # openai v1.0.0+
+client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000") # set proxy to base_url
+# request sent to model set on litellm proxy, `litellm --model`
+response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
+    {
+        "role": "user",
+        "content": "this is a test request, write a short poem"
+    }
+])
+
+print(response)
 ```

 ## Logging Observability ([Docs](https://docs.litellm.ai/docs/observability/callbacks))
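
The updated snippet returns a standard OpenAI v1 `ChatCompletion` object rather than printing the raw call inline. As a minimal sketch (assuming the proxy from Step 1 is still running on `http://0.0.0.0:8000`), the generated text can be read from the first choice:

```python
import openai  # openai v1.0.0+

# Point the client at the local litellm proxy; the api_key value is ignored by the proxy.
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
)

# The proxy returns an OpenAI-compatible ChatCompletion; the completion text
# lives on the first choice's message.
print(response.choices[0].message.content)
```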