diff --git a/README.md b/README.md
index deef7ddf2..a7ce7b5ba 100644
--- a/README.md
+++ b/README.md
@@ -68,7 +68,7 @@
 response = completion(model="command-nightly", messages=messages)
 print(response)
 ```
-To call any model supported by a provider, just use `model=<provider_name>/<model_name>`. This way, LiteLLM will know which provider to route it to. There might be provider-specific details here (e.g. for vertex ai, any unmapped model is assumed to be a model garden endpoint). So refer to [provider docs for more information](https://docs.litellm.ai/docs/providers)
+Call any model supported by a provider, with `model=<provider_name>/<model_name>`. There might be provider-specific details here, so refer to [provider docs for more information](https://docs.litellm.ai/docs/providers)

 ## Async ([Docs](https://docs.litellm.ai/docs/completion/stream#async-completion))
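
The README text being edited describes LiteLLM's provider-prefixed model strings, where the part before the `/` tells the router which provider to use. A minimal sketch of that prefix convention, assuming a hypothetical `split_provider_model` helper (illustrative only, not LiteLLM's actual routing code):

```python
def split_provider_model(model: str):
    """Split 'provider/model' into (provider, model); no prefix -> (None, model)."""
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    # Bare model names have no explicit provider prefix
    return None, model

# A prefixed model string carries its provider explicitly:
print(split_provider_model("vertex_ai/gemini-1.5-pro"))  # ('vertex_ai', 'gemini-1.5-pro')
# An unprefixed one leaves provider resolution to the library:
print(split_provider_model("command-nightly"))           # (None, 'command-nightly')
```

In actual use, the prefixed string is passed directly, e.g. `completion(model="vertex_ai/gemini-1.5-pro", messages=messages)`, mirroring the `completion(model="command-nightly", ...)` call shown in the diff's context lines.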