From 99b92c4b23e61cf2fe65d937bd9e091188e6500e Mon Sep 17 00:00:00 2001
From: Krish Dholakia
Date: Mon, 26 Feb 2024 15:22:50 -0800
Subject: [PATCH] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 6a4c738c5..deef7ddf2 100644
--- a/README.md
+++ b/README.md
@@ -68,6 +68,8 @@ response = completion(model="command-nightly", messages=messages)
 print(response)
 ```
 
+To call any model supported by a provider, just use `model=<provider_name>/<model_name>`. This way, LiteLLM knows which provider to route the request to. There may be provider-specific details here (e.g. for Vertex AI, any unmapped model is assumed to be a Model Garden endpoint), so refer to the [provider docs for more information](https://docs.litellm.ai/docs/providers).
+
 ## Async ([Docs](https://docs.litellm.ai/docs/completion/stream#async-completion))
 
 ```python
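
The `provider/model` convention the patch documents works like a namespace: everything before the first `/` names the provider, and the remainder is passed through as the model name (which may itself contain slashes, e.g. Hugging Face repo IDs). A minimal sketch of that split, using a hypothetical helper rather than LiteLLM's actual routing internals:

```python
# Hypothetical illustration of the `provider/model` naming convention;
# this is NOT LiteLLM's internal routing code, just the string contract.
def split_model_string(model: str) -> tuple[str, str]:
    """Split a provider-prefixed model string into (provider, model_name)."""
    provider, sep, model_name = model.partition("/")
    if not sep:
        # No prefix: leave the provider empty so the library can infer it.
        return "", model
    return provider, model_name

# Only the first "/" separates provider from model name.
print(split_model_string("huggingface/WizardLM/WizardCoder-Python-34B-V1.0"))
print(split_model_string("command-nightly"))
```

In practice you would simply pass the prefixed string to `completion(model="<provider_name>/<model_name>", messages=messages)` and let LiteLLM do this routing for you.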