diff --git a/docs/my-website/docs/providers/openai_compatible.md b/docs/my-website/docs/providers/openai_compatible.md
index ff0e85709..f02149024 100644
--- a/docs/my-website/docs/providers/openai_compatible.md
+++ b/docs/my-website/docs/providers/openai_compatible.md
@@ -115,3 +115,18 @@ Here's how to call an OpenAI-Compatible Endpoint with the LiteLLM Proxy Server
+
+
+### Advanced - Disable System Messages
+
+Some vLLM models (e.g. Gemma) don't support system messages. To have LiteLLM map those requests to `user` messages instead, set the `supports_system_message` flag to `False`.
+
+```yaml
+model_list:
+- model_name: my-custom-model
+  litellm_params:
+    model: openai/google/gemma
+    api_base: http://my-custom-base
+    api_key: ""
+    supports_system_message: False # 👈 KEY CHANGE
+```
\ No newline at end of file
diff --git a/docs/my-website/docs/proxy/configs.md b/docs/my-website/docs/proxy/configs.md
index 9381a14a4..80235586c 100644
--- a/docs/my-website/docs/proxy/configs.md
+++ b/docs/my-website/docs/proxy/configs.md
@@ -427,7 +427,7 @@ model_list:
 ```shell
 $ litellm --config /path/to/config.yaml
-``` 
+```
 
 ## Setting Embedding Models