diff --git a/docs/my-website/docs/proxy/configs.md b/docs/my-website/docs/proxy/configs.md
index affe666ca..3a9def5b5 100644
--- a/docs/my-website/docs/proxy/configs.md
+++ b/docs/my-website/docs/proxy/configs.md
@@ -392,6 +392,23 @@ model_list:
+
+
+
+See the [Xinference provider docs](https://docs.litellm.ai/docs/providers/xinference) for details.
+
+**Note: add the `xinference/` prefix to `litellm_params: model` so litellm knows to route to Xinference**
+
+```yaml
+model_list:
+  - model_name: xinference-model # model group
+    litellm_params:
+      model: xinference/bge-base-en # model name for litellm.embedding(model="xinference/bge-base-en")
+      api_base: http://0.0.0.0:9997/v1
+```
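+
+Once the proxy is started with this config, the model group can be called through the proxy's OpenAI-compatible `/embeddings` route. A minimal sketch, assuming the proxy is running on its default port 4000 and `sk-1234` is a placeholder key:
+
+```shell
+# Request embeddings via the proxy; "xinference-model" is the model_name from the config above
+curl http://0.0.0.0:4000/embeddings \
+  -H 'Authorization: Bearer sk-1234' \
+  -H 'Content-Type: application/json' \
+  -d '{"model": "xinference-model", "input": ["hello world"]}'
+```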
+
+
+
Use this for calling /embedding endpoints on OpenAI Compatible Servers.