diff --git a/docs/advanced.md b/docs/advanced.md
new file mode 100644
index 0000000000..403e607553
--- /dev/null
+++ b/docs/advanced.md
@@ -0,0 +1,23 @@
+# Advanced - liteLLM client
+
+## Use the liteLLM client to send output data to Posthog, Sentry, etc.
+liteLLM lets you create a `completion_client` and an `embedding_client` that send successful / errored LLM API call data to Posthog, Sentry, Slack, etc.
+
+### Quick Start
+```python
+from main import litellm_client
+import os
+
+# set env variables
+os.environ['SENTRY_API_URL'] = ""
+os.environ['POSTHOG_API_KEY'] = "api-key"
+os.environ['POSTHOG_API_URL'] = "api-url"
+
+# init liteLLM client
+client = litellm_client(success_callback=["posthog"], failure_callback=["sentry", "posthog"])
+completion = client.completion
+embedding = client.embedding
+
+messages = [{"content": "Hello, how are you?", "role": "user"}]
+response = completion(model="gpt-3.5-turbo", messages=messages)
+```
+
+
diff --git a/docs/client_integrations.md b/docs/client_integrations.md
new file mode 100644
index 0000000000..83de02a412
--- /dev/null
+++ b/docs/client_integrations.md
@@ -0,0 +1,11 @@
+# Data Logging Integrations
+
+| Integration | Required OS Variables | How to Use with litellm Client |
+|-----------------|--------------------------------------------|-------------------------------------------|
+| Sentry | `SENTRY_API_URL` | `client = litellm_client(success_callback=["sentry"], failure_callback=["sentry"])` |
+| Posthog | `POSTHOG_API_KEY`, `POSTHOG_API_URL` | `client = litellm_client(success_callback=["posthog"], failure_callback=["posthog"])` |
+| Slack | `SLACK_API_TOKEN`, `SLACK_API_SECRET`, `SLACK_API_CHANNEL` | `client = litellm_client(success_callback=["slack"], failure_callback=["slack"])` |
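As a sketch of wiring up one of the rows above, the Slack integration needs its three environment variables set before the client is created. The values below are hypothetical placeholders, and the client line is commented out so the snippet runs without litellm installed:

```python
import os

# Hypothetical placeholder credentials -- substitute your real Slack values.
os.environ['SLACK_API_TOKEN'] = "xoxb-your-bot-token"
os.environ['SLACK_API_SECRET'] = "your-signing-secret"
os.environ['SLACK_API_CHANNEL'] = "#llm-logs"

# With the variables in place, create the client exactly as in the table:
# client = litellm_client(success_callback=["slack"], failure_callback=["slack"])
```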
+
+
+
+
diff --git a/docs/contact.md b/docs/contact.md
new file mode 100644
index 0000000000..d5309cd737
--- /dev/null
+++ b/docs/contact.md
@@ -0,0 +1,6 @@
+# Contact Us
+
+[Join us on Discord](https://discord.gg/wuPM9dRgDw)
+
+* [Meet with us 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
+* Contact us at ishaan@berri.ai / krrish@berri.ai
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 0000000000..b562725c25
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,39 @@
+# *🚅 litellm*
+a light package (~100 lines) to simplify calling the OpenAI, Azure, Cohere, and Anthropic APIs
+
+###### litellm manages:
+* Calling all LLM APIs using the OpenAI format - `completion(model, messages)`
+* Consistent output across all LLM APIs: the text response is always available at `['choices'][0]['message']['content']`
+* **[Advanced]** Automatically logging your output to Sentry, Posthog, Slack [see liteLLM Client](/docs/advanced.md)
+
+## Quick Start
+Go directly to code: [Getting Started Notebook](https://colab.research.google.com/drive/1gR3pY-JzDZahzpVdbGBtrNGDBmzUNJaJ?usp=sharing)
+### Installation
+```
+pip install litellm
+```
+
+### Usage
+```python
+import os
+
+from litellm import completion
+
+## set ENV variables
+os.environ["OPENAI_API_KEY"] = "openai key"
+os.environ["COHERE_API_KEY"] = "cohere key"
+
+messages = [{ "content": "Hello, how are you?","role": "user"}]
+
+# openai call
+response = completion(model="gpt-3.5-turbo", messages=messages)
+
+# cohere call
+response = completion("command-nightly", messages)
+```
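Because litellm normalizes every provider's response to the OpenAI format, the text lives at the same path regardless of model. A minimal sketch with a hand-built response dict (so it runs without an API key):

```python
# Hand-built stand-in for a litellm completion response (OpenAI format).
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "I'm doing well, thank you!"}}
    ]
}

# The same lookup works for OpenAI, Azure, and Cohere calls alike.
text = response["choices"][0]["message"]["content"]
print(text)
```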
+Need help or support? See [troubleshooting](/docs/troubleshoot.md).
+
+## Why did we build liteLLM
+- **Need for simplicity**: Our code was becoming increasingly complicated, managing and translating calls across Azure, OpenAI, and Cohere.
+
+## Support
+* [Meet with us 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
+* Contact us at ishaan@berri.ai / krrish@berri.ai
diff --git a/docs/supported.md b/docs/supported.md
new file mode 100644
index 0000000000..e6107d0ac5
--- /dev/null
+++ b/docs/supported.md
@@ -0,0 +1,41 @@
+## Generation/Completion/Chat Completion Models
+
+### OpenAI Chat Completion Models
+
+| Model Name | Function Call | Required OS Variables |
+|------------------|----------------------------------------|--------------------------------------|
+| gpt-3.5-turbo | `completion('gpt-3.5-turbo', messages)` | `os.environ['OPENAI_API_KEY']` |
+| gpt-4 | `completion('gpt-4', messages)` | `os.environ['OPENAI_API_KEY']` |
+
+### Azure OpenAI Chat Completion Models
+
+| Model Name | Function Call | Required OS Variables |
+|------------------|-----------------------------------------|-------------------------------------------|
+| gpt-3.5-turbo    | `completion('gpt-3.5-turbo', messages, azure=True)` | `os.environ['AZURE_API_KEY']`, `os.environ['AZURE_API_BASE']`, `os.environ['AZURE_API_VERSION']` |
+| gpt-4            | `completion('gpt-4', messages, azure=True)` | `os.environ['AZURE_API_KEY']`, `os.environ['AZURE_API_BASE']`, `os.environ['AZURE_API_VERSION']` |
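For example, an Azure call requires all three variables from the table above to be set first. The values here are hypothetical placeholders, and the completion call is commented out so the snippet runs without credentials:

```python
import os

# Hypothetical placeholder values -- use your Azure OpenAI deployment's details.
os.environ['AZURE_API_KEY'] = "your-azure-api-key"
os.environ['AZURE_API_BASE'] = "https://your-resource.openai.azure.com"
os.environ['AZURE_API_VERSION'] = "2023-05-15"

# With the variables set, the call matches the table row:
# from litellm import completion
# response = completion('gpt-3.5-turbo', messages, azure=True)
```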
+
+### OpenAI Text Completion Models
+
+| Model Name | Function Call | Required OS Variables |
+|------------------|--------------------------------------------|--------------------------------------|
+| text-davinci-003 | `completion('text-davinci-003', messages)` | `os.environ['OPENAI_API_KEY']` |
+
+### Cohere Models
+
+| Model Name | Function Call | Required OS Variables |
+|------------------|--------------------------------------------|--------------------------------------|
+| command-nightly | `completion('command-nightly', messages)` | `os.environ['COHERE_API_KEY']` |
+
+### OpenRouter Models
+
+| Model Name | Function Call | Required OS Variables |
+|----------------------------------|----------------------------------------------------------------------|---------------------------------------------------------------------------|
+| google/palm-2-codechat-bison | `completion('google/palm-2-codechat-bison', messages)` | `os.environ['OPENROUTER_API_KEY']`, `os.environ['OR_SITE_URL']`, `os.environ['OR_APP_NAME']` |
+| google/palm-2-chat-bison | `completion('google/palm-2-chat-bison', messages)` | `os.environ['OPENROUTER_API_KEY']`, `os.environ['OR_SITE_URL']`, `os.environ['OR_APP_NAME']` |
+| openai/gpt-3.5-turbo | `completion('openai/gpt-3.5-turbo', messages)` | `os.environ['OPENROUTER_API_KEY']`, `os.environ['OR_SITE_URL']`, `os.environ['OR_APP_NAME']` |
+| openai/gpt-3.5-turbo-16k | `completion('openai/gpt-3.5-turbo-16k', messages)` | `os.environ['OPENROUTER_API_KEY']`, `os.environ['OR_SITE_URL']`, `os.environ['OR_APP_NAME']` |
+| openai/gpt-4-32k | `completion('openai/gpt-4-32k', messages)` | `os.environ['OPENROUTER_API_KEY']`, `os.environ['OR_SITE_URL']`, `os.environ['OR_APP_NAME']` |
+| anthropic/claude-2 | `completion('anthropic/claude-2', messages)` | `os.environ['OPENROUTER_API_KEY']`, `os.environ['OR_SITE_URL']`, `os.environ['OR_APP_NAME']` |
+| anthropic/claude-instant-v1 | `completion('anthropic/claude-instant-v1', messages)` | `os.environ['OPENROUTER_API_KEY']`, `os.environ['OR_SITE_URL']`, `os.environ['OR_APP_NAME']` |
+| meta-llama/llama-2-13b-chat | `completion('meta-llama/llama-2-13b-chat', messages)` | `os.environ['OPENROUTER_API_KEY']`, `os.environ['OR_SITE_URL']`, `os.environ['OR_APP_NAME']` |
+| meta-llama/llama-2-70b-chat | `completion('meta-llama/llama-2-70b-chat', messages)` | `os.environ['OPENROUTER_API_KEY']`, `os.environ['OR_SITE_URL']`, `os.environ['OR_APP_NAME']` |
diff --git a/docs/supported_embedding.md b/docs/supported_embedding.md
new file mode 100644
index 0000000000..d509adc58e
--- /dev/null
+++ b/docs/supported_embedding.md
@@ -0,0 +1,5 @@
+## Embedding Models
+
+| Model Name | Function Call | Required OS Variables |
+|----------------------|---------------------------------------------|--------------------------------------|
+| text-embedding-ada-002 | `embedding('text-embedding-ada-002', input)` | `os.environ['OPENAI_API_KEY']` |
\ No newline at end of file
diff --git a/docs/troubleshoot.md b/docs/troubleshoot.md
new file mode 100644
index 0000000000..3dc4a26624
--- /dev/null
+++ b/docs/troubleshoot.md
@@ -0,0 +1,9 @@
+## Stable Version
+
+If you're running into problems with installation or usage, use the stable version of litellm:
+
+```
+pip install litellm==0.1.1
+```
+
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 0000000000..8d92fff43e
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,15 @@
+site_name: liteLLM
+nav:
+ - ⚡ Getting Started:
+ - Installation & Quick Start: index.md
+ - 🤖 Supported LLM APIs:
+ - Supported Completion & Chat APIs: supported.md
+ - Supported Embedding APIs: supported_embedding.md
+ - 💾 liteLLM Client - Logging Output:
+ - Quick Start: advanced.md
+ - Output Integrations: client_integrations.md
+ - 💡 Support:
+ - Troubleshooting & Help: troubleshoot.md
+ - Contact Us: contact.md
+
+theme: readthedocs