diff --git a/docs/my-website/docs/completion/input.md b/docs/my-website/docs/completion/input.md
index 86546bbba..2894997ee 100644
--- a/docs/my-website/docs/completion/input.md
+++ b/docs/my-website/docs/completion/input.md
@@ -1,4 +1,4 @@
-# Completion Function - completion()
+# Input Format - completion()
The Input params are **exactly the same** as the
OpenAI Create chat completion, and let you call **Azure OpenAI, Anthropic, Cohere, Replicate, OpenRouter** models in the same format.
diff --git a/docs/my-website/docs/completion/supported.md b/docs/my-website/docs/completion/supported.md
index 2719929b4..48bce1d79 100644
--- a/docs/my-website/docs/completion/supported.md
+++ b/docs/my-website/docs/completion/supported.md
@@ -1,10 +1,12 @@
# Supported Chat, Completion Models
## API Keys
-liteLLM reads keys set in the environment variables or your Key Manager
-liteLLM standardizes naming keys in the following format
-`PROVIDER_API_KEY` for example `OPENAI_API_KEY` or `TOGETHERAI_API_KEY` or `HUGGINGFACE_API_KEY`. In addition to this liteLLM also allows you to use the provider specificed naming conventions for keys
-Example Both `HF_TOKEN` and `HUGGINGFACE_API_KEY` will work for Hugging Face models
+liteLLM reads API keys from your environment variables. Keys should be named in the format
+`PROVIDER_API_KEY`, for example:
+* `OPENAI_API_KEY` (Provider: OpenAI)
+* `TOGETHERAI_API_KEY` (Provider: TogetherAI)
+* `HUGGINGFACE_API_KEY` (Provider: HuggingFace)
+
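+A minimal sketch of setting one of these keys and calling a model (the key value below is a placeholder):
+```python
+import os
+from litellm import completion
+
+# liteLLM picks this up from the environment for OpenAI models
+os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder, use your real key
+
+response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hey!"}])
+```
+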
### OpenAI Chat Completion Models
diff --git a/docs/my-website/sidebars.js b/docs/my-website/sidebars.js
index ba82855d1..5f216a1d1 100644
--- a/docs/my-website/sidebars.js
+++ b/docs/my-website/sidebars.js
@@ -21,19 +21,15 @@ const sidebars = {
'index',
{
type: 'category',
- label: 'completion_function',
+ label: 'Completion()',
items: ['completion/input','completion/output'],
},
{
type: 'category',
- label: 'embedding_function',
+ label: 'Embedding()',
items: ['embedding/supported_embedding'],
},
- {
- type: 'category',
- label: 'Supported Chat, Completion Models',
- items: ['completion/supported'],
- },
+ 'completion/supported',
{
type: 'category',
label: 'Tutorials',
diff --git a/docs/my-website/src/pages/index.md b/docs/my-website/src/pages/index.md
index 4ca470335..57d23215d 100644
--- a/docs/my-website/src/pages/index.md
+++ b/docs/my-website/src/pages/index.md
@@ -1,23 +1,27 @@
-# 🚅 litellm
-a light 100 line package to simplify calling OpenAI, Azure, Cohere, Anthropic APIs
+# *🚅 litellm*
+[PyPI Package](https://pypi.org/project/litellm/)
+[PyPI v0.1.1](https://pypi.org/project/litellm/0.1.1/)
+[CircleCI Build Status](https://dl.circleci.com/status-badge/redirect/gh/BerriAI/litellm/tree/main)
+
+[GitHub Repo](https://github.com/BerriAI/litellm)
-###### litellm manages:
-* Calling all LLM APIs using the OpenAI format - `completion(model, messages)`
-* Consistent output for all LLM APIs, text responses will always be available at `['choices'][0]['message']['content']`
-* Consistent Exceptions for all LLM APIs, we map RateLimit, Context Window, and Authentication Error exceptions across all providers to their OpenAI equivalents. [see Code](https://github.com/BerriAI/litellm/blob/ba1079ff6698ef238c5c7f771dd2b698ec76f8d9/litellm/utils.py#L250)
+[Discord](https://discord.gg/wuPM9dRgDw)
-###### observability:
-* Logging - see exactly what the raw model request/response is by plugging in your own function `completion(.., logger_fn=your_logging_fn)` and/or print statements from the package `litellm.set_verbose=True`
-* Callbacks - automatically send your data to Helicone, Sentry, Posthog, Slack - `litellm.success_callbacks`, `litellm.failure_callbacks` [see Callbacks](https://litellm.readthedocs.io/en/latest/advanced/)
+a light package to simplify calling OpenAI, Azure, Cohere, Anthropic, and Hugging Face API endpoints. It manages:
+- translating inputs to the provider's completion and embedding endpoints
+- guaranteeing [consistent output](https://litellm.readthedocs.io/en/latest/output/): text responses are always available at `['choices'][0]['message']['content']`
+- mapping exceptions: common exceptions across providers are mapped to the [OpenAI exception types](https://help.openai.com/en/articles/6897213-openai-library-error-types-guidance), as sketched below
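+
+A minimal sketch of what that mapping buys you; the specific `openai.error` classes here assume the pre-1.0 `openai` SDK and are illustrative, not the definitive mapping:
+```python
+import openai  # pre-1.0 SDK; its error classes are the assumed mapping target
+from litellm import completion
+
+messages = [{"role": "user", "content": "Hey, how's it going?"}]
+
+try:
+    # provider-specific auth / rate-limit errors surface as OpenAI-style exceptions
+    response = completion(model="command-nightly", messages=messages)
+except openai.error.AuthenticationError as e:
+    print(f"check your COHERE_API_KEY: {e}")
+except openai.error.RateLimitError as e:
+    print(f"provider rate limit hit: {e}")
+```
+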
+# usage
+
-## Quick Start
-Go directly to code: [Getting Started Notebook](https://colab.research.google.com/drive/1gR3pY-JzDZahzpVdbGBtrNGDBmzUNJaJ?usp=sharing)
-### Installation
+Demo - https://litellm.ai/playground \
+Read the docs - https://docs.litellm.ai/docs/
+
+## quick start
```
pip install litellm
```
-### Usage
```python
from litellm import completion
@@ -33,11 +37,32 @@ response = completion(model="gpt-3.5-turbo", messages=messages)
# cohere call
response = completion("command-nightly", messages)
```
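+Whichever provider handled the call, the response text lives at the same path:
+```python
+# works for both responses above
+print(response['choices'][0]['message']['content'])
+```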
-Need Help / Support : [see troubleshooting](https://litellm.readthedocs.io/en/latest/troubleshoot)
+Code Sample: [Getting Started Notebook](https://colab.research.google.com/drive/1gR3pY-JzDZahzpVdbGBtrNGDBmzUNJaJ?usp=sharing)
-## Why did we build liteLLM
+Stable version:
+```
+pip install litellm==0.1.345
+```
+
+## Streaming Queries
+liteLLM supports streaming the model response back; pass `stream=True` to get a streaming iterator in the response.
+Streaming is supported for OpenAI, Azure, Anthropic, and Hugging Face models.
+```python
+# openai call
+response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
+for chunk in response:
+ print(chunk['choices'][0]['delta'])
+
+# claude 2
+result = completion('claude-2', messages, stream=True)
+for chunk in result:
+ print(chunk['choices'][0]['delta'])
+```
+
+# support / talk with founders
+- [Our calendar 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
+- [Community Discord 💭](https://discord.gg/wuPM9dRgDw)
+- Our numbers 📞 +1 (770) 878-3106 / +1 (412) 618-6238
+- Our emails ✉️ ishaan@berri.ai / krrish@berri.ai
+
+# why did we build this
- **Need for simplicity**: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI, and Cohere.
-
-## Support
-* [Meet with us 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
-* Contact us at ishaan@berri.ai / krrish@berri.ai