[BerriAI/litellm](https://github.com/BerriAI/litellm)
[Deploy on Railway](https://railway.app/template/DYqQAW?referralCode=t3ukrU)
## What does liteLLM proxy do
- Make `/chat/completions` requests for 50+ LLM models, including **Azure, OpenAI, Replicate, Anthropic, Hugging Face**
Example: for `model` use `claude-2`, `gpt-3.5`, `gpt-4`, `command-nightly`, `stabilityai/stablecode-completion-alpha-3b-4k`
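For illustration, a minimal request sketch is shown below; it assumes the proxy is running locally on port 8000, and the host, port, and example field values are assumptions rather than details taken from this README.

```
# Minimal sketch: send an OpenAI-format /chat/completions request to the proxy.
# Assumes the proxy is reachable at http://0.0.0.0:8000; adjust host/port as needed.
import requests

response = requests.post(
    "http://0.0.0.0:8000/chat/completions",
    json={
        "model": "claude-2",
        "messages": [{"role": "user", "content": "Say this is a test"}],
    },
)
print(response.json())
```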
- **Consistent Input/Output** Format
  - Call all models using the OpenAI format - `completion(model, messages)`
  - Text responses will always be available at `['choices'][0]['message']['content']`
- **Error Handling** using model fallbacks (if `GPT-4` fails, try `llama2`); see the sketch after this list
- **Logging** - Log requests, responses and errors to `Supabase`, `Posthog`, `Mixpanel`, `Sentry`, `Helicone` (any of the supported providers here: https://litellm.readthedocs.io/en/latest/advanced/)
**Example: Logs sent to Supabase**
<img width="1015" alt="Screenshot 2023-08-11 at 4 02 46 PM" src="https://github.com/ishaan-jaff/proxy-server/assets/29436595/237557b8-ba09-4917-982c-8f3e1b2c8d08">
- **Token Usage & Spend** - Track input + completion tokens used and spend per model
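As a rough illustration of the model-fallback idea above, the sketch below tries a list of models in order until one call succeeds. This is a client-side sketch only; the proxy handles fallbacks internally, and the endpoint, port, and model names here are assumptions.

```
# Illustrative sketch of model fallbacks: try each model in order until one call succeeds.
import requests

def chat_with_fallbacks(messages, models=("gpt-4", "llama2")):
    last_error = None
    for model in models:
        try:
            resp = requests.post(
                "http://0.0.0.0:8000/chat/completions",
                json={"model": model, "messages": messages},
                timeout=60,
            )
            resp.raise_for_status()
            return resp.json()  # first successful model wins
        except requests.RequestException as err:
            last_error = err  # remember the failure and try the next model
    raise RuntimeError(f"All models failed; last error: {last_error}")
```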
All responses from the server are returned in the following format (for all LLM models):
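As a rough guide (the exact fields may vary by model and provider), an OpenAI-style response looks like the sketch below, with the text always available at `['choices'][0]['message']['content']`:

```
# Sketch of the OpenAI-style response shape; the field values are placeholders.
response = {
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {"role": "assistant", "content": "..."},
        }
    ],
    "model": "claude-2",
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
}

print(response["choices"][0]["message"]["content"])
```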
## Installation & Usage
### Running Locally
1. Clone the liteLLM proxy repository to your local machine:
```
git clone https://github.com/BerriAI/liteLLM-proxy
```
2. Install the required dependencies using pip:
```
pip install -r requirements.txt
```
3. Set your LLM API keys:
```
os.environ['OPENAI_API_KEY'] = "YOUR_API_KEY"
# or set OPENAI_API_KEY in your .env file
```
4. Run the server:
```
python main.py
```
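Once the server is running, you can point an OpenAI-compatible client at it. The sketch below uses the `openai` Python package (0.x-style API); the base URL and port are assumptions about the local setup.

```
# Sketch: use the openai 0.x client against the local proxy instead of api.openai.com.
import openai

openai.api_base = "http://0.0.0.0:8000"  # point the client at the proxy (assumed port)
openai.api_key = "anything"              # placeholder; real provider keys are set on the server (step 3)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from the proxy"}],
)
print(response["choices"][0]["message"]["content"])
```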
### Deploying
1. Quick Start: Deploy on Railway
[Deploy on Railway](https://railway.app/template/DYqQAW?referralCode=t3ukrU)
2. `GCP`, `AWS`, `Azure`
   This project includes a `Dockerfile`, allowing you to build and deploy a Docker image to the cloud provider of your choice.
## Support / Talk with founders
- [Our calendar 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
- [Community Discord 💭](https://discord.gg/wuPM9dRgDw)
- Our numbers 📞 +1 (770) 8783-106 / +1 (412) 618-6238
- Our emails ✉️ ishaan@berri.ai / krrish@berri.ai