+# *🚅 litellm*
+[PyPI](https://pypi.org/project/litellm/)
+[PyPI - 0.1.1](https://pypi.org/project/litellm/0.1.1/)
+[Tests](https://github.com/BerriAI/litellm/actions/workflows/tests.yml)
+[Publish to PyPI](https://github.com/BerriAI/litellm/actions/workflows/publish_pypi.yml)
-LiteLLM manages:
+[Discord](https://discord.gg/wuPM9dRgDw)
-- Translate inputs to provider's `completion`, `embedding`, and `image_generation` endpoints
-- [Consistent output](https://docs.litellm.ai/docs/completion/output), text responses will always be available at `['choices'][0]['message']['content']`
-- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - [Router](https://docs.litellm.ai/docs/routing)
-- Set Budgets & Rate limits per project, api key, model [LiteLLM Proxy Server (LLM Gateway)](https://docs.litellm.ai/docs/simple_proxy)
+a simple & light 100-line package to call OpenAI, Azure, Cohere, and Anthropic API endpoints
-[**Jump to LiteLLM Proxy (LLM Gateway) Docs**](https://github.com/BerriAI/litellm?tab=readme-ov-file#openai-proxy---docs)
-[**Jump to Supported LLM Providers**](https://github.com/BerriAI/litellm?tab=readme-ov-file#supported-providers-docs)
+litellm manages:
+- translating inputs to the provider's completion and embedding endpoints
+- guaranteeing consistent output - text responses are always available at `['choices'][0]['message']['content']`
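+
+For example, reading a response uses the same access path no matter which provider served it (a minimal sketch; the API key below is a placeholder):
+```python
+import os
+from litellm import completion
+
+os.environ["OPENAI_API_KEY"] = "openai key"
+messages = [{ "content": "Hello, how are you?","role": "user"}]
+response = completion(model="gpt-3.5-turbo", messages=messages)
+print(response['choices'][0]['message']['content'])  # same path for OpenAI, Azure, Cohere, Anthropic
+```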
-🚨 **Stable Release:** Use docker images with the `-stable` tag. These have undergone 12 hour load tests, before being published.
+# usage
-Support for more providers. Missing a provider or LLM Platform, raise a [feature request](https://github.com/BerriAI/litellm/issues/new?assignees=&labels=enhancement&projects=&template=feature_request.yml&title=%5BFeature%5D%3A+).
+Read the docs - https://litellm.readthedocs.io/en/latest/
-# Usage ([**Docs**](https://docs.litellm.ai/docs/))
-
-> [!IMPORTANT]
-> LiteLLM v1.0.0 now requires `openai>=1.0.0`. Migration guide [here](https://docs.litellm.ai/docs/migration)
-> LiteLLM v1.40.14+ now requires `pydantic>=2.0.0`. No changes required.
-
-
-
-
-
-```shell
+## quick start
+```
pip install litellm
```
```python
from litellm import completion
import os
## set ENV variables
-os.environ["OPENAI_API_KEY"] = "your-openai-key"
-os.environ["COHERE_API_KEY"] = "your-cohere-key"
+# ENV variables can also be set in a .env file; see .env.example
+os.environ["OPENAI_API_KEY"] = "openai key"
+os.environ["COHERE_API_KEY"] = "cohere key"
messages = [{ "content": "Hello, how are you?","role": "user"}]
@@ -72,304 +35,26 @@ messages = [{ "content": "Hello, how are you?","role": "user"}]
response = completion(model="gpt-3.5-turbo", messages=messages)
# cohere call
-response = completion(model="command-nightly", messages=messages)
-print(response)
+response = completion("command-nightly", messages)
+
+# azure openai call (see the Azure env setup sketched below)
+response = completion("chatgpt-test", messages, azure=True)
+
+# openrouter call
+response = completion("google/palm-2-codechat-bison", messages)
+```
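+
+The `azure=True` call above reads its configuration from environment variables (the same names `main.py` looks up); a minimal sketch of that extra setup, with placeholder values:
+```python
+import os
+
+os.environ["AZURE_API_BASE"] = "<your azure openai endpoint>"
+os.environ["AZURE_API_VERSION"] = "<api version>"
+os.environ["AZURE_API_KEY"] = "azure key"
+```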
+Code Sample: [Getting Started Notebook](https://colab.research.google.com/drive/1gR3pY-JzDZahzpVdbGBtrNGDBmzUNJaJ?usp=sharing)
+
+Stable version
+```
+pip install litellm==0.1.1
```
-Call any model supported by a provider, with `model=/`. There might be provider-specific details here, so refer to [provider docs for more information](https://docs.litellm.ai/docs/providers)
+# hosted version
+- [Grab time if you want access 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
-## Async ([Docs](https://docs.litellm.ai/docs/completion/stream#async-completion))
+# why did I build this
+- **Need for simplicity**: My code started to get extremely complicated managing & translating calls between Azure, OpenAI, and Cohere.
-```python
-from litellm import acompletion
-import asyncio
-
-async def test_get_response():
- user_message = "Hello, how are you?"
- messages = [{"content": user_message, "role": "user"}]
- response = await acompletion(model="gpt-3.5-turbo", messages=messages)
- return response
-
-response = asyncio.run(test_get_response())
-print(response)
-```
-
-## Streaming ([Docs](https://docs.litellm.ai/docs/completion/stream))
-
-liteLLM supports streaming the model response back, pass `stream=True` to get a streaming iterator in response.
-Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)
-
-```python
-from litellm import completion
-response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
-for part in response:
- print(part.choices[0].delta.content or "")
-
-# claude 2
-response = completion('claude-2', messages, stream=True)
-for part in response:
- print(part.choices[0].delta.content or "")
-```
-
-## Logging Observability ([Docs](https://docs.litellm.ai/docs/observability/callbacks))
-
-LiteLLM exposes pre defined callbacks to send data to Lunary, Langfuse, DynamoDB, s3 Buckets, Helicone, Promptlayer, Traceloop, Athina, Slack, MLflow
-
-```python
-from litellm import completion
-
-## set env variables for logging tools
-os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
-os.environ["HELICONE_API_KEY"] = "your-helicone-auth-key"
-os.environ["LANGFUSE_PUBLIC_KEY"] = ""
-os.environ["LANGFUSE_SECRET_KEY"] = ""
-os.environ["ATHINA_API_KEY"] = "your-athina-api-key"
-
-os.environ["OPENAI_API_KEY"]
-
-# set callbacks
-litellm.success_callback = ["lunary", "langfuse", "athina", "helicone"] # log input/output to lunary, langfuse, supabase, athina, helicone etc
-
-#openai call
-response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
-```
-
-# LiteLLM Proxy Server (LLM Gateway) - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
-
-Track spend + Load Balance across multiple projects
-
-[Hosted Proxy (Preview)](https://docs.litellm.ai/docs/hosted)
-
-The proxy provides:
-
-1. [Hooks for auth](https://docs.litellm.ai/docs/proxy/virtual_keys#custom-auth)
-2. [Hooks for logging](https://docs.litellm.ai/docs/proxy/logging#step-1---create-your-custom-litellm-callback-class)
-3. [Cost tracking](https://docs.litellm.ai/docs/proxy/virtual_keys#tracking-spend)
-4. [Rate Limiting](https://docs.litellm.ai/docs/proxy/users#set-rate-limits)
-
-## 📖 Proxy Endpoints - [Swagger Docs](https://litellm-api.up.railway.app/)
-
-
-## Quick Start Proxy - CLI
-
-```shell
-pip install 'litellm[proxy]'
-```
-
-### Step 1: Start litellm proxy
-
-```shell
-$ litellm --model huggingface/bigcode/starcoder
-
-#INFO: Proxy running on http://0.0.0.0:4000
-```
-
-### Step 2: Make ChatCompletions Request to Proxy
-
-
-> [!IMPORTANT]
-> 💡 [Use LiteLLM Proxy with Langchain (Python, JS), OpenAI SDK (Python, JS) Anthropic SDK, Mistral SDK, LlamaIndex, Instructor, Curl](https://docs.litellm.ai/docs/proxy/user_keys)
-
-```python
-import openai # openai v1.0.0+
-client = openai.OpenAI(api_key="anything",base_url="http://0.0.0.0:4000") # set proxy to base_url
-# request sent to model set on litellm proxy, `litellm --model`
-response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
- {
- "role": "user",
- "content": "this is a test request, write a short poem"
- }
-])
-
-print(response)
-```
-
-## Proxy Key Management ([Docs](https://docs.litellm.ai/docs/proxy/virtual_keys))
-
-Connect the proxy with a Postgres DB to create proxy keys
-
-```bash
-# Get the code
-git clone https://github.com/BerriAI/litellm
-
-# Go to folder
-cd litellm
-
-# Add the master key - you can change this after setup
-echo 'LITELLM_MASTER_KEY="sk-1234"' > .env
-
-# Add the litellm salt key - you cannot change this after adding a model
-# It is used to encrypt / decrypt your LLM API Key credentials
-# We recommned - https://1password.com/password-generator/
-# password generator to get a random hash for litellm salt key
-echo 'LITELLM_SALT_KEY="sk-1234"' > .env
-
-source .env
-
-# Start
-docker-compose up
-```
-
-
-UI on `/ui` on your proxy server
-
-
-Set budgets and rate limits across multiple projects
-`POST /key/generate`
-
-### Request
-
-```shell
-curl 'http://0.0.0.0:4000/key/generate' \
---header 'Authorization: Bearer sk-1234' \
---header 'Content-Type: application/json' \
---data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'
-```
-
-### Expected Response
-
-```shell
-{
- "key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token
- "expires": "2023-11-19T01:38:25.838000+00:00" # datetime object
-}
-```
-
-## Supported Providers ([Docs](https://docs.litellm.ai/docs/providers))
-
-| Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) | [Async Embedding](https://docs.litellm.ai/docs/embedding/supported_embedding) | [Async Image Generation](https://docs.litellm.ai/docs/image_generation) |
-|-------------------------------------------------------------------------------------|---------------------------------------------------------|---------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------|-------------------------------------------------------------------------|
-| [openai](https://docs.litellm.ai/docs/providers/openai) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| [azure](https://docs.litellm.ai/docs/providers/azure) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| [aws - sagemaker](https://docs.litellm.ai/docs/providers/aws_sagemaker) | ✅ | ✅ | ✅ | ✅ | ✅ | |
-| [aws - bedrock](https://docs.litellm.ai/docs/providers/bedrock) | ✅ | ✅ | ✅ | ✅ | ✅ | |
-| [google - vertex_ai](https://docs.litellm.ai/docs/providers/vertex) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| [google - palm](https://docs.litellm.ai/docs/providers/palm) | ✅ | ✅ | ✅ | ✅ | | |
-| [google AI Studio - gemini](https://docs.litellm.ai/docs/providers/gemini) | ✅ | ✅ | ✅ | ✅ | | |
-| [mistral ai api](https://docs.litellm.ai/docs/providers/mistral) | ✅ | ✅ | ✅ | ✅ | ✅ | |
-| [cloudflare AI Workers](https://docs.litellm.ai/docs/providers/cloudflare_workers) | ✅ | ✅ | ✅ | ✅ | | |
-| [cohere](https://docs.litellm.ai/docs/providers/cohere) | ✅ | ✅ | ✅ | ✅ | ✅ | |
-| [anthropic](https://docs.litellm.ai/docs/providers/anthropic) | ✅ | ✅ | ✅ | ✅ | | |
-| [empower](https://docs.litellm.ai/docs/providers/empower) | ✅ | ✅ | ✅ | ✅ |
-| [huggingface](https://docs.litellm.ai/docs/providers/huggingface) | ✅ | ✅ | ✅ | ✅ | ✅ | |
-| [replicate](https://docs.litellm.ai/docs/providers/replicate) | ✅ | ✅ | ✅ | ✅ | | |
-| [together_ai](https://docs.litellm.ai/docs/providers/togetherai) | ✅ | ✅ | ✅ | ✅ | | |
-| [openrouter](https://docs.litellm.ai/docs/providers/openrouter) | ✅ | ✅ | ✅ | ✅ | | |
-| [ai21](https://docs.litellm.ai/docs/providers/ai21) | ✅ | ✅ | ✅ | ✅ | | |
-| [baseten](https://docs.litellm.ai/docs/providers/baseten) | ✅ | ✅ | ✅ | ✅ | | |
-| [vllm](https://docs.litellm.ai/docs/providers/vllm) | ✅ | ✅ | ✅ | ✅ | | |
-| [nlp_cloud](https://docs.litellm.ai/docs/providers/nlp_cloud) | ✅ | ✅ | ✅ | ✅ | | |
-| [aleph alpha](https://docs.litellm.ai/docs/providers/aleph_alpha) | ✅ | ✅ | ✅ | ✅ | | |
-| [petals](https://docs.litellm.ai/docs/providers/petals) | ✅ | ✅ | ✅ | ✅ | | |
-| [ollama](https://docs.litellm.ai/docs/providers/ollama) | ✅ | ✅ | ✅ | ✅ | ✅ | |
-| [deepinfra](https://docs.litellm.ai/docs/providers/deepinfra) | ✅ | ✅ | ✅ | ✅ | | |
-| [perplexity-ai](https://docs.litellm.ai/docs/providers/perplexity) | ✅ | ✅ | ✅ | ✅ | | |
-| [Groq AI](https://docs.litellm.ai/docs/providers/groq) | ✅ | ✅ | ✅ | ✅ | | |
-| [Deepseek](https://docs.litellm.ai/docs/providers/deepseek) | ✅ | ✅ | ✅ | ✅ | | |
-| [anyscale](https://docs.litellm.ai/docs/providers/anyscale) | ✅ | ✅ | ✅ | ✅ | | |
-| [IBM - watsonx.ai](https://docs.litellm.ai/docs/providers/watsonx) | ✅ | ✅ | ✅ | ✅ | ✅ | |
-| [voyage ai](https://docs.litellm.ai/docs/providers/voyage) | | | | | ✅ | |
-| [xinference [Xorbits Inference]](https://docs.litellm.ai/docs/providers/xinference) | | | | | ✅ | |
-| [FriendliAI](https://docs.litellm.ai/docs/providers/friendliai) | ✅ | ✅ | ✅ | ✅ | | |
-
-[**Read the Docs**](https://docs.litellm.ai/docs/)
-
-## Contributing
-
-To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.
-
-Here's how to modify the repo locally:
-Step 1: Clone the repo
-
-```
-git clone https://github.com/BerriAI/litellm.git
-```
-
-Step 2: Navigate into the project, and install dependencies:
-
-```
-cd litellm
-poetry install -E extra_proxy -E proxy
-```
-
-Step 3: Test your change:
-
-```
-cd litellm/tests # pwd: Documents/litellm/litellm/tests
-poetry run flake8
-poetry run pytest .
-```
-
-Step 4: Submit a PR with your changes! 🚀
-
-- push your fork to your GitHub repo
-- submit a PR from there
-
-### Building LiteLLM Docker Image
-
-Follow these instructions if you want to build / run the LiteLLM Docker Image yourself.
-
-Step 1: Clone the repo
-
-```
-git clone https://github.com/BerriAI/litellm.git
-```
-
-Step 2: Build the Docker Image
-
-Build using Dockerfile.non_root
-```
-docker build -f docker/Dockerfile.non_root -t litellm_test_image .
-```
-
-Step 3: Run the Docker Image
-
-Make sure config.yaml is present in the root directory. This is your litellm proxy config file.
-```
-docker run \
- -v $(pwd)/proxy_config.yaml:/app/config.yaml \
- -e DATABASE_URL="postgresql://xxxxxxxx" \
- -e LITELLM_MASTER_KEY="sk-1234" \
- -p 4000:4000 \
- litellm_test_image \
- --config /app/config.yaml --detailed_debug
-```
-
-# Enterprise
-For companies that need better security, user management and professional support
-
-[Talk to founders](https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat)
-
-This covers:
-- ✅ **Features under the [LiteLLM Commercial License](https://docs.litellm.ai/docs/proxy/enterprise):**
-- ✅ **Feature Prioritization**
-- ✅ **Custom Integrations**
-- ✅ **Professional Support - Dedicated discord + slack**
-- ✅ **Custom SLAs**
-- ✅ **Secure access with Single Sign-On**
-
-# Support / talk with founders
-
-- [Schedule Demo 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
-- [Community Discord 💭](https://discord.gg/wuPM9dRgDw)
-- Our numbers 📞 +1 (770) 8783-106 / +1 (412) 618-6238
-- Our emails ✉️ ishaan@berri.ai / krrish@berri.ai
-
-# Why did we build this
-
-- **Need for simplicity**: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.
-
-# Contributors
-
-
-
-
-
-
-
-
-
-
-
-
-
+# Support
+Contact us at ishaan@berri.ai / krrish@berri.ai
diff --git a/build/lib/litellm/__init__.py b/build/lib/litellm/__init__.py
new file mode 100644
index 000000000..fd66e12bf
--- /dev/null
+++ b/build/lib/litellm/__init__.py
@@ -0,0 +1,2 @@
+__version__ = "1.0.0"
+from .main import * # Import all the symbols from main.py
\ No newline at end of file
diff --git a/build/lib/litellm/main.py b/build/lib/litellm/main.py
new file mode 100644
index 000000000..d4fc60053
--- /dev/null
+++ b/build/lib/litellm/main.py
@@ -0,0 +1,429 @@
+import os, openai, cohere, replicate, sys
+from typing import Any
+from func_timeout import func_set_timeout, FunctionTimedOut
+from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
+import json
+import traceback
+import threading
+import dotenv
+import subprocess
+####### ENVIRONMENT VARIABLES ###################
+# Loading env variables using dotenv
+dotenv.load_dotenv()
+set_verbose = False
+
+####### COMPLETION MODELS ###################
+open_ai_chat_completion_models = [
+ 'gpt-3.5-turbo',
+ 'gpt-4'
+]
+open_ai_text_completion_models = [
+ 'text-davinci-003'
+]
+
+cohere_models = [
+ 'command-nightly',
+]
+
+anthropic_models = [
+ "claude-2",
+ "claude-instant-1"
+]
+
+####### EMBEDDING MODELS ###################
+open_ai_embedding_models = [
+ 'text-embedding-ada-002'
+]
+
+#############################################
+
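+# completion() and embedding() below dispatch on these lists: a model name is matched against them,
+# or routed via the azure=True flag / a "replicate" substring in the model name, to pick the provider call.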
+
+####### COMPLETION ENDPOINTS ################
+#############################################
+@func_set_timeout(10, allowOverride=True) ## https://pypi.org/project/func-timeout/ - timeouts, in case calls hang (e.g. Azure)
+def completion(model, messages, max_tokens=None, forceTimeout=10, azure=False, logger_fn=None):
+ try:
+ if azure == True:
+ # azure configs
+ openai.api_type = "azure"
+ openai.api_base = os.environ.get("AZURE_API_BASE")
+ openai.api_version = os.environ.get("AZURE_API_VERSION")
+ openai.api_key = os.environ.get("AZURE_API_KEY")
+ ## LOGGING
+ logging(model=model, input=messages, azure=azure, logger_fn=logger_fn)
+ ## COMPLETION CALL
+ response = openai.ChatCompletion.create(
+ engine=model,
+ messages = messages
+ )
+ elif "replicate" in model:
+ # replicate defaults to os.environ.get("REPLICATE_API_TOKEN")
+ # checking in case user set it to REPLICATE_API_KEY instead
+ if not os.environ.get("REPLICATE_API_TOKEN") and os.environ.get("REPLICATE_API_KEY"):
+ replicate_api_token = os.environ.get("REPLICATE_API_KEY")
+ os.environ["REPLICATE_API_TOKEN"] = replicate_api_token
+ prompt = " ".join([message["content"] for message in messages])
+ input = [{"prompt": prompt}]
+ if max_tokens:
+ input["max_length"] = max_tokens # for t5 models
+ input["max_new_tokens"] = max_tokens # for llama2 models
+ ## LOGGING
+ logging(model=model, input=input, azure=azure, additional_args={"max_tokens": max_tokens}, logger_fn=logger_fn)
+ ## COMPLETION CALL
+ output = replicate.run(
+ model,
+ input=input)
+ response = ""
+ for item in output:
+ response += item
+ new_response = {
+ "choices": [
+ {
+ "finish_reason": "stop",
+ "index": 0,
+ "message": {
+ "content": response,
+ "role": "assistant"
+ }
+ }
+ ]
+ }
+ response = new_response
+ elif model in anthropic_models:
+ #anthropic defaults to os.environ.get("ANTHROPIC_API_KEY")
+ prompt = f"{HUMAN_PROMPT}"
+ for message in messages:
+ if "role" in message:
+ if message["role"] == "user":
+ prompt += f"{HUMAN_PROMPT}{message['content']}"
+ else:
+ prompt += f"{AI_PROMPT}{message['content']}"
+ else:
+ prompt += f"{HUMAN_PROMPT}{message['content']}"
+ prompt += f"{AI_PROMPT}"
+ anthropic = Anthropic()
+ if max_tokens:
+ max_tokens_to_sample = max_tokens
+ else:
+ max_tokens_to_sample = 300 # default in Anthropic docs https://docs.anthropic.com/claude/reference/client-libraries
+ ## LOGGING
+ logging(model=model, input=prompt, azure=azure, additional_args={"max_tokens": max_tokens}, logger_fn=logger_fn)
+ ## COMPLETION CALL
+ completion = anthropic.completions.create(
+ model=model,
+ prompt=prompt,
+ max_tokens_to_sample=max_tokens_to_sample
+ )
+ new_response = {
+ "choices": [
+ {
+ "finish_reason": "stop",
+ "index": 0,
+ "message": {
+ "content": completion.completion,
+ "role": "assistant"
+ }
+ }
+ ]
+ }
+ print(f"new response: {new_response}")
+ response = new_response
+ elif model in cohere_models:
+ cohere_key = os.environ.get("COHERE_API_KEY")
+ co = cohere.Client(cohere_key)
+ prompt = " ".join([message["content"] for message in messages])
+ ## LOGGING
+ logging(model=model, input=prompt, azure=azure, logger_fn=logger_fn)
+ ## COMPLETION CALL
+ response = co.generate(
+ model=model,
+ prompt = prompt
+ )
+ new_response = {
+ "choices": [
+ {
+ "finish_reason": "stop",
+ "index": 0,
+ "message": {
+ "content": response[0],
+ "role": "assistant"
+ }
+ }
+ ],
+ }
+ response = new_response
+
+ elif model in open_ai_chat_completion_models:
+ openai.api_type = "openai"
+ openai.api_base = "https://api.openai.com/v1"
+ openai.api_version = None
+ openai.api_key = os.environ.get("OPENAI_API_KEY")
+ ## LOGGING
+ logging(model=model, input=messages, azure=azure, logger_fn=logger_fn)
+ ## COMPLETION CALL
+ response = openai.ChatCompletion.create(
+ model=model,
+ messages = messages
+ )
+ elif model in open_ai_text_completion_models:
+ openai.api_type = "openai"
+ openai.api_base = "https://api.openai.com/v1"
+ openai.api_version = None
+ openai.api_key = os.environ.get("OPENAI_API_KEY")
+ prompt = " ".join([message["content"] for message in messages])
+ ## LOGGING
+ logging(model=model, input=prompt, azure=azure, logger_fn=logger_fn)
+ ## COMPLETION CALL
+ response = openai.Completion.create(
+ model=model,
+ prompt = prompt
+ )
+ else:
+ logging(model=model, input=messages, azure=azure, logger_fn=logger_fn)
+ return response
+ except Exception as e:
+ logging(model=model, input=messages, azure=azure, additional_args={"max_tokens": max_tokens}, logger_fn=logger_fn)
+ raise e
+
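+# Example (a sketch, kept as comments so nothing runs on import): any callable passed as logger_fn
+# receives the model_call_details dict built by logging() below.
+#   def my_logger(model_call_details): print(model_call_details)
+#   response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi"}], logger_fn=my_logger)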
+
+### EMBEDDING ENDPOINTS ####################
+@func_set_timeout(60, allowOverride=True) ## https://pypi.org/project/func-timeout/
+def embedding(model, input=[], azure=False, forceTimeout=60, logger_fn=None):
+ response = None
+ if azure == True:
+ # azure configs
+ openai.api_type = "azure"
+ openai.api_base = os.environ.get("AZURE_API_BASE")
+ openai.api_version = os.environ.get("AZURE_API_VERSION")
+ openai.api_key = os.environ.get("AZURE_API_KEY")
+ ## LOGGING
+ logging(model=model, input=input, azure=azure, logger_fn=logger_fn)
+ ## EMBEDDING CALL
+ response = openai.Embedding.create(input=input, engine=model)
+ print_verbose(f"response_value: {str(response)[:50]}")
+ elif model in open_ai_embedding_models:
+ openai.api_type = "openai"
+ openai.api_base = "https://api.openai.com/v1"
+ openai.api_version = None
+ openai.api_key = os.environ.get("OPENAI_API_KEY")
+ ## LOGGING
+ logging(model=model, input=input, azure=azure, logger_fn=logger_fn)
+ ## EMBEDDING CALL
+ response = openai.Embedding.create(input=input, model=model)
+ print_verbose(f"response_value: {str(response)[:50]}")
+ else:
+ logging(model=model, input=input, azure=azure, logger_fn=logger_fn)
+
+ return response
+
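+# Example (a sketch, kept as comments): the embedding endpoint mirrors completion()
+#   response = embedding("text-embedding-ada-002", input=["good morning from litellm"])
+#   response = embedding("your-azure-deployment", input=["good morning"], azure=True)  # placeholder deployment name; assumes AZURE_* env vars are set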
+
+### CLIENT CLASS #################### make it easy to push completion/embedding runs to different sources -> sentry/posthog/slack, etc.
+class litellm_client:
+ def __init__(self, success_callback=[], failure_callback=[], verbose=False): # Constructor
+ global set_verbose # update the module-level flag read by print_verbose
+ set_verbose = verbose
+ self.success_callback = success_callback
+ self.failure_callback = failure_callback
+ self.logger_fn = None # if user passes in their own logging function
+ self.callback_list = list(set(self.success_callback + self.failure_callback))
+ self.set_callbacks()
+
+ ## COMPLETION CALL
+ def completion(self, model, messages, max_tokens=None, forceTimeout=10, azure=False, logger_fn=None, additional_details={}) -> Any:
+ try:
+ self.logger_fn = logger_fn
+ response = completion(model=model, messages=messages, max_tokens=max_tokens, forceTimeout=forceTimeout, azure=azure, logger_fn=self.handle_input)
+ my_thread = threading.Thread(target=self.handle_success, args=(model, messages, additional_details)) # don't interrupt execution of main thread
+ my_thread.start()
+ return response
+ except Exception as e:
+ args = locals() # get all the param values
+ self.handle_failure(e, args)
+ raise e
+
+ ## EMBEDDING CALL
+ def embedding(self, model, input=[], azure=False, logger_fn=None, forceTimeout=60, additional_details={}) -> Any:
+ try:
+ self.logger_fn = logger_fn
+ response = embedding(model, input, azure=azure, logger_fn=self.handle_input)
+ my_thread = threading.Thread(target=self.handle_success, args=(model, input, additional_details)) # don't interrupt execution of main thread
+ my_thread.start()
+ return response
+ except Exception as e:
+ args = locals() # get all the param values
+ self.handle_failure(e, args)
+ raise e
+
+
+ def set_callbacks(self): #instantiate any external packages
+ for callback in self.callback_list: # only install what's required
+ if callback == "sentry":
+ try:
+ import sentry_sdk
+ except ImportError:
+ print_verbose("Package 'sentry_sdk' is missing. Installing it...")
+ subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'sentry_sdk'])
+ import sentry_sdk
+ self.sentry_sdk = sentry_sdk
+ self.sentry_sdk.init(dsn=os.environ.get("SENTRY_API_URL"), traces_sample_rate=float(os.environ.get("SENTRY_API_TRACE_RATE")))
+ self.capture_exception = self.sentry_sdk.capture_exception
+ self.add_breadcrumb = self.sentry_sdk.add_breadcrumb
+ elif callback == "posthog":
+ try:
+ from posthog import Posthog
+ except:
+ print_verbose("Package 'posthog' is missing. Installing it...")
+ subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'posthog'])
+ from posthog import Posthog
+ self.posthog = Posthog(
+ project_api_key=os.environ.get("POSTHOG_API_KEY"),
+ host=os.environ.get("POSTHOG_API_URL"))
+ elif callback == "slack":
+ try:
+ from slack_bolt import App
+ except ImportError:
+ print_verbose("Package 'slack_bolt' is missing. Installing it...")
+ subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'slack_bolt'])
+ from slack_bolt import App
+ self.slack_app = App(
+ token=os.environ.get("SLACK_API_TOKEN"),
+ signing_secret=os.environ.get("SLACK_API_SECRET")
+ )
+ self.alerts_channel = os.environ["SLACK_API_CHANNEL"]
+
+ def handle_input(self, model_call_details={}):
+ if len(model_call_details.keys()) > 0:
+ model = model_call_details["model"] if "model" in model_call_details else None
+ if model:
+ for callback in self.callback_list:
+ if callback == "sentry": # add a sentry breadcrumb if user passed in sentry integration
+ self.add_breadcrumb(
+ category=f'{model}',
+ message='Trying request model {} input {}'.format(model, json.dumps(model_call_details)),
+ level='info',
+ )
+ if self.logger_fn and callable(self.logger_fn):
+ self.logger_fn(model_call_details)
+ pass
+
+ def handle_success(self, model, messages, additional_details):
+ success_handler = additional_details.pop("success_handler", None)
+ failure_handler = additional_details.pop("failure_handler", None)
+ additional_details["litellm_model"] = str(model)
+ additional_details["litellm_messages"] = str(messages)
+ for callback in self.success_callback:
+ try:
+ if callback == "posthog":
+ ph_obj = {}
+ for detail in additional_details:
+ ph_obj[detail] = additional_details[detail]
+ event_name = additional_details["successful_event"] if "successful_event" in additional_details else "litellm.succes_query"
+ if "user_id" in additional_details:
+ self.posthog.capture(additional_details["user_id"], event_name, ph_obj)
+ else:
+ self.posthog.capture(event_name, ph_obj)
+ pass
+ elif callback == "slack":
+ slack_msg = ""
+ if len(additional_details.keys()) > 0:
+ for detail in additional_details:
+ slack_msg += f"{detail}: {additional_details[detail]}\n"
+ slack_msg += f"Successful call"
+ self.slack_app.client.chat_postMessage(channel=self.alerts_channel, text=slack_msg)
+ except:
+ pass
+
+ if success_handler and callable(success_handler):
+ call_details = {
+ "model": model,
+ "messages": messages,
+ "additional_details": additional_details
+ }
+ success_handler(call_details)
+ pass
+
+ def handle_failure(self, exception, args):
+ args.pop("self")
+ additional_details = args.pop("additional_details", {})
+
+ success_handler = additional_details.pop("success_handler", None)
+ failure_handler = additional_details.pop("failure_handler", None)
+
+ for callback in self.failure_callback:
+ try:
+ if callback == "slack":
+ slack_msg = ""
+ for param in args:
+ slack_msg += f"{param}: {args[param]}\n"
+ if len(additional_details.keys()) > 0:
+ for detail in additional_details:
+ slack_msg += f"{detail}: {additional_details[detail]}\n"
+ slack_msg += f"Traceback: {traceback.format_exc()}"
+ self.slack_app.client.chat_postMessage(channel=self.alerts_channel, text=slack_msg)
+ elif callback == "sentry":
+ self.capture_exception(exception)
+ elif callback == "posthog":
+ if len(additional_details.keys()) > 0:
+ ph_obj = {}
+ for param in args:
+ ph_obj[param] = args[param]
+ for detail in additional_details:
+ ph_obj[detail] = additional_details[detail]
+ event_name = additional_details["failed_event"] if "failed_event" in additional_details else "litellm.failed_query"
+ if "user_id" in additional_details:
+ self.posthog.capture(additional_details["user_id"], event_name, ph_obj)
+ else:
+ self.posthog.capture(event_name, ph_obj)
+ else:
+ pass
+ except:
+ print(f"got an error calling {callback} - {traceback.format_exc()}")
+
+ if failure_handler and callable(failure_handler):
+ call_details = {
+ "exception": exception,
+ "additional_details": additional_details
+ }
+ failure_handler(call_details)
+ pass
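+# Example usage of the client class (a sketch, kept as comments; assumes the env vars each callback needs are set,
+# e.g. SENTRY_API_URL / SENTRY_API_TRACE_RATE, POSTHOG_API_KEY / POSTHOG_API_URL, SLACK_API_TOKEN / SLACK_API_SECRET / SLACK_API_CHANNEL):
+#   client = litellm_client(success_callback=["posthog"], failure_callback=["slack", "sentry"], verbose=True)
+#   response = client.completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hey, how's it going?"}])
+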
+####### HELPER FUNCTIONS ################
+
+#Logging function -> log the exact model details + what's being sent | Non-Blocking
+def logging(model, input, azure=False, additional_args={}, logger_fn=None):
+ try:
+ model_call_details = {}
+ model_call_details["model"] = model
+ model_call_details["input"] = input
+ model_call_details["azure"] = azure
+ model_call_details["additional_args"] = additional_args
+ if logger_fn and callable(logger_fn):
+ try:
+ # log additional call details -> api key, etc.
+ if azure == True or model in open_ai_chat_completion_models or model in open_ai_text_completion_models or model in open_ai_embedding_models:
+ model_call_details["api_type"] = openai.api_type
+ model_call_details["api_base"] = openai.api_base
+ model_call_details["api_version"] = openai.api_version
+ model_call_details["api_key"] = openai.api_key
+ elif "replicate" in model:
+ model_call_details["api_key"] = os.environ.get("REPLICATE_API_TOKEN")
+ elif model in anthropic_models:
+ model_call_details["api_key"] = os.environ.get("ANTHROPIC_API_KEY")
+ elif model in cohere_models:
+ model_call_details["api_key"] = os.environ.get("COHERE_API_KEY")
+
+ logger_fn(model_call_details) # Expectation: any logger function passed in by the user should accept a dict object
+ except:
+ print_verbose(f"Basic model call details: {model_call_details}")
+ print_verbose(f"[Non-Blocking] Exception occurred while logging {traceback.format_exc()}")
+ pass
+ else:
+ print_verbose(f"Basic model call details: {model_call_details}")
+ pass
+ except:
+ pass
+
+## Set verbose to True -> ```litellm.set_verbose = True```
+def print_verbose(print_statement):
+ if set_verbose:
+ print(f"LiteLLM: {print_statement}")
+ print("Get help - https://discord.com/invite/wuPM9dRgDw")
\ No newline at end of file
diff --git a/ci_cd/check_file_length.py b/ci_cd/check_file_length.py
deleted file mode 100644
index f23b79add..000000000
--- a/ci_cd/check_file_length.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import sys
-
-
-def check_file_length(max_lines, filenames):
- bad_files = []
- for filename in filenames:
- with open(filename, "r") as file:
- lines = file.readlines()
- if len(lines) > max_lines:
- bad_files.append((filename, len(lines)))
- return bad_files
-
-
-if __name__ == "__main__":
- max_lines = int(sys.argv[1])
- filenames = sys.argv[2:]
-
- bad_files = check_file_length(max_lines, filenames)
- if bad_files:
- bad_files.sort(
- key=lambda x: x[1], reverse=True
- ) # Sort files by length in descending order
- for filename, length in bad_files:
- print(f"{filename}: {length} lines")
-
- sys.exit(1)
- else:
- sys.exit(0)
diff --git a/ci_cd/check_files_match.py b/ci_cd/check_files_match.py
deleted file mode 100644
index 18b6cf792..000000000
--- a/ci_cd/check_files_match.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import sys
-import filecmp
-import shutil
-
-
-def main(argv=None):
- print(
- "Comparing model_prices_and_context_window and litellm/model_prices_and_context_window_backup.json files... checking if they match."
- )
-
- file1 = "model_prices_and_context_window.json"
- file2 = "litellm/model_prices_and_context_window_backup.json"
-
- cmp_result = filecmp.cmp(file1, file2, shallow=False)
-
- if cmp_result:
- print(f"Passed! Files {file1} and {file2} match.")
- return 0
- else:
- print(
- f"Failed! Files {file1} and {file2} do not match. Copying content from {file1} to {file2}."
- )
- copy_content(file1, file2)
- return 1
-
-
-def copy_content(source, destination):
- shutil.copy2(source, destination)
-
-
-if __name__ == "__main__":
- sys.exit(main())
diff --git a/codecov.yaml b/codecov.yaml
deleted file mode 100644
index c25cf0fba..000000000
--- a/codecov.yaml
+++ /dev/null
@@ -1,32 +0,0 @@
-component_management:
- individual_components:
- - component_id: "Router"
- paths:
- - "router"
- - component_id: "LLMs"
- paths:
- - "*/llms/*"
- - component_id: "Caching"
- paths:
- - "*/caching/*"
- - ".*redis.*"
- - component_id: "litellm_logging"
- paths:
- - "*/integrations/*"
- - ".*litellm_logging.*"
- - component_id: "Proxy_Authentication"
- paths:
- - "*/proxy/auth/**"
-comment:
- layout: "header, diff, flags, components" # show component info in the PR comment
-
-coverage:
- status:
- project:
- default:
- target: auto
- threshold: 1% # at maximum allow project coverage to drop by 1%
- patch:
- default:
- target: auto
- threshold: 0% # patch coverage should be 100%
diff --git a/cookbook/Benchmarking_LLMs_by_use_case.ipynb b/cookbook/Benchmarking_LLMs_by_use_case.ipynb
deleted file mode 100644
index 80d96261b..000000000
--- a/cookbook/Benchmarking_LLMs_by_use_case.ipynb
+++ /dev/null
@@ -1,757 +0,0 @@
-{
- "nbformat": 4,
- "nbformat_minor": 0,
- "metadata": {
- "colab": {
- "provenance": []
- },
- "kernelspec": {
- "name": "python3",
- "display_name": "Python 3"
- },
- "language_info": {
- "name": "python"
- }
- },
- "cells": [
- {
- "cell_type": "markdown",
- "source": [
- "# LiteLLM - Benchmark Llama2, Claude1.2 and GPT3.5 for a use case\n",
- "In this notebook for a given use case we run the same question and view:\n",
- "* LLM Response\n",
- "* Response Time\n",
- "* Response Cost\n",
- "\n",
- "## Sample output for a question\n",
- ""
- ],
- "metadata": {
- "id": "4Cq-_Y-TKf0r"
- }
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "O3ENsWYB27Mb"
- },
- "outputs": [],
- "source": [
- "!pip install litellm"
- ]
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Example Use Case 1 - Code Generator\n",
- "### For this use case enter your system prompt and questions\n"
- ],
- "metadata": {
- "id": "Pk55Mjq_3DiR"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "# enter your system prompt if you have one\n",
- "system_prompt = \"\"\"\n",
- "You are a coding assistant helping users using litellm.\n",
- "litellm is a light package to simplify calling OpenAI, Azure, Cohere, Anthropic, Huggingface API Endpoints\n",
- "--\n",
- "Sample Usage:\n",
- "```\n",
- "pip install litellm\n",
- "from litellm import completion\n",
- "## set ENV variables\n",
- "os.environ[\"OPENAI_API_KEY\"] = \"openai key\"\n",
- "os.environ[\"COHERE_API_KEY\"] = \"cohere key\"\n",
- "messages = [{ \"content\": \"Hello, how are you?\",\"role\": \"user\"}]\n",
- "# openai call\n",
- "response = completion(model=\"gpt-3.5-turbo\", messages=messages)\n",
- "# cohere call\n",
- "response = completion(\"command-nightly\", messages)\n",
- "```\n",
- "\n",
- "\"\"\"\n",
- "\n",
- "\n",
- "# qustions/logs you want to run the LLM on\n",
- "questions = [\n",
- " \"what is litellm?\",\n",
- " \"why should I use LiteLLM\",\n",
- " \"does litellm support Anthropic LLMs\",\n",
- " \"write code to make a litellm completion call\",\n",
- "]"
- ],
- "metadata": {
- "id": "_1SZYJFB3HmQ"
- },
- "execution_count": 21,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Running questions\n",
- "### Select from 100+ LLMs here: https://docs.litellm.ai/docs/providers"
- ],
- "metadata": {
- "id": "AHH3cqeU3_ZT"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "import litellm\n",
- "from litellm import completion, completion_cost\n",
- "import os\n",
- "import time\n",
- "\n",
- "# optional use litellm dashboard to view logs\n",
- "# litellm.use_client = True\n",
- "# litellm.token = \"ishaan_2@berri.ai\" # set your email\n",
- "\n",
- "\n",
- "# set API keys\n",
- "os.environ['TOGETHERAI_API_KEY'] = \"\"\n",
- "os.environ['OPENAI_API_KEY'] = \"\"\n",
- "os.environ['ANTHROPIC_API_KEY'] = \"\"\n",
- "\n",
- "\n",
- "# select LLMs to benchmark\n",
- "# using https://api.together.xyz/playground for llama2\n",
- "# try any supported LLM here: https://docs.litellm.ai/docs/providers\n",
- "\n",
- "models = ['togethercomputer/llama-2-70b-chat', 'gpt-3.5-turbo', 'claude-instant-1.2']\n",
- "data = []\n",
- "\n",
- "for question in questions: # group by question\n",
- " for model in models:\n",
- " print(f\"running question: {question} for model: {model}\")\n",
- " start_time = time.time()\n",
- " # show response, response time, cost for each question\n",
- " response = completion(\n",
- " model=model,\n",
- " max_tokens=500,\n",
- " messages = [\n",
- " {\n",
- " \"role\": \"system\", \"content\": system_prompt\n",
- " },\n",
- " {\n",
- " \"role\": \"user\", \"content\": question\n",
- " }\n",
- " ],\n",
- " )\n",
- " end = time.time()\n",
- " total_time = end-start_time # response time\n",
- " # print(response)\n",
- " cost = completion_cost(response) # cost for completion\n",
- " raw_response = response['choices'][0]['message']['content'] # response string\n",
- "\n",
- "\n",
- " # add log to pandas df\n",
- " data.append(\n",
- " {\n",
- " 'Model': model,\n",
- " 'Question': question,\n",
- " 'Response': raw_response,\n",
- " 'ResponseTime': total_time,\n",
- " 'Cost': cost\n",
- " })"
- ],
- "metadata": {
- "id": "BpQD4A5339L3"
- },
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "source": [
- "## View Benchmarks for LLMs"
- ],
- "metadata": {
- "id": "apOSV3PBLa5Y"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "from IPython.display import display\n",
- "from IPython.core.interactiveshell import InteractiveShell\n",
- "InteractiveShell.ast_node_interactivity = \"all\"\n",
- "from IPython.display import HTML\n",
- "import pandas as pd\n",
- "\n",
- "df = pd.DataFrame(data)\n",
- "grouped_by_question = df.groupby('Question')\n",
- "\n",
- "for question, group_data in grouped_by_question:\n",
- " print(f\"Question: {question}\")\n",
- " HTML(group_data.to_html())\n"
- ],
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 1000
- },
- "id": "CJqBlqUh_8Ws",
- "outputId": "e02c3427-d8c6-4614-ff07-6aab64247ff6"
- },
- "execution_count": 22,
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "Question: does litellm support Anthropic LLMs\n"
- ]
- },
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- ""
- ],
- "text/html": [
- "
\n",
- " \n",
- "
\n",
- "
\n",
- "
Model
\n",
- "
Question
\n",
- "
Response
\n",
- "
ResponseTime
\n",
- "
Cost
\n",
- "
\n",
- " \n",
- " \n",
- "
\n",
- "
6
\n",
- "
togethercomputer/llama-2-70b-chat
\n",
- "
does litellm support Anthropic LLMs
\n",
- "
Yes, litellm supports Anthropic LLMs.\\n\\nIn the example usage you provided, the `completion` function is called with the `model` parameter set to `\"gpt-3.5-turbo\"` for OpenAI and `\"command-nightly\"` for Cohere.\\n\\nTo use an Anthropic LLM with litellm, you would set the `model` parameter to the name of the Anthropic model you want to use, followed by the version number, if applicable. For example:\\n```\\nresponse = completion(model=\"anthropic-gpt-2\", messages=messages)\\n```\\nThis would call the Anthropic GPT-2 model to generate a completion for the given input messages.\\n\\nNote that you will need to set the `ANTHROPIC_API_KEY` environment variable to your Anthropic API key before making the call. You can do this by running the following command in your terminal:\\n```\\nos.environ[\"ANTHROPIC_API_KEY\"] = \"your-anthropic-api-key\"\\n```\\nReplace `\"your-anthropic-api-key\"` with your actual Anthropic API key.\\n\\nOnce you've set the environment variable, you can use the `completion` function with the `model` parameter set to an Anthropic model name to call the Anthropic API and generate a completion.
\n",
- "
21.513009
\n",
- "
0.001347
\n",
- "
\n",
- "
\n",
- "
7
\n",
- "
gpt-3.5-turbo
\n",
- "
does litellm support Anthropic LLMs
\n",
- "
No, currently litellm does not support Anthropic LLMs. It mainly focuses on simplifying the usage of OpenAI, Azure, Cohere, and Huggingface API endpoints.
\n",
- "
8.656510
\n",
- "
0.000342
\n",
- "
\n",
- "
\n",
- "
8
\n",
- "
claude-instant-1.2
\n",
- "
does litellm support Anthropic LLMs
\n",
- "
Yes, litellm supports calling Anthropic LLMs through the completion function.\\n\\nTo use an Anthropic model with litellm:\\n\\n1. Set the ANTHROPIC_API_KEY environment variable with your Anthropic API key\\n\\n2. Pass the model name as the 'model' argument to completion(). Anthropic model names follow the format 'anthropic/<model_name>'\\n\\nFor example:\\n\\n```python \\nimport os\\nfrom litellm import completion\\n\\nos.environ[\"ANTHROPIC_API_KEY\"] = \"your_anthropic_api_key\"\\n\\nmessages = [{\"content\": \"Hello\", \"role\": \"user\"}]\\n\\nresponse = completion(model=\"anthropic/constitutional\", messages=messages)\\n```\\n\\nThis would call the Constitutional AI model from Anthropic.\\n\\nSo in summary, litellm provides a simple interface to call any Anthropic models as long as you specify the model name correctly and set the ANTHROPIC_API_KEY env variable.
Litellm is a lightweight Python package that simplifies calling various AI API endpoints, including OpenAI, Azure, Cohere, Anthropic, and Hugging Face. It provides a convenient interface for making requests to these APIs, allowing developers to easily integrate them into their applications. With Litellm, developers can quickly and easily interact with multiple AI models and services, without having to handle the details of authentication, API calls, and response parsing. This makes it easier to build and deploy AI-powered applications, and can help developers save time and effort.
\n",
- "
13.479644
\n",
- "
0.000870
\n",
- "
\n",
- "
\n",
- "
1
\n",
- "
gpt-3.5-turbo
\n",
- "
what is litellm?
\n",
- "
litellm is a light package that provides a simplified interface for making API calls to various language models and APIs. It abstracts away the complexities of handling network requests, authentication, and response parsing, making it easier for developers to integrate powerful language models into their applications.\\n\\nWith litellm, you can quickly make API calls to models like OpenAI's GPT-3.5 Turbo, Azure's Text Analytics, Cohere's Command API, Anthropic's API, and Huggingface's models. It also supports additional functionality like conversational AI, summarization, translation, and more.\\n\\nBy using litellm, you can focus on your application logic without getting tangled in the details of API integration, allowing you to quickly build intelligent and conversational applications.
\n",
- "
8.324332
\n",
- "
0.000566
\n",
- "
\n",
- "
\n",
- "
2
\n",
- "
claude-instant-1.2
\n",
- "
what is litellm?
\n",
- "
litellm is a Python library that simplifies calling various AI API endpoints like OpenAI, Azure, Cohere, Anthropic, and Huggingface. \\n\\nSome key things to know about litellm:\\n\\n- It provides a consistent interface for completing prompts and generating responses from different AI models through a single method called completion().\\n\\n- You specify the API (e.g. OpenAI, Cohere etc.) and model either by name or by setting environment variables before making the completion call.\\n\\n- This avoids having to use different SDKs or APIs for each provider and standardizes the call structure. \\n\\n- It handles things like setting headers, encoding inputs, parsing responses so the user doesn't have to deal with those details.\\n\\n- The goal is to make it easy to try different AI APIs and models without having to change code or learn different interfaces.\\n\\n- It's lightweight with no other dependencies required besides what's needed for each API (e.g. openai, azure SDKs etc.).\\n\\nSo in summary, litellm is a small library that provides a common way to interact with multiple conversational AI APIs through a single Python method, avoiding the need to directly use each provider's specific SDK.
\\nThere are several reasons why you might want to use LiteLLM:\\n\\n1. Simplified API calls: LiteLLM provides a simple and consistent API for calling various language models, making it easier to use multiple models and switch between them.\\n2. Environment variable configuration: LiteLLM allows you to set environment variables for API keys and model names, making it easier to manage and switch between different models and APIs.\\n3. Support for multiple models and APIs: LiteLLM supports a wide range of language models and APIs, including OpenAI, Azure, Cohere, Anthropic, and Hugging Face.\\n4. Easy integration with popular frameworks: LiteLLM can be easily integrated with popular frameworks such as PyTorch and TensorFlow, making it easy to use with your existing codebase.\\n5. Lightweight: LiteLLM is a lightweight package, making it easy to install and use, even on resource-constrained devices.\\n6. Flexible: LiteLLM allows you to define your own models and APIs, making it easy to use with custom models and APIs.\\n7. Extensive documentation: LiteLLM has extensive documentation, making it easy to get started and learn how to use the package.\\n8. Active community: LiteLLM has an active community of developers and users, making it easy to get help and feedback on your projects.\\n\\nOverall, LiteLLM can help you to simplify your workflow, improve your productivity, and make it easier to work with multiple language models and APIs.
\n",
- "
23.777885
\n",
- "
0.001443
\n",
- "
\n",
- "
\n",
- "
4
\n",
- "
gpt-3.5-turbo
\n",
- "
why should I use LiteLLM
\n",
- "
LiteLLM is a lightweight Python package that simplifies the process of making API calls to various language models. Here are some reasons why you should use LiteLLM:\\n\\n1. **Simplified API Calls**: LiteLLM abstracts away the complexity of making API calls to different language models. It provides a unified interface for invoking models from OpenAI, Azure, Cohere, Anthropic, Huggingface, and more.\\n\\n2. **Easy Integration**: LiteLLM seamlessly integrates with your existing codebase. You can import the package and start making API calls with just a few lines of code.\\n\\n3. **Flexibility**: LiteLLM supports a variety of language models, including GPT-3, GPT-Neo, chatGPT, and more. You can choose the model that suits your requirements and easily switch between them.\\n\\n4. **Convenience**: LiteLLM handles the authentication and connection details for you. You just need to set the relevant environment variables, and the package takes care of the rest.\\n\\n5. **Quick Prototyping**: LiteLLM is ideal for rapid prototyping and experimentation. With its simple API, you can quickly generate text, chat with models, and build interactive applications.\\n\\n6. **Community Support**: LiteLLM is actively maintained and supported by a community of developers. You can find help, share ideas, and collaborate with others to enhance your projects.\\n\\nOverall, LiteLLM simplifies the process of making API calls to language models, saving you time and effort while providing flexibility and convenience.
\n",
- "
12.109881
\n",
- "
0.000881
\n",
- "
\n",
- "
\n",
- "
5
\n",
- "
claude-instant-1.2
\n",
- "
why should I use LiteLLM
\n",
- "
Here are some key reasons why you may want to consider using LiteLLM:\\n\\n- Simplifies calling multiple large language models - LiteLLM provides a unified API to call models from different providers like OpenAI, Azure, HuggingFace, Anthropic etc. This avoids having to deal with different APIs from each provider.\\n\\n- Easy to get started - LiteLLM is very lightweight and simple to install with just one pip install. It has minimal configuration needed with just API keys. This lowers the barrier to start experimenting with LLMs.\\n\\n- Standard interface - The completion API provides a standard way to send messages and get responses back regardless of underlying model. This keeps the code interface consistent. \\n\\n- Model agnostic - Your code doesn't need to change to switch between different models. You can easily compare performance of models from different providers.\\n\\n- Open source - As it is open source, the library can be expanded to support more models and providers over time via community contributions.\\n\\n- Cost effective - LiteLLM helps avoid vendor lock-in. You can optimize costs by testing multiple models and choosing the best suitable and cost-effective option for your needs.\\n\\nSo in summary, L
\\nTo make a completion call using litellm, you can use the `completion` function from the litellm package. Here's an example of how to use it:\\n```\\nfrom litellm import completion\\n\\n# Set ENV variables\\nos.environ[\"OPENAI_API_KEY\"] = \"your_openai_api_key\"\\nos.environ[\"COHERE_API_KEY\"] = \"your_cohere_api_key\"\\n\\n# Define the messages to be completed\\nmessages = [\\n {\\n \"content\": \"Hello, how are you?\",\\n \"role\": \"user\"\\n }\\n]\\n\\n# Make a completion call using OpenAI\\nresponse = completion(model=\"gpt-3.5-turbo\", messages=messages)\\n\\n# Make a completion call using Cohere\\nresponse = completion(\"command-nightly\", messages)\\n```\\nIn this example, we first set the ENV variables for the OpenAI and Cohere API keys. Then, we define a list of messages to be completed, which in this case contains a single message with the content \"Hello, how are you?\" and the role \"user\".\\n\\nNext, we make two completion calls using the `completion` function from litellm. The first call uses the OpenAI model `gpt-3.5-turbo` and passes in the list of messages. The second call uses the Cohere model `command-nightly` and passes in the same list of messages.\\n\\nThe `completion` function returns a response object that contains the completed messages. You can then use the `response.messages` attribute to access the completed messages.\\n\\nHere's an example of how to access the completed messages:\\n```\\n# Print the completed messages\\nprint(response.messages)\\n```\\nThis will print the completed messages, which should look something like this:\\n```\\n[{'content': 'Hello, how are you?', 'role': 'user', 'completion': 'I am doing well, thanks for asking.'}]\\n```\\nNote that the `completion` attribute contains the completed message. You can use this attribute to retrieve the completed message and use it in your application.
\n",
- "
32.192453
\n",
- "
0.001821
\n",
- "
\n",
- "
\n",
- "
10
\n",
- "
gpt-3.5-turbo
\n",
- "
write code to make a litellm completion call
\n",
- "
Sure! Here's an example code to make a completion call using the litellm package:\\n\\n```python\\nfrom litellm import completion\\n\\n# Set the model and messages\\nmodel = \"gpt-3.5-turbo\"\\nmessages = [\\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\\n {\"role\": \"user\", \"content\": \"Who won the world series in 2020?\"},\\n {\"role\": \"assistant\", \"content\": \"The Los Angeles Dodgers won the World Series in 2020.\"},\\n {\"role\": \"user\", \"content\": \"Where was it played?\"}\\n]\\n\\n# Make the completion call\\nresponse = completion(model=model, messages=messages)\\n\\n# Print the assistant's reply\\nassistant_reply = response[\"choices\"][0][\"message\"][\"content\"]\\nprint(\"Assistant: \", assistant_reply)\\n```\\n\\nMake sure you have the litellm package installed (`pip install litellm`) and set the necessary environment variables for the API keys before running this code.
\n",
- "
9.377155
\n",
- "
0.000686
\n",
- "
\n",
- "
\n",
- "
11
\n",
- "
claude-instant-1.2
\n",
- "
write code to make a litellm completion call
\n",
- "
Here is an example of making a completion call using litellm:\\n\\n```python\\nimport os\\nfrom litellm import completion\\n\\n# Set API keys as environment variables\\nos.environ[\"OPENAI_API_KEY\"] = \"your openai api key\" \\n\\n# Conversation context \\nmessages = [{\\n \"content\": \"Hello, how can I help you today?\",\\n \"role\": \"assistant\"\\n}]\\n\\n# Make completion call with GPT-3 model\\nresponse = completion(\\n model=\"gpt-3.5-turbo\", \\n messages=messages\\n)\\n\\nprint(response)\\n```\\n\\nTo break it down:\\n\\n- Import completion from litellm\\n- Set the OPENAI_API_KEY env var \\n- Define a messages list with the conversation context\\n- Call completion(), specifying the model (\"gpt-3.5-turbo\") and messages\\n- It will return the response from the API\\n- Print the response\\n\\nThis makes a simple completion call to OpenAI GPT-3 using litellm to handle the API details. You can also call other models like Cohere or Anthropic by specifying their name instead of the OpenAI
\n",
- "
9.839988
\n",
- "
0.001578
\n",
- "
\n",
- " \n",
- "
"
- ]
- },
- "metadata": {},
- "execution_count": 22
- }
- ]
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Use Case 2 - Rewrite user input concisely"
- ],
- "metadata": {
- "id": "bmtAbC1rGVAm"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "# enter your system prompt if you have one\n",
- "system_prompt = \"\"\"\n",
- "For a given user input, rewrite the input to make be more concise.\n",
- "\"\"\"\n",
- "\n",
- "# user input for re-writing questions\n",
- "questions = [\n",
- " \"LiteLLM is a lightweight Python package that simplifies the process of making API calls to various language models. Here are some reasons why you should use LiteLLM:\\n\\n1. **Simplified API Calls**: LiteLLM abstracts away the complexity of making API calls to different language models. It provides a unified interface for invoking models from OpenAI, Azure, Cohere, Anthropic, Huggingface, and more.\\n\\n2. **Easy Integration**: LiteLLM seamlessly integrates with your existing codebase. You can import the package and start making API calls with just a few lines of code.\\n\\n3. **Flexibility**: LiteLLM supports a variety of language models, including GPT-3, GPT-Neo, chatGPT, and more. You can choose the model that suits your requirements and easily switch between them.\\n\\n4. **Convenience**: LiteLLM handles the authentication and connection details for you. You just need to set the relevant environment variables, and the package takes care of the rest.\\n\\n5. **Quick Prototyping**: LiteLLM is ideal for rapid prototyping and experimentation. With its simple API, you can quickly generate text, chat with models, and build interactive applications.\\n\\n6. **Community Support**: LiteLLM is actively maintained and supported by a community of developers. You can find help, share ideas, and collaborate with others to enhance your projects.\\n\\nOverall, LiteLLM simplifies the process of making API calls to language models, saving you time and effort while providing flexibility and convenience\",\n",
- " \"Hi everyone! I'm [your name] and I'm currently working on [your project/role involving LLMs]. I came across LiteLLM and was really excited by how it simplifies working with different LLM providers. I'm hoping to use LiteLLM to [build an app/simplify my code/test different models etc]. Before finding LiteLLM, I was struggling with [describe any issues you faced working with multiple LLMs]. With LiteLLM's unified API and automatic translation between providers, I think it will really help me to [goals you have for using LiteLLM]. Looking forward to being part of this community and learning more about how I can build impactful applications powered by LLMs!Let me know if you would like me to modify or expand on any part of this suggested intro. I'm happy to provide any clarification or additional details you need!\",\n",
- " \"Traceloop is a platform for monitoring and debugging the quality of your LLM outputs. It provides you with a way to track the performance of your LLM application; rollout changes with confidence; and debug issues in production. It is based on OpenTelemetry, so it can provide full visibility to your LLM requests, as well vector DB usage, and other infra in your stack.\"\n",
- "]"
- ],
- "metadata": {
- "id": "boiHO1PhGXSL"
- },
- "execution_count": 23,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Run Questions"
- ],
- "metadata": {
- "id": "fwNcC_obICUc"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "import litellm\n",
- "from litellm import completion, completion_cost\n",
- "import os\n",
- "import time\n",
- "\n",
- "# optional use litellm dashboard to view logs\n",
- "# litellm.use_client = True\n",
- "# litellm.token = \"ishaan_2@berri.ai\" # set your email\n",
- "\n",
- "os.environ['TOGETHERAI_API_KEY'] = \"\"\n",
- "os.environ['OPENAI_API_KEY'] = \"\"\n",
- "os.environ['ANTHROPIC_API_KEY'] = \"\"\n",
- "\n",
- "models = ['togethercomputer/llama-2-70b-chat', 'gpt-3.5-turbo', 'claude-instant-1.2'] # enter llms to benchmark\n",
- "data_2 = []\n",
- "\n",
- "for question in questions: # group by question\n",
- " for model in models:\n",
- " print(f\"running question: {question} for model: {model}\")\n",
- " start_time = time.time()\n",
- " # show response, response time, cost for each question\n",
- " response = completion(\n",
- " model=model,\n",
- " max_tokens=500,\n",
- " messages = [\n",
- " {\n",
- " \"role\": \"system\", \"content\": system_prompt\n",
- " },\n",
- " {\n",
- " \"role\": \"user\", \"content\": \"User input:\" + question\n",
- " }\n",
- " ],\n",
- " )\n",
- " end = time.time()\n",
- " total_time = end-start_time # response time\n",
- " # print(response)\n",
- " cost = completion_cost(response) # cost for completion\n",
- " raw_response = response['choices'][0]['message']['content'] # response string\n",
- " #print(raw_response, total_time, cost)\n",
- "\n",
- " # add to pandas df\n",
- " data_2.append(\n",
- " {\n",
- " 'Model': model,\n",
- " 'Question': question,\n",
- " 'Response': raw_response,\n",
- " 'ResponseTime': total_time,\n",
- " 'Cost': cost\n",
- " })\n",
- "\n",
- "\n"
- ],
- "metadata": {
- "id": "KtBjZ1mUIBiJ"
- },
- "execution_count": null,
- "outputs": []
- },
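The loop above only appends raw rows to `data_2`; a per-model rollup of latency and cost makes the comparison easier to read. The following is a minimal sketch, not part of the original notebook, and assumes only the `data_2` list built in the cell above.

```python
import pandas as pd

# data_2 is the list of dicts built in the loop above, with keys:
# Model, Question, Response, ResponseTime, Cost
df = pd.DataFrame(data_2)

# average latency and cost per model across all benchmark questions
summary = (
    df.groupby("Model")[["ResponseTime", "Cost"]]
    .mean()
    .sort_values("ResponseTime")
)
print(summary)
```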
- {
- "cell_type": "markdown",
- "source": [
- "## View Logs - Group by Question"
- ],
- "metadata": {
- "id": "-PCYIzG5M0II"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "from IPython.display import display\n",
- "from IPython.core.interactiveshell import InteractiveShell\n",
- "InteractiveShell.ast_node_interactivity = \"all\"\n",
- "from IPython.display import HTML\n",
- "import pandas as pd\n",
- "\n",
- "df = pd.DataFrame(data_2)\n",
- "grouped_by_question = df.groupby('Question')\n",
- "\n",
- "for question, group_data in grouped_by_question:\n",
- " print(f\"Question: {question}\")\n",
- " HTML(group_data.to_html())\n"
- ],
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 1000
- },
- "id": "-3R5-2q8IiL2",
- "outputId": "c4a0d9e5-bb21-4de0-fc4c-9f5e71d0f177"
- },
- "execution_count": 20,
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "Question: Hi everyone! I'm [your name] and I'm currently working on [your project/role involving LLMs]. I came across LiteLLM and was really excited by how it simplifies working with different LLM providers. I'm hoping to use LiteLLM to [build an app/simplify my code/test different models etc]. Before finding LiteLLM, I was struggling with [describe any issues you faced working with multiple LLMs]. With LiteLLM's unified API and automatic translation between providers, I think it will really help me to [goals you have for using LiteLLM]. Looking forward to being part of this community and learning more about how I can build impactful applications powered by LLMs!Let me know if you would like me to modify or expand on any part of this suggested intro. I'm happy to provide any clarification or additional details you need!\n"
- ]
- },
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- ""
- ],
- "text/html": [
- "
\n",
- " \n",
- "
\n",
- "
\n",
- "
Model
\n",
- "
Question
\n",
- "
Response
\n",
- "
ResponseTime
\n",
- "
Cost
\n",
- "
\n",
- " \n",
- " \n",
- "
\n",
- "
3
\n",
- "
togethercomputer/llama-2-70b-chat
\n",
- "
Hi everyone! I'm [your name] and I'm currently working on [your project/role involving LLMs]. I came across LiteLLM and was really excited by how it simplifies working with different LLM providers. I'm hoping to use LiteLLM to [build an app/simplify my code/test different models etc]. Before finding LiteLLM, I was struggling with [describe any issues you faced working with multiple LLMs]. With LiteLLM's unified API and automatic translation between providers, I think it will really help me to [goals you have for using LiteLLM]. Looking forward to being part of this community and learning more about how I can build impactful applications powered by LLMs!Let me know if you would like me to modify or expand on any part of this suggested intro. I'm happy to provide any clarification or additional details you need!
\n",
- "
\\nHere's a more concise version of the user input:\\n\\n\"Hi everyone! I'm [your name] and I'm working on [your project/role involving LLMs]. I recently discovered LiteLLM and I'm excited to use it to [build an app/simplify my code/test different models etc]. Before LiteLLM, I struggled with [describe any issues you faced working with multiple LLMs]. I'm looking forward to using LiteLLM's unified API and automatic translation to achieve my goals. I'm eager to learn more about building impactful applications powered by LLMs and to be part of this community. Let me know if you have any questions or need further clarification.\"\\n\\nIn this revised version, we've kept the essential information and removed some of the extraneous language. We've also rephrased some of the sentences to make them more concise and easier to read.
\n",
- "
18.300620
\n",
- "
0.001200
\n",
- "
\n",
- "
\n",
- "
4
\n",
- "
gpt-3.5-turbo
\n",
- "
Hi everyone! I'm [your name] and I'm currently working on [your project/role involving LLMs]. I came across LiteLLM and was really excited by how it simplifies working with different LLM providers. I'm hoping to use LiteLLM to [build an app/simplify my code/test different models etc]. Before finding LiteLLM, I was struggling with [describe any issues you faced working with multiple LLMs]. With LiteLLM's unified API and automatic translation between providers, I think it will really help me to [goals you have for using LiteLLM]. Looking forward to being part of this community and learning more about how I can build impactful applications powered by LLMs!Let me know if you would like me to modify or expand on any part of this suggested intro. I'm happy to provide any clarification or additional details you need!
\n",
- "
User input: Hi, I'm [your name] and I'm excited about using LiteLLM to simplify working with different LLM providers. Before finding LiteLLM, I faced challenges working with multiple LLMs. With LiteLLM's unified API and automatic translation, I believe it will help me achieve my goals of [state your goals]. I look forward to being part of this community and learning how to build impactful applications with LLMs. Let me know if you need any further clarification or details.
\n",
- "
7.385472
\n",
- "
0.000525
\n",
- "
\n",
- "
\n",
- "
5
\n",
- "
claude-instant-1.2
\n",
- "
Hi everyone! I'm [your name] and I'm currently working on [your project/role involving LLMs]. I came across LiteLLM and was really excited by how it simplifies working with different LLM providers. I'm hoping to use LiteLLM to [build an app/simplify my code/test different models etc]. Before finding LiteLLM, I was struggling with [describe any issues you faced working with multiple LLMs]. With LiteLLM's unified API and automatic translation between providers, I think it will really help me to [goals you have for using LiteLLM]. Looking forward to being part of this community and learning more about how I can build impactful applications powered by LLMs!Let me know if you would like me to modify or expand on any part of this suggested intro. I'm happy to provide any clarification or additional details you need!
\n",
- "
Here is a more concise rewrite of the user input:\\n\\nHi everyone, I'm [your name]. I'm currently [your project/role] and came across LiteLLM, which simplifies working with different LLMs through its unified API. I hope to [build an app/simplify code/test models] with LiteLLM since I previously struggled with [issues]. LiteLLM's automatic translation between providers will help me [goals] and build impactful LLM applications. Looking forward to learning more as part of this community. Let me know if you need any clarification on my plans to use LiteLLM.
\n",
- "
8.628217
\n",
- "
0.001022
\n",
- "
\n",
- " \n",
- "
"
- ]
- },
- "metadata": {},
- "execution_count": 20
- },
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "Question: LiteLLM is a lightweight Python package that simplifies the process of making API calls to various language models. Here are some reasons why you should use LiteLLM:\n",
- "\n",
- "1. **Simplified API Calls**: LiteLLM abstracts away the complexity of making API calls to different language models. It provides a unified interface for invoking models from OpenAI, Azure, Cohere, Anthropic, Huggingface, and more.\n",
- "\n",
- "2. **Easy Integration**: LiteLLM seamlessly integrates with your existing codebase. You can import the package and start making API calls with just a few lines of code.\n",
- "\n",
- "3. **Flexibility**: LiteLLM supports a variety of language models, including GPT-3, GPT-Neo, chatGPT, and more. You can choose the model that suits your requirements and easily switch between them.\n",
- "\n",
- "4. **Convenience**: LiteLLM handles the authentication and connection details for you. You just need to set the relevant environment variables, and the package takes care of the rest.\n",
- "\n",
- "5. **Quick Prototyping**: LiteLLM is ideal for rapid prototyping and experimentation. With its simple API, you can quickly generate text, chat with models, and build interactive applications.\n",
- "\n",
- "6. **Community Support**: LiteLLM is actively maintained and supported by a community of developers. You can find help, share ideas, and collaborate with others to enhance your projects.\n",
- "\n",
- "Overall, LiteLLM simplifies the process of making API calls to language models, saving you time and effort while providing flexibility and convenience\n"
- ]
- },
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- ""
- ],
- "text/html": [
- "
\n",
- " \n",
- "
\n",
- "
\n",
- "
Model
\n",
- "
Question
\n",
- "
Response
\n",
- "
ResponseTime
\n",
- "
Cost
\n",
- "
\n",
- " \n",
- " \n",
- "
\n",
- "
0
\n",
- "
togethercomputer/llama-2-70b-chat
\n",
- "
LiteLLM is a lightweight Python package that simplifies the process of making API calls to various language models. Here are some reasons why you should use LiteLLM:\\n\\n1. **Simplified API Calls**: LiteLLM abstracts away the complexity of making API calls to different language models. It provides a unified interface for invoking models from OpenAI, Azure, Cohere, Anthropic, Huggingface, and more.\\n\\n2. **Easy Integration**: LiteLLM seamlessly integrates with your existing codebase. You can import the package and start making API calls with just a few lines of code.\\n\\n3. **Flexibility**: LiteLLM supports a variety of language models, including GPT-3, GPT-Neo, chatGPT, and more. You can choose the model that suits your requirements and easily switch between them.\\n\\n4. **Convenience**: LiteLLM handles the authentication and connection details for you. You just need to set the relevant environment variables, and the package takes care of the rest.\\n\\n5. **Quick Prototyping**: LiteLLM is ideal for rapid prototyping and experimentation. With its simple API, you can quickly generate text, chat with models, and build interactive applications.\\n\\n6. **Community Support**: LiteLLM is actively maintained and supported by a community of developers. You can find help, share ideas, and collaborate with others to enhance your projects.\\n\\nOverall, LiteLLM simplifies the process of making API calls to language models, saving you time and effort while providing flexibility and convenience
\n",
- "
Here's a more concise version of the user input:\\n\\nLiteLLM is a lightweight Python package that simplifies API calls to various language models. It abstracts away complexity, integrates seamlessly, supports multiple models, and handles authentication. It's ideal for rapid prototyping and has community support. It saves time and effort while providing flexibility and convenience.
\n",
- "
11.294250
\n",
- "
0.001251
\n",
- "
\n",
- "
\n",
- "
1
\n",
- "
gpt-3.5-turbo
\n",
- "
LiteLLM is a lightweight Python package that simplifies the process of making API calls to various language models. Here are some reasons why you should use LiteLLM:\\n\\n1. **Simplified API Calls**: LiteLLM abstracts away the complexity of making API calls to different language models. It provides a unified interface for invoking models from OpenAI, Azure, Cohere, Anthropic, Huggingface, and more.\\n\\n2. **Easy Integration**: LiteLLM seamlessly integrates with your existing codebase. You can import the package and start making API calls with just a few lines of code.\\n\\n3. **Flexibility**: LiteLLM supports a variety of language models, including GPT-3, GPT-Neo, chatGPT, and more. You can choose the model that suits your requirements and easily switch between them.\\n\\n4. **Convenience**: LiteLLM handles the authentication and connection details for you. You just need to set the relevant environment variables, and the package takes care of the rest.\\n\\n5. **Quick Prototyping**: LiteLLM is ideal for rapid prototyping and experimentation. With its simple API, you can quickly generate text, chat with models, and build interactive applications.\\n\\n6. **Community Support**: LiteLLM is actively maintained and supported by a community of developers. You can find help, share ideas, and collaborate with others to enhance your projects.\\n\\nOverall, LiteLLM simplifies the process of making API calls to language models, saving you time and effort while providing flexibility and convenience
\n",
- "
LiteLLM is a lightweight Python package that simplifies API calls to various language models. Here's why you should use it:\\n1. Simplified API Calls: Works with multiple models (OpenAI, Azure, Cohere, Anthropic, Huggingface).\\n2. Easy Integration: Import and start using it quickly in your codebase.\\n3. Flexibility: Supports GPT-3, GPT-Neo, chatGPT, etc. easily switch between models.\\n4. Convenience: Handles authentication and connection details, just set environment variables.\\n5. Quick Prototyping: Great for rapid prototyping and building interactive applications.\\n6. Community Support: Actively maintained and supported by a developer community.
\n",
- "
9.778315
\n",
- "
0.000795
\n",
- "
\n",
- "
\n",
- "
2
\n",
- "
claude-instant-1.2
\n",
- "
LiteLLM is a lightweight Python package that simplifies the process of making API calls to various language models. Here are some reasons why you should use LiteLLM:\\n\\n1. **Simplified API Calls**: LiteLLM abstracts away the complexity of making API calls to different language models. It provides a unified interface for invoking models from OpenAI, Azure, Cohere, Anthropic, Huggingface, and more.\\n\\n2. **Easy Integration**: LiteLLM seamlessly integrates with your existing codebase. You can import the package and start making API calls with just a few lines of code.\\n\\n3. **Flexibility**: LiteLLM supports a variety of language models, including GPT-3, GPT-Neo, chatGPT, and more. You can choose the model that suits your requirements and easily switch between them.\\n\\n4. **Convenience**: LiteLLM handles the authentication and connection details for you. You just need to set the relevant environment variables, and the package takes care of the rest.\\n\\n5. **Quick Prototyping**: LiteLLM is ideal for rapid prototyping and experimentation. With its simple API, you can quickly generate text, chat with models, and build interactive applications.\\n\\n6. **Community Support**: LiteLLM is actively maintained and supported by a community of developers. You can find help, share ideas, and collaborate with others to enhance your projects.\\n\\nOverall, LiteLLM simplifies the process of making API calls to language models, saving you time and effort while providing flexibility and convenience
\n",
- "
Here is a concise rewrite of the user input:\\n\\nLiteLLM is a lightweight Python package that simplifies accessing various language models. It provides a unified interface for models from OpenAI, Azure, Cohere, Anthropic, Huggingface, and more. Key benefits include simplified API calls, easy integration, flexibility to use different models, automated handling of authentication, and support for quick prototyping. The actively maintained package saves time by abstracting away complexity while offering convenience and a collaborative community.
\n",
- "
7.697520
\n",
- "
0.001098
\n",
- "
\n",
- " \n",
- "
"
- ]
- },
- "metadata": {},
- "execution_count": 20
- },
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "Question: Traceloop is a platform for monitoring and debugging the quality of your LLM outputs. It provides you with a way to track the performance of your LLM application; rollout changes with confidence; and debug issues in production. It is based on OpenTelemetry, so it can provide full visibility to your LLM requests, as well vector DB usage, and other infra in your stack.\n"
- ]
- },
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- ""
- ],
- "text/html": [
- "
\n",
- " \n",
- "
\n",
- "
\n",
- "
Model
\n",
- "
Question
\n",
- "
Response
\n",
- "
ResponseTime
\n",
- "
Cost
\n",
- "
\n",
- " \n",
- " \n",
- "
\n",
- "
6
\n",
- "
togethercomputer/llama-2-70b-chat
\n",
- "
Traceloop is a platform for monitoring and debugging the quality of your LLM outputs. It provides you with a way to track the performance of your LLM application; rollout changes with confidence; and debug issues in production. It is based on OpenTelemetry, so it can provide full visibility to your LLM requests, as well vector DB usage, and other infra in your stack.
\n",
- "
\\nRewritten input: Traceloop is a platform for monitoring and debugging LLM outputs. It allows users to track performance, rollout changes confidently, and debug issues in production. It uses OpenTelemetry for full visibility into LLM requests, vector DB usage, and other infrastructure.
\n",
- "
9.060444
\n",
- "
0.000525
\n",
- "
\n",
- "
\n",
- "
7
\n",
- "
gpt-3.5-turbo
\n",
- "
Traceloop is a platform for monitoring and debugging the quality of your LLM outputs. It provides you with a way to track the performance of your LLM application; rollout changes with confidence; and debug issues in production. It is based on OpenTelemetry, so it can provide full visibility to your LLM requests, as well vector DB usage, and other infra in your stack.
\n",
- "
Traceloop is a platform for monitoring and debugging the quality of your LLM outputs. It helps track performance, rollout changes, and debug issues in production. It is based on OpenTelemetry, providing visibility to LLM requests, vector DB usage, and other infrastructure in your stack.
\n",
- "
7.304661
\n",
- "
0.000283
\n",
- "
\n",
- "
\n",
- "
8
\n",
- "
claude-instant-1.2
\n",
- "
Traceloop is a platform for monitoring and debugging the quality of your LLM outputs. It provides you with a way to track the performance of your LLM application; rollout changes with confidence; and debug issues in production. It is based on OpenTelemetry, so it can provide full visibility to your LLM requests, as well vector DB usage, and other infra in your stack.
\n",
- "
Here is a more concise rewrite of the user input:\\n\\nTraceloop monitors and debugs LLM quality. It tracks LLM performance, enables confident changes, and debugs production issues. Based on OpenTelemetry, Traceloop provides full visibility into LLM requests, vector DB usage, and other stack infrastructure.
\n"
- ],
- "text/plain": [
- "Model Name claude-instant-1 \\\n",
- "Prompt \n",
- "\\nIs paul graham a writer? Yes, Paul Graham is considered a writer in ad... \n",
- "\\nWhat has Paul Graham done? Paul Graham has made significant contribution... \n",
- "\\nWhat is Paul Graham known for? Paul Graham is known for several things:\\n\\n-... \n",
- "\\nWhere does Paul Graham live? Based on the information provided:\\n\\n- Paul ... \n",
- "\\nWho is Paul Graham? Paul Graham is an influential computer scient... \n",
- "\n",
- "Model Name gpt-3.5-turbo-0613 \\\n",
- "Prompt \n",
- "\\nIs paul graham a writer? Yes, Paul Graham is a writer. He has written s... \n",
- "\\nWhat has Paul Graham done? Paul Graham has achieved several notable accom... \n",
- "\\nWhat is Paul Graham known for? Paul Graham is known for his work on the progr... \n",
- "\\nWhere does Paul Graham live? According to the given information, Paul Graha... \n",
- "\\nWho is Paul Graham? Paul Graham is an English computer scientist, ... \n",
- "\n",
- "Model Name gpt-3.5-turbo-16k-0613 \\\n",
- "Prompt \n",
- "\\nIs paul graham a writer? Yes, Paul Graham is a writer. He has authored ... \n",
- "\\nWhat has Paul Graham done? Paul Graham has made significant contributions... \n",
- "\\nWhat is Paul Graham known for? Paul Graham is known for his work on the progr... \n",
- "\\nWhere does Paul Graham live? Paul Graham currently lives in England, where ... \n",
- "\\nWho is Paul Graham? Paul Graham is an English computer scientist, ... \n",
- "\n",
- "Model Name gpt-4-0613 \\\n",
- "Prompt \n",
- "\\nIs paul graham a writer? Yes, Paul Graham is a writer. He is an essayis... \n",
- "\\nWhat has Paul Graham done? Paul Graham is known for his work on the progr... \n",
- "\\nWhat is Paul Graham known for? Paul Graham is known for his work on the progr... \n",
- "\\nWhere does Paul Graham live? The text does not provide a current place of r... \n",
- "\\nWho is Paul Graham? Paul Graham is an English computer scientist, ... \n",
- "\n",
- "Model Name replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781 \n",
- "Prompt \n",
- "\\nIs paul graham a writer? Yes, Paul Graham is an author. According to t... \n",
- "\\nWhat has Paul Graham done? Paul Graham has had a diverse career in compu... \n",
- "\\nWhat is Paul Graham known for? Paul Graham is known for many things, includi... \n",
- "\\nWhere does Paul Graham live? Based on the information provided, Paul Graha... \n",
- "\\nWho is Paul Graham? Paul Graham is an English computer scientist,... "
- ]
- },
- "execution_count": 17,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "import pandas as pd\n",
- "\n",
- "# Create an empty list to store the row data\n",
- "table_data = []\n",
- "\n",
- "# Iterate through the list and extract the required data\n",
- "for item in result:\n",
- " prompt = item['prompt'][0]['content'].replace(context, \"\") # clean the prompt for easy comparison\n",
- " model = item['response']['model']\n",
- " response = item['response']['choices'][0]['message']['content']\n",
- " table_data.append([prompt, model, response])\n",
- "\n",
- "# Create a DataFrame from the table data\n",
- "df = pd.DataFrame(table_data, columns=['Prompt', 'Model Name', 'Response'])\n",
- "\n",
- "# Pivot the DataFrame to get the desired table format\n",
- "table = df.pivot(index='Prompt', columns='Model Name', values='Response')\n",
- "table"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {
- "id": "zOxUM40PINDC"
- },
- "source": [
- "# Load Test endpoint\n",
- "\n",
- "Run 100+ simultaneous queries across multiple providers to see when they fail + impact on latency"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "ZkQf_wbcIRQ9"
- },
- "outputs": [],
- "source": [
- "models=[\"gpt-3.5-turbo\", \"replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781\", \"claude-instant-1\"]\n",
- "context = \"\"\"Paul Graham (/ɡræm/; born 1964)[3] is an English computer scientist, essayist, entrepreneur, venture capitalist, and author. He is best known for his work on the programming language Lisp, his former startup Viaweb (later renamed Yahoo! Store), cofounding the influential startup accelerator and seed capital firm Y Combinator, his essays, and Hacker News. He is the author of several computer programming books, including: On Lisp,[4] ANSI Common Lisp,[5] and Hackers & Painters.[6] Technology journalist Steven Levy has described Graham as a \"hacker philosopher\".[7] Graham was born in England, where he and his family maintain permanent residence. However he is also a citizen of the United States, where he was educated, lived, and worked until 2016.\"\"\"\n",
- "prompt = \"Where does Paul Graham live?\"\n",
- "final_prompt = context + prompt\n",
- "result = load_test_model(models=models, prompt=final_prompt, num_calls=5)"
- ]
- },
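`load_test_model` is used here and again in the duration test below, but its definition is not part of this excerpt. As a rough illustration of the idea (fire a batch of concurrent `completion` calls per model and record each call's latency), here is a minimal sketch. The helper name, signature, and return shape are assumptions chosen to line up with how result["results"] is consumed in the visualization cell below; this is not litellm's actual implementation.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from litellm import completion

def simple_load_test(models, prompt, num_calls=5):
    """Hypothetical stand-in for load_test_model: fires num_calls concurrent
    requests per model and records each response with its latency."""
    def call_once(model):
        start = time.time()
        response = completion(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return {"response": response, "response_time": time.time() - start}

    with ThreadPoolExecutor(max_workers=16) as pool:
        futures = [
            pool.submit(call_once, model)
            for model in models
            for _ in range(num_calls)
        ]
        results = [future.result() for future in futures]

    return {"results": results}

# usage mirroring the cell above (names are illustrative)
# result = simple_load_test(models=models, prompt=final_prompt, num_calls=5)
```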
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {
- "id": "8vSNBFC06aXY"
- },
- "source": [
- "## Visualize the data"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 19,
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 552
- },
- "id": "SZfiKjLV3-n8",
- "outputId": "00f7f589-b3da-43ed-e982-f9420f074b8d"
- },
- "outputs": [
- {
- "data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAioAAAIXCAYAAACy1HXAAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAABn5UlEQVR4nO3dd1QT2d8G8Cf0ojQBEUFRsSv2FXvvvSx2saNi7733ihXELotd7KuIir33sjZUsIuKVGmS+/7hy/yM6K7RYEZ4PufkaO5Mkm/IJHly594ZhRBCgIiIiEiGdLRdABEREdG3MKgQERGRbDGoEBERkWwxqBAREZFsMagQERGRbDGoEBERkWwxqBAREZFsMagQERGRbDGoEBERkWwxqBCR7Dk5OaFLly7aLkNtc+fORd68eaGrq4uSJUtquxyNO3bsGBQKBbZv367tUtSmUCgwadIktW8XGhoKhUKBdevWabwm+joGFVKxfPlyKBQKlC9fXtulyI6TkxMUCoV0MTU1xR9//IENGzZou7TfTuoX3PdcfleHDh3CiBEjUKlSJaxduxYzZszQdkmys27dOul1PnXqVJrlQgg4OjpCoVCgcePGWqiQ5EBP2wWQvPj7+8PJyQkXLlxASEgInJ2dtV2SrJQsWRJDhw4FALx8+RKrVq2Cu7s7EhMT0bNnTy1X9/soXLgw/Pz8VNpGjx6NLFmyYOzYsWnWv3fvHnR0fq/fVUePHoWOjg5Wr14NAwMDbZcja0ZGRti4cSMqV66s0n78+HE8e/YMhoaGWqqM5IBBhSSPHz/GmTNnEBAQAA8PD/j7+2PixIm/tAalUomkpCQYGRn90sf9Xjlz5kTHjh2l6126dEHevHmxcOFCBhU1ZM+eXeXvCACzZs2CtbV1mnYAv+UXVXh4OIyNjTUWUoQQSEhIgLGxsUbuT04aNmyIbdu2YfHixdDT+9/X0saNG1GmTBm8fftWi9WRtv1eP1EoXfn7+8PS0hKNGjVC69at4e/vLy1LTk6GlZUVunbtmuZ20dHRMDIywrBhw6S2xMRETJw4Ec7OzjA0NISjoyNGjBiBxMREldsqFAr069cP/v7+KFq0KAwNDXHw4EEAwLx581CxYkVky5YNxsbGKFOmzFf3hcfHx2PAgAGwtrZG1qxZ0bRpUzx//vyr+6CfP3+Obt26IXv27DA0NETRokWxZs2aH/6b2djYoFChQnj48KFKu1KphJeXF4oWLQojIyNkz54dHh4eeP/+vcp6ly5dQr169WBtbQ1jY2PkyZMH3bp1k5an7g+fN28eFi5ciNy5c8PY2BjVqlXDrVu30tRz9OhRVKlSBaamprCwsECzZs1w584dlXUmTZoEhUKBkJAQdOnSBRYWFjA3N0fXrl3x4cMHlXWDgoJQuXJlWFhYIEuWLChYsCDGjBmjss73vtY/48sxKqm7DE6dOoUBAwbAxsYGFhYW8PDwQFJSEiIjI9G5c2dYWlrC0tISI0aMwJcnitfUa/Q1CoUCa9euRVxcnLRrI3VMw8ePHzF16lTky5cPhoaGcHJywpgxY9L8vZycnNC4cWMEBgaibNmyMDY2xooVK/71cc+fP4/69evD3NwcJiYmqFatGk6fPq2yTlhYGPr27YuCBQvC2NgY2bJlw59//onQ0NA09xcZGYnBgwfDyckJhoaGcHBwQOfOndMEB6VSienTp8PBwQFGRkaoVasWQkJC/rXWz7Vr1w7v3r1DUFCQ1JaUlITt27ejffv2X71NXFwchg4dCkdHRxgaGqJgwYKYN29emtc5MTERgwcPho2NjfT58OzZs6/ep6Y/H0hDBNH/K1SokOjevbsQQogTJ04IAOLChQvS8m7dugkLCwuRmJiocrv169cLAOLixYtCCCFSUlJE3bp1hYmJiRg0aJBYsWKF6Nevn9DT0xPNmjVTuS0AUbhwYWFjYyMmT54sli1bJq5evSqEEMLBwUH07dtXLF26VCxYsED88ccfAoDYt2+fyn24ubkJAKJTp05i2bJlws3NTZQoUUIAEBMnTpTWe/XqlXBwcBCOjo5iypQpwtvbWzRt2lQAEAsXLvzPv0/u3LlFo0aNVNqSk5OFnZ2dyJ49u0p7jx49hJ6enujZs6fw8fERI0eOFKampqJcuXIiKSlJCCHE69evhaWlpShQoICYO3euWLlypRg7dqwoXLiwdD+PHz8WAETx4sWFk5OTmD17tpg8ebKwsrISNjY24tWrV9K6QUFBQk9PTxQoUEDMmTNHTJ48WVhbWwtLS0vx+PFjab2JEycKAKJUqVKiZcuWYvny5aJHjx4CgBgxYoS03q1bt4SBgYEoW7asWLRokfDx8RHDhg0TVatWldZR57X+L0WLFhXVqlX75t/e3d1dur527VoBQJQsWVLUr19fLFu2THTq1El6DpUrVxbt27cXy5cvF40bNxYAxPr169PlNfoaPz8/UaVKFWFoaCj8/PyEn5+fePjwoRBCCHd3dwFAtG7dWixbtkx07txZABDNmzdP85ydnZ2FpaWlGDVqlPDx8RHBwcHffMwjR44IAwMDUaFCBTF//nyxcOFC4eLiIgwMDMT58+el9bZt2yZKlCghJkyYIHx9fcWYMWOEpaWlyJ07t4iLi5PWi4mJEcWKFRO6urqiZ8+ewtvbW0ydOlWUK1dOeo8GBwdL21KZMmXEwoULxaRJk4SJiYn4448//vVv9PnrePHiRVGxYkXRqVMnadmuXbuEjo6OeP78eZr3nlKpFDVr1hQKhUL06NFDLF26VDRp0kQAEIMGDVJ5jI4dOwoAon379mLp0qWiZcuWwsXF5Yc/H1Lfk2vXrv3P50eawaBCQgghLl26JACIoKAgIcSnDwIHBwcxcOBAaZ3AwEABQOzdu1fltg0bNhR58+aVrvv5+QkdHR1x8uRJlfV8fHwEAHH69GmpDYDQ0dERt2/fTlPThw8fVK4nJSWJYsWKiZo1a0ptly9f/uqHU5cuXdJ8EHXv3l3kyJFDvH37VmXdtm3bCnNz8zSP96XcuXOLunXrijdv3og3b96ImzdvSl+Onp6e0nonT54UAIS/v7/K7Q8ePKjSvnPnTpWA9zWpH4rGxsbi2bNnUvv58+cFADF48GCprWTJksLW1la8e/dOart+/brQ0dERnTt3ltpSg0q3bt1UHqtFixYiW7Zs0vWFCxcKAOLNmzffrE+d1/q//EhQqVevnlAqlVJ7hQoVhEKhEL1795baPn78KBwcHFTuW5Ov0be4u7sLU1NTlbZr164JAKJHjx4q7cOGDRMAxNGjR1WeMwBx8ODB/3wspVIp8ufPn+bv8eHDB5EnTx5Rp04dlbYvnT17VgAQGzZskNomTJggAIiAgICvPp4Q/wsqhQsXVvkBs2jRIgFA3Lx581/r/jyoLF26VGTNmlWq788//xQ1atSQ/hafB5Vdu3YJAGLatGkq99e6dWuhUChESEiIEOJ/f+++ffuqrNe+ffsf/nxgUPn1uOuHAHza7ZM9e3bUqFEDwKeu6zZt2mDz5s1ISUkBANSsWRPW1tbYsmW
LdLv3798jKCgIbdq0kdq2bduGwoULo1ChQnj79q10qVmzJgAgODhY5bGrVauGIkWKpKnp833x79+/R1RUFKpUqYIrV65I7am7ifr27aty2/79+6tcF0Jgx44daNKkCYQQKnXVq1cPUVFRKvf7LYcOHYKNjQ1sbGxQvHhx+Pn5oWvXrpg7d67K8zc3N0edOnVUHqdMmTLIkiWL9PwtLCwAAPv27UNycvK/Pm7z5s2RM2dO6foff/yB8uXL4++//wbwaWDvtWvX0KVLF1hZWUnrubi4oE6dOtJ6n+vdu7fK9SpVquDdu3eIjo5WqW/37t1QKpVfrUvd11rTunfvrjIzqHz58hBCoHv37lKbrq4uypYti0ePHqnUrenX6Hukvg5DhgxRaU8doL1//36V9jx58qBevXr/eb/Xrl3DgwcP0L59e7x79056PnFxcahVqxZOnDghvYafv6+Sk5Px7t07ODs7w8LCQuU9sGPHDpQoUQItWrRI83hfzsbq2rWrylicKlWqAIDK3/y/uLm5IT4+Hvv27UNMTAz27dv3zd0+f//9N3R1dTFgwACV9qFDh0IIgQMHDkjrAUiz3qBBg1Sua+rzgdJHhgkqJ06cQJMmTWBvbw+FQoFdu3al+2M+f/4cHTt2lMZQFC9eHJcuXUr3x9W0lJQUbN68GTVq1MDjx48REhKCkJAQlC9fHq9fv8aRI0cAAHp6emjVqhV2794t7U8PCAhAcnKySlB58OABbt++LX2hp14KFCgA4NMgw8/lyZPnq3Xt27cPrq6uMDIygpWVFWxsbODt7Y2oqChpnbCwMOjo6KS5jy9nK7158waRkZHw9fVNU1fquJsv6/qa8uXLIygoCAcPHsS8efNgYWGB9+/fq3xIP3jwAFFRUbC1tU3zWLGxsdLjVKtWDa1atcLkyZNhbW2NZs2aYe3atV8d25E/f/40bQUKFJDGFYSFhQEAChYsmGa9woULS19an8uVK5fKdUtLSwCQxmi0adMGlSpVQo8ePZA9e3a0bdsWW7duVQkt6r7WmvblczA3NwcAODo6pmn/fOxJerxG3yN1e/1y+7Szs4OFhYX0Oqb61nvjSw8ePAAAuLu7p3k+q1atQmJiovS+iY+Px4QJE6SxHdbW1rCxsUFkZKTKe+vhw4coVqzYdz3+f21L38PGxga1a9fGxo0bERAQgJSUFLRu3fqr64aFhcHe3h5Zs2ZVaS9cuLC0PPVfHR0d5MuXT2W9L98nmvp8oPSRYWb9xMXFoUSJEujWrRtatmyZ7o/3/v17VKpUCTVq1MCBAwdgY2ODBw8eSG/Q38nRo0fx8uVLbN68GZs3b06z3N/fH3Xr1gUAtG3bFitWrMCBAwfQvHlzbN26FYUKFUKJEiWk9ZVKJYoXL44FCxZ89fG+/BL52iyGkydPomnTpqhatSqWL1+OHDlyQF9fH2vXrsXGjRvVfo6pX64dO3aEu7v7V9dxcXH5z/uxtrZG7dq1AQD16tVDoUKF0LhxYyxatEj6laxUKmFra6syGPlzNjY2ACAdKOvcuXPYu3cvAgMD0a1bN8yfPx/nzp1DlixZ1H6e6tDV1f1qu/j/wYjGxsY4ceIEgoODsX//fhw8eBBbtmxBzZo1cejQIejq6qr9Wmvat57D19rFZ4Mstf0afe/xYb53hk/q9j137txvHlgutdb+/ftj7dq1GDRoECpUqABzc3MoFAq0bdv2mz1n/+W/tqXv1b59e/Ts2ROvXr1CgwYNpB6t9KapzwdKHxkmqDRo0AANGjT45vLExESMHTsWmzZtQmRkJIoVK4bZs2ejevXqP/R4s2fPhqOjI9auXSu1fe+vH7nx9/eHra0tli1blmZZQEAAdu7cCR8fHxgbG6Nq1arIkSMHtmzZgsqVK+Po0aNpjnuRL18+XL9+HbVq1frhA3bt2LEDRkZGCAwMVJma+vnfGwBy584NpVKJx48fq/Q6fDnjIHXEf0pKihQ0NKFRo0aoVq0aZsyYAQ8PD5iamiJfvnw4fPgwKlWq9F1fNK6urnB1dcX06dOxceNGdOjQAZs3b0aPHj2kdVJ/MX/u/v37cHJyAvDp7wB8Ot7Il+7evQtra2uYmpqq/fx0dHRQq1Yt1KpVCwsWLMCMGTMwduxYBAcHo3bt2hp5rbUhPV6j75G6vT548ED69Q8Ar1+/RmRkpPQ6qiu1x8DMzOw/t+/t27fD3d0d8+fPl9oSEhIQGRmZ5j6/NrMsPbVo0QIeHh44d+6cyi7mL+XOnRuHDx9GTEyMSq/K3bt3peWp/yqVSjx8+FClF+XL90l6fT6QZmSYXT//pV+/fjh79iw2b96MGzdu4M8//0T9+vW/+gXwPfbs2YOyZcvizz//hK2tLUqVKoWVK1dquOr0Fx8fj4CAADRu3BitW7dOc+nXrx9iYmKwZ88eAJ++uFq3bo29e/fCz88PHz9+VNntA3za1/z8+fOv/j3i4+PT7IL4Gl1dXSgUCml8DPBpqu6Xu/RS998vX75cpX3JkiVp7q9Vq1bYsWPHVz9837x58581fcvIkSPx7t076fm6ubkhJSUFU6dOTbPux48fpS+E9+/fp/nFmfpr+MtdC7t27cLz58+l6xcuXMD58+elcJ4jRw6ULFkS69evV/nCuXXrFg4dOoSGDRuq/bwiIiLStH1ZnyZea21Ij9foe6S+Dl5eXirtqT1SjRo1Uvs+AaBMmTLIly8f5s2bh9jY2DTLP9++dXV10zynJUuWqLzXAKBVq1a4fv06du7cmeb+1O0p+V5ZsmSBt7c3Jk2ahCZNmnxzvYYNGyIlJQVLly5VaV+4cCEUCoX0vkj9d/HixSrrffn3T8/PB/p5GaZH5d88efIEa9euxZMnT2Bvbw8AGDZsGA4ePPjDh7Z+9OgRvL29MWTIEIwZMwYXL17EgAEDYGBg8M2uQznas2cPYmJi0LRp068ud3V1hY2NDfz9/aVA0qZNGyxZsgQTJ05E8eLFVX4ZAkCnTp2wdetW9O7dG8HBwahUqRJSUlJw9+5dbN26VTouxL9p1KgRFixYgPr166N9+/YIDw/HsmXL4OzsjBs3bkjrlSlTBq1atYKXlxfevXsHV1dXHD9+HPfv3weg2sU+a9YsBAcHo3z58ujZsyeKFCmCiIgIXLlyBYcPH/7qF/P3aNCgAYoVK4YFCxbA09MT1apVg4eHB2bOnIlr166hbt260NfXx4MHD7Bt2zYsWrQIrVu3xvr167F8+XK0aNEC+fLlQ0xMDFauXAkzM7M0wcLZ2RmVK1dGnz59kJiYCC8vL2TLlg0jRoyQ1pk7dy4aNGiAChUqoHv37oiPj8eSJUtgbm7+Q+c0mTJlCk6cOIFGjRohd+7cCA8Px/Lly+Hg4CAdQVQTr7U2pMdr9D1KlCgBd3d3+Pr6IjIyEtWqVcOFCxewfv16NG/eXBrMri4dHR2sWrUKDRo0QNGiRdG1a1fkzJkTz58/R3BwMMzMzLB3714AQOPGjeHn5wdzc3MUKVIEZ8+exeHDh5EtWzaV+xw+fDi2b9+OP//8E926dUOZMmUQERGBPXv2wM
fHR2V3ryZ9z+dnkyZNUKNGDYwdOxahoaEoUaIEDh06hN27d2PQoEFSD1PJkiXRrl07LF++HFFRUahYsSKOHDny1WO8pNfnA2mAVuYapTMAYufOndL1ffv2CQDC1NRU5aKnpyfc3NyEEELcuXNHAPjXy8iRI6X71NfXFxUqVFB53P79+wtXV9df8hw1pUmTJsLIyEjl+Alf6tKli9DX15em7SmVSuHo6PjV6YGpkpKSxOzZs0XRokWFoaGhsLS0FGXKlBGTJ08WUVFR0nr4Ymrv51avXi3y588vDA0NRaFChcTatWulqbWfi4uLE56ensLKykpkyZJFNG/eXNy7d08AELNmzVJZ9/Xr18LT01M4OjoKfX19YWdnJ2rVqiV8fX3/82/1teOopFq3bl2aKYu+vr6iTJkywtjYWGTNmlUUL15cjBgxQrx48UIIIcSVK1dEu3btRK5cuYShoaGwtbUVjRs3FpcuXZLuI3Uq5Ny5c8X8+fOFo6OjMDQ0FFWqVBHXr19PU8fhw4dFpUqVhLGxsTAzMxNNmjQR//zzj8o6qX/DL6cdp04VTT3mypEjR0SzZs2Evb29MDAwEPb29qJdu3bi/v37Krf73tf6v/zI9OQvpw1/67l9baqwEJp5jb7lW4+ZnJwsJk+eLPLkySP09fWFo6OjGD16tEhISEjznL+1vX3L1atXRcuWLUW2bNmEoaGhyJ07t3BzcxNHjhyR1nn//r3o2rWrsLa2FlmyZBH16tUTd+/eTfM3FkKId+/eiX79+omcOXMKAwMD4eDgINzd3aXPgtTpydu2bVO53fdO4f3W6/ilr/0tYmJixODBg4W9vb3Q19cX+fPnF3PnzlWZni2EEPHx8WLAgAEiW7ZswtTUVDRp0kQ8ffo0zfRkIb7v84HTk389hRDp1IenRQqFAjt37kTz5s0BAFu2bEGHDh1w+/btNIO+smTJAjs7OyQlJf3nVLps2bJJg+xy586NOnXqYNWqVdJyb29vTJs2TaWLnrTj2rVrKFWqFP766y906NBB2+X8sNDQUOTJkwdz585VOfIvEVFmkSl2/ZQqVQopKSkIDw+X5vd/ycDAAIUKFfru+6xUqVKaAVn379//4cFw9OPi4+PTDIj08vKCjo4OqlatqqWqiIhIEzJMUImNjVXZ7/j48WNcu3YNVlZWKFCgADp06IDOnTtj/vz5KFWqFN68eYMjR47AxcXlhwawDR48GBUrVsSMGTPg5uaGCxcuwNfXF76+vpp8WvQd5syZg8uXL6NGjRrQ09PDgQMHcODAAfTq1Svdp8cSEVE60/a+J01J3Vf65SV1n2tSUpKYMGGCcHJyEvr6+iJHjhyiRYsW4saNGz/8mHv37hXFihWTxlB8zzgH0rxDhw6JSpUqCUtLS6Gvry/y5csnJk2aJJKTk7Vd2k/7fIwKEVFmlCHHqBAREVHGkGmOo0JERES/HwYVIiIikq3fejCtUqnEixcvkDVr1t/q8N1ERESZmRACMTExsLe3h47Ov/eZ/NZB5cWLF5zVQURE9Jt6+vQpHBwc/nWd3zqopJ6M6unTpzAzM9NyNURERPQ9oqOj4ejoqHJSyW/5rYNK6u4eMzMzBhUiIqLfzPcM2+BgWiIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki09bRdARETy5TRqv7ZLIC0LndVIq4/PHhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki3ZBJVZs2ZBoVBg0KBB2i6FiIiIZEIWQeXixYtYsWIFXFxctF0KERERyYjWg0psbCw6dOiAlStXwtLSUtvlEBERkYxoPah4enqiUaNGqF279n+um5iYiOjoaJULERERZVx62nzwzZs348qVK7h48eJ3rT9z5kxMnjw5nasiIiIiudBaj8rTp08xcOBA+Pv7w8jI6LtuM3r0aERFRUmXp0+fpnOVREREpE1a61G5fPkywsPDUbp0aaktJSUFJ06cwNKlS5GYmAhdXV2V2xgaGsLQ0PBXl0pERERaorWgUqtWLdy8eVOlrWvXrihUqBBGjhyZJqQQERFR5qO1oJI1a1YUK1ZMpc3U1BTZsmVL005ERESZk9Zn/RARERF9i1Zn/Xzp2LFj2i6BiIiIZIQ9KkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWz8UVB4+fIhx48ahXbt2CA8PBwAcOHAAt2/f1mhxRERElLmpHVSOHz+O4sWL4/z58wgICEBsbCwA4Pr165g4caLGCyQiIqLMS+2gMmrUKEybNg1BQUEwMDCQ2mvWrIlz585ptDgiIiLK3NQOKjdv3kSLFi3StNva2uLt27caKYqIiIgI+IGgYmFhgZcvX6Zpv3r1KnLmzKmRooiIiIiAHwgqbdu2xciRI/Hq1SsoFAoolUqcPn0aw4YNQ+fOndOjRiIiIsqk1A4qM2bMQKFCheDo6IjY2FgUKVIEVatWRcWKFTFu3Lj0qJGIiIgyKT11b2BgYICVK1di/PjxuHXrFmJjY1GqVCnkz58/PeojIiKiTEztoJIqV65cyJUrlyZrISIiIlKhdlARQmD79u0IDg5GeHg4lEqlyvKAgACNFUdERESZm9pBZdCgQVixYgVq1KiB7NmzQ6FQpEddREREROoHFT8/PwQEBKBhw4bpUQ8RERGRRO1ZP+bm5sibN2961EJERESkQu2gMmnSJEyePBnx8fHpUQ8RERGRRO1dP25ubti0aRNsbW3h5OQEfX19leVXrlzRWHFERESUuakdVNzd3XH58mV07NiRg2mJiIgoXakdVPbv34/AwEBUrlw5PeohIiIikqg9RsXR0RFmZmbpUQsRERGRCrWDyvz58zFix
AiEhoamQzlERERE/6P2rp+OHTviw4cPyJcvH0xMTNIMpo2IiNBYcUSZndOo/dougbQsdFYjbZdApFVqBxUvL690KIOIiIgorR+a9UNERET0K3xXUImOjpYG0EZHR//ruhxoS0RERJryXUHF0tISL1++hK2tLSwsLL567BQhBBQKBVJSUjReJBEREWVO3xVUjh49CisrKwBAcHBwuhZERERElOq7gkq1atWQN29eXLx4EdWqVUvvmoiIiIgAqHEcldDQUO7WISIiol9K7QO+aZK3tzdcXFxgZmYGMzMzVKhQAQcOHNBmSURERCQjak1PDgwMhLm5+b+u07Rp0+++PwcHB8yaNQv58+eHEALr169Hs2bNcPXqVRQtWlSd0oiIiCgDUiuo/NcxVNSd9dOkSROV69OnT4e3tzfOnTvHoEJERETqBZVXr17B1tY2XQpJSUnBtm3bEBcXhwoVKnx1ncTERCQmJkrX/+uYLkRERPR7++4xKl87doom3Lx5E1myZIGhoSF69+6NnTt3okiRIl9dd+bMmTA3N5cujo6O6VITERERycN3BxUhRLoUULBgQVy7dg3nz59Hnz594O7ujn/++eer644ePRpRUVHS5enTp+lSExEREcnDd+/6cXd3h7GxscYLMDAwgLOzMwCgTJkyuHjxIhYtWoQVK1akWdfQ0BCGhoYar4GIiIjk6buDytq1a9OzDolSqVQZh0JERESZl9pnT9ak0aNHo0GDBsiVKxdiYmKwceNGHDt2DIGBgdosi4iIiGRCq0ElPDwcnTt3xsuXL2Fubg4XFxcEBgaiTp062iyLiIiIZEKrQWX16tXafHgiIiKSuR8+hH5ISAgCAwMRHx8PIP1mBREREVHmpXZQeffuHWrXro0CBQqgYcOGePnyJQCge/fuGDp0qMYLJCIiosxL7aAyePBg6Onp4cmTJzAxMZHa27Rpg4MHD2q0OCIiIsrc1B6jcujQIQQGBsLBwUGlPX/+/AgLC9NYYURERERq96jExcWp9KSkioiI4MHYiIiISKPUDipVqlTBhg0bpOsKhQJKpRJz5sxBjRo1NFocERERZW5q7/qZM2cOatWqhUuXLiEpKQkjRozA7du3ERERgdOnT6dHjURERJRJqd2jUqxYMdy/fx+VK1dGs2bNEBcXh5YtW+Lq1avIly9fetRIREREmdQPHfDN3NwcY8eO1XQtRERERCrU7lE5ePAgTp06JV1ftmwZSpYsifbt2+P9+/caLY6IiIgyN7WDyvDhwxEdHQ0AuHnzJoYMGYKGDRvi8ePHGDJkiMYLJCIiosxL7V0/jx8/RpEiRQAAO3bsQJMmTTBjxgxcuXIFDRs21HiBRERElHmp3aNiYGCADx8+AAAOHz6MunXrAgCsrKyknhYiIiIiTVC7R6Vy5coYMmQIKlWqhAsXLmDLli0AgPv376c5Wi0RERHRz1C7R2Xp0qXQ09PD9u3b4e3tjZw5cwIADhw4gPr162u8QCIiIsq81O5RyZUrF/bt25emfeHChRopiIiIiCjVDx1HRalUIiQkBOHh4VAqlSrLqlatqpHCiIiIiNQOKufOnUP79u0RFhYGIYTKMoVCgZSUFI0VR0RERJmb2kGld+/eKFu2LPbv348cOXJAoVCkR11ERERE6geVBw8eYPv27XB2dk6PeoiIiIgkas/6KV++PEJCQtKjFiIiIiIVaveo9O/fH0OHDsWrV69QvHhx6Ovrqyx3cXHRWHFERESUuakdVFq1agUA6Natm9SmUCgghOBgWiIiItKoHzrXDxEREdGvoHZQyZ07d3rUQURERJTGDx3w7eHDh/Dy8sKdO3cAAEWKFMHAgQORL18+jRZHREREmZvaQSUwMBBNmzZFyZIlUalSJQDA6dOnUbRoUezduxd16tTReJHa4jRqv7ZLIC0LndVI2yUQEWVqageVUaNGYfDgwZg1a1aa9pEjR2aooEJERETapfZxVO7cuYPu3bunae/WrRv++ecfjRRFREREBPxAULGxscG1a9fStF+7dg22traaqImIiIgIwA/s+unZsyd69eqFR48eoWLFigA+jVGZPXs2hgwZovECiYiIKPNSO6iMHz8eWbNmxfz58zF69GgAgL29PSZNmoQBAwZovEAiIiLKvNQOKgqFAoMHD8bgwYMRExMDAMiaNavGCyMiIiL6oeOoAEB4eDju3bsHAChUqBBsbGw0VhQRERER8AODaWNiYtCpUyfY29ujWrVqqFatGuzt7dGxY0dERUWlR41ERESUSakdVHr06IHz589j//79iIyMRGRkJPbt24dLly7Bw8MjPWokIiKiTErtXT/79u1DYGAgKleuLLXVq1cPK1euRP369TVaHBEREWVuaveoZMuWDebm5mnazc3NYWlpqZGiiIiIiIAfCCrjxo3DkCFD8OrVK6nt1atXGD58OMaPH6/R4oiIiChzU3vXj7e3N0JCQpArVy7kypULAPDkyRMYGhrizZs3WLFihbTulStXNFcpERERZTpqB5XmzZunQxlEREREaakdVCZOnJgedRARERGlofYYladPn+LZs2fS9QsXLmDQoEHw9fXVaGFEREREageV9u3bIzg4GMCnQbS1a9fGhQsXMHbsWEyZMkXjBRIREVHmpXZQuXXrFv744w8AwNatW1G8eHGcOXMG/v7+WLdunabrIyIiokxM7aCSnJwMQ0NDAMDhw4fRtGlTAJ/O9/Py5UvNVkdERESZmtpBpWjRovDx8cHJkycRFBQkHY32xYsXyJYtm8YLJCIiosxL7aAye/ZsrFixAtWrV0e7du1QokQJAMCePXukXUJEREREmqD29OTq1avj7du3iI6OVjlkfq9evWBiYqLR4oiIiChzU7tHBQCEELh8+TJWrFiBmJgYAICBgQGDChEREWmU2j0qYWFhqF+/Pp48eYLExETUqVMHWbNmxezZs5GYmAgfH5/0qJOIiIgyIbV7VAYOHIiyZcvi/fv3MDY2ltpbtGiBI0eOaLQ4IiIiytzU7lE5efIkzpw5AwMDA5V2JycnPH/+XGOFEREREando6JUKpGSkpKm/dmzZ8iaNatGiiIiIiICfiCo1K1bF15eXtJ1hUKB2NhYTJw4EQ0bNtRkbURERJTJqb3rZ/78+ahXrx6KFCmChIQEtG/fHg8ePIC1tTU2bdqUHjUSERFRJqV2UHFwcMD169exZcsWXL9+HbGxsejevTs6dOigMriWiIiI6GepHVQAQE9PDx06dECHDh2ktpcvX2L48OFYunSpxoojIiKizE2toHL79m0EBwfDwMAAbm5usLCwwNu3bzF9+nT4+Pggb9686VUnERERZULfPZh2z549KFWqFAYMGIDevXujbNmyCA4ORuHChXHnzh3s3LkTt2/fTs9aiYiIKJP57qAybdo0eHp6Ijo6GgsWLMCjR48wYMAA/P333zh48KB0FmUiIiIiTfnuoHLv3j14enoiS5Ys6N+/P3R0dLBw4UKUK1cuPesjIiKiTOy7g0pMTAzMzMwAALq6ujA2NuaYFCIiIkpXag2mDQwMhLm5OYBP
R6g9cuQIbt26pbJO06ZNNVcdERERZWpqBRV3d3eV6x4eHirXFQrFVw+vT0RERPQjvjuoKJXK9KyDiIiIKA21z/VDRERE9KtoNajMnDkT5cqVQ9asWWFra4vmzZvj3r172iyJiIiIZESrQeX48ePw9PTEuXPnEBQUhOTkZNStWxdxcXHaLIuIiIhk4ofO9aMpBw8eVLm+bt062Nra4vLly6hataqWqiIiIiK50GpQ+VJUVBQAwMrK6qvLExMTkZiYKF2Pjo7+JXURERGRdvzQrp/IyEisWrUKo0ePRkREBADgypUreP78+Q8XolQqMWjQIFSqVAnFihX76jozZ86Eubm5dHF0dPzhxyMiIiL5Uzuo3LhxAwUKFMDs2bMxb948REZGAgACAgIwevToHy7E09MTt27dwubNm7+5zujRoxEVFSVdnj59+sOPR0RERPKndlAZMmQIunTpggcPHsDIyEhqb9iwIU6cOPFDRfTr1w/79u1DcHAwHBwcvrmeoaEhzMzMVC5ERESUcak9RuXixYtYsWJFmvacOXPi1atXat2XEAL9+/fHzp07cezYMeTJk0fdcoiIiCgDUzuoGBoafnUQ6/3792FjY6PWfXl6emLjxo3YvXs3smbNKgUdc3NzGBsbq1saERERZTBq7/pp2rQppkyZguTkZACfzu/z5MkTjBw5Eq1atVLrvry9vREVFYXq1asjR44c0mXLli3qlkVEREQZkNpBZf78+YiNjYWtrS3i4+NRrVo1ODs7I2vWrJg+fbpa9yWE+OqlS5cu6pZFREREGZDau37Mzc0RFBSEU6dO4caNG4iNjUXp0qVRu3bt9KiPiIiIMrEfPuBb5cqVUblyZU3WQkRERKRC7aCyePHir7YrFAoYGRnB2dkZVatWha6u7k8XR0RERJmb2kFl4cKFePPmDT58+ABLS0sAwPv372FiYoIsWbIgPDwcefPmRXBwMI8cS0RERD9F7cG0M2bMQLly5fDgwQO8e/cO7969w/3791G+fHksWrQIT548gZ2dHQYPHpwe9RIREVEmonaPyrhx47Bjxw7ky5dPanN2dsa8efPQqlUrPHr0CHPmzFF7qjIRERHRl9TuUXn58iU+fvyYpv3jx4/SAdvs7e0RExPz89URERFRpqZ2UKlRowY8PDxw9epVqe3q1avo06cPatasCQC4efMmD4dPREREP03toLJ69WpYWVmhTJkyMDQ0hKGhIcqWLQsrKyusXr0aAJAlSxbMnz9f48USERFR5qL2GBU7OzsEBQXh7t27uH//PgCgYMGCKFiwoLROjRo1NFchERERZVo/fMC3QoUKoVChQpqshYiIiEjFDwWVZ8+eYc+ePXjy5AmSkpJUli1YsEAjhRERERGpHVSOHDmCpk2bIm/evLh79y6KFSuG0NBQCCFQunTp9KiRiIiIMim1B9OOHj0aw4YNw82bN2FkZIQdO3bg6dOnqFatGv7888/0qJGIiIgyKbWDyp07d9C5c2cAgJ6eHuLj45ElSxZMmTIFs2fP1niBRERElHmpHVRMTU2lcSk5cuTAw4cPpWVv377VXGVERESU6ak9RsXV1RWnTp1C4cKF0bBhQwwdOhQ3b95EQEAAXF1d06NGIiIiyqTUDioLFixAbGwsAGDy5MmIjY3Fli1bkD9/fs74ISIiIo1SK6ikpKTg2bNncHFxAfBpN5CPj0+6FEZERESk1hgVXV1d1K1bF+/fv0+veoiIiIgkag+mLVasGB49epQetRARERGpUDuoTJs2DcOGDcO+ffvw8uVLREdHq1yIiIiINEXtwbQNGzYEADRt2hQKhUJqF0JAoVAgJSVFc9URERFRpqZ2UAkODk6POoiIiIjSUDuoVKtWLT3qICIiIkpD7TEqAHDy5El07NgRFStWxPPnzwEAfn5+OHXqlEaLIyIiosxN7aCyY8cO1KtXD8bGxrhy5QoSExMBAFFRUZgxY4bGCyQiIqLM64dm/fj4+GDlypXQ19eX2itVqoQrV65otDgiIiLK3NQOKvfu3UPVqlXTtJubmyMyMlITNREREREB+IGgYmdnh5CQkDTtp06dQt68eTVSFBERERHwA0GlZ8+eGDhwIM6fPw+FQoEXL17A398fw4YNQ58+fdKjRiIiIsqk1J6ePGrUKCiVStSqVQsfPnxA1apVYWhoiGHDhqF///7pUSMRERFlUmoHFYVCgbFjx2L48OEICQlBbGwsihQpgixZsqRHfURERJSJqb3r56+//sKHDx9gYGCAIkWK4I8//mBIISIionShdlAZPHgwbG1t0b59e/z99988tw8RERGlG7WDysuXL7F582YoFAq4ubkhR44c8PT0xJkzZ9KjPiIiIsrE1A4qenp6aNy4Mfz9/REeHo6FCxciNDQUNWrUQL58+dKjRiIiIsqk1B5M+zkTExPUq1cP79+/R1hYGO7cuaOpuoiIiIh+7KSEHz58gL+/Pxo2bIicOXPCy8sLLVq0wO3btzVdHxEREWViaveotG3bFvv27YOJiQnc3Nwwfvx4VKhQIT1qIyIiokxO7aCiq6uLrVu3ol69etDV1VVZduvWLRQrVkxjxREREVHmpnZQ8ff3V7keExODTZs2YdWqVbh8+TKnKxMREZHG/NAYFQA4ceIE3N3dkSNHDsybNw81a9bEuXPnNFkbERERZXJq9ai8evUK69atw+rVqxEdHQ03NzckJiZi165dKFKkSHrVSERERJnUd/eoNGnSBAULFsSNGzfg5eWFFy9eYMmSJelZGxEREWVy392jcuDAAQwYMAB9+vRB/vz507MmIiIiIgBq9KicOnUKMTExKFOmDMqXL4+lS5fi7du36VkbERERZXLfHVRcXV2xcuVKvHz5Eh4eHti8eTPs7e2hVCoRFBSEmJiY9KyTiIiIMiG1Z/2YmpqiW7duOHXqFG7evImhQ4di1qxZsLW1RdOmTdOjRiIiIsqkfnh6MgAULFgQc+bMwbNnz7Bp0yZN1UREREQE4CeDSipdXV00b94ce/bs0cTdEREREQHQUFAhIiIiSg8MKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbDCpEREQkWwwqREREJFsMKkRERCRbWg0qJ06cQJMmTWBvbw+FQoFdu3ZpsxwiIiKSGa0Glbi4OJQoUQLLli3TZhlEREQkU3rafPAGDRqgQYMG2iyBiIiIZEyrQUVdiYmJSExMlK5HR0drsRoiIiJKb7/VYNqZM2fC3Nxcujg6Omq7JCIiIkpHv1VQGT16NKKioqTL06dPtV0SERERpaPfatePoaEhDA0NtV0GERER/SK/VY8KERERZS5a7VGJjY1FSEiIdP3x48e4du0arKyskCtXLi1WRkRERHKg1aBy6dIl1KhRQ7o+ZMgQAIC7uzvWrVunpaqIiIhILrQaVKpXrw4hhDZLICIiIhnjGBUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiK
SLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSIiIpItBhUiIiKSLVkElWXLlsHJyQlGRkYoX748Lly4oO2SiIiISAa0HlS2bNmCIUOGYOLEibhy5QpKlCiBevXqITw8XNulERERkZZpPagsWLAAPXv2RNeuXVGkSBH4+PjAxMQEa9as0XZpREREpGVaDSpJSUm4fPkyateuLbXp6Oigdu3aOHv2rBYrIyIiIjnQ0+aDv337FikpKciePbtKe/bs2XH37t006ycmJiIxMVG6HhUVBQCIjo5Ol/qUiR/S5X7p95Fe29b34jZI3AZJ29JjG0y9TyHEf66r1aCirpkzZ2Ly5Mlp2h0dHbVQDWUG5l7aroAyO26DpG3puQ3GxMTA3Nz8X9fRalCxtraGrq4uXr9+rdL++vVr2NnZpVl/9OjRGDJkiHRdqVQiIiIC2bJlg0KhSPd6M5Po6Gg4Ojri6dOnMDMz03Y5lAlxGyRt4zaYfoQQiImJgb29/X+uq9WgYmBggDJlyuDIkSNo3rw5gE/h48iRI+jXr1+a9Q0NDWFoaKjSZmFh8QsqzbzMzMz4BiWt4jZI2sZtMH38V09KKq3v+hkyZAjc3d1RtmxZ/PHHH/Dy8kJcXBy6du2q7dKIiIhIy7QeVNq0aYM3b95gwoQJePXqFUqWLImDBw+mGWBLREREmY/WgwoA9OvX76u7ekh7DA0NMXHixDS72oh+FW6DpG3cBuVBIb5nbhARERGRFmj9yLRERERE38KgQkRERLLFoEJERESyxaBCREREssWgQkRERLLFoEJERESyxaBCREREssWgQkRERLLFoEJERESyxaBCvyWlUqntEoiI6BdgUKHfko7Op0337du3AACeCYJ+tS/DMrdB0oYvt8OM+COOQYV+W4sWLULz5s3x8OFDKBQKbZdDmYyOjg6ioqIQGBgIANwGSSt0dHQQGRmJuXPn4v3799KPuIwk4z0jyrC+/MWqr68PY2NjGBgYaKkiysyUSiXmz58PDw8P7Nu3T9vlUCZ26NAhLFiwAEuXLtV2KemCZ0+m3050dDTMzMwAAFFRUTA3N9dyRZRZKJVKlV+sd+7cwerVqzF79mzo6upqsTLKTFJSUlS2t+TkZGzZsgXt2rXLkNshgwr9VgYPHoyUlBSMHj0aOXLk0HY5lAlFRkYiMjISjo6OKl8KX355EP2ML0Pxl969e4fTp0+jYsWKsLa2ltoz4nbIXT8ka1/maAcHB2zYsCHDvRHp9yCEwKhRo1C+fHmEhoaqLOM2ST/j5cuXePHiBd68eQPg09iTf+tH2Lp1K5o3b47jx4+rtGfE7ZA9KiQbqb8EhBBQKBTf/EXx/v17WFpaaqFCymj+61fr19YJCwvDuHHjsG7dugz5pUC/3tq1a7Fs2TI8ffoU+fLlQ+XKlTFnzhyVdb7WU+Ll5YV+/fpBT0/vV5b7yzGokFakhhHg0xtQCAE9PT08f/4cO3fuRNeuXWFqagrg0+4eS0tLTJgwIc1tiX7U5wHk6NGjePLkCZydnZE3b17Y29urrBMVFQWlUpkmIGfEbnb6tfbt2wc3NzcsX74cJiYmePToEebMmYOKFSti/fr1yJYtm/SZ9/btW4SEhMDV1VXlPj5+/Jihwwp3/dAvkZqHo6OjER8fD4VCgUOHDiEkJAS6urrQ09NDWFgYSpUqhRcvXkghJS4uDvr6+li4cCEiIiIYUkgjhBBSSBk1ahS6dOmCefPmoVevXhg2bBguXrwI4FP3e2JiIiZMmIDSpUvj3bt3KvfDkEI/6+LFi2jUqBG6dOkCNzc3jBgxAoGBgbhx4wY6dOgA4NPU9+TkZPj5+aFixYo4deqUyn1k5JACMKjQL/Tq1SsUL14cx48fx8aNG1G/fn38888/AD7tzilatChatGiB6dOnS7cxNTXFiBEj8ODBA1hZWTGkkEakbkfz5s3DX3/9hU2bNuHWrVto2bIl9u7di3HjxuHs2bMAAAMDA5QqVQq1atWChYWFFqumjOjx48d4+fKlSlu5cuWwZ88eXL58GT179gTw6XAMjRs3xvTp09P0qGR4gugX6tq1qzAzMxM6Ojpi5cqVUntSUpLYsmWLSElJkdqUSqU2SqRM4vXr16Jly5ZizZo1Qggh9uzZI8zMzETv3r1FqVKlRK1atcS5c+eEEKrb4sePH7VSL2VMgYGBInv27GLz5s1SW+r25u/vL5ydncXFixfT3C45OfmX1aht7FGhXyL1sM6enp6IiYmBgYEB7OzskJCQAODTrwU3NzeVQYvsPaH0ZGtrixEjRqB+/fq4evUqPD09MW3aNHh7e6NVq1Y4d+4cPD09cfnyZZVtkbt7SJMKFy6M6tWrw8/PD0eOHAHwv8++kiVLIjw8XDpVyOcy+u6ezzGo0C+RGkAcHR1x6tQpuLu7o23btti9ezfi4+PTrJ8Rz1dB2vOt7alUqVLIkSMHDhw4ABcXF/Tq1QsAYGVlBVdXVzRp0gSlSpX6laVSJuPo6IjevXsjMjISCxcuxJ49e6RlOXLkQJ48ebRYnTxknkhGWiH+f/Dry5cvkZycjFy5csHW1hYVK1ZEQkICunfvjnXr1qFx48YwMjKCj48PateuDWdnZ22XThmE+Gzg7KpVqxAeHg4DAwMMGzZMOv1CYmIinj9/jtDQUBQsWBCHDh1C06ZN0b9//3+dKk/0M1JnjVWvXh3Lly/HmDFjMHLkSAQGBsLFxQVbt26FQqFAnTp1tF2qVnF6MqW7gIAATJo0Ca9fv0ajRo3QokULNGnSBADQtWtX7Ny5E0OHDsXr16/h7e2NmzdvokiRIlqumjKaiRMnwsvLC+XKlcOFCxdQvnx5+Pn5wc7ODnv37sW0adPw/v176OvrQwiBGzduQE9PjzPNKF2kblcBAQFYvnw5Dh06hLt37yI4OBhLly6Fo6MjLCws4O/vD319/Uw9FZ5BhdLV7du3Ua9ePQwePBgmJibYtGkTDA0N4e7ujo4dOwIABg4ciCtXriAxMRG+vr4oWb
KkdoumDOHzXpCPHz/C3d0d/fv3R6lSpRAaGopGjRrBzs4OO3fuhI2NDfbv34+QkBDExsZi5MiR0NPTy9RfDqQZqYFEfHHsKF1dXQQEBKBz585YsGCBtNsR+LS96ujoqGy/mWlMypcYVCjd3L17F9u2bUN8fDxmzJgBALh58yYmTJiA6OhodO3aVQorr169gqmpKbJmzarNkimD+Dyk3LlzB9HR0VixYgUmTJgAJycnAJ+mhdapUwfZs2fHrl27YGNjo3IfDCn0sz7fDt++fQuFQoFs2bIB+PSZV7p0aUyYMAG9e/eWbvNlDx579BhUKB0IIfD+/Xs0btwY//zzD5o0aQI/Pz9p+Y0bNzBhwgTEx8ejbdu26Nq1qxarpYxs+PDhUtf569evERAQgAYNGkgf/I8fP0aDBg0ghMDp06dVTu5G9DM+DxhTp07Frl27EB0dDWtra0yfPh01a9bE8+fPkTNnTi1XKn8cHUYap1AoYGVlhZkzZ6Jo0aK4cuUKgoKCpOUuLi6YOnUqkpOTpTcvkSZ8Prtn3759OHjwIBYvXozly5cjT548GDt2LK5fvy4dKTlPnjzYt28fSpYsyfNHkUalhpQpU6Zg0aJF0vR3a2trdOjQAevXr0/Ti0dfxx4V0ohvdU8eP34cY8aMgZ2dHTw9PVGzZk1p2e3bt2Fubg4HB4dfWSplAgEBAThz5gyyZcuG0aNHAwBiY2NRunRpmJmZYdWqVShRokSabZa7e0iT3r17h7p168LT0xPdunWT2nv16oW9e/ciODgYhQoV4u6d/8AeFfppqW+yM2fOYMGCBRg/fjxOnz6N5ORkVKtWDVOmTMGrV6+wdOlSHDt2TLpd0aJFGVJI4+Lj4zF+/HgsWLAAt2/fltqzZMmCK1euICYmBh4eHtL5fD7HkEKa9PHjR7x9+1bqrUs9wKWvry/s7e2xcOFCADy45X9hUKGf8vkUuwYNGuD06dPYs2cPxowZg+nTpyMpKQm1atXClClT8O7dO0ydOhUnT57UdtmUgRkbG+PkyZOoXbs2Ll++jD179iAlJQXA/8LK3bt3sWLFCi1XShnJ13ZOZM+eHXZ2dlizZg0AwMjICElJSQAAZ2dnBpTvxKBCPyW1J2XAgAFYsGABduzYgW3btuHy5cvYsmULxo0bJ4WVUaNGQV9fn0daJI35fEyKEEL6srCyssLGjRthaWmJuXPnIjAwUFpmamqKV69ewdfXVys1U8ajVCql0PHixQuEh4fjw4cPAIBJkybh7t270sye1IMMPnv2jCe5/E4co0I/JPWNqVAosHz5cly7dg2+vr54/PgxateujcqVK8PMzAzbtm2Dh4cHxowZA0NDQ3z48AEmJibaLp8ygM+nfi5ZsgTXr1/Ho0ePMGjQIJQuXRoODg548+YNmjVrBl1dXYwZMwb16tVTOcIsx6TQz/D394erqyvy5csHABg9ejQCAwMRFhaG2rVro2nTpujQoQNWrlyJqVOnIlu2bChWrBgePnyIyMhI6aCC9O8YVOi7pH4pfB40rl27hpIlSyI6OhpPnz6Fs7Mz6tevjzx58mDNmjWIioqSjjDbpUsXTJ8+nYPG6Kd9uQ2NHj0aq1evRq9evfDs2TOcPXsWzZo1Q69eveDs7Iw3b96gZcuWePPmDdatWwdXV1ctVk8ZxYEDB9C4cWOMHDkSgwYNwoEDBzBixAh4eXnh3bt3uHLlCgIDAzF+/Hj07t0bN2/ehJeXF3R0dGBpaYkZM2bwoILfK13PzUwZyqNHj0S7du3EP//8I7Zu3SoUCoW4cOGCdErymzdvikKFConz588LIYR4+PChaNy4sRgzZox48uSJNkunDCYlJUUIIYSfn5/IkyePuHz5shBCiJMnTwqFQiHy588vBg4cKB49eiSEEOLly5eiV69e4uPHj1qrmTKepUuXCgcHBzF16lTRr18/sXLlSmnZ06dPxZQpU4STk5M4ePDgV2+fnJz8q0r9rbHPib5bQkICTp48iS5duuDatWtYu3YtypUrJ+0GEkLg48ePOHv2LIoWLYoNGzYAAIYNG8ZjVNBP69SpE2xsbLBgwQLo6OggOTkZBgYG6N27N0qXLo1du3aha9euWLVqFV69eoVp06ZBR0cHPXv2ROHChaXBs/wFSz8rKSkJBgYG8PT0hImJCUaPHo2YmBhMmzZNWsfBwQGdO3fGoUOHcOnSJdSrVy/NyS252+c7aTsp0e8h9Resj4+P0NHRESVKlBBXr15VWScqKkp06dJF5MuXTzg5OQkbGxvply7Rz4iKihKTJ08WVlZWYtKkSVL78+fPxevXr8XLly9F2bJlxfz586X17e3tRY4cOcSiRYuEEELq+SPSlJkzZ4rw8HDh7+8vTExMRMOGDcX9+/dV1mnTpo1o2bKllirMGDjrh/6TEAI6OjoQQsDe3h7z58/Hx48fMW7cOJw6dUpaz8zMDPPmzcPy5csxceJEnD9/HqVLl9Zi5ZQRxMTEwMzMDH369MG4cePg5eWFiRMnAgDs7e1ha2uLly9f4v3799L4k+fPn6Nu3bqYMGECPD09AfBYFfTzxGdDOtevX4+pU6fiwYMHaN++PRYuXIgrV67Ax8cH9+7dAwBER0fj8ePHyJUrl7ZKzhDY70T/Svz/wMWjR4/i+PHjGDRoEJo0aYLatWvDzc0Ns2bNwpgxY1CxYkUAn046WLduXS1XTRnFiBEjsGLFCjx8+BA2Njbo2LEjhBCYOnUqAGDy5MkAPoUZXV1dnD59GkIIzJo1CyYmJtKUUO7uIU1IDbtHjhzB1atX4evrK3329erVC8nJyZg8eTIOHjyI0qVLIy4uDklJSZgzZ442y/79abM7h+Qttat8+/btwtzcXIwePVpcvHhRWn7jxg1RpEgR0bhxY/HXX3+JSZMmCYVCIZ4+fcpudtKI69evi6pVq4qCBQuKN2/eCCGECA8PF/PnzxcWFhZiwoQJ0rr9+vUT+fLlEw4ODsLV1VUkJSUJIbjLhzTr2LFjonjx4iJbtmxi165dQgghEhMTpeWrV68WWbJkEaVLlxYbNmyQBnBz4OyP4/Rk+lcXLlxA/fr1MXv2bPTs2VNqj46OhpmZGe7cuYOePXsiPj4eUVFR2Lp1K3f3kEacPXsWb968QZEiRdCmTRvExsZKZzh+8+YN/Pz8MHXqVOlkb8CnKfMKhQLFixeHjo4OPn78yAGL9FPEF9PhY2NjMXfuXPj6+qJ8+fLYtGkTjI2NkZycDH19fQDAggULcObMGWzbtg0KhYI9ej+JQYX+1dKlS7Fz504cOXIEUVFROHr0KP766y/cuXMHw4YNQ7du3RAeHo6oqCiYm5vD1tZW2yVTBtG5c2e8ePEChw8fRmhoKFq3bo2YmJg0YWXatGno168fpkyZonJ7fjmQJi1btgwODg5o1qwZ4uPjMW/ePOzcuRPVq1fHjBkzYGRkpBJWUgPOl0GH1MfBtPSv7OzscPnyZcycOROtW7fG2rVrYWRkhEaNGqFHjx64f/8+bG1tkT9/foYU0qhly5bh2bNnWLp0KZycnLBp0yaYm5ujU
qVKePv2LWxsbNCpUydMmDAB06ZNw+rVq1Vuz5BCmvLmzRscPXoUffv2xcGDB2FsbIwhQ4agcePGOHPmDMaOHYuEhATo6+vj48ePAMCQokHsUSFJ6psqNjYWWbJkAQC8fv0aS5YswdatW1GzZk106dIFf/zxB16/fo2mTZti3bp1KFq0qJYrp4wmtTdk8eLFuHr1KhYsWABLS0vcvXsXnTt3RlRUlNSz8urVKxw/fhytWrXibh7SiC+PdwIA169fx+LFi3H48GH4+PigQYMGiIuLw5w5c3D48GEULlwYy5cvl87lQ5rDHhWSKBQK7N+/H+3atUP16tWxbt066OnpYdq0aTh//jx8fHzg6uoKHR0dLFmyBHFxcexFoXSR2htSvXp1nDhxAvv37wcAFCxYEH5+frC0tETVqlXx+vVr2NnZoU2bNtDT05N+zRL9jNSQ8urVK6mtRIkSGDhwIGrUqIHevXvj4MGDMDU1xYgRI/DHH39AR0dH2u1DGqalQbwkQ6dPnxZGRkZi+PDhon79+sLFxUV4eHiIkJAQaZ3g4GDRq1cvYWVlleaAb0Q/KvWAgl/j4+MjChQoIO7duye13bt3Tzg5OYm2bdv+ivIok/h8O9y8ebPImzevykxHIYS4du2aaNasmciVK5c4duyYEEKI+Ph4aXbZv23L9GPYo0IAgLCwMAQFBWH69OmYM2cODhw4gF69euHGjRuYOXMmHj16hLi4OJw9exbh4eE4fvw4SpYsqe2yKQP4vJv9woULOHPmDI4fPy4tb9q0KcqXL4/g4GCprUCBAjhx4gT++uuvX14vZUyJiYnSdpiUlIR8+fKhUKFC8PT0xOXLl6X1SpQogebNm+Pp06eoW7cuzpw5AyMjI2lMype7jOjn8S+aCS1duhR///23dP3evXto06YN1qxZAyMjI6nd09MTHTp0wO3btzFnzhxERkZi+PDhWL9+PYoVK6aN0imD+fyDfcyYMejSpQu6desGd3d3tGnTBtHR0ciRI4e0/z85OVm6raOjI3R1dZGSkqKt8imDOHDgAPz8/AAAPXv2RM2aNVG2bFkMHToUdnZ28PDwwKVLl6T1c+XKhbZt22L+/PkoX7681M6Bs+lE21069Gs9fvxYtG/fXjx48EClfdSoUcLW1la0bNlSOrBWKm9vb1GwYEExYMAAHrSI0sW8efNEtmzZxPnz50VKSoqYMWOGUCgU4tSpU9I6lSpVEh4eHlqskjKqdu3aCScnJ1GvXj1hbW0trl+/Li07evSoaN68uShWrJg4cOCAePz4sWjevLkYOnSotA7Pyp2+GFQyobi4OCGEEOfOnRPbt2+X2idMmCCKFy8uxo0bJ16/fq1ym5UrV4rHjx//yjIpk1AqlcLd3V34+voKIYTYsWOHsLCwED4+PkIIIWJiYoQQQhw4cEA0bdpU3LhxQ2u1UsZVsmRJoVAoVE56merkyZOiU6dOQqFQiAIFCggXFxfpRxuPfJz+OJcvEzI2NkZkZCRmzpyJ58+fQ1dXF82bN8fkyZORnJyM/fv3QwiBgQMHwsbGBgDQo0cPLVdNGVVCQgLOnz+P6tWr49ixY3B3d8fcuXPh4eGBjx8/Ys6cOahQoQJcXV0xZcoUXLhwAcWLF9d22ZRBJCUlISEhAc7OzsiVKxe2bNmCnDlzom3bttJhGipXrozy5cujZ8+eSE5ORrVq1aCrq8sjH/8iHKOSCSkUClhYWGDo0KHIkycPvLy8EBAQAACYMWMG6tevj6CgIMyYMQNv377VcrWUkdy4cQPPnj0DAAwePBjHjx+HsbEx2rdvj7/++gsNGzbEwoULpZMJvn//HpcuXcK9e/dgaWkJPz8/5M6dW5tPgTIYAwMDmJmZYdu2bdi9ezfKlSuHOXPmYPPmzYiJiZHWS0hIQJUqVVCzZk1pbBRDyq/BoJIJiU+7/FClShUMHjwYlpaWWLx4sUpYcXV1xdWrV1VOa070o4QQuH//PmrUqIE1a9agd+/eWLRoESwtLQEArq6uCAsLQ/ny5VGhQgUAwIsXL9ClSxdERkaiX79+AIB8+fKhdu3aWnselPEIIaBUKqXr69evR8WKFbFw4UJs2LABT548Qc2aNfHnn39K6wM88vGvxCPTZkKpR/2MioqCiYkJbty4genTp+P9+/cYOHAgmjdvDuDTYaNTd/0QacLKlSsxYsQIJCQkYPfu3ahbt650ROQtW7ZgypQpEEJAT08PxsbGUCqVOHPmDPT19XnuHvppERERsLKyUmlL3f62bduGoKAg+Pr6AgB69eqFY8eOISUlBVZWVjh9+jSPOqsl7FHJZD5+/AhdXV2EhoaievXqOHToEMqUKYNhw4bBxsYGkydPxr59+wCAIYU0JvUXq6OjIwwNDWFmZoZz584hNDRUmtLZpk0bbNiwAVOmTIGbmxtGjhyJc+fOSedPYUihn7Fo0SKUK1dOZXcOACmkdOnSBSVKlJDafX19sWLFCixZsgTnzp2DgYEBj3ysLdoZw0u/wrdGo4eEhIjs2bOLHj16qEyrO3bsmOjUqZMIDQ39VSVSBvflNpiUlCTi4+OFt7e3yJkzpxgzZsx/bm+c+kk/a8WKFcLQ0FBs3LgxzbInT56I4sWLi6VLl0ptX9vmuB1qD3f9ZFDi/7szz549izt37iAkJASdO3dGjhw5sH79ely6dAnr169Pc4bPhIQElYO+Ef2oz484GxERgZiYGJWBsF5eXpg3bx66d++Orl27wsnJCU2aNMHYsWPh6uqqrbIpg1m5ciX69+8PPz8//Pnnn4iMjERcXBwSEhJga2uLrFmz4sGDB8ifP7+2S6VvYFDJwHbs2IFevXpJJ2978+YN2rRpg5EjRyJr1qzaLo8ysM9DypQpU3Do0CHcunULbm5uaNGiBRo0aADgU1jx8vJCsWLF8O7dOzx58gShoaE8uRtpxKNHj+Ds7Aw3Nzds3rwZt27dQt++ffHmzRuEhYWhRo0a6NOnDxo3bqztUulfcG5VBnXr1i0MHjwY8+fPR5cuXRAdHQ0LCwsYGxszpFC6Sw0pEyZMgK+vL+bOnQsnJyf07t0bDx48QGRkJNq1a4dBgwbB2toa169fR0JCAk6ePCmdBZlTP+ln2djYYPbs2ZgwYQKGDRuGQ4cOoUqVKmjWrBmio6Oxfft2jBs3DtbW1uzFkzNt7ncizTh69Kh4+PBhmrYKFSoIIYS4c+eOyJ07t+jRo4e0/OHDh9znSunq6NGjomjRouLEiRNCCCHOnDkjDAwMRJEiRUT58uXFtm3bpHU/PzUDT9NAmpSQkCDmzZsndHR0RLdu3URSUpK07NKlS6JgwYJi2bJlWqyQ/gtn/fzGhBC4evUqGjRoAG9vb4SFhUnLnj9/DiEEYmNjUb9+fdStWxcrVqwAAAQFBcHb2xvv37/XVumUAYkv9iLnzJkTffr0QZUqVXDo0CE0btwYvr6+CAoKwsOHD7F48WKsXr0aAFR6T9iTQppkaGiI3r17Y8eOHejRowf09fWlbbVMmTIwMjLC06dPtVwl/RsGld+YQqFAqVKlMH/+fGzduhXe
3t549OgRAKBRo0Z4/fo1zMzM0KhRI/j6+krd8YGBgbhx4wane5LGKJVKaUD2o0ePEBcXh/z586Ndu3ZISEjAokWLMGDAAHTq1An29vYoWrQoQkJCcOfOHS1XTpmBqakpGjRoIB1MMHVbDQ8Ph7GxMYoWLarN8ug/8KfLbyx1P76npycAYO7cudDV1UWPHj2QJ08ejB8/HjNmzMDHjx/x4cMHhISEYNOmTVi1ahVOnTolHRWU6Gd8PnB2woQJOHv2LIYPH44aNWrAysoKcXFxePnyJUxMTKCjo4PExEQ4OTlhxIgRqF+/vparp4xIfDaTMZWhoaH0/5SUFLx9+xY9e/aEQqFAu3btfnWJpAYGld9Yao/IoUOHoKOjg+TkZHh5eSEhIQEjR46Em5sb4uPjMWPGDGzfvh3Zs2eHgYEBgoODUaxYMS1XTxnF5yFlxYoV8PX1RalSpaSZO4mJibCyssKpU6ekAbPv3r3DmjVroKOjoxJ0iH5EWFgYIiIikC1bNtjZ2f3rEWSTk5Ph5+eHTZs2ISIiAufOnZPO3cNeZnni9OTfXGBgoHQiN1NTUzx48ACLFy9G3759MXLkSNjY2CAmJgbHjx+Hk5MTbG1tYWtrq+2y6Tf3Zbi4f/8+mjdvjtmzZ6NJkyZp1rt48SLGjRuH2NhYWFlZISAgAPr6+gwp9NM2bNiA+fPnIzw8HNbW1ujfv7/UU5Lqy+0sKCgIt2/fRr9+/TjL7DfAoPIbUyqV6NChAxQKBTZu3Ci1L1myBCNGjICnpyf69u2LvHnzarFKymhatmyJMWPGoGzZslLbtWvXUL9+fRw/fhwFCxb86kEEExISIISAkZERFAoFvxzop23YsAGenp7S4fFnzJiBR48e4fTp09K2lRpSIiMjcejQIbi5uancB3tS5I8/ZX5jqb8QUrvYk5KSAAD9+/eHh4cH1q5di8WLF6vMBiL6Webm5nBxcVFpMzIywvv373Hr1i2pLfX8PmfPnsWOHTugo6MDY2NjKBQKKJVKhhT6KZcuXcLUqVOxdOlSdOvWDcWLF8fgwYPh7OyMM2fO4Pbt24iOjpZ2i69fvx59+/bFX3/9pXI/DCnyx6DyG3rx4oX0/4IFC2Lv3r0IDw+HgYEBkpOTAQAODg4wMTFBcHAwjI2NtVUqZSDPnz8HAKxduxYGBgZYvHgxDh06hKSkJDg7O6NNmzaYO3cuDh8+DIVCAR0dHaSkpGD69OkIDg5WGTfA3T30sxITEzFo0CA0atRIaps0aRKOHDmCdu3aoXPnzmjbti0iIiKgr6+Phg0bYtiwYRw4+xvirp/fzPXr19GvXz+0b98effr0QVJSEmrWrIm3b9/i2LFjsLOzAwCMHDkSRYsWRePGjdOc1pxIXT179gQAjB49WtqV6OLigrdv32Lz5s2oWrUqTp48iYULF+LmzZvo0KEDDAwMcOTIEbx58wZXrlxhDwpplFKpxJs3b5A9e3YAQOfOnXH48GHs2bMHjo6OOH78OKZNm4aRI0eiffv2KmNWuLvn98KfNb8ZExMTWFhYYPv27Vi3bh0MDAywYsUK2NjYoHDhwmjevDnq1q2LRYsWoWzZsgwppBEuLi44ePAgvL29ERISAgC4ceMGChYsiA4dOuDEiROoUqUKpkyZgs6dO8PPzw9Hjx5Frly5cPnyZWnAIpGm6OjoSCEFAIYNG4bz58+jbNmyyJ49Oxo0aICIiAi8fv06zVRlhpTfC3tUfkMhISEYM2YMXr16hZ49e6JTp05ISUnBvHnzEBYWBiEE+vfvjyJFimi7VMpA1qxZgwkTJqBt27bo2bMnChYsCACoWrUqHj9+DH9/f1StWhUA8OHDB5iYmEi35cBZ+tWePXuGjh07YtiwYTzp4G+OQeU3cOXKFbx8+VJlX2xISAjGjRuH0NBQ9O/fHx06dNBihZSRfT61c/Xq1ZgwYQLatWuXJqyEhYVhw4YNqFChgsp4lK8dfItIHZ9vQ6n/T/33zZs3sLGxUVk/Li4O7dq1Q1RUFI4ePcoelN8cg4rMxcTEoFGjRtDV1cWIESPQoEEDaVloaCjq168PExMT9OjRA3379tVipZTRfOsYJytXrsTkyZPRpk0b9OrVSworNWvWxOnTp3Hu3DmUKlXqV5dLGdTXtsPUtoCAAGzatAmLFi2Cvb094uPjsXv3bvj5+eH58+e4ePEi9PX1OSblN8cxKjKVmh+zZs2KOXPmQE9PD0uXLsX+/fuldZycnFCjRg28evUKR44cQWRkpJaqpYzm8y+HM2fOIDg4GNevXwfwaWDt+PHjsXnzZvj6+uLevXsAgKNHj6JHjx5ppi4T/ahTp05JJwwcMmQIZs2aBeDT+JQtW7agc+fOqF27Nuzt7QF8OqHl48ePkTdvXly6dAn6+vr4+PEjQ8pvjj0qMpPanZn6CyD1C+P8+fMYNWoUTE1N0adPH2k30NChQ5E3b160bNkSOXLk0HL1lBF83s0+ZMgQbNmyBbGxsXBwcECuXLlw4MABAMCKFSswbdo0tG3bFu7u7iqnZeAvWPoZQghERUXB1tYWDRo0gLW1NQICAnDy5EkUK1YMkZGRcHV1haenJ/r37y/d5vPPToDbYUbBoCIjqW+04OBg7NmzBxEREahcuTL+/PNPWFhY4Ny5cxg/fjwSExORN29emJiYYMuWLbh+/TocHBy0XT5lAJ+HlEOHDmHQoEHw9fWFhYUF/vnnH0ycOBGmpqa4dOkSgE9jVjw8PODl5YV+/fpps3TKgMLDw5E3b16kpKRgx44daNiwobTsa2NTvjaWhX5/3PUjIwqFAjt37kSTJk3w4cMHfPjwAX5+fujTpw8iIiLg6uqKefPmoVq1aggJCcGjR49w9OhRhhTSmNQP9j179mDz5s2oXbs2KleujGLFiqF169bYsGEDYmNj0adPHwBA9+7dsXv3buk6kaYkJibi1atXMDExga6uLtasWSNNjQcAa2tr6f+pR0H+PJgwpGQc7FGRkUuXLqFt27YYNWoUevTogbCwMJQuXRrGxsYoWbIkNmzYACsrK+ncKV9OASXShIiICDRu3BjXr19HjRo1sG/fPpXlY8aMwenTp/H333/D1NRUamc3O/2sbw3gDg0NhYuLC2rUqIEFCxYgX758WqiOtIU9Kloyc+ZMjB07VvolAHw6RLmrqyt69OiB0NBQ1KpVC82bN8e4ceNw8eJF9O3bFxERETAyMgIAhhTSiM+3QQCwsrLC+vXrUadOHVy9ehVr165VWZ4/f368e/cO8fHxKu0MKfQzPg8px44dw8aNG3H9+nU8f/4cTk5OOH36NIKDgzFixAhpAHeLFi2wZMkSbZZNvwB7VLRkyZIlGDhwIGbMmIERI0ZIb9A7d+6gYMGCaNasmfSFoVQqUbJkSYSEhKBRo0bYsmULz5VCGvH5l8PDhw+hUChgYmICOzs7PH78GJ6enoiLi8Off/4JDw8PvH79Gu7u7jAyMsK+ffvYvU4aN2zYMKxfvx56enrIkiUL7OzssHDhQpQtWxY3b95EjRo14OTkhKSkJHz8+BHXr1+XTsxKGZS
gX06pVAohhFi5cqXQ0dERU6dOFcnJydLyp0+fisKFC4t9+/YJIYSIiIgQ7dq1E0uWLBHPnj3TSs2U8aRuh0IIMXHiRFG8eHFRqFAhkSNHDuHr6yuEECIkJEQ0bNhQGBkZiYIFC4oWLVqIevXqifj4eCGEECkpKVqpnTKOz7fDoKAgUaJECXHy5EkREREhdu/eLVq0aCGcnZ3FlStXhBBCPHjwQEyZMkVMnz5d+tz8/POTMh4GlV9MqVRKb0ylUin++usvoaOjI6ZNmyZ96IeHh4uSJUsKDw8PERoaKsaMGSPKlSsnXr9+rc3SKYOaMmWKsLGxEYGBgSI2Nla0aNFCWFhYiNu3bwshhHj06JFo1KiRKFmypFi4cKF0u4SEBC1VTBnR+vXrRb9+/USvXr1U2i9evCjq168v3N3dRWxsrBBCNdwwpGR83H+gBQqFAocPH8bQoUNRpkwZ6Rwqs2bNghAClpaW6NChA44fPw5XV1ds2LABPj4+sLW11XbplAF8PiZFqVTiwoULWLhwIerWrYugoCAcO3YMM2bMQJEiRZCcnIw8efJg/vz5yJ49O/bv34+AgAAAgKGhobaeAmUA4otRB7t27cKyZctw7do1JCYmSu1ly5ZFlSpVcOrUKaSkpABQndHDc0hlAtpOSpnRjh07hLGxsZg6daq4ePGiEEIIX19faTeQEEIkJiaK27dvi6CgIPH06VNtlksZ1IQJE8SsWbNEzpw5xb1790RwcLDIkiWL8Pb2FkII8eHDBzF27FgRGhoqhBDi/v37onHjxqJs2bIiICBAm6XTb+7zHhF/f3+xYcMGIYQQ/fr1ExYWFmLZsmUiKipKWicwMFAUKlRI2hYpc2FQ+cXu3bsn8uTJI5YvX55m2YoVK6TdQESa9vl4ks2bNwtHR0dx69Yt0bFjR1GvXj1hYmIiVq9eLa3z/PlzUaVKFbFhwwbptnfu3BGtW7cWYWFhv7x+yhg+3w5v3bolSpUqJUqUKCF2794thBDC3d1d5M+fX0yfPl2EhISIkJAQUatWLVGtWjWVgEOZB/vMfrEnT55AX19f5QiLqTMvevXqBVNTU3Tq1AmGhoYYNmyYFiuljCZ1ds/x48dx7NgxDB06FEWLFpUOJFirVi1069YNwKeTYfbo0QO6urpo3749dHR0oFQqUahQIWzcuJGzLOiHpW6Hw4cPx+PHj2FsbIy7d+9i8ODB+PjxI9atW4du3bph3LhxWLJkCSpVqoQsWbJgy5YtUCgU3zzWCmVcDCq/WGxsrMrxJ5RKpbS/9dixYyhTpgy2bNmict4UIk159eoVunfvjvDwcIwZMwYA0Lt3bzx8+BBHjx5FqVKlkD9/fjx58gQJCQm4ePEidHV1VQ7mxjEB9LPWrVuHVatW4ciRI8iTJw8SExPh7u6OmTNnQkdHB2vWrIGJiQm2bt2K+vXro23btjA0NERSUhIMDAy0XT79Yoylv1iJEiXw9u1b+Pr6Avj06yI1qOzevRsbN25Ey5YtUbhwYW2WSRmUnZ0dAgICkD17duzduxeXL1+Grq4u5s6diylTpqBmzZqws7NDmzZtvnn2WR47hX5WSEgIihUrhpIlS8Lc3Bx2dnZYs2YNdHV1MXjwYOzcuRNLly5F7dq1sWDBAuzZswcxMTEMKZkUfxr9Ynny5MHSpUvRu3dvJCcno3PnztDV1cW6deuwbt06nD17lkf4pHTl4uKCHTt2wN3dHT4+Pujfvz9cXFzQtGlTNG3aVGXdlJQU9qCQxoj/P1GgoaEhEhISkJSUBCMjIyQnJyNnzpyYOXMmGjduDC8vLxgbG2Pjxo1o3749hg0bBj09Pbi5uWn7KZAW8Mi0WqBUKrFjxw54eHjA1NQURkZG0NXVxaZNm1CqVCltl0eZxNWrV9GjRw+UKVMGAwcORNGiRbVdEmUSN2/eRKlSpTB+/HhMnDhRag8MDMTKlSvx/v17pKSk4NixYwCArl27Yvz48cibN6+WKiZtYlDRohcvXiAsLAwKhQJ58uRB9uzZtV0SZTJXr16Fh4cHcufOjTlz5iBPnjzaLokyiXXr1qFXr14YNGgQ2rRpA0tLSwwYMAAVK1ZEixYtULRoUezfvx8NGjTQdqmkZQwqRJnchQsX4OPjg1WrVnE2Bf1SO3bsQN++fWFgYAAhBGxtbXHmzBm8fv0aderUwfbt2+Hi4qLtMknLGFSISBo7wKmf9Ks9f/4cT58+RXJyMipVqgQdHR2MHj0au3btQnBwMOzs7LRdImkZgwoRAfhfWCHSltu3b2P27Nn4+++/cfjwYZQsWVLbJZEMcDg/EQHgtGPSro8fPyIpKQm2trY4fvw4B3eThD0qREQkG8nJyTzyMalgUCEiIiLZ4qg5IiIiki0GFSIiIpItBhUiIiKSLQYVIiIiki0GFSL6rRw7dgwKhQKRkZHffRsnJyd4eXmlW01ElH4YVIhIo7p06QKFQoHevXunWebp6QmFQoEuXbr8+sKI6LfEoEJEGufo6IjNmzcjPj5eaktISMDGjRuRK1cuLVZGRL8bBhUi0rjSpUvD0dERAQEBUltAQABy5cqFUqVKSW2JiYkYMGAAbG1tYWRkhMqVK+PixYsq9/X333+jQIECMDY2Ro0aNRAaGprm8U6dOoUqVarA2NgYjo6OGDBgAOLi4tLt+RHRr8OgQkTpolu3bli7dq10fc2aNejatavKOiNGjMCOHTuwfv16XLlyBc7OzqhXrx4iIiIAAE+fPkXLli3RpEkTXLt2DT169MCoUaNU7uPhw4eoX78+WrVqhRs3bmDLli04deoU+vXrl/5PkojSHYMKEaWLjh074tSpUwgLC0NYWBhOnz6Njh07Ssvj4uLg7e2NuXPnokGDBihSpAhWrlwJY2NjrF69GgDg7e2NfPnyYf78+ShYsCA6dOiQZnzLzJkz0aFDBwwaNAj58+dHxYoVsXjxYmzYsAEJCQm/8ikTUTrgSQmJKF3Y2NigUaNGWLduHYQQaNSoEaytraXlDx8+RHJyMipVqiS16evr448//sCdO3cAAHfu3EH58uVV7rdChQoq169fv44bN27A399fahNCQKlU4vHjxyhcuHB6PD0i+kUYVIgo3XTr1k3aBbNs2bJ0eYzY2Fh4eHhgwIABaZZx4C7R749BhYjSTf369ZGUlASFQoF69eqpLMuXLx8MDAxw+vRp5M6dG8CnM+devHgRgwYNAgAULlwYe/bsUbnduXPnVK6XLl0a//zzD5ydndPviRCR1nCMChGlG11dXdy5cwf//PMPdHV1VZaZmpqiT58+GD58OA4ePIh//vkHPXv2xIcPH9C9e3cAQO/evfHgwQMMHz4c9+7dw8aNG7Fu3TqV+xk5ciTOnDmDfv364dq1a3jw4AF2797NwbREGQSDChGlKzMzM5iZmX112axZs9CqVSt06tQJpUuXRkhICAIDA2FpaQng066bHTt2YNeuXShRogR8fHwwY8YMlftwcXHB8ePHcf/+fVSpUgWlSpXChAkTYG9vn+7PjYjSn0IIIbRdBBEREdHXsEeFiIiIZItBhYiIiG
SLQYWIiIhki0GFiIiIZItBhYiIiGSLQYWIiIhki0GFiIiIZItBhYiIiGSLQYWIiIhki0GFiIiIZItBhYiIiGSLQYWIiIhk6/8AHoK08GWUizwAAAAASUVORK5CYII=",
- "text/plain": [
- "
"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
- "source": [
- "import matplotlib.pyplot as plt\n",
- "\n",
- "## calculate avg response time\n",
- "unique_models = set(result[\"response\"]['model'] for result in result[\"results\"])\n",
- "model_dict = {model: {\"response_time\": []} for model in unique_models}\n",
- "for completion_result in result[\"results\"]:\n",
- " model_dict[completion_result[\"response\"][\"model\"]][\"response_time\"].append(completion_result[\"response_time\"])\n",
- "\n",
- "avg_response_time = {}\n",
- "for model, data in model_dict.items():\n",
- " avg_response_time[model] = sum(data[\"response_time\"]) / len(data[\"response_time\"])\n",
- "\n",
- "models = list(avg_response_time.keys())\n",
- "response_times = list(avg_response_time.values())\n",
- "\n",
- "plt.bar(models, response_times)\n",
- "plt.xlabel('Model', fontsize=10)\n",
- "plt.ylabel('Average Response Time')\n",
- "plt.title('Average Response Times for each Model')\n",
- "\n",
- "plt.xticks(models, [model[:15]+'...' if len(model) > 15 else model for model in models], rotation=45)\n",
- "plt.show()"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {
- "id": "inSDIE3_IRds"
- },
- "source": [
- "# Duration Test endpoint\n",
- "\n",
- "Run load testing for 2 mins. Hitting endpoints with 100+ queries every 15 seconds."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 20,
- "metadata": {
- "id": "ePIqDx2EIURH"
- },
- "outputs": [],
- "source": [
- "models=[\"gpt-3.5-turbo\", \"replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781\", \"claude-instant-1\"]\n",
- "context = \"\"\"Paul Graham (/ɡræm/; born 1964)[3] is an English computer scientist, essayist, entrepreneur, venture capitalist, and author. He is best known for his work on the programming language Lisp, his former startup Viaweb (later renamed Yahoo! Store), cofounding the influential startup accelerator and seed capital firm Y Combinator, his essays, and Hacker News. He is the author of several computer programming books, including: On Lisp,[4] ANSI Common Lisp,[5] and Hackers & Painters.[6] Technology journalist Steven Levy has described Graham as a \"hacker philosopher\".[7] Graham was born in England, where he and his family maintain permanent residence. However he is also a citizen of the United States, where he was educated, lived, and worked until 2016.\"\"\"\n",
- "prompt = \"Where does Paul Graham live?\"\n",
- "final_prompt = context + prompt\n",
- "result = load_test_model(models=models, prompt=final_prompt, num_calls=100, interval=15, duration=120)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 27,
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 552
- },
- "id": "k6rJoELM6t1K",
- "outputId": "f4968b59-3bca-4f78-a88b-149ad55e3cf7"
- },
- "outputs": [
- {
- "data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAjcAAAIXCAYAAABghH+YAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAABwdUlEQVR4nO3dd1QU198G8GfpoNKUooKCYuwIaiL2GrGLJnYFOxrsNZbYFTsYG2JDjV2xRKOIir33EhsWLBGwUaXJ3vcPX+bnCiYsLi6Oz+ecPbp37ux+lx3YZ+/cmVEIIQSIiIiIZEJH2wUQERERaRLDDREREckKww0RERHJCsMNERERyQrDDREREckKww0RERHJCsMNERERyQrDDREREckKww0RERHJCsMNEcmSg4MDunfvru0y1DZnzhyUKFECurq6cHFx0XY5GnfkyBEoFAps27ZN26WoTaFQYNKkSWqv9+jRIygUCgQFBWm8Jsoaww19tiVLlkChUKBatWraLiXPcXBwgEKhkG758uXDDz/8gLVr12q7tK9Oxodidm5fqwMHDmDUqFGoWbMmVq9ejRkzZmi7pDwnKChIep9PnDiRabkQAvb29lAoFGjRooUWKqS8QE/bBdDXb/369XBwcMC5c+cQHh4OJycnbZeUp7i4uGD48OEAgOfPn2PFihXw8vJCSkoK+vTpo+Xqvh5ly5bFunXrVNrGjBmD/PnzY9y4cZn637lzBzo6X9f3t8OHD0NHRwcrV66EgYGBtsvJ04yMjLBhwwbUqlVLpf3o0aN4+vQpDA0NtVQZ5QUMN/RZHj58iFOnTiE4OBje3t5Yv349Jk6c+EVrUCqVSE1NhZGR0Rd93uwqWrQounbtKt3v3r07SpQoAT8/P4YbNdjY2Kj8HAFg5syZKFSoUKZ2AF/lh1t0dDSMjY01FmyEEEhOToaxsbFGHi8vadasGbZu3Yrff/8denr/+yjbsGEDqlSpgpcvX2qxOtK2r+trDeU569evh4WFBZo3b46ff/4Z69evl5alpaXB0tISPXr0yLReXFwcjIyMMGLECKktJSUFEydOhJOTEwwNDWFvb49Ro0YhJSVFZV2FQoEBAwZg/fr1KF++PAwNDbF//34AwNy5c1GjRg0ULFgQxsbGqFKlSpb79pOSkjBo0CAUKlQIBQoUQKtWrfDs2bMs96k/e/YMPXv2hI2NDQwNDVG+fHmsWrUqxz8zKysrlClTBvfv31dpVyqV8Pf3R/ny5WFkZAQbGxt4e3vjzZs3Kv0uXLgAd3d3FCpUCMbGxnB0dETPnj2l5Rn79+fOnQs/Pz8UL14cxsbGqFu3Lm7cuJGpnsOHD6N27drIly8fzM3N0bp1a9y6dUulz6RJk6BQKBAeHo7u3bvD3NwcZmZm6NGjB96+favSNzQ0FLVq1YK5uTny58+P0qVLY+zYsSp9svtef46P59xk7M44ceIEBg0aBCsrK5ibm8Pb2xupqamIiYmBp6cnLCwsYGFhgVGjRkEIofKYmnqPsqJQKLB69WokJiZKu10y5mi8e/cOU6dORcmSJWFoaAgHBweMHTs208/LwcEBLVq0QEhICKpWrQpjY2MsW7bsX5/37NmzaNKkCczMzGBiYoK6devi5MmTKn0iIiLwyy+/oHTp0jA2NkbBggXRrl07PHr0KNPjxcTEYOjQoXBwcIChoSHs7Ozg6emZKWwolUpMnz4ddnZ2MDIyQsOGDREeHv6vtX6oU6dOePXqFUJDQ6W21NRUbNu2DZ07d85yncTERAwfPhz29vYwNDRE6dKlMXfu3Ezvc0pKCoYOHQorKyvp78PTp0+zfExN/30gDRFEn6FMmTKiV69eQgghjh07JgCIc+fOSct79uwpzM3NRUpKisp6a9asEQDE+fPnhRBCpKeni8aNGwsTExMxZMgQsWzZMjFgwAChp6cnWrdurbIuAFG2bFlhZWUlJk+eLBYvXiwuX74shBDCzs5O/PLLL2LRokVi/vz54ocffhAAxJ49e1Qeo3379gKA6Natm1i8eLFo3769qFSpkgAgJk6cKPWLjIwUdnZ2wt7eXkyZMkUsXbpUtGrVSgAQfn5+//nzKV68uGjevLlKW1pamrC1tRU2NjYq7b179xZ6enqiT58+IiAgQIwePVrky5dPfP/99yI1NVUIIURUVJSwsLAQ3333nZgzZ45Yvny5GDdunChbtqz0OA8fPhQARMWKFYWDg4OYNWuWmDx5srC0tBRWVlYiMjJS6hsaGir09PTEd999J2bPni0mT54sChUqJCwsLMTDhw+lfhMnThQAhKurq2jbtq1YsmSJ6N27twAgRo0aJfW7ceOGMDAwEFWrVhULFiwQAQEBYsSIEaJOnTpSH3Xe6/9Svnx5Ubdu3U/+7L28vKT7q1evFgCEi4uLaNKkiVi8eLHo1q2b9Bpq1aolOnfuLJYsWSJatGghAIg1a9bkynuUlXXr1onatWsLQ0NDsW7dOrFu3Tpx//59IYQQXl5eAoD4+eefxeLFi4Wnp6cAIDw8PDK9ZicnJ2FhYSF+/fVXERAQIMLCwj75nIcOHRIGBgaievXqYt68ecLPz084OzsLAwMDcfbsWanf1q1bRaVKlcSECRNEYGCgGDt2rLCwsBDFixcXiYmJUr/4+HhRoUIFoaurK/r06SOWLl0qpk6dKr7//nvpdzQsLEzalqpUqSL8/PzEpEmThImJifjhhx/+9Wf04ft4/vx5UaNGDdGtWzdp2c6dO4WOjo549uxZpt89pVIpGjRoIBQKhejdu7dYtGiRaNmypQAghgwZovIcXbt2FQBE586dxaJFi0Tbtm2Fs7Nzjv8+ZPxOrl69+j9fH2kGww3l2IULFwQAERoaKoR4/8fDzs5ODB48WOoTEhIiAIg///xTZd1mzZqJEiVKSPfXrVsndHR0xPHjx1X6BQQECADi5MmTUhsAoaOjI27evJmpprdv36rcT01NFRUqVBANGjSQ2i5evJjlH7Tu3btn+uPVq1cvUbhwYfHy5UuVvh07dhRmZmaZnu9jxYsXF40bNxYvXrwQL168ENevX5c+UH18fKR+x48fFwDE+vXrVdbfv3+/SvuOHTtUQmFWMv6QGhsbi6dPn0rtZ8+eFQDE0KFDpTYXFxdhbW0tXr16JbVdvXpV6OjoCE9PT6ktI9z07NlT5bnatGkjChYsKN338/MTAMSLFy8+WZ867/V/yUm4cXd3F0qlUmqvXr26UCgUol+/flLbu3fvhJ2dncpja/I9+hQvLy+RL18+lbYrV64IAKJ3794q7SNGjBAAxOHDh1VeMwCxf//+/3wupVIpSpUqlenn8fbtW+Ho6Ch+/PFHlbaPnT59WgAQa9euldomTJggAIjg4OAsn0+I/4WbsmXLqnzpWbBggQAgrl+//q91fxhuFi1aJAoUKCDV165dO1G/fn3pZ/FhuNm5c6cAIKZNm6byeD///LNQKBQiPDxcCPG/n/cvv/yi0q9z5845/vvAcPPlcbcU5dj69ethY2OD+vXrA3g/rN6hQwds2rQJ6enpAIA
GDRqgUKFC2Lx5s7TemzdvEBoaig4dOkhtW7duRdmyZVGmTBm8fPlSujVo0AAAEBYWpvLcdevWRbly5TLV9OHcgjdv3iA2Nha1a9fGpUuXpPaMXVi//PKLyroDBw5UuS+EwPbt29GyZUsIIVTqcnd3R2xsrMrjfsqBAwdgZWUFKysrVKxYEevWrUOPHj0wZ84clddvZmaGH3/8UeV5qlSpgvz580uv39zcHACwZ88epKWl/evzenh4oGjRotL9H374AdWqVcNff/0F4P3k5itXrqB79+6wtLSU+jk7O+PHH3+U+n2oX79+Kvdr166NV69eIS4uTqW+Xbt2QalUZlmXuu+1pvXq1UvliKpq1apBCIFevXpJbbq6uqhatSoePHigUrem36PsyHgfhg0bptKeMUl97969Ku2Ojo5wd3f/z8e9cuUK7t27h86dO+PVq1fS60lMTETDhg1x7Ngx6T388PcqLS0Nr169gpOTE8zNzVV+B7Zv345KlSqhTZs2mZ7v46PYevTooTK3qHbt2gCg8jP/L+3bt0dSUhL27NmD+Ph47Nmz55O7pP766y/o6upi0KBBKu3Dhw+HEAL79u2T+gHI1G/IkCEq9zX194Fyxzcdbo4dO4aWLVuiSJEiUCgU2LlzZ64/57Nnz9C1a1dpTkjFihVx4cKFXH9eTUtPT8emTZtQv359PHz4EOHh4QgPD0e1atUQFRWFQ4cOAQD09PTw008/YdeuXdL8gODgYKSlpamEm3v37uHmzZtSCMi4fffddwDeT7T8kKOjY5Z17dmzB25ubjAyMoKlpSWsrKywdOlSxMbGSn0iIiKgo6OT6TE+PsrrxYsXiImJQWBgYKa6MuYRfVxXVqpVq4bQ0FDs378fc+fOhbm5Od68eaPyh/3evXuIjY2FtbV1pudKSEiQnqdu3br46aefMHnyZBQqVAitW7fG6tWrs5yrUqpUqUxt3333nTRPIiIiAgBQunTpTP3Kli0rfdB9qFixYir3LSwsAECac9KhQwfUrFkTvXv3ho2NDTp27IgtW7aoBB1132tN+/g1mJmZAQDs7e0ztX84lyY33qPsyNheP94+bW1tYW5uLr2PGT71u/Gxe/fuAQC8vLwyvZ4VK1YgJSVF+r1JSkrChAkTpLkqhQoVgpWVFWJiYlR+t+7fv48KFSpk6/n/a1vKDisrKzRq1AgbNmxAcHAw0tPT8fPPP2fZNyIiAkWKFEGBAgVU2suWLSstz/hXR0cHJUuWVOn38e+Jpv4+UO74po+WSkxMRKVKldCzZ0+0bds215/vzZs3qFmzJurXr499+/bBysoK9+7dk36pvyaHDx/G8+fPsWnTJmzatCnT8vXr16Nx48YAgI4dO2LZsmXYt28fPDw8sGXLFpQpUwaVKlWS+iuVSlSsWBHz58/P8vk+/uDJ6uiP48ePo1WrVqhTpw6WLFmCwoULQ19fH6tXr8aGDRvUfo0ZH8hdu3aFl5dXln2cnZ3/83EKFSqERo0aAQDc3d1RpkwZtGjRAgsWLJC+jSuVSlhbW6tMyP6QlZUVAEgnPztz5gz+/PNPhISEoGfPnpg3bx7OnDmD/Pnzq/061aGrq5tlu/j/CZnGxsY4duwYwsLCsHfvXuzfvx+bN29GgwYNcODAAejq6qr9Xmvap15DVu3ig4mm2n6Psnv+nuweGZWxfc+ZM+eTJwvMqHXgwIFYvXo1hgwZgurVq8PMzAwKhQIdO3b85Ajdf/mvbSm7OnfujD59+iAyMhJNmzaVRs5ym6b+PlDu+KbDTdOmTdG0adNPLk9JScG4ceOwceNGxMTEoEKFCpg1axbq1auXo+ebNWsW7O3tsXr1aqktu9+y8pr169fD2toaixcvzrQsODgYO3bsQEBAAIyNjVGnTh0ULlwYmzdvRq1atXD48OFM5yUpWbIkrl69ioYNG+b4JGzbt2+HkZERQkJCVA4D/vDnDQDFixeHUqnEw4cPVUY3Pj5SI+NIifT0dCmcaELz5s1Rt25dzJgxA97e3siXLx9KliyJgwcPombNmtn6cHJzc4ObmxumT5+ODRs2oEuXLti0aRN69+4t9cn4Zv6hu3fvwsHBAcD7nwPw/nwwH7t9+zYKFSqEfPnyqf36dHR00LBhQzRs2BDz58/HjBkzMG7cOISFhaFRo0Yaea+1ITfeo+zI2F7v3bsnjTIAQFRUFGJiYqT3UV0ZIxOmpqb/uX1v27YNXl5emDdvntSWnJyMmJiYTI+Z1RF5ualNmzbw9vbGmTNnVHZ/f6x48eI4ePAg4uPjVUZvbt++LS3P+FepVOL+/fsqozUf/57k1t8H0oxverfUfxkwYABOnz6NTZs24dq1a2jXrh2aNGmS5YdGduzevRtVq1ZFu3btYG1tDVdXVyxfvlzDVee+pKQkBAcHo0WLFvj5558z3QYMGID4+Hjs3r0bwPsPu59//hl//vkn1q1bh3fv3qnskgLe7zt/9uxZlj+PpKSkTLtHsqKrqwuFQiHN9wHeHxb98e7GjPkIS5YsUWlfuHBhpsf76aefsH379iz/YL948eI/a/qU0aNH49WrV9Lrbd++PdLT0zF16tRMfd+9eyd9iLx58ybTN9uMb90f7/bYuXMnnj17Jt0/d+4czp49KwX6woULw8XFBWvWrFH5kLpx4wYOHDiAZs2aqf26Xr9+nant4/o08V5rQ268R9mR8T74+/urtGeMfDVv3lztxwSAKlWqoGTJkpg7dy4SEhIyLf9w+9bV1c30mhYuXKjyuwYAP/30E65evYodO3Zkejx1R2SyK3/+/Fi6dCkmTZqEli1bfrJfs2bNkJ6ejkWLFqm0+/n5QaFQSL8XGf/+/vvvKv0+/vnn5t8H+nzf9MjNv3n8+DFWr16Nx48fo0iRIgCAESNGYP/+/Tk+LfqDBw+wdOlSDBs2DGPHjsX58+cxaNAgGBgYfHJYMy/avXs34uPj0apVqyyXu7m5wcrKCuvXr5dCTIcOHbBw4UJMnDgRFStWVPkGCgDdunXDli1b0K9fP4SFhaFmzZpIT0/H7du3sWXLFum8Hf+mefPmmD9/Ppo0aYLOnTsjOjoaixcvhpOTE65duyb1q1KlCn766Sf4+/vj1atXcHNzw9GjR3H37l0AqsP/M2fORFhYGKpVq4Y+ffqgXLlyeP36NS5duoSDBw9m+WGeHU2bNkWFChUwf/58+Pj4oG7duvD29oavry+uXLmCxo0bQ19fH/fu3cPWrVuxYMEC/Pzzz1izZg2WLFmCNm3aoGTJkoiPj8fy5cthamqaKYw4OTmhVq1a6N+/P1JSUuDv74+CBQti1KhRUp85c+agadOmqF69Onr16oWkpCQsXLgQZmZmObqGzpQpU3Ds2DE0b94cxYsXR3R0NJYsWQI7OzvpTLKaeK+1ITfeo+yoVKkSvLy8EBgYiJiYGNStWxfnzp3DmjVr4OHhIU3oV5eOjg5WrFiBpk2bonz58ujRoweKFi2KZ8+eISwsDKampvjzzz8BAC1atMC6detgZmaGcuXK4fTp0zh48CAKFiyo8pgjR47Etm3b0K5dO/
Ts2RNVqlTB69evsXv3bgQEBKjsitak7Pz9bNmyJerXr49x48bh0aNHqFSpEg4cOIBdu3ZhyJAh0kiWi4sLOnXqhCVLliA2NhY1atTAoUOHsjwHT279fSAN0MoxWnkQALFjxw7p/p49ewQAkS9fPpWbnp6eaN++vRBCiFu3bgkA/3obPXq09Jj6+vqievXqKs87cOBA4ebm9kVeo6a0bNlSGBkZqZzf4mPdu3cX+vr60iGSSqVS2NvbZ3koZobU1FQxa9YsUb58eWFoaCgsLCxElSpVxOTJk0VsbKzUDx8dRv2hlStXilKlSglDQ0NRpkwZsXr1aukw5g8lJiYKHx8fYWlpKfLnzy88PDzEnTt3BAAxc+ZMlb5RUVHCx8dH2NvbC319fWFraysaNmwoAgMD//NnldV5bjIEBQVlOjw0MDBQVKlSRRgbG4sCBQqIihUrilGjRol//vlHCCHEpUuXRKdOnUSxYsWEoaGhsLa2Fi1atBAXLlyQHiPjsNM5c+aIefPmCXt7e2FoaChq164trl69mqmOgwcPipo1awpjY2NhamoqWrZsKf7++2+VPhk/w48P8c44LDfjnDiHDh0SrVu3FkWKFBEGBgaiSJEiolOnTuLu3bsq62X3vf4vOTkU/ONDtD/12rI6LFsIzbxHn/Kp50xLSxOTJ08Wjo6OQl9fX9jb24sxY8aI5OTkTK/5U9vbp1y+fFm0bdtWFCxYUBgaGorixYuL9u3bi0OHDkl93rx5I3r06CEKFSok8ufPL9zd3cXt27cz/YyFEOLVq1diwIABomjRosLAwEDY2dkJLy8v6W9BxqHgW7duVVkvu4dLf+p9/FhWP4v4+HgxdOhQUaRIEaGvry9KlSol5syZo3IovBBCJCUliUGDBomCBQuKfPnyiZYtW4onT55kOhRciOz9feCh4F+eQohcGiv8yigUCuzYsQMeHh4AgM2bN6NLly64efNmpolv+fPnh62tLVJTU//zsMWCBQtKEw2LFy+OH3/8EStWrJCWL126FNOmTVPZfUDaceXKFbi6uuKPP/5Aly5dtF1Ojj169AiOjo6YM2eOyhmgiYi+Fdwt9Qmurq5IT09HdHS0dP6FjxkYGKBMmTLZfsyaNWtmmpR29+7dHE8IpJxLSkrKNCnU398fOjo6qFOnjpaqIiIiTfimw01CQoLKftSHDx/iypUrsLS0xHfffYcuXbrA09MT8+bNg6urK168eIFDhw7B2dk5R5P4hg4diho1amDGjBlo3749zp07h8DAQAQGBmryZVE2zJ49GxcvXkT9+vWhp6eHffv2Yd++fejbt2+uH4pMRES5TNv7xbQpY9/vx7eMfcipqaliwoQJwsHBQejr64vChQuLNm3aiGvXruX4Of/8809RoUIFaU5IduZtkOYdOHBA1KxZU1hYWAh9fX1RsmRJMWnSJJGWlqbt0j7bh3NuiIi+RZxzQ0RERLLC89wQERGRrDDcEBERkax8cxOKlUol/vnnHxQoUOCrOvU7ERHRt0wIgfj4eBQpUgQ6Ov8+NvPNhZt//vmHR8MQERF9pZ48eQI7O7t/7fPNhZuMC6Y9efIEpqamWq6GiIiIsiMuLg729vYqFz79lG8u3GTsijI1NWW4ISIi+spkZ0oJJxQTERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGs6Gm7ACLSLIdf92q7BNKyRzOba/X5uQ2StrdBjtwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrGg13CxduhTOzs4wNTWFqakpqlevjn379v3rOlu3bkWZMmVgZGSEihUr4q+//vpC1RIREdHXQKvhxs7ODjNnzsTFixdx4cIFNGjQAK1bt8bNmzez7H/q1Cl06tQJvXr1wuXLl+Hh4QEPDw/cuHHjC1dOREREeZVCCCG0XcSHLC0tMWfOHPTq1SvTsg4dOiAxMRF79uyR2tzc3ODi4oKAgIBsPX5cXBzMzMwQGxsLU1NTjdVNlFfwooWk7YsWchuk3NgG1fn8zjNzbtLT07Fp0yYkJiaievXqWfY5ffo0GjVqpNLm7u6O06dPf/JxU1JSEBcXp3IjIiIi+dJ6uLl+/Try588PQ0ND9OvXDzt27EC5cuWy7BsZGQkbGxuVNhsbG0RGRn7y8X19fWFmZibd7O3tNVo/ERER5S1aDzelS5fGlStXcPbsWfTv3x9eXl74+++/Nfb4Y8aMQWxsrHR78uSJxh6biIiI8h49bRdgYGAAJycnAECVKlVw/vx5LFiwAMuWLcvU19bWFlFRUSptUVFRsLW1/eTjGxoawtDQULNFExERUZ6l9ZGbjymVSqSkpGS5rHr16jh06JBKW2ho6Cfn6BAREdG3R6sjN2PGjEHTpk1RrFgxxMfHY8OGDThy5AhCQkIAAJ6enihatCh8fX0BAIMHD0bdunUxb948NG/eHJs2bcKFCxcQGBiozZdBREREeYhWw010dDQ8PT3x/PlzmJmZwdnZGSEhIfjxxx8BAI8fP4aOzv8Gl2rUqIENGzZg/PjxGDt2LEqVKoWdO3eiQoUK2noJRERElMdoNdysXLnyX5cfOXIkU1u7du3Qrl27XKqIiIiIvnZ5bs4NERER0edguCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZYbghIiIiWdFquPH19cX333+PAgUKwNraGh4eHrhz586/rhMUFASFQqFyMzIy+kIVExERUV6n1XBz9OhR+Pj44MyZMwgNDUVaWhoaN26MxMTEf13P1NQUz58/l24RERFfqGIiIiLK6/S0+eT79+9XuR8UFARra2tcvHgRderU+eR6CoUCtra2uV0eERERfYXy1Jyb2NhYAIClpeW/9ktISEDx4sVhb2+P1q1b4+bNm5/sm5KSgri4OJUbERERyVeeCTdKpRJDhgxBzZo1UaFChU/2K126NFatWoVdu3bhjz/+gFKpRI0aNfD06dMs+/v6+sLMzEy62dvb59ZLICIiojwgz4QbHx8f3LhxA5s2bfrXftWrV4enpydcXFxQt25dB
AcHw8rKCsuWLcuy/5gxYxAbGyvdnjx5khvlExERUR6h1Tk3GQYMGIA9e/bg2LFjsLOzU2tdfX19uLq6Ijw8PMvlhoaGMDQ01ESZRERE9BXQ6siNEAIDBgzAjh07cPjwYTg6Oqr9GOnp6bh+/ToKFy6cCxUSERHR10arIzc+Pj7YsGEDdu3ahQIFCiAyMhIAYGZmBmNjYwCAp6cnihYtCl9fXwDAlClT4ObmBicnJ8TExGDOnDmIiIhA7969tfY6iIiIKO/QarhZunQpAKBevXoq7atXr0b37t0BAI8fP4aOzv8GmN68eYM+ffogMjISFhYWqFKlCk6dOoVy5cp9qbKJiIgoD9NquBFC/GefI0eOqNz38/ODn59fLlVEREREX7s8MaFYThx+3avtEkjLHs1sru0SiIi+aXnmUHAiIiIiTWC4ISIiIllhuCEiIiJZYbghIiIiWWG4ISIiIllhuCEiIiJZyVG4uX//PsaPH49OnTohOjoaALBv3z7cvHlTo8URERERqUvtcHP06FFUrFgRZ8+eRXBwMBISEgAAV69excSJEzVeIBEREZE61A43v/76K6ZNm4bQ0FAYGBhI7Q0aNMCZM2c0WhwRERGRutQON9evX0ebNm0ytVtbW+Ply5caKYqIiIgop9QON+bm5nj+/Hmm9suXL6No0aIaKYqIiIgop9QONx07dsTo0aMRGRkJhUIBpVKJkydPYsSIEfD09MyNGomIiIiyTe1wM2PGDJQpUwb29vZISEhAuXLlUKdOHdSoUQPjx4/PjRqJiIiIsk3tq4IbGBhg+fLl+O2333Djxg0kJCTA1dUVpUqVyo36iIiIiNSidrjJUKxYMRQrVkyTtRARERF9NrXDjRAC27ZtQ1hYGKKjo6FUKlWWBwcHa6w4IiIiInWpHW6GDBmCZcuWoX79+rCxsYFCociNuoiIiIhyRO1ws27dOgQHB6NZs2a5UQ8RERHRZ1H7aCkzMzOUKFEiN2ohIiIi+mxqh5tJkyZh8uTJSEpKyo16iIiIiD6L2rul2rdvj40bN8La2hoODg7Q19dXWX7p0iWNFUdERESkLrXDjZeXFy5evIiuXbtyQjERERHlOWqHm7179yIkJAS1atXKjXqIiIiIPovac27s7e1hamqaG7UQERERfTa1w828efMwatQoPHr0KBfKISIiIvo8au+W6tq1K96+fYuSJUvCxMQk04Ti169fa6w4IiIiInWpHW78/f1zoQwiIiIizcjR0VJEREREeVW2wk1cXJw0iTguLu5f+3KyMREREWlTtsKNhYUFnj9/Dmtra5ibm2d5bhshBBQKBdLT0zVeJBEREVF2ZSvcHD58GJaWlgCAsLCwXC2IiIiI6HNkK9zUrVsXJUqUwPnz51G3bt3cromIiIgox7J9nptHjx5xlxMRERHleWqfxI+IiIgoL1PrUPCQkBCYmZn9a59WrVp9VkFEREREn0OtcPNf57jh0VJERESkbWrtloqMjIRSqfzkjcGGiIiItC3b4Sarc9sQERER5TXZDjdCiNysg4iIiEgjsh1uvLy8YGxsnJu1EBEREX22bE8oXr16dW7WQURERKQRPM8NERERyQrDDREREckKww0RERHJSo7DTXh4OEJCQpCUlAQgZ0dT+fr64vvvv0eBAgVgbW0NDw8P3Llz5z/X27p1K8qUKQMjIyNUrFgRf/31l9rPTURERPKkdrh59eoVGjVqhO+++w7NmjXD8+fPAQC9evXC8OHD1Xqso0ePwsfHB2fOnEFoaCjS0tLQuHFjJCYmfnKdU6dOoVOnTujVqxcuX74MDw8PeHh44MaNG+q+FCIiIpIhtcPN0KFDoaenh8ePH8PExERq79ChA/bv36/WY+3fvx/du3dH+fLlUalSJQQFBeHx48e4ePHiJ9dZsGABmjRpgpEjR6Js2bKYOnUqKleujEWLFqn7UoiIiEiG1Lq2FAAcOHAAISEhsLOzU2kvVaoUIiIiPquY2NhYAIClpeUn+5w+fRrDhg1TaXN3d8fOnTuz7J+SkoKUlBTpflxc3GfVSERERHmb2iM3iYmJKiM2GV6/fg1DQ8McF6JUKjFkyBDUrFkTFSpU+GS/yMhI2NjYqLTZ2NggMjIyy/6+vr4wMzOTbvb29jmukYiIiPI+tcNN7dq1sXbtWum+QqGAUqnE7NmzUb9+/RwX4uPjgxs3bmDTpk05foysjBkzBrGxsdLtyZMnGn18IiIiylvU3i01e/ZsNGzYEBcuXEBqaipGjRqFmzdv4vXr1zh58mSOihgwYAD27NmDY8eOZdrd9TFbW1tERUWptEVFRcHW1jbL/oaGhp81okRERERfF7VHbipUqIC7d++iVq1aaN26NRITE9G2bVtcvnwZJUuWVOuxhBAYMGAAduzYgcOHD8PR0fE/16levToOHTqk0hYaGorq1aur9dxEREQkT2qP3ACAmZkZxo0b99lP7uPjgw0bNmDXrl0oUKCANG/GzMxMukinp6cnihYtCl9fXwDA4MGDUbduXcybNw/NmzfHpk2bcOHCBQQGBn52PURERPT1U3vkZv/+/Thx4oR0f/HixXBxcUHnzp3x5s0btR5r6dKliI2NRb169VC4cGHptnnzZqnP48ePpXPpAECNGjWwYcMGBAYGolKlSti2bRt27tz5r5OQiYiI6Nuh9sjNyJEjMWvWLADA9evXMWzYMAwfPhxhYWEYNmyYWlcPz85ZjY8cOZKprV27dmjXrl22n4eIiIi+HWqHm4cPH6JcuXIAgO3bt6Nly5aYMWMGLl26hGbNmmm8QCIiIiJ1qL1bysDAAG/fvgUAHDx4EI0bNwbw/sR7PEEeERERaZvaIze1atXCsGHDULNmTZw7d06aH3P37t3/PIybiIiIKLepPXKzaNEi6OnpYdu2bVi6dCmKFi0KANi3bx+aNGmi8QKJiIiI1KH2yE2xYsWwZ8+eTO1+fn4aKYiIiIjoc+ToPDdKpRLh4eGIjo6GUqlUWVanTh2NFEZERESUE2qHmzNnzqBz586IiIjIdCi3QqFAenq6xoojIiIiUpfa4aZfv36oWrUq9u7di8KFC0OhUORGXUREREQ5ona4uXfvHrZt2wYnJ6fcqIeIiIjos6h9tFS1atUQHh6eG7UQERERfTa1R24GDhyI4cOHIzIyEhUrVoS+vr7KcmdnZ40VR0RERKQutcPNTz/9BADo2bOn1KZQKCCE4IRiIiIi0rocXVuKiIiIKK9SO9wUL148N+ogIiIi0ogcncTv/v378Pf3x61btwAA5cqVw+DBg1GyZEmNFkdERESkLrWPlgoJCUG5cuVw7tw5ODs7w9nZGWfPnkX58uURGhqaGzUSERERZZvaIze//vorhg4dipkzZ2ZqHz16NH788UeNFUdERESkLrVHbm7duoVevXplau/Zsyf+/vtvjRRFRERElFNqhxsrKytcuXIlU/uVK1dgbW2tiZqIiIiIckzt3VJ9+vRB37598eDBA9SoUQMAcPLkScyaNQvDhg3TeIFERERE6lA73Pz2228oUKAA5s2bhzFjxgAAihQp
gkmTJmHQoEEaL5CIiIhIHWqHG4VCgaFDh2Lo0KGIj48HABQoUEDjhRERERHlRI7OcwMA0dHRuHPnDgCgTJkysLKy0lhRRERERDml9oTi+Ph4dOvWDUWKFEHdunVRt25dFClSBF27dkVsbGxu1EhERESUbWqHm969e+Ps2bPYu3cvYmJiEBMTgz179uDChQvw9vbOjRqJiIiIsk3t3VJ79uxBSEgIatWqJbW5u7tj+fLlaNKkiUaLIyIiIlKX2iM3BQsWhJmZWaZ2MzMzWFhYaKQoIiIiopxSO9yMHz8ew4YNQ2RkpNQWGRmJkSNH4rffftNocURERETqUnu31NKlSxEeHo5ixYqhWLFiAIDHjx/D0NAQL168wLJly6S+ly5d0lylRERERNmgdrjx8PDIhTKIiIiINEPtcDNx4sTcqIOIiIhII9Sec/PkyRM8ffpUun/u3DkMGTIEgYGBGi2MiIiIKCfUDjedO3dGWFgYgPcTiRs1aoRz585h3LhxmDJlisYLJCIiIlKH2uHmxo0b+OGHHwAAW7ZsQcWKFXHq1CmsX78eQUFBmq6PiIiISC1qh5u0tDQYGhoCAA4ePIhWrVoBeH99qefPn2u2OiIiIiI1qR1uypcvj4CAABw/fhyhoaHSWYn/+ecfFCxYUOMFEhEREalD7XAza9YsLFu2DPXq1UOnTp1QqVIlAMDu3bul3VVERERE2qL2oeD16tXDy5cvERcXp3K5hb59+8LExESjxRERERGpS+2RGwAQQuDixYtYtmwZ4uPjAQAGBgYMN0RERKR1ao/cREREoEmTJnj8+DFSUlLw448/okCBApg1axZSUlIQEBCQG3USERERZYvaIzeDBw9G1apV8ebNGxgbG0vtbdq0waFDhzRaHBEREZG61B65OX78OE6dOgUDAwOVdgcHBzx79kxjhRERERHlhNojN0qlEunp6Znanz59igIFCmikKCIiIqKcUjvcNG7cGP7+/tJ9hUKBhIQETJw4Ec2aNdNkbURERERqU3u31Lx58+Du7o5y5cohOTkZnTt3xr1791CoUCFs3LgxN2okIiIiyja1R27s7Oxw9epVjBs3DkOHDoWrqytmzpyJy5cvw9raWq3HOnbsGFq2bIkiRYpAoVBg586d/9r/yJEjUCgUmW6RkZHqvgwiIiKSKbVHbgBAT08PXbp0QZcuXaS258+fY+TIkVi0aFG2HycxMRGVKlVCz5490bZt22yvd+fOHZiamkr31Q1VREREJF9qhZubN28iLCwMBgYGaN++PczNzfHy5UtMnz4dAQEBKFGihFpP3rRpUzRt2lStdYD3Ycbc3Fzt9YiIiEj+sr1bavfu3XB1dcWgQYPQr18/VK1aFWFhYShbtixu3bqFHTt24ObNm7lZq8TFxQWFCxfGjz/+iJMnT/5r35SUFMTFxanciIiISL6yHW6mTZsGHx8fxMXFYf78+Xjw4AEGDRqEv/76C/v375euDp6bChcujICAAGzfvh3bt2+Hvb096tWrh0uXLn1yHV9fX5iZmUk3e3v7XK+TiIiItCfb4ebOnTvw8fFB/vz5MXDgQOjo6MDPzw/ff/99btanonTp0vD29kaVKlVQo0YNrFq1CjVq1ICfn98n1xkzZgxiY2Ol25MnT75YvURERPTlZXvOTXx8vDSJV1dXF8bGxmrPsckNP/zwA06cOPHJ5YaGhjA0NPyCFREREZE2qTWhOCQkBGZmZgDen6n40KFDuHHjhkqfVq1aaa66bLhy5QoKFy78RZ+TiIiI8i61wo2Xl5fKfW9vb5X7CoUiy0szfEpCQgLCw8Ol+w8fPsSVK1dgaWmJYsWKYcyYMXj27BnWrl0LAPD394ejoyPKly+P5ORkrFixAocPH8aBAwfUeRlEREQkY9kON0qlUuNPfuHCBdSvX1+6P2zYMADvQ1RQUBCeP3+Ox48fS8tTU1MxfPhwPHv2DCYmJnB2dsbBgwdVHoOIiIi+bTk6iZ+m1KtXD0KITy4PCgpSuT9q1CiMGjUql6siIiKir5nal18gIiIiyssYboiIiEhWGG6IiIhIVhhuiIiISFZyFG5iYmKwYsUKjBkzBq9fvwYAXLp0Cc+ePdNocURERETqUvtoqWvXrqFRo0YwMzPDo0eP0KdPH1haWiI4OBiPHz+WzklDREREpA1qj9wMGzYM3bt3x71792BkZCS1N2vWDMeOHdNocURERETqUjvcnD9/PtOZiQGgaNGiiIyM1EhRRERERDmldrgxNDREXFxcpva7d+/CyspKI0URERER5ZTa4aZVq1aYMmUK0tLSALy/ntTjx48xevRo/PTTTxovkIiIiEgdaoebefPmISEhAdbW1khKSkLdunXh5OSEAgUKYPr06blRIxEREVG2qX20lJmZGUJDQ3HixAlcu3YNCQkJqFy5Mho1apQb9RERERGpJccXzqxVqxZq1aqlyVqIiIiIPpva4eb333/Psl2hUMDIyAhOTk6oU6cOdHV1P7s4IiIiInWpHW78/Pzw4sULvH37FhYWFgCAN2/ewMTEBPnz50d0dDRKlCiBsLAw2Nvba7xgIiIion+j9oTiGTNm4Pvvv8e9e/fw6tUrvHr1Cnfv3kW1atWwYMECPH78GLa2thg6dGhu1EtERET0r9QeuRk/fjy2b9+OkiVLSm1OTk6YO3cufvrpJzx48ACzZ8/mYeFERESkFWqP3Dx//hzv3r3L1P7u3TvpDMVFihRBfHz851dHREREpCa1w039+vXh7e2Ny5cvS22XL19G//790aBBAwDA9evX4ejoqLkqiYiIiLJJ7XCzcuVKWFpaokqVKjA0NIShoSGqVq0KS0tLrFy5EgCQP39+zJs3T+PFEhEREf0Xtefc2NraIjQ0FLdv38bdu3cBAKVLl0bp0qWlPvXr19dchURERERqyPFJ/MqUKYMyZcposhYiIiKiz5ajcPP06VPs3r0bjx8/Rmpqqsqy+fPna6QwIiIiopxQO9wcOnQIrVq1QokSJXD79m1UqFABjx49ghAClStXzo0aiYiIiLJN7QnFY8aMwYgRI3D9+nUYGRlh+/btePLkCerWrYt27drlRo1ERERE2aZ2uLl16xY8PT0BAHp6ekhKSkL+/PkxZcoUzJo1S+MFEhEREalD7XCTL18+aZ5N4cKFcf/+fWnZy5cvNVcZERERUQ6oPefGzc0NJ06cQNmyZdGsWTMMHz4c169fR3BwMNzc3HKjRiIiIqJsUzvczJ8/HwkJCQCAyZMnIyEhAZs3b0apUqV4pBQRERFpnVrhJj09HU+fPoWzszOA97uoAgICcqUwIiIiopxQa86Nrq4uGjdujDdv3uRWPURERESfRe0JxRUqVMCDBw9yoxYiIiKiz6Z2uJk2bRpGjBiBPXv24Pnz54iLi1O5EREREWmT2hOKmzVrBgBo1aoVFAqF1C6EgEKhQHp6uuaqIyIiIlKT2uEmLCwsN+ogIiIi0gi1w03dunVzow4iIiIijVB7zg0AHD9+HF27dkWNGjXw7NkzAMC6detw4sQJjRZHREREpC61w8327dvh7u4
OY2NjXLp0CSkpKQCA2NhYzJgxQ+MFEhEREakjR0dLBQQEYPny5dDX15faa9asiUuXLmm0OCIiIiJ1qR1u7ty5gzp16mRqNzMzQ0xMjCZqIiIiIsoxtcONra0twsPDM7WfOHECJUqU0EhRRERERDmldrjp06cPBg8ejLNnz0KhUOCff/7B+vXrMWLECPTv3z83aiQiIiLKNrUPBf/111+hVCrRsGFDvH37FnXq1IGhoSFGjBiBgQMH5kaNRERERNmmdrhRKBQYN24cRo4cifDwcCQkJKBcuXLInz9/btRHREREpBa1d0v98ccfePv2LQwMDFCuXDn88MMPDDZERESUZ6gdboYOHQpra2t07twZf/3112ddS+rYsWNo2bIlihQpAoVCgZ07d/7nOkeOHEHlypVhaGgIJycnBAUF5fj5iYiISH7UDjfPnz/Hpk2boFAo0L59exQuXBg+Pj44deqU2k+emJiISpUqYfHixdnq//DhQzRv3hz169fHlStXMGTIEPTu3RshISFqPzcRERHJk9pzbvT09NCiRQu0aNECb9++xY4dO7BhwwbUr18fdnZ2uH//frYfq2nTpmjatGm2+wcEBMDR0RHz5s0DAJQtWxYnTpyAn58f3N3d1X0pREREJENqh5sPmZiYwN3dHW/evEFERARu3bqlqbqydPr0aTRq1Eilzd3dHUOGDPnkOikpKdIlIgAgLi4ut8ojIiKiPCBHF858+/Yt1q9fj2bNmqFo0aLw9/dHmzZtcPPmTU3XpyIyMhI2NjYqbTY2NoiLi0NSUlKW6/j6+sLMzEy62dvb52qNREREpF1qh5uOHTvC2toaQ4cORYkSJXDkyBGEh4dj6tSpKFOmTG7U+FnGjBmD2NhY6fbkyRNtl0RERES5SO3dUrq6utiyZQvc3d2hq6ursuzGjRuoUKGCxor7mK2tLaKiolTaoqKiYGpqCmNj4yzXMTQ0hKGhYa7VRERERHmL2uFm/fr1Kvfj4+OxceNGrFixAhcvXvysQ8P/S/Xq1fHXX3+ptIWGhqJ69eq59pxERET0dcnRnBvg/TlqvLy8ULhwYcydOxcNGjTAmTNn1HqMhIQEXLlyBVeuXAHw/lDvK1eu4PHjxwDe71Ly9PSU+vfr1w8PHjzAqFGjcPv2bSxZsgRbtmzB0KFDc/oyiIiISGbUGrmJjIxEUFAQVq5cibi4OLRv3x4pKSnYuXMnypUrp/aTX7hwAfXr15fuDxs2DADg5eWFoKAgPH/+XAo6AODo6Ii9e/di6NChWLBgAezs7LBixQoeBk5ERESSbIebli1b4tixY2jevDn8/f3RpEkT6OrqIiAgIMdPXq9ePQghPrk8q7MP16tXD5cvX87xcxIREZG8ZTvc7Nu3D4MGDUL//v1RqlSp3KyJiIiIKMeyPefmxIkTiI+PR5UqVVCtWjUsWrQIL1++zM3aiIiIiNSW7XDj5uaG5cuX4/nz5/D29samTZtQpEgRKJVKhIaGIj4+PjfrJCIiIsoWtY+WypcvH3r27IkTJ07g+vXrGD58OGbOnAlra2u0atUqN2okIiIiyrYcHwoOAKVLl8bs2bPx9OlTbNy4UVM1EREREeXYZ4WbDLq6uvDw8MDu3bs18XBEREREOaaRcENERESUVzDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGs5Ilws3jxYjg4OMDIyAjVqlXDuXPnPtk3KCgICoVC5WZkZPQFqyUiIqK8TOvhZvPmzRg2bBgmTpyIS5cuoVKlSnB3d0d0dPQn1zE1NcXz58+lW0RExBesmIiIiPIyrYeb+fPno0+fPujRowfKlSuHgIAAmJiYYNWqVZ9cR6FQwNbWVrrZ2Nh8wYqJiIgoL9NquElNTcXFixfRqFEjqU1HRweNGjXC6dOnP7leQkICihcvDnt7e7Ru3Ro3b978ZN+UlBTExcWp3IiIiEi+tBpuXr58ifT09EwjLzY2NoiMjMxyndKlS2PVqlXYtWsX/vjjDyiVStSoUQNPnz7Nsr+vry/MzMykm729vcZfBxEREeUdWt8tpa7q1avD09MTLi4uqFu3LoKDg2FlZYVly5Zl2X/MmDGIjY2Vbk+ePPnCFRMREdGXpKfNJy9UqBB0dXURFRWl0h4VFQVbW9tsPYa+vj5cXV0RHh6e5XJDQ0MYGhp+dq1ERET0ddDqyI2BgQGqVKmCQ4cOSW1KpRKHDh1C9erVs/UY6enpuH79OgoXLpxbZRIREdFXRKsjNwAwbNgweHl5oWrVqvjhhx/g7++PxMRE9OjRAwDg6emJokWLwtfXFwAwZcoUuLm5wcnJCTExMZgzZw4iIiLQu3dvbb4MIiIiyiO0Hm46dOiAFy9eYMKECYiMjISLiwv2798vTTJ+/PgxdHT+N8D05s0b9OnTB5GRkbCwsECVKlVw6tQplCtXTlsvgYiIiPIQrYcbABgwYAAGDBiQ5bIjR46o3Pfz84Ofn98XqIqIiIi+Rl/d0VJERERE/4bhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkheGGiIiIZIXhhoiIiGSF4YaIiIhkJU+Em8WLF8PBwQFGRkaoVq0azp0796/9t27dijJlysDIyAgVK1bEX3/99YUqJSIiorxO6+Fm8+bNGDZsGCZOnIhLly6hUqVKcHd3R3R0dJb9T506hU6dOqFXr164fPkyPDw84OHhgRs3bnzhyomIiCgv0nq4mT9/Pvr06YMePXqgXLlyCAgIgImJCVatWpVl/wULFqBJkyYYOXIkypYti6lTp6Jy5cpYtGjRF66ciIiI8iKthpvU1FRcvHgRjRo1ktp0dHTQqFEjnD59Ost1Tp8+rdIfANzd3T/Zn4iIiL4tetp88pcvXyI9PR02NjYq7TY2Nr
h9+3aW60RGRmbZPzIyMsv+KSkpSElJke7HxsYCAOLi4j6n9E9SprzNlcelr0dubVvZxW2QuA2StuXGNpjxmEKI/+yr1XDzJfj6+mLy5MmZ2u3t7bVQDX0LzPy1XQF967gNkrbl5jYYHx8PMzOzf+2j1XBTqFAh6OrqIioqSqU9KioKtra2Wa5ja2urVv8xY8Zg2LBh0n2lUonXr1+jYMGCUCgUn/kK6ENxcXGwt7fHkydPYGpqqu1y6BvEbZC0jdtg7hFCID4+HkWKFPnPvloNNwYGBqhSpQoOHToEDw8PAO/Dx6FDhzBgwIAs16levToOHTqEIUOGSG2hoaGoXr16lv0NDQ1haGio0mZubq6J8ukTTE1N+UtNWsVtkLSN22Du+K8Rmwxa3y01bNgweHl5oWrVqvjhhx/g7++PxMRE9OjRAwDg6emJokWLwtfXFwAwePBg1K1bF/PmzUPz5s2xadMmXLhwAYGBgdp8GURERJRHaD3cdOjQAS9evMCECRMQGRkJFxcX7N+/X5o0/PjxY+jo/O+grho1amDDhg0YP348xo4di1KlSmHnzp2oUKGCtl4CERER5SEKkZ1px0TZkJKSAl9fX4wZMybTrkCiL4HbIGkbt8G8geGGiIiIZEXrZygmIiIi0iSGGyIiIpIVhhsiIiKSFYYbIiIikhWGGyIiIpIVhhsiIiKSFYYbIiIikhWGGyIiIpIVhhsiIiKSFYYb+mYolUptl0BERF8Aww19MzIuwPry5UsAAK88Ql/axwGb2yBpw8fboRy/+DHc0DdlwYIF8PDwwP3796FQKLRdDn1jdHR0EBsbi5CQEADgNkhaoaOjg5iYGMyZMwdv3ryRvvjJifxeEdEHPv5mrK+vD2NjYxgYGGipIvqWKZVKzJs3D97e3tizZ4+2y6Fv2IEDBzB//nwsWrRI26XkCl4VnL4JcXFxMDU1BQDExsbCzMxMyxXRt0KpVKp8M7516xZWrlyJWbNmQVdXV4uV0bckPT1dZXtLS0vD5s2b0alTJ1luhww3JHtDhw5Feno6xowZg8KFC2u7HPoGxcTEICYmBvb29iofJB9/4BB9jo+D9MdevXqFkydPokaNGihUqJDULsftkLulSHY+zut2dnZYu3at7H556esghMCvv/6KatWq4dGjRyrLuE3S53j+/Dn++ecfvHjxAsD7uTT/Nl6xZcsWeHh44OjRoyrtctwOOXJDX7WMbxxCCCgUik9+c3nz5g0sLCy0UCHJzX99O86qT0REBMaPH4+goCBZfpDQl7d69WosXrwYT548QcmSJVGrVi3Mnj1bpU9WIzL+/v4YMGAA9PT0vmS5XxzDDX01MgIM8P6XVggBPT09PHv2DDt27ECPHj2QL18+AO93RVlYWGDChAmZ1iXKqQ9Dy+HDh/H48WM4OTmhRIkSKFKkiEqf2NhYKJXKTKFajrsA6Mvas2cP2rdvjyVLlsDExAQPHjzA7NmzUaNGDaxZswYFCxaU/ua9fPkS4eHhcHNzU3mMd+/eyTrgcLcU5VkZuTsuLg5JSUlQKBQ4cOAAwsPDoaurCz09PURERMDV1RX//POPFGwSExOhr68PPz8/vH79msGGNEIIIQWbX3/9Fd27d8fcuXPRt29fjBgxAufPnwfwftdASkoKJkyYgMqVK+PVq1cqj8NgQ5/r/PnzaN68Obp374727dtj1KhRCAkJwbVr19ClSxcA708zkJaWhnXr1qFGjRo4ceKEymPIOdgADDeUx0VGRqJixYo4evQoNmzYgCZNmuDvv/8G8H5XU/ny5dGmTRtMnz5dWidfvnwYNWoU7t27B0tLSwYb0oiM7Wju3Ln4448/sHHjRty4cQNt27bFn3/+ifHjx+P06dMAAAMDA7i6uqJhw4YwNzfXYtUkRw8fPsTz589V2r7//nvs3r0bFy9eRJ8+fQC8P/VFixYtMH369EwjN7IniPK4Hj16CFNTU6GjoyOWL18utaemporNmzeL9PR0qU2pVGqjRPpGREVFibZt24pVq1YJIYTYvXu3MDU1Ff369ROurq6iYcOG4syZM0II1W3x3bt3WqmX5CkkJETY2NiITZs2SW0Z29v69euFk5OTOH/+fKb10tLSvliN2saRG8qzMk4J7uPjg/j4eBgYGMDW1hbJyckA3n8rad++vcrETY7SUG6ytrbGqFGj0KRJE1y+fBk+Pj6YNm0ali5dip9++glnzpyBj48PLl68qLItclcUaVLZsmVRr149rFu3DocOHQLwv799Li4uiI6Oli4z8yG574r6EMMN5VkZocXe3h4nTpyAl5cXOnbsiF27diEpKSlTfzleH4W051Pbk6urKwoXLox9+/bB2dkZffv2BQBYWlrCzc0NLVu2hKur65cslb4x9vb26NevH2JiYuDn54fdu3dLywoXLgxHR0ctVpc3fDsxjr4a4v8nAD9//hxpaWkoVqwYrK2tUaNGDSQnJ6NXr14ICgpCixYtYGRkhICAADRq1AhOTk7aLp1kQnwweXjFihWIjo6GgYEBRowYIV26IyUlBc+ePcOjR49QunRpHDhwAK1atcLAgQP/9bQERJ8j42i7evXqYcmSJRg7dixGjx6NkJAQODs7Y8uWLVAoFPjxxx+1XapW8VBwypOCg4MxadIkREVFoXnz5mjTpg1atmwJAOjRowd27NiB4cOHIyoqCkuXLsX169dRrlw5LVdNcjNx4kT4+/vj+++/x7lz51CtWjWsW7cOtra2+PPPPzFt2jS8efMG+vr6EELg2rVr0NPT4xF6lCsytqvg4GAsWbIEBw4cwO3btxEWFoZFixbB3t4e5ubmWL9+PfT19b/p0w4w3FCec/PmTbi7u2Po0KEwMTHBxo0bYWhoCC8vL3Tt2hUAMHjwYFy6dAkpKSkIDAyEi4uLdosmWfhwtOXdu3fw8vLCwIED4erqikePHqF58+awtbXFjh07YGVlhb179yI8PBwJCQkYPXo09PT0vukPFNKMjBAjPjq3l66uLoKDg+Hp6Yn58+dLu0SB99urjo6Oyvb7Lc2x+RjDDeUpt2/fxtatW5GUlIQZM2YAAK5fv44JEyYgLi4OPXr0kAJOZGQk8uXLhwIFCmizZJKJD4PNrVu3EBcXh2XLlmHChAlwcHAA8P4Q3B9//BE2NjbYuXMnrKysVB6DwYY+14fb4cuXL6FQKFCwYEEA7//mVa5cGRMmTEC/fv2kdT4eKeTIIcMN5RFCCLx58wYtWrTA33//jZYtW2LdunXS8mvXrmHChAlISkpCx44d0aNHDy1WS3I2cuRIaVg/KioKwcHBaNq0qfRh8fDhQzRt2hRCCJw8eVLlAoREn+PDUDJ16lTs3LkTcXFxKFSoEKZPn44GDRrg2bNnKFq0qJYrzfs4243yBIVCAUtLS/j6+qJ8+fK4dOkSQkNDpeXOzs6YOnUq0tLSpF94Ik348KioPXv2YP/+/fj999+xZMkSODo6Yty4cbh69ap0xmxHR0fs2bMHLi4uvF4ZaVRGsJkyZQoWLFggnWqgUKFC6NKlC9asWZNptJCyxpEb0ppPD
Z0ePXoUY8eOha2tLXx8fNCgQQNp2c2bN2FmZgY7O7svWSp9A4KDg3Hq1CkULFgQY8aMAQAkJCSgcuXKMDU1xYoVK1CpUqVM2yx3RZEmvXr1Co0bN4aPjw969uwptfft2xd//vknwsLCUKZMGe56+g8cuSGtyPjFPHXqFObPn4/ffvsNJ0+eRFpaGurWrYspU6YgMjISixYtwpEjR6T1ypcvz2BDGpeUlITffvsN8+fPx82bN6X2/Pnz49KlS4iPj4e3t7d0/agPMdiQJr179w4vX76URgUzTloaGBiIIkWKwM/PDwBPWPpfGG7oi/vwcMamTZvi5MmT2L17N8aOHYvp06cjNTUVDRs2xJQpU/Dq1StMnToVx48f13bZJGPGxsY4fvw4GjVqhIsXL2L37t1IT08H8L+Ac/v2bSxbtkzLlZKcZLXjxMbGBra2tli1ahUAwMjICKmpqQAAJycnhppsYrihLy5jxGbQoEGYP38+tm/fjq1bt+LixYvYvHkzxo8fLwWcX3/9Ffr6+jzjJmnMh3NshBDSB4ylpSU2bNgACwsLzJkzByEhIdKyfPnyITIyEoGBgVqpmeRHqVRKQeWff/5BdHQ03r59CwCYNGkSbt++LR0RlXHiyKdPn/JCrNnEOTf0xWT8MisUCixZsgRXrlxBYGAgHj58iEaNGqFWrVowNTXF1q1b4e3tjbFjx8LQ0BBv376FiYmJtssnGfjwMNuFCxfi6tWrePDgAYYMGYLKlSvDzs4OL168QOvWraGrq4uxY8fC3d1d5UzDnGNDn2P9+vVwc3NDyZIlAQBjxoxBSEgIIiIi0KhRI7Rq1QpdunTB8uXLMXXqVBQsWBAVKlTA/fv3ERMTI50okv4dww3lmowPkg/DyZUrV+Di4oK4uDg8efIETk5OaNKkCRwdHbFq1SrExsZKZxru3r07pk+fzolz9Nk+3obGjBmDlStXom/fvnj69ClOnz6N1q1bo2/fvnBycsKLFy/Qtm1bvHjxAkFBQXBzc9Ni9SQX+/btQ4sWLTB69GgMGTIE+/btw6hRo+Dv749Xr17h0qVLCAkJwW+//YZ+/frh+vXr8Pf3h46ODiwsLDBjxgyeKDK7cvWa4/TNe/DggejUqZP4+++/xZYtW4RCoRDnzp0TSqVSCCHE9evXRZkyZcTZs2eFEELcv39ftGjRQowdO1Y8fvxYm6WTzKSnpwshhFi3bp1wdHQUFy9eFEIIcfz4caFQKESpUqXE4MGDxYMHD4QQQjx//lz07dtXvHv3Tms1k/wsWrRI2NnZialTp4oBAwaI5cuXS8uePHkipkyZIhwcHMT+/fuzXD8tLe1LlfpV49gW5ark5GQcP34c3bt3x5UrV7B69Wp8//330i4qIQTevXuH06dPo3z58li7di0AYMSIETyHCH22bt26wcrKCvPnz4eOjg7S0tJgYGCAfv36oXLlyti5cyd69OiBFStWIDIyEtOmTYOOjg769OmDsmXLShOI+U2ZPldqaioMDAzg4+MDExMTjBkzBvHx8Zg2bZrUx87ODp6enjhw4AAuXLgAd3f3TBdg5S6pbNJ2uiL5yvimHBAQIHR0dESlSpXE5cuXVfrExsaK7t27i5IlSwoHBwdhZWUlfaMm+hyxsbFi8uTJwtLSUkyaNElqf/bsmYiKihLPnz8XVatWFfPmzZP6FylSRBQuXFgsWLBACCGkEUYiTfH19RXR0dFi/fr1wsTERDRr1kzcvXtXpU+HDh1E27ZttVShPPBoKcoVQgjo6OhACIEiRYpg3rx5ePfuHcaPH48TJ05I/UxNTTF37lwsWbIEEydOxNmzZ1G5cmUtVk5yEB8fD1NTU/Tv3x/jx4+Hv78/Jk6cCAAoUqQIrK2t8fz5c7x580aaT/Ps2TM0btwYEyZMgI+PDwCeS4Q+n/hgWuuaNWswdepU3Lt3D507d4afnx8uXbqEgIAA3LlzBwAQFxeHhw8folixYtoqWRY4vkUaJ/5/8ubhw4dx9OhRDBkyBC1btkSjRo3Qvn17zJw5E2PHjkWNGjUAvL8wZuPGjbVcNcnFqFGjsGzZMty/fx9WVlbo2rUrhBCYOnUqAGDy5MkA3gcgXV1dnDx5EkIIzJw5EyYmJtLht9wVRZqQEZAPHTqEy5cvIzAwUPrb17dvX6SlpWHy5MnYv38/KleujMTERKSmpmL27NnaLPvrp81hI5KfjGH8bdu2CTMzMzFmzBhx/vx5afm1a9dEuXLlRIsWLcQff/whJk2aJBQKhXjy5Al3AZBGXL16VdSpU0eULl1avHjxQgghRHR0tJg3b54wNzcXEyZMkPoOGDBAlCxZUtjZ2Qk3NzeRmpoqhODuKNKsI0eOiIoVK4qCBQuKnTt3CiGESElJkZavXLlS5M+fX1SuXFmsXbtWmsTOycM5x0PBSePOnTuHJk2aYNasWejTp4/UHhcXB1NTU9y6dQt9+vRBUlISYmNjsWXLFu6KIo04ffo0Xrx4gXLlyqFDhw5ISEiQrtz94sULrFu3DlOnTpUuSAi8Pz2BQqFAxYoVoaOjg3fv3nHSJn0W8dGpBxISEjBnzhwEBgaiWrVq2LhxI4yNjZGWlgZ9fX0AwPz583Hq1Cls3boVCoWCI4efieGGNG7RokXYsWMHDh06hNjYWBw+fBh//PEHbt26hREjRqBnz56Ijo5GbGwszMzMYG1tre2SSSY8PT3xzz//4ODBg3j06BF+/vlnxMfHZwo406ZNw4ABAzBlyhSV9fmBQpq0ePFi2NnZoXXr1khKSsLcuXOxY8cO1KtXDzNmzICRkZFKwMkIRR+HI1IfJxSTxtna2uLixYvw9fXFzz//jNWrV8PIyAjNmzdH7969cffuXVhbW6NUqVIMNqRRixcvxtOnT7Fo0SI4ODhg48aNMDMzQ82aNfHy5UtYWVmhW7dumDBhAqZNm4aVK1eqrM9gQ5ry4sULHD58GL/88gv2798PY2NjDBs2DC1atMCpU6cwbtw4JCcnQ19fH+/evQMABhsN4sgNfZaMX8SEhATkz58fABAVFYWFCxdiy5YtaNCgAbp3744ffvgBUVFRaNWqFYKCglC+fHktV05ykzHq8vvvv+Py5cuYP38+LCwscPv2bXh6eiI2NlYawYmMjMTRo0fx008/cRcUacTH56MBgKtXr+L333/HwYMHERAQgKZNmyIxMRGzZ8/GwYMHUbZsWSxZskS6dhRpDkdu6LMoFArs3bsXnTp1Qr169RAUFAQ9PT1MmzYNZ8+eRUBAANzc3KCjo4OFCxciMTGRozWUKzJGXerVq4djx45h7969AIDSpUtj3bp1sLCwQJ06dRAVFQVbW1t06NABenp60rdmos+REWwiIyOltkqVKmHw4MGoX78++vXrh/379yNfvnwYNWoUfvjhB+jo6Ei7pEjDtDSRmWTi5MmTwsjISIwcOVI0adJEODs7C29vbxEeHi71CQsLE3379hWWlpaZTuJHlFMZJ4nMSkBAgPjuu+/EnTt3pLY7d+4IBwcH0bFjxy9RHn0jPtwON23aJEqUKKFyhKgQQly5ckW0
bt1aFCtWTBw5ckQIIURSUpJ0VN6/bcuUMxy5oRyLiIhAaGgopk+fjtmzZ2Pfvn3o27cvrl27Bl9fXzx48ACJiYk4ffo0oqOjcfToUbi4uGi7bJKBD3cBnDt3DqdOncLRo0el5a1atUK1atUQFhYmtX333Xc4duwY/vjjjy9eL8lTSkqKtB2mpqaiZMmSKFOmDHx8fHDx4kWpX6VKleDh4YEnT56gcePGOHXqFIyMjKQ5Nh/vzqLPx58oZcuiRYvw119/Sffv3LmDDh06YNWqVTAyMpLafXx80KVLF9y8eROzZ89GTEwMRo4ciTVr1qBChQraKJ1k5sMPg7Fjx6J79+7o2bMnvLy80KFDB8TFxaFw4cLSfIa0tDRpXXt7e+jq6iI9PV1b5ZNM7Nu3D+vWrQMA9OnTBw0aNEDVqlUxfPhw2NrawtvbGxcuXJD6FytWDB07dsS8efNQrVo1qZ2Th3OJtoeOKO97+PCh6Ny5s7h3755K+6+//iqsra1F27ZtpZOlZVi6dKkoXbq0GDRoEE9ERbli7ty5omDBguLs2bMiPT1dzJgxQygUCnHixAmpT82aNYW3t7cWqyS56tSpk3BwcBDu7u6iUKFC4urVq9Kyw4cPCw8PD1GhQgWxb98+8fDhQ+Hh4SGGDx8u9eHV5nMXww1lS2JiohBCiDNnzoht27ZJ7RMmTBAVK1YU48ePF1FRUSrrLF++XDx8+PBLlknfCKVSKby8vERgYKAQQojt27cLc3NzERAQIIQQIj4+XgghxL59+0SrVq3EtWvXtFYryZeLi4tQKBQqF2bNcPz4cdGtWzehUCjEd999J5ydnaUvejwDdu7jMZCULcbGxoiJiYGvry+ePXsGXV1deHh4YPLkyUhLS8PevXshhMDgwYNhZWUFAOjdu7eWqya5Sk5OxtmzZ1GvXj0cOXIEXl5emDNnDry9vfHu3TvMnj0b1atXh5ubG6ZMmYJz586hYsWK2i6bZCI1NRXJyclwcnJCsWLFsHnzZhQtWhQdO3aUTolRq1YtVKtWDX369EFaWhrq1q0LXV1dngH7C+GcG8oWhUIBc3NzDB8+HI6OjvD390dwcDAAYMaMGWjSpAlCQ0MxY8YMvHz5UsvVkpxcu3YNT58+BQAMHToUR48ehbGxMTp37ow//vgDzZo1g5+fn3TByzdv3uDChQu4c+cOLCwssG7dOhQvXlybL4FkxsDAAKampti6dSt27dqF77//HrNnz8amTZsQHx8v9UtOTkbt2rXRoEEDaa4Xg82XwXBD2SLe78JE7dq1MXToUFhYWOD3339XCThubm64fPkyBM8LSRoghMDdu3dRv359rFq1Cv369cOCBQtgYWEBAHBzc0NERASqVauG6tWrAwD++ecfdO/eHTExMRgwYAAAoGTJkmjUqJHWXgfJjxACSqVSur9mzRrUqFEDfn5+WLt2LR4/fowGDRqgXbt2Un+AZ8D+kniGYsqWjLO/xsbGwsTEBNeuXcP06dPx5s0bDB48GB4eHgDen3I8Y7cUkSYsX74co0aNQnJyMnbt2oXGjRtLZ8bevHkzpkyZAiEE9PT0YGxsDKVSiVOnTkFfX5/XiqLP9vr1a1haWqq0ZWx/W7duRWhoKAIDAwEAffv2xZEjR5Ceng5LS0ucPHmSZx/WEo7c0H969+4ddHV18ejRI9SrVw8HDhxAlSpVMGLECFhZWWHy5MnYs2cPADDYkMZkfDO2t7eHoaEhTE1NcebMGTx69Eg6fLZDhw5Yu3YtpkyZgvbt22P06NE4c+aMdL0eBhv6HAsWLMD333+vsqsJgBRsunfvjkqVKkntgYGBWLZsGRYuXIgzZ87AwMCAZ8DWFu3MY6a86lOz+MPDw4WNjY3o3bu3yiGMR44cEd26dROPHj36UiWSzH28DaampoqkpCSxdOlSUbRoUTF27Nj/3N54mC19rmXLlglDQ0OxYcOGTMseP34sKlasKBYtWiS1ZbXNcTvUHu6WIon4/6HW06dP49atWwgPD4enpycKFy6MNWvW4MKFC1izZk2mK9cmJyernMiPKKc+PPPw69evER8frzIZ2N/fH3PnzkWvXr3Qo0cPODg4oGXLlhg3bhzc3Ny0VTbJzPLlyzFw4ECsW7cO7dq1Q0xMDBITE5GcnAxra2sUKFAA9+7dQ6lSpbRdKn0Cww2p2L59O/r27StdYPDFixfo0KEDRo8ejQIFCmi7PJKxD4PNlClTcODAAdy4cQPt27dHmzZt0LRpUwDvA46/vz8qVKiAV69e4fHjx3j06BEvQEga8eDBAzg5OaF9+/bYtGkTbty4gV9++QUvXrxAREQE6tevj/79+6NFixbaLpX+BY9JI8mNGzcwdOhQzJs3D927d0dcXBzMzc1hbGzMYEO5LiPYTJgwAYGBgZgzZw4cHBzQr18/3Lt3DzExMejUqROGDBmCQoUK4erVq0hOTsbx48elq3vzMFv6XFZWVpg1axYmTJiAESNG4MCBA6hduzZat26NuLg4bNu2DePHj0ehQoU4WpiXaXOfGGnP4cOHxf379zO1Va9eXQghxK1bt0Tx4sVF7969peX379/nPmTKVYcPHxbly5cXx44dE0IIcerUKWFgYCDKlSsnqlWrJrZu3Sr1/fCyHrzEB2lScnKymDt3rtDR0RE9e/YUqamp0rILFy6I0qVLi8WLF2uxQvovPFrqGyOEwOXLl9G0aVMsXboUERER0rJnz55BCIGEhAQ0adIEjRs3xrJlywAAoaGhWLp0Kd68eaOt0kmGxEd7xYsWLYr+/fujdu3aOHDgAFq0aIHAwECEhobi/v37+P3337Fy5UoAUBml4YgNaZKhoSH69euH7du3o3fv3tDX15e21SpVqsDIyAhPnjzRcpX0bxhuvjEKhQKurq6YN28etmzZgqVLl+LBgwcAgObNmyMqKgqmpqZo3rw5AgMDpV0FISEhuHbtGg+tJY1RKpXSpPQHDx4gMTERpUqVQqdOnZCcnIwFCxZg0KBB6NatG4oUKYLy5csjPDwct27d0nLl9C3Ily8fmjZtKp0gMmNbjY6OhrGxMcqXL6/N8ug/8OvONyZjXoKPjw8AYM6cOdDV1UXv3r3h6OiI3377DTNmzMC7d+/w9u1bhIeHY+PGjVixYgVOnDghnR2W6HN8OHl4woQJOH36NEaOHIn69evD0tISiYmJeP78OUxMTKCjo4OUlBQ4ODhg1KhRaNKkiZarJzkSHxwBmsHQ0FD6f3p6Ol6+fIk+ffpAoVCgU6dOX7pEUgPDzTcmY+TlwIED0NHRQVpaGvz9/ZGcnIzRo0ejffv2SEpKwowZM7Bt2zbY2NjAwMAAYWFhqFChgparJ7n4MNgsW7YMgYGBcHV1lY54SklJgaWlJU6cOCFNGn716hVWrVoFHR0dlXBElBMRERF4/fo1ChYsCFtb2389k3BaWhrWrVuHjRs34vXr1zhz5ox0rSiOZudNPBT8GxQSEiJdbDBfvny4d+8efv/9d/zyyy8YPXo0rKysEB8fj6NHj8LBwQHW1tawtrbWdtn0lfs4kNy9exceHh6
YNWsWWrZsmanf+fPnMX78eCQkJMDS0hLBwcHQ19dnsKHPtnbtWsybNw/R0dEoVKgQBg4cKI3IZPh4OwsNDcXNmzcxYMAAHp33FWC4+cYolUp06dIFCoUCGzZskNoXLlyIUaNGwcfHB7/88gtKlCihxSpJbtq2bYuxY8eiatWqUtuVK1fQpEkTHD16FKVLl87yxJDJyckQQsDIyAgKhYIfKPTZ1q5dCx8fH+nSCjNmzMCDBw9w8uRJadvKCDYxMTE4cOAA2rdvr/IYHLHJ+/j15xuT8U0kY/g/NTUVADBw4EB4e3tj9erV+P3331WOoiL6XGZmZnB2dlZpMzIywps3b3Djxg2pLeN6UqdPn8b27duho6MDY2NjKBQKKJVKBhv6LBcuXMDUqVOxaNEi9OzZExUrVsTQoUPh5OSEU6dO4ebNm4iLi5N22a9Zswa//PIL/vjjD5XHYbDJ+xhuvhH//POP9P/SpUvjzz//RHR0NAwMDJCWlgYAsLOzg4mJCcLCwmBsbKytUklGnj17BgBYvXo1DAwM8Pvvv+PAgQNITU2Fk5MTOnTogDlz5uDgwYNQKBTQ0dFBeno6pk+fjrCwMJV5ENwVRZ8rJSUFQ4YMQfPmzaW2SZMm4dChQ+jUqRM8PT3RsWNHvH79Gvr6+mjWrBlGjBjBycNfIe6W+gZcvXoVAwYMQOfOndG/f3+kpqaiQYMGePnyJY4cOQJbW1sAwOjRo1G+fHm0aNEClpaWWq6avnZ9+vQBAIwZM0bazens7IyXL19i06ZNqFOnDo4fPw4/Pz9cv34dXbp0gYGBAQ4dOoQXL17g0qVLHKkhjVIqlXjx4gVsbGwAAJ6enjh48CB2794Ne3t7HD16FNOmTcPo0aPRuXNnlTk43BX1deFXoW+AiYkJzM3NsW3bNgQFBcHAwADLli2DlZUVypYtCw8PDzRu3BgLFixA1apVGWxII5ydnbF//34sXboU4eHhAIBr166hdOnS6NKlC44dO4batWtjypQp8PT0xLp163D48GEUK1YMFy9elCZtEmmKjo6OFGwAYMSIETh79iyqVq0KGxsbNG3aFK9fv0ZUVFSmw8IZbL4uHLn5RoSHh2Ps2LGIjIxEnz590K1bN6Snp2Pu3LmIiIiAEAIDBw5EuXLltF0qyciqVaswYcIEdOzYEX369EHp0qUBAHXq1MHDhw+xfv161KlTBwDw9u1bmJiYSOty8jB9aU+fPkXXrl0xYsQIXhjzK8dwI1OXLl3C8+fPVfYth4eHY/z48Xj06BEGDhyILl26aLFCkrMPD6NduXIlJkyYgE6dOmUKOBEREVi7di2qV6+uMr8mqxOqEanjw20o4/8Z/7548QJWVlYq/RMTE9GpUyfExsbi8OHDHKn5yjHcyFB8fDyaN28OXV1djBo1Ck2bNpWWPXr0CE2aNIGJiQl69+6NX375RYuVktx86hw0y5cvx+TJk9GhQwf07dtXCjgNGjTAyZMncebMGbi6un7pckmmstoOM9qCg4OxceNGLFiwAEWKFEFSUhJ27dqFdevW4dmzZzh//jz09fU5x+Yrxzk3MpKRUwsUKIDZs2dDT08PixYtwt69e6U+Dg4OqF+/PiIjI3Ho0CHExMRoqVqSmw8/UE6dOoWwsDBcvXoVwPvJxb/99hs2bdqEwMBA3LlzBwBw+PBh9O7dO9Nh4kQ5deLECemilsOGDcPMmTMBvJ9vs3nzZnh6eqJRo0YoUqQIgPcXXX348CFKlCiBCxcuQF9fH+/evWOw+cpx5EYGMoZaM75pZHzInD17Fr/++ivy5cuH/v37S7uohg8fjhIlSqBt27YoXLiwlqsnOfhwF8CwYcOwefNmJCQkwM7ODsWKFcO+ffsAAMuWLcO0adPQsWNHeHl5qVzSg9+U6XMIIRAbGwtra2s0bdoUhQoVQnBwMI4fP44KFSogJiYGbm5u8PHxwcCBA6V1PvzbCXA7lAuGm69cxi9nWFgYdu/ejdevX6NWrVpo164dzM3NcebMGfz2229ISUlBiRIlYGJigs2bN+Pq1auws7PTdvkkAx8GmwMHDmDIkCEIDAyEubk5/v77b0ycOBH58uXDhQsXALyfg+Pt7Q1/f38MGDBAm6WTDEVHR6NEiRJIT0/H9u3b0axZM2lZVnNtspqbQ18/7pb6yikUCuzYsQMtW7bE27dv8fbtW6xbtw79+/fH69ev4ebmhrlz56Ju3boIDw/HgwcPcPjwYQYb0piMD4Pdu3dj06ZNaNSoEWrVqoUKFSrg559/xtq1a5GQkID+/fsDAHr16oVdu3ZJ94k0JSUlBZGRkTAxMYGuri5WrVolnYYAAAoVKiT9P+Ns2B+GGQYb+eDIzVfuwoUL6NixI3799Vf07t0bERERqFy5MoyNjeHi4oK1a9fC0tJSulbPx4fbEmnC69ev0aJFC1y9ehX169fHnj17VJaPHTsWJ0+exF9//YV8+fJJ7dwFQJ/rU5PYHz16BGdnZ9SvXx/z589HyZIltVAdaQtHbr4ivr6+GDdunPSNA3h/ens3Nzf07t0bjx49QsOGDeHh4YHx48fj/Pnz+OWXX/D69WsYGRkBAIMNacSH2yAAWFpaYs2aNfjxxx9x+fJlrF69WmV5qVKl8OrVKyQlJam0M9jQ5/gw2Bw5cgQbNmzA1atX8ezZMzg4OODkyZMICwvDqFGjpEnsbdq0wcKFC7VZNn0BHLn5iixcuBCDBw/GjBkzMGrUKOmX+tatWyhdujRat24tfcgolUq4uLggPDwczZs3x+bNm3ltHtKIDz9Q7t+/D4VCARMTE9ja2uLhw4fw8fFBYmIi2rVrB29vb0RFRcHLywtGRkbYs2cPh/5J40aMGIE1a9ZAT08P+fPnh62tLfz8/FC1alVcv34d9evXh4ODA1JTU/Hu3TtcvXpVungwyZSgr4JSqRRCCLF8+XKho6Mjpk6dKtLS0qTlT548EWXLlhV79uwRQgjx+vVr0alTJ7Fw4ULx9OlTrdRM8pOxHQohxMSJE0XFihVFmTJlROHChUVgYKAQQojw8HDRrFkzYWRkJEqXLi3atGkj3N3dRVJSkhBCiPT0dK3UTvLx4XYYGhoqKlWqJI4fPy5ev34tdu3aJdq0aSOcnJzEpUuXhBBC3Lt3T0yZMkVMnz5d+rv54d9Pkh+Gm6+AUqmUfpmVSqX4448/hI6Ojpg2bZr0QREdHS1cXFyEt7e3ePTokRg7dqz4/vvvRVRUlDZLJ5maMmWKsLKyEiEhISIhIUG0adNGmJubi5s3bwohhHjw4IFo3ry5cHFxEX5+ftJ6ycnJWqqY5GjNmjViwIABom/fvirt58+fF02aNBFeXl4iISFBCKEaiBhs5I/7Kb4SCoUCBw8exPDhw1GlShXpmj0zZ86EEAIWFhbo0qULjh49Cjc3N6xduxYBAQGwtrbWdukkAx/OsVEqlTh37hz8/PzQuHFjhIaG4siRI5gxYwbKlSuHtLQ0ODo6Yt68ebCxscHevXsRHBwMADA0NNTWSyAZEB/Noti5cycWL16MK1euICUlRWqvWrUqateujR
MnTiA9PR2A6pFQvGbZN0Db6YqyZ/v27cLY2FhMnTpVnD9/XgghRGBgoLSLSgghUlJSxM2bN0VoaKh48uSJNsslmZowYYKYOXOmKFq0qLhz544ICwsT+fPnF0uXLhVCCPH27Vsxbtw48ejRIyGEEHfv3hUtWrQQVatWFcHBwdosnb5yH468rF+/Xqxdu1YIIcSAAQOEubm5WLx4sYiNjZX6hISEiDJlykjbIn1bGG6+Anfu3BGOjo5iyZIlmZYtW7ZM2kVFpGkfzo/ZtGmTsLe3Fzdu3BBdu3YV7u7uwsTERKxcuVLq8+zZM1G7dm2xdu1aad1bt26Jn3/+WURERHzx+kkePtwOb9y4IVxdXUWlSpXErl27hBBCeHl5iVKlSonp06eL8PBwER4eLho2bCjq1q2rEoro28Gxua/A48ePoa+vr3KmzYwjVvr27Yt8+fKhW7duMDQ0xIgRI7RYKclNxlFRR48exZEjRzB8+HCUL19eOjlkw4YN0bNnTwDvL9jau3dv6OrqonPnztDR0YFSqUSZMmWwYcMGHp1COZaxHY4cORIPHz6EsbExbt++jaFDh+Ldu3cICgpCz549MX78eCxcuBA1a9ZE/vz5sXnzZigUik+eC4fki+HmK5CQkKByfhClUintPz5y5AiqVKmCzZs3q1ynh0hTIiMj0atXL0RHR2Ps2LEAgH79+uH+/fs4fPgwXF1dUapUKTx+/BjJyck4f/48dHV1VU7QxzkO9LmCgoKwYsUKHDp0CI6OjkhJSYGXlxd8fX2ho6ODVatWwcTEBFu2bEGTJk3QsWNHGBoaIjU1FQYGBtoun74wRtmvQKVKlfDy5UsEBgYCeP8tJiPc7Nq1Cxs2bEDbtm1RtmxZbZZJMmVra4vg4GDY2Njgzz//xMWLF6Grq4s5c+ZgypQpaNCgAWxtbdGhQ4dPXlWZ57ahzxUeHo4KFSrAxcUFZmZmsLW1xapVq6Crq4uhQ4dix44dWLRoERo1aoT58+dj9+7diI+PZ7D5RvHr1FfA0dERixYtQr9+/ZCWlgZPT0/o6uoiKCgIQUFBOH36NM/0SrnK2dkZ27dvh5eXFwICAjBw4EA4OzujVatWaNWqlUrf9PR0jtSQxoj/v5iloaEhkpOTkZqaCiMjI6SlpaFo0aLw9fVFixYt4O/vD2NjY2zYsAGdO3fGiBEjoKenh/bt22v7JZAW8AzFXwmlUont27fD29sb+fLlg5GREXR1dbFx40a4urpquzz6Rly+fBm9e/dGlSpVMHjwYJQvX17bJdE34vr163B1dcVvv/2GiRMnSu0hISFYvnw53rx5g/T0dBw5cgQA0KNHD/z2228oUaKEliombWK4+cr8888/iIiIgEKhgKOjI2xsbLRdEn1jLl++DG9vbxQvXhyzZ8+Go6Ojtkuib0RQUBD69u2LIUOGoEOHDrCwsMCgQYNQo0YNtGnTBuXLl8fevXvRtGlTbZdKWsZwQ0RqO3fuHAICArBixQoehUJf1Pbt2/HLL7/AwMAAQghYW1vj1KlTiIqKwo8//oht27bB2dlZ22WSljHcEFGOZMyF4GG29KU9e/YMT548QVpaGmrWrAkdHR2MGTMGO3fuRFhYGGxtbbVdImkZww0R5VhGwCHSlps3b2LWrFn466+/cPDgQbi4uGi7JMoDeEgDEeUYgw1p07t375Camgpra2scPXqUE9xJwpEbIiL6qqWlpfEM2KSC4YaIiIhkhbMAiYiISFYYboiIiEhWGG6IiIhIVhhuiIiISFYYbohI9o4cOQKFQoGYmJhsr+Pg4AB/f/9cq4mIcg/DDRFpXffu3aFQKNCvX79My3x8fKBQKNC9e/cvXxgRfZUYbogoT7C3t8emTZuQlJQktSUnJ2PDhg0oVqyYFisjoq8Nww0R5QmVK1eGvb09goODpbbg4GAUK1YMrq6uUltKSgoGDRoEa2trGBkZoVatWjh//rzKY/3111/47rvvYGxsjPr16+PRo0eZnu/EiROoXbs2jI2NYW9vj0GDBiExMTHXXh8RfTkMN0SUZ/Ts2ROrV6+W7q9atQo9evRQ6TNq1Chs374da9aswaVLl+Dk5AR3d3e8fv0aAPDkyRO0bdsWLVu2xJUrV9C7d2/8+uuvKo9x//59NGnSBD/99BOuXbuGzZs348SJExgwYEDuv0giynUMN0SUZ3Tt2hUnTpxAREQEIiIicPLkSXTt2lVanpiYiKVLl2LOnDlo2rQpypUrh+XLl8PY2BgrV64EACxduhQlS5bEvHnzULp0aXTp0iXTfB1fX1906dIFQ4YMQalSpVCjRg38/vvvWLt2LZKTk7/kSyaiXMALZxJRnmFlZYXmzZsjKCgIQgg0b94chQoVkpbfv38faWlpqFmzptSmr6+PH374Abdu3QIA3Lp1C9WqVVN53OrVq6vcv3r1Kq5du4b169dLbUIIKJVKPHz4EGXLls2Nl0dEXwjDDRHlKT179pR2Dy1evDhXniMhIQHe3t4YNGhQpmWcvEz09WO4IaI8pUmTJkhNTYVCoYC7u7vKspIlS8LAwAAnT55E8eLFAby/IvT58+cxZMgQAEDZsmWxe/dulfXOnDmjcr9y5cr4+++/4eTklHsvhIi0hnNuiChP0dXVxa1bt/D3339DV1dXZVm+fPnQv39/jBw5Evv378fff/+NPn364O3bt+jVqxcAoF+/frh37x5GjhyJO3fuYMOGDQgKClJ5nNGjR+PUqVMYMGAArly5gnv37mHXrl2cUEwkEww3RJTnmJqawtTUNMtlM2fOxE8//YRu3bqhcuXKCA8PR0hICCwsLAC83620fft27Ny5E5UqVUJAQABmzJih8hjOzs44evQo7t69i9q1a8PV1RUTJkxAkSJFcv21EVHuUwghhLaLICIiItIUjtwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGsMNwQERGRrDDcEBERkaww3BAREZGs/B+XLE52CERTBAAAAABJRU5ErkJggg==",
- "text/plain": [
- "
\n"
- ],
- "application/vnd.google.colaboratory.module+javascript": "\n import \"https://ssl.gstatic.com/colaboratory/data_table/881c4a0d49046431/data_table.js\";\n\n const table = window.createDataTable({\n data: [[{\n 'v': 0,\n 'f': \"0\",\n },\n\"how do i call completion() using LiteLLM\",\n\"To call the `completion()` function using LiteLLM, you need to follow these steps:\\n\\n1. Install the `litellm` package by running `pip install litellm` in your terminal.\\n2. Import the `Completion` class from the `litellm` module.\\n3. Initialize an instance of the `Completion` class by providing the required parameters like the API endpoint URL and your API key.\\n4. Call the `complete()` method on the `Completion` instance and pass the text prompt as a string.\\n5. Retrieve the generated completion from the response object and use it as desired.\\n\\nHere's an example:\\n\\n```python\\nfrom litellm.completion import Completion\\n\\n# Initialize the Completion client\\ncompletion_client = Completion(\\n model_name='gpt-3.5-turbo',\\n api_key='your_api_key',\\n endpoint='https://your_endpoint_url'\\n)\\n\\n# Call the completion() method\\nresponse = completion_client.complete(\\\"Once upon a time\\\")\\n\\n# Retrieve the generated completion\\ncompletion = response['choices'][0]['text']\\n\\nprint(completion)\\n```\\n\\nMake sure to replace `'gpt-3.5-turbo'` with the desired model name, `'your_api_key'` with your actual API key, and `'https://your_endpoint_url'` with the correct API endpoint URL provided by your service provider.\\n\\nNote: The above example assumes you have a valid API key and endpoint URL for the OpenAI GPT-3.5-turbo model. Make sure to obtain the necessary credentials according to the API you are using.\",\n\" Here is how you can call the completion() method using LiteLLM:\\n\\nFirst, import LiteLLM:\\n\\n```python\\nimport litellm as lm\\n```\\n\\nThen create a LiteLLM object, specifying the API you want to use (e.g. \\\"openai\\\"):\\n\\n```python \\nai = lm.LiteLLM(\\\"openai\\\")\\n```\\n\\nNow you can call the completion() method on the ai object:\\n\\n```python\\nresponse = ai.completion(\\n prompt=\\\"Hello\\\", \\n model=\\\"text-davinci-003\\\",\\n max_tokens=100\\n)\\n```\\n\\nThe completion() method takes parameters like:\\n\\n- prompt (str): The prompt text to complete \\n- model (str): The AI model to use\\n- max_tokens (int): The maximum number of tokens to generate\\n\\nIt returns a Python dictionary with the AI's response.\\n\\nYou can then access the generated text using:\\n\\n```python\\nprint(response[\\\"choices\\\"][0][\\\"text\\\"]) \\n```\\n\\nSo LiteLLM provides a simple unified interface to call the underlying AI APIs. The completion() method works similarly for OpenAI\"],\n [{\n 'v': 1,\n 'f': \"1\",\n },\n\"does LiteLLM support VertexAI\",\n\"Yes, LiteLLM does support Google Cloud Vertex AI. It provides convenient wrappers and simplified functions to call Vertex AI API endpoints for natural language processing tasks such as text classification, entity extraction, sentiment analysis, etc. You can easily integrate LiteLLM with Vertex AI in your code to leverage its capabilities.\",\n\" Unfortunately, LiteLLM does not currently support VertexAI. LiteLLM focuses on providing a simple interface to call the APIs of services like OpenAI, Azure, Cohere, Anthropic, and Hugging Face. \\n\\nVertexAI is Google's managed machine learning platform. 
Integrating VertexAI would require additional development work to wrap the VertexAI SDK in a simple interface like LiteLLM provides for other services. \\n\\nHowever, LiteLLM is open source, so it is possible for someone to contribute support for VertexAI. The maintainers would likely welcome a pull request to add VertexAI as an option if implemented well. But out of the box, LiteLLM does not have built-in support for calling VertexAI APIs.\\n\\nThe key services LiteLLM supports are:\\n\\n- OpenAI (GPT, Codex, DALL-E)\\n- Azure Cognitive Services (Text Analytics, Computer Vision, Speech) \\n- Cohere\\n- Anthropic AI\\n- Hugging Face Transformers\\n\\nSo while it doesn't cover every ML API provider, it does make it easy to use the most popular natural language, speech, and vision APIs through a simple interface. Adding VertexAI\"],\n [{\n 'v': 2,\n 'f': \"2\",\n },\n\"how do I set my keys on replicate llama2?\",\n\"To set your keys on Replicate Llama2, follow these steps:\\n\\n1. Open the Llama2 dashboard in your browser.\\n2. Click on the \\\"Settings\\\" tab in the top menu.\\n3. Scroll down to the \\\"API Keys\\\" section.\\n4. Click on the \\\"Add a Key\\\" button.\\n5. Enter a name for your API key to help you identify it later.\\n6. Select the provider for your API key from the dropdown menu. For example, you can select \\\"OpenAI\\\" for OpenAI GPT-3 access.\\n7. Enter your API key in the provided input field. Make sure to copy it correctly.\\n8. Click on the \\\"Save\\\" button to save your API key.\\n\\nNote: The actual steps may vary slightly depending on the platform or interface you are using to access Llama2.\",\n\" Here are the steps to set your API keys on Replicate for litellm:\\n\\n1. Go to your Replicate project settings and select the Environment tab.\\n\\n2. Under Environment Variables, click Add Variable.\\n\\n3. Add variables for the API keys you want to use. The variable names should match the ones used in litellm:\\n\\n- `OPENAI_API_KEY` for OpenAI \\n- `AZURE_API_KEY` for Azure Cognitive Services\\n- `COHERE_API_KEY` for Cohere\\n- `ANTHROPIC_API_KEY` for Anthropic\\n- `HUGGINGFACE_API_KEY` for Hugging Face\\n\\n4. Set the value to your actual API key for each service. Make sure to treat the values as secrets.\\n\\n5. Make sure your litellm code is referencing the environment variable names, for example:\\n\\n```python\\nimport litellm as lm\\n\\nlm.auth(openai_key=os.getenv(\\\"OPENAI_API_KEY\\\")) \\n```\\n\\n6. Restart your Replicate runtime to load the new environment variables.\\n\\nNow litellm will use your\"]],\n columns: [[\"number\", \"index\"], [\"string\", \"Question\"], [\"string\", \"gpt-3.5-turbo\"], [\"string\", \"claude-2\"]],\n columnOptions: [{\"width\": \"1px\", \"className\": \"index_column\"}],\n rowsPerPage: 25,\n helpUrl: \"https://colab.research.google.com/notebooks/data_table.ipynb\",\n suppressOutputScrolling: true,\n minimumWidth: undefined,\n });\n\n function appendQuickchartButton(parentElement) {\n let quickchartButtonContainerElement = document.createElement('div');\n quickchartButtonContainerElement.innerHTML = `\n
-
-"""
-
-# see supported values for "voice" on vertex here:
-# https://console.cloud.google.com/vertex-ai/generative/speech/text-to-speech
-response = client.audio.speech.create(
- model = "vertex-tts",
- input=ssml,
- voice={'languageCode': 'en-US', 'name': 'en-US-Studio-O'},
-)
-print("response from proxy", response)
-```
-
-
-
-
-
-### Forcing SSML Usage
-
-You can force the use of SSML by setting the `use_ssml` parameter to `True`. This is useful when you want to ensure that your input is treated as SSML, even if it doesn't contain the `<speak>` tags.
-
-Here are examples of how to force SSML usage:
-
-
-
-
-
-Vertex AI does not support passing a `model` param - so passing `model=vertex_ai/` is the only required param
-
-
-```python
-speech_file_path = Path(__file__).parent / "speech_vertex.mp3"
-
-
-ssml = """
-
-
-
-"""
-
-# see supported values for "voice" on vertex here:
-# https://console.cloud.google.com/vertex-ai/generative/speech/text-to-speech
-response = client.audio.speech.create(
- model = "vertex-tts",
- input=ssml, # OpenAI SDK requires this param; treated as SSML since use_ssml=True is set below
- voice={'languageCode': 'en-US', 'name': 'en-US-Studio-O'},
- extra_body={"use_ssml": True},
-)
-print("response from proxy", response)
-```
-
-
-
-
-## Extra
-
-### Using `GOOGLE_APPLICATION_CREDENTIALS`
-Here's the code for storing your service account credentials as `GOOGLE_APPLICATION_CREDENTIALS` environment variable:
-
-
-```python
-import os
-import json
-import tempfile
-
-def load_vertex_ai_credentials():
- # Define the path to the vertex_key.json file
- print("loading vertex ai credentials")
- filepath = os.path.dirname(os.path.abspath(__file__))
- vertex_key_path = filepath + "/vertex_key.json"
-
- # Read the existing content of the file or create an empty dictionary
- try:
- with open(vertex_key_path, "r") as file:
- # Read the file content
- print("Read vertexai file path")
- content = file.read()
-
- # If the file is empty or not valid JSON, create an empty dictionary
- if not content or not content.strip():
- service_account_key_data = {}
- else:
- # Attempt to load the existing JSON content
- file.seek(0)
- service_account_key_data = json.load(file)
- except FileNotFoundError:
- # If the file doesn't exist, create an empty dictionary
- service_account_key_data = {}
-
- # Create a temporary file
- with tempfile.NamedTemporaryFile(mode="w+", delete=False) as temp_file:
- # Write the updated content to the temporary file
- json.dump(service_account_key_data, temp_file, indent=2)
-
- # Export the temporary file as GOOGLE_APPLICATION_CREDENTIALS
- os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.abspath(temp_file.name)
-```
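-
-Once the helper has run, any `vertex_ai/` request made through LiteLLM picks up the exported credentials. Below is a minimal usage sketch - the model name and prompt are illustrative placeholders, and depending on your setup you may also need to set `litellm.vertex_project` / `litellm.vertex_location`:
-
-```python
-import litellm
-
-# export GOOGLE_APPLICATION_CREDENTIALS from vertex_key.json (helper defined above)
-load_vertex_ai_credentials()
-
-# any Vertex AI model you have access to; "gemini-pro" is just an example
-response = litellm.completion(
-    model="vertex_ai/gemini-pro",
-    messages=[{"role": "user", "content": "Hello from LiteLLM"}],
-)
-print(response["choices"][0]["message"]["content"])
-```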
-
-
-### Using GCP Service Account
-
-:::info
-
-Trying to deploy LiteLLM on Google Cloud Run? Tutorial [here](https://docs.litellm.ai/docs/proxy/deploy#deploy-on-google-cloud-run)
-
-:::
-
-1. Figure out the Service Account bound to the Google Cloud Run service
-
-
-
-2. Get the FULL EMAIL address of the corresponding Service Account (a quick way to check this from inside the running container is sketched after these steps)
-
-3. Next, go to IAM & Admin > Manage Resources , select your top-level project that houses your Google Cloud Run Service
-
-Click `Add Principal`
-
-
-
-4. Specify the Service Account as the principal and Vertex AI User as the role
-
-
-
-Once that's done, when you deploy the new container in the Google Cloud Run service, LiteLLM will have automatic access to all Vertex AI endpoints.
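-
-If you are unsure which service account your Cloud Run revision is actually running as (steps 1-2 above), one way to check from inside the container is to query the GCP metadata server. This is a sketch and assumes the `requests` package is available in your image:
-
-```python
-import requests
-
-# Cloud Run exposes the runtime service account via the standard GCE metadata server
-resp = requests.get(
-    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email",
-    headers={"Metadata-Flavor": "Google"},
-    timeout=5,
-)
-print("Runtime service account:", resp.text)
-```
-
-The email printed here is the principal you grant the `Vertex AI User` role to in step 4.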
-
-
-s/o @[Darien Kindlund](https://www.linkedin.com/in/kindlund/) for this tutorial
-
-
-
-
diff --git a/docs/my-website/docs/providers/vllm.md b/docs/my-website/docs/providers/vllm.md
deleted file mode 100644
index 5388a0bb7..000000000
--- a/docs/my-website/docs/providers/vllm.md
+++ /dev/null
@@ -1,199 +0,0 @@
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# VLLM
-
-LiteLLM supports all models on VLLM.
-
-# Quick Start
-
-## Usage - litellm.completion (calling vLLM endpoint)
-vLLM provides an OpenAI-compatible endpoint - here's how to call it with LiteLLM.
-
-In order to use litellm to call a hosted vLLM server, add the following to your completion call:
-
-* `model="hosted_vllm/"`
-* `api_base = "your-hosted-vllm-server"`
-
-```python
-import litellm
-
-messages = [{"role": "user", "content": "Hello, how are you?"}]
-
-response = litellm.completion(
- model="hosted_vllm/facebook/opt-125m", # pass the vllm model name
- messages=messages,
- api_base="https://hosted-vllm-api.co",
- temperature=0.2,
- max_tokens=80)
-
-print(response)
-```
-
-
-## Usage - LiteLLM Proxy Server (calling vLLM endpoint)
-
-Here's how to call an OpenAI-Compatible Endpoint with the LiteLLM Proxy Server
-
-1. Modify the config.yaml
-
- ```yaml
- model_list:
- - model_name: my-model
- litellm_params:
- model: hosted_vllm/facebook/opt-125m # add hosted_vllm/ prefix to route as OpenAI provider
- api_base: https://hosted-vllm-api.co # add api base for OpenAI compatible provider
- ```
-
-2. Start the proxy
-
- ```bash
- $ litellm --config /path/to/config.yaml
- ```
-
-3. Send Request to LiteLLM Proxy Server
-
-
-
-
-
- ```python
- import openai
- client = openai.OpenAI(
- api_key="sk-1234", # pass litellm proxy key, if you're using virtual keys
- base_url="http://0.0.0.0:4000" # litellm-proxy-base url
- )
-
- response = client.chat.completions.create(
- model="my-model",
- messages = [
- {
- "role": "user",
- "content": "what llm are you"
- }
- ],
- )
-
- print(response)
- ```
-
-
-
-
- ```shell
- curl --location 'http://0.0.0.0:4000/chat/completions' \
- --header 'Authorization: Bearer sk-1234' \
- --header 'Content-Type: application/json' \
- --data '{
- "model": "my-model",
- "messages": [
- {
- "role": "user",
- "content": "what llm are you"
- }
- ],
- }'
- ```
-
-
-
-
-
-## Extras - for `vllm pip package`
-### Using - `litellm.completion`
-
-```
-pip install litellm vllm
-```
-```python
-import litellm
-
-messages = [{"role": "user", "content": "Hello, how are you?"}]
-
-response = litellm.completion(
- model="vllm/facebook/opt-125m", # add a vllm prefix so litellm knows the custom_llm_provider==vllm
- messages=messages,
- temperature=0.2,
- max_tokens=80)
-
-print(response)
-```
-
-
-### Batch Completion
-
-```python
-from litellm import batch_completion
-
-model_name = "facebook/opt-125m"
-provider = "vllm"
-messages = [[{"role": "user", "content": "Hey, how's it going"}] for _ in range(5)]
-
-response_list = batch_completion(
- model=model_name,
- custom_llm_provider=provider, # can easily switch to huggingface, replicate, together ai, sagemaker, etc.
- messages=messages,
- temperature=0.2,
- max_tokens=80,
- )
-print(response_list)
-```
-### Prompt Templates
-
-For models with special prompt templates (e.g. Llama2), we format the prompt to fit their template.
-
-**What if we don't support a model you need?**
-You can also specify your own custom prompt formatting, in case we don't have your model covered yet.
-
-**Does this mean you have to specify a prompt for all models?**
-No. By default we'll concatenate your message content to make a prompt (expected format for Bloom, T-5, Llama-2 base models, etc.)
-
-**Default Prompt Template**
-```python
-def default_pt(messages):
- return " ".join(message["content"] for message in messages)
-```
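-
-For instance, with this default template two chat messages collapse into a single space-joined prompt string (the messages below are illustrative):
-
-```python
-# illustrative only - default_pt mirrors the default template shown above
-msgs = [
-    {"role": "user", "content": "Where does Paul Graham live?"},
-    {"role": "assistant", "content": "He maintains permanent residence in England."},
-]
-print(default_pt(msgs))
-# -> "Where does Paul Graham live? He maintains permanent residence in England."
-```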
-
-[Code for how prompt templates work in LiteLLM](https://github.com/BerriAI/litellm/blob/main/litellm/llms/prompt_templates/factory.py)
-
-
-#### Models we already have Prompt Templates for
-
-| Model Name | Works for Models | Function Call |
-|--------------------------------------|-----------------------------------|------------------------------------------------------------------------------------------------------------------|
-| meta-llama/Llama-2-7b-chat | All meta-llama llama2 chat models | `completion(model='vllm/meta-llama/Llama-2-7b', messages=messages, api_base="your_api_endpoint")` |
-| tiiuae/falcon-7b-instruct | All falcon instruct models | `completion(model='vllm/tiiuae/falcon-7b-instruct', messages=messages, api_base="your_api_endpoint")` |
-| mosaicml/mpt-7b-chat | All mpt chat models | `completion(model='vllm/mosaicml/mpt-7b-chat', messages=messages, api_base="your_api_endpoint")` |
-| codellama/CodeLlama-34b-Instruct-hf | All codellama instruct models | `completion(model='vllm/codellama/CodeLlama-34b-Instruct-hf', messages=messages, api_base="your_api_endpoint")` |
-| WizardLM/WizardCoder-Python-34B-V1.0 | All wizardcoder models | `completion(model='vllm/WizardLM/WizardCoder-Python-34B-V1.0', messages=messages, api_base="your_api_endpoint")` |
-| Phind/Phind-CodeLlama-34B-v2 | All phind-codellama models | `completion(model='vllm/Phind/Phind-CodeLlama-34B-v2', messages=messages, api_base="your_api_endpoint")` |
-
-#### Custom prompt templates
-
-```python
-import litellm
-from litellm import completion
-
-messages = [{"role": "user", "content": "Hey, how's it going?"}]
-
-# Create your own custom prompt template
-litellm.register_prompt_template(
- model="togethercomputer/LLaMA-2-7B-32K",
- roles={
- "system": {
- "pre_message": "[INST] <>\n",
- "post_message": "\n<>\n [/INST]\n"
- },
- "user": {
- "pre_message": "[INST] ",
- "post_message": " [/INST]\n"
- },
- "assistant": {
- "pre_message": "\n",
- "post_message": "\n",
- }
- } # tell LiteLLM how you want to map the openai messages to this model
-)
-
-def test_vllm_custom_model():
- model = "vllm/togethercomputer/LLaMA-2-7B-32K"
- response = completion(model=model, messages=messages)
- print(response['choices'][0]['message']['content'])
- return response
-
-test_vllm_custom_model()
-```
-
-[Implementation Code](https://github.com/BerriAI/litellm/blob/6b3cb1898382f2e4e80fd372308ea232868c78d1/litellm/utils.py#L1414)
-
diff --git a/docs/my-website/docs/providers/volcano.md b/docs/my-website/docs/providers/volcano.md
deleted file mode 100644
index 1742a43d8..000000000
--- a/docs/my-website/docs/providers/volcano.md
+++ /dev/null
@@ -1,98 +0,0 @@
-# Volcano Engine (Volcengine)
-https://www.volcengine.com/docs/82379/1263482
-
-:::tip
-
-**We support ALL Volcengine models, just set `model=volcengine/<your-model-name>` as a prefix when sending litellm requests**
-
-:::
-
-## API Key
-```python
-# env variable
-os.environ['VOLCENGINE_API_KEY']
-```
-
-## Sample Usage
-```python
-from litellm import completion
-import os
-
-os.environ['VOLCENGINE_API_KEY'] = ""
-response = completion(
- model="volcengine/",
- messages=[
- {
- "role": "user",
- "content": "What's the weather like in Boston today in Fahrenheit?",
- }
- ],
- temperature=0.2, # optional
- top_p=0.9, # optional
- frequency_penalty=0.1, # optional
- presence_penalty=0.1, # optional
- max_tokens=10, # optional
- stop=["\n\n"], # optional
-)
-print(response)
-```
-
-## Sample Usage - Streaming
-```python
-from litellm import completion
-import os
-
-os.environ['VOLCENGINE_API_KEY'] = ""
-response = completion(
- model="volcengine/",
- messages=[
- {
- "role": "user",
- "content": "What's the weather like in Boston today in Fahrenheit?",
- }
- ],
- stream=True,
- temperature=0.2, # optional
- top_p=0.9, # optional
- frequency_penalty=0.1, # optional
- presence_penalty=0.1, # optional
- max_tokens=10, # optional
- stop=["\n\n"], # optional
-)
-
-for chunk in response:
- print(chunk)
-```
-
-
-## Supported Models - 💥 ALL Volcengine Models Supported!
-We support ALL `volcengine` models, just set `volcengine/<your-model-name>` as a prefix when sending completion requests
-
-## Sample Usage - LiteLLM Proxy
-
-### Config.yaml setting
-
-```yaml
-model_list:
- - model_name: volcengine-model
- litellm_params:
- model: volcengine/<your-model-name>
- api_key: os.environ/VOLCENGINE_API_KEY
-```
-
-### Send Request
-
-```shell
-curl --location 'http://localhost:4000/chat/completions' \
- --header 'Authorization: Bearer sk-1234' \
- --header 'Content-Type: application/json' \
- --data '{
- "model": "volcengine-model",
- "messages": [
- {
- "role": "user",
- "content": "here is my api key. openai_api_key=sk-1234"
- }
- ]
-}'
-```
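-
-Since the LiteLLM proxy exposes an OpenAI-compatible `/chat/completions` route, you can also send the same request with the OpenAI Python SDK (a sketch; `sk-1234` and the base url are the example values used above):
-
-```python
-import openai
-
-client = openai.OpenAI(
-    api_key="sk-1234",                # your litellm proxy key
-    base_url="http://localhost:4000"  # litellm proxy base url
-)
-
-response = client.chat.completions.create(
-    model="volcengine-model",  # the model_name from config.yaml
-    messages=[{"role": "user", "content": "Hello from litellm"}],
-)
-print(response)
-```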
\ No newline at end of file
diff --git a/docs/my-website/docs/providers/voyage.md b/docs/my-website/docs/providers/voyage.md
deleted file mode 100644
index a56a1408e..000000000
--- a/docs/my-website/docs/providers/voyage.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Voyage AI
-https://docs.voyageai.com/embeddings/
-
-## API Key
-```python
-# env variable
-os.environ['VOYAGE_API_KEY']
-```
-
-## Sample Usage - Embedding
-```python
-from litellm import embedding
-import os
-
-os.environ['VOYAGE_API_KEY'] = ""
-response = embedding(
- model="voyage/voyage-01",
- input=["good morning from litellm"],
-)
-print(response)
-```
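-
-The response follows the OpenAI embeddings format, so (a quick sketch) the first vector can be pulled out like this:
-
-```python
-# grab the embedding vector for the first input string
-vector = response.data[0]["embedding"]
-print(len(vector))  # dimensionality of the voyage embedding
-```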
-
-## Supported Models
-All models listed here https://docs.voyageai.com/embeddings/#models-and-specifics are supported
-
-| Model Name | Function Call |
-|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| voyage-2 | `embedding(model="voyage/voyage-2", input)` |
-| voyage-large-2 | `embedding(model="voyage/voyage-large-2", input)` |
-| voyage-law-2 | `embedding(model="voyage/voyage-law-2", input)` |
-| voyage-code-2 | `embedding(model="voyage/voyage-code-2", input)` |
-| voyage-lite-02-instruct | `embedding(model="voyage/voyage-lite-02-instruct", input)` |
-| voyage-01 | `embedding(model="voyage/voyage-01", input)` |
-| voyage-lite-01 | `embedding(model="voyage/voyage-lite-01", input)` |
-| voyage-lite-01-instruct | `embedding(model="voyage/voyage-lite-01-instruct", input)` |
\ No newline at end of file
diff --git a/docs/my-website/docs/providers/watsonx.md b/docs/my-website/docs/providers/watsonx.md
deleted file mode 100644
index 7a42a54ed..000000000
--- a/docs/my-website/docs/providers/watsonx.md
+++ /dev/null
@@ -1,284 +0,0 @@
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# IBM watsonx.ai
-
-LiteLLM supports all IBM [watsonx.ai](https://watsonx.ai/) foundational models and embeddings.
-
-## Environment Variables
-```python
-os.environ["WATSONX_URL"] = "" # (required) Base URL of your WatsonX instance
-# (required) either one of the following:
-os.environ["WATSONX_APIKEY"] = "" # IBM cloud API key
-os.environ["WATSONX_TOKEN"] = "" # IAM auth token
-# optional - can also be passed as params to completion() or embedding()
-os.environ["WATSONX_PROJECT_ID"] = "" # Project ID of your WatsonX instance
-os.environ["WATSONX_DEPLOYMENT_SPACE_ID"] = "" # ID of your deployment space to use deployed models
-```
-
-See [here](https://cloud.ibm.com/apidocs/watsonx-ai#api-authentication) for more information on how to get an access token to authenticate to watsonx.ai.
-
-## Usage
-
-
-
-
-
-```python
-import os
-from litellm import completion
-
-os.environ["WATSONX_URL"] = ""
-os.environ["WATSONX_APIKEY"] = ""
-
-response = completion(
- model="watsonx/ibm/granite-13b-chat-v2",
- messages=[{ "content": "what is your favorite colour?","role": "user"}],
- project_id="" # or pass with os.environ["WATSONX_PROJECT_ID"]
-)
-
-response = completion(
- model="watsonx/meta-llama/llama-3-8b-instruct",
- messages=[{ "content": "what is your favorite colour?","role": "user"}],
- project_id=""
-)
-```
-
-## Usage - Streaming
-```python
-import os
-from litellm import completion
-
-os.environ["WATSONX_URL"] = ""
-os.environ["WATSONX_APIKEY"] = ""
-os.environ["WATSONX_PROJECT_ID"] = ""
-
-response = completion(
- model="watsonx/ibm/granite-13b-chat-v2",
- messages=[{ "content": "what is your favorite colour?","role": "user"}],
- stream=True
-)
-for chunk in response:
- print(chunk)
-```
-
-#### Example Streaming Output Chunk
-```json
-{
- "choices": [
- {
- "finish_reason": null,
- "index": 0,
- "delta": {
- "content": "I don't have a favorite color, but I do like the color blue. What's your favorite color?"
- }
- }
- ],
- "created": null,
- "model": "watsonx/ibm/granite-13b-chat-v2",
- "usage": {
- "prompt_tokens": null,
- "completion_tokens": null,
- "total_tokens": null
- }
-}
-```
-
-## Usage - Models in deployment spaces
-
-Models that have been deployed to a deployment space (e.g. tuned models) can be called using the `deployment/<deployment_id>` format (where `<deployment_id>` is the ID of the deployed model in your deployment space).
-
-The ID of your deployment space must also be set in the environment variable `WATSONX_DEPLOYMENT_SPACE_ID` or passed to the function as `space_id="<deployment_space_id>"`.
-
-```python
-import litellm
-response = litellm.completion(
- model="watsonx/deployment/",
- messages=[{"content": "Hello, how are you?", "role": "user"}],
- space_id=""
-)
-```
-
-## Usage - Embeddings
-
-LiteLLM also supports making requests to IBM watsonx.ai embedding models. The credential needed for this is the same as for completion.
-
-```python
-from litellm import embedding
-
-response = embedding(
- model="watsonx/ibm/slate-30m-english-rtrvr",
- input=["What is the capital of France?"],
- project_id=""
-)
-print(response)
-# EmbeddingResponse(model='ibm/slate-30m-english-rtrvr', data=[{'object': 'embedding', 'index': 0, 'embedding': [-0.037463713, -0.02141933, -0.02851813, 0.015519324, ..., -0.0021367231, -0.01704561, -0.001425816, 0.0035238306]}], object='list', usage=Usage(prompt_tokens=8, total_tokens=8))
-```
-
-## OpenAI Proxy Usage
-
-Here's how to call IBM watsonx.ai with the LiteLLM Proxy Server
-
-### 1. Save keys in your environment
-
-```bash
-export WATSONX_URL=""
-export WATSONX_APIKEY=""
-export WATSONX_PROJECT_ID=""
-```
-
-### 2. Start the proxy
-
-
-
-
-```bash
-$ litellm --model watsonx/meta-llama/llama-3-8b-instruct
-
-# Server running on http://0.0.0.0:4000
-```
-
-
-
-
-```yaml
-model_list:
- - model_name: llama-3-8b
- litellm_params:
- # all params accepted by litellm.completion()
- model: watsonx/meta-llama/llama-3-8b-instruct
- api_key: "os.environ/WATSONX_API_KEY" # does os.getenv("WATSONX_API_KEY")
-```
-
-
-
-### 3. Test it
-
-
-
-
-
-```shell
-curl --location 'http://0.0.0.0:4000/chat/completions' \
---header 'Content-Type: application/json' \
---data ' {
- "model": "llama-3-8b",
- "messages": [
- {
- "role": "user",
- "content": "what is your favorite colour?"
- }
- ]
- }
-'
-```
-
-
-
-```python
-import openai
-client = openai.OpenAI(
- api_key="anything",
- base_url="http://0.0.0.0:4000"
-)
-
-# request sent to model set on litellm proxy, `litellm --model`
-response = client.chat.completions.create(model="llama-3-8b", messages=[
- {
- "role": "user",
- "content": "what is your favorite colour?"
- }
-])
-
-print(response)
-
-```
-
-
-
-```python
-from langchain.chat_models import ChatOpenAI
-from langchain.prompts.chat import (
- ChatPromptTemplate,
- HumanMessagePromptTemplate,
- SystemMessagePromptTemplate,
-)
-from langchain.schema import HumanMessage, SystemMessage
-
-chat = ChatOpenAI(
- openai_api_base="http://0.0.0.0:4000", # set openai_api_base to the LiteLLM Proxy
- model = "llama-3-8b",
- temperature=0.1
-)
-
-messages = [
- SystemMessage(
- content="You are a helpful assistant that im using to make a test request to."
- ),
- HumanMessage(
- content="test from litellm. tell me why it's amazing in 1 sentence"
- ),
-]
-response = chat(messages)
-
-print(response)
-```
-
-
-
-
-## Authentication
-
-### Passing credentials as parameters
-
-You can also pass the credentials as parameters to the completion and embedding functions.
-
-```python
-import os
-from litellm import completion
-
-response = completion(
- model="watsonx/ibm/granite-13b-chat-v2",
- messages=[{ "content": "What is your favorite color?","role": "user"}],
- url="",
- api_key="",
- project_id=""
-)
-```
-
-
-## Supported IBM watsonx.ai Models
-
-Here are some examples of models available in IBM watsonx.ai that you can use with LiteLLM:
-
-| Model Name | Command |
-|------------------------------------|------------------------------------------------------------------------------------------|
-| Flan T5 XXL | `completion(model="watsonx/google/flan-t5-xxl", messages=messages)` |
-| Flan Ul2 | `completion(model="watsonx/google/flan-ul2", messages=messages)` |
-| Mt0 XXL | `completion(model="watsonx/bigscience/mt0-xxl", messages=messages)` |
-| Gpt Neox | `completion(model="watsonx/eleutherai/gpt-neox-20b", messages=messages)` |
-| Mpt 7B Instruct2 | `completion(model="watsonx/ibm/mpt-7b-instruct2", messages=messages)` |
-| Starcoder | `completion(model="watsonx/bigcode/starcoder", messages=messages)` |
-| Llama 2 70B Chat | `completion(model="watsonx/meta-llama/llama-2-70b-chat", messages=messages)` |
-| Llama 2 13B Chat | `completion(model="watsonx/meta-llama/llama-2-13b-chat", messages=messages)` |
-| Granite 13B Instruct | `completion(model="watsonx/ibm/granite-13b-instruct-v1", messages=messages)` |
-| Granite 13B Chat | `completion(model="watsonx/ibm/granite-13b-chat-v1", messages=messages)` |
-| Flan T5 XL | `completion(model="watsonx/google/flan-t5-xl", messages=messages)` |
-| Granite 13B Chat V2 | `completion(model="watsonx/ibm/granite-13b-chat-v2", messages=messages)` |
-| Granite 13B Instruct V2 | `completion(model="watsonx/ibm/granite-13b-instruct-v2", messages=messages)` |
-| Elyza Japanese Llama 2 7B Instruct | `completion(model="watsonx/elyza/elyza-japanese-llama-2-7b-instruct", messages=messages)` |
-| Mixtral 8X7B Instruct V01 Q | `completion(model="watsonx/ibm-mistralai/mixtral-8x7b-instruct-v01-q", messages=messages)` |
-
-
-For a list of all available models in watsonx.ai, see [here](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=wx&locale=en&audience=wdp).
-
-
-## Supported IBM watsonx.ai Embedding Models
-
-| Model Name | Function Call |
-|------------|------------------------------------------------------------------------|
-| Slate 30m | `embedding(model="watsonx/ibm/slate-30m-english-rtrvr", input=input)` |
-| Slate 125m | `embedding(model="watsonx/ibm/slate-125m-english-rtrvr", input=input)` |
-
-
-For a list of all available embedding models in watsonx.ai, see [here](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models-embed.html?context=wx).
\ No newline at end of file
diff --git a/docs/my-website/docs/providers/xai.md b/docs/my-website/docs/providers/xai.md
deleted file mode 100644
index 131c02b3d..000000000
--- a/docs/my-website/docs/providers/xai.md
+++ /dev/null
@@ -1,146 +0,0 @@
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# XAI
-
-https://docs.x.ai/docs
-
-:::tip
-
-**We support ALL XAI models, just set `model=xai/<your-model-name>` as a prefix when sending litellm requests**
-
-:::
-
-## API Key
-```python
-# env variable
-os.environ['XAI_API_KEY']
-```
-
-## Sample Usage
-```python
-from litellm import completion
-import os
-
-os.environ['XAI_API_KEY'] = ""
-response = completion(
- model="xai/grok-beta",
- messages=[
- {
- "role": "user",
- "content": "What's the weather like in Boston today in Fahrenheit?",
- }
- ],
- max_tokens=10,
- response_format={ "type": "json_object" },
- seed=123,
- stop=["\n\n"],
- temperature=0.2,
- top_p=0.9,
- tool_choice="auto",
- tools=[],
- user="user",
-)
-print(response)
-```
-
-## Sample Usage - Streaming
-```python
-from litellm import completion
-import os
-
-os.environ['XAI_API_KEY'] = ""
-response = completion(
- model="xai/grok-beta",
- messages=[
- {
- "role": "user",
- "content": "What's the weather like in Boston today in Fahrenheit?",
- }
- ],
- stream=True,
- max_tokens=10,
- response_format={ "type": "json_object" },
- seed=123,
- stop=["\n\n"],
- temperature=0.2,
- top_p=0.9,
- tool_choice="auto",
- tools=[],
- user="user",
-)
-
-for chunk in response:
- print(chunk)
-```
-
-
-## Usage with LiteLLM Proxy Server
-
-Here's how to call an XAI model with the LiteLLM Proxy Server
-
-1. Modify the config.yaml
-
- ```yaml
- model_list:
- - model_name: my-model
- litellm_params:
- model: xai/<your-model-name> # add xai/ prefix to route as XAI provider
- api_key: api-key # your XAI API key
- ```
-
-
-2. Start the proxy
-
- ```bash
- $ litellm --config /path/to/config.yaml
- ```
-
-3. Send Request to LiteLLM Proxy Server
-
-
-
-
-
- ```python
- import openai
- client = openai.OpenAI(
- api_key="sk-1234", # pass litellm proxy key, if you're using virtual keys
- base_url="http://0.0.0.0:4000" # litellm-proxy-base url
- )
-
- response = client.chat.completions.create(
- model="my-model",
- messages = [
- {
- "role": "user",
- "content": "what llm are you"
- }
- ],
- )
-
- print(response)
- ```
-
-
-
-
- ```shell
- curl --location 'http://0.0.0.0:4000/chat/completions' \
- --header 'Authorization: Bearer sk-1234' \
- --header 'Content-Type: application/json' \
- --data '{
- "model": "my-model",
- "messages": [
- {
- "role": "user",
- "content": "what llm are you"
- }
- ]
- }'
- ```
-
-
-
-
-
diff --git a/docs/my-website/docs/providers/xinference.md b/docs/my-website/docs/providers/xinference.md
deleted file mode 100644
index 3686c0209..000000000
--- a/docs/my-website/docs/providers/xinference.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# Xinference [Xorbits Inference]
-https://inference.readthedocs.io/en/latest/index.html
-
-## API Base, Key
-```python
-# env variable
-os.environ['XINFERENCE_API_BASE'] = "http://127.0.0.1:9997/v1"
-os.environ['XINFERENCE_API_KEY'] = "anything" #[optional] no api key required
-```
-
-## Sample Usage - Embedding
-```python
-from litellm import embedding
-import os
-
-os.environ['XINFERENCE_API_BASE'] = "http://127.0.0.1:9997/v1"
-response = embedding(
- model="xinference/bge-base-en",
- input=["good morning from litellm"],
-)
-print(response)
-```
-
-## Sample Usage - `api_base` param
-```python
-from litellm import embedding
-import os
-
-response = embedding(
- model="xinference/bge-base-en",
- api_base="http://127.0.0.1:9997/v1",
- input=["good morning from litellm"],
-)
-print(response)
-```
-
-## Supported Models
-All models listed here https://inference.readthedocs.io/en/latest/models/builtin/embedding/index.html are supported
-
-| Model Name | Function Call |
-|-----------------------------|--------------------------------------------------------------------|
-| bge-base-en | `embedding(model="xinference/bge-base-en", input)` |
-| bge-base-en-v1.5 | `embedding(model="xinference/bge-base-en-v1.5", input)` |
-| bge-base-zh | `embedding(model="xinference/bge-base-zh", input)` |
-| bge-base-zh-v1.5 | `embedding(model="xinference/bge-base-zh-v1.5", input)` |
-| bge-large-en | `embedding(model="xinference/bge-large-en", input)` |
-| bge-large-en-v1.5 | `embedding(model="xinference/bge-large-en-v1.5", input)` |
-| bge-large-zh | `embedding(model="xinference/bge-large-zh", input)` |
-| bge-large-zh-noinstruct | `embedding(model="xinference/bge-large-zh-noinstruct", input)` |
-| bge-large-zh-v1.5 | `embedding(model="xinference/bge-large-zh-v1.5", input)` |
-| bge-small-en-v1.5 | `embedding(model="xinference/bge-small-en-v1.5", input)` |
-| bge-small-zh | `embedding(model="xinference/bge-small-zh", input)` |
-| bge-small-zh-v1.5 | `embedding(model="xinference/bge-small-zh-v1.5", input)` |
-| e5-large-v2 | `embedding(model="xinference/e5-large-v2", input)` |
-| gte-base | `embedding(model="xinference/gte-base", input)` |
-| gte-large | `embedding(model="xinference/gte-large", input)` |
-| jina-embeddings-v2-base-en | `embedding(model="xinference/jina-embeddings-v2-base-en", input)` |
-| jina-embeddings-v2-small-en | `embedding(model="xinference/jina-embeddings-v2-small-en", input)` |
-| multilingual-e5-large | `embedding(model="xinference/multilingual-e5-large", input)` |
-
-
-
diff --git a/docs/my-website/docs/proxy/access_control.md b/docs/my-website/docs/proxy/access_control.md
deleted file mode 100644
index 3d335380f..000000000
--- a/docs/my-website/docs/proxy/access_control.md
+++ /dev/null
@@ -1,145 +0,0 @@
-# Role-based Access Controls (RBAC)
-
-Role-based access control (RBAC) is based on Organizations, Teams and Internal User Roles
-
-- `Organizations` - the top-level entities; each Organization contains Teams
-- `Teams` - a Team is a collection of multiple `Internal Users`
-- `Internal Users` - users that can create keys, make LLM API calls, and view usage on LiteLLM
-- `Roles` - define the permissions of an `Internal User`
-- `Virtual Keys` - keys used for authentication to the LiteLLM API; each key is tied to an `Internal User` and a `Team`
-
-## Roles
-
-**Admin Roles**
- - `proxy_admin`: admin over the platform
- - `proxy_admin_viewer`: can login, view all keys, view all spend. **Cannot** create keys/delete keys/add new users
-
-**Organization Roles**
- - `org_admin`: admin over the organization. Can create teams and users within their organization
-
-**Internal User Roles**
- - `internal_user`: can login, view/create/delete their own keys, view their spend. **Cannot** add new users.
- - `internal_user_viewer`: can login, view their own keys, view their own spend. **Cannot** create/delete keys, add new users.
-
-
-## Onboarding Organizations
-
-### 1. Creating a new Organization
-
-Any user with role=`proxy_admin` can create a new organization
-
-**Usage**
-
-[**API Reference for /organization/new**](https://litellm-api.up.railway.app/#/organization%20management/new_organization_organization_new_post)
-
-```shell
-curl --location 'http://0.0.0.0:4000/organization/new' \
- --header 'Authorization: Bearer sk-1234' \
- --header 'Content-Type: application/json' \
- --data '{
- "organization_alias": "marketing_department",
- "models": ["gpt-4"],
- "max_budget": 20
- }'
-```
-
-Expected Response
-
-```json
-{
- "organization_id": "ad15e8ca-12ae-46f4-8659-d02debef1b23",
- "organization_alias": "marketing_department",
- "budget_id": "98754244-3a9c-4b31-b2e9-c63edc8fd7eb",
- "metadata": {},
- "models": [
- "gpt-4"
- ],
- "created_by": "109010464461339474872",
- "updated_by": "109010464461339474872",
- "created_at": "2024-10-08T18:30:24.637000Z",
- "updated_at": "2024-10-08T18:30:24.637000Z"
-}
-```
-
-
-### 2. Adding an `org_admin` to an Organization
-
-Create a user (ishaan@berri.ai) as an `org_admin` for the `marketing_department` Organization (from [step 1](#1-creating-a-new-organization))
-
-Users with the following roles can call `/organization/member_add`
-- `proxy_admin`
-- `org_admin` only within their own organization
-
-```shell
-curl -X POST 'http://0.0.0.0:4000/organization/member_add' \
- -H 'Authorization: Bearer sk-1234' \
- -H 'Content-Type: application/json' \
- -d '{"organization_id": "ad15e8ca-12ae-46f4-8659-d02debef1b23", "member": {"role": "org_admin", "user_id": "ishaan@berri.ai"}}'
-```
-
-Now a user with user_id = `ishaan@berri.ai` and role = `org_admin` has been created in the `marketing_department` Organization
-
-Create a Virtual Key for user_id = `ishaan@berri.ai`. The User can then use the Virtual key for their Organization Admin Operations
-
-```shell
-curl --location 'http://0.0.0.0:4000/key/generate' \
- --header 'Authorization: Bearer sk-1234' \
- --header 'Content-Type: application/json' \
- --data '{
- "user_id": "ishaan@berri.ai"
- }'
-```
-
-Expected Response
-
-```json
-{
- "models": [],
- "user_id": "ishaan@berri.ai",
- "key": "sk-7shH8TGMAofR4zQpAAo6kQ",
- "key_name": "sk-...o6kQ",
-}
-```
-
-### 3. `Organization Admin` - Create a Team
-
-The organization admin will use the virtual key created in [step 2](#2-adding-an-org_admin-to-an-organization) to create a `Team` within the `marketing_department` Organization
-
-```shell
-curl --location 'http://0.0.0.0:4000/team/new' \
- --header 'Authorization: Bearer sk-7shH8TGMAofR4zQpAAo6kQ' \
- --header 'Content-Type: application/json' \
- --data '{
- "team_alias": "engineering_team",
- "organization_id": "ad15e8ca-12ae-46f4-8659-d02debef1b23"
- }'
-```
-
-This will create the team `engineering_team` within the `marketing_department` Organization
-
-Expected Response
-
-```json
-{
- "team_alias": "engineering_team",
- "team_id": "01044ee8-441b-45f4-be7d-c70e002722d8",
- "organization_id": "ad15e8ca-12ae-46f4-8659-d02debef1b23",
-}
-```
-
-
-### 4. `Organization Admin` - Add an `Internal User`
-
-The organization admin will use the virtual key created in [step 2](#2-adding-an-org_admin-to-an-organization) to add an Internal User to the `engineering_team` Team.
-
-- We will assign role=`internal_user` so the user can create Virtual Keys for themselves
-- `team_id` is from [step 3](#3-organization-admin---create-a-team)
-
-```shell
-curl -X POST 'http://0.0.0.0:4000/team/member_add' \
- -H 'Authorization: Bearer sk-7shH8TGMAofR4zQpAAo6kQ' \
- -H 'Content-Type: application/json' \
- -d '{"team_id": "01044ee8-441b-45f4-be7d-c70e002722d8", "member": {"role": "internal_user", "user_id": "krrish@berri.ai"}}'
-
-```
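-
-With that, `krrish@berri.ai` can be issued a Virtual Key through the same `/key/generate` endpoint used in step 2. A minimal sketch using the Python `requests` library (endpoint and fields are the ones shown above; whether the user calls it themselves or an admin does it for them depends on your setup):
-
-```python
-import requests
-
-# generate a Virtual Key for the new internal user (same endpoint as step 2)
-resp = requests.post(
-    "http://0.0.0.0:4000/key/generate",
-    headers={
-        "Authorization": "Bearer sk-1234",  # proxy admin key (or the org admin's Virtual Key)
-        "Content-Type": "application/json",
-    },
-    json={"user_id": "krrish@berri.ai"},
-)
-print(resp.json()["key"])
-```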
-
diff --git a/docs/my-website/docs/proxy/alerting.md b/docs/my-website/docs/proxy/alerting.md
deleted file mode 100644
index a5519157c..000000000
--- a/docs/my-website/docs/proxy/alerting.md
+++ /dev/null
@@ -1,459 +0,0 @@
-import Image from '@theme/IdealImage';
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Alerting / Webhooks
-
-Get alerts for:
-
-- Hanging LLM api calls
-- Slow LLM api calls
-- Failed LLM api calls
-- Budget Tracking per key/user
-- Spend Reports - Weekly & Monthly spend per Team, Tag
-- Failed db read/writes
-- Model outage alerting
-- Daily Reports:
- - **LLM** Top 5 slowest deployments
- - **LLM** Top 5 deployments with most failed requests
-- **Spend** Weekly & Monthly spend per Team, Tag
-
-
-Works across:
-- [Slack](#quick-start)
-- [Discord](#advanced---using-discord-webhooks)
-- [Microsoft Teams](#advanced---using-ms-teams-webhooks)
-
-## Quick Start
-
-Set up a slack alert channel to receive alerts from proxy.
-
-### Step 1: Add a Slack Webhook URL to env
-
-Get a slack webhook url from https://api.slack.com/messaging/webhooks
-
-You can also use Discord Webhooks, see [here](#using-discord-webhooks)
-
-
-Set `SLACK_WEBHOOK_URL` in your proxy env to enable Slack alerts.
-
-```bash
-export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/<>/<>/<>"
-```
-
-### Step 2: Setup Proxy
-
-```yaml
-general_settings:
- alerting: ["slack"]
- alerting_threshold: 300 # sends alerts if requests hang for 5min+ and responses take 5min+
- spend_report_frequency: "1d" # [Optional] set as 1d, 2d, 30d, etc. Specify how often you want a Spend Report to be sent
-```
-
-Start proxy
-```bash
-$ litellm --config /path/to/config.yaml
-```
-
-
-### Step 3: Test it!
-
-
-```bash
-curl -X GET 'http://0.0.0.0:4000/health/services?service=slack' \
--H 'Authorization: Bearer sk-1234'
-```
-
-## Advanced
-
-### Redacting Messages from Alerts
-
-By default, alerts show the `messages/input` passed to the LLM. If you want to redact this from Slack alerting, set the following on your config:
-
-
-```yaml
-general_settings:
- alerting: ["slack"]
- alert_types: ["spend_reports"]
-
-litellm_settings:
- redact_messages_in_exceptions: True
-```
-
-
-### Add Metadata to alerts
-
-Add alerting metadata to proxy calls for debugging.
-
-```python
-import openai
-client = openai.OpenAI(
- api_key="anything",
- base_url="http://0.0.0.0:4000"
-)
-
-# request sent to model set on litellm proxy, `litellm --model`
-response = client.chat.completions.create(
- model="gpt-3.5-turbo",
- messages = [],
- extra_body={
- "metadata": {
- "alerting_metadata": {
- "hello": "world"
- }
- }
- }
-)
-```
-
-**Expected Response**
-
-
-
-### Opting into specific alert types
-
-Set `alert_types` if you want to opt into only specific alert types. When `alert_types` is not set, all default alert types are enabled.
-
-👉 [**See all alert types here**](#all-possible-alert-types)
-
-```yaml
-general_settings:
- alerting: ["slack"]
- alert_types: [
- "llm_exceptions",
- "llm_too_slow",
- "llm_requests_hanging",
- "budget_alerts",
- "spend_reports",
- "db_exceptions",
- "daily_reports",
- "cooldown_deployment",
- "new_model_added",
- ]
-```
-
-### Set specific slack channels per alert type
-
-Use this if you want to set specific channels per alert type
-
-**This allows you to do the following**
-```
-llm_exceptions -> go to slack channel #llm-exceptions
-spend_reports -> go to slack channel #llm-spend-reports
-```
-
-Set `alert_to_webhook_url` on your config.yaml
-
-
-
-
-
-```yaml
-model_list:
- - model_name: gpt-4
- litellm_params:
- model: openai/fake
- api_key: fake-key
- api_base: https://exampleopenaiendpoint-production.up.railway.app/
-
-general_settings:
- master_key: sk-1234
- alerting: ["slack"]
- alerting_threshold: 0.0001 # (Seconds) set an artificially low threshold for testing alerting
- alert_to_webhook_url: {
- "llm_exceptions": "https://hooks.slack.com/services/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",
- "llm_too_slow": "https://hooks.slack.com/services/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",
- "llm_requests_hanging": "https://hooks.slack.com/services/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",
- "budget_alerts": "https://hooks.slack.com/services/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",
- "db_exceptions": "https://hooks.slack.com/services/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",
- "daily_reports": "https://hooks.slack.com/services/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",
- "spend_reports": "https://hooks.slack.com/services/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",
- "cooldown_deployment": "https://hooks.slack.com/services/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",
- "new_model_added": "https://hooks.slack.com/services/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",
- "outage_alerts": "https://hooks.slack.com/services/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",
- }
-
-litellm_settings:
- success_callback: ["langfuse"]
-```
-
-
-
-
-Provide multiple slack channels for a given alert type
-
-```yaml
-model_list:
- - model_name: gpt-4
- litellm_params:
- model: openai/fake
- api_key: fake-key
- api_base: https://exampleopenaiendpoint-production.up.railway.app/
-
-general_settings:
- master_key: sk-1234
- alerting: ["slack"]
- alerting_threshold: 0.0001 # (Seconds) set an artificially low threshold for testing alerting
- alert_to_webhook_url: {
- "llm_exceptions": ["os.environ/SLACK_WEBHOOK_URL", "os.environ/SLACK_WEBHOOK_URL_2"],
- "llm_too_slow": ["https://webhook.site/7843a980-a494-4967-80fb-d502dbc16886", "https://webhook.site/28cfb179-f4fb-4408-8129-729ff55cf213"],
- "llm_requests_hanging": ["os.environ/SLACK_WEBHOOK_URL_5", "os.environ/SLACK_WEBHOOK_URL_6"],
- "budget_alerts": ["os.environ/SLACK_WEBHOOK_URL_7", "os.environ/SLACK_WEBHOOK_URL_8"],
- "db_exceptions": ["os.environ/SLACK_WEBHOOK_URL_9", "os.environ/SLACK_WEBHOOK_URL_10"],
- "daily_reports": ["os.environ/SLACK_WEBHOOK_URL_11", "os.environ/SLACK_WEBHOOK_URL_12"],
- "spend_reports": ["os.environ/SLACK_WEBHOOK_URL_13", "os.environ/SLACK_WEBHOOK_URL_14"],
- "cooldown_deployment": ["os.environ/SLACK_WEBHOOK_URL_15", "os.environ/SLACK_WEBHOOK_URL_16"],
- "new_model_added": ["os.environ/SLACK_WEBHOOK_URL_17", "os.environ/SLACK_WEBHOOK_URL_18"],
- "outage_alerts": ["os.environ/SLACK_WEBHOOK_URL_19", "os.environ/SLACK_WEBHOOK_URL_20"],
- }
-
-litellm_settings:
- success_callback: ["langfuse"]
-```
-
-
-
-
-
-Test it - send a valid LLM request - expect to see an `llm_too_slow` alert in its own Slack channel
-
-```shell
-curl -i http://localhost:4000/v1/chat/completions \
- -H "Content-Type: application/json" \
- -H "Authorization: Bearer sk-1234" \
- -d '{
- "model": "gpt-4",
- "messages": [
- {"role": "user", "content": "Hello, Claude gm!"}
- ]
-}'
-```
-
-
-### Using MS Teams Webhooks
-
-MS Teams provides a Slack-compatible webhook URL that you can use for alerting.
-
-##### Quick Start
-
-1. [Get a webhook url](https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook?tabs=newteams%2Cdotnet#create-an-incoming-webhook) for your Microsoft Teams channel
-
-2. Add it to your .env
-
-```bash
-SLACK_WEBHOOK_URL="https://berriai.webhook.office.com/webhookb2/...6901/IncomingWebhook/b55fa0c2a48647be8e6effedcd540266/e04b1092-4a3e-44a2-ab6b-29a0a4854d1d"
-```
-
-3. Add it to your litellm config
-
-```yaml
-model_list:
- model_name: "azure-model"
- litellm_params:
- model: "azure/gpt-35-turbo"
- api_key: "my-bad-key" # 👈 bad key
-
-general_settings:
- alerting: ["slack"]
- alerting_threshold: 300 # sends alerts if requests hang for 5min+ and responses take 5min+
-```
-
-4. Run health check!
-
-Call the proxy `/health/services` endpoint to test if your alerting connection is correctly setup.
-
-```bash
-curl --location 'http://0.0.0.0:4000/health/services?service=slack' \
---header 'Authorization: Bearer sk-1234'
-```
-
-
-**Expected Response**
-
-
-
-### Using Discord Webhooks
-
-Discord provides a Slack-compatible webhook URL that you can use for alerting.
-
-##### Quick Start
-
-1. Get a webhook url for your discord channel
-
-2. Append `/slack` to your discord webhook - it should look like
-
-```
-"https://discord.com/api/webhooks/1240030362193760286/cTLWt5ATn1gKmcy_982rl5xmYHsrM1IWJdmCL1AyOmU9JdQXazrp8L1_PYgUtgxj8x4f/slack"
-```
-
-3. Add it to your litellm config
-
-```yaml
-model_list:
- model_name: "azure-model"
- litellm_params:
- model: "azure/gpt-35-turbo"
- api_key: "my-bad-key" # 👈 bad key
-
-general_settings:
- alerting: ["slack"]
- alerting_threshold: 300 # sends alerts if requests hang for 5min+ and responses take 5min+
-
-environment_variables:
- SLACK_WEBHOOK_URL: "https://discord.com/api/webhooks/1240030362193760286/cTLWt5ATn1gKmcy_982rl5xmYHsrM1IWJdmCL1AyOmU9JdQXazrp8L1_PYgUtgxj8x4f/slack"
-```
-
-
-## [BETA] Webhooks for Budget Alerts
-
-**Note**: This is a beta feature, so the spec might change.
-
-Set a webhook to get notified for budget alerts.
-
-1. Setup config.yaml
-
-Add url to your environment, for testing you can use a link from [here](https://webhook.site/)
-
-```bash
-export WEBHOOK_URL="https://webhook.site/6ab090e8-c55f-4a23-b075-3209f5c57906"
-```
-
-Add 'webhook' to config.yaml
-```yaml
-general_settings:
- alerting: ["webhook"] # 👈 KEY CHANGE
-```
-
-2. Start proxy
-
-```bash
-litellm --config /path/to/config.yaml
-
-# RUNNING on http://0.0.0.0:4000
-```
-
-3. Test it!
-
-```bash
-curl -X GET --location 'http://0.0.0.0:4000/health/services?service=webhook' \
---header 'Authorization: Bearer sk-1234'
-```
-
-**Expected Response**
-
-```bash
-{
- "spend": 1, # the spend for the 'event_group'
- "max_budget": 0, # the 'max_budget' set for the 'event_group'
- "token": "88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b",
- "user_id": "default_user_id",
- "team_id": null,
- "user_email": null,
- "key_alias": null,
- "projected_exceeded_data": null,
- "projected_spend": null,
- "event": "budget_crossed", # Literal["budget_crossed", "threshold_crossed", "projected_limit_exceeded"]
- "event_group": "user",
- "event_message": "User Budget: Budget Crossed"
-}
-```
-
-### API Spec for Webhook Event
-
-- `spend` *float*: The current spend amount for the 'event_group'.
-- `max_budget` *float or null*: The maximum allowed budget for the 'event_group'. null if not set.
-- `token` *str*: A hashed value of the key, used for authentication or identification purposes.
-- `customer_id` *str or null*: The ID of the customer associated with the event (optional).
-- `internal_user_id` *str or null*: The ID of the internal user associated with the event (optional).
-- `team_id` *str or null*: The ID of the team associated with the event (optional).
-- `user_email` *str or null*: The email of the internal user associated with the event (optional).
-- `key_alias` *str or null*: An alias for the key associated with the event (optional).
-- `projected_exceeded_date` *str or null*: The date when the budget is projected to be exceeded, returned when 'soft_budget' is set for key (optional).
-- `projected_spend` *float or null*: The projected spend amount, returned when 'soft_budget' is set for key (optional).
-- `event` *Literal["budget_crossed", "threshold_crossed", "projected_limit_exceeded"]*: The type of event that triggered the webhook. Possible values are:
- * "spend_tracked": Emitted whenver spend is tracked for a customer id.
- * "budget_crossed": Indicates that the spend has exceeded the max budget.
- * "threshold_crossed": Indicates that spend has crossed a threshold (currently sent when 85% and 95% of budget is reached).
- * "projected_limit_exceeded": For "key" only - Indicates that the projected spend is expected to exceed the soft budget threshold.
-- `event_group` *Literal["customer", "internal_user", "key", "team", "proxy"]*: The group associated with the event. Possible values are:
- * "customer": The event is related to a specific customer
- * "internal_user": The event is related to a specific internal user.
- * "key": The event is related to a specific key.
- * "team": The event is related to a team.
- * "proxy": The event is related to a proxy.
-
-- `event_message` *str*: A human-readable description of the event.
-
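-A minimal sketch of handling a webhook event payload on your side (this is your own code, not part of LiteLLM; field names follow the spec above):
-
-```python
-def handle_litellm_webhook(event: dict) -> None:
-    """Process a LiteLLM budget-alert webhook payload (illustrative only)."""
-    group = event["event_group"]   # "customer" | "internal_user" | "key" | "team" | "proxy"
-    spend = event["spend"]
-    max_budget = event.get("max_budget")
-
-    if event["event"] == "budget_crossed":
-        print(f"ALERT: {group} exceeded budget ({spend} > {max_budget}): {event['event_message']}")
-    elif event["event"] == "threshold_crossed":
-        print(f"WARN: {group} is approaching its budget: {event['event_message']}")
-    elif event["event"] == "projected_limit_exceeded":
-        print(f"WARN: projected spend {event.get('projected_spend')} exceeds the soft budget")
-```
-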
-## Region-outage alerting (✨ Enterprise feature)
-
-:::info
-[Get a free 2-week license](https://forms.gle/P518LXsAZ7PhXpDn8)
-:::
-
-Setup alerts if a provider region is having an outage.
-
-```yaml
-general_settings:
- alerting: ["slack"]
- alert_types: ["region_outage_alerts"]
-```
-
-By default this will trigger if multiple models in a region fail 5+ requests in 1 minute. '400' status code errors are not counted (i.e. BadRequestErrors).
-
-Control thresholds with:
-
-```yaml
-general_settings:
- alerting: ["slack"]
- alert_types: ["region_outage_alerts"]
- alerting_args:
- region_outage_alert_ttl: 60 # time-window in seconds
- minor_outage_alert_threshold: 5 # number of errors to trigger a minor alert
- major_outage_alert_threshold: 10 # number of errors to trigger a major alert
-```
-
-## **All Possible Alert Types**
-
-👉 [**Here is how you can set specific alert types**](#opting-into-specific-alert-types)
-
-LLM-related Alerts
-
-| Alert Type | Description | Default On |
-|------------|-------------|---------|
-| `llm_exceptions` | Alerts for LLM API exceptions | ✅ |
-| `llm_too_slow` | Notifications for LLM responses slower than the set threshold | ✅ |
-| `llm_requests_hanging` | Alerts for LLM requests that are not completing | ✅ |
-| `cooldown_deployment` | Alerts when a deployment is put into cooldown | ✅ |
-| `new_model_added` | Notifications when a new model is added to litellm proxy through /model/new| ✅ |
-| `outage_alerts` | Alerts when a specific LLM deployment is facing an outage | ✅ |
-| `region_outage_alerts` | Alerts when a specific LLM region is facing an outage, e.g. us-east-1 | ✅ |
-
-Budget and Spend Alerts
-
-| Alert Type | Description | Default On|
-|------------|-------------|---------|
-| `budget_alerts` | Notifications related to budget limits or thresholds | ✅ |
-| `spend_reports` | Periodic reports on spending across teams or tags | ✅ |
-| `failed_tracking_spend` | Alerts when spend tracking fails | ✅ |
-| `daily_reports` | Daily Spend reports | ✅ |
-| `fallback_reports` | Weekly Reports on LLM fallback occurrences | ✅ |
-
-Database Alerts
-
-| Alert Type | Description | Default On |
-|------------|-------------|---------|
-| `db_exceptions` | Notifications for database-related exceptions | ✅ |
-
-Management Endpoint Alerts - Virtual Key, Team, Internal User
-
-| Alert Type | Description | Default On |
-|------------|-------------|---------|
-| `new_virtual_key_created` | Notifications when a new virtual key is created | ❌ |
-| `virtual_key_updated` | Alerts when a virtual key is modified | ❌ |
-| `virtual_key_deleted` | Notifications when a virtual key is removed | ❌ |
-| `new_team_created` | Alerts for the creation of a new team | ❌ |
-| `team_updated` | Notifications when team details are modified | ❌ |
-| `team_deleted` | Alerts when a team is deleted | ❌ |
-| `new_internal_user_created` | Notifications for new internal user accounts | ❌ |
-| `internal_user_updated` | Alerts when an internal user's details are changed | ❌ |
-| `internal_user_deleted` | Notifications when an internal user account is removed | ❌ |
\ No newline at end of file
diff --git a/docs/my-website/docs/proxy/architecture.md b/docs/my-website/docs/proxy/architecture.md
deleted file mode 100644
index eb4f1ec8d..000000000
--- a/docs/my-website/docs/proxy/architecture.md
+++ /dev/null
@@ -1,39 +0,0 @@
-import Image from '@theme/IdealImage';
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Life of a Request
-
-## High Level architecture
-
-
-
-
-### Request Flow
-
-1. **User Sends Request**: The process begins when a user sends a request to the LiteLLM Proxy Server (Gateway).
-
-2. [**Virtual Keys**](../virtual_keys): At this stage the `Bearer` token in the request is checked to ensure it is valid and under its budget. [Here is the list of checks that run for each request](https://github.com/BerriAI/litellm/blob/ba41a72f92a9abf1d659a87ec880e8e319f87481/litellm/proxy/auth/auth_checks.py#L43)
- - 2.1 Check if the Virtual Key exists in Redis Cache or In Memory Cache
- - 2.2 **If not in Cache**, Lookup Virtual Key in DB
-
-3. **Rate Limiting**: The [MaxParallelRequestsHandler](https://github.com/BerriAI/litellm/blob/main/litellm/proxy/hooks/parallel_request_limiter.py) checks the **rate limit (rpm/tpm)** for the following components:
- - Global Server Rate Limit
- - Virtual Key Rate Limit
- - User Rate Limit
- - Team Limit
-
-4. **LiteLLM `proxy_server.py`**: Contains the `/chat/completions` and `/embeddings` endpoints. Requests to these endpoints are sent through the LiteLLM Router
-
-5. [**LiteLLM Router**](../routing): The LiteLLM Router handles Load balancing, Fallbacks, Retries for LLM API deployments.
-
-6. [**litellm.completion() / litellm.embedding()**:](../index#litellm-python-sdk) The litellm Python SDK is used to call the LLM in the OpenAI API format (Translation and parameter mapping)
-
-7. **Post-Request Processing**: After the response is sent back to the client, the following **asynchronous** tasks are performed:
- - [Logging to LangFuse (logging destination is configurable)](./logging)
- - The [MaxParallelRequestsHandler](https://github.com/BerriAI/litellm/blob/main/litellm/proxy/hooks/parallel_request_limiter.py) updates the rpm/tpm usage for the following:
- - Global Server Rate Limit
- - Virtual Key Rate Limit
- - User Rate Limit
- - Team Limit
- - The `_PROXY_track_cost_callback` updates spend / usage in the LiteLLM database. [Here is everything tracked in the DB per request](https://github.com/BerriAI/litellm/blob/ba41a72f92a9abf1d659a87ec880e8e319f87481/schema.prisma#L172)
diff --git a/docs/my-website/docs/proxy/billing.md b/docs/my-website/docs/proxy/billing.md
deleted file mode 100644
index 902801cd0..000000000
--- a/docs/my-website/docs/proxy/billing.md
+++ /dev/null
@@ -1,319 +0,0 @@
-import Image from '@theme/IdealImage';
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Billing
-
-Bill internal teams, external customers for their usage
-
-**🚨 Requirements**
-- [Setup Lago](https://docs.getlago.com/guide/self-hosted/docker#run-the-app), for usage-based billing. We recommend following [their Stripe tutorial](https://docs.getlago.com/templates/per-transaction/stripe#step-1-create-billable-metrics-for-transaction)
-
-Steps:
-- Connect the proxy to Lago
-- Set the id you want to bill for (customers, internal users, teams)
-- Start!
-
-## Quick Start
-
-Bill internal teams for their usage
-
-### 1. Connect proxy to Lago
-
-Set 'lago' as a callback on your proxy config.yaml
-
-```yaml
-model_list:
- - model_name: fake-openai-endpoint
- litellm_params:
- model: openai/fake
- api_key: fake-key
- api_base: https://exampleopenaiendpoint-production.up.railway.app/
-
-litellm_settings:
- callbacks: ["lago"] # 👈 KEY CHANGE
-
-general_settings:
- master_key: sk-1234
-```
-
-Add your Lago keys to the environment
-
-```bash
-export LAGO_API_BASE="http://localhost:3000" # self-host - https://docs.getlago.com/guide/self-hosted/docker#run-the-app
-export LAGO_API_KEY="3e29d607-de54-49aa-a019-ecf585729070" # Get key - https://docs.getlago.com/guide/self-hosted/docker#find-your-api-key
-export LAGO_API_EVENT_CODE="openai_tokens" # name of lago billing code
-export LAGO_API_CHARGE_BY="team_id" # 👈 Charges 'team_id' attached to proxy key
-```
-
-Start proxy
-
-```bash
-litellm --config /path/to/config.yaml
-```
-
-### 2. Create Key for Internal Team
-
-```bash
-curl 'http://0.0.0.0:4000/key/generate' \
---header 'Authorization: Bearer sk-1234' \
---header 'Content-Type: application/json' \
---data-raw '{"team_id": "my-unique-id"}' # 👈 Internal Team's ID
-```
-
-Response Object:
-
-```bash
-{
- "key": "sk-tXL0wt5-lOOVK9sfY2UacA",
-}
-```
-
-
-### 3. Start billing!
-
-
-
-
-```bash
-curl --location 'http://0.0.0.0:4000/chat/completions' \
---header 'Content-Type: application/json' \
---header 'Authorization: Bearer sk-tXL0wt5-lOOVK9sfY2UacA' \ # 👈 Team's Key
---data ' {
- "model": "fake-openai-endpoint",
- "messages": [
- {
- "role": "user",
- "content": "what llm are you"
- }
- ]
- }
-'
-```
-
-
-
-```python
-import openai
-client = openai.OpenAI(
- api_key="sk-tXL0wt5-lOOVK9sfY2UacA", # 👈 Team's Key
- base_url="http://0.0.0.0:4000"
-)
-
-# request sent to model set on litellm proxy, `litellm --model`
-response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
- {
- "role": "user",
- "content": "this is a test request, write a short poem"
- }
-])
-
-print(response)
-```
-
-
-
-```python
-from langchain.chat_models import ChatOpenAI
-from langchain.prompts.chat import (
- ChatPromptTemplate,
- HumanMessagePromptTemplate,
- SystemMessagePromptTemplate,
-)
-from langchain.schema import HumanMessage, SystemMessage
-import os
-
-os.environ["OPENAI_API_KEY"] = "sk-tXL0wt5-lOOVK9sfY2UacA" # 👈 Team's Key
-
-chat = ChatOpenAI(
- openai_api_base="http://0.0.0.0:4000",
- model = "gpt-3.5-turbo",
- temperature=0.1,
-)
-
-messages = [
- SystemMessage(
- content="You are a helpful assistant that im using to make a test request to."
- ),
- HumanMessage(
- content="test from litellm. tell me why it's amazing in 1 sentence"
- ),
-]
-response = chat(messages)
-
-print(response)
-```
-
-
-
-**See Results on Lago**
-
-
-
-
-## Advanced - Lago Logging object
-
-This is what LiteLLM will log to Lago
-
-```
-{
-    "event": {
-        "transaction_id": "<generated_id>",
-        "external_customer_id": "<id>", # either 'end_user_id', 'user_id', or 'team_id'. Default 'end_user_id'.
-        "code": os.getenv("LAGO_API_EVENT_CODE"),
-        "properties": {
-            "input_tokens": <input_tokens>,
-            "output_tokens": <output_tokens>,
-            "model": "<model_name>",
-            "response_cost": <response_cost>, # 👈 LITELLM CALCULATED RESPONSE COST - https://github.com/BerriAI/litellm/blob/d43f75150a65f91f60dc2c0c9462ce3ffc713c1f/litellm/utils.py#L1473
-        }
-    }
-}
-```
-
-## Advanced - Bill Customers, Internal Users
-
-For:
-- Customers (id passed via 'user' param in /chat/completion call) = 'end_user_id'
-- Internal Users (id set when [creating keys](https://docs.litellm.ai/docs/proxy/virtual_keys#advanced---spend-tracking)) = 'user_id'
-- Teams (id set when [creating keys](https://docs.litellm.ai/docs/proxy/virtual_keys#advanced---spend-tracking)) = 'team_id'
-
-
-
-
-
-
-1. Set 'LAGO_API_CHARGE_BY' to 'end_user_id'
-
- ```bash
- export LAGO_API_CHARGE_BY="end_user_id"
- ```
-
-2. Test it!
-
-
-
-
- ```shell
- curl --location 'http://0.0.0.0:4000/chat/completions' \
- --header 'Content-Type: application/json' \
- --data ' {
- "model": "gpt-3.5-turbo",
- "messages": [
- {
- "role": "user",
- "content": "what llm are you"
- }
- ],
- "user": "my_customer_id" # 👈 whatever your customer id is
- }
- '
- ```
-
-
-
- ```python
- import openai
- client = openai.OpenAI(
- api_key="anything",
- base_url="http://0.0.0.0:4000"
- )
-
- # request sent to model set on litellm proxy, `litellm --model`
- response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
- {
- "role": "user",
- "content": "this is a test request, write a short poem"
- }
- ], user="my_customer_id") # 👈 whatever your customer id is
-
- print(response)
- ```
-
-
-
-
- ```python
- from langchain.chat_models import ChatOpenAI
- from langchain.prompts.chat import (
- ChatPromptTemplate,
- HumanMessagePromptTemplate,
- SystemMessagePromptTemplate,
- )
- from langchain.schema import HumanMessage, SystemMessage
- import os
-
- os.environ["OPENAI_API_KEY"] = "anything"
-
- chat = ChatOpenAI(
- openai_api_base="http://0.0.0.0:4000",
- model = "gpt-3.5-turbo",
- temperature=0.1,
- extra_body={
- "user": "my_customer_id" # 👈 whatever your customer id is
- }
- )
-
- messages = [
- SystemMessage(
- content="You are a helpful assistant that im using to make a test request to."
- ),
- HumanMessage(
- content="test from litellm. tell me why it's amazing in 1 sentence"
- ),
- ]
- response = chat(messages)
-
- print(response)
- ```
-
-
-
-
-
-
-
-1. Set 'LAGO_API_CHARGE_BY' to 'user_id'
-
-```bash
-export LAGO_API_CHARGE_BY="user_id"
-```
-
-2. Create a key for that user
-
-```bash
-curl 'http://0.0.0.0:4000/key/generate' \
---header 'Authorization: Bearer ' \
---header 'Content-Type: application/json' \
---data-raw '{"user_id": "my-unique-id"}' # 👈 Internal User's id
-```
-
-Response Object:
-
-```bash
-{
- "key": "sk-tXL0wt5-lOOVK9sfY2UacA",
-}
-```
-
-3. Make API Calls with that Key
-
-```python
-import openai
-client = openai.OpenAI(
- api_key="sk-tXL0wt5-lOOVK9sfY2UacA", # 👈 Generated key
- base_url="http://0.0.0.0:4000"
-)
-
-# request sent to model set on litellm proxy, `litellm --model`
-response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
- {
- "role": "user",
- "content": "this is a test request, write a short poem"
- }
-])
-
-print(response)
-```
-
-
diff --git a/docs/my-website/docs/proxy/bucket.md b/docs/my-website/docs/proxy/bucket.md
deleted file mode 100644
index d1b9e6076..000000000
--- a/docs/my-website/docs/proxy/bucket.md
+++ /dev/null
@@ -1,154 +0,0 @@
-
-import Image from '@theme/IdealImage';
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Logging GCS, s3 Buckets
-
-LiteLLM Supports Logging to the following Cloud Buckets
-- (Enterprise) ✨ [Google Cloud Storage Buckets](#logging-proxy-inputoutput-to-google-cloud-storage-buckets)
-- (Free OSS) [Amazon s3 Buckets](#logging-proxy-inputoutput---s3-buckets)
-
-## Google Cloud Storage Buckets
-
-Log LLM Logs to [Google Cloud Storage Buckets](https://cloud.google.com/storage?hl=en)
-
-:::info
-
-✨ This is an Enterprise only feature [Get Started with Enterprise here](https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat)
-
-:::
-
-
-| Property | Details |
-|----------|---------|
-| Description | Log LLM Input/Output to cloud storage buckets |
-| Load Test Benchmarks | [Benchmarks](https://docs.litellm.ai/docs/benchmarks) |
-| Google Docs on Cloud Storage | [Google Cloud Storage](https://cloud.google.com/storage?hl=en) |
-
-
-
-### Usage
-
-1. Add `gcs_bucket` to LiteLLM Config.yaml
-```yaml
-model_list:
-- litellm_params:
- api_base: https://openai-function-calling-workers.tasslexyz.workers.dev/
- api_key: my-fake-key
- model: openai/my-fake-model
- model_name: fake-openai-endpoint
-
-litellm_settings:
- callbacks: ["gcs_bucket"] # 👈 KEY CHANGE # 👈 KEY CHANGE
-```
-
-2. Set required env variables
-
-```shell
-GCS_BUCKET_NAME=""
-GCS_PATH_SERVICE_ACCOUNT="/Users/ishaanjaffer/Downloads/adroit-crow-413218-a956eef1a2a8.json" # Add path to service account.json
-```
-
-3. Start Proxy
-
-```
-litellm --config /path/to/config.yaml
-```
-
-4. Test it!
-
-```bash
-curl --location 'http://0.0.0.0:4000/chat/completions' \
---header 'Content-Type: application/json' \
---data ' {
- "model": "fake-openai-endpoint",
- "messages": [
- {
- "role": "user",
- "content": "what llm are you"
- }
- ]
- }
-'
-```
-
-
-### Expected Logs on GCS Buckets
-
-
-
-### Fields Logged on GCS Buckets
-
-[**The standard logging object is logged on GCS Bucket**](../proxy/logging)
-
-
-### Getting `service_account.json` from Google Cloud Console
-
-1. Go to [Google Cloud Console](https://console.cloud.google.com/)
-2. Search for IAM & Admin
-3. Click on Service Accounts
-4. Select a Service Account
-5. Click on 'Keys' -> Add Key -> Create New Key -> JSON
-6. Save the JSON file and add the path to `GCS_PATH_SERVICE_ACCOUNT`
-
-
-## s3 Buckets
-
-We will use the `--config` to set
-
-- `litellm.success_callback = ["s3"]`
-
-This will log all successful LLM calls to the s3 bucket
-
-**Step 1** Set AWS Credentials in .env
-
-```shell
-AWS_ACCESS_KEY_ID = ""
-AWS_SECRET_ACCESS_KEY = ""
-AWS_REGION_NAME = ""
-```
-
-**Step 2**: Create a `config.yaml` file and set `litellm_settings`: `success_callback`
-
-```yaml
-model_list:
- - model_name: gpt-3.5-turbo
- litellm_params:
- model: gpt-3.5-turbo
-litellm_settings:
- success_callback: ["s3"]
- s3_callback_params:
- s3_bucket_name: logs-bucket-litellm # AWS Bucket Name for S3
- s3_region_name: us-west-2 # AWS Region Name for S3
- s3_aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID # use os.environ/ to pass environment variables. This is the AWS Access Key ID for S3
- s3_aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY # AWS Secret Access Key for S3
- s3_path: my-test-path # [OPTIONAL] set path in bucket you want to write logs to
- s3_endpoint_url: https://s3.amazonaws.com # [OPTIONAL] S3 endpoint URL, if you want to use Backblaze/cloudflare s3 buckets
-```
-
-**Step 3**: Start the proxy, make a test request
-
-Start proxy
-
-```shell
-litellm --config config.yaml --debug
-```
-
-Test Request
-
-```shell
-curl --location 'http://0.0.0.0:4000/chat/completions' \
- --header 'Content-Type: application/json' \
- --data ' {
- "model": "Azure OpenAI GPT-4 East",
- "messages": [
- {
- "role": "user",
- "content": "what llm are you"
- }
- ]
- }'
-```
-
-Your logs should be available on the specified s3 Bucket
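-
-To spot-check that log objects are landing in the bucket, here is a quick sketch with `boto3` (assuming the bucket, region, and path values from the config above, and standard AWS credentials in your environment):
-
-```python
-import boto3
-
-# list the most recent log objects under the configured path
-s3 = boto3.client("s3", region_name="us-west-2")
-resp = s3.list_objects_v2(Bucket="logs-bucket-litellm", Prefix="my-test-path")
-for obj in resp.get("Contents", []):
-    print(obj["Key"], obj["LastModified"])
-```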
diff --git a/docs/my-website/docs/proxy/caching.md b/docs/my-website/docs/proxy/caching.md
deleted file mode 100644
index 3f5342c7e..000000000
--- a/docs/my-website/docs/proxy/caching.md
+++ /dev/null
@@ -1,945 +0,0 @@
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Caching
-Cache LLM Responses
-
-:::note
-
-For OpenAI/Anthropic Prompt Caching, go [here](../completion/prompt_caching.md)
-
-:::
-
-LiteLLM supports:
-- In Memory Cache
-- Redis Cache
-- Qdrant Semantic Cache
-- Redis Semantic Cache
-- s3 Bucket Cache
-
-## Quick Start - Redis, s3 Cache, Semantic Cache
-
-
-
-
-Caching can be enabled by adding the `cache` key in the `config.yaml`
-
-#### Step 1: Add `cache` to the config.yaml
-```yaml
-model_list:
- - model_name: gpt-3.5-turbo
- litellm_params:
- model: gpt-3.5-turbo
- - model_name: text-embedding-ada-002
- litellm_params:
- model: text-embedding-ada-002
-
-litellm_settings:
- set_verbose: True
- cache: True # set cache responses to True, litellm defaults to using a redis cache
-```
-
-#### [OPTIONAL] Step 1.5: Add redis namespaces, default ttl
-
-#### Namespace
-If you want to create some folder for your keys, you can set a namespace, like this:
-
-```yaml
-litellm_settings:
- cache: true
- cache_params: # set cache params for redis
- type: redis
- namespace: "litellm.caching.caching"
-```
-
-and keys will be stored like:
-
-```
-litellm.caching.caching: <cache_key>
-```
-
-#### Redis Cluster
-
-
-
-
-
-```yaml
-model_list:
- - model_name: "*"
- litellm_params:
- model: "*"
-
-
-litellm_settings:
- cache: True
- cache_params:
- type: redis
- redis_startup_nodes: [{"host": "127.0.0.1", "port": "7001"}]
-```
-
-
-
-
-
-You can configure redis cluster in your .env by setting `REDIS_CLUSTER_NODES` in your .env
-
-**Example `REDIS_CLUSTER_NODES`** value
-
-```
-REDIS_CLUSTER_NODES='[{"host": "127.0.0.1", "port": "7001"}, {"host": "127.0.0.1", "port": "7003"}, {"host": "127.0.0.1", "port": "7004"}, {"host": "127.0.0.1", "port": "7005"}, {"host": "127.0.0.1", "port": "7006"}, {"host": "127.0.0.1", "port": "7007"}]'
-```
-
-:::note
-
-Example python script for setting redis cluster nodes in .env:
-
-```python
-# List of startup nodes
-startup_nodes = [
- {"host": "127.0.0.1", "port": "7001"},
- {"host": "127.0.0.1", "port": "7003"},
- {"host": "127.0.0.1", "port": "7004"},
- {"host": "127.0.0.1", "port": "7005"},
- {"host": "127.0.0.1", "port": "7006"},
- {"host": "127.0.0.1", "port": "7007"},
-]
-
-# set startup nodes in environment variables
-os.environ["REDIS_CLUSTER_NODES"] = json.dumps(startup_nodes)
-print("REDIS_CLUSTER_NODES", os.environ["REDIS_CLUSTER_NODES"])
-```
-
-:::
-
-
-
-
-
-#### Redis Sentinel
-
-
-
-
-
-
-```yaml
-model_list:
- - model_name: "*"
- litellm_params:
- model: "*"
-
-
-litellm_settings:
- cache: true
- cache_params:
- type: "redis"
- service_name: "mymaster"
- sentinel_nodes: [["localhost", 26379]]
- sentinel_password: "password" # [OPTIONAL]
-```
-
-
-
-
-
-You can configure redis sentinel in your .env by setting `REDIS_SENTINEL_NODES` in your .env
-
-**Example `REDIS_SENTINEL_NODES`** value
-
-```env
-REDIS_SENTINEL_NODES='[["localhost", 26379]]'
-REDIS_SERVICE_NAME = "mymaster"
-REDIS_SENTINEL_PASSWORD = "password"
-```
-
-:::note
-
-Example python script for setting redis cluster nodes in .env:
-
-```python
-# List of startup nodes
-sentinel_nodes = [["localhost", 26379]]
-
-# set startup nodes in environment variables
-os.environ["REDIS_SENTINEL_NODES"] = json.dumps(sentinel_nodes)
-print("REDIS_SENTINEL_NODES", os.environ["REDIS_SENTINEL_NODES"])
-```
-
-:::
-
-
-
-
-
-#### TTL
-
-```yaml
-litellm_settings:
- cache: true
- cache_params: # set cache params for redis
- type: redis
- ttl: 600 # will be cached on redis for 600s
- # default_in_memory_ttl: Optional[float], default is None. time in seconds.
- # default_in_redis_ttl: Optional[float], default is None. time in seconds.
-```
-
-
-#### SSL
-
-just set `REDIS_SSL="True"` in your .env, and LiteLLM will pick this up.
-
-```env
-REDIS_SSL="True"
-```
-
-For quick testing, you can also use REDIS_URL, e.g.:
-
-```
-REDIS_URL="rediss://.."
-```
-
-but we **don't** recommend using REDIS_URL in prod. We've noticed a performance difference between using it vs. redis_host, port, etc.
-#### Step 2: Add Redis Credentials to .env
-Set either `REDIS_URL` or the `REDIS_HOST` in your os environment, to enable caching.
-
- ```shell
- REDIS_URL = "" # REDIS_URL='redis://username:password@hostname:port/database'
- ## OR ##
- REDIS_HOST = "" # REDIS_HOST='redis-18841.c274.us-east-1-3.ec2.cloud.redislabs.com'
- REDIS_PORT = "" # REDIS_PORT='18841'
- REDIS_PASSWORD = "" # REDIS_PASSWORD='liteLlmIsAmazing'
- ```
-
-**Additional kwargs**
-You can pass in any additional `redis.Redis` arg by storing the variable + value in your os environment, like this:
-```shell
-REDIS_<redis.Redis arg> = ""
-```
-
-[**See how it's read from the environment**](https://github.com/BerriAI/litellm/blob/4d7ff1b33b9991dcf38d821266290631d9bcd2dd/litellm/_redis.py#L40)
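-
-For example, `socket_timeout` is a standard `redis.Redis` argument; assuming the `REDIS_<arg>` naming convention above, it could be set like this (a sketch; check the linked code for the exact mapping):
-
-```shell
-REDIS_SOCKET_TIMEOUT = "5" # passed through as redis.Redis(socket_timeout=5)
-```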
-#### Step 3: Run proxy with config
-```shell
-$ litellm --config /path/to/config.yaml
-```
-
-
-
-
-
-Caching can be enabled by adding the `cache` key in the `config.yaml`
-
-#### Step 1: Add `cache` to the config.yaml
-```yaml
-model_list:
- - model_name: fake-openai-endpoint
- litellm_params:
- model: openai/fake
- api_key: fake-key
- api_base: https://exampleopenaiendpoint-production.up.railway.app/
- - model_name: openai-embedding
- litellm_params:
- model: openai/text-embedding-3-small
- api_key: os.environ/OPENAI_API_KEY
-
-litellm_settings:
- set_verbose: True
- cache: True # set cache responses to True, litellm defaults to using a redis cache
- cache_params:
- type: qdrant-semantic
- qdrant_semantic_cache_embedding_model: openai-embedding # the model should be defined on the model_list
- qdrant_collection_name: test_collection
- qdrant_quantization_config: binary
- similarity_threshold: 0.8 # similarity threshold for semantic cache
-```
-
-#### Step 2: Add Qdrant Credentials to your .env
-
-```shell
-QDRANT_API_KEY = "16rJUMBRx*************"
-QDRANT_API_BASE = "https://5392d382-45*********.cloud.qdrant.io"
-```
-
-#### Step 3: Run proxy with config
-```shell
-$ litellm --config /path/to/config.yaml
-```
-
-
-#### Step 4. Test it
-
-```shell
-curl -i http://localhost:4000/v1/chat/completions \
- -H "Content-Type: application/json" \
- -H "Authorization: Bearer sk-1234" \
- -d '{
- "model": "fake-openai-endpoint",
- "messages": [
- {"role": "user", "content": "Hello"}
- ]
- }'
-```
-
-**Expect to see `x-litellm-semantic-similarity` in the response headers when semantic caching is on**
-
-
-
-
-
-#### Step 1: Add `cache` to the config.yaml
-```yaml
-model_list:
- - model_name: gpt-3.5-turbo
- litellm_params:
- model: gpt-3.5-turbo
- - model_name: text-embedding-ada-002
- litellm_params:
- model: text-embedding-ada-002
-
-litellm_settings:
- set_verbose: True
- cache: True # set cache responses to True
- cache_params: # set cache params for s3
- type: s3
- s3_bucket_name: cache-bucket-litellm # AWS Bucket Name for S3
- s3_region_name: us-west-2 # AWS Region Name for S3
-    s3_aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID # use os.environ/ to pass environment variables. This is the AWS Access Key ID for S3
- s3_aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY # AWS Secret Access Key for S3
- s3_endpoint_url: https://s3.amazonaws.com # [OPTIONAL] S3 endpoint URL, if you want to use Backblaze/cloudflare s3 buckets
-```
-
-#### Step 2: Run proxy with config
-```shell
-$ litellm --config /path/to/config.yaml
-```
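-
-#### Step 3: Test it
-
-Send the same request twice; the second response should be served from the S3 cache (a sketch, assuming the `gpt-3.5-turbo` model from the config above and the proxy running on the default port):
-
-```shell
-curl http://0.0.0.0:4000/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -d '{
-    "model": "gpt-3.5-turbo",
-    "messages": [{"role": "user", "content": "write a poem about litellm!"}]
-  }'
-```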
-
-
-
-
-
-Caching can be enabled by adding the `cache` key in the `config.yaml`
-
-#### Step 1: Add `cache` to the config.yaml
-```yaml
-model_list:
- - model_name: gpt-3.5-turbo
- litellm_params:
- model: gpt-3.5-turbo
- - model_name: azure-embedding-model
- litellm_params:
- model: azure/azure-embedding-model
- api_base: os.environ/AZURE_API_BASE
- api_key: os.environ/AZURE_API_KEY
- api_version: "2023-07-01-preview"
-
-litellm_settings:
- set_verbose: True
- cache: True # set cache responses to True, litellm defaults to using a redis cache
- cache_params:
- type: "redis-semantic"
- similarity_threshold: 0.8 # similarity threshold for semantic cache
- redis_semantic_cache_embedding_model: azure-embedding-model # set this to a model_name set in model_list
-```
-
-#### Step 2: Add Redis Credentials to .env
-Set either `REDIS_URL` or `REDIS_HOST` in your os environment to enable caching.
-
- ```shell
- REDIS_URL = "" # REDIS_URL='redis://username:password@hostname:port/database'
- ## OR ##
- REDIS_HOST = "" # REDIS_HOST='redis-18841.c274.us-east-1-3.ec2.cloud.redislabs.com'
- REDIS_PORT = "" # REDIS_PORT='18841'
- REDIS_PASSWORD = "" # REDIS_PASSWORD='liteLlmIsAmazing'
- ```
-
-**Additional kwargs**
-You can pass in any additional `redis.Redis` arg by storing the variable + value in your os environment, like this:
-```shell
-REDIS_<redis.Redis arg> = ""
-```
-
-#### Step 3: Run proxy with config
-```shell
-$ litellm --config /path/to/config.yaml
-```
-
-
-
-
-
-
-
-
-
-## Using Caching - /chat/completions
-
-
-
-
-Send the same request twice:
-```shell
-curl http://0.0.0.0:4000/v1/chat/completions \
- -H "Content-Type: application/json" \
- -d '{
- "model": "gpt-3.5-turbo",
- "messages": [{"role": "user", "content": "write a poem about litellm!"}],
- "temperature": 0.7
- }'
-
-curl http://0.0.0.0:4000/v1/chat/completions \
- -H "Content-Type: application/json" \
- -d '{
- "model": "gpt-3.5-turbo",
- "messages": [{"role": "user", "content": "write a poem about litellm!"}],
- "temperature": 0.7
- }'
-```
-
-
-
-Send the same request twice:
-```shell
-curl --location 'http://0.0.0.0:4000/embeddings' \
- --header 'Content-Type: application/json' \
- --data ' {
- "model": "text-embedding-ada-002",
- "input": ["write a litellm poem"]
- }'
-
-curl --location 'http://0.0.0.0:4000/embeddings' \
- --header 'Content-Type: application/json' \
- --data ' {
- "model": "text-embedding-ada-002",
- "input": ["write a litellm poem"]
- }'
-```
-
-
-
-## Set cache for proxy, but not on the actual llm api call
-
-Use this if you just want to enable features like rate limiting and load balancing across multiple instances.
-
-Set `supported_call_types: []` to disable caching on the actual api call.
-
-
-```yaml
-litellm_settings:
- cache: True
- cache_params:
- type: redis
- supported_call_types: []
-```
-
-
-## Debugging Caching - `/cache/ping`
-LiteLLM Proxy exposes a `/cache/ping` endpoint to test if the cache is working as expected.
-
-**Usage**
-```shell
-curl --location 'http://0.0.0.0:4000/cache/ping' -H "Authorization: Bearer sk-1234"
-```
-
-**Expected Response - when cache healthy**
-```shell
-{
- "status": "healthy",
- "cache_type": "redis",
- "ping_response": true,
- "set_cache_response": "success",
- "litellm_cache_params": {
- "supported_call_types": "['completion', 'acompletion', 'embedding', 'aembedding', 'atranscription', 'transcription']",
- "type": "redis",
- "namespace": "None"
- },
- "redis_cache_params": {
- "redis_client": "Redis>>",
- "redis_kwargs": "{'url': 'redis://:******@redis-16337.c322.us-east-1-2.ec2.cloud.redislabs.com:16337'}",
- "async_redis_conn_pool": "BlockingConnectionPool>",
- "redis_version": "7.2.0"
- }
-}
-```
-
-## Advanced
-
-### Control Call Types Caching is on for (`/chat/completions`, `/embeddings`, etc.)
-
-By default, caching is on for all call types. You can control which call types caching is on for by setting `supported_call_types` in `cache_params`
-
-**Cache will only be on for the call types specified in `supported_call_types`**
-
-```yaml
-litellm_settings:
- cache: True
- cache_params:
- type: redis
- supported_call_types: ["acompletion", "atext_completion", "aembedding", "atranscription"]
- # /chat/completions, /completions, /embeddings, /audio/transcriptions
-```
-### Set Cache Params on config.yaml
-```yaml
-model_list:
- - model_name: gpt-3.5-turbo
- litellm_params:
- model: gpt-3.5-turbo
- - model_name: text-embedding-ada-002
- litellm_params:
- model: text-embedding-ada-002
-
-litellm_settings:
- set_verbose: True
- cache: True # set cache responses to True, litellm defaults to using a redis cache
- cache_params: # cache_params are optional
- type: "redis" # The type of cache to initialize. Can be "local" or "redis". Defaults to "local".
- host: "localhost" # The host address for the Redis cache. Required if type is "redis".
- port: 6379 # The port number for the Redis cache. Required if type is "redis".
- password: "your_password" # The password for the Redis cache. Required if type is "redis".
-
- # Optional configurations
- supported_call_types: ["acompletion", "atext_completion", "aembedding", "atranscription"]
- # /chat/completions, /completions, /embeddings, /audio/transcriptions
-```
-
-### **Turn on / off caching per request**
-
-The proxy supports 4 cache-controls:
-
-- `ttl`: *Optional(int)* - Will cache the response for the user-defined amount of time (in seconds).
-- `s-maxage`: *Optional(int)* - Will only accept cached responses that are within the user-defined range (in seconds).
-- `no-cache`: *Optional(bool)* Will not return a cached response, but instead call the actual endpoint.
-- `no-store`: *Optional(bool)* Will not cache the response.
-
-[Let us know if you need more](https://github.com/BerriAI/litellm/issues/1218)
-
-**Turn off caching**
-
-Set `no-cache=True`; this will not return a cached response.
-
-
-
-
-```python
-import os
-from openai import OpenAI
-
-client = OpenAI(
- # This is the default and can be omitted
- api_key=os.environ.get("OPENAI_API_KEY"),
- base_url="http://0.0.0.0:4000"
-)
-
-chat_completion = client.chat.completions.create(
- messages=[
- {
- "role": "user",
- "content": "Say this is a test",
- }
- ],
- model="gpt-3.5-turbo",
-    extra_body = { # OpenAI python accepts extra args in extra_body
-        "cache": {
-            "no-cache": True # will not return a cached response
-        }
-    }
-)
-```
-
-
-
-
-```shell
-curl http://localhost:4000/v1/chat/completions \
- -H "Content-Type: application/json" \
- -H "Authorization: Bearer sk-1234" \
-  -d '{
-    "model": "gpt-3.5-turbo",
-    "cache": {"no-cache": true},
-    "messages": [
-      {"role": "user", "content": "Say this is a test"}
-    ]
-  }'
-```
-
-
-
-
-
-**Turn on caching**
-
-By default, the cache is always on.
-
-
-
-
-```python
-import os
-from openai import OpenAI
-
-client = OpenAI(
- # This is the default and can be omitted
- api_key=os.environ.get("OPENAI_API_KEY"),
- base_url="http://0.0.0.0:4000"
-)
-
-chat_completion = client.chat.completions.create(
- messages=[
- {
- "role": "user",
- "content": "Say this is a test",
- }
- ],
- model="gpt-3.5-turbo"
-)
-```
-
-
-
-
-```shell
-curl http://localhost:4000/v1/chat/completions \
- -H "Content-Type: application/json" \
- -H "Authorization: Bearer sk-1234" \
- -d '{
- "model": "gpt-3.5-turbo",
- "messages": [
- {"role": "user", "content": "Say this is a test"}
- ]
- }'
-```
-
-
-
-
-
-**Set `ttl`**
-
-Set `ttl=600`; this will cache the response for 10 minutes (600 seconds).
-
-
-
-
-```python
-import os
-from openai import OpenAI
-
-client = OpenAI(
- # This is the default and can be omitted
- api_key=os.environ.get("OPENAI_API_KEY"),
- base_url="http://0.0.0.0:4000"
-)
-
-chat_completion = client.chat.completions.create(
- messages=[
- {
- "role": "user",
- "content": "Say this is a test",
- }
- ],
- model="gpt-3.5-turbo",
-    extra_body = { # OpenAI python accepts extra args in extra_body
-        "cache": {
-            "ttl": 600 # caches response for 10 minutes
-        }
-    }
-)
-```
-
-
-
-
-```shell
-curl http://localhost:4000/v1/chat/completions \
- -H "Content-Type: application/json" \
- -H "Authorization: Bearer sk-1234" \
- -d '{
- "model": "gpt-3.5-turbo",
- "cache": {"ttl": 600},
- "messages": [
- {"role": "user", "content": "Say this is a test"}
- ]
- }'
-```
-
-
-
-
-
-
-
-**Set `s-maxage`**
-
-Set `s-maxage=600`; this will only return responses cached within the last 10 minutes.
-
-
-
-
-```python
-import os
-from openai import OpenAI
-
-client = OpenAI(
- # This is the default and can be omitted
- api_key=os.environ.get("OPENAI_API_KEY"),
- base_url="http://0.0.0.0:4000"
-)
-
-chat_completion = client.chat.completions.create(
- messages=[
- {
- "role": "user",
- "content": "Say this is a test",
- }
- ],
- model="gpt-3.5-turbo",
-    extra_body = { # OpenAI python accepts extra args in extra_body
-        "cache": {
-            "s-maxage": 600 # only get responses cached within last 10 minutes
-        }
-    }
-)
-```
-
-
-
-
-```shell
-curl http://localhost:4000/v1/chat/completions \
- -H "Content-Type: application/json" \
- -H "Authorization: Bearer sk-1234" \
- -d '{
- "model": "gpt-3.5-turbo",
- "cache": {"s-maxage": 600},
- "messages": [
- {"role": "user", "content": "Say this is a test"}
- ]
- }'
-```
-
-
-
-
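-**Set `no-store`**
-
-`no-store` is listed above but has no example; a minimal curl sketch, following the same request shape as the other cache-controls, that skips writing the response to the cache:
-
-```shell
-curl http://localhost:4000/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -H "Authorization: Bearer sk-1234" \
-  -d '{
-    "model": "gpt-3.5-turbo",
-    "cache": {"no-store": true},
-    "messages": [
-      {"role": "user", "content": "Say this is a test"}
-    ]
-  }'
-```
-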
-
-
-### Turn on / off caching per Key
-
-1. Add cache params when creating a key [full list](#turn-on--off-caching-per-key)
-
-```bash
-curl -X POST 'http://0.0.0.0:4000/key/generate' \
--H 'Authorization: Bearer sk-1234' \
--H 'Content-Type: application/json' \
--d '{
- "user_id": "222",
- "metadata": {
- "cache": {
- "no-cache": true
- }
- }
-}'
-```
-
-2. Test it!
-
-```bash
-curl -X POST 'http://localhost:4000/chat/completions' \
--H 'Content-Type: application/json' \
--H 'Authorization: Bearer ' \
--d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "bom dia"}]}'
-```
-
-### Deleting Cache Keys - `/cache/delete`
-To delete a cache key, send a request to `/cache/delete` with the `keys` you want to delete.
-
-Example
-```shell
-curl -X POST "http://0.0.0.0:4000/cache/delete" \
- -H "Authorization: Bearer sk-1234" \
- -d '{"keys": ["586bf3f3c1bf5aecb55bd9996494d3bbc69eb58397163add6d49537762a7548d", "key2"]}'
-```
-
-```shell
-# {"status":"success"}
-```
-
-#### Viewing Cache Keys from responses
-You can view the cache key in the response headers; on cache hits, it is returned in the `x-litellm-cache-key` response header.
-```shell
-curl -i --location 'http://0.0.0.0:4000/chat/completions' \
- --header 'Authorization: Bearer sk-1234' \
- --header 'Content-Type: application/json' \
-  --data '{
-    "model": "gpt-3.5-turbo",
-    "user": "ishan",
-    "messages": [
-      {
-        "role": "user",
-        "content": "what is litellm"
-      }
-    ]
-  }'
-```
-
-Response from litellm proxy
-```json
-date: Thu, 04 Apr 2024 17:37:21 GMT
-content-type: application/json
-x-litellm-cache-key: 586bf3f3c1bf5aecb55bd9996494d3bbc69eb58397163add6d49537762a7548d
-
-{
- "id": "chatcmpl-9ALJTzsBlXR9zTxPvzfFFtFbFtG6T",
- "choices": [
- {
- "finish_reason": "stop",
- "index": 0,
- "message": {
-        "content": "I'm sorr..",
- "role": "assistant"
- }
- }
- ],
- "created": 1712252235,
-}
-
-```
-
-### **Set Caching Default Off - Opt in only**
-
-1. **Set `mode: default_off` for caching**
-
-```yaml
-model_list:
- - model_name: fake-openai-endpoint
- litellm_params:
- model: openai/fake
- api_key: fake-key
- api_base: https://exampleopenaiendpoint-production.up.railway.app/
-
-# default off mode
-litellm_settings:
- set_verbose: True
- cache: True
- cache_params:
- mode: default_off # 👈 Key change cache is default_off
-```
-
-2. **Opting in to cache when cache is default off**
-
-
-
-
-
-```python
-import os
-from openai import OpenAI
-
-client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"), base_url="http://0.0.0.0:4000")
-
-chat_completion = client.chat.completions.create(
- messages=[
- {
- "role": "user",
- "content": "Say this is a test",
- }
- ],
- model="gpt-3.5-turbo",
- extra_body = { # OpenAI python accepts extra args in extra_body
- "cache": {"use-cache": True}
- }
-)
-```
-
-
-
-
-```shell
-curl http://localhost:4000/v1/chat/completions \
- -H "Content-Type: application/json" \
- -H "Authorization: Bearer sk-1234" \
-  -d '{
-    "model": "gpt-3.5-turbo",
-    "cache": {"use-cache": true},
-    "messages": [
-      {"role": "user", "content": "Say this is a test"}
-    ]
-  }'
-```
-
-
-
-
-
-
-
-### Turn on `batch_redis_requests`
-
-**What does it do?**
-When a request is made:
-
-- Check if a key starting with `litellm:::` exists in-memory; if not, get the last 100 cached requests for this key and store them
-
-- New requests are stored with this `litellm:..` as the namespace
-
-**Why?**
-To reduce the number of redis GET requests. This improved latency by 46% in prod load tests.
-
-**Usage**
-
-```yaml
-litellm_settings:
- cache: true
- cache_params:
- type: redis
- ... # remaining redis args (host, port, etc.)
- callbacks: ["batch_redis_requests"] # 👈 KEY CHANGE!
-```
-
-[**SEE CODE**](https://github.com/BerriAI/litellm/blob/main/litellm/proxy/hooks/batch_redis_get.py)
-
-## Supported `cache_params` on proxy config.yaml
-
-```yaml
-cache_params:
- # ttl
- ttl: Optional[float]
- default_in_memory_ttl: Optional[float]
- default_in_redis_ttl: Optional[float]
-
- # Type of cache (options: "local", "redis", "s3")
- type: s3
-
- # List of litellm call types to cache for
- # Options: "completion", "acompletion", "embedding", "aembedding"
- supported_call_types: ["acompletion", "atext_completion", "aembedding", "atranscription"]
- # /chat/completions, /completions, /embeddings, /audio/transcriptions
-
- # Redis cache parameters
- host: localhost # Redis server hostname or IP address
- port: "6379" # Redis server port (as a string)
- password: secret_password # Redis server password
- namespace: Optional[str] = None,
-
-
- # S3 cache parameters
- s3_bucket_name: your_s3_bucket_name # Name of the S3 bucket
- s3_region_name: us-west-2 # AWS region of the S3 bucket
- s3_api_version: 2006-03-01 # AWS S3 API version
- s3_use_ssl: true # Use SSL for S3 connections (options: true, false)
- s3_verify: true # SSL certificate verification for S3 connections (options: true, false)
- s3_endpoint_url: https://s3.amazonaws.com # S3 endpoint URL
- s3_aws_access_key_id: your_access_key # AWS Access Key ID for S3
- s3_aws_secret_access_key: your_secret_key # AWS Secret Access Key for S3
- s3_aws_session_token: your_session_token # AWS Session Token for temporary credentials
-
-```
-
-## Advanced - user api key cache ttl
-
-Configure how long the in-memory cache stores the key object (prevents db requests)
-
-```yaml
-general_settings:
- user_api_key_cache_ttl: #time in seconds
-```
-
-By default this value is set to 60s.
\ No newline at end of file
diff --git a/docs/my-website/docs/proxy/call_hooks.md b/docs/my-website/docs/proxy/call_hooks.md
deleted file mode 100644
index 6651393ef..000000000
--- a/docs/my-website/docs/proxy/call_hooks.md
+++ /dev/null
@@ -1,314 +0,0 @@
-import Image from '@theme/IdealImage';
-
-# Modify / Reject Incoming Requests
-
-- Modify data before making llm api calls on proxy
-- Reject data before making llm api calls / before returning the response
-- Enforce 'user' param for all openai endpoint calls
-
-See a complete example with our [parallel request rate limiter](https://github.com/BerriAI/litellm/blob/main/litellm/proxy/hooks/parallel_request_limiter.py)
-
-## Quick Start
-
-1. In your Custom Handler add a new `async_pre_call_hook` function
-
-This function is called just before a litellm completion call is made, and allows you to modify the data going into the litellm call [**See Code**](https://github.com/BerriAI/litellm/blob/589a6ca863000ba8e92c897ba0f776796e7a5904/litellm/proxy/proxy_server.py#L1000)
-
-```python
-from litellm.integrations.custom_logger import CustomLogger
-import litellm
-from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
-from typing import Optional, Literal
-
-# This file includes the custom callbacks for LiteLLM Proxy
-# Once defined, these can be passed in proxy_config.yaml
-class MyCustomHandler(CustomLogger): # https://docs.litellm.ai/docs/observability/custom_callback#callback-class
- # Class variables or attributes
- def __init__(self):
- pass
-
- #### CALL HOOKS - proxy only ####
-
- async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal[
- "completion",
- "text_completion",
- "embeddings",
- "image_generation",
- "moderation",
- "audio_transcription",
- ]):
- data["model"] = "my-new-model"
- return data
-
- async def async_post_call_failure_hook(
- self,
- request_data: dict,
- original_exception: Exception,
- user_api_key_dict: UserAPIKeyAuth
- ):
- pass
-
- async def async_post_call_success_hook(
- self,
- data: dict,
- user_api_key_dict: UserAPIKeyAuth,
- response,
- ):
- pass
-
- async def async_moderation_hook( # call made in parallel to llm api call
- self,
- data: dict,
- user_api_key_dict: UserAPIKeyAuth,
- call_type: Literal["completion", "embeddings", "image_generation", "moderation", "audio_transcription"],
- ):
- pass
-
- async def async_post_call_streaming_hook(
- self,
- user_api_key_dict: UserAPIKeyAuth,
- response: str,
- ):
- pass
-proxy_handler_instance = MyCustomHandler()
-```
-
-2. Add this file to your proxy config
-
-```yaml
-model_list:
- - model_name: gpt-3.5-turbo
- litellm_params:
- model: gpt-3.5-turbo
-
-litellm_settings:
- callbacks: custom_callbacks.proxy_handler_instance # sets litellm.callbacks = [proxy_handler_instance]
-```
-
-3. Start the server + test the request
-
-```shell
-$ litellm --config /path/to/config.yaml
-```
-```shell
-curl --location 'http://0.0.0.0:4000/chat/completions' \
- --data ' {
- "model": "gpt-3.5-turbo",
- "messages": [
- {
- "role": "user",
- "content": "good morning good sir"
- }
- ],
- "user": "ishaan-app",
- "temperature": 0.2
- }'
-```
-
-
-## [BETA] *NEW* async_moderation_hook
-
-Run a moderation check in parallel to the actual LLM API call.
-
-In your Custom Handler add a new `async_moderation_hook` function
-
-- This is currently only supported for `/chat/completion` calls.
-- This function runs in parallel to the actual LLM API call.
-- If your `async_moderation_hook` raises an Exception, we will return that to the user.
-
-
-:::info
-
-We might need to update the function schema in the future to support multiple endpoints (e.g. accept a call_type). Please keep that in mind while trying this feature.
-
-:::
-
-See a complete example with our [Llama Guard content moderation hook](https://github.com/BerriAI/litellm/blob/main/enterprise/enterprise_hooks/llm_guard.py)
-
-```python
-from litellm.integrations.custom_logger import CustomLogger
-import litellm
-from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
-from typing import Literal
-from fastapi import HTTPException
-
-# This file includes the custom callbacks for LiteLLM Proxy
-# Once defined, these can be passed in proxy_config.yaml
-class MyCustomHandler(CustomLogger): # https://docs.litellm.ai/docs/observability/custom_callback#callback-class
- # Class variables or attributes
- def __init__(self):
- pass
-
- #### ASYNC ####
-
- async def async_log_stream_event(self, kwargs, response_obj, start_time, end_time):
- pass
-
- async def async_log_pre_api_call(self, model, messages, kwargs):
- pass
-
- async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
- pass
-
- async def async_log_failure_event(self, kwargs, response_obj, start_time, end_time):
- pass
-
- #### CALL HOOKS - proxy only ####
-
- async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal["completion", "embeddings"]):
- data["model"] = "my-new-model"
- return data
-
- async def async_moderation_hook( ### 👈 KEY CHANGE ###
- self,
- data: dict,
- ):
- messages = data["messages"]
- print(messages)
- if messages[0]["content"] == "hello world":
- raise HTTPException(
- status_code=400, detail={"error": "Violated content safety policy"}
- )
-
-proxy_handler_instance = MyCustomHandler()
-```
-
-
-2. Add this file to your proxy config
-
-```yaml
-model_list:
- - model_name: gpt-3.5-turbo
- litellm_params:
- model: gpt-3.5-turbo
-
-litellm_settings:
- callbacks: custom_callbacks.proxy_handler_instance # sets litellm.callbacks = [proxy_handler_instance]
-```
-
-3. Start the server + test the request
-
-```shell
-$ litellm --config /path/to/config.yaml
-```
-```shell
-curl --location 'http://0.0.0.0:4000/chat/completions' \
- --data ' {
- "model": "gpt-3.5-turbo",
- "messages": [
- {
- "role": "user",
- "content": "Hello world"
- }
- ],
- }'
-```
-
-## Advanced - Enforce 'user' param
-
-Set `enforce_user_param` to true, to require all calls to the openai endpoints to have the 'user' param.
-
-[**See Code**](https://github.com/BerriAI/litellm/blob/4777921a31c4c70e4d87b927cb233b6a09cd8b51/litellm/proxy/auth/auth_checks.py#L72)
-
-```yaml
-general_settings:
- enforce_user_param: True
-```
-
-**Result**
-
-
-
-## Advanced - Return rejected message as response
-
-For chat completions and text completion calls, you can return a rejected message as a user response.
-
-Do this by returning a string. LiteLLM takes care of returning the response in the correct format depending on the endpoint and if it's streaming/non-streaming.
-
-For non-chat/text completion endpoints, this response is returned as a 400 status code exception.
-
-
-### 1. Create Custom Handler
-
-```python
-from litellm.integrations.custom_logger import CustomLogger
-import litellm
-from litellm.utils import get_formatted_prompt
-from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
-from typing import Literal, Union
-
-# This file includes the custom callbacks for LiteLLM Proxy
-# Once defined, these can be passed in proxy_config.yaml
-class MyCustomHandler(CustomLogger):
- def __init__(self):
- pass
-
- #### CALL HOOKS - proxy only ####
-
- async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal[
- "completion",
- "text_completion",
- "embeddings",
- "image_generation",
- "moderation",
- "audio_transcription",
-    ]) -> Union[dict, str, Exception]:
- formatted_prompt = get_formatted_prompt(data=data, call_type=call_type)
-
- if "Hello world" in formatted_prompt:
- return "This is an invalid response"
-
- return data
-
-proxy_handler_instance = MyCustomHandler()
-```
-
-### 2. Update config.yaml
-
-```yaml
-model_list:
- - model_name: gpt-3.5-turbo
- litellm_params:
- model: gpt-3.5-turbo
-
-litellm_settings:
- callbacks: custom_callbacks.proxy_handler_instance # sets litellm.callbacks = [proxy_handler_instance]
-```
-
-
-### 3. Test it!
-
-```shell
-$ litellm --config /path/to/config.yaml
-```
-```shell
-curl --location 'http://0.0.0.0:4000/chat/completions' \
- --data ' {
- "model": "gpt-3.5-turbo",
- "messages": [
- {
- "role": "user",
- "content": "Hello world"
- }
- ],
- }'
-```
-
-**Expected Response**
-
-```
-{
- "id": "chatcmpl-d00bbede-2d90-4618-bf7b-11a1c23cf360",
- "choices": [
- {
- "finish_reason": "stop",
- "index": 0,
- "message": {
- "content": "This is an invalid response.", # 👈 REJECTED RESPONSE
- "role": "assistant"
- }
- }
- ],
- "created": 1716234198,
- "model": null,
- "object": "chat.completion",
- "system_fingerprint": null,
- "usage": {}
-}
-```
\ No newline at end of file
diff --git a/docs/my-website/docs/proxy/cli.md b/docs/my-website/docs/proxy/cli.md
deleted file mode 100644
index d0c477a4e..000000000
--- a/docs/my-website/docs/proxy/cli.md
+++ /dev/null
@@ -1,186 +0,0 @@
-# CLI Arguments
-CLI arguments: --host, --port, --num_workers
-
-## --host
- - **Default:** `'0.0.0.0'`
- - The host for the server to listen on.
- - **Usage:**
- ```shell
- litellm --host 127.0.0.1
- ```
- - **Usage - set Environment Variable:** `HOST`
- ```shell
- export HOST=127.0.0.1
- litellm
- ```
-
-## --port
- - **Default:** `4000`
- - The port to bind the server to.
- - **Usage:**
- ```shell
- litellm --port 8080
- ```
- - **Usage - set Environment Variable:** `PORT`
- ```shell
- export PORT=8080
- litellm
- ```
-
-## --num_workers
- - **Default:** `1`
- - The number of uvicorn workers to spin up.
- - **Usage:**
- ```shell
- litellm --num_workers 4
- ```
- - **Usage - set Environment Variable:** `NUM_WORKERS`
- ```shell
- export NUM_WORKERS=4
- litellm
- ```
-
-## --api_base
- - **Default:** `None`
- - The API base for the model litellm should call.
- - **Usage:**
- ```shell
- litellm --model huggingface/tinyllama --api_base https://k58ory32yinf1ly0.us-east-1.aws.endpoints.huggingface.cloud
- ```
-
-## --api_version
- - **Default:** `None`
- - For Azure services, specify the API version.
- - **Usage:**
- ```shell
-   litellm --model azure/gpt-deployment --api_version 2023-08-01 --api_base https://
- ```
-
-## --model or -m
- - **Default:** `None`
- - The model name to pass to Litellm.
- - **Usage:**
- ```shell
- litellm --model gpt-3.5-turbo
- ```
-
-## --test
- - **Type:** `bool` (Flag)
- - Make a test request to the proxy chat completions endpoint.
- - **Usage:**
- ```shell
- litellm --test
- ```
-
-## --health
- - **Type:** `bool` (Flag)
- - Runs a health check on all models in config.yaml
- - **Usage:**
- ```shell
- litellm --health
- ```
-
-## --alias
- - **Default:** `None`
- - An alias for the model, for user-friendly reference.
- - **Usage:**
- ```shell
- litellm --alias my-gpt-model
- ```
-
-## --debug
- - **Default:** `False`
- - **Type:** `bool` (Flag)
- - Enable debugging mode for the input.
- - **Usage:**
- ```shell
- litellm --debug
- ```
- - **Usage - set Environment Variable:** `DEBUG`
- ```shell
- export DEBUG=True
- litellm
- ```
-
-## --detailed_debug
- - **Default:** `False`
- - **Type:** `bool` (Flag)
- - Enable detailed debugging mode for the input.
- - **Usage:**
- ```shell
- litellm --detailed_debug
- ```
- - **Usage - set Environment Variable:** `DETAILED_DEBUG`
- ```shell
- export DETAILED_DEBUG=True
- litellm
- ```
-
-## --temperature
- - **Default:** `None`
- - **Type:** `float`
- - Set the temperature for the model.
- - **Usage:**
- ```shell
- litellm --temperature 0.7
- ```
-
-## --max_tokens
- - **Default:** `None`
- - **Type:** `int`
- - Set the maximum number of tokens for the model output.
- - **Usage:**
- ```shell
- litellm --max_tokens 50
- ```
-
-## --request_timeout
- - **Default:** `6000`
- - **Type:** `int`
- - Set the timeout in seconds for completion calls.
- - **Usage:**
- ```shell
- litellm --request_timeout 300
- ```
-
-## --drop_params
- - **Type:** `bool` (Flag)
- - Drop any unmapped params.
- - **Usage:**
- ```shell
- litellm --drop_params
- ```
-
-## --add_function_to_prompt
- - **Type:** `bool` (Flag)
- - If a function is passed but unsupported, pass it as part of the prompt.
- - **Usage:**
- ```shell
- litellm --add_function_to_prompt
- ```
-
-## --config
- - Configure Litellm by providing a configuration file path.
- - **Usage:**
- ```shell
- litellm --config path/to/config.yaml
- ```
-
-## --telemetry
- - **Default:** `True`
- - **Type:** `bool`
- - Help track usage of this feature.
- - **Usage:**
- ```shell
- litellm --telemetry False
- ```
-
-
-## --log_config
- - **Default:** `None`
- - **Type:** `str`
- - Specify a log configuration file for uvicorn.
- - **Usage:**
- ```shell
- litellm --log_config path/to/log_config.conf
- ```
diff --git a/docs/my-website/docs/proxy/config_management.md b/docs/my-website/docs/proxy/config_management.md
deleted file mode 100644
index 4f7c5775b..000000000
--- a/docs/my-website/docs/proxy/config_management.md
+++ /dev/null
@@ -1,59 +0,0 @@
-# File Management
-
-## `include` external YAML files in a config.yaml
-
-You can use `include` to include external YAML files in a config.yaml.
-
-**Quick Start Usage:**
-
-To include a config file, use `include` with either a single file or a list of files.
-
-Contents of `parent_config.yaml`:
-```yaml
-include:
- - model_config.yaml # 👈 Key change, will include the contents of model_config.yaml
-
-litellm_settings:
- callbacks: ["prometheus"]
-```
-
-
-Contents of `model_config.yaml`:
-```yaml
-model_list:
- - model_name: gpt-4o
- litellm_params:
- model: openai/gpt-4o
- api_base: https://exampleopenaiendpoint-production.up.railway.app/
- - model_name: fake-anthropic-endpoint
- litellm_params:
- model: anthropic/fake
- api_base: https://exampleanthropicendpoint-production.up.railway.app/
-
-```
-
-Start proxy server
-
-This will start the proxy server with config `parent_config.yaml`. Since the `include` directive is used, the server will also include the contents of `model_config.yaml`.
-```
-litellm --config parent_config.yaml --detailed_debug
-```
-
-
-
-
-
-## Examples using `include`
-
-Include a single file:
-```yaml
-include:
- - model_config.yaml
-```
-
-Include multiple files:
-```yaml
-include:
- - model_config.yaml
- - another_config.yaml
-```
\ No newline at end of file
diff --git a/docs/my-website/docs/proxy/config_settings.md b/docs/my-website/docs/proxy/config_settings.md
deleted file mode 100644
index c762a0716..000000000
--- a/docs/my-website/docs/proxy/config_settings.md
+++ /dev/null
@@ -1,507 +0,0 @@
-# All settings
-
-
-```yaml
-environment_variables: {}
-
-model_list:
- - model_name: string
- litellm_params: {}
- model_info:
- id: string
- mode: embedding
- input_cost_per_token: 0
- output_cost_per_token: 0
- max_tokens: 2048
- base_model: gpt-4-1106-preview
- additionalProp1: {}
-
-litellm_settings:
- # Logging/Callback settings
- success_callback: ["langfuse"] # list of success callbacks
- failure_callback: ["sentry"] # list of failure callbacks
- callbacks: ["otel"] # list of callbacks - runs on success and failure
- service_callbacks: ["datadog", "prometheus"] # logs redis, postgres failures on datadog, prometheus
-  turn_off_message_logging: boolean # prevent the messages and responses from being logged to your callbacks, but request metadata will still be logged.
- redact_user_api_key_info: boolean # Redact information about the user api key (hashed token, user_id, team id, etc.), from logs. Currently supported for Langfuse, OpenTelemetry, Logfire, ArizeAI logging.
- langfuse_default_tags: ["cache_hit", "cache_key", "proxy_base_url", "user_api_key_alias", "user_api_key_user_id", "user_api_key_user_email", "user_api_key_team_alias", "semantic-similarity", "proxy_base_url"] # default tags for Langfuse Logging
-
- # Networking settings
-  request_timeout: 10 # (int) llm request timeout in seconds. Raise Timeout error if call takes longer than 10s. Sets litellm.request_timeout
- force_ipv4: boolean # If true, litellm will force ipv4 for all LLM requests. Some users have seen httpx ConnectionError when using ipv6 + Anthropic API
-
- set_verbose: boolean # sets litellm.set_verbose=True to view verbose debug logs. DO NOT LEAVE THIS ON IN PRODUCTION
- json_logs: boolean # if true, logs will be in json format
-
- # Fallbacks, reliability
- default_fallbacks: ["claude-opus"] # set default_fallbacks, in case a specific model group is misconfigured / bad.
- content_policy_fallbacks: [{"gpt-3.5-turbo-small": ["claude-opus"]}] # fallbacks for ContentPolicyErrors
- context_window_fallbacks: [{"gpt-3.5-turbo-small": ["gpt-3.5-turbo-large", "claude-opus"]}] # fallbacks for ContextWindowExceededErrors
-
-
-
- # Caching settings
- cache: true
- cache_params: # set cache params for redis
- type: redis # type of cache to initialize
-
- # Optional - Redis Settings
- host: "localhost" # The host address for the Redis cache. Required if type is "redis".
- port: 6379 # The port number for the Redis cache. Required if type is "redis".
- password: "your_password" # The password for the Redis cache. Required if type is "redis".
- namespace: "litellm.caching.caching" # namespace for redis cache
-
- # Optional - Redis Cluster Settings
- redis_startup_nodes: [{"host": "127.0.0.1", "port": "7001"}]
-
- # Optional - Redis Sentinel Settings
- service_name: "mymaster"
- sentinel_nodes: [["localhost", 26379]]
-
- # Optional - Qdrant Semantic Cache Settings
- qdrant_semantic_cache_embedding_model: openai-embedding # the model should be defined on the model_list
- qdrant_collection_name: test_collection
- qdrant_quantization_config: binary
- similarity_threshold: 0.8 # similarity threshold for semantic cache
-
- # Optional - S3 Cache Settings
- s3_bucket_name: cache-bucket-litellm # AWS Bucket Name for S3
- s3_region_name: us-west-2 # AWS Region Name for S3
-    s3_aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID # use os.environ/ to pass environment variables. This is the AWS Access Key ID for S3
- s3_aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY # AWS Secret Access Key for S3
- s3_endpoint_url: https://s3.amazonaws.com # [OPTIONAL] S3 endpoint URL, if you want to use Backblaze/cloudflare s3 bucket
-
- # Common Cache settings
- # Optional - Supported call types for caching
- supported_call_types: ["acompletion", "atext_completion", "aembedding", "atranscription"]
- # /chat/completions, /completions, /embeddings, /audio/transcriptions
- mode: default_off # if default_off, you need to opt in to caching on a per call basis
- ttl: 600 # ttl for caching
-
-
-callback_settings:
- otel:
- message_logging: boolean # OTEL logging callback specific settings
-
-general_settings:
- completion_model: string
- disable_spend_logs: boolean # turn off writing each transaction to the db
- disable_master_key_return: boolean # turn off returning master key on UI (checked on '/user/info' endpoint)
- disable_retry_on_max_parallel_request_limit_error: boolean # turn off retries when max parallel request limit is reached
- disable_reset_budget: boolean # turn off reset budget scheduled task
- disable_adding_master_key_hash_to_db: boolean # turn off storing master key hash in db, for spend tracking
- enable_jwt_auth: boolean # allow proxy admin to auth in via jwt tokens with 'litellm_proxy_admin' in claims
- enforce_user_param: boolean # requires all openai endpoint requests to have a 'user' param
- allowed_routes: ["route1", "route2"] # list of allowed proxy API routes - a user can access. (currently JWT-Auth only)
- key_management_system: google_kms # either google_kms or azure_kms
- master_key: string
-
- # Database Settings
- database_url: string
- database_connection_pool_limit: 0 # default 100
- database_connection_timeout: 0 # default 60s
- allow_requests_on_db_unavailable: boolean # if true, will allow requests that can not connect to the DB to verify Virtual Key to still work
-
- custom_auth: string
- max_parallel_requests: 0 # the max parallel requests allowed per deployment
- global_max_parallel_requests: 0 # the max parallel requests allowed on the proxy all up
- infer_model_from_keys: true
- background_health_checks: true
- health_check_interval: 300
- alerting: ["slack", "email"]
- alerting_threshold: 0
- use_client_credentials_pass_through_routes: boolean # use client credentials for all pass through routes like "/vertex-ai", /bedrock/. When this is True Virtual Key auth will not be applied on these endpoints
-```
-
-### litellm_settings - Reference
-
-| Name | Type | Description |
-|------|------|-------------|
-| success_callback | array of strings | List of success callbacks. [Doc Proxy logging callbacks](logging), [Doc Metrics](prometheus) |
-| failure_callback | array of strings | List of failure callbacks [Doc Proxy logging callbacks](logging), [Doc Metrics](prometheus) |
-| callbacks | array of strings | List of callbacks - runs on success and failure [Doc Proxy logging callbacks](logging), [Doc Metrics](prometheus) |
-| service_callbacks | array of strings | System health monitoring - Logs redis, postgres failures on specified services (e.g. datadog, prometheus) [Doc Metrics](prometheus) |
-| turn_off_message_logging | boolean | If true, prevents messages and responses from being logged to callbacks, but request metadata will still be logged [Proxy Logging](logging) |
-| modify_params | boolean | If true, allows modifying the parameters of the request before it is sent to the LLM provider |
-| enable_preview_features | boolean | If true, enables preview features - e.g. Azure O1 Models with streaming support.|
-| redact_user_api_key_info | boolean | If true, redacts information about the user api key from logs [Proxy Logging](logging#redacting-userapikeyinfo) |
-| langfuse_default_tags | array of strings | Default tags for Langfuse Logging. Use this if you want to control which LiteLLM-specific fields are logged as tags by the LiteLLM proxy. By default LiteLLM Proxy logs no LiteLLM-specific fields as tags. [Further docs](./logging#litellm-specific-tags-on-langfuse---cache_hit-cache_key) |
-| set_verbose | boolean | If true, sets litellm.set_verbose=True to view verbose debug logs. DO NOT LEAVE THIS ON IN PRODUCTION |
-| json_logs | boolean | If true, logs will be in json format. If you need to store the logs as JSON, just set the `litellm.json_logs = True`. We currently just log the raw POST request from litellm as a JSON [Further docs](./debugging) |
-| default_fallbacks | array of strings | List of fallback models to use if a specific model group is misconfigured / bad. [Further docs](./reliability#default-fallbacks) |
-| request_timeout | integer | The timeout for requests in seconds. If not set, the default value is `6000 seconds`. [For reference OpenAI Python SDK defaults to `600 seconds`.](https://github.com/openai/openai-python/blob/main/src/openai/_constants.py) |
-| force_ipv4 | boolean | If true, litellm will force ipv4 for all LLM requests. Some users have seen httpx ConnectionError when using ipv6 + Anthropic API |
-| content_policy_fallbacks | array of objects | Fallbacks to use when a ContentPolicyViolationError is encountered. [Further docs](./reliability#content-policy-fallbacks) |
-| context_window_fallbacks | array of objects | Fallbacks to use when a ContextWindowExceededError is encountered. [Further docs](./reliability#context-window-fallbacks) |
-| cache | boolean | If true, enables caching. [Further docs](./caching) |
-| cache_params | object | Parameters for the cache. [Further docs](./caching) |
-| cache_params.type | string | The type of cache to initialize. Can be one of ["local", "redis", "redis-semantic", "s3", "disk", "qdrant-semantic"]. Defaults to "redis". [Further docs](./caching) |
-| cache_params.host | string | The host address for the Redis cache. Required if type is "redis". |
-| cache_params.port | integer | The port number for the Redis cache. Required if type is "redis". |
-| cache_params.password | string | The password for the Redis cache. Required if type is "redis". |
-| cache_params.namespace | string | The namespace for the Redis cache. |
-| cache_params.redis_startup_nodes | array of objects | Redis Cluster Settings. [Further docs](./caching) |
-| cache_params.service_name | string | Redis Sentinel Settings. [Further docs](./caching) |
-| cache_params.sentinel_nodes | array of arrays | Redis Sentinel Settings. [Further docs](./caching) |
-| cache_params.ttl | integer | The time (in seconds) to store entries in cache. |
-| cache_params.qdrant_semantic_cache_embedding_model | string | The embedding model to use for qdrant semantic cache. |
-| cache_params.qdrant_collection_name | string | The name of the collection to use for qdrant semantic cache. |
-| cache_params.qdrant_quantization_config | string | The quantization configuration for the qdrant semantic cache. |
-| cache_params.similarity_threshold | float | The similarity threshold for the semantic cache. |
-| cache_params.s3_bucket_name | string | The name of the S3 bucket to use for the semantic cache. |
-| cache_params.s3_region_name | string | The region name for the S3 bucket. |
-| cache_params.s3_aws_access_key_id | string | The AWS access key ID for the S3 bucket. |
-| cache_params.s3_aws_secret_access_key | string | The AWS secret access key for the S3 bucket. |
-| cache_params.s3_endpoint_url | string | Optional - The endpoint URL for the S3 bucket. |
-| cache_params.supported_call_types | array of strings | The types of calls to cache. [Further docs](./caching) |
-| cache_params.mode | string | The mode of the cache. [Further docs](./caching) |
-| disable_end_user_cost_tracking | boolean | If true, turns off end user cost tracking on prometheus metrics + litellm spend logs table on proxy. |
-| key_generation_settings | object | Restricts who can generate keys. [Further docs](./virtual_keys.md#restricting-key-generation) |
-
-### general_settings - Reference
-
-| Name | Type | Description |
-|------|------|-------------|
-| completion_model | string | The default model to use for completions when `model` is not specified in the request |
-| disable_spend_logs | boolean | If true, turns off writing each transaction to the database |
-| disable_master_key_return | boolean | If true, turns off returning master key on UI. (checked on '/user/info' endpoint) |
-| disable_retry_on_max_parallel_request_limit_error | boolean | If true, turns off retries when max parallel request limit is reached |
-| disable_reset_budget | boolean | If true, turns off reset budget scheduled task |
-| disable_adding_master_key_hash_to_db | boolean | If true, turns off storing master key hash in db |
-| enable_jwt_auth | boolean | allow proxy admin to auth in via jwt tokens with 'litellm_proxy_admin' in claims. [Doc on JWT Tokens](token_auth) |
-| enforce_user_param | boolean | If true, requires all OpenAI endpoint requests to have a 'user' param. [Doc on call hooks](call_hooks)|
-| allowed_routes | array of strings | List of allowed proxy API routes a user can access [Doc on controlling allowed routes](enterprise#control-available-public-private-routes)|
-| key_management_system | string | Specifies the key management system. [Doc Secret Managers](../secret) |
-| master_key | string | The master key for the proxy [Set up Virtual Keys](virtual_keys) |
-| database_url | string | The URL for the database connection [Set up Virtual Keys](virtual_keys) |
-| database_connection_pool_limit | integer | The limit for database connection pool [Setting DB Connection Pool limit](#configure-db-pool-limits--connection-timeouts) |
-| database_connection_timeout | integer | The timeout for database connections in seconds [Setting DB Connection Pool limit, timeout](#configure-db-pool-limits--connection-timeouts) |
-| allow_requests_on_db_unavailable | boolean | If true, allows requests to succeed even if DB is unreachable. **Only use this if running LiteLLM in your VPC** This will allow requests to work even when LiteLLM cannot connect to the DB to verify a Virtual Key |
-| custom_auth | string | Write your own custom authentication logic [Doc Custom Auth](virtual_keys#custom-auth) |
-| max_parallel_requests | integer | The max parallel requests allowed per deployment |
-| global_max_parallel_requests | integer | The max parallel requests allowed on the proxy overall |
-| infer_model_from_keys | boolean | If true, infers the model from the provided keys |
-| background_health_checks | boolean | If true, enables background health checks. [Doc on health checks](health) |
-| health_check_interval | integer | The interval for health checks in seconds [Doc on health checks](health) |
-| alerting | array of strings | List of alerting methods [Doc on Slack Alerting](alerting) |
-| alerting_threshold | integer | The threshold for triggering alerts [Doc on Slack Alerting](alerting) |
-| use_client_credentials_pass_through_routes | boolean | If true, uses client credentials for all pass-through routes. [Doc on pass through routes](pass_through) |
-| health_check_details | boolean | If false, hides health check details (e.g. remaining rate limit). [Doc on health checks](health) |
-| public_routes | List[str] | (Enterprise Feature) Control list of public routes |
-| alert_types | List[str] | Control list of alert types to send to slack [Doc on alert types](./alerting.md) |
-| enforced_params | List[str] | (Enterprise Feature) List of params that must be included in all requests to the proxy |
-| enable_oauth2_auth | boolean | (Enterprise Feature) If true, enables oauth2.0 authentication |
-| use_x_forwarded_for | str | If true, uses the X-Forwarded-For header to get the client IP address |
-| service_account_settings | List[Dict[str, Any]] | Set `service_account_settings` if you want to create settings that only apply to service account keys [Doc on service accounts](./service_accounts.md) |
-| image_generation_model | str | The default model to use for image generation - ignores model set in request |
-| store_model_in_db | boolean | If true, allows `/model/new` endpoint to store model information in db. Endpoint disabled by default. [Doc on `/model/new` endpoint](./model_management.md#create-a-new-model) |
-| max_request_size_mb | int | The maximum size for requests in MB. Requests above this size will be rejected. |
-| max_response_size_mb | int | The maximum size for responses in MB. LLM Responses above this size will not be sent. |
-| proxy_budget_rescheduler_min_time | int | The minimum time (in seconds) to wait before checking db for budget resets. **Default is 597 seconds** |
-| proxy_budget_rescheduler_max_time | int | The maximum time (in seconds) to wait before checking db for budget resets. **Default is 605 seconds** |
-| proxy_batch_write_at | int | Time (in seconds) to wait before batch writing spend logs to the db. **Default is 10 seconds** |
-| alerting_args | dict | Args for Slack Alerting [Doc on Slack Alerting](./alerting.md) |
-| custom_key_generate | str | Custom function for key generation [Doc on custom key generation](./virtual_keys.md#custom--key-generate) |
-| allowed_ips | List[str] | List of IPs allowed to access the proxy. If not set, all IPs are allowed. |
-| embedding_model | str | The default model to use for embeddings - ignores model set in request |
-| default_team_disabled | boolean | If true, users cannot create 'personal' keys (keys with no team_id). |
-| alert_to_webhook_url | Dict[str] | [Specify a webhook url for each alert type.](./alerting.md#set-specific-slack-channels-per-alert-type) |
-| key_management_settings | List[Dict[str, Any]] | Settings for key management system (e.g. AWS KMS, Azure Key Vault) [Doc on key management](../secret.md) |
-| allow_user_auth | boolean | (Deprecated) old approach for user authentication. |
-| user_api_key_cache_ttl | int | The time (in seconds) to cache user api keys in memory. |
-| disable_prisma_schema_update | boolean | If true, turns off automatic schema updates to DB |
-| litellm_key_header_name | str | If set, allows passing LiteLLM keys as a custom header. [Doc on custom headers](./virtual_keys.md#custom-headers) |
-| moderation_model | str | The default model to use for moderation. |
-| custom_sso | str | Path to a python file that implements custom SSO logic. [Doc on custom SSO](./custom_sso.md) |
-| allow_client_side_credentials | boolean | If true, allows passing client side credentials to the proxy. (Useful when testing finetuning models) [Doc on client side credentials](./virtual_keys.md#client-side-credentials) |
-| admin_only_routes | List[str] | (Enterprise Feature) List of routes that are only accessible to admin users. [Doc on admin only routes](./enterprise#control-available-public-private-routes) |
-| use_azure_key_vault | boolean | If true, load keys from azure key vault |
-| use_google_kms | boolean | If true, load keys from google kms |
-| spend_report_frequency | str | Specify how often you want a Spend Report to be sent (e.g. "1d", "2d", "30d") [More on this](./alerting.md#spend-report-frequency) |
-| ui_access_mode | Literal["admin_only"] | If set, restricts access to the UI to admin users only. [Docs](./ui.md#restrict-ui-access) |
-| litellm_jwtauth | Dict[str, Any] | Settings for JWT authentication. [Docs](./token_auth.md) |
-| litellm_license | str | The license key for the proxy. [Docs](../enterprise.md#how-does-deployment-with-enterprise-license-work) |
-| oauth2_config_mappings | Dict[str, str] | Define the OAuth2 config mappings |
-| pass_through_endpoints | List[Dict[str, Any]] | Define the pass through endpoints. [Docs](./pass_through) |
-| enable_oauth2_proxy_auth | boolean | (Enterprise Feature) If true, enables oauth2.0 authentication |
-| forward_openai_org_id | boolean | If true, forwards the OpenAI Organization ID to the backend LLM call (if it's OpenAI). |
-| forward_client_headers_to_llm_api | boolean | If true, forwards the client headers (any `x-` headers) to the backend LLM call |
-
-### router_settings - Reference
-
-:::info
-
-Most values can also be set via `litellm_settings`. If you see overlapping values, settings on `router_settings` will override those on `litellm_settings`.
-:::
-
-```yaml
-router_settings:
- routing_strategy: usage-based-routing-v2 # Literal["simple-shuffle", "least-busy", "usage-based-routing","latency-based-routing"], default="simple-shuffle"
- redis_host: # string
- redis_password: # string
- redis_port: # string
- enable_pre_call_check: true # bool - Before call is made check if a call is within model context window
-  allowed_fails: 3 # cooldown model if it fails more than 3 calls in a minute.
- cooldown_time: 30 # (in seconds) how long to cooldown model if fails/min > allowed_fails
- disable_cooldowns: True # bool - Disable cooldowns for all models
- enable_tag_filtering: True # bool - Use tag based routing for requests
- retry_policy: { # Dict[str, int]: retry policy for different types of exceptions
- "AuthenticationErrorRetries": 3,
- "TimeoutErrorRetries": 3,
- "RateLimitErrorRetries": 3,
- "ContentPolicyViolationErrorRetries": 4,
- "InternalServerErrorRetries": 4
- }
- allowed_fails_policy: {
- "BadRequestErrorAllowedFails": 1000, # Allow 1000 BadRequestErrors before cooling down a deployment
- "AuthenticationErrorAllowedFails": 10, # int
- "TimeoutErrorAllowedFails": 12, # int
- "RateLimitErrorAllowedFails": 10000, # int
- "ContentPolicyViolationErrorAllowedFails": 15, # int
- "InternalServerErrorAllowedFails": 20, # int
- }
-  content_policy_fallbacks: [{"claude-2": ["my-fallback-model"]}] # List[Dict[str, List[str]]]: Fallback model for content policy violations
-  fallbacks: [{"claude-2": ["my-fallback-model"]}] # List[Dict[str, List[str]]]: Fallback model for all errors
-```
-
-| Name | Type | Description |
-|------|------|-------------|
-| routing_strategy | string | The strategy used for routing requests. Options: "simple-shuffle", "least-busy", "usage-based-routing", "latency-based-routing". Default is "simple-shuffle". [More information here](../routing) |
-| redis_host | string | The host address for the Redis server. **Only set this if you have multiple instances of LiteLLM Proxy and want current tpm/rpm tracking to be shared across them** |
-| redis_password | string | The password for the Redis server. **Only set this if you have multiple instances of LiteLLM Proxy and want current tpm/rpm tracking to be shared across them** |
-| redis_port | string | The port number for the Redis server. **Only set this if you have multiple instances of LiteLLM Proxy and want current tpm/rpm tracking to be shared across them**|
-| enable_pre_call_check | boolean | If true, checks if a call is within the model's context window before making the call. [More information here](reliability) |
-| content_policy_fallbacks | array of objects | Specifies fallback models for content policy violations. [More information here](reliability) |
-| fallbacks | array of objects | Specifies fallback models for all types of errors. [More information here](reliability) |
-| enable_tag_filtering | boolean | If true, uses tag based routing for requests [Tag Based Routing](tag_routing) |
-| cooldown_time | integer | The duration (in seconds) to cooldown a model if it exceeds the allowed failures. |
-| disable_cooldowns | boolean | If true, disables cooldowns for all models. [More information here](reliability) |
-| retry_policy | object | Specifies the number of retries for different types of exceptions. [More information here](reliability) |
-| allowed_fails | integer | The number of failures allowed before cooling down a model. [More information here](reliability) |
-| allowed_fails_policy | object | Specifies the number of allowed failures for different error types before cooling down a deployment. [More information here](reliability) |
-| default_max_parallel_requests | Optional[int] | The default maximum number of parallel requests for a deployment. |
-| default_priority | (Optional[int]) | The default priority for a request. Only for '.scheduler_acompletion()'. Default is None. |
-| polling_interval | (Optional[float]) | frequency of polling queue. Only for '.scheduler_acompletion()'. Default is 3ms. |
-| max_fallbacks | Optional[int] | The maximum number of fallbacks to try before exiting the call. Defaults to 5. |
-| default_litellm_params | Optional[dict] | The default litellm parameters to add to all requests (e.g. `temperature`, `max_tokens`). |
-| timeout | Optional[float] | The default timeout for a request. |
-| debug_level | Literal["DEBUG", "INFO"] | The debug level for the logging library in the router. Defaults to "INFO". |
-| client_ttl | int | Time-to-live for cached clients in seconds. Defaults to 3600. |
-| cache_kwargs | dict | Additional keyword arguments for the cache initialization. |
-| routing_strategy_args | dict | Additional keyword arguments for the routing strategy - e.g. lowest latency routing default ttl |
-| model_group_alias | dict | Model group alias mapping. E.g. `{"claude-3-haiku": "claude-3-haiku-20240229"}` |
-| num_retries | int | Number of retries for a request. Defaults to 3. |
-| default_fallbacks | Optional[List[str]] | Fallbacks to try if no model group-specific fallbacks are defined. |
-| caching_groups | Optional[List[tuple]] | List of model groups for caching across model groups. Defaults to None. - e.g. caching_groups=[("openai-gpt-3.5-turbo", "azure-gpt-3.5-turbo")]|
-| alerting_config | AlertingConfig | [SDK-only arg] Slack alerting configuration. Defaults to None. [Further Docs](../routing.md#alerting-) |
-| assistants_config | AssistantsConfig | Set on proxy via `assistant_settings`. [Further docs](../assistants.md) |
-| set_verbose | boolean | [DEPRECATED PARAM - see debug docs](./debugging.md) If true, sets the logging level to verbose. |
-| retry_after | int | Time to wait before retrying a request in seconds. Defaults to 0. If `x-retry-after` is received from LLM API, this value is overridden. |
-| provider_budget_config | ProviderBudgetConfig | Provider budget configuration. Use this to set llm_provider budget limits, e.g. $100/day for OpenAI and $100/day for Azure. Defaults to None. [Further Docs](./provider_budget_routing.md) |
-| model_group_retry_policy | Dict[str, RetryPolicy] | [SDK-only arg] Set retry policy for model groups. |
-| context_window_fallbacks | List[Dict[str, List[str]]] | Fallback models for context window violations. |
-| redis_url | str | URL for Redis server. **Known performance issue with Redis URL.** |
-| cache_responses | boolean | If true, caches LLM responses. Requires a cache to be set up under `router_settings`. Defaults to False. |
-| router_general_settings | RouterGeneralSettings | [SDK-Only] Router general settings - contains optimizations like 'async_only_mode'. [Docs](../routing.md#router-general-settings) |
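-
-For a concrete feel of how these fit together, here is a minimal sketch of a `router_settings` block in the proxy config.yaml - the model group names, Redis values, and thresholds below are illustrative placeholders, not recommendations:
-
-```yaml
-router_settings:
-  routing_strategy: usage-based-routing   # default is "simple-shuffle"
-  num_retries: 3
-  timeout: 30                             # seconds
-  cooldown_time: 30                       # seconds to cool down a failing deployment
-  allowed_fails: 3                        # failures allowed before a deployment is cooled down
-  enable_pre_call_checks: true            # skip deployments whose context window is too small for the request
-  fallbacks: [{"gpt-3.5-turbo": ["gpt-4"]}]   # placeholder model group names
-  redis_host: your-redis-host             # redis_* only needed when multiple proxy instances share tpm/rpm tracking
-  redis_port: 6379
-  redis_password: your-redis-password
-```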
-
-### Environment Variables - Reference
-
-| Name | Description |
-|------|-------------|
-| ACTIONS_ID_TOKEN_REQUEST_TOKEN | Token for requesting an ID token in GitHub Actions
-| ACTIONS_ID_TOKEN_REQUEST_URL | URL for requesting ID token in GitHub Actions
-| AISPEND_ACCOUNT_ID | Account ID for AI Spend
-| AISPEND_API_KEY | API Key for AI Spend
-| ALLOWED_EMAIL_DOMAINS | List of email domains allowed for access
-| ARIZE_API_KEY | API key for Arize platform integration
-| ARIZE_SPACE_KEY | Space key for Arize platform
-| ARGILLA_BATCH_SIZE | Batch size for Argilla logging
-| ARGILLA_API_KEY | API key for Argilla platform
-| ARGILLA_SAMPLING_RATE | Sampling rate for Argilla logging
-| ARGILLA_DATASET_NAME | Dataset name for Argilla logging
-| ARGILLA_BASE_URL | Base URL for Argilla service
-| ATHINA_API_KEY | API key for Athina service
-| AUTH_STRATEGY | Strategy used for authentication (e.g., OAuth, API key)
-| AWS_ACCESS_KEY_ID | Access Key ID for AWS services
-| AWS_PROFILE_NAME | AWS CLI profile name to be used
-| AWS_REGION_NAME | Default AWS region for service interactions
-| AWS_ROLE_NAME | Role name for AWS IAM usage
-| AWS_SECRET_ACCESS_KEY | Secret Access Key for AWS services
-| AWS_SESSION_NAME | Name for AWS session
-| AWS_WEB_IDENTITY_TOKEN | Web identity token for AWS
-| AZURE_API_VERSION | Version of the Azure API being used
-| AZURE_AUTHORITY_HOST | Azure authority host URL
-| AZURE_CLIENT_ID | Client ID for Azure services
-| AZURE_CLIENT_SECRET | Client secret for Azure services
-| AZURE_FEDERATED_TOKEN_FILE | File path to Azure federated token
-| AZURE_KEY_VAULT_URI | URI for Azure Key Vault
-| AZURE_TENANT_ID | Tenant ID for Azure Active Directory
-| BERRISPEND_ACCOUNT_ID | Account ID for BerriSpend service
-| BRAINTRUST_API_KEY | API key for Braintrust integration
-| CIRCLE_OIDC_TOKEN | OpenID Connect token for CircleCI
-| CIRCLE_OIDC_TOKEN_V2 | Version 2 of the OpenID Connect token for CircleCI
-| CONFIG_FILE_PATH | File path for configuration file
-| CUSTOM_TIKTOKEN_CACHE_DIR | Custom directory for Tiktoken cache
-| DATABASE_HOST | Hostname for the database server
-| DATABASE_NAME | Name of the database
-| DATABASE_PASSWORD | Password for the database user
-| DATABASE_PORT | Port number for database connection
-| DATABASE_SCHEMA | Schema name used in the database
-| DATABASE_URL | Connection URL for the database
-| DATABASE_USER | Username for database connection
-| DATABASE_USERNAME | Alias for database user
-| DATABRICKS_API_BASE | Base URL for Databricks API
-| DD_BASE_URL | Base URL for Datadog integration
-| DATADOG_BASE_URL | (Alternative to DD_BASE_URL) Base URL for Datadog integration
-| _DATADOG_BASE_URL | (Alternative to DD_BASE_URL) Base URL for Datadog integration
-| DD_API_KEY | API key for Datadog integration
-| DD_SITE | Site URL for Datadog (e.g., datadoghq.com)
-| DD_SOURCE | Source identifier for Datadog logs
-| DD_ENV | Environment identifier for Datadog logs. Only supported for `datadog_llm_observability` callback
-| DD_SERVICE | Service identifier for Datadog logs. Defaults to "litellm-server"
-| DD_VERSION | Version identifier for Datadog logs. Defaults to "unknown"
-| DEBUG_OTEL | Enable debug mode for OpenTelemetry
-| DIRECT_URL | Direct URL for service endpoint
-| DISABLE_ADMIN_UI | Toggle to disable the admin UI
-| DISABLE_SCHEMA_UPDATE | Toggle to disable schema updates
-| DOCS_DESCRIPTION | Description text for documentation pages
-| DOCS_FILTERED | Flag indicating filtered documentation
-| DOCS_TITLE | Title of the documentation pages
-| DOCS_URL | The path to the Swagger API documentation. **By default this is "/"**
-| EMAIL_SUPPORT_CONTACT | Support contact email address
-| GCS_BUCKET_NAME | Name of the Google Cloud Storage bucket
-| GCS_PATH_SERVICE_ACCOUNT | Path to the Google Cloud service account JSON file
-| GCS_FLUSH_INTERVAL | Flush interval for GCS logging (in seconds). Specify how often you want a log to be sent to GCS. **Default is 20 seconds**
-| GCS_BATCH_SIZE | Batch size for GCS logging. Specify after how many logs you want to flush to GCS. If `BATCH_SIZE` is set to 10, logs are flushed every 10 logs. **Default is 2048**
-| GENERIC_AUTHORIZATION_ENDPOINT | Authorization endpoint for generic OAuth providers
-| GENERIC_CLIENT_ID | Client ID for generic OAuth providers
-| GENERIC_CLIENT_SECRET | Client secret for generic OAuth providers
-| GENERIC_CLIENT_STATE | State parameter for generic client authentication
-| GENERIC_INCLUDE_CLIENT_ID | Include client ID in requests for OAuth
-| GENERIC_SCOPE | Scope settings for generic OAuth providers
-| GENERIC_TOKEN_ENDPOINT | Token endpoint for generic OAuth providers
-| GENERIC_USER_DISPLAY_NAME_ATTRIBUTE | Attribute for user's display name in generic auth
-| GENERIC_USER_EMAIL_ATTRIBUTE | Attribute for user's email in generic auth
-| GENERIC_USER_FIRST_NAME_ATTRIBUTE | Attribute for user's first name in generic auth
-| GENERIC_USER_ID_ATTRIBUTE | Attribute for user ID in generic auth
-| GENERIC_USER_LAST_NAME_ATTRIBUTE | Attribute for user's last name in generic auth
-| GENERIC_USER_PROVIDER_ATTRIBUTE | Attribute specifying the user's provider
-| GENERIC_USER_ROLE_ATTRIBUTE | Attribute specifying the user's role
-| GENERIC_USERINFO_ENDPOINT | Endpoint to fetch user information in generic OAuth
-| GALILEO_BASE_URL | Base URL for Galileo platform
-| GALILEO_PASSWORD | Password for Galileo authentication
-| GALILEO_PROJECT_ID | Project ID for Galileo usage
-| GALILEO_USERNAME | Username for Galileo authentication
-| GREENSCALE_API_KEY | API key for Greenscale service
-| GREENSCALE_ENDPOINT | Endpoint URL for Greenscale service
-| GOOGLE_APPLICATION_CREDENTIALS | Path to Google Cloud credentials JSON file
-| GOOGLE_CLIENT_ID | Client ID for Google OAuth
-| GOOGLE_CLIENT_SECRET | Client secret for Google OAuth
-| GOOGLE_KMS_RESOURCE_NAME | Name of the resource in Google KMS
-| HF_API_BASE | Base URL for Hugging Face API
-| HELICONE_API_KEY | API key for Helicone service
-| HUGGINGFACE_API_BASE | Base URL for Hugging Face API
-| IAM_TOKEN_DB_AUTH | IAM token for database authentication
-| JSON_LOGS | Enable JSON formatted logging
-| JWT_AUDIENCE | Expected audience for JWT tokens
-| JWT_PUBLIC_KEY_URL | URL to fetch public key for JWT verification
-| LAGO_API_BASE | Base URL for Lago API
-| LAGO_API_CHARGE_BY | Parameter to determine charge basis in Lago
-| LAGO_API_EVENT_CODE | Event code for Lago API events
-| LAGO_API_KEY | API key for accessing Lago services
-| LANGFUSE_DEBUG | Toggle debug mode for Langfuse
-| LANGFUSE_FLUSH_INTERVAL | Interval for flushing Langfuse logs
-| LANGFUSE_HOST | Host URL for Langfuse service
-| LANGFUSE_PUBLIC_KEY | Public key for Langfuse authentication
-| LANGFUSE_RELEASE | Release version of Langfuse integration
-| LANGFUSE_SECRET_KEY | Secret key for Langfuse authentication
-| LANGSMITH_API_KEY | API key for Langsmith platform
-| LANGSMITH_BASE_URL | Base URL for Langsmith service
-| LANGSMITH_BATCH_SIZE | Batch size for operations in Langsmith
-| LANGSMITH_DEFAULT_RUN_NAME | Default name for Langsmith run
-| LANGSMITH_PROJECT | Project name for Langsmith integration
-| LANGSMITH_SAMPLING_RATE | Sampling rate for Langsmith logging
-| LANGTRACE_API_KEY | API key for Langtrace service
-| LITERAL_API_KEY | API key for Literal integration
-| LITERAL_API_URL | API URL for Literal service
-| LITERAL_BATCH_SIZE | Batch size for Literal operations
-| LITELLM_DONT_SHOW_FEEDBACK_BOX | Flag to hide feedback box in LiteLLM UI
-| LITELLM_DROP_PARAMS | Parameters to drop in LiteLLM requests
-| LITELLM_EMAIL | Email associated with LiteLLM account
-| LITELLM_GLOBAL_MAX_PARALLEL_REQUEST_RETRIES | Maximum retries for parallel requests in LiteLLM
-| LITELLM_GLOBAL_MAX_PARALLEL_REQUEST_RETRY_TIMEOUT | Timeout for retries of parallel requests in LiteLLM
-| LITELLM_HOSTED_UI | URL of the hosted UI for LiteLLM
-| LITELLM_LICENSE | License key for LiteLLM usage
-| LITELLM_LOCAL_MODEL_COST_MAP | Local configuration for model cost mapping in LiteLLM
-| LITELLM_LOG | Enable detailed logging for LiteLLM
-| LITELLM_MODE | Operating mode for LiteLLM (e.g., production, development)
-| LITELLM_SALT_KEY | Salt key for encryption in LiteLLM
-| LITELLM_SECRET_AWS_KMS_LITELLM_LICENSE | AWS KMS encrypted license for LiteLLM
-| LITELLM_TOKEN | Access token for LiteLLM integration
-| LOGFIRE_TOKEN | Token for Logfire logging service
-| MICROSOFT_CLIENT_ID | Client ID for Microsoft services
-| MICROSOFT_CLIENT_SECRET | Client secret for Microsoft services
-| MICROSOFT_TENANT | Tenant ID for Microsoft Azure
-| NO_DOCS | Flag to disable documentation generation
-| NO_PROXY | List of addresses to bypass proxy
-| OAUTH_TOKEN_INFO_ENDPOINT | Endpoint for OAuth token info retrieval
-| OPENAI_API_BASE | Base URL for OpenAI API
-| OPENAI_API_KEY | API key for OpenAI services
-| OPENAI_ORGANIZATION | Organization identifier for OpenAI
-| OPENID_BASE_URL | Base URL for OpenID Connect services
-| OPENID_CLIENT_ID | Client ID for OpenID Connect authentication
-| OPENID_CLIENT_SECRET | Client secret for OpenID Connect authentication
-| OPENMETER_API_ENDPOINT | API endpoint for OpenMeter integration
-| OPENMETER_API_KEY | API key for OpenMeter services
-| OPENMETER_EVENT_TYPE | Type of events sent to OpenMeter
-| OTEL_ENDPOINT | OpenTelemetry endpoint for traces
-| OTEL_ENVIRONMENT_NAME | Environment name for OpenTelemetry
-| OTEL_EXPORTER | Exporter type for OpenTelemetry
-| OTEL_HEADERS | Headers for OpenTelemetry requests
-| OTEL_SERVICE_NAME | Service name identifier for OpenTelemetry
-| OTEL_TRACER_NAME | Tracer name for OpenTelemetry tracing
-| PREDIBASE_API_BASE | Base URL for Predibase API
-| PRESIDIO_ANALYZER_API_BASE | Base URL for Presidio Analyzer service
-| PRESIDIO_ANONYMIZER_API_BASE | Base URL for Presidio Anonymizer service
-| PROMETHEUS_URL | URL for Prometheus service
-| PROMPTLAYER_API_KEY | API key for PromptLayer integration
-| PROXY_ADMIN_ID | Admin identifier for proxy server
-| PROXY_BASE_URL | Base URL for proxy service
-| PROXY_LOGOUT_URL | URL for logging out of the proxy service
-| PROXY_MASTER_KEY | Master key for proxy authentication
-| QDRANT_API_BASE | Base URL for Qdrant API
-| QDRANT_API_KEY | API key for Qdrant service
-| QDRANT_URL | Connection URL for Qdrant database
-| REDIS_HOST | Hostname for Redis server
-| REDIS_PASSWORD | Password for Redis service
-| REDIS_PORT | Port number for Redis server
-| REDOC_URL | The path to the Redoc Fast API documentation. **By default this is "/redoc"**
-| SERVER_ROOT_PATH | Root path for the server application
-| SET_VERBOSE | Flag to enable verbose logging
-| SLACK_DAILY_REPORT_FREQUENCY | Frequency of daily Slack reports (e.g., daily, weekly)
-| SLACK_WEBHOOK_URL | Webhook URL for Slack integration
-| SMTP_HOST | Hostname for the SMTP server
-| SMTP_PASSWORD | Password for SMTP authentication
-| SMTP_PORT | Port number for SMTP server
-| SMTP_SENDER_EMAIL | Email address used as the sender in SMTP transactions
-| SMTP_SENDER_LOGO | Logo used in emails sent via SMTP
-| SMTP_TLS | Flag to enable or disable TLS for SMTP connections
-| SMTP_USERNAME | Username for SMTP authentication
-| SPEND_LOGS_URL | URL for retrieving spend logs
-| SSL_CERTIFICATE | Path to the SSL certificate file
-| SSL_VERIFY | Flag to enable or disable SSL certificate verification
-| SUPABASE_KEY | API key for Supabase service
-| SUPABASE_URL | Base URL for Supabase instance
-| TEST_EMAIL_ADDRESS | Email address used for testing purposes
-| UI_LOGO_PATH | Path to the logo image used in the UI
-| UI_PASSWORD | Password for accessing the UI
-| UI_USERNAME | Username for accessing the UI
-| UPSTREAM_LANGFUSE_DEBUG | Flag to enable debugging for upstream Langfuse
-| UPSTREAM_LANGFUSE_HOST | Host URL for upstream Langfuse service
-| UPSTREAM_LANGFUSE_PUBLIC_KEY | Public key for upstream Langfuse authentication
-| UPSTREAM_LANGFUSE_RELEASE | Release version identifier for upstream Langfuse
-| UPSTREAM_LANGFUSE_SECRET_KEY | Secret key for upstream Langfuse authentication
-| USE_AWS_KMS | Flag to enable AWS Key Management Service for encryption
-| WEBHOOK_URL | URL for receiving webhooks from external services
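-
-These are typically set in the deployment environment (e.g. Docker/Kubernetes env vars or a `.env` file). The proxy config.yaml also accepts an `environment_variables` block; a minimal sketch with placeholder values:
-
-```yaml
-environment_variables:
-  REDIS_HOST: "your-redis-host"
-  REDIS_PORT: "6379"
-  LANGFUSE_PUBLIC_KEY: "your-langfuse-public-key"
-  LANGFUSE_SECRET_KEY: "your-langfuse-secret-key"
-```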
-
diff --git a/docs/my-website/docs/proxy/configs.md b/docs/my-website/docs/proxy/configs.md
deleted file mode 100644
index 7876c9dec..000000000
--- a/docs/my-website/docs/proxy/configs.md
+++ /dev/null
@@ -1,618 +0,0 @@
-import Image from '@theme/IdealImage';
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Overview
-Set model list, `api_base`, `api_key`, `temperature` & proxy server settings (`master_key`) on the config.yaml.
-
-| Param Name | Description |
-|----------------------|---------------------------------------------------------------|
-| `model_list` | List of supported models on the server, with model-specific configs |
-| `router_settings` | litellm Router settings, example `routing_strategy="least-busy"` [**see all**](#router-settings)|
-| `litellm_settings` | litellm Module settings, example `litellm.drop_params=True`, `litellm.set_verbose=True`, `litellm.api_base`, `litellm.cache` [**see all**](#all-settings)|
-| `general_settings` | Server settings, example setting `master_key: sk-my_special_key` |
-| `environment_variables` | Environment Variables example, `REDIS_HOST`, `REDIS_PORT` |
-
-**Complete List:** Check the Swagger UI docs on `/#/config.yaml` (e.g. http://0.0.0.0:4000/#/config.yaml) for everything you can pass in the config.yaml.
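-
-To see how these sections fit together, here is a bare-bones sketch of a config.yaml - the model names, keys, and values are placeholders (the Quick Start below shows a fuller working example):
-
-```yaml
-model_list:
-  - model_name: gpt-3.5-turbo
-    litellm_params:
-      model: azure/my-deployment              # placeholder deployment name
-      api_base: os.environ/AZURE_API_BASE     # does os.getenv("AZURE_API_BASE")
-      api_key: os.environ/AZURE_API_KEY
-
-litellm_settings:
-  drop_params: True
-
-router_settings:
-  routing_strategy: simple-shuffle
-
-general_settings:
-  master_key: sk-1234
-
-environment_variables:
-  REDIS_HOST: "your-redis-host"
-  REDIS_PORT: "6379"
-```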
-
-
-## Quick Start
-
-Set a model alias for your deployments.
-
-In the `config.yaml`, the `model_name` parameter is the user-facing name to use for your deployment.
-
-In the config below:
-- `model_name`: the name to pass TO litellm from the external client
-- `litellm_params.model`: the model string passed to the litellm.completion() function
-
-E.g.:
-- `model=vllm-models` will route to `openai/facebook/opt-125m`.
-- `model=gpt-3.5-turbo` will load balance between `azure/gpt-turbo-small-eu` and `azure/gpt-turbo-small-ca`.
-
-#### Step 1: Create a config.yaml
-
-```yaml
-model_list:
- - model_name: gpt-3.5-turbo ### RECEIVED MODEL NAME ###
- litellm_params: # all params accepted by litellm.completion() - https://docs.litellm.ai/docs/completion/input
- model: azure/gpt-turbo-small-eu ### MODEL NAME sent to `litellm.completion()` ###
- api_base: https://my-endpoint-europe-berri-992.openai.azure.com/
- api_key: "os.environ/AZURE_API_KEY_EU" # does os.getenv("AZURE_API_KEY_EU")
- rpm: 6 # [OPTIONAL] Rate limit for this deployment: in requests per minute (rpm)
- - model_name: bedrock-claude-v1
- litellm_params:
- model: bedrock/anthropic.claude-instant-v1
- - model_name: gpt-3.5-turbo
- litellm_params:
- model: azure/gpt-turbo-small-ca
- api_base: https://my-endpoint-canada-berri992.openai.azure.com/
- api_key: "os.environ/AZURE_API_KEY_CA"
- rpm: 6
- - model_name: anthropic-claude
- litellm_params:
- model: bedrock/anthropic.claude-instant-v1
- ### [OPTIONAL] SET AWS REGION ###
- aws_region_name: us-east-1
- - model_name: vllm-models
- litellm_params:
- model: openai/facebook/opt-125m # the `openai/` prefix tells litellm it's openai compatible
- api_base: http://0.0.0.0:4000/v1
- api_key: none
- rpm: 1440
- model_info:
- version: 2
-
- # Use this if you want to make requests to `claude-3-haiku-20240307`,`claude-3-opus-20240229`,`claude-2.1` without defining them on the config.yaml
- # Default models
- # Works for ALL Providers and needs the default provider credentials in .env
- - model_name: "*"
- litellm_params:
- model: "*"
-
-litellm_settings: # module level litellm settings - https://github.com/BerriAI/litellm/blob/main/litellm/__init__.py
- drop_params: True
- success_callback: ["langfuse"] # OPTIONAL - if you want to start sending LLM Logs to Langfuse. Make sure to set `LANGFUSE_PUBLIC_KEY` and `LANGFUSE_SECRET_KEY` in your env
-
-general_settings:
-  master_key: sk-1234 # [OPTIONAL] Only use this if you want to require all calls to contain this key (Authorization: Bearer sk-1234)
- alerting: ["slack"] # [OPTIONAL] If you want Slack Alerts for Hanging LLM requests, Slow llm responses, Budget Alerts. Make sure to set `SLACK_WEBHOOK_URL` in your env
-```
-:::info
-
-For more provider-specific info, [go here](../providers/)
-
-:::
-
-#### Step 2: Start Proxy with config
-
-```shell
-$ litellm --config /path/to/config.yaml
-```
-
-:::tip
-
-Run with `--detailed_debug` if you need detailed debug logs
-
-```shell
-$ litellm --config /path/to/config.yaml --detailed_debug
-```
-
-:::
-
-#### Step 3: Test it
-
-Sends a request to the model where `model_name=gpt-3.5-turbo` in the config.yaml.
-
-If multiple deployments share `model_name=gpt-3.5-turbo`, the proxy does [Load Balancing](https://docs.litellm.ai/docs/proxy/load_balancing) across them.
-
-**[Langchain, OpenAI SDK Usage Examples](../proxy/user_keys#request-format)**
-
-```shell
-curl --location 'http://0.0.0.0:4000/chat/completions' \
---header 'Content-Type: application/json' \
---data ' {
- "model": "gpt-3.5-turbo",
- "messages": [
- {
- "role": "user",
- "content": "what llm are you"
- }
-    ]
- }
-'
-```
-
-## LLM configs `model_list`
-
-### Model-specific params (API Base, Keys, Temperature, Max Tokens, Organization, Headers etc.)
-You can use the config to save model-specific information like api_base, api_key, temperature, max_tokens, etc.
-
-[**All input params**](https://docs.litellm.ai/docs/completion/input#input-params-1)
-
-**Step 1**: Create a `config.yaml` file
-```yaml
-model_list:
- - model_name: gpt-4-team1
- litellm_params: # params for litellm.completion() - https://docs.litellm.ai/docs/completion/input#input---request-body
- model: azure/chatgpt-v-2
- api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
- api_version: "2023-05-15"
- azure_ad_token: eyJ0eXAiOiJ
- seed: 12
- max_tokens: 20
- - model_name: gpt-4-team2
- litellm_params:
- model: azure/gpt-4
- api_key: sk-123
- api_base: https://openai-gpt-4-test-v-2.openai.azure.com/
- temperature: 0.2
- - model_name: openai-gpt-3.5
- litellm_params:
- model: openai/gpt-3.5-turbo
- extra_headers: {"AI-Resource Group": "ishaan-resource"}
- api_key: sk-123
- organization: org-ikDc4ex8NB
- temperature: 0.2
- - model_name: mistral-7b
- litellm_params:
- model: ollama/mistral
- api_base: your_ollama_api_base
-```
-
-**Step 2**: Start server with config
-
-```shell
-$ litellm --config /path/to/config.yaml
-```
-
-**Expected Logs:**
-
-Look for this line in your console logs to confirm the config.yaml was loaded correctly.
-```
-LiteLLM: Proxy initialized with Config, Set models:
-```
-
-### Embedding Models - Use Sagemaker, Bedrock, Azure, OpenAI, XInference
-
-See supported Embedding Providers & Models [here](https://docs.litellm.ai/docs/embedding/supported_embedding)
-
-
-
-
-
-```yaml
-model_list:
- - model_name: bedrock-cohere
- litellm_params:
- model: "bedrock/cohere.command-text-v14"
- aws_region_name: "us-west-2"
- - model_name: bedrock-cohere
- litellm_params:
- model: "bedrock/cohere.command-text-v14"
- aws_region_name: "us-east-2"
- - model_name: bedrock-cohere
- litellm_params:
- model: "bedrock/cohere.command-text-v14"
- aws_region_name: "us-east-1"
-
-```
-
-
-
-
-
-Here's how to route between GPT-J embedding (sagemaker endpoint), Amazon Titan embedding (Bedrock) and Azure OpenAI embedding on the proxy server:
-
-```yaml
-model_list:
- - model_name: sagemaker-embeddings
- litellm_params:
- model: "sagemaker/berri-benchmarking-gpt-j-6b-fp16"
- - model_name: amazon-embeddings
- litellm_params:
- model: "bedrock/amazon.titan-embed-text-v1"
- - model_name: azure-embeddings
- litellm_params:
- model: "azure/azure-embedding-model"
- api_base: "os.environ/AZURE_API_BASE" # os.getenv("AZURE_API_BASE")
- api_key: "os.environ/AZURE_API_KEY" # os.getenv("AZURE_API_KEY")
- api_version: "2023-07-01-preview"
-
-general_settings:
- master_key: sk-1234 # [OPTIONAL] if set all calls to proxy will require either this key or a valid generated token
-```
-
-
-
-
-LiteLLM Proxy supports all Feature-Extraction Embedding models.
-
-```yaml
-model_list:
- - model_name: deployed-codebert-base
- litellm_params:
- # send request to deployed hugging face inference endpoint
- model: huggingface/microsoft/codebert-base # add huggingface prefix so it routes to hugging face
- api_key: hf_LdS # api key for hugging face inference endpoint
- api_base: https://uysneno1wv2wd4lw.us-east-1.aws.endpoints.huggingface.cloud # your hf inference endpoint
- - model_name: codebert-base
- litellm_params:
- # no api_base set, sends request to hugging face free inference api https://api-inference.huggingface.co/models/
- model: huggingface/microsoft/codebert-base # add huggingface prefix so it routes to hugging face
- api_key: hf_LdS # api key for hugging face
-
-```
-
-**Azure OpenAI embedding models**
-
-```yaml
-model_list:
- - model_name: azure-embedding-model # model group
- litellm_params:
- model: azure/azure-embedding-model # model name for litellm.embedding(model=azure/azure-embedding-model) call
- api_base: your-azure-api-base
- api_key: your-api-key
- api_version: 2023-07-01-preview
-```
-
-**OpenAI embedding models**
-
-```yaml
-model_list:
-- model_name: text-embedding-ada-002 # model group
- litellm_params:
- model: text-embedding-ada-002 # model name for litellm.embedding(model=text-embedding-ada-002)
- api_key: your-api-key-1
-- model_name: text-embedding-ada-002
- litellm_params:
- model: text-embedding-ada-002
- api_key: your-api-key-2
-```
-
-**XInference embedding models**
-
-See the [XInference provider docs](https://docs.litellm.ai/docs/providers/xinference).
-
-**Note: add the `xinference/` prefix to `litellm_params: model` so litellm knows to route to Xinference**
-
-```yaml
-model_list:
-- model_name: embedding-model # model group
- litellm_params:
- model: xinference/bge-base-en # model name for litellm.embedding(model=xinference/bge-base-en)
- api_base: http://0.0.0.0:9997/v1
-```
-