import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Realtime Endpoints
Use this to load balance across Azure + OpenAI.
## Proxy Usage

### Add model to config

<Tabs>
<TabItem value="openai" label="OpenAI">

```yaml
model_list:
  - model_name: openai-gpt-4o-realtime-audio
    litellm_params:
      model: openai/gpt-4o-realtime-preview-2024-10-01
      api_key: os.environ/OPENAI_API_KEY
```

</TabItem>
<TabItem value="openai-azure" label="OpenAI + Azure">

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: azure/gpt-4o-realtime-preview
      api_key: os.environ/AZURE_SWEDEN_API_KEY
      api_base: os.environ/AZURE_SWEDEN_API_BASE
  - model_name: openai-gpt-4o-realtime-audio
    litellm_params:
      model: openai/gpt-4o-realtime-preview-2024-10-01
      api_key: os.environ/OPENAI_API_KEY
```

</TabItem>
</Tabs>
### Start proxy

```bash
litellm --config /path/to/config.yaml

# RUNNING on http://0.0.0.0:4000
```
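Before wiring up a client, you can sanity-check that the proxy is up. A minimal check, assuming the default port of 4000 and that the proxy's `/health/liveliness` endpoint is enabled:

```bash
curl http://0.0.0.0:4000/health/liveliness
# "I'm alive!"
```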
### Test

Run this script using node - `node test.js`
```js
// test.js
const WebSocket = require("ws");

const url = "ws://0.0.0.0:4000/v1/realtime?model=openai-gpt-4o-realtime-audio";
// const url = "wss://my-endpoint-sweden-berri992.openai.azure.com/openai/realtime?api-version=2024-10-01-preview&deployment=gpt-4o-realtime-preview";
const ws = new WebSocket(url, {
    headers: {
        "api-key": `f28ab7b695af4154bc53498e5bdccb07`,
        "OpenAI-Beta": "realtime=v1",
    },
});

ws.on("open", function open() {
    console.log("Connected to server.");
    ws.send(JSON.stringify({
        type: "response.create",
        response: {
            modalities: ["text"],
            instructions: "Please assist the user.",
        }
    }));
});

ws.on("message", function incoming(message) {
    console.log(JSON.parse(message.toString()));
});

ws.on("error", function handleError(error) {
    console.error("Error: ", error);
});
```
## Logging

To prevent requests from being dropped, LiteLLM by default logs only these event types:

- `session.created`
- `response.create`
- `response.done`

You can override this by setting the `logged_real_time_event_types` parameter in the config. For example:
```yaml
litellm_settings:
  logged_real_time_event_types: "*" # Log all events
  ## OR ##
  logged_real_time_event_types: ["session.created", "response.create", "response.done"] # Log only these event types
```
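For reference, `litellm_settings` sits at the top level of the same config file as `model_list`. A minimal combined sketch, reusing the OpenAI entry from above:

```yaml
model_list:
  - model_name: openai-gpt-4o-realtime-audio
    litellm_params:
      model: openai/gpt-4o-realtime-preview-2024-10-01
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  # only these lifecycle events are logged; other event types are skipped
  logged_real_time_event_types: ["session.created", "response.create", "response.done"]
```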