{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"source": [
"# Using Nemo-Guardrails with LiteLLM Server\n",
"\n",
"[Call Bedrock, TogetherAI, Huggingface, etc. on the server](https://docs.litellm.ai/docs/providers)"
],
"metadata": {
"id": "eKXncoQbU_2j"
}
},
{
"cell_type": "markdown",
"source": [
"## Using with Bedrock\n",
"\n",
"`docker run -e PORT=8000 -e AWS_ACCESS_KEY_ID=<your-aws-access-key> -e AWS_SECRET_ACCESS_KEY=<your-aws-secret-key> -p 8000:8000 ghcr.io/berriai/litellm:latest`"
],
"metadata": {
"id": "ZciYaLwvuFbu"
}
},
{
"cell_type": "code",
"source": [
"pip install nemoguardrails langchain"
],
"metadata": {
"id": "vOUwGSJ2Vsy3"
},
"execution_count": null,
"outputs": []
},
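{
"cell_type": "markdown",
"source": [
"Before wiring in guardrails, it helps to sanity-check that the LiteLLM server is reachable. Below is a minimal sketch, assuming the server started by the `docker run` command above is listening on port 8000 and exposes the OpenAI-compatible `/chat/completions` route:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sanity check: send one raw OpenAI-style request to the LiteLLM server.\n",
"# Assumes the server from the docker command above is running on port 8000.\n",
"import requests\n",
"\n",
"resp = requests.post(\n",
"    \"http://0.0.0.0:8000/chat/completions\",\n",
"    json={\n",
"        \"model\": \"anthropic.claude-v2\",\n",
"        \"messages\": [{\"role\": \"user\", \"content\": \"Say hello in one sentence.\"}],\n",
"    },\n",
")\n",
"print(resp.json())"
],
"metadata": {},
"execution_count": null,
"outputs": []
},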
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xXEJNxe7U0IN"
},
"outputs": [],
"source": [
"import openai\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model_name=\"anthropic.claude-v2\", openai_api_base=\"http://0.0.0.0:8000\", openai_api_key=\"my-fake-key\")\n",
"\n",
"from nemoguardrails import LLMRails, RailsConfig\n",
"\n",
"config = RailsConfig.from_path(\"./config.yml\")\n",
"app = LLMRails(config, llm=llm)\n",
"\n",
"new_message = app.generate(messages=[{\n",
" \"role\": \"user\",\n",
" \"content\": \"Hello! What can you do for me?\"\n",
"}])"
]
},
{
"cell_type": "markdown",
"source": [
"## Using with TogetherAI\n",
"\n",
"1. You can either set this in the server environment:\n",
"`docker run -e PORT=8000 -e TOGETHERAI_API_KEY=<your-together-ai-api-key> -p 8000:8000 ghcr.io/berriai/litellm:latest`\n",
"\n",
"2. **Or** Pass this in as the api key `(...openai_api_key=\"<your-together-ai-api-key>\")`"
],
"metadata": {
"id": "vz5n00qyuKjp"
}
},
{
"cell_type": "code",
"source": [
"import openai\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model_name=\"together_ai/togethercomputer/CodeLlama-13b-Instruct\", openai_api_base=\"http://0.0.0.0:8000\", openai_api_key=\"my-together-ai-api-key\")\n",
"\n",
"from nemoguardrails import LLMRails, RailsConfig\n",
"\n",
"config = RailsConfig.from_path(\"./config.yml\")\n",
"app = LLMRails(config, llm=llm)\n",
"\n",
"new_message = app.generate(messages=[{\n",
" \"role\": \"user\",\n",
" \"content\": \"Hello! What can you do for me?\"\n",
"}])"
],
"metadata": {
"id": "XK1sk-McuhpE"
},
"execution_count": null,
"outputs": []
},
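{
"cell_type": "markdown",
"source": [
"To check which model names the server will accept in `model_name`, you can ask it directly. A small sketch, assuming the LiteLLM server exposes the OpenAI-compatible `/models` route:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# List the models the LiteLLM server advertises.\n",
"# Assumption: the server exposes the OpenAI-compatible /models route.\n",
"import requests\n",
"\n",
"print(requests.get(\"http://0.0.0.0:8000/models\").json())"
],
"metadata": {},
"execution_count": null,
"outputs": []
},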
{
"cell_type": "markdown",
"source": [
"### CONFIG.YML\n",
"\n",
"save this example `config.yml` in your current directory"
],
"metadata": {
"id": "8A1KWKnzuxAS"
}
},
{
"cell_type": "code",
"source": [
"# instructions:\n",
"# - type: general\n",
"# content: |\n",
"# Below is a conversation between a bot and a user about the recent job reports.\n",
"# The bot is factual and concise. If the bot does not know the answer to a\n",
"# question, it truthfully says it does not know.\n",
"\n",
"# sample_conversation: |\n",
"# user \"Hello there!\"\n",
"# express greeting\n",
"# bot express greeting\n",
"# \"Hello! How can I assist you today?\"\n",
"# user \"What can you do for me?\"\n",
"# ask about capabilities\n",
"# bot respond about capabilities\n",
"# \"I am an AI assistant that helps answer mathematical questions. My core mathematical skills are powered by wolfram alpha.\"\n",
"# user \"What's 2+2?\"\n",
"# ask math question\n",
"# bot responds to math question\n",
"# \"2+2 is equal to 4.\"\n",
"\n",
"# models:\n",
"# - type: main\n",
"# engine: openai\n",
"# model: claude-instant-1"
],
"metadata": {
"id": "NKN1GmSvu0Cx"
},
"execution_count": null,
"outputs": []
}
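{
"cell_type": "markdown",
"source": [
"Alternatively, NeMo Guardrails can load the same YAML from an in-memory string instead of a file. A minimal sketch using `RailsConfig.from_content`, reusing the `llm` instance created in an earlier cell:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Alternative to a config.yml on disk: load the YAML config from a string.\n",
"from nemoguardrails import LLMRails, RailsConfig\n",
"\n",
"yaml_config = \"\"\"\n",
"instructions:\n",
"  - type: general\n",
"    content: |\n",
"      Below is a conversation between a bot and a user about the recent job reports.\n",
"      The bot is factual and concise. If the bot does not know the answer to a\n",
"      question, it truthfully says it does not know.\n",
"\"\"\"\n",
"\n",
"config = RailsConfig.from_content(yaml_content=yaml_config)\n",
"app = LLMRails(config, llm=llm)  # reuse the ChatOpenAI instance from an earlier cell"
],
"metadata": {},
"execution_count": null,
"outputs": []
}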
]
}