forked from phoenix/litellm-mirror
add everything for docs
This commit is contained in:
parent de45a738ee
commit 0fe8799f94
1015 changed files with 185353 additions and 0 deletions

356 docs/snippets/modules/agents/how_to/custom_llm_agent.mdx (new file)
@@ -0,0 +1,356 @@
The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:

1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)
2. If the Agent returns an `AgentFinish`, then return that directly to the user
3. If the Agent returns an `AgentAction`, then use that to call a tool and get an `Observation`
4. Repeat, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.

`AgentAction` is a response that consists of `action` and `action_input`. `action` refers to which tool to use, and `action_input` refers to the input to that tool. A `log` can also be provided as more context (that can be used for logging, tracing, etc.).

`AgentFinish` is a response that contains the final message to be sent back to the user. It should be used to end an agent run.

In this notebook we walk through how to create a custom LLM agent.
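The loop above can be sketched in plain Python. This is an illustrative stand-in, not LangChain's actual implementation: the `AgentAction`/`AgentFinish` dataclasses and the `plan` callback below are simplified assumptions.

```python
from dataclasses import dataclass
from typing import Union

# Stand-ins for LangChain's AgentAction / AgentFinish, for illustration only
@dataclass
class AgentAction:
    tool: str          # which tool to use
    tool_input: str    # the input to that tool
    log: str = ""      # extra context for logging/tracing

@dataclass
class AgentFinish:
    return_values: dict  # final response, usually {"output": ...}
    log: str = ""

def run_agent(plan, tools, user_input, max_steps=10):
    """Sketch of the AgentExecutor loop: plan -> act -> observe -> repeat."""
    intermediate_steps = []
    for _ in range(max_steps):
        # 1. Pass user input and any previous steps to the agent
        decision: Union[AgentAction, AgentFinish] = plan(user_input, intermediate_steps)
        # 2. AgentFinish -> return directly to the user
        if isinstance(decision, AgentFinish):
            return decision.return_values["output"]
        # 3. AgentAction -> call the named tool to get an Observation
        observation = tools[decision.tool](decision.tool_input)
        # 4. Repeat, feeding the (action, observation) pair back in
        intermediate_steps.append((decision, observation))
    raise RuntimeError("Agent did not finish within max_steps")
```

The real `AgentExecutor` adds error handling, callbacks, and limits on iterations, but the control flow is the same.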

## Set up environment

Do necessary imports, etc.

```python
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import StringPromptTemplate
from langchain import OpenAI, SerpAPIWrapper, LLMChain
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish, OutputParserException
import re
```

## Set up tool

Set up any tools the agent may want to use. It may be necessary to put them in the prompt (so that the agent knows to use these tools).

```python
# Define which tools the agent can use to answer user queries
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    )
]
```

## Prompt Template

This instructs the agent on what to do. Generally, the template should incorporate:

- `tools`: which tools the agent has access to, and how and when to call them.
- `intermediate_steps`: tuples of previous (`AgentAction`, `Observation`) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.
- `input`: generic user input

```python
# Set up the base template
template = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s

Question: {input}
{agent_scratchpad}"""
```

```python
# Set up a prompt template
class CustomPromptTemplate(StringPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # Format them in a particular way
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        return self.template.format(**kwargs)
```

```python
prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps"]
)
```

## Output Parser

The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.

This is where you can change the parsing to do retries, handle whitespace, etc.

```python
class CustomOutputParser(AgentOutputParser):

    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise OutputParserException(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
```

```python
output_parser = CustomOutputParser()
```
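To see exactly what the parsing regex extracts, here is a standalone check using only the `re` module (the sample `llm_output` string is made up for illustration):

```python
import re

# The same regex used by CustomOutputParser above
regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"

# Hypothetical model output in the ReAct format
llm_output = "Thought: I need to search\nAction: Search\nAction Input: Population of Canada in 2023"

match = re.search(regex, llm_output, re.DOTALL)
tool = match.group(1).strip()                       # becomes AgentAction.tool
tool_input = match.group(2).strip(" ").strip('"')   # becomes AgentAction.tool_input
print(tool)        # Search
print(tool_input)  # Population of Canada in 2023
```

The `\s*\d*` parts tolerate numbered steps like `Action 2:`, and `re.DOTALL` lets the action input span multiple lines.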

## Set up LLM

Choose the LLM you want to use!

```python
llm = OpenAI(temperature=0)
```

## Define the stop sequence

This is important because it tells the LLM when to stop generating.

This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an `Observation` (otherwise, the LLM may hallucinate an observation for you).
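Conceptually, the stop sequence makes the provider cut the generation off at the first occurrence of that string, so the agent, not the model, supplies the real observation. A rough pure-Python illustration of that truncation (not LangChain or OpenAI code):

```python
def apply_stop(generated: str, stop: list) -> str:
    # Truncate at the earliest occurrence of any stop sequence,
    # mimicking what the LLM provider does with the `stop` parameter.
    cut = len(generated)
    for s in stop:
        idx = generated.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return generated[:cut]

# Hypothetical raw completion where the model hallucinated an observation
raw = ("Thought: I need the population\n"
       "Action: Search\n"
       "Action Input: Population of Canada in 2023\n"
       "Observation: 38,000,000 (hallucinated!)")

print(apply_stop(raw, ["\nObservation:"]))
# generation stops right before "\nObservation:", so the hallucinated line is dropped
```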

## Set up the Agent

We can now combine everything to set up our agent:

```python
# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
```

```python
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names
)
```

## Use the Agent

Now we can use it!

```python
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
```

```python
agent_executor.run("How many people live in Canada as of 2023?")
```

<CodeOutputBlock lang="python">

```


> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada in 2023
Action: Search
Action Input: Population of Canada in 2023

Observation: The current population of Canada is 38,658,314 as of Wednesday, April 12, 2023, based on Worldometer elaboration of the latest United Nations data. I now know the final answer
Final Answer: Arrr, there be 38,658,314 people livin' in Canada as of 2023!

> Finished chain.


"Arrr, there be 38,658,314 people livin' in Canada as of 2023!"
```

</CodeOutputBlock>

## Adding Memory

If you want to add memory to the agent, you'll need to:

1. Add a place in the custom prompt for the `chat_history`
2. Add a memory object to the agent executor.

```python
# Set up the base template
template_with_history = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s

Previous conversation history:
{history}

New question: {input}
{agent_scratchpad}"""
```

```python
prompt_with_history = CustomPromptTemplate(
    template=template_with_history,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps", "history"]
)
```

```python
llm_chain = LLMChain(llm=llm, prompt=prompt_with_history)
```

```python
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names
)
```

```python
from langchain.memory import ConversationBufferWindowMemory
```

```python
memory = ConversationBufferWindowMemory(k=2)
```

```python
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)
```

```python
agent_executor.run("How many people live in Canada as of 2023?")
```

<CodeOutputBlock lang="python">

```


> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada in 2023
Action: Search
Action Input: Population of Canada in 2023

Observation: The current population of Canada is 38,658,314 as of Wednesday, April 12, 2023, based on Worldometer elaboration of the latest United Nations data. I now know the final answer
Final Answer: Arrr, there be 38,658,314 people livin' in Canada as of 2023!

> Finished chain.


"Arrr, there be 38,658,314 people livin' in Canada as of 2023!"
```

</CodeOutputBlock>

```python
agent_executor.run("how about in Mexico?")
```

<CodeOutputBlock lang="python">

```


> Entering new AgentExecutor chain...
Thought: I need to find out how many people live in Mexico.
Action: Search
Action Input: How many people live in Mexico as of 2023?

Observation: The current population of Mexico is 132,679,922 as of Tuesday, April 11, 2023, based on Worldometer elaboration of the latest United Nations data. Mexico 2020 ... I now know the final answer.
Final Answer: Arrr, there be 132,679,922 people livin' in Mexico as of 2023!

> Finished chain.


"Arrr, there be 132,679,922 people livin' in Mexico as of 2023!"
```

</CodeOutputBlock>

247 docs/snippets/modules/agents/how_to/custom_llm_chat_agent.mdx (new file)
@@ -0,0 +1,247 @@
The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:

1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)
2. If the Agent returns an `AgentFinish`, then return that directly to the user
3. If the Agent returns an `AgentAction`, then use that to call a tool and get an `Observation`
4. Repeat, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.

`AgentAction` is a response that consists of `action` and `action_input`. `action` refers to which tool to use, and `action_input` refers to the input to that tool. A `log` can also be provided as more context (that can be used for logging, tracing, etc.).

`AgentFinish` is a response that contains the final message to be sent back to the user. It should be used to end an agent run.

In this notebook we walk through how to create a custom LLM agent.

## Set up environment

Do necessary imports, etc.

```bash
pip install langchain
pip install google-search-results
pip install openai
```

```python
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import BaseChatPromptTemplate
from langchain import SerpAPIWrapper, LLMChain
from langchain.chat_models import ChatOpenAI
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish, HumanMessage
import re
from getpass import getpass
```

## Set up tool

Set up any tools the agent may want to use. It may be necessary to put them in the prompt (so that the agent knows to use these tools).

```python
SERPAPI_API_KEY = getpass()
```

```python
# Define which tools the agent can use to answer user queries
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    )
]
```

## Prompt Template

This instructs the agent on what to do. Generally, the template should incorporate:

- `tools`: which tools the agent has access to, and how and when to call them.
- `intermediate_steps`: tuples of previous (`AgentAction`, `Observation`) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.
- `input`: generic user input

```python
# Set up the base template
template = """Complete the objective as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

These were previous tasks you completed:



Begin!

Question: {input}
{agent_scratchpad}"""
```

```python
# Set up a prompt template
class CustomPromptTemplate(BaseChatPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[Tool]

    def format_messages(self, **kwargs) -> List[HumanMessage]:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # Format them in a particular way
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        formatted = self.template.format(**kwargs)
        return [HumanMessage(content=formatted)]
```

```python
prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps"]
)
```

## Output Parser

The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.

This is where you can change the parsing to do retries, handle whitespace, etc.

```python
class CustomOutputParser(AgentOutputParser):

    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
```

```python
output_parser = CustomOutputParser()
```

## Set up LLM

Choose the LLM you want to use!

```python
OPENAI_API_KEY = getpass()
```

```python
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)
```

## Define the stop sequence

This is important because it tells the LLM when to stop generating.

This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an `Observation` (otherwise, the LLM may hallucinate an observation for you).

## Set up the Agent

We can now combine everything to set up our agent:

```python
# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
```

```python
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names
)
```

## Use the Agent

Now we can use it!

```python
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
```

```python
agent_executor.run("Search for Leo DiCaprio's girlfriend on the internet.")
```

<CodeOutputBlock lang="python">

```


> Entering new AgentExecutor chain...
Thought: I should use a reliable search engine to get accurate information.
Action: Search
Action Input: "Leo DiCaprio girlfriend"

Observation: He went on to date Gisele Bündchen, Bar Refaeli, Blake Lively, Toni Garrn and Nina Agdal, among others, before finally settling down with current girlfriend Camila Morrone, who is 23 years his junior.
I have found the answer to the question.
Final Answer: Leo DiCaprio's current girlfriend is Camila Morrone.

> Finished chain.


"Leo DiCaprio's current girlfriend is Camila Morrone."
```

</CodeOutputBlock>

117 docs/snippets/modules/agents/how_to/mrkl.mdx (new file)
@@ -0,0 +1,117 @@
```python
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
```

```python
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm, verbose=True)
db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions"
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    ),
    Tool(
        name="FooBar DB",
        func=db_chain.run,
        description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context"
    )
]
```

```python
mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```

```python
mrkl.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
 I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Who is Leo DiCaprio's girlfriend?"
Observation: DiCaprio met actor Camila Morrone in December 2017, when she was 20 and he was 43. They were spotted at Coachella and went on multiple vacations together. Some reports suggested that DiCaprio was ready to ask Morrone to marry him. The couple made their red carpet debut at the 2020 Academy Awards.
Thought: I need to calculate Camila Morrone's age raised to the 0.43 power.
Action: Calculator
Action Input: 21^0.43

> Entering new LLMMathChain chain...
21^0.43
```text
21**0.43
```
...numexpr.evaluate("21**0.43")...

Answer: 3.7030049853137306
> Finished chain.

Observation: Answer: 3.7030049853137306
Thought: I now know the final answer.
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.7030049853137306.

> Finished chain.


"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.7030049853137306."
```

</CodeOutputBlock>

```python
mrkl.run("What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
 I need to find out the artist's full name and then search the FooBar database for their albums.
Action: Search
Action Input: "The Storm Before the Calm" artist
Observation: The Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis Morissette, released June 17, 2022, via Epiphany Music and Thirty Tigers, as well as by RCA Records in Europe.
Thought: I now need to search the FooBar database for Alanis Morissette's albums.
Action: FooBar DB
Action Input: What albums by Alanis Morissette are in the FooBar database?

> Entering new SQLDatabaseChain chain...
What albums by Alanis Morissette are in the FooBar database?
SQLQuery:

/Users/harrisonchase/workplace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.
  sample_rows = connection.execute(command)


SELECT "Title" FROM "Album" INNER JOIN "Artist" ON "Album"."ArtistId" = "Artist"."ArtistId" WHERE "Name" = 'Alanis Morissette' LIMIT 5;
SQLResult: [('Jagged Little Pill',)]
Answer: The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.
> Finished chain.

Observation: The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.
Thought: I now know the final answer.
Final Answer: The artist who released the album 'The Storm Before the Calm' is Alanis Morissette and the albums of hers in the FooBar database are Jagged Little Pill.

> Finished chain.


"The artist who released the album 'The Storm Before the Calm' is Alanis Morissette and the albums of hers in the FooBar database are Jagged Little Pill."
```

</CodeOutputBlock>

138 docs/snippets/modules/agents/how_to/mrkl_chat.mdx (new file)
@@ -0,0 +1,138 @@
```python
from langchain import OpenAI, LLMMathChain, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain
from langchain.agents import initialize_agent, Tool, AgentType
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
llm1 = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm1, verbose=True)
db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
db_chain = SQLDatabaseChain.from_llm(llm1, db, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions"
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    ),
    Tool(
        name="FooBar DB",
        func=db_chain.run,
        description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context"
    )
]
```

```python
mrkl = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```

```python
mrkl.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Thought: The first question requires a search, while the second question requires a calculator.
Action:
```
{
  "action": "Search",
  "action_input": "Leo DiCaprio girlfriend"
}
```

Observation: Gigi Hadid: 2022 Leo and Gigi were first linked back in September 2022, when a source told Us Weekly that Leo had his “sights set" on her (alarming way to put it, but okay).
Thought: For the second question, I need to calculate the age raised to the 0.43 power. I will use the calculator tool.
Action:
```
{
  "action": "Calculator",
  "action_input": "((2022-1995)^0.43)"
}
```


> Entering new LLMMathChain chain...
((2022-1995)^0.43)
```text
(2022-1995)**0.43
```
...numexpr.evaluate("(2022-1995)**0.43")...

Answer: 4.125593352125936
> Finished chain.

Observation: Answer: 4.125593352125936
Thought: I now know the final answer.
Final Answer: Gigi Hadid is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is approximately 4.13.

> Finished chain.


"Gigi Hadid is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is approximately 4.13."
```

</CodeOutputBlock>

```python
mrkl.run("What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Question: What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?
Thought: I should use the Search tool to find the answer to the first part of the question and then use the FooBar DB tool to find the answer to the second part.
Action:
```
{
  "action": "Search",
  "action_input": "Who recently released an album called 'The Storm Before the Calm'"
}
```

Observation: Alanis Morissette
Thought: Now that I know the artist's name, I can use the FooBar DB tool to find out if they are in the database and what albums of theirs are in it.
Action:
```
{
  "action": "FooBar DB",
  "action_input": "What albums does Alanis Morissette have in the database?"
}
```


> Entering new SQLDatabaseChain chain...
What albums does Alanis Morissette have in the database?
SQLQuery:

/Users/harrisonchase/workplace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.
  sample_rows = connection.execute(command)


SELECT "Title" FROM "Album" WHERE "ArtistId" IN (SELECT "ArtistId" FROM "Artist" WHERE "Name" = 'Alanis Morissette') LIMIT 5;
SQLResult: [('Jagged Little Pill',)]
Answer: Alanis Morissette has the album Jagged Little Pill in the database.
> Finished chain.

Observation: Alanis Morissette has the album Jagged Little Pill in the database.
Thought: The artist Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.
Final Answer: Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.

> Finished chain.


'Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.'
```

</CodeOutputBlock>