forked from phoenix/litellm-mirror
add v4
This commit is contained in:
parent
2cf949990e
commit
a168cf8b9c
832 changed files with 161273 additions and 0 deletions
@ -0,0 +1,130 @@
The `chat-conversational-react-description` agent type lets us create a conversational agent using a chat model instead of an LLM.

```python
from langchain.agents import initialize_agent, AgentType
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
```
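`ConversationBufferMemory` simply accumulates each human/AI exchange and replays the whole transcript on the next turn, which is what lets the agent answer follow-up questions like "what's my name?". A minimal plain-Python sketch of that behavior (illustrative only, not LangChain's actual implementation):

```python
# Buffer-style chat memory sketched in plain Python (illustrative only;
# not LangChain's actual ConversationBufferMemory).
class BufferMemory:
    def __init__(self, memory_key="chat_history"):
        self.memory_key = memory_key
        self.messages = []  # the full transcript, replayed on every turn

    def save_context(self, user_input, ai_output):
        self.messages.append(("human", user_input))
        self.messages.append(("ai", ai_output))

    def load_memory_variables(self):
        return {self.memory_key: list(self.messages)}

memory_sketch = BufferMemory()
memory_sketch.save_context("hi, i am bob", "Hello Bob! How can I assist you today?")
memory_sketch.save_context("what's my name?", "Your name is Bob.")
print(len(memory_sketch.load_memory_variables()["chat_history"]))  # → 4
```

Because the entire history is resent on every call, buffer memory is simple but grows without bound; windowed and summarizing memory variants trade completeness for a bounded prompt size.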

```python
agent_chain.run(input="hi, i am bob")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
{
    "action": "Final Answer",
    "action_input": "Hello Bob! How can I assist you today?"
}

> Finished chain.

'Hello Bob! How can I assist you today?'
```

</CodeOutputBlock>


```python
agent_chain.run(input="what's my name?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
{
    "action": "Final Answer",
    "action_input": "Your name is Bob."
}

> Finished chain.

'Your name is Bob.'
```

</CodeOutputBlock>


```python
agent_chain.run("what are some good dinners to make this week, if i like thai food?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
{
    "action": "Current Search",
    "action_input": "Thai food dinner recipes"
}
Observation: 64 easy Thai recipes for any night of the week · Thai curry noodle soup · Thai yellow cauliflower, snake bean and tofu curry · Thai-spiced chicken hand pies · Thai ...
Thought:{
    "action": "Final Answer",
    "action_input": "Here are some Thai food dinner recipes you can try this week: Thai curry noodle soup, Thai yellow cauliflower, snake bean and tofu curry, Thai-spiced chicken hand pies, and many more. You can find the full list of recipes at the source I found earlier."
}

> Finished chain.

'Here are some Thai food dinner recipes you can try this week: Thai curry noodle soup, Thai yellow cauliflower, snake bean and tofu curry, Thai-spiced chicken hand pies, and many more. You can find the full list of recipes at the source I found earlier.'
```

</CodeOutputBlock>


```python
agent_chain.run(input="tell me the last letter in my name, and also tell me who won the world cup in 1978?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
{
    "action": "Final Answer",
    "action_input": "The last letter in your name is 'b'. Argentina won the World Cup in 1978."
}

> Finished chain.

"The last letter in your name is 'b'. Argentina won the World Cup in 1978."
```

</CodeOutputBlock>


```python
agent_chain.run(input="whats the weather like in pomfret?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
{
    "action": "Current Search",
    "action_input": "weather in pomfret"
}
Observation: Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity 76%.
Thought:{
    "action": "Final Answer",
    "action_input": "Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity 76%."
}

> Finished chain.

'Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity 76%.'
```

</CodeOutputBlock>

@ -0,0 +1,150 @@
This is accomplished with a specific type of agent (`conversational-react-description`) which expects to be used with a memory component.

```python
from langchain.agents import Tool
from langchain.agents import AgentType
from langchain.memory import ConversationBufferMemory
from langchain import OpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent
```


```python
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Current Search",
        func=search.run,
        description="useful for when you need to answer questions about current events or the current state of the world",
    ),
]
```
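At each step the agent picks a tool by its `name` and calls that tool's `func` with a model-generated input string; the `description` is what the model sees when deciding which tool to pick. The routing itself is just a name-to-callable lookup, sketched here with a stand-in search function (hypothetical names, not the agent's real internals):

```python
# Name-based tool dispatch, the way an agent routes an action to a Tool's func.
# fake_search stands in for SerpAPIWrapper.run (illustrative only).
def fake_search(query):
    return f"results for: {query}"

tool_registry = {"Current Search": fake_search}

def dispatch(action, action_input):
    # The agent's chosen "action" must match a registered tool name exactly.
    if action not in tool_registry:
        raise ValueError(f"unknown tool: {action}")
    return tool_registry[action](action_input)

print(dispatch("Current Search", "weather in pomfret"))  # → results for: weather in pomfret
```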

```python
memory = ConversationBufferMemory(memory_key="chat_history")
```


```python
llm = OpenAI(temperature=0)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
```

```python
agent_chain.run(input="hi, i am bob")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? No
AI: Hi Bob, nice to meet you! How can I help you today?

> Finished chain.

'Hi Bob, nice to meet you! How can I help you today?'
```

</CodeOutputBlock>


```python
agent_chain.run(input="what's my name?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? No
AI: Your name is Bob!

> Finished chain.

'Your name is Bob!'
```

</CodeOutputBlock>


```python
agent_chain.run("what are some good dinners to make this week, if i like thai food?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Thai food dinner recipes
Observation: 59 easy Thai recipes for any night of the week · Marion Grasby's Thai spicy chilli and basil fried rice · Thai curry noodle soup · Marion Grasby's Thai Spicy ...
Thought: Do I need to use a tool? No
AI: Here are some great Thai dinner recipes you can try this week: Marion Grasby's Thai Spicy Chilli and Basil Fried Rice, Thai Curry Noodle Soup, Thai Green Curry with Coconut Rice, Thai Red Curry with Vegetables, and Thai Coconut Soup. I hope you enjoy them!

> Finished chain.

"Here are some great Thai dinner recipes you can try this week: Marion Grasby's Thai Spicy Chilli and Basil Fried Rice, Thai Curry Noodle Soup, Thai Green Curry with Coconut Rice, Thai Red Curry with Vegetables, and Thai Coconut Soup. I hope you enjoy them!"
```

</CodeOutputBlock>


```python
agent_chain.run(input="tell me the last letter in my name, and also tell me who won the world cup in 1978?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Who won the World Cup in 1978
Observation: Argentina national football team
Thought: Do I need to use a tool? No
AI: The last letter in your name is "b" and the winner of the 1978 World Cup was the Argentina national football team.

> Finished chain.

'The last letter in your name is "b" and the winner of the 1978 World Cup was the Argentina national football team.'
```

</CodeOutputBlock>


```python
agent_chain.run(input="whats the current temperature in pomfret?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Current temperature in Pomfret
Observation: Partly cloudy skies. High around 70F. Winds W at 5 to 10 mph. Humidity 41%.
Thought: Do I need to use a tool? No
AI: The current temperature in Pomfret is around 70F with partly cloudy skies and winds W at 5 to 10 mph. The humidity is 41%.

> Finished chain.

'The current temperature in Pomfret is around 70F with partly cloudy skies and winds W at 5 to 10 mph. The humidity is 41%.'
```

</CodeOutputBlock>

@ -0,0 +1,76 @@
Install the `openai` and `google-search-results` packages, which are required because the LangChain integrations call them internally.

```bash
pip install openai google-search-results
```

```python
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
```


```python
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
    Tool(
        name="FooBar-DB",
        func=db_chain.run,
        description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context",
    ),
]
```

```python
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
```


```python
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```

<CodeOutputBlock lang="python">

```
> Entering new chain...

Invoking: `Search` with `{'query': 'Leo DiCaprio girlfriend'}`

Amidst his casual romance with Gigi, Leo allegedly entered a relationship with 19-year old model, Eden Polani, in February 2023.
Invoking: `Calculator` with `{'expression': '19^0.43'}`

> Entering new chain...
19^0.43```text
19**0.43
```
...numexpr.evaluate("19**0.43")...

Answer: 3.547023357958959
> Finished chain.
Answer: 3.547023357958959Leo DiCaprio's girlfriend is reportedly Eden Polani. Her current age raised to the power of 0.43 is approximately 3.55.

> Finished chain.

"Leo DiCaprio's girlfriend is reportedly Eden Polani. Her current age raised to the power of 0.43 is approximately 3.55."
```

</CodeOutputBlock>
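In the trace above, `LLMMathChain` has the model translate `19^0.43` into the Python-style expression `19**0.43` and then evaluates it with `numexpr` rather than trusting the model's arithmetic. The arithmetic itself is easy to sanity-check in plain Python:

```python
import math

# The Calculator step above: 19 raised to the 0.43 power.
value = 19 ** 0.43
print(round(value, 6))  # ≈ 3.547023, matching the chain's answer
```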
228 docs/snippets/modules/agents/agent_types/plan_and_execute.mdx Normal file

@ -0,0 +1,228 @@
## Imports


```python
from langchain.chat_models import ChatOpenAI
from langchain.experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner
from langchain.llms import OpenAI
from langchain import SerpAPIWrapper
from langchain.agents.tools import Tool
from langchain import LLMMathChain
```

## Tools


```python
search = SerpAPIWrapper()
llm = OpenAI(temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
]
```

## Planner, Executor, and Agent


```python
model = ChatOpenAI(temperature=0)
```


```python
planner = load_chat_planner(model)
```


```python
executor = load_agent_executor(model, tools, verbose=True)
```


```python
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
```
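`PlanAndExecute` works in two phases: the planner turns the objective into an ordered list of steps, and the executor agent then runs each step in turn, with earlier step results available as context. A stripped-down sketch of that control flow (plain Python with hard-coded stand-ins; the real planner and executor are LLM-driven):

```python
# Plan-then-execute control flow, illustrative only.
def plan(objective):
    # The real planner asks an LLM for steps; here they are hard-coded.
    return ["look up the person", "find their age", "raise it to the 0.43 power"]

def execute(step, prior_results):
    # The real executor is a ReAct agent with tools; here it just echoes.
    return f"completed: {step}"

def run(objective):
    prior_results = []
    for step in plan(objective):
        # Each step sees the (step, response) pairs produced so far.
        prior_results.append((step, execute(step, prior_results)))
    return prior_results[-1][1]  # the last step's response is the final answer

print(run("age of Leo DiCaprio's girlfriend to the 0.43 power"))
# → completed: raise it to the 0.43 power
```

Separating planning from execution lets the planner reason about the whole task up front, at the cost of extra LLM calls per step, as the long trace below shows.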

## Run Example


```python
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```

<CodeOutputBlock lang="python">

```

> Entering new PlanAndExecute chain...
steps=[Step(value="Search for Leo DiCaprio's girlfriend on the internet."), Step(value='Find her current age.'), Step(value='Raise her current age to the 0.43 power using a calculator or programming language.'), Step(value='Output the result.'), Step(value="Given the above steps taken, respond to the user's original question.\n\n")]

> Entering new AgentExecutor chain...
Action:
```
{
    "action": "Search",
    "action_input": "Who is Leo DiCaprio's girlfriend?"
}
```

Observation: DiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel – Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week.
Thought:Based on the previous observation, I can provide the answer to the current objective.
Action:
```
{
    "action": "Final Answer",
    "action_input": "Leo DiCaprio is currently linked to Gigi Hadid."
}
```

> Finished chain.
*****

Step: Search for Leo DiCaprio's girlfriend on the internet.

Response: Leo DiCaprio is currently linked to Gigi Hadid.

> Entering new AgentExecutor chain...
Action:
```
{
    "action": "Search",
    "action_input": "What is Gigi Hadid's current age?"
}
```

Observation: 28 years
Thought:Previous steps: steps=[(Step(value="Search for Leo DiCaprio's girlfriend on the internet."), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.'))]

Current objective: value='Find her current age.'

Action:
```
{
    "action": "Search",
    "action_input": "What is Gigi Hadid's current age?"
}
```

Observation: 28 years
Thought:Previous steps: steps=[(Step(value="Search for Leo DiCaprio's girlfriend on the internet."), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.')), (Step(value='Find her current age.'), StepResponse(response='28 years'))]

Current objective: None

Action:
```
{
    "action": "Final Answer",
    "action_input": "Gigi Hadid's current age is 28 years."
}
```

> Finished chain.
*****

Step: Find her current age.

Response: Gigi Hadid's current age is 28 years.

> Entering new AgentExecutor chain...
Action:
```
{
    "action": "Calculator",
    "action_input": "28 ** 0.43"
}
```

> Entering new LLMMathChain chain...
28 ** 0.43
```text
28 ** 0.43
```
...numexpr.evaluate("28 ** 0.43")...

Answer: 4.1906168361987195
> Finished chain.

Observation: Answer: 4.1906168361987195
Thought:The next step is to provide the answer to the user's question.

Action:
```
{
    "action": "Final Answer",
    "action_input": "Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
}
```

> Finished chain.
*****

Step: Raise her current age to the 0.43 power using a calculator or programming language.

Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.

> Entering new AgentExecutor chain...
Action:
```
{
    "action": "Final Answer",
    "action_input": "The result is approximately 4.19."
}
```

> Finished chain.
*****

Step: Output the result.

Response: The result is approximately 4.19.

> Entering new AgentExecutor chain...
Action:
```
{
    "action": "Final Answer",
    "action_input": "Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
}
```

> Finished chain.
*****

Step: Given the above steps taken, respond to the user's original question.

Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.
> Finished chain.

"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
```

</CodeOutputBlock>
62 docs/snippets/modules/agents/agent_types/react.mdx Normal file

@ -0,0 +1,62 @@
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
```

First, let's load the language model we're going to use to control the agent.


```python
llm = OpenAI(temperature=0)
```

Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.


```python
tools = load_tools(["serpapi", "llm-math"], llm=llm)
```

Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.


```python
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```

Now let's test it out!


```python
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: Camila Morrone
Thought: I need to find out Camila Morrone's age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought: I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Observation: Answer: 3.991298452658078

Thought: I now know the final answer
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.

> Finished chain.

"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078."
```

</CodeOutputBlock>
7 docs/snippets/modules/agents/agent_types/react_chat.mdx Normal file

@ -0,0 +1,7 @@
```python
from langchain.chat_models import ChatOpenAI

chat_model = ChatOpenAI(temperature=0)
agent = initialize_agent(tools, chat_model, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```
279 docs/snippets/modules/agents/agent_types/structured_chat.mdx Normal file

@ -0,0 +1,279 @@
This functionality is natively available using the agent type `structured-chat-zero-shot-react-description` (`AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION`).

```python
import os
os.environ["LANGCHAIN_TRACING"] = "true"  # If you want to trace the execution of the program, set to "true"
```


```python
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
```

### Initialize Tools

We will test the agent using a web browser.


```python
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import (
    create_async_playwright_browser,
    create_sync_playwright_browser,  # A synchronous browser is available, though it isn't compatible with jupyter.
)

# This import is required only for jupyter notebooks, since they have their own eventloop
import nest_asyncio
nest_asyncio.apply()
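`nest_asyncio` is needed because Jupyter already runs an event loop, and by default asyncio refuses to start a second loop inside a running one. The restriction it patches around can be demonstrated in a few lines of plain `asyncio` (no nest_asyncio involved):

```python
import asyncio

async def inner():
    return "ok"

async def outer():
    # Starting a second loop while one is already running fails by default;
    # this is exactly what happens when sync code calls asyncio.run in Jupyter.
    try:
        asyncio.run(inner())
        return "no error"
    except RuntimeError as exc:
        return str(exc)

message = asyncio.run(outer())
print(message)  # explains that a loop is already running
```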

```python
async_browser = create_async_playwright_browser()
browser_toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = browser_toolkit.get_tools()
```


```python
llm = ChatOpenAI(temperature=0)  # Also works well with Anthropic models
agent_chain = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```

```python
response = await agent_chain.arun(input="Hi I'm Erica.")
print(response)
```

<CodeOutputBlock lang="python">

```

> Entering new AgentExecutor chain...
Action:
```
{
    "action": "Final Answer",
    "action_input": "Hello Erica, how can I assist you today?"
}
```

> Finished chain.
Hello Erica, how can I assist you today?
```

</CodeOutputBlock>
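Unlike the plain ReAct text format, the structured-chat agent emits its action as a fenced JSON object whose `action_input` can be a multi-field dictionary, which is what lets it drive multi-argument tools like the browser toolkit. Once the fence is stripped, parsing is ordinary JSON:

```python
import json

# The action format shown in the traces above (fence already stripped).
raw = """
{
    "action": "navigate_browser",
    "action_input": {
        "url": "https://blog.langchain.dev/"
    }
}
"""

parsed = json.loads(raw)
print(parsed["action"])               # → navigate_browser
print(parsed["action_input"]["url"])  # → https://blog.langchain.dev/
```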

```python
response = await agent_chain.arun(input="Don't need help really just chatting.")
print(response)
```

<CodeOutputBlock lang="python">

```

> Entering new AgentExecutor chain...

> Finished chain.
I'm here to chat! How's your day going?
```

</CodeOutputBlock>
|
||||
|
||||
|
||||
```python
|
||||
response = await agent_chain.arun(input="Browse to blog.langchain.dev and summarize the text, please.")
|
||||
print(response)
|
||||
```
|
||||
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
|
||||
|
||||
> Entering new AgentExecutor chain...
|
||||
Action:
|
||||
```
|
||||
{
|
||||
"action": "navigate_browser",
|
||||
"action_input": {
|
||||
"url": "https://blog.langchain.dev/"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
Observation: Navigating to https://blog.langchain.dev/ returned status code 200
|
||||
Thought:I need to extract the text from the webpage to summarize it.
|
||||
Action:
|
||||
```
|
||||
{
|
||||
"action": "extract_text",
|
||||
"action_input": {}
|
||||
}
|
||||
```
|
||||
|
||||
Observation: LangChain LangChain Home About GitHub Docs LangChain The official LangChain blog. Auto-Evaluator Opportunities Editor's Note: this is a guest blog post by Lance Martin.
|
||||
|
||||
|
||||
TL;DR
|
||||
|
||||
We recently open-sourced an auto-evaluator tool for grading LLM question-answer chains. We are now releasing an open source, free to use hosted app and API to expand usability. Below we discuss a few opportunities to further improve May 1, 2023 5 min read Callbacks Improvements TL;DR: We're announcing improvements to our callbacks system, which powers logging, tracing, streaming output, and some awesome third-party integrations. This will better support concurrent runs with independent callbacks, tracing of deeply nested trees of LangChain components, and callback handlers scoped to a single request (which is super useful for May 1, 2023 3 min read Unleashing the power of AI Collaboration with Parallelized LLM Agent Actor Trees Editor's note: the following is a guest blog post from Cyrus at Shaman AI. We use guest blog posts to highlight interesting and novel applications, and this is certainly that. There's been a lot of talk about agents recently, but most have been discussions around a single agent. If multiple Apr 28, 2023 4 min read Gradio & LLM Agents Editor's note: this is a guest blog post from Freddy Boulton, a software engineer at Gradio. We're excited to share this post because it brings a large number of exciting new tools into the ecosystem. Agents are largely defined by the tools they have, so to be able to equip Apr 23, 2023 4 min read RecAlign - The smart content filter for social media feed [Editor's Note] This is a guest post by Tian Jin. We are highlighting this application as we think it is a novel use case. Specifically, we think recommendation systems are incredibly impactful in our everyday lives and there has not been a ton of discourse on how LLMs will impact Apr 22, 2023 3 min read Improving Document Retrieval with Contextual Compression Note: This post assumes some familiarity with LangChain and is moderately technical.
|
||||
|
||||
💡 TL;DR: We’ve introduced a new abstraction and a new document Retriever to facilitate the post-processing of retrieved documents. Specifically, the new abstraction makes it easy to take a set of retrieved documents and extract from them Apr 20, 2023 3 min read Autonomous Agents & Agent Simulations Over the past two weeks, there has been a massive increase in using LLMs in an agentic manner. Specifically, projects like AutoGPT, BabyAGI, CAMEL, and Generative Agents have popped up. The LangChain community has now implemented some parts of all of those projects in the LangChain framework. While researching and Apr 18, 2023 7 min read AI-Powered Medical Knowledge: Revolutionizing Care for Rare Conditions [Editor's Note]: This is a guest post by Jack Simon, who recently participated in a hackathon at Williams College. He built a LangChain-powered chatbot focused on appendiceal cancer, aiming to make specialized knowledge more accessible to those in need. If you are interested in building a chatbot for another rare Apr 17, 2023 3 min read Auto-Eval of Question-Answering Tasks By Lance Martin
|
||||
|
||||
Context
|
||||
|
||||
LLM ops platforms, such as LangChain, make it easy to assemble LLM components (e.g., models, document retrievers, data loaders) into chains. Question-Answering is one of the most popular applications of these chains. But it is often not always obvious to determine what parameters (e.g. Apr 15, 2023 3 min read Announcing LangChainJS Support for Multiple JS Environments TLDR: We're announcing support for running LangChain.js in browsers, Cloudflare Workers, Vercel/Next.js, Deno, Supabase Edge Functions, alongside existing support for Node.js ESM and CJS. See install/upgrade docs and breaking changes list.
|
||||
|
||||
|
||||
Context
|
||||
|
||||
Originally we designed LangChain.js to run in Node.js, which is the Apr 11, 2023 3 min read LangChain x Supabase Supabase is holding an AI Hackathon this week. Here at LangChain we are big fans of both Supabase and hackathons, so we thought this would be a perfect time to highlight the multiple ways you can use LangChain and Supabase together.
|
||||
|
||||
The reason we like Supabase so much is that Apr 8, 2023 2 min read Announcing our $10M seed round led by Benchmark It was only six months ago that we released the first version of LangChain, but it seems like several years. When we launched, generative AI was starting to go mainstream: stable diffusion had just been released and was captivating people’s imagination and fueling an explosion in developer activity, Jasper Apr 4, 2023 4 min read Custom Agents One of the most common requests we've heard is better functionality and documentation for creating custom agents. This has always been a bit tricky - because in our mind it's actually still very unclear what an "agent" actually is, and therefore what the "right" abstractions for them may be. Recently, Apr 3, 2023 3 min read Retrieval TL;DR: We are adjusting our abstractions to make it easy for other retrieval methods besides the LangChain VectorDB object to be used in LangChain. This is done with the goals of (1) allowing retrievers constructed elsewhere to be used more easily in LangChain, (2) encouraging more experimentation with alternative Mar 23, 2023 4 min read LangChain + Zapier Natural Language Actions (NLA) We are super excited to team up with Zapier and integrate their new Zapier NLA API into LangChain, which you can now use with your agents and chains. With this integration, you have access to the 5k+ apps and 20k+ actions on Zapier's platform through a natural language API interface. Mar 16, 2023 2 min read Evaluation Evaluation of language models, and by extension applications built on top of language models, is hard. With recent model releases (OpenAI, Anthropic, Google) evaluation is becoming a bigger and bigger issue. People are starting to try to tackle this, with OpenAI releasing OpenAI/evals - focused on evaluating OpenAI models. Mar 14, 2023 3 min read LLMs and SQL Francisco Ingham and Jon Luo are two of the community members leading the change on the SQL integrations. 
We’re really excited to write this blog post with them going over all the tips and tricks they’ve learned doing so. We’re even more excited to announce that we’ Mar 13, 2023 8 min read Origin Web Browser [Editor's Note]: This is the second of hopefully many guest posts. We intend to highlight novel applications building on top of LangChain. If you are interested in working with us on such a post, please reach out to harrison@langchain.dev.
Authors: Parth Asawa (pgasawa@), Ayushi Batwara (ayushi.batwara@), Jason Mar 8, 2023 4 min read Prompt Selectors One common complaint we've heard is that the default prompt templates do not work equally well for all models. This became especially pronounced this past week when OpenAI released a ChatGPT API. This new API had a completely new interface (which required new abstractions) and as a result many users Mar 8, 2023 2 min read Chat Models Last week OpenAI released a ChatGPT endpoint. It came marketed with several big improvements, most notably being 10x cheaper and a lot faster. But it also came with a completely new API endpoint. We were able to quickly write a wrapper for this endpoint to let users use it like Mar 6, 2023 6 min read Using the ChatGPT API to evaluate the ChatGPT API OpenAI released a new ChatGPT API yesterday. Lots of people were excited to try it. But how does it actually compare to the existing API? It will take some time before there is a definitive answer, but here are some initial thoughts. Because I'm lazy, I also enrolled the help Mar 2, 2023 5 min read Agent Toolkits Today, we're announcing agent toolkits, a new abstraction that allows developers to create agents designed for a particular use-case (for example, interacting with a relational database or interacting with an OpenAPI spec). We hope to continue developing different toolkits that can enable agents to do amazing feats. Toolkits are supported Mar 1, 2023 3 min read TypeScript Support It's finally here... TypeScript support for LangChain.
What does this mean? It means that all your favorite prompts, chains, and agents are all recreatable in TypeScript natively. Both the Python version and TypeScript version utilize the same serializable format, meaning that artifacts can seamlessly be shared between languages. As an Feb 17, 2023 2 min read Streaming Support in LangChain We’re excited to announce streaming support in LangChain. There's been a lot of talk about the best UX for LLM applications, and we believe streaming is at its core. We’ve also updated the chat-langchain repo to include streaming and async execution. We hope that this repo can serve Feb 14, 2023 2 min read LangChain + Chroma Today we’re announcing LangChain's integration with Chroma, the first step on the path to the Modern A.I Stack.
LangChain - The A.I-native developer toolkit
We started LangChain with the intent to build a modular and flexible framework for developing A.I-native applications. Some of the use cases Feb 13, 2023 2 min read Page 1 of 2 Older Posts → LangChain © 2023 Sign up Powered by Ghost
Thought:
> Finished chain.


The LangChain blog has recently released an open-source auto-evaluator tool for grading LLM question-answer chains and is now releasing an open-source, free-to-use hosted app and API to expand usability. The blog also discusses various opportunities to further improve the LangChain platform.
```
</CodeOutputBlock>


```python
response = await agent_chain.arun(input="What's the latest xkcd comic about?")
print(response)
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Thought: I can navigate to the xkcd website and extract the latest comic title and alt text to answer the question.
Action:
```
{
  "action": "navigate_browser",
  "action_input": {
    "url": "https://xkcd.com/"
  }
}
```

Observation: Navigating to https://xkcd.com/ returned status code 200
Thought:I can extract the latest comic title and alt text using CSS selectors.
Action:
```
{
  "action": "get_elements",
  "action_input": {
    "selector": "#ctitle, #comic img",
    "attributes": ["alt", "src"]
  }
}
```

Observation: [{"alt": "Tapetum Lucidum", "src": "//imgs.xkcd.com/comics/tapetum_lucidum.png"}]
Thought:
> Finished chain.
The latest xkcd comic is titled "Tapetum Lucidum" and the image can be found at https://xkcd.com/2565/.
```

</CodeOutputBlock>
## Adding in memory

Here is how you add memory to this agent:
```python
from langchain.prompts import MessagesPlaceholder
from langchain.memory import ConversationBufferMemory
```


```python
chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
```


```python
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    agent_kwargs={
        "memory_prompts": [chat_history],
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    },
)
```
```python
response = await agent_chain.arun(input="Hi I'm Erica.")
print(response)
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Action:
```
{
  "action": "Final Answer",
  "action_input": "Hi Erica! How can I assist you today?"
}
```

> Finished chain.
Hi Erica! How can I assist you today?
```

</CodeOutputBlock>


```python
response = await agent_chain.arun(input="whats my name?")
print(response)
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Your name is Erica.

> Finished chain.
Your name is Erica.
```

</CodeOutputBlock>
docs/snippets/modules/agents/get_started.mdx (new file, 132 lines)
This will go over how to get started building an agent.
We will use a LangChain agent class, but show how to customize it to give it specific context.
We will then define custom tools, and then run it all in the standard LangChain AgentExecutor.

### Set up the agent

We will use the OpenAIFunctionsAgent.
This is the easiest and best agent to get started with.
It does, however, require the use of ChatOpenAI models.
If you want to use a different language model, we would recommend using the [ReAct](/docs/modules/agents/agent_types/react) agent.

For this guide, we will construct a custom agent that has access to a custom tool.
We are choosing this example because we think for most use cases you will NEED to customize either the agent or the tools.
The tool we will give the agent is a tool to calculate the length of a word.
This is useful because this is actually something LLMs can mess up due to tokenization.
We will first create it WITHOUT memory, but we will then show how to add memory in.
Memory is needed to enable conversation.

First, let's load the language model we're going to use to control the agent.

```python
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(temperature=0)
```

Next, let's define some tools to use.
Let's write a really simple Python function to calculate the length of a word that is passed in.


```python
from langchain.agents import tool

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

tools = [get_word_length]
```

Now let's create the prompt.
We can use the `OpenAIFunctionsAgent.create_prompt` helper function to create a prompt automatically.
This allows for a few different ways to customize it, including passing in a custom SystemMessage, which we will do.

```python
from langchain.agents import OpenAIFunctionsAgent
from langchain.schema import SystemMessage

system_message = SystemMessage(content="You are a very powerful assistant, but bad at calculating lengths of words.")
prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
```
Putting those pieces together, we can now create the agent.

```python
from langchain.agents import OpenAIFunctionsAgent
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
```

Finally, we create the AgentExecutor - the runtime for our agent.

```python
from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```

Now let's test it out!


```python
agent_executor.run("how many letters in the word educa?")
```
<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...

Invoking: `get_word_length` with `{'word': 'educa'}`

5

There are 5 letters in the word "educa".

> Finished chain.


'There are 5 letters in the word "educa".'
```

</CodeOutputBlock>
This is great - we have an agent!
However, this agent is stateless - it doesn't remember anything about previous interactions.
This means you can't easily ask follow-up questions.
Let's fix that by adding in memory.

In order to do this, we need to do two things:

1. Add a place for memory variables to go in the prompt
2. Add memory to the AgentExecutor (note that we add it here, and NOT to the agent, as this is the outermost chain)

First, let's add a place for memory in the prompt.
We do this by adding a placeholder for messages with the key `"chat_history"`.

```python
from langchain.prompts import MessagesPlaceholder

MEMORY_KEY = "chat_history"
prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=system_message,
    extra_prompt_messages=[MessagesPlaceholder(variable_name=MEMORY_KEY)]
)
```

Next, let's create a memory object.
We will do this by using `ConversationBufferMemory`.
Importantly, we set `memory_key` also equal to `"chat_history"` (to align it with the prompt) and set `return_messages=True` (to make it return messages rather than a string).

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key=MEMORY_KEY, return_messages=True)
```
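Conceptually, this buffer memory just accumulates the turns of the conversation and hands them back under the `chat_history` key, so the prompt's `MessagesPlaceholder` with the same key can pick them up. A minimal pure-Python stand-in, for illustration only (the real class is `langchain.memory.ConversationBufferMemory`):

```python
# Illustrative sketch only - NOT the real ConversationBufferMemory.
class BufferMemorySketch:
    def __init__(self, memory_key="chat_history"):
        self.memory_key = memory_key
        self.messages = []  # alternating human/AI messages, in order

    def save_context(self, inputs, outputs):
        # Record one conversational turn.
        self.messages.append(("human", inputs["input"]))
        self.messages.append(("ai", outputs["output"]))

    def load_memory_variables(self, _inputs):
        # Returned under memory_key so a prompt placeholder with the
        # same key receives the full history.
        return {self.memory_key: list(self.messages)}

memory_sketch = BufferMemorySketch()
memory_sketch.save_context({"input": "hi, i am bob"}, {"output": "Hello Bob!"})
print(memory_sketch.load_memory_variables({}))
```

The key alignment is the important part: if `memory_key` and the placeholder's `variable_name` disagree, the history is silently never injected into the prompt.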
We can then put it all together!

```python
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True)
agent_executor.run("how many letters in the word educa?")
agent_executor.run("is that a real word?")
```
docs/snippets/modules/agents/how_to/custom_llm_agent.mdx (new file, 356 lines)
The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:
1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)
2. If the Agent returns an `AgentFinish`, then returns that directly to the user
3. If the Agent returns an `AgentAction`, then uses that to call a tool and get an `Observation`
4. Repeats, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.

`AgentAction` is a response that consists of `action` and `action_input`. `action` refers to which tool to use, and `action_input` refers to the input to that tool. A `log` can also be provided as more context (which can be used for logging, tracing, etc.).

`AgentFinish` is a response that contains the final message to be sent back to the user. It should be used to end an agent run.
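To make the control flow concrete, here is a minimal pure-Python sketch of that loop. This is illustrative only - the real executor is `langchain.agents.AgentExecutor` - and the `plan` function below is a hypothetical stand-in for the agent:

```python
from typing import NamedTuple, Union

class AgentAction(NamedTuple):
    tool: str
    tool_input: str
    log: str

class AgentFinish(NamedTuple):
    return_values: dict
    log: str

def run_agent_loop(plan, tools, user_input: str, max_steps: int = 10) -> dict:
    """Sketch of the executor loop: plan -> act -> observe -> repeat."""
    intermediate_steps = []  # (AgentAction, observation) pairs so far
    for _ in range(max_steps):
        decision: Union[AgentAction, AgentFinish] = plan(user_input, intermediate_steps)
        if isinstance(decision, AgentFinish):
            return decision.return_values  # final answer goes back to the user
        observation = tools[decision.tool](decision.tool_input)
        intermediate_steps.append((decision, observation))
    raise RuntimeError("agent did not finish")

# Hypothetical one-tool agent: call the tool once, then finish.
def plan(user_input, steps):
    if not steps:
        return AgentAction(tool="length", tool_input="educa", log="using tool")
    return AgentFinish(return_values={"output": f"length is {steps[-1][1]}"}, log="done")

print(run_agent_loop(plan, {"length": len}, "how many letters in educa?"))
```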
In this notebook we walk through how to create a custom LLM agent.


## Set up environment

Do necessary imports, etc.


```python
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import StringPromptTemplate
from langchain import OpenAI, SerpAPIWrapper, LLMChain
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish, OutputParserException
import re
```

## Set up tool

Set up any tools the agent may want to use. It may be necessary to put them in the prompt (so that the agent knows to use these tools).


```python
# Define which tools the agent can use to answer user queries
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    )
]
```
## Prompt Template

This instructs the agent on what to do. Generally, the template should incorporate:

- `tools`: which tools the agent has access to, and how and when to call them.
- `intermediate_steps`: These are tuples of previous (`AgentAction`, `Observation`) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.
- `input`: generic user input


```python
# Set up the base template
template = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s

Question: {input}
{agent_scratchpad}"""
```
```python
# Set up a prompt template
class CustomPromptTemplate(StringPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # Format them in a particular way
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        return self.template.format(**kwargs)
```
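The scratchpad construction is plain string building, so it can be checked without any LangChain machinery. A small standalone sketch (the sample action log and observation below are made up for illustration):

```python
# Standalone demo of how agent_scratchpad is assembled from previous
# (action_log, observation) pairs, mirroring the loop in format() above.
def build_scratchpad(intermediate_steps):
    thoughts = ""
    for log, observation in intermediate_steps:
        thoughts += log
        thoughts += f"\nObservation: {observation}\nThought: "
    return thoughts

steps = [("Thought: I should search\nAction: Search\nAction Input: population of Canada",
          "The current population of Canada is 38,658,314")]
print(build_scratchpad(steps))
```

Note that the string ends with a trailing `"Thought: "`, which cues the model to continue its reasoning from where the last observation left off.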
```python
prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps"]
)
```
## Output Parser

The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.

This is where you can change the parsing to do retries, handle whitespace, etc.


```python
class CustomOutputParser(AgentOutputParser):

    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values are generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise OutputParserException(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
```
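The regex at the heart of the parser can be exercised on its own. Here is a quick standalone check against a made-up completion written in the prompt's format:

```python
import re

# Same pattern as in CustomOutputParser: capture the tool name after
# "Action:" and the tool input after "Action Input:".
regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
sample = (
    "Thought: I need to find out the population of Canada\n"
    "Action: Search\n"
    "Action Input: Population of Canada in 2023"
)
match = re.search(regex, sample, re.DOTALL)
tool = match.group(1).strip()
tool_input = match.group(2).strip(" ").strip('"')
print(tool, "->", tool_input)  # Search -> Population of Canada in 2023
```

The `\d*` parts tolerate numbered steps like `Action 2:`, and `re.DOTALL` lets the action input span multiple lines.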
```python
output_parser = CustomOutputParser()
```

## Set up LLM

Choose the LLM you want to use!


```python
llm = OpenAI(temperature=0)
```

## Define the stop sequence

This is important because it tells the LLM when to stop generation.

This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an `Observation` (otherwise, the LLM may hallucinate an observation for you).
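The effect is easy to simulate: if the model kept generating past the action input and invented its own observation, stopping at `\nObservation:` discards everything from that marker on. A toy illustration (the completion text below is made up):

```python
# Simulate truncating a completion at the stop sequence, the way the
# LLM call behaves when stop=["\nObservation:"] is passed.
completion = (
    "Thought: I should search\n"
    "Action: Search\n"
    "Action Input: population of Canada\n"
    "Observation: (a hallucinated observation the model invented)"
)
truncated = completion.split("\nObservation:")[0]
print(truncated)
```

Everything after the stop marker - including the hallucinated observation - is cut, leaving the real tool to supply the observation on the next step.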
## Set up the Agent

We can now combine everything to set up our agent.


```python
# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
```


```python
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names
)
```

## Use the Agent

Now we can use it!


```python
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
```
```python
agent_executor.run("How many people live in canada as of 2023?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada in 2023
Action: Search
Action Input: Population of Canada in 2023

Observation:The current population of Canada is 38,658,314 as of Wednesday, April 12, 2023, based on Worldometer elaboration of the latest United Nations data. I now know the final answer
Final Answer: Arrr, there be 38,658,314 people livin' in Canada as of 2023!

> Finished chain.


"Arrr, there be 38,658,314 people livin' in Canada as of 2023!"
```

</CodeOutputBlock>
## Adding Memory

If you want to add memory to the agent, you'll need to:

1. Add a place in the custom prompt for the `chat_history`
2. Add a memory object to the agent executor.


```python
# Set up the base template
template_with_history = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s

Previous conversation history:
{history}

New question: {input}
{agent_scratchpad}"""
```


```python
prompt_with_history = CustomPromptTemplate(
    template=template_with_history,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps", "history"]
)
```


```python
llm_chain = LLMChain(llm=llm, prompt=prompt_with_history)
```


```python
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names
)
```


```python
from langchain.memory import ConversationBufferWindowMemory
```


```python
# Keep only the last k=2 conversational turns in the prompt
memory = ConversationBufferWindowMemory(k=2)
```
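With `k=2`, only the two most recent conversational turns are kept, so older exchanges drop out of the `{history}` slot. A minimal pure-Python sketch of that windowing behavior (illustrative only - the made-up answers below are just sample data, and the real class is `langchain.memory.ConversationBufferWindowMemory`):

```python
from collections import deque

# Illustrative sketch of a k-turn window: only the most recent k
# (human, ai) exchanges survive; older turns are evicted.
class WindowMemorySketch:
    def __init__(self, k: int):
        self.turns = deque(maxlen=k)  # deque drops the oldest turn automatically

    def save_context(self, human: str, ai: str):
        self.turns.append((human, ai))

    def history(self) -> str:
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

mem = WindowMemorySketch(k=2)
mem.save_context("How many people live in Canada?", "38,658,314")
mem.save_context("How about in Mexico?", "132,679,922")
mem.save_context("And in Japan?", "125,000,000")
print(mem.history())  # the Canada turn has been evicted
```

A window keeps the prompt short and bounded, at the cost of the agent forgetting anything said more than `k` turns ago.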
```python
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)
```
```python
agent_executor.run("How many people live in canada as of 2023?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada in 2023
Action: Search
Action Input: Population of Canada in 2023

Observation:The current population of Canada is 38,658,314 as of Wednesday, April 12, 2023, based on Worldometer elaboration of the latest United Nations data. I now know the final answer
Final Answer: Arrr, there be 38,658,314 people livin' in Canada as of 2023!

> Finished chain.


"Arrr, there be 38,658,314 people livin' in Canada as of 2023!"
```

</CodeOutputBlock>


```python
agent_executor.run("how about in mexico?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Thought: I need to find out how many people live in Mexico.
Action: Search
Action Input: How many people live in Mexico as of 2023?

Observation:The current population of Mexico is 132,679,922 as of Tuesday, April 11, 2023, based on Worldometer elaboration of the latest United Nations data. Mexico 2020 ... I now know the final answer.
Final Answer: Arrr, there be 132,679,922 people livin' in Mexico as of 2023!

> Finished chain.


"Arrr, there be 132,679,922 people livin' in Mexico as of 2023!"
```

</CodeOutputBlock>
docs/snippets/modules/agents/how_to/custom_llm_chat_agent.mdx (new file, 247 lines)
The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:
1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)
2. If the Agent returns an `AgentFinish`, then returns that directly to the user
3. If the Agent returns an `AgentAction`, then uses that to call a tool and get an `Observation`
4. Repeats, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.

`AgentAction` is a response that consists of `action` and `action_input`. `action` refers to which tool to use, and `action_input` refers to the input to that tool. A `log` can also be provided as more context (which can be used for logging, tracing, etc.).

`AgentFinish` is a response that contains the final message to be sent back to the user. It should be used to end an agent run.

In this notebook we walk through how to create a custom LLM chat agent.


## Set up environment

Do necessary imports, etc.


```bash
pip install langchain
pip install google-search-results
pip install openai
```


```python
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import BaseChatPromptTemplate
from langchain import SerpAPIWrapper, LLMChain
from langchain.chat_models import ChatOpenAI
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish, HumanMessage
import re
from getpass import getpass
```
## Set up tool

Set up any tools the agent may want to use. It may be necessary to put them in the prompt (so that the agent knows to use these tools).


```python
SERPAPI_API_KEY = getpass()
```


```python
# Define which tools the agent can use to answer user queries
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    )
]
```
## Prompt Template

This instructs the agent on what to do. Generally, the template should incorporate:

- `tools`: which tools the agent has access to, and how and when to call them.
- `intermediate_steps`: These are tuples of previous (`AgentAction`, `Observation`) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.
- `input`: generic user input


```python
# Set up the base template
template = """Complete the objective as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

These were previous tasks you completed:



Begin!

Question: {input}
{agent_scratchpad}"""
```
```python
# Set up a prompt template
class CustomPromptTemplate(BaseChatPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[Tool]

    def format_messages(self, **kwargs) -> List[HumanMessage]:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # Format them in a particular way
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        formatted = self.template.format(**kwargs)
        return [HumanMessage(content=formatted)]
```


```python
prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps"]
)
```
## Output Parser

The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.

This is where you can change the parsing to do retries, handle whitespace, etc.


```python
class CustomOutputParser(AgentOutputParser):

    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values are generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
```


```python
output_parser = CustomOutputParser()
```
## Set up LLM

Choose the LLM you want to use!


```python
OPENAI_API_KEY = getpass()
```


```python
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)
```

## Define the stop sequence

This is important because it tells the LLM when to stop generation.

This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an `Observation` (otherwise, the LLM may hallucinate an observation for you).

## Set up the Agent

We can now combine everything to set up our agent.


```python
# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
```


```python
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names
)
```
## Use the Agent

Now we can use it!


```python
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
```


```python
agent_executor.run("Search for Leo DiCaprio's girlfriend on the internet.")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Thought: I should use a reliable search engine to get accurate information.
Action: Search
Action Input: "Leo DiCaprio girlfriend"

Observation:He went on to date Gisele Bündchen, Bar Refaeli, Blake Lively, Toni Garrn and Nina Agdal, among others, before finally settling down with current girlfriend Camila Morrone, who is 23 years his junior.
I have found the answer to the question.
Final Answer: Leo DiCaprio's current girlfriend is Camila Morrone.

> Finished chain.


"Leo DiCaprio's current girlfriend is Camila Morrone."
```

</CodeOutputBlock>
docs/snippets/modules/agents/how_to/mrkl.mdx (new file, 117 lines)
```python
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
```


```python
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm, verbose=True)
db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions"
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    ),
    Tool(
        name="FooBar DB",
        func=db_chain.run,
        description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context"
    )
]
```


```python
mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```


```python
mrkl.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Who is Leo DiCaprio's girlfriend?"
Observation: DiCaprio met actor Camila Morrone in December 2017, when she was 20 and he was 43. They were spotted at Coachella and went on multiple vacations together. Some reports suggested that DiCaprio was ready to ask Morrone to marry him. The couple made their red carpet debut at the 2020 Academy Awards.
Thought: I need to calculate Camila Morrone's age raised to the 0.43 power.
Action: Calculator
Action Input: 21^0.43

> Entering new LLMMathChain chain...
21^0.43
```text
21**0.43
```
...numexpr.evaluate("21**0.43")...

Answer: 3.7030049853137306
> Finished chain.

Observation: Answer: 3.7030049853137306
Thought: I now know the final answer.
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.7030049853137306.

> Finished chain.


"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.7030049853137306."
```

</CodeOutputBlock>
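In the trace above, `LLMMathChain` rewrites the agent's `21^0.43` into the Python expression `21**0.43` before evaluating it with numexpr. The reported answer can be sanity-checked with plain Python exponentiation:

```python
# Sanity check of the Calculator step in the trace above: the chain
# translates 21^0.43 into the Python expression 21**0.43.
result = 21 ** 0.43
print(result)  # 3.7030049853137306
```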

```python
mrkl.run("What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
I need to find out the artist's full name and then search the FooBar database for their albums.
Action: Search
Action Input: "The Storm Before the Calm" artist
Observation: The Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis Morissette, released June 17, 2022, via Epiphany Music and Thirty Tigers, as well as by RCA Records in Europe.
Thought: I now need to search the FooBar database for Alanis Morissette's albums.
Action: FooBar DB
Action Input: What albums by Alanis Morissette are in the FooBar database?

> Entering new SQLDatabaseChain chain...
What albums by Alanis Morissette are in the FooBar database?
SQLQuery:

/Users/harrisonchase/workplace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.
  sample_rows = connection.execute(command)

SELECT "Title" FROM "Album" INNER JOIN "Artist" ON "Album"."ArtistId" = "Artist"."ArtistId" WHERE "Name" = 'Alanis Morissette' LIMIT 5;
SQLResult: [('Jagged Little Pill',)]
Answer: The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.
> Finished chain.

Observation: The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.
Thought: I now know the final answer.
Final Answer: The artist who released the album 'The Storm Before the Calm' is Alanis Morissette and the albums of hers in the FooBar database are Jagged Little Pill.

> Finished chain.


"The artist who released the album 'The Storm Before the Calm' is Alanis Morissette and the albums of hers in the FooBar database are Jagged Little Pill."
```

</CodeOutputBlock>
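The SQL the chain generates above joins `Album` to `Artist` on `ArtistId`. The same query shape can be reproduced against a tiny in-memory stand-in for the Chinook schema (table and column names follow the trace above; the second artist row here is made-up filler for contrast):

```python
import sqlite3

# Tiny in-memory stand-in for the Chinook-style schema queried above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Artist (ArtistId INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Album  (AlbumId  INTEGER PRIMARY KEY, Title TEXT,
                         ArtistId INTEGER REFERENCES Artist(ArtistId));
    INSERT INTO Artist VALUES (1, 'Alanis Morissette'), (2, 'AC/DC');
    INSERT INTO Album  VALUES (1, 'Jagged Little Pill', 1), (2, 'Back In Black', 2);
""")

# The same query shape SQLDatabaseChain produced, parameterized here.
rows = conn.execute(
    'SELECT "Title" FROM "Album" INNER JOIN "Artist" '
    'ON "Album"."ArtistId" = "Artist"."ArtistId" '
    'WHERE "Name" = ? LIMIT 5;',
    ("Alanis Morissette",),
).fetchall()
print(rows)  # [('Jagged Little Pill',)]
```

Note that the agent never writes this SQL itself; it only phrases a natural-language question, and `SQLDatabaseChain` is responsible for generating and running the query.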
docs/snippets/modules/agents/how_to/mrkl_chat.mdx
```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
llm1 = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm1, verbose=True)
db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
db_chain = SQLDatabaseChain.from_llm(llm1, db, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions"
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    ),
    Tool(
        name="FooBar DB",
        func=db_chain.run,
        description="useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context"
    )
]
```


```python
mrkl = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```


```python
mrkl.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Thought: The first question requires a search, while the second question requires a calculator.
Action:
```
{
  "action": "Search",
  "action_input": "Leo DiCaprio girlfriend"
}
```

Observation: Gigi Hadid: 2022 Leo and Gigi were first linked back in September 2022, when a source told Us Weekly that Leo had his "sights set" on her (alarming way to put it, but okay).
Thought: For the second question, I need to calculate the age raised to the 0.43 power. I will use the calculator tool.
Action:
```
{
  "action": "Calculator",
  "action_input": "((2022-1995)^0.43)"
}
```


> Entering new LLMMathChain chain...
((2022-1995)^0.43)
```text
(2022-1995)**0.43
```
...numexpr.evaluate("(2022-1995)**0.43")...

Answer: 4.125593352125936
> Finished chain.

Observation: Answer: 4.125593352125936
Thought: I now know the final answer.
Final Answer: Gigi Hadid is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is approximately 4.13.

> Finished chain.


"Gigi Hadid is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is approximately 4.13."
```

</CodeOutputBlock>
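Unlike the plain-text `Action:`/`Action Input:` format used by the LLM agent, the chat agent prompts the model to emit each tool call as a JSON blob of the form `{"action": ..., "action_input": ...}`, visible in the trace above. A rough sketch of decoding such a blob (illustrative only, not LangChain's actual parser):

```python
import json

# Minimal sketch (not LangChain's actual parser): pull the JSON action blob
# out of a chat-agent reply by locating its outermost braces.
def parse_chat_action(text: str):
    start, end = text.index("{"), text.rindex("}") + 1
    payload = json.loads(text[start:end])
    return payload["action"], payload["action_input"]

reply = 'Action:\n{\n  "action": "Search",\n  "action_input": "Leo DiCaprio girlfriend"\n}'
print(parse_chat_action(reply))  # ('Search', 'Leo DiCaprio girlfriend')
```

The JSON format trades a little verbosity for robustness: quoting and escaping are handled by the JSON grammar rather than by ad-hoc string splitting.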

```python
mrkl.run("What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?")
```

<CodeOutputBlock lang="python">

```
> Entering new AgentExecutor chain...
Question: What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?
Thought: I should use the Search tool to find the answer to the first part of the question and then use the FooBar DB tool to find the answer to the second part.
Action:
```
{
  "action": "Search",
  "action_input": "Who recently released an album called 'The Storm Before the Calm'"
}
```

Observation: Alanis Morissette
Thought: Now that I know the artist's name, I can use the FooBar DB tool to find out if they are in the database and what albums of theirs are in it.
Action:
```
{
  "action": "FooBar DB",
  "action_input": "What albums does Alanis Morissette have in the database?"
}
```


> Entering new SQLDatabaseChain chain...
What albums does Alanis Morissette have in the database?
SQLQuery:

/Users/harrisonchase/workplace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.
  sample_rows = connection.execute(command)

SELECT "Title" FROM "Album" WHERE "ArtistId" IN (SELECT "ArtistId" FROM "Artist" WHERE "Name" = 'Alanis Morissette') LIMIT 5;
SQLResult: [('Jagged Little Pill',)]
Answer: Alanis Morissette has the album Jagged Little Pill in the database.
> Finished chain.

Observation: Alanis Morissette has the album Jagged Little Pill in the database.
Thought: The artist Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.
Final Answer: Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.

> Finished chain.


'Alanis Morissette is in the FooBar database and has the album Jagged Little Pill in it.'
```

</CodeOutputBlock>
docs/snippets/modules/agents/tools/get_started.mdx
```python
from langchain.agents import load_tools

tool_names = [...]
tools = load_tools(tool_names)
```

Some tools (e.g. chains, agents) may require a base LLM to initialize them.
In that case, you can pass in an LLM as well:

```python
from langchain.agents import load_tools

tool_names = [...]
llm = ...
tools = load_tools(tool_names, llm=llm)
```
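Conceptually, `load_tools` is a lookup from tool names to constructors, some of which require an LLM. A stripped-down sketch of that pattern in plain Python (the registry and tool names here are hypothetical, for illustration only; they are not LangChain's actual internals):

```python
# Sketch of the load_tools pattern: a registry maps tool names to
# (factory, needs_llm) pairs; factories flagged as needing an LLM get one.
# Hypothetical registry for illustration, not LangChain's real mapping.
_TOOL_REGISTRY = {
    "echo": (lambda: ("echo", lambda q: q), False),
    "llm-math": (lambda llm: ("llm-math", lambda q: llm(q)), True),
}

def load_tools_sketch(tool_names, llm=None):
    tools = []
    for name in tool_names:
        factory, needs_llm = _TOOL_REGISTRY[name]
        if needs_llm:
            if llm is None:
                raise ValueError(f"Tool {name!r} requires an LLM to be provided")
            tools.append(factory(llm))
        else:
            tools.append(factory())
    return tools

print([name for name, _ in load_tools_sketch(["echo"])])  # ['echo']
```

This is why the second snippet passes `llm=llm`: requesting an LLM-backed tool without supplying a model is an error at load time.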