forked from phoenix/litellm-mirror

add everything for docs

This commit is contained in: parent de45a738ee, commit 0fe8799f94
1015 changed files with 185353 additions and 0 deletions

docs/snippets/modules/model_io/models/chat/get_started.mdx (new file, 120 lines)
@@ -0,0 +1,120 @@
### Setup

To start we'll need to install the OpenAI Python package:

```bash
pip install openai
```

Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once you have a key, set it as an environment variable by running:

```bash
export OPENAI_API_KEY="..."
```

If you'd prefer not to set an environment variable, you can pass the key in directly via the `openai_api_key` named parameter when initializing the `ChatOpenAI` class:

```python
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(openai_api_key="...")
```

Otherwise, you can initialize without any parameters:

```python
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI()
```
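
You can also set the environment variable from within Python before constructing the model (for example, in a notebook) -- a minimal sketch, where the key value is a placeholder:

```python
import os

# set the key for the current process only; replace "..." with your key
os.environ["OPENAI_API_KEY"] = "..."
```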

### Messages

The chat model interface is based around messages rather than raw text.
The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, and `ChatMessage` -- `ChatMessage` takes in an arbitrary role parameter. Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.
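
For example, constructing each of these message types directly -- a minimal sketch; the contents here are arbitrary:

```python
from langchain.schema import AIMessage, ChatMessage, HumanMessage, SystemMessage

system = SystemMessage(content="You are a helpful assistant.")
human = HumanMessage(content="What is the capital of France?")
ai = AIMessage(content="The capital of France is Paris.")
# ChatMessage is the general form, with an explicit, arbitrary role
custom = ChatMessage(role="reviewer", content="Double-check that answer.")
```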

### `__call__`
#### Messages in -> message out

You can get chat completions by passing one or more messages to the chat model. The response will be a message.

```python
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat([HumanMessage(content="Translate this sentence from English to French: I love programming.")])
```

<CodeOutputBlock lang="python">

```
AIMessage(content="J'aime programmer.", additional_kwargs={})
```

</CodeOutputBlock>

OpenAI's chat model supports multiple messages as input. See [here](https://platform.openai.com/docs/guides/chat/chat-vs-completions) for more information. Here is an example of sending a system and a human message to the chat model:

```python
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat(messages)
```

<CodeOutputBlock lang="python">

```
AIMessage(content="J'aime programmer.", additional_kwargs={})
```

</CodeOutputBlock>

### `generate`
#### Batch calls, richer outputs

You can go one step further and generate completions for multiple sets of messages using `generate`. This returns an `LLMResult`, whose generations each include the generated `message` in addition to the plain text.

```python
batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
```

<CodeOutputBlock lang="python">

```
LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})
```

</CodeOutputBlock>

You can recover things like token usage from this `LLMResult`:

```python
result.llm_output
```

<CodeOutputBlock lang="python">

```
{'token_usage': {'prompt_tokens': 57,
  'completion_tokens': 20,
  'total_tokens': 77}}
```

</CodeOutputBlock>
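
Each inner list in `generations` corresponds to one set of input messages, and every `ChatGeneration` carries the generated message itself -- a small sketch of pulling the messages back out of the `result` from above:

```python
# print the French translations generated for each batch entry
for generations in result.generations:
    print(generations[0].message.content)
# J'aime programmer.
# J'aime l'intelligence artificielle.
```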

@@ -0,0 +1,97 @@

```python
import langchain
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()
```

## In Memory Cache

```python
from langchain.cache import InMemoryCache

langchain.llm_cache = InMemoryCache()

# The first time, it is not yet in cache, so it should take longer
llm.predict("Tell me a joke")
```

<CodeOutputBlock lang="python">

```
CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms
Wall time: 4.83 s


"\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"
```

</CodeOutputBlock>

```python
# The second time it is, so it goes faster
llm.predict("Tell me a joke")
```

<CodeOutputBlock lang="python">

```
CPU times: user 238 µs, sys: 143 µs, total: 381 µs
Wall time: 1.76 ms


'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
```

</CodeOutputBlock>
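
The `CPU times` / `Wall time` lines above come from running these cells in a notebook (e.g. with the `%%time` magic). Outside a notebook, a minimal sketch to see the cache effect is to time the two calls directly:

```python
import time

import langchain
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI

langchain.llm_cache = InMemoryCache()
llm = ChatOpenAI()

start = time.perf_counter()
llm.predict("Tell me a joke")  # first call hits the API
print(f"uncached: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
llm.predict("Tell me a joke")  # second call is served from the cache
print(f"cached:   {time.perf_counter() - start:.4f}s")
```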

## SQLite Cache

```bash
rm .langchain.db
```

```python
# We can do the same thing with a SQLite cache
from langchain.cache import SQLiteCache

langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
```

```python
# The first time, it is not yet in cache, so it should take longer
llm.predict("Tell me a joke")
```

<CodeOutputBlock lang="python">

```
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms


'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
```

</CodeOutputBlock>

```python
# The second time it is, so it goes faster
llm.predict("Tell me a joke")
```

<CodeOutputBlock lang="python">

```
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms


'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
```

</CodeOutputBlock>

@@ -0,0 +1,16 @@

```python
from langchain.chains import LLMChain

# `chat` and `chat_prompt` are defined in the snippets above
chain = LLMChain(llm=chat, prompt=chat_prompt)
```

```python
chain.run(input_language="English", output_language="French", text="I love programming.")
```

<CodeOutputBlock lang="python">

```
"J'adore la programmation."
```

</CodeOutputBlock>

@@ -0,0 +1,47 @@

You can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplate`s. `ChatPromptTemplate`'s `format_prompt` method returns a `PromptValue`, which you can convert to a string or to a list of `Message` objects, depending on whether you want to use the formatted value as input to an LLM or a chat model.

For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:

```python
from langchain import PromptTemplate
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
```

```python
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
```

<CodeOutputBlock lang="python">

```
AIMessage(content="J'adore la programmation.", additional_kwargs={})
```

</CodeOutputBlock>
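
The same `PromptValue` can also be rendered as a single string, which is useful if you want to feed the formatted prompt to a plain text-completion LLM instead of a chat model -- a minimal sketch, reusing the `chat_prompt` defined above:

```python
prompt_value = chat_prompt.format_prompt(
    input_language="English", output_language="French", text="I love programming."
)
print(prompt_value.to_string())
# Expected output (roughly):
# System: You are a helpful assistant that translates English to French.
# Human: I love programming.
```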

If you wanted to construct the `MessagePromptTemplate` more directly, you could create a `PromptTemplate` outside and then pass it in, e.g.:

```python
prompt = PromptTemplate(
    template="You are a helpful assistant that translates {input_language} to {output_language}.",
    input_variables=["input_language", "output_language"],
)
system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)
```

@@ -0,0 +1,59 @@

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
```
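
Once the stream finishes, the full completion is still returned as a regular message, so you can keep using it after the tokens have been printed -- a small usage note:

```python
# resp is the AIMessage returned above; its content holds the complete generated text
print(resp.content[:50])
```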

<CodeOutputBlock lang="python">

```
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
```

</CodeOutputBlock>