docs add docs on supported params

parent 03dffb29bf
commit d86abb4abe

2 changed files with 96 additions and 2 deletions

@@ -103,6 +103,93 @@ Here's how to call a ai21 model with the LiteLLM Proxy Server

</Tabs>

## Supported OpenAI Parameters

| [param](../completion/input) | type | AI21 equivalent |
|-------|-------------|------------------|
| `tools` | **Optional[list]** | `tools` |
| `response_format` | **Optional[dict]** | `response_format` |
| `max_tokens` | **Optional[int]** | `max_tokens` |
| `temperature` | **Optional[float]** | `temperature` |
| `top_p` | **Optional[float]** | `top_p` |
| `stop` | **Optional[Union[str, list]]** | `stop` |
| `n` | **Optional[int]** | `n` |
| `stream` | **Optional[bool]** | `stream` |
| `seed` | **Optional[int]** | `seed` |
| `tool_choice` | **Optional[str]** | `tool_choice` |
| `user` | **Optional[str]** | `user` |
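
Below is a minimal sketch showing several of these parameters on a single `litellm.completion` call. The key and prompt values are placeholders; `jamba-1.5-large` is one of the supported AI21 chat models listed later on this page.

```python
import os
import litellm

os.environ["AI21_API_KEY"] = "your-api-key"  # placeholder

response = litellm.completion(
    model="jamba-1.5-large",
    messages=[{"role": "user", "content": "Write a one-line haiku about rivers."}],
    max_tokens=100,    # maps to AI21 `max_tokens`
    temperature=0.7,   # maps to AI21 `temperature`
    top_p=0.9,         # maps to AI21 `top_p`
    stop=["\n\n"],     # maps to AI21 `stop`
    seed=123,          # maps to AI21 `seed`
    user="ishaan",     # maps to AI21 `user`
)
print(response.choices[0].message.content)
```
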
## Supported AI21 Parameters

| param | type | [AI21 equivalent](https://docs.ai21.com/reference/jamba-15-api-ref#request-parameters) |
|-----------|------|-------------|
| `documents` | **Optional[List[Dict]]** | `documents` |

## Passing AI21 Specific Parameters - `documents`

LiteLLM allows you to pass all AI21-specific parameters to `litellm.completion`. Here is an example of passing the `documents` parameter:

<Tabs>
<TabItem value="python" label="LiteLLM Python SDK">

```python
import litellm

# call from inside an async function
response = await litellm.acompletion(
    model="jamba-1.5-large",
    messages=[{"role": "user", "content": "what does the document say"}],
    documents=[
        {
            "content": "hello world",
            "metadata": {
                "source": "google",
                "author": "ishaan"
            }
        }
    ]
)
```

</TabItem>

<TabItem value="proxy" label="LiteLLM Proxy Server">

```python
import openai

client = openai.OpenAI(
    api_key="sk-1234",              # pass litellm proxy key, if you're using virtual keys
    base_url="http://0.0.0.0:4000"  # litellm-proxy-base url
)

response = client.chat.completions.create(
    model="my-model",
    messages=[
        {
            "role": "user",
            "content": "what does the document say"
        }
    ],
    # provider-specific params are forwarded through `extra_body`
    extra_body={
        "documents": [
            {
                "content": "hello world",
                "metadata": {
                    "source": "google",
                    "author": "ishaan"
                }
            }
        ]
    }
)

print(response)
```

</TabItem>
</Tabs>

:::tip

@@ -118,4 +205,5 @@ Here's how to call a ai21 model with the LiteLLM Proxy Server

| jamba-1.5-large | `completion('jamba-1.5-large', messages)` | `os.environ['AI21_API_KEY']` |
| j2-light | `completion('j2-light', messages)` | `os.environ['AI21_API_KEY']` |
| j2-mid | `completion('j2-mid', messages)` | `os.environ['AI21_API_KEY']` |
| j2-ultra | `completion('j2-ultra', messages)` | `os.environ['AI21_API_KEY']` |
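
Since `stream` is among the supported OpenAI parameters above, a minimal streaming sketch against one of these models might look like the following; the API key value is a placeholder.

```python
import os
import litellm

os.environ["AI21_API_KEY"] = "your-api-key"  # placeholder

response = litellm.completion(
    model="j2-ultra",
    messages=[{"role": "user", "content": "Tell me a short story"}],
    stream=True,
)
for chunk in response:
    # each chunk mimics the OpenAI streaming delta format
    print(chunk.choices[0].delta.content or "", end="")
```
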
@@ -4485,6 +4485,12 @@ async def test_completion_ai21_chat():
        user="ishaan",
        tool_choice="auto",
        seed=123,
        messages=[{"role": "user", "content": "what does the document say"}],
        documents=[
            {
                "content": "hello world",
                "metadata": {"source": "google", "author": "ishaan"},
            }
        ],
    )
    pass
|
Loading…
Add table
Add a link
Reference in a new issue