Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-24 18:24:20 +00:00

Commit 353e2ce49b (parent 493aeba498): docs
1 changed file with 14 additions and 2 deletions
@@ -14,8 +14,21 @@ For calls to your custom API base ensure:

| meta-llama/Llama-2-13b-hf | `response = completion(model="custom/meta-llama/Llama-2-13b-hf", messages=messages, api_base="https://your-custom-inference-endpoint")` |
| meta-llama/Llama-2-13b-hf | `response = completion(model="custom/meta-llama/Llama-2-13b-hf", messages=messages, api_base="https://api.autoai.dev/inference")` |
### Example Call to Custom LLM API using LiteLLM

```python
from litellm import completion

response = completion(
    model="custom/meta-llama/Llama-2-13b-hf",
    messages=[{"content": "what is custom llama?", "role": "user"}],
    temperature=0.2,
    max_tokens=10,
    api_base="https://api.autoai.dev/inference",
    request_timeout=300,
)
print("got response\n", response)
```
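The `custom/` prefix on the `model` parameter is what tells LiteLLM to route the call to your own `api_base` rather than a known provider. A rough, hedged sketch of that prefix parsing (the helper `split_provider` is hypothetical, for illustration only, not LiteLLM's actual implementation):

```python
def split_provider(model: str) -> tuple[str, str]:
    # Hypothetical helper: split the provider prefix off a model string,
    # e.g. "custom/meta-llama/Llama-2-13b-hf" -> ("custom", "meta-llama/Llama-2-13b-hf").
    # Only the first "/" separates provider from model name.
    provider, _, name = model.partition("/")
    return provider, name

provider, name = split_provider("custom/meta-llama/Llama-2-13b-hf")
print(provider, name)  # → custom meta-llama/Llama-2-13b-hf
```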

#### Setting your Custom API endpoint

#### Input

Inputs to your custom LLM API bases should follow this format:

```python
@@ -34,7 +47,6 @@ resp = requests.post(

)
```
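The body of the `requests.post(...)` call above is elided in this diff hunk, so here is a hedged sketch of assembling a JSON request body for a custom inference endpoint. The field names (`model`, `prompt`, `max_tokens`, `temperature`) are illustrative assumptions, not a schema guaranteed by LiteLLM or any particular endpoint:

```python
import json

# Hypothetical request body for a custom inference endpoint; the keys below
# are assumptions for illustration, not a schema defined by LiteLLM.
payload = {
    "model": "meta-llama/Llama-2-13b-hf",
    "prompt": "what is custom llama?",
    "max_tokens": 10,
    "temperature": 0.2,
}

# Serialize to the JSON string that would be sent as the POST body
# (requests can also do this itself via the `json=` keyword argument).
body = json.dumps(payload)
print(body)
```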
#### Output

Outputs from your custom LLM API bases should follow this format:

```
{