forked from phoenix/litellm-mirror
(docs) proxy pass headers
This commit is contained in:
parent 9d4cbf1d22
commit ebcd29b1ec
1 changed file with 1 addition and 1 deletion
@@ -496,7 +496,7 @@ completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"rol
 print(completion.choices[0].message.content)
 ```

-### Save Model-specific params (API Base, API Keys, Temperature, etc.)
+### Save Model-specific params (API Base, API Keys, Temperature, Headers etc.)
 You can use the config to save model-specific information like api_base, api_key, temperature, max_tokens, etc.

 **Step 1**: Create a `config.yaml` file
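The docs section this diff touches describes saving model-specific params (api_base, api_key, temperature, max_tokens, headers) in the proxy config. A minimal sketch of what such a `config.yaml` could look like; the model alias, deployment name, and endpoint below are placeholder assumptions, not values from this commit:

```yaml
model_list:
  - model_name: gpt-3.5-turbo              # alias that clients send in requests
    litellm_params:
      model: azure/my-gpt35-deployment     # hypothetical backend deployment
      api_base: https://my-endpoint.openai.azure.com/   # placeholder endpoint
      api_key: "os.environ/AZURE_API_KEY"  # read key from an env var
      temperature: 0.2                     # model-specific default params
      max_tokens: 256
```

Keeping these values in the config means callers only need the `model_name` alias; the proxy fills in the credentials and defaults per model.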