forked from phoenix/litellm-mirror

docs(docs): cleanup

parent 9eb8524c09
commit c09104e577

2 changed files with 4 additions and 2 deletions
@@ -53,6 +53,9 @@ By default, LiteLLM raises an exception if the openai param being passed in isn't supported.
 
 To drop the param instead, set `litellm.drop_params = True`.
 
+**For function calling:**
+
+Add to prompt for non-openai models, set: `litellm.add_function_to_prompt = True`.
 :::
 
 ## Provider-specific Params
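For context, the hunk above documents two litellm toggles. Below is a minimal sketch of how they combine in practice; the model name, messages, and function schema are illustrative placeholders, not part of the commit:

```python
import litellm
from litellm import completion

# Drop unsupported OpenAI params instead of raising an exception.
litellm.drop_params = True

# For non-openai models, serialize function definitions into the prompt
# so function calling still works without native tool support.
litellm.add_function_to_prompt = True

# Hypothetical call: the model, messages, and schema here are placeholders.
response = completion(
    model="claude-2",
    messages=[{"role": "user", "content": "What's the weather in SF?"}],
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        }
    ],
)
```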
@@ -30,10 +30,9 @@ In order to use litellm to call a hosted vllm server add the following to your completion call
 import litellm
 
 response = completion(
-    model="facebook/opt-125m", # pass the vllm model name
+    model="openai/facebook/opt-125m", # pass the vllm model name
     messages=messages,
     api_base="https://hosted-vllm-api.co",
-    custom_llm_provider="openai",
     temperature=0.2,
     max_tokens=80)
 
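This hunk swaps the removed `custom_llm_provider="openai"` argument for an `openai/` prefix on the model name. Here is a self-contained version of the updated snippet, with the `messages` list and the `completion` import filled in as assumptions (the hunk references both without defining them):

```python
import litellm
from litellm import completion

# Placeholder conversation; the surrounding doc defines `messages` elsewhere.
messages = [{"role": "user", "content": "Hello, how are you?"}]

# The "openai/" prefix routes the request through litellm's OpenAI-compatible
# client, which is what the removed custom_llm_provider="openai" arg did.
response = completion(
    model="openai/facebook/opt-125m",  # pass the vllm model name
    messages=messages,
    api_base="https://hosted-vllm-api.co",
    temperature=0.2,
    max_tokens=80,
)
print(response)
```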