diff --git a/docs/my-website/docs/providers/vertex.md b/docs/my-website/docs/providers/vertex.md
index f87597046..4b95ff2a8 100644
--- a/docs/my-website/docs/providers/vertex.md
+++ b/docs/my-website/docs/providers/vertex.md
@@ -775,7 +775,6 @@ vertex_ai_location = "your-vertex-location" # can also set this as os.environ["V
 response = completion(
     model="vertex_ai/" + model,
     messages=[{"role": "user", "content": "hi"}],
-    temperature=0.7,
     vertex_ai_project=vertex_ai_project,
     vertex_ai_location=vertex_ai_location,
 )
@@ -828,6 +827,86 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
+
+## Mistral API
+
+[**Supported OpenAI Params**](https://github.com/BerriAI/litellm/blob/e0f3cd580cb85066f7d36241a03c30aa50a8a31d/litellm/llms/openai.py#L137)
+
+| Model Name | Function Call |
+|------------------|--------------------------------------|
+| mistral-large@2407 | `completion('vertex_ai/mistral-large@2407', messages)` |
+
+### Usage
+
+<Tabs>
+<TabItem value="sdk" label="SDK">
+
+```python
+from litellm import completion
+import os
+
+os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = ""
+
+model = "mistral-large@2407"
+
+vertex_ai_project = "your-vertex-project" # can also set this as os.environ["VERTEXAI_PROJECT"]
+vertex_ai_location = "your-vertex-location" # can also set this as os.environ["VERTEXAI_LOCATION"]
+
+response = completion(
+    model="vertex_ai/" + model,
+    messages=[{"role": "user", "content": "hi"}],
+    vertex_ai_project=vertex_ai_project,
+    vertex_ai_location=vertex_ai_location,
+)
+print("\nModel Response", response)
+```
+
+</TabItem>
+<TabItem value="proxy" label="PROXY">
+
+**1. Add to config**
+
+```yaml
+model_list:
+  - model_name: vertex-mistral
+    litellm_params:
+      model: vertex_ai/mistral-large@2407
+      vertex_ai_project: "my-test-project"
+      vertex_ai_location: "us-east1"
+  - model_name: vertex-mistral
+    litellm_params:
+      model: vertex_ai/mistral-large@2407
+      vertex_ai_project: "my-test-project"
+      vertex_ai_location: "us-west1"
+```
+
+**2. Start proxy**
+
+```bash
+litellm --config /path/to/config.yaml
+
+# RUNNING at http://0.0.0.0:4000
+```
+
+**3. Test it!**
+
+Send a request using the `model_name` set in your config (here, `vertex-mistral`) as the `model`:
+
+```bash
+curl --location 'http://0.0.0.0:4000/chat/completions' \
+    --header 'Authorization: Bearer sk-1234' \
+    --header 'Content-Type: application/json' \
+    --data '{
+      "model": "vertex-mistral",
+      "messages": [
+        {
+          "role": "user",
+          "content": "what llm are you"
+        }
+      ]
+    }'
+```
+
+</TabItem>
+</Tabs>
+
 ## Model Garden
 | Model Name | Function Call |
 |------------------|--------------------------------------|