forked from phoenix-oss/llama-stack-mirror
fix: update getting_started structured decoding cell (#1523)
# What does this PR do?

- Together's inference only supports 3.1 for structured decoding

[//]: # (If resolving an issue, uncomment and update the line below)
[//]: # (Closes #[issue-number])

## Test Plan

```
pytest -v -s --nbval-lax ./docs/getting_started.ipynb
```

[//]: # (## Documentation)
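For context, the cell this commit touches asks the model to extract facts about Michael Jordan into JSON and validates the result against a schema. A minimal, self-contained sketch of the validation half of that pattern (the response string below is hypothetical; in the real notebook it comes from `client.inference.completion(...)` with structured decoding, and the schema is a pydantic `BaseModel` rather than a dataclass):

```python
import json
from dataclasses import dataclass


@dataclass
class PlayerInfo:
    """Schema the structured-decoding output is expected to match."""
    name: str
    year_born: int
    team: str
    year_retired: int


# Hypothetical model output for the prompt used in the notebook cell;
# the real string is returned by the inference call.
raw = (
    '{"name": "Michael Jordan", "year_born": 1963, '
    '"team": "Chicago Bulls", "year_retired": 2003}'
)

# Parse and validate: unknown or missing keys raise TypeError here,
# which is the failure mode structured decoding is meant to prevent.
info = PlayerInfo(**json.loads(raw))
print(info)
```

With structured decoding constrained to this schema, the parse step should never fail; without it, the model may emit extra prose around the JSON.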
parent 8814111da1
commit 23278d1e5d

1 changed file with 1 addition and 2 deletions
docs/getting_started.ipynb

```diff
@@ -1267,7 +1267,6 @@
 }
 ],
 "source": [
-"# NBVAL_SKIP\n",
 "from pydantic import BaseModel\n",
 "\n",
 "\n",
@@ -1279,7 +1278,7 @@
 "\n",
 "user_input = \"Michael Jordan was born in 1963. He played basketball for the Chicago Bulls. He retired in 2003. Extract this information into JSON for me. \"\n",
 "response = client.inference.completion(\n",
-" model_id=model_id,\n",
+" model_id=\"meta-llama/Llama-3.1-8B-Instruct\",\n",
 " content=user_input,\n",
 " stream=False,\n",
 " sampling_params={\n",
```
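After the change, the cell hard-codes a Llama 3.1 model instead of the notebook-level `model_id` variable, since Together's structured decoding only supports 3.1. A sketch of the request the fixed cell builds (the `client` object and the `sampling_params` / `response_format` arguments from the surrounding notebook are omitted here):

```python
# The prompt from the notebook cell.
user_input = (
    "Michael Jordan was born in 1963. He played basketball for the "
    "Chicago Bulls. He retired in 2003. Extract this information into JSON for me. "
)

# Keyword arguments for the completion call. The hard-coded model_id is
# the change made in this commit (it was previously `model_id=model_id`).
request = {
    "model_id": "meta-llama/Llama-3.1-8B-Instruct",
    "content": user_input,
    "stream": False,
}

# In the notebook this is sent to a running llama-stack server:
# response = client.inference.completion(**request, sampling_params={...})
print(request["model_id"])
```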