Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-03 19:57:35 +00:00)
Add example documentation

This commit is contained in:
parent d7cbeb4b8c
commit 8c9b7aa764

1 changed file with 19 additions and 0 deletions
@@ -188,3 +188,22 @@ vlm_response = client.chat.completions.create(
print(f"VLM Response: {vlm_response.choices[0].message.content}")
```

### Rerank Example

The following example shows how to rerank documents using an NVIDIA NIM.

```python
rerank_response = client.inference.rerank(
    model="nvidia/llama-3.2-nv-rerankqa-1b-v2",
    query="query",
    items=[
        "item_1",
        "item_2",
        "item_3",
    ],
)

for i, result in enumerate(rerank_response.data):
    print(f"{i+1}. [Index: {result.index}, Score: {result.relevance_score:.3f}]")
```
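The added snippet assumes a `client` has already been created earlier in the document. A minimal sketch of how such a client might be set up with the `llama_stack_client` SDK is shown below; the base URL is an assumed local Llama Stack endpoint and is not part of this commit.

```python
# Minimal sketch (not part of this commit): create a Llama Stack client and
# call the rerank API documented above. The base_url is an assumed local
# Llama Stack endpoint; replace it with your own deployment.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

rerank_response = client.inference.rerank(
    model="nvidia/llama-3.2-nv-rerankqa-1b-v2",
    query="What is the capital of France?",
    items=[
        "Paris is the capital of France.",
        "Llamas are members of the camelid family.",
        "Reranking orders documents by relevance to a query.",
    ],
)

# Each result exposes the index of the original item and its relevance score.
for i, result in enumerate(rerank_response.data):
    print(f"{i+1}. [Index: {result.index}, Score: {result.relevance_score:.3f}]")
```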