Commit graph

40 commits

Author SHA1 Message Date
Jash Gulabrai
40e2c97915
feat: Add Nvidia e2e beginner notebook and tool calling notebook (#1964)
# What does this PR do?
This PR contains two sets of notebooks that serve as reference material
for developers getting started with Llama Stack using the NVIDIA
Provider. Developers should be able to execute these notebooks
end-to-end, pointing to their NeMo Microservices deployment.
1. `beginner_e2e/`: Notebook that walks through a beginner end-to-end
workflow that covers creating datasets, running inference, customizing
and evaluating models, and running safety checks.
2. `tool_calling/`: Notebook that is ported over from the [Data Flywheel
& Tool Calling
notebook](https://github.com/NVIDIA/GenerativeAIExamples/tree/main/nemo/data-flywheel)
that is referenced in the NeMo Microservices docs. I updated the
notebook to use the Llama Stack client wherever possible, and added
relevant instructions.

## Test Plan
- Both notebook folders contain READMEs with prerequisites. To manually
test these notebooks, you'll need to have a deployment of the NeMo
Microservices Platform and update the `config.py` file with your
deployment's information.
- I've run through these notebooks manually end-to-end to verify each
step works.

---------

Co-authored-by: Jash Gulabrai <jgulabrai@nvidia.com>
2025-06-16 11:29:01 -04:00
Mark Campbell
b2adaa3f60
docs: fix evals notebook preview (#2277)
# What does this PR do?
Fixes the preview of the Evals Benchmark Notebook

## Explanation 
I took the original notebook, opened it in Google Colab and downloaded
it again from Colab.
I then replaced the original with the new fixed version 
cc: @leseb 

Closes #2142 

## Test Plan
You can view the nb preview from my fork
https://github.com/Bobbins228/llama-stack/blob/fix-evals-nb/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb
2025-05-27 15:18:20 +02:00
Sébastien Han
43e623eea6
chore: remove last instances of code-interpreter provider (#2143)
Was removed in https://github.com/meta-llama/llama-stack/pull/2087

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-05-12 10:54:43 -07:00
Christian Zaccaria
feb9eb8b0d
docs: Remove datasets.rst and fix llama-stack build commands (#2061)
# Issue
Closes #2073 

# What does this PR do?
- Removes `datasets.rst` from the list of document URLs, as it no
longer exists in torchtune. Referenced PR:
https://github.com/pytorch/torchtune/pull/1781

- Added a step to run `uv sync`. Previously, I would get the following
error:

```
➜  llama-stack git:(remove-deprecated-rst) uv venv --python 3.10
source .venv/bin/activate
Using CPython 3.10.13 interpreter at: /usr/bin/python3.10
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate
(llama-stack) ➜  llama-stack git:(remove-deprecated-rst) INFERENCE_MODEL=llama3.2:3b llama stack build --template ollama --image-type venv --run
zsh: llama: command not found...

```

## Test Plan

To test: run through the `rag_agent` example in the `detailed_tutorial.md`
file.

2025-05-06 09:51:20 -07:00
Daniele Martinoli
ba14552a32
fix: Misleading code in Llama Stack Benchmark Evals notebook (#1774)
# What does this PR do?
Closes #1773

Signed-off-by: Daniele Martinoli <dmartino@redhat.com>
2025-03-25 07:04:47 -07:00
Xi Yan
094eb6a5ae
feat(rag): entire document context with attachments (#1763)
# What does this PR do?
**What**
Instead of ad-hoc creating a vector DB and chunking when documents are sent
as attachments to an agent turn, we pass the raw text of the document
directly into the messages as user context, and let the model perform
the summarization itself.

This removes the magic behaviour and yields better performance than the
existing approach.
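
For illustration, a minimal client-side sketch of the new behavior, assuming the `llama_stack_client` Agent API; the base URL, model, and document file below are placeholders rather than values from the notebook:

```python
# Hedged sketch: attach the whole document as raw context to the turn instead of
# registering a vector DB and chunking it. Base URL, model, and file are assumptions.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent

client = LlamaStackClient(base_url="http://localhost:8321")
agent = Agent(
    client,
    model="meta-llama/Llama-3.1-8B-Instruct",
    instructions="You are a helpful assistant.",
)
session_id = agent.create_session("raw-context-demo")

doc_text = open("report.txt").read()  # hypothetical local document
turn = agent.create_turn(
    messages=[{"role": "user", "content": "Summarize the attached report."}],
    documents=[{"content": doc_text, "mime_type": "text/plain"}],  # raw text, no chunking
    session_id=session_id,
    stream=False,
)
print(turn.output_message.content)
```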

**Improved Performance**
- RAG lifecycle notebook
  - Model: 0.3 factuality score
  - (+ websearch) Agent: 0.44 factuality score
  - (+ vector db) Agent: 0.3 factuality score
  - (+ raw context) Agent: 0.6 factuality score

Closes https://github.com/meta-llama/llama-stack/issues/1478

## Test Plan
- [NEW] added a section in the RAG lifecycle notebook that shows the improved performance

<img width="840" alt="image"
src="https://github.com/user-attachments/assets/a0c4e816-809a-41c0-9124-89825983e3f5"
/>


2025-03-23 16:57:48 -07:00
ehhuang
ea6a4a14ce
feat(api): simplify client imports (#1687)
# What does this PR do?
closes #1554 

## Test Plan
test_agents.py
2025-03-20 10:15:49 -07:00
Xi Yan
5287b437ae
feat(api): (1/n) datasets api clean up (#1573)
## PR Stack
- https://github.com/meta-llama/llama-stack/pull/1573
- https://github.com/meta-llama/llama-stack/pull/1625
- https://github.com/meta-llama/llama-stack/pull/1656
- https://github.com/meta-llama/llama-stack/pull/1657
- https://github.com/meta-llama/llama-stack/pull/1658
- https://github.com/meta-llama/llama-stack/pull/1659
- https://github.com/meta-llama/llama-stack/pull/1660

**Client SDK**
- https://github.com/meta-llama/llama-stack-client-python/pull/203

**CI**
- 1391130488
<img width="1042" alt="image"
src="https://github.com/user-attachments/assets/69636067-376d-436b-9204-896e2dd490ca"
/>
-- the test_rag_agent_with_attachments is flaky and not related to this
PR

## Doc
<img width="789" alt="image"
src="https://github.com/user-attachments/assets/b88390f3-73d6-4483-b09a-a192064e32d9"
/>


## Client Usage
```python
client.datasets.register(
    source={
        "type": "uri",
        "uri": "lsfs://mydata.jsonl",
    },
    schema="jsonl_messages",
    # optional 
    dataset_id="my_first_train_data"
)

# quick prototype debugging
client.datasets.register(
    data_reference={
        "type": "rows",
        "rows": [
                "messages": [...],
        ],
    },
    schema="jsonl_messages",
)
```

## Test Plan
- CI:
1387805545

```
LLAMA_STACK_CONFIG=fireworks pytest -v tests/integration/datasets/test_datasets.py
```

```
LLAMA_STACK_CONFIG=fireworks pytest -v tests/integration/scoring/test_scoring.py
```

```
pytest -v -s --nbval-lax ./docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb
```
2025-03-17 16:55:45 -07:00
Xi Yan
b8c519ba11
feat: rag eval lifecycle notebook (#1458)
# What does this PR do?

- Add RAG eval lifecycle notebook
- Closes https://github.com/meta-llama/llama-stack/issues/1113

- Best reviewed in
https://github.com/meta-llama/llama-stack/blob/rag_eval_notebook/docs/notebooks/Llama_Stack_RAG_Lifecycle.ipynb

## Test Plan
Run notebook

2025-03-07 10:41:50 -08:00
ehhuang
ca2910d27a
docs: update test_agents to use new Agent SDK API (#1402)
# Summary:
new Agent SDK API is added in
https://github.com/meta-llama/llama-stack-client-python/pull/178

Update docs and test to reflect this.

Closes https://github.com/meta-llama/llama-stack/issues/1365
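
For reference, a hedged sketch of the new-style Agent construction the docs and tests were moved to (parameter names are assumptions based on the description, not copied from the client PR):

```python
# Hedged sketch of the new Agent SDK API: the Agent is configured directly with
# keyword arguments rather than a separate AgentConfig object. Names are assumptions.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent

client = LlamaStackClient(base_url="http://localhost:8321")
agent = Agent(
    client,
    model="meta-llama/Llama-3.1-8B-Instruct",
    instructions="You are a helpful assistant.",
)
session_id = agent.create_session("sdk-api-demo")
turn = agent.create_turn(
    messages=[{"role": "user", "content": "Hello!"}],
    session_id=session_id,
    stream=False,
)
print(turn.output_message.content)
```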

# Test Plan:
```bash
py.test -v -s --nbval-lax ./docs/getting_started.ipynb

LLAMA_STACK_CONFIG=fireworks \
   pytest -s -v tests/integration/agents/test_agents.py \
  --safety-shield meta-llama/Llama-Guard-3-8B --text-model meta-llama/Llama-3.1-8B-Instruct
```
2025-03-06 15:21:12 -08:00
Xi Yan
bcc5370d2e
feat: effective agent workflow notebook (#1372)
# What does this PR do?

- Add Notebook: Build and Monitor Agent Workflows with Llama Stack +
Anthropic's Best Practice
- Better reviewed in:
https://github.com/meta-llama/llama-stack/blob/effective_agents/docs/notebooks/Llama_Stack_Agent_Workflows.ipynb
- Closes https://github.com/meta-llama/llama-stack/issues/1371

## Test Plan

```
pytest -v -s --nbval-lax ./docs/notebooks/Llama_Stack_Agent_Workflows.ipynb
```

<img width="671" alt="image"
src="https://github.com/user-attachments/assets/e5a7e312-ab3d-406a-a0f8-3b1d836e7b46"
/>


2025-03-05 11:53:25 -08:00
Xi Yan
0d18274d34
chore: update hf source for eval notebook (#1403)
# What does this PR do?
- update llamastack/evals to llamastack/simpleqa

## Test Plan
```
pytest -v -s --nbval-lax ./docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb
```

2025-03-05 09:38:30 -08:00
Xi Yan
e9a37bad63
chore: rename task_config to benchmark_config (#1397)
# What does this PR do?

- This was missed in the previous deprecation:
https://github.com/meta-llama/llama-stack/pull/1186
- Part of https://github.com/meta-llama/llama-stack/issues/1396
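
For illustration, a hedged sketch of the renamed argument in a client call (the benchmark id, model, and config fields are illustrative assumptions):

```python
# Hedged sketch: the eval request now takes `benchmark_config` where it previously
# took `task_config`. The benchmark id, model, and config fields are assumptions.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")
job = client.eval.run_eval(
    benchmark_id="meta-reference-simpleqa",  # hypothetical benchmark id
    benchmark_config={                       # was: task_config=...
        "eval_candidate": {
            "type": "model",
            "model": "meta-llama/Llama-3.1-8B-Instruct",
            "sampling_params": {"strategy": {"type": "greedy"}},
        },
    },
)
print(job)
```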

## Test Plan
```
pytest -v -s --nbval-lax ./llama-stack/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb 
```

2025-03-04 12:44:04 -08:00
Ashwin Bharambe
c54164556a
fix: update notebooks to avoid using the nutsy --image-name __system__ thing (#1308)
The `--image-name __system__` thing was a hack, and a bad one at that.
The actual intent was to automatically detect the notebook environment so
we could avoid unnecessarily confusing things on the `llama stack build`
command line. But I failed, which led us to use the `__system__` backup.

Let's just do the simple thing.

Note that I haven't changed `build_venv.sh` for now (it still honors the
`__system__` special name; it's just that no new user should use it).

## Test Plan

Open the notebooks from this branch in Colab (see example url below) and
ensure the builds work.


https://colab.research.google.com/github/meta-llama/llama-stack/blob/foo/docs/getting_started.ipynb

In the notebook, install llama-stack from this branch directly using:

```
!pip install -U https://github.com/meta-llama/llama-stack/archive/refs/heads/foo.zip
```

Verify that `!UV_SYSTEM_PYTHON=1 llama stack build --template together
--image-type venv` afterwards succeeds and the library client
initialization also works.
2025-02-27 16:39:04 -08:00
Botao Chen
9a3db9a290
feat: update the post training notebook (#1280)
## What does this PR do?
- add an 'Open in Colab' badge that links to the notebook
- update the `pip install llama-stack` package installation section

## test
preview
 
<img width="938" alt="Screenshot 2025-02-26 at 1 25 34 PM"
src="https://github.com/user-attachments/assets/951b7f0f-a15e-4618-ad02-07c77c65a5ad"
/>

<img width="934" alt="Screenshot 2025-02-26 at 1 25 38 PM"
src="https://github.com/user-attachments/assets/de872530-84b9-4f8b-ae93-06aa7d2e5bd8"
/>
2025-02-26 13:39:16 -08:00
Reid
abfc4b3bce
fix: the pre-commit new line issue (#1272)
# What does this PR do?


3783861877

```
diff --git a/docs/notebooks/Alpha_Llama_Stack_Post_Training.ipynb b/docs/notebooks/Alpha_Llama_Stack_Post_Training.ipynb
index c55c8da..3979088 100644
--- a/docs/notebooks/Alpha_Llama_Stack_Post_Training.ipynb
+++ b/docs/notebooks/Alpha_Llama_Stack_Post_Training.ipynb
@@ -6431,4 +6431,4 @@
   },
   "nbformat": 4,
   "nbformat_minor": 0
-}
\ No newline at end of file
+}
Error: Process completed with exit code 1.
```


---------

Signed-off-by: reidliu <reid201711@gmail.com>
Co-authored-by: reidliu <reid201711@gmail.com>
2025-02-26 04:25:41 -05:00
Botao Chen
123fb9eb24
feat: [post training] support save hf safetensor format checkpoint (#845)
## context

Today, Llama Stack only supports inference / eval of a finetuned
checkpoint with meta-reference as the inference provider. This is
sub-optimal since meta-reference is pretty slow.

Our vision is that developers can run inference / eval on a finetuned
checkpoint produced by the post-training APIs with any inference provider
on the stack. To achieve this, we'd like to define a unified output
checkpoint format for post-training providers, so that all inference
providers can respect that format for customized model inference.

By spot-checking how
[ollama](https://github.com/ollama/ollama/blob/main/docs/import.md) and
[fireworks](https://docs.fireworks.ai/models/uploading-custom-models) do
inference on a customized model, we defined the output checkpoint format
as /adapter/adapter_config.json and /adapter/adapter_model.safetensors
(since we only support LoRA post-training for now, we begin with an
adapter-only checkpoint).
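
As a concrete illustration of that layout, a small hedged check; the output directory below is a hypothetical path, not one produced by the stack:

```python
# Hedged sketch: verify that a post-training output directory matches the unified
# adapter-only layout described above. The checkpoint path is a hypothetical example.
from pathlib import Path

checkpoint_dir = Path("output/my_finetuned_checkpoint")  # hypothetical path
adapter_dir = checkpoint_dir / "adapter"

for required_file in ("adapter_config.json", "adapter_model.safetensors"):
    path = adapter_dir / required_file
    print(f"{path}: {'found' if path.exists() else 'MISSING'}")
```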

## test
we kick off a post training job and configured checkpoint format as
'huggingface'. Output files
![Screenshot 2025-02-24 at 11 54
33 PM](https://github.com/user-attachments/assets/fb45a5d7-f288-4d30-82f8-b7a8da2859be)



We did a proof of concept with Ollama to see if it can run inference on our
finetuned checkpoint:
1. Create a Modelfile like

<img width="799" alt="Screenshot 2025-01-22 at 5 04 18 PM"
src="https://github.com/user-attachments/assets/7fca9ac3-a294-44f8-aab1-83852c600609"
/>

2. Create a customized model with `ollama create llama_3_2_finetuned`
and run inference successfully.

![Screenshot 2025-02-24 at 11 55
17 PM](https://github.com/user-attachments/assets/1abe7c52-c6a7-491a-b07c-b7a8e3fd1ddd)


This is just a proof of concept with the Ollama command line. As a next
step, we'd like to wrap the customized-model loading / inference logic in
the inference provider implementation.
2025-02-25 23:29:08 -08:00
Hardik Shah
30f79fafcb
fix: Update Llama_Stack_Benchmark_Evals.ipynb (#1246)
Update eval notebook to use `--image-name __system__`
2025-02-24 18:22:42 -08:00
Xi Yan
ea1faae50e
chore!: deprecate eval/tasks (#1186)
# What does this PR do?
- Fully deprecate eval/tasks

Closes #1088 

NOTE: this will be a breaking change. We introduced the new API in 0.1.3.

The notebook has been updated to use the new endpoints.

## Test Plan
```
pytest -v -s --nbval-lax ./docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb 
```
<img width="611" alt="image"
src="https://github.com/user-attachments/assets/79f6efe1-81ba-494e-bf36-1fc0c2b9bc6f"
/>



cc @SLR722  for awareness

2025-02-20 14:06:21 -08:00
Xi Yan
8b655e3cd2
fix!: update eval-tasks -> benchmarks (#1032)
# What does this PR do?

- Update `/eval-tasks` to `/benchmarks` (see the sketch below)
- ⚠️ Remove the differentiation between `app` vs. `benchmark` eval task
config. Now we only have `BenchmarkConfig`. The overloaded `benchmark`
is confusing and does not add any value. Backward compatibility is kept,
as the "type" is not used anywhere.
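
For illustration, a hedged sketch of the renamed surface from the client side (the ids and scoring function name are illustrative assumptions):

```python
# Hedged sketch: what used to be registered via /eval-tasks is now registered via
# /benchmarks. The ids and the scoring function name are assumptions.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")
client.benchmarks.register(
    benchmark_id="meta-reference-mmlu",  # hypothetical id
    dataset_id="mmlu",                   # hypothetical pre-registered dataset
    scoring_functions=["basic::regex_parser_multiple_choice_answer"],
)
print([b.identifier for b in client.benchmarks.list()])
```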

## Test Plan
- This change is backward compatible 
- Run notebook test with

```
pytest -v -s --nbval-lax ./docs/getting_started.ipynb
pytest -v -s --nbval-lax ./docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb
```

<img width="846" alt="image"
src="https://github.com/user-attachments/assets/d2fc06a7-593a-444f-bc1f-10ab9b0c843d"
/>




---------

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
Signed-off-by: Ben Browning <bbrownin@redhat.com>
Signed-off-by: Sébastien Han <seb@redhat.com>
Signed-off-by: reidliu <reid201711@gmail.com>
Co-authored-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
Co-authored-by: Ben Browning <ben324@gmail.com>
Co-authored-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com>
Co-authored-by: reidliu <reid201711@gmail.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
2025-02-13 16:40:58 -08:00
Xi Yan
62cd3c391e notebook point to github as source of truth 2025-02-03 15:08:25 -08:00
Dinesh Yeduguru
d0be9288a3
Llama_Stack_Building_AI_Applications.ipynb -> getting_started.ipynb (#854)
2025-01-23 12:04:06 -08:00
Ashwin Bharambe
35c71d5bbe
Update OpenAPI generator to output discriminator (#848)
oneOf should have discriminators so Stainless can generate better code

## Test Plan

Going to generate the SDK now and check.
2025-01-22 22:15:23 -08:00
Ashwin Bharambe
f3d8864c36 Rename builtin::memory -> builtin::rag 2025-01-22 20:22:51 -08:00
Ashwin Bharambe
494e969f8d add a bunch of NBVAL SKIPs to unblock ugh 2025-01-22 15:28:45 -08:00
Ashwin Bharambe
82d942b501 Foo 2025-01-22 13:58:17 -08:00
Ashwin Bharambe
55d01339c2 Update notebook 2025-01-22 13:31:11 -08:00
Dinesh Yeduguru
7a4b382ae9
add section for mcp tool usage in notebook (#831)
# What does this PR do?

Adds a section to the notebook on how to use tools hosted in an MCP server.
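
For illustration, a hedged sketch of the kind of setup the new section covers (the MCP endpoint, toolgroup id, and model are illustrative assumptions):

```python
# Hedged sketch: register a toolgroup backed by an MCP server and hand it to an
# agent. The endpoint URI and toolgroup id are assumptions, not from the notebook.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.types.agent_create_params import AgentConfig

client = LlamaStackClient(base_url="http://localhost:8321")
client.toolgroups.register(
    toolgroup_id="mcp::filesystem",                     # hypothetical id
    provider_id="model-context-protocol",
    mcp_endpoint={"uri": "http://localhost:8000/sse"},  # hypothetical MCP server
)

agent_config = AgentConfig(
    model="meta-llama/Llama-3.1-8B-Instruct",
    instructions="You are a helpful assistant.",
    toolgroups=["mcp::filesystem"],
    enable_session_persistence=False,
)
agent = Agent(client, agent_config)
```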


![Screenshot 2025-01-21 at 11 05
39 AM](https://github.com/user-attachments/assets/23e900f1-e2a7-4a46-be9b-13642753dca1)
Notebook:
https://colab.research.google.com/drive/1hBKX01NlG6p2BUrBU0ynwIlWjXQRxc3k?usp=sharing

Rendered notebook on this branch:
https://github.com/meta-llama/llama-stack/blob/mcp-notebook/docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb
2025-01-21 13:10:42 -08:00
Yuan Tang
6da3053c0e
More generic image type for OCI-compliant container technologies (#802)
It's a more generic term, applicable to alternatives to Docker such
as Podman or other OCI-compliant technologies.

---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-01-17 16:37:42 -08:00
Xi Yan
c2a072911d
fix eval notebook & add test to workflow (#803) 2025-01-16 23:11:21 -08:00
Hardik Shah
f1faa9c924 pop fix 2025-01-16 14:09:59 -08:00
Dinesh Yeduguru
fcd1a57429 update notebook 2025-01-16 14:00:48 -08:00
Hardik Shah
74e4d520ac un-skip telemetry cells in notebook 2025-01-16 11:54:25 -08:00
Hardik Shah
17fd2d2fd0
Make notebook testable (#780)
# What does this PR do?

This PR updates the notebook so it can be run under pytest using a package
called `nbval`.

## Test Plan
```
pytest -v -s --nbval-lax  docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb

=================================== test session starts ====================================
platform linux -- Python 3.10.16, pytest-8.3.4, pluggy-1.5.0 -- /home/hjshah/.conda/envs/nbeval/bin/python
cachedir: .pytest_cache
rootdir: /home/hjshah/git/llama-stack
configfile: pyproject.toml
plugins: nbval-0.11.0, anyio-4.8.0
collected 20 items                                                                         

docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 0 SKIPPED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 1 SKIPPED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 2 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 3 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 4 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 5 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 6 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 7 SKIPPED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 8 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 9 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 10 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 11 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 12 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 13 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 14 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 15 SKIPPED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 16 PASSED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 17 SKIPPED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 18 SKIPPED
docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb::Cell 19 PASSED

========================= 14 passed, 6 skipped in 89.69s (0:01:29) =========================
```

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
2025-01-15 19:28:17 -08:00
Hardik Shah
a51c8b4efc
Convert SamplingParams.strategy to a union (#767)
# What does this PR do?

Cleans up how we provide sampling params. Earlier, strategy was an enum
and all params (top_p, temperature, top_k) across all strategies were
grouped. We now have a strategy union object with each strategy (greedy,
top_p, top_k) having its corresponding params.
Earlier, 
```
class SamplingParams:
    strategy: Enum  # one of: greedy, top_p, top_k
    # top_p, temperature, top_k and other params, grouped together for all strategies
```
However, the `strategy` field was not actually used by any provider, which
made it hard to know the exact sampling behavior purely from the params:
you could pass temperature, top_p, and top_k, but how a provider would
interpret them was not clear.

Hence we introduced a union where each strategy and its relevant params
are grouped together, avoiding this confusion.
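
For illustration, a hedged sketch of the new request shape (the wire-format field names are assumptions based on the description above):

```python
# Hedged sketch: the strategy is now a tagged union carrying only its own params,
# instead of one flat bag of temperature/top_p/top_k. Field names are assumptions.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello"}],
    sampling_params={
        "strategy": {"type": "top_p", "temperature": 0.7, "top_p": 0.9},
        "max_tokens": 256,
    },
)
print(response.completion_message.content)
```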

All providers, tests, notebooks, the README, and other places that used
sampling params have been updated to the new format.
   

## Test Plan
`pytest llama_stack/providers/tests/inference/groq/test_groq_utils.py`
// inference on ollama, fireworks and together 
`with-proxy pytest -v -s -k "ollama"
--inference-model="meta-llama/Llama-3.1-8B-Instruct"
llama_stack/providers/tests/inference/test_text_inference.py `
// agents on fireworks 
`pytest -v -s -k 'fireworks and create_agent'
--inference-model="meta-llama/Llama-3.1-8B-Instruct"
llama_stack/providers/tests/agents/test_agents.py
--safety-shield="meta-llama/Llama-Guard-3-8B"`

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [X] Ran pre-commit to handle lint / formatting issues.
- [X] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [X] Updated relevant documentation.
- [X] Wrote necessary unit or integration tests.

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
2025-01-15 05:38:51 -08:00
Dinesh Yeduguru
6964510dc1
update notebook to use new tool defs (#745)
# What does this PR do?

Update notebook for new tool defs
2025-01-13 15:07:15 -08:00
Dinesh Yeduguru
a5c57cd381
agents to use tools api (#673)
# What does this PR do?

PR #639 introduced the Tools API and the ability to invoke tools through
the API just like any other resource. This PR changes the Agents to start
using the Tools API to invoke tools. Major changes include:
1) Ability to specify tool groups with AgentConfig
2) Agent gets the corresponding tool definitions for the specified tools
and pass along to the model
3) Attachments are now named Documents and their behavior is mostly
unchanged from the user's perspective
4) You can specify args that get injected into a tool call through the
Agent config (see the sketch after this list). This is especially useful
for the memory tool, where you want the tool to operate on a specific
memory bank.
5) You can also register tool groups with args, which lets the agent
inject these as well into the tool call.
6) All tests have been migrated to use new tools API and fixtures
including client SDK tests
7) Telemetry just works with tools API because of our trace protocol
decorator
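
For illustration, a hedged sketch of points (1), (4), and (5) above (field and argument names are assumptions, not copied from the tests):

```python
# Hedged sketch of (1), (4) and (5): tool groups are listed in the agent config,
# and per-toolgroup args (e.g. a memory bank id) are injected into the tool calls.
# Field and argument names here are assumptions, not copied from the tests.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.types.agent_create_params import AgentConfig

client = LlamaStackClient(base_url="http://localhost:8321")
agent_config = AgentConfig(
    model="meta-llama/Llama-3.1-8B-Instruct",
    instructions="You are a helpful assistant.",
    toolgroups=[
        "builtin::websearch",
        {"name": "builtin::memory", "args": {"memory_bank_ids": ["my_docs_bank"]}},
    ],
    enable_session_persistence=False,
)
agent = Agent(client, agent_config)
```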


## Test Plan
```
pytest -s -v -k fireworks llama_stack/providers/tests/agents/test_agents.py  \
   --safety-shield=meta-llama/Llama-Guard-3-8B \
   --inference-model=meta-llama/Llama-3.1-8B-Instruct

pytest -s -v -k together  llama_stack/providers/tests/tools/test_tools.py \
   --safety-shield=meta-llama/Llama-Guard-3-8B \
   --inference-model=meta-llama/Llama-3.1-8B-Instruct

LLAMA_STACK_CONFIG="/Users/dineshyv/.llama/distributions/llamastack-together/together-run.yaml" pytest -v tests/client-sdk/agents/test_agents.py
```
run.yaml:
https://gist.github.com/dineshyv/0365845ad325e1c2cab755788ccc5994

Notebook:
https://colab.research.google.com/drive/1ck7hXQxRl6UvT-ijNRZ-gMZxH1G3cN2d?usp=sharing
2025-01-08 19:01:00 -08:00
Ashwin Bharambe
b438e616ff kill api key from notebook 2025-01-02 11:26:19 -08:00
Ashwin Bharambe
2f9fdb0ea7 Update notebook 2024-12-17 18:52:02 -08:00
Xi Yan
815f4af6cf
add colab notebook & update docs (#619)
# What does this PR do?

- add notebooks
- restructure docs

## Test Plan
<img width="1201" alt="image"
src="https://github.com/user-attachments/assets/3f9a09d9-b5ec-406c-b44b-e896e340d209"
/>

<img width="1202" alt="image"
src="https://github.com/user-attachments/assets/fdc1173f-2417-4ad6-845e-4f265fc40a31"
/>

<img width="1201" alt="image"
src="https://github.com/user-attachments/assets/b1e4e2a8-acf6-4ef2-a2fc-00d26cf32359"
/>


2024-12-13 19:15:15 -08:00