Commit graph

527 commits

Author SHA1 Message Date
Ashwin Bharambe
ceadaf1840 Don't include 3B / 1B models for bedrock since they aren't on-demand 2024-12-18 06:30:02 -08:00
Ashwin Bharambe
c39a3777b5 Make bedrock "just" work 2024-12-18 06:22:33 -08:00
Ashwin Bharambe
f1d6cb22d7 Update URL type to avoid string-ifying and creating complexity 2024-12-17 22:50:11 -08:00
Xi Yan
75e72cf2fc model_type=llm for filtering available models for playground 2024-12-17 19:42:38 -08:00
Ashwin Bharambe
0fb4b7de6f Add more debugging logs to when llama guard fails 2024-12-17 18:52:02 -08:00
Xi Yan
af8f1b3531 model selection playground fix 2024-12-17 18:13:52 -08:00
Dinesh Yeduguru
3700022d6f
store attribute values in builtin types to avoid otel warnings (#649)
# What does this PR do?

Serialize objects to built-in types to avoid otel warnings
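A minimal sketch of the idea (helper name and placement are illustrative, not taken from this PR): coerce values into OTel-supported builtin types, JSON-encoding anything else.

```python
import json
from typing import Any


def serialize_attribute_value(value: Any) -> str | int | float | bool:
    # OTel attribute values must be str/int/float/bool (or sequences of those);
    # anything else is JSON-encoded to a string so no warning is emitted.
    if isinstance(value, (str, int, float, bool)):
        return value
    try:
        return json.dumps(value)
    except TypeError:
        return str(value)


# usage against an existing OpenTelemetry span:
# span.set_attribute("request", serialize_attribute_value(request_payload))
```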


## Test Plan

╰─❯ llama stack run
~/.llama/distributions/llamastack-together/together-run.yaml
2024-12-17 17:10:43 -08:00
Henry Tu
0e2a99e223
Update Cerebras from Llama 3.1 to 3.3 (#645)
# What does this PR do?

Cerebras is rolling out support for llama 3.3 70b and deprecating llama
3.1 70b. This PR updates the documentation, config, and internal mapping
to reflect this change.

cc: @ashwinb @raghotham
2024-12-17 16:28:24 -08:00
Ashwin Bharambe
b7a7caa9a8 Fix conversion to RawMessage everywhere 2024-12-17 14:00:43 -08:00
Ashwin Bharambe
fbca51d6da Fix to conda env build script 2024-12-17 12:19:34 -08:00
Ashwin Bharambe
0452c6a0c7 add missing init file 2024-12-17 11:49:03 -08:00
Ashwin Bharambe
8de8eb03c8
Update the "InterleavedTextMedia" type (#635)
## What does this PR do?

This is a long-pending change and particularly important to get done
now.

Specifically:
- we cannot "localize" (aka download) any URLs from media attachments
anywhere near our modeling code. it must be done within llama-stack.
- `PIL.Image` is infesting all our APIs via `ImageMedia ->
InterleavedTextMedia` and that cannot be right at all. Anything in the
API surface must be "naturally serializable". We need a standard `{
type: "image", image_url: "<...>" }` which is more extensible
- `UserMessage`, `SystemMessage`, etc. are moved completely to
llama-stack from the llama-models repository.

See https://github.com/meta-llama/llama-models/pull/244 for the
corresponding PR in llama-models.
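As a rough illustration of the "naturally serializable" shape (class names below are assumptions for this sketch, not necessarily the final llama-stack types):

```python
from typing import List, Literal, Union

from pydantic import BaseModel


class ImageContentItem(BaseModel):
    type: Literal["image"] = "image"
    image_url: str  # plain URL or data: URI; any downloading happens inside llama-stack


class TextContentItem(BaseModel):
    type: Literal["text"] = "text"
    text: str


# Interleaved content is then just a string or a list of such items, all of
# which serialize cleanly over the wire (no PIL.Image in the API surface).
InterleavedContent = Union[str, List[Union[str, TextContentItem, ImageContentItem]]]
```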

## Test Plan

```bash
cd llama_stack/providers/tests

pytest -s -v -k "fireworks or ollama or together" inference/test_vision_inference.py
pytest -s -v -k "(fireworks or ollama or together) and llama_3b" inference/test_text_inference.py
pytest -s -v -k chroma memory/test_memory.py \
  --env EMBEDDING_DIMENSION=384 --env CHROMA_DB_PATH=/tmp/foobar

pytest -s -v -k fireworks agents/test_agents.py  \
   --safety-shield=meta-llama/Llama-Guard-3-8B \
   --inference-model=meta-llama/Llama-3.1-8B-Instruct
```

Updated the client sdk (see PR ...), installed the SDK in the same
environment and then ran the SDK tests:

```bash
cd tests/client-sdk
LLAMA_STACK_CONFIG=together pytest -s -v agents/test_agents.py
LLAMA_STACK_CONFIG=ollama pytest -s -v memory/test_memory.py

# this one needed a bit of hacking in the run.yaml to ensure I could register the vision model correctly
INFERENCE_MODEL=llama3.2-vision:latest LLAMA_STACK_CONFIG=ollama pytest -s -v inference/test_inference.py
```
2024-12-17 11:18:31 -08:00
Xi Yan
99f331f5c8
[bugfix] no shield_call when there's no shields configured (#642)
# What does this PR do?

**Why**
- When AgentConfig has no `input_shields` / `output_shields` defined, we
still output a shield_call step with violation=None. This makes it
impossible to distinguish between (1) no violation from running shields
and (2) no shield call at all

**What**
- We should not have a shield_call step when no `input_shields` /
`output_shields` are defined.

- Also removes a never-reached try/catch code block in the agent loop.
`run_multiple_shields` is never called in the try block (verified by
stack-trace print)
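A hypothetical sketch of the intended behavior (the step type here is a stand-in; `run_multiple_shields` is the existing agent helper mentioned above):

```python
from dataclasses import dataclass
from typing import Any, List, Optional


@dataclass
class ShieldCallStep:  # stand-in for the real agents step type
    violation: Optional[Any]


async def maybe_run_shields(shields: List[str], messages: List[Any]) -> Optional[ShieldCallStep]:
    if not shields:
        # No input/output shields configured: emit no shield_call step at all,
        # so "no shields ran" stays distinguishable from "shields ran, no violation".
        return None
    violation = await run_multiple_shields(shields, messages)  # existing agent helper
    return ShieldCallStep(violation=violation)
```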

**Side Note**
- pre-commit fix

## Test Plan

Tested w/ DirectClient via:
https://gist.github.com/yanxi0830/b48f2a53b6f5391b9ff1e39992bc05b3

**No Shields**
<img width="858" alt="image"
src="https://github.com/user-attachments/assets/67319370-329f-4954-bd16-d21ce54c6ebf"
/>

**With Input + Output Shields**
<img width="854" alt="image"
src="https://github.com/user-attachments/assets/75ab1bee-3ba9-4549-ab51-23210be83da7"
/>

**Input Shields Only**
<img width="858" alt="image"
src="https://github.com/user-attachments/assets/1897206b-13dd-4ea5-92c2-b39bf68e9286"
/>


E2E pytest
```
LLAMA_STACK_BASE_URL=http://localhost:5000 pytest -v ./tests/client-sdk/agents/test_agents.py
```

## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-17 11:10:19 -08:00
Ashwin Bharambe
c2f7905fa4 Fix bedrock inference impl 2024-12-16 14:22:34 -08:00
Ashwin Bharambe
eb37fba9da Small fix to library client 2024-12-16 14:08:30 -08:00
Ashwin Bharambe
2e5bfcd42a
Update Telemetry API so OpenAPI generation can work (#640)
We cannot use recursive types because not only does our OpenAPI
generator not like them, but even if it did, it would not be easy for all
client languages to automatically construct proper APIs (especially
considering garbage collection) around them. For now, we can return a
`Dict[str, SpanWithStatus]` instead of `SpanWithChildren` and rely on the
client to reconstruct the tree.
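For example, a client could rebuild the tree from the flat map along these lines (the `parent_span_id` field name is an assumption of this sketch):

```python
from collections import defaultdict
from typing import Dict, List


def children_by_parent(spans_by_id: Dict[str, dict]) -> Dict[str, List[str]]:
    # Group span ids under their parent; spans with no parent are roots.
    children: Dict[str, List[str]] = defaultdict(list)
    for span_id, span in spans_by_id.items():
        parent = span.get("parent_span_id")
        if parent:
            children[parent].append(span_id)
    return dict(children)
```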

Also fixed a super subtle issue with the OpenAPI generation process
(monkey-patching of json_schema_type wasn't working because of import
reordering).
2024-12-16 13:00:14 -08:00
Xi Yan
78e2bfbe7a
[tests] add client-sdk pytests & delete client.py (#638)
# What does this PR do?

**Why**
- Clean up examples which we will not maintain; reduce the surface area
to the minimal showcases

**What**
- Delete `client.py` in /apis/*
- Move all scripts to unit tests
  - SDK sync in the future will just require running pytests

**Side notes**
- `bwrap` not available on Mac so code_interpreter will not work

## Test Plan

```
LLAMA_STACK_BASE_URL=http://localhost:5000 pytest -v ./tests/client-sdk
```
<img width="725" alt="image"
src="https://github.com/user-attachments/assets/36bfe537-628d-43c3-8479-dcfcfe2e4035"
/>


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-16 12:04:56 -08:00
Botao Chen
20383bfea5
[3/n][torchtune integration] add validation logic (#600)
## What does this PR do?
- add validation logic in SFT recipe (validation loss and perplexity)
- add progress bar in both training and validation to better track the
progress on the server side (eval has similar logic)
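For reference, the reported perplexity is typically just the exponential of the mean token-level cross-entropy validation loss:

```python
import math


def perplexity_from_loss(mean_cross_entropy: float) -> float:
    # e.g. a validation loss of 2.0 corresponds to a perplexity of ~7.39
    return math.exp(mean_cross_entropy)
```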


## Test Plan
validation logic shows up in the Checkpoint training_metric part  
<img width="799" alt="Screenshot 2024-12-12 at 3 21 52 PM"
src="https://github.com/user-attachments/assets/36330ffe-0555-4b2d-93f0-9487dfdf7b4e"
/>

progress bar shows up as expected
<img width="476" alt="Screenshot 2024-12-12 at 3 38 11 PM"
src="https://github.com/user-attachments/assets/77306fa2-cb9c-460f-8efc-b41bbe424a7d"
/>
2024-12-13 16:35:06 -08:00
Botao Chen
c294a01c4b
[2/n][torchtune integration] implement job management and return training artifacts (#593)
### Context 
In this PR, we 
- Implement the post training job management and get training artifacts
apis
  - get_training_jobs
  - get_training_job_status
  - get_training_job_artifacts
- get_training_job_logstream is deleted since the trace can be directly
accessed by UI with Jaeger
https://llama-stack.readthedocs.io/en/latest/building_applications/telemetry.html#jaeger-to-visualize-traces
- Refactor the post-training and training type definitions to make them
more intuitive.
- Rewrite the checkpointer to make it compatible with the llama-stack file
system so checkpoints can be recognized during inference
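An illustrative sketch of how a caller might drive these APIs; the client object and return shapes are assumptions, only the method names come from the list above:

```python
async def inspect_latest_job(post_training) -> None:
    jobs = await post_training.get_training_jobs()
    if not jobs:
        return
    job_uuid = jobs[-1].job_uuid
    status = await post_training.get_training_job_status(job_uuid=job_uuid)
    artifacts = await post_training.get_training_job_artifacts(job_uuid=job_uuid)
    print(f"{job_uuid}: {status.status}, {len(artifacts.checkpoints)} checkpoint(s)")
```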


### Test
Unit test
`pytest llama_stack/providers/tests/post_training/test_post_training.py
-m "torchtune_post_training_huggingface_datasetio" -v -s --tb=short
--disable-warnings`

<img width="1506" alt="Screenshot 2024-12-10 at 4 06 17 PM"
src="https://github.com/user-attachments/assets/16225029-bdb7-48c4-9d13-e580cc769c0a">


e2e test with client side call

<img width="888" alt="Screenshot 2024-12-10 at 4 09 44 PM"
src="https://github.com/user-attachments/assets/de375e4c-ef67-4dcc-a045-4037d9489191">
2024-12-13 15:00:04 -08:00
Dinesh Yeduguru
516e1a3e59
add embedding model by default to distribution templates (#617)
# What does this PR do?
Adds the sentence transformer provider and the `all-MiniLM-L6-v2`
embedding model to the default models to register in the run.yaml for
all providers.

## Test Plan
llama stack build --template together --image-type conda
llama stack run
~/.llama/distributions/llamastack-together/together-run.yaml
2024-12-13 12:48:00 -08:00
Ashwin Bharambe
e893b22868 export LibraryClient 2024-12-13 12:08:00 -08:00
Botao Chen
aeb76390fc
[1/n] torchtune <> llama-stack integration skeleton (#540)
### Context 
This is the 1st of series PRs that integrate torchtune with llama-stack
as meta reference post-training implementation. For MVP, we will focus
on single device LoRA SFT.

Though this PR is still WIP, we want to get early feedback on the high
level design of this skeleton while still working on several details

### Scope
To limit the scope of this PR, we focus on the skeleton of the
implementation.

**What is included?**
- refine the post-training SFT apis
- skeleton of supervised_fine_tune implementation. We verified that we
can call the supervised_fine_tune API successfully from llama stack
client SDK (client side PR:
https://github.com/meta-llama/llama-stack-client-python/pull/51)
- a very basic single device LoRA training recipe based on torchtune
core components
- parity check with torchtune library and post training api unit test

**What is not included?**
- implementation of other job management, get training artifacts apis
(separate PR)
- refactor the meta reference inference logic to support eval on
finetuned model (separate PR)
- several pieces of necessary functionality in the training recipe, such as
logging, validation, etc. (separate PR)
- interop with telemetry for tracing and metrics logging; currently we
temporarily log to local disk (separate PR)

### Testing
**e2e test**
Although we haven't added detailed testing and numerical parity check
with torchtune yet, we did a simple E2E test from client to server
1. set up the server with `llama stack build --template
experimental-post-training --image-type conda` and `llama stack run
experimental-post-training`
2. On client, run `llama-stack-client --endpoint
http://devgpu018.nha2.facebook.com:5000 post_training
supervised_fine_tune`
3. Training finishes successfully. On server side, get the finetune
checkpoints under output dir. On client side, get the job uuid

server 
<img width="1110" alt="Screenshot 2024-12-02 at 5 52 32 PM"
src="https://github.com/user-attachments/assets/b548eb90-7a9b-4edc-a858-ee237cc4361d">

client 
<img width="807" alt="Screenshot 2024-12-02 at 5 52 37 PM"
src="https://github.com/user-attachments/assets/1138ffa8-4698-40fa-b190-3d7b99646838">

**parity check**
torchtune dataloader output and llama-stack post training dataloader
output are the same
<img width="1116" alt="Screenshot 2024-12-04 at 8 18 46 PM"
src="https://github.com/user-attachments/assets/5e295cdc-4c24-4ea6-82c0-ca96ef1bd6ee">

torchtune LoRA SFT and llama-stack post training LoRA SFT on the alpaca
dataset with the llama3.2 3B instruct model match numerically

<img width="860" alt="Screenshot 2024-12-04 at 8 17 01 PM"
src="https://github.com/user-attachments/assets/c05cf0a8-c674-4d2e-9f0a-c5d01b2dca99">

<img width="1049" alt="Screenshot 2024-12-04 at 8 17 06 PM"
src="https://github.com/user-attachments/assets/b911d4e2-e7b1-41a9-b62c-d75529b6d443">

**unit test**
![Uploading Screenshot 2024-12-09 at 1.35.10 PM.png…]()
2024-12-13 11:05:35 -08:00
Matthew Farrellee
2a9b13dd52
add test for completion logprobs (#532)
# What does this PR do?

adds a test for the completion api's logprobs parameter

tbd which providers pass this test


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.
2024-12-12 12:19:48 -08:00
Dinesh Yeduguru
96e158eaac
Make embedding generation go through inference (#606)
This PR does the following:
1) adds the ability to generate embeddings in all supported inference
providers.
2) Moves all the memory providers to use the inference API and improves
the memory tests to set up the inference stack correctly and use the
embedding models
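Conceptually, a memory provider now delegates embedding generation along these lines (interface names here are assumptions for the sketch):

```python
from typing import List


async def embed_chunks(inference_api, embedding_model: str, chunks: List[str]) -> List[List[float]]:
    # Memory providers no longer embed text themselves; they ask the inference API.
    response = await inference_api.embeddings(model_id=embedding_model, contents=chunks)
    return response.embeddings  # one vector per input chunk
```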

This is a merge from #589 and #598
2024-12-12 11:47:50 -08:00
Xi Yan
8b45d147df
[/datasetio] drop columns not specified by dataset schema for huggingface provider (#611)
# What does this PR do?

**Why**
- huggingface datasets can have extra unused columns; some of these
columns (e.g. images) cannot be cast to JSON over http requests
for datasetio.
- it is also inefficient to create a new dataset that's a subset of
the columns

**Solution**
- drop columns not specified by dataset schema
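A minimal sketch of the described behavior (names are illustrative): keep only the columns the dataset schema declares, so every row stays JSON-serializable.

```python
from typing import Any, Dict


def filter_row_to_schema(row: Dict[str, Any], dataset_schema: Dict[str, Any]) -> Dict[str, Any]:
    # Extra huggingface columns (e.g. raw images) are dropped before rows are
    # returned over HTTP by the datasetio provider.
    return {key: value for key, value in row.items() if key in dataset_schema}


# filter_row_to_schema({"input": "hi", "image": b"..."}, {"input": {"type": "string"}})
# -> {"input": "hi"}
```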

## Test Plan

Tested with script:
https://gist.github.com/yanxi0830/23be5725e0d82d79e24cc5dd1d21b571


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-12 10:23:09 -08:00
Ashwin Bharambe
b7cb06f004
Allow using an "inline" version of Chroma using PersistentClient (#567)
The same code is used (inside providers/remote/memory/chroma/chroma.py)
but it is driven by separate configurations, which determine which Chroma
client to use. Note that the dependencies are separate
(`chromadb-client` vs `chromadb` -- the latter is a _much_ heavier
package).
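A rough sketch of the config-driven switch (config field names are assumptions): the same provider code picks a local PersistentClient or a remote HttpClient.

```python
import chromadb


def make_chroma_client(db_path: str | None = None, url: str | None = None):
    if db_path:
        # inline mode: requires the full `chromadb` package
        return chromadb.PersistentClient(path=db_path)
    if url:
        # remote mode: the lightweight `chromadb-client` package is enough
        host, _, port = url.removeprefix("http://").partition(":")
        return chromadb.HttpClient(host=host, port=int(port or "8000"))
    raise ValueError("either CHROMA_DB_PATH or CHROMA_URL must be configured")
```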

```
pytest -s -v -m chroma memory/test_memory.py --env CHROMA_DB_PATH=/tmp/chroma_test
pytest -s -v -m chroma memory/test_memory.py --env CHROMA_URL=http://localhost:6001
```
2024-12-11 16:02:04 -08:00
Xi Yan
41487e6ed1
refactor scoring/eval pytests (#607)
# What does this PR do?

- remove model registration & parameterize model in scoring/eval pytests

## Test Plan

```
pytest -v -s -m meta_reference_eval_together_inference eval/test_eval.py
pytest -v -s -m meta_reference_eval_together_inference_huggingface_datasetio eval/test_eval.py
```

```
pytest -v -s -m llm_as_judge_scoring_together_inference scoring/test_scoring.py --judge-model meta-llama/Llama-3.2-3B-Instruct
pytest -v -s -m basic_scoring_together_inference scoring/test_scoring.py
```
<img width="860" alt="image"
src="https://github.com/user-attachments/assets/d4b0badc-da34-4097-9b7c-9511f8261723"
/>


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-11 10:47:37 -08:00
Dinesh Yeduguru
47b2dc8ae3
Revert "add model type to APIs" (#605)
Reverts meta-llama/llama-stack#588
2024-12-11 10:17:54 -08:00
Dinesh Yeduguru
8e33db6015
add model type to APIs (#588)
# What does this PR do?

This PR adds a new model type field to support embedding models to be
registered. Summary of changes:
1) Each registered model is an llm model by default.
2) The user can specify an embedding model type while registering. If
specified, the model bypasses the llama model checks, since embedding
models can be of any type and need not be based on llama.
3) The user needs to include the required embedding dimension in metadata.
This will be used by embedding generation to produce the required size
of embeddings.
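A hedged sketch of what registration might look like under this change (the client call shape is an assumption; the fields mirror the description above):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")
client.models.register(
    model_id="all-MiniLM-L6-v2",
    model_type="embedding",                 # registered models default to "llm"
    metadata={"embedding_dimension": 384},  # tells embedding generation the output size
)
```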


## Test Plan

This PR will need to be merged together with two follow-up PRs, which
will include the test plans.
2024-12-11 10:16:53 -08:00
Matthew Farrellee
b52df5fe5b
add completion api support to nvidia inference provider (#533)
# What does this PR do?

add the completion api to the nvidia inference provider
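For context, a hedged sketch of exercising the (non-chat) completion endpoint from the client; the call shape is an assumption based on the inference API, not taken from this PR:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")
response = client.inference.completion(
    model_id="Llama3.1-8B-Instruct",
    content="The capital of France is",
)
print(response.content)
```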


## Test Plan

while running the meta/llama-3.1-8b-instruct NIM from
https://build.nvidia.com/meta/llama-3_1-8b-instruct?snippet_tab=Docker

```
➜ pytest -s -v --providers inference=nvidia llama_stack/providers/tests/inference/ --env NVIDIA_BASE_URL=http://localhost:8000 -k test_completion --inference-model Llama3.1-8B-Instruct
=============================================== test session starts ===============================================
platform linux -- Python 3.10.15, pytest-8.3.3, pluggy-1.5.0 -- /home/matt/.conda/envs/stack/bin/python
cachedir: .pytest_cache
rootdir: /home/matt/Documents/Repositories/meta-llama/llama-stack
configfile: pyproject.toml
plugins: anyio-4.6.2.post1, asyncio-0.24.0, httpx-0.34.0
asyncio: mode=strict, default_loop_scope=None
collected 20 items / 18 deselected / 2 selected                                                                             

llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_completion[-nvidia] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_completion_structured_output[-nvidia] SKIPPED

============================= 1 passed, 1 skipped, 18 deselected, 6 warnings in 5.40s =============================
```

the structured output functionality works but the accuracy fails

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.
2024-12-11 10:08:38 -08:00
Xi Yan
a4bcfb8bba
[/scoring] add ability to define aggregation functions for scoring functions & refactors (#597)
# What does this PR do?

- Add ability to define aggregation functions for scoring functions via
`ScoringFnParams`
- Supported by `basic` / `regex_parser` / `llm_as_judge` scoring
functions
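A hedged sketch of the shape this enables (field values and the scoring-function identifier are illustrative, not taken from the PR):

```python
# Attach an aggregation function to a scoring function's params; the scoring
# provider then rolls per-row scores up into a single aggregated metric.
scoring_functions = {
    "basic::equality": {
        "type": "basic",
        "aggregation_functions": ["average"],
    },
}
# e.g. client.scoring.score(input_rows=rows, scoring_functions=scoring_functions)
```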


## Test Plan

```
pytest -v -s -m basic_scoring_together_inference scoring/test_scoring.py
```
<img width="855" alt="image"
src="https://github.com/user-attachments/assets/12db8e6e-2ad4-462e-b9b9-70ba6c050a6c">


```
pytest -v -s -m llm_as_judge_scoring_together_inference scoring/test_scoring.py
```
<img width="858" alt="image"
src="https://github.com/user-attachments/assets/bf806676-6f5e-456d-be9f-f81a26d1df19">



**Example Response** (`basic`)
<img width="863" alt="image"
src="https://github.com/user-attachments/assets/0e57a49c-8386-45cc-8fa9-3e61aaa9a3be">

**Example Response** (`llm-as-judge`)
<img width="854" alt="image"
src="https://github.com/user-attachments/assets/38065bc2-b724-47ed-9535-79b6099c4362">


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-11 10:03:42 -08:00
Dinesh Yeduguru
e128f2547a
add tracing back to the lib cli (#595)
Adds back all the tracing logic removed from the library client. Also adds
back the logging to agent_instance.
2024-12-11 08:44:20 -08:00
Aidan Do
1c03ba239e
[#342] RAG - fix PDF format in vector database (#551)
# What does this PR do?

Addresses issue (#342)

- PDFs uploaded from a URL were being loaded into the vector db as raw bytes
- Instead, this PR extracts text from the PDF when the mime_type is
"application/pdf"
- Adds tests to cover the new cases
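A minimal sketch of the fix (helper name is illustrative; pypdf is the dependency the test plan installs):

```python
import io

from pypdf import PdfReader


def content_from_data(data: bytes, mime_type: str) -> str:
    if mime_type == "application/pdf":
        # Extract text instead of storing the raw PDF bytes in the vector db.
        reader = PdfReader(io.BytesIO(data))
        return "\n".join(page.extract_text() or "" for page in reader.pages)
    return data.decode("utf-8", errors="ignore")
```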

## Test Plan

Ran these unit tests:

```bash
llama stack build --template meta-reference-gpu --image-type conda
conda activate llamastack-meta-reference-gpu
pip install pytest pytest-asyncio pypdf
pytest llama_stack/providers/tests/memory/test_vector_store.py -v
```

```
platform linux -- Python 3.10.15, pytest-8.3.3, pluggy-1.5.0 -- /home/ubuntu/1xa100-2/llama-stack/envs/bin/python
cachedir: .pytest_cache
rootdir: /home/ubuntu/1xa100-2/llama-stack
configfile: pyproject.toml
plugins: anyio-4.6.2.post1, asyncio-0.24.0, httpx-0.35.0
asyncio: mode=strict, default_loop_scope=None
collected 3 items                                                                                                                          

llama_stack/providers/tests/memory/test_vector_store.py::TestVectorStore::test_returns_content_from_pdf_data_uri PASSED              [ 33%]
llama_stack/providers/tests/memory/test_vector_store.py::TestVectorStore::test_downloads_pdf_and_returns_content PASSED              [ 66%]
llama_stack/providers/tests/memory/test_vector_store.py::TestVectorStore::test_downloads_pdf_and_returns_content_with_url_object PASSED [100%]

======================================================= 3 passed, 1 warning in 0.62s =======================================================
```

Tested manually via [this
script](afc8f8bebf/init.py)
to initialize and [this
script](afc8f8bebf/query.py)
to query

```bash
# Ran with meta-reference-gpu with safety
llama stack build --template meta-reference-gpu --image-type conda && llama stack run distributions/meta-reference-gpu/run-with-safety.yaml \
  --port 5001 \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-11B-Vision-Instruct

# Run init.py script
wget https://raw.githubusercontent.com/aidando73/llama-stack/afc8f8bebf70e1ad065d87e84692e1a3a45d9e19/init.py
pip install httpx==0.27.2 # Due to issue https://github.com/meta-llama/llama-stack-client-python/issues/54
python init.py
# Run query.py script
wget https://raw.githubusercontent.com/aidando73/llama-stack/afc8f8bebf70e1ad065d87e84692e1a3a45d9e19/query.py
python query.py
```

Should output valid text chunks

```
Chunk(content=' that it has a significantly\nlower violation rate than the competing standalone open source model, trading off a higher false refusal rate.\nLong-context safety. Long-context models are vulnerable to many-shot jailbreaking attacks without targeted\nmitigation (Anil et al., 2024). To address this, we finetune our models on SFT datasets that include examples\nof safe behavior in the presence of demonstrations of unsafe behavior in context. We develop a scalable\nmitigation strategy that significantly reduces VR, effectively neutralizing the impact of longer context attacks\neven for 256-shot attacks. This approach shows little to no impact on FRR and most helpfulness metrics.\nTo quantify the effectiveness of our long context safety mitigations, we use two additional benchmarking\nmethods: DocQA and Many-shot. For DocQA, short for “document question answering,” we use long documents\nwith information that could be utilized in adversarial ways. Models are provided both the document and a set\nof prompts related to the document in order to test whether the questions being related to information in the\ndocument affected the model’s ability to respond safely to the prompts. For Many-shot, following Anil et al.\n(2024), we construct a synthetic chat history composed of unsafe prompt-response pairs. A final prompt,\nunrelated to previous messages, is used to test whether the unsafe behavior in-context influenced the model\n45\nto response unsafely. The violation and false refusal rates for both DocQA and Many-shot are shown in\nFigure 20. We see that Llama 405B (with and without Llama Guard) is Pareto-better than the Comp. 2\nsystem across both violation rates and false refusal rates, across both DocQA and Many-shot. Relative to\nComp. 1, we find that Llama 405B is significantly safer, while coming at a trade off on false refusal.\nTool usage safety. The diversity of possible tools and the implementation of the tool usage call and integration\ninto the model make tool usage a challenging capability to fully mitigate (Wallace et al., 2024). We focus on\nthe search usecase. Violation and false refusal rates are shown in Figure 20. We tested against the Comp. 1\nsystem, where we find that Llama 405B is significantly safer, though has a slightly higher false refusal rate.\n5.4.5 Cybersecurity and Chemical/Biological Weapons Safety\nCyberSecurity evaluation results. To evaluate cybersecurity risk, we leverage the Cyber', document_id='num-0', token_count=512)0.7354530813978312
Chunk(content='.\nThrough careful ablations, we observe that mixing0.1% of synthetically generated long-context data with the\noriginal short-context data optimizes the performance across both short-context and long-context benchmarks.\nDPO. We observe that using only short context training data in DPO did not negatively impact long-context\nperformance as long as the SFT model is high quality in long context tasks. We suspect this is due to the\nfact that our DPO recipe has fewer optimizer steps than SFT. Given this finding, we keep the standard\nshort-context recipe for DPO on top of our long-context SFT checkpoints.\n4.3.5 Tool Use\nTeaching LLMs to use tools such as search engines or code interpreters hugely expands the range of tasks\nthey can solve, transforming them from pure chat models into more general assistants (Nakano et al., 2021;\nThoppilan et al., 2022; Parisi et al., 2022; Gao et al., 2023; Mialon et al., 2023a; Schick et al., 2024). We train\nLlama 3 to interact with the following tools:\n• Search engine. Llama 3 is trained to use Brave Search7 to answer questions about recent events that go\nbeyond its knowledge cutoff or that require retrieving a particular piece of information from the web.\n• Python interpreter. Llama 3 can generate and execute code to perform complex computations, read files\nuploaded by the user and solve tasks based on them such as question answering, summarization, data\nanalysis or visualization.\n7https://brave.com/search/api/\n24\n• Mathematical computational engine. Llama 3 can use the Wolfram Alpha API8 to more accurately solve\nmath, science problems, or retrieve accurate information from Wolfram’s database.\nThe resulting model is able to use these tools in a chat setup to solve the user’s queries, including in multi-turn\ndialogs. If a query requires multiple tool calls, the model can write a step-by-step plan, call the tools in\nsequence, and do reasoning after each tool call.\nWe also improve Llama 3’s zero-shot tool use capabilities — given in-context, potentially unseen tool definitions\nand a user query, we train the model to generate the correct tool call.\nImplementation. We implement our core tools as Python objects with different methods. Zero-shot tools can\nbe implemented as Python functions with descriptions, documentation (i.e., examples for', document_id='num-0', token_count=512)0.7350672465928054
Chunk(content=' Embeddings RoPE (θ = 500, 000)\nTable 3 Overview of the key hyperparameters of Llama 3. We display settings for 8B, 70B, and 405B language models.\n• We use a vocabulary with 128K tokens. Our token vocabulary combines 100K tokens from thetiktoken3\ntokenizer with 28K additional tokens to better support non-English languages. Compared to the Llama\n2 tokenizer, our new tokenizer improves compression rates on a sample of English data from 3.17 to\n3.94 characters per token. This enables the model to “read” more text for the same amount of training\ncompute. We also found that adding 28K tokens from select non-English languages improved both\ncompression ratios and downstream performance, with no impact on English tokenization.\n• We increase the RoPE base frequency hyperparameter to 500,000. This enables us to better support\nlonger contexts; Xiong et al. (2023) showed this value to be effective for context lengths up to 32,768.\nLlama 3 405B uses an architecture with 126 layers, a token representation dimension of 16,384, and 128\nattention heads; see Table 3 for details. This leads to a model size that is approximately compute-optimal\naccording to scaling laws on our data for our training budget of3.8 × 1025 FLOPs.\n3.2.1 Scaling Laws\nWe develop scaling laws (Hoffmann et al., 2022; Kaplan et al., 2020) to determine the optimal model size for\nour flagship model given our pre-training compute budget. In addition to determining the optimal model size,\na major challenge is to forecast the flagship model’s performance on downstream benchmark tasks, due to a\ncouple of issues: (1) Existing scaling laws typically predict only next-token prediction loss rather than specific\nbenchmark performance. (2) Scaling laws can be noisy and unreliable because they are developed based on\npre-training runs conducted with small compute budgets (Wei et al., 2022b).\nTo address these challenges, we implement a two-stage methodology to develop scaling laws that accurately\npredict downstream benchmark performance:\n1. We first establish a correlation between the compute-optimal model’s negative log-likelihood on down-\nstream tasks and the training FLOPs.\n2. Next, we correlate the negative log-likelihood on downstream tasks with task accuracy, utilizing both', document_id='num-0', token_count=512)0.7172908346230037
```

## Before submitting

- [x] N/A - This PR fixes a typo or improves the docs (you can dismiss
the other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [x] N/A - Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.
2024-12-10 21:33:27 -08:00
varunfb
f5c36c47ed
Added support for llama 3.3 model (#601)
# What does this PR do?

Llama-Stack did not support the 3.3 model, so this adds support so that
llama-stack can run inference with the 3.3 model.
2024-12-10 20:03:31 -08:00
Aidan Do
76eb558bde
doc: llama-stack build --config help text references old directory (#596)
# What does this PR do?

- llama-stack build --config help text references example_configs which
no longer exists
- Update it to refer to the new directory format to avoid confusion

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
2024-12-10 17:42:02 -08:00
Xi Yan
e2054d53e4
Fix issue 586 (#594)
# What does this PR do?

- Addresses issue (#586 )


## Test Plan

```
python llama_stack/scripts/distro_codegen.py
```


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-10 10:22:04 -08:00
Ashwin Bharambe
fa68ded07c Remove the unnecessary message after llama stack build 2024-12-10 09:46:56 -08:00
Dinesh Yeduguru
885bb0900b memory retrieval to print only the bytes injected 2024-12-10 09:32:18 -08:00
Dinesh Yeduguru
2e3d3a62a5 Revert "add tracing to library client (#591)"
This reverts commit bc1fddf1df.
2024-12-10 08:50:20 -08:00
Dinesh Yeduguru
16d103842a Revert "await end_trace in libcli"
This reverts commit 7615da78b8.
2024-12-10 08:47:32 -08:00
Dinesh Yeduguru
f969b561ea Revert "Disable telemetry in library client for now"
This reverts commit 176ebddf47.
2024-12-10 08:47:18 -08:00
Dinesh Yeduguru
686f8d5b8d remove info logging in agent instance 2024-12-10 08:40:42 -08:00
Ashwin Bharambe
176ebddf47 Disable telemetry in library client for now 2024-12-09 22:17:25 -08:00
Ashwin Bharambe
a4d8a6009a
Fixes for library client (#587)
Library client used _server_-side types, which was no bueno. The fix here
is not the completely correct fix, but it is good enough for now and for
the demo notebook.
2024-12-09 17:14:37 -08:00
Dinesh Yeduguru
7615da78b8 await end_trace in libcli 2024-12-09 15:54:42 -08:00
Dinesh Yeduguru
bc1fddf1df
add tracing to library client (#591) 2024-12-09 15:46:26 -08:00
Xi Yan
ab7145a04f minor refactor 2024-12-09 15:43:12 -08:00
Xi Yan
cd40a5fdbf
update template run.yaml to include openai api key for braintrust (#590)
# What does this PR do?

**Why**
- braintrust provider needs OpenAI API Key set in config for
DirectClient to work

## Test Plan
```
python llama_stack/scripts/distro_codegen.py 
```

<img width="340" alt="image"
src="https://github.com/user-attachments/assets/eae38296-f880-40f0-9a9e-46a12038db64">

- set API key in client via provider_data
<img width="907" alt="image"
src="https://github.com/user-attachments/assets/3d74cd7c-dc7e-4a42-8a40-c22f19b0c534">


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-09 15:40:59 -08:00
Xi Yan
c699e884b5
fix telemetry import (#585)
# What does this PR do?

fix issue

<img width="921" alt="image"
src="https://github.com/user-attachments/assets/26f7499f-fae1-4c93-9de3-1ae7ee7c5144">


## Test Plan

```
llama stack run
```
<img width="657" alt="image"
src="https://github.com/user-attachments/assets/266b6ac2-f991-4b38-841c-2a610b7d9f0f">


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-09 11:18:53 -08:00
Ashwin Bharambe
a2170353af better detection for jupyter 2024-12-09 09:38:11 -08:00