Xi Yan
3c72c034e6
[remove import *] clean up import *'s (#689)
...
# What does this PR do?
- as title, cleaning up `import *`'s
- upgrade tests to make them more robust to bad model outputs
- remove import *'s in llama_stack/apis/* (skip __init__ modules)
<img width="465" alt="image"
src="https://github.com/user-attachments/assets/d8339c13-3b40-4ba5-9c53-0d2329726ee2 "
/>
- run `sh run_openapi_generator.sh`; no types are affected
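For illustration, this is the shape of the change (the imported names below are a hypothetical subset, not the exact list this PR touches):
```python
# Before: the wildcard pulls every public name into the namespace,
# hiding the module's real dependencies from linters and readers.
# from llama_stack.apis.inference import *  # noqa: F403

# After: import exactly the symbols the module uses.
from llama_stack.apis.inference import (
    ChatCompletionResponse,
    Inference,
    SamplingParams,
)
```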
## Test Plan
### Providers Tests
**agents**
```
pytest -v -s llama_stack/providers/tests/agents/test_agents.py -m "together" --safety-shield meta-llama/Llama-Guard-3-8B --inference-model meta-llama/Llama-3.1-405B-Instruct-FP8
```
**inference**
```bash
# meta-reference
torchrun $CONDA_PREFIX/bin/pytest -v -s -k "meta_reference" --inference-model="meta-llama/Llama-3.1-8B-Instruct" ./llama_stack/providers/tests/inference/test_text_inference.py
torchrun $CONDA_PREFIX/bin/pytest -v -s -k "meta_reference" --inference-model="meta-llama/Llama-3.2-11B-Vision-Instruct" ./llama_stack/providers/tests/inference/test_vision_inference.py
# together
pytest -v -s -k "together" --inference-model="meta-llama/Llama-3.1-8B-Instruct" ./llama_stack/providers/tests/inference/test_text_inference.py
pytest -v -s -k "together" --inference-model="meta-llama/Llama-3.2-11B-Vision-Instruct" ./llama_stack/providers/tests/inference/test_vision_inference.py
pytest ./llama_stack/providers/tests/inference/test_prompt_adapter.py
```
**safety**
```
pytest -v -s llama_stack/providers/tests/safety/test_safety.py -m together --safety-shield meta-llama/Llama-Guard-3-8B
```
**memory**
```
pytest -v -s llama_stack/providers/tests/memory/test_memory.py -m "sentence_transformers" --env EMBEDDING_DIMENSION=384
```
**scoring**
```
pytest -v -s -m llm_as_judge_scoring_together_inference llama_stack/providers/tests/scoring/test_scoring.py --judge-model meta-llama/Llama-3.2-3B-Instruct
pytest -v -s -m basic_scoring_together_inference llama_stack/providers/tests/scoring/test_scoring.py
pytest -v -s -m braintrust_scoring_together_inference llama_stack/providers/tests/scoring/test_scoring.py
```
**datasetio**
```
pytest -v -s -m localfs llama_stack/providers/tests/datasetio/test_datasetio.py
pytest -v -s -m huggingface llama_stack/providers/tests/datasetio/test_datasetio.py
```
**eval**
```
pytest -v -s -m meta_reference_eval_together_inference llama_stack/providers/tests/eval/test_eval.py
pytest -v -s -m meta_reference_eval_together_inference_huggingface_datasetio llama_stack/providers/tests/eval/test_eval.py
```
### Client-SDK Tests
```
LLAMA_STACK_BASE_URL=http://localhost:5000 pytest -v ./tests/client-sdk
```
### llama-stack-apps
```
PORT=5000
LOCALHOST=localhost
python -m examples.agents.hello $LOCALHOST $PORT
python -m examples.agents.inflation $LOCALHOST $PORT
python -m examples.agents.podcast_transcript $LOCALHOST $PORT
python -m examples.agents.rag_as_attachments $LOCALHOST $PORT
python -m examples.agents.rag_with_memory_bank $LOCALHOST $PORT
python -m examples.safety.llama_guard_demo_mm $LOCALHOST $PORT
python -m examples.agents.e2e_loop_with_custom_tools $LOCALHOST $PORT
# Vision model
python -m examples.interior_design_assistant.app
python -m examples.agent_store.app $LOCALHOST $PORT
```
### CLI
```
which llama
llama model prompt-format -m Llama3.2-11B-Vision-Instruct
llama model list
llama stack list-apis
llama stack list-providers inference
llama stack build --template ollama --image-type conda
```
### Distributions Tests
**ollama**
```
llama stack build --template ollama --image-type conda
ollama run llama3.2:1b-instruct-fp16
llama stack run ./llama_stack/templates/ollama/run.yaml --env INFERENCE_MODEL=meta-llama/Llama-3.2-1B-Instruct
```
**fireworks**
```
llama stack build --template fireworks --image-type conda
llama stack run ./llama_stack/templates/fireworks/run.yaml
```
**together**
```
llama stack build --template together --image-type conda
llama stack run ./llama_stack/templates/together/run.yaml
```
**tgi**
```
llama stack run ./llama_stack/templates/tgi/run.yaml --env TGI_URL=http://0.0.0.0:5009 --env INFERENCE_MODEL=meta-llama/Llama-3.1-8B-Instruct
```
2024-12-27 15:45:44 -08:00
Xi Yan
a4bcfb8bba
[/scoring] add ability to define aggregation functions for scoring functions & refactors (#597)
...
# What does this PR do?
- Add ability to define aggregation functions for scoring functions via
`ScoringFnParams`
- Supported by `basic` / `regex_parser` / `llm_as_judge` scoring
functions
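As a rough illustration of what an aggregation function does (the helpers below are hypothetical; the real `ScoringFnParams` fields live in llama-stack's scoring APIs), it reduces per-row scores into a single summary metric:
```python
from statistics import mean
from typing import Any, Dict, List

ScoringRow = Dict[str, Any]

def aggregate_average(rows: List[ScoringRow]) -> Dict[str, Any]:
    """Average a numeric `score` column across all scored rows."""
    return {"average": mean(row["score"] for row in rows)}

def aggregate_categorical_count(rows: List[ScoringRow]) -> Dict[str, Any]:
    """Count how often each categorical judge verdict appears."""
    counts: Dict[str, int] = {}
    for row in rows:
        counts[row["score"]] = counts.get(row["score"], 0) + 1
    return {"categorical_count": counts}

print(aggregate_average([{"score": 1.0}, {"score": 0.0}, {"score": 1.0}]))
# {'average': 0.6666666666666666}
print(aggregate_categorical_count([{"score": "A"}, {"score": "B"}, {"score": "A"}]))
# {'categorical_count': {'A': 2, 'B': 1}}
```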
## Test Plan
```
pytest -v -s -m basic_scoring_together_inference scoring/test_scoring.py
```
<img width="855" alt="image"
src="https://github.com/user-attachments/assets/12db8e6e-2ad4-462e-b9b9-70ba6c050a6c ">
```
pytest -v -s -m llm_as_judge_scoring_together_inference scoring/test_scoring.py
```
<img width="858" alt="image"
src="https://github.com/user-attachments/assets/bf806676-6f5e-456d-be9f-f81a26d1df19 ">
**Example Response** (`basic`)
<img width="863" alt="image"
src="https://github.com/user-attachments/assets/0e57a49c-8386-45cc-8fa9-3e61aaa9a3be ">
**Example Response** (`llm-as-judge`)
<img width="854" alt="image"
src="https://github.com/user-attachments/assets/38065bc2-b724-47ed-9535-79b6099c4362 ">
2024-12-11 10:03:42 -08:00
Xi Yan
ab7145a04f
minor refactor
2024-12-09 15:43:12 -08:00
Xi Yan
cd40a5fdbf
update template run.yaml to include openai api key for braintrust (#590)
...
# What does this PR do?
**Why**
- the braintrust provider needs an OpenAI API key set in config for
DirectClient to work
## Test Plan
```
python llama_stack/scripts/distro_codegen.py
```
<img width="340" alt="image"
src="https://github.com/user-attachments/assets/eae38296-f880-40f0-9a9e-46a12038db64 ">
- set API key in client via provider_data
<img width="907" alt="image"
src="https://github.com/user-attachments/assets/3d74cd7c-dc7e-4a42-8a40-c22f19b0c534 ">
2024-12-09 15:40:59 -08:00
Xi Yan
16769256b7
[llama stack ui] add native eval & inspect distro & playground pages (#541)
...
# What does this PR do?
New Pages Added:
- (1) Inspect Distro
- (2) Evaluations:
- (a) native evaluations (including generation)
- (b) application evaluations (no generation, scoring only)
- (3) Playground:
- (a) chat
- (b) RAG
## Test Plan
```
streamlit run app.py
```
#### Playground
https://github.com/user-attachments/assets/6ca617e8-32ca-49b2-9774-185020ff5204
#### Inspect
https://github.com/user-attachments/assets/01d52b2d-92af-4e3a-b623-a9b8ba22ba99
#### Evaluations (Generation + Scoring)
https://github.com/user-attachments/assets/345845c7-2a2b-4095-960a-9ae40f6a93cf
#### Evaluations (Scoring)
https://github.com/user-attachments/assets/6cc1659f-eba4-49ca-a0a5-7c243557b4f5
2024-12-04 09:47:09 -08:00
Xi Yan
6e10d0b23e
precommit
2024-12-03 18:52:43 -08:00
Xi Yan
fd19a8a517
add missing __init__
2024-12-03 18:50:18 -08:00
Xi Yan
50cc165077
fixes tests & move braintrust api_keys to request headers (#535)
...
# What does this PR do?
- the braintrust scoring provider requires the OPENAI_API_KEY env variable
to be set
- allow the key to be set via request headers instead (e.g., like the
together / fireworks api keys)
- fixes pytest with agents dependency
## Test Plan
**E2E**
```
llama stack run
```
```yaml
scoring:
  - provider_id: braintrust-0
    provider_type: inline::braintrust
    config: {}
```
**Client**
```python
self.client = LlamaStackClient(
    base_url=os.environ.get("LLAMA_STACK_ENDPOINT", "http://localhost:5000"),
    provider_data={
        "openai_api_key": os.environ.get("OPENAI_API_KEY", ""),
    },
)
```
- run `llama-stack-client eval run_scoring`
**Unit Test**
```
pytest -v -s -m meta_reference_eval_together_inference eval/test_eval.py
```
```
pytest -v -s -m braintrust_scoring_together_inference scoring/test_scoring.py --env OPENAI_API_KEY=$OPENAI_API_KEY
```
<img width="745" alt="image"
src="https://github.com/user-attachments/assets/68f5cdda-f6c8-496d-8b4f-1b3dabeca9c2 ">
2024-11-26 13:11:21 -08:00
Xi Yan
d3956a1d22
fix description
2024-11-25 22:02:45 -08:00
Xi Yan
654722da7d
fix model id for llm_as_judge_405b
2024-11-21 11:34:49 -08:00
Xi Yan
0784284ab5
[Agentic Eval] add ability to run agents generation (#469)
...
# What does this PR do?
- add ability to run agent generation for full eval (generate +
scoring)
- pre-register SimpleQA benchmark llm-as-judge scoring function in code
## Test Plan


#### Simple QA w/ Search

- eval_task_config_simpleqa_search.json
```json
{
  "type": "benchmark",
  "eval_candidate": {
    "type": "agent",
    "config": {
      "model": "Llama3.1-405B-Instruct",
      "instructions": "Please use the search tool to answer the question.",
      "sampling_params": {
        "strategy": "greedy",
        "temperature": 1.0,
        "top_p": 0.9
      },
      "tools": [
        {
          "type": "brave_search",
          "engine": "brave",
          "api_key": "API_KEY"
        }
      ],
      "tool_choice": "auto",
      "tool_prompt_format": "json",
      "input_shields": [],
      "output_shields": [],
      "enable_session_persistence": false
    }
  }
}
```
#### SimpleQA w/o Search

2024-11-18 11:43:03 -08:00
Xi Yan
788411b680
categorical score for llm as judge
2024-11-14 22:33:59 -05:00
Xi Yan
2eab3b7ed9
skip aggregation for llm_as_judge
2024-11-14 17:50:46 -05:00
Xi Yan
d5b1202c83
change schema -> dataset_schema (#442)
...
# What does this PR do?
- `schema` should not be used as a field name (it triggers pydantic warnings)
- change `schema` to `dataset_schema`
<img width="855" alt="image"
src="https://github.com/user-attachments/assets/47cb6bb9-4be0-46a5-8701-24d24e2eaabd ">
## Test Plan
```
pytest -v -s -m meta_reference_eval_together_inference_huggingface_datasetio eval/test_eval.py
```
2024-11-13 10:58:12 -05:00
Dinesh Yeduguru
fdff24e77a
Inference to use provider resource id to register and validate (#428)
...
This PR changes the way a model id gets translated to the final model name
that gets passed through to the provider.
Major changes include:
1) Providers are responsible for registering an object and, as part of
registration, returning the object with the correct provider-specific
model name in provider_resource_id.
2) To help look a model up under its various names, a new ModelLookup
class is created.
Tested all inference providers including together, fireworks, vllm,
ollama, meta reference and bedrock.
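A minimal sketch of the registration contract in (1) and the lookup in (2), assuming illustrative class and method names rather than the actual llama-stack interfaces:
```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Model:
    identifier: str                      # stack-facing model id
    provider_resource_id: Optional[str]  # provider's own name for the model

class ModelLookup:
    """Hypothetical alias table from stack ids to provider-specific ids."""

    def __init__(self, aliases: Dict[str, str]):
        self._aliases = dict(aliases)

    def resolve(self, name: str) -> str:
        if name not in self._aliases:
            raise ValueError(f"unknown model: {name}")
        return self._aliases[name]

def register_model(lookup: ModelLookup, model: Model) -> Model:
    # Per point (1): the provider fills in its own resource id during
    # registration and returns the updated object.
    model.provider_resource_id = lookup.resolve(model.identifier)
    return model

lookup = ModelLookup({"meta-llama/Llama-3.1-8B-Instruct": "llama3.1-8b-instruct"})
model = register_model(lookup, Model("meta-llama/Llama-3.1-8B-Instruct", None))
print(model.provider_resource_id)  # llama3.1-8b-instruct
```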
2024-11-12 20:02:00 -08:00
Xi Yan
84c6fbbd93
fix tests after registration migration & rename meta-reference -> basic / llm_as_judge provider (#424)
...
* rename meta-reference -> basic
* config rename
* impl rename
* rename llm_as_judge, fix test
* util
* rebase
* naming fix
2024-11-12 10:35:44 -05:00
Dinesh Yeduguru
0a3b3d5fb6
migrate scoring fns to resource (#422)
...
* fix after rebase
* remove print
---------
Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
2024-11-11 17:28:48 -08:00
Xi Yan
b4416b72fd
Folder restructure for evals/datasets/scoring (#419)
...
* rename evals related stuff
* fix datasetio
* fix scoring test
* localfs -> LocalFS
* refactor scoring
* refactor scoring
* remove 8b_correctness scoring_fn from tests
* tests w/ eval params
* scoring fn braintrust fixture
* import
2024-11-11 17:35:40 -05:00