llama-stack/llama_stack/distribution
Xi Yan 3c72c034e6
[remove import *] clean up import *'s (#689)
# What does this PR do?

- As the title says, clean up `import *` usage (a minimal before/after sketch follows this list)
- Upgrade tests to make them more robust to bad model outputs
- Remove `import *` in `llama_stack/apis/*` (skipping `__init__` modules)
<img width="465" alt="image"
src="https://github.com/user-attachments/assets/d8339c13-3b40-4ba5-9c53-0d2329726ee2"
/>

- Ran `sh run_openapi_generator.sh`; no types are affected
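
For illustration, a minimal before/after sketch of the cleanup (the imported names here are hypothetical, not taken from this PR's diff):

```python
# Before: a wildcard import pulls in every public name and hides
# which symbols the module actually depends on.
# from llama_stack.apis.inference import *

# After: import only the names the module actually uses.
from llama_stack.apis.inference import (
    ChatCompletionRequest,
    ChatCompletionResponse,
    Inference,
)
```

Explicit imports keep linters useful and make it obvious where each symbol comes from.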

## Test Plan

### Providers Tests

**agents**
```bash
pytest -v -s llama_stack/providers/tests/agents/test_agents.py -m "together" --safety-shield meta-llama/Llama-Guard-3-8B --inference-model meta-llama/Llama-3.1-405B-Instruct-FP8
```

**inference**
```bash
# meta-reference
torchrun $CONDA_PREFIX/bin/pytest -v -s -k "meta_reference" --inference-model="meta-llama/Llama-3.1-8B-Instruct" ./llama_stack/providers/tests/inference/test_text_inference.py
torchrun $CONDA_PREFIX/bin/pytest -v -s -k "meta_reference" --inference-model="meta-llama/Llama-3.2-11B-Vision-Instruct" ./llama_stack/providers/tests/inference/test_vision_inference.py

# together
pytest -v -s -k "together" --inference-model="meta-llama/Llama-3.1-8B-Instruct" ./llama_stack/providers/tests/inference/test_text_inference.py
pytest -v -s -k "together" --inference-model="meta-llama/Llama-3.2-11B-Vision-Instruct" ./llama_stack/providers/tests/inference/test_vision_inference.py

pytest ./llama_stack/providers/tests/inference/test_prompt_adapter.py 
```

**safety**
```bash
pytest -v -s llama_stack/providers/tests/safety/test_safety.py -m together --safety-shield meta-llama/Llama-Guard-3-8B
```

**memory**
```bash
pytest -v -s llama_stack/providers/tests/memory/test_memory.py -m "sentence_transformers" --env EMBEDDING_DIMENSION=384
```
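
The `EMBEDDING_DIMENSION=384` value matches the output width of the small sentence-transformers models commonly used as defaults. A quick sketch to confirm the dimension, assuming the widely used `all-MiniLM-L6-v2` checkpoint (the test plan does not name the model):

```python
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer

# all-MiniLM-L6-v2 emits 384-dimensional embeddings, matching the
# EMBEDDING_DIMENSION=384 passed to the memory tests above.
model = SentenceTransformer("all-MiniLM-L6-v2")
embedding = model.encode("hello world")
print(embedding.shape)  # (384,)
```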

**scoring**
```bash
pytest -v -s -m llm_as_judge_scoring_together_inference llama_stack/providers/tests/scoring/test_scoring.py --judge-model meta-llama/Llama-3.2-3B-Instruct
pytest -v -s -m basic_scoring_together_inference llama_stack/providers/tests/scoring/test_scoring.py
pytest -v -s -m braintrust_scoring_together_inference llama_stack/providers/tests/scoring/test_scoring.py
```

**datasetio**
```bash
pytest -v -s -m localfs llama_stack/providers/tests/datasetio/test_datasetio.py
pytest -v -s -m huggingface llama_stack/providers/tests/datasetio/test_datasetio.py
```

**eval**
```bash
pytest -v -s -m meta_reference_eval_together_inference llama_stack/providers/tests/eval/test_eval.py
pytest -v -s -m meta_reference_eval_together_inference_huggingface_datasetio llama_stack/providers/tests/eval/test_eval.py
```

### Client-SDK Tests
```bash
LLAMA_STACK_BASE_URL=http://localhost:5000 pytest -v ./tests/client-sdk
```
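
For reference, a minimal sketch of the connectivity check these SDK tests rely on, assuming the `llama-stack-client` Python package and the server address from `LLAMA_STACK_BASE_URL` above:

```python
# Requires: pip install llama-stack-client
from llama_stack_client import LlamaStackClient

# Point the client at the locally running stack server.
client = LlamaStackClient(base_url="http://localhost:5000")

# Listing registered models is a basic smoke test of the connection.
for model in client.models.list():
    print(model.identifier)
```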

### llama-stack-apps
```bash
PORT=5000
LOCALHOST=localhost

python -m examples.agents.hello $LOCALHOST $PORT
python -m examples.agents.inflation $LOCALHOST $PORT
python -m examples.agents.podcast_transcript $LOCALHOST $PORT
python -m examples.agents.rag_as_attachments $LOCALHOST $PORT
python -m examples.agents.rag_with_memory_bank $LOCALHOST $PORT
python -m examples.safety.llama_guard_demo_mm $LOCALHOST $PORT
python -m examples.agents.e2e_loop_with_custom_tools $LOCALHOST $PORT

# Vision model
python -m examples.interior_design_assistant.app
python -m examples.agent_store.app $LOCALHOST $PORT
```

### CLI
```bash
which llama
llama model prompt-format -m Llama3.2-11B-Vision-Instruct
llama model list
llama stack list-apis
llama stack list-providers inference

llama stack build --template ollama --image-type conda
```

### Distributions Tests
**ollama**
```bash
llama stack build --template ollama --image-type conda
ollama run llama3.2:1b-instruct-fp16
llama stack run ./llama_stack/templates/ollama/run.yaml --env INFERENCE_MODEL=meta-llama/Llama-3.2-1B-Instruct
```

**fireworks**
```bash
llama stack build --template fireworks --image-type conda
llama stack run ./llama_stack/templates/fireworks/run.yaml
```

**together**
```bash
llama stack build --template together --image-type conda
llama stack run ./llama_stack/templates/together/run.yaml
```

**tgi**
```bash
llama stack run ./llama_stack/templates/tgi/run.yaml --env TGI_URL=http://0.0.0.0:5009 --env INFERENCE_MODEL=meta-llama/Llama-3.1-8B-Instruct
```

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-27 15:45:44 -08:00
| Name | Last commit | Date |
|------|-------------|------|
| `routers` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `server` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `store` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `tests` | Fix bedrock inference impl | 2024-12-16 14:22:34 -08:00 |
| `ui` | model_type=llm for filtering available models for playground | 2024-12-17 19:42:38 -08:00 |
| `utils` | Ensure model_local_dir does not mangle "C:\" on Windows | 2024-11-24 14:18:59 -08:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `build.py` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `build_conda_env.sh` | Fix to conda env build script | 2024-12-17 12:19:34 -08:00 |
| `build_container.sh` | Make run yaml optional so dockers can start with just --env (#492) | 2024-11-20 13:11:40 -08:00 |
| `build_venv.sh` | Miscellaneous fixes around telemetry, library client and run yaml autogen | 2024-12-08 20:40:22 -08:00 |
| `client.py` | use API version in "remote" stack client | 2024-11-19 15:59:47 -08:00 |
| `common.sh` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `configure.py` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `configure_container.sh` | docker: Check for selinux before using --security-opt (#167) | 2024-10-02 10:37:41 -07:00 |
| `datatypes.py` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `distribution.py` | Tools API with brave and MCP providers (#639) | 2024-12-19 21:25:17 -08:00 |
| `inspect.py` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `library_client.py` | fix trace starting in library client (#655) | 2024-12-19 16:13:52 -08:00 |
| `request_headers.py` | fixes tests & move braintrust api_keys to request headers (#535) | 2024-11-26 13:11:21 -08:00 |
| `resolver.py` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `stack.py` | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| `start_conda_env.sh` | Move to use argparse, fix issues with multiple --env cmdline options | 2024-11-18 16:31:59 -08:00 |
| `start_container.sh` | Move to use argparse, fix issues with multiple --env cmdline options | 2024-11-18 16:31:59 -08:00 |