llama-stack-mirror/llama_stack
Ashwin Bharambe (commit e84d4436b5)
Since we are pushing for HF repos, we should accept them in inference configs (#497)
# What does this PR do?

As the title says. 

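With this change, an inference provider config can reference a model by its Hugging Face repo id in addition to the native Llama model descriptor. A minimal sketch of what an accepting config might look like (the provider id, field names, and layout below are illustrative assumptions, not the exact schema):

```yaml
# Hypothetical run.yaml fragment -- provider id and field names are
# illustrative. The point of this PR is that the model reference may
# now be an HF repo id, not only a Llama model descriptor.
providers:
  inference:
    - provider_id: meta-reference
      provider_type: inline::meta-reference
      config:
        # Previously something like a bare Llama descriptor
        # (e.g. "Llama3.2-3B-Instruct") was required; an HF-style
        # "org/repo" id is now accepted as well.
        model: meta-llama/Llama-3.2-3B-Instruct
```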
## Test Plan

This needs commit
8752149f58
to land as well, so the next package release (0.0.54) will make this work properly.

The test is:

```bash
pytest -v -s -m "llama_3b and meta_reference" test_model_registration.py
```
2024-11-20 16:14:37 -08:00
| Path | Last commit | Date |
|---|---|---|
| apis | Support Tavily as built-in search tool. (#485) | 2024-11-19 20:59:02 -08:00 |
| cli | Don't depend on templates.py when print llama stack build messages (#496) | 2024-11-20 15:44:49 -08:00 |
| distribution | Make run yaml optional so dockers can start with just --env (#492) | 2024-11-20 13:11:40 -08:00 |
| providers | Since we are pushing for HF repos, we should accept them in inference configs (#497) | 2024-11-20 16:14:37 -08:00 |
| scripts | make sure codegen doesn't cause spurious diffs for no reason | 2024-11-20 13:56:30 -08:00 |
| templates | make sure codegen doesn't cause spurious diffs for no reason | 2024-11-20 13:56:30 -08:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |