Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 09:53:45 UTC)
* fix non-streaming api in inference server
* unit test for inline inference
* Added non-streaming ollama inference impl
* add streaming support for ollama inference with tests
* addressing comments

Co-authored-by: Hardik Shah <hjshah@fb.com>
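The commit above touches both the non-streaming and streaming paths for Ollama-backed inference. As a rough illustration of the difference (this is not the llama-stack code itself), the sketch below calls a local Ollama server's HTTP API directly; the endpoint, model tag, and helper names are assumptions made for the example.

```python
# Illustrative sketch only: non-streaming vs. streaming calls against a local
# Ollama server via its HTTP API. Not the llama-stack implementation; the
# model tag and URL below are assumptions.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "llama3"  # assumed model tag; use whatever model is pulled locally


def generate_non_streaming(prompt: str) -> str:
    """Non-streaming: the full completion arrives in a single JSON body."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]


def generate_streaming(prompt: str):
    """Streaming: Ollama returns newline-delimited JSON chunks as tokens arrive."""
    with requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": True},
        stream=True,
        timeout=120,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            if not chunk.get("done"):
                yield chunk.get("response", "")


if __name__ == "__main__":
    print(generate_non_streaming("Say hello in one sentence."))
    for token in generate_streaming("Count to five."):
        print(token, end="", flush=True)
    print()
```

Tests for such a change would typically exercise both paths: assert the non-streaming call returns a complete response object, and that the streaming call yields incremental chunks that concatenate to the full completion.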
Directory listing:

- cli/
- common/
- data/
- dataset/api/
- evaluations/api/
- inference/
- memory/
- models/api/
- post_training/api/
- reward_scoring/api/
- safety/
- synthetic_data_generation/api/
- __init__.py
- utils.py