llama-stack-mirror/llama_stack/providers/remote
Ashwin Bharambe 14f973a64f
Make LlamaStackLibraryClient work correctly (#581)
This PR does a few things:

- it moves the "direct client" into the llama-stack repo instead of the
llama-stack-client-python repo
- renames it to `LlamaStackLibraryClient`
- makes synchronous generators actually work
- makes both streaming and non-streaming responses work properly

In many ways, this PR finally makes things "work".
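One of the fixes above is making synchronous generators work, which for a library client typically means bridging the server's async streaming generators to a blocking iterator. The sketch below shows that general technique; the helper names are illustrative and are not the actual llama-stack implementation.

```python
import asyncio


def sync_generator(async_gen_factory):
    """Consume an async generator from synchronous code.

    Runs a private event loop and pulls one item at a time with
    __anext__(), so callers can use a plain for-loop. Hypothetical
    helper, assumed for illustration only.
    """
    loop = asyncio.new_event_loop()
    try:
        agen = async_gen_factory()
        while True:
            try:
                # Drive the async generator one step per iteration.
                yield loop.run_until_complete(agen.__anext__())
            except StopAsyncIteration:
                break
    finally:
        loop.close()


async def stream_chunks():
    """Stand-in for a streaming inference response."""
    for i in range(3):
        yield i


if __name__ == "__main__":
    for chunk in sync_generator(stream_chunks):
        print(chunk)
```

A dedicated loop per generator keeps the bridge independent of any loop the caller may already be running; a production client would also need to handle cleanup of a partially consumed generator (`aclose()`).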

## Test Plan

See `library_client_test.py`, added in this PR. It isn't quite a real test
yet, but it demonstrates that this mode now works. Here's the invocation
and the response:

```shell
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct python llama_stack/distribution/tests/library_client_test.py ollama
```


![image](https://github.com/user-attachments/assets/17d4e116-4457-4755-a14e-d9a668801fe0)
2024-12-07 14:59:36 -08:00
| Path | Last commit | Date |
| --- | --- | --- |
| agents | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |
| datasetio | Telemetry API redesign (#525) | 2024-12-04 11:22:45 -08:00 |
| inference | Make LlamaStackLibraryClient work correctly (#581) | 2024-12-07 14:59:36 -08:00 |
| memory | Fix opentelemetry adapter (#510) | 2024-11-22 18:18:11 -08:00 |
| safety | Remove the "ShieldType" concept (#430) | 2024-11-12 12:37:24 -08:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |