This PR does a few things:

- moves the "direct client" into the llama-stack repo instead of living in the llama-stack-client-python repo
- renames it to `LlamaStackLibraryClient`
- actually makes synchronous generators work
- makes streaming and non-streaming work properly

In many ways, this PR makes things finally "work".

## Test Plan

See the `library_client_test.py` I added. This isn't really quite a test yet, but it demonstrates that this mode now works. Here's the invocation and the response:

```
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct python llama_stack/distribution/tests/library_client_test.py ollama
```
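For reference, here is a minimal sketch of how the library client might be used in-process once this lands. The import path, the `initialize()` call, and the `chat_completion()` parameters are assumptions for illustration (based on the llama-stack-client surface), not taken verbatim from this PR; `library_client_test.py` is the authoritative example.

```python
# Sketch (assumed API surface): run a distribution in-process via the library
# client instead of talking to a separately running llama-stack server.
# Module path, initialize(), and chat_completion() signature are assumptions.
from llama_stack.distribution.library_client import LlamaStackLibraryClient  # assumed path

client = LlamaStackLibraryClient("ollama")  # same template name as the CLI invocation above
client.initialize()  # assumed: builds the stack (providers, routing) inside this process

# Non-streaming call
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response)

# Streaming call: with this PR, the synchronous generator can be iterated directly
for chunk in client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
):
    print(chunk)
```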