This PR does a few things:

- it moves the "direct client" into the llama-stack repo instead of living in the llama-stack-client-python repo
- renames it to `LlamaStackLibraryClient`
- actually makes synchronous generators work (a generic sketch of that pattern appears at the end of this note)
- makes streaming and non-streaming work properly

In many ways, this PR makes things finally "work".

## Test Plan

See the `library_client_test.py` I added. It isn't really quite a test yet, but it demonstrates that this mode now works. Here's the invocation and the response:

```
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct python llama_stack/distribution/tests/library_client_test.py ollama
```
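For context, this is roughly what library-client mode looks like from user code after this change. It is a sketch, not code from the PR: the import path, constructor argument, and `chat_completion` signature are assumptions modeled on the llama-stack-client API and the test invocation above.

```
import os

# Import path and constructor signature are assumptions; the PR only says the
# client now lives in the llama-stack repo as `LlamaStackLibraryClient`.
from llama_stack.distribution.library_client import LlamaStackLibraryClient

# The test invocation passes a distribution name ("ollama") and reads the
# model from the INFERENCE_MODEL environment variable, so this does the same.
client = LlamaStackLibraryClient("ollama")
client.initialize()

model = os.environ["INFERENCE_MODEL"]
messages = [{"role": "user", "content": "Hello!"}]

# Non-streaming: a single response object comes back.
response = client.inference.chat_completion(model_id=model, messages=messages)
print(response)

# Streaming: the call returns a plain synchronous generator of chunks,
# which is one of the behaviors this PR fixes.
for chunk in client.inference.chat_completion(
    model_id=model, messages=messages, stream=True
):
    print(chunk)
```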
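On the synchronous-generators point: the server stack is async, so streaming responses arrive as async generators, and exposing them to synchronous callers means driving an event loop from sync code. The PR's actual implementation isn't reproduced here; the helper below is a generic sketch of that pattern, with a hypothetical name.

```
import asyncio
from typing import AsyncGenerator, Generator, TypeVar

T = TypeVar("T")


# Hypothetical helper illustrating the general pattern: pump an async
# generator from synchronous code, yielding each item as it arrives, so
# callers can consume a streaming response with a plain `for` loop.
def sync_generator(async_gen: AsyncGenerator[T, None]) -> Generator[T, None, None]:
    loop = asyncio.new_event_loop()
    try:
        while True:
            try:
                yield loop.run_until_complete(async_gen.__anext__())
            except StopAsyncIteration:
                break
    finally:
        loop.close()
```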