llama-stack/llama_stack/providers/remote/inference
Ashwin Bharambe 14f973a64f
Make LlamaStackLibraryClient work correctly (#581)
This PR does a few things:

- moves the "direct client" into the llama-stack repo instead of keeping it in the llama-stack-client-python repo
- renames it to `LlamaStackLibraryClient`
- actually makes synchronous generators work
- makes both streaming and non-streaming inference work properly

In many ways, this PR makes things finally "work"
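
Below is a minimal usage sketch of the library-client mode this PR enables. The import path, the `initialize()` call, and the `chat_completion` signature are assumptions for illustration only; see `library_client_test.py` in the Test Plan for the actual usage.

```python
# Illustrative sketch only: the import path, class name, and method
# signatures below are assumptions based on this PR's description, not
# a verbatim copy of the library API.
from llama_stack.distribution.library_client import LlamaStackLibraryClient

# Build the stack in-process from a distribution template (e.g. "ollama").
client = LlamaStackLibraryClient("ollama")
client.initialize()

model_id = "meta-llama/Llama-3.2-3B-Instruct"
messages = [{"role": "user", "content": "Hello!"}]

# Non-streaming: returns a single, complete response object.
response = client.inference.chat_completion(
    model_id=model_id,
    messages=messages,
    stream=False,
)
print(response)

# Streaming: consumed as a plain synchronous generator, chunk by chunk.
for chunk in client.inference.chat_completion(
    model_id=model_id,
    messages=messages,
    stream=True,
):
    print(chunk)
```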

## Test Plan

See `library_client_test.py`, which I added. It isn't quite a real test yet, but it
demonstrates that this mode now works. Here's the invocation and the response:

```
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct python llama_stack/distribution/tests/library_client_test.py ollama
```


![image](https://github.com/user-attachments/assets/17d4e116-4457-4755-a14e-d9a668801fe0)
2024-12-07 14:59:36 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| bedrock | Update more distribution docs to be simpler and partially codegen'ed | 2024-11-20 22:03:44 -08:00 |
| cerebras | Cerebras Inference Integration (#265) | 2024-12-03 21:15:32 -08:00 |
| databricks | Inference to use provider resource id to register and validate (#428) | 2024-11-12 20:02:00 -08:00 |
| fireworks | fix 3.2-1b fireworks | 2024-11-19 14:20:07 -08:00 |
| nvidia | allow env NVIDIA_BASE_URL to set NVIDIAConfig.url (#531) | 2024-11-26 17:46:44 -08:00 |
| ollama | Make LlamaStackLibraryClient work correctly (#581) | 2024-12-07 14:59:36 -08:00 |
| sample | migrate model to Resource and new registration signature (#410) | 2024-11-08 16:12:57 -08:00 |
| tgi | Tgi fixture (#519) | 2024-11-25 13:17:02 -08:00 |
| together | fix llama stack build for together & llama stack build from templates (#479) | 2024-11-18 22:29:16 -08:00 |
| vllm | use logging instead of prints (#499) | 2024-11-21 11:32:53 -08:00 |
| `__init__.py` | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |