llama-stack-mirror/llama_stack

Latest commit: 4fa467731e by Ashwin Bharambe — Fix a bug in meta-reference inference when stream=False

    Also introduce a gross hack (to cover grosser(?) hack) to ensure
    non-stream requests don't send back responses in SSE format. Not sure
    which of these hacks is grosser.

2024-10-08 17:23:02 -07:00
apis          more memory related fixes; memory.client now works        2024-10-08 17:23:02 -07:00
cli           A few bug fixes for covering corner cases                 2024-10-08 17:23:02 -07:00
distribution  Fix a bug in meta-reference inference when stream=False   2024-10-08 17:23:02 -07:00
providers     Fix a bug in meta-reference inference when stream=False   2024-10-08 17:23:02 -07:00
scripts       Add a test for CLI, but not fully done so disabled        2024-09-19 13:27:07 -07:00
__init__.py   API Updates (#73)                                         2024-09-17 19:51:35 -07:00