* Fix the non-streaming API in the inference server
* Add a unit test for inline inference
* Add a non-streaming Ollama inference implementation
* Add streaming support for Ollama inference, with tests
* Address review comments
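The streaming vs. non-streaming distinction above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: it assumes Ollama's default local `/api/generate` endpoint, where `"stream": false` yields one JSON body and `"stream": true` yields newline-delimited JSON chunks with `response` and `done` fields.

```python
import json

# Assumed default local Ollama endpoint (not taken from this PR).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str, stream: bool) -> dict:
    # The "stream" flag selects between a single JSON response (False)
    # and newline-delimited JSON chunks (True).
    return {"model": model, "prompt": prompt, "stream": stream}


def collect_stream(lines) -> str:
    # Accumulate the "response" text of each NDJSON chunk until a chunk
    # reports "done": true, then return the concatenated completion.
    parts = []
    for line in lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)
```

In practice the chunk lines would come from an HTTP client, e.g. iterating `resp.iter_lines()` on a `requests.post(OLLAMA_URL, json=build_request(...), stream=True)` response.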
---------
Co-authored-by: Hardik Shah <hjshah@fb.com>