* API Keys passed from Client instead of distro configuration
* delete distribution registry
* Rename the "package" word away
* Introduce a "Router" layer for providers

  Some providers need to be factored out and treated as thin routing layers on top of other providers. Consider two examples:

  - The inference API should be a routing layer over inference providers, routed using the "model" key
  - The memory banks API is another instance where various memory bank types will be provided by independent providers (e.g., a vector store is served by Chroma while a key-value memory can be served by Redis or PGVector)

  This commit introduces a generalized routing layer for this purpose (a minimal sketch follows below).
* update `apis_to_serve`
* llama_toolchain -> llama_stack
* Codemod from llama_toolchain -> llama_stack

  - added providers/registry
  - cleaned up api/ subdirectories and moved impls away
  - restructured api/api.py
  - `from llama_stack.apis.<api> import foo` should work now
  - update imports to do llama_stack.apis.<api>
  - update many other imports
  - added __init__, fixed some registry imports
  - updated registry imports
  - create_agentic_system -> create_agent
  - AgenticSystem -> Agent
* Moved some stuff out of common/; re-generated OpenAPI spec
* llama-toolchain -> llama-stack (hyphens)
* add control plane API
* add redis adapter + sqlite provider
* move core -> distribution
* Some more toolchain -> stack changes
* small naming shenanigans
* Removing custom tool and agent utilities and moving them client side
* Move control plane to distribution server for now
* Remove control plane from API list
* no codeshield dependency randomly plzzzzz
* Add "fire" as a dependency
* add back event loggers
* stack configure fixes
* use brave instead of bing in the example client
* add init file so it gets packaged
* add init files so it gets packaged
* Update MANIFEST
* bug fix

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Xi Yan <xiyan@meta.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
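Aside: a minimal sketch of the "Router" idea described in the commit message, not code from this repository; the names InferenceProvider, InferenceRouter, and chat_completion are illustrative assumptions rather than the actual llama_stack interfaces. The router owns no inference logic of its own; it only dispatches each call to the provider registered for the request's "model" key.

from typing import Dict, Protocol


class InferenceProvider(Protocol):
    # Stand-in for a concrete inference provider implementation.
    async def chat_completion(self, model: str, messages: list) -> dict: ...


class InferenceRouter:
    # Thin routing layer: holds a model -> provider map and delegates calls.
    def __init__(self, providers: Dict[str, InferenceProvider]):
        self.providers = providers

    async def chat_completion(self, model: str, messages: list) -> dict:
        if model not in self.providers:
            raise ValueError(f"No provider registered for model '{model}'")
        # No inference logic here; the registered provider does the work.
        return await self.providers[model].chat_completion(model=model, messages=messages)

The same pattern generalizes to memory banks: the routing key changes (bank type instead of model), but the layer itself remains a thin dispatch table.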
42 lines · 1.5 KiB · Python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

from llama_stack.apis.inference import (
    ChatCompletionResponseEventType,
    ChatCompletionResponseStreamChunk,
)
from termcolor import cprint


class LogEvent:
    """A single printable piece of output with its color and line ending."""

    def __init__(
        self,
        content: str = "",
        end: str = "\n",
        color="white",
    ):
        self.content = content
        self.color = color
        self.end = "\n" if end is None else end

    def print(self, flush=True):
        cprint(f"{self.content}", color=self.color, end=self.end, flush=flush)


class EventLogger:
    async def log(self, event_generator):
        """Turn a stream of inference responses into printable LogEvents."""
        async for chunk in event_generator:
            if isinstance(chunk, ChatCompletionResponseStreamChunk):
                # Streaming response: emit a prompt prefix on start, the text
                # delta on each progress event, and a newline on completion.
                event = chunk.event
                if event.event_type == ChatCompletionResponseEventType.start:
                    yield LogEvent("Assistant> ", color="cyan", end="")
                elif event.event_type == ChatCompletionResponseEventType.progress:
                    yield LogEvent(event.delta, color="yellow", end="")
                elif event.event_type == ChatCompletionResponseEventType.complete:
                    yield LogEvent("")
            else:
                # Non-streaming response: print the full completion message.
                yield LogEvent("Assistant> ", color="cyan", end="")
                yield LogEvent(chunk.completion_message.content, color="yellow")
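Usage sketch (not part of the file above; response_stream is an assumed name for any async generator yielding ChatCompletionResponseStreamChunk objects, e.g. a streaming chat_completion call): EventLogger.log() is itself an async generator, so callers iterate it with `async for` and print each yielded LogEvent.

import asyncio


async def print_stream(response_stream):
    # Each LogEvent carries its own color and line ending, so printing the
    # events in order reconstructs the assistant's reply incrementally.
    async for log_event in EventLogger().log(response_stream):
        log_event.print()


# asyncio.run(print_stream(response_stream))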