API Updates: fleshing out RAG APIs, introduce "llama stack" CLI command (#51)
* add tools to chat completion request * use templates for generating system prompts * Moved ToolPromptFormat and jinja templates to llama_models.llama3.api * <WIP> memory changes - inlined AgenticSystemInstanceConfig so API feels more ergonomic - renamed it to AgentConfig, AgentInstance -> Agent - added a MemoryConfig and `memory` parameter - added `attachments` to input and `output_attachments` to the response - some naming changes * InterleavedTextAttachment -> InterleavedTextMedia, introduce memory tool * flesh out memory banks API * agentic loop has a RAG implementation * faiss provider implementation * memory client works * re-work tool definitions, fix FastAPI issues, fix tool regressions * fix agentic_system utils * basic RAG seems to work * small bug fixes for inline attachments * Refactor custom tool execution utilities * Bug fix, show memory retrieval steps in EventLogger * No need for api_key for Remote providers * add special unicode character ↵ to showcase newlines in model prompt templates * remove api.endpoints imports * combine datatypes.py and endpoints.py into api.py * Attachment / add TTL api * split batch_inference from inference * minor import fixes * use a single impl for ChatFormat.decode_assistant_mesage * use interleaved_text_media_as_str() utilityt * Fix api.datatypes imports * Add blobfile for tiktoken * Add ToolPromptFormat to ChatFormat.encode_message so that tools are encoded properly * templates take optional --format={json,function_tag} * Rag Updates * Add `api build` subcommand -- WIP * fix * build + run image seems to work * <WIP> adapters * bunch more work to make adapters work * api build works for conda now * ollama remote adapter works * Several smaller fixes to make adapters work Also, reorganized the pattern of __init__ inside providers so configuration can stay lightweight * llama distribution -> llama stack + containers (WIP) * All the new CLI for api + stack work * Make Fireworks and Together into the Adapter format * Some quick fixes to the CLI behavior to make it consistent * Updated README phew * Update cli_reference.md * llama_toolchain/distribution -> llama_toolchain/core * Add termcolor * update paths * Add a log just for consistency * chmod +x scripts * Fix api dependencies not getting added to configuration * missing import lol * Delete utils.py; move to agentic system * Support downloading of URLs for attachments for code interpreter * Simplify and generalize `llama api build` yay * Update `llama stack configure` to be very simple also * Fix stack start * Allow building an "adhoc" distribution * Remote `llama api []` subcommands * Fixes to llama stack commands and update docs * Update documentation again and add error messages to llama stack start * llama stack start -> llama stack run * Change name of build for less confusion * Add pyopenapi fork to the repository, update RFC assets * Remove conflicting annotation * Added a "--raw" option for model template printing --------- Co-authored-by: Hardik Shah <hjshah@fb.com> Co-authored-by: Ashwin Bharambe <ashwin@meta.com> Co-authored-by: Dalton Flanagan <6599399+dltn@users.noreply.github.com>
parent 35093c0b6f
commit 7bc7785b0d
141 changed files with 8252 additions and 4032 deletions
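The diff below adds a `_prompt_type` helper that coerces the `--format` string into a ToolPromptFormat enum member and converts the resulting ValueError into a clean argparse usage error. Here is a minimal, self-contained sketch of that pattern; the `Format` enum is a hypothetical stand-in for ToolPromptFormat, and unlike the commit (which declares `--format` as `type=str` and validates later in the run handler), this sketch plugs the validator directly into argparse's `type=` for brevity:

    # Sketch only: Format stands in for ToolPromptFormat (an assumption,
    # not the commit's actual code).
    import argparse
    from enum import Enum


    class Format(Enum):
        json = "json"
        function_tag = "function_tag"


    def format_type(value: str) -> Format:
        try:
            return Format(value.lower())
        except ValueError:
            # argparse renders ArgumentTypeError as a proper usage error
            raise argparse.ArgumentTypeError(
                f"{value} is not a valid Format. "
                f"Choose from {', '.join(f.value for f in Format)}"
            ) from None


    parser = argparse.ArgumentParser()
    parser.add_argument("--format", type=format_type, default=Format.json)
    args = parser.parse_args(["--format", "function_tag"])
    print(args.format)  # Format.function_tag

Raising ArgumentTypeError (rather than letting ValueError escape) keeps the error message and the list of valid choices in argparse's standard usage output.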
@@ -32,6 +32,16 @@ class ModelTemplate(Subcommand):
         self._add_arguments()
         self.parser.set_defaults(func=self._run_model_template_cmd)
 
+    def _prompt_type(self, value):
+        from llama_models.llama3.api.datatypes import ToolPromptFormat
+
+        try:
+            return ToolPromptFormat(value.lower())
+        except ValueError:
+            raise argparse.ArgumentTypeError(
+                f"{value} is not a valid ToolPromptFormat. Choose from {', '.join(t.value for t in ToolPromptFormat)}"
+            ) from None
+
     def _add_arguments(self):
         self.parser.add_argument(
             "-m",
@@ -46,6 +56,18 @@ class ModelTemplate(Subcommand):
             help="Usecase template name (system_message, user_message, assistant_message, tool_message)...",
             required=False,
         )
+        self.parser.add_argument(
+            "--format",
+            type=str,
+            help="ToolPromptFormat (json or function_tag). This flag is used to print the template in a specific formats.",
+            required=False,
+            default="json",
+        )
+        self.parser.add_argument(
+            "--raw",
+            action="store_true",
+            help="If set to true, don't pretty-print into a table. Useful to copy-paste.",
+        )
 
     def _run_model_template_cmd(self, args: argparse.Namespace) -> None:
         from llama_models.llama3.api.interface import (
@@ -56,22 +78,32 @@ class ModelTemplate(Subcommand):
         from llama_toolchain.cli.table import print_table
 
         if args.name:
-            template, tokens_info = render_jinja_template(args.name)
+            tool_prompt_format = self._prompt_type(args.format)
+            template, tokens_info = render_jinja_template(args.name, tool_prompt_format)
             rendered = ""
             for tok, is_special in tokens_info:
                 if is_special:
                     rendered += colored(tok, "yellow", attrs=["bold"])
                 else:
                     rendered += tok
             rendered += "\n"
-            print_table(
-                [
-                    ("Name", colored(template.template_name, "white", attrs=["bold"])),
-                    ("Template", rendered),
-                    ("Notes", template.notes),
-                ],
-                separate_rows=True,
-            )
+            if not args.raw:
+                rendered = rendered.replace("\n", "↵\n")
+                print_table(
+                    [
+                        (
+                            "Name",
+                            colored(template.template_name, "white", attrs=["bold"]),
+                        ),
+                        ("Template", rendered),
+                        ("Notes", template.notes),
+                    ],
+                    separate_rows=True,
+                )
+            else:
+                print("Template: ", template.template_name)
+                print("=" * 40)
+                print(rendered)
         else:
             templates = list_jinja_templates()
             headers = ["Role", "Template Name"]
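The `rendered.replace("\n", "↵\n")` line is what the "special unicode character ↵" bullet in the commit message referses to: every newline in the template stays a real newline (so the table layout survives) but is now preceded by a visible ↵ marker. A short sketch of that rendering loop under assumed inputs; the `tokens_info` pairs here are invented sample data standing in for what render_jinja_template returns, and a plain print() stands in for print_table:

    # Sketch only: tokens_info is made-up sample data, not real output.
    from termcolor import colored

    tokens_info = [
        ("<|begin_of_text|>", True),               # (token, is_special)
        ("You are a helpful assistant.\n", False),
        ("<|eot_id|>", True),
    ]

    raw = False  # mirrors the new --raw flag

    rendered = ""
    for tok, is_special in tokens_info:
        # special tokens are bolded in yellow so they stand out
        rendered += colored(tok, "yellow", attrs=["bold"]) if is_special else tok
    rendered += "\n"

    if not raw:
        # make each newline visible with the ↵ marker
        rendered = rendered.replace("\n", "↵\n")
    print(rendered)

With --raw the substitution is skipped entirely, which is why that path is useful for copy-pasting a template verbatim.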