* add tools to chat completion request
* use templates for generating system prompts
* Moved ToolPromptFormat and jinja templates to llama_models.llama3.api
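  A rough sketch of the template idea, using an illustrative Jinja template string (the real templates live in `llama_models.llama3.api`; the prompt fragments here just mirror the strings the tests further below assert on):

  ```python
  from jinja2 import Template

  # Illustrative only -- not the actual template shipped in llama_models.
  # Renders a system prompt from the builtin tools attached to a request.
  SYSTEM_PROMPT = Template(
      "{% if builtin_tools %}"
      "Environment: ipython\n"
      "Tools: {{ builtin_tools | join(', ') }}\n"
      "{% endif %}"
      "Cutting Knowledge Date: December 2023"
  )

  print(SYSTEM_PROMPT.render(builtin_tools=["brave_search"]))
  ```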
* <WIP> memory changes
- inlined AgenticSystemInstanceConfig so the API feels more ergonomic
- renamed it to AgentConfig, AgentInstance -> Agent
- added a MemoryConfig and `memory` parameter
- added `attachments` to input and `output_attachments` to the response
- some naming changes
* InterleavedTextAttachment -> InterleavedTextMedia, introduce memory tool
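  To make the renames concrete, hypothetical stand-ins for the reshaped config (field names here are guesses, not the actual datatypes):

  ```python
  from dataclasses import dataclass, field
  from typing import List, Optional


  # Hypothetical shapes mirroring the renames above:
  # AgenticSystemInstanceConfig -> AgentConfig, plus the new MemoryConfig.
  @dataclass
  class MemoryConfig:
      memory_bank_ids: List[str] = field(default_factory=list)


  @dataclass
  class AgentConfig:
      model: str
      instructions: str
      memory: Optional[MemoryConfig] = None  # the new `memory` parameter


  config = AgentConfig(
      model="Meta-Llama3.1-8B-Instruct",
      instructions="You are a helpful assistant.",
      memory=MemoryConfig(memory_bank_ids=["bank-0"]),
  )
  ```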
* flesh out memory banks API
* agentic loop has a RAG implementation
* faiss provider implementation
* memory client works
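  At its core, a faiss-backed memory bank indexes embedding vectors and retrieves nearest neighbors for a query. A toy sketch, with random vectors standing in for a real embedding model:

  ```python
  import faiss  # assumes the faiss-cpu package
  import numpy as np

  dim = 8
  documents = ["first memory bank document", "second memory bank document"]

  # Random stand-ins; the real provider embeds document chunks first.
  embeddings = np.random.rand(len(documents), dim).astype("float32")

  index = faiss.IndexFlatL2(dim)
  index.add(embeddings)

  query = np.random.rand(1, dim).astype("float32")
  distances, ids = index.search(query, 1)
  print(documents[ids[0][0]], distances[0][0])
  ```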
* re-work tool definitions, fix FastAPI issues, fix tool regressions
* fix agentic_system utils
* basic RAG seems to work
* small bug fixes for inline attachments
* Refactor custom tool execution utilities
* Bug fix, show memory retrieval steps in EventLogger
* No need for api_key for Remote providers
* add special unicode character ↵ to showcase newlines in model prompt templates
* remove api.endpoints imports
* combine datatypes.py and endpoints.py into api.py
* Attachment / add TTL api
* split batch_inference from inference
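  The split can be pictured as two separate provider interfaces; the Protocols below are purely illustrative, not the actual API definitions:

  ```python
  from typing import List, Protocol


  class Inference(Protocol):
      async def chat_completion(self, messages: List[dict]) -> dict:
          """One conversation in, one completion out."""
          ...


  class BatchInference(Protocol):
      async def batch_chat_completion(self, batches: List[List[dict]]) -> List[dict]:
          """Many conversations in a single call, for offline/bulk workloads."""
          ...
  ```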
* minor import fixes
* use a single impl for ChatFormat.decode_assistant_message
* use interleaved_text_media_as_str() utility
* Fix api.datatypes imports
* Add blobfile for tiktoken
* Add ToolPromptFormat to ChatFormat.encode_message so that tools are encoded properly
* templates take optional --format={json,function_tag}
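  To show what the format switch changes, an illustrative renderer for the two formats (not the implementation in `ChatFormat`; the `<function=...>` style mirrors Llama 3.1's function_tag convention):

  ```python
  import json
  from enum import Enum


  class ToolPromptFormat(Enum):
      json = "json"
      function_tag = "function_tag"


  def render_tool_call(name: str, arguments: dict, fmt: ToolPromptFormat) -> str:
      # Illustrative only: the same tool call serialized two ways.
      if fmt == ToolPromptFormat.json:
          return json.dumps({"type": "function", "name": name, "parameters": arguments})
      return f"<function={name}>{json.dumps(arguments)}</function>"


  print(render_tool_call("custom1", {"param1": "value"}, ToolPromptFormat.json))
  print(render_tool_call("custom1", {"param1": "value"}, ToolPromptFormat.function_tag))
  ```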
* RAG updates
* Add `api build` subcommand -- WIP
* fix
* build + run image seems to work
* <WIP> adapters
* bunch more work to make adapters work
* api build works for conda now
* ollama remote adapter works
* Several smaller fixes to make adapters work
Also, reorganized the pattern of __init__ inside providers so
configuration can stay lightweight
* llama distribution -> llama stack + containers (WIP)
* All the new CLI for api + stack work
* Make Fireworks and Together into the Adapter format
* Some quick fixes to the CLI behavior to make it consistent
* Updated README phew
* Update cli_reference.md
* llama_toolchain/distribution -> llama_toolchain/core
* Add termcolor
* update paths
* Add a log just for consistency
* chmod +x scripts
* Fix api dependencies not getting added to configuration
* missing import lol
* Delete utils.py; move to agentic system
* Support downloading of URLs for attachments for code interpreter
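  Roughly the idea, as a hypothetical helper (not the code that landed): when an attachment's content is a URL, fetch it to a local file the code interpreter can read:

  ```python
  import urllib.request
  from tempfile import NamedTemporaryFile


  def materialize_attachment(content: str) -> str:
      # Hypothetical helper: download URL attachments to a local path.
      if content.startswith(("http://", "https://")):
          with NamedTemporaryFile(delete=False) as f:
              with urllib.request.urlopen(content) as resp:
                  f.write(resp.read())
              return f.name
      return content  # already inline text or a local path
  ```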
* Simplify and generalize `llama api build` yay
* Update `llama stack configure` to be very simple also
* Fix stack start
* Allow building an "adhoc" distribution
* Remove `llama api []` subcommands
* Fixes to llama stack commands and update docs
* Update documentation again and add error messages to llama stack start
* llama stack start -> llama stack run
* Change name of build for less confusion
* Add pyopenapi fork to the repository, update RFC assets
* Remove conflicting annotation
* Added a "--raw" option for model template printing
---------
Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Dalton Flanagan <6599399+dltn@users.noreply.github.com>
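The change also adds a unit-test module for `prepare_messages`, covering the default system prompt, builtin tools only, custom tools only, both combined, and a user-provided system message: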
import unittest

from llama_models.llama3.api import *  # noqa: F403
from llama_toolchain.inference.api import *  # noqa: F403
from llama_toolchain.inference.prepare_messages import prepare_messages

MODEL = "Meta-Llama3.1-8B-Instruct"


class PrepareMessagesTests(unittest.IsolatedAsyncioTestCase):
    async def test_system_default(self):
        content = "Hello !"
        request = ChatCompletionRequest(
            model=MODEL,
            messages=[
                UserMessage(content=content),
            ],
        )
        messages = prepare_messages(request)
        self.assertEqual(len(messages), 2)
        self.assertEqual(messages[-1].content, content)
        self.assertTrue("Cutting Knowledge Date: December 2023" in messages[0].content)

    async def test_system_builtin_only(self):
        content = "Hello !"
        request = ChatCompletionRequest(
            model=MODEL,
            messages=[
                UserMessage(content=content),
            ],
            tools=[
                ToolDefinition(tool_name=BuiltinTool.code_interpreter),
                ToolDefinition(tool_name=BuiltinTool.brave_search),
            ],
        )
        messages = prepare_messages(request)
        self.assertEqual(len(messages), 2)
        self.assertEqual(messages[-1].content, content)
        self.assertTrue("Cutting Knowledge Date: December 2023" in messages[0].content)
        self.assertTrue("Tools: brave_search" in messages[0].content)

    async def test_system_custom_only(self):
        content = "Hello !"
        request = ChatCompletionRequest(
            model=MODEL,
            messages=[
                UserMessage(content=content),
            ],
            tools=[
                ToolDefinition(
                    tool_name="custom1",
                    description="custom1 tool",
                    parameters={
                        "param1": ToolParamDefinition(
                            param_type="str",
                            description="param1 description",
                            required=True,
                        ),
                    },
                )
            ],
            tool_prompt_format=ToolPromptFormat.json,
        )
        messages = prepare_messages(request)
        self.assertEqual(len(messages), 3)
        self.assertTrue("Environment: ipython" in messages[0].content)

        self.assertTrue("Return function calls in JSON format" in messages[1].content)
        self.assertEqual(messages[-1].content, content)

    async def test_system_custom_and_builtin(self):
        content = "Hello !"
        request = ChatCompletionRequest(
            model=MODEL,
            messages=[
                UserMessage(content=content),
            ],
            tools=[
                ToolDefinition(tool_name=BuiltinTool.code_interpreter),
                ToolDefinition(tool_name=BuiltinTool.brave_search),
                ToolDefinition(
                    tool_name="custom1",
                    description="custom1 tool",
                    parameters={
                        "param1": ToolParamDefinition(
                            param_type="str",
                            description="param1 description",
                            required=True,
                        ),
                    },
                ),
            ],
        )
        messages = prepare_messages(request)
        self.assertEqual(len(messages), 3)

        self.assertTrue("Environment: ipython" in messages[0].content)
        self.assertTrue("Tools: brave_search" in messages[0].content)

        self.assertTrue("Return function calls in JSON format" in messages[1].content)
        self.assertEqual(messages[-1].content, content)

    async def test_user_provided_system_message(self):
        content = "Hello !"
        system_prompt = "You are a pirate"
        request = ChatCompletionRequest(
            model=MODEL,
            messages=[
                SystemMessage(content=system_prompt),
                UserMessage(content=content),
            ],
            tools=[
                ToolDefinition(tool_name=BuiltinTool.code_interpreter),
            ],
        )
        messages = prepare_messages(request)
        self.assertEqual(len(messages), 2, messages)
        self.assertTrue(messages[0].content.endswith(system_prompt))

        self.assertEqual(messages[-1].content, content)