## What does this PR do?

This is a long-pending change and particularly important to get done now. Specifically:

- we cannot "localize" (aka download) any URLs from media attachments anywhere near our modeling code; it must be done within llama-stack (see the localization sketch below).
- `PIL.Image` is infesting all our APIs via `ImageMedia -> InterleavedTextMedia`, and that cannot be right at all. Anything in the API surface must be "naturally serializable". We need a standard `{ type: "image", image_url: "<...>" }`, which is more extensible (see the content-item sketch below).
- `UserMessage`, `SystemMessage`, etc. are moved completely to llama-stack from the llama-models repository. See https://github.com/meta-llama/llama-models/pull/244 for the corresponding PR in llama-models.

## Test Plan

```bash
cd llama_stack/providers/tests

pytest -s -v -k "fireworks or ollama or together" inference/test_vision_inference.py

pytest -s -v -k "(fireworks or ollama or together) and llama_3b" inference/test_text_inference.py

pytest -s -v -k chroma memory/test_memory.py \
  --env EMBEDDING_DIMENSION=384 --env CHROMA_DB_PATH=/tmp/foobar

pytest -s -v -k fireworks agents/test_agents.py \
  --safety-shield=meta-llama/Llama-Guard-3-8B \
  --inference-model=meta-llama/Llama-3.1-8B-Instruct
```

Updated the client SDK (see PR ...), installed the SDK in the same environment, and then ran the SDK tests:

```bash
cd tests/client-sdk
LLAMA_STACK_CONFIG=together pytest -s -v agents/test_agents.py
LLAMA_STACK_CONFIG=ollama pytest -s -v memory/test_memory.py

# this one needed a bit of hacking in the run.yaml to ensure I could register the vision model correctly
INFERENCE_MODEL=llama3.2-vision:latest LLAMA_STACK_CONFIG=ollama pytest -s -v inference/test_inference.py
```
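For illustration, here is a minimal sketch of what such a "naturally serializable" content item could look like as pydantic models. The class names (`ImageContentItem`, `TextContentItem`, `InterleavedContentItem`) and exact fields are assumptions for this sketch, not necessarily the types this PR introduces:

```python
# Hypothetical sketch of the serializable content shape described above;
# the actual type names and fields in llama-stack may differ.
from typing import List, Literal, Union

from pydantic import BaseModel


class ImageContentItem(BaseModel):
    type: Literal["image"] = "image"
    image_url: str  # only a URL; downloading happens inside llama-stack


class TextContentItem(BaseModel):
    type: Literal["text"] = "text"
    text: str


InterleavedContentItem = Union[ImageContentItem, TextContentItem]


class UserMessage(BaseModel):
    role: Literal["user"] = "user"
    content: Union[str, List[InterleavedContentItem]]


# A message like this round-trips through JSON with no PIL.Image in sight:
msg = UserMessage(
    content=[
        TextContentItem(text="What is in this image?"),
        ImageContentItem(image_url="https://example.com/cat.png"),
    ]
)
print(msg.model_dump_json())
```

Because every item is plain data, the whole message serializes cleanly, which is exactly what keeps `PIL.Image` out of the API surface.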
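And a hedged sketch of the first bullet, downloading ("localizing") a media URL inside llama-stack rather than anywhere near modeling code. `localize_image` is a hypothetical helper using httpx, not the PR's actual implementation:

```python
# Hypothetical sketch of "localizing" an image URL inside llama-stack;
# the real download helper in the PR may look different.
import httpx


async def localize_image(image_url: str) -> bytes:
    # The download happens here, inside the stack -- never in modeling code.
    async with httpx.AsyncClient() as client:
        response = await client.get(image_url)
        response.raise_for_status()
        return response.content
```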
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import asyncio
import logging

from typing import List, Optional

from llama_stack.apis.safety import *  # noqa: F403

log = logging.getLogger(__name__)


class SafetyException(Exception):  # noqa: N818
    """Raised when a shield reports an ERROR-level safety violation."""

    def __init__(self, violation: SafetyViolation):
        self.violation = violation
        super().__init__(violation.user_message)


class ShieldRunnerMixin:
    """Runs a configured set of safety shields over a list of messages."""

    def __init__(
        self,
        safety_api: Safety,
        input_shields: Optional[List[str]] = None,
        output_shields: Optional[List[str]] = None,
    ):
        self.safety_api = safety_api
        self.input_shields = input_shields
        self.output_shields = output_shields

    async def run_multiple_shields(
        self, messages: List[Message], identifiers: List[str]
    ) -> None:
        # Run all shields concurrently.
        responses = await asyncio.gather(
            *[
                self.safety_api.run_shield(
                    shield_id=identifier,
                    messages=messages,
                )
                for identifier in identifiers
            ]
        )
        for identifier, response in zip(identifiers, responses):
            if not response.violation:
                continue

            violation = response.violation
            if violation.violation_level == ViolationLevel.ERROR:
                # ERROR-level violations abort the request.
                raise SafetyException(violation)
            elif violation.violation_level == ViolationLevel.WARN:
                # WARN-level violations are logged but do not block.
                log.warning(f"[Warn] {identifier} raised a warning")
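As a usage note, here is a hypothetical sketch of how a provider class might use this mixin. `MyAgent` is illustrative, not an actual llama-stack class; the shield identifier is borrowed from the test plan above:

```python
# Hypothetical usage sketch of ShieldRunnerMixin; MyAgent is made up
# for illustration.
class MyAgent(ShieldRunnerMixin):
    def __init__(self, safety_api: Safety):
        ShieldRunnerMixin.__init__(
            self,
            safety_api,
            input_shields=["meta-llama/Llama-Guard-3-8B"],
            output_shields=["meta-llama/Llama-Guard-3-8B"],
        )

    async def process_turn(self, messages: List[Message]) -> None:
        # Raises SafetyException on any ERROR-level violation;
        # WARN-level violations are only logged.
        await self.run_multiple_shields(messages, self.input_shields)
        # ... run inference, then run the output shields on the result ...
```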