## What does this PR do?

This is a long-pending change and particularly important to get done now. Specifically:

- we cannot "localize" (aka download) any URLs from media attachments anywhere near our modeling code. It must be done within llama-stack.
- `PIL.Image` is infesting all our APIs via `ImageMedia -> InterleavedTextMedia`, and that cannot be right at all. Anything in the API surface must be "naturally serializable". We need a standard `{ type: "image", image_url: "<...>" }` shape, which is more extensible (a sketch of what this could look like follows the test plan).
- `UserMessage`, `SystemMessage`, etc. are moved completely to llama-stack from the llama-models repository.

See https://github.com/meta-llama/llama-models/pull/244 for the corresponding PR in llama-models.

## Test Plan

```bash
cd llama_stack/providers/tests

pytest -s -v -k "fireworks or ollama or together" inference/test_vision_inference.py

pytest -s -v -k "(fireworks or ollama or together) and llama_3b" inference/test_text_inference.py

pytest -s -v -k chroma memory/test_memory.py \
  --env EMBEDDING_DIMENSION=384 --env CHROMA_DB_PATH=/tmp/foobar

pytest -s -v -k fireworks agents/test_agents.py \
  --safety-shield=meta-llama/Llama-Guard-3-8B \
  --inference-model=meta-llama/Llama-3.1-8B-Instruct
```

Updated the client SDK (see PR ...), installed the SDK in the same environment, and then ran the SDK tests:

```bash
cd tests/client-sdk

LLAMA_STACK_CONFIG=together pytest -s -v agents/test_agents.py

LLAMA_STACK_CONFIG=ollama pytest -s -v memory/test_memory.py

# this one needed a bit of hacking in the run.yaml to ensure I could register the vision model correctly
INFERENCE_MODEL=llama3.2-vision:latest LLAMA_STACK_CONFIG=ollama pytest -s -v inference/test_inference.py
```
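For illustration, here is a minimal sketch of what a "naturally serializable" content item could look like, assuming pydantic models in the style llama-stack already uses. The class names (`ImageContentItem`, `TextContentItem`, `InterleavedContent`) and the exact field set are assumptions for this sketch, not the final API:

```python
# Minimal sketch only: class and field names here are assumptions
# for illustration, not the final llama-stack API surface.
from typing import Literal, Union

from pydantic import BaseModel


class ImageContentItem(BaseModel):
    # URL localization (downloading) happens inside llama-stack,
    # never anywhere near the modeling code.
    type: Literal["image"] = "image"
    image_url: str


class TextContentItem(BaseModel):
    type: Literal["text"] = "text"
    text: str


# interleaved content is then a plain union of serializable items
InterleavedContent = Union[str, ImageContentItem, TextContentItem]

# unlike a PIL.Image, this round-trips through JSON cleanly
item = ImageContentItem(image_url="https://example.com/cat.png")
print(item.model_dump_json())
# {"type":"image","image_url":"https://example.com/cat.png"}
```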
Python · 58 lines · 1.5 KiB
```python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

from enum import Enum
from typing import Any, Dict, List, Optional, Protocol, runtime_checkable

from llama_models.schema_utils import json_schema_type, webmethod
from pydantic import BaseModel, Field

from llama_stack.apis.inference import Message
from llama_stack.apis.shields import Shield
from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol


@json_schema_type
class ViolationLevel(Enum):
    INFO = "info"
    WARN = "warn"
    ERROR = "error"


@json_schema_type
class SafetyViolation(BaseModel):
    violation_level: ViolationLevel

    # what message should you convey to the user
    user_message: Optional[str] = None

    # additional metadata (including specific violation codes), more for
    # debugging and telemetry
    metadata: Dict[str, Any] = Field(default_factory=dict)


@json_schema_type
class RunShieldResponse(BaseModel):
    violation: Optional[SafetyViolation] = None


class ShieldStore(Protocol):
    async def get_shield(self, identifier: str) -> Shield: ...


@runtime_checkable
@trace_protocol
class Safety(Protocol):
    shield_store: ShieldStore

    @webmethod(route="/safety/run-shield")
    async def run_shield(
        self,
        shield_id: str,
        messages: List[Message],
        params: Optional[Dict[str, Any]] = None,
    ) -> RunShieldResponse: ...
```
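For context, a hedged sketch of how a caller might exercise this protocol. Only `Safety`, `run_shield`, `RunShieldResponse`, and `ViolationLevel` come from the file above; the `safety` object, the `moderate` helper, and the shield identifier (borrowed from the test plan) are assumptions for illustration:

```python
# Illustrative only: assumes some object implementing the Safety
# protocol above, and a Llama Guard shield registered under the
# identifier used in the test plan.
from llama_stack.apis.inference import UserMessage


async def moderate(safety: Safety, text: str) -> str:
    response: RunShieldResponse = await safety.run_shield(
        shield_id="meta-llama/Llama-Guard-3-8B",
        messages=[UserMessage(content=text)],
        params={},
    )
    violation = response.violation
    if violation is None:
        return text  # no violation reported; pass the input through
    if violation.violation_level == ViolationLevel.ERROR:
        # surface the user-facing message; specific violation codes
        # live in violation.metadata for debugging/telemetry
        raise ValueError(violation.user_message or "blocked by safety shield")
    return text  # INFO/WARN: log or annotate as appropriate
```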