* Add distribution CLI scaffolding
* More progress towards `llama distribution install`
* getting closer to a distro definition, distro install + configure works
* Distribution server now functioning
* read existing configuration, save enums properly
* Remove inference uvicorn server entrypoint and llama inference CLI command
* updated dependency and client model name
* Improved exception handling
* local imports for faster cli
* undo a typo, add a passthrough distribution
* implement full-passthrough in the server
* add safety adapters, configuration handling, server + clients
* cleanup, moving stuff to common, nuke utils
* Add a Path() wrapper at the earliest place
* fixes
* Bring agentic system api to toolchain; add adapter dependencies and resolve adapters using a topological sort (see the sketch after this list)
* refactor to reduce size of `agentic_system`
* move straggler files and fix some important existing bugs
* ApiSurface -> Api
* refactor a method out
* Adapter -> Provider
* Make each inference provider into its own subdirectory
* installation fixes
* Rename Distribution -> DistributionSpec, simplify RemoteProviders
* dict key instead of attr
* update inference config to take model and not model_dir
* Fix passthrough streaming, send headers properly not part of body :facepalm
* update safety to use model sku ids and not model dirs
* Update cli_reference.md
* minor fixes
* add DistributionConfig, fix a bug in model download
* Make install + start scripts do proper configuration automatically
* Update CLI_reference
* Nuke fp8_requirements, fold fbgemm into common requirements
* Update README, add newline between API surface configurations
* Refactor download functionality out of the Command so it can be reused
* Add `llama model download` alias for `llama download`
* Show message about checksum file so users can check themselves
* Simpler intro statements
* get ollama working
* Reduce a bunch of dependencies from toolchain; some improvements to the distribution install script
* Avoid using `conda run` since it buffers everything
* update dependencies and rely on LLAMA_TOOLCHAIN_DIR for dev purposes
* add validation for configuration input
* resort imports
* make optional subclasses default to yes for configuration
* Remove additional_pip_packages; move deps to providers
* for inline make 8b model the default
* Add scripts to MANIFEST
* allow installing from test.pypi.org
* Fix #2 to help with testing packages
* Must install llama-models at that same version first
* fix PIP_ARGS

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Hardik Shah <hjshah@meta.com>
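A minimal sketch of the dependency-ordered provider resolution mentioned in the list above, using Kahn's topological sort. The provider names and the plain dict dependency graph are illustrative assumptions, not the actual llama-stack data structures:

from collections import deque
from typing import Dict, List


def resolve_order(deps: Dict[str, List[str]]) -> List[str]:
    # Order providers so every provider comes after all of its dependencies.
    # deps maps provider name -> names of providers it depends on (assumed shape).
    indegree = {p: len(requires) for p, requires in deps.items()}
    dependents: Dict[str, List[str]] = {p: [] for p in deps}
    for p, requires in deps.items():
        for d in requires:
            dependents[d].append(p)

    ready = deque(p for p, n in indegree.items() if n == 0)
    order: List[str] = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for q in dependents[p]:
            indegree[q] -= 1
            if indegree[q] == 0:
                ready.append(q)

    if len(order) != len(deps):
        raise ValueError("cyclic dependency among providers")
    return order


# Hypothetical example: safety needs inference; agentic_system needs both.
print(resolve_order({
    "inference": [],
    "safety": ["inference"],
    "agentic_system": ["inference", "safety"],
}))
# -> ['inference', 'safety', 'agentic_system'] (one valid order)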
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

from typing import List, Optional

from llama_models.sku_list import CoreModelId, safety_models

from pydantic import BaseModel, validator


class LlamaGuardShieldConfig(BaseModel):
    # Configuration for the Llama Guard shield; only Llama-Guard-3-8B SKUs
    # are accepted (enforced by the validator below).
    model: str = "Llama-Guard-3-8B"
    excluded_categories: List[str] = []
    disable_input_check: bool = False
    disable_output_check: bool = False

    @validator("model")
    @classmethod
    def validate_model(cls, model: str) -> str:
        # Permit only descriptors of safety models carrying the
        # Llama Guard 3 8B core model id.
        permitted_models = [
            m.descriptor()
            for m in safety_models()
            if m.core_model_id == CoreModelId.llama_guard_3_8b
        ]
        if model not in permitted_models:
            raise ValueError(
                f"Invalid model: {model}. Must be one of {permitted_models}"
            )
        return model


class PromptGuardShieldConfig(BaseModel):
    # Configuration for the Prompt Guard shield; only the Prompt-Guard-86M
    # SKU is accepted.
    model: str = "Prompt-Guard-86M"

    @validator("model")
    @classmethod
    def validate_model(cls, model: str) -> str:
        permitted_models = [
            m.descriptor()
            for m in safety_models()
            if m.core_model_id == CoreModelId.prompt_guard_86m
        ]
        if model not in permitted_models:
            raise ValueError(
                f"Invalid model: {model}. Must be one of {permitted_models}"
            )
        return model


class SafetyConfig(BaseModel):
    # Top-level safety configuration; each shield is optional and disabled
    # when left as None.
    llama_guard_shield: Optional[LlamaGuardShieldConfig] = None
    prompt_guard_shield: Optional[PromptGuardShieldConfig] = None
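
A minimal usage sketch of these config models, for reference. It assumes pydantic v1 (matching the `validator` import above) and that `llama-models` is installed so `safety_models()` can enumerate permitted SKUs; the `safety_config` module name is hypothetical.

# Usage sketch (assumptions: pydantic v1, llama-models installed, and the
# file above importable as `safety_config` -- the module name is hypothetical).
from safety_config import (
    LlamaGuardShieldConfig,
    PromptGuardShieldConfig,
    SafetyConfig,
)

cfg = SafetyConfig(
    llama_guard_shield=LlamaGuardShieldConfig(
        model="Llama-Guard-3-8B",  # must match a permitted SKU descriptor
        excluded_categories=[],
        disable_input_check=False,
        disable_output_check=False,
    ),
    prompt_guard_shield=PromptGuardShieldConfig(model="Prompt-Guard-86M"),
)
print(cfg.json(indent=2))  # pydantic v1 serialization

# An unknown model descriptor is rejected by the @validator; pydantic v1's
# ValidationError subclasses ValueError, so this catch works.
try:
    LlamaGuardShieldConfig(model="not-a-guard-model")
except ValueError as err:
    print(err)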