llama-models should have extremely minimal cruft. Its sole purpose should be didactic -- show the simplest implementation of the llama models and document the prompt formats, etc.

This PR is the complement to https://github.com/meta-llama/llama-models/pull/279

## Test Plan

Ensure all `llama` CLI `model` sub-commands work:

```bash
llama model list
llama model download --model-id ...
llama model prompt-format -m ...
```

Ran tests:

```bash
cd tests/client-sdk
LLAMA_STACK_CONFIG=fireworks pytest -s -v inference/
LLAMA_STACK_CONFIG=fireworks pytest -s -v vector_io/
LLAMA_STACK_CONFIG=fireworks pytest -s -v agents/
```

Created a fresh venv with `uv venv && source .venv/bin/activate`, then ran `llama stack build --template fireworks --image-type venv` followed by `llama stack run together --image-type venv` <-- the server runs.

Also checked that the OpenAPI generator can run and that there is no change in the generated files as a result:

```bash
cd docs/openapi_generator
sh run_openapi_generator.sh
```
27 lines · 936 B · Python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

from typing import Any, Dict, Tuple, Type, TypeVar

T = TypeVar("T")


class SlotsMeta(type):
    def __new__(cls: Type[T], name: str, bases: Tuple[type, ...], ns: Dict[str, Any]) -> T:
        # caller may have already provided slots, in which case just retain them and keep going
        slots: Tuple[str, ...] = ns.get("__slots__", ())

        # add fields with type annotations to slots
        annotations: Dict[str, Any] = ns.get("__annotations__", {})
        members = tuple(member for member in annotations.keys() if member not in slots)

        # assign slots
        ns["__slots__"] = slots + tuple(members)
        return super().__new__(cls, name, bases, ns)  # type: ignore


class Slots(metaclass=SlotsMeta):
    pass
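For illustration, here is a minimal usage sketch (not part of the file above; the `Point` class and its fields are hypothetical): `SlotsMeta` collects the type annotations of a `Slots` subclass into `__slots__`, so instances only accept the declared attributes and carry no per-instance `__dict__`.

```python
# Hypothetical example, assuming the SlotsMeta/Slots definitions above are in scope.
class Point(Slots):
    x: int
    y: int


p = Point()
p.x, p.y = 1, 2            # annotated fields become slots, so assignment works
print(Point.__slots__)     # ('x', 'y')

try:
    p.z = 3                # 'z' was never annotated, so there is no slot for it
except AttributeError as err:
    print(err)
```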