# What does this PR do?

This change adds a compact type for reporting metrics in API responses, in place of the full `MetricEvent`, which is relevant only for internal logging purposes. A sketch of what such a compact type might look like is shown after the test plan.

## Test Plan

```
LLAMA_STACK_CONFIG=~/.llama/distributions/fireworks/fireworks-run.yaml pytest -s -v agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B --text-model meta-llama/Llama-3.1-8B-Instruct

llama stack run ~/.llama/distributions/fireworks/fireworks-run.yaml

curl --request POST \
  --url http://localhost:8321/v1/inference/chat-completion \
  --header 'content-type: application/json' \
  --data '{
    "model_id": "meta-llama/Llama-3.1-70B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "where do humans live"
        }
      }
    ],
    "stream": false
  }'
```

Response:

```
{
  "metrics": [
    { "metric": "prompt_tokens", "value": 10, "unit": null },
    { "metric": "completion_tokens", "value": 522, "unit": null },
    { "metric": "total_tokens", "value": 532, "unit": null }
  ],
  "completion_message": {
    "role": "assistant",
    "content": "Humans live in various parts of the world...............",
    "stop_reason": "out_of_tokens",
    "tool_calls": []
  },
  "logprobs": null
}
```
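For illustration, here is a minimal sketch of a compact metric type matching the JSON shape in the response above. It assumes a Pydantic model; the class name `MetricInResponse` and the exact field types are assumptions for this sketch, not necessarily what this change uses:

```
from typing import Optional, Union

from pydantic import BaseModel


class MetricInResponse(BaseModel):
    """Compact metric entry embedded in API responses.

    Unlike the full MetricEvent used for internal logging, this carries
    only what a client needs: the metric name, its value, and an optional
    unit. NOTE: illustrative sketch; names and fields may differ from the
    actual change.
    """

    metric: str
    value: Union[int, float]
    unit: Optional[str] = None


# Matches the "prompt_tokens" entry in the response above.
m = MetricInResponse(metric="prompt_tokens", value=10)
print(m.model_dump())  # {'metric': 'prompt_tokens', 'value': 10, 'unit': None}
```

The design point is that the response payload stays small, while the fuller `MetricEvent` remains available for internal logging.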