Codemod from llama_toolchain -> llama_stack
- added providers/registry
- cleaned up api/ subdirectories and moved impls away
- restructured api/api.py
- from llama_stack.apis.<api> import foo should work now
- update imports to do llama_stack.apis.<api>
- update many other imports
- added __init__, fixed some registry imports
- updated registry imports
- create_agentic_system -> create_agent
- AgenticSystem -> Agent
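The message describes a mechanical rename across the tree. As a rough illustration only, a regex-based codemod along the following lines could perform the renames listed above; this sketch is an assumption, not the tool actually used, and it ignores edge cases such as occurrences inside strings or docs:

```python
# Illustrative sketch of a simple import/identifier codemod.
# NOT the script used in this commit; rename pairs are taken
# from the commit message above.
import re
from pathlib import Path

RENAMES = [
    (r"\bllama_toolchain\b", "llama_stack"),
    (r"\bcreate_agentic_system\b", "create_agent"),
    (r"\bAgenticSystem\b", "Agent"),
]

def codemod(root: str) -> None:
    # Rewrite every Python file under `root` in place.
    for path in Path(root).rglob("*.py"):
        text = path.read_text()
        new_text = text
        for pattern, replacement in RENAMES:
            new_text = re.sub(pattern, replacement, new_text)
        if new_text != text:
            path.write_text(new_text)
            print(f"rewrote {path}")

if __name__ == "__main__":
    codemod(".")
```

After the rename, imports follow the new layout, e.g. from llama_stack.apis.inference import ..., matching the module path used in the diff below.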
commit 76b354a081
parent 2cf731faea
128 changed files with 381 additions and 376 deletions
@@ -482,7 +482,7 @@ Once the server is setup, we can test it with a client to see the example output

 cd /path/to/llama-stack
 conda activate <env> # any environment containing the llama-toolchain pip package will work

-python -m llama_stack.inference.client localhost 5000
+python -m llama_stack.apis.inference.client localhost 5000
 ```

 This will run the chat completion client and query the distribution’s /inference/chat_completion API.
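As a rough sketch of what that client does under the hood, a direct request to the distribution's /inference/chat_completion endpoint might look like the following. The payload shape and response handling here are assumptions; the module invocation shown in the diff is the supported client:

```python
# Illustrative sketch, not the project's official client.
# The request body below is an assumed shape; prefer
# `python -m llama_stack.apis.inference.client` as in the docs.
import json
import urllib.request

def chat_completion(host: str, port: int, message: str) -> dict:
    # POST a single user message to the chat completion endpoint.
    payload = {"messages": [{"role": "user", "content": message}]}
    req = urllib.request.Request(
        f"http://{host}:{port}/inference/chat_completion",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(chat_completion("localhost", 5000, "Hello!"))
```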