Fix TGI adapter (#796)
# What does this PR do?

- Fix the TGI adapter.

## Test Plan

<img width="851" alt="image" src="https://github.com/user-attachments/assets/0084cbc6-6713-4079-b87b-0befd9aca0b0" />

- Most inference tests pass.
- The agent test fails due to model outputs.

## Sources

Please link relevant resources if necessary.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
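To reproduce the kind of inference check the Test Plan describes, a minimal smoke test against a running TGI server might look like the sketch below. The `/generate` route and payload shape come from TGI's public HTTP API; the `TGI_URL` environment variable mirrors the adapter config's default and is an assumption, not something specified in this PR.

```python
import os

import requests

# Hypothetical smoke test: confirm the TGI endpoint answers before running
# the llama-stack inference tests. TGI_URL mirrors the "${env.TGI_URL}"
# default in the adapter config; the fallback URL is illustrative only.
tgi_url = os.environ.get("TGI_URL", "http://localhost:8080")
resp = requests.post(
    f"{tgi_url}/generate",
    json={"inputs": "Hello", "parameters": {"max_new_tokens": 16}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```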
parent 73215460ba
commit 0fefd4390a
3 changed files with 8 additions and 6 deletions
```diff
@@ -15,10 +15,6 @@ class TGIImplConfig(BaseModel):
     url: str = Field(
         description="The URL for the TGI serving endpoint",
     )
-    api_token: Optional[SecretStr] = Field(
-        default=None,
-        description="A bearer token if your TGI endpoint is protected.",
-    )
 
     @classmethod
     def sample_run_config(cls, url: str = "${env.TGI_URL}", **kwargs):
```
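For context, the class this hunk leaves behind would look roughly like the sketch below. Only the field definitions are taken from the diff; the diff truncates before the body of `sample_run_config`, so the return value shown here is an assumption based on the `${env.TGI_URL}` default.

```python
from pydantic import BaseModel, Field

class TGIImplConfig(BaseModel):
    # Post-change shape: only the endpoint URL remains; the optional
    # api_token field is removed by this commit.
    url: str = Field(
        description="The URL for the TGI serving endpoint",
    )

    @classmethod
    def sample_run_config(cls, url: str = "${env.TGI_URL}", **kwargs):
        # Assumption: the diff cuts off before the body; returning a
        # template dict keyed by "url" is the usual pattern here.
        return {"url": url}
```

Since `url` has no default, `TGIImplConfig(url="http://localhost:8080")` would then be the minimal valid configuration.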