* Use huggingface_hub inference client for TGI inference
* Update the default value for TGI URL
* Use InferenceClient.text_generation for TGI inference
* Apply post-review fixes and split the TGI adapter into local and Inference Endpoints variants
* Update CLI reference and add typing
* Rename TGI Adapter class
* Use HfApi to get the namespace when not provided in the HF endpoint name
* Remove unnecessary method argument
* Improve TGI adapter initialization condition
* Move helper into impl file + fix merge conflicts
* TGI adapter and some refactoring of other inference adapters
* Use the lower-level `generate_stream()` method for correct tool calling
---------
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>