Mandatory Hugging Face token to use HuggingFaceEndpoint #20740
-
Example Code

```python
from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint

# with a huggingface/text-generation-inference server running locally
model = HuggingFaceEndpoint(
    endpoint_url=INFERENCE_SERVER_URL,
    huggingfacehub_api_token=HF_TOKEN,  # now mandatory
)
print(model.invoke("test"))

# previously:
from langchain.llms import HuggingFaceTextGenInference

model = HuggingFaceTextGenInference(
    endpoint_url=INFERENCE_SERVER_URL,
)
print(model("test"))
```

Description

I struggle to understand why huggingfacehub_api_token is now mandatory when creating an instance of the HuggingFaceEndpoint class, even when the LLM runs in a local Hugging Face TGI deployment. Previously I used the now-deprecated HuggingFaceTextGenInference and was never required to provide my huggingfacehub_api_token. I would like to return to the previous behavior, as I see no need to validate my API token for a local deployment.

System Info

langchain==0.1.16
-
The requirement for a huggingfacehub_api_token comes from HuggingFaceEndpoint's environment validation, which authenticates against the Hugging Face Hub as soon as the instance is created. For local deployments where authentication might not be necessary, you can bypass the failure only by supplying a token the Hub accepts (for example via the HUGGINGFACEHUB_API_TOKEN environment variable), because the check runs regardless of where the endpoint is hosted. The transition from the deprecated HuggingFaceTextGenInference class to HuggingFaceEndpoint introduced this stricter validation. In summary, while the token requirement is redundant for a local text-generation-inference server, the current constructor enforces it unconditionally.
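A minimal sketch of that behavior, assuming langchain==0.1.16 and a local TGI server; the URL below is a placeholder:

```python
import os
from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint

INFERENCE_SERVER_URL = "http://localhost:8080"  # placeholder local TGI server

# With no token passed and HUGGINGFACEHUB_API_TOKEN unset, environment
# validation raises before any request reaches the local server.
try:
    model = HuggingFaceEndpoint(endpoint_url=INFERENCE_SERVER_URL)
except ValueError as err:
    print(f"construction failed: {err}")

# With a token (read here from the environment), validation passes, even
# though the local server itself never checks the credential.
model = HuggingFaceEndpoint(
    endpoint_url=INFERENCE_SERVER_URL,
    huggingfacehub_api_token=os.environ["HUGGINGFACEHUB_API_TOKEN"],
)
print(model.invoke("test"))
```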
-
I am also facing the same issue. I have an LLM hosted with Hugging Face's Text Generation Inference. I am not using Hugging Face's hosted services, so why should I pass my huggingfacehub_api_token? Omitting the token also produces an error at construction time.
Please remove the mandatory token requirement or provide a bypass mechanism. Thanks
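To illustrate the point, here is a minimal sketch that queries a local TGI instance directly with huggingface_hub's InferenceClient, which needs no credential; the URL is a placeholder:

```python
from huggingface_hub import InferenceClient

# A locally hosted text-generation-inference server answers without any
# Hugging Face credential; only the LangChain wrapper demands a token.
client = InferenceClient(model="http://localhost:8080")  # placeholder URL
print(client.text_generation("test", max_new_tokens=20))
```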
-
Is this still a thing?
I believe this is solved by using the langchain_huggingface package.
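For reference, a minimal sketch of that route, assuming the langchain-huggingface partner package is installed (pip install langchain-huggingface) and a local TGI server; the URL is a placeholder:

```python
from langchain_huggingface import HuggingFaceEndpoint

# In the partner package, huggingfacehub_api_token is optional, so a local
# TGI deployment can be used without any Hugging Face credential.
model = HuggingFaceEndpoint(endpoint_url="http://localhost:8080")
print(model.invoke("test"))
```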