
Authorization header is correct, but the token seems invalid #2507

Open
DevPatel1412 opened this issue Sep 4, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@DevPatel1412

Describe the bug

I was working with the same HF_TOKEN (with write permission) and the Mistral Nemo 12B Instruct model. The model had been working well for the last few days without any issue, and today this error suddenly appeared.
The error appears to be persistent: I have refreshed the token, tried other models, and checked the Hugging Face Inference API server status as well, but the issue remains the same.

Issue

BadRequestError
huggingface_hub.utils._errors.BadRequestError: (Request ID: Mq1mDWKbogI0AleJS0HJM)

Bad request:
Authorization header is correct, but the token seems invalid

This is my app.py file, and I have not modified or added any other files:

import os

from dotenv import load_dotenv
from huggingface_hub import InferenceClient

# Load environment variables from .env file
load_dotenv()

# Authenticate with Hugging Face
HFT = os.getenv('HF_TOKEN')
client = InferenceClient(model="mistralai/Mistral-Nemo-Instruct-2407", token=HFT)

# system_role and user_prompt are chat messages defined elsewhere in the app,
# e.g. {"role": "system", "content": "..."} and {"role": "user", "content": "..."}

# Stream the completion and accumulate the generated text
response = ""
for message in client.chat_completion(
    messages=[system_role, user_prompt],
    max_tokens=3000,
    stream=True,
    temperature=0.35,
):
    response += message.choices[0].delta.content
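
Side note for anyone debugging the same error: the token can be sanity-checked on its own, independently of the chat call. Below is a minimal sketch (not part of the original app.py) using HfApi.whoami, which only succeeds if the Hub accepts the token:

import os

from dotenv import load_dotenv
from huggingface_hub import HfApi

load_dotenv()

try:
    # whoami() raises an HTTP error if the Hub rejects the token
    user = HfApi().whoami(token=os.getenv("HF_TOKEN"))
    print(f"Token accepted, authenticated as: {user['name']}")
except Exception as err:
    print(f"Token rejected: {err}")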

Reproduction

No response

Logs

Traceback (most recent call last):
  File "C:\Users\Admin\Documents\app_30082024\envo\lib\site-packages\huggingface_hub\utils\_errors.py", line 304, in hf_raise_for_status
    response.raise_for_status()
  File "C:\Users\Admin\Documents\app_30082024\envo\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api-inference.huggingface.co/models/mistralai/Mistral-Nemo-Instruct-2407/v1/chat/completions

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\Admin\Documents\app_30082024\envo\lib\site-packages\flask\app.py", line 1498, in __call__
    return self.wsgi_app(environ, start_response)
  File "C:\Users\Admin\Documents\app_30082024\envo\lib\site-packages\flask\app.py", line 1476, in wsgi_app
    response = self.handle_exception(e)
  File "C:\Users\Admin\Documents\app_30082024\envo\lib\site-packages\flask\app.py", line 1473, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Users\Admin\Documents\app_30082024\envo\lib\site-packages\flask\app.py", line 882, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Users\Admin\Documents\_app_30082024\envo\lib\site-packages\flask\app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Users\Admin\Documents\app_30082024\envo\lib\site-packages\flask\app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "C:\Users\Admin\Documents\app_30082024\app5.py", line 78, in process_file 
    per_data = Model_PersonalDetails_Output(resume, client)
  File "C:\Users\Admin\Documents\app_30082024\utility\llm_generate.py", line 127, in Model_PersonalDetails_Output
    for message in client.chat_completion(
  File "C:\Users\Admin\Documents\app_30082024\envo\lib\site-packages\huggingface_hub\inference\_client.py", line 837, in chat_completion
    data = self.post(
  File "C:\Users\Admin\Documents\app_30082024\envo\lib\site-packages\huggingface_hub\inference\_client.py", line 304, in post
    hf_raise_for_status(response)
  File "C:\Users\Admin\Documents\app_30082024\envo\lib\site-packages\huggingface_hub\utils\_errors.py", line 358, in hf_raise_for_status
    raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError:  (Request ID: Mq1mDWKbogI0AleJS0HJM)

Bad request:
Authorization header is correct, but the token seems invalid

System info

- huggingface_hub version: 0.24.6
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.10
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: C:\Users\Admin\.cache\huggingface\token
- Has saved token ?: True
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.4.0
- Jinja2: 3.1.4
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: 10.4.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.26.4
- pydantic: N/A
- aiohttp: N/A
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: C:\Users\Admin\.cache\huggingface\hub
- HF_ASSETS_CACHE: C:\Users\Admin\.cache\huggingface\assets
- HF_TOKEN_PATH: C:\Users\Admin\.cache\huggingface\token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
DevPatel1412 added the bug (Something isn't working) label on Sep 4, 2024
@Wauplin (Contributor) commented Sep 4, 2024

Hi @DevPatel1412, this is definitely a user token issue. Could you try creating a new fine-grained token at https://huggingface.co/settings/tokens and checking "Make calls to the serverless Inference API"?
[Screenshot: fine-grained token settings showing the "Make calls to the serverless Inference API" permission]

Then use it in your code by passing token="hf_***". If that doesn't work, please let us know.
Once that works, it's preferable to use the HF_TOKEN environment variable or huggingface-cli login (and then there is no need for HFT = os.getenv('HF_TOKEN')).

Note: a "write" token should work as well, but it is not recommended security-wise.
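
A minimal sketch of that recommended setup (assuming HF_TOKEN is exported in the environment, or a token was saved with huggingface-cli login; the client then picks the token up automatically):

from huggingface_hub import InferenceClient

# No explicit token argument: the client falls back to the HF_TOKEN
# environment variable or the token saved by `huggingface-cli login`.
client = InferenceClient(model="mistralai/Mistral-Nemo-Instruct-2407")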

@DevPatel1412 (Author)

Hi @Wauplin,
I have tried a fine-grained token as well, and also a token from my friend's account, which shows the same error. This is the second time I have hit this issue; it appears to be temporary, but it affects the whole day. When I check the service the next day, it works as if nothing happened at all.
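
Since the failure is intermittent, one pragmatic workaround (a sketch, not an official fix; the retry count and delay are arbitrary choices) is to retry the call with backoff when the Hub rejects the token:

import time

# BadRequestError is the exception class shown in the traceback above
from huggingface_hub.utils import BadRequestError


def chat_completion_with_retry(client, messages, retries=3, delay=5.0):
    """Retry the chat call a few times when the Hub intermittently rejects the token."""
    for attempt in range(retries):
        try:
            return client.chat_completion(messages=messages, max_tokens=3000)
        except BadRequestError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay * (attempt + 1))  # linear backoff between attempts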

@DevPatel1412 (Author)

Hi @Wauplin,
Today I changed nothing in the code and used the same token, and the model is generating the output properly. But I don't understand why the token sometimes stops working and throws the above error; it would be helpful to know more about this.
