On the serverless API, when we try to reach Llama-3.2-3B in the test, the request fails with a 401 if the token is not set up to allow inference calls.
I would suggest adding this prerequisite about the token to the comment just above:
## You need a token from https://hf.co/settings/tokens **with the permission "Make calls to inference providers" enabled.** If you run this on Google Colab, you can set it up in the "settings" tab under "secrets". Make sure to call it "HF_TOKEN"
os.environ["HF_TOKEN"]="hf_xxxxxxxxxxxxxx"
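As a minimal sketch of the idea (the `check_hf_token` helper and the `meta-llama/Llama-3.2-3B-Instruct` repo id are my assumptions, not part of the course notebook), checking the token up front turns a bare 401 into a clear, actionable error:

```python
import os

def check_hf_token() -> str:
    """Return the HF token from the environment, or raise a clear error.

    A token without the "Make calls to inference providers" permission
    will still get a 401 from the serverless API, but this at least
    catches the missing-token case early.
    """
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set. Create a token at https://hf.co/settings/tokens "
            'with "Make calls to inference providers" enabled.'
        )
    return token

# Hypothetical usage (needs a valid token, otherwise the API returns 401):
# from huggingface_hub import InferenceClient
# client = InferenceClient(model="meta-llama/Llama-3.2-3B-Instruct",
#                          token=check_hf_token())
```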