**Code:** video no. 88, time stamp: 13:27:57

**My error:** I keep getting errors in the code below.

**My code:**

```python
X_train, X_test = X_train.to(device), X_test.to(device)
y_train, y_test = y_train.to(device), y_test.to(device)

epochs = 100

# training loop
for epoch in range(epochs):
    # training
    model_0.train()

    # forward pass
    y_logits = model_0(X_train)
    y_preds = torch.softmax(y_logits, dim=1)
    y_preds = y_preds.argmax(dim=1).type(torch.float)  # <<----- WHY DOES THE grad_fn DISAPPEAR
    # print(y_preds.grad_fn)  # output: None

    # calculate loss
    loss = loss_fn(y_preds, y_train)

    # calculate accuracy
    # acc = accuracy_fn(y_true=y_train, y_pred=y_preds)  # <<-- I keep getting an error here, but it's not that big of a deal so I just commented it out

    # zero grad
    optimizer.zero_grad()

    # back propagation
    loss.backward()

    # optimizer step
    optimizer.step()

    # testing phase
    model_0.eval()
    with torch.inference_mode():
        test_logits = model_0(X_test.type(torch.float))
        test_preds = torch.softmax(test_logits, dim=1).argmax(dim=1).type(torch.float)  # <<--- grad_fn doesn't matter here since it's just testing
        test_acc = accuracy_fn(y_pred=test_preds, y_true=y_test)
        test_loss = loss_fn(test_preds, y_test.type(torch.float))

    # print results
    if epoch % 10 == 0:
        print(f"Epoch: {epoch} | train loss: {loss:.4f} | train accuracy: N/A % | test loss: {test_loss:.4f} | test accuracy: {test_acc:.2f}%")
```

**Stuff I tried:**

1. I tried using …
---
Some theoretical explanation … **Practical** …
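A sketch of the theory: `argmax` is piecewise constant. Nudging the logits slightly almost never changes which index is largest, so wherever its derivative exists it is zero:

$$
\hat{y}(z) = \operatorname*{arg\,max}_{k} z_k
\qquad\Longrightarrow\qquad
\frac{\partial \hat{y}}{\partial z_k} = 0 \quad \text{almost everywhere}
$$

Since a zero (or undefined) gradient carries no learning signal, PyTorch doesn't record the op in the autograd graph, and the result has `grad_fn = None`. Practically, anything computed from the `argmax` output is cut off from the model's parameters, so `backward()` has nothing to propagate.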
---
Hi @mo3az-14,

What happens if you use `some_tensor.argmax()` instead of `torch.argmax()`, have you tried that?

For example:

```python
for epoch in range(epochs):
    ### Training
    model_4.train()

    # 1. Forward pass
    y_logits = model_4(X_blob_train)  # model outputs raw logits
    y_pred = torch.softmax(y_logits, dim=1).argmax(dim=1)  # go from logits -> prediction probabilities -> prediction labels
    # print(y_logits)

    # 2. Calculate loss and accuracy
    loss = loss_fn(y_logits, y_blob_train)
    acc = accuracy_fn(y_true=y_blob_train,
                      y_pred=y_pred)
```

I just ran through all of the code in notebook 02 and it functions as expected: https://github.com/mrdbourke/pytorch-deep-learning/blob/main/02_pytorch_classification.ipynb

Update (from @nazarPuriy): calculate the loss on the logits, not on the prediction labels.

Good:

```python
# Calculate loss on logits
loss = loss_fn(y_logits, y_train)
```

Bad:

```python
# Calculate loss on preds
loss = loss_fn(y_preds, y_train)
```
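A quick standalone check of the good/bad distinction (random tensors as stand-ins for the notebook's data):

```python
import torch
from torch import nn

loss_fn = nn.CrossEntropyLoss()
y_logits = torch.randn(8, 3, requires_grad=True)  # pretend model output: 8 samples, 3 classes
y_train = torch.randint(0, 3, (8,))               # integer class labels

# Good: CrossEntropyLoss takes raw logits; the loss stays connected to the graph
loss = loss_fn(y_logits, y_train)
print(loss.grad_fn)  # <NllLossBackward0> -- backward() works
loss.backward()

# Bad: argmax'd predictions are detached integers cast to float
y_preds = torch.softmax(y_logits, dim=1).argmax(dim=1).type(torch.float)
print(y_preds.grad_fn)  # None -- even if a loss accepted these, backward()
                        # could never reach the model's parameters
```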