In the 66th cell of the 00_pytorch_fundamentals.ipynb notebook (and in the PyTorch for Deep Learning & Machine Learning – Full Course video) it's said that changing the original array does not influence the tensor created earlier from that array with from_numpy(). The notebook's code:
# Change the array, keep the tensor
array = array + 1
array, tensor
which is said to result in:
(array([2., 3., 4., 5., 6., 7., 8.]),
tensor([1., 2., 3., 4., 5., 6., 7.], dtype=torch.float64))
However, this can lead to a misunderstanding, since an array and a tensor created with .from_numpy() actually share memory. Here's the description from the PyTorch documentation: The returned tensor and ndarray share the same memory. Modifications to the tensor will be reflected in the ndarray and vice versa. The returned tensor is not resizable.
The sharing is not visible in the code above because `array = array + 1` is a reassignment, not an in-place modification: it creates a new array and rebinds the name `array` to it, while the tensor keeps referencing the original buffer. An in-place operation such as `array += 1` would modify the shared memory, and the change would show up in the tensor as well.
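To make the distinction concrete, here is a small sketch contrasting the two cases (in-place modification versus reassignment), using the same values as the notebook:

```python
import numpy as np
import torch

array = np.arange(1.0, 8.0)          # [1., 2., ..., 7.], dtype float64
tensor = torch.from_numpy(array)     # shares memory with `array`

# In-place modification: writes into the shared buffer,
# so the change is visible through the tensor too.
array += 1
print(tensor)  # tensor([2., 3., 4., 5., 6., 7., 8.], dtype=torch.float64)

# Reassignment: `array + 1` allocates a NEW array and rebinds the name;
# the tensor still points at the old buffer and is unchanged.
array = array + 1
print(array)   # [3. 4. 5. 6. 7. 8. 9.]
print(tensor)  # still tensor([2., 3., ..., 8.], dtype=torch.float64)
```

The same logic explains the notebook's output: after `array = array + 1`, the array and the tensor simply no longer refer to the same memory.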