TensorAccessor on GPU tensor leaks memory #1517

@brianberns

Description

Describe the bug

Creating a TensorAccessor on a GPU tensor leaks a CPU tensor: a CPU copy remains undisposed in the enclosing dispose scope even after the accessor itself has been disposed.

To Reproduce

let test () =
    use scope = torch.NewDisposeScope()
    do
        use tensor = torch.tensor([|1.0f; 2.0f; 3.0f|], device=torch.CUDA)
        use accessor = tensor.data<float32>()
        printfn "%A" <| accessor.ToArray()
    printfn "Disposables: %A" scope.DisposablesCount
    for disposable in scope.DisposablesView do
        printfn "%A" disposable

Expected behavior

Output should be:

[|1.0f; 2.0f; 3.0f|]
Disposables: 0

Actual output:

[|1.0f; 2.0f; 3.0f|]
Disposables: 1
[3], type = Float32, device = cpu

Note that setting device=torch.CPU when creating the original tensor does result in the expected output of 0 disposables.
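A possible workaround, assuming the leaked CPU tensor is an implicit copy that `data<float32>()` makes internally for a CUDA tensor: move the tensor to the CPU explicitly with `Tensor.cpu()` before creating the accessor, so the intermediate CPU tensor is itself registered with the dispose scope and cleaned up. This is a sketch against the same TorchSharp API used in the repro, not a confirmed fix:

let workaround () =
    use scope = torch.NewDisposeScope()
    do
        use tensor = torch.tensor([|1.0f; 2.0f; 3.0f|], device=torch.CUDA)
        // Explicit CPU copy; `use` ensures it is disposed with the scope,
        // instead of relying on data<T>() to manage any internal copy.
        use cpuTensor = tensor.cpu()
        use accessor = cpuTensor.data<float32>()
        printfn "%A" <| accessor.ToArray()
    // Expected with this pattern: no undisposed tensors remain.
    printfn "Disposables: %A" scope.DisposablesCount

This sidesteps the leak rather than fixing it; the underlying bug is that the CPU tensor created inside `data<T>()` for a GPU tensor is apparently not tied to the accessor's lifetime.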

Please complete the following information:

  • OS: Windows
  • Package Type: torchsharp-cuda-windows
  • Version: 0.105.2

Metadata

Labels: bug (Something isn't working)
