I've been benchmarking a few matrix loaders and noticed a performance degradation in PIGO when the matrix being loaded is a large fraction of the available RAM.
See https://github.com/alugowski/sparse-matrix-io-comparison
The machine has 16GiB of RAM (it's a laptop). A 1GiB file shows excellent read and write performance from PIGO, but a 10GiB file is about an order of magnitude slower on both. While experimenting I noticed that the drop is gradual and depends on the file's fraction of available memory: an 8GiB file shows less degradation than a 10GiB one, but more than a 6GiB one.
(The generated MatrixMarket files and benchmark code are set up so that the file size is roughly equal to the in-memory size of the matrix.)
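For what it's worth, the effect can be probed without PIGO at all by timing how long it takes to fault in every page of a file at different sizes. This is just a plain-Python sketch (not part of the linked benchmark, and not using PIGO's API), touching one byte per page of an mmap'd file:

```python
import mmap
import os
import time

def time_mmap_read(path):
    """Map the file and touch one byte per page, returning elapsed seconds.

    A minimal stand-in for the kind of read the benchmark measures; absolute
    numbers will differ from PIGO's, but the trend as the file size approaches
    total RAM should be visible.
    """
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            total = 0
            page = mmap.PAGESIZE
            for off in range(0, size, page):
                total += m[off]  # touch the page to fault it in
    return time.perf_counter() - start
```

Running this on, say, 1GiB / 6GiB / 10GiB files (after dropping the page cache between runs) should show whether the slowdown is the page cache thrashing rather than anything PIGO-specific.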
I noticed the PIGO paper used a 1TB machine to load at most ~30GiB files, so this may or may not be important.