System information
| Type | Version/Name |
|---|---|
| Distribution Name | macOS |
| Distribution Version | 10.13.4 |
| Kernel | root:xnu-4570.51.1~1/RELEASE_X86_64 x86_64 |
| Architecture | Intel |
| ZFS Version | zfs-macOS-2.1.0-1 |
| SPL Version | zfs-kmod-2.1.0-1 |
| RAM | 64 GB DDR4-2666 |
| CPU | Intel Core i7-8700 |
Summary of Problem
Poor read performance on encrypted ZFS dataset.
I have a 12-disk raidz2 vdev in a zpool. The disks are Dell-branded ST4000NM0033 drives with firmware GA6E.
I created the pool and datasets with the following commands:
```
sudo zpool create -f -o ashift=12 \
  -O casesensitivity=insensitive \
  -O normalization=formD \
  -O compression=lz4 \
  -O atime=off \
  -O recordsize=256k \
  ZfsMediaPool raidz2 \
  /var/run/disk/by-path/PCI0@0-SAT0@17-PRT5@5-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-SAT0@17-PRT4@4-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT31@1f-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-SAT0@17-PRT3@3-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-SAT0@17-PRT2@2-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-SAT0@17-PRT1@1-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-SAT0@17-PRT0@0-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT2@2-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT3@3-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT28@1c-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT4@4-PMP@0-@0:0 \
  /var/run/disk/by-path/PCI0@0-RP21@1B,4-PXSX@0-PRT29@1d-PMP@0-@0:0

zpool add ZfsMediaPool log /dev/disk5s3
zpool add ZfsMediaPool cache /dev/disk5s4
zpool set feature@encryption=enabled ZfsMediaPool
zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase ZfsMediaPool/bryan
zfs set com.apple.mimic_hfs=hfs ZfsMediaPool/bryan
```
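For completeness, the resulting properties can be double-checked with something like the following (a verification sketch, not part of the original setup steps):

```
# confirm encryption and related properties on the encrypted dataset
zfs get encryption,keyformat,keylocation,compression,recordsize ZfsMediaPool/bryan
# confirm the pool feature is active
zpool get feature@encryption ZfsMediaPool
```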
Reading and writing on the non-encrypted dataset works as expected, with speeds exceeding 500 MB/s.
When reading from the encrypted dataset, performance drops sharply, and any random I/O brings the system to a crawl. Writing to the encrypted dataset pushes CPU load to around 600%. A program such as Thunderbird is almost unusable on the encrypted ZFS dataset.
I've also tested an encrypted APFS container on top of a ZFS zvol, and it performs much better than native ZFS encryption.
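As a rough illustration of how the CPU load can be observed while a write to the encrypted dataset is in progress (this exact invocation is illustrative, not a transcript of what I ran):

```
# in one terminal: write a large incompressible file to the encrypted dataset
# (/random-10g is the test file described in the reproduction steps below)
dd if=/random-10g of=/Users/bryan/cpu-test bs=1m
# in a second terminal: watch per-process CPU usage, sorted by load
top -o cpu
```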
Describe how to reproduce the problem
I created a random 10 GB file by concatenating 10 copies of a 1 GB chunk read from /dev/urandom.
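Roughly like this (a reconstruction for illustration; the intermediate /random-1g filename is hypothetical):

```
# build a 1 GB random chunk, then concatenate 10 copies into /random-10g
dd if=/dev/urandom of=/random-1g bs=1m count=1024
cat /random-1g /random-1g /random-1g /random-1g /random-1g \
    /random-1g /random-1g /random-1g /random-1g /random-1g > /random-10g
```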
```
# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
ZfsMediaPool                  8.18T  24.9T  7.07T  /Volumes/ZfsMediaPool
ZfsMediaPool/bryan            1.08T  24.9T  1.08T  /Users/bryan
ZfsMediaPool/mailcorestorage  31.6G  24.9T  31.6G  -
```
The encrypted dataset ZfsMediaPool/bryan is mounted at /Users/bryan; /Volumes/ZfsMediaPool is the same pool's root dataset, not encrypted.
The following tests copy the 10 GB random file while logged in as root, with no other processes accessing the pool:
```
# write from the root volume (an NVMe disk) to the encrypted dataset
dd if=/random-10g of=/Users/bryan/random-10g bs=1m
10737418240 bytes transferred in 71.818303 secs (149508103 bytes/sec)

# read from the encrypted dataset
dd if=/Users/bryan/random-10g bs=1m of=/dev/null
10737418240 bytes transferred in 179.890006 secs (59688798 bytes/sec)

# read from the non-encrypted dataset
dd if=/Volumes/ZfsMediaPool/random-10g bs=1m of=/dev/null
10737418240 bytes transferred in 18.207343 secs (589730098 bytes/sec)
```
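To see where the time goes, per-vdev throughput can be watched while the dd tests run; something like the following (illustrative, not part of the captured output):

```
# print pool-wide and per-vdev I/O statistics every 5 seconds
zpool iostat -v ZfsMediaPool 5
```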
Directory traversal with du over the Thunderbird profile shows the same pattern:

```
# First two runs are on the encrypted dataset: the first is a cold read,
# the second is the same command again, served from cache
$ time du -hc /Users/bryan/Library/Thunderbird
real    0m3.202s
user    0m0.006s
sys     0m0.153s

$ time du -hc /Users/bryan/Library/Thunderbird
real    0m0.024s
user    0m0.003s
sys     0m0.020s

# This one is from the non-crypto disk and not cached:
$ time du -hc /Users/bryan/Library/Thunderbird
real    0m0.552s
user    0m0.005s
sys     0m0.071s
```
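To make sure the "cold" runs really are cold, the pool can be exported and re-imported between timings, which evicts its cached data; a sketch, not something from the captured runs:

```
# drop cached data for the pool between timing runs
zpool export ZfsMediaPool
zpool import ZfsMediaPool
zfs load-key ZfsMediaPool/bryan   # re-supply the passphrase for the encrypted dataset
zfs mount -a
```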
Next, I created an encrypted APFS volume on top of a ZFS zvol:
```
# zfs create -s -V 50g ZfsMediaPool/mailcorestorage
# ls -al /var/run/zfs/zvol/dsk/ZfsMediaPool/mailcorestorage
lrwxr-xr-x  1 root  daemon  11 Oct  2 02:27 /var/run/zfs/zvol/dsk/ZfsMediaPool/mailcorestorage -> /dev/disk16
# diskutil eraseDisk JHFS+ dummy GPT /dev/disk16
```
(I was lazy and used Disk Utility to create the encrypted APFS volume on that disk.)
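The command-line equivalent of what Disk Utility did would be roughly the following; the container reference (disk17) is hypothetical and the flags are from memory, so treat this as a sketch only:

```
# convert the zvol's disk to an APFS container, then add a passphrase-encrypted volume
diskutil apfs createContainer /dev/disk16
diskutil apfs addVolume disk17 APFS encryptedMail -passprompt   # disk17: hypothetical container ref
```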
Copying the same 10 GB file from the non-encrypted ZFS dataset to the encrypted APFS volume:
```
# dd if=/Volumes/ZfsMediaPool/random-10g of=/Volumes/encryptedMail/random-10g bs=1m
10240+0 records in
10240+0 records out
10737418240 bytes transferred in 83.464137 secs (128647089 bytes/sec)
```
Reading that file back from the APFS volume after remounting it:
```
dd if=/Users/bryan/Library/Thunderbird/random-10g of=/dev/null bs=1m
10240+0 records in
10240+0 records out
10737418240 bytes transferred in 34.071565 secs (315143090 bytes/sec)
```
Include any warning/errors/backtraces from the system logs
I've attached three spindumps: one taken during a read from the encrypted dataset, one during a large write to it, and one while unison was running.
zfs-read-from-crypto-dataset-Spindump.txt
zfs-while-unison-running-Spindump.txt
zfs-write-to-crypto-big-dataset-Spindump.txt
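For reference, these were captured with the stock macOS spindump tool, run while the corresponding workload was active; roughly as follows (exact options not recorded, see man spindump for duration and output control):

```
# sample all processes system-wide while the dd test or unison is running
sudo spindump
```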
zpool layout:
```
ComicBookGuy:~ root# zpool list -v
NAME                                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
ZfsMediaPool                                    43.7T  10.7T  32.9T        -         -    0%    24%  1.00x  ONLINE  -
  raidz2                                        43.7T  10.7T  32.9T        -         -    0%  24.6%      -  ONLINE
    PCI0@0-SAT0@17-PRT5@5-PMP@0-@0:0                -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-SAT0@17-PRT4@4-PMP@0-@0:0                -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-RP21@1B,4-PXSX@0-PRT31@1f-PMP@0-@0:0     -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-SAT0@17-PRT3@3-PMP@0-@0:0                -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-SAT0@17-PRT2@2-PMP@0-@0:0                -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-SAT0@17-PRT1@1-PMP@0-@0:0                -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-SAT0@17-PRT0@0-PMP@0-@0:0                -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-RP21@1B,4-PXSX@0-PRT2@2-PMP@0-@0:0       -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-RP21@1B,4-PXSX@0-PRT3@3-PMP@0-@0:0       -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-RP21@1B,4-PXSX@0-PRT28@1c-PMP@0-@0:0     -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-RP21@1B,4-PXSX@0-PRT4@4-PMP@0-@0:0       -      -      -        -         -     -      -      -  ONLINE
    PCI0@0-RP21@1B,4-PXSX@0-PRT29@1d-PMP@0-@0:0     -      -      -        -         -     -      -      -  ONLINE
  logs                                              -      -      -        -         -     -      -      -  -
    PCI0@0-RP09@1D-PXSX@0-IONVMeController-IONVMeBlockStorageDevice@1-@1:3  15.5G  1.19M  15.5G  -  -  0%  0.00%  -  ONLINE
  cache                                             -      -      -        -         -     -      -      -  -
    PCI0@0-RP09@1D-PXSX@0-IONVMeController-IONVMeBlockStorageDevice@1-@1:4   128G   108G  20.1G  -  -  0%  84.3%  -  ONLINE
```