Will Adding a SATA SSD Slow Down My NVMe ZFS Pool?

The obvious answer to the question in the title is yes, but by how much?

I recently built a new server with an NVMe SSD as the primary VM storage. For redundancy, I wanted to mirror it with another drive, but the motherboard only had one NVMe slot. My only option was to use a SATA SSD. But would adding the slower SATA drive drag down the performance of my NVMe ZFS pool?

Test Setup

  • Motherboard: Supermicro A2SDi-H-TF
  • NVMe SSD: Crucial P3 Plus 2TB
  • SATA SSD: Crucial MX500 2TB

The NVMe drive was connected via PCIe 3.0 x2. The SATA drive plugged into one of the SATA3 ports.
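
Since PCIe 3.0 x2 tops out at roughly 2 GB/sec in theory, it's worth confirming the negotiated link before blaming the drive for anything. On Linux, lspci reports this; a quick filter over its verbose output (the grep pattern here is just one way to narrow it down) looks like:

> # lspci -vv | grep -E 'Non-Volatile|LnkSta:'

The LnkSta line shows the negotiated speed and width for each PCIe device.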

Baseline Read Performance

First, I tested the sequential read speed of the NVMe SSD on its own using hdparm. It came out at about 600 MB/sec on average.

> # hdparm -tv /dev/disk/by-id/nvme-CT2000P3PSSD8_xxx

/dev/disk/by-id/nvme-CT2000P3PSSD8_xxx:
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 1907729/64/32, sectors = 3907029168, start = 0
 Timing buffered disk reads: 1822 MB in  3.00 seconds = 607.21 MB/sec

Next, I performed the same test on the SATA SSD:

> # hdparm -tv /dev/disk/by-id/ata-CT2000MX500SSD1_xxx

/dev/disk/by-id/ata-CT2000MX500SSD1_xxx:
 multcount     =  0 (off)
 IO_support    =  1 (32-bit)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 243201/255/63, sectors = 3907029168, start = 0
 Timing buffered disk reads: 1486 MB in  3.00 seconds = 494.76 MB/sec

As expected, the SATA drive's read speed was around 80% of the NVMe drive's.

Baseline Write Performance

I also checked the sequential write speed of the single-drive ZFS pool on the NVMe SSD:

> # dd if=/dev/zero of=/mnt/mydrive/test oflag=direct bs=128k count=320k
327680+0 records in
327680+0 records out
42949672960 bytes (43 GB, 40 GiB) copied, 33.0295 s, 1.3 GB/s
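
dd with /dev/zero is a fairly blunt instrument for this kind of test. A more thorough sequential-write benchmark could be run with fio; something along these lines, where the job name and test file path are just placeholders:

> # fio --name=seqwrite --filename=/mnt/mydrive/fio-test --rw=write --bs=128k --size=10G --ioengine=libaio --direct=1 --group_reporting

I stuck with dd for the rest of this post so the before/after numbers are directly comparable.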

Mirroring the Drives

I added the SATA SSD to the ZFS pool:

> # zpool attach -o ashift=12 host1-nvme1 nvme-CT2000P3PSSD8_xxx ata-CT2000MX500SSD1_xxx

> # zpool status host1-nvme1

  pool: host1-nvme1
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Jun 30 12:04:38 2023
        836G scanned at 2.09G/s, 94.2G issued at 241M/s, 836G total
        95.5G resilvered, 11.26% done, 00:52:31 to go
config:

        NAME                                  STATE     READ WRITE CKSUM
        host1-nvme1                           ONLINE       0     0     0
          mirror-0                            ONLINE       0     0     0
            nvme-CT2000P3PSSD8_2239E66E634E   ONLINE       0     0     0
            ata-CT2000MX500SSD1_2116E598CDFC  ONLINE       0     0     0  (resilvering)

This triggered a resilver to sync the two drives. In hindsight, I regretted not doing a sequential resilver, which would have been much faster (more on that below). Total time to resilver 881 GB was 47 minutes.
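
For reference, OpenZFS can do a sequential resilver by passing -s to zpool attach; as I understand it, the new device is then reconstructed in sequential order and a scrub is started afterwards to verify checksums. The attach command would have looked something like:

> # zpool attach -s -o ashift=12 host1-nvme1 nvme-CT2000P3PSSD8_xxx ata-CT2000MX500SSD1_xxx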

Performance Impact

With both drives mirrored, the read speed skyrocketed. I suspect this also has something to do with the benchmarking tool used (dd versus hdparm).

> # dd if=test of=/dev/null bs=128k count=320k
327680+0 records in
327680+0 records out
42949672960 bytes (43 GB, 40 GiB) copied, 18.1141 s, 2.4 GB/s
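
Part of that jump is presumably ZFS spreading reads across both sides of the mirror (the ARC may be helping as well). Watching the pool while the read runs makes the split between the two drives visible:

> # zpool iostat -v host1-nvme1 1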

And the write speed dropped, but just a bit:

> $ dd if=/dev/zero of=test oflag=direct bs=128k count=320k
327680+0 records in
327680+0 records out
42949672960 bytes (43 GB, 40 GiB) copied, 36.6347 s, 1.2 GB/s
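
One caveat with /dev/zero as a data source: if compression is enabled on the dataset (lz4 is a common default these days), the zeros compress down to almost nothing and the absolute numbers say more about the CPU and the ZFS pipeline than about the drives. That's easy to check:

> # zfs get compression,compressratio host1-nvme1

Since the single-drive and mirrored tests used the same method, the relative comparison still holds either way.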

That’s excellent performance for a mirrored pool, and it alleviates any performance concerns I had! The NVMe SSD is still able to stretch its legs despite having a slower SATA partner.

Conclusion

Mirroring my NVMe ZFS pool with a SATA SSD resulted in only a minor performance reduction. The sequential read and write speeds are still great. And I now have full redundancy and protection against drive failure. For homelab usage, the SATA/NVMe combo works surprisingly well!