

Tl;dr: my ZFS RAIDZ2 array reads at 7.5+ GB/s and writes at 2.0+ GB/s when I specify bs=128K or greater with dd, but all my general I/O sits around 300MB/s, and dd gives the same ~300MB/s with bs=1k. What can I do to improve general I/O to at least 1GB/s?

I am running a 16-drive SATA3 RAIDZ2 OpenZFS on OS X (v1.31r2) filesystem (v5000) over Thunderbolt 2 (twin Areca 8050T2s) to a 12-core, 64GB Mac Pro.
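For concreteness, the kind of dd comparison described above looks roughly like the sketch below. The pool name (tank), mount point, and file paths are illustrative assumptions, not commands from the original post.

```
# Generate a 10GB file of random data once; this step is CPU-bound by
# /dev/urandom, so only the read-back numbers below say anything about the pool.
dd if=/dev/urandom of=/Volumes/tank/random.bin bs=1m count=10240

# Re-read with a 1KB block size: one read() call per kilobyte, the case
# reported as ~300MB/s.
dd if=/Volumes/tank/random.bin of=/dev/null bs=1k

# Re-read with a block size at or above the 128KB recordsize, the case
# reported as multi-GB/s (later attributed to caching).
dd if=/Volumes/tank/random.bin of=/dev/null bs=128k
```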
The ZFS filesystem was created with ashift=12 (Advanced Format HDDs with 4096-byte sectors) and recordsize=128k. I'm seeing transfer rates of around 300MB/s from the array in OS X and from the terminal using default commands (the file being copied is 10GB of random data).

OS X's read of the boot drive's optimal I/O block size (1TB SSD, HFS+):
OS X's read of the array's optimal I/O block size (16-drive RAIDZ2, ZFS):

I also created a ZFS volume on the pool alongside the filesystem and formatted it as HFS+. I'm running ~20-30x below optimal! What am I missing? Any ideas?

Update: the high speeds were cached I/O (thanks!). Speeds of ≈300MB/s still seem too slow for this setup, and CPU utilization during I/O is negligible (all cores <5%).

zfs get all output: NAME PROPERTY VALUE SOURCE
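To double-check the tuning and caching questions raised above, checks along these lines are commonly used; tank is an assumed pool name, and purge only flushes OS X's own caches, so an export/import is the more thorough way to empty the ARC before re-testing.

```
# Confirm the pool really has ashift=12 and the dataset a 128KB recordsize.
zdb -C tank | grep ashift
zfs get recordsize,compression tank

# Drop cached data before re-running a read test so the result reflects
# the disks rather than the ARC or the OS X buffer cache.
sudo purge
sudo zpool export tank && sudo zpool import tank
dd if=/Volumes/tank/random.bin of=/dev/null bs=128k
```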
You didn't post the zpool status for this, but you imply in the post that all 16 disks are in a single RAIDZ2 vdev. While this is a good, safe configuration, you have to understand that RAIDZ isn't designed primarily for speed. RAIDZ2 is analogous to RAID6, but the variant has features that make it slower and safer. See this nice write-up for the full details, but these two quotes should help you see the issue (emphasis mine):

When writing to RAID-Z vdevs, each filesystem block is split up into its own stripe across (potentially) all devices of the RAID-Z vdev. This means that each write I/O will have to wait until all disks in the RAID-Z vdev are finished writing. Therefore, from the point of view of a single application waiting for its I/O to complete, you'll get the IOPS write performance of the slowest disk in the RAID-Z vdev.

When reading from RAID-Z vdevs, the same rules apply, as the process is essentially reversed (no round-robin shortcut like in the mirroring case): better bandwidth if you're lucky (and read the same way you've written), and a single disk's IOPS read performance in the majority of cases that matter.
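If there is any doubt about the layout, the vdev structure can be read straight from zpool status; tank is again a placeholder pool name.

```
# A single raidz2 vdev appears as one "raidz2-0" group holding all 16 disks.
# Streaming bandwidth can approach (16 - 2) x one disk, but the pool's
# random IOPS stay close to those of a single disk, per the quotes above.
zpool status tank
```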

In effect, you have 16 medium-speed drives, and for each write pass you wait for all 16 drives to check in with the controller and say "done" before the next write starts. With 16 disks, you are effectively always waiting for a near-full disk rotation before one of the writes, so you are held up by physics and by how ZFS commits data. Single-process, single-thread write workloads aren't the best case for ZFS in general. Running multiple small read/write tasks at once might show you better IOPS numbers, but I think the physics of ZFS is your main issue. If you are willing to sacrifice the space, mirroring would likely be faster.
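As a rough sketch of the trade-off being suggested, the two layouts could be created as below; the disk names are placeholders and zpool create destroys existing data, so this is illustrative only, not a migration recipe.

```
# Current layout: one wide raidz2 vdev. Usable capacity is 14 of 16 disks,
# but the whole pool has roughly the random IOPS of one disk.
zpool create tank raidz2 \
  disk0 disk1 disk2  disk3  disk4  disk5  disk6  disk7 \
  disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15

# Mirrored alternative: eight 2-way mirror vdevs. ZFS stripes blocks across
# vdevs, so IOPS scale with eight vdevs instead of one, at the cost of
# usable capacity (8 of 16 disks instead of 14 of 16).
zpool create tank \
  mirror disk0  disk1   mirror disk2  disk3 \
  mirror disk4  disk5   mirror disk6  disk7 \
  mirror disk8  disk9   mirror disk10 disk11 \
  mirror disk12 disk13  mirror disk14 disk15
```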
