I have a CentOS 7 server (kernel 3.10.0-957.12.1.el7.x86_64) with 2 NVMe disks and the following setup:
# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1         259:0    0   477G  0 disk
├─nvme0n1p1     259:2    0   511M  0 part  /boot/efi
├─nvme0n1p2     259:4    0  19.5G  0 part
│ └─md2           9:2    0  19.5G  0 raid1 /
├─nvme0n1p3     259:7    0   511M  0 part  [SWAP]
└─nvme0n1p4     259:9    0 456.4G  0 part
  └─data-data   253:0    0 912.8G  0 lvm   /data
nvme1n1         259:1    0   477G  0 disk
├─nvme1n1p1     259:3    0   511M  0 part
├─nvme1n1p2     259:5    0  19.5G  0 part
│ └─md2           9:2    0  19.5G  0 raid1 /
├─nvme1n1p3     259:6    0   511M  0 part  [SWAP]
└─nvme1n1p4     259:8    0 456.4G  0 part
  └─data-data   253:0    0 912.8G  0 lvm   /data
Our monitoring and iostat continually show nvme0n1 and nvme1n1 at 80%+ I/O utilization, while the individual partitions show 0% I/O utilization and are fully available (250k IOPS, 1 GB/s read/write).
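Here is a representative snapshot of the extended statistics, gathered with something like:

# iostat -x 1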
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           7.14    0.00    3.51    0.00    0.00   89.36

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
nvme1n1           0.00     0.00    0.00   50.50     0.00   222.00     8.79     0.73    0.02    0.00    0.02  14.48  73.10
nvme1n1p1         0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
nvme1n1p2         0.00     0.00    0.00   49.50     0.00   218.00     8.81     0.00    0.02    0.00    0.02   0.01   0.05
nvme1n1p3         0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
nvme1n1p4         0.00     0.00    0.00    1.00     0.00     4.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
nvme0n1           0.00     0.00    0.00   49.50     0.00   218.00     8.81     0.73    0.02    0.00    0.02  14.77  73.10
nvme0n1p1         0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
nvme0n1p2         0.00     0.00    0.00   49.50     0.00   218.00     8.81     0.00    0.02    0.00    0.02   0.01   0.05
nvme0n1p3         0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
nvme0n1p4         0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md2               0.00     0.00    0.00   48.50     0.00   214.00     8.82     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00    1.00     0.00     4.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
Any ideas what could be the root cause of this behavior?
Everything seems to be working fine, except that monitoring keeps triggering high I/O alerts.
nvme1n1p2 and nvme0n1p2 are showing activity, and they are the components of your md2 RAID1 device. Perhaps the RAID set is syncing or scrubbing in the background? Look into /proc/mdstat to see the RAID device status. The monitoring results might be a quirk resulting from how the background syncing/scrubbing is implemented in the kernel. If so, it should be basically "soft workload" that will be automatically restricted when there are actual user-space disk I/O operations to do. – telcoM May 08 '19 at 06:30
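To check whether the array is resyncing or scrubbing, the usual places to look are /proc/mdstat and the md sysfs entries (a minimal sketch, using the md2 name from the output above):

# cat /proc/mdstat
# cat /sys/block/md2/md/sync_action

sync_action reports idle when no resync/check/repair is running.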
md2 : active raid1 nvme0n1p2[0] nvme1n1p2[1] and 20478912 blocks [2/2] [UU] – mike May 08 '19 at 08:01
Is the scheduler of the NVMEs set to none? https://access.redhat.com/solutions/3901291 – Thomas May 25 '19 at 10:07
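The active I/O scheduler can be checked through sysfs (a minimal sketch; the scheduler currently in use is shown in brackets):

# cat /sys/block/nvme0n1/queue/scheduler
# cat /sys/block/nvme1n1/queue/scheduler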
iostat shows wrong utilization for NVME drives configured with none scheduler, which is the default for NVME drives. It seems to be a bug which has been identified. – Thomas May 25 '19 at 13:40
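One way to sanity-check this against the raw kernel counters is to watch io_ticks directly (a sketch, assuming the standard /proc/diskstats layout, where the 13th column is the milliseconds spent with I/O in flight that iostat turns into %util):

# awk '$3 ~ /^nvme[01]n1(p[0-9])?$/ { print $3, $13 }' /proc/diskstats

If the whole-device counters grow much faster than those of their partitions under the same workload, that points at the accounting/reporting quirk rather than real device saturation.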