Drive Performance Under KVM Using Virtio

Under KVM I was seeing erratic disk I/O performance on my OpenMediaVault guest. Aside from the OS volume (8GB on an OCZ Vertex Plus SSD), I had six 3TB data drives:

4 * Seagate ST3000DM001
2 * Toshiba DT01ACA300

I added all disks using virt-manager:

Add Storage Pool -> disk: Physical Disk Device -> Source Path: /dev/disk/by-id/[DISK_ID]
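
For reference, roughly the same pool can be defined from the command line with virsh. This is only a sketch of what virt-manager does for a blank drive; the pool name, device path and format are taken from the pool XML shown further down, and I have not verified every flag:

virsh pool-define-as Backup disk \
  --source-dev /dev/disk/by-id/ata-TOSHIBA_DT01ACA300_Y3DBDBMGS \
  --source-format unknown \
  --target /dev/disk/by-id
virsh pool-start Backup
virsh pool-autostart Backup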

The only subtle variation when adding the drives was that the 2 Toshibas were blank, so they were added with Format = auto and Build Pool: Unchecked.

The Seagates had existing, but unwanted, partitions. An apparent bug in virt-manager meant I could not delete the pre-existing partitions, so I had to add them with Format = gpt and Build Pool: Checked.
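
In hindsight, a possible workaround for that bug would have been to wipe the old partition tables on the host first, so the Seagates could also go in with Format = auto. This is destructive, and the device path below is just the first Seagate as an example:

# WARNING: destroys any existing partitions/data on the drive
sgdisk --zap-all /dev/disk/by-id/ata-ST3000DM001-9YN166_S1F0T0Q6
wipefs --all /dev/disk/by-id/ata-ST3000DM001-9YN166_S1F0T0Q6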

I was under the impression that in both cases the raw drive would be presented to the guest, so let's take a look at the resulting performance.

Read benchmark:
hdparm -t --direct /dev/sdX
Write benchmark:
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
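
Both tests were run as root: hdparm reads from the raw device, while dd writes a file on a filesystem mounted from the drive under test. A small helper along these lines runs both against one drive (the device and mount point arguments are placeholders, not paths from my setup):

#!/bin/bash
# Benchmark helper (sketch): $1 = block device, $2 = mount point on that device
DEV="$1"
MNT="$2"
# Sequential read, bypassing the page cache
hdparm -t --direct "$DEV"
# 512MB sequential write, flushed to disk before dd reports the rate
dd bs=1M count=512 if=/dev/zero of="$MNT/ddtest" conv=fdatasync
rm -f "$MNT/ddtest"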

On Host
[table]
Model, OCZ Vertex Plus[/dev/sda], TOSHIBA DT01ACA300[/dev/sdc], Seagate ST3000DM001[/dev/sde]
Read, 221.93MB/s, 186.89MB/s, 169.22-179.48MB/s
Write, 158.67MB/s, N/A, N/A
[/table]

On Guest
[table]
Model, OCZ Vertex Plus[/dev/vda], TOSHIBA DT01ACA300[/dev/sdc], Seagate ST3000DM001[/dev/sde]
Read, 126.23MB/s, 185.86MB/s, 98.87MB/s
Write, 122MB/s, 117.67MB/s, 88.83MB/s
[/table]

That’s a huge difference in performance between the Toshibas and the Seagates. Not only that, but the read/write performance on the Seagates was extremely unstable.

Let’s take a look at the storage pool configurations for these drives.
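
Each pool definition below was dumped with virsh; assuming the pool names as created in virt-manager, something like:

virsh pool-dumpxml Media1
virsh pool-dumpxml Backup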

Slow Drive

<pool type='disk'>
  <name>Media1</name>
  <uuid>2c5a4e7b-6d61-9644-4162-c97cf11185e4</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/disk/by-id/ata-ST3000DM001-9YN166_S1F0T0Q6'/>
    <format type='gpt'/>
  </source>
  <target>
    <path>/dev/disk/by-id</path>
    <permissions>
      <mode>0711</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

Fast Drive

<pool type='disk'>
  <name>Backup</name>
  <uuid>b445343e-39e7-ff85-2c31-ba331ae10311</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/disk/by-id/ata-TOSHIBA_DT01ACA300_Y3DBDBMGS'/>
    <format type='unknown'/>
  </source>
  <target>
    <path>/dev/disk/by-id</path>
    <permissions>
      <mode>0711</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

Due to the quirky behaviour of virt-manager, it had seemingly created a nested GPT partition table inside my pre-existing partition, and this extra layer of abstraction was causing the performance issues.
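
What I actually wanted was the raw drive handed straight to the guest over virtio, with no pool-level partitioning in between. One way to get there, sketched below rather than the exact steps I took (the domain name omv and the target vdb are placeholders):

# Remove the misbehaving pool definition (stops and undefines the pool;
# it does not touch the data on the disk)
virsh pool-destroy Media1
virsh pool-undefine Media1
# Attach the whole disk to the guest as a raw virtio block device
virsh attach-disk omv /dev/disk/by-id/ata-ST3000DM001-9YN166_S1F0T0Q6 vdb \
  --targetbus virtio --driver qemu --subdriver raw --persistent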
