Category: Virtualization

Unifi to Grafana (using Prometheus and unifi_exporter)

Documenting the process of getting UniFi metrics into Grafana. We already had Prometheus and Grafana running on our Docker Swarm cluster (we promise to document this all one day).

There was only one up-to-date image of unifi_exporter on Docker Hub, and it had no documentation, so we were not comfortable using it.

1) Download, build and push unifi_exporter.

$ git clone git@github.com:mdlayher/unifi_exporter.git
...
$ cd unifi_exporter
$ sudo docker build -t louisvernon/unifi_exporter:$(git describe --tags) . # yields a tag like 0.4.0-18-g85455df
$ sudo docker push louisvernon/unifi_exporter:$(git describe --tags)

2) Create a read-only admin user on the UniFi controller for the unifi_exporter service.

3) Create config.yml on storage mounted on a Docker Swarm node. In our case we have a GlusterFS volume mounted across all nodes. If you are using the self-signed cert on your UniFi controller, you will need to set insecure to true.

$ cat /data/docker/unifi-exporter/config.yml
listen:
  address: :9130
  metricspath: /metrics
unifi:
  address: https://unifi.vern.space
  username: unifiexporter
  password: random_password
  site: Default 
  insecure: false
  timeout: 5s

4) Deploy to Docker Swarm. The Docker image does not contain any trusted certs, so we mounted the host certs read-only.

$ docker service create --replicas 1 --name unifi_exporter \
    --mount type=bind,src=/data/docker/unifi-exporter/config.yml,dst=/config.yml \
    --mount type=bind,src=/etc/ssl,dst=/etc/ssl,readonly \
    --publish 9130:9130 \
    louisvernon/unifi_exporter:0.4.0-18-g85455df -config.file=/config.yml

5) You should see something like this from the logs (we use Portainer to quickly inspect our services).

2018/06/12 01:10:47 [INFO] successfully authenticated to UniFi controller
2018/06/12 01:10:47 Starting UniFi exporter on ":9130" for site(s): Default
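
If you are not using Portainer, the same log output should be available straight from the Docker CLI:

$ docker service logs unifi_exporter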

The first time around (before we bind-mounted /etc/ssl) we hit an x509 error due to the missing trusted certs.

6) Add unifi_exporter as a new target for prometheus.

$ cat /data/docker/prometheus/config/prometheus.yml
...
  - job_name: 'unifi_exporter'
    static_configs:
      - targets: ['dockerswarm:9130']
        labels:
          alias: unifi_exporter
...

7) Point your browser at http://dockerswarm:9130/metrics and make sure you see stats. In our case the payload was 267 lines.
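
As a quick command-line check (same hostname as above), the number of lines returned should roughly match what you see in the browser:

$ curl -s http://dockerswarm:9130/metrics | wc -l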

8) Restart the prometheus service: `docker service update --force prometheus`

9) Hop on over to Prometheus to make sure the new target is listed and UP: http://dockerswarm:9090/targets
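
Once the target shows as UP, the query below in the Prometheus expression browser should return 1 for the exporter (job name as configured in prometheus.yml above):

up{job="unifi_exporter"}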

10) Finally we import the dashboard into Grafana. Our options are a little sparse right now, but this dashboard gives us somewhere to start. We made some tweaks to make it multi-AP friendly and added some extra stats:
Unifi-1516201148080

The result:

Quick GlusterFS Volume Creation Steps

Here are some quick steps to create a three-drive, three-node replicated GlusterFS volume for use by Docker Swarm. We are not using LVM for this quick test, so we lose features like snapshotting.

1) Create brick mount point on each node

mkdir -p /data/glusterfs/dockerswarm/brick1

2) Format the drives with xfs

 mkfs.xfs -f -i size=512 /dev/sd_

3) Add drives to fstab

/dev/disk/by-id/ata_ /data/glusterfs/dockerswarm/brick1  xfs rw,inode64,noatime,nouuid      1 2

4) Mount

mount /data/glusterfs/dockerswarm/brick1

5) Create the brick directory under the brick mount point*

mkdir -p /data/glusterfs/dockerswarm/brick1/brick

6) Create the volume

$ gluster volume create dockerswarm replica 3 transport tcp server1:/data/glusterfs/dockerswarm/brick1/brick server2:/data/glusterfs/dockerswarm/brick1/brick server3:/data/glusterfs/dockerswarm/brick1/brick

volume create: dockerswarm: success: please start the volume to access data

* The reason we create the brick directory inside the brick mount point is to ensure the underlying filesystem has actually been mounted on the host. If it has not, the brick directory will not be present and Gluster will treat the brick as unavailable.
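
Note that volume create only defines the volume; before Docker Swarm can use it, the volume has to be started and then mounted on each node (the mount point below is just an example):

$ gluster volume start dockerswarm
$ gluster volume info dockerswarm
$ mkdir -p /mnt/dockerswarm
$ mount -t glusterfs server1:/dockerswarm /mnt/dockerswarm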

libvirt – adding storage pools manually

I use direct disk pass-through for several of my KVM guests. I usually use Virt-Manager to set these up, but a bug in the latest version (1.2.1) made that impossible.

Fortunately it’s pretty easy to add drives using virsh. First check the existing storage pools:

$ virsh pool-list --all
 Name        State      Autostart
-------------------------------------------
 Backup      active     yes
 BigParity   inactive   yes
 default     active     yes
 Parity      active     yes

Create a storage pool XML file locally, using the existing pools in /etc/libvirt/storage/ for reference:

$ cat Parity5TB.xml
<pool type='disk'>
  <name>Parity5TB</name>
  <uuid>8a4550e0-3bcf-4351-ad36-496b51737c</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/disk/by-id/ata-TOSHIBA_MD04ACA500_55F'/>
    <format type='unknown'/>
  </source>
  <target>
    <path>/dev/disk/by-id</path>
    <permissions>
      <mode>0711</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>


Note that I use /dev/disk/by-id. You can use any /dev/disk/by-* reference, but NEVER use /dev/sd* (you’ll understand why after the first time you add or remove a drive).

Assuming it’s already formatted (I find it easiest to format on the host with gparted and pass through the pre-formatted disk), you can quickly get the UUID with blkid. Then either use /dev/disk/by-uuid, or look up the symbolic links in the /dev/disk/by-X directory.
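
For example (the device name is a placeholder):

$ blkid /dev/sdX                      # prints the filesystem UUID
$ ls -l /dev/disk/by-id/ | grep sdX   # shows which by-id symlinks point at that device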

Add the pool to your definitions:

$ virsh pool-define Parity5TB.xml
$ virsh pool-list --all
 Name        State      Autostart
-------------------------------------------
 Backup      active     yes
 BigParity   inactive   yes
 default     active     yes
 Parity      active     yes
 Parity5TB   active     no

That’s it. This does not autostart the pool or attach it to any guests, but you can do both through virt-manager.
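
If you would rather stay on the command line, starting and autostarting the pool works through virsh as well:

$ virsh pool-start Parity5TB
$ virsh pool-autostart Parity5TB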

Drive Performance Under KVM Using Virtio

Using KVM I was experiencing erratic disk I/O performance on my OpenMediaVault guest. Aside from the OS volume (8GB on a Vertex Plus SSD) I had six 3TB drives:

4 * Seagate ST3000DM001
2 * Toshiba DT01ACA300

I added all disks using virt-manager:

Add Storage Pool -> disk: Physical Disk Device -> Source Path: /dev/disk/by-id/[DISK_ID]

The only subtle variation in adding the drives was that the 2 * Toshibas were blank, added with Format = auto and Build Pool: Unchecked.

The Seagates had existing, but unwanted partitions. An apparent bug in virt-manager meant I could not delete the pre-existing partitions so I had to add them with Format = gpt and Build Pool: Checked.

I was under the impression that in both cases the raw drive would be presented to the guest… so let’s take a look at the resulting performance.

Read benchmark:
hdparm -t --direct /dev/sdX
Write benchmark:
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
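
To run the read benchmark across several drives in one go, a simple loop works (the device glob is an assumption; run as root):

for d in /dev/sd[a-f]; do hdparm -t --direct "$d"; done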

On Host
Drive                            Read                      Write
OCZ Vertex Plus [/dev/sda]       221.93MB/s                158.67MB/s
TOSHIBA DT01ACA300 [/dev/sdc]    186.89MB/s                N/A
Seagate ST3000DM001 [/dev/sde]   169.22MB/s, 179.48MB/s    N/A

On Guest
Drive                            Read          Write
OCZ Vertex Plus [/dev/vda]       126.23MB/s    122MB/s
TOSHIBA DT01ACA300 [/dev/sdc]    185.86MB/s    117.67MB/s
Seagate ST3000DM001 [/dev/sde]   98.87MB/s     88.83MB/s

That’s a huge difference in performance between the Toshiba and the Seagate. Not only that, but the read/write performance on the Seagates was extremely unstable.

Let’s take a look at the Storage Volume configurations for these drives.

Slow Drive

<pool type='disk'>
  <name>Media1</name>
  <uuid>2c5a4e7b-6d61-9644-4162-c97cf11185e4</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/disk/by-id/ata-ST3000DM001-9YN166_S1F0T0Q6'/>
    <format type='gpt'/>
  </source>
  <target>
    <path>/dev/disk/by-id</path>
    <permissions>
      <mode>0711</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

Fast Drive

<pool type='disk'>
  <name>Backup</name>
  <uuid>b445343e-39e7-ff85-2c31-ba331ae10311</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/disk/by-id/ata-TOSHIBA_DT01ACA300_Y3DBDBMGS'/>
    <format type='unknown'/>
  </source>
  <target>
    <path>/dev/disk/by-id</path>
    <permissions>
      <mode>0711</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

Due to the quirky behaviour of virt-manager, it had seemingly created a nested GPT volume inside my pre-existing partition, and this abstraction was causing the performance issues.
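
A quick way to check whether a pool is configured this way is to dump its XML and look at the source format (pool name from the example above):

$ virsh pool-dumpxml Media1 | grep 'format type'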

Poor CIFS/SMB performance on KVM Guest

This is kind of stating the obvious. I’m running OpenMediaVault 0.5.48 as a KVM guest.

CIFS transfers from Windows 8.1 to OMV over gigabit ethernet maxed out at ~30MB/s.

Despite selecting Generic Kernel > 2.6 from within Virt-Manager, KVM defaulted to the emulated Realtek 8139 NIC.

Switching to the virtio driver mostly resolved the performance issues, with CIFS transfers fluctuating from 60MB/s to 110MB/s.
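
For reference, the switch can be made by editing the guest definition with virsh; a minimal sketch, assuming a bridged network and a guest named omv (both names are assumptions):

$ virsh edit omv

<interface type='bridge'>
  <source bridge='br0'/>    <!-- bridge name is an assumption -->
  <model type='virtio'/>    <!-- paravirtual NIC instead of the default rtl8139 -->
</interface>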

Not sure what the source of the throughput fluctuation is, as it will be stable at 110MB/s for 10 minutes and suddenly drop.

I recommend using iftop on the KVM host to measure performance.
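
For example (the bridge interface name is an assumption):

$ sudo iftop -i br0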