
Docker to solve SuperMicro IPMI iKVM – JavaWS Problems

icedtea-web 1.6.2 does not seem to work with SuperMicro’s IPMI Java iKVM viewer. SuperMicro’s helpful response is to use only Oracle’s Java.

net.sourceforge.jnlp.LaunchException: Fatal: Initialization Error: Could not initialize application. The application has not been initialized, for more information execute javaws from the command line.

Even with the right version of Java, you often have to jump through security hoops or juggle Java versions just to get it to work.

If you have Docker installed, there is a great solution that avoids installing Oracle’s Java or tweaking any security settings. solarkennedy has created a very nice Docker container that encapsulates everything needed to access various Java-based IPMI consoles.

 docker run -p 8080:8080 solarkennedy/ipmi-kvm-docker

Now point your browser to http://localhost:8080 and voila:

You are looking at a Java-enabled Firefox (and OS) through a web VNC client, accessed from the Docker host. Not bad!
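If you prefer a container that cleans up after itself, running it detached works too; a minimal sketch (the ipmi-kvm container name is my own choice):

# Run detached; --rm removes the container when it stops
docker run --rm -d --name ipmi-kvm -p 8080:8080 solarkennedy/ipmi-kvm-docker

# When you are done with the console
docker stop ipmi-kvm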

libvirt – adding storage pools manually

I use direct disk pass-through for several of my KVM guests. I usually use Virt-Manager to set these up, but a bug in the latest version (1.2.1) made that impossible.

Fortunately it’s pretty easy to add drives using virsh. First check the existing storage pools:

$ virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 Backup               active     yes
 BigParity            inactive   yes
 default              active     yes
 Parity               active     yes

Create a storage pool XML file locally, using the existing pools in /etc/libvirt/storage/ for reference:

$ cat Parity5TB.xml
<pool type='disk'>
  <name>Parity5TB</name>
  <uuid>8a4550e0-3bcf-4351-ad36-496b51737c</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <device path='/dev/disk/by-id/ata-TOSHIBA_MD04ACA500_55F'/>
    <format type='unknown'/>
  </source>
  <target>
    <path>/dev/disk/by-id</path>
    <permissions>
      <mode>0711</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

Note that I use /dev/disk/by-id. You can use any /dev/disk/by-* reference, but NEVER use /dev/sd* (you’ll understand why the first time you add or remove a drive).

Assuming the disk is already formatted (I find it easiest to format on the host with GParted and pass through the pre-formatted disk), you can quickly get the UUID with blkid. Then either use /dev/disk/by-uuid, or look up the symbolic links in the /dev/disk/by-* directories.
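For example (with a hypothetical /dev/sdc; your device and by-id names will differ):

$ sudo blkid /dev/sdc1
$ ls -l /dev/disk/by-id/ | grep sdc

The first command prints the filesystem UUID; the second shows which by-id symlinks point at the disk.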

Add the pool to your definitions:

$ virsh pool-define Parity5TB.xml
$ virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 Backup               active     yes
 BigParity            inactive   yes
 default              active     yes
 Parity               active     yes
 Parity5TB            active     no

That’s it. This does not autostart the pool or attach it to any guests, but you can do both through virt-manager (or virsh, as sketched below).
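If you’d rather stay in virsh, the equivalents are:

$ virsh pool-start Parity5TB      # only needed if the pool shows as inactive
$ virsh pool-autostart Parity5TB  # start the pool automatically at boot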

iperf for testing HDMI balun runs

It seems reasonable that jitter will be a big factor in the performance of CAT5e/6 runs used for HDMI baluns. I ran two high-quality unshielded CAT5e cables and two high-quality shielded (but ungrounded by me) CAT6 cables.

They cross over multiple power lines and unfortunately run parallel to power lines for about 11 feet of the ~30-foot run.

Testing Jitter:

On the server:

iperf -s -w 128k -u

This starts the iperf server with a 128 KB buffer in UDP mode. UDP best reflects the nature of HDMI-over-Ethernet traffic.

On the client:

iperf -c serverip -u -b 1000m -w 128k -t 120

This starts the client, connects to serverip, and sends UDP traffic at gigabit speed for 120 seconds. I needed to run the test for at least 120 seconds to get consistent jitter results.
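If you are comparing several cable runs, a small wrapper saves retyping; a rough sketch, assuming iperf 2’s UDP summary output (serverip is a placeholder as above):

#!/bin/sh
# Repeat the 120-second UDP test a few times and keep only the
# summary lines (the server report includes jitter and loss).
for i in 1 2 3; do
    iperf -c serverip -u -b 1000m -w 128k -t 120 | grep '%'
done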

First I tested with a 3-foot CAT5e cable connected directly to the switch; this represents ‘ideal’ performance. The same short cable was then connected to the ends of the long CAT5e and CAT6 runs.

Run       Interval        Transfer     Bandwidth      Jitter    Lost/Total Datagrams
--------  --------------  -----------  -------------  --------  ----------------------
IDEAL     0.0-120.0 sec   9.65 GBytes  691 Mbits/sec  0.013 ms  47713/7095556 (0.67%)
STP-CAT6  0.0-120.0 sec   9.62 GBytes  689 Mbits/sec  0.015 ms  26399/7052134 (0.37%)
CAT5e     0.0-120.0 sec   9.57 GBytes  685 Mbits/sec  0.180 ms  108956/7100704 (1.5%)

So there we have it. From a simple bandwidth perspective the long CAT5e run offers basically the same performance as the STP-CAT6 run. When it comes to jitter, however, the CAT6 cable is only marginally worse than the ideal case, while the CAT5e run is an order of magnitude worse.

Very interesting! In the end I’m sure it’s the shielding rather than the CAT6 rating that makes the STP-CAT6 superior.

Poor CIFS/SMB performance on KVM Guest

This is kind of stating the obvious. I’m running OpenMediaVault 0.5.48 as a KVM guest.

CIFS transfers from Windows 8.1 to OMV over gigabit Ethernet maxed out at ~30 MB/s.

Despite selecting Generic Kernel > 2.6 from within Virt-Manager, KVM defaulted to the emulated Realtek 8139 NIC.

Switching to the virtio driver mostly resolved the performance issues, with CIFS transfers fluctuating from 60 MB/s to 110 MB/s.
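The change itself is small: edit the guest definition with virsh edit and set the NIC model to virtio. A minimal sketch, assuming a bridged NIC on a bridge named br0 (only the model element actually changes):

<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>

The guest needs virtio drivers, but any reasonably recent Linux kernel, including OMV’s, ships them.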

I’m not sure what the source of the throughput fluctuation is, as it will be stable at 110 MB/s for ten minutes and then suddenly drop.

I recommend using iftop on the KVM host to measure performance.
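For example, on the host (assuming the guest traffic goes through a bridge named br0; -B reports rates in bytes, which compares directly to file-copy MB/s figures):

$ sudo iftop -i br0 -B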