
Virtualized Linux guest in FreeNAS 9.10 using iohyve

Vincent Danen

March 24, 2017

During the day I'm a manager of one of the greatest security teams on the planet (in my biased estimation), but at night (and random times throughout the day), I'm a sysadmin tinkerer. There's just something about goofing off with operating systems that appeals to me; this is likely what caused me to devote five years of my life to working on Annvix back in the day.

I've been running a local IPA install for quite a few years, but because IPA really needs to run on its own dedicated system (and I have enough machines running in this house already), I've been using KVM to virtualize the IPA server. IPA is really cool and let me discard my homegrown LDAP+Kerberos setup for something with enterprise gusto to manage authentication, identification, and authorization policies for my home network (insert obligatory comment about overkill here). I started using IPA on CentOS 6 and a year ago moved both guest and host to CentOS 7, which has been working pretty well except that, for some odd reason, Python randomly segfaults. I don't know the cause and I've filed abrt reports, but it's a concern when the language your system updater is based on starts crashing before installing updates or, even worse, during the installation of updates, making a mess of things (this does not make RPM happy!). The oddest thing is that none of my other CentOS 7 systems (on bare metal) exhibit this behaviour.

The other problem is that if my IPA server decides to tip over, I don't have a failover setup (again, this being at home). While I was thinking about the best way to stand up another IPA server as a replicating slave, promote it to master, and migrate away from whatever is causing all these nasty segfaults, FreeNAS reminded me that an update was available, and I started thinking about jails and whether they would run Linux. After starting down the rabbit trail, I found out about iohyve, which is a bhyve manager for FreeBSD (what FreeNAS is based on). bhyve is a hypervisor that runs on FreeBSD, basically like KVM (which I've been using), OpenVZ (which I've used with VPS hosting), or Xen.

So bhyve is for FreeBSD what KVM is for Linux. And this is where the sysadmin/tinkerer/geek in me thinks "cool" and away disappears a weekend.

For the purposes of this post/tutorial, you need to be running FreeNAS 9.10 (you can probably do this easily enough with FreeBSD, but I've not tried). There is also documentation on Using iohyve.

From your FreeNAS system you need to know your ethernet interface name (in the web UI go to Network -> Network Interfaces, in my case em0) and the storage pool name (Storage -> Volumes, in my case the pool is named storage). The actual setup of iohyve needs to be done as root over SSH, so you'll need that running as well.
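If you'd rather pull those two names from the SSH session instead of clicking through the web UI, ifconfig and zpool will tell you (a quick sketch; your names will differ):

```
# ifconfig -l
# zpool list -H -o name
```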

As root, we need to create the environment iohyve requires. I used the following command to set up the pool for its use:

# iohyve setup pool=storage kmod=1 net=em0
Setting up iohyve pool...
On FreeNAS installation.
Checking for symbolic link to /iohyve from /mnt/iohyve...
Symbolic link to /iohyve from /mnt/iohyve successfully created.
Loading kernel modules...
bridge0 is already enabled on this machine...
Setting up correct sysctl value...
net.link.tap.up_on_open: 0 -> 1

This tells iohyve to create the required ZFS datasets and load the kernel modules. kmod=1 tells iohyve to load the required kernel module, pool=storage tells it which pool to use for its files (in this case, storage), and net=em0 sets up the network bridge on that interface (iohyve can only be bound to a single interface). iohyve can use multiple pools, but I only have the one on this system.

Next, you need to create a few tunables in FreeNAS. Heading back to the web UI, go to System -> Tunables and create the following two tunables:

  • variable: iohyve_enable, values: YES, type: rc_conf
  • variable: iohyve_flags, values: kmod=1 net=em0, type: rc_conf

The iohyve_enable variable tells FreeNAS to load iohyve support at boot, and the iohyve_flags are the same kmod and net options we used when setting up iohyve initially.
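For what it's worth, those two tunables should simply translate into the equivalent rc.conf(5) lines; this is a sketch of what they amount to (on plain FreeBSD you would add them by hand):

```
iohyve_enable="YES"
iohyve_flags="kmod=1 net=em0"
```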

The next step is to download an ISO image for iohyve to use for installing a virtual machine. In my case, I want to run CentOS 7. There are plenty of mirrors to choose from for the minimal ISO, which is probably what you want since you can install any specific software you need with yum afterwards.

# iohyve fetch http://centos.mirror.iweb.ca/7/isos/x86_64/CentOS-7-x86_64-Minimal-1611.iso
Fetching http://centos.mirror.iweb.ca/7/isos/x86_64/CentOS-7-x86_64-Minimal-1611.iso...
/iohyve/ISO/CentOS-7-x86_64-Minimal-1611.iso       100% of  680 MB    9 MBps 01m08s

You need to use iohyve to fetch the ISO image (somewhat annoyingly), so you can't just copy an existing ISO over (although I believe you can fetch over NFS or serve it via HTTP from another system on your network).
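If you already have the ISO on another machine, one workaround (my assumption, not something the iohyve docs promise) is to serve it over HTTP with Python's built-in web server and let iohyve fetch it from there; other-host and the port are placeholders:

```
$ cd /path/to/isos
$ python3 -m http.server 8080
```

Then on the FreeNAS box: iohyve fetch http://other-host:8080/CentOS-7-x86_64-Minimal-1611.iso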

Once you have the ISO downloaded, you can configure a new virtual machine. Check to make sure you have the ISO available:

# iohyve isolist
Listing ISO's...

Then, to create the machine with 20GB of space (the same size as my existing KVM machine for IPA, which is more than enough):

# iohyve create ipa-slave 20G
Creating ipa-slave...
# iohyve list
Guest      VMM?  Running  rcboot?  Description
ipa-slave  NO    NO       NO       Sun Mar 12 16:39:10 MDT 2017

Now you can configure the specifics of the machine:

# iohyve set ipa-slave ram=1G cpu=1 os=custom loader=grub-bhyve
Setting ipa-slave ram=1G...
Setting ipa-slave cpu=1...
Setting ipa-slave os=custom...
Setting ipa-slave loader=grub-bhyve...

This sets my virtual machine to have 1GB of RAM, use one virtual CPU, use the "custom" operating system type (we need this later, even though we will be using CentOS 7), and uses the grub-bhyve loader which is required by Linux guests. The iohyve wiki has more details on operating system types and which values to use depending on which Linux operating system you intend to install.

When using a CentOS 7 guest, iohyve currently cannot boot from an XFS partition (the CentOS default), and due to the limitations of the command-line installer, we can't tell Anaconda interactively to use something other than XFS. Another thing I found through trial and error is that you want traditional partitions, not the LVM-based partition scheme (so plan out your filesystems in advance to make sure they're big enough!). This is the main reason for using the "custom" operating system type; we'll switch it back later.

To work around this, we'll use a simple kickstart file to get us to a minimal working system from which we can install the rest of what we want.

In order to make grub boot and use the kickstart file, you need to edit /iohyve/ipa-slave/grub.cfg so it looks like:

linux (cd0)/isolinux/vmlinuz inst.ks=http://somewhere.internal/ks.cfg
initrd (cd0)/isolinux/initrd.img
boot

and the ks.cfg file would look something like (see the documentation for more info):

# System authorization information
auth --enableshadow --passalgo=sha512

# Use CDROM installation media
cdrom
# Use text install
text
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8

# Network information
network  --bootproto=static --device=eth0 --gateway= --ip= --nameserver= --netmask= --noipv6 --activate
network  --hostname=ipa-slave.mydomain.com
# Root password
rootpw --iscrypted $6$/GdEAa2DwhlmU.Vr$R/L.fEc6QwtFiTMLd04HR1SuS7NrsdA.NuQyQ17RbBk8p37oGD/hVvRIOw0v5x6pSC6uU4NigueNmEXvQ8pzo0
# System services
services --enabled="chronyd"
# System timezone
timezone America/Edmonton --isUtc --ntpservers=ntp.mydomain.com
# System bootloader configuration
bootloader --location=mbr --boot-drive=sda
# Partition clearing information
clearpart --drives=sda --all
# Disk partitioning information
autopart --type=plain --fstype="ext4"


%addon com_redhat_kdump --disable --reserve-mb='auto'
%end


Before starting the installation, make sure you can retrieve that file with something like curl. Next, start the installation:
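A quick sketch of that check (the URL is the placeholder from the grub.cfg above; substitute your own):

```shell
#!/bin/sh
# Verify the kickstart file is reachable the same way Anaconda will fetch it.
# -f makes curl treat an HTTP error (404, 500) as a failure instead of
# happily saving the error page.
KS_URL="${KS_URL:-http://somewhere.internal/ks.cfg}"
if curl -fsS --max-time 10 "$KS_URL" >/dev/null 2>&1; then
    status="OK"
else
    status="UNREACHABLE"
fi
echo "kickstart $status: $KS_URL"
```

If this prints UNREACHABLE, fix the web server before booting the installer; otherwise Anaconda will drop you into dracut's rescue shell much less politely.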

# iohyve isolist
Listing ISO's...
# iohyve install ipa-slave CentOS-7-x86_64-Minimal-1611.iso
Installing ipa-slave...
GRUB Process does not run in background....
If your terminal appears to be hanging, check iohyve console ipa-slave in second terminal to complete GRUB process...

From another terminal, ssh into your FreeNAS server again in order to connect to the serial console by using:

# iohyve console ipa-slave
Starting console on ipa-slave...
~~. to escape console [uses cu(1) for console]
Starting installer, one moment...
anaconda for CentOS Linux 7 started.
 * installation log files are stored in /tmp during the installation
 * shell is available on TTY2
 * when reporting a bug add logs from /tmp as separate text/plain attachments
14:21:39 Not asking for VNC because of an automated install
14:21:39 Not asking for VNC because text mode was explicitly asked for in kickstart

Starting automated install..

Now sit back while it automatically installs. This will be quite the minimal install, however it will get you up and running with an ext4-based system that iohyve can boot up, and from there you can install individual packages or package groups.

When the install is complete, you will see something like this on the console:

[  OK  ] Started Restore /run/initramfs.
[  OK  ] Reached target Shutdown.
dracut Warning: Killing all remaining processes
[  819.859124] Restarting system.

However, it's not actually restarting the system. Switch back to the other console and you will see:

Unhandled ps2 keyboard command 0xf6

# iohyve list
Guest      VMM?  Running  rcboot?  Description
ipa-slave  YES   NO       NO       Sun Mar 12 16:39:10 MDT 2017

As you can see from the list command, the virtual machine is not running even though it said it was restarting. In order to start the machine, you must use:

# iohyve start ipa-slave
Starting ipa-slave... (Takes 15 seconds for FreeBSD guests)
[root@heimdall] ~# GRUB Process does not run in background....
If your terminal appears to be hanging, check iohyve console ipa-slave in second terminal to complete GRUB process...

Now switch back to the original console (that you've not disconnected) and you will see the system booting. I had some difficulty with incessant dracut-initqueue timeout warnings, so the system took a long time to boot, and even then I ended up in a rescue shell.

As annoying as this is, it's not too terribly difficult to solve, although I hate how hackish the fix needs to be. The root of the problem seems to be that dracut tries to fetch the kickstart file before the network is up. So we need to do some chroot shenanigans and fix the initramfs:

# mkdir /mnt
# mount /dev/sda3 /mnt
# mount /dev/sda1 /mnt/boot
# chroot /mnt
# cd /boot
# cp initramfs-3.10.0-514.el7.x86_64.img initramfs-3.10.0-514.el7.x86_64.img.bak
# dracut -f /boot/initramfs-3.10.0-514.el7.x86_64.img 3.10.0-514.el7.x86_64
# ls -al initramfs*
-rw-------  1 root root 45180381 Mar 14 08:24 initramfs-0-rescue-715baff9360a47a89af2ddbc55b9f0cf.img
-rw-------  1 root root 44766225 Mar 14 09:26 initramfs-3.10.0-514.el7.x86_64.img
-rw-------  1 root root 17437682 Mar 14 09:25 initramfs-3.10.0-514.el7.x86_64.img.bak

The difference in size between the newly created image and the backup of the previous one is a pretty strong indication that something was missing.
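If you want more than file sizes to go on, dracut ships lsinitrd(1), which lists an image's contents; a sketch of comparing the backup against the rebuilt image (filenames as above):

```
# lsinitrd initramfs-3.10.0-514.el7.x86_64.img.bak > /tmp/old.txt
# lsinitrd initramfs-3.10.0-514.el7.x86_64.img > /tmp/new.txt
# diff /tmp/old.txt /tmp/new.txt | less
```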

There are a few more steps to do before we can get CentOS 7 booted properly. The first is to edit /iohyve/ipa-slave/grub.cfg and remove the kickstart reference. I'm not 100% sure this is required after we perform the next step, but after a lot of beard-tugging (and to spare you the same) I suggest removing it. You also need to set the operating system type back to CentOS 7:

# iohyve set ipa-slave os=centos7
Setting ipa-slave os=centos7...

If you've stopped the virtual machine, great; if it's still waiting on network timeouts, you can forcibly stop it with iohyve destroy ipa-slave. Then start it back up with iohyve start ipa-slave (from the session not attached to the console, of course).

Now on the console, you should be able to watch the virtual machine boot and arrive at a login prompt.

Once you login, you probably want to install a few other things given we opted for the minimal install:

# yum update -y
# yum install net-tools vim-enhanced zsh ipa-server
# systemctl status sshd

The last command is to make sure that sshd is running so you can ssh in and carry on (at least for me, an 80x25 console is pretty darn tiny). I also prefer the enhanced vim, tools like ip addr and ifconfig are just plain handy, and of course the whole point of this exercise was to set this up as an IPA server.

Finally, once you verify you can ssh into the server, disconnect from the console by typing the tilde and CTRL-D (so ~ + CTRL-D).

To finish up, let's give the virtual machine a decent description and tell it to start at boot:

# iohyve set ipa-slave description="IPA Slave server"
Setting ipa-slave description=IPA Slave server...
#  iohyve set ipa-slave boot=1
Setting ipa-slave boot=1...
# iohyve list
Guest      VMM?  Running  rcboot?  Description
ipa-slave  YES   YES      YES      IPA Slave server

At this point, you can make a backup or snapshot of the virtual machine (which is one of the first things I wanted to do after all of the effort of figuring out the above!):

# iohyve snap ipa-slave@base-install-20170314
Taking snapshot ipa-slave@base-install-20170314
# iohyve snaplist

Currently there doesn't seem to be a way to remove snapshots.

I haven't played with it enough to know whether or not the performance is better than KVM on Linux, but I enjoyed fiddling with this to get it figured out and working. Hopefully this is helpful to others; there have been quite a few references to the desire to run CentOS 7 on FreeNAS using iohyve, but even the upstream site indicates it is currently not possible (although according to this comment, it's on the roadmap). With a bit of fiddling, it is possible.

The next step is to figure out how to set up an IPA replication slave, because the documentation is not intuitive at all and setting up a replica is about the only way to migrate IPA from one machine to another. Wish me luck, and I hope this is helpful to people interested in running CentOS 7 with iohyve.

Vincent Danen
March 28, 2017 @ 9:42 PM

Also noting this was posted on the iohyve wiki which is pretty cool. =)


August 09, 2017 @ 7:15 PM

@Vincent I found your blog post through the iohyve GitHub page. I'm trying to get CentOS 7.4 running on my FreeNAS machine. I'm consolidating my Plex server and my NAS to cut down on machines in my office. This guide is excellent and I really appreciate it. I tried converting my existing VMware .vmdk to a raw disk with vboxmanage but couldn't get it to boot; it kept hanging at starting dev.mapper or something. I am still learning about the underpinnings of Linux and BSD so I decided to just start from a fresh CentOS install and migrate my Plex install.

I'm not sure what to do with the kickstart file. You say:

[code]
linux (cd0)/isolinux/vmlinuz inst.ks=http://somewhere.internal/ks.cfg
initrd (cd0)/isolinux/initrd.img
boot
[/code]

Should I put the file in my zvol/dataset or somewhere in the root of the FreeNAS environment? And how would I put in "inst.ks=http:///ks.cfg"?

Thanks for any help or any info you can point me to. I haven't been able to find much on centos in iohyve. My Google Fu has failed me.

Thanks MonkadelicD

Vincent Danen
August 09, 2017 @ 7:34 PM

Hey, MonkadelicD.

So that was on an HTTP server on my network. So anywhere you can host the file (another web server that will serve that file, or (I believe) FTP or NFS will work also). If you point to that kickstart file on another system it should work. It's a little bit annoying, but that's how kickstart is meant to work.

I've tried converting vmdk files as well and it seems hit-or-miss, so I prefer to do a fresh install and set things up. Ansible is nice for that sort of re-configuration (I need to learn more Ansible, it's pretty nice from what little I've managed to play with).

david scialom
November 07, 2017 @ 2:12 PM

Hello MonkadelicD

It is very nice information and a nice procedure. When installing CentOS 7 with your procedure, I was always landing at the dracut rescue shell and the kickstart never kicked in. As you pointed out, "dracut wants to connect to this kickstart file before the network is up", so I figured out how to help launch the kickstart file:

instead of defining the grub.cfg file as:

linux (cd0)/isolinux/vmlinuz inst.ks=http://somewhere.internal/ks.cfg
initrd (cd0)/isolinux/initrd.img
boot

I replaced it with:

linux (cd0)/isolinux/vmlinuz ip= inst.ks=http://somewhere.internal/ks.cfg
initrd (cd0)/isolinux/initrd.img
boot

The additional ip setting must be consistent with the kickstart file line:

network --bootproto=static --device=eth0 --gateway= --ip= --nameserver= --netmask= --noipv6 --activate
network --hostname=centOS7guest

Those IP addresses are an example and must be adapted to your individual case.

Hope those additions can help.



November 08, 2017 @ 6:29 AM

Hi Vincent

I followed the steps but got stuck on the ks script, so I just did the install manually, which seemed fine until reboot, when I had to go into the emergency shell as described. However, when I tried following your instructions I only had sda, sda1 and sda2 in /dev; I did not have any sda3. Therefore I could not type:

mount /dev/sda3 /mnt

I therefore tried changing the os type to centos7 but could not get any response on the serial console. I changed it back to custom and now whenever I start it, I go into the GNU GRUB menu. Any advice on how I can proceed?

Thank you

Vincent Danen
November 08, 2017 @ 5:19 PM

Hey Jamie. That's pretty odd. If it was a CentOS 7 install with the ks script as noted above, with the auto-partition, it should have created /dev/sda3 for the root filesystem (IIRC sda1 would be /boot and sda2 would be swap). You could try mounting /dev/sda2 or /dev/sda1 and see what happens (one will fail if it's swap). Maybe it created a single / partition that contained /boot as well? Hard to tell based on what you described.

November 11, 2017 @ 9:14 AM

Hey Vincent, I tried again and started from scratch, following the instructions exactly except that I put the ks.cfg file locally and used rootpw --plaintext password instead of a hash value.

iohyve install centos-qr CentOS-7-x86_64-Minimal-1611.iso followed by iohyve console centos-qr in a different terminal just gives me a grub menu. Attempting to type 'boot' here returns 'you need to load the kernel first'

ls /iohyve/centos-qr/ returns grub.cfg ks.cfg

Getting centos-qr iohyve properties...
bargs -A_-H_-P
boot 0
con nmdm0
cpu 4
description Wed Nov 8 15:51:59 GMT 2017
install yes
loader grub-bhyve
name centos-qr
os custom
persist 1
ram 6G
size 200G
tap tap0
template NO
vnc NO
vnc_h 600
vnc_ip
vnc_tablet NO
vnc_w 800
vnc_wait NO

Any ideas much appreciated


Vincent Danen
November 11, 2017 @ 11:10 AM

Hey Jamie. Off the top of my head I'm not sure what might be the cause of your problems. I would have to walk through the instructions again myself to see if I come across the same thing (sadly I don't have time for that today). I don't know if you can do it with the ks.cfg locally though, I've not tried that before as I had a web server do that for me. I wonder if it's not finding/reading the local ks.cfg maybe?

I have no other clever thoughts right now. I might be able to scrounge up some time next week to try this again; it's not something you can dig into without doing it over again. =)

November 20, 2017 @ 2:55 AM

Hi Vincent

Thank you very much for your help. I tried again today with a ks.cfg file hosted on a webserver, but still the console takes me to grub menu. Do you know of any logs that will show what the problem might be with the install not happening?


Vincent Danen
November 25, 2017 @ 10:30 AM

Hi Jamie.

Off the top of my head, no. This is all "pre-logging" AFAIK. You checked the web server logs to make sure it was loading the ks.cfg file? That's probably the only logging you're going to get, unfortunately. That's where I would start, make sure it's loading the file. Beyond that, I'm not sure. I've not really had a chance to go through and run through the setup again.

November 27, 2017 @ 3:48 AM

Hi Vincent

I upgraded to FreeNAS 11, which now allows you to do this from the GUI (I was not aware of this, silly me).

Now all good and working ! :)


Vincent Danen
November 27, 2017 @ 5:19 PM

Hey, glad to hear that FreeNAS 11 makes this so much easier! I've not tried it yet as my wife will get upset with me if I break Plex =) I'm not sure I want to try the upgrade yet (perhaps something to do over Christmas when I have time to fix things if they break).

Kelly Hubbard
October 10, 2021 @ 4:10 AM

If the disk you are extending is in a Linux guest OS: once you have extended the disk in VMware, go to the guest OS as root and use fdisk. Do a df to see the device name (something like /dev/sda), then run fdisk /dev/sda; enter p to see current partitions, n to create a new one, accept the default size, and save with w. Once done you can see the new physical disk partition in Linux. You can then use the LVM GUI or manual commands to initialise the partition and extend your file system. Thanks Paul for pointing me in the right direction. Kelly Hubbard
