I posted a contributed OpenVZ OS Template today. The contributed OS Template is Scientific Linux 6 32 bit and it was contributed by Vic from powerpbx.org (firstname.lastname@example.org).
I asked him to share some information about how he created it, and this is what he replied via email:
I have no plans to create an x86_64 version or provide regular updates to the x86 version at this time. The only reason I created the x86 version is because I needed a RHEL (or clone) v6 template for my own use. It is easy enough to update/modify/copy by someone else now that this version is out there.
I created it using this procedure and rsync from VMware to OpenVZ. Then I manually went through all the installed packages and took out as much as I could to get the size down. When in doubt I compared against the installed packages in a CentOS 5 template.
Yum would not remove the kernel package, so I had to do an "rpm -e --nodeps kernel".
In the newly rsync'ed OpenVZ container I created a file called "vz.repo" in /etc/yum.repos.d with the following text:
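The repo file contents were not preserved in the email, but such a vz.repo stanza would have looked roughly like the following. This is only an illustration of mine; in particular the baseurl is a guess, so check the OpenVZ wiki's template-creation pages for the real repository location:

```ini
# /etc/yum.repos.d/vz.repo -- illustrative only; the baseurl is a guess
[vz]
name=OpenVZ template utilities
baseurl=http://download.openvz.org/template/utils/
enabled=1
gpgcheck=0
```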
Then I ran "yum install vzdev vzdummy-apache vzdummy-jre-el5 vzdummy-kernel-el5".
I could not get "vzdummy-glibc" to work. It caused the template to not load on reboot. Someone smarter than me will have to figure that one out. Perhaps vzdummy-glibc needs to be updated for RHEL 6.
Additional things I ran into that appear to be RHEL v6 specific are as follows.
You must comment out "console" in /etc/init/rc.conf and /etc/init/rcS.conf
You must also delete or rename tty.conf and start-ttys.conf.
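For anyone scripting this, the two RHEL 6 specific tweaks above can be sketched as shell commands run against a stopped container's filesystem. This is my own scripting, not from the original email; the CTROOT path and the ".disabled" rename convention are illustrative, so adjust them to your setup:

```shell
# Sketch of the RHEL 6 upstart tweaks, run on the host against the
# container's private area (CTROOT). Adjust CTROOT for your container.
CTROOT=${CTROOT:-/vz/private/101}

if [ -d "$CTROOT/etc/init" ]; then
    # Comment out the "console" stanzas so upstart does not try to
    # grab a console the container does not have.
    sed -i 's/^console/#console/' "$CTROOT/etc/init/rc.conf" \
                                  "$CTROOT/etc/init/rcS.conf"

    # Rename (rather than delete) the tty jobs so upstart does not
    # try to spawn gettys the container cannot provide.
    for f in tty.conf start-ttys.conf; do
        if [ -f "$CTROOT/etc/init/$f" ]; then
            mv "$CTROOT/etc/init/$f" "$CTROOT/etc/init/$f.disabled"
        fi
    done
fi
```

Renaming to ".disabled" keeps the original job files around in case you need to restore them later.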
I noticed Kir's blog post about the updated vzctl today. Cool! Finally I can create Fedora 14 containers... and the container restart mechanism has been fixed up too.
I downloaded the beta OS Template that the OpenVZ Project offers for Fedora 14, created a container, did all of the updates, removed the samba* packages, added a few packages I wanted (mc, screen, links), and modified the httpd.conf so it is more like factory. Then I disabled a few services that aren't really needed... after all, who needs xinetd running when it doesn't have any services configured? Then I stopped the container, cleaned up the container filesystem some, tar.gz'ed it up, and uploaded it as a contrib OS Template.
I did this for both the 32-bit and 64-bit OS Templates. Enjoy!
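On an OpenVZ host, the workflow described above boils down to something like the following commands. The CTID 777, template name, and output filename are placeholders of mine, not the exact values used:

```shell
# Assumes an OpenVZ host; CTID and names are illustrative.
vzctl create 777 --ostemplate fedora-14-x86
vzctl start 777
vzctl exec 777 yum -y update
vzctl exec 777 yum -y remove 'samba*'
vzctl exec 777 yum -y install mc screen links
# ...edit httpd.conf, disable unneeded services such as xinetd...
vzctl stop 777
cd /vz/private/777 && tar -czf /vz/template/cache/fedora-14-x86-custom.tar.gz .
```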
I don't usually repost mailing list messages but just got this one in my inbox from the OpenNode folks. Since I'm a big virtualization geek, I'm sharing. Haven't heard of OpenNode? Here's a brief description before I get to the status update email:
OpenNode is an open source server virtualization solution providing an easy-to-use (CentOS/RHEL-based) bare-metal ISO installer and supporting both OpenVZ container-based virtualization and the emerging KVM full virtualization technology on the same physical host.
So, OpenNode is a lot like Proxmox VE except OpenNode is based on CentOS and uses libvirt, virt-manager, and other Red Hat standard tools.
Just wanted to mention a few news items from the OpenVZ Project.
Updated vzctl - vzctl 3.0.24 has been released. Even though the version number only changed from 3.0.23 to 3.0.24, there are a ton of changes, fixes, and some feature additions. Of special interest is the --swappages option, as well as being able to refer to a container by its name rather than requiring the CTID with vzmigrate. Overall it was a long overdue, much appreciated update.
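As a quick illustration of the name feature (the CTID 101, the name web1, and the swap value here are my own examples, so check the vzctl man page for the exact option semantics on your version):

```shell
# Give container 101 a human-friendly name...
vzctl set 101 --name web1 --save
# ...and give it some swap via the new --swappages option:
vzctl set 101 --swappages 512M --save
# Then refer to the container by name instead of CTID:
vzctl start web1
vzctl enter web1
```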
Updated Official OS Templates - The last wiki notice is dated April 27th but looking today at the dates on the OS Templates they appear to have been updated May 27th. One thing to note is that there are now OS Templates for Ubuntu 10.04 which I'm sure Ubuntu folks will be happy about.
Beta Fedora 13 OS Templates - And speaking of OS Templates, Kir just released Beta OS Templates for Fedora 13. On the day Fedora 13 was released I tried creating my own OS Templates by taking Fedora 12 containers and updating them but ran into a snag. With Fedora 13 a lot of new stuff has been added to the init setup and some of it causes a container to just hang during startup. I was glad to see the beta OS Templates released. I created containers from them, made my own changes, and then uploaded those to the contrib section.
As luck would have it, later in the afternoon the Fedora Project released a whole bunch of updates, and among them was a new initscripts package. I suspected that upgrading would wipe out whatever changes the OpenVZ folks had made to the init setup to make it work in a container, and I was correct: upgrading the initscripts package made the container get stuck in the init process upon container reboot. I ended up filing two bugs: 1566 and 1567. I joyfully await their resolution.
2.6.32 devel kernel - There have been a few releases of the 2.6.32 devel kernel and it appears to be making progress. While there have been a number of OpenVZ devel kernels that died on the vine, 2.6.32 should be different, mainly because it is the kernel in the upcoming Red Hat Enterprise Linux 6, the upcoming Debian 6, and in Ubuntu 10.04. I have no firm date for when it'll be marked stable; my guess would be sometime after RHEL 6 is released.
***Please note that any URLs mentioned (and the information they contain) in this posting are time sensitive and will surely be outdated not long after posting.
Shorewall and Proxmox VE Cluster Configuration
This is a follow-up article describing how to use Proxmox VE and Shorewall together. This article focuses on using Shorewall within your Proxmox cluster. If you have not read the first article I recommend that you do so; it will aid your understanding of what is going on in this one.
Network Layout and Shorewall Configuration
We are going to be using a bridging configuration. This is what Proxmox VE uses by default. Bridging allows for easy migration of hosts without having to reconfigure the firewall each time a machine is migrated.
Proxmox VE does not come with a firewall by default. There are several solutions to this problem, but the most flexible and robust is integrating the Shorewall firewall. This document assumes a basic knowledge of Shorewall and will not cover all of its capabilities, but it will give you a good working model to get you started. For more advanced topics, check out the Shorewall documentation.
Shorewall will have 3 zones: 1) the fw zone, which is the Proxmox host, 2) the net zone, which is the Internet, and 3) the dmz zone, which is where the virtual machines will reside. The hardware has just one network interface card; vmbr0 is just a bridge interface.
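Under those assumptions, the zone layout can be expressed in Shorewall's configuration files roughly as follows. This is a sketch of mine, not the article's exact configuration: the eth0 physical port, the veth device naming, and the policies are illustrative, so adapt them to your cluster:

```
# /etc/shorewall/zones
#ZONE   TYPE
fw      firewall
net     ipv4
dmz     ipv4

# /etc/shorewall/interfaces -- vmbr0 is declared as a bridge
#ZONE   INTERFACE   BROADCAST   OPTIONS
-       vmbr0       detect      bridge

# /etc/shorewall/hosts -- carve the bridge ports into zones: the
# physical port faces the Internet, the veth devices are the VMs
#ZONE   HOST(S)
net     vmbr0:eth0
dmz     vmbr0:veth+

# /etc/shorewall/policy -- a restrictive starting point
#SOURCE DEST    POLICY
fw      net     ACCEPT
dmz     net     ACCEPT
net     all     DROP
all     all     REJECT
```

Leaving the interface's zone column as "-" and assigning zones per bridge port in the hosts file is what lets a single bridge carry both the net and dmz zones.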
Here is the video of my presentation from the Utah Open Source Conference 2009 entitled, "Introduction to OS Virtualization, Containers and OpenVZ". Warren Sanders manned the camera. I used Kdenlive to edit it and create the title screen. Attached below you can find PDFs of my slides, the OpenVZ brochure we were handing out, as well as a white paper from the Linux Foundation about who writes the Linux kernel.
For those interested in a much higher quality Ogg Theora version, you can find that here:
(right-click, save link as...)
I've been aware of Proxmox VE for a couple of years now. I've installed it a few times and tested it out. I have recommended it to others and know a few local people using it in production (at MSU-Bozeman and Rocky Mountain College for example). Since I'm involved in the OpenVZ community I've also noticed some of the contributions to OpenVZ that have come from Proxmox VE (vzdump for example) and have run into Martin Maurer in the comments section of this site. I asked him if he would be interested in doing an interview and he accepted.
What is Proxmox VE?
Proxmox VE is a very lightweight Debian-based distribution that includes a kernel with support for both KVM and OpenVZ. This means you get the best of both virtualization worlds... containers (OS Virtualization) and fully-virtualized machines (Machine Virtualization). Proxmox VE also includes a very powerful yet easy to use web-based management system with clustering features. Boot the Proxmox VE install media, answer a few simple questions, and within 10 minutes you have a very powerful virtualization platform you can manage from a web browser. Install it on one or more additional machines that are networked together and use Proxmox VE's cluster management tool to create a virtualization cluster that allows for centralized management, automated backups, ISO media and template syncing, as well as virtual machine migration features. Proxmox VE really is a time-saving turnkey solution... and it is freely available under a GPL license.
Andrew Niemantsverdriet from Rocky Mountain College gave a presentation entitled, "Proxmox to Virtualize Infrastructure" at Linuxfest Northwest 2009 in Bellingham, WA. A PDF of his slides has been added as an attachment.
To view the video, click on the full story or the thumbnail image on the right.