If you didn't notice, Fedora 22 was released today. Today I refreshed the Fedora 22 OS Template I made for OpenVZ and uploaded it to contrib. For fun, I thought I'd build a MATE Desktop GUI container right in front of your eyes... and then connect to it via x2go. Enjoy!
For those with iFrame issues, here's a direct link to the webm video:
You can pretty much use the same recipe for other desktop environments. The only thing you want to avoid is desktop environments that require accelerated 3D, because those won't work over x2go. Which desktops use that? GNOME and Plasma 5... Cinnamon probably... and if you were on Ubuntu, Unity. XFCE, MATE, OpenBox, LXQT, etc. work fine... although I haven't tried them all.
Since I'm such a big container fan (I've been using them on Linux since 2005) and I recently blogged about Docker, LXC, and OpenVZ... how could I pass up posting this? Some Canonical guys gave a presentation at the recent OpenStack Summit on "LXD vs. KVM". What is LXD? It is basically a management service for LXC that supposedly adds a lot of the features LXC was missing... and is much easier to use. For a couple of years now Canonical has shown an interest in LXC and has supposedly been doing a lot of development work around it. I wonder what specifically? They almost seem like the only company interested in LXC... or at least they are putting forth a publicly noticeable effort around it.
Why Should You Care?
If Canonical can actually deliver on their LXD roadmap it is possible that it will be a suitable substitute for OpenVZ. The main "problem" with OpenVZ is that it is not in the mainline kernel, whereas LXC is. In practice you have to purposefully make an OpenVZ host (currently recommended on RHEL6 or clone) but with LXC/LXD any contemporary Linux system should be able to do full-distro containers... aka containers everywhere for everyone.
How About a Roadmap
Where is LXD now? Well, so far it seems to be mostly a technology preview available in Ubuntu 15.04, with the target "usable and production ready" release slated for the next Ubuntu LTS release (16.04)... which, if you weren't familiar with their numbering scheme, is April 2016.
That's about a year away, right... so what do they still have left to do? If you go to about 23:30 in the video you'll get to the "Roadmap" section. They have work to do on storage, networking, resource management and usage reporting, and live migration. A bit of that falls within the OpenStack context... integrating with various OpenStack components so containers will be more in parity with VMs for OpenStack users... but still, that's quite a bit of work.
The main thing I care about absolutely being there is isolation and resource management which are really the killer features of OpenVZ. So far as I can tell, LXD does not offer read-only base images and layering like Docker... so that would be an area for improvement I would suggest. BTW they are using CRIU for checkpointing and live migration... thanks Parallels/OpenVZ!
Certainly LXD won't really make it no matter how good it is until it is available in more Linux distributions than just Ubuntu. In a video interview a while back (which I don't have the link handy for at the moment) Mark Shuttleworth stated that he hopes and expects to see LXD in other distributions. One of the first distros I hope to see with LXD is Fedora and that's the reason I tagged this post appropriately.
Broadening the Ecosystem
Historically I've been a bit of an anti-Canonical person but thinking more about it recently and taking the emotion out of it... I do wish Ubuntu success because we definitely need more FLOSS companies doing well financially in the market... and I think Red Hat (and OpenVZ) will have an incentive to do better. Competition is good, right? Anyway, enjoy the video. BTW, everything they tout as a benefit of LXD over KVM (density, speed of startup, scalability, etc) is also true of OpenVZ for almost a decade now.
For those with iFrame issues, here's the YouTube link: LXD vs. KVM
Containers Should Contain
Let's face it, Docker (in its current form) sucks. Why? Well, ok... Docker doesn't totally suck... because it is for applications and not a full system... but if a container doesn't contain, it isn't a container. That's just how language works. If you have an airplane that doesn't fly, it isn't an airplane, right? Docker should really say it is an "Uncontainer" or "Uncontained containers"... or better yet, just use a different word. What word? I'm not sure. Do you have any suggestions? (Email me: email@example.com)
What is containment? For me it is really isolation and resource control. If a container doesn't do that well, call it something else. OpenVZ is a container. No, really. It contains. OpenVZ didn't start life using the word container. On day one they were calling them "Virtual Environments" (VEs). Then a year or two later they decided "Virtual Private Server" (VPS) was the preferred term. Some time after switching to VPS, VPS became quite ambiguous and used by hosting companies using hardware virtualization backends like Xen and VMware (KVM wasn't born yet or was still a baby). Then OpenVZ finally settled on the word "container".
If you want a fairly good history of the birth and growth of OpenVZ over the years, see Kir's recent presentation.
Hopefully LXD will live up to "container" but we'll have to wait and see.
I've been busy lately trying to learn more about Docker. I'm not much of a fan of "application containers" and still prefer a full-blown "distro container" like that provided by LXC (good) or OpenVZ (better)... but I have to admit that the disk image / layering provided by Docker is really the feature everyone loves... which provides almost instantaneous container creation and start-up. If OpenVZ had that, it would be even more awesome.
OpenVZ has certainly done a lot of development over the past couple of years. They realized that simfs just wasn't cutting it and introduced ploop storage... and then made that the default. ploop is great. It provides for instant snapshots, which is really handy for doing zero-downtime backups. I wonder how ploop differs these days from qcow2? I wonder how hard it would be to add disk-layering features like Docker's to OpenVZ with ploop snapshots?
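For the curious, the zero-downtime backup workflow with ploop snapshots looks roughly like this (a sketch from memory, run as root on the OpenVZ host; container ID 101 and the paths are just examples):

```shell
# take an instant snapshot of the running container's ploop image;
# the container keeps running the whole time
vzctl snapshot 101 --name backup-$(date +%F)

# copy the container's private area (now backed by the snapshot)
# somewhere safe while the container continues to serve
rsync -a /vz/private/101/ /backups/101/

# list the snapshots to find the uuid, then delete it when done
vzctl snapshot-list 101
vzctl snapshot-delete 101 --id <uuid-from-the-list>
```

The `<uuid-from-the-list>` placeholder is intentional; vzctl assigns the snapshot uuid and you read it back from the list output.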
Application Containers In the Beginning
Ok, so Docker has taken off but I really can't figure out why. I mean, Red Hat introduced OpenShift some time ago. First it was a service, then a product, and finally an open source product that you can deploy yourself if you don't need support. A couple of years ago I attended an OpenShift presentation, and at that time it provided "Gears", which were basically chrooted processes with a custom SELinux policy applied... and cgroup resource management? Something like that. While (non-OpenVZ) containers don't contain, with the SELinux added, OpenShift gears seemed to be secure enough.
OpenShift offered easy deployment via some git-based scheme (if I remember correctly) and a bunch of pre-packaged stacks, frameworks, and applications called "cartridges", which I see as functionally equivalent to the Docker registry. It didn't have the disk image layering and instant startup of Docker, so I guess that was a minus.
These days I guess OpenShift is going to or has shifted to using Docker.
Docker Crawls Before It Can Walk
Docker started off using aufs, but that is an out-of-tree filesystem that isn't going to make it into mainline. Luckily Red Hat helped by adapting Docker to use device mapper-based container storage... and then btrfs-based container storage was added. What you get by default seems to depend on which distro you install Docker on. Which of the three is performant and which one(s) suck... again, that depends on who you talk to and what the host distro is.
Docker started off using LXC. I'm not sure what that means exactly. We all know that LXC is "LinuX Containers", but LXC seems to vary greatly depending on what kernel you are running and what distro you are using... and the state of the LXC userland packages. Docker wised up there and decided to take more control (and provide more consistency) by creating their own libcontainer.
The default networking of Docker containers seems a bit sloppy. A container gets a private network address (either via DHCP or manually assigned, you pick) and then if you want to expose a service to the outside world you have to map that to a port on the host. That means if you want to run a lot of the same service... you'll be doing so mostly on non-standard ports... or end up setting up a more advanced solution like a load balancer and/or a reverse proxy.
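To make that port-mapping dance concrete, here's roughly what it looks like with the docker CLI (the image name and host ports are just examples):

```shell
# run two copies of the same web service; each container gets a
# private IP, so to reach them from outside you map container
# port 80 to a different host port for each one
docker run -d --name web1 -p 8080:80 nginx
docker run -d --name web2 -p 8081:80 nginx

# both serve HTTP on port 80 internally, but from the outside
# world they live on the host's non-standard ports 8080 and 8081
curl http://localhost:8080/
curl http://localhost:8081/
```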
Want to run more than one application / service inside of your Docker container? Good luck. Docker was really designed for a single application and as a result a Docker container doesn't have an init system of its own. Yeah, there are various solutions to this. Write some shell scripts that start up everything you want... which is basically creating your own ghetto init system. That seems so backwards considering the gains that have been made in recent years with the switch to systemd... but people are doing it. There is something called supervisor which I think is a slight step up from a shell script but I don't know much about it. I guess there are also a few other solutions from third-parties.
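For reference, a minimal supervisord config for babysitting two services in one container might look something like this (the program names and paths are illustrative, not from any particular image):

```ini
; /etc/supervisord.conf -- run supervisord as the container's PID 1
; so it can start and watch more than one service
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:httpd]
command=/usr/sbin/httpd -DFOREGROUND
```

Each service has to run in the foreground (hence the -D / -DFOREGROUND flags) so supervisord can track it.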
Due to the complexity of the networking and the single-app design... and given the fact that most web-services these days are really a combination of services that are interconnected, a single Docker container won't get you much. You need to make two or three or more and then link them together. Links can be private between the containers but don't forget to expose to the host the port(s) you need to get your data to the outside world.
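With the Docker tooling of the day, wiring a web container to a database container goes roughly like this (the image names and link alias are made up for illustration):

```shell
# start a database container; no ports exposed to the outside world
docker run -d --name db postgres

# start the web container, linked to the database; only the web
# container's port 80 is exposed on the host
docker run -d --name web --link db:db -p 80:80 mywebapp

# inside "web", the link shows up as an /etc/hosts entry for "db"
# plus DB_PORT_* environment variables pointing at the database
```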
While there are ways (hacks?) that make Docker do persistent data (like mapping one or more directories as "volumes" into the container or doing a "commit"), Docker really seems more geared toward non-persistent or stateless use.
Because of all of these complexities, which I really see as the result of an over-simplified Docker design, there are a ton of third-party solutions. Docker has been trying to solve some of these things themselves too. Some of Docker's newer stuff has been seen by some (for example CoreOS) as a hijacking of the original platform, and as a result... additional, currently incompatible container formats and tools have been created. There seems to be a new third-party Docker problem-solver start-up appearing weekly. I mean, there are a ton of add-ons... and not many of them are designed to work together. It's kind of like Christian denominations... they mostly believe the same stuff but there are some important things they disagree on. :)
Application Containers Are Real
Ok, so I've vented a little about Docker, but I will admit that application containers are useful to certain people... those into "livestock" virtualization rather than "pet" virtualization, aka "fleet computing". Those are the folks running big web services that need dozens, hundreds, or thousands of instances of the same thing serving a large number of clients. I'm just not one of those folks, so I prefer the more traditional full-distro style of containers provided by OpenVZ.
Working On Fedora 22
I've already blogged about working on my own Fedora 22 remix but I've also made a Fedora 22 OpenVZ OS Template that I've submitted to contrib. Yeah, it is pre-release but I'll update it over time... and Fedora 22 is slated for release next week unless there are additional delays.
Like so many OpenVZ OS Templates, my contributed Fedora 22 OS Template doesn't have a lot of software installed and is mainly for use as a server. For my own use, though, I've added to that with the MATE desktop, x2goserver, Firefox, LibreOffice, GIMP, Dia, Inkscape, Scribus, etc. It makes for a pretty handy yet light desktop environment. It was a little tricky to build, because adding any desktop environment will drag in NetworkManager, which will overpower ye 'ole network service and break networking in the container upon next container start. So while building it, "vzctl enter" access from the OpenVZ host node was required. With a handful of systemctl disable / mask commands it was in working order again. Don't forget to change the default target back to multi-user from graphical... and yeah, you can turn off the display manager because you don't need it since x2go is the access method of choice.
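The cleanup I'm describing boils down to something like the following, entered from the host node (CTID 101 and the lightdm display manager are just example names; use whatever your container and desktop actually have):

```shell
# from the OpenVZ host: get a shell inside the container even
# though its networking is broken
vzctl enter 101

# inside the container: get NetworkManager out of the way and
# bring back the classic network service
systemctl disable NetworkManager
systemctl mask NetworkManager
systemctl enable network

# default back to a non-graphical target, and skip the display
# manager since x2go is the access method
systemctl set-default multi-user.target
systemctl disable lightdm
```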
BTW, there was a libssh update that broke x2go but they should have that fixed RSN.
Multi-purpose OS Templates
I also decided to play with LXC some on my Fedora 22 physical desktop. I found a libvirt-related recipe for LXC on Fedora. Even though it was a little dated it was very helpful.
The yum-install-in-chroot method of building a container filesystem really didn't work for me. I guess I just didn't have a complete enough package list, or maybe a few things have changed since Fedora 20. So I decided to re-purpose my Fedora 22 OpenVZ OS Template. I extracted it to a directory and then edited a few network-related files (/etc/sysconfig/network, removed /etc/sysconfig/network-scripts/ifcfg-venet*, and added an ifcfg-eth0 file). I also chroot'ed into the directory, set a root password, and created a user account that I added to the wheel group for sudo access.
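Condensed into commands, the repurposing went something like this (the paths, tarball name, and user name are placeholders from memory):

```shell
# unpack the OpenVZ OS Template tarball into a directory for LXC use
mkdir -p /srv/lxc/f22
tar xzf fedora-22-x86_64.tar.gz -C /srv/lxc/f22

# swap the OpenVZ venet networking bits for a plain eth0 config
rm -f /srv/lxc/f22/etc/sysconfig/network-scripts/ifcfg-venet*
vi /srv/lxc/f22/etc/sysconfig/network
vi /srv/lxc/f22/etc/sysconfig/network-scripts/ifcfg-eth0

# set a root password and create a sudo-capable user
chroot /srv/lxc/f22 /bin/bash
passwd
useradd -G wheel someuser
passwd someuser
exit
```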
After a minute or so of minor modifications (and having left the chroot'ed environment) I ran the virt-install command to create a libvirt-managed LXC container using the new Fedora 22 directory / filesystem... and bingo bango, that worked. I also added some GUI stuff, and just like with OpenVZ I had to disable NetworkManager or it broke networking in the container. Anyway... running an LXC container is like OpenVZ on a mainline kernel... just without all of the resource management and working security. Baby steps.
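The virt-install invocation for a directory-backed libvirt LXC container looks roughly like this (the name, memory size, and directory are illustrative; double-check the options against the virt-install man page):

```shell
# create a libvirt-managed LXC container from the unpacked
# directory tree, using the local libvirt LXC driver
virt-install --connect lxc:/// \
  --name f22-ct \
  --memory 512 \
  --filesystem /srv/lxc/f22,/ \
  --init /sbin/init \
  --network network=default
```

The --filesystem option maps the host directory to the container's root, and --init tells libvirt what to run as PID 1 inside it.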
Containers Taken Too Far?
While hunting down some videos on Docker I ran into RancherVM. What is that? To quote from their description:
RancherVM is a new open source project from Rancher Labs that makes it simple to run KVM inside a Docker container.
What the heck? Run KVM VMs inside of Docker containers? Why would anyone want to do that? Well, so you can embed KVM VM disk images inside of Docker images... and deploy a KVM VM (almost) as easily as a Docker container. That kind of makes my head hurt just thinking about running a Windows 7 Desktop inside of a Docker container... but someone out there is doing that. Yikes!
I would consider myself an unlikely Linux ambassador. Not that I hide any Linux use or fascination, but I am not out there on a mission to encourage or convert people to Linux. Mostly it would be an occasional conversation about me using Linux for something, or a conversation where I am explaining that there are more operating systems than just Windows or OS X. Most of the time my Linux conversations are with those who already have some connection to Linux. To be honest, I have probably been a much bigger "Ambassador" for LibreOffice than for Linux; and I am not an uber LibreOffice or ODF fanboy, but one who believes that for most basic users it will work just fine without all the Microsoft expense. All of that has taken a slight detour within the past couple of weeks.
At the end of April I was finally able to do something "geeky" by attending LinuxFest Northwest (check box in the 'ol geek bucket list). It's not that the sessions were so enlightening or life-altering that I have been sending up Linux smoke signals from the Bridger mountains or anything like that, although they were informative and enjoyable. Nor was I overpowered and brainwashed by the four guys I carpooled with for the 14-ish hour drives. We did have some good conversations, some regarding Linux and some regarding Open Source issues, and to say none of it has had an influence on any of my views would be untrue. I did come back from LinuxFest Northwest with a renewed interest in Linux and have been using Fedora almost exclusively outside of work. I also returned with what I consider some cool SWAG. Now, the SWAG is the most important thing of all, right? I came back with a nice black Fedora t-shirt that I thought fellow carpooler dowdle was going to mug me for, a beanie with GNU printed on it from the Free Software Foundation that I purchased, and a hippie-looking acid-dyed t-shirt promoting Linux and LinuxFest Northwest 2015, which I also purchased. I have been intentionally wearing them when I can.
My first "Ambassador" moment came when my oldest boy asked me about the GNU printed on my beanie. This gave me an opportunity, as well as a challenge, to explain to an almost-9-year-old what GNU meant within the context of the Free Software Foundation. This included a discussion of locked-down proprietary software and its negatives, as well as the pros of software that is open and free to improve or be fixed. In addition to my oldest son, two of my other boys were in the car with me, one of whom was listening to the conversation intently as well.
My second "Ambassador" moment came about on a quick trip to Walmart. I was once again wearing my GNU beanie. I was in the produce area and walked near a Latino family whose dad looked in my direction. Shortly thereafter his family passed by me, and as he did he looked at me, smiled, said "GNU huh?", and kept walking. Now, it is certainly possible he thought my beanie was promoting the wildebeest, but I like to think he knew it was a reference to Free Software.
The third and most recent "Ambassador" moment once again took place at my local Walmart store, a little after eleven at night. This time I was wearing my "Peace, Love and Linux" LinuxFest Northwest t-shirt. I had just finished checking out at one of the self-checkout stations and ended up having a conversation with a gentleman named Keith (we exchanged names as we parted ways). Keith saw my shirt and asked me if I used Linux, which turned into a nice conversation. Keith knew of Linux and knew of OpenOffice, but that was probably all. It's even possible Keith had an experience long ago, or perhaps he has just read about Linux and OpenOffice, but beyond that I would say Keith was someone who probably had some degree of interest in Linux. He asked me the usual questions: how easy or hard is it to install these days, and where can a person get Linux... did you have to look on eBay? I told him he could just download it from a distribution's website. I told him briefly about the DistroWatch website, where a person can find links to the actual distributions' websites. I told him most users probably used Ubuntu or some derivative of Ubuntu, or they probably used Fedora, and that either one should install and work just fine on most hardware. He asked about OpenOffice, which led to a discussion of OpenOffice and the origins of LibreOffice, which one was probably the best to use, and how most distributions most likely include it by default. I even explained how LibreOffice was also available for Windows and OS X. All in all, it was an enjoyable conversation that lasted several minutes and ended in a handshake and the exchanging of names. Keith also verified a couple of times the names of the two distributions (Ubuntu & Fedora) I had recommended. Now, I have no idea if Keith will actually try installing Linux or try using LibreOffice. Nor do I know if he will have a good experience or a bad experience if he does decide to try using Open Source software.
What I do know is that because I simply wore an article of clothing promoting Linux, Keith saw an opportunity to express an interest in something to someone that might be able to answer questions and provide some first hand feedback.
I have never really found a way to "get involved" with a project before, as I am not a coder, have no deep comprehension of the inner workings of Linux, nor do I feel I would make a good candidate for documentation writing. This wasn't a bad way to get involved, and to be honest it was a lot easier and more enjoyable than attempting to submit a bug report.
Here is a preview video of the upcoming Fedora 22 release (running in a KVM VM). This is my personal remix (with non-Fedora-provided rpmfusion-free packages, google-chrome-stable, and flash-plugin added) and I haven't bothered with the branding nor customization at all... and I don't really publicly distribute it... but I'd be happy to share my kickstart file if anyone wants it.
In the video I show the install process, and then show all of the desktop environments that are pre-installed which include... GNOME 3.16.1, Plasma 5.3.0, LXQT 0.10, MATE 1.10, XFCE 4.12.1, and Cinnamon 2.4.8. Enjoy!
Please Note: The official Fedora live media always automatically logs the liveuser in... but on my personal remix I haven't bothered to set that up, so I have to type in "liveuser". Again, the official Fedora media is NOT like that.
I have run across a few people who are perplexed by firewalld, and I must admit that I was for a while, until I did some reading and experimentation. What is firewalld? It is basically a replacement for the ancient iptables service on RHEL and Fedora systems. So many of us were just used to manually editing /etc/sysconfig/iptables and then copying that file from system to system as desired, that the switch to firewalld was a bit scary. I mean, who wants to learn something new, right?
Another thing that is scary about firewalld is the complexity of the rules it shows when you dump the underlying iptables rules.
While the configuration, tools, and output have dramatically changed... firewalld really does make things easier and more manageable. Really. One of the problems with Linux across distros is that there really hasn't been a standardized way to handle the host-based firewall. Each distro seems to have its own way of doing it... and popular packages like Shorewall have been around for years. I think firewalld tries for a happy medium somewhere between simple and complex, and a standard that distros can choose to adopt.
Anyway, here are some basics (as root or via sudo) but if you want more be sure and check out the documentation:
Main documentation: www.firewalld.org/documentation/
Fedora Documentation: fedoraproject.org/wiki/FirewallD
RHEL Documentation: access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html
firewall-cmd --list-all (shows human-readable firewall settings)
firewall-cmd --add-service=ssh --permanent (opens up port 22, the ssh service, and saves it to the config)
firewall-cmd --add-service=http --permanent (opens up port 80, http, and saves it to the config)
firewall-cmd --add-service=https --permanent (opens up port 443, https, and saves it to the config)
firewall-cmd --remove-service=https --permanent (closes port 443 and saves the change to the config)
If you don't want your changes saved, just leave off --permanent.
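One gotcha worth knowing: --permanent by itself only writes the config; it doesn't touch the running firewall until a reload. A common pattern is to make the change twice, once for the runtime and once for the permanent config:

```shell
# change the running firewall right now (lost on reload/reboot)
firewall-cmd --add-service=http

# ...and also write it to the permanent config so it survives
firewall-cmd --add-service=http --permanent
```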
Want to open arbitrary ports for some service (like voxelands-server for example)? That is easy too:
firewall-cmd --add-port=30000/tcp --permanent
Want UDP? Ok:
firewall-cmd --add-port=30000/udp --permanent
After your changes it doesn't hurt to verify again with firewall-cmd --list-all.
Want to manage firewalld via a config management system? There is a formula for SaltStack here and supposedly Ansible also supports firewalld.
Want to edit a file instead of running firewall-cmd? That's possible too. firewalld stores everything somewhere under /etc/firewalld/. In particular, the changes listed above would get written to /etc/firewalld/zones/public.xml. Yeah, it's an "xml" file, but make a change or two via firewall-cmd and see what it adds or removes, and you'll see that it is very easy to monkey-see-monkey-do for those who want to edit the file directly. After updating or replacing any of firewalld's configuration files, you want to make firewalld aware of the change with:
systemctl reload firewalld
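For reference, after the additions above, /etc/firewalld/zones/public.xml ends up looking something like this (trimmed; the description text on your system will differ):

```xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas.</description>
  <service name="ssh"/>
  <service name="http"/>
  <port protocol="tcp" port="30000"/>
  <port protocol="udp" port="30000"/>
</zone>
```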
If you are brave enough to manually edit the config just be aware that you are responsible for your typos.
I've only touched the tip of the iceberg for the most common stuff. Need more info? Yeah, there is a ton of documentation, including a couple of man pages.
I've been following the development of Fedora 21 since a little before the alpha release. Getting my MontanaLinux remix to build was actually quite easy, and the fact that rpmfusion has a rawhide repo means all of the multimedia codecs / applications were good to go as well. I've done a few dozen installs as KVM virtual machines and thought it was time to try physical hardware.
First I installed it on my Acer netbook, which is 32-bit only and about 5 years old now. The battery in it is shot, and smartd has been telling me for over a year that the hard drive has been accumulating more and more bad sectors... which is a fairly good indicator that the hard drive is going bad. Doing the install from a LiveUSB took a while because the installer was finding some of the bad spots on the drive. For whatever reason, during the install the progress bar immediately said 100% and I knew that was wrong... so I kept switching over to a text console to periodically do a df -h to see how much had been written to the hard drive. Oddly, whenever I'd switch over to the text console, the green illuminated power button would go amber and the screen would go blank... which to me meant it was suspending to RAM or something. At that point I'd have to hit a few keys on the keyboard and it would wake back up. It did this at least a dozen times during the install. I really wasn't expecting a good install given the flaws in my hardware and how they were manifesting themselves during the install process... but being patient paid off... the install was actually successful and seems to be working just fine post-install.
Installing it on my Optiplex 9010 desktop at work was also more complicated than I was expecting. For whatever reason (maybe a BIOS setting?) I could NOT get my machine to display the bootloader menu from a LiveUSB, although other Dell models at work seemed to work fine. So I burned a DVD with the burner in the Optiplex 9010. Oddly, the same drive that wrote the DVD seems unable to read it about 19 out of 20 tries. That meant that I couldn't get it to boot from the DVD either. I finally decided to try something different... I got an external / USB optical drive, plugged it into the USB port, and was able to get it to successfully read the DVD and make the bootloader appear. With a functioning bootloader I was able to boot the DVD, the live system worked great... and the installer went flawlessly.
Fedora 21 pre-beta actually seems quite stable. As you may recall I have all of the desktop environments installed as part of my remix so I can check them all out... but I primarily use KDE. On both of my machines I have /home as a separate partition so my personal data is retained across installs. I also backup /etc and /root to /home/backups/ so any of my previous configurations (stuff like ssh keys) can be retrieved and used if desired.
I picked lightdm as the default login manager. In the past I've mainly used kdm but KDE is in the process of transitioning to sddm which seems a bit buggy still.
One of the main features in Fedora 21 I'm wanting to play with is actually provided by the rpmfusion repos... ffmpeg 2.3.3. I'm wanting to do some testing with the newer ffmpeg, which does a reasonable job at webm encoding with vp9 and opus. I'd also like to try out GNOME 3 under the Wayland display server... which is supposedly working fairly well in Fedora 21... but I haven't tried it yet.
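The sort of encode I have in mind looks like this (the bitrates and filenames are just placeholders):

```shell
# VP9 video + Opus audio muxed into a webm container
ffmpeg -i input.mp4 \
  -c:v libvpx-vp9 -b:v 1M \
  -c:a libopus -b:a 96k \
  output.webm
```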
One weird glitch I ran into was with the Google-provided google-chrome-stable package. I'm not much of a Google Chrome user but I do occasionally use it for (legacy) sites that require Adobe Flash. I use Firefox the vast majority of the time... but I've decided to no longer install the Adobe provided flash-plugin package (at version 11.x). As you probably know Google has taken over maintenance of newer Flash versions (currently 15.x) on Linux and include it as part of Google Chrome. As a result, whenever there is a Flash update from Adobe, there is a Google Chrome update that soon follows. Anyway, very early in the Fedora 21 development cycle (pre-alpha), the Google Chrome package refused to install because Fedora 21 had a much newer version of some library (I don't recall which one) and it wanted the older version. A few Google Chrome package updates later... and it is happy with regards to dependencies... but installing it with rpm... it gets stuck on the post-install and just sits there. I had to ^c rpm (which you generally don't want to do) because it wasn't going to finish... and just to be safe I did an rpm --rebuilddb and everything seems fine. The google-chrome-stable package verifies just fine (rpm -V google-chrome-stable) and the package works as expected.
Overall everything I've tried works fine. I like to get started with new Fedora releases as early as possible in the development cycle so I can help report any bugs I find (in Fedora provided packages) and be up-to-speed with all of the new features on release day so I can deploy to other machines immediately. I've been doing it that way for several releases now. I do really appreciate all of the work the Fedora developers put into each release.
I ran across this on Monday night. Anyone else watch Major Crimes? Enjoy!
I just accidentally discovered a feature I didn't even know existed. What feature? I'll call it the Firefox Resolution Tester feature, although I'm sure that is NOT the real name of it. I don't know how long it has been a feature of Firefox... maybe for a long time... but like I said... I just found it in Firefox 30. How do you access it? Hit CONTROL-SHIFT-m. That's it.
I accidentally discovered it when I wasn't paying attention to which application window had focus. I thought it was konsole (KDE's GUI terminal). CONTROL-SHIFT-m in konsole toggles the menu on and off. In Firefox, it takes the current web page you are viewing and puts a black border around it, with a control menu at the top left of that black border. The control menu allows you to pick from several pre-defined resolutions, or even add additional presets if desired. Picking a different resolution resizes the view of the page (and increases the black border around it accordingly) to the desired resolution. It also has a screenshot feature (saves to your default download directory and auto-names images something like "Screen Shot 2014-06-01 at 07.42.29.png"). You can also rotate the resolution to simulate a mobile device. It has a "Simulate Touch Event" button, but I'm not sure what that does. Anyone?
At work they recently licensed a commercial web content management system that primarily targets larger educational institutions -- OmniUpdate Campus. The web developers at work (of which I am not one) have created a nice responsive theme that everyone can use for their departmental websites, and it works great. Don't have a responsive site handy? You can try this temporary testing one I made in OmniUpdate. That's just a shell, but it'll show you responsiveness.
Anyway, I kind of got off track. Yeah, Firefox. Try CONTROL-SHIFT-m and enjoy. Can anyone tell me what version of Firefox first included this feature?
MontanaLinux: Please tell me about yourself... as much as you feel comfortable with... as open or as closed as you want to be... family, education, work, hobbies, religion, volunteering, diet, music tv / movies, etc.
Máirín Duffy: Let's see if I can cover all of those in one sentence. I'm a married vegetarian Roman Catholic mom and RPI alumna living in Boston who is happily employed at Red Hat as an interaction designer and occasionally teaches kids how to use FLOSS creative software, and additionally am a big fan of Zooey Deschanel (so I like New Girl and She & Him.)
ML: What software-based graphics tools have you worked with in the past and what currently are your favorite applications?
MD: My first digital painting program was the Smurfs Paint 'n Play for the Coleco ADAM. When we finally got a PC with a VGA card, I used Deluxe Paint II and the Disney Animation Studio painting programs. I used Photoshop and Gimp when I was in high school and was introduced to the Macromedia tools in college. My tools of choice in college were Macromedia Fireworks for almost everything, and Adobe Illustrator, Macromedia Director, and Macromedia Flash for everything else. I followed sodipodi and I switched over 100% to FLOSS tools around Inkscape 0.39.
At this point I've been using a fully FLOSS creative work flow for around 7 years — starting with Inkscape and Gimp on Fedora and branching out to Scribus, MyPaint, Krita, Blender, Darktable, etc. as new applications became available. Inkscape is by far my favorite graphics tool and I use it for almost everything.
ML: Are there any hardware accessories that you like to use in your daily creative work? ...and if so, please tell me about them.
MD: I have a Lenovo x220T which is a laptop with a swivel-screen that has a built-in Wacom tablet and stylus. I use it often when I need to do illustration work; it's good for sketching, too.
ML: I believe you have done some FOSS training sessions at a few educational institutions. Please tell me about those experiences. Do you hope to do more of them in the future?
MD: I've done a lot of sessions, but the two longest ones were probably the Inkscape class I did at a middle school near Red Hat's Boston-area office (10 weeks) and a Gimp and Inkscape class I did with the Girl Scouts in Boston (8 weeks.) (You can read more about each respectively at http://opensource.com/education/10/4/introducing-open-source-middle-school and http://blog.linuxgrrl.com/category/girl-scouts-class/)
It is a lot of work to put together classes like that, but working with kids is really fun. The thing about these classes is that the kids get to keep the software. For example, I found out that a student I taught in one of the classes still uses Inkscape at home and entered an art contest with a design she made in it well after the class was over. You know, when I was a similar age to those kids, I was a student in an outreach program at the school that later became my alma mater (RPI.) We were taught character animation in Macromedia Director. Not only could I not go home after the class was over to practice and build on my skills because I didn't have the software — it was quite expensive back then — but by the time I entered college Macromedia Director was basically sunset as a product in favor of Flash. Is there anything today that can even open up *.dcr files from Director? My Director projects are probably bitrotted forever, and I'm not that old.
I think we folks in the FLOSS world take it for granted that there are these amazing, free tools using open standards right at our fingertips; I truly believe that the more we can share them and teach them to others, the better the world will be.
ML: What FOSS / Linux conferences have you participated in? Have you given any presentations and if so, on what topics?
MD: I have; probably too many to list them all at this point. I was just at Grace Hopper a couple of weeks ago and participated in the Open Source Day Systers Mailman 3 Hackfest (got a few commits in to Mailman 3 too!) Last year, I gave a talk at the Red Hat Summit OpenSource.com panel about FLOSS creative software. I keynoted at Software Freedom Day Boston a couple years ago or so about attracting more designers to FLOSS. I gave a talk at the Linux Plumbers' Conference a couple years ago about storage technologies and UX (related to some of the Anaconda design work I did.) I wrote a paper about designing for free software as part of an open source workspace at the ACM CHI conference in 2010 (http://blog.linuxgrrl.com/2010/04/06/contributing-to-free-open-source-software-as-a-designer/.) I've also given talks at the Libre Graphics Meeting, SXSW, GUADEC, FUDCon, LibrePlanet, FOSS.in, LinuxCon, and probably others I'm not recalling at the moment.
ML: Do you consider yourself a technology lover? If so, do you have any visions of the future you'd like to share?
MD: I learned how to read by playing text-parser adventure games on an old 8088 machine; I grew up with a love of technology and auspicious thoughts for a future with even better technology. Unfortunately, I'm a bit more cautious about technology now - particularly tablets and phones. I'm a new parent, and I worry about the ubiquity of these devices and how they may affect children's development. It's also sad to go to pretty much every public place and see heads bent down staring at screens rather than healthy social interaction (I'm as guilty as anybody.) What about the divide between people who can afford these devices and a ubiquitous internet connection and those who can't? There's bad that comes with the good that technology brings, and I think a lot more careful thought and action needs to be put into mitigating that bad.
ML: What hardware have you purchased for yourself? Do you have a laptop, smartphone and/or a tablet? If so, what?
MD: I have an x220T tablet, a Samsung Galaxy S4, and a Nook HD. I've never purchased a piece of Apple hardware in my life, although I was - oddly enough - given a gumstick iPod Shuffle once when I signed a lease on an apartment. It bricked after a few months.
ML: Do you use any non-FOSS software? If so, what and why?
MD: The only non-FLOSS desktop software I use is my income tax software. I have it installed on the Windows hard drive that came with my husband's personal laptop (we've since swapped an SSD into it). I use that once a year, because I'd rather do my own taxes and the laws change too frequently for any potential FLOSS solution to be workable.
I do have more non-FLOSS apps on my phone than I'd like to admit, and I use more non-FLOSS web apps than I'd prefer (although many are FLOSS.) I used to not even use Flickr or Twitter because they aren't open source - as I get older, I think I'm getting more soft in tolerating that. One piece of proprietary software I'm particularly proud of avoiding, though, is Gmail.
About your work with Red Hat
ML: How long have you worked for Red Hat and what positions have you held there?
MD: I've worked for Red Hat close to 9 years, if you don't count my summer internship before I started full-time. I've been an interaction designer the entire time, but I have worked on different teams over the years.
ML: What projects have you worked on within Red Hat?
MD: I've worked on a lot of projects over those 9 years, so it would be boring to list them all out. The two main products/projects I've worked on are Red Hat Network / Red Hat Network Satellite and Fedora. As of late I've been spending a lot of time on Hyperkitty, the mail archiver web UI for Mailman 3.
About your work with the Fedora Project
ML: How did you first get involved with the Fedora Project?
MD: I used Red Hat Linux when I was a high school student but was converted to Debian in college. I migrated to Fedora and got involved with the Fedora Project when I did a summer internship with Red Hat's desktop team in the summer of 2004.
ML: What things have you worked on in Fedora?
MD: A lot of things! I've been the community design team lead for several years at this point, which has involved a lot of wrangling to get the artwork ready for each release on time, although honestly gnokii (Sirko Kemter) does a lot of this work now - he makes sure things are ready and prepares a lot of the release artwork. Most recently on the Fedora design team I've done a lot of artwork for the Fedora Badges system (badges.fedoraproject.org). I redesigned the fedoraproject.org website with a team of Fedora design team members in 2009-2010, although it is due for some updates these days, I think. I've done a lot of UX work for many different components that are in Fedora, including virt-manager and anaconda, the installer. I've also done UX work for some of the infrastructure applications that help run Fedora, including the Fedora Packages app (http://apps.fedoraproject.org/packages). I've served on the Fedora board too, but not a full term.
ML: What do you think Fedora's strengths and weaknesses are both as a FOSS development project and as a general purpose Linux distribution?
MD: By development project, I'm assuming you mean a FLOSS project community and not necessarily a development platform (but please correct me if I'm wrong.) As a community I think Fedora has a major strength in that it attracts many smart and passionate folks. The corresponding weakness (which is shared by other FLOSS projects) is that demographically it's not a particularly diverse set of folks. We are actively working on that though; for example, Fedora has participated in the FLOSS Outreach Program for Women in the past couple of rounds or so.
Another strength of Fedora is that as a community, Fedora is particularly concerned about openness and transparency with respect to project discussion but also privacy when it comes to personal information — so much so that, for example, when we proposed a Fedora badge for folks voting in a vote to choose wallpapers, we had to make the badge an opt-in badge rather than automatic because concern was voiced about individuals' privacy with respect to whether or not they voted. Folks aren't just interested in software freedom; they are interested in other social justice issues (like privacy) as well. A weakness to counter that is that in a community of passionate people, the discussions can get pretty heated — particularly on the mailing lists. This is also not unique to the Fedora Project. We are hoping to help tame the flames with affordances provided by Hyperkitty, when it's ready.
As a general-purpose Linux distribution, I think Fedora's strengths are that you have highly-skilled, full-time software developers alongside community developers putting out software in Fedora first. The corresponding weakness is that because there's new technology in Fedora first — for example, systemd or GNOME 3 — when you use Fedora you have to learn the new things sooner than other distros' users since you're getting it first. That can take time; there might not be as much documentation or as many how-to blogs to learn from yet because the tech is so new.
ML: What things would you change about Fedora if you had a magic wand?
MD: If I had a magic wand, Hyperkitty would be ready and deployed across all of the Fedora mailing lists and we'd be having much more productive and efficient conversations!
About your Anaconda design work
ML: Anaconda is the installer that Fedora uses on the install and live media. Why did Fedora need to redesign Anaconda?
MD: Will Woods covered this much better than I could in a two-part blog series that starts here:
Adam Williamson and I wrote this too to go along with the initial release of the new UI:
The redesign was initiated by the development team. The design wasn't conjured up first and then pushed on to them. They made a decision that a redesign had to be done, and they sought out UX help for the UI part of it. That is how I became involved in the project. A lot of underlying problems with Anaconda were addressed - quite well - by the development team but haven't gotten as much attention as the UI because the UI is simply more visible. (The two links above give specific examples.)
ML: Is the Anaconda redesign completed or is it an ongoing process?
MD: The major chunks of it are completed, but improvements are an ongoing process. We've done several rounds of usability testing (with the help of two interns, Stephanie Manuel and Filip Kosik), including tests performed at DevConf.cz in Brno, Czech Republic and in Red Hat's Boston-area office. You can see the documents from those tests here: https://fedoraproject.org/wiki/Anaconda/UX_Redesign#Usability and summaries of the analysis of the results from Brno (https://www.redhat.com/archives/anaconda-devel-list/2013-April/msg00018.html) and Boston (https://www.redhat.com/archives/anaconda-devel-list/2013-April/msg00011.html). We combed through all of the test videos and reports, incorporated some user feedback we'd gotten on the redesign as well, and came up with an exhaustive list of issues to address, many of which have already been addressed: http://fedoraproject.org/wiki/Anaconda/UX_Redesign/Usability_Test_Suggestions
The Anaconda developers, as they encounter usability-related UI bugs, will often consult with me or other UX folks to hash out a solution and then fix the bugs.
So the redesign isn't stagnant; as bugs filter in they are addressed on top of our active work to identify issues and address them through QA and usability testing.
ML: What has been the most challenging part of the redesign?
MD: The most challenging part of the redesign (which my vent^W talk at the Linux Plumbers' Conference a couple of years ago actually focused on) was any interaction involving storage technologies. I actually converted that talk into a blog, so if you want to go more in depth on this you can take a look at that: http://blog.linuxgrrl.com/2013/01/04/storage-from-a-ux-designers-perspective/ Storage technology is all over the place and I think it's a bit of a miracle it is as standardized as it is. There are so many tough situations you can get into when you get storage wrong. (An example from Anaconda: http://blog.linuxgrrl.com/2013/03/04/refreshing-storage/)
ML: I must admit that when I first encountered the partitioning screens starting with Fedora 18, I was not fond of them. For various reasons, I do a lot of Fedora installs and eventually it grew on me... but I still hear quite a bit of complaining. How much and what feedback have you gotten from the community?
MD: The main feedback we get about most of the new Anaconda screens concerns either the position of the 'done' button in the upper left (folks want it in the lower right) or the text of the 'done' button (it was previously 'back,' and we got complaints then because it doesn't always mean back, so we changed it to 'done.')
I think these complaints are symptoms of the hub and spoke model we've chosen to go with. For most of the screens, hub and spoke works great. Where it falls apart is the storage-related screens, because the storage 'spoke' is actually a little mini linear wizard in itself. I'm not convinced that hub and spoke generally was the wrong model to go with, but we've been making small tweaks here and there to the storage area so that it'll make more sense based on the usability test results and feedback we've gotten.
I also think some of these issues are symptoms of a design goal we had — make it possible for less-experienced users who just want a working system without much fuss to get one. In the design, I think I tried a bit too hard to prevent those users from being exposed to more complicated bits like custom partitioning. I mean, where I was coming from was reasonable: if you know how to create a RAID 4 array and you can actually recite the difference between all of the RAID levels, you're probably smart enough to poke around and find what you want. If 'RAID' makes you think of bug spray in all-caps, you back away slowly from the computer if you encounter any form of custom partitioning. Again, though, I think I made some bad calls there and those are shaking out in the usability test results and user feedback.
Actually last week I talked to Chris about adding a button that goes directly to custom partitioning, instead of only having it accessible from the 'Done' button on the first storage screen, since many people want to go there and aren't sure how when they reach that screen. (It really is not intuitive to hit 'Done' to get to custom partitioning.) I'm hoping to mock up the fix this week. (I know this does seem a bit late to get to it, but to be fair I was on maternity leave for much of the Fedora 19 development cycle.)
ML: What complaints have you encountered and how valid do you think they are?
MD: That's a particularly broad question so I'll answer it broadly with respect to the user feedback we've gotten. Generally, I was pleasantly surprised by the feedback we got on the UI part of the redesign. Any time you embark on a major redesign — especially if the thing you're redesigning has been pretty much the same for 10 years — you're going to get backlash. The things folks had the most trouble with were the parts of the redesign I knew were weak (particularly storage.) The parts of the design I thought worked well, for the most part, didn't receive a lot of complaints. I was seriously expecting hate mail, but I didn't receive any. I actually received a few well-reasoned and productive critiques that then went straight into the 'Usability Test Suggestions' wiki bucket list of items to address. The most common off-the-cuff feedback I hear is along the lines of, "I didn't like it at first, but now that I've had time to get used to it, it works fine for me."
I think a big reason for the less-severe-than-expected backlash and the surprisingly helpful feedback was how open we were with the redesign and rewrite process — I worked pretty hard to blog our major design decisions and process, and the Anaconda team is very open in IRC and on the anaconda mailing lists. If someone was really bothered by it and did a search, my guess is they'd find all of that documentation and be able to see where we were coming from with some of the decisions, and that may have moderated their feedback a bit.
I will also be the first person to say that there's no way that the design work I put together for the Anaconda team was perfect. Have you ever taken a drawing or painting class? If you start in with the eyes, and try to render them perfectly, then move out to the nose and the mouth and the head and the body — rendering each as accurately as you can, one-by-one — you are going to find yourself with a disproportionate and likely poorly-composed work with many smaller nicely-rendered bits. Nobody, not even the great Renaissance masters, sits down at an easel and draws a perfectly lifelike rendering of what's in front of them starting with the very first stroke. You have to start with broad strokes first, lay down the foundation for your composition, do one pass for gesture, do another pass to pick out the positions of the major body parts, another pass to hit the major shadows, do another pass... so on and so forth, until you get down to precisely-rendered, life-like details. You don't start with those details. The design we started with for Anaconda is a few passes in, but we're working with the feedback we get to make additional passes and refinements.
ML: I've heard some people advocating for the use of a separate tool for the complex task of partitioning... perhaps gparted... but that wouldn't work would it? If I understand correctly, the Fedora installer has requirements for more advanced partitioning / filesystem features that gparted does not offer... things like Linux software RAID, LVM, and various combinations of those. Is using an external tool even a viable option?
MD: Using a separate tool for partitioning would be a good idea in that, at least for the full / non-live installer, you're in a limited environment to get your work done. Partitioning is scary work, even if you really know what you're doing. Anaconda itself is a limited environment — you don't have a web browser to look up help and it's constrained in ways a full desktop is not. Anaconda handles these things because it has to, but I personally think it would be ideal if Anaconda handled installation and a lot of the complicated storage bits were handled by another tool that specialized in storage. As you point out, though, the tools that exist now don't meet the same requirements that Anaconda does, so Anaconda continues to handle storage configuration like partitioning. Anaconda is now modular, though, so the storage-handling code is now actually broken out into a separate library called blivet. Maybe at some point an external tool could be written that uses that library.
ML: Are there any upcoming changes in Anaconda in Fedora 20 that you'd like to mention?
MD: If I can get that mockup I mentioned earlier done and the team has enough time to implement it, maybe there will be a button to go directly to custom partitioning. If not Fedora 20, hopefully Fedora 21.
ML: Is there anything I neglected to ask about that you'd like to mention?
MD: Yes, "How do you like working at Red Hat?" To which I would answer, "It is the best job in the world." Thank you for the interview, I enjoyed it and I hope it helps your readers understand a bit more about the Anaconda UI redesign.
ML: Thank you for taking the time to answer my questions.