Thoughts and Ramblings

General things I find of interest.

Docker in FreeNAS 9.10

As some may know, Docker is being added to FreeNAS 10, but FreeNAS 10 is still in beta and not for production use. However, if you have upgraded to FreeNAS 9.10, you can use Docker now; it’s just not integrated into the UI, so you must do everything from the command line.

IOHyve

First, iohyve must be set up. FreeNAS 9.10 already ships with iohyve, but it must be configured. As root, run:

iohyve setup pool=<storage pool> kmod=1 net=<NIC>

In my case, I set the storage pool to my main pool and the NIC to my primary NIC (igb0). This creates a new dataset named iohyve on the specified pool, along with a few more datasets underneath it. Then, in the web GUI under System -> Tunables, add a tunable named iohyve_enable with a value of YES and a type of rc.conf, and make sure it is enabled. Also add iohyve_flags with a value of kmod=1 net=igb0, again with a type of rc.conf, and make sure it is enabled. These match my configuration above; change net to match your own NIC.
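For reference, those two tunables amount to the following rc.conf entries (the values shown are from my setup; adjust net for your NIC):

iohyve_enable="YES"
iohyve_flags="kmod=1 net=igb0"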


FreeBSD with ZFS root

It’s been a while since I posted here. I’ve been busy with a new job, a new house, and a bunch of other things. One of those things was setting up my new file server. This has been in the works for a long time, as the various posts here on ZFS attest. I spent a long time researching the options and finally arrived at my solution.

I did consider FreeNAS for a long time. It is essentially a FreeBSD install with most of the administrative work done for you through a web-based GUI. It checked most of my boxes, supporting ZFS, AFP, Bonjour, and a few other things I wanted. While that is nice, I also found it limiting when I wanted to stray off the beaten path. I didn’t want to lose ZFS, but I wanted something I could tinker with, so I decided to go with FreeBSD.


Trac.fcgi Memory Usage

I’ve been slowly transitioning to nginx as the web front-end in an effort to reduce Apache’s memory usage. In keeping with that, I’m moving more and more off of Apache. One piece I recently moved was trac: it is now served directly by nginx, running in FastCGI mode, whereas previously it ran as CGI through Apache.
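To give an idea of the nginx side, here is a minimal sketch, assuming trac.fcgi is spawned separately (for example with spawn-fcgi) and listens on a local UNIX socket; the socket path is illustrative, not my actual setup:

location / {
    include fastcgi_params;
    # an empty SCRIPT_NAME keeps trac's URL mapping correct when mounted at the root
    fastcgi_param SCRIPT_NAME "";
    # illustrative socket path; must match whatever spawned trac.fcgi
    fastcgi_pass unix:/var/run/trac/trac.sock;
}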

While FastCGI is faster, it has inherent issues: any memory leak results in ever-growing memory usage, which is exactly why Apache has a setting (MaxRequestsPerChild) that makes each child serve a limited number of requests before exiting. Trac.fcgi has no such directive, and it has the equivalent of a large memory leak: a non-expiring cache. A non-expiring cache isn’t quite as bad as a true leak, since it eventually reaches a limit instead of growing indefinitely, but if that limit is larger than the memory available to trac, the effect is just as serious. The only solution, short of fixing trac’s caching mechanism, is to restart trac periodically; but while trac is restarting, all requests are lost, causing bad gateway errors for users, and the restart has to be done manually. Clearly not an ideal solution.
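The stopgap amounts to something like the following hypothetical crontab entry, assuming an rc.d script named trac_fcgi manages the FastCGI processes (both the script name and the schedule are placeholders):

# bounce the trac FastCGI processes nightly to bound the cache's memory growth;
# any requests that arrive during the restart will see bad gateway errors
0 4 * * * /usr/local/etc/rc.d/trac_fcgi restart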


Thrashing Server

Well, last Sunday we released a new version of Perian. It didn’t occur to me at the time that this would mean a large number of people visiting the site. Monday morning I noticed that the web server was very slow, which is where my fun began. I decided that the best course of action was to increase the number of server processes; the system had CPU and memory to spare, so this was the natural choice. I increased the number, restarted the web server, and it helped, somewhat. So I increased the number further, reloaded the web server, and watched top for CPU and memory usage. I kept increasing the number until I realized that reloading the web server didn’t actually reload this part of the configuration; I needed to restart it.

So I restarted the web server and watched top in horror as the server ran out of memory and started swapping. I quickly issued /etc/init.d/apache2 stop, but the command never completed. I scrambled to see if there were any other shells I could get on the server, but everything was running horribly slowly because the server was thrashing. It became clear about 10 minutes later that the kill process was not keeping up with the new apache processes being created, so I had to stop the new apache processes first. I had the sense to issue iptables -A INPUT -p tcp --dport 80 -j DROP, which firewalled the web server off from the entire world. Over the course of the next minute, the server started becoming responsive again. Finally, I managed to actually kill the web server, set its child count to a more reasonable value, and start it back up. A quick flush of the firewall rules, and it was working again.

If I hadn’t had the sense to firewall off port 80, I likely would have had to reboot the server into single-user mode, a prospect to which I was not very amenable. One of these days I’ll reconfigure the thing to use the threaded version of the web server; however, PHP doesn’t work there, so I’ll have to find some sort of workaround.
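For my own future reference, the recovery boils down to a short sequence, sketched here; the MaxClients value is illustrative, and the exact directive depends on the MPM in use:

# 1. firewall off port 80 so no new apache children get spawned
iptables -A INPUT -p tcp --dport 80 -j DROP
# 2. with no new connections arriving, the stop can actually finish
/etc/init.d/apache2 stop
# 3. lower the child limit in the prefork config (e.g. MaxClients 50), then start,
#    since a reload would not have picked up the change
/etc/init.d/apache2 start
# 4. remove the blocking rule (or just flush the chain)
iptables -D INPUT -p tcp --dport 80 -j DROP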


Firewall Ban Activated

Well, shortly after my last post, the Chinese spammer struck again. In response, I just blocked a bit over half a million addresses in China. I have no tolerance for such things, and I don’t trust anyone in China to care enough to do anything about this guy, so blocking is really the best recourse.
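For the curious, blocking that many addresses is only practical as CIDR ranges rather than individual rules. Here is a sketch using ipset; the set name is made up, and the range shown is just a documentation prefix rather than a real allocation (the actual list has to come from the regional registry data):

# create a set of networks and drop any source address that matches it
ipset create cn_block hash:net
ipset add cn_block 203.0.113.0/24   # placeholder range; repeat for each real CIDR block
iptables -A INPUT -m set --match-set cn_block src -j DROP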