The other day someone asked on the #zfs IRC channel (irc.freenode.net) about using ZFS at home. As one of the early adopters, I can say it is a great idea! I've been running ZFS at home since late 2005. The first pool of "stuff" I created has been upgraded, expanded, and had its drives replaced. In 2008 I created the latest version of "stuff" as a simple mirrored pair of HDDs. The prior version of "stuff" was transferred to the 2008 pool, which is still in use.
One of the nice changes to the kstat (kernel statistics) command in illumos is its conversion from Perl to C. There were several areas in the illumos (née OpenSolaris) code where Perl had been used, but these were too few to maintain critical mass, and it is difficult for interpreted runtimes to keep pace with an OS, so keeping the two in lockstep is simply not worthwhile.
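If you have not used the C interface that the new command builds on, here is a minimal sketch of my own (not code from the new kstat itself) that reads a single named statistic through libkstat; the zfs:0:arcstats:hits statistic is just an arbitrary example, and error handling is kept brief.

/* Minimal libkstat example: read one named kernel statistic.
 * Build on illumos with: cc -o arc_hits arc_hits.c -lkstat
 * The statistic zfs:0:arcstats:hits is only an example. */
#include <kstat.h>
#include <stdio.h>

int main(void)
{
    kstat_ctl_t *kc = kstat_open();
    if (kc == NULL) {
        perror("kstat_open");
        return 1;
    }
    /* module "zfs", instance 0, name "arcstats" */
    kstat_t *ksp = kstat_lookup(kc, "zfs", 0, "arcstats");
    if (ksp == NULL || kstat_read(kc, ksp, NULL) == -1) {
        perror("kstat_lookup/kstat_read");
        return 1;
    }
    /* fetch the named value "hits" from the snapshot just read */
    kstat_named_t *kn = kstat_data_lookup(ksp, "hits");
    if (kn != NULL)
        printf("ARC hits: %llu\n", (unsigned long long)kn->value.ui64);
    kstat_close(kc);
    return 0;
}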
Latency and performance problems in storage subsystems can be tricky to understand and tune. If you've ever been stuck in a traffic jam or waited in line to get into a concert, you know that queues can be frustrating to understand and trying on your patience. In modern computing systems there are many different queues, and any time we must share a constrained resource, one or more queues will magically appear in the architecture.
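As a back-of-the-envelope illustration (the numbers here are made up), Little's Law ties the average number of requests waiting in a queue to the arrival rate and the time each request spends there, so you can estimate latency from queue depth and IOPS:

/* Little's Law: L = lambda * W, so W = L / lambda.
 * The numbers are hypothetical, just to show the arithmetic. */
#include <stdio.h>

int main(void)
{
    double iops = 200.0;             /* arrival/completion rate, I/Os per second */
    double avg_queue_depth = 8.0;    /* average number of I/Os in the queue */
    double avg_latency = avg_queue_depth / iops;   /* seconds per I/O */

    printf("average latency = %.1f ms\n", avg_latency * 1000.0);  /* prints 40.0 ms */
    return 0;
}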
We are hosting illumos and ZFS day events in San Francisco, October 1-3, 2012. Our good friends from DDRdrive, Delphix, Joyent, and Nexenta are also sponsoring the event. I will be talking about how to optimize the design of ZFS-based systems and how to get the best bang for your buck. Jason and Garrett are also on the speakers list, talking about how illumos has really taken hold as a foundation for building modern businesses.
When I originally wrote cifssvrtop (top for CIFS servers), all of the systems I tested with had one thing in common: the workstations (clients) had names. Interestingly, I recently found a case where the workstations are not named, so the results were less useful than normal. Here is a line from one sample of its output:

2012 Sep 11 23:50:48, load: 3.11, read: 0 KB, write: 176448 KB
Modern systems continue to evolve and become more tolerant of failures. For many systems today, a simple performance or availability analysis does not reveal how well a system will operate when running in a degraded mode. A performability analysis can help answer these questions for complex systems.
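As a rough sketch of the idea (the states, probabilities, and throughput figures below are entirely hypothetical), performability weights the performance delivered in each operational state by the probability of being in that state:

/* Performability sketch: expected performance is the sum over states of
 * (probability of being in the state) * (performance delivered in that state).
 * All numbers are hypothetical. */
#include <stdio.h>

struct state {
    const char *name;
    double prob;          /* steady-state probability of the state */
    double mb_per_sec;    /* throughput delivered while in the state */
};

int main(void)
{
    struct state states[] = {
        { "all drives healthy", 0.995, 1000.0 },
        { "one drive failed",   0.004,  550.0 },
        { "resilvering",        0.001,  300.0 },
    };
    double expected = 0.0;

    for (size_t i = 0; i < sizeof(states) / sizeof(states[0]); i++)
        expected += states[i].prob * states[i].mb_per_sec;

    printf("expected (performability) throughput = %.1f MB/s\n", expected);
    return 0;
}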
A legacy view of system performance is that bigger I/O is better than smaller I/O. This has led many to worry about things like "jumbo" frames for Ethernet or setting the maximum I/O size for SANs. Is this worry justified? Let's take a look... This post is the second in a series looking at the use and misuse of IOPS for storage system performance analysis or specification.
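As a quick, hypothetical illustration of why an IOPS number by itself is ambiguous, bandwidth is simply the product of IOPS and I/O size, so very different IOPS figures can describe comparable or even better throughput:

/* Bandwidth = IOPS * I/O size. The numbers are hypothetical. */
#include <stdio.h>

int main(void)
{
    double small_iops = 10000.0, small_kb = 4.0;     /* many small I/Os */
    double large_iops = 1000.0,  large_kb = 128.0;   /* fewer, larger I/Os */

    printf("small I/O: %.0f IOPS x %.0f KB = %.0f MB/s\n",
        small_iops, small_kb, small_iops * small_kb / 1024.0);   /* ~39 MB/s */
    printf("large I/O: %.0f IOPS x %.0f KB = %.0f MB/s\n",
        large_iops, large_kb, large_iops * large_kb / 1024.0);   /* 125 MB/s */
    return 0;
}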