Friday, May 10, 2013

Cats, Cattle and Zebras


Or, the Animal kingdom in the cloud


Something about IT seems to attract references to the animal kingdom. It might be the lively imagination or the unkempt nature of practitioners in the field (I'm thinking of myself). For example, the Agile movement is fond of the story of "the Chicken and the Pig". As an indication of how prevalent this meme became, I've been asked (actually, usually I do the asking) who the chickens and the pigs are in a meeting. Only the few unaware of the meme took offense. Like most powerful memes, this one communicates a much more complex set of ideas briefly, hence its power.

"You promised Cats…." you might say about now, "and all you've talked about are pigs. And when did this blog become veterinarian focused?"

Bear with me for a bit...

Ok. Cats. A more recent meme I've been hearing is "Cats vs. Cattle", more frequently known as "Pets vs. Cattle". The idea behind this meme is to capture the effect of scale on your approach to things. Say you have a cat. You name it "Tickles". You feed it its favorite food. You tend to its every need, and pay the vet ridiculous amounts of money when it gets sick. This works since you have one cat, or just a handful. Each and every instance of the Cat is precious and has a unique character and sometimes quirky behavior (I had a cat who woke me up every morning by biting my cheeks).
Now, consider the difference in attitude toward Cattle. You don't have 5 cows, you have a whole herd, say 500 or 5,000. In general you try to keep them happy and healthy. The herd as a whole might be broken into categories, but they mostly get the same treatment - same food, same lodging, etc. You might have some aggregate affection for the herd, and maybe for the lifestyle. But (sorry, PETA) in general, you won't go far out of your way if one of the cows falls ill. Heck, if the disease is infectious, you might actually sacrifice the one cow for the good of the herd.

Ok.. so what does this have to do with Cloud computing?


Replace the Cat with a pet server, and the Cattle with a compute farm, and the analogy might become apparent. If you have a few servers, you'll name them things like "Fred" and "Barney" and "Pebbles". They'd each have a unique character (e.g. the OS flavor, the version of software installed, firewall rules, etc.). With a compute farm, you name your servers something like "r12c2-east", or something more esoteric like "tge0-0-0-0.border1" or "ae-8-8.ebr1.newyork1" (these happen to be names of routers I plucked out of a traceroute…). You strive for uniformity among your servers, and if one of them is sick (bad hardware, or malicious software detected), you power it off, toss it aside, and replace it with a new one. In the cloud this is even easier, since tossing a server aside just means a few mouse clicks in the console (or better yet, an automated script that invokes the right APIs).
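
To make "an automated script that invokes the right APIs" concrete, here's a minimal sketch of the cattle approach against AWS using the boto3 SDK. The region, instance ID, AMI, and instance type are placeholder assumptions - the point is that replacing a sick server is a couple of API calls, not a vet visit.

    import boto3

    # Minimal "cattle" sketch: terminate an unhealthy instance and launch a
    # replacement from the same golden image. All identifiers below are
    # placeholders -- adapt them to however your farm tracks its members.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    sick_instance_id = "i-0123456789abcdef0"  # flagged by your monitoring, hypothetically

    # Toss the sick cow aside...
    ec2.terminate_instances(InstanceIds=[sick_instance_id])

    # ...and bring up an identical replacement from the same image.
    ec2.run_instances(
        ImageId="ami-12345678",   # the golden image every server is built from
        InstanceType="m3.large",
        MinCount=1,
        MaxCount=1,
    )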


Ok… so Zebras?



While I like the Pets vs. Cattle analogy, I think it is missing an important ingredient that's pivotal in the Cloud. Pets vs. Cattle captures the reality of dealing with one axis of complexity - going across orders of magnitude in numbers, from 1-10 to 100-1,000. The attitude, tools, and processes required to handle this transition have to adjust the farther up the scale you go. This is captured well. The missing part is the diversity aspect.

Looking at the wealth of services offered on Amazon AWS, and more importantly at the twenty-some pages explaining "How AWS pricing works", might start you down the path of realizing that the cloud is not a matter of cattle; it's a matter of Zebras, or more correctly, a whole zoo.

In a zoo, each species of animal has its own needs - food, habitat, social preferences and various other demands on your operations staff. To be successful, the staff must be intimately aware of what it takes to make each species happy (or at least keep it alive). For bonus points, the staff also needs to be cost-conscious in caring for the animals (e.g. if the lion will eat gizzards, don't feed it filet mignon).

Complexity on the axis of scale yielded new tools and processes: configuration management tools like Chef and Puppet, monitoring tools like Nagios, capacity planning tools like Ganglia, and many others. Tools to handle the diversity axis (the zoo part) are starting to emerge, as more and more cloud users face the need to handle their Zebras economically.

I'm proud and happy to be working on one of those tools, and if your Zebras are getting too complex and expensive, or if you'd like to figure out if your lions will accept gizzards, I'd like to hear your story.

As a parting gift, in case I under-delivered on cats, you can have your fill here.


Saturday, March 16, 2013

Your Customer's pain is not always yours

Or, the one and the many

The inspiration for this post was a discussion with QA folks about how Crowbar should behave when failures are encountered while configuring the storage subsystem on a node. Well, that and a binge of reading and listening to folks talking about Lean Startups and the importance of solving real customer issues.

The QA engineer was adamant that on a server with 24 drives, Crowbar should simply ignore a single failed drive and use the other 23. For the use case he was trying to solve, this might make sense. He had limited resources (only a handful of servers), and needed to quickly turn up a cluster. The fact that Crowbar flagged the server with the bad disk as having a problem, and refused to use it, was nothing but an annoyance to him.

Crowbar was designed to enable DevOps operations at very large scale. In a recent customer install (more about it in another post, I hope), the customer purchased 5 racks of servers, rather than 5 servers, among them 20 servers dedicated to storage. Each of those servers has 40 separate disks attached to it (it's kinda cool hardware, check out the C8000XD while the link works).
The calculus that applies to 5 servers does not apply to 5 racks of servers.

Just imagine this scenario - you just spent the last 2 hours bringing this mass of bare-metal servers into a functioning OpenStack Swift cluster (yes, you can do that in about 2 hours with Crowbar). Then you go and inspect the cluster, and discover that rather than having the full 20x40=800 disks... you're missing 2 or 3. Now go find them, and figure out what the heck happened. That is a real pain.
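
"Go find them" usually boils down to an audit script along these lines - a hypothetical sketch that assumes each storage node will report its visible block devices over SSH. The node names, the expected disk count, and the lsblk-over-ssh approach are illustrative assumptions, not Crowbar's actual mechanism.

    import subprocess

    # Hypothetical audit: compare how many disks each storage node actually sees
    # against how many it is supposed to have.
    EXPECTED_DISKS_PER_NODE = 40
    NODES = ["storage-%02d" % i for i in range(1, 21)]  # 20 storage nodes

    def count_disks(node):
        # List block devices of type "disk" on the remote node.
        out = subprocess.check_output(
            ["ssh", node, "lsblk", "-d", "-n", "-o", "NAME,TYPE"], text=True
        )
        return sum(1 for line in out.splitlines() if line.strip().endswith("disk"))

    total = 0
    for node in NODES:
        seen = count_disks(node)
        total += seen
        if seen != EXPECTED_DISKS_PER_NODE:
            print("%s: expected %d, found %d" % (node, EXPECTED_DISKS_PER_NODE, seen))

    print("cluster total: %d of %d disks" % (total, len(NODES) * EXPECTED_DISKS_PER_NODE))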

The pain that real customers experience is materially different from the pain the QA "customer" experienced in his scenario.

The design of Crowbar is intended to address the real customer pain.

When dealing with large installations, what is of paramount importance is delivering the desired performance at the desired TCO (O = operations, not necessarily ownership, but more on that some other day). In an environment with tens, hundreds, or thousands of nodes, a partial node failure that ends up impacting the performance of the overall system is not acceptable. Throw the rotted apple out, and save the time and cost of trying to salvage the bits of good flesh that might still be in there. The overall system will react intelligently and recover, rather than hiccup inexplicably and blow through SLAs.

The post is getting long, so it's time for some parting thoughts:

  • QA is an important function - done right, they're your friendliest customers: they'll patiently file very informative problem reports and give you access to their environment. However, make sure that the enhancements they seek actually reflect the pain a real customer will have.
  • As deployments - both physical and in the cloud - grow in size, the operations calculus changes dramatically. It's easier/faster/cheaper to throw out the bad apple than to analyze what got it sick.
Align your stars and do the math before you take wasteful action.



Thursday, January 31, 2013

Democratizing Storage

Or, you control your bits

Traditional storage solutions gravitated towards some central bank of disks - SAN, NAS, Fibre Channel, take your pick - and they share a few traits that are not very democratic:

  • They cost a lot, and a large part of that cost is the intellectual property embedded in the solution (i.e. the markup on the underlying hardware is huge)
  • The OEM makes lots of trade-off decisions for you - e.g. the ratio of controllers to disks, replication policies, and lots of others (most OEMs graciously expose some options that the user can control, but those are just a tiny fraction)
  • They typically require 'forklift upgrades' - if you use up your capacity, call the forklift to install the next increment, which typically requires a forklift's worth of equipment
On the plus side, in general these types of systems provide you with a reliable, performant storage solution (depending on the $$$ you spend, you get more of either quality).

But in the world of large-scale deployments based on open-source software, traditional storage solutions are an anachronism.

There is now a slew of distributed storage solutions that solve the pain points of the old solutions and democratize storage.

The different solutions differ along a few axes:
  • Type of access they provide - Block (e.g. iSCSI), File system (e.g. similar to ext3 or other POSIX filesystems), Object (e.g. Amazon S3, Swift) or tailored API (e.g. Hadoop)
  • Is it a generic storage solution, or tailored for a higher purpose -
    • Ceph offers lots of different access methods, and can be used for pretty much any type of storage (e.g. Block, file system, and object)
    • Hadoop FS is tailor-made for... you've guessed it - Hadoop workloads
    • Swift and S3 only offer Object storage semantics
    • SheepDog is tailor-made for virtualization-based workloads.
  • The complexity of their metadata handling - or, in simpler terms: if you have a blob of bits, how complex a name can you give it? And if you have lots of these blobs, how smart is the technology in handling all those names?
    • Swift chose a very simple naming scheme - you have Accounts, which contain Containers, which contain Objects. That's it: a naming scheme just two levels deep (see the sketch after this list). This simplicity allows Swift to be very smart about replicating this information, providing high availability and performance.
    • Hadoop provides a full directory structure, similar to traditional filesystems (e.g. DOS/FAT or Linux ext3). But it's a bit dismal about replicating it (better in Hadoop 2.0). It relies on a central NameNode to maintain and synchronize the little beasts, with another Apache project (ZooKeeper) helping coordinate high-availability failover.
    • Ceph takes a mixed approach - the underlying object store library has Pools and Objects, each having a simple name (pools also have policies attached). But it also provides rich and capable additional metadata services.
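
To see Swift's two-level account/container/object scheme mentioned above in action, here is a minimal sketch using the python-swiftclient library; the auth URL, credentials, and names are placeholders for whatever your deployment uses.

    from swiftclient import client as swift

    # Minimal sketch of Swift's account -> container -> object naming, using
    # python-swiftclient. The auth URL, credentials, and names are placeholders.
    conn = swift.Connection(
        authurl="http://swift.example.com:8080/auth/v1.0",
        user="myaccount:myuser",
        key="secret",
    )

    # The account is implied by your credentials; underneath it sit containers...
    conn.put_container("photos")

    # ...and inside a container sit objects. That's the whole hierarchy.
    conn.put_object("photos", "cats/tickles.jpg", contents=b"...image bytes...")

    headers, body = conn.get_object("photos", "cats/tickles.jpg")
    print(headers["content-length"], "bytes retrieved")
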
So, you have lots of options that are much more cost-effective and capable. But you haven't found the panacea of storage quite yet. These solutions have their dark shadows too:
  • In many cases you get a Swiss army knife with lots of blades to play with - and get hurt by. Those trade-offs that the OEMs made for you in the old solutions... you now have to evaluate and make yourself (or hire consultants)
  • The solutions above provide the software, but as famously said - the cloud doesn't run on water vapor - you still need to pick and buy the hardware (or buy a packaged solution)
  • It's all on you... no vendor to call up and nag with support calls (unless you pay a support vendor / solution provider).

Are the shadows scary? A bit. Can they be tackled? With a bit of work and research, absolutely! And it's worth it!

In a follow-up post, I'm planning to describe some of the hardware choices and considerations that go into deploying a petabyte-scale hardware platform for a distributed storage deployment, based on a recent project.
 






Tuesday, January 22, 2013

OpenStack 'secret sauce'

Or, some less than obvious reasons why refactoring is "A Good Thing"

At a meetup tonight, someone challenged me to explain what's really good about OpenStack. This was in the context of an OpenStack-Boston/Chef-Boston discussion about OpenStack and the effort around community deployment cookbooks, and an approach that uses Pull From Source (which I'll post about at a later date).

While I could have spent lots of time describing the CI testing infrastructure and the great work done by Monty and his team, frankly, that's not unique to OpenStack. It's an enabler for lots of other things.

To me, one of the primary sources of excellence in OpenStack is the courage to refactor.

Not too long ago, there were only two services - Nova for compute and Swift for object storage. In Grizzly, through large efforts, there are dedicated services, each with a clear focus and a dedicated team passionate about its technology area.

One of the first refactors was Keystone. Both Nova and Swift had their own approaches to providing authorization, authentication, and separation between tenants. During the Diablo release, the Keystone service was carved out to provide a centralized function for these capabilities.
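
To make that centralized function concrete, here's a minimal sketch of a client obtaining a token from Keystone's (era-appropriate, since superseded) v2.0 API and reusing it against another service. The endpoints, credentials, and tenant name are placeholders, and the Nova URL would use your real tenant ID.

    import requests

    # Hypothetical endpoints and credentials -- substitute your deployment's values.
    KEYSTONE_TOKENS_URL = "http://keystone.example.com:5000/v2.0/tokens"

    payload = {
        "auth": {
            "tenantName": "demo",
            "passwordCredentials": {"username": "alice", "password": "secret"},
        }
    }

    # One sign-on against Keystone...
    resp = requests.post(KEYSTONE_TOKENS_URL, json=payload)
    resp.raise_for_status()
    token = resp.json()["access"]["token"]["id"]

    # ...and the same token is honored by the other services (Nova shown here).
    headers = {"X-Auth-Token": token}
    servers = requests.get(
        "http://nova.example.com:8774/v2/<tenant-id>/servers", headers=headers
    )
    print(servers.status_code)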

While the immediate end-user benefit is clear - a single sign-on system - what the discussion tonight helped me put into words is the benefit to the community and the overall vibrancy of OpenStack. I'll keep you in suspense and provide another example.

The Cinder block storage service in the upcoming Grizzly release started its life deep inside Nova, as nova-volume. In that location it shared some code, but mostly a project technical lead (PTL) and developers, with Nova. As a standalone project it has a separate (though somewhat overlapping) sub-community dedicated to storage technology.
(I'd be remiss if I didn't mention Quantum, the software-defined networking service, which started its life as nova-network and followed a similar path during the Essex release.)

Is the picture emerging?

As technology areas are identified within their current "home", they're spun out into their own projects under the OpenStack umbrella. This allows a community of enthusiasts to form around each project and drive its development.

Going back to Cinder as a poster child of success - now that there's a focused block-storage community forming around it, vendors are getting engaged. Over 11 vendors have contributed at least their 'drivers' (a driver allows Cinder to "talk" the unique protocol of a particular back-end storage platform). In the process, Cinder itself is becoming better.
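
To illustrate what such a driver boils down to, here is a hypothetical skeleton in the spirit of Cinder's volume-driver model. The class names, method names, and the vendor client are illustrative assumptions, not the actual Cinder plugin interface; the point is that a driver translates generic volume requests into one back end's proprietary protocol.

    # A hypothetical back-end driver skeleton, in the spirit of Cinder's volume
    # drivers. The shape, method names, and AcmeArrayClient are illustrative
    # assumptions, not the real Cinder plugin interface.

    class AcmeArrayClient:
        """Stand-in for a vendor SDK that speaks the array's proprietary protocol."""

        def __init__(self, address, login, password):
            self.address, self.login, self.password = address, login, password

        def create_lun(self, name, size_gb):
            print("array %s: create LUN %s (%d GB)" % (self.address, name, size_gb))

        def delete_lun(self, name):
            print("array %s: delete LUN %s" % (self.address, name))


    class AcmeVolumeDriver:
        """Translates generic 'give me a volume' requests into vendor-specific calls."""

        def __init__(self, config):
            self.client = AcmeArrayClient(
                config["san_ip"], config["san_login"], config["san_password"]
            )

        def create_volume(self, volume):
            # The orchestration layer only knows about name and size; the driver
            # knows how to express that to this particular array.
            self.client.create_lun(volume["name"], volume["size"])

        def delete_volume(self, volume):
            self.client.delete_lun(volume["name"])


    if __name__ == "__main__":
        driver = AcmeVolumeDriver(
            {"san_ip": "10.0.0.5", "san_login": "admin", "san_password": "secret"}
        )
        driver.create_volume({"name": "vol-0001", "size": 10})
        driver.delete_volume({"name": "vol-0001"})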

Would the storage vendors have had the incentive to contribute to nova-volume? Maybe. Is OpenStack stronger for having a focused PTL, core code reviewers, and engaged contributors who care only about storage? I think so.
(Again, not to neglect Quantum... exactly the same result! And Keystone too.)

OpenStack's willingness to refactor encourages deep experts to join the project because they get to take ownership of code.  That 'secret sauce' drives excellence and community growth.