Where to place your bet in the poker game that is the cloud?
“The reality, he said, is that engineers know the various components of their systems are going to fail and they design around the known fallibilities. But when you’re building some of the largest computing systems ever assembled, you’re bound to run into problems for which you haven’t planned or didn’t even know existed. Bug testing against every possible problem across hundreds of thousands of servers and multiple data centers just is neither easy nor, really, feasible.” - Geoff Arnold.
The fact of the matter is that the cloud is so complex that traditional IT hardware needs an evolution of its own. The question is – do hardware suppliers design specifically for the cloud, or do they continue to design for the on-premises data center and expect the cloud to take care of itself through the software stack?
Today, there are two ways to go about building a cloud infrastructure. The first: use the least expensive components possible and meet your “five nines” uptime metric through replication and failure mitigation via elaborate, intelligent software. Google, for example, has taken this route, with a very complex software layer that handles hardware failures so that they are invisible to the customer. The second, if you are the rest of the world, is to continue to invest in reliability, performance, data integrity, redundancy…the traditional data center staples that are part of every enterprise hardware solution out there.
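The “five nines” trade-off is worth making concrete. As a rough sketch (the 99% component availability and the assumption that replica failures are independent are illustrative numbers of mine, not figures from Google or anyone else), here is the arithmetic behind swapping hardware reliability for software-managed replication:

```python
# Back-of-the-envelope math behind "five nines via replication":
# how much annual downtime a 99.999% target allows, and how
# replicating cheap, failure-prone components can reach it
# (assuming failures are independent -- a big assumption in practice).

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability: float) -> float:
    """Annual downtime budget implied by an availability target."""
    return MINUTES_PER_YEAR * (1.0 - availability)

def replicated_availability(component_availability: float, replicas: int) -> float:
    """Availability of a service that stays up as long as at least
    one of `replicas` independent copies is up."""
    return 1.0 - (1.0 - component_availability) ** replicas

print(downtime_minutes_per_year(0.99999))   # ~5.26 minutes of downtime a year
print(replicated_availability(0.99, 3))     # three cheap 99% boxes beat five nines
```

Under these (idealized) assumptions, three replicas of commodity hardware that is individually down 1% of the time yield roughly six nines of availability, which is the bet the software-heavy camp is making.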
Times are changing, though. Open source software and open platform architectures are driving a less complex hardware infrastructure and a more complex software stack. That appears to be the formula for success…invest less in hardware and proprietary software stacks, and invest more in developing your own software…at least for the public cloud providers. To compete with the likes of Google and Amazon over the long run, you need a strong cost structure, and a lot of that cost is wrapped up in hardware.
I think it’s going to be somewhere in the middle.
Sure, the large cloud players will continue to invest in storage architects, cloud architects, software developers, and so on, but only so many of those specialists in cloud environments are available, and since they are in high demand, they command high salaries. The OpenStack community should help, but that will take time, and companies will still need to invest in in-house talent for customization, deployment, management, and the rest. The middle will be a combination of these open standards and open software with lower-cost, enterprise-grade hardware. Suppliers will continue to innovate to drive more cost out of their products while maintaining enterprise-class performance, reliability, data integrity, and the other features enterprises expect.
Case in point: TryStack. “It works a lot like Amazon’s Elastic Compute Cloud, except that it runs on the open source OpenStack software,” according to Ars Technica, on HP Redstone servers built around chips from ARM chipmaking startup Calxeda and running Ubuntu Linux. And the bonus: it’s currently FREE for developers to test drive.
Could this be the middle I refer to? It sounds like it, but that’s just the compute leg. What about the storage and networking legs of the three-legged stool that makes up the cloud? We’ll have to see what happens with hard drives and SSDs in the coming months; as for networking, WAN optimization seems to be the hottest topic lately.
Cloud data centers: low cost hardware running custom software or the enterprise IT status quo? Where do you see it heading?