The other night at the local Software Craftsmanship Group meeting we began discussing the business case for the “cloud”, and whether we thought companies would truly embrace it. It was a good discussion, and I wanted to capture some of my thoughts in a blog post, so here we are.
First, I think that a move from computing as we know it now into a “cloud” environment is inevitable. As my good friend Kevin Hazzard said at the meeting, computing really started out with a bunch of little clouds. A few universities had the resources and money to set up giant mainframes, and people would keep all of their data on those mainframes and log in through a dumb terminal to get time slices for running computations against their datasets. Or maybe they just needed a slice to read a document. Everything was done on big centralized computers.
When computers started getting smaller and PCs came about, things started shifting toward a decentralized model where everyone had computing power on their own desk. When the web came around and some of the bigger companies decided that they needed websites, they started hosting servers inside their own offices. They would build server rooms with fast internet connections, lots of air conditioning, raised floors, racks, etc… Even companies that weren’t all that large had these rooms. Talk about a waste of money! As commercial datacenters became more popular, only very large companies could still see a benefit from hosting their own hardware. Today, it makes no more sense for most companies to buy and maintain their own datacenter than it would for them to build a power plant in order to produce their own electricity.
But even as many companies are moving computers they own into datacenters, others are leasing their hardware from those same datacenters. They pay for their servers to be maintained and for spares to be kept nearby in case of failure. In that sense, they are leveraging the economies of scale that these large datacenters have access to, but in the process they have completely outsourced their computing hardware to a third party. That third party charges for space in a rack, and the customer pays for the hardware no matter how much or how little they use it (ignoring the cost of bandwidth).
To make this more efficient, many customers have started buying beefier boxes and putting their own virtualization layer on top of them. This way they can have five logical machines doing five different tasks on the same physical box, which theoretically lets them better maximize their hardware usage and save a few pennies by not needing as many spaces in the datacenter’s racks. The problem is that the customer now has two things to maintain: their physical machines and their virtualization layer.
Isn’t There A Better Way?
But wouldn’t it be easier on everyone to get away from physical hardware that needs to be formatted, installed, maintained, etc.? Wouldn’t it be easier to stop worrying about whether a box was physical or virtual? Wouldn’t it be easier to just move to machines out in the ether that can be shifted, moved, created, destroyed, etc… without having to touch physical hardware or set up your own workflows for doing so? Definitely. It would allow even the smallest customers to create, destroy, back up, suspend, etc… any number of machines in their own virtual datacenter. No more worrying about purchasing a new machine and waiting days for it to be “built” and installed. And when you don’t need that machine anymore, you just shut it down and it goes away.
This whole “cloud” thing certainly sounds like Nirvana, doesn’t it? In fact, there are companies, analysts, experts, etc… all waiting in the wings to tell you that it is going to solve every problem you have, from how you host your servers to getting cheaper coffee for your workers. The “cloud” is a ridiculously overloaded term; it is almost as if people have just renamed the internet. When most people (especially “analysts” or “experts”) talk about the cloud, they mean any service that you use over the internet and pay for by usage. If we adopt that definition, then almost any service can be described as being “in the cloud”. They start talking about things like Salesforce and other SaaS offerings as “outsourcing to the cloud”. But I don’t really buy it.
When I talk about the “cloud” I am mostly talking about “utility computing”, an idea that has been around since the 1950s (maybe even longer). The idea is that you can buy computing as a resource and simply pay for what you use, rather than paying to keep hardware and software around whether you need them or not. It is this concept that, in my mind, separates “cloud” from “not cloud”.
So within my view of the cloud there are a few different classes of offering:
- Storage Infrastructure – On-demand ability to store huge amounts of data without having to worry about disks, capacity, etc… Examples of this are Amazon Simple Storage Service (S3) and Microsoft Azure Blob Storage.
- Computing Infrastructure – Basic on-demand hardware. Services that provide you with the ability to create virtual machines on demand, letting you spin up dozens of machines with a few clicks or a few web service calls. Examples of this are Amazon Elastic Compute Cloud (EC2), Rackspace Cloud, CloudLayer, and GoGrid.
- Application Hosting – Building block applications that run on virtual hardware, but you don’t manage the hardware or the software. Often these “hosted applications” are used as fundamental parts of other cloud based or in-house applications. Examples of this are Amazon Relational Database Service (RDS), Amazon SimpleDB, Amazon Elastic Map Reduce (EMR), Microsoft SQL Azure, Amazon Simple Queue Service (SQS), and Microsoft Azure Queue Service.
- Application Platforms – Hosted environments that allow you to publish your applications to them without having to worry about managing the hardware or software. They generally require you to write your application to a particular API or in a particular manner so that it can be deployed and scaled automatically as demand requires. Examples of this are Google AppEngine and Microsoft Windows Azure.
Currently I think that the shift will start with number 1 and slowly move down through to number 4. In fact, the shift toward cloud storage is already in full force, and cloud computing infrastructure is catching on quickly across large sectors of the market. Application hosting is also starting to pick up, thanks to the many cloud database providers. As much as vendors would like you to believe differently, though, I don’t think demand for cloud application platforms has really picked up yet.
So What Is The Problem?
But if everything is so great, what are the major factors that are holding back the shift?
- Trust – Currently, this is the big one. You don’t own the disks that your data is being written to. In a traditional datacenter scenario you own the disks you write data to and you control their ultimate fate. In the cloud, by design, you don’t even know which disk you are writing to; if you did, you would be involved in management decisions about those devices, and the whole point of the cloud is to abstract the underlying hardware away from the software you are executing. Many companies just don’t trust an external provider to keep their data secure, and they aren’t going to move until guarantees, or at least some sort of liability system, can be enforced.
- Familiarity – This one is fairly obvious: people just aren’t familiar with the technology yet. Providers like Amazon didn’t really help with this either, because until recently they didn’t even provide a way for customers to try their services without hitting web services or going through an API. While I’d like to think that this isn’t a hurdle for developers, sadly that doesn’t seem to be the case. Tools such as S3Fox and ElasticFox filled in some of these gaps, but you had to know they existed and where to get them. Microsoft launched Windows Azure with easy-to-use plugins for Visual Studio along with an easy-to-use web interface, making getting started very easy. And recently Amazon launched a management console that allows a user to log in and manage machine instances easily from a GUI.
- Lack of Competition – It is hard to commit to a strategy knowing full well that only one company can provide a given service. For example, if I am going to use Amazon SimpleDB for my application, I had better be sure that Amazon is going to keep the service up and maintained. If for any reason Amazon lets the performance of the tool lag, or decides to cancel the service, I am in a world of hurt. Unless I put a significant amount of abstraction on top of the tool when I work it into my application, moving from one tool to another can be exceedingly difficult.
- Lack of support for common scenarios – Many of the cloud offerings available, such as Google AppEngine and Microsoft Windows Azure, force you to develop your application in a certain way and against a certain set of APIs. While this is fine for new applications, it doesn’t do any good for the millions of custom applications that are out there today. If moving into the cloud means rebuilding our applications, then many companies will never get there.
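The abstraction mentioned under “Lack of Competition” doesn’t have to be elaborate. Here is a minimal sketch of the idea in Python; the class names and the in-memory backend are my own illustration, not any provider’s actual API:

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional


class KeyValueStore(ABC):
    """The narrow interface the application codes against, so the
    provider behind it (SimpleDB, SQL Azure, ...) can be swapped out."""

    @abstractmethod
    def put(self, key: str, value: str) -> None:
        ...

    @abstractmethod
    def get(self, key: str) -> Optional[str]:
        ...


class InMemoryStore(KeyValueStore):
    """Local stand-in used here for illustration; a SimpleDB-backed
    implementation would expose the same two methods."""

    def __init__(self) -> None:
        self._data: Dict[str, str] = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)


# The application only ever sees KeyValueStore, so switching providers
# becomes a one-line change at construction time.
store: KeyValueStore = InMemoryStore()
store.put("customer:42", "Jane Doe")
print(store.get("customer:42"))  # → Jane Doe
```

The point isn’t the two methods themselves, but that the rest of the application never touches the vendor’s API directly, which keeps the exit cost low if the service lags or disappears.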
Number 4 (lack of support for common scenarios) is the issue that I think is being resolved most quickly. In fact, in Amazon’s case I think they have solved it almost entirely. They can fire up Windows or Linux machines, assign them permanent external IP addresses, attach large amounts of persistent storage, load-balance instances, auto-scale instances, put a CDN in front of them, use a relational database, etc… The one thing I think they are missing is the ability to attach large data stores to multiple machines concurrently the way you can with a SAN. With that one exception, they have enabled me to put virtually every application I have ever built into their cloud, which personally I think is really freaking cool.
So where does this leave us?
We are now in a situation where I can go out today and fire up an entire web farm in the blink of an eye, yet customers paying for rack space in a colocated facility still see better economies of scale, even at relatively small sizes. For instance, I could probably rent a rack full of servers from a datacenter for less than it would cost to run a rack’s worth of instances on EC2. That is likely to change very quickly, though, and once the switch happens it will never reverse itself. Datacenters just won’t be able to keep up with the scale that companies like Amazon, Microsoft, Google, and Rackspace will bring to bear. Slowly, as the price and feature disparity widens, you’ll see more and more companies moving into the cloud, until eventually only the largest companies continue to house their own hardware.
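To see why the colocated rack can still win today, it helps to write the comparison down. Every number below is a made-up round figure for illustration only; real rates varied then and vary now:

```python
# Hypothetical prices -- illustration only, not any provider's actual rates.
COLO_RACK_PER_MONTH = 1200.0        # flat fee for the colocated rack
SERVER_AMORTIZED_PER_MONTH = 90.0   # one owned server, amortized over ~3 years
EC2_INSTANCE_PER_HOUR = 0.34        # on-demand rate for a comparable instance
HOURS_PER_MONTH = 730


def colo_monthly_cost(servers: int) -> float:
    """Owned hardware: you pay the full amount whether it's busy or idle."""
    return COLO_RACK_PER_MONTH + servers * SERVER_AMORTIZED_PER_MONTH


def cloud_monthly_cost(instances: int, utilization: float = 1.0) -> float:
    """Cloud: you pay only for the instance-hours you actually run."""
    return instances * EC2_INSTANCE_PER_HOUR * HOURS_PER_MONTH * utilization


# At a steady 20 servers running flat out, the rack still wins...
print(colo_monthly_cost(20))        # 3000.0
print(cloud_monthly_cost(20))       # ~4964.0
# ...but if those machines sit at 30% utilization, the cloud wins.
print(cloud_monthly_cost(20, 0.3))  # ~1489.2
```

The crossover depends entirely on utilization: the flat-rate rack is cheaper for a constant full load, while pay-per-hour pricing wins as soon as demand is bursty or idle much of the time.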
I welcome these coming changes. For small and medium-sized businesses they certainly help level the playing field. When your service becomes popular, you no longer have to struggle with increasing user demand while trying to balance purchases of hardware and rack space. You merely fire up more VMs whenever you (or your algorithms) see that load is surpassing your current capacity. These are abilities that people could only have dreamed of 10 years ago; now they are available to everyone, and yet very few seem to be leveraging them. I encourage you to get out there and try some of these cloud services. You may find something that you really like!
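That “fire up more VMs when load surpasses capacity” decision can be sketched as a simple sizing rule. The function and its parameters are my own illustration of the kind of check an auto-scaler evaluates every few minutes, not any provider’s actual auto-scaling API:

```python
import math


def instances_needed(current_rps: float, rps_per_instance: float,
                     headroom: float = 0.25, minimum: int = 1) -> int:
    """Size the fleet to observed load (requests/sec) plus a safety
    margin, never dropping below a configured floor."""
    target_capacity = current_rps * (1.0 + headroom)
    return max(minimum, math.ceil(target_capacity / rps_per_instance))


# Traffic triples overnight; no purchase orders, just more instances.
print(instances_needed(500, rps_per_instance=300))   # → 3
print(instances_needed(1500, rps_per_instance=300))  # → 7
```

Real auto-scalers layer cooldown periods and scale-down hysteresis on top of a rule like this, but the core idea is just arithmetic over a load metric.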