Cloud vs dedicated vs colocation comparison
In the past couple of months I’ve encountered many calculations, plans and expectations about whether buying hardware, leasing it, or going full cloud is best. The feature differences between cloud and everything else are clear and not the topic here, but the discussion of what is cheaper got me thinking and calculating. Here is the result.
To be clear, my opinion is that buying hardware for web-reachable production services (apps, sites, mail, databases) is nonsense. Hardware loses value, breaks, and requires space, electricity, cooling, maintenance and so on. None of that is your core business (unless you’re a datacenter, an IT maintenance company, or live in a very cool place) and it should not eat your effort.
But, for the sake of calculating, let’s put personal opinions aside, get all the apples into one basket and see which option is more expensive over 2 years. The server I picked for comparison is a Dell PowerEdge with a price tag of around 1500€: a quad-core Xeon server with about 16GB of RAM. The options compared are:
- Own server (Dell PowerEdge with a Xeon E3-1220)
- AWS Extra-large instance (4 virtual cores, each 2 EC2 compute units, 15GB memory)
- Leaseweb (HP ProLiant DL120 with a Xeon X3440, 16GB memory)
- Hetzner (Xeon E3-1245, 16GB memory)
Storage, bandwidth and other optional or included services are excluded for the sake of simplicity.
For colocation I’m comparing the following options:
- TusHosting (Slovenia)
- Leaseweb (Netherlands)
- Hetzner (Germany)
The costs break down as follows:
- Own server: around 1500€, one-time.
- AWS: 1190€ upfront, then 110€ monthly.
- Leaseweb (dedicated): a flat 119.20€ monthly.
- Hetzner (dedicated): 69€ setup fee, then 69€ monthly.
- Colocation: 99€ monthly at TusHosting; 50€ setup and 23.20€ monthly at Leaseweb; 59€ setup and 39€ monthly at Hetzner.
Over 2 years the AWS instance will cost you 5200€, the leased Leaseweb server 2860€ and the leased Hetzner one 1725€. Your own server colocated at TusHosting will cost you 3876€, at Hetzner 2495€ and at Leaseweb 2106€.
|Provider|2 years|
|---|---|
|AWS (Extra-large instance)|5200€|
|Leaseweb (dedicated)|2860€|
|Hetzner (dedicated)|1725€|
|TusHosting (colocation)|3876€|
|Hetzner (colocation)|2495€|
|Leaseweb (colocation)|2106€|
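The dedicated and colocation totals above can be reproduced with a quick sketch (the AWS total is left out because its pricing mixes a reserved-instance fee with usage charges and doesn’t reduce to a flat upfront-plus-monthly formula):

```python
# Two-year cost sketch: hardware (if any) + upfront + 24 * monthly.
# Figures are the ones quoted in the text above.

SERVER_PRICE = 1500  # own Dell PowerEdge, needed for the colocation options

def total_cost(upfront, monthly, months=24, hardware=0):
    """Total cost of ownership over the given period, in euros."""
    return hardware + upfront + monthly * months

offers = {
    "Leaseweb (dedicated)":   total_cost(0, 119.20),
    "Hetzner (dedicated)":    total_cost(69, 69),
    "TusHosting (colocation)": total_cost(0, 99, hardware=SERVER_PRICE),
    "Leaseweb (colocation)":  total_cost(50, 23.20, hardware=SERVER_PRICE),
    "Hetzner (colocation)":   total_cost(59, 39, hardware=SERVER_PRICE),
}

# Cheapest first
for name, cost in sorted(offers.items(), key=lambda kv: kv[1]):
    print(f"{name}: {cost:.2f}€")
```

Running it puts leased Hetzner at the bottom of the bill and colocation at TusHosting near the top, matching the totals quoted above.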
AWS haters will jump for joy at this table. AWS is, without any doubt, the most expensive of the options presented here, but it is also the only completely virtualised one, and we’ll take that into account a little later. What made me smile is that leasing a server at Hetzner is unbelievably affordable. Parking your own server in a relatively small local datacenter is, no surprise, very costly and brings no actual benefit (more on this later).
When picking how to deploy your app, think about the long run. Why sink hard-earned capital into buying expensive hardware when you can get the same thing for a monthly fee? Why finance a purchase of hardware that will be obsolete in 12–24 months? Ownership is not always the best option.
As you know, colocation is a service that does not deal with your hardware. Disk, PSU and cooling-fan failures are yours to handle, and replacements are done either by your team or by the manufacturer’s. Placing your own servers in a data center away from your HQ therefore means somebody will have to travel, and that’s the #1 reason why companies keep servers in-house.

Leasing a server, on the other hand, means your data center has you by the balls. If they sneeze, you catch the flu. If they go bust, you’re left with your own backups. But they are also willing to give you hardware as a service: they are responsible for hardware maintenance, so you only deal with the software side. And the biggest advantage is that the server you’re using is not an asset, meaning you simply scrap it after a year or two and upgrade without any capital investment.
But even a leased server can fail. The average lifetime of an HDD at AWS is around 18 months (I remember Werner Vogels saying that at the NYC 2011 AWS summit, but can’t find the exact quote, so the number may be a couple of months off), meaning there is a high probability your leased server will see at least one HDD failure in 24 months. You mitigate that by keeping disks in a RAID configuration (RAID 5, for example), but still, disks fail. And while an HDD failure can pass without downtime, standard leased servers do not include a redundant PSU. Those fail too.
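To make that “high probability” concrete, here is a back-of-the-envelope sketch. It assumes disk lifetimes are exponentially distributed with the ~18-month mean quoted above — a simplifying assumption on my part, since real drives follow a bathtub curve rather than a constant failure rate:

```python
import math

# Rough disk-failure odds under an exponential-lifetime model with the
# ~18-month mean lifetime mentioned above (an assumption, not measured data).

MEAN_LIFETIME_MONTHS = 18.0

def p_failure(months, disks=1):
    """Probability that at least one of `disks` drives fails within `months`."""
    p_single = 1 - math.exp(-months / MEAN_LIFETIME_MONTHS)
    return 1 - (1 - p_single) ** disks

print(f"single disk over 24 months:   {p_failure(24):.1%}")     # → 73.6%
print(f"any of 4 disks over 24 months: {p_failure(24, 4):.1%}")  # → 99.5%
```

So under this model a lone disk fails within two years about three times out of four, and in a four-disk RAID 5 set at least one failure is near-certain — which is exactly why the array, not the individual disk, has to be the unit you rely on.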
When an IT guy from Vimeo got on stage at the AWS summit in 2011, he said one thing that neatly summarizes why IaaS is a better option than any hybrid or non-cloud solution:
> “AWS lets us treat infrastructure as software.”

— Miran Hojnik (@mhojnik) June 10, 2011
Don’t get me wrong, “cloud” providers fail too. And when they fail, they fail big. But failure is just a part of evolution, and no evolution goes forward without failure.
Why IaaS is still the best option
Despite being relatively the most expensive, IaaS is the best option available. You don’t need to jump on that ship right away, but in the long run it is the only thing that makes sense. There are a lot of “IT” specialists who are skeptical of, if not openly against, any “cloud” solution, and if you ask them why, they will answer: “I don’t want my data to be available to some unknown admin.” That answer is simply untrue, but it is perfectly in line with the five stages of grief, where the first step is denial. Your data is actually more secure in a “cloud” environment, because the security practices there are far stricter than anything you would implement in your own mini data center. The real reason for the denial is fear of losing control and standing within a structure.
The effort required to deploy a single compute instance in an IaaS environment drops drastically once the action becomes repetitive. The same applies to sporadic, isolated software failures: instead of trying to fix one instance, it is much cheaper and simpler to remove it from the pool and replace it with a healthy one.
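That “replace, don’t repair” pattern can be sketched in a few lines. The `Instance` class and `provision` callback here are hypothetical stand-ins for a real provider API, not anything from the article:

```python
# Sketch of the replace-not-repair pattern: a reconciliation pass drops any
# instance whose health check fails and provisions a fresh one in its place.
# Instance and provision() are illustrative placeholders for a provider API.

class Instance:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def health_check(self):
        return self.healthy

def reconcile(pool, provision):
    """Swap out unhealthy instances; `provision` creates a replacement."""
    fresh = []
    for inst in pool:
        if inst.health_check():
            fresh.append(inst)
        else:
            fresh.append(provision())  # no debugging on the box, just replace
    return fresh

pool = [Instance("web-1"), Instance("web-2", healthy=False)]
counter = iter(range(3, 100))
pool = reconcile(pool, lambda: Instance(f"web-{next(counter)}"))
print([i.name for i in pool])  # → ['web-1', 'web-3']
```

The point is that the fix is a loop anyone can automate, not a late-night debugging session on one specific machine.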
All these bits and pieces make IaaS true value for money. But IaaS is expensive, and when you are just starting out it is overkill.
The proposed path
We use both virtual instances and leased dedicated servers. Commercial hosting services live on dedicated servers (the CPU/RAM/HDD bulk benefit), while critical apps (billing, control panel) run on virtual instances. We use a total of 4 different providers, all in different data centers across Europe. We use AWS S3 for backups, Route53 for hosting the main cloudhome.eu zone and Simple Queue Service for certain critical queues. The two name servers used for customers’ DNS hosting sit on separate networks.
The key is to use the resources you need in a practical way. Instead of scaling up, we scale out. We group services based on how they are organized and make heavy use of web services. Queue systems allow us to split applications apart and isolate failures.
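The failure-isolation property of a queue is easy to see in miniature. This sketch uses Python’s in-process `queue` module purely for illustration — the same shape applies with SQS between real services — and the job names are made up:

```python
import queue

# Minimal sketch of queue-based decoupling: the producer only talks to the
# queue, so a consumer failure never propagates back to it. In-process here;
# the same pattern holds with a hosted queue like SQS between services.

jobs = queue.Queue()

def produce(payloads):
    for p in payloads:
        jobs.put(p)  # fire and forget; producer never sees consumer errors

def consume():
    processed, failed = [], []
    while not jobs.empty():
        job = jobs.get()
        try:
            if job == "bad":
                raise ValueError("poison message")
            processed.append(job.upper())
        except ValueError:
            failed.append(job)  # isolate the failure, keep draining the queue
    return processed, failed

produce(["resize-img", "bad", "send-mail"])
done, dead = consume()
print(done)  # → ['RESIZE-IMG', 'SEND-MAIL']
print(dead)  # → ['bad']
```

One bad message lands in a dead pile instead of taking the whole pipeline down, and the producer never even notices.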
There are countless apps and services that let you organize your app regardless of the infrastructure it will run on. Think about expected failures, process chains and usage hot spots. When you design your app with those in mind, queue, cache and storage systems become much easier to implement, and migrating to another infrastructure provider, when needed, becomes far less painful.
Building your app around one specific type of infrastructure is very dangerous. When CDNs became popular, many hours were spent splitting static from dynamic content. A very popular news site in Croatia invested ~14k€ in a single server to cope with demand, only because its background apps were built to run on a single instance. When that server went down, their entire portfolio went down.
What type of infrastructure you use is, in the end, a question of taste. There are countless arguments for one or the other, but ultimately it is your responsibility to make the best use of what you have at hand. Remember what the guys at Vimeo said: “AWS lets us treat infrastructure as software”. Think about that the next time you buy a new 3k€ server.