The launch of Exocraft.io and the flood of traffic that followed exposed the flaw in our reliance on Amazon Web Services: bandwidth cost. Within a month of launching, our bill surged nearly 10x, entirely due to bandwidth charges on our EC2 servers (it would have been worse if we weren't serving our static assets through CloudFlare). So, we set out to find an alternative provider that could offer similar performance at a more predictable cost.
No benchmark can be a perfect representation of real-world performance, but tracking multiple metrics over time gets us close enough to make some reasonable decisions. We were specifically interested in network speeds, CPU performance and disk I/O.
The virtualized nature of cloud hosting makes benchmarking over a period of time vital to getting the full picture. To capture these values consistently across providers, we wrote and open-sourced a Node.js CLI tool called cloud-bench and published it on npm. With this tool, you can specify a time frame to benchmark over and get back a CSV with a variety of metrics:
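As a rough sketch of the workflow -- the flag names below are illustrative placeholders rather than the tool's actual interface:

```bash
# install the CLI globally from npm
npm install -g cloud-bench

# hypothetical run: benchmark for 24 hours and write the collected metrics to a CSV
# (flag names are placeholders -- the cloud-bench README documents the real options)
cloud-bench --duration 24h --output results.csv
```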
We made the tool general-purpose so that it works on a variety of cloud platforms and so that others can use it for their own testing. Simple usage examples are included in the repo's README, making it trivial to run extended benchmarks against any cloud platform you're considering.
Since our goal was lower and more predictable costs, we ruled out major players like Google and Azure, which suffer from the same high bandwidth pricing as AWS. Our focus turned instead to lower-cost providers that bundle bandwidth with their plans and charge little for overages. Their prices were all fairly similar, so the deciding factor came down to which provider offered the best mix of performance across our three main metrics.
This isn't a perfect apples-to-apples comparison for a few reasons. First, Packet offers dedicated hardware, but its Intel Atom C2550 doesn't stand a chance against the Xeon processors the others offer. Second, we tested CPU-optimized servers on Amazon EC2 and DigitalOcean because CPU performance is what matters most to us; Linode and Vultr don't offer a CPU-optimized tier, but in every case we tested the option we would actually use if we went with that provider.
We ran benchmarks for 24 hours in parallel on all providers to get a sense of average performance and consistency over a full day. The following charts display the results for the main metrics collected.
The CDN download test used 5 different CDNs to benchmark download speeds from various locations around the world. DigitalOcean was the clear winner with an average speed of 47 Mbps, though all but Vultr were in a reasonable range.
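Under the hood the idea is simple: download a file from a CDN edge and time the transfer. Here is a minimal Node.js sketch of that approach -- the URL is a placeholder, not one of the endpoints cloud-bench actually hits, and a larger file would give a steadier number:

```js
const https = require('https');

// Placeholder CDN-hosted file -- cloud-bench uses its own set of 5 CDN endpoints.
const url = 'https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js';

function measureDownloadMbps(fileUrl) {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    let bytes = 0;
    https.get(fileUrl, (res) => {
      res.on('data', (chunk) => { bytes += chunk.length; });
      res.on('end', () => {
        const seconds = Number(process.hrtime.bigint() - start) / 1e9;
        resolve((bytes * 8) / 1e6 / seconds); // megabits per second
      });
    }).on('error', reject);
  });
}

measureDownloadMbps(url).then((mbps) => console.log(`~${mbps.toFixed(1)} Mbps`));
```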
Ping was calculated using the speed-test module, which queries the nearest speedtest edge node (meaning it could have been hosted in the same datacenter). Lower is better, and all hosts were reasonably consistent.
Download speed also used the speed-test module, and in this case DigitalOcean and Amazon EC2 roughly doubled the speeds of the other hosts. Keep in mind, though, that the speedtest nodes may have been hosted in those same datacenters.
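Ping, download, and upload all come from that module, which is also installable as a standalone CLI from npm, so you can spot-check a box by hand:

```bash
# one-off manual check using the speed-test package from npm
# (the --json flag is an assumption about the package's interface -- verify against its docs)
npx speed-test --json
```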
Upload speeds paint a similar picture as downloads, though all of the hosts were even closer in range.
The CPU test performed 2.5 million random hashing and array operations and measured the time it took to complete. Amazon EC2 slightly edged out the rest, with DigitalOcean a close second. Linode and Vultr were nearly twice as slow, and Packet came in nearly 5x slower (again, expected given Packet's dedicated but slower CPU). This test really shows that the CPU-optimized tiers on EC2 and DigitalOcean aren't just marketing.
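For context, a rough Node.js sketch of that style of CPU test: hash a random buffer and churn an array in a tight loop, then report the elapsed time. The iteration count mirrors the figure above, but the exact workload mix cloud-bench runs may differ.

```js
const crypto = require('crypto');

// Hypothetical CPU workload: random hashing plus array churn, timed end to end.
// 2.5 million iterations mirrors the article; cloud-bench's exact mix may differ.
const ITERATIONS = 2500000;

const start = process.hrtime.bigint();
let arr = [];
for (let i = 0; i < ITERATIONS; i++) {
  // hash a small random buffer
  crypto.createHash('sha256').update(crypto.randomBytes(16)).digest('hex');
  // simple array work: push values, periodically sort and reset
  arr.push(Math.random());
  if (arr.length >= 1000) {
    arr.sort();
    arr = [];
  }
}
const seconds = Number(process.hrtime.bigint() - start) / 1e9;
console.log(`CPU test completed in ${seconds.toFixed(2)}s`);
```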
All of the disk speed tests used fio to calculate IOPS. Higher is better here, and every host except DigitalOcean appeared to enforce a cap. Amazon EC2's cap was so low that it could not complete the test at all and returned no results.
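fio itself is a standard Linux tool; a representative 4k random-I/O invocation looks like the following, though the job parameters here are typical defaults rather than necessarily the exact job cloud-bench runs:

```bash
# 4k random reads against a 1 GB test file for 60 seconds, reporting IOPS
# (job parameters are illustrative; cloud-bench's actual fio job may differ)
fio --name=randread --rw=randread --bs=4k --size=1G --direct=1 \
    --ioengine=libaio --iodepth=64 --runtime=60 --time_based
# swap --rw=randread for --rw=randwrite to get the write IOPS numbers
```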
Write speed had a much smaller range than read and actually favored Packet and Vultr, though by a smaller margin than DigitalOcean's lead on read. It's worth noting that DigitalOcean's read and write IOPS were much less stable, likely because it doesn't impose the hard caps the other providers do.
IO ping was calculated with the ioping tool; it is similar to network ping but measures disk latency instead. Again, Amazon EC2 failed to complete this test, while DigitalOcean showed by far the best consistency and speed across the 24 hours.
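ioping works much like its network namesake: it issues small requests against a directory or device and reports per-request latency. A quick manual check against the current directory (the request count of 10 is arbitrary):

```bash
# measure disk I/O latency with 10 requests against the current directory
ioping -c 10 .
```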
The averages for each metric and each provider are outlined in the following table.
| Metric | Amazon EC2 | DigitalOcean | Linode | Packet | Vultr |
| --- | --- | --- | --- | --- | --- |
| CDN Download (Mbps) | 37 | 47 | 31 | 32 | 25 |
| Ping (ms) | 4.94 | 4.42 | 7.88 | 5.52 | 6.88 |
| Download (Mbps) | 1,059 | 1,234 | 575 | 652 | 602 |
| Upload (Mbps) | 788 | 747 | 565 | 807 | 708 |
| CPU Time (secs) | 10.31 | 12.01 | 19.05 | 52.28 | 22.41 |
| Read IOPS | N/A | 119,230 | 37,316 | 39,921 | 53,733 |
| Write IOPS | N/A | 29,401 | 12,690 | 37,674 | 39,208 |
| IO Ping (μs) | N/A | 229 | 901 | 437 | 571 |
The goal of these tests was to find a way to lower our costs without sacrificing too much performance. DigitalOcean delivered on both counts, besting EC2 in every category but CPU (and even that was close). We've since moved Exocraft.io to DigitalOcean and realized a cost reduction of over 80%, a saving that will only grow as our bandwidth usage increases. On top of the performance and cost gains, we found DigitalOcean's interface and support far more approachable than the alternatives.
Some will argue that the best possible price-to-performance comes from dedicated servers. That can't be denied, but it ignores the many benefits of hosting in the cloud. We've written our backend and structured our infrastructure to take full advantage of the ease with which cloud servers can be scaled up and down (both vertically and horizontally). In fact, we wrote and open-sourced another library called healthcare.js that lets you specify your DigitalOcean infrastructure as configuration and automatically grow, shrink, and heal your servers with no single point of failure.
So, while cloud hosting might not be a perfect fit for everyone, if your use case calls for it, know that you have more viable options than just the "big three." I'd encourage you to use cloud-bench to do your own testing and find the solution that best suits your project's needs.