What’s the Best Way to Deploy in the Cloud with Amazon Web Services?
Today most software vendors can run their applications in the cloud, thanks to the work of companies like Amazon, Rackspace, and others. This gives users the option of deploying their applications there.
Safe is no exception, as we no longer have any physical servers for our web presence. We now run all of our training classes via Amazon Web Services (AWS) and allow our customers to deploy FME Server on AWS (our trial program gives people the option of using an AMI or installing on their own cloud-based machine). There is, however, a big difference between simply running in the cloud and leveraging the cloud.
New Capabilities Offered by the Cloud
Many applications are really “spiky” in their usage patterns; that is, they tend to have periods of low (or consistent) use followed by times of heavy use. This usage pattern is problematic for traditional IT deployments and is one example where the elasticity of the cloud can take solutions to the next level. A cleverly architected cloud application scales its resources to match the current demand on the system. The trick is to exploit the AWS (or other provider’s) pricing model so that this scaling minimizes cost.
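To make the idea concrete, here is a minimal sketch in Python of the kind of decision an elastic application might make: compare the current demand to the current capacity and add or drop workers accordingly. The names and thresholds (pending_jobs, launch rules, jobs-per-worker) are hypothetical placeholders for illustration, not Safe’s or AWS’s actual scaling logic.

```python
# A minimal, hypothetical sketch of demand-driven scaling.
# All names and numbers are illustrative, not a real AWS or FME Server API.

JOBS_PER_WORKER = 10      # assumed capacity of one worker instance
MIN_WORKERS = 1           # always keep at least one instance running
MAX_WORKERS = 20          # cap spend during extreme spikes

def desired_workers(pending_jobs: int) -> int:
    """How many workers the current queue depth calls for."""
    needed = -(-pending_jobs // JOBS_PER_WORKER)   # ceiling division
    return max(MIN_WORKERS, min(MAX_WORKERS, needed))

def rebalance(pending_jobs: int, running_workers: int) -> int:
    """Return how many workers to add (positive) or drop (negative)."""
    return desired_workers(pending_jobs) - running_workers

# Example: a spike of 73 queued jobs arrives while 3 workers are running.
print(rebalance(pending_jobs=73, running_workers=3))   # -> 5 more workers needed
```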
Highway Model vs Electricity Model
In the past, organizations have had to use the “highway” model for compute resources: build enough infrastructure to handle peak times. As with highways, this approach tends to force a compromise, because it is rarely feasible to build infrastructure that handles peak loads with no degradation in service. The result is that during quiet times there is excess capacity sitting idle, while at peak times the system is overloaded and response slows.
The cloud gives us the ability to deploy massively scalable solutions where resources are added and dropped as needed. This is similar to how we purchase electricity: we pay only for what we use.
Electricity Model 2.0 and Amazon’s Licensing Models
The electricity model is the basis of Amazon’s three general types of pricing: on-demand, reserved instances, and the spot market.
- On-Demand: This is AWS’s default and is great for experimenting or for uses in which the machines are not up all the time. You pay a premium hourly rate for the flexibility of making no commitment to Amazon: no commitment = higher cost. While this works for any application, for machines that are running all the time you can significantly reduce costs by using a different pricing model. In the electricity analogy, this maps to how small consumers pay for power: I pay only for what I use, but I am not committed to buying any.
- Reserved Instances: With reserved instances you are entering into a more committed relationship with Amazon. I won’t get into all of the details, but for an annual fee you get a reduced hourly rate on your instances. There are several levels, all of which essentially follow the rule “the bigger the commitment, the lower your total cost”, and using them you can easily reduce your costs by 40% (see the rough cost sketch after this list). Again, this maps to how very large power consumers negotiate with the power companies: a large consumer guarantees to buy “x” amount of electricity in exchange for a lower unit cost.
- Spot Market: Here you bid on instances, with the price fluctuating according to current market demand. You specify the maximum price you are willing to pay, and you keep access to the instance until the market price rises above that maximum. Used properly, you can reduce your costs by using “excess cloud capacity” to improve the throughput of your system. However, there is no guarantee that you will get resources, which is an important consideration for mission-critical applications. A spot market exists for electricity too, enabling companies to buy power at lower rates when there is excess supply. As with any spot market you need to be careful, because you could also pay more if resources become tight.
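To see how utilization and commitment interact, here is a rough back-of-the-envelope comparison in Python. The hourly rates and the reserved-instance fee are made-up illustrative numbers, not actual AWS prices; the point is only to show how the effective yearly cost shifts between the three models.

```python
# Hypothetical rates for one instance type -- NOT real AWS prices.
HOURS_PER_YEAR = 24 * 365

ON_DEMAND_RATE = 0.50          # $/hour, no commitment
RESERVED_FEE = 1000.00         # $/year up-front commitment
RESERVED_RATE = 0.18           # $/hour once reserved
SPOT_RATE = 0.15               # $/hour, assumed average winning bid

def yearly_cost(utilization: float) -> dict:
    """Cost of running one instance for the given fraction of the year."""
    hours = HOURS_PER_YEAR * utilization
    return {
        "on-demand": ON_DEMAND_RATE * hours,
        "reserved": RESERVED_FEE + RESERVED_RATE * hours,
        "spot": SPOT_RATE * hours,   # assuming capacity is actually available
    }

# An always-on instance: reserved is roughly 40% cheaper than on-demand.
print(yearly_cost(1.0))
# A machine used only a few hours a week: the on-demand premium is cheaper
# than paying the up-front reserved fee.
print(yearly_cost(0.05))
```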
So What’s the Right Model?
I would argue that there’s no single right answer; it’s up to each organization to assess its own needs and requirements. At Safe, we’re using reserved instances more and more as we identify how many AWS instances we need on a continual basis. We still use on-demand for experimenting and for demand peaks.
For our clients and anyone else running on AWS, we recommend you seriously examine the pricing models. For anything that has a regular usage pattern, you should consider reserved instances. We are now looking at how to enable our products to play the spot market to drive instance costs down further. One tool that looks very promising is StarCluster from MIT. We will keep you posted on our experience with all of this and with FME Server deployments in the cloud.
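For the curious, a spot request can be made programmatically. The sketch below uses boto3 (the AWS SDK for Python) with a placeholder AMI ID and a made-up maximum price, so treat it as an illustration of the idea rather than a recipe for any particular product.

```python
import boto3

# A minimal sketch of bidding on the EC2 spot market with boto3.
# The AMI ID, instance type, and maximum price are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.10",               # maximum price we are willing to pay ($/hour)
    InstanceCount=1,
    Type="one-time",                # release the instance when we are done with it
    LaunchSpecification={
        "ImageId": "ami-00000000",  # placeholder AMI (e.g. an FME Server image)
        "InstanceType": "m3.large",
    },
)

for request in response["SpotInstanceRequests"]:
    # The request is fulfilled only while the market price stays at or below
    # our maximum; otherwise it waits (or the instance is reclaimed).
    print(request["SpotInstanceRequestId"], request["State"])
```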
Have you deployed anything in the cloud? If so, how did it go? Which models worked best for your situation?