
A four-step guide to tripling your savings on AWS deployments


The playground for cloud infrastructure providers is a busy one, as the most recent Gartner report shows, and Amazon Web Services (AWS), though challenged by Microsoft Azure, still looks like the unassailable leader. What makes Amazon appear a cure-all solution, and how many of its services actually bring tangible benefits to cloud newcomers?

AWS comes with a stack of tempting benefits: elastic capacity, scalability, speedy and agile delivery, freedom to focus on the business, and global reach. However, CXO-level decision-makers are often cautious, and with good reason, when it comes to migrating their apps to the cloud. Besides security concerns and prejudices surrounding cloud infrastructure, there is another underlying reason: who wants to go through the trouble of revamping their architecture for outcomes that are hard to calculate?

Amazon’s cloud has over 15 services, but you are not likely to need all of them. We have distilled our experience of helping one of our clients cut their monthly cloud expenses threefold and come up with this guide.

Painless migration in 1-2-3

Before plunging headfirst into migration, you need to put the existing infrastructure and processes under a microscope to find out what you are actually going to migrate. This helps a great deal with the financial assessment, where you compare the bottom-line costs of running an in-house data centre or co-located facilities with the expenses of a cloud-based architecture. After a security, technical and functional assessment to decide which parts of the infrastructure are best suited to the cloud and can be moved, you can go on to develop a detailed roadmap.

Surprisingly, most parts of the existing architecture can be re-implemented in the cloud as they are and only modified later, so do not view the legacy architecture as a roadblock but rather as a springboard on the way to the cloud. NB: if you use third-party licensed products, do check their terms and conditions and how AWS policies work with them.

When we started exploring the topic of Amazon’s cloud we expected problems at this stage, but it turns out it is possible to move live services without disruptive downtime. The key to success is to act step by step.

Prepare

At the initial stage the task is to get familiar with the AWS capabilities and to recreate in the cloud the same services you run on your dedicated servers. A service like Amazon EC2 makes it quite easy to add a new server, set up security and calculate the costs. Already at this stage you will have a fairly accurate estimate of the cloud capacity and associated costs needed to support your existing architecture. But this is only the first step – the next two are much riskier.
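To make this step more concrete, here is a minimal sketch of launching a single EC2 instance with the AWS SDK for Python (boto3), assuming your AWS credentials are already configured. The region, AMI ID, instance type and security group below are placeholders for illustration, not a description of any client's actual setup.

```python
import boto3

# Assumes credentials are configured (environment variables or ~/.aws/credentials).
ec2 = boto3.client("ec2", region_name="eu-west-1")

# Launch one instance that mirrors an existing dedicated server.
# The AMI ID, instance type and security group are placeholders; pick ones
# that match the OS and capacity of the server you are recreating.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",            # hypothetical AMI ID
    InstanceType="m4.large",           # size against the current server's CPU/RAM
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=["sg-xxxxxxxx"],  # hypothetical security group
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "app-server-clone"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```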

Clone

The next big thing is to move the data from the old server to the cloud. You can start by creating a relational database in the cloud with Amazon RDS, a service that supports replication from outside sources. This way, a copy of your data will be hosted in the cloud while the end users continue to use the old system. The synchronisation of data between the two systems should be kept running until the cloud infrastructure is fully tested and ready for finalising the migration.
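As an illustration of this step, the sketch below creates a MySQL instance in RDS with boto3. The identifier, instance class, credentials and storage size are placeholders; configuring replication from the external master is a separate step performed once the instance is available.

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# Create a MySQL instance in RDS to hold a copy of the on-premises data.
# Identifiers and credentials below are placeholders for illustration only.
rds.create_db_instance(
    DBInstanceIdentifier="app-db-replica",
    Engine="mysql",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=100,              # GiB, size it to the current database
    MasterUsername="admin",
    MasterUserPassword="change-me",    # use a secrets store in practice
    MultiAZ=True,                      # keep a standby in a second Availability Zone
    BackupRetentionPeriod=7,
)
# Once the instance is available, replication from the external (on-premises)
# MySQL master is configured on the RDS side, and the data stays in sync
# until you are ready to switch.
```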

Switch

At the final stage you want to re-route your clients to the cloud-based system. This is a critical task, and it can be handled with the help of Amazon Route 53, a highly available and scalable cloud DNS service. Using the domain name system allows you to create a transitional workflow: for some period the users will be accessing either the old or the new system, while the data continues to be copied from the dedicated server. The switch itself starts with setting a new IP for the domain name and lowering the time to live (TTL) of your DNS records to a minimum of 60 seconds at this stage. Once the new system is secure and stable, the DNS switch will go almost unnoticed by the end users.
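The DNS change itself can be scripted, for example with boto3 and Route 53, as in the sketch below. The hosted zone ID, domain name and IP address are placeholders; the 60-second TTL keeps a rollback quick if anything goes wrong.

```python
import boto3

route53 = boto3.client("route53")

# Point the domain at the new cloud-hosted system. The hosted zone ID, domain
# name and IP address are placeholders. TTL is set to 60 seconds so that a
# rollback, if needed, propagates quickly.
route53.change_resource_record_sets(
    HostedZoneId="Z_EXAMPLE_ZONE_ID",
    ChangeBatch={
        "Comment": "Switch traffic to the cloud-based system",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # new public IP
            },
        }],
    },
)
```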

Why is it all about scalability?

When speaking about the cloud, one benefit that is often hyped is the flexibility you gain once you can scale the capacity of your servers up and down depending on your current needs. What practical issue does it address?

To visualise the problem caused by system overload, imagine a busy petrol station with only two pumps available to serve the clients. Only two cars can be refuelling at any given moment, while the others queue. The line grows, and once the station's physical space is filled, new cars cannot even enter. To ease the situation and dissolve the queue, the logical move is to open more pumps. The same happens with a server: it can process only a limited number of requests per minute, and if the traffic to your site increases it needs extra capacity to handle it.

For example, before migrating to AWS one of our customers operated 17 dedicated database servers and 20 application servers, and they had to maintain all of them at all times even when the load was lower than their capacity. On the other hand, at peak times it would take them days if not weeks to add extra capacity to meet the growing demand. That created a lose-lose situation: they had to either pay for idle time or lose business through disruptions in service availability when the system was overloaded.

Does Amazon Web Services help to solve this problem? Yes, thanks to traffic balancing and horizontal scaling, complemented by software optimisation and on-demand availability of server capacity. There is a trade-off here – to enjoy the advantages you have to invest certain efforts first. You will have to implement a shared-nothing architecture, which means splitting the app into several parts, each independent and self-sufficient, and then scaling separate parts of it, for example, to serve more clients.

Scalability is further supported by Elastic Load Balancing. This service automatically distributes incoming application traffic across multiple instances, and even across multiple physical Availability Zones in the cloud.
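For illustration, a Classic Elastic Load Balancer spanning two Availability Zones can be created and wired to existing instances with boto3, as sketched below; the balancer name, zones and instance IDs are placeholders.

```python
import boto3

elb = boto3.client("elb", region_name="eu-west-1")

# Create a Classic Elastic Load Balancer spanning two Availability Zones.
# Name, zones and instance IDs below are placeholders.
elb.create_load_balancer(
    LoadBalancerName="app-elb",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    AvailabilityZones=["eu-west-1a", "eu-west-1b"],
)

# Attach the application instances; the balancer then spreads traffic across
# them and keeps serving even if one zone becomes unavailable.
elb.register_instances_with_load_balancer(
    LoadBalancerName="app-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"},
               {"InstanceId": "i-0fedcba9876543210"}],
)
```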

Manual scaling vs. auto-scaling

Maximising the use of cloud resources through scaling might be a way to partially justify the migration efforts and the rather high costs of extra services. Amazon attracts clients with the promise of auto-scaling, which makes maintenance far easier. Here is how it works: the Amazon CloudWatch web service makes your cloud system transparent and gives insight into how the resources are utilised, how the application performs and how healthy it is. This service collects and analyses information from log files and metrics, and based on these data it can anticipate resource needs such as request spikes.

Based on the data gathered by this service, you can go on to set up auto-scaling, which means that more server or database capacity will be activated when it is most needed. Depending on the type of your business, the spikes may vary by time of day, day of the week (e.g. weekend activity) or time of the year (e.g. seasonal or holiday spikes), so scaling your capacity up just before a spike starts, and down after it ends, will help you serve more customers, save money by reducing server idle time and ensure that the service is stable and available at all times.
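For predictable, calendar-driven spikes, scaling can also be scheduled in advance. The sketch below uses boto3 scheduled actions on a hypothetical Auto Scaling group; the group name, sizes and cron expressions are placeholders chosen purely for illustration.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Scale out ahead of a known weekly peak (here: Friday 08:00 UTC)
# and scale back in afterwards. All names and sizes are placeholders.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="app-asg",
    ScheduledActionName="friday-morning-scale-out",
    Recurrence="0 8 * * 5",   # cron syntax, UTC
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=6,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="app-asg",
    ScheduledActionName="friday-night-scale-in",
    Recurrence="0 22 * * 5",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)
```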

Understanding the peak scenarios, you can set up rules for automatic scaling. Put simply, such a rule may sound like this: if the traffic increases by 20%, a new server machine is added. For example, one of our clients normally receives 1,000 requests per minute, while during Black Friday this figure skyrocketed to 4,000 requests per minute within a short time. Because more servers were activated automatically, end users did not suffer from slow service and the system operated normally.
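This was not necessarily our client's exact configuration, but a rule of this kind can be expressed as a CloudWatch alarm wired to an Auto Scaling policy, roughly as sketched below; the group name, load balancer name and threshold are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")
cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

# Scale-out rule: when load rises past the threshold, add one instance.
# Group name, balancer name and threshold are placeholders; in the Black
# Friday example above, the threshold would sit a little over the normal
# ~1,000 requests per minute.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",
    PolicyName="scale-out-on-traffic",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,      # add one server per alarm trigger
    Cooldown=300,             # wait 5 minutes before the next adjustment
)

# CloudWatch alarm that watches request volume on the load balancer and
# fires the scaling policy when traffic exceeds ~1,200 requests per minute.
cloudwatch.put_metric_alarm(
    AlarmName="high-request-count",
    Namespace="AWS/ELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "app-elb"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1200,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```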

Squeezing more value from AWS

Once you have the scalability functionality of the cloud up and running, it is time to take a breath and look at additional features from the AWS stack that can enrich your application and improve cost-efficiency. Our favourites are search, databases and, to some extent, the content delivery network.

Why bother with in-house development efforts on search?

The search functionality for applications and websites hosted in the cloud can be powered by the Amazon CloudSearch service. The core advantages of this service are elasticity, speed and scaling, which means that your website will be able to handle even 400 million customers should they simultaneously decide to search for a specific item across your assets.

Additionally, setting up CloudSearch will make it easier to index your files and configure the search parameters so that search feels smart and convenient to your customers and users. For example, features such as free-text, Boolean and faceted search, as well as autocomplete suggestions, are taken for granted by Internet users because of their availability in the major search engines, but they require complex algorithms and may be costly to implement as standalone functionality.

However, if you make use of the already implemented AWS features, including support for 34 languages, your users will feel at home navigating your system. Another business driver for you might be customisable relevance ranking and query-time rank expressions, which help you better target your marketing efforts and show up in the right place at the right time with your offering.
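Once a CloudSearch domain is set up and documents are indexed, queries go to the domain's own search endpoint. The sketch below shows a simple free-text query with boto3; the endpoint URL is a placeholder, as every domain gets its own.

```python
import boto3

# Query an existing CloudSearch domain. The endpoint URL is a placeholder --
# each CloudSearch domain exposes its own search endpoint.
search = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-products-xxxxxxxx.eu-west-1.cloudsearch.amazonaws.com",
)

# A simple free-text query; the structured parser and facet options cover
# Boolean expressions and faceted navigation.
results = search.search(
    query="running shoes",
    queryParser="simple",
    size=10,
)
for hit in results["hits"]["hit"]:
    print(hit["id"], hit.get("fields", {}))
```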

Struggling with big data

As your business grows, so does the amount of data it generates. This may include various types of operational metrics, logs, application status updates, access entries or configuration changes, geolocation information about objects, or process statuses for activities in a workflow. On the one hand, large amounts of such information have to be stored to meet numerous legal and archival regulations; on the other hand, business data acquires commercial value once you decide to leverage it in business analysis. For some businesses, such as online gaming, it is also vital that requests to the database are processed quickly and updates from multiple gamers are synchronised in real time.

Configuring and maintaining large databases is a mundane and resource-consuming endeavour, though you can get some relief with Amazon SimpleDB, which can help to ease your administration costs a bit. SimpleDB is pretty much what it says it is: a flexible non-relational data store that is marketed as “highly available” and “easy to administer”. That holds true for the small response times, but another advantage that has caught our eye is the geographical distribution of your data across multiple locations. This is pretty useful: even if one of the locations fails, your service will remain uninterrupted.
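As a small taste of how lightweight the administration is, the sketch below creates a SimpleDB domain, stores an item and queries it with boto3; the domain, item and attribute names are placeholders, and note that SimpleDB stores all values as strings.

```python
import boto3

sdb = boto3.client("sdb", region_name="eu-west-1")

# Create a domain (roughly, a table) and store one item with a few attributes.
# Domain, item and attribute names are placeholders.
sdb.create_domain(DomainName="game_sessions")
sdb.put_attributes(
    DomainName="game_sessions",
    ItemName="session-42",
    Attributes=[
        {"Name": "player", "Value": "alice", "Replace": True},
        {"Name": "score", "Value": "01750", "Replace": True},  # values are strings
    ],
)

# Query with a SQL-like select expression.
result = sdb.select(
    SelectExpression="select * from `game_sessions` where player = 'alice'"
)
for item in result.get("Items", []):
    print(item["Name"], item["Attributes"])
```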

Content delivery network

Another feature from Amazon is the content delivery network provided by the Amazon CloudFront service. It does not offer an upfront advantage and would not make much difference for you if you run a small local business. Nevertheless, it is worth considering in the long run, especially for companies with customers spread across different continents. By integrating with other Amazon Web Services products, it helps to distribute content quickly and securely and to reach end users with low latency. Because of its fault tolerance and backups in other locations, we would recommend this service to public limited companies whose investors demand quick disaster recovery.

The bottom line

The odds are that cloud technologies are here to stay, but like any major infrastructure change they require time and effort to implement, as well as an adaptation period. AWS does offer a bunch of sophisticated services, but it also costs a lot of money. If you have fewer than 10 servers, it is probably too early to migrate to AWS. But once you cross this threshold and feel that your business is being affected by traffic surges, migration may save you up to 60%.

The migration strategy described in this article is just one of many ways to do it; based on our experience, we aimed for slow immersion and step-by-step exploration of the cloud. The major hidden perk of un-grounding your business system is that future innovations and changes will be easier to implement, and the price of an experiment, even a failed one, will be considerably lower. And in today’s race for innovation, the game is worth the candle.