Category Archives: AWS

Study Finds Enterprise Cloud Focus Shifting From Adoption to Optimization

Cloudyn, together with The Big Data Group, has released the latest AWS customer optimization data, reinforcing the positive growth trend expected for the year ahead.

“We set out to evaluate whether the projected 2013 ‘year of cloud optimization’ is on course and discovered that we are well into the public cloud adoption life cycle. In 2011 and 2012 the conversation centered around how and when to move to the cloud. Now it is all about companies looking for efficiencies and cost controls,” commented David Feinleib, Managing Director of The Big Data Group.

The study, based on over 450 selected AWS and Cloudyn customers, highlights a more mature approach to cloud deployments, reflected in a deeper understanding of where inefficiencies lurk and how to optimize them. EC2 accounts for 62% of total AWS spend, with more than 50% of customers now using Reserved Instances in their deployment mix. However, On-Demand pricing remains the top choice for most, accounting for 71% of EC2 spend. Even for customers using reservations, there is still opportunity for further efficiency.

For example, Cloudyn’s Unused Reservation Detector has assisted customers in finding a startling 24% of unused reservations. These can be recycled by relocating matching On-Demand instances to the availability zone of the unused reservation.
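Cloudyn hasn’t published the matching logic behind its detector, but the idea can be sketched in a few lines of Python. The field names and record shapes here are illustrative, not Cloudyn’s or AWS’s actual data model:

```python
from collections import defaultdict

def match_unused_reservations(unused_reservations, on_demand_instances):
    """Suggest which On-Demand instances could be moved into the
    availability zone of an idle reservation of the same type."""
    # Index idle reservation capacity by (instance_type, availability_zone).
    idle = defaultdict(int)
    for r in unused_reservations:
        idle[(r["type"], r["az"])] += r["count"]

    moves = []
    for inst in on_demand_instances:
        for (rtype, az), count in idle.items():
            # Relocate only instances of the matching type that are
            # currently running in a different availability zone.
            if count > 0 and inst["type"] == rtype and inst["az"] != az:
                moves.append({"instance": inst["id"], "move_to": az})
                idle[(rtype, az)] -= 1
                break
    return moves
```

Each suggested move converts an On-Demand instance into one covered by an already-paid-for reservation, which is where the recovered 24% would come from.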

There is also a shift away from large instance types to medium ones: two medium instances cost the same as one large but can produce 30% more output. However, with the most popular instance types running at utilization rates of only 8-9%, there is certainly more work to be done on the road to cloud optimization.
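The arithmetic behind that claim is simple enough to spell out. The hourly prices below are hypothetical placeholders chosen only to satisfy the two-mediums-equal-one-large ratio from the study; they are not actual AWS rates:

```python
LARGE_HOURLY = 0.24    # hypothetical price for one large instance
MEDIUM_HOURLY = 0.12   # hypothetical price: two mediums = one large

large_output = 1.0     # normalize one large instance's throughput
two_medium_output = 1.3  # "30% more output" per the study

# Cost per unit of output for each configuration.
cost_per_output_large = LARGE_HOURLY / large_output
cost_per_output_medium = (2 * MEDIUM_HOURLY) / two_medium_output

# Same spend, 30% more work: cost per unit of output drops by ~23%.
savings = 1 - cost_per_output_medium / cost_per_output_large
```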

Cloudyn and The Big Data Group will host a webinar on May 1, 2013 at 9:00 am PT focused on deployment efficiency.

Wired Profiles a New Breed of Internet Hero, the Data Center Guru

The whole idea of cloud computing is that mere mortals can stop worrying about hardware and focus on delivering applications. But cloud services like Amazon’s AWS, and the amazingly complex hardware and software that underpin all that power and flexibility, do not happen by chance. This Wired article about James Hamilton paints a picture of a new breed of folks the Internet has come to rely on:

…with this enormous success comes a whole new set of computing problems, and James Hamilton is one of the key thinkers charged with solving such problems, striving to rethink the data center for the age of cloud computing. Much like two other cloud computing giants — Google and Microsoft — Amazon says very little about the particulars of its data center work, viewing this as the most important of trade secrets, but Hamilton is held in such high regard, he’s one of the few Amazon employees permitted to blog about his big ideas, and the fifty-something Canadian has developed a reputation across the industry as a guru of distributed systems — the kind of massive online operations that Amazon builds to support thousands of companies across the globe.

Read the article.

 

Yet Another Analyst Insists on AWS Spinoff, Others Disagree

Not for the first time, an investment analyst insists that AWS will inevitably be spun off to avoid “channel conflict” and the like. This time it is Oppenheimer analyst Tim Horan, in a report published on Monday:

“In our view, we believe an ultimate spin-off of AWS is inevitable due to its channel conflicts and the need to gain scale. We see the business as extremely valuable on a standalone basis…”

The Register has a useful take on Horan’s opinion, with a well-thought-out contrary view.

The crack in this bout of crystal-ball gazing is that Oppenheimer is an investment firm that by nature likes predictable cash above everything else, and Amazon’s leader Jeff Bezos is a mercurial, ambitious figure who has demonstrated time and time again a love for risky, long-term projects.

This Reg hack believes the Oppenheimer spin-off analysis misses the temple for the gold fixtures: keeping Amazon Web Services yoked to Amazon holds a slew of major advantages, many of which could be critical in the battle for dominance of the cloud, but they will all take time to play out and are not a sure thing.

 

Cloudyn Power Tools Aim for Increased Efficiency, Savings for AWS Customers

Cloudyn has released new Amazon Web Services optimization Power Tools, aiming for increased efficiency and savings for AWS cloud deployments.

“The power tools were developed in response to what we perceive as the market’s growing need for clarity and control over cloud capacity, cost and utilization. The market is ripe for a significant overhaul, with companies no longer able to ignore the fluctuating costs associated with the dynamic use of their cloud. Our data shows that 29% of customers spend $51,000–$250,000 annually with AWS; only 6% of customers spend $250,001–$500,000, but this is the group with the largest saving potential, with an average of 46%. All AWS customers Cloudyn monitors have cost optimization potential of between 34% and 46%,” commented Sharon Wagner, CEO of Cloudyn.

The popular Reserved Instance Calculator, launched in October 2012, is now complemented by new EC2 and RDS reservation detectors. Moving beyond optimal reservation pricing, Cloudyn now recommends which On-Demand instances can be relocated to unused and available reservations. When On-Demand instances don’t match any idle reservations, sell recommendations for the unused reservations are generated.

“Nextdoor’s growing social network relies heavily on AWS and managing cost is a priority for us,” comments Matt Wise, Senior Systems Architect at Nextdoor.com. “Cloudyn gives us clarity into all our cloud assets and ensures that we utilize them fully. Additionally, Cloudyn’s sizing and pricing recommendations enable us to use the cloud in the most cost-effective way possible.”

A new S3 Tracker analyzes S3 usage tracked by bucket or top-level folders and highlights inefficiencies together with step-by-step recommendations on how to optimize. A shadow version detector reveals otherwise hidden shadow S3 versions which inflate the monthly bill.
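One way to surface those shadow versions yourself is to walk a bucket’s version listing and total the bytes held by noncurrent versions. The helper below operates on records shaped like the `Versions` entries returned by boto3’s `list_object_versions`; it is a sketch of the idea, not Cloudyn’s tool, and the bucket name in the comment is hypothetical:

```python
def noncurrent_bytes(version_records):
    """Sum the storage consumed by noncurrent ("shadow") S3 object
    versions, given records shaped like the "Versions" entries in a
    boto3 list_object_versions response."""
    return sum(v["Size"] for v in version_records if not v["IsLatest"])

# In a live account this would be fed page by page, e.g.:
#   s3 = boto3.client("s3")
#   paginator = s3.get_paginator("list_object_versions")
#   total = 0
#   for page in paginator.paginate(Bucket="my-bucket"):
#       total += noncurrent_bytes(page.get("Versions", []))
```

Every byte counted here is billed at the bucket’s storage rate even though only the latest version is visible to ordinary GET and LIST calls, which is why these versions quietly inflate the monthly bill.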

“We were surprised to learn how many companies simply don’t know what’s going on inside their S3 storage. The new tool splits S3 across buckets and allocates cost per usage providing crystal clear visibility. Interestingly, the most expensive ‘Standard’ storage type is also the most widely used, dominating with 84%. Post-optimization, this can be reduced to 60% and redistributed to the Reduced and Glacier storage alternatives,” continued Mr. Wagner.

F5 Adds BIG-IP Solutions for Amazon Web Services

F5 Networks, Inc. today introduced a BIG-IP® virtual edition for AWS, which brings F5’s complete portfolio of BIG-IP products to the AWS cloud. This announcement addresses organizations’ escalating demand to extend their data centers and applications to AWS while maintaining enterprise-class reliability, scale, security, and performance. F5’s new AWS offering is also the featured attraction at the company’s booth (#506) at the AWS re:Invent conference at the Venetian Hotel in Las Vegas, November 27–29.

“Enterprise customers have come to rely on BIG-IP’s strategic awareness that provides important information on how applications, resources, and users interact in order to successfully deliver applications,” said Siva Mandalam, Director of Product Management and Product Marketing, Cloud and Virtualization Solutions at F5. “Since BIG-IP for AWS will have equivalent features to physical BIG-IP devices, customers can apply the same level of control for their applications in AWS. With BIG-IP running in enterprise data centers and on AWS, customers can establish secure tunnels, burst to the cloud, and control the application from end to end.”

The BIG-IP solution for AWS includes options for traffic management, global server load balancing, application firewall, web application acceleration, and other advanced application delivery functions. With the new F5 offering:

  • F5 ADN services operate seamlessly in the cloud – BIG-IP virtual
    editions are being made available to a growing number of customers
    seeking to leverage cloud offerings. Availability for AWS expands on
    F5’s broad support for virtualized and cloud environments based on
    vSphere, Hyper-V, Xen, and KVM.
  • Enterprises can confidently take advantage of cloud resources – AWS
    customers can easily add F5’s market-leading availability,
    optimization, and security services to support cloud and hybrid
    deployment models.
  • IT teams are able to easily scale application environments –
    Production and lab versions of BIG-IP virtual editions for AWS enable
    IT teams to move smoothly from testing and development into production
    to support essential business applications. Customers can leverage
    their existing BIG-IP configuration and policies and apply them to
    BIG-IP running on AWS.

Supporting Facts and Quotes

  • F5 has the greatest market share for the advanced application delivery
    controller (ADC) market, deployed within the enterprise and service
    providers markets. According to Gartner, Inc., F5 has 59.1% market
    share based on Q2 2012 worldwide revenue.
  • F5’s initial product offering will use the AWS “bring your own
    license” (BYOL) model, which allows customers to buy perpetual
    licenses from F5 and then apply these licenses to instances running in
    AWS. To evaluate or purchase BIG-IP software modules, customers should
    contact their local F5 sales office.

“As enterprises consider which applications to move to the cloud, many customers have asked for the same advanced application control they have in their local data centers,” said Terry Wise, Head of Worldwide Partner Ecosystem at Amazon Web Services. “The BIG-IP solution for AWS enables enterprises to quickly move complex applications to AWS while maintaining high levels of service at a lower overall cost.”

“Enterprises want the flexibility and scale of cloud services, yet they can struggle with application complexity and sufficient control,” said Rohit Mehra, VP of Network Infrastructure at IDC. “The challenge lies in easily expanding IT’s service portfolio with cloud and hybrid capabilities while keeping the applications fast, secure, and available. BIG-IP’s native availability inside Amazon Web Services allows enterprises to deeply embed a strategic awareness of how applications behave in cloud adoption scenarios.”

To learn more about how F5 enables organizations to realize the full potential of cloud computing, visit F5 (booth #506) at the AWS re:Invent conference. During the event, Siva Mandalam of F5 will deliver a presentation, “Optimizing Enterprise Applications and User Access in the Cloud,” at 1 p.m. PT on Wednesday, November 28.

 

 


Garantia Testing asks “Does Amazon EBS Affect Redis Performance?”

The Redis mavens at Garantia decided to find out whether EBS really slows down Redis when used on various AWS platforms.

Their testing and conclusions answer the question: Should AOF be the default Redis configuration?

We think so. This benchmark clearly shows that running Redis over various AWS platforms using AOF with a standard, non-raided EBS configuration doesn’t significantly affect Redis’ performance. If we take into account that Redis professionals typically tune their redis.conf files carefully before using any data persistence method, and that newbies usually don’t generate loads as large as the ones we used in this benchmark, it is safe to assume that this performance difference can be almost neglected in real-life scenarios.
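For reference, AOF persistence is enabled in redis.conf with a handful of standard directives. This is a typical stanza, not the exact configuration Garantia tested:

```
# Enable append-only file persistence
appendonly yes

# fsync policy: "everysec" trades at most one second of writes for speed
appendfsync everysec

# Let background rewrites keep the AOF file compact
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```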

Read the full post for all the details.


Benchmarking Redis on AWS: Is Amazon PIOPS Really Better than Standard EBS?

The Redis experts at Garantia Data did some benchmarking in the wake of Amazon’s announcement of Provisioned IOPS (PIOPS) for EBS.

Their conclusion:

After 32 intensive tests with Redis on AWS (each run in 3 iterations for a total of 96 test iterations), we found that neither the non-optimized EBS instances nor the optimized-EBS instances worked better with Amazon’s PIOPS EBS for Redis. According to our results, using the right standard EBS configuration can provide equal if not better performance than PIOPS EBS, and should actually save you money.

Read the full post for details and graphs.


How Amazon Glacier Confronts Entropy

Keeping data around — and readable — for a long, long time is tough. For users, Amazon’s Glacier offers freedom from specific hardware issues: we will no longer be stuck with unreadable zip drives or tapes. But that just moves the problem to Amazon. This interview talks about how they are tackling it.

The interview also touches on Amazon’s expectation that if it provides the back end, third-party developers will step in and provide archiving and indexing tools.


Netflix Open Sources its Eureka Load Balancing Tool for AWS

Netflix has moved its Eureka mid-tier load-balancing tool, formerly known as the Netflix Discovery Service, to open source.

From the Netflix announcement of the move:

Eureka is a REST based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. We call this service, the Eureka Server. Eureka also comes with a java-based client component, the Eureka Client, which makes interactions with the service much easier. The client also has a built-in load balancer that does basic round-robin load balancing. At Netflix, a much more sophisticated load balancer wraps Eureka to provide weighted load balancing based on several factors like traffic, resource usage, error conditions etc to provide superior resiliency. We have previously referred to Eureka as the Netflix discovery service.
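The announcement doesn’t include client code, but the basic round-robin behavior of the Eureka Client can be sketched as follows. This is a toy Python illustration with invented class and service names; Netflix’s real client is Java and carries far more machinery:

```python
from itertools import cycle

class RoundRobinClient:
    """Toy sketch of the round-robin balancing a Eureka-style client
    performs over the instances registered for each service."""

    def __init__(self, registry):
        # registry maps a service name to its registered instance addresses;
        # cycle() hands them out in turn, wrapping around at the end.
        self._iters = {svc: cycle(instances)
                       for svc, instances in registry.items()}

    def next_instance(self, service):
        return next(self._iters[service])

# Hypothetical registry contents for illustration.
client = RoundRobinClient({"movie-api": ["10.0.0.1", "10.0.0.2"]})
```

Netflix’s production wrapper replaces this naive rotation with weighted choices driven by traffic, resource usage, and error conditions, as the quote above notes.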


Newvem Launches New Tool to Help Amazon Web Services Customers Make Sense of Reserved Instances


Newvem has launched a new tool as part of its KnowYourCloud Analytics web application. Newvem’s new Reserved Instances Decision Tool helps Amazon Web Services (AWS) customers make the right decision on exactly which On-Demand Instances should be moved to Reserved Instances. With KnowYourCloud Analytics, AWS users have insight into their cloud usage patterns and can now easily determine – based on flexibility, availability and cost considerations – whether a long-term commitment to Reserved Instances is the right decision for their business.

To keep ahead of competitors and give customers more value, Amazon is promoting Reserved Instances, which, compared to On-Demand Instances (the popular pay-as-you-go model that AWS is known for), offer even more cost savings and assured capacity availability. Reserved Instances require a long-term commitment to Amazon, with contracts ranging from one to three years. The problem is that moving to Reserved Instances is an extremely complex decision for IT and finance managers, who must weigh the tradeoffs between costs and utilization over time, and between flexibility and a long-term commitment.
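The cost side of that tradeoff boils down to a break-even calculation: how many hours of use does it take before the upfront reservation fee pays for itself? A minimal sketch, with hypothetical prices rather than actual AWS rates:

```python
def reservation_break_even(on_demand_hourly, upfront, reserved_hourly,
                           term_hours):
    """Return the hours of use at which a Reserved Instance becomes
    cheaper than On-Demand, or None if it never pays off within the
    reservation term."""
    hourly_saving = on_demand_hourly - reserved_hourly
    if hourly_saving <= 0:
        return None  # the reservation can never pay off
    hours = upfront / hourly_saving
    return hours if hours <= term_hours else None

# Hypothetical figures: $0.12/hr On-Demand vs. $300 upfront plus
# $0.05/hr reserved, over a one-year (8,760-hour) term.
breakeven = reservation_break_even(0.12, 300.0, 0.05, 8760)
```

An instance expected to run more than the break-even hours favors the reservation; anything less, and the flexibility of On-Demand wins, which is exactly the utilization question the tool is built to answer.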

“Newvem’s KnowYourCloud Analytics is like Google Analytics for cloud computing,” said Zev Laderman, Newvem’s co-founder and CEO. “It scans AWS usage patterns and lets AWS users know if they can benefit from Reserved Instances, indicates which parts of their cloud would benefit the most, and offers recommendations on how to execute the move.”