Category Archive: AWS

Cloudyn Power Tools Aim for Increased Efficiency, Savings for AWS Customers

Cloudyn has released new Amazon Web Services optimization Power Tools, aiming for increased efficiency and savings for AWS cloud deployments.

“The power tools were developed in response to what we perceive as the market’s growing need for clarity and control over cloud capacity, cost and utilization. The market is ripe for a significant overhaul, with companies no longer able to ignore the fluctuating costs associated with the dynamic use of their cloud. Our data shows that 29% of customers spend $51,000–$250,000 annually with AWS; only 6% of customers spend $250,001–$500,000, but this is the group with the largest saving potential, with an average of 46%. All AWS customers Cloudyn monitors have cost optimization potential of between 34% and 46%,” commented Sharon Wagner, CEO of Cloudyn.

The popular Reserved Instance Calculator, which launched in October 2012, is now complemented by new EC2 and RDS reservation detectors. Moving beyond optimal reservation pricing, Cloudyn now recommends which On-Demand instances can be relocated to unused, available reservations. When On-Demand instances don’t match any idle reservations, sell recommendations for the unused reservations are generated.
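
At its core this is a matching problem. Here is a minimal sketch of one plausible matching rule (pair On-Demand instances with idle reservations of the same instance type and availability zone); the data structures and the rule itself are our own illustration, not Cloudyn’s actual implementation:

    # Hypothetical sketch: pair running On-Demand instances with idle
    # reservations of the same (instance_type, availability_zone).
    from collections import Counter

    def reservation_recommendations(on_demand, reservations):
        """on_demand: list of (instance_type, az) tuples for On-Demand instances.
        reservations: list of (instance_type, az) tuples for unused reservations.
        Returns (relocate, sell) recommendation lists."""
        idle = Counter(reservations)
        relocate = []
        for inst in on_demand:
            if idle[inst] > 0:            # an idle reservation matches this instance
                idle[inst] -= 1
                relocate.append(inst)     # recommend moving it onto the reservation
        sell = list(idle.elements())      # reservations nothing matched: recommend selling
        return relocate, sell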

“Nextdoor’s growing social network relies heavily on AWS and managing cost is a priority for us,” comments Matt Wise, Senior Systems Architect at Nextdoor.com. “Cloudyn gives us clarity into all our cloud assets and ensures that we utilize them fully. Additionally, Cloudyn’s sizing and pricing recommendations enable us to use the cloud in the most cost-effective way possible.”

A new S3 Tracker analyzes S3 usage, tracked by bucket or top-level folder, and highlights inefficiencies together with step-by-step recommendations on how to optimize. A shadow version detector reveals otherwise hidden S3 object versions that quietly inflate the monthly bill.
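
Shadow versions accumulate in versioned buckets: every overwrite or delete leaves the prior object version behind, still billed. A minimal sketch of surfacing them yourself, using the boto3 SDK (our choice of tooling, not Cloudyn’s; the bucket name is hypothetical):

    # Sketch: tally the bytes held in non-current ("shadow") S3 object
    # versions of a versioned bucket. Assumes boto3 is installed and AWS
    # credentials are configured.
    import boto3

    def shadow_version_bytes(bucket_name):
        s3 = boto3.client("s3")
        total = 0
        for page in s3.get_paginator("list_object_versions").paginate(Bucket=bucket_name):
            for version in page.get("Versions", []):
                if not version["IsLatest"]:   # non-current version: hidden storage cost
                    total += version["Size"]
        return total

    print(shadow_version_bytes("my-example-bucket"), "bytes held in shadow versions")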

“We were surprised to learn how many companies simply don’t know what’s going on inside their S3 storage. The new tool splits S3 across buckets and allocates cost per usage, providing crystal-clear visibility. Interestingly, the most expensive ‘Standard’ storage type is also the most widely used, dominating with 84%. Post-optimization, this can be reduced to 60% and redistributed to the Reduced Redundancy and Glacier storage alternatives,” continued Mr. Wagner.

F5 Adds BIG-IP Solutions for Amazon Web Services

F5 Networks, Inc. today introduced a BIG-IP® virtual edition for AWS, bringing F5’s complete portfolio of BIG-IP products to the AWS cloud. The announcement addresses organizations’ escalating demand to extend their data centers and applications to AWS while maintaining enterprise-class reliability, scale, security, and performance. F5’s new AWS offering is also the featured attraction at the company’s booth (#506) at the AWS re:Invent conference at the Venetian Hotel in Las Vegas, November 27–29.

“Enterprise customers have come to rely on BIG-IP’s strategic awareness that provides important information on how applications, resources, and users interact in order to successfully deliver applications,” said Siva Mandalam, Director of Product Management and Product Marketing, Cloud and Virtualization Solutions at F5. “Since BIG-IP for AWS will have equivalent features to physical BIG-IP devices, customers can apply the same level of control for their applications in AWS. With BIG-IP running in enterprise data centers and on AWS, customers can establish secure tunnels, burst to the cloud, and control the application from end to end.”

The BIG-IP solution for AWS includes options for traffic management, global server load balancing, application firewall, web application acceleration, and other advanced application delivery functions. With the new F5 offering:

  • F5 ADN services operate seamlessly in the cloud – BIG-IP
    virtual editions are being made available to a growing number of
    customers seeking to leverage cloud offerings. Availability for AWS
    expands on F5’s broad support for virtualized and cloud environments
    based on vSphere, Hyper-V, Xen, and KVM.
  • Enterprises can confidently take advantage of cloud resources –
    AWS customers can easily add F5’s market-leading availability,
    optimization, and security services to support cloud and hybrid
    deployment models.
  • IT teams are able to easily scale application environments –
    Production and lab versions of BIG-IP virtual editions for AWS enable
    IT teams to move smoothly from testing and development into production
    to support essential business applications. Customers can leverage
    their existing BIG-IP configurations and policies and apply them to
    BIG-IP running on AWS.

Supporting Facts and Quotes

  • F5 holds the largest share of the advanced application delivery
    controller (ADC) market, deployed across both the enterprise and
    service provider markets. According to Gartner, Inc., F5 has 59.1%
    market share based on Q2 2012 worldwide revenue.
  • F5’s initial product offering will use the AWS “bring your own
    license” (BYOL) model, which allows customers to buy perpetual
    licenses from F5 and then apply these licenses to instances running in
    AWS. To evaluate or purchase BIG-IP software modules, customers should
    contact their local F5 sales office.

“As enterprises consider which applications to move to the cloud, many customers have asked for the same advanced application control they have in their local data centers,” said Terry Wise, Head of Worldwide Partner Ecosystem at Amazon Web Services. “The BIG-IP solution for AWS enables enterprises to quickly move complex applications to AWS while maintaining high levels of service at a lower overall cost.”

“Enterprises want the flexibility and scale of cloud services, yet they can struggle with application complexity and sufficient control,” said Rohit Mehra, VP of Network Infrastructure at IDC. “The challenge lies in easily expanding IT’s service portfolio with cloud and hybrid capabilities while keeping the applications fast, secure, and available. BIG-IP’s native availability inside Amazon Web Services allows enterprises to deeply embed a strategic awareness of how applications behave in cloud adoption scenarios.”

To learn more about how F5 enables organizations to realize the full potential of cloud computing, visit F5 (booth #506) at the AWS re:Invent conference. During the event, Siva Mandalam from F5 will deliver a presentation, “Optimizing Enterprise Applications and User Access in the Cloud,” at 1 p.m. PT on Wednesday, November 28.


Garantia Testing Asks “Does Amazon EBS Affect Redis Performance?”

The Redis mavens at Garantia Data decided to find out whether EBS really slows down Redis when used across various AWS platforms.

Their testing and conclusions answer the question: Should AOF be the default Redis configuration?

We think so. This benchmark clearly shows that running Redis over various AWS platforms using AOF with a standard, non-RAIDed EBS configuration doesn’t significantly affect Redis’ performance. If we take into account that Redis professionals typically tune their redis.conf files carefully before using any data persistence method, and that newbies usually don’t generate loads as large as the ones we used in this benchmark, it is safe to assume that this performance difference is negligible in real-life scenarios.
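
For readers who want to reproduce the AOF side of such a test, here is a minimal sketch of enabling AOF at runtime with the redis-py client (the library choice and connection details are our assumption; the same settings can equally be placed in redis.conf):

    # Sketch: turn on append-only-file (AOF) persistence on a running
    # Redis server via redis-py (assumed installed; server at localhost:6379).
    import redis

    r = redis.Redis(host="localhost", port=6379)
    r.config_set("appendonly", "yes")         # enable AOF persistence
    r.config_set("appendfsync", "everysec")   # fsync once per second, the usual default
    print(r.config_get("appendonly"))         # {'appendonly': 'yes'}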

Read the full post for all the details.


Benchmarking Redis on AWS: Is Amazon PIOPS Really Better than Standard EBS?

The Redis experts at Garantia Data did some benchmarking in the wake of Amazon’s announcement of Provisioned IOPS (PIOPS) for EBS.

Their conclusion:

After 32 intensive tests with Redis on AWS (each run in 3 iterations for a total of 96 test iterations), we found that neither the non-optimized EBS instances nor the optimized-EBS instances worked better with Amazon’s PIOPS EBS for Redis. According to our results, using the right standard EBS configuration can provide equal if not better performance than PIOPS EBS, and should actually save you money.
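
A “right standard EBS configuration” tuned for throughput typically means striping several standard volumes together. Here is a hedged sketch of provisioning such volumes with the boto3 SDK (the SDK, instance ID, and sizes are our assumptions; the RAID-0 stripe itself is then assembled in the OS, e.g. with mdadm):

    # Sketch: create and attach four standard EBS volumes destined for a
    # software RAID-0 stripe. IDs and device names are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    for device in ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]:
        vol = ec2.create_volume(AvailabilityZone="us-east-1a",
                                Size=100, VolumeType="standard")
        ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
        ec2.attach_volume(VolumeId=vol["VolumeId"],
                          InstanceId="i-0123456789abcdef0", Device=device)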

Read the full post for details and graphs.


How Amazon Glacier Confronts Entropy

Keeping data around, and readable, for a long, long time is tough. For users, Amazon’s Glacier offers freedom from specific hardware issues: we will no longer be stuck with unreadable Zip drives or tapes. But that just moves the problem to Amazon. This interview talks about how they are tackling it.

The interview also touches on Amazon’s expectation that if it provides the back end, third-party developers will step up and provide archiving and indexing tools.


Netflix Open Sources its Eureka Load Balancing Tool for AWS

Netflix has moved its Eureka mid-tier load-balancing tool, formerly known as the Netflix Discovery Service, to open source.

[Figure: Eureka architecture diagram]

From the Netflix announcement of the move:

Eureka is a REST based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. We call this service, the Eureka Server. Eureka also comes with a java-based client component, the Eureka Client, which makes interactions with the service much easier. The client also has a built-in load balancer that does basic round-robin load balancing. At Netflix, a much more sophisticated load balancer wraps Eureka to provide weighted load balancing based on several factors like traffic, resource usage, error conditions etc to provide superior resiliency. We have previously referred to Eureka as the Netflix discovery service.
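
As a rough illustration of what the Eureka Client does on your behalf, here is a sketch that queries a Eureka server’s REST interface directly and round-robins across the registered instances (the endpoint path follows Eureka’s documented REST API; the server URL and application name are hypothetical, and the requests library is assumed):

    # Sketch: fetch a service's registered instances from Eureka's REST
    # API and cycle through them round-robin, as the built-in client does.
    import itertools
    import requests

    def eureka_instances(server_url, app_name):
        resp = requests.get(f"{server_url}/eureka/v2/apps/{app_name}",
                            headers={"Accept": "application/json"})
        resp.raise_for_status()
        instances = resp.json()["application"]["instance"]
        if isinstance(instances, dict):   # a single registered instance arrives as a dict
            instances = [instances]
        return [f"http://{i['hostName']}:{i['port']['$']}" for i in instances]

    hosts = itertools.cycle(eureka_instances("http://eureka.example.com", "MYSERVICE"))
    print(next(hosts))   # each call to next() yields the next server in rotation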


Newvem Launches New Tool to Help Amazon Web Services Customers Make Sense of Reserved Instances

Newvem has launched a new tool as part of its KnowYourCloud Analytics web application. Newvem’s new Reserved Instances Decision Tool helps Amazon Web Services (AWS) customers make the right decision on exactly which On-Demand Instances should be moved to Reserved Instances. With KnowYourCloud Analytics, AWS users have insight into their cloud usage patterns and can now easily determine – based on flexibility, availability and cost considerations – whether a long-term commitment to Reserved Instances is the right decision for their business.

To keep ahead of competitors and give customers more value, Amazon is promoting Reserved Instances, which, compared to On-Demand Instances (the popular pay-as-you-go model that AWS is known for), offer greater cost savings and assured capacity availability. Reserved Instances require long-term commitments to Amazon, with contracts ranging from one to three years. The problem is that moving to Reserved Instances is an extremely complex decision for IT and finance managers, who must weigh the tradeoffs between cost and utilization over time, and between flexibility and a long-term commitment.
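
Stripped of the analytics, the core of the decision is a break-even calculation. A toy sketch, with purely hypothetical prices rather than Newvem’s model or real AWS rates:

    # Toy break-even sketch: is a one-year Reserved Instance cheaper than
    # On-Demand at a given utilization? All prices are hypothetical examples.
    HOURS_PER_YEAR = 8760

    def reserved_is_cheaper(od_hourly, ri_upfront, ri_hourly, utilization):
        """utilization: fraction of the year the instance actually runs."""
        hours = HOURS_PER_YEAR * utilization
        on_demand_cost = od_hourly * hours
        reserved_cost = ri_upfront + ri_hourly * hours
        return reserved_cost < on_demand_cost

    # e.g. an instance running 60% of the time:
    print(reserved_is_cheaper(od_hourly=0.32, ri_upfront=486.0,
                              ri_hourly=0.112, utilization=0.60))   # True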

“Newvem’s KnowYourCloud Analytics is like Google Analytics for cloud computing,” said Zev Laderman, Newvem’s co-founder and CEO. “It scans AWS usage patterns and lets AWS users know if they can benefit from Reserved Instances, indicates which parts of their cloud would benefit the most, and offers recommendations on how to execute the move.”


Amazon Web Services Launches High Performance Storage Option for Amazon Elastic Block Store

Amazon Web Services today announced new features for customers looking to run high-performance databases in the cloud with the launch of Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS. Provisioned IOPS (input/output operations per second) is a new EBS volume type designed to deliver predictable, high performance for I/O-intensive workloads, such as database applications, that rely on consistent and fast response times. With Provisioned IOPS, customers can flexibly specify both volume size and volume performance, and Amazon EBS will consistently deliver the desired performance over the lifetime of the volume. To get started with Amazon EBS, visit http://aws.amazon.com/ebs.

Provisioned IOPS volumes are engineered to allow customers to develop, test, and deploy production applications and be confident that they will receive their desired performance. With a few clicks in the AWS Management Console, customers can create an EBS volume provisioned with the storage and IOPS they need and attach it to their Amazon EC2 instance. Amazon EBS currently supports up to 1,000 IOPS per Provisioned IOPS volume, with plans to deliver higher limits soon. Customers can attach multiple Amazon EBS volumes to an Amazon EC2 instance and stripe across them to deliver thousands of IOPS to their application.
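
For illustration, here is a minimal sketch of creating and attaching such a volume with the boto3 SDK, which labels Provisioned IOPS volumes with the “io1” volume type (the SDK choice, instance ID, and device name are our assumptions):

    # Sketch: create a Provisioned IOPS EBS volume at the initial 1,000
    # IOPS per-volume ceiling and attach it to an instance.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    vol = ec2.create_volume(AvailabilityZone="us-east-1a",
                            Size=100,           # volume size in GiB
                            VolumeType="io1",   # Provisioned IOPS volume type
                            Iops=1000)          # requested, consistently delivered IOPS
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId="i-0123456789abcdef0", Device="/dev/sdf")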

To enable Amazon EC2 instances to fully utilize the IOPS provisioned on an EBS volume, Amazon EC2 is introducing the ability to launch selected Amazon EC2 instance types as EBS-Optimized instances. EBS-Optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Megabits per second and 1,000 Megabits per second depending on the instance type used. The combination of EBS Provisioned IOPS and EBS-Optimized instances allows customers to run their most performance-sensitive applications on Amazon EC2, giving them predictable scaling with the same ease of use, durability, and flexibility of provisioning benefits they expect from Amazon EC2 and Amazon EBS.
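
Pairing such a volume with dedicated EC2-to-EBS bandwidth is a launch-time flag. A brief sketch, again with boto3 and a hypothetical AMI:

    # Sketch: launch an instance as EBS-Optimized to get dedicated
    # throughput between EC2 and EBS.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(ImageId="ami-0123456789abcdef0",   # hypothetical AMI
                             InstanceType="m1.large",           # an EBS-Optimized-capable type
                             MinCount=1, MaxCount=1,
                             EbsOptimized=True)                 # dedicated EBS bandwidth
    print(resp["Instances"][0]["InstanceId"])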

“AWS introduced Amazon EBS in 2008 to provide a highly scalable virtual storage service and now, four years later, our customers are running applications on Amazon EC2 using EBS volumes at tremendous scale,” said Peter De Santis, Vice President of Amazon EC2. “Customers have been asking for the ability to set their performance rate to achieve consistently high performance. With EBS Provisioned IOPS volumes, EBS-Optimized instances and the recently launched High I/O SSD-based EC2 instances, customers have a range of choices for running their most demanding applications and databases on AWS while achieving peak performance in a predictable manner.”

At NASA’s Jet Propulsion Laboratory, Amazon EBS is used to support various missions and research programs. Consistent I/O performance is a major requirement for numerous use cases across NASA, ranging from scientific computing to large-scale database deployments. JPL now routinely provisions cloud compute capacity in an elastic manner, but database latency has proven difficult to manage. To help meet this challenge, JPL’s missions and its Office of the CIO prototyped the new EBS Provisioned IOPS capability to provision flexible compute capacity and overcome database latency restrictions. The results were highly successful, and the release of EBS Provisioned IOPS, coupled with Amazon EC2 High I/O SSD-based instances, will open up a whole new realm of I/O-intensive scientific applications for JPL, from radar data processing to the search for black holes.

Stratalux is a leader in building and managing tailored cloud solutions for customers of all sizes. “A common request we see from both our large and small customers is the need to support high performance database applications. Throughput consistency is critical for these workloads,” said Jeremy Przygode, CEO at Stratalux. “Based on positive results in our early testing, the combination of EBS Provisioned IOPS and EBS-Optimized instances will enable our customers to consistently scale their database applications to thousands of IOPS, enabling us to increase the number of I/O intensive workloads we support.”

Amazon EBS Provisioned IOPS volumes are currently available in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), EU-West (Ireland), Asia Pacific (Singapore), and Asia Pacific (Japan) regions with additional Region launches coming soon.


AWS Outage Postmortem: “the generators did not pick up the load”

Amazon has provided its take on how the big derecho storm that hit the Eastern US (and that still leaves millions without power during a heat wave) brought down one of its data centers. Basically, it was “hardware failure”: in this case, a couple of emergency generators.

In the single datacenter that did not successfully transfer to the generator backup, all servers continued to operate normally on Uninterruptable Power Supply (“UPS”) power. As onsite personnel worked to stabilize the primary and backup power generators, the UPS systems were depleting and servers began losing power at 8:04pm PDT.

Read the AWS statement for more detail.


Eastern US Storms Also Disrupted the Technology Cloud

The New York Times has an interesting article on new concerns over Cloud Computing (that is to say, AWS) reliability in the wake of recent outages caused by the weather.

The interruption underlined how businesses and consumers are increasingly exposed to unforeseen risks and wrenching disruptions as they increasingly embrace life in the cloud. It was also a big blow to what is probably the fastest-growing part of the media business, start-ups on the social Web that attract millions of users seemingly overnight.

As someone who was involved, in the pre-cloud era, with private data centers and later colocation facilities for startups and small and medium-sized companies, I have a question:

Does anyone really think they can do any better on their own?

Read the article.