Category Archive: AWS

Developers Hit With Big, Unexpected AWS Bills, Thousands on GitHub Exposed

Amazon Web Services (AWS) is urging developers using the code sharing site GitHub to check their posts to ensure they haven’t inadvertently exposed their log-in credentials.

When opening an account, users are told to “store the keys in a secure location” and are warned that the key needs to remain “confidential in order to protect your account”. However, a search on GitHub reveals thousands of results where code containing AWS secret keys can be found in plain text, which means anyone can access those accounts.

From a security perspective, this means an attacker can simply log in and access any of the files stored in the exposed AWS account.

According to an AWS statement, “When we become aware of potentially exposed credentials, we proactively notify the affected customers and provide guidance on how to secure their access keys.”

There is more detail (and some cautionary tales involving big, and unexpected, AWS bills) here.
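A simple pre-commit check can catch most accidental key leaks before they reach GitHub. The sketch below scans text for strings shaped like AWS credentials; the regular expressions are the approximations commonly used by credential scanners (access key IDs are 20 characters beginning with “AKIA”, secret keys are 40 base64-style characters), not an official AWS specification.

```python
import re

# Approximate shapes of AWS credentials, as used by typical scanners:
# access key IDs: "AKIA" + 16 uppercase alphanumerics;
# secret keys: 40 base64-style characters inside quotes.
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
SECRET_KEY_RE = re.compile(r"['\"]([0-9A-Za-z/+]{40})['\"]")

def find_exposed_keys(text):
    """Return substrings of `text` that look like AWS credentials."""
    hits = ACCESS_KEY_RE.findall(text)
    hits += SECRET_KEY_RE.findall(text)
    return hits
```

Running something like this over a diff before pushing (or wiring it into a commit hook) is a cheap safeguard; it will produce some false positives, which is the right trade-off for a check of this kind.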

Cloud Mystery: What’s the Tech Secret Behind Amazon Glacier?

ITProPortal has a good writeup on Amazon Glacier technology: tape? cheap disks they power down? It’s more than just a post filled with wild speculation because it includes informed reasoning on the current state of the art for each of the candidate technologies behind Glacier:

…of all the services offered by AWS, none have fuelled the same level of speculation and interest as Amazon’s Glacier. Though the service is well-known and widely-used in enterprise, no one knows exactly what’s behind it.

Amazon has retained a thick veil of secrecy around its most mysterious web service. The Seattle-based company has always kept the processes behind its services fairly quiet, but the omerta surrounding Glacier has been especially strict, leaving experts in the tech community perplexed about what Amazon could be hiding.

TL;DR: It might be old-fashioned robot tape libraries; it might be cheap disks they fill up then turn off until they need them for retrieval; it might be some clever hybrid of the two.

Read the article.

Amazon Goes Beyond AWS Training with AWS Certification

You can now go beyond AWS training and take exams to earn AWS Certification, a program meant to give Solution Architects, System Administrators, and Developers a way to formally certify their knowledge of AWS.

The AWS Certifications are credentials that you (as an individual) can earn to certify your expertise (skills and technical knowledge) in the planning, deployment, and management of projects and systems that use AWS. Once you complete the certification requirements, you will receive an AWS Certified logo badge that you can use on your business cards and other professional collateral. This will help you to gain recognition and visibility for your AWS expertise.

The first certification, AWS Certified Solutions Architect – Associate Level, is available now. Additional certifications for System Administrators and Developers are planned for 2013.

Certification exams are delivered by Kryterion at over 750 testing locations in more than 100 countries. You can register online to take the exam through Kryterion.

Stackdriver Launches Intelligent Monitoring Service Public Beta

Stackdriver has launched the public beta  of Stackdriver Intelligent Monitoring, a flexible and intuitive SaaS offering that provides rich insight into the health of cloud-powered systems, infrastructure, and applications.  The service features seamless integration with Amazon Web Services and Rackspace Cloud and is optimized for teams that manage complex distributed applications.  Customers can access the service immediately via the company’s website at www.stackdriver.com.

Stackdriver’s engineers set out to build a solution that:

  • Monitors applications, systems, and infrastructure components,
  • Identifies anomalies using modern analytics and machine learning, and
  • Drives remediation and automation using a proprietary policy framework.
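Stackdriver has not published its models, but the “identifies anomalies” step can be illustrated with a minimal z-score sketch: flag a metric reading that deviates far from its recent history. This is an assumption for illustration only, not Stackdriver’s actual algorithm.

```python
from statistics import mean, stdev

def is_anomaly(history, value, threshold=3.0):
    """Flag `value` as anomalous if it lies more than `threshold`
    standard deviations from the recent `history` of a metric."""
    if len(history) < 2:
        return False  # too little data to model "normal" behavior
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu  # flat history: any change is anomalous
    return abs(value - mu) / sigma > threshold
```

Real monitoring products layer seasonality models and machine learning on top of this idea, but the core question is the same: is this reading plausible given what the metric normally does?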

Edmodo, a leading social learning platform that runs on AWS, has relied on Stackdriver for several months.  “The technology stack that powers Edmodo’s online learning platform is very sophisticated. We use a variety of application building blocks, including AWS services and open source server software,” noted Kimo Rosenbaum, Infrastructure Architect.  “Before Stackdriver, we monitored our stack with many disparate tools, often designed without the dynamic nature of the cloud in mind.  With Stackdriver, we can monitor our systems, AWS services, and applications with one simple interface built for cloud-based services.”

Stackdriver Intelligent Monitoring is available free of charge for companies using Amazon Web Services and Rackspace Cloud.  Today, Stackdriver manages nearly 100,000 cloud resources and processes over 125 million measurements per day.  Nearly 100 customers, paid and non-paid, use the service, including Edmodo, Yellowhammer Media, Exablox, Atomwise, Qthru, and Webkite.

Study Finds Enterprise Cloud Focus Shifting From Adoption to Optimization

Cloudyn, together with The Big Data Group, has released the latest AWS customer optimization data, reinforcing the positive growth trend expected for the year ahead.

“We set out to evaluate whether the projected 2013 ‘year of cloud optimization’ is on course and discovered that we are well into the public cloud adoption life cycle. In 2011 and 2012 the conversation centered around how and when to move to the cloud. Now it is all about companies looking for efficiencies and cost controls,” commented David Feinleib, Managing Director of The Big Data Group.

The study, based on over 450 selected AWS and Cloudyn customers, highlights a more mature approach to cloud deployments, reflected in a deeper understanding of where inefficiencies lurk and how to eliminate them. EC2 accounts for 62% of total AWS spend, with more than 50% of customers now using Reserved Instances in their deployment mix. However, On-Demand pricing remains the top choice for most, accounting for 71% of EC2 spend. Even for customers using reservations, there is still opportunity for further efficiency.

For example, Cloudyn’s Unused Reservation Detector has assisted customers in finding a startling 24% of unused reservations. These can be recycled by relocating matching On-Demand instances to the availability zone of the unused reservation.

There is also a shift away from large instance types to medium, where two medium instances cost the same as one large, but can produce 30% more output. However, with the low 8-9% utilization rates of the popular instance types, there is certainly more work to be done on the road to cloud optimization.
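The medium-versus-large claim is easy to check with back-of-the-envelope arithmetic. The hourly prices below are illustrative 2013-era m1-family On-Demand rates and are assumptions here; actual rates varied by region and have changed since.

```python
# Illustrative 2013-era On-Demand prices (USD/hour) -- assumptions,
# not current AWS pricing.
medium_price = 0.12
large_price = 0.24

# The study's claim: two mediums match one large's hourly cost but
# deliver ~30% more aggregate output, i.e. each medium does roughly
# 0.65 of one large instance's work.
medium_output = 0.65                   # relative to one large = 1.0

cost_two_medium = 2 * medium_price     # same hourly spend as one large
output_two_medium = 2 * medium_output  # 1.3x the output of one large
```

At equal spend, the two smaller instances win on throughput, which is why the study sees the shift toward medium instance types.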

Cloudyn and The Big Data Group will host a webinar focused on deployment efficiency on May 1, 2013, at 9:00 am PT.

Wired Profiles a New Breed of Internet Hero, the Data Center Guru

The whole idea of cloud computing is that mere mortals can stop worrying about hardware and focus on delivering applications. But cloud services like Amazon’s AWS, and the amazingly complex hardware and software that underpin all that power and flexibility, do not happen by chance. This Wired article about James Hamilton paints a picture of a new breed of folks the Internet has come to rely on:

…with this enormous success comes a whole new set of computing problems, and James Hamilton is one of the key thinkers charged with solving such problems, striving to rethink the data center for the age of cloud computing. Much like two other cloud computing giants — Google and Microsoft — Amazon says very little about the particulars of its data center work, viewing this as the most important of trade secrets, but Hamilton is held in such high regard, he’s one of the few Amazon employees permitted to blog about his big ideas, and the fifty-something Canadian has developed a reputation across the industry as a guru of distributed systems — the kind of massive online operations that Amazon builds to support thousands of companies across the globe.

Read the article.

 

Yet Another Analyst Insists on AWS Spinoff, Others Disagree

Not for the first time, an investment analyst insists that AWS will inevitably be spun off to avoid “channel conflict” and the like; this time it is Oppenheimer analyst Tim Horan, in a report published on Monday.

“In our view, we believe an ultimate spin-off of AWS is inevitable due to its channel conflicts and the need to gain scale. We see the business as extremely valuable on a standalone basis…”

The Register has a useful take on Horan’s opinion, with a well-thought-out contrary view.

The crack in this bout of crystal-ball gazing is that Oppenheimer is an investment firm that by nature likes predictable cash above everything else, and Amazon’s leader Jeff Bezos is a mercurial, ambitious figure who has demonstrated time and time again a love for risky, long-term projects*.

This Reg hack believes the Oppenheimer spin-off analysis misses the temple for the gold fixtures: keeping Amazon Web Services yoked to Amazon holds a slew of major advantages, many of which could be critical in the battle for dominance of the cloud, but they will all take time to play out and are not a sure thing.

 

Cloudyn Power Tools Aim for Increased Efficiency, Savings for AWS Customers

Cloudyn has released new Amazon Web Services optimization Power Tools, aiming for increased efficiency and savings for AWS cloud deployments.

“The power tools were developed in response to what we perceive as the market’s growing need for clarity and control over cloud capacity, cost and utilization. The market is ripe for a significant overhaul with companies no longer able to ignore the fluctuating costs associated with the dynamic use of their cloud. Our data shows that 29% of customers spend $51,000–$250,000 annually with AWS; only 6% of customers spend $250,001–$500,000, but this is the group with the largest saving potential, with an average of 46%. All AWS customers Cloudyn monitors have cost optimization potential of between 34% and 46%,” commented Sharon Wagner, CEO of Cloudyn.

The popular Reserved Instance Calculator, which launched in October 2012, is being complemented with the release of the EC2 and RDS reservation detectors. Moving beyond optimal reservation pricing, Cloudyn now recommends which On-Demand instances can be relocated to unused and available reservations. When On-Demand instances don’t match any idle reservations, sell recommendations for the unused reservation are generated.
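The matching idea is straightforward: reservations at the time were tied to an instance type and availability zone, so an idle reservation can absorb a running On-Demand instance of the same type if that instance moves into the reservation’s zone. The sketch below illustrates this; the record fields (“id”, “type”, “az”) are hypothetical, not Cloudyn’s actual data model.

```python
def match_to_reservations(on_demand, unused_reservations):
    """Pair each unused reservation with a running On-Demand instance
    of the same type, suggesting a move into the reservation's
    availability zone. Record fields are hypothetical."""
    moves, pool = [], list(on_demand)
    for res in unused_reservations:
        for inst in pool:
            if inst["type"] == res["type"]:
                moves.append((inst["id"], res["az"]))
                pool.remove(inst)  # each instance absorbs one reservation
                break
    return moves
```

Any reservation left unmatched after this pass is a candidate for the sell recommendation described above.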

“Nextdoor’s growing social network relies heavily on AWS and managing cost is a priority for us,” comments Matt Wise, Senior Systems Architect at Nextdoor.com. “Cloudyn gives us clarity into all our cloud assets and ensures that we utilize them fully. Additionally, Cloudyn’s sizing and pricing recommendations enable us to use the cloud in the most cost-effective way possible.”

A new S3 Tracker analyzes S3 usage by bucket or top-level folder, highlighting inefficiencies along with step-by-step recommendations on how to optimize. A shadow-version detector reveals otherwise hidden S3 object versions that quietly inflate the monthly bill.
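The “shadow versions” in question are noncurrent object versions in versioned buckets: overwritten or deleted objects that are no longer visible in a normal listing but still accrue storage charges. A minimal sketch of the detection step, assuming version records shaped like the output of S3’s list-object-versions API:

```python
def shadow_versions(version_records):
    """Return noncurrent ("shadow") S3 object versions, which still
    accrue storage charges even though they are no longer the latest.
    Records mirror the shape of S3's list-object-versions output."""
    return [v for v in version_records if not v.get("IsLatest", False)]

def shadow_bytes(version_records):
    """Total size, in bytes, tied up in noncurrent versions."""
    return sum(v["Size"] for v in shadow_versions(version_records))
```

In practice the records would be fetched per bucket and the totals priced against the bucket’s storage class to estimate the hidden portion of the bill.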

“We were surprised to learn how many companies simply don’t know what’s going on inside their S3 storage. The new tool splits S3 across buckets and allocates cost per usage providing crystal clear visibility. Interestingly, the most expensive ‘Standard’ storage type is also the most widely used, dominating with 84%. Post-optimization, this can be reduced to 60% and redistributed to the Reduced and Glacier storage alternatives,” continued Mr. Wagner.

F5 Adds BIG-IP Solutions for Amazon Web Services

F5 Networks, Inc. today introduced a BIG-IP® virtual edition for AWS, bringing F5’s complete portfolio of BIG-IP products to the AWS cloud. The announcement addresses organizations’ escalating demand to extend their data centers and applications to AWS while maintaining enterprise-class reliability, scale, security, and performance. F5’s new AWS offering is also the featured attraction at the company’s booth (#506) at the AWS re:Invent conference at the Venetian Hotel in Las Vegas, November 27–29.

“Enterprise customers have come to rely on BIG-IP’s strategic awareness that provides important information on how applications, resources, and users interact in order to successfully deliver applications,” said Siva Mandalam, Director of Product Management and Product Marketing, Cloud and Virtualization Solutions at F5. “Since BIG-IP for AWS will have equivalent features to physical BIG-IP devices, customers can apply the same level of control for their applications in AWS. With BIG-IP running in enterprise data centers and on AWS, customers can establish secure tunnels, burst to the cloud, and control the application from end to end.”

The BIG-IP solution for AWS includes options for traffic management, global server load balancing, application firewall, web application acceleration, and other advanced application delivery functions. With the new F5 offering:

  • F5 ADN services operate seamlessly in the cloud – BIG-IP
    virtual editions are being made available to a growing number of
    customers seeking to leverage cloud offerings. Availability for AWS
    expands on F5’s broad support for virtualized and cloud environments
    based on vSphere, Hyper-V, Xen, and KVM.
  • Enterprises can confidently take advantage of cloud resources –
    AWS customers can easily add F5’s market-leading availability,
    optimization, and security services to support cloud and hybrid
    deployment models.
  • IT teams are able to easily scale application environments –
    Production and lab versions of BIG-IP virtual editions for AWS enable
    IT teams to move smoothly from testing and development into production
    to support essential business applications. Customers can leverage
    their existing BIG-IP configuration and policies and apply them to
    BIG-IP running on AWS.

Supporting Facts and Quotes

  • F5 holds the largest share of the advanced application delivery
    controller (ADC) market, across both enterprise and service-provider
    deployments. According to Gartner, Inc., F5 had 59.1% market share
    based on Q2 2012 worldwide revenue.
  • F5’s initial product offering will use the AWS “bring your own
    license” (BYOL) model, which allows customers to buy perpetual
    licenses from F5 and then apply them to instances running in AWS.
    To evaluate or purchase BIG-IP software modules, customers should
    contact their local F5 sales office.

“As enterprises consider which applications to move to the cloud, many customers have asked for the same advanced application control they have in their local data centers,” said Terry Wise, Head of Worldwide Partner Ecosystem at Amazon Web Services. “The BIG-IP solution for AWS enables enterprises to quickly move complex applications to AWS while maintaining high levels of service at a lower overall cost.”

“Enterprises want the flexibility and scale of cloud services, yet they can struggle with application complexity and sufficient control,” said Rohit Mehra, VP of Network Infrastructure at IDC. “The challenge lies in easily expanding IT’s service portfolio with cloud and hybrid capabilities while keeping the applications fast, secure, and available. BIG-IP’s native availability inside Amazon Web Services allows enterprises to deeply embed a strategic awareness of how applications behave in cloud adoption scenarios.”

To learn more about how F5 enables organizations to realize the full potential of cloud computing, visit F5 (booth #506) at the AWS re:Invent conference. During the event, Siva Mandalam from F5 will deliver a presentation focused on “Optimizing Enterprise Applications and User Access in the Cloud” at 1 p.m. PT on Wednesday, November 28.

 

 


Garantia Testing Asks “Does Amazon EBS Affect Redis Performance?”

The Redis mavens at Garantia decided to find out whether EBS really slows down Redis when used across various AWS platforms.

Their testing and conclusions answer the question: Should AOF be the default Redis configuration?

We think so. This benchmark clearly shows that running Redis over various AWS platforms using AOF with a standard, non-raided EBS configuration doesn’t significantly affect Redis’ performance. If we take into account that Redis professionals typically tune their redis.conf files carefully before using any data persistence method, and that newbies usually don’t generate loads as large as the ones we used in this benchmark, it is safe to assume that this performance difference can be almost neglected in real-life scenarios.
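For reference, AOF persistence is switched on in redis.conf with a couple of directives. The fsync policy shown below is Redis’s usual middle-ground recommendation, not necessarily the exact configuration Garantia benchmarked:

```conf
# Enable the append-only file instead of relying on RDB snapshots alone.
appendonly yes
appendfilename "appendonly.aof"

# fsync once per second: the common compromise between durability and
# throughput ("always" is safest, "no" is fastest).
appendfsync everysec
```

The benchmark’s point is that even with this persistence path writing to a plain, non-RAIDed EBS volume, throughput held up.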

Read the full post for all the details.