Reliability matters in the cloud: Check the performance markers

(c)iStock.com/giac

Cloud vendors claiming to be the “fastest” or “best performing” deserve a little scepticism. Speed and performance can look impressive when conditions align in a particular way, but that doesn’t mean they will hold up over the long term.

For companies searching for the right cloud provider, data on performance expectations is essential for picking the right host, managing scalability, and spending effectively on resources. The following three tips can help companies gauge cloud performance and reliability correctly.

Get the most out of every pound, euro or dollar

In the best-case scenario, you want reliable performance at a fair price, while also escaping any hidden costs that may ruin the true ROI. You want servers that offer superb disk read and write performance, and that hold up under varied test scenarios.

For instance, ask for test data that shows the servers’ I/O profile for both large and small block sizes. Review several comparable pieces of hardware to find outstanding performers. Why does this type of performance matter?

Here’s an example: if you are running a SQL Server database that typically works on 64k blocks, you want a server that offers consistent storage, reliable performance and no additional charges for provisioned input/output operations per second (IOPS). You want the perfect mix of fewer required resources and transparent costs.
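
As an illustration, a rough feel for a server’s write profile at different block sizes can be had with a short script. This is a minimal sketch (the file path, total size and block sizes are arbitrary); serious testing would use a purpose-built tool such as fio, which also covers reads, random access and queue depths:

```python
import os
import time

def measure_write_throughput(path, block_size, total_bytes=16 * 1024 * 1024):
    """Write total_bytes to path in block_size chunks and return MB/s."""
    block = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(block)
            written += block_size
        f.flush()
        os.fsync(f.fileno())  # force data to disk so the timing reflects real I/O
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

# Compare a small-block profile (4k) against the 64k blocks SQL Server favours
for bs in (4 * 1024, 64 * 1024):
    mbps = measure_write_throughput("io_probe.bin", bs)
    print(f"block={bs // 1024}k: {mbps:.1f} MB/s")
os.remove("io_probe.bin")
```

Running the same probe on several candidate servers, at both small and large block sizes, is what turns a vendor’s “fastest” claim into a number you can compare.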

Focus on efficient decision making

Remember the book “The Paradox of Choice”, which explores choice overload and why offering too many options is detrimental? The same issues apply to picking a server.

Some cloud providers offer many server choices, which can lead customers into selecting one that has too much RAM in order to meet another criterion, such as having enough CPUs. Aim for a vendor that doesn’t bury you under a mountain of canned server sizes, but instead lets you choose the server capacity that best suits your workload.

You also want the flexibility to choose the number of CPUs or the amount of memory that makes sense for your business – similar to how you would purchase traditional servers. Spend less time reviewing dozens of server configurations and more time focusing on app and services development.

Scalability and predictable performance are crucial

While performance testing can help you understand the best way to scale an application, you can’t reach a conclusion without understanding how the platform reacts to spikes in demand on its capacity.

Performance metrics from reputable third parties, such as CloudHarmony, can be used to compare a range of cloud servers against a bare-metal reference system. You want to be sure this performance improves linearly with the addition of CPU cores.
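
To make the linearity check concrete, per-core efficiency can be computed from benchmark scores. The figures below are made up for illustration, not real CloudHarmony data:

```python
def scaling_efficiency(benchmarks):
    """Given {cores: throughput score}, return per-core efficiency relative
    to the smallest configuration. 1.0 means perfectly linear scaling."""
    base_cores = min(benchmarks)
    base = benchmarks[base_cores] / base_cores
    return {cores: (score / cores) / base for cores, score in benchmarks.items()}

# Hypothetical benchmark scores for 1, 2, 4 and 8 vCPU servers
scores = {1: 100, 2: 196, 4: 380, 8: 710}
for cores, eff in sorted(scaling_efficiency(scores).items()):
    print(f"{cores} cores: {eff:.0%} of linear")
```

A server whose efficiency stays near 100% as cores are added scales up predictably; one that falls away sharply will disappoint when you grow a VM instead of adding hardware.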

Understanding this server performance data can help you make the most of a cloud portfolio; it will give you peace of mind to know ahead of time that you can add resources to a VM before requiring more hardware, and that you can reduce costs too. If you choose cloud hardware that can scale both up and out, you’ll be in the best situation to plan your scaling events.

Performance metrics only show a moment in time, but having a long-term performance profile allows companies to make educated choices while still lowering costs.

Best Practices for Load Server Calibration By @DanBoutinSOASTA | @CloudExpo #Cloud

I know of several large financial institutions that do all of their performance testing inside the firewall, and thus own all of their own infrastructure, including dedicated servers used solely for load generation. With some of the large load generation requirements (which can run to hundreds of thousands of vUsers), you can imagine that even a small optimization of a load server could save a company quite a bit in infrastructure costs on the load server hardware alone. That is why, as part of our best practices, SOASTA advocates calibrating load servers when using CloudTest.

read more

Migrating Legacy SANs to Microsoft Cloud Azure Services | @CloudExpo #Cloud

Azure is Microsoft’s cloud computing platform, a growing collection of integrated services – analytics, computing, database, mobile, networking, storage, and web. In the 12 months since Build 2014, Microsoft has delivered over 500 new Azure services and features, and greatly expanded the footprint and capabilities of what Azure delivers. Here’s a snapshot of Azure’s allure, as articulated by Scott Guthrie, Microsoft’s Executive Vice President of Cloud and Enterprise, at the Build 2015 conference in San Francisco.

read more

Making a Successful Journey to the Cloud | @CloudExpo #Cloud

In 2011 the US Federal Government issued a Cloud First policy mandating that agencies take full advantage of cloud computing benefits to maximize capacity utilization, improve IT flexibility and responsiveness, and minimize cost. Cloud computing is a design style that allows for efficient use of compute, storage, and memory in order to decrease cycle time for mission delivery and promises to change the way that agencies deliver services to citizens for the next twenty years.

Roger Hockenberry, CEO of Cognitio and former CTO for the National Clandestine Services of the Central Intelligence Agency, helped create and realize the potential of cloud capabilities for the Intelligence Community. Getting the Intelligence Community to accept Cloud computing as a viable platform was a difficult road to travel. This interview with Roger offers some thoughts and suggestions for a successful Cloud journey:

read more

Taming the API Sprawl | @DevOpsSummit #API #DevOps

Ten years ago, there may have been only a single application – covering customer service, sales, and everything else – that talked directly to the database and spat out HTML. Most of the organizations I work with have been moving toward a design philosophy more like Unix, where each application consists of a series of small tools stitched together. In the web example above, that likely means a login service combined with web pages that call other services, such as enter-record and update-record. That allows the customer service team to write their own tools using the web, the command line, scheduled jobs, or any other interface.
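
As a minimal, in-process sketch of that design (the service names and payloads here are hypothetical; in practice each would be a separate networked service):

```python
# Each "service" does one small thing; a thin front end stitches them together.

def login(user, password):
    # Stand-in credential check; a real service would verify against a store
    # and issue a proper session token.
    return {"user": user, "token": f"token-for-{user}"}

def enter_record(token, record):
    # Creates a record on behalf of an authenticated caller.
    return {"status": "created", "record": record}

def update_record(token, record_id, fields):
    # Updates an existing record.
    return {"status": "updated", "id": record_id, "fields": fields}

def handle_request(user, password, record):
    """Front end: log in, then call the record service with the token."""
    session = login(user, password)
    return enter_record(session["token"], record)

print(handle_request("alice", "s3cret", {"name": "New customer"}))
```

Because each piece is a small, separately callable tool, the same enter-record service can sit behind a web page, a command-line wrapper, or a scheduled job – which is exactly what lets teams build their own interfaces.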

Sounds too good to be true, doesn’t it? It is true, but it comes at a cost. For example, I never defined a mechanism to manage the explosion of APIs that results from this approach. Consider this: uncontrolled growth is one definition of cancer. Since the rapid creation of APIs is not quite so deadly, I will call this API sprawl; I’ve seen it at every client that moved to a web-services approach, typically two to four years into the conversion.

read more

Tech News Recap for the Week of 8/10/2015

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 8/10/2015.

Google announced a corporate restructuring, forming an umbrella company called Alphabet and naming a new CEO to the core business of Google. Symantec has agreed to sell its data storage unit, Veritas, for $8 billion. Microsoft dropped a new Windows 10 Mobile build for ‘fast ring’ subscribers of its Insider program. Global SMB IT spending is heading towards the $600 billion mark.

By Ben Stephenson, Emerging Media Specialist

IBM announces Linux mainframe app development cloud

IBM is trying to keep mainframes relevant in the cloud era

IBM is open sourcing a large set of Linux mainframe code and launching the LinuxONE Developer Cloud, a cloud-based platform for developers to create applications for a Linux server based on the mainframe.

The LinuxONE Developer Cloud, which will be deployed in select IBM datacentres globally, will provide developers access to a cloud-based development, piloting and testing environment for Linux-based mainframe workloads.

The move coincides with the company’s launch of a portfolio of Linux mainframe services, called LinuxONE, that IBM says are optimised to run cloud-native workloads like Dockerized apps and NoSQL databases.

“Fifteen years ago IBM surprised the industry by putting Linux on the mainframe, and today more than a third of IBM mainframe clients are running Linux,” said Tom Rosamilia, senior vice president, IBM Systems.

“We are deepening our commitment to the open source community by combining the best of the open world with the most advanced system in the world in order to help clients embrace new mobile and hybrid cloud workloads. Building on the success of Linux on the mainframe, we continue to push the limits beyond the capabilities of commodity servers that are not designed for security and performance at extreme scale,” Rosamilia said.

As part of the move the company is contributing tens of thousands of lines of code to the recently created Open Mainframe Project, formed by the Linux Foundation to optimise Linux deployments on mainframes.

“Linux on the mainframe has reached a critical mass such that vendors, users and academia need a neutral forum where they can work together to advance Linux tools and technologies and increase enterprise innovation,” said Jim Zemlin, the Linux Foundation executive director.

“The Open Mainframe Project is a direct response to the demands of Linux users and the supporting open source ecosystem to address unique features and requirements built into mainframes for security, availability and performance,” Zemlin said.

Netflix to retire on-prem datacentres by summer’s end

Netflix is making big changes to how it architects its service

Netflix said it plans to move its last remaining on-prem systems to the cloud in a move aimed at streamlining its datacentre strategy.

According to a recent report in The Wall Street Journal’s CIO Journal, Netflix’s entire customer-facing business already runs on AWS, and the company is planning to completely retire its own datacentres later this summer.

While most of its internal applications also run in the public cloud, the company still uses its own infrastructure to store backups of its video collection, and for persistent failover.

It is clear Netflix has, until very recently, continued to invest in that infrastructure. Earlier this year the video streaming giant swapped 16 existing storage systems for three XIV systems, reducing the datacentre floor space used by about 80 per cent and boosting its database transactions per minute.

It was also testing IBM’s recently announced Spectrum Storage software, which is designed to optimise storage and ease management within hybrid cloud environments.

Moving all of its systems and applications to the cloud will complement a massive architectural overhaul announced earlier this year.

The company said rising demand for its service, which is mostly deployed on AWS infrastructure from multiple locations (initially just in the US), has prompted an effort to simplify its architecture so that it can scale more rapidly and reduce outages.

“Over the past 7 years, Netflix streaming has expanded from thousands of members watching occasionally to millions of members watching over two billion hours every month.  Each time a member starts to watch a movie or TV episode, a “view” is created in our data systems and a collection of events describing that view is gathered.  Given that viewing is what members spend most of their time doing on Netflix, having a robust and scalable architecture to manage and process this data is critical to the success of our business,” the company said at the time.

Parallels RDP for Linux Now Supports RemoteFX

Parallels has recently released the latest version of Parallels RDP Client for Linux, with important usability improvements. It now allows users to watch videos and run 3D applications with performance close to a native desktop experience. In addition, Parallels RDP Client for Linux now supports RemoteFX. RemoteFX is a set of protocols for Microsoft’s Remote […]

The post Parallels RDP for Linux Now Supports RemoteFX appeared first on Parallels Blog.

Collaboration and agility show why enterprise cloud is “worth the effort”

(c)iStock.com/AzmanL

A new report from Harvard Business Review Analytic Services argues that as cloud usage in the enterprise continues to rise, collaboration is now the key benefit ahead of business agility.

The research, which had more than 450 participants and was sponsored by Verizon Enterprise Solutions, found 84% of respondents agreeing that their organisation’s use of cloud computing had increased in the past year. 72% said their use of cloud had increased collaboration options, while 71% said business agility had improved. Essentially, the value of the cloud comes from speed rather than cost savings, enabling more fluid working relationships between traditionally separate entities.

One sign of growing maturity in the space is that companies are not as quick to espouse the competitive advantage cloud is assumed to bring. Whereas in 2014 30% of respondents said cloud gave their organisation a “significant” competitive advantage, in 2015 that number dropped to 16%. Most of the disparity is found in those who said cloud gave “a little” competitive advantage – 23% in 2015 compared to 11% in 2014.

Interestingly, as the benefits of cloud computing become more apparent to the C-suite, the number of respondents who admit they don’t know if their organisation has benefitted has dropped (18% in 2014, 11% in 2015). Yet as one respondent put it, “Cloud is no longer a differentiator; however, not being on it would be a significant disadvantage.”

As ever in these instances, security (29%) continues to be top of mind for qualities that matter in a cloud provider. Integration with other systems (26%) was also seen as important by respondents, alongside compliance (18%), long term financial stability (18%), and the ability to provide cloud management capabilities (14%).

Despite this, Siki Giunta, global SVP cloud at Verizon Enterprise, insists security is no longer the barrier to adoption it once was. “Cloud is no longer on the fringes of enterprise IT – but as it matures, there are new obstacles to be faced,” she wrote. “Only now the challenges aren’t about gaining acceptance, but about how to get more from cloud to keep driving growth and profitability.”

You can find the full report here (email required).