How to Approach Application Failures in Production

In my recent article, “Software Quality Metrics for your Continuous Delivery Pipeline – Part III – Logging,” I wrote about the good parts and the not-so-good parts of logging and concluded that logging usually fails to deliver what it is so often mistakenly used for: a mechanism for analyzing application failures in production. In response to the heated debates on reddit.com/r/devops and reddit.com/r/programming, I want to demonstrate the wealth of out-of-the-box insights you can obtain from a single urgent, albeit unspecific, log message if only you are equipped with the magic ingredient: full transaction context.
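The idea is easier to see with a concrete, if simplified, illustration. The sketch below is only a hand-rolled approximation of transaction context in plain Python logging, using a correlation ID; the names (tx_id, handle_request) are invented, and a real APM tool captures far richer context automatically.

```python
# Minimal sketch: stamp every log record with a transaction ID so that one
# urgent, unspecific message can be traced back to its full request context.
import logging
import uuid

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s tx=%(tx_id)s %(message)s",
)
log = logging.getLogger("app")

def handle_request(payload):
    # LoggerAdapter injects the per-transaction ID into every record it emits.
    tx_log = logging.LoggerAdapter(log, {"tx_id": uuid.uuid4().hex[:8]})
    tx_log.info("request received: %s", payload)
    try:
        1 / 0  # stand-in for the failing operation
    except ZeroDivisionError:
        # On its own this message is unspecific; the tx ID ties it to context.
        tx_log.error("operation failed", exc_info=True)

handle_request({"user": "demo"})
```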


Moving to the Cloud: Lessons from Jason Segel & Cameron Diaz

By Ben Stephenson, Journey to the Cloud


As you probably know, Jason Segel and Cameron Diaz recently came out with a new movie called “Sex Tape.” In the movie, Segel and Diaz play a couple who decide to make an adult home movie, and it accidentally gets released online.

I saw the trailer a couple of weeks ago and one clip from it grabbed my attention. After Jason and Cameron realize the tape has been released, they start to panic. The clip shows the two driving in a frenzy talking about how the tape got released (you can watch the clip here).


Cameron: “How do you forget to delete your sex tape?”

Jason: “It kept slipping my mind, and then the next thing I knew it went up…it went up to the cloud.”

Cameron: “And you can’t get it down from the cloud?”

Jason: “Nobody understands the cloud. It’s a mystery.”


This got me thinking about the haste with which some companies move to the cloud without fully understanding the consequences. IT decision makers are under increasing pressure from CEOs and CFOs to capture the benefits of the cloud. If your IT department doesn’t have a well-thought-out strategy, however, chances are you’re not going to be successful. You don’t want to move to the cloud for the sake of moving to the cloud and then be in a frenzy if something goes wrong. I spoke with one of our bloggers and cloud experts, John Dixon, to get his take on what organizations need to consider before deciding to move to the cloud. Here’s what John had to say:

Not everything is a great fit for cloud

Have some high-end CAD desktops? Not a great fit for cloud (at least right now). Just set up a new ERP system on new hardware? Not a great fit for cloud at the moment. Testing some new functionality on your website, or a new brand entirely? Now we’re talking — set up the infrastructure, test the market, scale up if needed, take everything down if needed.

Benefits of cloud are potentially huge, but hard to measure

Back in the datacenter consolidation days, the ROI of a virtualization project was dead easy to demonstrate. Consolidate at least 10 physical servers down to 1, and you had instant savings in power, cooling, floor space, administrative burden, etc. Some nice features came from having virtualized infrastructure, like the ability to provision servers from templates, easier DR, etc. Infrastructure teams were the big winners. In short, you could easily calculate the financial benefit of virtualizing servers. With cloud, this is not the case. If you look closely, there is limited benefit in just “moving” servers to the cloud. In fact, it may cost you more to host servers in the cloud than it does in your physical datacenter. However, IaaS clouds allow you to do things that you couldn’t do on your own. A pharmaceutical company can “rent” 10,000 servers to run a 2-day simulation; an online school can build infrastructure in a cloud datacenter in Australia to test a new market in APAC; a startup can use cloud to run ALL of its technical services (with no capital investment). In short, you’ve got to understand your existing costs, your use cases, and the benefits you are seeking. Jumping into the cloud without this mindset may compromise the benefits of doing so.
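To make the cost argument concrete, here is a deliberately crude back-of-the-envelope sketch of the “rent 10,000 servers for a 2-day simulation” case; every price in it is a made-up placeholder, not a real quote.

```python
# Hypothetical burst-compute comparison; substitute your own provider quotes.
CLOUD_RATE_PER_SERVER_HOUR = 0.10   # assumed IaaS hourly price, USD
SERVERS, HOURS = 10_000, 48         # the 2-day simulation from the text

burst_cost = CLOUD_RATE_PER_SERVER_HOUR * SERVERS * HOURS
print(f"Cloud burst: ${burst_cost:,.0f} for {SERVERS:,} servers over {HOURS}h")

# Owning equivalent capacity: assume $3,000 per server, bought outright and
# idle for the other ~363 days of the year.
owned_cost = 3_000 * SERVERS
print(f"Buying outright: ${owned_cost:,.0f}")
```

The exact numbers are invented, but the shape of the exercise is the point: the same workload that is trivial to justify in the cloud is almost impossible to justify as a capital purchase.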

Portability or optimization? Especially in Amazon Web Services

As of now, you can’t have both. Choose wisely. Optimizing your application for the cloud (for example, in AWS) by making use of RDS, SNS, SQS, CloudFormation, Auto Scaling, CloudWatch, etc. can deliver some amazing benefits in terms of scalability, supportability, and reliability. However, doing so destroys portability and any hope of bringing that application back in house. On the flip side, VMware vCHS offers awesome portability, but the opportunities for optimization are fewer.
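A small sketch of what that trade-off looks like in code, assuming the boto3 SDK and a placeholder queue URL: the direct SQS call couples the application to AWS, while the thin interface underneath it is one (hypothetical) way to preserve a path back in house.

```python
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

# Optimized for AWS: simple and reliable, but AWS-specific.
def send_order_aws(body: str) -> None:
    boto3.client("sqs").send_message(QueueUrl=QUEUE_URL, MessageBody=body)

# Portable: application code depends on an abstract queue, not on SQS.
class Queue:
    def send(self, body: str) -> None:
        raise NotImplementedError

class SqsQueue(Queue):
    def __init__(self, url: str):
        self._sqs = boto3.client("sqs")
        self._url = url

    def send(self, body: str) -> None:
        self._sqs.send_message(QueueUrl=self._url, MessageBody=body)

# A RabbitMQ- or Redis-backed Queue could later replace SqsQueue without
# touching any caller of Queue.send() -- at the cost of forgoing deeper
# AWS-native integration (SNS fan-out, CloudWatch alarms, and so on).
```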


So, lessons for the kids out there? Don’t upload certain home videos online, and have a well-thought-out cloud strategy before jumping into anything you may regret.

Oh, and Jason… it’s borderline offensive when you say “nobody understands the cloud”… because GreenPages gets it…


If you would like to hear more from John, download his ebook on the evolution of the corporate IT department.



OpenNebula Partners with Microsoft to Enable Hybrid Clouds with Azure

As a result of the collaboration between OpenNebula and Microsoft, a new set of plug-ins to support Microsoft Azure has been included in OpenNebula. The partnership was announced today by Microsoft Open Technologies at the O’Reilly Open Source Convention (OSCON).
“With this set of plug-ins, IT pros and system integrators can use OpenNebula’s rich set of infrastructure management tools to manage cloud deployments across Microsoft’s private, public and hosted cloud platforms.”


Test All Apps to Keep Hackers from Penetrating Castle Walls

Despite all the news about hackers infiltrating major corporations, most businesses continue to leave themselves woefully unprotected. Some surveys estimate more than 70% of businesses perform vulnerability tests on less than 10% of their cloud, mobile and web applications. A majority also confess they have been hacked at least once in the last two years.
While most large businesses have begun application vulnerability testing, there is still a long way to go. After all, you are only as strong as your weakest link; hackers will undoubtedly find and attack any application without sufficient defenses.
Although testing and protecting high-value and mission-critical applications is better than doing nothing at all, leaving low-priority applications unprotected is still a major risk. If hackers can exploit just one application, they can access the rest of your infrastructure, and they’ll eventually figure out a way to attack your high-value applications as well.
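The weakest-link arithmetic is simple enough to sketch; the inventory below is invented purely for illustration.

```python
# Toy application inventory: testing only the high-value apps still leaves
# an exposed entry point into the shared infrastructure.
apps = [
    {"name": "billing", "value": "high", "vuln_tested": True},
    {"name": "careers", "value": "low",  "vuln_tested": False},
    {"name": "store",   "value": "high", "vuln_tested": True},
]

untested = [a["name"] for a in apps if not a["vuln_tested"]]
coverage = 100 * sum(a["vuln_tested"] for a in apps) / len(apps)
print(f"Tested: {coverage:.0f}% of portfolio; exposed entry points: {untested}")
```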


VMware opens up second UK data centre, sees customer demand

Virtualisation giant VMware has opened a second UK data centre, with a new site in Chessington, London, adding to the current data centre in Slough.

The move is evidently designed to serve UK and EMEA customers who want quicker access to their data, and comes a few months after VMware launched a hybrid disaster recovery service on vCloud Hybrid Service (vCHS), providing a continuously available recovery platform if a data centre goes down.

Gavin Jackson, general manager and VP cloud services EMEA, said: “Our customers have started off using our hybrid service on specific projects for the easy, affordable and seamless movement of workloads between private and public cloud, knowing they can move their legacy and new applications back and forth with ease.

“Now they have seen what VMware vCloud Hybrid Service can enable, they’re turning to it to power their strategic transformation programmes.”

According to VMware, more than 800 EMEA-based individuals at partner organisations have been accredited in its vCloud Hybrid Service since it launched in the UK.

Jackson said in a Q&A blog post on the VMware site that the hybrid cloud service is so popular as it’s “the answer to the cloud dilemma.”

“Businesses are challenged in moving their apps to the public cloud because of the perceived problems it can pose,” he said. “vCHS means you can scale to the cloud without risk, and it’s an example of where highly innovative and trustworthy technology is meeting the needs of increasingly demanding companies and truly responding to business problems.”

It is a bit gushy, yes, but opening up UK data centres and catering to clients’ demand for more bespoke, hybrid options seems to be de rigueur with cloud vendors. SoftLayer opened the doors to a UK data centre last month, with further expansion expected in Central Europe.

VMware isn’t the only vendor to advocate hybrid, either. Rackspace unveiled its Managed Cloud offering, which lets customers run a variety of public, private or on-prem environments, either managed by Rackspace or DIY, while one of IBM’s latest product releases, Cloud Modular Management, offers similar capability.

Monitor App Performance Early and Often

“Vote early and vote often.” Back in the 1920s and ’30s, when neither election technology nor oversight were as effective as they are today, and the likes of Al Capone were at work gaming the system, this phrase wasn’t a joke. It was a best practice.
If you want guaranteed results, what better way than to get people to the polls early, and then repeatedly, to vote for your candidate?
None of this sitting around until the end of the day, hoping that the election goes the way you want. Capone would tell you, “That’s for saps.”
What does this have to do with cloud computing? All too often we see IT teams taking a “buy it and hope it works” approach when it comes to adopting cloud-based apps. They migrate their entire user base to the cloud on faith, assuming that they can worry about performance and availability issues later, if ever. After all, everybody in the company accesses the Internet today without issues, so your cloud apps should work just fine, right?
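A minimal sketch of “monitoring early and often,” assuming nothing more than the requests library, a placeholder health endpoint, and an invented latency budget; a real deployment would use a proper monitoring or APM service rather than a loop like this.

```python
import time
import requests

URL = "https://app.example.com/health"  # placeholder endpoint
THRESHOLD_S = 1.0                       # hypothetical latency budget

def probe() -> None:
    start = time.monotonic()
    try:
        resp = requests.get(URL, timeout=5)
        elapsed = time.monotonic() - start
        status = "SLOW" if elapsed > THRESHOLD_S else "OK"
        print(f"{status} {resp.status_code} {elapsed:.2f}s")
    except requests.RequestException as exc:
        print(f"DOWN {exc}")

while True:  # vote early, vote often
    probe()
    time.sleep(60)
```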


Cloud migration best practices for law firms

By David Linthicum

Legal IT Professionals’ online survey of its readership presented a split decision about a move to the cloud. “The online news publication covering international legal information technology asked readers: ‘If your law firm’s management asked for your advice regarding moving key applications to the cloud, would you be in favour of this strategy?’

The 438 responses from legal information technology staff, lawyers and paralegals were nearly split down the middle, with 46% opposing and 45% in favour, while 9% had no opinion.” The complete survey report can be found here.

The participants in this 2013 survey might be a bit more cloud-oriented these days, as more law firms find a new home for IT in the cloud. Overall, however, law firms continue to balk at the idea of moving to the cloud because they do not know how to take the first steps. Moreover, as stated by the law firms’ IT staff who responded to the survey, their firms would require new skills to transition to, manage, and support the new cloud services.

The cloud migration path for law firms is not unlike that of other small businesses, with a few more issues to deal with around privacy, compliance, and governance. Here is a quick process that most law firms should consider:

First, assess the mission of the core legal practice. What are the specialty areas? International, family, tax, criminal, patent? In many cases, the practice covers several areas. Understand the patterns of security and the patterns of governance that are required, such as rules and regulations around how client data should be handled, and even ethical requirements. Can data be stored in other states or even other countries? Cloud providers have servers everywhere, but many will isolate data geographically, if necessary. What level of security is a legal requirement?

With the cloud, the available security capabilities and the legal security requirements continue to evolve. Even those skilled in the law often don’t understand the current state of these regulations. Create a security and governance plan from this effort.

Second, assess the existing practice management systems, and get a good understanding of the ongoing operational costs. In many instances, systems that run in the legal office’s data center carry a huge cost that most of those who manage the practice don’t really understand. You need to figure out that cost to see if the cloud will be an improvement. This information will allow you to pick the data and applications that are good candidates to relocate to the cloud. Create business cases and a systems migration priority list from this work.
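One hedged way to turn that cost work into a priority list, with every figure invented for illustration:

```python
# Rank applications by estimated monthly savings from a cloud move.
apps = {
    "document_mgmt": {"onprem_monthly": 4_200, "cloud_monthly": 2_900},
    "billing":       {"onprem_monthly": 3_100, "cloud_monthly": 3_400},
    "email":         {"onprem_monthly": 2_000, "cloud_monthly":   900},
}

ranked = sorted(
    apps.items(),
    key=lambda kv: kv[1]["onprem_monthly"] - kv[1]["cloud_monthly"],
    reverse=True,
)

for name, cost in ranked:
    savings = cost["onprem_monthly"] - cost["cloud_monthly"]
    verdict = "migrate" if savings > 0 else "keep on-prem (for now)"
    print(f"{name}: ${savings:+,}/month -> {verdict}")
```

Cost is only one axis, of course; the security and governance findings from the first step should be able to veto anything on this list.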

Third, create a migration plan covering the applications and data sets that will be more cost-effective when run from a public cloud. Once you understand the specific applications and the data, figure out how those applications and data will migrate.

In some cases, the applications are packaged and you simply want a SaaS version of the same software, or an alternative product that provides a SaaS version. In the case of data, you need to find analogs in the public cloud, including the same versions of the database running in a public cloud (e.g., Oracle), or opportunities to leverage more purpose-built databases that may provide higher performance and lower costs, such as NoSQL databases.

Finally, execute the plan and include a stepwise path to migrate some of the applications and some of the data.  Start slowly.  Consider security and governance at each step, and make sure that the migration efforts align with the needs of the users in the practice.

Cloud computing is still a little scary to those who work in law offices, but the most innovative and fastest-growing practices are moving to the cloud. The move is partly about cost but, more and more, it’s about the practice’s need to grow quickly, without limitations from IT.


Demand Revs Up Avaya’s Fast-Track to the Cloud Initiative

Avaya has escalated its efforts to get customers to fast-track to the cloud with the first of several sweeping initiatives that will optimize solutions and processes for delivery of collaboration as a service (CaaS) via the Avaya Collaborative Cloud. Strong demand for Avaya communications and collaboration applications from large and midsize enterprises is driving a focus on expanded scale and reach as well as simpler, faster provisioning that requires fewer resources.
Avaya collaboration solutions, including unified communications, video and contact center via the cloud, offer a new path to greater business agility. While many other areas of IT have successfully adopted cloud-based solutions, the communications infrastructure is one of the last remaining hold-outs due to complexity and concerns around reliability, security and privacy. As the business environment changes, however, new communications solutions and delivery models must support the mobile, flexible workforce that characterizes today’s rapidly changing economic landscape. The Avaya Collaborative Cloud fills this need.


Toward a More Confident Cloud Security Strategy

The cloud has hit the mainstream. Businesses in the United States currently spend more than $13 billion on cloud computing and managed hosting services, and Gartner projects that by 2015, end-user spending on cloud services could be more than $180 billion worldwide. It is estimated that 50 percent of organizations will require employees to use their own devices by 2017, a shift that will depend on shared cloud storage. All of this requires encryption.
Organizational deployment of encryption has increased significantly in recent years. Its use spans encrypting data in databases and file systems, in storage networks, on back-up tapes, and in transit over public and internal networks. Although this might suggest that we are moving in the right direction when it comes to enterprise data protection, there’s a real risk of creating fragmentation and inconsistency – referred to as encryption sprawl – as different organizations deploy diverse technologies in different places to secure different types of data. Adding fuel to the fire, the cloud poses its own unique threats and challenges. With an undeniable value proposition, it seems clear that the cloud is inevitable and that protecting data within it will be a top priority.
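One common antidote to sprawl is to route all encryption through a single, centrally managed service. The sketch below is a minimal illustration using the widely available cryptography package; key storage, rotation, and access control are deliberately out of scope.

```python
from cryptography.fernet import Fernet

class EncryptionService:
    """One place for encrypt/decrypt, so keys and policy stay consistent
    across databases, file systems, backups, and network transfers."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)

    def encrypt(self, plaintext: bytes) -> bytes:
        return self._fernet.encrypt(plaintext)

    def decrypt(self, token: bytes) -> bytes:
        return self._fernet.decrypt(token)

# In practice the key would come from a KMS or HSM, not be generated inline.
svc = EncryptionService(Fernet.generate_key())
token = svc.encrypt(b"client record")
assert svc.decrypt(token) == b"client record"
```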


Why physical security is essential to combating the ever present and growing threat to data centres

Nick Razey, CEO and Co-founder of Next Generation Data

Although the data centre is recognised as important, its absolute criticality is often underestimated, possibly because of its relative cost in comparison to other elements of the IT stack. While a rack footprint might cost £10K pa, the hardware might cost over £50K and the managed service £100K.

However, if the data centre fails, the lost business can reach millions of pounds per day. So, while the data centre is considered the least important element when it is working, it immediately becomes the most important element if it fails.
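Putting the article’s own figures into a quick comparison (the amortisation period and the outage loss are assumptions, standing in for “millions of pounds per day”):

```python
# Annualised running cost vs. one day of downtime, using the figures above.
rack_pa, hardware, managed_pa = 10_000, 50_000, 100_000  # GBP, from the text
outage_loss_per_day = 2_000_000                          # assumed "millions"

yearly_cost = rack_pa + managed_pa + hardware / 3        # hardware over ~3 yrs
print(f"Running cost: ~£{yearly_cost:,.0f}/year")
print(f"One day down: £{outage_loss_per_day:,} "
      f"(~{outage_loss_per_day / yearly_cost:.0f}x the yearly stack cost)")
```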

This is of course recognised by data centre designers, who focus on building a resilient mechanical and electrical infrastructure – normally with the objective of achieving “concurrent maintainability” – but security measures rarely receive a comparable focus.

Targeting physical infrastructure is commonplace in times of war. Ports, airports and road arteries have been the traditional targets of those seeking to disrupt and destroy.


In today’s world, where data is the oxygen that drives almost everything we do and there is hardly a business in the land without some form of information technology at its core, it is unsurprising that the locations which store active data, the data centres, are increasingly recognised as being under threat.

In their research paper Predicts 2013: Infrastructure Services Threatened as New Structural, Political, Competitive and Commercial Challenges Emerge, Gartner note that the flip side of greatly enhanced cyber security through the use of mind-bending algorithms is that it pushes those with an axe to grind to consider a plan B where a physical assault makes the most sense. The London riots of 2011 might have created just such a spark had the rioters wished to target a particular company or government department on which to vent their frustration.

Gartner’s report forewarns senior IT decision makers that by 2016 it is likely government regulation will dictate minimum levels of physical security for data centre infrastructure. No longer will it be acceptable to store data in facilities that lack rigorous security protocols, so executives must prepare now for this change.

Given all this, the current lack of focus on the physical aspects of security is somewhat surprising: not only have we seen the destruction of the World Trade Center (which housed many data centres) and the London bombings of 2005, but, as Gartner point out, going back further there is an even more pertinent example:

The bombing of the financial district of London’s Docklands in February 1996 demonstrated the vulnerability of data centre buildings and surrounding areas to major disruption caused by a terrorist bomb.

The emphasis that most data centre operators have placed on London is understandable. The vast majority of companies have their headquarters in London, and the financial sector is almost exclusively based there. In the days when IT was unreliable and communication links were expensive, it made sense to house equipment close to the main office – to be a “server hugger”. The focus on maintaining cheaper communications links meant that companies’ IT equipment began to cluster together in a few mega data centres in London’s Docklands. However, this clustering has created a concentration of risk, as Gartner point out:

As large data centers serve more and more end-user organizations, their potential as a target for criminal and terrorist activity increases.

And according to Gartner it is also the risk to critical infrastructure that means the Government might be required to legislate:

“Targeting critical infrastructure is a well-established strategy in times of war, and by terrorists (international and domestic alike). Data centers are critical infrastructure for the effective operations of most world economies. Information security measures continue to make ‘getting inside’ harder for those with malicious intentions, thus requiring a reversion to the oldest form of gaining access — kick down the door!


“Such actions could be taken by disenfranchised domestic groups, contracted corporate espionage agents, or even the result of international conflict. Although the actual success of such efforts may be limited, the mere attempt will be enough to cause many governments to recognize data centers as being vital critical infrastructure requiring regulations potentially on par with those found in nuclear facilities.”

Fortunately, thanks to increasingly sophisticated remote control and monitoring, and cheap fibre, the data centre no longer needs to be tightly shackled to the head office. Responsible CIOs should therefore focus on priorities other than convenience and consider data centre locations that are more secure than central London.

So why is a more remote, out-of-town location more secure? Firstly, good security requires space – space enough for double-layer fencing, space enough for the data centre building to be at least 25m from the road. In congested and expensive London this is very difficult to achieve, whereas NGD’s facility in South Wales, for example, has a 25-acre site with military-grade fences. Secondly, a secure site should have a low footfall – remote from highly populated focal points for crowd unrest or riots, a location where unwanted strangers are easy to spot. And of course, the data centre should not be in the vicinity of natural terrorist targets such as Canary Wharf, flight paths or flood plains.

In summary, the UK government has already begun to recognise the growing security threat facing data centres. In order to minimise risk, buyers will in turn need to assess physical security capabilities far more rigorously than at present and look toward the selection of redundant delivery centres that are more geographically dispersed.