Google says trade agreement amendment hinders security vulnerability research

Google says the US DoC amendments would massively hinder its own security research

Google hit out at the US Department of Commerce and the Bureau of Industry and Security this week over proposed amendments to trade legislation related to the Wassenaar Arrangement, a multilateral export control agreement, arguing they will negatively impact cybersecurity vulnerability research.

The Wassenaar Arrangement is a voluntary multilateral agreement between 41 countries intended to control the export of certain “dual use” technologies – including security technologies – and its power depends on each country passing its own legislation to align its trade laws with the agreement. The US is among the agreement’s members.

Since 2013, software specially designed or modified to avoid detection by monitoring tools has been included on that list of technologies. And a recent proposal put forward by the US DoC and BIS to align national legislation with the agreement suggests adding to the list of potentially regulated technologies “systems, equipment, components and software specially designed for the generation, operation or delivery of, or communication with, intrusion software”, including “network penetration testing products that use intrusion software to identify vulnerabilities of computers and network-capable devices”, as well as “technology for the development of intrusion software”, including “proprietary research on the vulnerabilities and exploitation of computers and network-capable devices”.

Google said the US DoC amendments would effectively force it to issue thousands of export licenses just to be able to research and develop potential security vulnerabilities, as companies like Google depend on a massive global pool of talent (hackers) that experiment with or use many of the same technologies the US proposes to regulate.

“We believe that these proposed rules, as currently written, would have a significant negative impact on the open security research community. They would also hamper our ability to defend ourselves, our users, and make the web safer. It would be a disastrous outcome if an export regulation intended to make people more secure resulted in billions of users across the globe becoming persistently less secure,” explained Neil Martin, export compliance counsel at Google Legal, and Tim Willis, hacker philanthropist on the Chrome security team, in a recent blog post.

“Since Google operates in many different countries, the controls could cover our communications about software vulnerabilities, including: emails, code review systems, bug tracking systems, instant messages – even some in-person conversations! BIS’ own FAQ states that information about a vulnerability, including its causes, wouldn’t be controlled, but we believe that it sometimes actually could be controlled information,” the company said.

Google also said the proposed amendment is worded far too vaguely, and called for clarification of both the DoC’s proposals and the Wassenaar Arrangement itself.

“The time and effort it takes to uncover bugs is significant, and the marketplace for these vulnerabilities is competitive. That’s why we provide cash rewards for quality security research that identifies problems in our own products or proactive improvements to open-source products. We’ve paid more than $4 million to researchers from all around the world.”

“If we have information about intrusion software, we should be able to share that with our engineers, no matter where they physically sit,” it said.

SAP’s Q215 financial results: Strong cloud growth alongside profit warnings

It’s that time of year again. Companies are issuing their latest quarterly financial figures, and SAP’s Q215 numbers show continued growth in cloud services, but overall profits remain relatively stagnant.

Cloud subscriptions and support revenue for the German software giant stood at €552 million for the second quarter of 2015, an increase of more than 100% year on year. Software licences and support revenue rose 13% to €3.51 billion. Total revenue hit €4.97bn, up 20%, yet operating profit of €701m represented a mere 1% rise, and profit after tax dropped 16% year on year.

Despite this, the company is reiterating its 2015 business outlook, expecting full-year non-IFRS cloud subscriptions and support revenue of €1.95bn to €2.05bn, full-year non-IFRS cloud and software revenue growth of 8% to 10%, and full-year 2015 non-IFRS operating profit in the range of €5.6bn to €5.9bn.

SAP’s modus operandi, as any regular reader of CloudTech will know, is to migrate its mammoth legacy on-premises software revenues to the cloud. It is by no means the only company attempting that – IBM and Oracle instantly spring to mind – but all three companies are suffering to some degree.

For SAP, everything is going in the right direction; just not at the pace it would like. The company shifted its long-term goals again at the start of this year, lowering its 2017 operating profit target to between €6.3bn and €7bn, from a previous €7.7bn.

All three companies have had the Campaign for Clear Licensing (CCL) on their tails, with the body campaigning against obfuscatory software licensing practices. Oracle was lambasted for its “arms length, impoverished” relationship with customers, while SAP told this publication in February it would “welcome any customer feedback” following the CCL audit.

Bill McDermott, SAP CEO, is unrepentant in his cloud vision. He said: “Our business is thriving because we have the most complete vision for how to make this transition to digital business a simple one. I am confident that our strategy to deliver a platform, applications and business networks is exactly what customers need from SAP.”

The Tour de France and cloud disaster recovery – more in common than you think

As Tour de France fever ratchets up around the world, it’s clear to me that businesses can draw several parallels between professional cycling and their disaster recovery (DR) strategy. If businesses and cyclists plan ahead, it’s possible for both to have a real-time support crew on hand the moment something goes wrong. And, just seconds after it goes wrong, the support crew solves the problem and the cyclist, or business, gets back to what it does best.

Imagine you’re midway through the Tour de France, leading the peloton with a 28-second advantage, when your brakes overheat and your tyre bursts. What next? How do you recover, and can you still win the race?

In a perfect world, your team support crew appears with a new bike – completely identical to your old one – and five seconds later you’re back on the seat and still comfortably in the lead. Winning a stage of the Tour de France usually comes down to a few seconds, sometimes milliseconds. You just don’t have time to pull over and replace your tyre.

Shifting data recovery into gear

The first, most obvious point is achieving 100% uptime. No cyclist or business can afford downtime when the shredded-tyre moment arrives – for a business, most often in the form of a failed production server.

As any glance at the news will tell you, enterprises of all sizes are at risk of data loss due to disasters. The problem is that, like blown tyres, data disasters are unplanned and unexpected – and usually aren’t caused by Mother Nature. Although advance planning cannot eliminate or prevent an unexpected event, such as an attack from a malicious hacker, it can provide an edge in overcoming the long-term consequences of a disaster, like lost sales information and damaged internal records. To stay upright in the peloton, you have to look ahead and watch out for sharp corners and potholes, as it’s a bad idea to put too much trust in the rider in front of you.

For your business, a DR plan should have three primary goals, ensuring your business continuity/DR plan is built for speed, agility and endurance. First, it should be designed to protect all of your files and records, including the physical and virtual servers themselves. Second, the plan should provide a framework for quickly retrieving information and virtually replicating your business, allowing your operations to continue at a new location if necessary. Third, because DR infrastructure so often sits underutilised, in these times of tight budgets and staffing it’s critical to get more value out of your DR strategy even when you’re not experiencing downtime.

To achieve these goals, the organisation must have technology in place to act as the support crew in the event of a disaster. A hybrid cloud or disaster recovery as a service (DRaaS) approach is rapidly becoming an effective choice. Because DRaaS does away with the physical infrastructure and configuration synchronisation associated with traditional disaster recovery, it’s a flexible option. A hybrid cloud-based solution combines on-premises hardware, public cloud and SaaS automation software to make continuity planning easier than ever. The DR cloud provides data backup, failover of servers and the ability to have a secondary data centre at a different site, allowing for regional disaster recovery.
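To make that failover idea concrete, here is a minimal sketch of the health-check-and-promote loop at the heart of such a service. It is purely illustrative: the `draas_client` module, and every call on it, is a hypothetical stand-in for whatever API a real DRaaS provider exposes.

```python
# Illustrative sketch only: draas_client is a hypothetical SDK standing
# in for a real DRaaS provider's API; no specific product is implied.
import time

import draas_client  # hypothetical module

PRIMARY = "app-server-01.example.com"  # on-premises production server
CHECK_INTERVAL_SECONDS = 30

def primary_is_healthy() -> bool:
    """Probe the on-premises production server."""
    try:
        return draas_client.ping(PRIMARY, timeout=5)
    except draas_client.ProbeError:
        return False

# Watch the primary until it has its "shredded tyre" moment...
while primary_is_healthy():
    time.sleep(CHECK_INTERVAL_SECONDS)

# ...then promote the replica kept in the provider's cloud and point
# traffic at it, so the business keeps running while the primary is fixed.
replica = draas_client.get_replica(PRIMARY)
replica.promote()  # bring the cloud copy online
draas_client.repoint_dns(PRIMARY, replica.address)
```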

Test recovery capabilities

Cyclists would never consider entering a race without first going on at least one training ride, and they typically ride segments multiple times, learning the curves and changes in elevation. The same approach should apply to a continuity plan – don’t wait until an unplanned outage to see how things unfold; test your DR plan first.

With DRaaS solutions, there is also computing capacity on standby to recover applications if there is a disaster. This can be tested easily without impacting the production servers or unsettling the daily business routine. A so-called ‘sandbox’ copy is created in the cloud, accessible only by the system administrator. These copies are created on demand, paid for only while in use, and deleted once the test is complete. This makes testing simple and cost effective, and does not disrupt the business.
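That create-test-destroy lifecycle is easy to picture in code. The sketch below reuses the same hypothetical `draas_client` stand-in from earlier; the method names are invented for illustration, not taken from any real product.

```python
# Illustrative sketch only: draas_client and its methods are hypothetical.
import draas_client  # hypothetical module

# 1. Create an isolated sandbox copy of the protected servers on demand.
sandbox = draas_client.create_sandbox(servers=["app-server-01", "db-server-01"])

try:
    # 2. Run recovery checks against the copy; production is untouched.
    for vm in sandbox.virtual_machines():
        if not vm.boots():
            raise RuntimeError(f"{vm.name} failed to boot in the sandbox")
    print("Recovery test passed")
finally:
    # 3. Destroy the sandbox so billing stops as soon as the test ends.
    sandbox.destroy()
```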

There’s a cadence to pedalling your bike and to conducting live DR testing.  You can test recovery, software updates and server configuration changes every day without missing a beat. Test cases can be performed against the recovery systems in as little as 15 minutes depending on the application, often with no incremental costs. Applications and services are immediately available for other uses, enabling businesses to efficiently adopt cloud infrastructure or speed time to production for new initiatives.

Creating business value beyond disaster recovery

Truly elite bicycling teams take a thoughtful approach to having the proper equipment and components on hand based on the circumstances of a particular race. They don’t waste resources or space on items not critical for competing with the pack.

Beyond the operational advantages, there are financial benefits to cloud-based testing. Service providers regularly offer sliding scales for DR testing, and disaster recovery on demand allows businesses to pay only for what they need, when they need it. Putting your DR solution in the cloud also means there isn’t a redundant in-house infrastructure sitting unused most of the time. You can prioritise recovery based on the level of protection you want for each server, without wasting time and money.

Another challenging part of a DR plan is ensuring employees know what to do if an outage occurs. People learn by repetition, so just like learning how to overcome a shredded tyre on a mountainside, practice DR drills are critical to a DR plan. Companies that don’t run these regularly should not be shocked if their employees panic and fail to respond appropriately when a server goes down. Yet you will still find plenty of companies with self-hosted DR simply hoping for the best.

Wearing the yellow jersey

To win the Tour de France you have to attack every stage, every hill and every time trial. Likewise, it’s clear that firms would be foolish not to protect themselves against data loss. In fact, most CFOs and IT leaders understand the need for disaster preparedness but have previously found it difficult to formulate a DR plan.

The main barriers to implementation are now being broken down by DRaaS. It not only addresses recovery plan goals, it also supports regular testing without the traditional overhead costs and logistical nightmares. You can protect your company against data loss and have peace of mind that whenever you need to implement a new business application or process, it’ll work the first time, every time.  But you must plan well, keeping your operations out of the wind until you need to recover from a disaster, without getting boxed in by traditional backup and recovery methods that just don’t cut it anymore.  

So be smart with your DR plan, equip your team with technology that can deliver real-time recovery fast, and you’ll have a clear shot at wearing the yellow jersey.

Accenture: For most enterprises, IT-as-a-Service will have to wait

Enterprises are slow to adopt ITaaS

Enterprises are looking to adopt IT-as-a-service (ITaaS) models and modernise their digital systems in a bid to become more competitive, but recently published research suggests most aren’t budging on their existing strategies. Michael Corcoran, senior managing director, global growth and strategy at Accenture, the firm that commissioned the research, told BCN that leaning more on cloud services, using analytics and becoming more automated could help them speed up the transition.

The transition to ITaaS is up there with DevOps and Agile when it comes to cultural and organisational modernisation and service improvement. It implies IT moving from being a monolithic procurement centre to a dynamic internal service provider, something most big organisations need to do in order to more effectively compete in digital.

Accenture and HfS Research surveyed 716 enterprise service buyers and found that 53 per cent of senior executives view ITaaS as critical for their organisation, yet 68 per cent of respondents said their core enterprise processes will not be delivered as-a-service for five or more years.

The research suggests this may be partly due to differing opinions or objectives within the organisations polled. More than half of service buyers’ senior leaders view aaS as critical, and 61 per cent are ready to replace legacy providers in order to achieve their desired outcomes. But the same can’t be said for middle managers and delivery staff: just 29 per cent see the value of aaS in the same way.

“Many enterprise operations executives and service providers must make intrinsic changes to how they operate to stay relevant in an uncertain and challenging future,” said Phil Fersht, chief executive and founder, HfS Research. “It’s the forward-thinking service buyers and providers who set out their vision and path forward for sourcing with defined business outcomes aligned to the as-a-service ideals, that will achieve success. The conservative among us who refuse to accept these times of unprecedented, disruptive transition will be competitively challenged.”

Corcoran told BCN that much of the onus is on service providers, which need to invest in developing as-a-service capabilities. But enterprises also need to deploy the right mix of technologies and invest in the right skills to make the transition happen.

“By effectively moving to the cloud and applying the right digital technology, automation, artificial intelligence and analytics to unlock competitive advantage from data, and utilizing talent smartly, companies are in a better position to innovate faster, create new services and drive business outcomes that positively impact their top and bottom-line,” Corcoran explained.

“49 per cent of today’s enterprise buyers expect to move to a “wide-scale transformation of business processes enabled by new technology tools/platforms” in just two years. So it’s clear that many operational leaders are recognizing the need to steer their enterprises away from legacy delivery models and move towards the cloud and its material business outcomes.”

Installing Microsoft Office and Other Third-Party Software in Parallels Desktop

Guest blog by Manoj Raghu, Parallels Support Team. In previous blogs, we talked about setting up your Windows virtual machine, tuning it and using advanced functionality. Now let’s take a look at installing Windows-based programs in a Parallels Desktop VM. Although this process is pretty similar to installing programs on a PC, there are a […]

Unveiling Parallels Mac Management 4.0 at MacIT

Last week, our Parallels Business Solutions team had a blast attending MacIT—that’s right, three days of discussing our favorite topic, OS X for organizations, with over 600 attendees. In addition to shoptalk, we also got to unveil Parallels Mac Management 4.0! Just check out some of our highlights: Stop by booth 101 at #macITchat & tell us how you manage […]

ComputeNext Launches CloudEd

ComputeNext, a Bellevue-based cloud company, has announced the launch of its new Channel Training & Education Program, “CloudED.” The program will help cloud resellers migrate customers to cloud-based solutions. Created by ComputeNext’s director of channel development, Dan Moore, it provides marketplace-specific video tutorials for channel professionals as well as general cloud certification from partner CompTIA, free of charge. The certifications include certificates in data recovery, IT security and unified communications, as well as executive certificates in cloud computing at both beginner and advanced levels.

ComputeNext claims the program “covers basic and advanced rationales for the cloud computing thought process, as well as hands-on tutorial training on the ComputeNext Marketplace itself.” The program was designed to help resellers meet the challenge of keeping revenues up while transitioning customers from traditional, physical IT to newer cloud services.

Moore also said there is “a huge need for education and training resources that can better equip these organizations to lead cloud conversations, understand the larger market dynamics of cloud computing, and ultimately exude a sense of trustworthiness to their clients. Unfortunately, the plethora of platform & technology vendors respective to cloud services coupled with the barrage of marketing information about ‘going to the cloud’ can be very daunting and confusing as to what your first steps should be.”

The company’s Marketplace is a platform that allows users to edit and manage cloud services from a multitude of cloud hosting providers around the world.

[slides] What, Why & How of IaaS & PaaS By @YungChou | @CloudExpo #Cloud

The essence of cloud computing is that all consumable IT resources are delivered as services.
In his session at 15th Cloud Expo, Yung Chou, Technology Evangelist at Microsoft, demonstrated the concepts and implementations of two important cloud computing deliveries: Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). He discussed from business and technical viewpoints what exactly they are, why we care, how and in what ways they differ, and the strategies for IT to transition into and take advantage of these emerging service models.

Unlike Ashley Madison, How to Avoid Baring It All By @IanKhanLive | @CloudExpo #Cloud

Today’s case of Ashley Madison getting hacked and literally being held to ransom is a classic case of something not very new, but something we need to look at with a fresh set of eyes. It’s not all the trouble their customers will get into that I’m talking about, but the mere corporate nightmare of having your entire customer data leaked. Today it’s one organization; who knows who is next tomorrow. Want to know how to avoid getting caught with your pants down? Read on.

CenturyLink open sources more cloud tech

CenturyLink has open sourced a batch of cloud tools

CenturyLink has open sourced a number of tools aimed at improving provisioning for Chef on VMware infrastructure as well as Docker deployment, orchestration and monitoring.

The projects open sourced by the company include a Chef provisioning driver for vSphere; Lorry.io, a tool for creating, composing and validating Docker images; and imagelayers.io, a tool that improves Docker image visualisation to give developers more visibility into their workloads.
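For a sense of what that kind of layer-level visibility looks like, the short sketch below lists an image’s layers and sizes using the standard Docker SDK for Python. It is a generic illustration, not CenturyLink’s tooling; it assumes Docker is running locally and the image has already been pulled.

```python
# Generic illustration of Docker image layer inspection (standard
# Docker SDK for Python; not imagelayers.io or other CenturyLink tools).
import docker

client = docker.from_env()  # connect to the local Docker daemon

def print_layers(image_name: str) -> None:
    """Print each layer of a local image with its size and the
    Dockerfile instruction that created it."""
    image = client.images.get(image_name)
    total = 0
    for layer in image.history():  # newest layer first
        size = layer.get("Size", 0)
        total += size
        created_by = (layer.get("CreatedBy") or "").strip()[:60]
        print(f"{size / 1e6:8.1f} MB  {created_by}")
    print(f"{total / 1e6:8.1f} MB  total")

print_layers("ubuntu:latest")
```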

“The embrace of open-source technologies within the enterprise continues to rise, and we are proud to be huge open-source advocates and contributors at CenturyLink,” said Jared Wray, senior vice president of platforms at CenturyLink.

“We believe it’s critical to be active in the open-source community, building flexible and feature-rich tools that enable new possibilities for developers.”

While CenturyLink’s cloud platform is proprietary and developed in-house, Wray has repeatedly said open source technologies form an essential part of the cloud ecosystem – Wray himself was a big contributor to Cloud Foundry, the open source PaaS tool, when developing Iron Foundry.

The company has previously open sourced other tools, too. Last summer it punted a Docker management platform it calls Panamax into the open source world – a platform designed to ease the development and deployment of any application sitting within a Docker environment. It has also open sourced a number of tools designed to help developers assess the total cost of ownership of multiple cloud platforms.