How Confused.com enabled staff to take ownership of its digital transformation


Bobby Hellard

31 Oct, 2018

Despite being an internet company, Confused.com still needed to undergo a digital transformation.

Despite the rapid rise of AI and cloud computing, the 16-year-old price comparison website only adopted cloud services five years ago. And the tricky part was not introducing new technology to the business; it was introducing it to the workforce.

“We have always been tech geeks at heart and we make sure we are using technology to help customers,” says Louise O’Shea, Confused.com’s CEO. “I hate this idea that if we have contractors in or we outsource something, then where is the knowledge share? Where is the ownership?

“You want your team that’s with you every day to feel proper ownership of what is going on, you don’t want them to think ‘oh that’s something that is going on over there and I don’t need to know about it’ or think that someone else is doing all the sexy exciting things. That’s not fair.”

Those doing the “sexy exciting” things were engineers from Microsoft Azure, who helped migrate the comparison site’s data to the cloud. But rather than gawp in amazement from afar, or simply accept that the new tech would be Microsoft’s problem, O’Shea and the Confused.com management team wanted the staff to take ownership, and sought to educate them.

Confused.com is a small organisation, employing 200 people, all of whom work in Cardiff, South Wales. Only a third of its staff have technology-based roles, and earlier this year the company launched a school of tech.

“I was very clear with the staff,” says O’Shea. “I said: ‘look, as an employer, it is my responsibility to make sure you guys are educated in technology because it’s changing what we do as a business and changing what you do in your day to day job, very, very quickly’. I wanted to make sure that they had the skills while they were at Confused.com or if they left, to succeed in the future.”

“We wanted them to understand what the technology can do and the possibilities, because they are the ones that will spot where they can use it in their own day to day job. The engineers in the business know how this works, but you’ve got to bring these two parties together.”


When it comes to digital transformation, a CIO with an innovative mindset – and the right team – is ideally positioned to take the reins. Discover why in this whitepaper.

Download now


Part of the learning curve was partnering staff with an engineer, both to help them understand the technology and to take away any fear.

“They are the ones that are close to what they are doing every day and can see the opportunities. If they can understand what the engineers do, then they can spot those opportunities more easily. We’ve already had some successes, with 20% of our staff going through that programme,” noted O’Shea.

“Ownership is really important. If people don’t own it, they are just going to say ‘it’s someone else’s thing, so I’m not going to touch it’.”


Microsoft Future Decoded: The three forces driving the AI revolution


Bobby Hellard

31 Oct, 2018

The theme for this year’s Microsoft Future Decoded is AI, and specifically, how it can transform your business faster than any technology before it.

But artificial intelligence is not new; it’s been around since Alan Turing was cracking codes in World War II. So what is actually accelerating this revolution?

According to Cindy Rose, Microsoft’s UK CEO, there are three reasons why, and she outlined them on stage during her keynote speech opening this year’s event.

“Firstly, it’s the explosive growth of data,” she said. “These connected consumer devices and IoT [internet of things] sensors are producing more data today than humans can possibly make sense of.”

Indeed, using AI for data processing is a necessity; she estimated that 2.5 quintillion bytes of data are being created every day. That’s more than 15 million text messages and 100 million spam emails every minute.
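As a rough back-of-the-envelope check on the scale of Rose’s daily figure, the conversion to a per-minute rate is simple arithmetic (this sketch only converts units; it makes no claim about the text-message or spam-email figures, which come from the keynote):

```python
# Convert Rose's figure of 2.5 quintillion bytes per day into a per-minute rate.
bytes_per_day = 2.5e18          # 2.5 quintillion bytes
minutes_per_day = 24 * 60       # 1,440 minutes in a day
bytes_per_minute = bytes_per_day / minutes_per_day

# Roughly 1.74e15 bytes, i.e. on the order of 1.7 petabytes, every minute.
print(f"{bytes_per_minute:.2e} bytes per minute")
```

Even at that granularity the volume is far beyond what human analysts could inspect, which is the point Rose is making about AI-assisted processing.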

“It is also the power and pervasiveness of cloud,” she added. “Cloud is what enables the efficient and rapid analysis of all this data. Microsoft is investing billions of dollars in a global cloud infrastructure to make sure we can deploy AI quickly and at scale.”

This “explosive” growth of data, combined with the capabilities of cloud computing, is enabling the development of increasingly powerful algorithms, ones “we’ve never seen before”, according to Rose. The two have to be combined, because the sheer volume of data we now generate cannot be processed by humans alone.

This speed of processing is Rose’s third reason for the sharp rise of the AI revolution, and what makes it a far more game-changing technology than anything that has gone before.

“It’s taken us nearly four decades to put a PC on every desk and a smartphone in every pocket,” added Rose. “But the pace of AI deployment will be much faster and its impact more profound.

“And the pace of change of these dynamics is why we believe that the time to embrace AI in your organisation is right now.”


Why it’s time to fight back against cyber risk to cloud computing and virtual machines

Cloud computing is now a primary driver of the world’s digital economy. Governments, large corporations and small businesses are increasingly implementing cloud-based infrastructures and solutions to store their sensitive data and manage their operations.

While the cloud offers lower costs, scalability and flexibility, it also expands a company’s risk profile exponentially. In fact, attackers are continually refining their techniques to take advantage of the millions of identical binary templates for virtual environments (aka golden images) that power those cloud and Virtual Machine (VM) benefits.

Cloud and VM environments share parallels with Genetically Modified (GM) crops – yields are extremely high around carefully developed identical DNA sequences, but a single bug or virus can scale to destroy not just one, but all crops in a monoculture since there is no natural diversity to protect them. In a cloud context, a zero-day attack can take down all production systems and disaster recovery systems, disrupting business continuity and prompting financial loss.

Because traditional cybersecurity protections such as encryption, firewalls, intrusion prevention, and endpoint protection have been historically successful, adversaries have introduced new zero-day techniques to bypass them. Such modern techniques include memory corruption, return/jump oriented programming (ROP/JOP), and compromised supply chain attacks. The White House described the recent NotPetya supply chain attack as “the most destructive and costly cyber-attack in history”.

Growing risks in cloud computing

One of the greatest unintended consequences of both the cloud and VMs is that they expand the attack surface. Whenever data is stored across remote servers and VMs, risk is not just involved, but elevated. While a company may know its own source code, configurations, equipment, personnel and processes, cloud computing introduces the vulnerabilities of globally sourced third-party hardware, software and configurations that surround, penetrate, and bind the remote environment altogether.

Unfortunately, zero-days are not conveniently located in easy to inspect areas but can instead spread between components and layers in the network, storage, and server stack, from firmware, to bootloaders, hypervisors, containers, operating systems, middleware, libraries, and frameworks, and apps. A report by the Ponemon Institute found that “fileless” (memory-based) malware attacks are now almost ten times more likely to succeed in infecting a machine than traditional file-based attacks. These attacks evade detection by using a system’s own trusted files to obtain access.

Supply chain attacks are also on the rise and grew by more than 200 percent in 2017, according to Symantec's annual Internet Security Report. And so far in 2018, the Zero Day Initiative noted a 275 percent spike in virtualisation software bugs that offer the possibility of compromising within or across VMs.

Even in the physical world, examples of massively replicated golden images exist. In 2015, hackers compromised a Jeep vehicle, forcing manufacturer FCA Group to recall 1.4 million vehicles for updates – the world’s first vehicle cybersecurity recall. And in 2017, the FDA recalled nearly 500,000 pacemakers for firmware updates when it discovered lax cybersecurity could allow the devices to be hacked.

Why once successful security tools now fail

Traditional perimeter security tools no longer offer full protection in this complex and connected environment. The cybersecurity paradigm over the last 40 years has been one of increasingly clever detection via patterns, rules, analytics, and artificial intelligence, rather than preventing attacks from happening in the first place. Zero-day is another name for the growing number of attacks that detection engines miss, inadvertently adding an organisation’s name to yet another “wall of victim logos” slide at the next cybersecurity forensics and after-action reporting conference.

There is already a growing chorus for stronger security. The Department of Defense says cyber defence must move beyond “just the networks,” and the National Security Agency notes adversaries are increasingly turning to supply chain exploitation. Security standards and common defence can differ from provider to provider. Many strive to meet the standards of their industry, whether that be FedRAMP for government or PCI for finance. But even being compliant with standards, rules and regulations sometimes isn’t enough.

The problem is that most standards focus on detection and after-action reporting, with limited attention to newer fileless or supply chain attacks. A common hope is that strong encryption will somehow catch new types of attacks. However, encryption has no effect on memory corruption or compromised supply chain attacks, which can come hidden in correctly signed and encrypted updates, or simply be pre-positioned within third-party infrastructure.

Adding a deeper layer of defence

RASP is a term initially coined in a 2012 Gartner report titled, “Runtime Application Self Protection: A Must-Have, Emerging Security Technology.” It is a technology, linked or built into an application or application runtime environment, that is capable of controlling runtime execution and detecting and preventing real-time attacks. Forrester notes that RASP tools are used as a deeper layer of application defence, using insider information about the applications they protect to detect and deflect malicious attacks more effectively. RASP techniques are enjoying widespread adoption – so much so that the RASP market is forecast by ResearchandMarkets.com to grow at a CAGR of 48% between 2018 and 2022.

An implementation of RASP can bridge the growing security gap in the cloud. It can stop attacks and attack scaling rather than simply remediating symptoms. RASP offers built-in security to prevent real-time attacks with techniques such as binary stirring, control flow integrity, and stack frame randomization, reducing the attack surface and rendering zero-days built on memory corruption and supply chain attacks inert.
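To make the control flow integrity idea concrete, here is a deliberately simplified Python sketch, not any real RASP product, and far removed from how binary-level tools actually instrument code. All the names (`cfi_register`, `cfi_call`, `handle_request`, `attacker_gadget`) are illustrative. The core idea it demonstrates: indirect calls are only allowed to reach targets registered as legitimate, so a corrupted function pointer cannot divert execution to attacker-controlled code.

```python
# Toy illustration of control flow integrity (CFI): indirect calls are checked
# against a set of targets registered at "load time". A target that was never
# registered (standing in for injected or ROP/JOP-style redirected code) is
# rejected instead of executed.

VALID_TARGETS = set()

def cfi_register(fn):
    """Mark a function as a legitimate indirect-call target."""
    VALID_TARGETS.add(fn)
    return fn

def cfi_call(fn, *args):
    """Guarded indirect call: refuse to run unregistered targets."""
    if fn not in VALID_TARGETS:
        raise RuntimeError("CFI violation: unexpected call target")
    return fn(*args)

@cfi_register
def handle_request(payload):
    # A normal, expected code path.
    return f"handled {payload}"

def attacker_gadget(payload):
    # Never registered: stands in for code an attacker redirects execution to.
    return "pwned"

print(cfi_call(handle_request, "ok"))   # allowed: registered target
try:
    cfi_call(attacker_gadget, "ok")     # blocked: never registered
except RuntimeError as err:
    print(err)
```

Real CFI, binary stirring and stack frame randomisation operate on machine code and memory layout rather than Python objects, but the invariant is the same: execution may only flow along paths the legitimate program could take.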

Early attempts at RASP added too much overhead to the code, were too limited in scope, or perturbed functionality by trying to graft agents onto code. Others carried onerous requirements, such as access to source code and recompilation, or new hardware, software or services, making them impractical to use. But those limitations have now been overcome. Modern RASP can be added to existing or new binaries quickly, easily and economically.

RASP is also not a replacement for current tools since all the traditional attack vectors still occur; but it represents a new layer of protection that can quickly and easily integrate with existing on-premises, cloud, or web-based development deployments and update processes.

At a time when cloud-based applications and virtual machines are critical to the operations of government institutions and private enterprises, we can no longer put all of our security in the perimeter security and detection tools basket. Utilising RASP technology might just be our best chance for society to stay one step ahead of attackers, and prevent scaling, memory and compromised supply chain attacks from executing.


Days-long Microsoft outage leaves users unable to log in to Office 365


Keumars Afifi-Sabet

31 Oct, 2018

Microsoft is investigating the cause of a lengthy Office 365 outage that has persisted for several days, with business customers, predominantly based in the UK, experiencing difficulties signing in to their accounts.

Users have been reporting problems with logging in to their Office 365 accounts across social media since Friday 26 October, with system information site DownDetector also seeing a spike in user complaints.

These complaints receded over the weekend, but resumed again on Monday 29 October, and have been peaking during working hours since. The issue appears to be predominantly affecting users in the UK.

The issue manifests as additional login prompts appearing after users have entered their details into the username and password fields. The appearance of a second “security prompt” means many business users have been unable to access critical services.

Microsoft confirmed yesterday it was investigating the issue, adding that a handful of recent changes had been rolled back in an attempt to resolve the symptoms.

“We rolled back recent changes that were made in the environment and some customers are reporting that impact has been mitigated for SP152610 and EX152471. The source of the issue remains under investigation,” Microsoft tweeted.

“If you have a user that is actively experiencing impact, please reply to us or contact support so we can gather additional information to assist with our investigation.”

A further update, released by Microsoft at 13.00 GMT today, suggested no additional reports of disruption had been received, and that the “impact was remediated on Tuesday, October 30” late in the evening.

But a handful of customers replying to Microsoft’s tweet have suggested this is not correct, with one user, John Gardner, saying he still had at least three users affected.

The frequency of users registering Office 365 complaints in the last few days on DownDetector

This issue, which has persisted for more than three working days, is the latest in a series of high-profile outages that Microsoft has sustained in recent months.

Microsoft Azure and some Office 365 services suffered disruption for more than 24 hours in September following a “severe weather event” that knocked an entire data centre offline.

Customers in the US, and a host of European countries were unable to access a number of cloud-based apps after lightning strikes caused a power surge to Microsoft’s San Antonio, Texas-based data centre.

In April, meanwhile, a similar but less severe Office 365 outage meant users were unable to log in to their accounts for a short period, affecting customers in the UK, France, the Netherlands and Belgium.

A map of the areas affected by the latest Office 365 outage, taken on Monday 29 October from DownDetector

“The continuity of business critical systems is vital for organisations today to maintain productivity and effective customer service,” said cyber resilience expert at Mimecast, Pete Banham.

“This Office 365 issue is a clear reminder that in the cloud age, it’s often down to individual organisations to ensure they have a plan B.”

“Employees can also create security and compliance risks during downtime when using unsanctioned or consumer IT services to get the job done.

“We are urging organisations to consider a cyber resilience strategy that assures the ability to recover and continue with business as usual.”

Asked to assess the increasingly cloud-centric business ecosystem, in light of recent outages, Banham told Cloud Pro it’s a balancing act between bottom-line cost reduction, and putting faith into potentially unreliable third parties.

“The merit of this approach is most likely to be a bottom line cost reduction on paper, but the true cost of a single outage could negate this entirely.

“An ecosystem where businesses wholly depend on the reliability of cloud hosting services is unlikely to be sustainable. After all, few organisations can tolerate lengthy or frequent disruption to their IT services.

“There should always be a backup plan that assures the ability to recover and continue with business as usual despite an outage. This particular incident is another reminder that relying on a single cloud service isn’t the most effective cyber resilience strategy.”

Cloud Pro approached Microsoft for further comment, and for details as to how the issue arose. The company did not respond at the time of writing.


Pakistan Government Harasses @ExpoDX, CIA-Affiliated Event to Take Place at Its @RooseveltNYC Property in New York City

DX WorldExpo LLC leased space at the hotel to present its 22nd international event on November 12-13, 2018. Two weeks before the event, the event producer received two separate proforma invoices with fictional charges that do not appear in the contract. The second invoice, sent to the event producer by the Pakistan government-owned hotel, demanded a payment of $473,616.35 within 24 hours, or Pakistan would release the contracted space under a “breach of contract” clause. The show producer, DX World EXPO LLC, is to present government sessions at the event; in previous conferences, the company presented keynotes by the CIA and by the National Reconnaissance Office. The Roosevelt Hotel in New York City is run by Managing Director Najeeb Samie on behalf of the Pakistan government.


IBM and Red Hat appear a match made in container heaven – but is there more than meets the eye?

Analysis What about Big Blue Hat? Or how about Big Purple? Any offers for ‘in the pink?’ The proposed coming together of IBM and Red Hat, costing the former $34 billion in total enterprise value, is the stuff of dreams for caption writers. But what does it mean for the cloud industry and the two companies involved?

From IBM’s perspective, this is a continuation of a long-standing message. Those who have followed the company closely – for instance, when announcing its multi-cloud management tool earlier this month – would note their position that many enterprises are only a short way along their cloud journey. “This is the next chapter of the cloud,” said IBM CEO Ginni Rometty in a statement. “It requires shifting business applications to hybrid cloud, extracting more data and optimising every part of the business, from supply chains to sales.”

Red Hat owns amazing assets for this cloud era, has great developer love – and provides access to pretty much every CIO on the planet

And it’s fair to say that if it’s hybrid and open you want, then Red Hat is the best place to go. As the company explained at its London customer forum as far back as 2016, the company saw itself as the second player in Kubernetes – behind Google – and the second player in Docker, behind…well, Docker. Being open is key to both companies’ ethos. “Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience,” said Red Hat CEO Jim Whitehurst.

Containers are key in this equation – indeed, they are key to both quicker app deployment and multi-cloud initiatives. Indeed, while the likes of Amazon Web Services (AWS), Microsoft and Oracle became members of the Kubernetes-controlling Cloud Native Computing Foundation (CNCF) last year, IBM was a founder member in 2015. Todd Moore, IBM VP for open technologies, sits as the CNCF’s governing board chair. Red Hat, meanwhile, has Kubernetes-based platform OpenShift.

It makes an interesting fit – but was the news a surprise? Sort of. One industry executive predicted that Red Hat would get acquired this year – but they got the wrong company. Sacha Labourey, CEO of continuous delivery software firm CloudBees, posited in January that Red Hat would be bought by Google. While the target was slightly off, the sentiment was bang on.

Why? Labourey noted in a Medium post two reasons why the move was ‘inevitable’ – why Red Hat would need an open-looking IT behemoth as a surrogate, and vice versa. “Red Hat owns amazing assets for this cloud era, has great developer love and also provides access to pretty much all CIOs on the planet,” Labourey wrote. “[It] acts as a fantastic gateway to the public cloud.” On the flip side, Red Hat’s long-standing commitment to open source means that, while its yearly revenues north of $3 billion are valiant, it can’t compare to those with more billable proprietary offerings. Labourey noted VMware as an example of the latter.

Bill Mew, a 16-year veteran of IBM and now a cloud industry pundit, noted a similar convergence; IBM’s move from products to services under former CEO Lou Gerstner – albeit still making most of its money through middleware and mainframes, a ship Rometty continues to turn around – combined with Red Hat’s lack of profitability.

“The jewel in Red Hat’s crown is OpenShift,” said Mew. “As these two services organisations seek to merge their very different cultures, they are pinning their hopes on OpenShift providing a means of managing complex hybrid and multi-cloud environments.”

Yet there was a caveat. What happens when businesses are no longer struggling with the complexity of container deployment? “The problem [IBM and Red Hat] face is that they have no ecosystem control,” Mew added. “The hyperscalers hold all the cards here. Offering to provide workload portability to avoid lock-in and offering to help overcome complexity will only go so far.”

Offering to provide workload portability to avoid lock-in and offering to help overcome container complexity will only go so far

The analysts, however, seem to be more optimistic. IBM and Red Hat both received stellar marks in Forrester’s recent reports around public cloud development-only platforms and enterprise container platforms – and a combination of the two could see domination.

“The combined company has a leading Kubernetes and container-based cloud-native development platform, and a much broader open source middleware and developer tools portfolio than either company separately,” said Dave Bartoletti, Forrester VP and principal analyst. “While any acquisition of this size will take time to play out, the combined company will be sure to reshape the open source and cloud platforms market for years to come.”

Time will of course tell as to how this blockbuster acquisition will go. But as it transpires, Labourey’s initial bet could have been right all along. According to CNBC, citing sources familiar with the matter, Red Hat had discussed a potential sale with various buyers before going with IBM – including Google.


As more companies put sensitive data in the public cloud, the security threats increase

More organisations are putting their sensitive data in the public cloud – so it comes as no surprise that cloud threats, and mistakes in SaaS, IaaS and PaaS implementations, are at an all-time high.

That is the key finding from a new report by McAfee, which argues that the old bugaboo of shared responsibility continues to trip organisations up when it comes to cloud security.

The study, the security firm’s latest Cloud Adoption and Risk Report, analysed aggregated and anonymised cloud usage data for more than 30 million users globally. Among the findings were that more than one in five (21%) of all files in the cloud contained sensitive data, while the sharing of sensitive data with a publicly accessible link has gone up 23% year over year.

Organisations have more than 2,200 individual misconfigurations per month in their infrastructure as a service and platform as a service public cloud instances, the report noted – an average of 14 misconfigured instances running at any one time. This makes for interesting reading when compared with findings from Netskope earlier this month, which found a plethora of violations among users’ systems based on the Center for Internet Security (CIS) benchmark; the vast majority of these were related to identity and access management.
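Neither report spells out its detection logic, but the kind of identity and access check the CIS benchmark describes can be sketched in a few lines. The helper below is purely illustrative – its name and the sample policy are this article’s invention, not McAfee’s or Netskope’s tooling – and it flags one classic misconfiguration: an IAM-style policy statement that allows every action on every resource.

```python
# Illustrative sketch: flag overly permissive IAM-style policy statements,
# the sort of identity and access misconfiguration CIS-style audits target.
# The policy format mirrors AWS IAM policy JSON; the helper is hypothetical.

def find_wildcard_statements(policy: dict) -> list:
    """Return Allow statements that grant every action on every resource."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM accepts either a bare string or a list for both fields.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if stmt.get("Effect") == "Allow" and "*" in actions and "*" in resources:
            flagged.append(stmt)
    return flagged

risky_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # admin-style wildcard
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"},  # scoped action
    ],
}

print(len(find_wildcard_statements(risky_policy)))  # flags only the first statement
```

Run continuously across every account, checks of this shape are what the report means by auditing configuration rather than trusting the provider to do it.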

Even more worryingly, the report found that 5.5% of AWS S3 buckets analysed were set to ‘world read’ permissions. This, as regular readers of this publication will be more than aware, can essentially be a come-and-steal-my-data plea to nefarious actors. As far back as July last year, AWS was sending emails to customers to ‘promptly review’ their S3 buckets and ensure world read was only for such instances as public websites or publicly downloadable content.
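A ‘world read’ bucket is one whose access control list grants READ to the global AllUsers group. As a hedged sketch of how such an audit might look – the function name and sample ACLs are invented for illustration, though the grant structure matches what boto3’s `get_bucket_acl()` returns – the check reduces to scanning each bucket’s grants:

```python
# Illustrative sketch: detect an S3 ACL that grants READ to "everyone".
# In a real audit you would fetch the grants per bucket, e.g.:
#   grants = boto3.client("s3").get_bucket_acl(Bucket=name)["Grants"]

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_world_readable(grants: list) -> bool:
    """True if any grant gives the AllUsers group READ (or full) access."""
    for grant in grants:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS:
            if grant.get("Permission") in ("READ", "FULL_CONTROL"):
                return True
    return False

public_acl = [
    {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "READ"},
]
private_acl = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"}, "Permission": "FULL_CONTROL"},
]

print(is_world_readable(public_acl), is_world_readable(private_acl))  # True False
```

World read is legitimate only for the cases AWS itself named – public websites and publicly downloadable content – which is why the check is a flag for review rather than an automatic block.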

McAfee warned of the risks organisations are taking when it comes to IaaS security, which encompasses the aforementioned identity and access, as well as applications, network controls and host infrastructure. As the shared responsibility model has it, providers are responsible for security of the cloud, while customers are responsible for security in the cloud.

Multi-cloud has become de rigueur; while 94% of IaaS and PaaS use is in AWS, more than three quarters (78%) of organisations have both AWS and Azure. Continuous auditing and monitoring of each infrastructure and platform configuration is the only way forward, the report argues.

“Operating in the cloud has become the new normal for organisations – so much so that our employees do not think twice about storing and sharing sensitive data in the cloud,” said Rajiv Gupta, McAfee SVP of cloud security. “Accidental sharing, collaboration errors in SaaS cloud services, configuration errors in IaaS/PaaS cloud services, and threats are all increasing.

“In order to continue to accelerate their business, organisations need a cloud-native and frictionless way to consistently protect their data and defend from threats across the spectrum of SaaS, IaaS and PaaS,” added Gupta.

Writing for this publication in April, Micah Montgomery, cloud services architect at Mosaic451, noted that AWS is highly secure, but only when configured properly – and it is companies’ responsibility to ensure so. Montgomery gave five tips to organisations: know what you are doing; know what data you have; take advantage of the tools available to secure your AWS environment; beware AWS’ complexity; and ask for help if needed.

“In a general IT environment, there is a management console for every area and tool,” wrote Montgomery. “Routers, switches, firewalls, servers, and data storage all have their own, different tools, and each tool has its own management console. Once you add a cloud environment, you add another management console.

“There are already hundreds of ways to screw things up in an on-premises data environment,” added Montgomery. “The cloud adds yet another layer of complexity, and organisations must understand how it will impact their overall cyber security.”
