Announcing @Darktrace Silver Sponsor of @CloudEXPO Silicon Valley | #Cloud #CIO #AI #AIOps #Infosec #MachineLearning #SmartCities

Darktrace is the world’s leading AI company for cyber security. Created by mathematicians from the University of Cambridge, Darktrace’s Enterprise Immune System is the first non-consumer application of machine learning to work at scale across all network types – physical, virtualized and cloud – through to IoT and industrial control systems. Installed as a self-configuring cyber defense platform, Darktrace continuously learns what is ‘normal’ for every device and user, updating its understanding as the environment changes.


Sponsorship Opportunities at @CloudEXPO | #Cloud #IoT #Blockchain #Serverless #DevOps #Monitoring #Docker #Kubernetes

CloudEXPO has been the M&A capital for cloud companies for more than a decade, with memorable acquisition stories breaking on the CloudEXPO expo floor. DevOpsSUMMIT New York faculty member Greg Bledsoe shared his views on IBM’s Red Hat acquisition live from the NASDAQ floor, and acquisition news was announced during CloudEXPO New York, which took place November 12-13, 2019 in New York City.


UK remains key focus growth area for NetSuite


Maggie Holland

5 Apr, 2019

Europe, and the UK in particular, remains a key market for NetSuite, with much untapped potential yet to explore.

So says the company’s vice president of EMEA, Nicky Tozer, who talked about key focus areas and milestones during an interview with Cloud Pro at NetSuite’s annual user conference, SuiteWorld, in Las Vegas this week.

“We’re still focusing on growing the UK. It’s a mature market and, actually, the largest in terms of revenue delivery. So, we’re still continuing to grow that. That also has a very mature partner and customer NetSuite sales ecosystem,” Tozer said.

“The focus is pretty much the same, it’s pretty simple: it’s growth into EMEA. When we were acquired, we only had one office in the UK in EMEA. When Oracle bought us, they bought access to our market and probably saw that we have huge potential for expansion and would get a return on its money for that.”

Thus far, the company has enjoyed growth in 15 countries in EMEA, specifically in Western Europe, the Middle East and Africa. But, according to Tozer, in addition to helping ramp up the pace of expansion, Oracle has also assisted massively with R&D, recruitment and onboarding.

“NetSuite has been able to operate business as usual since we were acquired. That’s been huge in terms of continuing our success. It’s actually quite visionary of Oracle, really. Quite often, with acquisitions, they are just consumed into operations,” Tozer added.

“We were the second largest acquisition Oracle has made. We thought that would happen to us as a relatively small fish in a big pond. But, Oracle has allowed us to carry on operating as we have been.”

Tozer was appointed to the role in July 2018, as announced in a blog post by David Turner, senior director of marketing for EMEA. Prior to taking up the EMEA leadership role, replacing Mark Woodhams, Tozer looked after Northern European operations and was instrumental in UK and Ireland business growth.

An 80:20 rule is key to NetSuite’s expansion, according to Tozer. By maintaining a foundational layer that is the same everywhere, the firm can localise or customise on top of it, rather than reinventing the wheel every time.

“We take 80% of what we do everywhere else in the world. Someone who needs a single, integrated business platform to run their business and be able to grow internationally fundamentally needs the same thing, whether they’re in Dubai, the UK, France, Germany or wherever. Then you obviously need to add that 20% localisation that makes it relevant.

“But the 80% of what we do is what has really allowed us to grow and ramp up that quickly. Some people call it cookie cutter or you know, or NetSuite in a box, but it’s that kind of approach.”
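The layered approach Tozer describes can be sketched as a simple configuration merge. This is purely illustrative – the field names and overlay values below are invented, not NetSuite’s actual data model:

```python
# Purely illustrative: the field names and overlay values are invented,
# not NetSuite's actual data model. The idea is just "one shared base,
# plus a thin per-country layer applied on top".
BASE_PLATFORM = {            # the ~80% that is the same everywhere
    "ledger": "multi-book",
    "currency": "USD",
    "tax_engine": "generic",
    "language": "en",
}

LOCALE_OVERLAYS = {          # the ~20% that makes it locally relevant
    "UK": {"currency": "GBP", "tax_engine": "UK-VAT", "language": "en-GB"},
    "DE": {"currency": "EUR", "tax_engine": "DE-USt", "language": "de-DE"},
}

def localise(country):
    """Start from the shared base and apply the country layer on top."""
    config = dict(BASE_PLATFORM)
    config.update(LOCALE_OVERLAYS.get(country, {}))
    return config
```

Adding a new country then means writing one small overlay rather than re-implementing the base – the ‘NetSuite in a box’ effect Tozer describes.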

When asked how the company would ensure it maintains the same levels of service, support and experiences existing customers and partners are used to as it so rapidly expands, Tozer seemed confident.

“One very key metric is renewal rates,” Tozer said.

“If we’re not keeping our customers happy, they will vote with their bank account. We simply can’t let that happen.”

Why Africa’s cloud and data centre ecosystem will – eventually – be a land of serious opportunity

Take a look at the data centre footprint of the three largest cloud infrastructure vendors – Amazon Web Services (AWS), Microsoft Azure and Google Cloud – and you are met with a breathless marketing message.

Azure promises 54 regions worldwide – the terminology differs for each provider – and ‘more global regions than any other cloud provider.’ AWS’ cloud ‘spans 61 availability zones within 20 geographic regions around the world’, while Google promises 58 zones, 134 network edge locations and ‘availability in 200+ countries and territories’.

If you look at the maps, however, two continents stand out. South America, population approximately 420 million, has only Sao Paulo – albeit the largest city on the continent, representing 3.4% of the entire South American populace – as a designated data centre base for all three providers. Africa, meanwhile, with a population of approximately 1.22 billion, is even barer.

It does appear to be something of an oversight. Yet things are changing.

A recent report from Xalam Analytics, a US-based research and analysis firm, explores the ‘rise of the African cloud.’ The study went live as Microsoft’s Johannesburg and Cape Town data centre sites were switched on, and examines Africa’s ‘cloud readiness’, industry expectations and associated services.

Ultimately, the fact remains that, thus far, Africa has not been seen as worth investing in. According to Xalam’s estimates, less than 1% of global public cloud services revenue is generated on the continent. This figure, the company adds, is lower than mobile operators’ revenue from SMS.

The report argued that only five of the 25 African countries analysed could be considered ‘cloud-ready’: South Africa, far out in front, followed by Mauritius, Kenya, Tunisia and Morocco. Eleven nations, including Egypt, Nigeria and Ivory Coast, were considered on the cusp of cloud-readiness.

This makes for an interesting comparison with the bi-annual cloud studies from the Asia Cloud Computing Association (ACCA). The overall landscape in Asia is one of contrasts. Those at the top, such as Singapore and Hong Kong, are considered global leaders. Those at the bottom, such as India and China, arguably share many of the failings of African countries: large land masses and poor connectivity. South Africa, ACCA estimated, would have placed just under halfway in the 14-nation Asian ranking, between Malaysia and the Philippines.

These issues are widely noted in the Xalam analysis. “Africa is a tricky place for cloud services,” the company wrote in a blog post. “Many countries don’t have broadband speeds adequate and affordable enough to support reliable cloud service usage. Where cloud services are built upon a reliance on third party providers, provider distrust is deeply ingrained in many African enterprises, having been nurtured by decades of failing underlying infrastructure and promises not kept.

“Where the public thrives on an open, decentralised Internet, many African governments profess a preference for a more centralised, monitored model – and some are prone to shutting down the Internet altogether,” the company added.

Guy Zibi, founder and principal of Xalam Analytics, said there were two other concerns uncovered in the research. “Data sovereignty concerns are prevalent; in most African markets, the sectors that typically drive the uptake of cloud services – financial, healthcare, and even the public sector – are not allowed to store user data out of the country,” he told CloudTech. “Given that there was no public cloud data centre in the region and local options were not entirely trusted, this naturally impacted uptake.

“There is [also] still a fair amount of distrust in the ability of third-party providers to manage mission-critical enterprise workloads,” added Zibi. “In many countries, the underlying supporting infrastructure – power primarily – has historically been shaky, making it difficult for providers to provide quality services. That lack of trust has long held up the expansion of third-party managed services, including cloud services.”

Yet the winds of change are blowing – “trust levels are improving but there’s some way to go, and from a low base”, as Zibi put it. Alongside Microsoft’s newly opened data centre complexes in South Africa, AWS is planning a region in Cape Town. Zibi called Microsoft’s launch a ‘game-changer’ and a ‘seminal event for African cloud services.’

“It did four things in our view,” he said. “It offered validation from one of the world’s largest cloud providers that there was deep enough cloud demand in the region and a credible economic case to support this type of investment. It suggested that the underlying supporting infrastructure – South Africa’s at least – was solid enough to support a hyperscale data centre, and it probably accelerated the local service deployment timelines of competitors like AWS and Huawei.

“It [also] gave more confidence to local corporate customers that they could start moving to the cloud more aggressively,” Zibi added.

Naturally, as these things are announced, customers are asked to tell their stories, and Microsoft gave three examples of companies using its cloud services. Nedbank, Microsoft said, had ‘adopted a hybrid and multi-vendor cloud strategy’ with Microsoft an ‘integral’ partner; eThekwini Water was using Azure for critical application monitoring and failover, among other use cases; and The Peace Parks Foundation was using rapidly deployed infrastructure, as well as radio over Internet protocol – ‘a high-tech solution to a low-tech problem’ – to improve radio communication across remote and isolated areas.

Zibi noted that the facets binding the cloud-ready African nations together were strong adoption of high-speed broadband, adequate power supply, good data infrastructure and solid regulations fostering adoption of cloud services. While Azure and AWS are the leaders, Google is seen by Xalam only as a challenger. In a move that is perhaps indicative of the state of play in African IT more generally, Oracle and IBM – alongside VMware – are the next strongest providers.

The report predicted that top-line annual cloud services revenue in Africa will double between now and 2023, with public cloud services revenue tripling in that time. The barriers understandably remain in such a vast area, but the upside is considerable. “Few other segments in the African ICT space are as likely to generate an incremental $2bn in top line revenue over the next five years, and at least as much in adjacent enabling ecosystem revenue,” the company noted.

“But the broader upside is unmistakable, and the battle for the African cloud is only beginning.”

You can find out more about the report here (customer access only).

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

Financial services moving to hybrid cloud – but rearchitecting legacy systems remains a challenge

The move to hybrid cloud is one which virtually every industry is undertaking – but the financial services industry is getting there ahead of most.

According to the latest data issued by Nutanix for its Enterprise Cloud Index Report, more than one in five financial organisations polled (21%) are deploying a hybrid cloud model today, compared with a global average of 18.5%. 91% of those polled said hybrid cloud was their ‘ideal’ IT model.

Yet while the push to hybrid is still an important one, there are plenty of areas where financial firms are struggling. Like many other industries – insurance being another one, as sister publication AI News found out when speaking to LV= earlier this week – a serious concern remains over rearchitecting and organising legacy systems.

88% of respondents to the Nutanix survey said that, while they expected hybrid cloud to positively impact their businesses, hybrid cloud skills themselves were scarce. According to the data, financial services firms run more traditional data centres than other industries, with 46% penetration, as well as a lower average usage of private clouds: 29% compared to 33% overall.

Naturally, this is just one part of a wider report covering many more industries, but Nutanix wanted to shed light on the financial sector because of its apparent highs and lows.

“Increased competitive pressure, combined with higher security risks and new regulations, will require all of the industry to look at modernising their IT infrastructure,” said Chris Kozup, Nutanix SVP of global marketing in a statement. “The current relatively high adoption of hybrid cloud in the financial services industry shows that financial firms recognise the benefits of a hybrid cloud infrastructure for increased agility, security and performance.

“However, the reality is that financial services firms still struggle to enable IT transformation, even though it is critical for their future,” added Kozup.

Writing for this publication last month, Rob Tribe, regional SE director for Western Europe at Nutanix, observed that organisations across industries were waking up to the need for hybrid, but stressed that up-to-date tools are needed to expedite the process. Expert analytical tools, cloud-based disaster recovery (DR) and cross-cloud networking tools, he argued, are key for performance, availability and security.

“Delivering these and other hybrid cloud management tools will be far from easy, and will require a lot more cooperation between cloud vendors and service providers than we’re seeing at present,” wrote Tribe. “However, with growing numbers of enterprise customers moving to hybrid, it’s very much in everyone’s best interests to work together.

“It’s time to join up the dots between clouds and deliver the visibility, technologies and tools needed to make it easier to exploit this exciting – and soon to be de facto – way of provisioning and managing enterprise IT,” Tribe added.

You can analyse the full Nutanix survey data here.


2019’s highest-paying IT certifications


Keri Allan

5 Apr, 2019

In a competitive talent market, such as IT, obtaining a certification is a sure way to verify your expertise, demonstrate your knowledge quickly to others, and ultimately make job hunting a far smoother process. Recruiters look for credentials to back up details provided on an applicant’s CV and many companies request certain types of certification in order for an applicant to even be considered for a role.

According to training provider Global Knowledge, 89% of the global IT industry is certified. It recently published its list of the 15 top-paying IT certifications of 2019, showing that employers are focusing on specific areas: in particular cloud computing, cyber security, networking and project management. In fact, cloud and project management dominated the top five spots.

Global Knowledge’s 2019 report: the 15 top-paying certifications, by average US salary

1. Google Certified Professional Cloud Architect: $139,529
2. PMP – Project Management Professional: $135,798
3. Certified ScrumMaster: $135,441
4. AWS Certified Solutions Architect (Associate): $132,840
5. AWS Certified Developer (Associate): $130,369
6. Microsoft Certified Solutions Expert – Server Infrastructure: $121,288
7. ITIL Foundation: $120,566
8. Certified Information Security Manager: $118,412
9. Certified in Risk and Information Systems Control: $117,395
10. Certified Information Systems Security Professional: $116,900
11. Certified Ethical Hacker: $116,306
12. Citrix Certified Associate – Virtualisation: $113,442
13. CompTIA Security+: $110,321
14. CompTIA Network+: $107,143
15. Cisco Certified Network Professional – Routing and Switching: $106,957

Although the figures represent the US market, we can see that Google’s own Cloud Architect certification is now the best qualification to pursue in terms of average salary, closely followed by qualifications in project management and then AWS development roles.

“The two leading areas are cyber security and cloud computing, followed by virtualisation, network and wireless LANs,” notes Zane Schweer, Global Knowledge’s director of marketing communications. “Up and coming certifications focus on AI, cognitive computing, machine learning, IoT, mobility and end-point management.”

Cloud comes out on top

“Cloud computing is paramount to every aspect of modern business,” explains Deshini Newman, managing director EMEA of non-profit organisation (ISC)2. “It’s reflective of the highly agile and cost-effective way that businesses need to work now, and so skilled professionals need to demonstrate that they are proficient in the same platforms, methodologies and approaches towards development, maintenance, detection and implementation.”

Jisc, a non-profit which specialises in further and higher education technology solutions, has joined many other organisations in adopting a cloud-first approach to IT, and so relies heavily on services like Amazon AWS and Microsoft Azure.

“Certified training in either or both of these services is important for a variety of roles,” explains Peter Kent, head of IT governance and communications at Jisc, “either to give the detailed technical know-how to operate them or simply to demonstrate an understanding of how they fit into our infrastructure landscape.”

“Accompanying these, related networking and server certifications such as Cisco Certified Network Associate (CCNA) and Microsoft Certified Solutions Expert (MCSE) are important as many cloud infrastructures still need to work with remaining or hybrid on-premise infrastructures,” he notes.

Security certifications are also high on the most-wanted list, but they are required across a variety of different platforms and disciplines. One of the growth areas (ISC)2 has seen is in cybersecurity certifications in relation to the cloud. “This is something that is reflected by the positioning of the cloud within the Global Knowledge top 15,” Newman points out.

Aside from technical training, ITIL is still considered a key certification as a way of benchmarking an individual’s understanding of the infrastructure and process framework that IT teams have in place.

“But with ITIL v4 just around the corner I’d recommend holding off any training until v4 courses are widely available,” advises Kent.

And it’s not just about the accreditation – it can often also be about the company behind the certification itself. This is part of what makes the most desirable certifications desirable – the credibility and support of the issuing bodies.

The benefits of certification

Global Knowledge’s report highlighted that businesses believe having certified IT professionals on staff offers a number of benefits – most importantly helping them meet client requirements, close skills gaps and solve technical issues more quickly.

This is great for the company, but what do you gain as an individual? Well, aside from being in higher demand and the ability to perform a job faster, the main answer is a larger paycheque.

“In North America, it’s roughly a 15% increase in salary, while in EMEA it’s 3%,” says Schweer. “We attribute the difference to cost of living and other circumstances unique to each country,” he notes.

Research by (ISC)2 and recruitment firm Mason Frank International also showed similar results.

“In our latest Salesforce salary survey, 39% of respondents indicated that their salary had increased since becoming certified, and those holding certifications at the rarer end of the spectrum are more likely to benefit from a pay increase,” says director Andy Mason.

“While the exact amount of money an individual can earn will fluctuate from sector to sector, it is clear that certifications in any sector can and do make a big financial difference,” agrees Newman. “That’s on top of setting individuals apart at the top of their profession.”

Does certification create an ‘opportunity shortage’?

However, not everyone regards certifications as the be-all and end-all for recruiting the best possible staff. Some, such as Mango Solutions’ head of data engineering, Mark Sellors, believe they can often lock out candidates who might be perfect for a role.

“This can be troubling for a number of reasons,” he says. “In many cases certifications are worked out in an individual’s personal time. This means those with significant responsibilities outside of their existing job may not be in a position to do additional study, and that’s not to mention the cost of some of these certs.”

He adds that using certifications as a bar candidates must clear can also further reduce gender diversity within the IT space, as a past study by Hewlett Packard found that women are much less likely than men to apply for a job if they don’t meet all of the listed entry requirements.

It’s Sellors’ belief that the problem facing many hiring managers is not just one of talent, but one of opportunity.

“They’re not giving great candidates the opportunity to excel in these roles as they’ve latched on to the idea that talent can be proven with a certificate,” explains Sellors. “Certifications can be useful in certain circumstances – for example when trying to prove a certain degree of knowledge during a career switch, or moving from one technical area to another. They’re also a great way to quickly ramp up knowledge when your existing role shifts in a new direction.

“More often than not, however, they prove little beyond the candidate’s ability to cram for an exam. Deep technical knowledge comes from experience and there’s sadly no shortcut for that.”

M&C Saatchi London gets creative with NetSuite


Maggie Holland

4 Apr, 2019

Creative agency network M&C Saatchi is using NetSuite’s ERP system to make better use of data and boost debt collection processes in its London offices.

It now hopes to expand its use of the technology internationally, according to Michael Saunders, finance director at M&C Saatchi London.

The incumbent system was mainframe-based and generated Excel-based reports that weren’t very enlightening. Now, with NetSuite, it’s much easier to slice and dice data and drill down into transactions quickly and easily, Saunders told Cloud Pro.

“Things now happen in a much more timely way,” he said. “We didn’t have much of an ability to look at things by department or look at what you’ve sold in this month, where your spend is versus last year in this area etc. You can spot trends and things going wrong a lot faster. Once we get more of the forecasting elements done, it will improve our kind of month end process. We should have more accurate forecasting.”

Saunders added: “I ran a selection process and we ended up with 15 potential ERP systems on the list. It was a strategic move to go with NetSuite as it’s very easy to build on it and it’s easy to integrate it with other systems. All the other ones we looked at didn’t have as good a UI and didn’t feel like they had as much money being pumped into them.

“It’s a quite simple, reliable system to use. If you gave it to a kid, they could probably work out how to do certain things quite easily.”

There are more than a dozen entities operating under the M&C Saatchi umbrella in London, with around 800 employees. Globally, the organisation has around 2,000 members of staff.

After a thorough selection process that started in 2016, Saunders and key stakeholders settled upon NetSuite as their ERP system of choice for the UK business. The system went live in January 2018 with a two-week training programme initiated to get employees up and running quickly and easily.

The firm also kept its implementation partner, RSM, on board for the initial few months to help ease any teething problems with the user training side of things.

M&C Saatchi has grown, and continues to grow, around the world, both organically and through acquisition, so it still uses a number of different ERP systems depending on location and other needs. Using the UK as a test bed, the plan is to rationalise the number of systems in use, even if that still results in multiple ERP instances.

“The first stage of the project was always to focus on the UK as we didn’t want to do too much at once. We’ve been on the same system in the UK for 18 years. Now, we’re starting to look at how it can be more of a platform for the systems we use. We’re integrating our budgeting and forecasting tool, Adaptive Insights, into it, and Sage People. All you have to work out is where your master record for each type of information is going to be held. You can get them all talking to each other quite quickly,” Saunders added.

“To bring in a new office or region is not a new implementation. I could almost set up a new subsidiary on my phone right now… Hopefully, that is where we will also see value – we won’t need a professional services team to do a new country. There are economies of scale as you bring more data and more people onto it.”

Serverless cloud – is it for everyone?


Sandra Vogel

4 Apr, 2019

Serverless cloud. It sounds impossible, doesn’t it? How can cloud computing be serverless? All that deployed code, all that accumulated data being crunched, examined and interpreted. Surely servers are needed for these tasks?

Well, as it happens, ‘serverless cloud’ is something of a misnomer, as nothing runs in this day and age without some form of compute engine – false advertising, if anything. Yet the term describes a particular type of cloud deployment: the ability to use the cloud without having any servers in your business, and without specifically buying or renting server resource from your cloud provider.

Sure, the provider (AWS, Google, Microsoft, IBM, Oracle and the rest) uses servers, but it allocates resources around these as needed, taking server-related matters out of the financial relationship it has with clients.

So, why go serverless?

Serverless cloud has some very distinct advantages for organisations working in a cloud environment. Sue Daley, associate director for technology and innovation at techUK, explains that many customers are drawn by the low costs and convenience often associated with serverless.

“Paying for compute resources based upon the amount of transactions performed, rather than specifying a virtual machine spec to handle the busiest predicted workloads, often equates to compute charging by the microsecond rather than the hour,” explains Daley. “Not having to consider or manage the underlying infrastructure, capacity or operating systems [means organisations can] just write and develop code.”

Ramanan Ramakrishna, cloud CoE lead at Capgemini, adds that organisations which operate across multiple sites stand to gain the most from serverless.

“[It can be used] as an impetus to promote event-driven computing and a micro-services mindset among developers,” he explains. “This is because the applications being built have to be split into distinct services which can then be deployed and accessed from serverless cloud. This sets the stage for a quicker and easier adoption of agile development principles and CI/CD without falling into traditional waterfall methods.”
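The event-driven, micro-services mindset Ramakrishna describes boils down to small, stateless handlers invoked once per event. A minimal sketch follows; the (event, context) signature mirrors AWS Lambda’s Python handlers, but the event shape and the “thumbnail” service are hypothetical examples, not anything from the article:

```python
import json

# A sketch of the event-driven pattern: one small, stateless service per
# event type. The (event, context) signature mirrors AWS Lambda's Python
# handlers, but this is plain Python and runs locally; the event shape
# and the "thumbnail" service are hypothetical.
def thumbnail_requested_handler(event, context=None):
    """Handle a single 'file uploaded' event and acknowledge it."""
    key = event.get("object_key")
    if not key:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing object_key"})}
    # Real work (resizing the image, writing it back to storage) would
    # happen here; the handler holds no state between invocations.
    return {"statusCode": 200,
            "body": json.dumps({"thumbnail_for": key})}
```

Because each handler is self-contained, it can be deployed, scaled and billed independently – part of what makes the pattern a natural fit for agile delivery and CI/CD.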

Is serverless right for you?

Well, you might already be using it.

“Serverless solutions may already be used by organisations that are unaware of the fact if they are using third-party APIs in their solutions already,” explains Tony Lock, distinguished analyst at Freeform Dynamics. Notwithstanding that, he explains, “it is certainly a good idea to learn a little about the practical implications of serverless computing to help understand when will be the right time to look more deeply or jump in and use”.

A major advantage of serverless is the additional pricing options that come with paying for only what you use, rather than being forced to make a projection. This is particularly useful for small businesses and startups that are looking to trim costs and scale at the same time, but all businesses could stand to benefit.
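The pay-for-what-you-use economics can be sanity-checked with back-of-envelope arithmetic. All rates below are invented round numbers for illustration, not any provider’s actual price card:

```python
# Back-of-envelope only: all rates are invented round numbers,
# not any provider's actual price card.
def vm_monthly_cost(hourly_rate, hours=730):
    """An always-on VM is billed for every hour, busy or idle."""
    return hourly_rate * hours

def serverless_monthly_cost(invocations, ms_per_invocation, memory_gb,
                            price_per_gb_second):
    """Serverless is metered on compute actually consumed, in GB-seconds."""
    gb_seconds = invocations * (ms_per_invocation / 1000.0) * memory_gb
    return gb_seconds * price_per_gb_second

# A spiky workload: 200,000 invocations a month, 120 ms each, at 0.5 GB.
spiky = serverless_monthly_cost(200_000, 120, 0.5, 0.0000166667)
always_on = vm_monthly_cost(0.05)   # a hypothetical $0.05/hour VM
```

With these made-up numbers, the spiky workload costs pennies per month serverless against tens of dollars for a mostly idle VM – though a sustained heavy workload can invert the comparison, which is why projection still matters.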

“[Serverless] is a tool that can be used by any organisation regardless of size and scale,” explains Daley. “Where it really can add value and impact is in helping small and growing organisations that only need computing power and resources when an event is triggered. For example, innovative companies in today’s growing gig economy – such as a car ride share app, or a site that allows files or photos to be uploaded – only need to spin up computing resources when a customer request, like a car ride, comes in.”

Things to consider

If serverless is something you feel would be a good fit for your business, there are a few things you need to consider.

Firstly, it’s important to understand the various needs of your business. “Learn how you could exploit serverless, find somewhere simple to start that is likely to have a fast return of visible benefits, and look for partners who have already started down the road and learn from their experiences, good and bad,” urges Lock.

It’s also important to consider your skillsets. Serverless is still a relatively young technology trend and most organisations will lack the technical skills to architect a serverless setup effectively. Of course, that doesn’t stop organisations outsourcing to other providers, which may be a short-term solution for most.

Ramanan Ramakrishna suggests “organisations should work towards incorporating serverless cloud into a digital transformation agenda for ‘born in the cloud’ initiatives rather than force-fitting it into traditional applications and associated methodologies”.

There is also the risk of going all in on serverless needlessly. Daniel Kroening, a professor of computer science at Oxford University, argues that the best way to adopt a serverless strategy is to weave it into existing setups.

“It makes sense to see them as a valuable addition to the existing toolkit available to developers and system designers – not as a replacement,” he explains.

Third-party Facebook app leaked 540m user records on AWS server


Connor Jones

4 Apr, 2019

Facebook’s heavily criticised app integration system has led to more than 146GB worth of data being left publicly exposed on AWS servers owned and operated by third-party companies.

It’s believed 540 million records relating to Facebook accounts were stored on the servers, including comments, likes, reactions, names and user IDs, obtained when users engaged with applications on the platform – the same methods unearthed during the investigation into Cambridge Analytica.

Two apps have been associated with the data hoard so far: Cultura Colectiva, a Mexico-based media company that promotes content to users in Latin America, and ‘At the Pool’, a service that matched users with other content, which has been out of operation since 2016.

At the Pool is said to have held 22,000 passwords for its service in plaintext alongside columns relating to Facebook user IDs – the fear being that many users may have been using the same password for their Facebook accounts.
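For context on why plaintext storage is so dangerous, the long-standing alternative is to store only a salted, slow hash of each password. A minimal standard-library sketch; production services should prefer a dedicated scheme such as bcrypt, scrypt or Argon2 via a maintained library:

```python
import hashlib
import hmac
import os

# What a plaintext password column should have been instead: a salted,
# deliberately slow hash. Minimal stdlib sketch for illustration only.
def hash_password(password, salt=None):
    """Return (salt, digest); a fresh random salt is generated per user."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)
```

Even if a database of (salt, digest) pairs leaks, the passwords themselves – and any accounts where they were reused – are far harder to recover than from a plaintext column.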

Both apps’ datasets were stored in Amazon S3 buckets that were misconfigured to allow public download of the files. S3 buckets are commonly used among businesses because they allow data to be distributed across servers in a wide geographical area, but there have been multiple incidents of companies failing to adequately safeguard the data held in them.
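Misconfigurations of this kind are detectable: an S3 bucket’s ACL can be inspected for grants to AWS’s “AllUsers” group, which is what makes objects publicly downloadable. A minimal sketch of that check — the helper name is ours, and the live call via boto3’s `get_bucket_acl` is left commented out so the logic stays self-contained:

```python
# URI that AWS uses in ACL grants meaning "everyone on the internet".
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_publicly_readable(acl):
    """Return True if any ACL grant exposes the bucket to all users.

    `acl` has the shape returned by boto3's s3.get_bucket_acl(Bucket=...):
    {"Grants": [{"Grantee": {...}, "Permission": "READ" | ...}, ...]}
    """
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS:
            if grant.get("Permission") in ("READ", "FULL_CONTROL"):
                return True
    return False

# Against a live bucket (requires boto3 and AWS credentials):
# import boto3
# acl = boto3.client("s3").get_bucket_acl(Bucket="example-bucket")
# print(is_publicly_readable(acl))
```

Running a check like this across an account’s buckets is exactly the kind of routine audit that would have flagged the exposed datasets long before researchers did.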

Facebook condemned the practices of both apps. “Facebook’s policies prohibit storing Facebook information in a public database,” said a Facebook spokesperson. “Once alerted to the issue, we worked with Amazon to take down the databases. We are committed to working with the developers on our platform to protect people’s data.”

AWS was made aware of the exposed data on 28 January 2019, following an alert issued by security research firm UpGuard. AWS confirmed it had received the report and was investigating it, but the data was only secured on Wednesday this week.

“AWS customers own and fully control their data,” an AWS spokesperson told IT Pro. “When we receive an abuse report concerning content that is not clearly illegal or otherwise prohibited, we notify the customer in question and ask that they take appropriate action, which is what happened here.”

This statement aligns with UpGuard’s account: the researchers alerted Cultura Colectiva on 10 January 2019, before contacting AWS, but have yet to receive a response from the company.

Accenture, Experian, WWE, and the NSA have all been found to have stored data on unsecured AWS servers in recent years, with the problem becoming so prevalent that hackers have started creating tools specifically designed to target these buckets.

“While Amazon S3 is secure by default, we offer the flexibility to change our default configurations to suit the many use cases in which broader access is required, such as building a website or hosting publicly downloadable content,” said AWS. “As is the case on-premises or anywhere else, application builders must ensure that changes they make to access configurations are protecting access as intended.”

The news coincides with an article published in The Washington Post in which Facebook’s Mark Zuckerberg called for a ‘worldwide GDPR’ and greater regulation on the data protection principles of big tech outside the EU, despite the company itself facing 10 major GDPR investigations.

The discovery of the data has once again raised the issue of Facebook’s data sharing policies, something that facilitated the improper sharing of user data for political purposes by Cambridge Analytica. This prompted Facebook to change its sharing policies to restrict access by third-parties, although the fear is that data troves such as this have already been widely shared.

“Cambridge Analytica was the most high profile case that led to some significant changes in how Facebook interacts with third-party developers, but I suspect there are many troves of Facebook data sitting around where they shouldn’t be, including this one,” said privacy advocate Paul Bischoff of Comparitech.com.

“Even though Facebook has limited what information third-party developers can access, there’s still nothing Facebook can do about abuse or mishandling until after the fact,” he said.

Facebook records exposed on AWS cloud server lead to more navel-gazing over shared responsibility

Researchers at security firm UpGuard have disclosed two separate instances of Facebook user data being exposed to the public internet – and it again raises questions about cloud security strategy and shared responsibility.

The story, initially broken by Bloomberg, noted how one dataset, originating from Mexico-based Cultura Colectiva, contained more than 540 million records detailing comments, likes, reactions, and account names among others. The second, from a now-defunct Facebook-integrated app called ‘At the Pool’, contained plaintext passwords for 22,000 users.

UpGuard said that the Cultura Colectiva data was of greater concern in terms of disclosure and response. The company sent out its first notification email to the company on January 10 this year, with a follow-up email being sent four days later – to no response. Amazon Web Services (AWS), on which the data was stored, was contacted on January 28, with a reply arriving on February 1 informing that the bucket’s owner was made aware of the exposure.

Three weeks later, however, the data was still not secured. A further email from UpGuard to AWS was immediately responded to. Yet according to the security researchers, it was ‘not until the morning of April 3, after Facebook was contacted by Bloomberg for comment, that the database backup, inside an AWS S3 storage bucket titled ‘cc-datalake’, was finally secured.’

So where does this leave both parties? For Facebook, this can be seen as another blow, as UpGuard explained. “As Facebook faces scrutiny over its data stewardship practices, they have made efforts to reduce third party access. But as these exposures show, the data genie cannot be put back in the bottle,” the company said.

“Data about Facebook users has been spread far beyond the bounds of what Facebook can control today,” UpGuard added. “Combine that plenitude of personal data with storage technologies that are often misconfigured for public access, and the result is a long tail of data about Facebook users that continues to leak.”

As far as AWS is concerned, this is not its first rodeo in this department either. But the question of responsibility, as this publication has covered on various occasions, remains a particularly thorny one.

Stefan Dyckerhoff, CEO at Lacework, a provider of automated end-to-end security across the biggest cloud providers, noted that organisations needed to be more vigilant. “Storing user data in S3 buckets is commonplace for every organisation operating workloads and accounts in AWS,” said Dyckerhoff. “But as the Facebook issue highlights, they can inadvertently be accessible, and without visibility and context around the behaviour in those storage repositories, security teams simply won’t know when there’s a potential vulnerability.”

This is admittedly easier said than done given the sheer number of partners either building apps on the biggest companies’ platforms or using their APIs – many of whom may no longer exist. Yet it could be argued that, on shared responsibility, both parties are missing the mark. “At issue is not [the] S3 bucket, but how it’s configured, and the awareness around configuration changes – some of which could end up being disastrous,” added Dyckerhoff.

In February, Check Point Software found that three in 10 respondents to its most recent security report still believed that security was primarily the responsibility of the cloud service provider. It is an issue the providers have tried to remediate: in November, AWS announced extra steps to ensure customers’ S3 buckets didn’t become misconfigured, having previously revamped its console to show bright orange warning indicators on buckets that are public.
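Among those extra steps are S3’s “Block Public Access” settings, four switches that override public ACLs and bucket policies. A hedged sketch of the full-lockdown configuration, with the live boto3 `put_public_access_block` call commented out (the bucket name is a placeholder, and the code has not been run against a real account):

```python
# The four Block Public Access switches; all True is the safe posture
# for buckets that should never serve public content.
BLOCK_ALL = {
    "BlockPublicAcls": True,       # reject new public ACLs
    "IgnorePublicAcls": True,      # neutralise any existing public ACLs
    "BlockPublicPolicy": True,     # reject new public bucket policies
    "RestrictPublicBuckets": True, # limit access to buckets with public policies
}

def lockdown_config():
    """Return a copy of the full-lockdown configuration dict."""
    return dict(BLOCK_ALL)

# Applying it (requires boto3 and AWS credentials):
# import boto3
# boto3.client("s3").put_public_access_block(
#     Bucket="example-bucket",
#     PublicAccessBlockConfiguration=lockdown_config(),
# )
```

Because these settings sit above individual object ACLs, they guard against exactly the kind of per-bucket misconfiguration that exposed the Facebook datasets.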

Writing for this publication in August, Hatem Naguib, senior vice president for security at Barracuda Networks, outlined the rationale. “Public cloud customers need to become clearer on what their responsibility is for securing their data and applications hosted by public cloud providers,” Naguib wrote. “Their platforms are definitely secure and migrating workloads into the cloud can be much more secure than on-premise data centres – however organisations do have a responsibility in securing their workloads, applications, and operating systems.”

You can read the full UpGuard post here.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.