All posts by James

McAfee notes gap between cloud competence and transformation – with CASBs key to success

It is another case of mind the gap, according to McAfee: while the vast majority of companies are seeing some level of business acceleration through their cloud initiatives, only a fraction are exploiting the cloud's full potential.

The security provider has released a special edition of its Cloud and Risk Adoption Report, which polled 1,000 enterprise organisations worldwide alongside collating data from anonymised cloud events across its cloud access security broker (CASB) product.

87% of organisations polled said they experience some business acceleration from their use of cloud services. More than half (52%) of respondents said they found better security in the cloud than in on-premises IT environments.

Looking at where data resides, almost two thirds (65%) of the enterprise data analysed lives in software as a service (SaaS) environments, such as business collaboration tools, with a quarter (25%) in infrastructure as a service (IaaS). The remaining 10% is the big unknown: shadow IT. Only a third (36%) of those polled said they could enforce data loss prevention in the cloud, with a similar number (33%) saying they could control the collaboration settings that determine how data is shared.

The report – as may be expected given how the figures were acquired – explored the impact CASBs made on operations. Not altogether surprisingly, the findings were positive. McAfee argued that organisations using a CASB were more than 35% more likely to launch new products, get to market more quickly, and expand into new markets. Despite this, only one in three companies polled were currently using a cloud access security broker.

“This research shines a light on organisations who are leading the charge in cloud adoption, prioritising the security of their data as they roll out new cloud services and winning in the market because of the actions they are taking,” said Rajiv Gupta, senior vice president for cloud security at McAfee. “Organisations often tell us how much faster their business moves when security is addressed in the cloud, and it is exciting for us now to quantify this experience and share our data and recommendations with the rest of the market.”

Read more: Gartner’s latest Magic Quadrant shows the need for cloud access security brokers going forward

Exploring the journey from cloud to AI – with a few big data bumps along the way

The potential of cloud computing and artificial intelligence (AI) is irresistible. Cloud represents the backbone for any data initiative, and then AI technologies can be used to derive key insights for both greater business intelligence and topline revenue. Yet AI is only as good as the data strategy upon which it sits.

At the AI & Big Data Expo in Amsterdam today, delegates saw the proof of the pudding in NetApp's cloud and data fabric initiatives, with DreamWorks Animation cited as a key client that has transformed its operations.

For the cloud and AI melting pot, however, there are other steps which need to be taken. Patrick Slavenburg, a member of the IoT Council, opened the session with an exploration of how edge computing was taking things further. As Moore's Law finally begins to run out of steam, Slavenburg noted there are up to 70 startups working solely on new microprocessors today. 

Noting how technology history tends to repeat itself, he added that microprocessor architecture is enjoying its first heyday since the 1970s. The key for edge computing is being able to perform deep learning at that architectural level, with more lightweight algorithms.
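
Slavenburg did not name specific tools, but one common way of making deep learning lightweight enough for constrained edge hardware is post-training quantisation, which stores weights as 8-bit integers rather than 32-bit floats. A minimal sketch in PyTorch – the toy model below is purely illustrative and not tied to any vendor mentioned here:

```python
import torch
import torch.nn as nn

# A small stand-in network for an edge inference model
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Dynamic quantisation converts the Linear layers' weights to 8-bit integers,
# shrinking the model and speeding up CPU inference, one way of making a
# network "lightweight" enough for constrained edge hardware
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

sample = torch.randn(1, 128)
print(quantised(sample).shape)  # torch.Size([1, 10])
```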

Florian Feldhaus, enterprise solutions architect at NetApp, stressed that data was the key to running AI. According to IDC, by 2020 90% of corporate strategies will explicitly mention data as a critical enterprise asset and analytics as an essential competency. "Wherever you store your data, however you manage it, that's the really important piece to get the benefits of AI," he explained.

The industry continues to insist that it is a multi-cloud, hybrid cloud world today. It is simply no longer a choice between Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP), but assessing which workloads fit which cloud. This is also the case in terms of what your company's data scientists are doing, added Feldhaus. Data scientists need to use data wherever they want, he said – use it in every cloud and move the data around to make it available to them.
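
NetApp's data fabric addresses this at a much higher level of abstraction, but at its simplest, making a dataset available in another cloud is an object-copy problem. A hedged sketch in Python of moving a file from AWS S3 to Google Cloud Storage (bucket and object names are hypothetical):

```python
import boto3
from google.cloud import storage

# Hypothetical bucket and object names, purely illustrative
SRC_BUCKET, SRC_KEY = "raw-data-aws", "datasets/sales.parquet"
DST_BUCKET, DST_BLOB = "raw-data-gcp", "datasets/sales.parquet"

# Pull the object down from S3...
s3 = boto3.client("s3")
s3.download_file(SRC_BUCKET, SRC_KEY, "/tmp/sales.parquet")

# ...and push it up to Google Cloud Storage so teams working on GCP can use it
gcs = storage.Client()
gcs.bucket(DST_BUCKET).blob(DST_BLOB).upload_from_filename("/tmp/sales.parquet")
```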

"You have to fuel data-driven innovation on the world's biggest clouds," said Feldhaus. "There is no way around the cloud." With AI services available in seconds, this was a key point in terms of getting to market. It is also the key metric for data scientists, he added.

NetApp has been gradually moving away from its storage heritage to focus on its 'data fabric' offering – an architecture which offers access to data across multiple endpoints and cloud environments, as well as on-premises. The company announced yesterday an update to its data fabric, with greater integration across Google's cloud as well as support for Kubernetes.

Feldhaus noted the strategy was based on NetApp 'wanting to move to the next step'. DreamWorks Animation was one customer looking at this future, with various big data pipelines allied to the need to process data in a short amount of time.

Ultimately, if organisations want to make the most of the AI opportunity – and time is running out for laggards – then they need their data strategy sorted out. Yes, not everything can be moved to the cloud, and some legacy applications need a lot of care and attention, but a more streamlined process is possible. Feldhaus said NetApp's data fabric had four key constituents: discovering the data, activating it, automating, and finally optimising.

Dropbox revamps as enterprise collaboration space to help users conquer ‘work about work’

Dropbox has launched a significant redesign, repositioning itself as an enterprise collaboration workspace and moving away from its file storage heritage.

The move will see Dropbox aim to be a one-stop-shop. The relaunched desktop app will enable users to create, access and share content across the Google and Microsoft portfolios, opening Google Docs and Microsoft Office files, and will offer synchronised search, alongside partnerships with Atlassian, Slack, and Zoom.

The latter partnerships are of particular interest; users will be able to start Slack conversations and share content to Slack channels directly from Dropbox, while also being able to add and join Zoom meetings from Dropbox, as well as again share files. The new features with Atlassian weren’t announced, but Dropbox promises ‘enhanced integrations [to] help teams more effectively manage their projects and content.’

As a blog post from the Dropbox team put it, the primary motivator for the move was to address the ‘work about work’ which slowed many organisations down. “Getting work done requires constant switching between different tools, and coordinating work with your team usually means a mountain of email and meetings,” the company wrote. “It all adds up to a lot of time and energy spent on work that isn’t the actual work itself. But we’ve got a plan, and we’re excited to share how we’re going to help you get a handle on all this ‘work about work.’”

From the company’s perspective, the move makes sense. As regular readers of this publication will be more than aware, the industry – and almost all organisations utilising cloud software – has moved on from simple storage.

Dropbox has made concerted efforts in the past to help customers get more value out of their data, rather than simply storing it. In October the company upgraded its search engine, Nautilus, to include machine learning capabilities – primarily to help understand and predict users’ document needs as and when they search, rather than being a slave to any one algorithm.

Indeed, it can be argued the company has shifted away from cloud computing as both a marketing message and as an internal business process. Writing for Bloomberg at the time of Dropbox’s IPO filing last March, Shira Ovide noted that the company building out its own infrastructure – a two and a half year project to move away from Amazon Web Services (AWS) – helped make its IPO proposition more viable.

You can read more about the redesign here.

Picture credit: Dropbox

Google Cloud looks to Looker for greater data analytics – but with a multi-cloud focus

Late last week, Google announced its intention to acquire business intelligence platform Looker for $2.6 billion (£2.05bn) in an all-cash transaction, with Looker joining Google Cloud upon the acquisition’s close.

For Google Cloud – whose bill is chump change compared to what Salesforce is outlaying for Tableau – Looker represents more options for customers looking to migrate data from legacy technology stacks to BigQuery, Google’s enterprise data warehouse. As Google Cloud chief Thomas Kurian put it, it will help offer customers “a more complete analytics solution from ingesting data to visualising results and integrating data and insights into… daily workflows.” Looker, meanwhile, gets a new parent, while shareholders get a pile of cash. Yet the key to making it all work is multi-cloud.

Google’s primary focus at Next in San Francisco back in April, as this publication noted at the time, was around hybrid cloud, multi-cloud – and in particular open source. The highlight of the keynote was a partnership with seven open source database vendors, including Confluent, MongoDB, and Redis Labs. Looker is compatible across all the major cloud databases, from Amazon Redshift, to Azure SQL, Oracle, and Teradata. CEO Frank Bien confirmed that customers should expect continuing support across all cloud databases.

“[The] announcement also continues our strategic commitment to multi-cloud,” wrote Kurian. “While we deepen the integration of Looker into Google Cloud Platform, customers will continue to benefit from Looker’s multi-cloud functionality and its ability to bring together data from SaaS applications like Salesforce, Marketo, and Zendesk, as well as traditional data sources. This empowers companies to create a cohesive layer built on any cloud database, as well as on other public clouds and in on-premise data centres.

“Looker customers can rest assured that the high-quality support experience that Looker has provided will be bolstered by the resources, expertise, and global presence of our cloud team,” Kurian added. “We will also continue to support best of breed analytics and visualisation tools to provide customers the choice to use a variety of technologies with Google Cloud’s analytics offering.”

Google and Looker had long been partners before the acquisition came about. In July, Looker announced an integration with BigQuery whereby data teams could create machine learning models directly in BigQuery via Looker. The companies shared more than 350 customers, including Buzzfeed, Hearst, and Yahoo!
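
That integration builds on BigQuery ML, which allows models to be trained with plain SQL that a tool such as Looker can generate. As a hedged sketch of the underlying capability – the dataset, table and column names below are hypothetical – training a model directly in BigQuery from Python could look like this:

```python
from google.cloud import bigquery

client = bigquery.Client()

# BigQuery ML trains a model from a SQL statement; the dataset, table and
# column names here are hypothetical
sql = """
CREATE OR REPLACE MODEL `analytics.churn_model`
OPTIONS(model_type='logistic_reg') AS
SELECT
  tenure_months,
  monthly_spend,
  churned AS label
FROM `analytics.customer_history`
"""

client.query(sql).result()  # blocks until the training job completes
```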

“The data analytics market is growing incredibly fast as companies look to leverage all of their data to make more informed decisions,” said Frank Gens, senior vice president and chief analyst at IDC. “Google Cloud is one of the leaders in the data warehouse market, and the addition of Looker will further strengthen their ability to serve the needs of enterprise customers while also advancing their commitment to multi-cloud.”

Joyent bids farewell to the public cloud in ‘difficult’ decision

It was one of the most innovative early-stage cloud vendors – but Joyent’s public cloud offering will be no more.

The company announced its departure from the public cloud space in a blog post today, scaling back to serve only customers of its single-tenant cloud offering.

Affected customers have five months to find a new home; a documentation page confirmed the Joyent Triton public cloud will reach end of life on November 9, while the company has separately put together a list of available partners, including Microsoft Azure and OVH.

Steve Tuck, Joyent president and chief operating officer (COO), cited strained resources in developing both its public cloud and single-tenant cloud as the reason behind a ‘difficult’ decision.

“To all of our public cloud customers, we will work closely with you over the coming five months to help you transition your applications and infrastructure as seamlessly as possible to their new home,” Tuck wrote. “We are truly grateful for your business and the commitment that you have shown us over the years; thank you.”

Joyent had been acquired by Samsung in 2016 after the Korean giant had explored Manta, the company’s object storage system, for implementation. Samsung liked the product so much that it bought the company outright; as Bryan Cantrill, CTO of Joyent, explained at the time, Samsung offered hardware to Joyent after its proposed deployment proved too hefty for the startup to cope with.

Prior to the days of public cloud and infrastructure as a service (IaaS) domination from Amazon Web Services (AWS), Microsoft, Google, and other hyperscalers with frighteningly deep pockets, Joyent enjoyed a stellar reputation. The company was praised by Gartner, in its 2014 IaaS Magic Quadrant, for having a “unique vision”, as well as previously being the corporate steward of Node.js, growing it into a key standard for web, mobile, and Internet of Things (IoT) architectures.

“By providing [an] easy on-ramp to on-demand cloud infrastructure, we have had the good fortune to work with an amazing array of individuals and companies, big and small,” added Tuck.

Organisations need to ‘acknowledge challenges’ in not keeping 100% uptime, argues Veeam

It’s the big downtime downturn; according to a new study from Veeam, three in four organisations admit they are not able to meet users’ demands for uninterrupted access to applications and data.

The findings appear in the company’s latest Cloud Data Management Report, which surveyed more than 1,500 senior business and IT leaders across 13 countries. Ultimately, more sophisticated data management is something Veeam considers itself an expert in – the company cites itself as the leader in ‘cloud data management’ – yet the stats are interesting.

In particular, the research found that lost data from mission-critical application downtime costs organisations more than $100,000 per hour on average, while app downtime translates to a cost of $20.1 million globally in lost revenue and productivity.

Evidently, the research has noted how organisations are struggling with their current data management methods. 44% of those polled said more sophisticated data management was critical to their organisation’s success in the coming two years. Four in five respondents said better data management strategies led to greater productivity, while two thirds found greater stability.

Perhaps surprisingly, software as a service (SaaS) adoption was not completely saturated among those polled; just over three quarters (77%) said they were already using it, with this number set to rise to 93% by the end of 2019. The golden nugget concerns when organisations see the dividend of adopting new technologies: financial benefits arrive after nine months on average, with operational benefits arriving after approximately seven months.

“We are living in a data-driven age, and organisations need to wake up and take action to protect their data,” said Ratmir Timashev, Veeam co-founder and EVP sales and marketing. “Businesses must manage their data in a way that always delivers availability and leverage its value to drive performance. This is no longer a luxury, but a business necessity.

“There is a significant opportunity and competitive advantage for those who effectively manage their data,” Timashev added. “Ask yourself – are you confident that your business data will always be available? If you are unsure it’s time to act – and our study shows that many are not acting fast enough.”

You can find out more about the Veeam report here (email required).

Microsoft and Oracle partner up to interconnect clouds – with retail customers cited

Here’s proof that cloudy collaboration can happen even at the highest levels: Microsoft and Oracle have announced an ‘interoperability partnership’ aimed at helping customers migrate and run mission-critical enterprise workloads across Microsoft Azure and Oracle Cloud.

Organisations that are customers of both vendors will be able to connect Azure and Oracle Cloud seamlessly. The Oracle Ashburn data centre and the Azure US East facilities are the only ones available for connection at this stage; however, both companies plan to expand to additional regions.

The two companies will also offer unified identity and access management to manage resources across Azure and Oracle Cloud, while Oracle’s enterprise applications, such as JD Edwards EnterpriseOne and Hyperion, can be deployed on Azure with Oracle databases running in Oracle’s cloud.
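
In practical terms, the interconnect means an application hosted on Azure can reach an Oracle database running in Oracle Cloud over a private link rather than the public internet. A minimal sketch using the python-oracledb driver, with entirely hypothetical connection details:

```python
import oracledb  # the python-oracledb driver, successor to cx_Oracle

# Hypothetical details: an app hosted on Azure connecting to an Oracle
# database in Oracle Cloud over the private interconnect
conn = oracledb.connect(
    user="app_user",
    password="example-password",
    dsn="10.0.2.15:1521/ORCLPDB1",  # private address reachable via the interconnect
)

with conn.cursor() as cur:
    cur.execute("SELECT sysdate FROM dual")
    print(cur.fetchone())
```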

“As the cloud of choice for the enterprise, with over 95% of the Fortune 500 using Azure, we have always been first and foremost focused on helping our customers thrive on their digital transformation journeys,” said Scott Guthrie, executive vice president for Microsoft’s cloud and AI division, in a statement. “With Oracle’s enterprise expertise, this alliance is a natural choice for us as we help our joint customers accelerate the migration of enterprise applications and databases to the public cloud.”

This move may come as a surprise to those who see Microsoft and Oracle as competitors in public cloud, but it is by no means the most surprising cloud partnership – that honour still goes to Oracle and Salesforce’s doomed romance of 2013.

Indeed, the rationale is a potentially interesting one. The press materials gave mention to three customers. Aside from energy services giant Halliburton, the other two – Albertsons and Gap Inc – are worth considering. Albertsons, as regular readers of this publication will know, moved over to Microsoft earlier this year. At the time, CIO Anuj Dhanda told CNBC the company went with Azure because of its ‘experience with big companies, history with large retailers and strong technical capabilities, and because it [wasn’t] a competitor.’

Gap was announced as a Microsoft customer in a five-year deal back in November. Again speaking with CNBC – and as reported by CIO Dive – Shelley Branston, Microsoft corporate VP for global retail and consumer goods, said retailers shied away from Amazon Web Services (AWS) because they want ‘a partner that is not going to be a competitor of theirs in any other parts of their businesses.’

Albertsons said in a statement that the Microsoft/Oracle alliance would allow the company ‘to create cross-cloud solutions that optimise many current investments while maximising the agility, scalability and efficiency of the public cloud’, while Gap noted the move would help ‘bring [its] omnichannel experience closer together and transform the technology platform that powers the Gap Inc. brands’.

Yet it’s worth noting that the retail cloud ‘war’ may be a little overplayed. Following the Albertsons move Jean Atelsek, digital economics unit analyst at 451 Research, told CloudTech: “It’s easy to get the impression that retailers are fleeing AWS. Microsoft’s big partnership with Walmart seems to be the example that everyone wants to universalise the entire cloud space. However since a lot of retailers also sell through/on AWS, they’re less likely than Walmart to see Amazon (and by extension AWS) as the devil.”

NASCAR moves onto AWS to uncover and analyse its racing archive

As sporting teams and franchises continue to realise the value of their archive – and balk at how much data it commands – many are in the process of migrating their operations to the cloud. NASCAR is the latest, announcing it will utilise Amazon Web Services (AWS) for archiving purposes.

The motor racing governing body is set to launch new content from its archive, titled ‘This Moment in NASCAR History’, on its website, with the service powered by AWS. NASCAR is also using image and video analysis tool Amazon Rekognition – otherwise known for its facial recognition capabilities – to automatically tag specific video frames with metadata for easier search.
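
NASCAR has not published its pipeline, but tagging frames with Rekognition typically means calling the detect_labels API on each extracted image and storing the returned labels as searchable metadata. A hedged sketch with boto3, using hypothetical bucket and object names:

```python
import boto3

rekognition = boto3.client("rekognition")

# Hypothetical frame stored in S3; in practice each extracted video frame
# would be analysed and its labels stored as searchable metadata
response = rekognition.detect_labels(
    Image={"S3Object": {
        "Bucket": "nascar-archive-frames",
        "Name": "1998/daytona-500/frame-04512.jpg",
    }},
    MaxLabels=10,
    MinConfidence=80,
)

for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```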

“We are pleased to welcome AWS to the NASCAR family,” said Jon Tuck, NASCAR chief revenue officer, in a statement. “This relationship underscores our commitment to accelerate innovation and the adoption of cutting-edge technology across our sport.

“NASCAR continues to be a powerful marketing vehicle and will position AWS’s cutting-edge cloud technology in front of industry stakeholders, corporate sponsors, broadcast partners, and ultimately our fans,” Tuck added.

The move marks another key sporting client in AWS’ roster. In July, Formula 1 was unveiled as an Amazon customer, with the company moving the majority of its infrastructure from on-premises data centres to AWS. Formula 1 is also using various AWS products, from Amazon SageMaker to apply machine learning models to more than 65 years of race data, to AWS Lambda for serverless computing.

Ross Brawn, Formula 1 managing director of motor sports, took to the stage at AWS re:Invent in November to tell attendees more about the company’s initiatives. The resultant product, ‘F1 Insights Powered By AWS’, was soft-launched last season, giving fans race insights, and Brawn noted plans to further integrate telemetry data, as well as to use high performance computing (HPC) to simulate environments that lead to closer racing.

Two weeks after the Formula 1 announcement, Major League Baseball (MLB) extended its partnership with AWS, citing machine learning (ML), artificial intelligence, and deep learning as a key part of its strategy. The league already used Amazon for various workloads, including Statcast, its tracking and statistics system, but added SageMaker for ML use cases. Among the most interesting was its plan to use SageMaker, alongside Amazon Comprehend, to “build a language model that would create analysis for live games in the tone and style of iconic announcers.”

NASCAR is also keen to utilise these aspects of Amazon’s cloud. The company said AWS was its preferred ‘cloud computing, cloud machine learning and cloud artificial intelligence’ provider.

It’s worth noting, however, that AWS is not the only game in town. The Football Association (FA) announced Google as its official cloud and data analytics partner last week, while the Golden State Warriors are another confirmed customer of Google’s cloud.

You can read more about the NASCAR move here.

Enterprises not seeing total fulfilment with cloud strategies – but hybrid remains the way to go

For enterprises looking to migrate to the cloud, with sprawling workloads and data, it can be a long, arduous journey. According to a new survey, more than two thirds of large enterprises are not getting the full benefits of their cloud migration journeys.

The study from Accenture – titled ‘Perspectives on Cloud Outcomes: Expectation vs. Reality’ – polled 200 senior IT professionals from large global businesses, and identified security and the complexity of business and operational change as key barriers to cloud success.

This doesn’t mean enterprises struggle to see any benefits from the cloud – overall satisfaction averaged above 90% – but when it came to cost, speed, business enablement and service levels, only one in three companies said they were fully satisfied on those metrics.

This breaks down further when looking at specific rollouts. Overall, enterprises are seeing greater benefits the more chips they put in; satisfaction levels climb to almost 50% among those with heavy investments, compared to less than 30% for those just starting their journeys.

When it came to public and hybrid cloud, the results showed an evident cost versus speed trade-off. More than half of those with public cloud workloads said they had fully achieved their cost objectives, while for speed it dropped below 30%. Hybrid cloud initiatives, the research noted, saw much more consistent results across the board, if not quite the same cost savings.

This makes for interesting reading when compared with similar research. According to a study from Turbonomic in March, the vast majority of companies ‘expect workloads to move freely across clouds’, with multi-cloud becoming the de facto deployment model for organisations of all sizes.

Yet the Accenture study argued this would not be plain sailing. 42% of those polled said a lack of skills within their organisation hampered their initiatives. Securing cloud skills is, of course, a subject which continues to concern – but according to Accenture, managed service providers (MSPs) may provide the answer. 87% of those polled said they would be interested in pursuing this route.

“Like most new technologies, capturing the intended benefits of cloud takes time; there is a learning curve influenced by many variables and barriers,” said Kishore Durg, senior managing director of Accenture Cloud for Technology Services. “Taking your cloud program to the next level isn’t something anyone can do overnight – clients need to approach it strategically with a trusted partner to access deep expertise, show measurable business value, and expedite digital transformation.

“If IT departments fail to showcase direct business outcomes from their cloud journeys, they risk becoming less relevant and losing out to emerging business functions, like the office of the chief data officer, that are better able to use cloud technologies to enable rapid innovation,” added Durg.

You can read the full report here (pdf, no opt-in required).

Google confirms network congestion as contributor to four-hour cloud outage

Google has confirmed a ‘network congestion’ issue which affected various services for more than four hours on Sunday has since been resolved.

A status update at 1225 PT noted the company was investigating an issue with Google Compute Engine, later diagnosed as high levels of network congestion across eastern USA sites. A further update arrived at 1458 to confirm engineering teams were working on the issue before the all-clear was sounded at 1709.

“We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimise future recurrence,” the company wrote in a statement. “We will provide a detailed report of this incident once we have completed our internal investigation.”

The outage predominantly affected users in the US, with some European users also seeing issues. While various Google services, including Google Cloud, YouTube, and G Suite were affected, many companies who run on Google’s cloud also experienced problems. Snapchat – a long-serving Google Cloud customer and considered a flagship client before the company’s major enterprise push – saw downtime, as did gaming messaging service Discord.

According to network intelligence company ThousandEyes, network congestion is a ‘likely root cause’ of the outage. The company spotted services behaving out of sync as early as 1200 PT at sites including Ashburn, Atlanta and Chicago, with services only beginning to come back at approximately 1530. “For the majority of the duration of the 4+ hour outage, ThousandEyes detected 100% packet loss for certain Google services from 249 of our global vantage points in 170 cities around the world,” said Angelique Medina, product marketing director at ThousandEyes.

Previous Google cloud snafus have shown the company can learn lessons. In November 2015 Google Compute Engine went down for approximately 70 minutes, with the result being the removal of manual link activation for safety checks. The following April, services went down for 18 minutes following a bug in Google Cloud’s network configuration management software.  

According to research from Gartner and Krystallize Technologies published last month, Microsoft is the poor relation among the biggest three cloud providers when it comes to reliability. As reported by GeekWire, 2018 saw Amazon and Google achieve almost identical uptime statistics, at 99.9987% and 99.9982% respectively. Microsoft, meanwhile, trailed with 99.9792% – a ‘small but significant’ amount.
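
To put those percentages in perspective, converting them into minutes of downtime per year shows why the gap is described as small but significant (simple arithmetic on the reported figures, nothing more):

```python
# Convert the reported uptime percentages into approximate downtime per year
MINUTES_PER_YEAR = 365 * 24 * 60

for provider, uptime in [("AWS", 99.9987), ("Google", 99.9982), ("Microsoft", 99.9792)]:
    downtime = (1 - uptime / 100) * MINUTES_PER_YEAR
    print(f"{provider}: ~{downtime:.0f} minutes of downtime per year")

# Roughly 7 minutes for AWS, 9 for Google and 109 for Microsoft
```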
