Dead Netflix accounts reactivated by hackers

Bobby Hellard

29 Nov, 2019

Hackers have exploited Netflix’s data retention policies to reactivate cancelled customer subscriptions and steal their accounts.

Former subscribers say they noticed their accounts had been reinstated when they were charged a monthly fee, months after cancellation.

The hackers can log in to dormant accounts and reactivate them without knowing users’ bank details, according to the BBC.

The issue stems from the streaming service storing customer data, including billing information, for ten months after cancellation, to enable a speedy account recovery should a user wish to rejoin.

However, this is proving a boon for hackers, who need only an email address and password to reactivate an account.

Radio 4’s You and Yours programme spoke to Emily Keen, who said she cancelled her subscription in April 2019 but was charged £11.99 by Netflix in September. She tried to log in to the account but found her email and password were no longer recognised, as the hackers had changed her details and signed her up to the more expensive service option.

Keen contacted Netflix and was assured her card would be blocked and she would receive a full refund, but the streaming service went on to take two further payments in October and November.

Other users who have had their accounts mysteriously reactivated have hit out at the company on Twitter.

“Super disappointed with my @netflix customer service experience,” one user posted on the social media site. “Our account was hacked, supposed to have been deactivated, was reactivated by hacker, and continued to use our credit card. We were told to file chargeback and @netflix would not offer refund.”

Stolen Netflix login details have reportedly been found on sites like eBay, sold as “lifetime” accounts for as little as £3. The same issue was reported for Disney+ accounts just hours after the service launched in the US, with login details surfacing on hacking forums.

Cloud Pro has approached Netflix for comment.

Alibaba Cloud releases Alink machine learning algorithm to GitHub

Alibaba Cloud has announced it has made the ‘core codes’ of its machine learning algorithm Alink available on GitHub.

The company notes it is one of the top 10 contributors to the GitHub ecosystem, with approximately 20,000 contributors. Alink was built as a self-developed platform to aid batch and stream processing, with applications for machine learning tasks such as online product recommendation and intelligent customer services.

Not surprisingly, Alibaba is targeting data analysts and software developers, who can use Alink to build their own software focusing on statistical analysis, real-time prediction and personalised recommendation.

“As a platform that consists of various algorithms combining learning in various data processing patterns, Alink can be a valuable option for developers looking for robust big data and advanced machine learning tools,” said Jia Yangqing, Alibaba Cloud president and senior fellow of its data platform. “As one of the top 10 contributors to GitHub, we are committed to connecting with the open source community as early as possible in our software development cycles.

“Sharing Alink on GitHub underlines this long-held commitment,” Jia added.

With the US enjoying a well-earned holiday rest, and the majority of the world hunting out Black Friday deals, Alibaba had a chance to rush the opposition with Singles Day earlier this month. The numbers put out by the company did not disappoint: zero downtime was claimed, with $1 billion of gross merchandise volume achieved within 68 seconds of launch.

A recent report from ThousandEyes aimed to explore benchmark performance of the hyperscalers, noting that Alibaba, alongside Amazon Web Services (AWS), relied more heavily on the public internet than Microsoft and Google, which generally prefer private backbone networks. The report also noted that, contrary to popular opinion, Alibaba suffered packet loss when it came to China’s Great Firewall.

You can take a look at the Alibaba Cloud Alink GitHub repository here. Interested in hearing industry leaders discuss subjects like this and share their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

GitGuardian, the security startup hunting down online secrets to keep companies safe from hackers

Victoria Woollaston

28 Nov, 2019

When the login details of an Uber engineer were exposed in 2016 – signalling one of the most high-profile breaches of recent years – the names and addresses of 57 million riders and drivers were left at the mercy of hackers. 

None of Uber’s corporate systems had been directly breached, though. Its security infrastructure was working as it should. Instead, the credentials were found buried within the code of an Uber developer’s personal GitHub account. This account and its repositories were hacked, reportedly due to poor password hygiene, and the stolen credentials were used to access Uber’s vast datastore. The breach, which Uber sat on for a year, resulted in a then-record-breaking $148 million fine.

Yet despite this public lesson in how not to handle private credentials, so-called company secret leakage is an everyday occurrence.

The rise of secret leakage

Research from North Carolina State University found that in just six months between October 2017 and April 2018, more than half a million secrets were uploaded to GitHub repositories, including sensitive login details, access keys, auth tokens and private files. A 2019 SANS Institute survey found that half of company data breaches in the past 12 months were a result of credential hacking – higher than any other attack method among firms using cloud-based services. 

This is where GitGuardian comes in. 

Founded in 2017 by Jérémy Thomas and Eric Fourrier – a pair of applied mathematics graduates and software engineers specialising in data science, machine learning and AI – the Paris-based cybersecurity startup uses a combination of algorithms, including pattern matching and machine learning, to hunt for signs of company secrets in online code. According to the company’s figures, a staggering 3,000-plus secrets make their way online every day.

“The idea for GitGuardian came when Eric and I spotted a vulnerability buried in a GitHub repository,” CEO and co-founder Thomas tells Cloud Pro. “This vulnerability involved sensitive credentials relating to a major company being leaked online that had the potential to cost the firm tens of millions of dollars if they had got into the wrong hands. We alerted the company to the vulnerability and it was able to nullify it in less than a week.” 

“We then built an algorithm and real-time monitoring platform that automated and significantly built upon the manual steps we took when we made that initial detection, and this platform attracted interest from GitHub’s own Scott Chacon as well as Solomon Hykes from Docker and Renaud Visage from EventBrite.” 

How the cloud is fuelling secret leakage

The problem of sensitive data leakage stems in part from the increasing reliance of software developers on third-party services. To integrate such services, developers often juggle hundreds of credentials with varying sensitivity, from API keys used to provide mapping features on websites to Amazon Web Services login details, and private cryptographic keys for servers. Not to mention the many secrets designed to protect data, surrounding payment systems, intellectual property and more. 
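The slip behind most of these leaks is often as mundane as a credential typed straight into source code. A minimal, hypothetical Python sketch of the anti-pattern and the usual remedy (the key shown is AWS's published documentation placeholder, not a real credential):

```python
import os

# BAD: a hard-coded credential like this gets committed and pushed to
# GitHub along with the rest of the file. (This value is the example
# placeholder from AWS's own documentation.)
AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"

# BETTER: read the credential from the environment at runtime, so the
# source file itself contains nothing sensitive.
def get_aws_key() -> str:
    key = os.environ.get("AWS_ACCESS_KEY_ID")
    if key is None:
        raise RuntimeError("AWS_ACCESS_KEY_ID is not set")
    return key
```

The same principle applies to any of the hundreds of credentials mentioned above: the secret lives in the deployment environment (or a secrets manager), never in the repository.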

More than 40 million developers and almost 3 million businesses and organisations globally use GitHub, the public platform that lets developers share code and work collaboratively on projects. In the process of handling these integrations, whether by accident (in the majority of cases) or occasionally knowingly, developers upload code with company secrets buried within it. As was seen with the Uber breach, hackers can theoretically scour this code, steal credentials and hack company accounts, all without the developer or their employer being any the wiser.

How GitGuardian plugs these leaks

GitGuardian’s technology works by first linking developers registered on GitHub to their respective companies. This alone gives companies greater insight into who their developers are on GitHub and the level of public activity they’re involved in. This is especially important for developers’ personal repositories because they’re completely out of their companies’ control, yet too often contain corporate credentials. 

Once linked, GitGuardian’s algorithms scrutinise any and all code changes, known as commits, made by these developers in real time, looking for signs of company secrets. Such signs within these commits range from code patterns to file types that have previously been found to contain credentials.

“Our algorithms scan the content of more than 2.5 million commits a day, covering over 300 types of secrets from keys to database connection strings, SSL certificates, usernames and passwords,” Thomas continues.
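GitGuardian's detection stack itself is proprietary, but the pattern-matching half of such an approach can be sketched with a few regular expressions for well-known key formats. The patterns and function below are illustrative assumptions, not GitGuardian's actual rules:

```python
import re

# Illustrative patterns for a few well-known credential formats:
# AWS access key IDs, GitHub personal access tokens, and PEM headers.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_commit(added_lines):
    """Return (line_number, secret_type) pairs for suspicious lines
    in a commit's added content."""
    findings = []
    for lineno, line in enumerate(added_lines, start=1):
        for secret_type, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, secret_type))
    return findings
```

A production system would layer entropy checks, file-type heuristics and machine-learning classifiers on top of raw patterns like these, which is where the feedback loop described below comes in.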

Once a leak occurs, it takes GitGuardian four seconds to detect it and send an alert to the developer and their security team. On average, the information is removed within 25 minutes and the credential is revoked within the hour. For every alert, GitGuardian seeks feedback from the developers and security teams involved, who rate the accuracy of the detection: were company secrets actually exposed or was it a false positive? Consequently, the algorithm is constantly evolving in response to new secrets and how they are leaked.

This seems like a simple premise, even if the technology behind it is far from simple. But what’s to stop a hacker building a similar algorithm to intercept the secrets before GitGuardian’s platform spots it? 

“GitGuardian is indeed competing with individual black hat hackers, as well as organised criminal groups,” Thomas explains. “We constantly improve our algorithms to be quicker and smarter than they are, and to be able to detect a wider scope of vulnerabilities, which requires a dedicated, highly skilled team.

“We’re helped in this by our users and customers who give us feedback – at scale – that we reinject into our algorithms. Our white hat approach allows us to collect feedback and this gives us a tremendous edge over black hats. You can see this as the unfair advantage you get by doing good.”

GitGuardian has already supported global government organisations, more than 100 Fortune 500 companies and 400,000 individual developers. It’s now setting its sights on adding even more developers and companies to its platform to further improve its algorithm and extend the technology for use on private sites. 

“We started GitGuardian by tackling secrets in source code and private sites,” concludes Thomas. “Our ambition really is to be developers’ and cybersecurity professionals’ best friend when it comes to securing the vulnerability area that is emerging due to modern software development techniques [and] we’re on the road to doing this.”

McAfee notes the gap between cloud-first and cloud-only – yet optimism reigns on success

Two in five large UK organisations expect their operations to be cloud-only by 2021, according to a new report – but the gap between the haves and the have-nots is evident.

The findings appear in a new report from McAfee. The security vendor polled more than 2,000 respondents – 1,310 senior IT staff and 750 employees – across large businesses in the UK, France and Germany to assess cloud readiness.

40% of large UK businesses expect to be cloud-only by 2021, yet only 5% of those surveyed consider themselves to be at this stage already, the research found. 86% of UK-based senior IT staff saw their business as cloud-first today, broadly in line with France (90%) and Germany (92%), while optimism reigned over becoming cloud-only when given an indeterminate future date: 70% of UK respondents agreed this would occur, albeit lower than their French (75%) and German (86%) counterparts.

The benefits are clear among respondents. 88% of senior IT staff polled in the UK said moving to the cloud had increased productivity among end users. 84% said the move had improved security, while supplying more varied services (85%) and increased innovation (84%) were also cited.

The question of responsibility is an interesting one, and shows where the waters begin to muddy. Never mind the issue of vendor versus customer; there is little consensus within senior leadership itself. A plurality believe responsibility lies with the head of IT (34%), compared with the CIO (19%), CEO (14%) or CISO (5%). One in five (19%) employees surveyed admitted to using apps which had not been approved by IT.

“The key to security in a cloud-first environment is knowing where and how data is being used, shared and stored by employees, contractors and other third parties,” said Nigel Hawthorn, director of McAfee’s EMEA cloud security business. “When sensitive corporate data is under the IT team’s control – whether in collaboration tools or SaaS and IaaS applications – organisations can ensure the right policies and safeguards are in place to protect data from device to cloud, detect malicious activity and correct any threats quickly as soon as they arise.”

Those wondering ‘whither McAfee?’ with regards to cloud security research will notice the company’s long-standing pivot to this arena. The abovementioned ‘device to cloud’ reference is taken directly from McAfee’s branding as the company looks to build its expertise as a cloud access security broker (CASB).

This is not without success, as McAfee was named for a second year, alongside Bitglass, Netskope and Symantec, as a leader in Gartner’s CASB Magic Quadrant last month. Last year Gartner noted McAfee’s expertise in raising awareness of shadow IT, bolstered by its acquisition of Skyhigh Networks. 2019’s Quadrant sees one new face in the winners’ enclosure in the shape of Microsoft.

In April, McAfee released a special edition of its Cloud and Risk Adoption Report. According to the 1,000 enterprise organisations polled, more than half (52%) said they found security better in the cloud than on-premise, with organisations who adopt a CASB more than 35% likelier to launch new products and gain quicker time to market.

AT&T and Microsoft launch edge computing network

Bobby Hellard

27 Nov, 2019

Microsoft and AT&T have integrated 5G with Azure to launch an edge computing service for enterprise customers.

The two companies signed a $2 billion deal in July, which involved the migration of AT&T data and workflows to Azure, and introduced plans to accelerate work on 5G and cloud computing.

The first joint announcement to come out of the deal, announced on 26 November, is a pilot launch of an edge computing service called Network Edge Compute, a virtualised 5G core that can deploy Azure services.

It’s available to certain customers, initially in Dallas, but will roll out to customers in Los Angeles and Atlanta over the next year.

“With our 5G and edge computing, AT&T is collaborating uniquely with Microsoft to marry their cloud capabilities with our network to create lower latency between the device and the cloud that will unlock new, future scenarios for consumers and businesses,” said Mo Katibeh, EVP and chief marketing officer, AT&T Business.

“We’ve said all year developers and businesses will be the early 5G adopters and this puts both at the forefront of this revolution.”

The collaboration will see AT&T become a “public-cloud first” business, according to Microsoft. The telecoms giant’s migration is well underway and is set to be completed by 2024.

“We are helping AT&T light up a wide range of unique solutions powered by Microsoft’s cloud, both for its business and our mutual customers in a secure and trusted way,” said Corey Sanders, corporate VP of Microsoft Solutions.

“The collaboration reaches across AT&T, bringing the hyper-scale of Microsoft Azure together with AT&T’s network to innovate with 5G and edge computing across every industry.”

It’s also another big deal for Microsoft, which has made its public cloud strategy clear with a number of acquisitions of migration specialists. Most recently, the tech giant snapped up Mover, which swiftly followed a deal to buy the similarly named Movere.

Microsoft and AT&T expand upon partnership to deliver Azure services on 5G core

Microsoft and AT&T have beefed up their strategic partnership, announcing a new offering where AT&T’s growing 5G network will be able to run Azure services.

The companies will be opening select preview availability for network edge compute (NEC) technology. The technology ‘weaves Microsoft Azure cloud services into AT&T network edge locations closer to customers,’ as the companies put it.

Microsoft and AT&T first came together earlier this year, with the former somewhat stealing the thunder of IBM, which had announced a similar agreement with AT&T the day before.

While the operator will be using Microsoft’s technology to a certain extent – the press materials noted it was ‘preferred’ for ‘non-network applications’ – the collaborative roadmap for edge computing and 5G, among other technologies, was the more interesting part of the story. The duo noted various opportunities that 5G and edge would present. Mobile gaming is on the priority list, as is utilising drones for augmented and virtual reality.

Regarding AT&T’s own cloud journey, the commitment to migrating most non-network workloads to the public cloud by 2024 was noted, while the pledge for the operator to become ‘public-cloud first’ was reaffirmed.

“We are helping AT&T light up a wide range of unique solutions powered by Microsoft’s cloud, both for its business and our mutual customers in a secure and trusted way,” said Corey Sanders, Microsoft corporate vice president in a statement. “The collaboration reaches across AT&T, bringing the hyperscale of Microsoft Azure together with AT&T’s network to innovate with 5G and edge computing across every industry.”

After many false starts – remember Verizon’s ill-fated public cloud product offering? – telco is finding a much surer footing in the cloud ecosystem. As VMware CEO Pat Gelsinger put it in August: “Telcos will play a bigger role in the cloud universe than ever before. The shift from hardware to software is a great opportunity for US industry to step in and play a great role in the development of 5G.”

You can read the full Microsoft and AT&T update here.

AWS offloads Alexa processing to the cloud

Bobby Hellard

26 Nov, 2019

AWS has made its voice control services available on lower-powered devices by offloading the majority of the work to the cloud.

Alexa Voice Services (AVS) is already widely used with Amazon’s Echo smart speakers and other devices that can be connected to a network or the internet, such as lightbulbs and TVs.

Adding voice controls was costly, as Alexa devices had a minimum requirement of 100MB of on-device RAM and an Arm Cortex-A class microprocessor to have enough processing power to handle voice commands.

That’s no longer the case, as the tech giant will use its cloud to handle most of the processing requirements with Alexa Voice Services for IoT, reducing the cost of voice control by up to 50%. The baseline requirement has now dropped to 1MB of RAM and an Arm Cortex-M class microcontroller.

The move means that retrieving, buffering and decoding will also be offloaded to the cloud. As such, everything from light switches to thermostats can now be controlled entirely by voice with AVS for IoT.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told TechCrunch. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.

“It just opens up what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub – you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”
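The division of labour Didascalou describes – lightweight wake word detection on the device, everything else in the cloud – can be sketched as a simple control loop. The detector and cloud client below are stand-in functions for illustration, not AWS APIs:

```python
def run_device_loop(frames, detect_wake_word, stream_to_cloud):
    """Minimal control loop: audio stays on-device until the wake
    word fires, after which subsequent frames are streamed to the
    cloud for the heavy lifting (retrieval, buffering, decoding)."""
    awake = False
    responses = []
    for frame in frames:
        if not awake:
            # The only on-device ML: cheap wake word detection,
            # feasible on a 1MB Cortex-M class microcontroller.
            awake = detect_wake_word(frame)
        else:
            # Everything after the wake word is handled remotely.
            responses.append(stream_to_cloud(frame))
    return responses
```

For example, with a toy detector that fires on the frame "alexa", only the frames after the wake word ever leave the device.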

The cloud giant made a number of IoT announcements aimed at simplifying IoT services for companies deploying large fleets of devices. It revealed new features for AWS IoT Greengrass, for example, which now supports Docker containers. Greengrass extends AWS functions to connected devices, allowing businesses to perform data collection and analysis at the edge, and the Docker support makes it easier to move compute workloads to and from the edge.

HPE hybrid IT revenue falls 11%, triggering share slump

Dale Walker

26 Nov, 2019

HPE missed analyst estimates in its fourth-quarter earnings report on Monday, largely driven by slowing demand for core products and wider economic uncertainty.

Shares slumped 4% in after-hours trading, prompted by a drop of 11% in the company’s Hybrid IT division, by far HPE’s largest unit comprising its servers, storage and data centre products. Revenue for the unit came in at $5.67 billion, just short of the $5.74 billion expected by analysts.

The company’s Intelligent Edge unit, a field that HPE has aggressively targeted, also saw revenue slump 6.5%, falling from $773 million to $723 million year over year.

In September, HPE CEO Antonio Neri warned that turbulent economic factors, including trade tensions, were creating “uneven demand” and would rattle customer confidence for some time to come.

Commenting on this week’s earnings, he said: “We had a very successful fiscal year, marked by strong and consistent performance. Through our disciplined execution, we improved profitability across the company and significantly exceeded our original non-GAAP earnings and cash flow outlook, while sharpening our focus, transforming our culture and delivering differentiated innovation to our customers as they accelerate their digital transformations.

“I am confident in our ability to drive sustainable, profitable growth as we continue to shift our portfolio to higher-value, software-defined solutions and execute our pivot to offering everything as a service by 2022,” Neri continued. “Our strategy to deliver an edge-to-cloud platform-as-a-service is unmatched in the industry.”

There were some positive signs for the company. Quarterly profit was slightly higher than analyst estimates, with earnings of 49 cents per share against the 46 cents anticipated, as reported by Reuters.

HPE also gave a positive outlook for the year ahead, estimating $1.01 to $1.17 per share in profits and $1.78 to $1.94 per share in adjusted profits.

The earnings report brings to an end a year marked by a series of strategic acquisitions that will likely serve to further diversify HPE’s earnings in 2020. In May the company acquired supercomputing giant Cray, a deal that came as something of a surprise but will certainly lead to more developments in its high-performance computing division.

HPE also acquired the intellectual property of big data analytics specialist MapR in August. This deal included a raft of AI technology and expertise that HPE said would be put towards its Intelligent Data Platform.

Study shows continued cloud maturation in Nordics – with manufacturing a standout

A new report from Nordic IT services provider Tieto has found the region’s cloud landscape has matured significantly since 2015 from both a strategic and operational perspective – with Sweden and Finland fighting for supremacy.

The study, the latest Cloud Maturity Index, which was based on responses from almost 300 decision-makers across the public and private sectors in the Nordics, classed almost one in five (18%) organisations as ‘mature’, more than a quarter (27%) as ‘proficient’, 42% as at a basic level, and 13% as ‘immature’.

In other words, it’s a broad church, with just a slight emphasis on the have-nots rather than the haves. Those who are described as mature use cloud services to a larger extent – virtually everything (97%) being cloud-based – and are much likelier to exploit the technology’s advantages compared with their immature cousins. Being classified as a mature cloud business means approximately 20% lower IT operating costs and, on average, 15% greater efficiency in increasing business competitiveness.

When it came to specific industries, finance came out on top for Nordic organisations, maintaining its lead previously forged in the 2015 and 2017 surveys. The public sector continues to report the lowest strategic and operational maturity. Yet the gap is closing when it comes to traditionally ‘slower’ verticals, with manufacturing proving particularly effective. Whereas finance scored 6.0 in 2015 and 6.3 this time around, the manufacturing industry has leapt to 6.0 from 4.4.

The report also noted the importance of environmental factors in organisations’ initiatives. This is not entirely surprising given the temperate climate has enabled many data centre providers to set up shop in the Nordics. Approximately half of companies polled said they were already considering issues such as energy consumption or CO2 emission as part of their cloud strategy. Again less than surprisingly, mature cloud organisations were considerably further ahead on environmental initiatives than their immature brethren.

Despite the report’s figures – again ranked out of 10 – which showed Sweden and Finland comfortably ahead of Norway, according to Tieto’s head of cloud migration and automation, Timo Ahomaki, it is Finland that should be celebrating. Data sovereignty, Ahomaki argues, is an area which is ‘quite polarised’ in Sweden, with Finland’s more advanced cloud security meaning it is ‘at the forefront’ of the Nordic public sector.

Regular readers of this publication will be aware of the various initiatives which have taken place regarding the emerging data centre industry in the Nordics. As far back as 2015, CloudTech reported on a study from the Swedish government – which was later put into legislation – to give tax breaks for data centre providers. Last year, DigiPlex announced a project whereby wasted heat from its data centres would be used to warm up residential homes in Oslo.

You can read the full report here (email required).

Think of data as the new uranium rather than the new oil – and treat it like it’s toxic

In May 2017, The Economist famously ran with a front-page headline proclaiming that “The world’s most valuable resource is no longer oil, but data.” It focused on big tech’s collection and use of data and argued that the data economy demands a new approach to antitrust rules.

I agree with the idea that data is now arguably the world’s most valuable resource, but would suggest that it is more like uranium. It has power and energy, but too much of it can be potentially explosive. Indeed, thinking about data as if it were uranium might be a good way to approach data protection.

Handling data

You would not expect your staff to handle uranium without caution or without the right protective gear. Nobody treats nuclear fuels the way that Homer Simpson does! Likewise, you need to educate your staff to handle data with equal care and need to equip them with the tools that they need to do so. 

Numerous studies have found that the greatest data protection threat to a business is the one that walks out of the business at the end of each day – your staff. The insider threat, as it is known, outweighs all others.

If your staff were handling nuclear fuel, you’d expect them to do so with the utmost care. But with data, even after extensive education and training programmes, the temptation can be to take short cuts or overlook proper procedures. For this reason, ease of use (making it as easy to do the right thing as it is to do anything else) is as important in security terms as functionality.

The problem is that the cybersecurity arena is exceedingly fragmented, and we are typically expected to understand how to use a number of different tools.

Thankfully, organisations like Lenovo are focusing on exactly this challenge, bringing a selection of best-of-breed security tools from the likes of Intel and Microsoft together into a single integrated portfolio called ThinkShield and making it easy to use.

Unfortunately, the reality is that you can’t always trust users to know the right thing to do. Nor can you oversee their every move. But with ThinkShield, you not only get comprehensive and customisable end-to-end IT security that you can trust to significantly reduce the risk of being compromised, but it’s also in a package that is easy for users to understand and use. It means less business interruption for your staff and less work for your IT admins.

Data concentration

Much of the focus in The Economist was on how much data certain players were collecting and the risks that go with this. It argued that new antitrust rules were needed to address the concentration of data and of power in the hands of a few giant players. 

Again, this makes data far more like uranium than oil; after all, nuclear fuels are relatively safe in small quantities. It is only once you have a critical mass that it becomes potentially explosive.

In a recent interview, Edward Snowden suggested that GDPR had been a step in the right direction, but that the real threat came not from data protection, but from data concentration.

Elizabeth Warren’s threats to break up some of the tech giants may never happen, but further regulation in both the EU and the US is most likely and will focus on ensuring nuclear safety in the digital economy.

Cyber response

Anyone in the nuclear industry will be familiar with scenario planning and simulation exercises. They run regular drills to train staff on how to deal with catastrophes such as leakage of nuclear waste. Few firms realise that GDPR mandates the need for “a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.” In other words, if you don’t do scenario planning or run simulation exercises to test how you’d respond to a data breach, then you’re not GDPR compliant.

Obviously, most organisations have in-house information security teams, just as they have legal and PR teams, but when a breach does occur your in-house teams are going to need help – they’re unlikely to have the specialist skills to deal with everything. As a result, it is best to work with specialists – my latest venture, The Crisis Team, is a good example – who work alongside your internal teams, offering world-leading expertise. After all, when things get serious, you don’t want the B team.

It is also worth letting these experts support your scenario planning and simulation exercises. It will leverage their expertise and ensure that you develop a mutual understanding and are able to practise working together – something that will come in handy if or when the worst does occur.

Considering all of this, maybe treating your data as if it is toxic, and as if it were uranium, might be a good approach.