Slack launches Workflow Builder


Bobby Hellard

16 Oct, 2019

Slack has introduced Workflow Builder, a tool to create a customisable channel within a channel.

This, the company says, will remove productivity “roadblocks” by streamlining projects within Slack.

It’s been a busy year for the communications platform, with new productivity features launching every other month, from tighter security controls to user-friendly upgrades such as dark mode.

Here, the company is going after workflow productivity with a feature that enables users to get information from their team sent straight to them, instead of creating an extra, temporary channel.

“Raise your hand if you rely on other people and teams to get work done,” Slack rhetorically asked in its blog. “By our count, there’s a 70% chance you do. Coordinating projects with others requires getting the right information to the right people in real-time. Yet steps like making requests, asking for updates, and providing context to teammates are hardly instantaneous – and often halt progress altogether.”

Slack’s answer to this is Workflow Builder, a “visual tool” for users to automate routine functions with custom workflows – essentially a channel within a channel to collect specific requests for your projects.

Slack uses a new starter as an example: rather than tracking down the relevant people, whom the newcomer might not know yet, or hunting for specific documents relating to a project they’ve just joined, Workflow Builder provides a quick, accessible repository where they can get up to speed.

Alternatively, the organisation can set up a “welcome workflow” that allows the new person to fill out a form letting the rest of the team know about them, which is one way to cut out awkward getting-to-know-you chit-chat.

These custom forms are also a way of working out what your channel’s needs are. To gather everyone’s thoughts on team meetings, for example, each member can fill in the form and share their availability and preferences.

This is also useful for incident reports, recording issues with your company’s website for instance, where you let the team know something’s wrong and you create a shareable file of the issue for the IT department.

It’s also a quick feature to use, requiring just a minute and a handful of clicks (it took us only five to create our own workflow).

Amazon completes consumer database migration from Oracle to AWS

Amazon Web Services (AWS) claims it has fully migrated its consumer databases from Oracle to its own offerings – but don’t expect the mudslinging between the two companies to end just yet.

Jeff Barr, AWS evangelist, took to the Seattle giant’s official blog to confirm the last Oracle database had been turned off. AWS claims it has reduced its database costs by more than 60%, latency of consumer-facing applications was reduced by 40%, and database admin overhead went down by 70% moving to managed services.

“Over the years we realised that we were spending too much time managing and scaling thousands of legacy Oracle databases,” wrote Barr. “Instead of focusing on high-value differentiated work, our database administrators spent a lot of time simply keeping the lights on while transaction rates climbed and the overall amount of stored data mounted.”

This was by no means the only snarky remark. AWS went to the trouble of creating a video, complete with cheers, capturing the moment the final Oracle database was switched off, while the slide accompanying Barr’s blog, albeit somewhat lacking in detail, was captioned ‘bye bye Oracle’.

Regular watchers of AWS and Oracle keynotes will recall the various claims each company has made against the other. Only last month, Larry Ellison’s OpenWorld keynote in San Francisco, which touted the company’s autonomous, next-generation cloud, took aim at Amazon’s shared responsibility model – a concept Oracle says it is looking to eradicate.

Last November, AWS chief executive Andy Jassy noted that AWS had turned off its last Oracle data warehouse at the beginning of that month, and added that almost 90% of all databases had moved to cloud-based relational database Aurora and non-relational database DynamoDB.

The overall migration project was not limited to switching off databases. Barr added that employees who focused primarily on Oracle database admin work were retrained on AWS, as well as ‘cloud-based architectures’ and cloud security. “They now work with both internal and external customers in an advisory role, where they have an opportunity to share their first-hand experience with large-scale migration of mission-critical databases,” wrote Barr.

AWS did note that some third-party applications were ‘tightly bound’ to Oracle and were therefore not migrated.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

Commvault sounds warning for multi-cloud “new world order”


Keumars Afifi-Sabet

16 Oct, 2019

With multi-cloud and hybrid cloud environments on the rise, businesses need to approach data management differently than in the past, Commvault’s CEO Sanjay Mirchandani has claimed.

Specifically, this will involve avoiding data lock-in, addressing skill gaps and making information more portable while also, in some instances, doing more with less when it comes to implementing new technology.

Mirchandani, who only joined Commvault in February, used the company’s annual Go conference as an opportunity to outline his vision for the future.

During his keynote address, he highlighted the importance of offering customers the flexibility to deliver services to their native environments, whatever those may be.

Recent research has backed this premise up, with findings showing that 85% of organisations are now using multiple clouds in their businesses.

Drawing from his time as a chief information officer (CIO) at EMC, the Commvault boss also castigated point solutions, a term used in the industry to describe tools that are deployed to solve one specific business problem, saying he wants the company to move away from this.

“With the technological shifts that are happening, you need to help your businesses truly capitalise on that opportunity,” he said.

“Give them the freedom to move data in or out, anywhere they want, on-prem or off-prem, any kind of cloud, any kind of application; traditional or modern. You need that flexibility, that choice, and that portability.”

“If I could give you one piece of advice, don’t get taken in by shiny point solutions that promise you the world, because they’re a mirage. They capture your attention, they seduce you in some way, and then they won’t get you to Nirvana. They’re going to come up short.”

He added that businesses today need services that are truly comprehensive and handle a multitude of scenarios, spanning everything from central storage to edge computing.

Moving forwards, Commvault’s CEO said the company will look to address a number of key areas, from how it approaches the cloud in a fundamental way, to reducing the ‘data chaos’ created by the tsunami of data that businesses are collecting.

Mirchandani’s long-term vision for the company centres on finding a way to build platforms to service customers that work around the concept of decoupling data from applications and infrastructure.

It’s a long-term aim that will involve unifying data management with data storage to work seamlessly on a single platform, largely by integrating technology from the recently acquired Hedvig.

From a branding perspective, meanwhile, granting Metallic its own identity, and retaining Hedvig’s previous one, instead of swallowing both into the wider Commvault portfolio, has been a deliberate choice.

The firm has suggested separating its branding would allow for the two products to run with a sense of independence akin to that of a start-up, with Metallic, for instance, growing from within the company.

However, there’s also an awareness that the Commvault brand carries connotations from the previous era of leadership, with the company keen to alter this from a messaging perspective.

One criticism the company has faced in the past, for instance, is that its tech was too difficult to use. Mirchandani said that, thanks to recent changes to the platform, he now considers this a “myth” he is striving to bust.

“The one [point] I want to spend a minute on, and I want you to truly give us a chance on this one, is debunking the myth that we’re hard to use,” he said in his keynote.

“We’re a sophisticated product that does a lot of things for our customers and over the years we’ve given you more and more and more technology – but we’ve also taken a step back and heard your feedback.”

Commvault, however, has more work to do in this area, according to UK-based partner Softcat, with prospective customers also anxious the firm’s tech is too costly.

Resellers, it suggested, would benefit greatly from guidance on how to handle these conversations with customers, as well as a major marketing effort to effectively eliminate that sales barrier altogether.

Moving from DevOps to modern ops: Why there is no room for silos when it comes to cloud security

It started with DevOps. Then there was NetOps. Now SecOps. Or is it DevSecOps? Or maybe SecDevOps?

Whatever you decide to call it, too often the end result is little more than the same old silos with shiny new names. We've become so focused on "what do we call these folks" that we sometimes forget "what is it we're trying to accomplish".

Shakespeare said that a rose would smell as sweet by any other name. Let's apply that today to the number of factions rising in the operations game. Changing your name does nothing if you don't change your core behaviours and practices.

Back when cloud first rose – pun intended – plenty of pundits dismissed enterprise efforts to build private (on-premises) clouds because those efforts didn't fit the precise definition the pundits wanted to associate with cloud. They ignored that the outcome, not conformance to someone else's pedantic definition, was the measure of success. Those enterprises sought agility, efficiency and speed by changing the way infrastructure was provisioned, configured, and managed; they changed behaviours and practices through the use of technology.

Today the terminology wars are focused on X-Ops and what we should call the latest arrival, security.

I know I've used the terms, and sometimes I use them all at the same time. But perhaps what we need is fewer distinctions. Perhaps I should just say you're either adopting "modern ops" in terms of behaviours and practices or you're remaining "traditional ops" and that's all there is to it.

Modern ops employ technology like cloud and automation to build pipelines that codify processes to speed delivery and deployment of applications.

And they do it by changing behaviours and practices. They are collaborative and communicative. They use technology to modernise and optimise decades-old processes that are impeding delivery and deployment. They work together, not in siloed X-Ops teams, to achieve their goal of faster, more frequent releases that deliver value to the business and delight consumers.

Focusing on what to call "security" as it gets on board with modern ops can be detrimental to the basic premise that delivery and deployment can only succeed at speed with a collaborative approach. Slapping new labels on a newly focused team just builds different silos; it doesn't smash them and open up the lines of communication required to operate at speed and scale.

It also unintentionally gives permission to other, non-security ops to abdicate security responsibilities to the <SecDevOps | DevSecOps> team. Because it's in their name, right?

That's an increasingly bad idea given that application security is a stack and thus requires a full stack to implement the right protections.  You need network security and transport security and you definitely need application security. The attack surface for an app includes all seven layers and, increasingly, the stack comprising its operational environment. There is no room for silos when it comes to security.

The focus of IT as it moves through its digital transformation should be to modernise ops – from the technology to the teams that use it to innovate and deliver value to the business. Modern ops are not consumed by concern for titles; they are passionate about producing results. Modern ops work together, communicate freely, and collaborate across concerns to build out an efficient, adaptive delivery and deployment pipeline.

That will take network, security, infrastructure, storage, and development expertise working together.

In the network, we use labels to tag traffic and apply policies that control what devices can talk to which infrastructure and applications. In container clusters we use labels to isolate and restrict, to constrain and to disallow.

Labels in organisations can have the same effect.

So maybe it would be better if we just said you are either modern ops or traditional ops. And that some are in a transitional state between the two. Let's stop spending so many cycles on what to call each other that we miss the opportunity to create a collaborative environment in which to deliver and deploy apps faster, more frequently, and most of all, securely.


How AI developers are driving new demand for IT vendor services

Preparing for the adoption of new technologies is challenging for many large enterprise organisations. That's why savvy CIOs and CTOs seek information and guidance from vendors that can assist them on the journey to achieve digital business transformation. Meanwhile, investment in artificial intelligence (AI) systems and services will continue on a high-growth trajectory.

According to the latest worldwide market study by International Data Corporation (IDC), spending on AI systems will reach $97.9 billion in 2023 – that's more than two and a half times the $37.5 billion that will be spent in 2019. The compound annual growth rate (CAGR) for AI in the 2018-2023 forecast period will be 28.4 percent.
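As a rough sanity check on IDC's figures, the growth rate implied by the two spending totals can be computed with the standard CAGR formula. The sketch below uses only the numbers quoted above; note that IDC's stated 28.4 percent CAGR is measured over the 2018-2023 period, while the two dollar figures span 2019-2023, so the implied rate comes out slightly lower:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

# IDC's quoted figures: $37.5bn in 2019 growing to $97.9bn in 2023 (four years).
growth = cagr(37.5, 97.9, 4)
print(f"Implied CAGR 2019-2023: {growth:.1%}")  # roughly 27%, vs 28.4% over 2018-2023
```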

Artificial intelligence market development

"The AI market continues to grow at a steady rate in 2019 and we expect this momentum to carry forward," said David Schubmehl, research director at IDC. "The use of artificial intelligence and machine learning (ML) is occurring in a wide range of solutions and applications from ERP and manufacturing software to content management, collaboration, and user productivity."

Artificial intelligence and machine learning are top of mind for most organisations today, and IDC expects that AI will be the disrupting influence changing entire industries over the next decade.

Spending on AI systems will be led by the retail and banking industries, each of which will invest more than $5 billion in 2019. Nearly half of the retail spending will go toward automated customer service agents and expert shopping advisors & product recommendation systems. The banking industry will focus its investments on automated threat intelligence and prevention systems and fraud analysis and investigation.

Other industries that will make significant investments in AI systems throughout the forecast include discrete manufacturing, process manufacturing, healthcare, and professional services. The fastest spending growth will come from the media industry and federal or central governments with five-year CAGRs of 33.7 percent and 33.6 percent respectively.

Investments in AI systems continue to be driven by a wide range of use cases. The three largest use cases — automated customer service agents, automated threat intelligence and prevention systems, and sales process recommendation and automation — will deliver 25 percent of all spending in 2019. The next six use cases will provide an additional 35 percent of overall spending this year.

The use cases that will see the fastest spending growth over the 2018-2023 forecast period are automated human resources (43.3 percent CAGR) and pharmaceutical research and development (36.7 percent CAGR). However, eight other use cases will have spending growth with five-year CAGRs greater than 30 percent.

Decision-makers across all industries are now grappling with the question of how to effectively proceed with their AI journey.  That's why the largest share of technology spending in 2019 will go toward services, primarily IT services, as firms seek outside expertise to design and implement their AI projects.

Hardware spending will be somewhat larger than software spending in 2019 as firms build out their AI infrastructure, but purchases of AI software and AI software platforms will overtake hardware by the end of the forecast period with software spending seeing a 36.7 percent CAGR.

Outlook for AI applications development growth

On a geographic basis, the United States will deliver more than 50 percent of all AI applications development spending throughout the forecast period, led by the retail and banking industries. Western Europe will be the second-largest geographic region, led by banking and discrete manufacturing.

China will be the third-largest region for AI spending with retail, state or local government, and professional services vying for the top position. The strongest spending growth over the five-year forecast period will be in Japan (45.3 percent CAGR) and China (44.9 percent CAGR).


Commvault launches ‘Metallic’ SaaS backup suite


Keumars Afifi-Sabet

15 Oct, 2019

Backup specialist Commvault has lifted the lid on a spin-off software as a service (SaaS) venture that allows customers to safeguard their files and application data, whether on-prem or cloud-based.

Launched at the firm’s annual Commvault GO conference, the Metallic portfolio is geared towards addressing a growing demand among Commvault’s customers for SaaS backup and recovery services.

Metallic will be pitched at large businesses of between 500 and 2,500 employees and is set to launch with three strands that span the breadth of SaaS-based data management, including one service devoted entirely to Microsoft Office 365.

Its launch is also significant in the way Commvault has pointedly decided to assign the platform a brand in and of itself, rather than including this under the Commvault umbrella.

This, according to the firm’s CEO Sanjay Mirchandani, is because Metallic signifies a divergence from how Commvault has traditionally developed and launched a product.

“Part of what Metallic represented for us as a company is a new way of building,” said Mirchandani. “We funded it and created a startup within the company, they could tap into anything they wanted to within Commvault or not.

“Choose the go-to-market model, choose the partners they wanted to work with, give them the freedom to create something that is world-class and designed to solve real problems for customers. And they had the best of both worlds.”

The three strands comprising Metallic include Core, Office 365 and Endpoint services, each aimed at varying elements of protecting data within a large organisation.

Core, for instance, centres on the ‘essentials’ of data spanning from VMware data protection to Microsoft SQL database backup. By contrast, Endpoint backup and recovery focuses on protecting data stored locally on machines within an organisation.

The Office 365 provision, meanwhile, is dedicated to protecting an organisation’s work within the productivity suite of apps and services to safeguard against potential issues like accidental deletion and corruption.

Available only in the US at first, the services can be bought through monthly or annual subscriptions, while prospective customers can sign up for a free trial through the platform’s dedicated website.

Commvault decided to build the Metallic brand, Mirchandani added, after extensive consultation with partners and its customers. Its developers decided the best approach to building Metallic would be to adopt the viewpoint of an organisation’s chief information officer (CIO) and consider their backup needs.

Why the future of data security in the cloud is programmable

It’s the way software used to be purchased, and often still is. A CEO, or GM, or line-of-business owner calls into IT, and the security and compliance teams, to let them know that they are purchasing a new piece of software to drive innovation in how they deliver their products or services. Because the software needs to be customised, integrated and controlled in the company’s on-prem or cloud environment, the IT team needs to deploy it and the security team needs to secure it.

The problem is that IT, security, and compliance are already behind. As the “Defenders” of the business, they must now apply multiple other third-party products to that application in order to gain fine-grained control over who accesses it and what data they can access. While a growing body of regulations states that security and privacy must be implemented “by design,” they didn’t design the application that the “Builders” delivered. At this point, everything they do is fundamentally an afterthought.

The conundrum of the defender

The job of the Defender is a difficult one, because security and privacy as an afterthought creates both complexity and vulnerability. The complexity comes especially from security products needing to be customised in order to function in lockstep with the application whose data they are protecting. The larger and more complex the application to protect, the more you have to invest to configure and maintain the products that secure it.

Vulnerabilities arise because between the application and the security products meant to protect it, there are seams—gaps in communication, coordination, and capability that occur naturally when two systems that are constantly evolving occupy two different infrastructure spaces. It is those seams that endlessly produce new exposure every day.

More vulnerabilities lead to more security products, which lead to more complexity, and you can see where this is going. Large enterprises own between 50 and 70 security products on average, and lack the personnel and resources to marshal those products against the sometimes hundreds of thousands of open vulnerabilities this patchwork has created.

Where this is reflected in the business is that spending on cybersecurity increases every year, but that spending appears to be doing nothing to stem the tide of data breaches and privacy exposures, which are expanding at an even faster rate.

Enter the builders

The perspective of the developer, the Builders of applications, has changed. More and more, requirements around managing performance, reliability, and scalability have migrated into development processes as dev-ops and cloud infrastructure have gone mainstream. Security has followed suit, as progressive developers and dev-ops teams have adopted the mantra that the secret to fighting this battle is to get more involved in security upfront.

The initial steps in this movement have been focused on decreasing the coding of vulnerabilities, meaning that tools have been introduced into the application assembly line that analyse code for security weaknesses and prompt developers to address those weaknesses before applications get released.

This is a huge step, as those code vulnerabilities, if not caught ahead of time, are what lead to the dreaded “security patch.” Patches are software afterthoughts which IT often finds very painful to apply, as it can mean taking a system down for maintenance or other contortions that are highly disruptive to the business.

It makes sense to write more secure code, because coding is what developers do. But many developers are doing more. Now tools are becoming available that developers can embed into applications that give security, compliance, and risk-management visibility into and control over the flows of data.

These tools are not an afterthought, they are part of the application—a forethought. Most of the complexity that an IT-delivered security product introduces is avoided because the utility of the application is delivered along with security, and everything is on the same page and in the same context.

More powerfully, the seams between the application and its security products, which fuel the runaway train of vulnerabilities, disappear. At ALTR, we see that when an application developed using the programmable model is delivered, it ships with tools to manage data in a changing world of security, compliance, and risk. Data security and governance have been “programmed in”.
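To make the "programmed in" idea concrete, here is a minimal sketch, purely illustrative rather than any vendor's actual product, of a policy check wired directly into an application's data-access path. The policy table, roles, and field names are all invented:

```python
from functools import wraps

# Hypothetical policy table: which roles may read which fields.
FIELD_POLICY = {
    "email": {"support", "admin"},
    "ssn": {"admin"},
    "name": {"support", "admin", "analyst"},
}

def governed(func):
    """Wrap a data-access function so every call is policy-checked and audited."""
    @wraps(func)
    def wrapper(role, fields):
        allowed = [f for f in fields if role in FIELD_POLICY.get(f, set())]
        denied = [f for f in fields if f not in allowed]
        # In a real system this audit record would flow to the security team.
        print(f"audit: role={role} allowed={allowed} denied={denied}")
        return func(role, allowed)
    return wrapper

@governed
def fetch_customer(role, fields):
    record = {"name": "Ada", "email": "ada@example.com", "ssn": "000-00-0000"}
    return {f: record[f] for f in fields}

print(fetch_customer("support", ["name", "email", "ssn"]))
```

Because the policy check and audit trail live inside the application itself, the Defenders get visibility and control without bolting a separate product onto the outside, which is the seam-free property described above.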

Programmable as cloud-native

With the ability to monitor data access, govern it, and selectively protect data even from developers themselves wired into applications, there is another door that swings wide open: application portability.

Many companies, from traditional manufacturers all the way to software companies themselves, are looking for ways to leverage the economics and flexibility of cloud infrastructure. For most of these companies, the top two concerns about moving to infrastructure they don’t control are security and compliance.

But when a development team wires in tools to allow for the control of data regardless of where the application is deployed, the business is free to determine the best infrastructure for the application in question based on performance, cost, reliability and other IT priorities. Cloud options from platform-as-a-service all the way to serverless architecture, where IT doesn’t have to maintain any of the infrastructure stack, are all on the table.

Through this lens the economic benefits of programmable data security come completely into focus. Adopting this approach, by way of example, ALTR has been able to help a business optimise its digital footprint based on delivery of technology services, not on securing them. There are also additional savings in the consolidation of security products, because many existing products are tied to the infrastructure in which they are deployed, from physical network appliances to cloud-provider-specific tools.

A programmable future

The world of connected computing that we find ourselves in today is a result of more than 20 years of focus on speed, efficiency, and convenience. While developers and the security teams supporting them have gotten serious about data security, the fundamental approach, and the legacy applianceware that supports that approach, is often still stuck in the world of the afterthought.

By considering data security and implementing tools to provide it at the very conception point of the application that creates the data, we finally accomplish what we needed all along: security and privacy by design. When Builders and Defenders work closely together, we see new levels of data protection implemented in organisations around the world.


Kubernetes and multi-cloud: How to monitor your modern applications effectively

Many companies are moving to a new way of delivering services to customers based on microservices. Rather than building huge, monolithic apps, they compose small, interconnected application components. These modern applications tend to be easier to update and expand than traditional ones, as replacement services can be slotted in using APIs rather than requiring full rewrites.

To support this design approach, developers are making more use of cloud and containers. According to the Continuous Intelligence Report for 2019, the percentage of companies adopting containers has grown to 30 percent. Cloud services can host anywhere from tens to thousands of containers depending on how large the application needs to be, and the number of containers can be raised or lowered with demand. This elasticity makes containers complex to manage, and for companies that run their critical applications on these new services, managing all this infrastructure is a huge challenge.

To administer this new infrastructure, companies are adopting Kubernetes as a way to orchestrate their IT. In the CI Report, the percentage of companies adopting Kubernetes ranged from 20 percent for businesses running on AWS alone, through to 59 percent for those running on a combination of AWS and Google Cloud Platform.

For companies running on AWS, GCP and Azure, adoption of Kubernetes rose to more than 80 percent. For multi-cloud environments, Kubernetes helps streamline operations and respond more quickly to changes in demand.

Monitoring Kubernetes

So far, Kubernetes has helped companies turn the idea of multi-cloud into a reality. By being able to run the same container images across multiple cloud platforms, IT teams can maintain control over their IT and keep leverage when it comes to pricing.

However, Kubernetes is still a developing technology in its own right. While it provides a strong foundation for developers to build and orchestrate their applications’ infrastructure, there are some gaps when it comes to maintaining, monitoring and managing Kubernetes itself.

Kubernetes pods, nodes and even whole clusters can all be destroyed and rebuilt quickly in response to changes in demand. Rather than looking at infrastructure, effectively monitoring what is running in Kubernetes means working at the application level, focusing on each Service and Deployment abstraction. Monitoring therefore has to align with the way Kubernetes is organised, as opposed to trying to fit Kubernetes into a previous model.
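A minimal sketch of what Deployment-level aggregation means in practice: because pods come and go, metrics are rolled up by the labels that identify the Deployment rather than by pod or node name. The pod names, labels and numbers below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical pod-level samples; in Kubernetes, each pod carries the labels
# set by its Deployment's selector, and those labels travel with the metric.
samples = [
    {"pod": "web-7d9f-abc12", "labels": {"app": "web"}, "cpu_m": 120},
    {"pod": "web-7d9f-xyz34", "labels": {"app": "web"}, "cpu_m": 95},
    {"pod": "api-5c4b-qrs56", "labels": {"app": "api"}, "cpu_m": 210},
]

def rollup_by_app(samples):
    """Aggregate CPU (millicores) by the 'app' label, i.e. per Deployment,
    so the view survives individual pods being destroyed and rebuilt."""
    totals = defaultdict(int)
    for s in samples:
        totals[s["labels"]["app"]] += s["cpu_m"]
    return dict(totals)

print(rollup_by_app(samples))  # {'web': 215, 'api': 210}
```

The per-pod names in the output no longer matter; if a pod is replaced, its successor carries the same `app` label and the Deployment-level picture stays continuous.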

It is also important to understand the different forms of data that might be captured. Log data from an application component can provide insight into what processes are taking place, while metric data on application performance can provide insight into the overall experience that an application is delivering.

Joining up log and metric data should give a complete picture of the application, but this task is not as easy as it sounds. It can be near impossible to connect the dots between metrics on a node to logs from a pod in that node. This is because the metadata tagging of the data being collected is not consistent. A metric might be tagged with the pod and cluster it was collected from, while a log might be categorised using a different naming convention.

To get a true picture of what is taking place in an application running on Kubernetes involves looking at all the data being created and correlating this information together. Using metadata from the application alongside the logs and metrics information coming in, a consistent and coherent view of what is taking place across all the containers being used can be established. This involves collecting all the metadata together and enriching it so that consistent tagging and correlation can be carried out.
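One way to bridge the tag-naming mismatch described above is to map each source's metadata keys onto a canonical schema before correlating. The records and key names below are invented to illustrate the idea:

```python
# Hypothetical records: a metric and a log line describing the same pod,
# but tagged with different key names by different collection tools.
metric = {"Pod": "checkout-6f9c-ab1", "Cluster": "prod-eu", "cpu_m": 340}
log = {"pod_name": "checkout-6f9c-ab1", "k8s_cluster": "prod-eu",
       "message": "timeout calling payment service"}

# Canonical names for each source-specific key.
KEY_ALIASES = {"Pod": "pod", "pod_name": "pod",
               "Cluster": "cluster", "k8s_cluster": "cluster"}

def normalise(record):
    """Rewrite a record's keys onto the canonical schema, leaving others as-is."""
    return {KEY_ALIASES.get(k, k): v for k, v in record.items()}

m, l = normalise(metric), normalise(log)
if (m["pod"], m["cluster"]) == (l["pod"], l["cluster"]):
    print(f"pod {m['pod']}: cpu={m['cpu_m']}m, last log: {l['message']}")
```

Once both records share the same `pod` and `cluster` keys, joining a CPU spike to the log lines from the same pod becomes a simple equality match instead of guesswork.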

Bringing all the data together

Looking at Kubernetes, it’s easy to see why the number of companies using it is on the rise. However, developers currently may have multiple different tools in place to take data out of container instances and bring that information back for analysis and monitoring, which can be hard to scale. For log data collection, Fluent Bit processes and forwards data from containers; similarly, Fluentd provides log and event data collection and organisation. The open source project Prometheus provides metrics collection for container instances, while Falco provides a way to audit data from containers for security and compliance purposes.
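To give a feel for the metric data these collectors pass around, here is a deliberately simplified reader for one sample line of the Prometheus text exposition format. The real format (and real parsers) also handle escaping, timestamps and comment lines; this sketch covers only the common `name{label="value"} number` shape.

```python
# Simplified, illustrative parser for a single Prometheus-style sample line.
import re

SAMPLE_RE = re.compile(r'^(\w+)(?:\{(.*)\})?\s+(\S+)$')
LABEL_RE = re.compile(r'(\w+)="([^"]*)"')


def parse_sample(line: str):
    """Split a sample into (metric name, label dict, numeric value)."""
    m = SAMPLE_RE.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognised sample: {line!r}")
    name, labels, value = m.groups()
    return name, dict(LABEL_RE.findall(labels or "")), float(value)


sample = 'http_requests_total{method="get",pod="checkout-1"} 1027'
name, labels, value = parse_sample(sample)
```

The labels on each sample are the same metadata tags that later need normalising and correlating with log data.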

Each of these tools can provide an element of observability for container instances, but ideally they should be combined to get a fuller picture of how containers are operating over time. Similarly, automating the process for gathering and correlating data across multiple tools can help make this information easier to use and digest.

Bringing all these different sets of data together not only provides a better picture of what is taking place in a specific container or pod; the merged data can also be used alongside other sources of information. By gathering all this data in real time, as it is created, you can see how your company is performing over time. This continuous stream of intelligence can be used to see how decisions affect both IT infrastructure and business performance.


Cloud too expensive for the “vast majority”, claims Zuckerberg


Bobby Hellard

11 Oct, 2019

Mark Zuckerberg has questioned the cost of cloud computing and storage during a discussion about bio sequencing.

The Facebook founder specifically referenced AWS, joking “let’s call up Jeff and talk about this”.

Zuckerberg and his wife Priscilla Chan set up the Chan Zuckerberg Initiative (CZI) in 2015 to find ways of using technology to advance health, social and scientific research.

In a chat between the research centre’s co-presidents Dr Joseph DeRisi and Dr Stephen Quake, moderated by Zuckerberg live on YouTube, the group argued that progress in initiatives is often blocked by the exorbitant cost of cloud subscriptions.

“In our bio board meetings, one of the things we talk about is the cost of the compute, and our AWS bill, for example, is one of the specific points,” Zuckerberg said. “Let’s call up Jeff and talk about this.”

“It’s interesting, the bottleneck for progress, in medical research at this point, a lot of the cost for it, is on compute and the data side and not strictly on the wet labs or how long it takes to turn around experiments.”

The CZI is part-funded by billions of dollars from Facebook and also investment from LinkedIn co-founder Reid Hoffman.

However, Dr Quake noted that most other organisations and research labs around the world are unable to secure this level of funding, and are therefore hamstrung by the price of cloud.

“This is no more apparent than in the developing world or low-income resource settings,” he said. “The cost of the sequencing and the lab work has gotten to the point where you can do this almost anywhere in the world. It’s gotten that cheap.”

“The compute to be able to analyse that data is unfortunately not available to the vast majority of the people that do that. It’s very often the case that you’ll go to one of these low-income resource settings, they’ll have a sequencer but it’s collecting dust because they can’t compute. Even if they can access the cloud, they can’t afford it.”

According to Synergy Research Group, the cloud market is heading for a worldwide revenue run rate of $100 billion per year. The big providers have capitalised on digital transformation and the need to store and analyse data, with AWS consistently at the top of the market.

Companies like Microsoft, Google, IBM, Alibaba and Oracle are all competing for the rest of the market. This is no more apparent than in the race to provide the Pentagon with cloud computing services: Amazon’s cloud arm was set to be the winner of the $10 billion JEDI contract, but after complaints from Oracle and IBM, the decision was put on hold.

Google is able to access sensitive G Suite customer data, former employee warns


Keumars Afifi-Sabet

11 Oct, 2019

Employees whose organisations deploy G Suite have been urged to stay mindful of keeping sensitive data on the productivity suite, following a report that suggests Google and IT admins have extensive access to private files.

Google itself, as well as administrators within a business, has vast access to the files stored within G Suite and can monitor staff activity, according to a former Google employee. This data, which unlike some other Google services is not protected by end-to-end encryption, can even be shared with law enforcement on request.

This level of intrusion is necessary to perform essential security functions for business users, such as monitoring accounts for attempted access, ex-staffer Martin Shelton claimed in his post, but it in turn demands enormous visibility into users’ accounts.

Organisations using G Suite Business or G Suite Enterprise even give administrators powerful tools to monitor and track employees’ activity, and to retain this information in Google Vault.

“In our ideal world, Google would provide end-to-end encrypted G Suite services, allowing media and civil society organisations to collaborate on their work in a secure and private environment whenever possible,” Shelton said.

“For now we should consider when to keep our most sensitive data off of G Suite in favour of an end-to-end encrypted alternative, local storage, or off of a computer altogether.”

Of particular concern is a sense of uncertainty over who within Google has access to user data kept on its servers. Shelton added that Google claims to have protections in place, but that it’s not known how many employees are able to clear the bars set by the company.

These protections include authorised key card access, approval from both an employee’s manager and the data centre director, and logging and auditing of all instances of approved access.

G Suite administrators, meanwhile, can see a “remarkable level” of user data within an organisation, given the powerful tools offered by Google. G Suite Enterprise offers the most access into users’ activities, while G Suite Business allows slightly more restricted visibility.

These tools include being able to search through Gmail and Google Drive for content as well as metadata including the subject lines and recipients of emails. Administrators can even create rules for which data is logged and retained, depending on how they wish to configure their G Suite.

Audit logs, for example, let IT admins see who has looked at and modified documents, while the use of apps like Calendar, Drive and Slides can be monitored on both desktops and mobile devices.

Shelton has recommended that employees audit their own use of G Suite and be mindful of any sensitive data that’s either kept in Drive or discussed with others via Gmail.

The former employee has also suggested users get details from their G Suite administrators pertaining to the level of visibility they have over employees within their organisation, including which rules they’ve enabled as part of Google Vault.

Concerns over privacy within G Suite have emerged in the past after accusations were made in 2018 that third-party developers were able to view users’ Gmail messages.

Google said, at the time, that such a practice was normal across the industry and users had already granted permission as and when this occurred.