Commvault sounds warning for multi-cloud “new world order”


Keumars Afifi-Sabet

16 Oct, 2019

With multi-cloud and hybrid cloud environments on the rise, businesses need to approach data management differently than in the past, Commvault’s CEO Sanjay Mirchandani has claimed.

Specifically, this will involve avoiding data lock-in, addressing skill gaps and making information more portable while also, in some instances, doing more with less when it comes to implementing new technology.

Mirchandani, who only joined Commvault in February, used the company’s annual Commvault GO conference as an opportunity to outline his vision for the future.

During his keynote address, he highlighted the importance of offering customers the flexibility to deliver services to their native environments, whatever those may be.

Recent research has backed this premise up, with findings showing that 85% of organisations are now using multiple clouds in their businesses.

Drawing from his time as a chief information officer (CIO) at EMC, the Commvault boss also castigated point solutions, an industry term for tools deployed to solve one specific business problem, saying he wants the company to move away from them.

“With the technological shifts that are happening, you need to help your businesses truly capitalise on that opportunity,” he said.

“Give them the freedom to move data in or out, anywhere they want, on-prem or off-prem, any kind of cloud, any kind of application; traditional or modern. You need that flexibility, that choice, and that portability.”

“If I could give you one piece of advice, don’t get taken in by shiny point solutions that promise you the world, because they’re a mirage. They capture your attention, they seduce you in some way, and then they won’t get you to Nirvana. They’re going to come up short.”

He added that businesses today need services that are truly comprehensive and can handle a multitude of scenarios, spanning everything from central storage to edge computing.

Moving forwards, Commvault’s CEO said the company will look to address a number of key areas, from fundamentally rethinking how it approaches the cloud to reducing the ‘data chaos’ created by the tsunami of data that businesses are collecting.

Mirchandani’s long-term vision for the company centres on building platforms for customers around the concept of decoupling data from applications and infrastructure.

It’s a long-term aim that will involve unifying data management with data storage to work seamlessly on a single platform, largely by integrating technology from the recently acquired Hedvig.

From a branding perspective, meanwhile, granting Metallic its own identity, and retaining Hedvig’s existing one, instead of swallowing both into the wider Commvault portfolio, has been a deliberate choice.

The firm has suggested that separate branding allows the two products to run with a start-up-like sense of independence, with Metallic, for instance, having grown from within the company.

However, there’s also an awareness that the Commvault brand carries connotations from the previous era of leadership, with the company keen to alter this from a messaging perspective.

One criticism the company has faced in the past, for instance, is that its technology was too difficult to use. Following recent changes to the platform, Mirchandani now considers this a “myth” that he’s striving to bust.

“The one [point] I want to spend a minute on, and I want you to truly give us a chance on this one, is debunking the myth that we’re hard to use,” he said in his keynote.

“We’re a sophisticated product that does a lot of things for our customers and over the years we’ve given you more and more and more technology – but we’ve also taken a step back and heard your feedback.”

Commvault, however, has more work to do in this area, according to UK-based partner Softcat, with prospective customers also anxious that the firm’s tech is too costly.

Resellers would benefit greatly from guidance on how to handle these conversations with customers, as well as from a major marketing effort to eliminate that sales barrier altogether.

Moving from DevOps to modern ops: Why there is no room for silos when it comes to cloud security

It started with DevOps. Then there was NetOps. Now SecOps. Or is it DevSecOps? Or maybe SecDevOps?

Whatever you decide to call it, too often the end result is little more than the same old silos with shiny new names. We've become so focused on "what do we call these folks" that we sometimes forget "what is it we're trying to accomplish".

Shakespeare said that a rose would smell as sweet by any other name. Let's apply that today to the number of factions rising in the operations game. Changing your name does nothing if you don't change your core behaviours and practices.

Back when cloud first rose – pun intended – there were plenty of pundits who dismissed enterprise efforts to build private (on-premises) clouds because they didn't fit the precise definition the pundits wanted to associate with cloud. They ignored that the outcome was the measure of success, not measuring up to someone else's pedantic definition. Those enterprises sought agility, efficiency and speed by changing the way infrastructure was provisioned, configured, and managed. They changed behaviours and practices through the use of technology.

Today the terminology wars are focused on X-Ops and what we should call the latest arrival, security.

I know I've used the terms, and sometimes I use them all at the same time. But perhaps what we need is fewer distinctions. Perhaps I should just say you're either adopting "modern ops" in terms of behaviours and practices or you're remaining "traditional ops" and that's all there is to it.

Modern ops employ technology like cloud and automation to build pipelines that codify processes to speed delivery and deployment of applications.

And they do it by changing behaviours and practices. They are collaborative and communicative. They use technology to modernise and optimise decades-old processes that are impeding delivery and deployment. They work together, not in siloed X-Ops teams, to achieve their goal of faster, more frequent releases that deliver value to the business and delight consumers.

Focusing on what to call "security" as they get on board with modern ops can be detrimental to the basic premise that delivery and deployment can only succeed at speed with a collaborative approach. Slapping new labels on a newly focused team just builds different silos; it doesn't smash them and open up the lines of communication that are required to operate at speed and scale.

It also unintentionally gives permission to other, non-security ops to abdicate security responsibilities to the <SecDevOps | DevSecOps> team. Because it's in their name, right?

That's an increasingly bad idea given that application security is a stack and thus requires a full stack to implement the right protections. You need network security and transport security and you definitely need application security. The attack surface for an app includes all seven layers and, increasingly, the stack comprising its operational environment. There is no room for silos when it comes to security.

The focus of IT as it moves through its digital transformation should be to modernise ops – from the technology to the teams that use it to innovate and deliver value to the business. Modern ops are not consumed by concern for titles; they are passionate about producing results. Modern ops work together, communicate freely, and collaborate across concerns to build out an efficient, adaptive delivery and deployment pipeline.

That will take network, security, infrastructure, storage, and development expertise working together.

In the network, we use labels to tag traffic and apply policies that control what devices can talk to which infrastructure and applications. In container clusters we use labels to isolate and restrict, to constrain and to disallow.
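
To make the container half of that concrete, here is a minimal sketch using the official Kubernetes Python client, assuming a reachable cluster and a local kubeconfig (the "app=web" label is a hypothetical example). Label selectors are the same mechanism NetworkPolicies use to decide which pods a rule isolates or restricts.

```python
# Minimal sketch: selecting pods by label. Assumes a reachable cluster
# and a local kubeconfig; the "app=web" label is hypothetical.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config
v1 = client.CoreV1Api()

# List only the pods carrying the label, i.e. the subset a policy would scope to.
web_pods = v1.list_namespaced_pod(namespace="default", label_selector="app=web")
for pod in web_pods.items:
    print(pod.metadata.name, pod.metadata.labels)
```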

Labels in organisations can have the same effect.

So maybe it would be better if we just said you are either modern ops or traditional ops. And that some are in a transitional state between the two. Let's stop spending so many cycles on what to call each other that we miss the opportunity to create a collaborative environment in which to deliver and deploy apps faster, more frequently, and most of all, securely.


How AI developers are driving new demand for IT vendor services

Preparing for the adoption of new technologies is challenging for many large enterprise organisations. That's why savvy CIOs and CTOs seek information and guidance from vendors that can assist them on the journey to achieve digital business transformation. Meanwhile, investment in artificial intelligence (AI) systems and services will continue on a high-growth trajectory.

According to the latest worldwide market study by International Data Corporation (IDC), spending on AI systems will reach $97.9 billion in 2023 – that's more than two and a half times the $37.5 billion that will be spent in 2019. The compound annual growth rate (CAGR) for AI in the 2018-2023 forecast period will be 28.4 percent.
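
For context, a CAGR comes from the standard compound-growth formula; the quick sketch below applies it to the two figures quoted above. Note that IDC's 28.4 percent covers 2018-2023 and rests on a 2018 base-year spend that isn't quoted here, so the rate implied by the 2019 and 2023 figures alone comes out slightly lower.

```python
# Sanity-check sketch: standard CAGR formula applied to the two spend
# figures quoted above. IDC's 28.4% covers 2018-2023 from a 2018 base
# year that is not given in the article.
spend_2019 = 37.5      # $ billion
spend_2023 = 97.9      # $ billion
periods = 2023 - 2019  # four compounding years

cagr = (spend_2023 / spend_2019) ** (1 / periods) - 1
print(f"Implied 2019-2023 CAGR: {cagr:.1%}")  # roughly 27.1%
```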

Artificial intelligence market development

"The AI market continues to grow at a steady rate in 2019 and we expect this momentum to carry forward," said David Schubmehl, research director at IDC. "The use of artificial intelligence and machine learning (ML) is occurring in a wide range of solutions and applications from ERP and manufacturing software to content management, collaboration, and user productivity."

Artificial intelligence and machine learning are top of mind for most organisations today, and IDC expects that AI will be the disrupting influence changing entire industries over the next decade.

Spending on AI systems will be led by the retail and banking industries, each of which will invest more than $5 billion in 2019. Nearly half of the retail spending will go toward automated customer service agents and expert shopping advisors & product recommendation systems. The banking industry will focus its investments on automated threat intelligence and prevention systems and fraud analysis and investigation.

Other industries that will make significant investments in AI systems throughout the forecast include discrete manufacturing, process manufacturing, healthcare, and professional services. The fastest spending growth will come from the media industry and federal or central governments with five-year CAGRs of 33.7 percent and 33.6 percent respectively.

Investments in AI systems continue to be driven by a wide range of use cases. The three largest use cases — automated customer service agents, automated threat intelligence and prevention systems, and sales process recommendation and automation — will deliver 25 percent of all spending in 2019. The next six use cases will provide an additional 35 percent of overall spending this year.

The use cases that will see the fastest spending growth over the 2018-2023 forecast period are automated human resources (43.3 percent CAGR) and pharmaceutical research and development (36.7 percent CAGR). However, eight other use cases will have spending growth with five-year CAGRs greater than 30 percent.

Decision-makers across all industries are now grappling with the question of how to effectively proceed with their AI journey. That's why the largest share of technology spending in 2019 will go toward services, primarily IT services, as firms seek outside expertise to design and implement their AI projects.

Hardware spending will be somewhat larger than software spending in 2019 as firms build out their AI infrastructure, but purchases of AI software and AI software platforms will overtake hardware by the end of the forecast period with software spending seeing a 36.7 percent CAGR.

Outlook for AI applications development growth

On a geographic basis, the United States will deliver more than 50 percent of all AI applications development spending throughout the forecast period, led by the retail and banking industries. Western Europe will be the second-largest geographic region, led by banking and discrete manufacturing.

China will be the third-largest region for AI spending with retail, state or local government, and professional services vying for the top position. The strongest spending growth over the five-year forecast period will be in Japan (45.3 percent CAGR) and China (44.9 percent CAGR).


Commvault launches ‘Metallic’ SaaS backup suite


Keumars Afifi-Sabet

15 Oct, 2019

Backup specialist Commvault has lifted the lid on a spin-off software as a service (SaaS) venture that allows customers to safeguard their files and application data, whether on-prem or cloud-based.

Launched at the firm’s annual Commvault GO conference, the Metallic portfolio is geared towards addressing a growing demand among Commvault’s customers for SaaS backup and recovery services.

Metallic will be pitched at businesses of between 500 and 2,500 employees and is set to launch with three strands that span the breadth of SaaS-based data management, including one service devoted entirely to Microsoft Office 365.

Its launch is also significant in that Commvault has pointedly decided to give the platform a brand of its own, rather than folding it under the Commvault umbrella.

This, according to the firm’s CEO Sanjay Mirchandani, is because Metallic signifies a divergence from how Commvault has traditionally developed and launched a product.

“Part of what Metallic represented for us as a company is a new way of building,” said Mirchandani. “We funded it and created a startup within the company, they could tap into anything they wanted to within Commvault or not.

“Choose the go-to-market model, choose the partners they wanted to work with, give them the freedom to create something that is world-class and designed to solve real problems for customers. And they had the best of both worlds.”

The three strands comprising Metallic are Core, Office 365 and Endpoint services, each aimed at different elements of protecting data within a large organisation.

Core, for instance, centres on the ‘essentials’ of data spanning from VMware data protection to Microsoft SQL database backup. By contrast, Endpoint backup and recovery focuses on protecting data stored locally on machines within an organisation.

The Office 365 provision, meanwhile, is dedicated to protecting an organisation’s work within the productivity suite of apps and services to safeguard against potential issues like accidental deletion and corruption.

Initially available only in the US, the services can be purchased through monthly or annual subscriptions, while prospective customers can sign up for a free trial through the platform’s dedicated website.

Commvault decided to build the Metallic brand, Mirchandani added, after extensive consultation with partners and its customers. Its developers decided the best approach to building Metallic would be to adopt the viewpoint of an organisation’s chief information officer (CIO) and consider their backup needs.

Why the future of data security in the cloud is programmable

It’s the way software used to be purchased, and often still is. A CEO, or GM, or line-of-business owner calls into IT, and the security and compliance teams, to let them know that they are purchasing a new piece of software to drive innovation in how they deliver their products or services. Because the software needs to be customised, integrated and controlled in the company’s on-prem or cloud environment, the IT team needs to deploy it and the security team needs to secure it.

The problem is that IT, security, and compliance are already behind. As the “Defenders” of the business, they must now apply multiple other third-party products to that application in order to gain fine-grained control over who accesses it and what data they can access. While a growing body of regulations state that security and privacy must be implemented “by design,” they didn’t design the application that the “Builders” delivered. At this point, everything they do is fundamentally an afterthought.

The conundrum of the defender

The job of the Defender is a difficult one, because security and privacy as an afterthought creates both complexity and vulnerability. The complexity comes especially from security products needing to be customised in order to function in lockstep with the application whose data they are protecting. The larger and more complex the application to protect, the more you have to invest to configure and maintain the products that secure it.

Vulnerabilities arise because between the application and the security products meant to protect it, there are seams—gaps in communication, coordination, and capability that occur naturally when two systems that are constantly evolving occupy two different infrastructure spaces. It is those seams that endlessly produce new exposure every day.

More vulnerabilities lead to more security products, which lead to more complexity, and you can see where this is going. Large enterprises own an average of between 50 and 70 security products, and lack the personnel and resources to marshal those products to deal with the sometimes hundreds of thousands of open vulnerabilities created by the patchwork.

The result for the business is that spending on cybersecurity increases every year, yet that spending appears to do nothing to stem the tide of data breaches and privacy exposures, which are expanding at an even faster rate.

Enter the builders

The perspective of the developer, the Builders of applications, has changed. More and more, requirements around managing performance, reliability, and scalability have migrated into development processes as dev-ops and cloud infrastructure have gone mainstream. Security has followed suit, as progressive developers and dev-ops teams have adopted the mantra that the secret to fighting this battle is to get more involved in security upfront.

The initial steps in this movement have been focused on decreasing the coding of vulnerabilities, meaning that tools have been introduced into the application assembly line that analyse code for security weaknesses and prompt developers to address those weaknesses before applications get released.

This is a huge step, as those code vulnerabilities, if not caught ahead of time, are what lead to the dreaded “security patch.” Patches are software afterthoughts which IT often finds very painful to apply, as it can mean taking a system down for maintenance or other contortions that are highly disruptive to the business.

It makes sense to write more secure code, because coding is what developers do. But many developers are doing more. Now tools are becoming available that developers can embed into applications that give security, compliance, and risk-management visibility into and control over the flows of data.

These tools are not an afterthought; they are part of the application—a forethought. Most of the complexity that an IT-delivered security product introduces is avoided because the utility of the application is delivered along with its security, and everything is on the same page and in the same context.
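
As a toy illustration of that idea (this is not ALTR's actual API, and the policy table and field names are invented), a data accessor with the governance check and audit trail wired into the application itself might look something like this:

```python
# Illustrative sketch only (not ALTR's API): data governance embedded in
# the application itself. Every read passes through a policy check and
# is audit-logged in the same context as the business logic.
import functools
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("data-governance")

# Hypothetical policy table: which roles may read which fields.
POLICY = {"ssn": {"compliance"}, "email": {"compliance", "support"}}

def governed(field):
    """Wrap a data accessor so policy and audit travel with the code."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(role, *args, **kwargs):
            if role not in POLICY.get(field, set()):
                audit.warning("DENIED %s access to %s", role, field)
                raise PermissionError(f"{role} may not read {field}")
            audit.info("GRANTED %s access to %s", role, field)
            return fn(role, *args, **kwargs)
        return wrapper
    return decorator

@governed("ssn")
def get_ssn(role, customer_id):
    return DATABASE[customer_id]["ssn"]  # hypothetical data store

DATABASE = {42: {"ssn": "123-45-6789"}}
print(get_ssn("compliance", 42))   # allowed, and audit-logged
# get_ssn("support", 42)           # would raise PermissionError
```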

More powerfully, the seams between the application and its security products, which fuel the runaway train of vulnerabilities, disappear. We see at ALTR that when an application developed using the programmable model is delivered, it arrives with tools to manage data in a changing world of security, compliance, and risk. Data security and governance have been “programmed in”.

Programmable as cloud-native

With the ability to monitor, govern, and selectively protect data, even from developers themselves, wired into applications, another door swings wide open: application portability.

Many companies, from traditional manufacturers all the way to software companies themselves, are looking for ways to leverage the economics and flexibility of cloud infrastructure. For most of these companies, the top two concerns about moving to infrastructure they don’t control are security and compliance.

But when a development team wires in tools to allow for the control of data regardless of where the application is deployed, the business is free to determine the best infrastructure for the application in question based on performance, cost, reliability and other IT priorities. Cloud options from platform-as-a-service all the way to serverless architecture, where IT doesn’t have to maintain any of the infrastructure stack, are all on the table.

Through this lens, the economic benefits of programmable data security come completely into focus. By way of example, adopting this approach has enabled ALTR to help a business optimise its digital footprint based on the delivery of technology services, not on the security of them. There are also additional savings that these organisations realise in the consolidation of security products, because many existing products are tied to the infrastructure in which they are deployed, from physical network appliances to cloud-provider-specific tools.

A programmable future

The world of connected computing that we find ourselves in today is a result of more than 20 years of focus on speed, efficiency, and convenience. While developers and the security teams supporting them have gotten serious about data security, the fundamental approach, and the legacy applianceware that supports that approach, is often still stuck in the world of the afterthought.

By considering data security and implementing tools to provide it at the very conception point of the application that creates the data, we finally accomplish what we needed all along: security and privacy by design. When Builders and Defenders work closely together, we see new levels of data protection implemented in organisations around the world.


Kubernetes and multi-cloud: How to monitor your modern applications effectively

Many companies are moving to a new way of delivering services to customers based on microservices. Rather than building huge, monolithic apps, a microservices architecture uses small, interconnected application components. These modern applications tend to be easier to update and expand than traditional ones, as replacement services can be slotted in using APIs rather than requiring full rewrites.

To support this design approach, developers are making more use of cloud and containers. According to the Continuous Intelligence Report for 2019, the percentage of companies adopting containers has grown to 30 percent. Cloud services can host tens to thousands of containers, depending on how large the application needs to be, and the number of containers can be raised or lowered with demand. This makes containers complex to manage. For companies that run their critical applications on these new services, managing all this infrastructure is a huge challenge.

To administer this new infrastructure, companies are adopting Kubernetes as a way to orchestrate their IT. In the CI Report, the percentage of companies adopting Kubernetes ranged from 20 percent for businesses running on AWS alone, through to 59 percent for those running on a combination of AWS and Google Cloud Platform.

For companies running on AWS, GCP and Azure, the adoption of Kubernetes was up to more than 80 percent. For multi-cloud environments, Kubernetes helps to streamline their operations and respond more quickly to changes in demand.

Monitoring Kubernetes

So far, Kubernetes has helped companies turn the idea of multi-cloud into a reality. By being able to run the same container images across multiple cloud platforms, IT teams should be able to maintain control over their IT and retain leverage when it comes to pricing.

However, Kubernetes is still a developing technology in its own right. While it provides a strong foundation for developers to build and orchestrate their applications’ infrastructure, there are some gaps when it comes to maintaining, monitoring and managing Kubernetes itself.

Kubernetes pods, nodes and even whole clusters can all be destroyed and rebuilt quickly in response to changes in demand levels. Rather than looking at infrastructure, effectively monitoring what is running in Kubernetes involves looking at the application level and focusing on each Service and Deployment abstraction instead. Monitoring therefore has to align with the way Kubernetes is organised, as opposed to trying to fit Kubernetes into a previous model.

It is also important to understand the different forms of data that might be captured. Log data from an application component can provide insight into what processes are taking place, while metric data on application performance can provide insight into the overall experience that an application is delivering.

Joining up log and metric data should give a complete picture of the application, but this task is not as easy as it sounds. It can be near impossible to connect the dots between metrics on a node to logs from a pod in that node. This is because the metadata tagging of the data being collected is not consistent. A metric might be tagged with the pod and cluster it was collected from, while a log might be categorised using a different naming convention.

Getting a true picture of what is taking place in an application running on Kubernetes involves looking at all the data being created and correlating it. Using metadata from the application alongside the incoming logs and metrics, a consistent and coherent view of what is taking place across all the containers in use can be established. This involves collecting all the metadata together and enriching it so that consistent tagging and correlation can be carried out.
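
As a minimal sketch of that enrichment step (all field names here are hypothetical), differently tagged metric and log records can be normalised onto one shared key so they can be joined:

```python
# Minimal sketch of the enrichment step described above. Field names
# are hypothetical: a metric tagged with (cluster, pod) and a log line
# tagged under a different convention are mapped onto one canonical key.
from collections import defaultdict

def normalise(record):
    """Map assorted tag conventions onto a canonical (cluster, pod) key."""
    cluster = record.get("cluster") or record.get("k8s_cluster")
    pod = record.get("pod") or record.get("pod_name")
    return (cluster, pod)

metrics = [{"cluster": "prod", "pod": "api-6d9f", "cpu": 0.82}]
logs = [{"k8s_cluster": "prod", "pod_name": "api-6d9f", "msg": "timeout calling db"}]

by_pod = defaultdict(lambda: {"metrics": [], "logs": []})
for m in metrics:
    by_pod[normalise(m)]["metrics"].append(m)
for entry in logs:
    by_pod[normalise(entry)]["logs"].append(entry)

for key, data in by_pod.items():
    print(key, "->", data)  # metrics and logs now share one view per pod
```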

Bringing all the data together

Looking at Kubernetes, it’s easy to see why the number of companies utilising it is on the rise. However, developers can currently have multiple different tools in place to take data out of container instances and bring that information back for analysis and monitoring, which can be hard to scale. For log data collection, Fluent Bit processes and forwards data from containers; similarly, Fluentd provides log and event data collection and organisation. The open source project Prometheus provides metrics collection for container instances, while Falco provides a way to audit data from containers for security and compliance purposes.

Each of these tools can provide an element of observability for container instances, but ideally they should be combined to get a fuller picture of how containers are operating over time. Similarly, automating the process for gathering and correlating data across multiple tools can help make this information easier to use and digest.

Bringing all these different sets of data together not only provides a better picture of what is taking place in a specific container or pod; the merged data can be used alongside other sources of information too. By gathering all this data together in real time, as it is created, you can see how your company is performing over time. This continuous stream of intelligence can be used to see how decisions affect both IT infrastructure and business performance.


Cloud too expensive for the “vast majority”, claims Zuckerberg


Bobby Hellard

11 Oct, 2019

Mark Zuckerberg has questioned the cost of cloud computing and storage during a discussion about bio sequencing.

The Facebook founder specifically referenced AWS, joking “let’s call up Jeff and talk about this”.

Zuckerberg and his wife Priscilla Chan set up the Chan Zuckerberg Initiative (CZI) in 2015 to find ways of using technology to advance health, social and scientific research.

In a chat between the research centre’s co-presidents Dr Joseph DeRisi and Dr Stephen Quake, moderated by Zuckerberg live on YouTube, the group argued that progress in initiatives is often blocked by the exorbitant cost of cloud subscriptions.

“In our bio board meetings, one of the things we talk about is the cost of the compute, and our AWS bill, for example, is one of the specific points,” Zuckerberg said. “Let’s call up Jeff and talk about this.”

“It’s interesting, the bottleneck for progress, in medical research at this point, a lot of the cost for it, is on compute and the data side and not strictly on the wet labs or how long it takes to turn around experiments.”

The CZI is part-funded by billions of dollars from Facebook and also investment from LinkedIn co-founder Reid Hoffman.

However, Dr Quake noted that most other organisations and research labs around the world are unable to secure this level of funding, and are therefore hamstrung by the price of cloud.

“This is no more apparent than in the developing world or low-income resource settings,” he said. “The cost of the sequencing and the lab work has gotten to the point where you can do this almost anywhere in the world. It’s gotten that cheap.”

“The compute to be able to analyse that data is unfortunately not available to the vast majority of the people that do that. It’s very often the case that you’ll go to one of these low-income resource settings, they’ll have a sequencer but it’s collecting dust because they can’t compute. Even if they can access the cloud, they can’t afford it.”

According to Synergy Research Group, the cloud market is heading for a worldwide revenue run rate of $100 billion per year. The big providers have capitalised on digital transformation and the need to store and analyse data, with AWS consistently at the top of the market.

Companies like Microsoft, Google, IBM, Alibaba and Oracle are all competing for the rest of the market. This is no more apparent than in the race to provide the Pentagon with cloud computing services. Amazon’s cloud arm was set to be the winner of the $10 billion JEDI contract, but after complaints made by Oracle and IBM, the decision has been put on hold.

Google is able to access sensitive G Suite customer data, former employee warns


Keumars Afifi-Sabet

11 Oct, 2019

Employees whose organisations deploy G Suite have been urged to stay mindful of keeping sensitive data on the productivity suite, following a report that suggests Google and IT admins have extensive access to private files.

Google itself, as well as administrators within a business, have vast access to the files stored within G Suite, and can monitor staff activity, according to a former Google employee. This data, which unlike some other Google services is not protected by end-to-end encryption, can even be shared with law enforcement on request.

This level of intrusion is necessary to perform essential security functions for business users, such as monitoring accounts for attempted access, ex-staffer Martin Shelton claimed in his post, but, in turn, this demands enormous visibility on users’ accounts.

Organisations using G Suite Business or G Suite Enterprise even give administrators powerful tools to monitor and track employees’ activity, and to retain this information in Google Vault.

“In our ideal world, Google would provide end-to-end encrypted G Suite services, allowing media and civil society organisations to collaborate on their work in a secure and private environment whenever possible,” Shelton said.

“For now we should consider when to keep our most sensitive data off of G Suite in favour of an end-to-end encrypted alternative, local storage, or off of a computer altogether.”

Of particular concern is a sense of uncertainty over who within Google has access to user data kept on its servers. Shelton added that Google claims to have protections in place, but that it’s not known how many employees are able to clear the bars set by the company.

These protections include authorised key card access and approval from both an employee’s manager and the data centre director, as well as logging and auditing of all instances of approved access.

G Suite administrators, meanwhile, can see a “remarkable level” of user data within an organisation, in light of the powerful tools offered by Google. G Suite Enterprise offers the greatest level of access into users’ activities, with G Suite Business allowing slightly more restricted visibility.

These tools include being able to search through Gmail and Google Drive for content as well as metadata including the subject lines and recipients of emails. Administrators can even create rules for which data is logged and retained, depending on how they wish to configure their G Suite.

Audit logs, for example, let IT admins see who has looked at and modified documents, while the use of apps like Calendar, Drive and Slides can be monitored on both desktops and mobile devices.
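
For a sense of how that visibility is exposed programmatically, Google's Admin SDK Reports API lets an authorised admin pull these audit events. A brief sketch follows, assuming a hypothetical service-account key file and admin address, and requiring the audit read-only scope:

```python
# Sketch of the kind of visibility described above, via Google's Admin
# SDK Reports API (admin-only; the key file and admin address below are
# placeholders).
from googleapiclient.discovery import build
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES        # hypothetical key file
).with_subject("admin@example.com")       # an actual domain admin

reports = build("admin", "reports_v1", credentials=creds)

# Drive audit events: who viewed or edited which documents.
events = reports.activities().list(
    userKey="all", applicationName="drive", maxResults=10
).execute()
for activity in events.get("items", []):
    print(activity["actor"].get("email"), activity["events"][0]["name"])
```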

Shelton has recommended that employees audit their own use of G Suite and be mindful of any sensitive data that’s either kept in Drive or discussed with others via Gmail.

The former employee has also suggested users get details from their G Suite administrators pertaining to the level of visibility they have over employees within their organisation, including which rules they’ve enabled as part of Google Vault.

Concerns over privacy within G Suite have emerged in the past after accusations were made in 2018 that third-party developers were able to view users’ Gmail messages.

Google said, at the time, that such a practice was normal across the industry and users had already granted permission as and when this occurred.

OVH rebrands as OVHcloud, claims more than 70% of revenues are cloud-based

OVH has announced it is changing its name to OVHcloud to mark the company’s 20th birthday – and double down on its ambitions.

The move was announced at OVH's annual jamboree, renamed this year from #OVHSummit to #OVHcloudSummit.

The company has long held a goal to provide an alternative to the cloud hyperscalers for the European market. OVHcloud cites clarity as the primary reason for the name change: the company claims more than 70% of its revenue is ‘focused on cloud solutions’. “By adopting the name OVHcloud, the group is aligning its identity with its business development strategy, in order to support its international growth,” the press materials note.

OVHcloud’s primary marketing pitch is an acronym around the ‘smart’ cloud: any solution needs to be simple to implement, multi-local, accessible and predictable, reversible and open, and transparent. It is the reversibility, ensuring organisations avoid bill shock and vendor lock-in, which is particularly key, according to CEO Michel Paulin.

In a recent interview with CIO India (cached), Paulin outlined the rationale, while not naming names. “Many cloud players make it difficult or impossible for their customers to move their data out of the cloud. However we believe that customers should be free to move the data out as and when they want,” said Paulin. “The concept of reversibility is not just beneficial for the customers in the long term but will also make the multi-cloud strategy feasible.”

Writing for this publication in February, Paulin expanded further on his company’s multi-cloud ethos. “From speaking to our customers, combining on-premise and cloud infrastructure with a multi-cloud strategy has allowed them to connect to networks in a totally isolated and secure way, via numerous points of presence around the world,” Paulin wrote.

“What’s more, I’ve noticed how it has allowed organisations to shift to the cloud at their own pace and take a flexible approach – all while responding to their strategic objectives,” Paulin added. “This means businesses can control and run an application, workload, or data on any cloud based on their individual technical requirements.”

OVHcloud is looking at various industries, as well as smaller businesses, to help it differentiate. The company’s Cloud Web hosting product line, for instance, is aimed at developers and agencies. At the other end of the scale the company is promising an enriched bare metal portfolio for its enterprise solution base to provide greater performance and automation capabilities.

While Paulin noted in a statement that the company wants its brand to reflect the reality of cloud around the world, industry watchers may be wary of such nomenclature. In the much more nascent blockchain space, companies have been adding the buzzword to their name before changing their mind. The most recent was Blockchain Power Trust, which changed its name last week to Jade Power Trust after the company officially ceased cryptocurrency mining operations.

You can view the full #OVHcloudSummit keynote here (English version).

Picture credit: OVHcloud/Screenshot


Box: We’re in the business of protecting companies from themselves


Bobby Hellard

10 Oct, 2019

“We’re 30,000 employees, we’re the size of a small village… there will be crime, or there will be people that do things that they shouldn’t do.”

Box CIO Paul Chapman recalls a conversation he once had with a company executive, who broke the mould somewhat by being more concerned by the actions of rogue employees than threats coming from the outside.

This was at a time when cyber security simply meant protecting against external threats. That isn’t to say the executive wasn’t bothered by external attacks, only that he recognised plenty of development had already gone into creating robust outward-looking defences over the years, while little attention was paid to the workers.

In today’s environment, we are seeing time and again that the biggest threat to security is often a company’s own employees. According to Box, around 55% of breaches are due to negligence in the workplace.

“People often say, ‘what’s the thing that worries you the most?’ Actually it is what we would call ‘negligent users’,” Chapman explains. “People don’t wake up and say ‘hey, I think I’ll be a negligent user today’, they’re just doing their work and what happens is risk builds… part of what keeps me awake is users doing negligent things, without knowing they’re doing them.”

Safety net

Jeetu Patel, Box’s chief product officer, shared a few examples of what the company considers common negligent actions. The first was sharing content to personal email accounts. So, for instance, Rachel wants to invite John into an internal folder full of private company documents. She begins typing ‘Jo’ into the search bar and his email addresses pop up. She picks the first one, which happens to be John’s personal Gmail account, sending company documents to a non-company account.

In a second example, John, who is working remotely, might decide to download company documents on a personal device. He doesn’t select the specific documents he needs and instead puts the whole folder onto his unsecured personal device. Without realising, John may have placed sensitive company info, such as financial details, on a device that sits outside the company’s firewall.
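
Box Shield's actual detection logic isn't public, but as a toy illustration of catching the first failure mode, a check that flags invitations addressed outside company-owned domains (the domain list here is hypothetical) could look like this:

```python
# Toy illustration of catching the first failure mode above: flag any
# collaboration invite whose address falls outside company domains.
# (Box Shield's real detection logic is not shown here.)
COMPANY_DOMAINS = {"example.com"}  # hypothetical corporate domains

def flag_external_invites(invitees):
    """Return invitees whose email domain is not company-owned."""
    return [
        email for email in invitees
        if email.rsplit("@", 1)[-1].lower() not in COMPANY_DOMAINS
    ]

invites = ["john@example.com", "john.smith@gmail.com"]
print(flag_external_invites(invites))  # ['john.smith@gmail.com']
```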

It’s this concern that led Box to develop its Shield platform, released in August this year. It aims to fix the many problems and risks that crop up when sharing and collaborating. While it is mainly marketed as an external-facing security product, Box Shield is actually just as useful for preventing these types of human errors from filtering through, whether accidental or intentional.

Force preview, for example, gives users access to files in preview before they are given permission to download. So if somebody receives an email with a malicious attachment, it will be flagged by Box’s security system before it’s ever downloaded to a company’s network.

Although employees should be aware of basic security practices, software needs to account for laziness, according to Box.

“We know it’s better to point to content, we know it’s better to use links to control content and the chain of custody over content, but in an organisation of 20,000 to 35,000 people you still have someone who goes ‘oh, I think it’s easier to send an attachment’, and off he goes,” Chapman says.

In-house phishing

Chapman and his team at Box accept that we can’t all be experts, particularly when it comes to digital security. And, as clever and intuitive as Box Shield is, it’s not going to protect you from everything.

“To me, Box is a piece of the jigsaw puzzle, it’s not the jigsaw puzzle when it comes to how to think about security potential,” he says. “It’s partners, it’s integrations… you have to have people inside your organisation that are thinking through what the architecture is… you can’t just put it in Box and be done. It’s how you configure Shield, how you set it up, it’s a combination of things.”

The workforce at Box is subjected to regular tests from Chapman and his team. They are even tested using dummy internal phishing attacks as a way to train people on how to identify and deal with threats as they arrive. This is the same tactic we’ve seen deployed across other security-savvy organisations, only, as a security specialist, Box is able to take it one step further.

“There are different levels of sophistication, but it is surprisingly scary how easy it is to spoof people,” he says. “We’ve got a red team that will actually try to break everybody’s passwords at least once per month. We will do our own phishing attacks, we look at the results, share them with the company, we don’t do a wall of shame or anything, but we do have a security ‘hero’.”

We’re only human

These ‘heroes’ seem to be in short supply, if the latest figures are anything to go by. According to Telstra’s 2019 Security Report, 89% of cyber security risks are now internal. Add to that a recent Carbon Black report suggesting that hacking and data breaches are becoming the “new normal”, with hackers now turning their attention to vulnerable end users, rather than trying to break through company firewalls directly.

What’s more, it only takes one lapse in concentration, or one employee to not know the danger, for your business to be crippled by malware. Many towns and cities in the US have been plagued by ransomware attacks that have been specifically designed to target employees that are, for the most part, illiterate in cyber security. For example, Florida’s Riviera Beach lost control of its entire municipal network after a single police department employee opened a malicious email attachment.

In 2017, the average worker made 118 mistakes a year, according to a report from Identity Guard. Predictably, many of those errors revolved around technology and as more and more businesses adopt digital services, that trend is only going to continue. After all, we’re only human.