Kaspersky extends Office 365 protections to OneDrive


Daniel Todd

23 Aug, 2019

Kaspersky has expanded the protections it offers Office 365’s Exchange Online to also include Microsoft’s storage service OneDrive, in a move designed to help businesses store and share files safely in the cloud, the security company revealed on Friday.

Users will now be better protected against the threat of malware infiltrating the storage service, spreading across their corporate networks and jeopardising sensitive data and overall workflow.

According to Kaspersky’s recent research, 66% of office workers struggle to remember what they have stored in shared folders, ultimately increasing the chances of missing suspicious files or infected emails.

The antimalware service’s multiple layers of protection have been designed to prevent this scenario, thanks to the inclusion of signature-based detection, heuristic and behavioural analysis, as well as the latest threat intelligence to help tackle both known and zero-day threats.

The freshly bolstered security package detects suspicious content within the storage space and can immediately delete an infected file before it spreads any further, Kaspersky said.

“Shared storage options, such as OneDrive, are popular and widely used business tools. But if employees can have instant and easy access to shared files, then so too can malware,” said Sergey Martsynkyan, head of B2B product marketing at Kaspersky.

“Businesses need to understand this risk and ensure they are not compromising their productivity due to cyberthreats, by protecting their data and workflows. Our product provides such protection for Microsoft Office 365, allowing companies to use its collaborative features and focus on day-to-day operations, rather than worrying about the security of their data.”

Kaspersky also revealed that Security for SharePoint will be the next feature to be added to its Microsoft Office 365 security package, as it aims to better protect content management and team workflows.

The addition will allow customers to leverage more benefits of the Microsoft solutions without the concern of threats to communications and business data, the company said.

Why VMware is acquiring Pivotal and Carbon Black


Jane McCallion

23 Aug, 2019

VMware has surprised industry watchers by announcing it intends not only to buy Dell Technologies stablemate Pivotal, but also cyber security firm Carbon Black.

The virtualisation giant’s plans to absorb Pivotal were no secret – while the definitive agreement wasn’t announced until yesterday evening, the company had already let its intentions be known a week before. The news about Carbon Black, however, was rather more unforeseen.

In the company’s Q2 2020 earnings call, held on Thursday, CEO Pat Gelsinger spoke of three strengthening “secular trends” that drove the decision to make these acquisitions.

“First multicloud is the new model for enterprise IT,” said Gelsinger, according to a transcript from Seeking Alpha, “second, digital transformation is driving accelerated pace of cloud native app development. Last, but not least, as businesses move applications to the cloud and access it over distributed networks and from a diversity of endpoints, security has become a significant challenge and priority.

“To address these trends, we are thrilled to announce our intent to acquire Pivotal and Carbon Black. It’s an exciting day for VMware as these acquisitions address critical priorities of CIOs and will meaningfully expand our ability to power our customers’ digital transformation.”

In many ways the less expected Carbon Black deal is the more straightforward of the two: VMware is one of the biggest cloud players, security is one of the biggest threats to all organisations, and for $2.1 billion in cash the company gets all the customers, assets and talent of one of the few security firms specialising in cloud-native endpoint security. Yes, VMware had (indeed, has) some offerings of its own, but Carbon Black’s use of AI and big data is significantly more advanced, so it’s easy to see why the decision to acquire was made.

The Pivotal acquisition, on the other hand, is somewhat more complex. In some ways, the acquisition is a homecoming given Pivotal was spun out of VMware and its then majority shareholder, EMC, in 2012. As EMC had retained a controlling share in Pivotal, when it was acquired by Dell some four years later, Pivotal became one of the seven arms that make up the company now known as Dell Technologies – with VMware making up a second.

At this point, it’s hard to know whether this is more to do with internal structure at Dell Technologies, industry machinations, or broader technology trends, as there are good arguments for all three.

From the point of view of the customer, channel partners and the company itself, it does make sense to bring the two very cloud-focused branches of Dell Technologies together into a single unit. If organisations increasingly want to buy, sell and deliver cloud virtualisation and development offerings bundled together, why not have a single point of origin?

The deal also means the parent company has increased its stake in VMware to 81.09%, however – although Gelsinger has dismissed the idea that this is part of any plan to fully subsume the veteran virtualisation player, telling CNBC: “Dell is extraordinarily supportive of an independent VMware.”

Expect to hear more about both acquisitions, maybe even with some CEO cameos, at VMworld 2019 next week.

DigiPlex’s data centre guide focuses on sustainability as key to digital transformation

Nordic data centre and colocation provider DigiPlex has launched a guide designed to help businesses solidify their data centre strategies – and avoid making long-term mistakes.

The guide was put together to address gaps in knowledge which DigiPlex argues could significantly impact data centre decision making, with ramifications for the wider business.

Naturally, one of the key topic areas is around alternatives to running on-premises data centres and the skills and experience required, whether it is through cloud or colocation. The report affirms cloud as an inexorable trend.

"IDC predicts significant buildout of data centres in the Nordics beyond 2020 as data centres need to be located close to users to avoid latency in data traffic as cloud transformation and IoT expand," the paper notes. "If your in-house data centre is not ideally located, it is worth evaluating an additional data centre to achieve the digital proximity needed and avoid unfortunate digital congestion."

Sustainability is another important area the report focuses on. High energy efficiency, use of renewable electricity and effective heat recovery are all initiatives the report recommends. This is not an idle promise either; this time last year CloudTech reported on a DigiPlex initiative where waste heat from its facilities was being reused in residential apartments across Oslo.

By moving from a PUE (power usage effectiveness) rating of 1.67 to 1.2, the report asserts that organisations can save as much as a quarter on power consumption, which feeds directly into the bottom line. This makes for an interesting comparison with other figures; when this reporter visited Rackspace's newest UK data centre in 2015, the claimed PUE was 1.15. The Rackspace build also featured sloped roofs for harvesting rainwater, and cooling using natural air. The Nordics' cooler temperatures mean many of the world's largest companies – Facebook being a prime example – are setting up shop there.
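
As a rough sanity check on that claim, here is a sketch of the arithmetic, assuming the IT load stays constant and taking PUE as total facility power divided by IT equipment power:

```latex
P_{\text{total}} = \text{PUE} \times P_{\text{IT}}, \qquad
1 - \frac{1.2 \, P_{\text{IT}}}{1.67 \, P_{\text{IT}}} = 1 - \frac{1.2}{1.67} \approx 0.28
```

That works out to a reduction of roughly 28% in total facility power, broadly in line with the report's "as much as a quarter".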

Gisle M. Eckhoff, CEO of DigiPlex, cited further IDC research which argued two in three European CEOs were under 'considerable pressure' to deliver successful digital transformation strategies. "The data centre sits at the core of this challenge as a critical strategic concern and opportunity for competitive advantage and sustainability," said Eckhoff. "Data centres can be owned and operated in many ways, and there has never been a more important time to review and evaluate which options are best for you.

"Regardless of your industry, your current level of digitalisation or how you currently house and manage the data necessary to operate your business, taking an honest look at your data centre requirements and options regularly is critical to long-term success," added Eckhoff.

The 10 requirements covered in the report are: high reliability, the ability to release investment budget for innovation, predictable operating costs, levels of certified renewable electricity used, energy efficiency, heat recovery, a high level of physical security, proximity to end users, proper connectivity, and a relevant data centre ecosystem.

You can read the full report here (email required).


Tech giants assemble to tackle cloud and data security


Bobby Hellard

22 Aug, 2019

Some of the largest tech companies in the world, such as Google and IBM, are joining forces to advance confidential computing and cloud security.

The aim is to build trust and security for the next generation of cloud and edge computing with open-source technologies and an agreed set of standards for protecting data.

The Confidential Computing Consortium has been brought together by the Linux Foundation and includes Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent.

Confidential computing focuses on securing data in use, rather than current approaches which often address it while in storage or transit.

“The earliest work on technologies that have the ability to transform an industry is often done in collaboration across the industry and with open source technologies,” said Jim Zemlin, executive director at The Linux Foundation.

“The Confidential Computing Consortium is a leading indicator of what’s to come for security in computing and will help define and build open technologies to support this trust infrastructure for data in use.”

A key part of the project will be to provide a fully encrypted lifecycle for sensitive data, which the Linux Foundation called the most challenging step. But confidential computing could potentially enable encrypted data to be processed in memory without exposing it to the rest of the system. This, the foundation said, could reduce the exposure of sensitive data and allow for greater control and transparency for users.

The Confidential Computing Consortium aims to bring together hardware vendors, cloud providers, developers, open-source experts and academics to influence technical and regulatory standards and build open-source tools that provide the right environment for education.

The big tech firms announced they are already planning to make open source project contributions, such as Intel with its Software Guard Extensions (SGX) and Microsoft with its Open Enclave SDK.

The proposed structure for the Consortium includes a governing board, a technical advisory council and separate technical oversight for each technical project.

“The Open Enclave SDK is already a popular tool for developers working on Trusted Execution Environments, one of the most promising areas for protecting data in use,” said Mark Russinovich, CTO, Microsoft.

“We hope this contribution to the Consortium can put the tools in even more developers’ hands and accelerate the development and adoption of applications that will improve trust and security across cloud and edge computing.”

DevOps learnings: Why every successful marriage requires a solid foundation

The relationship, for lack of a better word, between developers and operations engineers is more important than many businesses realise. High-performing teams seem to operate in easy DevOps bliss, but for many enterprises, forging new connections to deliver on the promise of DevOps ends in frustration and squabbling.

Even the basics, like creating and maintaining useful feedback loops between teams, require changing the attitudes of staff across the organisation. And the requirement for human enlightenment, not simply yet another technology refresh, is the most critical and overlooked barrier to DevOps success. It’s anathema to how we usually operate: we’ve built a world conditioned to seek technical “fixes” first.

The rise in the popularity of DevOps creates excitement, then concern, in boardrooms across the enterprise world, where leaders hear the hype and fret they’ll fall behind the competition if they don’t embrace this new way of working. Leadership teams the world over have arranged shotgun weddings between development and operations teams, many of whom had not only never worked together, but had even been rivals. In the weeks that follow, the scale of the cultural problem becomes apparent.

Developers and ops teams are fundamentally different: they work in different ways, they have different priorities, and they approach problems from different angles. Of course, this means there’s potential for brilliant collaboration, but without the foundations in place to achieve harmony between these disparate teams, “doing DevOps” is doomed to fail.

However, as in any great relationship, many organisations discover that DevOps teams grow stronger over time and come to foster a mindset where communication, integration, and real collaboration between dev and ops are welcome. Most significantly, that mindset encourages IT pros and the business to accept a few necessary failures and continually improve, together.

While brilliant for spurring innovation and accelerating transformation, DevOps adoption usually includes some rough patches of disagreement, finger-pointing, and tension. But ask IT team members who’ve made the transition, and most will tell you they’re far more flexible, less stressed, and have higher customer satisfaction than before. Many go further to say they won’t ever go back to a job in a waterfall operation.

The tools for collaboration

While changing team culture is a prerequisite, close on its heels is a reassessment of the DevOps tools your team uses to drive technology change. A well-designed DevOps toolchain frees up software and infrastructure engineers to spend more time working on moving the business forward, like new projects, features, or improvements to architecture, systems, and quality end users notice.

As is always the case with automation, the more time you spend improving your systems, the more they improve. And with DevOps principles, those improvements become data-driven and repeatable, not based on hunches and opinion. Right-tasking your tools will help you break the cycle of primarily fixing things and not preventing incidents in the first place.

The tools employed will vary according to the nature of each organisation. However, some basic principles apply to ensure effective measurement, evaluation, and insight. Because DevOps is relatively new to many technology teams, it’s an opportunity to reintroduce disconnected teams using the tools they already count on, by using them more methodically. Consider tools which emphasise communication, collaboration, and integration first. When software developers, QA engineers, and IT operations all point to the same data in a common dashboard, you’ll likely already see a reduction in friction and faster MTTR.

But what does this look like at scale? With the right tools, developers get easy access to the same performance data operations relies on, so they can monitor the effect of their changes on performance.

Almost organically, dev and ops begin incorporating application performance monitoring (APM) tools to move past the limits of estimating user performance from infrastructure metrics. Further, they’ll incorporate APM into development cycles, allowing them to confirm performance and scalability well before code makes its way down the delivery pipeline. Better still, scarce resources like DBAs aren’t pulled in for every CPU or memory alert; instead they focus on service indicators like wait time and are free to collaborate on the overlooked query or table optimisation developers need.
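
To make that concrete, below is a minimal sketch of how an APM-backed performance gate might sit in a delivery pipeline, failing the build if a release candidate blows its latency budget. The endpoint URL, metric name, service name and threshold are all illustrative assumptions rather than any particular vendor's API:

```python
import sys

import requests  # assumes the 'requests' package is available in the pipeline image

# Hypothetical APM/metrics endpoint and budget -- placeholders, not a real vendor API
APM_QUERY_URL = "https://apm.example.com/api/metrics/query"
P95_LATENCY_BUDGET_MS = 250


def p95_latency_ms(service: str, environment: str) -> float:
    """Fetch the 95th-percentile response time for a service in a given environment."""
    response = requests.get(
        APM_QUERY_URL,
        params={"service": service, "env": environment, "metric": "p95_latency_ms"},
        timeout=10,
    )
    response.raise_for_status()
    return float(response.json()["value"])


def main() -> None:
    # Runs as a pipeline step after deploying the release candidate to the perf-test environment
    latency = p95_latency_ms(service="checkout-api", environment="perf-test")
    if latency > P95_LATENCY_BUDGET_MS:
        print(f"FAIL: p95 latency {latency:.0f}ms exceeds the {P95_LATENCY_BUDGET_MS}ms budget")
        sys.exit(1)  # a non-zero exit fails the stage and stops promotion
    print(f"OK: p95 latency {latency:.0f}ms is within budget")


if __name__ == "__main__":
    main()
```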

At the production end of the spectrum, IT operations need tools to make collaborating with the extended IT organisation easier. It’s much easier to maintain control over production infrastructure, workloads, and storage with a unified view into application performance. DevOps-focused teams watch application behaviour for changes, anticipated or otherwise, from dev, through QA and perf test, to production. If there’s one driver for unplanned work in ops, it’s being handed an alien application or system with no idea how it’s expected to perform. And Ops hates unplanned work.

It’s important for leaders to ensure the tools supporting the engineers feel natural and valuable to them or they won’t be adopted. While we all have a preferred instrument, the underlying teamwork will discourage bespoke processes, hacks, or brittle workflows. There’s a cultural shift from “what works best for me?” to “what works best for our team?” As an added benefit, teams often incorporate security policy as another monitoring dimension, ensuring governance as business-critical data is dispersed across different dev and ops teams.

As in any relationship, successful DevOps teams have shared more than a few moments of one step forward, and two steps back. You’ll hear some war stories about passionate disagreement, and even occasional disruption between teams. And that’s okay; culture change and individual progress is usually messy, even when you know the endeavour is important.

But when leaders and IT pros alike take steps to minimise the risk of failure by addressing people first, and then the technology, you’re far more likely to use DevOps to full benefit. When colleagues can see the bigger picture and the motivation for change, changing working patterns and habits becomes natural because they all share in the result.

DevOps takes time

A DevOps culture enables most organisations to modernise, and some even to reinvent their use of technology and processes. High-performing, low-friction teams are much more likely to earn the freedom to chart their own destinies, shifting from cost centre-like facilities to agile partners for business growth. When DevOps adoption fails, it’s generally because a few initial failures scare us into retreat, back to traditional approaches. It’s important to remember that a few rollercoaster moments are a natural part of the growth process. Only by trial and error in your own unique environment can your teams determine how to adapt DevOps principles to your organisation.

A Gartner report entitled New Insights into Success with Agile in Digital Transformation shed light on this. It found that teams with under 12 months experience with a new development process are successful only 34% of the time. By contrast, teams with between one and three years’ experience are successful 64% of the time, while those with more than three years saw that figure jump to 81%. In sum, DevOps requires patience, commitment, and the passage of time.

When we look to the second decade of DevOps, it’s possible we’ll see a rebirth of IT operations as a competitive advantage, not unlike the initial adoption of business computing. Those early machines, coupled with a more academic approach to their use, allowed businesses to express their unique value with more speed and fewer resources. They were successful not simply because of all the tech in their room-sized cabinets, but because they were high-profile, with corresponding investment.

DevOps is a tool like any other, but one which may connect and transform understandably risk-averse, change-resistant teams into versatile, responsive business partners. And nobody likes living on a cost centre budget.


Box Shield brings security controls to lockdown cloud collaboration


Bobby Hellard

22 Aug, 2019

Content and file management cloud service Box has unveiled Box Shield, a set of features that lets admins control access to shared content.

This will include “intelligent” threat detection capabilities and safeguards to prevent accidental data leaks and the misuse of shared files.

The rapid rise of cloud computing has led to greater collaboration, both internally and externally, for most businesses, which has in turn increased the risk of security breaches.

Popular collaboration platforms such as Slack have also announced more advanced security controls in recent weeks, and Box Shield follows that trend.

“Box Shield is a huge advancement that will make it easier than ever to secure valuable content and prevent data leaks without slowing down the business or making it hard for people to get their work done,” said Jeetu Patel, chief product officer at Box.

“With Box Shield, enterprises will receive intelligent alerts and unlock insights into their content security with new capabilities built natively in Box, enabling them to deploy simple, effective controls and act on potential issues in minutes.”

According to the company, Box Shield prevents accidental data leaks through a system of security classifications for files and folders, which can be operated manually or automated. Account administrators can define and customise the classification labels to suit their workflow.

Shared links can carry restrictions, with labels that control who can see them both internally and externally. The same applies to downloads, applications and FTP transfers. There are also limits on collaboration, restricting non-approved members from editing or sharing certain content.
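
Box has not published the mechanics behind these controls, but conceptually the model is a classification label attached to each file or folder, plus a policy that says what each label permits. A purely hypothetical sketch of that kind of policy check, with label names and rules invented for the example:

```python
# Hypothetical classification-based sharing controls -- not Box's API or actual policy model
POLICIES = {
    "public":       {"external_sharing": True,  "download": True},
    "internal":     {"external_sharing": False, "download": True},
    "confidential": {"external_sharing": False, "download": False},
}


def is_action_allowed(label: str, action: str) -> bool:
    """Return True if the classification label permits the requested action."""
    # Unknown labels fall back to the strictest policy
    policy = POLICIES.get(label, POLICIES["confidential"])
    return policy.get(action, False)


print(is_action_allowed("internal", "external_sharing"))  # False
print(is_action_allowed("public", "download"))            # True
```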

Box Shield will also come with functions to detect abnormal and malicious behaviour from both internal and external threats. This is a machine learning-based service that detects anomalous downloads, suspicious sessions, and access from locations that suggest a compromised account.

“At Indiana University (IU), sensitive information changes hands thousands of times each day on our campuses with over 100,000 users and thousands of collaborators around the world,” said Bob Flynn, manager, cloud technology support at IU.

“With the introduction of Box Shield, we can apply native data classifications and design policies aligned to our own business and compliance rules. By protecting content with precision, we can help IU reduce risk without compromising speed and collaboration.”

Box Shield is in private beta at the moment, but it is due to become generally available in the Autumn.

Druva launches intelligent storage tiering for AWS


Adam Shepherd

21 Aug, 2019

Cloud-based data protection firm Druva has today announced a new storage tiering system for AWS, with the aim of helping customers optimise their storage spending across hot and cold storage.

The new system supports AWS’ S3, Glacier and Glacier Deep Archive offerings, and Druva claims that customers can benefit from a potential reduction of up to 50% in total cost of ownership. Clients can either let Druva automatically handle the tiering of their data for a minimum of hassle, or manually specify the tiering system they want to use for closer oversight.
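
Druva has not detailed the internals of its tiering engine, but the underlying AWS building block for this kind of policy is the S3 lifecycle configuration, which transitions objects into Glacier and Glacier Deep Archive after a set number of days. A minimal sketch using boto3, where the bucket name, prefix and day counts are assumptions for illustration:

```python
import boto3  # assumes AWS credentials are configured in the environment

s3 = boto3.client("s3")

# Illustrative values only: bucket, prefix and transition ages are not Druva defaults
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-cold-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"},        # warm data -> cold storage
                    {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},  # cold data -> deep archive
                ],
            }
        ]
    },
)
```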

The intelligent storage system also includes a central data management dashboard, machine learning-powered data protection, and one-click policy management actions.

“IDC estimates approximately 60% of corporate data is ‘cold,’ about 30% ‘warm’ and 10% ‘hot,’” said Phil Goodwin, director of research at IDC. “Organisations have typically faced a tradeoff between the cost of storing ever increasing amounts of data and the speed at which they can access the data. Druva’s collaboration with AWS will allow organisations to tier data in order to optimise both cost and speed of access. Customers can now choose higher speed for the portion of data that needs it and opt for lower costs for the rest of the data that does not.”

“Enterprises are constantly searching for ways to shift budget to innovation projects,” said Druva’s chief product officer, Mike Palmer. “Driving down the cost of storage and administration is seen by the enterprise as the best opportunity to move money from legacy. Beyond cost-savings, the ability to see multiple tiers of data in a single pane of glass increases control for governance and compliance and eventually analytics, and shows customers that the public cloud architecture decreases risk, cost and enables them to deliver on the promise of data.”

The company also announced the general availability of its disaster recovery as a service product for AWS. Like its storage tiering, it also claims a potential TCO reduction of up to 50%, as well as faster recovery times, easier management and improved reporting functions.

What enterprise IT teams can learn from Google Cloud’s June outage: A guide

In early June 2019, Google Cloud suffered a cascading set of faults that rendered multiple service regions unavailable for a number of hours.

This by itself isn’t totally unprecedented; what made it significant was the way it propagated through the very software that was designed to contain it. Moreover, engineers’ initial attempts to correct the issue were thwarted by the failure of that same software architecture. It was the interconnectedness and interdependencies of Google’s management components that contributed to the outage.

The outage

To understand this situation more fully, the following is a short summary of what happened. A maintenance “event” that normally wouldn’t be such a big deal triggered a reaction in GCP’s network control plane, which was further exacerbated by a fault in that code that allowed it to stop other jobs elsewhere in Google’s infrastructure.

The distributed nature of a cloud platform means that although clusters in one area are designed to be maintained independently of clusters in another, if those management processes leak across regions, the failure spreads like a virus. And because these controllers are responsible for routing traffic throughout their cloud, as more of them turned off, network traffic just became that much more constrained, leading to even more failures. In technical terms:

  • Network control-plane jobs were stopped on multiple clusters, across multiple regions at the same time
  • Packet loss and error rates increased simultaneously across multiple regions
  • As a result, key network routes became unavailable, increasing network latency for certain services
  • Tooling to troubleshoot the issue became unusable because tooling traffic competed with service traffic

Once the root cause was identified, remediating the failures required unraveling the management paths to take the right processes offline in the right order and apply the necessary fixes:

  • Google engineers first disabled the automation tool responsible for performing maintenance jobs
  • Server cluster management software was updated to no longer accept risky requests which could affect other clusters
  • Updates were made to store configurations locally, reducing recovery times by avoiding the latency caused by rebuilding systems automatically
  • Network fail-static conditions were extended to allow engineers more time to mitigate errors
  • Tooling was improved to communicate status to impacted customers even through network congestion

Ultimately, the fault did not lie in Google’s willingness or ability to address issues, but rather in a systemic problem with how the platform reacted to unforeseen events. In a real-time computing environment, there is no margin during which management systems can be offline to fix a different problem located in the other system used to apply fixes to the first one.

So, what does this teach us about modern IT? It proves the theory that operations need to be centralised across footprints and infrastructure, with a bias towards platforms over point tools. Here are three specific features you want in a modern IT operations strategy.

A global view for local action

Having a global view of a highly distributed infrastructure is critical. Addressing issues in isolation has the potential of propagating faults into areas not currently under test. You need to create points of observation across all regions simultaneously in order to aggregate management data, thus enabling unified analysis to avoid compounding disparate events. This will also help you build a global service performance model by understanding activity throughout your infrastructure.
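
As a simple sketch of what those "points of observation" might look like, the snippet below polls a health endpoint in each region and folds the results into a single view, so one degraded region can be read in the context of all the others. The region list and endpoint pattern are assumptions for illustration, not any specific provider's API:

```python
import requests  # assumes the 'requests' package is installed

# Illustrative regions and health endpoint -- placeholders, not a real provider API
REGIONS = ["us-east1", "us-west1", "europe-west1", "asia-east1"]
HEALTH_URL = "https://monitor.example.com/{region}/health"


def collect_global_view() -> dict:
    """Poll every region and return one aggregated status map."""
    view = {}
    for region in REGIONS:
        try:
            resp = requests.get(HEALTH_URL.format(region=region), timeout=5)
            resp.raise_for_status()
            view[region] = resp.json()  # e.g. {"packet_loss": 0.01, "latency_ms": 42}
        except requests.RequestException as exc:
            # An unreachable region is itself a signal worth surfacing globally
            view[region] = {"status": "unreachable", "error": str(exc)}
    return view


if __name__ == "__main__":
    for region, status in collect_global_view().items():
        print(region, status)
```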

You must also consider how management tasks are carried out when systems become unresponsive: protect tooling traffic from service traffic, and don’t share network capacity between operational data and system status/events.

The ability to see impact, impactfully

Your enterprise should design a global impact model. IT operations personnel need to understand topology-driven impact before making changes – even automated changes. Google's automation did not consider an impact model, and instead was straight policy-driven automation that scaled mistakes automatically. It’s equally important to understand service dependencies via topology mapping, thus enabling impact analysis to take into consideration the cascading effects of policy-driven automation.
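
A minimal sketch of what a topology-driven impact check could look like: before an automated change touches a component, walk the dependency graph and size up everything downstream. The graph and service names below are invented for illustration:

```python
from collections import deque

# Hypothetical dependency graph: each key lists the services that depend on it
DEPENDENTS = {
    "network-control-plane": ["regional-router", "load-balancer"],
    "regional-router": ["api-gateway"],
    "load-balancer": ["api-gateway", "storage-frontend"],
    "api-gateway": ["customer-app"],
    "storage-frontend": [],
    "customer-app": [],
}


def downstream_impact(component: str) -> set:
    """Breadth-first walk of everything that transitively depends on a component."""
    impacted, queue = set(), deque(DEPENDENTS.get(component, []))
    while queue:
        service = queue.popleft()
        if service not in impacted:
            impacted.add(service)
            queue.extend(DEPENDENTS.get(service, []))
    return impacted


# Gate an automated maintenance job on the size of its blast radius
impacted = downstream_impact("network-control-plane")
if len(impacted) > 3:
    print(f"Blocked: change would impact {sorted(impacted)}; require human approval")
else:
    print(f"Proceeding: limited blast radius {sorted(impacted)}")
```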

Configuration stores you can count on

Lastly, “backing up the backups” by storing configuration data locally instead of relying on a distributed hierarchy can reduce service restoration time. Retrieving this data regionally will increase recovery latency since competing network traffic during a fault will restrict bandwidth available for management tasks.
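
One way to picture "backing up the backups" is a local configuration cache that is kept in sync while the network is healthy and relied on when it is not. A sketch under those assumptions, with the remote store URL and cache path invented for the example:

```python
import json
from pathlib import Path

import requests  # assumes the 'requests' package is installed

# Illustrative locations -- the remote store URL and cache path are assumptions
REMOTE_CONFIG_URL = "https://config-store.example.com/clusters/eu-west/config"
LOCAL_CACHE = Path("/var/lib/netctl/config-cache.json")


def load_config() -> dict:
    """Refresh the local cache from the remote store when reachable; otherwise fall back to it."""
    try:
        resp = requests.get(REMOTE_CONFIG_URL, timeout=5)
        resp.raise_for_status()
        config = resp.json()
        LOCAL_CACHE.parent.mkdir(parents=True, exist_ok=True)
        LOCAL_CACHE.write_text(json.dumps(config))  # keep a known-good copy for the next fault
        return config
    except requests.RequestException:
        # Network degraded or remote store unavailable: recover from the last local copy
        return json.loads(LOCAL_CACHE.read_text())


if __name__ == "__main__":
    print(load_config())
```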

This outage was essentially a gift to enterprise-class IT operations teams everywhere who think they are prepared for any inevitable service disruption. It’s taught us all the value of building an IT ops management strategy that includes a singular, global view of dependencies and impacts. Every business thinks it wants to become Google. But this is one way you don't.


Mozilla, Google move to block Kazakhstan’s attempts to spy on its citizens


Dale Walker

21 Aug, 2019

Google and Firefox developer Mozilla will block attempts by the government of Kazakhstan to intercept the web traffic of its citizens, the companies announced on Wednesday.

The joint action follows reports in July that the Kazakh regime had started forcing internet service providers to adopt custom web certificates, allowing officials to decrypt HTTPS internet traffic.

Despite claiming the certificate would provide greater protection for users against fraud and hacking attempts, the decision sparked widespread condemnation, with many arguing it severely undermines privacy.

Google and Mozilla have both said they distrust this certificate and as such have introduced “technical solutions” that will prevent traffic from being intercepted. For Mozilla’s part, it has revoked the certificate using OneCRL, said to be a “non-bypassable block”.

Google said it will also block the certificate the government required users to install, adding it to the list of certificates blocked inside Chromium’s source code.

Mozilla, known for its staunch support of user privacy, described Kazakhstan’s methods as an “attack” on its users.

“People around the world trust Firefox to protect them as they navigate the internet, especially when it comes to keeping them safe from attacks like this that undermine their security,” said Marshall Erwin, senior director of Trust and Security at Mozilla. “We don’t take actions like this lightly, but protecting our users and the integrity of the web is the reason Firefox exists.”

Google’s senior engineering director Parisa Tabriz said her company would “never tolerate any attempt, by any organisation – government or otherwise – to compromise Chrome users’ data”.

“We have implemented protections from this specific issue, and will always take action to secure our users around the world.”

This marks the second time Mozilla has worked actively against the Kazakh government. In 2015, government agencies asked to have the country’s root certificate included in Mozilla’s root store program, its list of approved certificates that can be used with its browsers. However, the request was eventually denied after it was discovered the certificate would be used to intercept user data.

Further government attempts then ended in failure after a number of organisations took legal action against the administration.

Mozilla is known for taking a stand against state surveillance attempts, maintaining a section on its company website showcasing its latest investigations and providing support for those concerned about privacy.

Microsoft launches bug bounty programme for Chromium-based Edge


Connor Jones

21 Aug, 2019

Microsoft has launched a fresh bug bounty programme specifically for its Chromium-based Edge browser, offering rewards worth double those of its previous HTML-based Edge programme.

The maximum reward for hunters finding significant flaws in the latest version of its flagship browser has increased to $30,000 for the most critical vulnerabilities.

Other issues will be judged by their significance, depending on how impactful the flaw is to future versions of Edge, with hunters being rewarded from $1,000 upwards.

The launch of the latest bug bounty programme coincides with the launch of the beta preview of the next Edge version and will work hand-in-hand with Microsoft’s Researcher Recognition Program.

The initiative acts somewhat like a loyalty card for bug hunters who follow Microsoft’s vulnerability disclosure process: Points are awarded for every bug they report and these points can be multiplied depending on the product on which they’re found.

A bug found in Azure or Windows Defender, for example, is eligible for a 3x points multiplier whereas Edge on Chromium gets a mere 2x multiplier – GitHub and LinkedIn receive none.

Once a hunter accrues enough points, they “may be recognised in our public leaderboard and rankings, annual Most Valuable MSRC Security Researcher list, and invited to participate in exclusive events and programs,” said Microsoft.

The program will also run alongside the pre-existing bug bounty for the HTML version of Edge, which offers rewards of between $500 and $15,000.

“Vulnerabilities that reproduce in the latest, fully patched version of Windows (including Windows 10, Windows 7 SP1 or Windows 8.1) or MacOS may be eligible for the Microsoft Edge Insider bounty program,” said Microsoft. “Windows Insider Preview is not required.”

Since the browser is powered using Chromium, the new bug bounty programme will support the Chrome Vulnerability Reward Program “so any report that reproduces on the latest version of Microsoft Edge but not Chrome will be reviewed for bounty eligibility based on severity, impact, and report quality,” it added.

The Chrome Vulnerability Reward Program currently offers rewards ranging widely from $500 to $150,000, with the greatest rewards likely to be issued for bugs found in Chrome OS.

Apple also announced the expansion of its bug bounty programme at Black Hat 2019 in August, making it the most lucrative bounty program in tech.

In addition to dishing out special iPhones to select bug hunters, making it easier for them to investigate the flagship Apple device, it announced a maximum reward for bugs of up to $1.5 million.

Back in March, an Argentinian teenage bug hunter became the first in the world to earn $1 million from lawfully finding and disclosing bugs in bounty programs. He reported more than 1,600 bugs – notable inclusions were major issues with Twitter’s and Verizon’s products.