All posts by Keumars Afifi-Sabet

XSS the most widely-used attack method of 2019


Keumars Afifi-Sabet

23 Dec, 2019

The cyber attack method most widely used to breach large companies in 2019 was cross-site scripting (XSS), according to research. 

The hacking technique, in which cyber criminals inject malicious scripts into trusted websites, was used in 39% of cyber incidents this year.
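To make the mechanism concrete, here is a minimal, self-contained sketch of the class of flaw XSS exploits. The function names and payload are hypothetical; the point is that interpolating user input into a page unescaped lets an attacker's script survive into the rendered HTML, while escaping neutralises it.

```python
import html

def render_comment(comment: str) -> str:
    # Vulnerable: user input is interpolated into the page verbatim
    return f"<div class='comment'>{comment}</div>"

def render_comment_safely(comment: str) -> str:
    # Safe: escaping turns markup characters into inert entities
    return f"<div class='comment'>{html.escape(comment)}</div>"

payload = "<script>steal(document.cookie)</script>"

print(render_comment(payload))         # script tag survives, executable in a browser
print(render_comment_safely(payload))  # &lt;script&gt;... rendered as plain text
```

In a real application the same principle applies wherever untrusted input meets HTML: templating engines that auto-escape by default close off this entire class of injection.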

This was followed by SQL injection and fuzzing, which were used in 14% and 8% of incidents respectively. Other widely-used methods include information gathering and business logic attacks, although both featured in less than 7% of incidents.

With 75% of large companies targeted over the last 12 months, the report by Precise Security also revealed the key motivation behind cyber crime has been the opportunity for hackers to learn.

Almost 60% of hackers conducted cyber attacks in 2019 because doing so presents a challenge. Other prominent reasons for hacking a company’s systems include testing the security team’s responsiveness and winning the minimum bug bounty on offer. ‘Recognition’ ranked sixth in the list of motivations, cited by just 25% of hackers. Bizarrely, 40% also said they preferred to target companies that they liked.

Digging into industry-specific insights, additional research published this month also revealed the most prominent attack method faced by sectors within the UK economy.

The most prevalent hacking technique in the business, finance and legal sectors, for example, was macro malware embedded into documents, according to statistics compiled by Specops Software. 

Retail and hospitality firms, meanwhile, suffered mostly from burrowing malware, present in 51% of attacks, as did governmental organisations, where it featured in 37% of incidents.

The healthcare industry was susceptible mostly to man-in-the-middle attacks, in which communications between two computer systems are intercepted by a third party. 

Distributed denial of service (DDoS) attacks were the most common form of attack faced by the technical services industry, with 58% of incidents using this method.

As for how these attacks are conducted specifically, the Precise Security report showed that 72% of platforms used as a springboard for cyber crime are websites. WordPress, for example, is a prime target due to its massive userbase, with 90% of hacked CMS sites in 2018 powered by the blogging platform.

Application programming interfaces (APIs) were the second-most targeted platforms in 2019, at the heart of 6.8% of incidents, with statistics showing Android smartphones are usually involved in such attacks.

Google Transfer Service launched for those handling enormous data migrations


Keumars Afifi-Sabet

13 Dec, 2019

Google Cloud Platform (GCP) has developed a software service to help organisations handle massive data transfers between on-premises locations and the cloud faster and more efficiently than existing tools.

The tool has been designed for organisations that need to undergo large-scale data transfers in the region of billions of files, or petabytes of data, from physical sites to Google Cloud storage in one fell swoop.

GCP’s Transfer Service for on-premises data, released in beta, is also a product that allows businesses to move files without needing to write their own transfer software or invest in a paid-for transfer platform.

Google claims custom software options can be unreliable, slow and insecure as well as being difficult to maintain.

Businesses can use the service by installing a Docker container, with an agent for Linux, on data centre computers, before the service co-ordinates the agents to transfer data safely to GCP storage.

The system makes the transfer process more efficient by validating the integrity of the data in real-time as it moves to the cloud, with agents using as much available bandwidth as possible to reduce transfer times.
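Google's announcement doesn't detail the validation mechanism, but integrity checks of this kind generally come down to computing a digest of the source data and comparing it against a digest of what arrived in cloud storage. A minimal sketch, with illustrative data in place of real files:

```python
import hashlib

def checksum(data: bytes, chunk_size: int = 1 << 20) -> str:
    """Hash data incrementally, as a transfer agent might while streaming."""
    h = hashlib.sha256()
    for i in range(0, len(data), chunk_size):
        h.update(data[i:i + chunk_size])
    return h.hexdigest()

source_blob = b"example payload" * 100_000
received_blob = b"example payload" * 100_000

# Matching digests mean the copy that arrived is byte-identical to the source.
print(checksum(source_blob) == checksum(received_blob))  # True
```

Hashing in fixed-size chunks keeps memory usage flat regardless of file size, which matters at the petabyte scale this service targets.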

The data transfer service is a larger-scale version of tools such as gsutil, a command-line transfer tool also developed by Google, which cannot cope with the scale of data that Transfer Service has been designed to handle.

The firm has recommended that only businesses with a network speed faster than 300Mbps use its Transfer Service, with gsutil sufficing for those with slower speeds.

Customers also need a Docker-supported 64-bit Linux server or virtual machine that can access the data to be transferred, as well as a POSIX (Portable Operating System Interface)-compliant source.

The product is aimed squarely at enterprise users, and comes several weeks after the company announced a set of migration partnerships aimed at customers running workloads with the likes of SAP, VMware and Microsoft.

This exploit could give users free Windows 7 updates beyond 2020


Keumars Afifi-Sabet

10 Dec, 2019

Members of an online forum have developed a tool that could be used to bypass eligibility checks for Windows 7 extended support and receive free updates after the OS reaches end-of-life.

Only a handful of Windows 7 users can continue to receive updates from Microsoft through its paid-for Extended Security Updates (ESU) programme after 14 January, through to January 2023.

This scheme was first introduced for enterprise customers in August and later extended to SMB users after Microsoft identified “challenges in today’s economy”.

The ESU programme is not available to all businesses, however. Users on tech support platform My Digital Life have therefore developed a prototype tool that could theoretically allow ineligible businesses to continue to receive free updates beyond 14 January.

Before ESU patches are delivered to eligible machines, Windows 7 performs a check to determine whether or not users can receive these updates. This involves the installation and activation of an ESU license key. The tool bypasses this eligibility check, which is only performed during installation, so users would, in theory, continue to receive Windows 7 updates through the ESU scheme without paying for a subscription.

The bypass was tested on the Windows 7 update KB4528069, a dummy update which was issued to users in November so they could verify whether or not they were eligible for extended support after 14 January.

Although the tool has worked on the test patch, its creators urged My Digital Life forum members to consider this as a prototype, and not a fully-fledged workaround, as things may change by February 2020.

Microsoft will be keen to ensure there aren’t any ways to undermine the ESU scheme once Windows 7 reaches end-of-life due to the sums it’s charging eligible businesses, and an ultimate desire to shift machines to Windows 10.

The firm is likely to change the way the eligibility check is performed given how simple it has proven to bypass.

It’s certainly not a tool that Microsoft is likely to condone, but it does demonstrate the extent to which Windows 7 is still popular, with users trying to retain uninterrupted access to the legacy OS.

Businesses have just weeks to upgrade their devices running Windows 7 and Windows XP or face restrictions on accessing critical security updates.

Microsoft launches Office 365 phishing campaign tracker


Keumars Afifi-Sabet

10 Dec, 2019

Microsoft has devised a phishing campaign dashboard for its Office 365 Advanced Threat Protection (ATP) module to give customers a broader overview of phishing threats beyond just individual attacks.

The newly-announced ‘campaign views’ tool provides additional context and visibility around phishing campaigns. This aims to give businesses under constant threat from phishing attempts a fuller story of how attackers came to target an organisation, and how well attempts were resisted. 

Security teams with access to the dashboard can see summary details about a broader campaign, including when it started, any activity patterns and a timeline, as well as how far-reaching the campaign was and how many victims it claimed. 

The ‘Campaign views’ tool also provides a list of IP addresses and senders used to orchestrate the attack, as well as the URLs that appeared in it. Moreover, security staff will be able to assess which messages were blocked, delivered to junk or quarantine, or allowed into an inbox.

“It’s no secret that most cyberattacks are initiated over an email. But it’s not just one email – it’s typically a swarm of email designed to maximize the impact of the attack,” said Girish Chander, Microsoft’s group program manager for Office 365 security. 

“The common pattern or template across these waves of email defines their attack ‘campaign’, and attackers are getting better and better at morphing attacks quickly to evade detection and prevention. 

“Being able to spot the forest for the trees – or in this case the entire email campaign over individual messages – is critical to ensuring comprehensive protection for the organization and users as it allows security teams to spot weaknesses in defenses quicker, identify vulnerable users and take remediation steps faster, and harvest attacker intelligence to track and thwart future attacks.”

Office 365’s ATP tool is an email filtration system that safeguards an organisation against malicious threats posed by email messages, links and any collaboration tools. 

With the additional information at hand, Microsoft is hoping that security teams within organisations can more effectively help compromised users, and improve the overall security setup by eliminating any configuration flaws. 

Related campaigns to those targeting the organisation can also be investigated, and the teams can help hunt down threats that use the same indicators of compromise.

The ‘campaign views’ dashboards are available to customers with a suite of Office 365 plans including ATP Plan 2, Office 365 E5, Microsoft 365 E5 Security, and Microsoft 365 E5.

These new features have started rolling out in public preview, with Microsoft suggesting they will become more generally available over the coming days and weeks.

Surge in multi-cloud adoption reveals wider challenges


Keumars Afifi-Sabet

20 Nov, 2019

Although most businesses have adopted a multi-cloud strategy, there are significant challenges in the way these are being implemented, including security concerns and a lack of expertise.

The adoption of multi-cloud approaches is on the rise, with the majority of companies across the world, approximately two-thirds, having deployed enterprise applications on two or more public clouds, according to findings by the Business Performance Innovation (BPI) Network, in partnership with A10 Networks.

Meanwhile, 84% of companies expect to increase their reliance on public or private clouds over the next two years.

The growth in multi-cloud adoption, however, has led to a rise in significant challenges facing businesses. Ensuring security, for example, across all clouds, networks, applications and data is the biggest concern for businesses.

This crucial challenge is followed by the need to acquire the necessary skills and expertise, as well as dealing with increased complexity in managing cloud environments. There’s also a key challenge in achieving centralised visibility and management across cloud portfolios.

“Multi-cloud is the de facto new standard for today’s software- and data-driven enterprise,” said the head of thought leadership and research for the BPI Network, Dave Murray.

“However, our study makes clear that IT and business leaders are struggling with how to reassert the same levels of management, security, visibility and control that existed in past IT models.

“Particularly in security, our respondents are currently assessing and mapping the platforms, solutions and policies they will need to realise the benefits and reduce the risks associated with their multi-cloud environments.”

To highlight the scale of the challenge businesses face, just 11% of respondents suggested their companies have been ‘highly successful’ in realising the benefits of multi-cloud, despite a significant increase in adoption in recent years.

Businesses suggest they would prioritise centralised visibility and analytics, embedded into security and performance, as a requirement for improving this, as well as automated tools to speed response times and reduce costs.

Other aspects needed include a centralised management portal from a single point of control and greater security scale and performance to handle increased traffic.

The individual tools businesses require include centralised authentication, centralised security policies, web application firewalls, and protection against DDoS attacks.

“The BPI Network survey underscores a critical desire and requirement for companies to reevaluate their security platforms and architectures in light of multi-cloud proliferation,” said vice president of worldwide marketing at A10 Networks, Gunter Reiss.

“The rise of 5G-enabled edge clouds is expected to be another driver for multi-cloud adoption. A10 believes enterprises must begin to deploy robust Polynimbus security and application delivery models that advance centralised visibility and management and deliver greater security automation across clouds, networks, applications and data.”

Firefox scraps extension sideloading over malware fears


Keumars Afifi-Sabet

1 Nov, 2019

Support for sideloaded extensions in the Firefox browser will be discontinued from next year following concerns that the function could be exploited to install malware onto devices.

Sideloading is a method of installing a browser extension that adds the file to a specific location on a user’s machine through an executable application installer. These are different from conventional add-ons, which are assigned to profiles, and are also available to download outside official Firefox channels.

From 11 February 2020, the Firefox browser will continue to read sideloaded files, but will copy these over to a user’s individual profile and install them as regular add-ons. Then from 10 March, sideloaded extensions will be phased out entirely.

Mozilla argues that for some users it’s difficult to remove sideloaded extensions completely, as these cannot be fully removed from Firefox’s Add-ons Manager. This has also proved a popular method of installing malware, the firm said.

“Sideloaded extensions frequently cause issues for users since they did not explicitly choose to install them and are unable to remove them from the Add-ons Manager,” said Firefox’s add-ons community manager Caitlin Neiman.

“This mechanism has also been employed in the past to install malware into Firefox. To give users more control over their extensions, support for sideloaded extensions will be discontinued.”

The transition period between February and March has been put in place to ensure that no pre-installed sideloaded extensions will be lost from users’ profiles, given they will have been copied over as conventional add-ons.

Developers have also been urged to update install flows, and direct users to download extensions through either their own web pages or the Firefox Add-Ons hub.

One prominent example of malware installed via sideloading, albeit not on Firefox itself, was a Pokemon Go clone released in 2016 that allowed cyber criminals to gain full control of victims’ smartphones.

Before Pokemon Go was available in Europe, the cyber criminals publicised a non-official version of the app that could be downloaded from sources beyond the Google Play Store.

Businesses stung by highly convincing Office 365 voicemail scam


Keumars Afifi-Sabet

31 Oct, 2019

Cyber criminals are stealing the login credentials of Microsoft Office 365 users using a phishing campaign that tricks victims into believing they’ve been left voicemail messages.

In the last few weeks, there’s been a surge in the number of employees being sent malicious emails that allege they have a missed call and voicemail message, along with a request to log in to their Microsoft accounts.

The phishing emails also contain an HTML file, which varies slightly from victim to victim, but the most recent messages observed include a genuine audio recording, researchers with McAfee Labs have discovered.

Users are sent fake emails that inform them of a missed call and a voicemail message

When loaded, this HTML file redirects victims to a phishing website that appears to be virtually identical to the Microsoft login prompt, where details are requested and ultimately stolen.

“What sets this phishing campaign apart from others is the fact that it incorporates audio to create a sense of urgency which, in turn, prompts victims to access the malicious link,” said McAfee’s senior security researcher Oliver Devane.

“This gives the attacker the upper hand in the social engineering side of this campaign.”

This Office 365 campaign has made great efforts to appear legitimate, such as designing the phishing site to resemble the Microsoft login page. Another trick the cyber scammers use to appear genuine is prepopulating victims’ email addresses into the phishing site and requesting just the password.

The phishing site appears virtually identical to the actual Microsoft login prompt and preloads victims’ emails

Users are presented with a successful login message once the password is provided, and are then redirected to the office.com login page.

Researchers found three different phishing kits being used to generate malicious websites: Voicemail Scmpage 2019, Office 365 Information Hollar, and a third unbranded kit without attribution.

The first two kits aim to gather users’ email addresses, passwords, their IP addresses and location data. The third kit uses code from a previous malicious kit targeting Adobe users in 2017, the researchers said, and it’s likely the old code has been reused by a new group.

A wide range of employees across several industries, from middle management to executive level, have been targeted, although the predominant victims are in the financial and IT services fields. There’s also evidence to suggest several high-profile companies have been targeted.

McAfee has recommended as a matter of urgency that all Office 365 users implement two-factor authentication (2FA). Moreover, enterprise users have been urged to block .html and .htm attachments at the email gateway level so this kind of attack doesn’t reach the final user.
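The gateway-level blocking McAfee recommends amounts to rejecting or quarantining any message carrying an attachment with a risky extension. A toy sketch of that policy check (the function name, the blocklist and the message shape are all illustrative, not any vendor's actual API):

```python
# Extensions McAfee advises blocking for this campaign
BLOCKED_EXTENSIONS = (".html", ".htm")

def should_quarantine(attachment_names) -> bool:
    """Flag any message carrying an attachment with a blocked extension."""
    return any(
        name.lower().endswith(BLOCKED_EXTENSIONS)
        for name in attachment_names
    )

print(should_quarantine(["invoice.pdf", "voicemail.HTML"]))  # True
print(should_quarantine(["report.docx"]))                    # False
```

Real mail gateways also inspect MIME types and archive contents, since extension checks alone are easy to evade by renaming files.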

“We urge all our readers to be vigilant when opening emails and to never open attachments from unknown senders,” the researchers added. “We also strongly advise against using the same password for different services and, if a user believes that his/her password is compromised, it is recommended to change it as soon as possible.”

The use of audio in this campaign points to a greater tenacity among cyber fraudsters, who are adopting more sophisticated social engineering techniques. For example, earlier this year artificial intelligence (AI) combined with voice technology was used to impersonate a business owner and fool his subordinate into wiring £200,000 to a hacker’s bank account.

Hosting online banking in the public cloud a ‘source of systemic risk’ amid rising IT failures


Keumars Afifi-Sabet

28 Oct, 2019

The financial services industry is not doing enough to mitigate a rising volume of IT failures, spurred on by a reluctance to upgrade legacy technology, a parliamentary inquiry has found.

Regulators, such as the Financial Conduct Authority (FCA), are also not doing enough to clamp down on management failures within UK banks, which often use cost or difficulty as “excuses” not to make vital upgrades to legacy systems.

With online banking rising in popularity, the severity of system failures and service outages has also seen an “unacceptable” rise, according to findings published by the House of Commons’ Treasury Select Committee.

The report concluded the impact of these failures range from an inconvenience to customer harm, and even threats to a business’ viability. The lack of consistent and accurate recording of data on such incidents is also concerning.

“The number of IT failures that have occurred in the financial services sector, including TSB, Visa and Barclays, and the harm caused to consumers is unacceptable,” said the inquiry’s lead member Steve Baker MP.

“The regulators must take action to improve the operational resilience of financial services sector firms. They should increase the financial sector levies if greater resources are required, ensure individuals and firms are held to account for their role in IT failures, and ensure that firms resolve customer complaints and award compensation quickly.

“For too long, financial institutions issue hollow words after their systems have failed, which is of no help to customers left cashless and cut-off. And for too long, we have waited for a comprehensive account of what happened during the TSB IT failure.”

MPs launched this inquiry to examine the cause behind such incidents, reasons for their frequency, and what regulators can do to mitigate the damage.

As the report identified, TSB’s IT meltdown during 2018 is the most prominent example of an online banking outage in recent years.

The major incident, which lasted several days, was caused by a major transfer of 1.3 billion customer records to a new IT system. A post-mortem analysis by IBM subsequently showed the bank did not carry out rigorous enough testing.

TSB has not been the only institution to have suffered banking outages, with figures compiled by the consumer watchdog Which? showing customers of major banks suffered 302 outage incidents in the last nine months of 2018. Another prominent incident saw NatWest, RBS and Ulster Bank hit by website outages in August this year.

Beyond the work banks must do to ensure their systems are resilient, the MPs found that regulators must do far more to hold industry giants to account when failures do occur. Poor management and short-sightedness, for example, are key reasons why regulators must intervene to ensure banks aren’t exposing customers to risk due to legacy systems.

When companies embrace new technology, poor management of the transitions required is one of the major causes of IT failure, the report added, with time and cost pressures leading banks to “cut corners”.

Banks themselves, moreover, must adopt an attitude to ensure robust procedures are in place when incidents do occur, treating them not as a possibility but a probability.




Meanwhile, the use of third-party providers has also come under scrutiny, with the select committee urging regulators to highlight the risks of using services such as cloud providers.

The report highlighted Bank of England statistics showing that a quarter of major banks’ activity, and a third of payment activity, is hosted on the public cloud. This means banks and regulators must think about the implications of concentrating operations in the hands of just a few platforms.

The risks to services of a major operational incident at cloud providers like Amazon Web Services (AWS) or Google Cloud Platform (GCP) could be significant, with the market posing a “systemic risk”. There should, therefore, be a case for regulating these cloud service providers to ensure high standards of operational resilience.

The report listed a number of suggestions for mitigating the risk of concentration, but conceded the market is already saturated and there was “probably nothing the Government or Regulators can do” to reduce this in the short-term.

Some measures, such as establishing channels of communication with suppliers during an incident, and building applications that can substitute a critical supplier with another, could go towards mitigating damage.

“This call for regulation and financial levies is a step in the right direction towards holding banks accountable for their actions,” said Ivanti’s VP for EMEA Andy Baldin.

“Some calls to action have already been taken to restrict how long banking services are allowed to be down for without consequence, such as last year’s initiative to restrict maximum outage time to two days. However, the stakes are constantly increasing and soon even this will become unacceptable.

“Banks must adopt new processes and tools that leverage the very best of the systems utilised in industries such as military and infrastructure. These systems have the capability to reduce the two-day maximum to a matter of minutes in the next few years – working towards a new model of virtually zero-downtime.”

AWS servers hit by sustained DDoS attack


Keumars Afifi-Sabet

23 Oct, 2019

Businesses were unable to service their customers for approximately eight hours yesterday after Amazon Web Services (AWS) servers were struck by a distributed denial-of-service (DDoS) attack.

After initially flagging DNS resolution errors, customers were informed that the Route 53 domain name system (DNS) was in the midst of an attack, according to statements from AWS Support circulating on social media.

From 6:30pm BST on Tuesday, a handful of customers suffered an outage to services while the attack persisted, lasting until approximately 2:30am on Wednesday morning, when services to the Route 53 DNS were restored. This was the equivalent of a full working day in some parts of the US.

“We are investigating reports of occasional DNS resolution errors. The AWS DNS servers are currently under a DDoS attack,” said a statement from AWS Support, circulated to customers and published across social media.

“Our DDoS mitigations are absorbing the vast majority of this traffic, but these mitigations are also flagging some legitimate customer queries at this time. We are actively working on additional mitigations, as well as tracking down the source of the attack to shut it down.”

The Route 53 system is a scalable DNS service that AWS uses to give developers and businesses a way to route end users to internet applications by translating domain names into numeric IP addresses. This effectively connects users to infrastructure running in AWS, such as EC2 instances and S3 buckets.

During the attack, AWS advised customers to try to update the configuration of clients accessing S3 buckets to specify the region their bucket is in when making a request to mitigate the impact of the attack. SDK users were also asked to specify the region as part of the S3 configuration to ensure the endpoint name is region-specific.
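Pinning requests to a region-specific endpoint, as AWS advised, sidesteps the global DNS lookup that was under attack. The standard virtual-hosted-style S3 endpoint embeds both the bucket and its region in the hostname; a small sketch of constructing one (the bucket and region names here are placeholders):

```python
def regional_s3_endpoint(bucket: str, region: str) -> str:
    """Build a virtual-hosted-style S3 endpoint pinned to the bucket's region."""
    return f"https://{bucket}.s3.{region}.amazonaws.com"

print(regional_s3_endpoint("my-app-assets", "eu-west-2"))
# https://my-app-assets.s3.eu-west-2.amazonaws.com
```

In AWS SDKs the equivalent is configuring the client with the bucket's region explicitly rather than relying on the default endpoint resolution.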

Rather than infiltrating targeted software or devices, or exploiting vulnerabilities, a typical DDoS attack hinges on attackers bombarding a website or server with an excessive volume of access requests. This causes it to undergo service difficulties or go offline altogether.

All AWS services had been fully restored at the time of writing. However, the attack struck during a separate outage affecting Google Cloud Platform (GCP), although there’s no indication the two outages are connected.

From 12:30am GMT, GCP’s cloud networking system began experiencing issues in its US West region. Engineers then learned the issue had also affected a swathe of Google Cloud services, including Google Compute Engine, Cloud Memorystore, the Kubernetes Engine, Cloud Bigtable and Google Cloud Storage. All services were gradually repaired until they were fully restored by 4:30am GMT.

While outages on public cloud platforms are fairly common, they are rarely caused by DDoS attacks. Microsoft’s Azure and Office 365 services, for example, suffered a set of routine outages towards the end of last year and the beginning of 2019.

One instance was a global incident towards the end of January this year, in which US government services and LinkedIn sustained an authentication outage.

Commvault sounds warning for multi-cloud “new world order”


Keumars Afifi-Sabet

16 Oct, 2019

With multi-cloud and hybrid cloud environments on the rise, businesses need to approach data management differently than in the past, Commvault’s CEO Sanjay Mirchandani has claimed.

Specifically, this will involve avoiding data lock-in, addressing skill gaps and making information more portable while also, in some instances, doing more with less when it comes to implementing new technology.

Mirchandani, who only joined Commvault in February, used the company’s annual Go conference as an opportunity to outline his vision for the future.

During his keynote address, he highlighted the importance of offering customers the flexibility to deliver services to their native environments, whatever that may be. 

Recent research has backed this premise up, with findings showing that 85% of organisations are now using multiple clouds in their businesses.

Drawing from his time as a chief information officer (CIO) at EMC, the Commvault boss also castigated point solutions, a term used in the industry to describe tools that are deployed to solve one specific business problem, saying he wants the company to move away from this.

“With the technological shifts that are happening, you need to help your businesses truly capitalise on that opportunity,” he said.

“Give them the freedom to move data in or out, anywhere they want, on-prem or off-prem, any kind of cloud, any kind of application; traditional or modern. You need that flexibility, that choice, and that portability.”

“If I could give you one piece of advice, don’t get taken in by shiny point solutions that promise you the world, because they’re a mirage. They capture your attention, they seduce you in some way, and then they won’t get you to Nirvana. They’re going to come up short.”

He added that businesses today need services that are truly comprehensive and address a multitude of scenarios, spanning considerations from central storage to edge computing.

Moving forwards, Commvault’s CEO said the company will look to address a number of key areas, from how the company fundamentally approaches the cloud in future, to reducing the ‘data chaos’ created by the tsunami of data that businesses are collecting.

Mirchandani’s long-term vision for the company centres on finding a way to build platforms to service customers that work around the concept of decoupling data from applications and infrastructure.

It’s a long-term aim that will involve unifying data management with data storage to work seamlessly on a single platform, largely by integrating technology from the recently-acquired Hedvig.

From a branding perspective, meanwhile, granting Metallic its own identity, and retaining Hedvig’s previous one, instead of swallowing these into the wider Commvault portfolio, has been a deliberate choice.

The firm has suggested separating its branding would allow for the two products to run with a sense of independence akin to that of a start-up, with Metallic, for instance, growing from within the company.

However, there’s also an awareness that the Commvault brand carries connotations from the previous era of leadership, with the company keen to alter this from a messaging perspective.

One criticism the company has faced in the past, for instance, is that Commvault’s tech is too difficult to use. Mirchandani said that, due to recent changes to the platform, he considers this a “myth” that he’s striving to bust.

“The one [point] I want to spend a minute on, and I want you to truly give us a chance on this one, is debunking the myth that we’re hard to use,” he said in his keynote.

“We’re a sophisticated product that does a lot of things for our customers and over the years we’ve given you more and more and more technology – but we’ve also taken a step back and heard your feedback.”

Commvault, however, has more work to do in this area, according to UK-based partner Softcat, with prospective customers also anxious the firm’s tech is too costly.

An aspect that would greatly benefit resellers would be some form of guidance as to how to handle these conversations with customers, as well as a major marketing effort to effectively eliminate the sales barrier altogether.