All entries by Keumars Afifi-Sabet

Hosting online banking in the public cloud a ‘source of systemic risk’ amid rising IT failures


Keumars Afifi-Sabet

28 Oct, 2019

The financial services industry is not doing enough to mitigate a rising volume of IT failures, spurred on by a reluctance to upgrade legacy technology, a parliamentary inquiry has found.

Regulators, such as the Financial Conduct Authority (FCA), are also not doing enough to clamp down on management failures within UK banks, which often use cost or difficulty as "excuses" not to make vital upgrades to legacy systems.

With online banking rising in popularity, the severity of system failures and service outages has also seen an "unacceptable" rise, according to findings published by the House of Commons’ Treasury Select Committee.

The report concluded that the impact of these failures ranges from inconvenience to customer harm, and even threats to a business’s viability. The lack of consistent and accurate recording of data on such incidents is also a concern.

"The number of IT failures that have occurred in the financial services sector, including TSB, Visa and Barclays, and the harm caused to consumers is unacceptable," said the inquiry’s lead member Steve Baker MP.

"The regulators must take action to improve the operational resilience of financial services sector firms. They should increase the financial sector levies if greater resources are required, ensure individuals and firms are held to account for their role in IT failures, and ensure that firms resolve customer complaints and award compensation quickly.

"For too long, financial institutions issue hollow words after their systems have failed, which is of no help to customers left cashless and cut-off. And for too long, we have waited for a comprehensive account of what happened during the TSB IT failure."

MPs launched this inquiry to examine the cause behind such incidents, reasons for their frequency, and what regulators can do to mitigate the damage.

As the report identified, TSB’s IT meltdown during 2018 is the most prominent example of an online banking outage in recent years.

The incident, which lasted several days, was caused by the transfer of 1.3 billion customer records to a new IT system. A post-mortem analysis by IBM subsequently showed the bank had not carried out rigorous enough testing.

TSB has not been the only institution to suffer banking outages: figures compiled by the consumer watchdog Which? show customers of major banks suffered 302 incidents in the last nine months of 2018. In another prominent incident, NatWest, RBS and Ulster Bank were hit by website outages in August this year.

Beyond the work banks must do to ensure their systems are resilient, the MPs found that regulators must do far more to hold industry giants to account when failures do occur. Poor management and short-sightedness, for example, are key reasons why regulators must intervene to ensure banks aren’t exposing customers to risk due to legacy systems.

When companies embrace new technology, poor management of the transitions required is one of the major causes of IT failure, the report added, with time and cost pressures leading banks to "cut corners".

Banks themselves, moreover, must treat incidents not as a possibility but as a probability, ensuring robust procedures are in place for when they do occur.




Meanwhile, the use of third-party providers has also come under scrutiny, with the select committee urging regulators to highlight the risks of using services such as cloud providers.

The report highlighted Bank of England statistics showing that a quarter of major banks’ activity, and a third of payment activity, is hosted on the public cloud. This means banks and regulators must think about the implications of concentrating operations in the hands of just a few platforms.

The risks to services of a major operational incident at cloud providers like Amazon Web Services (AWS) or Google Cloud Platform (GCP) could be significant, with the market posing a "systemic risk". There is, therefore, a case for regulating these cloud service providers to ensure high standards of operational resilience.

The report listed a number of suggestions for mitigating the risk of concentration, but conceded the market is already saturated and there was "probably nothing the Government or Regulators can do" to reduce this in the short term.

Some measures, such as establishing channels of communication with suppliers during an incident, and building applications that can substitute a critical supplier with another, could go towards mitigating damage.

«This call for regulation and financial levies is a step in the right direction towards holding banks accountable for their actions,» said Ivanti’s VP for EMEA Andy Baldin.

«Some calls to action have already been taken to restrict how long banking services are allowed to be down for without consequence, such as last year’s initiative to restrict maximum outage time to two days. However, the stakes are constantly increasing and soon even this will become unacceptable.

«Banks must adopt new processes and tools that leverage the very best of the systems utilised in industries such as military and infrastructure. These systems have the capability to reduce the two-day maximum to a matter of minutes in the next few years – working towards a new model of virtually zero-downtime.»

AWS servers hit by sustained DDoS attack


Keumars Afifi-Sabet

23 Oct, 2019

Businesses were unable to service their customers for approximately eight hours yesterday after Amazon Web Services (AWS) servers were struck by a distributed denial-of-service (DDoS) attack.

After initially flagging DNS resolution errors, customers were informed that the Route 53 domain name system (DNS) was in the midst of an attack, according to statements from AWS Support circulating on social media.

From 6:30pm BST on Tuesday, a handful of customers suffered an outage to services while the attack persisted, lasting until approximately 2:30am on Wednesday morning, when services to the Route 53 DNS were restored. This was the equivalent of a full working day in some parts of the US.

"We are investigating reports of occasional DNS resolution errors. The AWS DNS servers are currently under a DDoS attack," said a statement from AWS Support, circulated to customers and published across social media.

"Our DDoS mitigations are absorbing the vast majority of this traffic, but these mitigations are also flagging some legitimate customer queries at this time. We are actively working on additional mitigations, as well as tracking down the source of the attack to shut it down."

The Route 53 system is a scalable DNS service that AWS uses to give developers and businesses a way to route end users to internet applications by translating domain names into numeric IP addresses. This effectively connects users to infrastructure running in AWS, such as EC2 instances and S3 buckets.

During the attack, AWS advised customers accessing S3 buckets to update their client configuration to specify the region their bucket is in when making requests, to mitigate the impact of the attack. SDK users were also asked to specify the region as part of the S3 configuration to ensure the endpoint name is region-specific.
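To illustrate what a region-specific request looks like (a sketch of S3's documented virtual-hosted-style addressing, not AWS's verbatim guidance from the incident; the bucket name and key below are hypothetical), the regional endpoint embeds the bucket's region in the hostname, keeping requests off the global endpoint name that was affected:

```python
def s3_regional_url(bucket: str, region: str, key: str) -> str:
    """Build a virtual-hosted-style, region-specific S3 object URL.

    Using the regional hostname (bucket.s3.<region>.amazonaws.com)
    rather than the legacy global endpoint (bucket.s3.amazonaws.com)
    means the request resolves against the region's own DNS name.
    """
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

# Hypothetical bucket and key, for illustration only.
print(s3_regional_url("my-backups", "eu-west-2", "reports/q3.csv"))
# https://my-backups.s3.eu-west-2.amazonaws.com/reports/q3.csv
```

SDK clients achieve the same effect by setting a region in their S3 configuration, so the library derives the regional endpoint itself.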

Rather than infiltrating targeted software or devices, or exploiting vulnerabilities, a typical DDoS attack hinges on attackers bombarding a website or server with an excessive volume of access requests. This causes it to undergo service difficulties or go offline altogether.

All AWS services had been fully restored at the time of writing. The attack, however, struck during a separate outage affecting Google Cloud Platform (GCP), although there’s no indication the two outages are connected.

From 12:30am GMT, GCP’s cloud networking system began experiencing issues in its US West region. Engineers then learned the issue had also affected a swathe of Google Cloud services, including Google Compute Engine, Cloud Memorystore, the Kubernetes Engine, Cloud Bigtable and Google Cloud Storage. All services were gradually repaired until they were fully restored by 4:30am GMT.

While outages on public cloud platforms are fairly common, they are rarely caused by DDoS attacks. Microsoft’s Azure and Office 365 services, for example, suffered a series of outages towards the end of last year and the beginning of 2019.

One instance was a global incident towards the end of January this year, in which US government services and LinkedIn sustained an authentication outage.

Commvault sounds warning for multi-cloud “new world order”


Keumars Afifi-Sabet

16 Oct, 2019

With multi-cloud and hybrid cloud environments on the rise, businesses need to approach data management differently than in the past, Commvault’s CEO Sanjay Mirchandani has claimed.

Specifically, this will involve avoiding data lock-in, addressing skill gaps and making information more portable while also, in some instances, doing more with less when it comes to implementing new technology.

Mirchandani, who only joined Commvault in February, used the company’s annual Go conference as an opportunity to outline his vision for the future.

During his keynote address, he highlighted the importance of offering customers the flexibility to deliver services to their native environments, whatever that may be. 

Recent research has backed this premise up, with findings showing that 85% of organisations are now using multiple clouds in their businesses.

Drawing from his time as a chief information officer (CIO) at EMC, the Commvault boss also castigated point solutions, a term used in the industry to describe tools that are deployed to solve one specific business problem, saying he wants the company to move away from this.

"With the technological shifts that are happening, you need to help your businesses truly capitalise on that opportunity," he said.

"Give them the freedom to move data in or out, anywhere they want, on-prem or off-prem, any kind of cloud, any kind of application; traditional or modern. You need that flexibility, that choice, and that portability."

"If I could give you one piece of advice, don’t get taken in by shiny point solutions that promise you the world, because they’re a mirage. They capture your attention, they seduce you in some way, and then they won’t get you to Nirvana. They’re going to come up short."

He added that businesses today need services that are truly comprehensive and handle a multitude of scenarios, spanning considerations from central storage to edge computing.

Moving forwards, Commvault’s CEO said the company will look to address a number of key areas, from how it fundamentally approaches the cloud in future, to reducing the ‘data chaos’ created by the tsunami of data that businesses are collecting.

Mirchandani’s long-term vision for the company centres on finding a way to build platforms to service customers that work around the concept of decoupling data from applications and infrastructure.

It’s a long-term aim that will involve unifying data management with data storage to work seamlessly on a single platform, largely by integrating technology from the recently acquired Hedvig.

From a branding perspective, meanwhile, granting Metallic its own identity, and retaining Hedvig’s previous one, instead of swallowing these into the wider Commvault portfolio, has been a deliberate choice.

The firm has suggested separating its branding would allow for the two products to run with a sense of independence akin to that of a start-up, with Metallic, for instance, growing from within the company.

However, there’s also an awareness that the Commvault brand carries connotations from the previous era of leadership, with the company keen to alter this from a messaging perspective.

One criticism the company has faced in the past, for instance, is that Commvault’s tech was too difficult to use. Mirchandani said that, given recent changes to the platform, this is now a "myth" he’s striving to bust.

"The one [point] I want to spend a minute on, and I want you to truly give us a chance on this one, is debunking the myth that we’re hard to use," he said in his keynote.

"We’re a sophisticated product that does a lot of things for our customers and over the years we’ve given you more and more and more technology – but we’ve also taken a step back and heard your feedback."

Commvault, however, has more work to do in this area, according to UK-based partner Softcat, with prospective customers also anxious the firm’s tech is too costly.

An aspect that would greatly benefit resellers would be some form of guidance as to how to handle these conversations with customers, as well as a major marketing effort to effectively eliminate the sales barrier altogether.

Commvault launches ‘Metallic’ SaaS backup suite


Keumars Afifi-Sabet

15 Oct, 2019

Backup specialist Commvault has lifted the lid on a spin-off software as a service (SaaS) venture that allows customers to safeguard their files and application data, whether on-prem or cloud-based.

Launched at the firm’s annual Commvault GO conference, the Metallic portfolio is geared towards addressing a growing demand among Commvault’s customers for SaaS backup and recovery services.

Metallic will be pitched at large businesses of between 500 and 2,500 employees and is set to launch with three strands that span the breadth of SaaS-based data management, including one service devoted entirely to Microsoft Office 365.

Its launch is also significant in the way Commvault has pointedly decided to assign the platform a brand in and of itself, rather than including this under the Commvault umbrella.

This, according to the firm’s CEO Sanjay Mirchandani, is because Metallic signifies a divergence from how Commvault has traditionally developed and launched a product.

"Part of what Metallic represented for us as a company is a new way of building," said Mirchandani. "We funded it and created a startup within the company; they could tap into anything they wanted to within Commvault, or not.

"Choose the go-to-market model, choose the partners they wanted to work with, give them the freedom to create something that is world-class and designed to solve real problems for customers. And they had the best of both worlds."

The three strands comprising Metallic include Core, Office 365 and Endpoint services, each aimed at varying elements of protecting data within a large organisation.

Core, for instance, centres on the ‘essentials’ of data spanning from VMware data protection to Microsoft SQL database backup. By contrast, Endpoint backup and recovery focuses on protecting data stored locally on machines within an organisation.

The Office 365 provision, meanwhile, is dedicated to protecting an organisation’s work within the productivity suite of apps and services to safeguard against potential issues like accidental deletion and corruption.

Available only in the US at first, the services can be purchased through monthly or annual subscriptions, while prospective customers can sign up for a free trial through the platform’s dedicated website.

Commvault decided to build the Metallic brand, Mirchandani added, after extensive consultation with partners and its customers. Its developers decided the best approach to building Metallic would be to adopt the viewpoint of an organisation’s chief information officer (CIO) and consider their backup needs.

Google is able to access sensitive G Suite customer data, former employee warns


Keumars Afifi-Sabet

11 Oct, 2019

Employees whose organisations deploy G Suite have been urged to stay mindful of keeping sensitive data on the productivity suite, following a report that suggests Google and IT admins have extensive access to private files.

Google itself, as well as administrators within a business, have vast access to the files stored within G Suite, and can monitor staff activity, according to a former Google employee. This data, which unlike some other Google services is not protected by end-to-end encryption, can even be shared with law enforcement on request.

This level of intrusion is necessary to perform essential security functions for business users, such as monitoring accounts for attempted access, ex-staffer Martin Shelton claimed in his post, but it in turn demands enormous visibility into users’ accounts.

G Suite Business and G Suite Enterprise even offer administrators powerful tools to monitor and track employees’ activity, and to retain this information in Google Vault.

"In our ideal world, Google would provide end-to-end encrypted G Suite services, allowing media and civil society organisations to collaborate on their work in a secure and private environment whenever possible," Shelton said.

"For now we should consider when to keep our most sensitive data off of G Suite in favour of an end-to-end encrypted alternative, local storage, or off of a computer altogether."

Of particular concern is a sense of uncertainty over who within Google has access to user data kept on its servers. Shelton added that Google claims to have protections in place, but that it’s not known how many employees are able to clear the bars set by the company.

These protections include authorised key card access, approval from an employee’s manager as well as the data centre director, as well as logging and auditing of all instances of approved access.

G Suite administrators, meanwhile, can see a "remarkable level" of user data within an organisation, thanks to the powerful tools offered by Google. G Suite Enterprise offers the greatest degree of access into users’ activities, with G Suite Business allowing slightly more restricted visibility.

These tools include being able to search through Gmail and Google Drive for content as well as metadata including the subject lines and recipients of emails. Administrators can even create rules for which data is logged and retained, depending on how they wish to configure their G Suite.

Audit logs, for example, let IT admins see who has looked at and modified documents, while the use of apps like Calendar, Drive and Slides can be monitored on both desktops and mobile devices.

Shelton has recommended that employees audit their own use of G Suite and be mindful of any sensitive data that’s either kept in Drive or discussed with others via Gmail.

The former employee has also suggested users get details from their G Suite administrators pertaining to the level of visibility they have over employees within their organisation, including which rules they’ve enabled as part of Google Vault.

Concerns over privacy within G Suite have emerged in the past after accusations were made in 2018 that third-party developers were able to view users’ Gmail messages.

Google said, at the time, that such a practice was normal across the industry and users had already granted permission as and when this occurred.

SAP launches a string of data-driven cloud services


Keumars Afifi-Sabet

8 Oct, 2019

SAP has announced several improvements to its business technology platform for enterprises to allow customers to exploit business insights and gain value from data points.

The company has released services pertaining to data warehousing, analytics and cloud deployment through its business technology platform, offering enterprise customers a single platform from which to deploy SAP technology.

The SAP Cloud Services suite, set to get a raft of improvements, includes SAP HANA Cloud, SAP Data Warehouse Cloud and SAP Analytics Cloud.

"Our business technology platform brings SAP HANA and analytics closer together with SAP Cloud Platform so users can make smarter, faster and more-confident business decisions," said SAP CTO Juergen Mueller.

"SAP ensures high levels of openness and flexibility including out-of-the-box integration, modularity and ease of extension in cloud, on-premise and hybrid deployment models. With this open and flexible approach, SAP is committed to helping our customers achieve superior business outcomes."

SAP Data Warehouse Cloud serves as a platform that ties in all aspects of a business’ data together before translating the dataset into insights relevant to its specific field. The feature will be released towards the end of the year, with approximately 2,000 customers currently registered for the beta programme.

The tool goes beyond conventional modelling to encompass aspects like data governance and creating a single data landscape from several sources such as cloud and on-premise from within a company, as well as externally.

"Thanks to the data virtualization capabilities, we can connect all our data without creating redundancies," said Andreas Foerger, manager of the analytics and reporting team at SAP customer Randstad Germany.

"The semantic layer that SAP Data Warehouse Cloud comes with helps us sync all our data from different sources in a way that makes perfect sense to the business."

SAP HANA Cloud, meanwhile, aims to bring customers an interface that offers virtual interactive access with a scalable query engine. This service, alongside SAP Analytics Cloud, which comprises tools that improve internal business planning, will also be released towards the end of the year.

UK data centres blitz climate change targets


Keumars Afifi-Sabet

27 Sep, 2019

Data centre operators in the UK have fulfilled their climate change obligations two years ahead of schedule, exceeding the requirement of a 13.52% reduction in power usage by a healthy margin.

Under the climate change agreement (CCA) scheme for data centres, participants are required to reduce their Power Usage Effectiveness (PUE) by 15% by the end of 2020. Calculations by techUK, a trade association for the UK’s tech industry, show the sector achieved a reduction of 16.72%.

"Provisional results from the Climate Change Agreement (CCA) for Data Centres suggest that the sector has successfully met its efficiency target, the third of four milestones in the life of the scheme," said techUK’s associate director for data centres.

«Collectively, UK operators have performed so well that they have fulfilled the final scheme target two years ahead of schedule. However, at individual facility level, the picture is more mixed, so the sector is not complacent and will be working harder than ever to build on these improvements in the final stage.»

The headline figure serves only as an aggregation of outcomes across 150 sites, and a more detailed examination suggests there’s still plenty of work to be done. Of 88 target units, with ‘target units’ defined as combinations of several data centre sites, 40 passed the requirements while 48 failed.

Those sites which failed to meet their targets and did not have surplus carbon from previous assessment periods were obliged to buy out the carbon needed to meet their targets if they wished to remain certified.

Brexit uncertainty has been cited as a key reason for this failure among sites that have not met their targets, due to a reduction in enterprise customers in the last few years. Older sites that were full at the start of the scheme in 2013 will also be disproportionately affected as they have struggled to realise the benefits of efficiency improvements.

Despite the reported success of UK data centre operators, critics have argued that PUE is not a robust enough measure of energy efficiency. It’s calculated as the ratio of the total amount of energy used by a facility to the energy delivered to computing equipment.
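As a worked example of that arithmetic (the figures below are illustrative, not drawn from the techUK report), a facility drawing 1,500 kW in total while delivering 1,000 kW to IT equipment has a PUE of 1.5, and improving from a PUE of 1.8 to 1.5 is a reduction of roughly 16.7%, comparable to the sector's reported figure:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power.

    A PUE of 1.0 would mean every watt drawn reaches the computing
    equipment; real facilities sit above that due to cooling, lighting
    and power distribution losses.
    """
    return total_facility_kw / it_equipment_kw

def percent_reduction(old_pue: float, new_pue: float) -> float:
    """Percentage improvement between two PUE readings."""
    return (old_pue - new_pue) / old_pue * 100

print(pue(1500, 1000))                         # 1.5
print(round(percent_reduction(1.8, 1.5), 2))   # 16.67
```

Because PUE only measures the infrastructure overhead and not how efficiently the IT load itself is used, a colocation operator can improve it without any say over the customer's hardware, which is the limitation the report's critics point to.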

"The CCA target of 15 per cent improvement in PUE has been criticised by external observers unfamiliar with the commercial data centre business model," techUK’s report said. "They claim that this is not nearly tough enough.

"Commercial operators providing colocation (or colocation-style services) control the infrastructure and not the IT, which remains a customer matter. PUE is a performance metric limited to infrastructure, so it is the best, or perhaps more accurately the least worst, metric to use for this type of provider."

The role of the tech industry in exacerbating climate change has come under scrutiny in recent years. This has especially been the case with regards to the role data centres play in maintaining cryptocurrencies like Bitcoin.

Apple has even speculated on the silver linings of climate change, suggesting more natural disasters could fuel iPhone sales.

The CCA was struck following negotiations between the Department for Business, Energy and Industrial Strategy (BEIS) and techUK, and is expected to end in 2023. There is, as yet, no indication that BEIS will devise a replacement energy efficiency programme once the CCA expires.

Dropbox launches admin controls and collaboration hub amid major workspace push


Keumars Afifi-Sabet

26 Sep, 2019

Dropbox has launched a slew of new features as part of its new desktop app that aims to reposition the file-sharing service as a digital workplace hub.

The headline Dropbox Spaces addition aims to transform folders into collaborative workspaces in which teams can organise and share documents, as well as synchronise calendars and facilitate better communications.

The smart workspace feature will also introduce several key integrations over the course of the next few months, including Paper, HelloSign and Trello. Spaces will also allow workers to add comments directly into files, and configure a notifications feed.

This is part of the company’s wider ambitions to shake off its image as a mere cloud storage service and branch out into productivity. The cloud firm first announced a huge overhaul of its desktop app in June, and announced several features like G Suite and Microsoft Office integration, as well as native integrations with partners like Zoom and Slack.

Dropbox also launched a file-sharing service in July called Dropbox Transfer, which aims to combat the limitations of sharing large files via email. This feature will also make its way into the company’s flagship desktop app in the coming months.

"When we talk about the experience of using technology at work, what was stunning to me, even a few years ago was like, man, our industry just keeps making things more complicated, and just keeps throwing new stuff onto the pile," Dropbox founder and CEO Drew Houston told Cloud Pro in June. "Like, who’s making everything work together?

"Increasingly we saw that our customers are seeing Dropbox more as this workspace in which they use the Office suites and things like that, which triggered a pretty big mental shift for us and completely changed the concept of the product that we wanted to build."

The company has also made a number of key changes to Dropbox Business, hoping to make lives easier for IT administrators and team managers. These additions include greater access to employee activity and tools for data security and compliance.

Through the enterprise console, IT admins can gain high-level control and visibility while also delegating levels of control to individual team leaders in appropriate cases. Users can also be reviewed and managed across several workspaces, with different controls and settings based on the needs of each department.

Staff activity can also be monitored through an activity page with search functionality and the capacity to filter, report and take quick action. The firm has also teased a dashboard, set for a future release, that will highlight high-priority user activity.

Microsoft issues urgent Internet Explorer and Windows Defender security patches


Keumars Afifi-Sabet

24 Sep, 2019

Microsoft has urged users to patch their Internet Explorer browsers immediately after learning that cyber criminals are exploiting a flaw to execute arbitrary code on target devices.

The firm has also issued a patch for a separate flaw in Windows Defender, the company revealed on Monday. The severity of the flaws means the patches have been released out-of-sync with its Patch Tuesday releases, which normally fall on the second Tuesday of every month.

The Internet Explorer flaw, dubbed CVE-2019-1367, involves a remote code execution (RCE) vulnerability that allows hackers to exploit the way the web browser’s scripting engine handles objects in memory.

This could corrupt memory in a way that could allow an attacker to execute arbitrary code and gain the same user rights as the target user. This flaw affects Internet Explorer versions 9, 10 and 11, the company confirmed.

"If the current user is logged on with administrative user rights, an attacker who successfully exploited the vulnerability could take control of an affected system," the Microsoft Security Research Centre (MSRC) said.

"An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

"In a web-based attack scenario, an attacker could host a specially crafted website that is designed to exploit the vulnerability through Internet Explorer and then convince a user to view the website, for example, by sending an email."

The Windows Defender flaw, dubbed CVE-2019-1255, centres on a denial-of-service zero-day vulnerability arising when Microsoft Defender improperly handles files. An attacker could exploit this to prevent legitimate accounts from executing system binaries.

The Windows Defender flaw will be fixed as part of an automatic update mechanism, and will be shipped within the next couple of days through the Microsoft Malware Protection Engine. The Internet Explorer vulnerability, however, must be applied manually.

Although the Internet Explorer vulnerability is being actively exploited in the wild, its userbase has shrunk in recent years to represent just 2.61% of web users, according to W3 Counter figures.

A vast swathe of businesses may still be using Internet Explorer as the browser of choice in their workplace, however, particularly if they’re also persisting with older Windows systems.

Microsoft confirms a Teams client for Linux is on its way


Keumars Afifi-Sabet

10 Sep, 2019

Microsoft is developing an iteration of its collaboration tool, Teams, for Linux systems after high demand from users, but hasn’t provided a release date.

The company confirmed on a user feedback forum last week that it’s actively working on a Teams client, and that more information would be divulged soon. Users have previously been forced to use an in-browser version of Teams on Linux systems, which suffers from limitations in functionality and user experience (UX).

The popular collaboration tool is currently available on Windows, macOS, iOS and Android, as well as within a web browser, with Linux the only missing piece of the puzzle.

The biggest issues with the web iteration of Teams include the inability to video conference or share desktops and applications effectively, as well as difficulty organising presentations.

Linux users have been demanding a client for Teams for years, with the original post that Microsoft replied to on UserVoice, for example, dating back to November 2016.

Notably, Teams’ biggest rival in the collaboration space is Slack, which does have a functional Linux client that launched last year. The app is packaged with the Ubuntu Snap tool, which sandboxes it so it can run in a Linux environment with secure isolation.
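For reference, installing a snap-packaged desktop app such as Slack on an Ubuntu system is a one-line operation (a typical example rather than guidance from either vendor; Slack's snap requests classic confinement, which relaxes the strict sandbox):

```shell
# Install the snap-packaged Slack client; --classic grants the
# looser "classic" confinement the Slack snap requests.
sudo snap install slack --classic

# Confirm the package is installed and show its version and confinement.
snap list slack
```

Snaps bundle their own dependencies, which is what lets one package run across different Linux distributions without per-distro builds.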

In confirming a Linux client for Teams, Microsoft is encroaching on one of Slack’s most significant differentiating factors from the industry giant.

It’s particularly significant given that Microsoft announced in July that it has more users than its key competitor; boasting more than 13 million active daily users versus Slack’s latest reported figures of 10 million users.

This can partially be attributed to the fact it’s packaged into Microsoft’s Office 365 ecosystem of productivity apps by default. But it’s also been considered fairly staggering given that Teams was lagging behind its rival as recently as April.

The rivalry between the two platforms has indeed been heating up during 2019, with Microsoft banning its employees from using Slack in June, declaring some versions of the workplace service insecure.