All posts by Keumars Afifi-Sabet

Google Plus back from the dead as ‘Google Currents’ enterprise workspace app


Keumars Afifi-Sabet

12 Apr, 2019

The now-defunct social media platform Google Plus has been unexpectedly resurrected as an enterprise application to rival the likes of Slack and Facebook Workplace.

After the service sustained a data leak of half a million accounts in October 2018, Google launched a security review and decided to shut down the platform permanently.

A second data leak in December – this time exposing the private data for 52.5 million users – led Google to accelerate its demise from August to April 2019. The platform then officially closed just 10 days ago.

Its closure has coincided with the launch of Google Currents, which touts itself as a like-for-like replacement for the Google Plus for G Suite app that had been available only to enterprise users.

“Currents is a G Suite app that enables people to have meaningful discussions and interactions across your organization, helping keep everyone in the know and giving leaders the opportunity to connect with their employees,” the company announced.

“Currents makes it easy to have meaningful discussions by enabling leaders and employees to exchange ideas across the organization and gather valuable feedback and input from others – without flooding inboxes.”

Features in the new platform include analytics tools that let users track how widely their posts are seen across the network, as well as priority placement for posts from leadership teams.

Tags and streams, including a ‘home stream’, are designed to show individuals the most relevant posts for them at any time.

Organisations can register with the Google Currents beta now, and all content for existing Google Plus users will automatically transfer upon enrolment. The app is available in all versions of G Suite.

Incidentally, the name ‘Google Currents’ is itself a resurrection of the name once given to a social magazine app, Google’s answer to Apple News. That app launched in 2011 but was replaced two years later by Google Play Newsstand.

Before retiring just days ago, Google Plus endured a torrid life playing second fiddle to more widely-used platforms such as Facebook, Instagram and Twitter. The data leak in October might have been the final nail in the coffin on its own, had the exposure of a further 52.5 million users not followed soon after.

Google nevertheless hopes to keep its social platform alive in some form, with the core of its consumer-focused app now repurposed as a business-oriented workplace service.

This also follows the launch of several new G Suite updates at Google Cloud Next 2019, with changes to Sheets, Hangouts, Calendar and Gmail touted for the near future.

AWS makes double swoop for Volkswagen and Standard Bank


Keumars Afifi-Sabet

27 Mar, 2019

Amazon’s cloud arm has struck separate agreements with Volkswagen (VW) and Standard Bank to boost the two companies’ cloud platform and customer-facing applications respectively.

VW’s deal with Amazon Web Services (AWS) will aim to pave the way for a transformation of the car maker’s manufacturing and logistical processes across its 122 plants, from improving the effectiveness of assembly equipment to tracking parts and vehicles.

Together, the two companies will develop a new platform, dubbed the Volkswagen Industrial Cloud, which will deploy technologies such as the Internet of Things (IoT) and machine learning to realise this wider ambition.

AWS’ IoT services will be deployed in full across VW’s new platform to collect data from the floor of each plant, then organise the information and run sophisticated analytics on it to gain operational insights.

Moreover, VW will feed all the information into a data lake built on Amazon S3, on which data analytics will be conducted. This, the two companies hope, will lead to improvements in forecasting and insight into operational trends. The manufacturing process could also be streamlined, and gaps in production and waste management identified.
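Neither company has published the platform’s code, but as a rough illustration of how plant telemetry might be landed in an S3-based data lake, here is a minimal sketch using the AWS SDK for JavaScript. The bucket name, key layout and record shape are all hypothetical and are not taken from the Volkswagen Industrial Cloud itself.

```typescript
// Minimal sketch: pushing plant telemetry into an S3-backed data lake.
// Bucket name, key layout and record shape are illustrative placeholders.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

interface PlantTelemetry {
  plantId: string;                  // e.g. "plant-042" (hypothetical)
  machineId: string;
  timestamp: string;                // ISO 8601
  metrics: Record<string, number>;
}

const s3 = new S3Client({ region: "eu-central-1" });

async function ingest(record: PlantTelemetry): Promise<void> {
  // Partition by plant and date so downstream analytics can prune scans.
  const date = record.timestamp.slice(0, 10);
  const key = `telemetry/plant=${record.plantId}/date=${date}/${record.machineId}-${Date.now()}.json`;

  await s3.send(new PutObjectCommand({
    Bucket: "example-industrial-data-lake",  // hypothetical bucket
    Key: key,
    Body: JSON.stringify(record),
    ContentType: "application/json",
  }));
}

// Example usage with made-up readings from an assembly line
ingest({
  plantId: "plant-042",
  machineId: "press-line-7",
  timestamp: new Date().toISOString(),
  metrics: { cycleTimeSeconds: 54.2, downtimeMinutes: 3 },
}).catch(console.error);
```

Partitioning the object keys by plant and date, as sketched here, is a common convention for S3 data lakes because it lets analytics engines scan only the slices of data they need.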

“We will continue to strengthen production as a key competitive factor for the Volkswagen Group. Our strategic collaboration with AWS will lay the foundation,” said Oliver Blume, chairman of the Porsche AG executive board.

“The Volkswagen Group, with its global expertise in automobile production, and AWS, with its technological know-how, complement each other extraordinarily well. With our global industry platform, we want to create a growing industrial ecosystem with transparency and efficiency bringing benefits to all concerned.”

AWS has also announced that South Africa’s Standard Bank has decided to use its services to migrate production workloads onto the public cloud provider’s systems. These include many customer-facing platforms and banking apps.

Subject to regulations, the migration is intended to take place across all banking departments, including personal banking and corporate and investment banking. The firm will also adopt AWS’ data analytics and machine learning tools to automate financial operations and improve the web and mobile apps used by its customers.

“Standard Bank Group has been a trusted financial institution for more than 150 years. We look forward to working closely with them as they become Africa’s first bank in the cloud, leveraging AWS to innovate new services at a faster clip, maintain operational excellence, and provide secure banking services to customers around the world,” said Andy Jassy, CEO of AWS.

An AWS cloud centre of excellence will also be created within the bank, featuring a team dedicated exclusively to the public cloud migration. This centre will also build training and certification programmes within the firm to boost employees’ digital skills. This will also be extended one step further with an educational and digital skills programme to be launched across South Africa.

View from the airport: DataWorks Summit 2019


Keumars Afifi-Sabet

22 Mar, 2019

Are you from the Hortonworks side or the Cloudera side? It’s a question I found myself asking a lot at this year’s DataWorks Summit, the first major event since the two companies completed their $5.2 billion merger just months ago. Naturally, a marriage of this scale throws up a tidal wave of questions. Unfortunately, there were no answers to be found in Barcelona.

It’s difficult to put my finger on the mood in the air, but it was closest to uncertainty. Given that DataWorks Summit has conventionally been a Hortonworks event, having the ‘new’ Cloudera spearhead it was jarring. Not just for the press, but for the comms team too. The reason? The May-time Washington DataWorks Summit, as well as Cloudera’s two Strata Data Conferences, had already been planned and organised months before the merger was tied up, so the company effectively had to go through the motions with its 2019 events.

But it was especially confusing given the Hortonworks branding appears to have been discarded entirely. Instead, the two companies, now operating under the Cloudera umbrella, have undergone a complete image refresh, with a newly-designed logo and several buzzy slogans to boot.

A new image is always something to get excited about. But the fact Cloudera was handing out metal pins emblazoned with the company’s old logo summed up the feeling quite effectively. Its Twitter page, too, is still displaying the company’s old logo at the time of writing.

Meanwhile, the event was fronted by the firm’s chief marketing officer (CMO) Mick Hollison. This underpinned the company’s almost singular focus on ‘image’ this week, which on one level made sense. Earnings day the week before made for grim reading. Revenue grew, sure. But so did expenditure, by quite a lot. This doubled losses to more than $85 million. Yet Cloudera is setting itself a target of becoming a billion dollar company before the end of the year, and reinforced its ambitions to target only the largest companies.

But it didn’t seem appropriate that a significant portion of the top brass was left at home. Anybody who could give serious answers about Cloudera’s financial performance, or specific details about the merger, was not available to chat. Then it hit me during the main keynote, when it became clear CMO Hollison would be the only Cloudera voice addressing the press, analysts and delegates that morning. At Cloudera’s first major public event since the merger, it raised an obvious question: where was the CEO?

It’s not fair to say that everybody with prominence was left at home. Hilary Mason, Cloudera’s resident data scientist and the lead on its research division, dazzled with a talk on the evolving nature of AI. Meanwhile, there were some interesting insights to gain on data warehousing, open source, and GDPR. The thematic substance of DataWorks Summit 2019 was actually quite positive despite the company’s concerted efforts to push its new marketing slogans, namely ‘from the edge to AI’ and ‘the enterprise data cloud’.

But the merger, undoubtedly, was at the forefront of everyone’s minds, with many questions lingering. Now that it has mostly been completed, it was interesting to hear that discussions with Hortonworks had actually been underway for three-and-a-half years before the two firms tied the knot.

Yet we still don’t fully know what its flagship service, named the Cloudera Data Platform (CDP), will look like. We do, however, know it’s a mash-up of Hortonworks and Cloudera’s legacy systems, Cloudera Distribution Including Apache Hadoop (CDH) and Hortonworks Data Platform (HDP).

Nor do we know when this will launch, with Cloudera officially saying it will come within the next two quarters, though one customer, Swiss insurance firm Zurich, told Cloud Pro it was coming in June. And while customers are allowed to keep the legacy platforms until around 2022, the aim is, of course, to transition everyone to CDP eventually. For Zurich, currently in the process of migrating from HDP 2.0 to 3.0, does that mean a second big transition in quick succession?

The future is uncertain. So much so that nobody really knows if the DataWorks Summits held in 2019 will be the last ever. Nevertheless, this presented a fantastic opportunity for Cloudera to address the world post-merger, and take on its major challenges head-on.

But this was an opportunity missed. The fact its most senior staff were left at home spoke volumes, even though the substance of the conference was for the most part engaging. It became clear over the course of the event that there hasn’t been, and probably won’t be, a honeymoon period for the ‘new’ Cloudera as it begins to find its feet in a turbulent market.

DataWorks Summit 2019: Cloudera allays post-merger fears with ‘100% open-source’ commitment


Keumars Afifi-Sabet

20 Mar, 2019

The ‘new’ Cloudera has committed to becoming a fully open-source company, having followed an open-core model prior to its $5.2 billion merger with former rival Hortonworks.

All 32 of the current open source projects found across Hortonworks’ and Cloudera’s legacy platforms will remain available as cloud-based services on the new jointly-developed Cloudera Data Platform (CDP).

There were fears Cloudera’s influence could undermine the “100% open source” principles that underpinned Hortonworks, given the former had previously been just an ‘open-core’ company. This amounted to a business model in which limited versions of Cloudera projects were offered in line with open source principles, with additional features available at a cost.

Cloudera first offered reassurances about its commitment to open source on a conference call with journalists last week, held to explain the firm’s dismal Q4 2018 financial results, which saw the company’s net losses double post-merger to $85.5m.

The commitment, which Cloudera elaborated on at the company’s DataWorks Summit 2019 in Barcelona this week, has coincided with a complete redesign of the company logo and further detail on its vision for an ‘enterprise data cloud’.

This, according to the firm’s chief marketing officer Mick Hollison, includes multi-faceted data analytics and support for every conceivable cloud model, from multiple public clouds to hybrid cloud to container orchestration platforms such as Kubernetes.

It would also be underpinned by a common compliance and data governance regime, and would retain a commitment to “100% open source”, with Hollison insisting several times to journalists at a press briefing that the term “isn’t just marketing fluff”.

Cloudera’s vice president for product management Fred Koopmans told journalists at the same press briefing that both companies’ existing customers valued the principles of ‘openness’ – which starts with open APIs.

“They don’t view that there is one vendor that’s going to serve all of their needs today and in the future,” Koopmans said. “Therefore it’s critical for them to have open APIs so they can bring in other software development companies that can extend it and enhance the platform.

“What open source provides them is no dead-ends; if they’re trying to develop something and there’s a particular feature they need, they always have the option of going and adding that feature with their own development team. So this is a huge driver for a lot of our larger customers in particular.”

Cloudera also used the DataWorks Summit to outline its intentions to exclusively chase the biggest enterprise customers, insisting the firm is only interested in tackling big data problems for large companies.

CDP, the embodiment of the new vision, is due to reach customers later this year as a public cloud platform only, with a private cloud iteration to follow in late 2019 or early 2020. The platform combines Cloudera’s Cloudera Distribution Including Apache Hadoop (CDH) with Hortonworks’ Hortonworks Data Platform (HDP).

Microsoft open-sources Azure compression technology


Keumars Afifi-Sabet

15 Mar, 2019

Microsoft hopes that open sourcing the compression technology embedded in its Azure cloud servers will pave the way for the technology’s adoption into a range of other devices.

The company is making the algorithms, hardware design specifications and the source code behind its compression tech, dubbed Project Zipline, available for manufacturers and engineers to integrate into silicon components.

Microsoft announced this move to mark the start of the Open Compute Project’s (OCP) annual summit. Microsoft is a prominent member of the programme, which was started by Facebook in 2011 and includes the likes of IBM, Intel, and Google.

Project Zipline is being released to the OCP to combat the challenges posed by an exploding volume of data that exists in the ‘global datasphere’, in both private and public realms, the company said. Businesses are also increasingly finding themselves burdened with mountains of internal data that should be better managed and utilised.

“The enterprise is fast becoming the world’s data steward once again,” said Microsoft’s general manager for Azure hardware infrastructure Kushagra Vaid.

“In the recent past, consumers were responsible for much of their own data, but their reliance on and trust of today’s cloud services, especially from connectivity, performance, and convenience perspectives, continues to increase and the desire to store and manage data locally continues to decrease.

“We are open sourcing Project Zipline compression algorithms, hardware design specifications, and Verilog source code for register transfer language (RTL) with initial content available today and more coming soon.

“This contribution will provide collateral for integration into a variety of silicon components (e.g. edge devices, networking, offload accelerators etc.) across the industry for this new high-performance compression standard.”

According to the firm, the compression algorithm yields compression ratios up to twice as high as the widely used Zlib-L4 64KB compression model. Contributing RTL at this level of detail, Vaid added, sets a new precedent for frictionless collaboration and can open the door to hardware innovation at the silicon level.
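Project Zipline itself is published as algorithms and Verilog RTL rather than a software library, so Microsoft’s figure can’t be reproduced directly from this article. The baseline it is measured against can be approximated, though: the sketch below, using Node’s built-in zlib module, shows roughly how a ‘zlib level 4, 64KB block’ compression ratio might be calculated. The sample file name is a placeholder, and the exact block handling in Microsoft’s benchmark may differ.

```typescript
// Rough illustration of measuring a "Zlib-L4 64KB" style baseline:
// compress data in independent 64KB blocks with zlib at compression level 4.
// This only approximates the baseline; Project Zipline is a hardware (RTL) design.
import { readFileSync } from "node:fs";
import { deflateSync } from "node:zlib";

const BLOCK_SIZE = 64 * 1024; // 64KB blocks, as in the quoted baseline

function zlibL4Ratio(data: Buffer): number {
  let compressedBytes = 0;
  for (let offset = 0; offset < data.length; offset += BLOCK_SIZE) {
    const block = data.subarray(offset, offset + BLOCK_SIZE);
    compressedBytes += deflateSync(block, { level: 4 }).length;
  }
  return data.length / compressedBytes; // higher means better compression
}

// Example usage with a hypothetical sample file
const sample = readFileSync("sample-dataset.bin");
console.log(`zlib level-4, 64KB blocks ratio: ${zlibL4Ratio(sample).toFixed(2)}`);
```

A doubled ratio on the same input would mean the compressed output is roughly half the size that this baseline produces.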

Members of the OCP will be able to run their own Project Zipline trials and contribute to the further development of the algorithm, and its hardware specifications.

Microsoft hopes that its technology will in future be integrated into a variety of silicon components and devices, ranging from smart SSDs and archival systems to cloud appliances, IoT and edge devices.

Making its compression technology available represents Microsoft’s latest contribution to OCP, more than five years after the company first began contributing to the open source project. Incremental contributions have been made ever since, with the company, for instance, delivering its Open CloudServer specs to the project in October 2014.

Google fixes ‘highly severe’ zero-day Chrome exploit


Keumars Afifi-Sabet

7 Mar, 2019

Google has confirmed that a Chrome browser patch released last week was a fix for a critical flaw that was being exploited by criminals to inject malware onto a user’s device.

The company is urging Chrome users to immediately update their web browsers to the latest version, released last week, in light of the discovery of a zero-day vulnerability rated ‘highly severe’.

The flaw, tracked as CVE-2019-5786, is a memory mismanagement bug in Chrome’s implementation of FileReader, an API included in all web browsers that allows web apps to read the contents of files stored on a user’s device or PC.

As a ‘use-after-free’ error, the bug means the browser attempts to access memory after it has been freed from Chrome’s allocated memory and, through this mechanism, could lead to the execution of malicious code.
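For context, this is what legitimate FileReader use normally looks like in a web page: a minimal, hypothetical sketch of reading a user-selected file, shown purely to illustrate the API rather than the exploit itself. The element ID is an assumption.

```typescript
// Typical, legitimate FileReader usage: read a file the user has chosen.
// This demonstrates what the API is for; it does NOT demonstrate CVE-2019-5786.
const input = document.querySelector<HTMLInputElement>("#attachment"); // hypothetical <input type="file">

input?.addEventListener("change", () => {
  const file = input.files?.[0];
  if (!file) return;

  const reader = new FileReader();
  reader.onload = () => {
    // reader.result holds the file contents once reading has finished
    const bytes = (reader.result as ArrayBuffer).byteLength;
    console.log(`Read ${file.name}: ${bytes} bytes`);
  };
  reader.onerror = () => console.error("Failed to read file", reader.error);
  reader.readAsArrayBuffer(file);
});
```

The vulnerability lay not in this kind of usage but in how Chrome managed the memory backing such reads internally.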

“According to the official release notes, this vulnerability involves a memory mismanagement bug in a part of Chrome called FileReader,” said Sophos’ security proselytiser Paul Ducklin.

“That’s a programming tool that makes it easy for web developers to pop up menus and dialogues asking you to choose from a list of local files, for example when you want to pick a file to upload or an attachment to add to your webmail.”

“When we heard that the vulnerability was connected to FileReader, we assumed that the bug would involve reading from files you weren’t supposed to. Ironically, however, it looks as though attackers can take much more general control, allowing them to pull off what’s called Remote Code Execution.”

This breed of attack means cyber criminals could inject malware onto unsuspecting users’ machines without any warning, or seize full control of a device.

The vulnerability was discovered by Clement Lecigne of Google’s threat analysis group on 27 February. Google’s technical program manager Abdul Syed said that the company had become aware of active exploits in the wild, but provided no further information as to the nature of these or who had been targeted.

Google initially released the fix on Friday 1 March, but updated its original announcement to provide further details around the flaw.

Exclusive: European Microsoft 365 outage sent Department for Education’s IT into “meltdown”


Keumars Afifi-Sabet

6 Feb, 2019

The Department for Education (DfE) endured a 12-hour IT nightmare as a result of last month’s European-wide Microsoft 365 outage.

Cloud Pro has learnt from a DfE source that the government department’s IT systems were paralysed on 24 January, with more than 6,000 of its employees locked out of their cloud-based Microsoft and email accounts.

Crisis meetings were held throughout the day as officials scrambled to deal with the consequences of a department-wide outage that was entirely out of their hands and, at the time, unexplained.

The civil servant, who requested not to be named, also confirmed that colleagues were forced to share confidential documents using Skype’s instant messaging.

“It beggars belief that we were locked out of email for an entire day, the whole department was in meltdown,” the source said.

A DfE spokesperson confirmed the department’s systems were partially disrupted by the European-wide Microsoft outage on 24 January, and that contingency plans were put in place to mitigate these effects.

“The Department for Education was one of many organisations impacted by Microsoft’s Outlook issues on Thursday 24 January,” the spokesperson told Cloud Pro. “The impact of disruption to email services was managed and services resumed within 24 hours.”

Staff used “smarter working technology” to continue delivering services as smoothly as possible, while “normal business continuity arrangements” were deployed to minimise the impact of disruption to mail services. The spokesperson would not confirm whether the ‘business continuity arrangements’ existed prior to 24 January. The department’s video conferencing and shared documents services were unaffected.

The Microsoft 365 outage struck organisations from 9:30am on 24 January, with firms across the continent experiencing severe IT difficulties. Microsoft acknowledged that it was experiencing problems with its services, and engineers restored them at around 8pm the same evening.

“This incident underlines the very real risk authentication delays can have on critical email systems, disrupting government business and preventing officials from sharing confidential information securely,” said Centrify’s vice president John Andrews.

“With rising levels of cyber attacks, it’s vital that all departments ensure privileged access to confidential data is a major priority, so that systems are protected from outsider threats at all times.”

The incident demonstrates just how dependent massive organisations, including critical government services, are on third-party cloud vendors providing an undisrupted service, at the risk of organisational paralysis.

Microsoft also suffered a global authentication-related outage four days later, with users in nations across the world, including the US and Japan, unable to log in to critical cloud-based services.

Global Microsoft outage leaves users unable to login


Keumars Afifi-Sabet

30 Jan, 2019

A host of Microsoft’s cloud services including Azure Government Cloud and LinkedIn sustained a global authentication outage just a few days after users were blocked from accessing Office 365 in Europe.

Users in parts of Europe, the US, as well as Australia and Japan were blocked from logging into their services between 9pm GMT yesterday and the early hours of this morning due to authentication issues.

A host of Microsoft Cloud services including Dynamics 365 and Office 365, as well as US Government cloud resources, were out of action for a few hours due to problems with its authentication infrastructure.

According to the outage detection service Downdetector, the issue may have affected a wide range of services including Skype, OneDrive, Office 365 and Outlook.com, which all experienced spikes in reports at roughly the same time. Users also complained across social media about difficulties logging into these platforms.

The issue, which has now been resolved, affected users attempting to log into new sessions, with the Azure status page indicating after an investigation that it concerned an external DNS provider, described as ‘Level 3’. Microsoft says that engineers mitigated the outage by failing over the affected CenturyLink DNS services to an alternative provider.

These issues were resolved shortly after midnight this morning, but lasted at least a few hours, predominantly affecting users in the Eastern hemisphere who were getting into the thick of their working days.

The global outage arose just five days after Microsoft customers were unable to access their Office 365 accounts for a full working day in Europe.

The company confirmed on Thursday, after initially maintaining that services were running smoothly, that its cloud-powered productivity suite was experiencing difficulties, with the continental outage lasting around nine hours in total. 

This rocky start to the new year follows a series of outages Microsoft sustained across its cloud services in the last few months of 2018, as the Windows maker struggled to provide 100% reliability.

AWS launches DocumentDB in a blow to open source


Keumars Afifi-Sabet

10 Jan, 2019

Amazon Web Services (AWS) has launched a managed document database service fully compatible with the widely-used open source software MongoDB.

Amazon DocumentDB, touted as a fast and scalable document database designed to be compatible with existing MongoDB apps and tools, has been built from the ground up but is modelled on the technology developed by the open source company.

The move is seen as a kick in the teeth for open source after MongoDB recently released a set of public licensing policies for third-party commercial use. These aimed to put a stop to large vendors exploiting the firm’s freely available technology.

AWS’ managed database will offer high performance and bring newfound scalability to document workloads, AWS chief evangelist Jeff Barr announced in a blog post, with capacity climbing from a base of 10GB up to 64TB in 10GB increments.

“To meet developers’ needs, we looked at multiple different approaches to supporting MongoDB workloads,” said AWS vice president for non-relational databases Shawn Bice. “We concluded that the best way to improve the customer experience was to build a new purpose-built document database from the ground up, while supporting the same MongoDB APIs that our customers currently use and like.

“This effort took more than two years of development, and we’re excited to make this available to our customers today.”

AWS says its latest product offers users the capacity to build “performant, highly available applications that can quickly scale to multiple terabytes and hundreds of thousands of reads and writes-per-second”.

The firm added that customers have found using MongoDB inconvenient due to the complexities that came with setting up and managing MongoDB clusters at scale.

DocumentDB uses a purpose-built SSD-based storage layer, with a six-way replication across three availability zones. The storage layer is distributed and self-healing, giving it the qualities needed to run production-scale workloads, Barr added.

AWS’ newly-announced service will fully support MongoDB workloads on version 3.6, with customers also able to migrate their MongoDB datasets to DocumentDB, after which they’ll pay a fee for the capacity they use.

Amazon DocumentDB essentially implements the Apache 2.0 open source MongoDB 3.6 application programming interface (API) by emulating the responses that a MongoDB client would expect from a MongoDB server.
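In principle, that compatibility means an existing MongoDB 3.6 client library can simply be pointed at a DocumentDB cluster endpoint. The sketch below, using the standard MongoDB driver for Node.js, illustrates the idea; the cluster address, credentials, CA bundle path, database and collection names are placeholders rather than real values, and connection options may vary by setup.

```typescript
// Sketch of pointing a standard MongoDB driver at an Amazon DocumentDB cluster.
// Endpoint, credentials and CA file path below are placeholders, not real values.
import { MongoClient } from "mongodb";

const uri =
  "mongodb://appuser:EXAMPLE_PASSWORD@example-cluster.cluster-xxxx.eu-west-1.docdb.amazonaws.com:27017" +
  "/?tls=true&tlsCAFile=rds-combined-ca-bundle.pem&replicaSet=rs0&retryWrites=false";

async function main(): Promise<void> {
  const client = new MongoClient(uri);
  await client.connect();

  // The same driver calls used against MongoDB 3.6 are issued against the emulated API.
  const orders = client.db("shop").collection("orders");
  await orders.insertOne({ item: "widget", quantity: 3, createdAt: new Date() });
  const recent = await orders.find({ quantity: { $gte: 1 } }).limit(5).toArray();
  console.log(recent);

  await client.close();
}

main().catch(console.error);
```

Because the application code stays at the driver level, the switch from a self-managed MongoDB 3.6 deployment to DocumentDB is, at least in theory, a matter of changing the connection string.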

DocumentDB’s six-way storage replication will also allow the service to fail over from one system to another within 30 seconds of a fault being detected. Meanwhile, it will give customers the option to encrypt their active data, snapshots, and replicas, with authentication enabled by default.

Version 3.6 of MongoDB is a little under a year-and-a-half old, having been released in November 2017, while the latest release, MongoDB 4.0.5, released in December, added several new features and faster performance.

The two companies previously clashed in April 2017, when AWS extended its Database Migration Service (DMS) to cover the migration of MongoDB NoSQL databases. At the time, DynamoDB only worked with AWS, whereas MongoDB’s own service retained compatibility with a plethora of cloud providers.

Mozilla planning revamped Thunderbird for 2019


Keumars Afifi-Sabet

3 Jan, 2019

Mozilla has announced its open source email client Thunderbird will benefit from a redesigned user interface (UI) and better Gmail support within the next year.

As part of its roadmap for 2019, the firm will grow its team by half a dozen members, from eight to 14 engineers, in order to make the service faster, more secure, and improve the user experience (UX).

Announcing the plans in a blog post, the Firefox developer said it will build on the progress made with the release of Thunderbird 60 in August, which saw major upgrades to its core code and improvements to security and stability.

“We heard from users who upgraded and loved the improvements, and we heard from users who encountered issues with legacy add-ons or other changes that hurt their workflow,” said Thunderbird community manager Ryan Sipes.

“We listened, and will continue to listen. We’re going to build upon what made Thunderbird 60 a success, and work to address the concerns of those users who experienced issues with the update.

“Hiring more staff will go a long way to having the manpower needed to build even better releases going forward.”

Mozilla will prioritise Thunderbird’s design and UX improvements, after receiving “considerable feedback” and complaints, with a primary focus on improving compatibility with Google’s Gmail.

This, specifically, will see better support for Gmail’s labels, a way to categorise messages, and improvements to how Gmail-specific features translate to the Thunderbird client.

Among the project’s engineering priorities for the new year will be looking into methods for measuring slowness, and developing fixes for specific bugs that degrade the user experience.

The new staff members will also be put to work re-writing parts of the core code and “working toward a multi-process Thunderbird”.

The client’s notifications and encryption settings will also benefit from an overhaul, the firm confirmed.

Thunderbird will seek to integrate its own notifications with a user’s operating system, while Mozilla will allow users to more easily secure their communications after an engineer was recently hired with a specific remit over security.

Mozilla hasn’t yet completely determined its roadmap, and wouldn’t guarantee that all changes outlined, including the UI redesign, would be available in the next Thunderbird release.