All posts by Keumars Afifi-Sabet

A retrospective on Diane Greene’s tenure as Google Cloud CEO


Keumars Afifi-Sabet

4 Dec, 2018

News of Diane Greene’s departure from Google Cloud after three years in charge came as a surprise, given the power and remit she had been handed to transform the business and bring it in line with its competitors.

Back then, and to some extent even now, what is considered to be the ‘big four’ of cloud – namely Amazon Web Services (AWS), Microsoft Azure, IBM and Google Cloud – actually resembled more of a two-horse race between AWS and Microsoft Azure.

The VMware co-founder was brought in to change all this: not only to overtake closest rival IBM in terms of market share, but to grow the business to the point where it could reasonably compete with AWS and Azure. She was expected to seize the initiative in what could only be seen as an uphill struggle, as the wider company pivoted away from hardware and advertising and towards cloud services.

Chasing the pack

When Greene was first brought in to lead Google Cloud in 2015, the company’s market share stood at little over 4%, chasing the likes of AWS (31%), Azure (9%), and IBM (7%). But the challenge wasn’t seen as insurmountable by any means, even though Amazon and Microsoft had a near ten-year head start over Google, which had only started getting serious about cloud in 2016.

Since her appointment, its platform has grown from “having only two significant customers” in Spotify and Snapchat to a handful of major corporates and large enterprises. These include 20th Century Fox, HSBC, Verizon, and Disney, with Netflix’s business (using Google Cloud for disaster recovery) a solid endorsement given its loyalty to AWS.

The rapid growth of Google Cloud’s case study base is seen by many as a testament to the platform’s superior technology against AWS and Azure. Greene, in her letter, said the company differentiates itself in areas such as security, AI, and its G Suite portfolio of workplace apps. She argued this growth is reflected in a ten-fold increase in attendance at this year’s flagship Google Cloud Next conference, hosted in San Francisco, compared with 2016.

The CEO said her organisation had worked hard to reform its approach after being subjected to harsh industry criticism when she first joined, confirming Google Cloud had taken regulator and analyst advice to heart. Ed Anderson, research VP and distinguished analyst at Gartner, told Cloud Pro during the event that the firm has proven it can deliver successfully and boasts a growing list of die-hard enterprise fans.

Thunderstorms on high

Despite its commercial wins, the firm has made slow progress in terms of market share, trailing at 3% versus AWS’ gargantuan 41.5% stake and Azure’s 29.4% (by application workloads). Why, then, is Google Cloud so optimistic as it moves into 2019 under fresh leadership from former Oracle executive Thomas Kurian?

That confidence comes from its ability to generate revenue. AWS may have carved out almost half of the market’s installed base to date, but Google has proven successful at attracting high-value contracts. As a result, Google Cloud claimed in February to be the ‘fastest growing’ public cloud provider after declaring its first billion-dollar financial quarter. Based on this trajectory, and a reinvigorated strategy under Kurian, Google Cloud will hope it can continue making such progress.

And yet questions linger over the nature of Diane Greene’s departure, following a relatively short-lived tenure, with several reports pointing towards internal conflict as a key driver. The ex-VMware chief made clear in her letter that she had initially expected to spend just two years in the role, extending her stay to three, so could it be as simple as that?

Conflicting reports, however, suggest Greene was at the heart of major disagreements with Google CEO Sundar Pichai over matters such as acquisition strategy and military contracts. Meanwhile, dissatisfaction may have brewed over the amount of money Google was pouring into its cloud arm in return for very little payoff in terms of market share.

A major challenge for Google Cloud was an unclear direction, underlined by tensions between Greene and Pichai – particularly over the US Department of Defense’s Project Maven contract. As part of the deal, Google lent its AI technology to the Pentagon to analyse footage using computer vision algorithms and improve performance.

Sources claim Pichai was sympathetic to the protests, led by more than 3,000 Google employees, while Greene resisted calls to sever the Pentagon relationship, seeing it both as a lucrative deal in itself and as a stepping stone to further government work.

Disagreements also brewed over sales strategy, as Greene’s representatives increasingly joined other Google teams, such as advertising and maps, in attempts to bundle cloud into wider offerings – efforts that proved a source of frustration to those departments’ chiefs.

Google Cloud in 2019

Kurian now faces a daunting task, but one that will almost certainly be more closely guided by Google’s Pichai. Perhaps we will see the company shift focus somewhat once Greene departs early next year. What’s more certain is that Google Cloud will continue its mission to further establish itself as a reliable and committed enterprise cloud platform, though this is unlikely to be evidenced by any substantial market share gain.

Google Cloud CTO Brian Stevens told Cloud Pro that he wants to see the company shift away from corporate messages and instead focus on customer case studies. He believes the company’s future lies in its ability to let happy customers sell its platform – a smart move for a company that’s unlikely to chip away at AWS’ market share.

Google’s next stage of growth is therefore likely to involve consolidation of the share it does have, and the poaching of those bigger, prize-winning fish from the AWS and Azure pools.

Everton FC ‘lucky’ to have SureCloud’s data protection suite in place for GDPR kickoff


Keumars Afifi-Sabet

29 Nov, 2018

With each passing season, the footballing industry seems increasingly detached from the realities most businesses face. This is underlined by extortionate sums exchanged between clubs, players, and supporters on a daily basis; not to mention a counterintuitive penchant for amassing mountains of debt to drive footballing success.

But the General Data Protection Regulation (GDPR) has affected every organisation large and small in the same way, with the sporting world no exception. Just as with startups, massive football clubs must comply with demands to bring data practices in line with modern standards – from appointing a Data Protection Officer (DPO), to training staff.

For Everton FC, the process didn’t get started until as late as January, when the club put its faith in the all-in-one, modular GDPR suite developed by SureCloud. Maintaining a database of 32,000 season ticket holders, 60,000 registered fans, 360 employees, players and agents, as well as third-party suppliers, through Excel spreadsheets is a laborious task, with or without GDPR. But a changing landscape spurred the Premier League stalwart into re-examining how it managed data and how it would handle GDPR’s additional demands.

As recently as January 2018, Everton was still using a series of spreadsheets to manage data across the football club, its community outreach programme, and its pre-school. This is when the club hired Ian Garratt as its DPO to single-handedly oversee the transition to SureCloud. But the platform wasn’t initially up to the standard expected, Garratt tells IT Pro, and needed a significant amount of custom tailoring to suit the club’s data protection needs.

“I hadn’t worked with a full management system before. I’d looked at OneTrust which is an equivalent, very template-based, and then what I’d worked on was spreadsheets, Excel and ones that we’d built in-house, at my old employer.

“So I went into SureCloud with a long list of tailoring. Most of them were only quite minor but there was quite a few.”

Although compliant by 25 May, implementation took so long that Everton considered hanging onto its spreadsheet-based system as the deadline fast approached. That would have posed a massive headache given how slow searching through spreadsheets would have been, not to mention that handling internal and external queries would have taken a great deal longer than with SureCloud’s touted functionality.

“By the time we started the discussions it was probably late January, early February,” Garratt continues. “Knowing we had to get all of the data mapping done, and in place before May, we were considering whether or not we had to do that spreadsheet-based, and import it into SureCloud afterwards, just because of the timing.

“But we were lucky in that they got it all done for us.”

Bringing the human touch for higher-quality data

Before joining Everton, Garratt was information governance manager at the Southport and Ormskirk Hospital NHS Trust. Having used spreadsheets in that post meant he could slot straight into the role with Everton, but he would have to adapt quickly to the new platform.

Fresh to the club, and the sole member of the data management team, he had to gain a wider understanding of what data each department held, and of their internal processes. He devised an approach to overcome these challenges all at once, sending questionnaires to each department and inputting the answers into SureCloud himself. But the key, Garratt says, lay in working through them with people one-on-one, personally guiding them through what needed to be sent back.

Instead of giving everybody within the organisation their own SureCloud login, Garratt decided to limit access to the club’s data to three individuals: himself, the director of risk, and the head of IT. They also decided against setting up email reminders and alerts, even though this approach takes longer. But why?

“I think just from my experience you get better quality input if you actually sit down with people and do it with them, rather than sending an email alert and asking them to update something themselves when they’re not specialists in the area,” he said. 

A matter of when, not if

During implementation, Garratt oversaw the migration of data from on-prem infrastructure to the cloud. But assurances over security, and the decision to go with SureCloud in the first place, rested with the club and had been settled before he joined.

“Football clubs are getting targeted more and more often. Certainly, from a backup point of view, I feel happier with it being hosted rather than living on a server,” Garratt says.

“The risk is always there. Cyber security is now on our risk register, and I think always will be. I’d expect it to be on every company’s register nowadays. The other threat I suppose is malicious staff.”

“If we did have an incident,” he explains, “we should straight away be able to see what the data types are, what the fields are, the volume, what systems there are, and what associated systems. So we’d be able to get a really good idea of the scale of the incident, and we’d be able to get that very quickly.”

And what about minor incidents, such as supporters’ email addresses inadvertently leaking due to a lapse in staff concentration, as struck West Ham FC in August?

“If that happened with us, any mass marketing should go up to our marketing department, and they’ve got a system that sends them all as individual emails – all personalised – so you don’t need to do it as BCC.

“If we had a lot of emails like that going out – and it’s largely to Hotmail or Gmail sort-of accounts, we’ve got systems that would flag them, quarantine them, then either myself or someone from the IT department would be able to review them… I imagine West Ham has probably got the same sort of system, and it just, for whatever reason, didn’t go through that system.”
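The pattern Garratt describes – one personalised message per recipient, rather than a single BCC blast – is straightforward to implement. The sketch below is a minimal illustration of that approach rather than Everton’s actual system; the SMTP host, sender address and recipient list are hypothetical placeholders.

```python
# Minimal sketch: send one personalised message per recipient instead of a
# single BCC blast, so no supporter ever sees another supporter's address.
# The SMTP host, sender address and recipient list are hypothetical.
import smtplib
from email.message import EmailMessage

def send_individual_emails(recipients, smtp_host="smtp.example.org"):
    with smtplib.SMTP(smtp_host) as smtp:
        for name, address in recipients:
            msg = EmailMessage()
            msg["From"] = "news@example-club.co.uk"
            msg["To"] = address  # a single visible recipient per message
            msg["Subject"] = "Matchday news"
            msg.set_content(f"Hi {name},\n\nHere is this week's update.")
            smtp.send_message(msg)  # each supporter receives their own copy

send_individual_emails([("Alex", "alex@example.com"), ("Sam", "sam@example.com")])
```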

Revisiting supplier contracts proves the biggest GDPR hurdle

The most difficult part of Everton’s wider compliance journey involved re-examining existing contracts with the club’s many suppliers. Although just a handful of suppliers have access to personal data held by the club, reaching out to renegotiate a GDPR-compliant addendum proved the toughest aspect for Garratt.

“The data mapping is what took the most time, but that’s because there was a lot of it. But getting contracts in place with suppliers with the GDPR-standard terms has been the hardest bit of the gameplay.

“They would’ve had general data protection and confidentiality terms, but GDPR stipulated a wider scope for what the contracts had to include – even things like assistance with impact assessments, acceptance of audits by us and by the ICO, and breach reporting.”

By using SureCloud, Garratt says, the club was able to list all of its third parties, the subsection of those charged with handling the club’s data, and whether each was based in an EU country or in a non-EU country with or without a data adequacy decision.
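The register Garratt describes maps naturally onto a simple record per supplier. The following sketch is purely illustrative – the field names are hypothetical, not SureCloud’s schema – but shows the kind of structure involved.

```python
# Hypothetical sketch of the supplier register described above; the field
# names are illustrative and not SureCloud's actual data model.
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    processes_personal_data: bool  # only a subsection of third parties do
    eu_based: bool
    adequacy_decision: bool        # relevant for non-EU suppliers only

suppliers = [
    Supplier("Kit Printing Ltd", processes_personal_data=False, eu_based=True, adequacy_decision=False),
    Supplier("Ticketing Platform Inc", processes_personal_data=True, eu_based=False, adequacy_decision=True),
]

# Suppliers needing GDPR-standard contract terms are those handling personal data
needs_gdpr_terms = [s.name for s in suppliers if s.processes_personal_data]
print(needs_gdpr_terms)
```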

But it was no substitute for the hard graft the club had to put in to ensure GDPR-compliant terms were included in each contract individually, with each supplier providing its own template and consulting its own legal team.

Microsoft Office 365 and Azure users locked out of accounts due to MFA issues


Keumars Afifi-Sabet

20 Nov, 2018

Azure and Office 365 users were unable to log in to their accounts yesterday due to issues with Microsoft’s multi-factor authentication (MFA) service.

From 4.39am on Monday until later that evening, users in the UK and western Europe, as well as in pockets around the world, were unable to access their Office 365 accounts.

Azure services such as Azure Active Directory were also closed off to users whose organisations enforced mandatory MFA.

Although Microsoft says its services are now operating as normal, this incident has angered organisations trying to convince their employees of MFA’s benefits, as well as those who have had to contend with similar outages in recent months.

The cause, according to Azure’s status history, lay with requests from MFA servers to a Europe-based database reaching their operational threshold, which in turn caused latency and timeouts.

Attempts to reroute traffic through North America failed and caused a secondary issue, in which servers became unhealthy and traffic was throttled to handle the increased demand.
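Throttling and timeouts of this kind are typically absorbed on the client side with retries and exponential backoff. The sketch below is a generic illustration of that pattern – it is not Microsoft’s MFA implementation, and request_token is a hypothetical stand-in for whatever call performs the authentication.

```python
# Generic retry-with-exponential-backoff pattern for an authentication
# endpoint that is throttling or timing out. request_token() is a
# hypothetical placeholder for the real MFA/token exchange call.
import random
import time

def authenticate_with_backoff(request_token, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return request_token()
        except TimeoutError:
            # Wait 1s, 2s, 4s... plus jitter so clients don't retry in lockstep
            time.sleep(2 ** attempt + random.uniform(0, 1))
    raise RuntimeError("authentication still failing after retries")
```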

“Engineers deployed a hotfix which eliminated the connection between Azure Identity Multi-Factor Authentication Service and a backend service. Secondly, engineers cycled impacted servers which allowed authentication requests to succeed,” Microsoft wrote.

“Engineers will continue to investigate to establish the full root cause and prevent future occurrences.”

The firm says it will publish a full analysis of the outage within the next couple of days.

Error messages that users received upon trying to access their Office 365 and Azure accounts

Monday’s issues are the latest in a string of prominent Microsoft Azure and Office 365 outages customers have had to suffer in recent months, with the previous incident occurring just three weeks ago.

The days-long outage, which struck in late October, left predominantly UK users unable to log in to Office 365 due to additional login prompts appearing after user credentials had already been entered.

Another global outage in September affected Azure and Office 365 users across the world after a “severe weather event” knocked one of Microsoft’s San Antonio-based servers offline.

“With less than a month between disruptions, incidents like today’s Azure multi-factor authentication issue pose serious productivity risks for those sticking to a software-as-a-service monoculture,” said Mimecast’s cyber resilience expert Pete Banham.

“With huge operational dependency on the Microsoft environment, no organisation should trust a single cloud supplier without an independent cyber resilience and continuity plan to keep connected and productive during unplanned, and planned, email outages.

“Every minute of an email outage could cost businesses hundreds and thousands of pounds. Without the ability to securely log in, knowledge worker employees are unable to do their jobs.”

Cloud Pro approached Microsoft for comment.

Corporate data at greater risk in the cloud than thought, report warns


Keumars Afifi-Sabet

1 Nov, 2018

Organisations are putting too much faith in cloud service providers’ ability to keep data secure without applying their own controls, researchers claim.

Companies sustain on average 14 misconfigured infrastructure-as-a-service (IaaS) instances at any given time, leading to 2,269 misconfiguration incidents per month, according to a report released this week.

McAfee’s ‘Cloud Adoption & Risk’ paper highlighted several concerning facets of cloud security, including the fact that the amount of sensitive corporate and personal data held and shared in the cloud is rising in conjunction with the number of security incidents.

The report found that 21% of files held in the cloud contain sensitive data – up from 17% two years ago. Cloud threats, meanwhile, have risen in tandem – from 20.4 security incidents per month in 2016, to 24.5 in 2017, to 31.3 per month this year.

“As we all take advantage of the cloud, there’s one thing we can’t forget – our data,” the report said. “Even when using a SaaS service we are still responsible for the security of our data in the service and need to ensure it is only accessed appropriately.

“When using an IaaS/PaaS service, we additionally are responsible for the security of our workloads in the service and need to ensure that we are configuring the underlying application and infrastructure components appropriately.”

AWS leading the pack

The report pinpointed Amazon Web Services (AWS) S3 buckets as a culprit in many organisations’ security gaps, with an estimated 5.5% of all S3 buckets in use misconfigured to be publicly readable.

This chimes with findings published earlier this year that showed misconfigured S3 buckets play a significant role in 12,000 terabytes of publicly-exposed sensitive corporate data found online by researchers.

AWS “absolutely leads the pack” in terms of its popularity with organisations, playing host to 94% of all access events – although 78% of organisations use AWS in conjunction with Azure, typically as part of a multi-cloud strategy.

McAfee also stressed that the dangers of misconfiguration ultimately come down to the data, with organisations deploying data loss prevention (DLP) strategies experiencing 1,527 DLP incidents per month on average.

Among the most common AWS misconfigurations seen are unrestricted outbound access, unused security groups, and S3 bucket encryption not being turned on.
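Misconfigurations like these are exactly what a routine, automated audit can surface. The sketch below, using AWS’s boto3 SDK, flags buckets that are publicly readable via their ACL or that have no default encryption configured; it is illustrative only, assumes credentials with read access to the account, and a real audit would also check bucket policies, public access block settings and security groups.

```python
# Illustrative audit sketch: flag S3 buckets readable by "AllUsers" or
# lacking a default encryption configuration. Assumes AWS credentials with
# permission to list buckets and read their ACL/encryption settings.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # Publicly readable via the bucket ACL?
    acl = s3.get_bucket_acl(Bucket=name)
    public = any(
        grant["Grantee"].get("URI", "").endswith("AllUsers")
        for grant in acl["Grants"]
    )

    # Default (bucket-level) encryption configured?
    try:
        s3.get_bucket_encryption(Bucket=name)
        encrypted = True
    except ClientError:
        encrypted = False  # no server-side encryption configuration found

    if public or not encrypted:
        print(f"{name}: public={public}, default_encryption={encrypted}")
```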

‘The perception gap is shocking’

McAfee’s report also highlighted a number of glaring perception gaps in cloud security, including a stark gap between the number of cloud services employees believe are in use in their organisation and the number actually deployed.

A previous survey published in April showed that the average response when asked how many cloud services are deployed across an organisation was 31. The security firm’s latest findings show the reality is 1,935, on average.

“The perception gap is shocking,” the report said, “meaning that 98% of cloud services are not known to IT – leading to obvious cloud risk.”
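That 98% figure follows directly from the survey and usage averages quoted above, as a quick check shows:

```python
# Quick check of the "98%" claim using the report's own averages.
known_to_it = 31       # services IT believes are in use (survey average)
actual_in_use = 1935   # services actually in use (McAfee's measured average)

unknown_share = 1 - known_to_it / actual_in_use
print(f"Share of cloud services unknown to IT: {unknown_share:.1%}")  # ~98.4%
```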

Asked whether they trust their cloud providers to keep data secure, 69% of respondents to the previous survey said they did, while 12% claimed the service provider bears sole responsibility for securing their data.

But “cloud security is a shared responsibility” according to McAfee’s report, “and no cloud provider delivers 100% security (including data loss prevention (DLP), access control, collaboration control, user behaviour analytics (UBA), etc.)”.

“It’s likely therefore that organisations are underestimating the risk they are entering by trusting cloud providers without applying their own set of controls,” it continued.

The insider threat

Steve Smith, senior site reliability engineer at IT management firm Claranet, said the concerns raised hinge less on the services themselves than on the people using them.

“The cloud security challenges highlighted in this report have little to do with the platform itself, but everything to do with the people using it and, in our experience, people are the biggest weakness here,” he said.

He added that the major cloud providers, such as AWS, offer a series of default settings designed to support configuration, but that it’s easy to get things wrong without knowing how to use the platform.

“We’ve seen many AWS configurations that end-user businesses have developed themselves or have worked with partners that don’t have the right experience, and, frankly, the configurations can be all over the place.

“A click of a button or slight configuration change can have a major impact on your security posture, so it’s important to get a firm grip of the access controls and have safeguards in place to catch mistakes before they hit the production environment.”

McAfee’s report revealed the majority of cloud security incidents – 14.8 of the 31.3 experienced on average per month – are insider threats. These may include straightforward but significant mistakes such as sharing a spreadsheet with sensitive personal data, or malicious activity such as a sales employee downloading a full contact list before leaving for a rival firm.

The research found 94.3% of organisations experience at least one such incident per month, while 58.2% experience privileged user threats – such as an administrator accessing data in an executive’s account.

Mitigating cloud risks

The security company issued three core recommendations as to how businesses and organisations can bolster their strategy, including routine audits, understanding where sensitive data is held, and locking down sharing.

Leading IaaS and PaaS platforms, such as AWS, Azure, and Google Cloud Platform, are a rapidly growing alternative to on-prem infrastructure, the report said, and so need to be regularly audited to get ahead of misconfigurations before “they open a major hole” in security defences.

Some of the most sensitive data, meanwhile, is held on platforms such as Office 365 and Box. McAfee recommended in its report that organisations grasp where their most sensitive data is held in order to reduce their exposure to risk, and extend DLP policies accordingly.

Controlling how data is shared, moreover, and implementing collaboration restrictions on documents can mitigate the risk of inadvertent exposure – caused, for example, by share settings left open to “anyone with a link” or by documents being sent to personal email addresses.

Days-long Microsoft outage leaving users unable to log in to Office 365


Keumars Afifi-Sabet

31 Oct, 2018

Microsoft is investigating the cause of a lengthy Office 365 outage that has persisted for several days, with business customers, predominantly based in the UK, experiencing difficulties signing in to their accounts.

Users have been reporting problems with logging in to their Office 365 accounts across social media since Friday 26 October, with system information site DownDetector also seeing a spike in user complaints.

These complaints receded over the weekend, but resumed again on Monday 29 October, and have been peaking during working hours since. The issue appears to be predominantly affecting users in the UK.

The issue manifests as additional login prompts appearing after users have entered their details into the username and password fields. The appearance of a second “security prompt” means many business users have been unable to access critical services.

Microsoft confirmed yesterday it was investigating the issue, adding a handful of recently-made changes were rolled back in an attempt to resolve the symptoms.

“We rolled back recent changes that were made in the environment and some customers are reporting that impact has been mitigated for SP152610 and EX152471. The source of the issue remains under investigation,” Microsoft tweeted.

“If you have a user that is actively experiencing impact, please reply to us or contact support so we can gather additional information to assist with our investigation.”

A further update, released by Microsoft at 13.00 GMT today, suggested no additional reports of disruption had been received, and that the “impact was remediated on Tuesday, October 30” late in the evening.

But a handful of customers replying to Microsoft’s tweet suggested this was not the case, with one user, John Gardner, saying he still had at least three users affected.

The frequency of users registering Office 365 complaints in the last few days on DownDetector

This issue, which has persisted for more than three working days, is the latest in a series of high-profile outages that Microsoft has sustained in recent months.

Microsoft Azure and some Office 365 services suffered disruption for more than 24 hours in September following a “severe weather event” that knocked an entire data centre offline.

Customers in the US, and a host of European countries were unable to access a number of cloud-based apps after lightning strikes caused a power surge to Microsoft’s San Antonio, Texas-based data centre.

In April, meanwhile, a similar but less severe Office 365 outage meant users were unable to log in to their accounts for a short period, affecting customers in the UK, France, the Netherlands and Belgium.

A map of the areas affected by the latest Office 365 outage, taken on Monday 29 October from DownDetector

“The continuity of business critical systems is vital for organisations today to maintain productivity and effective customer service,” said cyber resilience expert at Mimecast, Pete Banham.

“This Office 365 issue is a clear reminder that in the cloud age, it’s often down to individual organisations to ensure they have a plan B.”

“Employees can also create security and compliance risks during downtime when using unsanctioned or consumer IT services to get the job done.

“We are urging organisations to consider a cyber resilience strategy that assures the ability to recover and continue with business as usual.”

Asked to assess the increasingly cloud-centric business ecosystem in light of recent outages, Banham told Cloud Pro it’s a balancing act between bottom-line cost reduction and putting faith in potentially unreliable third parties.

“The merit of this approach is most likely to be a bottom line cost reduction on paper, but the true cost of a single outage could negate this entirely.

“An ecosystem where businesses wholly depend on the reliability of cloud hosting services is unlikely to be sustainable. After all, few organisations can tolerate lengthy or frequent disruption to their IT services.

“There should always be a backup plan that assures the ability to recover and continue with business as usual despite an outage. This particular incident is another reminder that relying on a single cloud service isn’t the most effective cyber resilience strategy.”

Cloud Pro approached Microsoft for further comment, and for details as to how the issue arose. The company did not respond at the time of writing.

Salesforce in fresh row over US immigration policy after $250,000 donation is rejected


Keumars Afifi-Sabet

20 Jul, 2018

Salesforce faces fears of a boycott over its work with the US Customs and Border Protection (CBP) agency after a nonprofit immigration advocacy group rejected its $250,000 donation.

Less than a month after the company was criticised for its work with the CBP, company executives offered the Refugee and Immigration Center for Education and Legal Services (RAICES) a hefty donation – only for the Texas-based advocacy group to reject the money.

More than 650 Salesforce employees urged CEO Marc Benioff in June to review the company’s involvement with CBP in light of its role in separating families at the border.

But the company defended its work with the agency, predominantly involving products such as Community Cloud and Service Cloud to modernise recruitment and engage with citizens, saying it was “not aware of any Salesforce services being used by CBP for this purpose”.

Benioff later confirmed the company would not be making any changes to its work with CBP, but would be committing $1 million to help families affected by the separation policy.

After Salesforce pledged $250,000 to RAICES, the organisation said it would only accept the money if the company cancelled its work with CBP. The immigration advocacy group told Salesforce that “when it comes to supporting oppressive, inhumane and illegal policies, we want to be clear: the only right action is to stop”, in an email exchange seen by Gizmodo.

RAICES’ stance echoes that of 22 Salesforce customers who also this week called for the cloud computing company’s CEO to “cut your contract” in an open letter that stated donations are not enough.

“We are nonprofits, startups, and businesses that are Salesforce’s customers. The tools that Salesforce provides helps us achieve our mission,” the letter read.

“However, we are absolutely appalled that Salesforce is providing assistance to government agencies that are violating human rights. We cannot, in good conscience, ignore this issue.

“We have seen that Salesforce has spoken out against the government’s inhumane practice of separating and detaining children.

“We appreciate that and the donation they have pledged to make to affected families. But that is not enough. As long as Salesforce keeps its contracts with Customs and Border Protection, they are still enabling the agency to violate human rights.”

Cloud Pro approached Salesforce for comment but had not received a reply at the time of writing.

Google links US and Europe clouds with transatlantic subsea cable


Keumars Afifi-Sabet

18 Jul, 2018

Google is about to embark on building a massive subsea cable spanning the length of the Atlantic Ocean – from the French coast to Virginia Beach in the United States.

Claimed to be the first private transatlantic subsea cable, and named ‘Dunant’ after Nobel Peace Prize winner Henri Dunant, the latest addition to Google’s infrastructure network aims to increase high-bandwidth capacity and create highly secure cloud connections between the US and Europe.

Google claims the new connection – which will support the growth of Google Cloud – will also serve its business customers by guaranteeing a degree of connectivity that will help them plan for the future.

Explaining the project in a blog post, Google’s strategic negotiator, Jayne Stowell, said the decision to build the cable privately, as opposed to purchasing capacity from an existing cable provider or building it through a consortium of partners, took several factors into account, including latency, capacity and guaranteed bandwidth for the lifetime of the cable.

Dunant follows Google’s plans to build another massive private cable spanning 10,000km between Los Angeles, California and Chile, dubbed Curie, one of three cables comprising a $30 billion push to expand its cloud network across the Nordics, Asia and the US.

Both Curie and Dunant originated in the success of relatively short pilot cables, dubbed Alpha and Beta as a nod to their software development process.

“Our investments in both private and consortium cables meet the same objectives: helping people and businesses take advantage of all the cloud has to offer,” Stowell said.

“We’ll continue to look for more ways to improve and expand our network, and will share more on this work in the coming months.”

Google’s efforts to build a transatlantic cable follow the completion of a joint project by fellow tech giants Microsoft and Facebook in September last year, named Marea, which connected Spain with the east coast of the US.

The cable stretches approximately 6,600km and weighs 4.65 million kg – or, as Microsoft put it at the time, the equivalent of 34 blue whales.


Dropbox plans SMR deployment to transform its Magic Pocket infrastructure


Keumars Afifi-Sabet

14 Jun, 2018

Dropbox has announced plans to deploy shingled magnetic recording (SMR) technology on a massive scale in a bid to transform its in-house cloud infrastructure.

The file hosting platform said deploying SMR drives on its custom-built Magic Pocket infrastructure at exabyte scale will increase storage density, reduce its data centre footprint and lead to significant cost savings, without sacrificing performance.

Dropbox says it is the first company to deploy SMR hard drive technology on such a scale. 

“Creating our own storage infrastructure was a huge technological challenge, but it’s already paid dividends for our customers and our business,” said Quentin Clark, Dropbox’s senior vice president of engineering, product, and design.

“As more teams adopt Dropbox, SMR technology will help us scale our infrastructure by providing greater flexibility, efficiency, and cost savings. We’re also excited to make this technology open-source so other companies can benefit from it.”

SMR, a hard drive technology that allows tracks on a disk to overlap one another like roof shingles, will be deployed across a quarter of the Magic Pocket infrastructure by 2019, according to Dropbox, with plans underway to open source the test software created in the process in the coming months.

Magic Pocket is the name of Dropbox’s custom-built infrastructure project that was rolled out after the file sharing company decided to migrate away from Amazon Web Services (AWS). The company initially built a prototype as a proof of concept in 2013, before managing to serve 90% of its data from in-house infrastructure in October 2015.

In what Dropbox describes as a “significant undertaking”, SMR technology was chosen for its ability to expand disk capacity from 8TB to 14TB while maintaining performance and reliability. Drives were sourced from third parties before the company designed a bespoke hardware ecosystem around them, also creating new software to ensure compatibility with the Magic Pocket architecture in the process.

“SMR HDDs offer greater bit density and better cost structure ($/GB), decreasing the total cost of ownership on denser hardware,” the Magic Pocket and hardware engineering teams explained. “Our goal is to build the highest density Storage servers, and SMR currently provides the highest capacity, ahead of the traditional storage alternative, PMR.

“This new storage design now gives us the ability to work with future iterations of disk technologies. In the very immediate future we plan to focus on density designs and more efficient ways to handle large traffic volumes.

“With the total number of drives pushing the physical limit of this form factor our designs have to take into consideration potential failures from having that much data on a system while improving the efficacy of compute on the system.”
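The density argument is easy to see with a back-of-the-envelope comparison: for the same total capacity, 14TB SMR drives mean markedly fewer drives – and therefore fewer servers, racks and watts – than 8TB drives. The sketch below is purely illustrative; only the 8TB and 14TB capacities come from the article, and the storage target is a hypothetical figure.

```python
# Back-of-the-envelope illustration of the density argument. The storage
# target is a hypothetical figure; only the 8TB and 14TB drive capacities
# are taken from the article.
PMR_CAPACITY_TB = 8
SMR_CAPACITY_TB = 14

def drives_needed(total_tb, capacity_tb):
    return -(-total_tb // capacity_tb)  # ceiling division

target_tb = 100_000  # a hypothetical 100PB storage target
print("8TB drives needed: ", drives_needed(target_tb, PMR_CAPACITY_TB))   # 12,500
print("14TB drives needed:", drives_needed(target_tb, SMR_CAPACITY_TB))   # 7,143
# Fewer drives for the same capacity is what drives the $/GB, power and
# data centre footprint savings Dropbox is targeting.
```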

Towards the end of the year, the file hosting service says its infrastructure will span 29 facilities across 12 countries, with Dropbox projecting huge cost-saving and increased storage density benefits if SMR deployment is deemed a success.

CEBIT 2018: Huawei launches hybrid cloud offering on Azure Stack


Keumars Afifi-Sabet

12 Jun, 2018

Huawei has launched a hybrid cloud service built for Azure Stack, Microsoft’s offering that brings Azure into customers’ datacentres as a private cloud.

Built on Huawei’s FusionServer V5 servers and CloudEngine switches, Huawei said the tool will allow enterprises to enable digital transformation projects by bringing Azure cloud services to on-premise sites where there is low connectivity, such as an aircraft or an oil rig.

Huawei is one of many firms working with Microsoft on producing services for Azure Stack, but speaking at CEBIT 2018, Microsoft partner director for Azure Stack, Vijay Tewari, labelled the vendor’s relationship with Huawei in particular as deep and strong.

“In terms of working with partners, the amount of time that Huawei [took] to launch the product was the shortest time it took as compared to any other partner, so we have a very strong engineering relationship with [president of server product line] Qiu Long and others at Huawei,” he said.

Huawei believes it is pivotal to pair its infrastructure with partners’ applications as it designs technology for use in smart cities, the cloud, and networking.

The Chinese networking giant likened digital transformation to a “symphony” as it promoted partnerships with a range of companies including Microsoft and UK-based Purple Wi-Fi; it is offering the latter its networking infrastructure, allowing the Wi-Fi platform to extend the range of analytics tools it can offer customers.

Purple Wi-Fi will be able to offer customers more detailed tracking information for consumers, with a view to boosting shopping experiences.

The company also outlined how it plans to use its partnerships with local companies to take projects to a global scale, with Vincent Pang, president of Huawei western Europe, explaining how a number of small-scale initiatives in Paris and London have helped the company win business elsewhere in the world.

“We want to build a local road here, we want to work with our local partners, we want to have more innovation to create end-to-end best practice here in Europe – but it’s not only for the local innovations, but how we can use these for the global market, and global vertical transformations,” he said.

Pang explained how a smart water project in Paris paved the way for expansion into Shanghai, while a smart logistics project with London’s DHL helped the company win a business case for a car manufacturer in China.

Huawei’s attempt to position itself as a leading player in the smart city scene arose with the launch of the ‘Rhine Cloud’, a smart city and public services cloud platform, expanding on an initial memorandum of understanding signed earlier this year.

The new framework agreement extends the commitment to building a smart city platform in Duisburg, Germany, to serve as a model the company hopes to export to the rest of western Europe.

Huawei’s first smart city digital platform includes five resource coordination capabilities – IoT, big data, a geographic information system (GIS) map, video cloud, and converged communications – all of which combine to share basic resources with partners and facilitate the development of applications.

Martin Murrack, director of digitisation for Duisberg, outlined some of the benefits citizens should expect from the smart city collaboration with Huawei, including free Wi-Fi access and innovations in education, as well as unveiling the first Rhine Cloud-based SaaS platform, which digitises indoor environments, developed by Navvis.