Adding cloud to your analytics ecosystem: A guide

It’s a common question: what should a business executive consider when determining the best approach for adding cloud to an analytic ecosystem?

We at Teradata have thought a lot about this topic because our customers have substantial environments with significant amounts of data, many dozens or hundreds of applications, and thousands of users all over the world.

Planning for such a “large” scenario in the cloud is vastly different than thinking about what would be required for a “small” or greenfield system because the needs of the latter are orders of magnitude less taxing than those of the former.

And to be clear, I’m not suggesting that tiny is any less important than big; what I’m saying is that the solutions used to address the “large” set of challenges are vastly different than the solutions used to address the “small” set of challenges. This is the case regardless of technology, too: stocking your refrigerator is quite different than stocking an entire supermarket. It’s the same with analytics in the cloud.

As the saying goes, “Quantity has a quality all its own” – and good luck to the executive who assumes that what works in a small analytic proof of concept (POC) will necessarily scale up to work in a large, mission-critical environment. Throughout our history we’ve seen companies leave and then boomerang back once they realise that the grass is NOT greener on the other side of the fence. Buyer beware.

Do this – don’t do that

Advice: when contemplating a migration to the cloud, or when constructing a hybrid (combination of on-premises and cloud) architecture, never start with the technology and then look for ways to apply it to your requirements.

The reverse is the best approach: start with business requirements and then evaluate which tradeoffs, architectures, tools, and mitigation plans are needed to meet the needs. Failing to start with business requirements often leads to an expensive, short-lived “project” rather than an effective, long-term solution. Don’t be “that guy” who assumes. Trust, but verify.

From a business requirements perspective, an ideal cloud analytic solution is one that:

  • Blends seamlessly with existing (usually on-premises) infrastructure and applications
  • Takes advantage of native cloud capabilities, including security and integration
  • Avoids any sort of vendor lock-in which would constrain choice and flexibility in the future

Unfortunately, most folks give too little thought to the first point. Greenfield cloud deployments are extremely rare – especially for any organisation which is not a startup – yet it is easiest and simplest to talk about a scenario in which there is nothing to bring along. Having zero legacy systems or technical debt is probably not (your) reality, so blue-sky thinking can only take one so far.

Pick your partner with care

Some of the characteristics that go into what we advocate as a cloud solution include:

  • Consistent user experience regarding the tools, languages, and operating procedures with which your users are already familiar – thus speeding time-to-value and creating an all-encompassing ecosystem rather than separate silos of analytics
  • Consistent enterprise-class security that lines up with existing corporate policies and role-based access controls – thereby making it easy to operate, govern, and audit across all systems as a cohesive entity. Again, the integrated whole is MUCH more valuable than an itemised sum of the parts
  • Advanced optimiser and workload management which enables users and administrators to monitor and manage performance and cost – and adjust manually or automatically to yield the desired mix of outcomes (benefit) vs. inputs (cost)

Let’s cut to the chase: the best (and fastest) way to achieve the right cloud solution for you is to partner with experts who have “been there and done that”. As with any complex undertaking offering high return yet also high risk, starting with a trusted advisor is the soundest approach to a successful outcome.

As the saying goes, “Only a fool learns from his own mistakes. The wise man learns from the mistakes of others.”


Microsoft acquires Java specialist jClarity to boost Azure workloads


Bobby Hellard

20 Aug, 2019

Microsoft has acquired software platform jClarity in a bid to drive more Java workloads to Azure.

The deal will see jClarity’s AdoptOpenJDK project move to Azure, where its data science teams will add their expertise to Java projects.

AdoptOpenJDK is a community of Java users, developers and vendors, which includes the likes of Amazon, IBM, Pivotal and Red Hat. The organisation is an advocate of OpenJDK, the open-source project which forms the basis of the Java programming language and platform.

Microsoft said that it had seen an increase in large-scale Java installations on Azure, particularly with platforms like Minecraft and Adobe.

“At Microsoft, we strongly believe that we can do more for our customers by working alongside the Java community,” the company said in a blog post. “The jClarity team, with the backing of Microsoft, will continue to collaborate with the OpenJDK Community and the Java ecosystem to foster the progress of the platform.”

Microsoft said that more than half of compute workloads on Azure run on Linux, making it a great platform for open-source software, which includes Java.

For jClarity, the team will continue to work out in the open in various Java communities, its CEO, Martijn Verburg said in a blog post. But the company is anticipating a greater contribution to the Java community with the support of Microsoft.

“It’s always been jClarity’s core mission to support the Java ecosystem,” Verburg said. “We started with our world-class performance tooling and then later became a leader in the AdoptOpenJDK project.

“Microsoft leads the world in backing developers and their communities, and after speaking to their engineering and programme leadership, it was a no brainer to enter formal discussions. With the passion and deep expertise of Microsoft’s people, we’ll be able to support the Java ecosystem better than ever before.”

Cloud Security Alliance publishes ‘egregious 11’ list of top threats to the cloud

If one other thing besides death and taxes is certain, it is that cloud security will remain a key talking point. Whose responsibility is it exactly – and why does the shared responsibility model continue to cause havoc?

Some areas however can be nailed down much more solidly. The Cloud Security Alliance (CSA) has issued what it calls the ‘egregious 11’ in its latest report, giving organisations an up-to-date list of the biggest cloud security concerns to aid better risk management decision making.

Many of the biggest security risks are ones with which regular readers of this publication will be more than familiar. Data breaches, insider threats and account hijacking, along with account misconfiguration, are usually at the sharp end of any public snafus – from Capital One in the former category to Facebook in the latter.

As a result, the CSA recommendations are more mantras than anything new. Data is rapidly becoming the primary target for cyberattacks, while data accessible via the Internet is the most vulnerable asset to misconfiguration. Companies need to bring automation into the equation to remediate any misconfiguration issues.  

The section subtitled ‘lack of cloud security architecture and strategy’ is an interesting one – and it is here where the report notes the lack of awareness around shared responsibility as key. “The functionality and speed of migration often take precedence over security,” the report notes. “Implementing security architecture and developing a robust security strategy will provide organisations with a strong foundation to operate and conduct business activities in the cloud.

“Leveraging cloud-native tools to increase visibility in cloud environments will also minimise risk and cost. Such precautions, if taken, will significantly reduce the risk of compromise.”

There is some good news, however. The previous report from the CSA focused on what it called the ‘treacherous 12’. Even for those with a less-than-stellar grasp of mathematics, it is worth noting that things are going in the right direction, albeit slowly.

The report argues that many traditional cloud security issues which fall to vendors are no longer seen as a major threat. These include denial of service, shared technology vulnerabilities, and CSP data loss.

Yet while these areas can be seen as being well addressed, the other interpretation is that security issues which are the result of management decisions around cloud strategy and implementation are of much more concern.

“The complexity of cloud can be the perfect place for attackers to hide, offering concealment as a launchpad for further harm,” said John Yeoh, CSA global vice president of research. “Unawareness of the threats, risks and vulnerabilities makes it more challenging to protect organisations from data loss.

“The security issues outlined in this iteration of the report, therefore, are a call to action for developing and enhancing cloud security awareness, configuration and identity management,” Yeoh added.

You can download and read the full report here (email required).


How the rise of 5G will disrupt cloud computing as we know it

The rollout of 5G has begun in earnest. Verizon and other US carriers have already unveiled the tech and its groundbreaking speeds in a few key markets, and across the pond in the UK, some major carriers are widely expected to deploy 5G later this summer. Many expect 5G to be equally as disruptive – if not more so – than cloud computing has been over the past few years.

All of this raises questions when it comes to the cloud. How will 5G’s breakneck mobile speeds affect cloud computing and many of the most common applications of it? What are examples of the cloud – and technology more generally – that 5G will markedly improve versus those it may render obsolete?

Why 5G is such a big deal

Before we dive too deeply into how 5G will affect the cloud, it will be useful to have at least a layman’s understanding of what 5G actually is and how it works. Like the network standards before it, 5G employs radio frequency (RF) waves to transmit and receive data. For a network to be classified as 5G, it must offer minimum speeds of 20 Gbps down and 10 Gbps up. For comparison’s sake, the minimum download and upload speeds for the first iteration of 4G were 150 Mbps and 15 Mbps, respectively.

As big of an increase as these speeds represent, 5G also presents an equally groundbreaking decrease in latency. Latency is the time it takes for two devices on a network to respond to one another. 3G networks had latency of about 100 milliseconds; 4G is around 30 milliseconds; while 5G will be as low as 1 millisecond, which is for all intents and purposes instantaneous.

What 5G will improve

Thanks to the insanely low latency mentioned above, anything that relies on speed will see obvious improvements. Near real-time control of robotics will open up new worlds – and indeed, already has – when it comes to remote surgery, which will literally save lives.

With 5G will come incalculable improvements to the Internet of Things (IoT), which is much more than just being able to tweet from your refrigerator. Smart cities rely on IoT to reduce traffic congestion, stay on top of water distribution needs, increase security, and even decrease pollution. Agriculture uses IoT devices to be more efficient and thus increase the globe’s food supply. 5G will also vastly improve truly autonomous, self-driving cars to the point where wide adoption may very well become a reality. All of these will help keep us safer, healthier, and alive longer.

If it seems like 5G will only bring improvements, that might not be the case – especially for the cloud.

Possible impact to the cloud

Trying to come up with a definitive list of every possible aspect of cloud computing that 5G will affect is likely an impossible task, as we won’t fully know until it’s widely rolled out and customers and enterprises have had a chance to acclimatise to it. But even in these days of 5G infancy, there are definite known knowns.

First, as we have covered, 5G will effectively eliminate latency, allowing devices to connect nearly instantly. What does that mean for the cloud? In theory, it could mean the death knell for cloud computing as a whole.

Think about it. One of the main benefits of the cloud is that numerous devices – either within an organisation for a private cloud, or any user with an Internet connection for a public cloud – can connect to and exchange data with a central machine or drive located in the cloud. For an employee sharing a large video file with a colleague who’s working from home that day, the cloud makes it simple: put it on the shared drive, wait for it to upload, notify your co-worker it’s up there, and he or she can download it from the same shared drive.

But why go through all that if your device can connect with your colleague’s device with only a millisecond of latency and a minimum connection speed of 20 Gbps down and 10 Gbps up? That large 10 gigabyte video can be transferred from user to user directly in about eight seconds and there’s no need to go through an additional step or use an online repository.
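
To make that arithmetic explicit, here is a back-of-the-envelope sketch in Python. It assumes the sender’s uplink is the bottleneck and ignores protocol overhead and congestion; the 10 Gbps and 15 Mbps figures are simply the minimum uplink rates quoted earlier.

```python
# Rough check of the transfer-time claim above.
# Assumes the sender's uplink is the bottleneck; ignores protocol overhead.

def transfer_seconds(file_gigabytes: float, uplink_gbps: float) -> float:
    """Time to push a file of `file_gigabytes` GB over an uplink of `uplink_gbps` Gbps."""
    gigabits = file_gigabytes * 8      # bytes -> bits
    return gigabits / uplink_gbps

print(transfer_seconds(10, 10))      # 5G minimum uplink (10 Gbps): 8.0 seconds
print(transfer_seconds(10, 0.015))   # early 4G uplink (15 Mbps): ~5,333 seconds (~1.5 hours)
```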

While the cloud will still likely have significant use cases in a post-5G world – especially if cloud providers are ready to adapt – it’s not too much of a stretch to envisage a world where the cloud is largely a thing of the past.


Top tech considerations for startups


Sandra Vogel

20 Aug, 2019

It’s challenging and daunting to launch a new business. Much has to be done in a short space of time including taking care of legal and financial matters that just won’t wait. It’s a stressful time. Technology is one of the key things you need to get right from the outset. But what tech should you prioritise at the startup stage, and how should you go about taking the right approach to it and getting best value? Here’s a primer.

Work out what you need

It’s tempting to sit down with a blank sheet of paper, write down every type of technology you can think of and decide you need all of it from day one to make your business flourish. That might not actually be the case, however.

Your business plan will include strategies for growth; referring back to this document, it will almost certainly become apparent that some things might not be needed right at the outset, or at all.

For example, think about social media. Perhaps at some stage you want to be on every available platform, but at the outset some may be more important than others. It may make more sense to prioritise one or two to focus on, define the time and costs involved, factor them into the business plan, and build in a growth plan.




Is it better to rent than buy?

According to government research published in late 2018 (PDF), micro-businesses – those with fewer than 10 employees – accounted for 96% of all businesses in the UK last year. If you’re just starting out, it’s likely you will fall into this category and, statistically, are likely to remain there for some time.

If this is the case for your business, you might not have to buy any technology outright at all. A 2018 survey by the Enterprise Research Group (PDF) showed that 41% of micro-businesses used web-based accounting, rather than traditional software, while 43% used cloud computing in general. This latter figure is up from 22% in 2015 and just 9% in 2012, so if you decide to take this route, you won’t be alone.

Prioritising three “must have” tech cornerstones

For startups, then, it makes sense to embrace the cloud from the outset. Cloud providers take care of security, as well as data backup and recovery. They also give you access to the latest features, and provide anywhere, any time access to their services so you can keep on top of the business from your office, from meetings, from client premises, or wherever you happen to be.

You also need to think about your website – something that is crucial to the success of almost every business in our digital age. Consider whether it’s primarily for advertising, or whether you will provide services through it: does it need to process transactions, hold user accounts, integrate with stock systems or otherwise share data with third-party applications? If so, how will you abide by regulations such as GDPR? All of this will affect site development and maintenance costs – don’t cut corners that might end up costing money down the line, but also don’t buy more than you really need.

As for social media, it’s a great way to promote your business and build communities, but it can be a drain on time and resources. Think carefully about what you want to achieve, weigh up the opportunity cost, and decide how each platform would be managed, and by whom. Consider what success looks like, how and when you will evaluate it, and whether to drop (or temporarily rest) under-performing platforms.




Question everything

Hidden inside this discussion are two key watchwords which should really be the guiding principles behind your entire business plan and forward looking strategy, not just technology. The first is to proceed with care: Every penny spent on one thing is a penny you can’t spend on something else, so each penny should be spent wisely.

The second is to question everything. Last week’s great idea might seem iffy after a bit of research and some asking around among trusted critical friends.

For tech in particular, consider what is necessary for success and what’s a “nice to have”. In the foundational startup stage, focus on the former – as your business starts to grow, you can always consider adding on the latter.

Enterprise blockchain firm Cypherium secures Google Cloud partnership, adding to AWS deal

Meet Cypherium. The New York-based blockchain startup is partnering with Google Cloud – making it the third major cloud provider to secure a deal with the company.

The company’s goal is to provide an enterprise-ready blockchain platform which promises up to 5,000 transactions per second. Scalability continues to be a concern for organisations looking to utilise blockchain technologies, and is seen as a determining factor in why a pronounced gap remains between pilot projects and production for blockchain in the enterprise.

The collaboration with Google Cloud, in which Cypherium will join the technology partner program, is to ‘provide enterprises with a full-stack solution to harness the potential’ of distributed ledger technologies (DLTs), in the words of co-founder and CEO Sky Guo.

“The growing demand in the market for DLT solutions in the financial industry and beyond drives our commitment to this collaboration,” said Guo in a statement. “Cloud customers can rest assured that the blockchain solutions they implement using Cypherium Enterprise are clad in robust security, and capable of delivering rapid transaction speeds for its smart contracts and achieving fast data processing from its Java virtual machine.”

Cypherium had previously partnered with Amazon Web Services (AWS) and IBM Cloud. The former was announced in May with Cypherium joining the AWS Marketplace. The startup took pains at the time to confirm the partnership’s validity in a Medium post. “Cypherium is functional and innovative technology that has practical solutions to problems across a number of industries, and for that reason alone, it distinguishes itself,” the company wrote.

Partnerships between cloud providers and blockchain projects are certainly in vogue right now; last month aelf, a decentralised cloud computing blockchain platform, was made available on Microsoft Azure, joining AWS.

This is not something which Cypherium alone is tackling; as sister publication The Block has covered, the Telos Foundation has claimed a current record of 12 million transactions across 24 hours. Douglas Horn, architect at Telos and author of the company’s whitepaper, outlined the rationale for organisations. “Until the network’s there, built and active, that it can be rolled out on, nobody’s going to roll out, because they’re dooming themselves to failure,” said Horn.

“Google Cloud and Cypherium are bound by a perpetual need to innovate,” Guo added. “The future of commerce and blockchain are inextricably linked and we are well-positioned to leverage Google Cloud’s expansive resources and best-in-class infrastructure to accelerate the use of the technology to solve real-world problems faced by businesses today.”


What’s in your cloud? Key lessons to learn after the Capital One breach

The lack of visibility into the expanded cloud attack surface is a fast-growing problem that is only getting worse. Although we have seen misconfigurations in the cloud before, the Capital One breach is a sobering reality check for the security industry. We need to vastly improve threat detection and response in cloud environments.

The attack behaviours associated with the Capital One breach that occurred in March 2019 are consistent with other data breaches, with one exception: it unfolded quickly, over just two days, due to the attacker’s familiarity with specific Amazon Web Services (AWS) commands.

The simple misconfiguration of a web application firewall (WAF) – which is designed to stop unapproved access – enabled the attacker to obtain an access token from that same WAF to carry out the breach.

AWS enables organisations to issue tokens that give trusted users temporary security credentials that control access to AWS resources. Temporary security credentials work almost identically to long-term access key credentials.

A temporary token is a good way to give a user the right to perform specific tasks and it reduces the need to manage access to certain accounts. However, it runs the risk of exposing passwords from a compromised account.
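
As a rough illustration of how these temporary credentials are issued and used, the boto3 sketch below assumes a hypothetical IAM role ARN and session name; the pattern – short-lived keys scoped to a role’s permissions – is the point, not the specific names.

```python
# Minimal sketch: issuing and using temporary AWS credentials via STS.
# The role ARN and session name are hypothetical placeholders.
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-read-only",  # hypothetical role
    RoleSessionName="short-lived-session",
    DurationSeconds=900,  # credentials expire after 15 minutes
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# A client built from these credentials has only the role's permissions,
# and only until the token expires.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```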

The misconfiguration of the Capital One WAF enabled a remote attacker to generate a temporary AWS token that could then be used to fetch data from an AWS simple storage service (S3).

It would be easy to say Capital One should not have made this kind of mistake, but when organisations transition to the cloud, these types of mistakes and misconfigurations are unfortunately common. With full access to the web servers, the attacker executed a simple script of AWS commands used for system administration. The first was the S3 list-buckets command to display the names of all the AWS S3 buckets.

This was followed by a sync command that copied 700 folders and buckets of data containing customer information to an external destination. These are AWS commands used every day by cloud administrators that manage data stored in AWS virtual private clouds (VPCs).
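
For readers unfamiliar with those operations, the sketch below shows roughly equivalent calls through Python’s boto3 library rather than the AWS CLI. It illustrates the kind of routine enumeration and bulk copy the article describes – not the attacker’s actual script – and the local file naming is deliberately simplified.

```python
# Illustration of the two administrative steps described above:
# enumerate every bucket in the account, then bulk-copy one bucket's objects.
import boto3

s3 = boto3.client("s3")

# Step 1: the "list-buckets" equivalent – names of all S3 buckets in the account.
bucket_names = [b["Name"] for b in s3.list_buckets()["Buckets"]]

# Step 2: a simplified "sync" equivalent – walk one bucket's objects and download them.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket_names[0]):
    for obj in page.get("Contents", []):
        # Flatten key paths into file names; a real sync preserves the folder layout.
        s3.download_file(bucket_names[0], obj["Key"], obj["Key"].replace("/", "_"))
```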

The challenge in detecting this type of attack is not the threat behaviours, but the data source. The attack did not use malware, was not persistent on hosts, and did not exhibit unusual network traffic. And the attacker blended in with normal cloud administrative operations.

Data access and compromise occurred using simple AWS commands commonly used in the management interface. Any hope of detecting attackers in this scenario will require insight into the AWS management plane – which doesn’t exist today.

With so much hanging in the balance, high-fidelity visibility into the everyday management of every cloud infrastructure is imperative. In the Capital One case, the attacker was quickly identified by a vigilant observer. The attacker was not a nation-state actor or part of a sophisticated cybercrime ring capable of covering its tracks. Otherwise, this data compromise could have easily gone unnoticed for years.

Managing access

Cloud service providers (CSPs) must ensure that their own access management and controls limit access to cloud tenant environments. And cloud tenants must assume compromise is possible and focus on learning the who, what, when and where of administrative access management.

Properly assigning user access rights helps reduce instances of shared credentials so cloud tenants can concentrate on how and when those credentials are used. Resource access policies can also reduce opportunities for movement between the CSP infrastructure and tenants.

Detection and response

It is critically important to monitor cloud-native and hybrid cloud environments as well as determine how to correlate data and context from both into actionable information for security analysts.

Monitoring resources deployed in the cloud by tenants is essential to increase the ability to detect lateral movement from the CSP infrastructure to tenant environments and vice versa. Visibility into this and other attacker behaviours is dependent on the implementation of proper tools that leverage cloud-specific data.
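
As one concrete, if simplified, example of a tool that leverages cloud-specific data, the sketch below queries CloudTrail’s management-event history for recent ListBuckets calls and flags any principal not on an allow-list. The allow-list itself is a hypothetical placeholder; a production detection pipeline would do far more.

```python
# Simplified sketch: flag unexpected ListBuckets calls in CloudTrail's event history.
# The set of expected principals is a hypothetical allow-list for illustration.
import boto3

cloudtrail = boto3.client("cloudtrail")
expected_principals = {"admin-automation", "backup-service"}  # hypothetical

resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ListBuckets"}],
    MaxResults=50,
)
for event in resp["Events"]:
    user = event.get("Username", "unknown")
    if user not in expected_principals:
        print(f"Unexpected ListBuckets call by {user} at {event['EventTime']}")
```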

Cloud tenants who coordinate with CSPs – as well as CSPs who coordinate with cloud tenants – can stitch together a powerful combination of information that can increase the likelihood of detecting the post-compromise activities before a catastrophic breach occurs.

Security operations

Knowing and managing the cloud infrastructure as a part of due diligence should help to identify systems and operations that are compromised, such as in the Capital One breach.

Changes to production systems can be difficult to detect. But with 360-degree visibility into the cloud infrastructure, it is much easier to detect attacker behaviours in compromised systems and services that are clearly operating beyond the scope of what is normally observed.

Ideally, when security operations teams have solid information about expectations for that cloud infrastructure, malicious behaviours will be much easier to identify and mitigate.

Read more: Capital One confirms data breach, cites cloudy approach as key to swift resolution


A tale of two oligopolies: How JEDI illustrates the need for multi-cloud

Opinion: The Joint Enterprise Defense Infrastructure plan, or JEDI for short, has been mired in controversy ever since the US government opened it up for bidding. The $10bn cloud computing contract is at the centre of the Pentagon’s modernisation project and is set to completely overhaul its information technology systems.

This is the largest public cloud contract in history, and whichever provider is chosen will provide the digital infrastructure for the world’s most powerful defense department. The two names left on the ticket are Amazon and Microsoft – no surprises here, but it’s a decision that should be called into question and one which could have wider repercussions on the global cloud computing market.

In the UK public sector steps have been taken towards de-monopolising the cloud market (as we’ve seen from government policies like Cloud First), and a UK report was published last month by business consultancy Roland Berger and think-tank Internet Economy Foundation (IEF), which advised that the onus should be on governments to take a proactive approach to prevent oligopolies from forming. As the report says, “public administrations must support a balanced cloud portfolio as part of a multi-cloud strategy. In other words, they should source the most suitable cloud solution for each individual task area.”

Meanwhile, the US government’s decision to shortlist the two biggest providers with a view to entering a 10-year contract (bearing in mind how fast the cloud market is growing) has raised eyebrows. This has the potential to seriously stifle innovation and competition in the cloud market, as well as drawing attention to the dangers of vendor lock-in.

Trapped in a cloud

The expense incurred by vendor lock-in may not be the government’s number one priority, but it has long been an issue for the enterprise. Cloud lock-in with first generation providers means hidden expenses such as egress fees that quickly accumulate. These costs are often not felt until months into deployment, meaning that by the time companies feel the effects of such expenses, it has already become too cost-prohibitive to change course. 

For example, a dominant player like Amazon has a plethora of cloud services and, given the lack of interoperability in the cloud vendor market, is able to strong-arm customers into retaining a number of its services at a time, which can add up to significant costs. While $10bn contracts might be an option for the US government, most companies simply can’t afford to get locked into a cloud platform – they need a service that will bring longer-term value and be adaptable to their needs.

For the DoD, the stakes are much higher than cost. We’re talking about a cloud in which the Pentagon and soldiers on the ground can share classified information pertinent to national security. The need for absolute data protection is paramount.

And while both Amazon and Microsoft offer secure storage, if Amazon wins out in the race to the $10bn contract, it means the DoD and CIA would be using the same cloud provider, posing the potential security risk of hackers being able to access data from both organisations in one place. This risk would have been mitigated by adopting a multi-cloud approach.

Multi-cloud is where the future is headed

If public sector and government organisations, which are huge consumers of cloud computing services, helped to create market conditions that promote competition and diversity, it would be far more difficult for monopolies to form. So while it’s unsurprising that the US government is leaning towards Amazon or Microsoft to be their sturdy single cloud provider, it’s neither a good precedent for the cloud computing market as a whole nor indicative of the digital transformation occurring worldwide.

But rest assured there is a vibrant ecosystem on the horizon of interconnected clouds with numerous specialised vendors, and there are already positive signs that vendor lock-in will become a thing of the past, with 62% of public cloud adopters using two or more unique cloud environments and/or platforms.

Just as IBM’s monopoly on America’s hardware market didn’t last, we will see the cloud market become increasingly decentralised in the years to come, as more specialist vendors spring up to meet specific customer needs at better prices. We just have to hope the pending JEDI contract doesn’t feed the giant at the expense of the competition being able to grow.


Google staff demand it shuns US immigration contract


Bobby Hellard

15 Aug, 2019

Google employees are demanding that the tech giant publicly commits to not entering a cloud deal with the US border control, citing human rights abuse as the reason.

Some 676 Google employees have signed a petition – initially circulated internally but since posted on Medium – calling for their employer not to bid on a cloud computing contract with the US Customs and Border Protection (CBP) agency.

While Google’s cloud computing arm looks to work with different organisations on digital transformation projects, its dealings with the US government have sparked backlashes from its staff. What’s more, the US government itself has also been quite critical of Google in recent weeks.

The employees say they “refuse to be complicit” in the CBP contract as immigration officials are “perpetrating a system of abuse and malign neglect” at the border. The document cites reports of families being separated and children dying during their time in detention.

“It has recently come to light that CBP is gearing up to request bids on a massive cloud computing contract,” the post reads. “The winning cloud provider will be streamlining CBP’s infrastructure and facilitating its human rights abuses.

“It’s time to stand together again and state clearly that we will not work on any such contract. We demand that Google publicly commit not to support CBP, ICE, or ORR with any infrastructure, funding, or engineering resources, directly or indirectly, until they stop engaging in human rights abuses.”

For those signing the petition, this approach has previously proved a successful way of forcing Google to drop projects deemed controversial. Last year, significant pressure from staff resulted in the tech giant deciding not to renew its contract for the Pentagon’s Project Maven – in which AI technology would be harnessed to improve drone performance – when it expired this year. In the days that followed it also announced an ethical code of conduct.

These ‘ethics’ are being seriously tested, not just by Google’s own employees, but also by the US government, which has accused the company of bias. Last week, the president, Donald Trump, took to Twitter to attack the company and its CEO.

“Sundar Pichai of Google was in the Oval Office working very hard to explain how much he liked me, what a great job the Administration is doing, that Google was not involved with China’s military, that they didn’t help Crooked Hillary over me in the 2016 Election, & that they are NOT planning to illegally subvert the 2020 Election despite all that has been said to the contrary,” he wrote.

“It all sounded good until I watched Kevin Cernekee, a Google engineer, say terrible things about what they did in 2016 and that they want to ‘Make sure that Trump losses in 2020.’ Lou Dobbs stated that this is a fraud on the American public. Peter Schweizer stated with certainty that they suppressed negative stories on Hillary Clinton, and boosted negative stories on Donald Trump. All very illegal. We are watching Google very closely!”

Data centre M&As surge as companies turn to cloud providers


Dale Walker

15 Aug, 2019

2019 is set to be another record year for data centre mergers and acquisitions, with 52 such deals being signed in the first six months, up 18% on the previous year.

A further eight deals have been closed during the past month alone, with another 14 acquisitions awaiting formal closure; the total number for 2019 has now exceeded that for the entirety of 2016.

Research by market analysis firm Synergy found that since the start of 2015, there have been over 300 M&As in the data centre space, said to be worth over $65 billion in total.

Data centre M&A closures since 2015

Synergy chief analyst John Dinsdale believes the figures represent a clear trend of companies not wanting to operate their own data centres, preferring instead to hand them off to specialists.

“As enterprises either shift workloads to cloud providers or use colocation facilities to house their IT infrastructure, more and more data centers are being put up for sale,” said Dinsdale. “This in turn is driving change in the colocation market, with industry giants on a never-ending quest to grow their global footprint and a constant ebb and flow of ownership among small local players.”

It’s likely that this trend is going to continue as a small group of data centre operators seek to consolidate their hold on the market.

The majority of acquisitions during the 2015-19 period have involved Equinix, which famously acquired Verizon’s data centres in 2017, and Digital Realty, which has been on a recent spending spree with facilities in Seoul and Frankfurt. The two colocation providers accounted for 36% of the total deal value over the period.

Data centre operators such as US-based CyrusOne, Iron Mountain, Digital Bridge, Carter Validus, as well as Japan’s NTT, have all been on similar spending sprees in 2019.