Google achieves ‘quantum supremacy’ with new experiment – reports

It may sound like an amalgam of a James Bond and a Jason Bourne film title, but Google may have reached ‘quantum supremacy’ in what was described as “a milestone towards full scale quantum computing.”

Quantum computing, while still at a very early stage, is becoming something of a key battleground for the biggest cloud companies. The technology is based on the principles of quantum mechanics, where subatomic particles can exist in more than one state at any time, giving quantum machines vastly greater computational power for certain classes of problem.

While the theoretical ceiling is extraordinarily high, the brittle operating conditions – the merest change in temperature or stray noise can throw qubits into error – mean a long journey of research lies ahead. Take a project Microsoft announced last year which aims to ‘break’ RSA encryption in 100 seconds; for classical computers, such a task would take one billion years.

Google’s achievement is no less impressive. Researchers have put a quantum computer, named Sycamore, through its paces with a series of operations which would take a supercomputer approximately 10,000 years to complete. The quantum computer finished it in 200 seconds.

As originally reported by the Financial Times, the findings appear in a paper prematurely published to a NASA website. The test involved sampling the output of a pseudo-random quantum circuit leading to ‘a nearly random assortment of numbers [which is] extremely difficult to reproduce with a classical computer’, as Science News put it.
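For intuition, the benchmark idea can be sketched in a few lines of numpy: apply a random unitary to the all-zeros state and sample bitstrings from the result. This is a toy illustration under our own assumptions, not Google’s circuit – Sycamore applies layers of one- and two-qubit gates to 53 qubits, far beyond what this brute-force state-vector approach can handle:

```python
import numpy as np

# Toy sketch of random circuit sampling: apply a Haar-random unitary to
# |00...0> and sample bitstrings from the output distribution. Only a few
# qubits here -- the memory cost of this brute-force approach doubles with
# each qubit, which is why a 53-qubit experiment is so hard to reproduce.
n_qubits = 5
dim = 2 ** n_qubits
rng = np.random.default_rng(0)

# Haar-random unitary via QR decomposition of a complex Gaussian matrix.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / np.abs(np.diag(r)))  # phase fix for the Haar measure

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                      # start in |00000>
probs = np.abs(u @ state) ** 2      # output distribution of the "circuit"
probs /= probs.sum()                # guard against floating-point drift

samples = rng.choice(dim, size=10, p=probs)
print([format(s, f"0{n_qubits}b") for s in samples])
```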

At the start of this year, this publication reported on an initiative from IBM in which the company announced the first ‘commercially useable integrated quantum computing system.’ While there was a fair amount of PR razzmatazz involved, the overriding concept was one of advancing quantum computing beyond the laboratory.

This is a similar goal, although, writing for CloudTech in October, Travis S. Humble of the IEEE and Oak Ridge National Laboratory questioned whether the time was right to push forward. “Many different quantum technologies appear viable for continued exploratory research, [such as] superconducting electronics, trapped ions, and neutral atoms,” Humble wrote. “Each of these technologies face multiple layers of integration complexity that must be monitored, from the low-level physical registers up to application performance.”

“Quantum processors have thus reached the regime of quantum supremacy,” Google’s paper noted. “We expect their computational power will continue to grow at a double exponential rate: the classical cost of simulating a quantum circuit increases exponentially with computational volume, and hardware improvements will likely follow a quantum-processor equivalent of Moore’s Law.

“In reaching this milestone, we show that quantum speedup is achievable in a real-world system and is not precluded by any hidden physical laws,” the paper adds.
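To unpack the “double exponential” claim (our reading of the arithmetic, not a formula quoted from the paper): classically simulating an n-qubit circuit costs on the order of 2^n, and if hardware progress grows the qubit count exponentially over time, the two exponentials compound:

$$\text{cost}(t) \sim 2^{\,n(t)}, \qquad n(t) \sim n_0\, 2^{t/\tau} \;\Longrightarrow\; \text{cost}(t) \sim 2^{\,n_0 2^{t/\tau}}$$

Each doubling of the qubit count squares the classical simulation cost, which is why the gap between quantum hardware and classical simulation widens so quickly.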

You can read a plaintext version of the report here.


AWS, Azure or Google: Do the differences between cloud providers really matter?

When evaluating public cloud providers, it is easy to get hung up on the differences. AWS, Microsoft Azure, and Google Cloud each have their own terminology, pricing, service catalog, and purchasing variations. But do these differences ultimately matter?

Compute options

Though we are able to align comparable products across AWS, Azure, and Google Cloud, there are of course differences between these offerings. In fact, with the number of products and services available today (we’ve counted 176 from AWS alone), comparing each is beyond the scope of this article.

For our purposes, we can compare what is still the core product for cloud service providers: compute. Compute products make up about two thirds of most companies’ cloud bills, so the similarities and differences here will account for the core of most users’ cloud experiences.

Here’s a brief comparison of the compute option features across cloud providers:

Of course, if you plan to make heavy use of a particular service, such as Function-as-a-Service/serverless, you’ll want to do a detailed comparison of those offerings on their own.

Pricing

That covers functionality. How do the prices compare? One way to do this is by selecting a particular resource type, finding comparable versions across the cloud providers, and comparing prices. Here’s an example of a few instances’ costs as of this writing (all are Linux OS):

For more accurate results, pull up each cloud provider’s price list. Of course, not all instance types will be as easy to compare across providers – especially once you get outside the core compute offerings into options that are more variable, more configurable, and perhaps even billed differently (AWS and Google, for instance, charge per second).
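As a sketch of how to make such comparisons apples-to-apples, the snippet below normalises quoted hourly rates to a full-month cost. The instance names are comparable 4-vCPU types, but the rates are illustrative placeholders rather than current list prices:

```python
# Normalise quoted hourly compute prices to an effective monthly cost.
# Rates below are illustrative placeholders -- pull each provider's own
# price list for real numbers (they change, and vary by region).
HOURS_PER_MONTH = 730  # common approximation: 24 * 365 / 12

instances = {
    "AWS m5.xlarge":     0.192,
    "Azure D4s_v3":      0.192,
    "GCP n1-standard-4": 0.190,
}

for name, hourly_usd in instances.items():
    # Per-second vs per-hour billing granularity only matters for
    # short-lived workloads; at steady state the hourly rate dominates.
    print(f"{name:20s} ~${hourly_usd * HOURS_PER_MONTH:,.2f}/month")
```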

Note that AWS and Azure list distinct prices for instance types with the Windows OS, while Google Cloud adds a per-core license charge on top of the base instance cost.

The table above represents the default On Demand pricing options. However, each provider offers a variety of methods to reduce these base costs, which we’ll look at in the Purchasing Options section.

Terminology 

At first glance, it may seem like the cloud providers each have a unique spread of offerings. But many of these products and services are quite similar once you get the names aligned. Here are a few examples: virtual machines are EC2 instances on AWS, Virtual Machines on Azure and Compute Engine instances on Google Cloud; object storage is S3, Blob Storage and Cloud Storage respectively; and serverless functions are Lambda, Azure Functions and Cloud Functions.

Obviously, this is not a sign of substantive differences in offerings – it just goes to show that the providers are often more similar than they first appear.

Purchasing options

Comparisons of the myriad purchasing options are worth several articles on their own, so I’ll keep it high level here. These are the most commonly used – and discussed – options to lower costs from the listed On Demand prices for AWS, Microsoft Azure, and Google Cloud. 

Reservations

Each of the major cloud providers offers a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances, Azure Reserved Virtual Machine Instances, and Google Committed Use discounts. There are a few interesting variations. For example, AWS offers an option to purchase “Convertible Reserved Instances”, which allow reservations to be exchanged across families, operating systems, and instance sizes, while Azure offers similar flexibility in its core Reserved VM option. Google Cloud’s program is somewhat more flexible regarding resources, as customers need only select a number of vCPUs and an amount of memory, rather than a specific instance size and type.
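A back-of-envelope way to evaluate any of these schemes, whatever the provider calls them, is to compare the committed cost against on-demand cost at your expected utilisation. The 40% discount below is an assumption for illustration; real rates vary by provider, term, payment option, region and instance type:

```python
# Sketch: does a reservation pay off at your expected utilisation?
# The 40% discount is an assumption for illustration only.
HOURS_PER_MONTH = 730

def reservation_saving(hourly_rate: float, discount: float,
                       utilisation: float, months: int = 12) -> float:
    """On-demand cost minus reserved cost over the term (negative = loss)."""
    on_demand = hourly_rate * HOURS_PER_MONTH * months * utilisation
    reserved = hourly_rate * HOURS_PER_MONTH * months * (1 - discount)
    return on_demand - reserved

for util in (0.4, 0.6, 0.8, 1.0):
    saving = reservation_saving(0.192, discount=0.40, utilisation=util)
    print(f"utilisation {util:.0%}: saving ${saving:,.0f} over 12 months")
# Breakeven sits at utilisation = 1 - discount: below 60%, the reservation
# costs more than simply paying on demand.
```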

What if you change your mind? AWS users have the option to resell their reservations on a marketplace if they decide they’re no longer needed, while Azure users pay a penalty to cancel, and Google users cannot cancel at all.

Spot and preemptible instances

Another discounting mechanism is the idea of spot instances in AWS, low-priority VMs in Azure, and preemptible VMs, as they’re called on Google. These options allow users to purchase unused capacity at a steep discount. The trade-off is that these instances can be interrupted (or, as Azure perhaps puts it best, “evicted”) in favor of higher-priority demand – i.e. someone who paid more. For this reason, this pricing structure is best used for fault-tolerant applications and short-lived processes, such as financial modeling, rendering, and testing. While there are variations in the exact mechanisms for purchasing and using these instance types across clouds, they have similar discount levels and use cases.
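Fault tolerance in practice usually means watching for the interruption signal and checkpointing. As a sketch on AWS, the instance metadata service exposes a spot interruption notice roughly two minutes before reclamation (assuming IMDSv1-style unauthenticated access; `do_unit_of_work` and `save_checkpoint` are hypothetical stand-ins for your own logic). Azure and Google expose similar signals through their own metadata services:

```python
import time
import urllib.error
import urllib.request

# Sketch of a spot-friendly work loop on AWS: poll the instance metadata
# service for the interruption notice, checkpoint, then exit cleanly.
SPOT_ACTION_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def do_unit_of_work() -> None:
    """Hypothetical stand-in: one small, resumable unit of application work."""

def save_checkpoint() -> None:
    """Hypothetical stand-in: persist progress so a successor can resume."""

def interruption_pending() -> bool:
    """True once AWS has scheduled this spot instance for interruption."""
    try:
        urllib.request.urlopen(SPOT_ACTION_URL, timeout=1)
        return True                  # 200: interruption notice issued
    except urllib.error.HTTPError:
        return False                 # 404: nothing scheduled yet
    except urllib.error.URLError:
        return False                 # not on EC2 / metadata unreachable

while not interruption_pending():
    do_unit_of_work()
    time.sleep(5)
save_checkpoint()                    # persist state before reclamation
```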

Sustained use discounts

Google Cloud Platform offers another cost-saving option that doesn’t have a direct equivalent in AWS or Azure: Sustained Use Discounts. This is an automatic, built-in discount for compute capacity: the more of the month you run an instance, the larger the percentage off. Be aware that listed GCP prices can be somewhat misleading, as they already assume the full-month sustained use discount – but it is nice to see a discount that requires no extra cost or work from the customer.
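To see what “built in” means in practice, here is a sketch of the blended rate under sustained use discounting. The tier multipliers match those GCP documented for N1 machine types at the time of writing, but treat them as illustrative and check current documentation:

```python
# Sketch of GCP sustained use discounts: each successive quarter of the
# month an instance runs is billed at a lower multiple of the base rate.
# Tiers as documented for N1 machine types at the time -- illustrative only.
TIERS = [1.00, 0.80, 0.60, 0.40]  # rate multiplier per quarter-month of usage

def blended_rate(fraction_of_month: float) -> float:
    """Effective rate multiplier for running this share of a month."""
    remaining, billed = fraction_of_month, 0.0
    for rate in TIERS:
        block = min(remaining, 0.25)
        billed += block * rate
        remaining -= block
    return billed / fraction_of_month

for frac in (0.25, 0.50, 0.75, 1.00):
    print(f"{frac:.0%} of month -> {1 - blended_rate(frac):.0%} discount")
# Full-month usage lands at a 30% discount, which is exactly what the
# headline "listed" GCP price already assumes.
```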

Contracts

A final sort of “purchasing option” relates to contract agreements. All three major cloud providers offer enterprise contracts. Typically, these are aimed at enterprise customers, and encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – for example, AWS Enterprise Discount Programs (EDPs) and Azure Enterprise Agreements. As these are not published options and will depend on the size of your infrastructure, your relationship with the cloud provider, and so on, it’s hard to say what impact they will have on your bill and how they will compare between clouds.

The 'it' factor

There’s also just the pure perception of the differences between cloud providers.

For instance, some may perceive Azure as a bit stodgy, while Google Cloud seems slick but perhaps less performant than AWS. Some appreciate AWS and Azure’s enterprise support more and find Google Cloud lacking here, but this is changing as Google onboards more large customers and focuses on enterprise compatibility. 

There are also perceptions regarding ease of use, but in practice we find these are shaped mostly by the platform you’re already used to. Ultimately, whatever you’re most familiar with is going to be the easiest – and any of them can be learned.

Do the differences really matter?

On some of the factors we went through above, the cloud providers do have variations. But on many variables, the providers and their offerings are so similar as to be equivalent. If there’s a particular area that’s especially important to your business (such as serverless, or integration with Microsoft applications), you may find that it becomes the deciding factor.

The fact of the matter is, you’re likely to be using multiple clouds soon, if you’re not already – so you will have access to the advantages of each provider. Additionally, applications and data are now more portable than ever due to containers.

So, prepare yourself and your environment for a multi-cloud reality. Build your applications to avoid vendor lock-in. Use cloud-agnostic tools where possible to take advantage of the benefits of abstraction layers.

Even if you’re only considering one cloud at the moment, these choices will benefit you in the long run. And remember: if your company is telling you to use a specific cloud provider, or an obscure requirement drives you to one in particular – don’t worry. The differences don’t matter that much. 


View from the Airport: Oracle OpenWorld 2019


Maggie Holland

23 Sep, 2019

“Researchers identify cures for cancer. Agro-biologists create smart hives to provide food security for the planet. Aid organisations deliver relief faster. Scientists mitigate climate change and utilities provide safe, cleaner, energy.”

These bold statements were the bread and butter of Oracle OpenWorld in San Francisco last week. The unifying element between them all? Data. Indeed, the tech giant talked up the value of data and excelling — not just existing — in a data-driven world during most if not all sessions during the conference.

But it’s not just about hoarding data for data’s sake or trying to make sense out of that data deluge. Far from it. Oracle’s key message was all about using machine learning coupled with human ingenuity so that the “possible becomes achievable.” Or, taking it one step further, Oracle claims it can help people achieve the extraordinary.

While this year’s OpenWorld was filled with more announcements than last year, the common themes of automation, data, integration, and security ran throughout.

The firm unveiled what it claims is the world’s first “autonomous OS” – named Oracle Autonomous Linux – bigging up its security credentials and ensuring the importance of partners was not understated.

Despite all the PR and marketing efforts to make those key themes resonate with attendees, the biggest takeaway for me was Ellison’s personal nod to Mark Hurd’s ill health.

Maybe I’m reading far too much into a simple statement of genuine love, but in a world where we are focused on autonomy, machine learning and robotics, this pure act of humanity shone through for me.

During his closing keynote, about 10 minutes in, Ellison was clearly struggling to hold it all together. So what he said next was really from the heart.

“I would just like to take a moment and say how much I miss Mark Hurd, personally. We’ve worked together for a long time, I love him, and I wish him a speedy recovery,” he said, to much applause and agreement from the audience.

He then quipped: “I don’t have so many friends that I can afford to lose any,” before doing his best to continue his presentation about the company’s Fusion middleware platform.

I’ve been called cynical in my time and, now, I’m ready to be called a sucker, but what Ellison said – unscripted and away from the military operation that is any tech conference, let alone one of Oracle’s – was, I believe, totally genuine.

In a world where tech is so sophisticated it promises to outpace what humans can do (if it hasn’t already in certain cases), it just goes to show how important humans and human connections still are.

Larry – if you’re reading this, you really pulled on my heartstrings. I’ll admit I had a lump in my throat during the non-Fusion part of the presentation. But I really did see through the Silicon Valley veil and the hurt in your eyes, and now I know, at the crux of it all, you’re just a guy honestly trying to do good things with technology.

How unified communications could energise your business


Nik Rawlinson

21 Sep, 2019

“The way people are communicating has fundamentally changed,” says Sahil Rekhi, MD of RingCentral EMEA (ringcentral.co.uk). “People are mobile and online, and while the smartphone is their preferred communications device, they’re multi-modal. They can switch between the tablet and the desktop – they just want all of their data available everywhere, and social has become a big component of that.”

Mention “social” and it’s easy to reach for Facebook and Twitter, but business social goes beyond that. Platforms such as Microsoft Teams, Google Cloud and Slack are building a new kind of social: one designed around fundamental business concepts, including collaboration and sharing. It’s an environment in which every form of communication, from landline calling and persistent messaging to presence, can live within one window, alongside directories, databases and files. It’s called unified communications (UC), or unified communications as a service (UCaaS).

As Martin Old, senior product marketing manager for Cisco products at Arkadin (arkadin.com) said, rather than staff necessarily going to work, “work is where they are”. A standalone PBX doesn’t cut it anymore.

“The way we communicate on a personal level has crossed over into business communications,” said Bianca Allery, CMO of 3CX. “It’s more about convenience and efficiency. Getting hold of the right person when we need them, knowing when they are available, using chat instead of a phone call. Rather than call my colleague, waiting for them to answer, calling them again… I can send a chat and go back to working on something else until they are able to respond.”

The consumerisation of UC

“Whichever device I’m using, I’ve got the same experience, whether it’s a video call, landline call or instant messaging,” Old explains. “I can do everything from my desktop, tablet or mobile phone – and that’s the experience that Generation Z expects. The experiences we’ve all had with consumer apps have helped accelerate their uptake in business.”

The average employee splits their time between four communication apps and switches between business tools ten times an hour. “That wastes 32 days a year,” claims Rekhi. “If you could remove that complexity then, as a CIO or head of HR, you could give employees an extra five days’ paid holiday every year. What kind of impact is that going to have on motivation and employee loyalty?”
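Rekhi’s figure checks out under plausible assumptions (ours, not RingCentral’s): at ten switches an hour, each switch only needs to cost around 50 seconds of refocusing time:

```python
# Back-of-envelope check of the "32 days a year" claim. The refocusing
# cost per switch is our assumption, not a figure from RingCentral.
switches_per_hour = 10
hours_per_day = 8
working_days_per_year = 230
seconds_lost_per_switch = 50          # assumed cost of each context switch

lost_seconds = (switches_per_hour * hours_per_day
                * working_days_per_year * seconds_lost_per_switch)
lost_days = lost_seconds / 3600 / hours_per_day
print(f"~{lost_days:.0f} working days lost per year")  # -> ~32
```
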
This could be UC’s trump card. By reclaiming lost time, staff don’t need to work longer hours to increase output and could be rewarded for buying into an enterprise-wide rollout. Staff buy-in is essential to any large-scale change, after all – particularly one that will impact them every minute of every working day.

“When you’re introducing a communication technology, there’s a high chance it will be touched by every single person in the business,” said Mat Godolphin of Exponential-e (exponential-e.com), a cloud infrastructure provider. “It has to be up 100% of the time because they’ll always know if there’s a fault on the platform.” Contrast that to email, where a few hours’ delay is rarely critical.

Godolphin, who heads up Exponential-e’s UC and collaboration team, sees the always-on, ever-active network as a way of attracting talent. “When I’m hiring for my team, I’m asked about the flexibility we offer and whether the new hire is expected to be in the office five days a week – all of these work-life balance questions. If that flexibility isn’t an in-built part of the culture of the business, it causes problems.”

Reclaiming lost time

How much of your employees’ time is spent in meetings? UC tools could recover much of it, particularly where those meetings would otherwise be held off-site.

While Arkadin’s Old is based in Newport, Wales, his peers sit at desks in Argentina, Singapore and France. “We used to live with email, which did the job, but it’s a bit long in the tooth now and laborious when a short message is all that’s required. Organisations developing a modern workplace are looking beyond the desktop experience and the individual – and looking at how their meeting rooms are set up, [as well as] the additional engagements they’re having outside of individual spaces.”

Inexpensive videoconferencing on tablets, phones and desktops is increasingly replacing in-person meetings and helping to reduce friction. A 2017 Polycom global survey (pcpro.link/300poly) suggested that 35% of business professionals made decisions much quicker when on video than via email, IM or phone.

“[UC] makes business communications much more efficient,” said 3CX’s Allery. “With videoconferencing, webinars, presentation sharing and so on, it’s no longer necessary to travel to clients or partners. Imagine travelling all the way from Newcastle to London for a two-hour meeting; it would take up the whole day plus various expenses. Now imagine wrapping up that same task in the same time it takes to hold the meeting.”

So, what should CIOs focus on? “Mobility is key,” Rekhi said. “It’s not only the biggest driver of change, but also a big driver of how you can deliver flexible working and the future workplace. Cloud is key. It’s the only place that’s going to be able to deliver this technology. Big Data and platform analytics continue to be an area of focus and technology players have a responsibility to capture this data. They can use it to make rational decisions based on what they’re saying and understand behavioural aspects of an organisation so it can build the future of work.”

The changing workplace

In many cases, businesses are buying in this expertise rather than developing it in-house, whether by contracting third parties to integrate services or by headhunting. Job titles such as head of digital transformation are becoming increasingly common.

Specialists with expertise in UC adoption and no history in a firm may be best-placed to implement the change. “Very often, when we look at which technology is going to drive change, we only look at part of an organisation – a subset or department – where the impact might not be a net positive,” Rekhi warned. “But if you stand back and think about where the organisation is heading over the next five years and what the company’s trying to achieve, you can see that it has a net positive impact. The person driving that change needs to understand the impact and relay the [longer-term] message to staff before they start implementing that change.”

Old’s advice is similar. “Look at where your business wants to be, what the shape of your organisation is, how you want it to operate and the experience your users will have” and, where there’s historical tech already in use, manage that transition.

Done right, it can transform both the workplace and the workspace. “That’s the key idea of UC,” said Godolphin, who quotes Cisco’s “work is something you do, rather than somewhere you go”. UC allows staff to work anywhere, on any device, in the way they would if they were collaborating in a fixed location, Godolphin reminded us, picturing an environment of smaller break-out spaces and low-end video devices.

And it goes beyond your company’s staff. “UC can really take the customer experience up to the next level. Not only can customer inquiries be dealt with more efficiently, but it also offers new ways for customers to communicate,” said Allery. “Previously, customers mainly had to rely on calling customer service hotlines and the occasional email that would go unanswered for days. Now they are able to utilise methods of communication such as website live chat, which puts them instantly in contact with an agent and is often more convenient and preferred by customers.”

Google invests $3 billion in European data centre expansion


Connor Jones

20 Sep, 2019

Google’s CEO Sundar Pichai announced today that the company will be investing a further three billion euros (around £2.6 billion) into European data centres over the next two years.

This additional investment brings Google’s total investment in European digital infrastructure to 15 billion euros (around £13.2 billion) since 2007 – an endeavour which has supported 13,000 jobs, according to a Copenhagen Economics study.

In addition, a further 600 million euros (around £528 million) will be pumped into the expansion of its data centre operations in Hamina, Finland, a site it bought in 2009 and transformed from an old paper mill into a high-tech facility that supports 4,300 jobs.

“The Nordic countries are great examples of how the internet can help drive economic growth,” said Pichai. “Our Hamina data centre is a significant driver of economic growth and opportunity. It also serves as a model of sustainability and energy efficiency for all of our data centres.”

The Hamina facility is situated near the Russian border and uses seawater from the Gulf of Finland to reduce the amount of energy required to cool the hardware.

Google announced yesterday that it is continuing its commitment to using as much green energy as possible by completing the largest corporate purchase of renewable energy in history.

“These deals will increase our worldwide portfolio of wind and solar agreements by more than 40 percent, to 5,500 MW – equivalent to the capacity of a million solar rooftops,” said Pichai. “Once all these projects come online, our carbon-free energy portfolio will produce more electricity than places like Washington D.C. or entire countries like Lithuania or Uruguay use each year.”

Currently, Google’s other European data centres are located in the Netherlands, Ireland and Belgium, but last year it announced plans to build an entirely carbon-neutral data centre in Denmark, adding to its European data centre portfolio and bolstering its green energy drive.

The tech giant plans to invest $700 million (around £617 million) in the new green site in Fredericia, Denmark, and use machine learning to ensure every watt is used effectively.

Europe is something of a hotbed for data centres, and Google’s Scandinavian sites in particular can operate with better energy efficiency than locations elsewhere in the world.

Gap in cloud skills doubles in three years


Bobby Hellard

20 Sep, 2019

90% of organisations have reported a lack of skills in multiple cloud disciplines, a deficit that has doubled over the last three years.

The lack of public cloud platform expertise is also driving organisations towards managed service providers.

While cloud computing has accelerated digital transformation, forcing companies to invest more in IT teams and systems, it’s also created more niche and specialist jobs and functions.

This has resulted in a widening gap where certain cloud roles are not being filled simply because not enough people have the skills for them, according to a report from 451 Research.

‘Demystifying cloud transformation: Where enterprises should start’ is a pathfinder paper commissioned by Dell’s Virtustream.

“While enterprise companies are astutely aware of the breadth of cloud options available to them today, they are looking to cloud managed services partners to bridge their own in-house skills and resources gaps, and for access to their deep expertise across cloud assessment, planning, migration and domain experience,” says Melanie Posey, research VP and GM for 451 Research’s Voice of the Enterprise.

According to the report, skills shortages in areas related to the cloud are in platform expertise, DevOps, cloud architecture and security. These were seen as challenges to both cloud transformation and adoption as businesses struggled to find skills and resources in-house.

As such, businesses are increasingly looking for outside expertise where managed service providers are filling that gap. Rather than attempting to find and match employees to specific operations within public and private clouds so that they work in a holistic manner, businesses are favouring third-party support to manage the entire lifecycle of their migration and digital transformation.

Nearly two-thirds of organisations that currently use cloud also use some type of managed service, with 71% of respondents suggesting that managed services will be a better use of their money in the future.

What’s more, a strong majority said that managed services free internal IT staff from mundane chores, enabling them to focus on more productive and strategic activities in IT generally.

Oracle announces key partnership with VMware


Maggie Holland

20 Sep, 2019

Oracle and VMware have solidified and expanded their existing partnership to better help organisations harness the power of hybrid cloud. 

The latest iteration of the partnership – which should be made available in the first half of 2020 – will enable companies to support their hybrid cloud efforts by running VMware Cloud Foundation on Oracle Cloud Infrastructure (OCI).

Those interested will be able to migrate VMware vSphere workloads over to Oracle’s Generation 2 OCI to take advantage of the latter’s infrastructure and operational investments, in addition to Oracle’s technical support. 

“VMware is delighted that for the first time, Oracle will officially offer technical support for Oracle products running on VMware. This is a win-win for customers,” said Sanjay Poonen, COO of customer operations at VMware.

“We’re also happy to welcome Oracle to the VMware Cloud Provider programme, which will allow them to migrate and manage workloads running on VMware Cloud Foundation in Oracle Cloud Infrastructure.”

Becoming part of VMware’s Cloud Provider programme means Oracle and its vibrant partner ecosystem will be able to sell such solutions. What’s more, it means customers will be able to take advantage of the recent investments Oracle has made in its autonomous solutions.

The VMware tie-up follows hot on the heels of a multi-cloud partnership with Microsoft, which essentially connects the two services and enables joint customers to leverage historic investments. 

Joint customers also seem happy with the news.

“Oracle and VMware are technology providers that we depend on to run our organisation successfully. As a long-time customer of both companies, we are pleased that this partnership demonstrates – with decisive clarity – that Oracle products are indeed supported,” said Dan Young, chief data architect and manager of enterprise database administration at Indiana University.

“This gives us even greater confidence that we have strategic partners that are working together in our best interest to help ensure that in the event something goes wrong, we are fully supported and will face minimal disruption in our operations.”

Oracle: Our cloud will make things easier not more complex


Maggie Holland

20 Sep, 2019

Oracle claims that its cloud is not only the world’s first autonomous one, but also the only one that’s fully integrated.

It’s that focus on integration as a key consideration rather than an afterthought that will help organisations navigate their way through the complexity and uncertainty they face in their respective industries. 

So claimed the firm’s CEO Safra Catz during her keynote session at Oracle OpenWorld in San Francisco this week. 

“Let me tell you why the Oracle Cloud is unlike any other cloud in the world,” Catz told delegates. 

“At the infrastructure layer – from compute to networking to storage – the Oracle Cloud has been uniquely engineered to be secure and autonomous from the start. No other cloud provider even thinks this way. But, we’ve always thought this way. 

“Your Oracle workloads are the Crown Jewels of your enterprise and we know that. The Oracle Cloud eliminates complexity, manual work and – as you heard last night – most importantly, human error. It delivers a degree of reliability, operational efficiency, and automatic security that other clouds just cannot match.” 

Oracle’s focus on built-in automation and integration also minimises risks and cost, according to Catz.

The opening video prior to the keynote talked about oceans of information being processed in the blink of an eye in the increasingly data-driven world in which we live and work.

And because Oracle has been on the same data-driven journey to the cloud, it is perfectly placed to understand and help respond to myriad challenges, Catz said.

“We needed to be a better service-oriented company. It wasn’t good enough to build a great cloud. We needed to use it. We needed our own cloud to be a platform to enable the business changes we were looking for,” Catz added. 

Oracle’s cloud will help firms maximise efficiency and effectiveness, thanks to enhanced levels of functional integration and embedded AI which, in turn, delivers greater levels of business insight, according to Catz. 

What’s more, Catz said using the Oracle Cloud – which puts the user front and centre – would enable customers to “outpace change” due to new features being provided seamlessly every quarter. She dubbed this “continuous innovation without tedious upgrades.”

“Our goal is to deliver innovation in a way that simplifies IT and business functions. And we believe the best way to do this is to engineer all our products to work together from the beginning, each piece benefiting from the capabilities of its underlying platform,” Catz said. 

“I encourage you to try the Oracle cloud out for yourself – for free. Experience the autonomous cloud and see for yourself what makes Oracle so unique and the best choice to achieve business success.”

Cloud services and infrastructure spending breaks $150bn in six months, says Synergy

While spending across cloud infrastructure may be suffering something of a minor blip, cloud services spending appears to be shoring things up.

The latest analysis from industry analyst Synergy Research shows that, for the first half of 2019, operator and vendor revenues broke $150 billion, a rise of 24% year on year.

Infrastructure as a service (IaaS) and platform as a service (PaaS), led naturally by the hyperscalers of Amazon Web Services, Microsoft Azure and Google Cloud, formed the fastest growing segment at over 40%, while hosted private cloud, led by IBM, Rackspace and NTT, grew at over 20% year on year. When it came to cloud-based software, such as enterprise SaaS and unified comms as a service (UCaaS), annual growth was in the 25% range.

In aggregate, Synergy noted, spending on cloud services was now ‘far greater’ than spending on the supporting data centre infrastructure. Despite this, areas such as public and private cloud infrastructure hardware and software, as sold by Dell EMC, HPE et al, still grew at just over 10% year on year.

“Cloud is increasingly dominating the IT landscape,” said John Dinsdale, a chief analyst at Synergy. “Cloud has opened up a range of opportunities for new market entrants and for disruptive technologies and business models. Amazon and Microsoft have led the charge in terms of driving changes and aggressively growing cloud revenue streams, but many other tech companies are also benefiting.

“The flip side is that some traditional IT players are having a hard time balancing protection of legacy businesses with the need to fully embrace cloud,” Dinsdale added.

Synergy issued a note last month which found hyperscaler capex was down 2% year on year. While the most recent quarter saw more than $28 billion in spending, a primary cause of the decline was China’s expenditure falling by 37% year on year.


IBM’s Quantum Cloud offers access to the ‘single largest quantum computer system’


Bobby Hellard

19 Sep, 2019

IBM has announced the opening of a Quantum Computer Centre in New York that will provide quantum computing over its cloud network.

The centre will be home to the tech giant’s 14th quantum computer, a 53-quantum bit, or qubit, model that will form the data-processing element of the service.

IBM said this will be the single largest quantum computer system available for external access. For context, Google has a 72-qubit computer, but, so far, hasn’t let outsiders run programs on it.

Despite the technology still being largely experimental, IBM has already worked on a number of potential case studies with major clients. According to Dario Gil, director of IBM Research, the firm’s strategy is to move quantum computing beyond isolated lab experiments and into the hands of tens of thousands of users.

“In order to empower an emerging quantum community of educators, researchers, and software developers that share a passion for revolutionising computing, we have built multiple generations of quantum processor platforms that we integrate into high-availability quantum systems,” he said.

“We iterate and improve the performance of our systems multiple times per year and this new 53-qubit system now incorporates the next family of processors on our roadmap.”

To start, ten quantum computer systems have been put online through IBM’s Quantum Computer Centre. Its fleet is now composed of five 20-qubit systems, one 14-qubit system and four 5-qubit systems. Five of these systems now have a Quantum Volume of 16 – a measure of the power of a quantum computer – demonstrating a new sustained performance milestone.
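For readers unfamiliar with the metric, IBM defines Quantum Volume so that its base-2 logarithm is the largest “square” circuit – equal width and depth – that the machine can run reliably; roughly:

$$\log_2 V_Q = \max\{\, n : \text{an } n\text{-qubit, depth-}n \text{ model circuit passes the benchmark} \,\}$$

A Quantum Volume of 16 therefore corresponds to reliably running 4-qubit, depth-4 circuits – a measure that folds in error rates and connectivity, not just raw qubit count.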

In the next month, this portfolio of quantum computers will grow to 14 systems including the new 53-qubit quantum computer.

Earlier this month IBM announced a partnership with applied research organisation Fraunhofer Gesellschaft to study quantum computing in Germany. The tech giant hopes to be a hub in the country as the technology accelerates.

What’s more, IBM is already working on potential use cases with partners, such as bank J.P. Morgan Chase, which has proposed a quadratic speedup algorithm that could allow financial analysts to perform option pricing and risk analysis in near real-time.
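The usual reading of that “quadratic speedup” – our gloss, not a detail from the J.P. Morgan Chase work – is the standard quantum amplitude estimation result: the error of classical Monte Carlo pricing shrinks with the square root of the number of samples, while amplitude estimation shrinks linearly in the number of quantum circuit uses:

$$\epsilon_{\text{classical}} \sim \frac{1}{\sqrt{N}} \qquad \text{vs.} \qquad \epsilon_{\text{quantum}} \sim \frac{1}{N}$$

Reaching a target error ε therefore takes on the order of 1/ε² classical samples but only about 1/ε quantum queries.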

The tech giant is also working with Mitsubishi Chemical to develop a quantum computing process to understand the reaction between lithium and oxygen in lithium-air batteries, with the hope that it could lead to more efficient batteries for mobile devices and cars.