Amazon Web Services review: AWS packs in more features than any other cloud service provider


K.G. Orphanides
Andy Webb

2 Aug, 2019

Amazon's one cloud service provider to rule them all isn't always the most economical option for SMEs

Price 
Highly variable

AWS is the big daddy of cloud service providers. It provides the backend infrastructure for half the online services you’ve ever heard of, and it could do the same for your office.

It’s increasingly practical to move small and medium enterprise business networks and servers into the cloud as infrastructure as a service. Unusually – and conspicuously unlike rival platforms from Microsoft and Google – AWS can provide virtualised desktop workstations, as well as core infrastructure.

In this review, we’ll focus on infrastructure and services that can be readily migrated to the cloud – primarily core servers, directory services and a virtual private cloud to both handle virtual networking and provide a VPN endpoint to connect your business’s physical machines to your online infrastructure.

Amazon WorkSpaces cloud desktops could also be of particular value to firms with significant remote workforces. All of these options can represent significant savings on capital expenditure and, particularly with virtual desktops, provide a secure alternative to having staff work from their own PCs.

Amazon says it strives for 99.99% uptime in each AWS region and, if it does go down, provides credits that can be spent on affected services. You can choose which region to host your services in, which can potentially help with both legal compliance and performance for people connecting from that region.

Amazon Web Services review: Deployment

AWS has a frankly dizzying array of features, from machine learning testbeds to augmented reality application development and Internet of Things connection kits, but we’re interested in servers and networking to support a standard office.

For this, you’ll want to deploy a Virtual Private Cloud and, on that, deploy any servers to handle whatever single sign-on, storage and database needs your business has. VPCs are easy to manage if you’re already confident with network infrastructure, but to connect your office to your cloud-based network, you’ll need a fast internet connection and a firewall router powerful enough to handle a high-throughput VPN connection.
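
For a sense of what that first step looks like in practice, here is a minimal sketch using Python and boto3, AWS's official SDK; the region, address ranges and tag name are illustrative assumptions rather than anything the service prescribes.

```python
# Minimal sketch: create a VPC with one subnet for office servers using boto3.
# The region, CIDR blocks and "office-vpc" tag are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Create the VPC that will hold the cloud side of the office network
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "office-vpc"}])

# Carve out a subnet for the servers (directory services, file storage and so on)
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print("VPC:", vpc_id, "subnet:", subnet["Subnet"]["SubnetId"])
```

The site-to-site VPN back to the office router is then configured separately, against a virtual private gateway attached to this VPC.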

When deploying VMs, you can’t just upload an ISO of your choosing and install that – only a rather limited list of Windows and Linux versions is available to install. However, it is possible to upload a VMware, Citrix, Hyper-V or Azure virtual machine image via an Amazon S3 storage bucket or – easier still – via the AWS Server Migration Service and connector software installed on your existing platform.
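
If you take the S3 route, the import itself is a single API call once the image file is in the bucket. A rough sketch with Python and boto3 follows; the bucket name, key and description are placeholders.

```python
# Rough sketch of the S3-based VM import route described above.
# Bucket, key and description are placeholders; the disk image must
# already have been uploaded to the S3 bucket.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

task = ec2.import_image(
    Description="Migrated Hyper-V file server",
    DiskContainers=[{
        "Format": "VHD",  # VMDK, OVA and RAW are also accepted
        "UserBucket": {
            "S3Bucket": "example-migration-bucket",
            "S3Key": "images/file-server.vhd",
        },
    }],
)
print("Import task started:", task["ImportTaskId"])
# Progress can then be polled with ec2.describe_import_image_tasks()
```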

Amazon Web Services review: Pricing

No matter which data centre region you’re based in, in the world of AWS, everything is in US dollars, right up until the point at which your final bill is calculated in your choice of currency, based on Amazon’s internal exchange rate.

This can be rather annoying, particularly when the pound undergoes major fluctuations due to political events, as it makes your month-to-month costs less consistent than they otherwise would be.

The default option for your AWS deployments is its On-Demand pay-as-you-go pricing. However, as with Microsoft Azure, you can save money if you deploy longer-term reserved instances for any virtual infrastructure that you plan on leaving in operation for an extended period.

Needless to say, the exact costs of any deployments will vary widely depending on your exact needs. To provide a basic example, we used the AWS Simple Monthly Calculator to cost up a single general-purpose virtual machine running Windows Server: a two-core, 8GB VM with a ‘moderate’ connection – estimated by various third-party tests at around 300Mbit/sec – comes to $152.26 per month, plus $36.60 for a 1024GB HDD.

The speed of that network connection makes a great difference to pricing: two cores and 8GB RAM on an up-to-10Gbit/sec connection cost $282.56 per month. A little less variably, an Active Directory connector starts at $43.92 and a Virtual Private Cloud at $36.60 per month for a single connection from your office router.

Critically, the estimation tool – unlike Azure’s – won’t generate a baseline estimate of how much data in and out a business might use every month. You’ll have to estimate that manually: at an estimated 100GB per month in and out (only outbound traffic costs anything in this scenario), we’d pay $17.91 per month.
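
The calculator doesn’t show its per-gigabyte working, but a manual estimate only takes a few lines. In the sketch below the $0.09/GB internet-egress rate and 1GB free allowance are typical 2019 figures we’ve assumed rather than values taken from the tool, so don’t expect the result to match its estimate to the cent.

```python
# Back-of-envelope estimate of monthly data transfer charges.
# The egress rate and free allowance are assumptions (typical 2019 pricing);
# actual rates vary by region and destination.
GB_OUT_PER_MONTH = 100   # estimated outbound traffic to the internet
FREE_ALLOWANCE_GB = 1    # first GB out per month is free
RATE_PER_GB = 0.09       # USD, first 10TB tier

billable_gb = max(GB_OUT_PER_MONTH - FREE_ALLOWANCE_GB, 0)
print(f"Estimated egress cost: ${billable_gb * RATE_PER_GB:.2f} per month")
```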

That adds up to $294.71 (£242.53) per month, including a small free tier discount. For Windows servers, Microsoft’s Azure platform is much more competitive at the moment: £196.68 per month will get you a similar setup.

A lot of that is to do with the cost of licensing Windows, which Microsoft can subsidise for Azure users. Switch that AWS server VM to Linux and it’ll cost $84.92 per month, rather than $152.26.

Amazon WorkSpaces virtual desktops start at $7.25 per month plus $0.17 per hour of active use (or a flat $21 per month) for a Linux desktop, and $7.25 per month plus $0.22 per hour (or a flat $25 per month) for a Windows desktop, each with one core, 2GB RAM, an 80GB root volume and 10GB of user storage.

AWS can sometimes spring unexpected costs on you, for example by billing hourly for Elastic IP addresses that were once attached to a now-terminated VM. Similarly, leftover storage volumes associated with virtual machine instances continue to incur charges if they’re not manually deleted when an instance is terminated (orphaned key pairs cost nothing, but they do accumulate).
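
A periodic housekeeping check can catch the most common leftovers. The sketch below, again assuming Python and boto3 with a placeholder region, lists Elastic IP addresses that are no longer associated with an instance and EBS volumes sitting unattached in the ‘available’ state.

```python
# Sketch: find billable leftovers after instances have been terminated.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Elastic IPs start billing hourly once they're no longer associated with an instance
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr:
        print("Unattached Elastic IP:", addr["PublicIp"])

# 'available' volumes are storage drives left behind by terminated instances
volumes = ec2.describe_volumes(Filters=[{"Name": "status", "Values": ["available"]}])
for vol in volumes["Volumes"]:
    print("Orphaned volume:", vol["VolumeId"], "-", vol["Size"], "GiB")
```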

Data throughput and the sometimes arcane relationships between services can also add to the cost of AWS deployments, and you might miss out on free intra-region data transfer if you don’t set everything up correctly.

In the case of a Virtual Private Cloud, you’ll have to create a specific subnet endpoint pointing at the AWS service you’re trying to connect to in order to benefit from free throughput: connecting to a public IP address provided by the service will result in data transfer being billed as though it was going to a location on the wider internet, rather than inside AWS.
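
Creating that endpoint is one API call per service. Here is a minimal sketch for S3, assuming Python and boto3 with placeholder VPC and route table IDs.

```python
# Sketch: add a gateway endpoint so VPC-to-S3 traffic stays inside AWS
# rather than being billed as internet-bound transfer. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC ID
    ServiceName="com.amazonaws.eu-west-2.s3",  # S3 in the chosen region
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table
)
print("Created endpoint:", endpoint["VpcEndpoint"]["VpcEndpointId"])
```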

Like Microsoft and Google, AWS provides a wide range of free services intended to allow administrators to extensively prototype and test cloud-based systems and services for their business, from short-term free trials to always-free services and free 12-month subscriptions for new AWS subscribers.

In the latter category, new users can run up to 750 hours a month of Linux and Windows EC2 Micro virtual machine instances, store 5GB in S3, use various Amazon WorkSpaces cloud desktop and AppStream desktop application streaming bundles, run 750 hours of database services and more.

In the Always Free category, you’ll get 10 CloudWatch resource monitoring deployments, 62,000 outbound email messages, 10GB of Glacier cold storage, key and licence management, 100GB of hybrid cloud storage, and Amazon’s Chime unified communications platform among other bits and pieces.

Amazon Web Services review: User interface

The AWS Management Console is a lot nicer to look at and carry out day-to-day management and deployment tasks with than Microsoft’s rival Azure platform. There’s more white space and fewer immediately visible options, which helps to make it feel less cluttered.

Your most recently visited services are front and centre, and you can open a full list of AWS’ vast catalogue of services. At the top of the page is a search interface, where you can search for services by name or function, so if you search for ‘virtual desktop’, you’ll be directed to WorkSpaces, and if you search for ‘cold storage’, S3 Glacier pops up.

Below, a range of wizards and tutorials are available to help you deploy and work with popular services such as virtual machines, virtual servers and hosted web apps. Each service has its own management interface which, again, is a little more comfortable to use than Azure’s.

However, there’s a distinct design language at work here that you’ll have to get used to, particularly if you’re primarily familiar with Microsoft’s Server and cloud products. We were pleased to find that free-tier eligible options were clearly marked when we used the VM deployment wizard, which also provides helpful guidance when it comes to keeping your deployments secure, such as locking access to specific IP addresses.

Amazon Web Services review: Verdict

Even compared to its closest competitors, AWS is complex, both in terms of features and pricing, although a well-designed interface does its best to make things simple. When working with AWS, it’s worth using Amazon’s quote generator and cost management tools to ensure that you aren’t running up unexpected expenses, and you’ll have to remember to include data throughput costs in your estimates, as they’re not typically bundled.

AWS’ ubiquity speaks for itself: although its layers upon layers of features are confusing, it’s reliable, highly flexible, can be immensely cost-effective and offers a wider range of services than any of its rivals. However, to make the most of it, your business will need a dedicated expert, either in-house or as contracted support.

By comparison, Microsoft’s Azure isn’t significantly easier to use, but its management interface will feel a bit more familiar to Windows sysadmins and its pricing for Windows-based services is cheaper than AWS’, making it a better option for most office infrastructure migrations.

How technology is revolutionising the healthcare industry


Cloud Pro

12 Aug, 2019

A technology revolution is transforming the healthcare industry, changing everything from how patients are diagnosed and treated to our battle against some of the world’s most serious diseases. It’s a revolution fuelled by new sources of healthcare data and powered by big data analytics – and it’s being pushed even further by new developments in AI. Between growing populations, ageing populations, drug-resistant microbes and pressures on staff and budgets, healthcare faces some enormous challenges. Yet with data, analytics and AI – supported by new cloud, storage and processor technologies – the industry is moving in the right direction to meet them. This revolution will change and save patients’ lives.

Its foundation is the growth of healthcare data. On the one hand, initiatives like the SAIL (Secure Anonymised Information Linkage) Databank are collecting, pooling and anonymising data, ready for research through analytics. Operating in Wales, SAIL has collected over 10 billion person-based data records over a period of 20 years, using them in projects that have found links between social deprivation and high mortality rates following a hip fracture, or linked congenital anomaly registries with data on maternal medication use during pregnancy. Similarly, projects at the Wrightington, Wigan and Leigh NHS Foundation Trust are finding operational uses for the trust’s own large datasets, using them to monitor time lags between referral and treatment or to ensure staffing levels meet demand during peak periods.

On the other hand, clinicians are finding ingenious ways to make use of the wealth of data collected by fitness trackers, smart watches and healthcare apps on smartphones – not to mention information being freely and publicly shared over social media. While privacy concerns won’t melt away overnight, researchers hope that, given assurances, the public will back the wider sharing of health information, particularly if it can help us fight diseases or make more informed choices about our diets, our sleep and our exercise regimes. With anonymisation and other appropriate safeguards in place, plus the legalities dealt with, there are endless applications.

From precision to prevention

Many of these harness the power of big data analytics, using these massive datasets to spot patterns or even predict outcomes based on certain factors or criteria. One large study combines genetic information with data from other studies and canSAR, the world’s largest database for cancer drug discovery, to identify pathological mutations and match them to potential drugs. Such studies are finding that, by picking out new genes involved in the development of, say, prostate cancer, they raise the chances of creating bespoke drugs to battle specific mutations. Similar studies hope to isolate the impact of diet and exercise on diabetes, so that sufferers get more motivation to make potentially transformative lifestyle changes.

Clinicians and data scientists refer to this approach as ‘precision medicine’: using analytics to find out what fuels specific variants of a disease in specific individuals, then identifying the right individual treatment path to manage or cure it. Nor is this the only way analytics is transforming healthcare. Researchers fighting antibiotic-resistant superbugs hope that analytics could find answers there in the long term, and that, in the shorter term, mathematical modelling could help estimate the global impact of antibiotic resistance and make a powerful case for increased funding.

Meanwhile, new healthcare apps, like Sentimento Ltd’s My Kin, are working to help prevent illness. They do so by bringing in information from smartphones and wearable devices, including physical and social activity, sleep and environmental factors, and then using analytics to pinpoint behavioural changes that could help reduce health risks and prevent the users from developing serious conditions later.

Putting AI into practice

These approaches are only being improved by developments in AI and machine learning, as clinicians and researchers use new techniques to spot patterns faster or get a more accurate diagnosis in less time. At both MIT and the University of Pisa, smart algorithms are enabling MRI image scan comparisons that used to take up to two hours to be done in one thousandth of that time, or to cut down the time patients spend in discomfort during vital MRI screenings. Similar work is being done by Intel and the AI company, MaxQ, to analyse CT scans of stroke and head trauma patients to reduce error rates, or by Intel and the med-tech company, Novartis, to analyse thousands of images of cells during drug research and identify promising drug candidates. By augmenting manual analysis, the technology can reduce screening times from 11 hours to 31 minutes.

It’s even hoped that by combining data analytics and AI, the kind of cancer drug treatment research mentioned earlier could go on to not just target the right treatment but prevent the disease from establishing a foothold. Machine learning and deep learning techniques could spot molecular drivers or mutations early and suggest appropriate action.

These developments make heavy demands on today’s technology; whether you’re working on large datasets in memory or pulling data from disparate sources in the cloud, storage speed and processing power count. Here, fast flash storage arrays and persistent memory, like Intel® Optane™ DC persistent memory, are delivering the kind of high-performance, high-capacity storage these applications need – and making it more affordable and accessible.

Much the same is happening on the processing front, where Intel has teamed up with Philips to show that Intel Xeon Scalable processors can perform deep learning inference on X-rays and CT scans without the specialist accelerator hardware usually required. Using AI in medical imaging has been challenging up to now, because the imaging data is high-resolution and multi-dimensional, and because any down-sampling to speed up the process can lead to misdiagnosis. New deep learning instructions in Intel’s 2nd Gen Xeon Scalable processors enable the CPU to handle these complex, hybrid workloads. Through this research, Intel and Philips are bringing down the cost of using AI in medical imaging.

In doing so, Intel is helping supercharge the technology revolution in healthcare, providing the industry with the compute and storage performance it needs to transform raw data into personalised treatment plans and better patient outcomes. What’s more, it’s doing so in forms that will only grow more affordable and accessible with time. Combine that with the explosion in healthcare data and there’s potential here for something truly special. Technology might not kill the world’s superbugs or defeat cancer straight away, but it could forge major breakthroughs in these and many more of the world’s biggest healthcare challenges.


Microsoft is killing off Skype for Business


Connor Jones

31 Jul, 2019

Microsoft is calling time on Skype for Business Online as the company tries to promote the adoption of Microsoft Teams.

Support for the collaboration platform will be terminated on 31 July 2021, just 10 years after Skype was acquired by the Redmond giant. Every new Microsoft 365 customer will be onboarded to Microsoft Teams by default from 1 September 2019, with no option to select Skype for Business Online instead.

Microsoft said that current Skype for Business Online customers won’t experience any change in service and will be able to add new users as needed until the termination date.

“Over the last two years, we’ve worked closely with customers to refine Teams, and we now feel we’re at the point that we can confidently recommend it as an upgrade to all Skype for Business Online customers,” said James Skay, senior product marketing manager at Microsoft.

“Teams isn’t just an upgrade for Skype for Business Online, it’s a powerful tool that opens the door to an entirely new way of doing business,” he added.

Microsoft has listed a range of product investments it will be making to ensure an easy migration to Teams for businesses that are firmly settled with Skype for Business Online.

One of these will be the upcoming interoperability between Skype for Business Online and Microsoft Teams, coming in the first quarter of 2020. The update will allow customers on both platforms to communicate via calls and text chats.

Other feature requests from Skype for Business Online users will also be honoured in future Teams updates, such as Dynamic E911, which forwards detailed location data when an emergency call is made directly from the platform. Shorter data retention periods will also come to Teams by the end of 2019, so data that shouldn’t remain on the service can be wiped when the user needs it to be.

Contact centre integration and compliance recording solutions are also in Teams already. First announced at Microsoft’s Inspire event earlier this month, recording software used by businesses for regulatory compliance, liability protection and quality assurance is now supported in Teams with more partnerships in the pipeline.

Teams has so far been a very successful product for Microsoft. Earlier this month, the company announced that it has more active users than its main rival Slack, just two years after launching the platform.

StackRox and Skybox reports warn of dire consequences if container security is not addressed

Containers, when utilised properly, can significantly improve an organisation’s efficiency by speeding up development and automating processes. Yet, as with so many technologies before them, security was never quite at the top of the priority list.

StackRox and Skybox Security, two California-based cybersecurity providers, have issued reports over the past week which come to similar conclusions: organisations are struggling with the sprawl of major container and Kubernetes adoption, with security taking a hit as a result.

The industry figures reveal the extent of the concern. In May, this publication noted that the 2019 KubeCon event felt like a milestone for the industry. When Kubernetes graduated from the Cloud Native Computing Foundation (CNCF) last March, Redmonk research found that almost three quarters (71%) of the Fortune 100 were using containers in some form.

These are emphasised further by the current reports. StackRox found that, of the 390 IT professionals surveyed across industry, more than four in five had adopted Kubernetes. This 51% increase on just six months ago was described by the company as ‘staggering.’

It is worth noting at this juncture that, of course, container vendors take great care in securing their products in the first place. But this begets another discussion around where responsibility begins and ends.

This publication has variously reported on incidents where cloud infrastructure vendors explore the limits of how far they can guide their customers. Amazon Web Services (AWS), for instance, launched a feature in November that gives customers extra protection should they leave an S3 bucket public by mistake. Previously, the company had revamped its console design, giving public buckets bright orange warning indicators.
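
The article doesn’t name the offering, but it appears to describe Amazon’s S3 Block Public Access controls, which can be switched on for a bucket with a single API call. A sketch assuming Python, boto3 and a placeholder bucket name:

```python
# Sketch: turn on all four Block Public Access protections for one bucket.
# The bucket name is a placeholder; the same settings can also be applied
# account-wide from the S3 console.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-sensitive-data-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # restrict access to public-policy buckets
    },
)
```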

It is a similar theme here. With containers in particular, environments change frequently. Old container images, with known vulnerabilities, can be replicated and spread through various cloud infrastructures. Skybox found that vulnerabilities in cloud containers have increased by 46% year over year, and 240% compared to 2017. Despite this, less than 1% of newly published vulnerabilities were exploited in the wild.

The StackRox report focused more on general security trends. Two in three organisations polled had more than 10% of their applications containerised, yet two in five (40%) remained concerned their container strategy does not sufficiently invest in security. When it came to what organisations wanted, there were seven core capabilities respondents cited in a container security solution. These were, in order, vulnerability management, compliance, visibility, configuration management, runtime threat detection, network segmentation, and risk profiling and prioritisation.

“Organisations are putting the operational benefits of agility and flexibility at risk by not investing in security,” said Kamal Shah, StackRox CEO. “Containers and Kubernetes have moved well beyond the early adoption phase – security must be built-in from the start, not bolted-on after the fact, for organisations to securely realise the full potential of cloud-native technologies.”

Amrit Williams, Skybox VP products, added: “It’s critical that customers have a way to spot vulnerabilities even as their environment may be changing frequently. They also need to assess those vulnerabilities’ exploitability and exposure within the hybrid network and prioritise them alongside vulnerabilities from the rest of the environment – on-prem, virtual networks and other clouds.”

You can read the StackRox report here (email required) and the Skybox report here (email required).


Capital One confirms data breach, cites cloudy approach as key to swift resolution

Capital One has confirmed a ‘data security incident’ affecting more than 100 million customers in the US and Canada – and while Amazon Web Services (AWS) has been identified as the platform from which the data was stolen, neither customer nor vendor appears to be to blame.

Paige A. Thompson, otherwise known as ‘erratic’, was arrested on Monday and appeared in court in Seattle on a charge of computer fraud and abuse. According to the criminal complaint document (pdf), a ‘firewall misconfiguration’ left the Capital One cloud server vulnerable.

On July 17, a previously unknown individual emailed Capital One’s responsible disclosure address pointing it to a GitHub account where leaked data resided. “Capital One determined that the [file] contained the IP address for a specific server,” the document notes. “A firewall misconfiguration permitted commands to reach and be executed by that server, which enabled access to folders or buckets of data in Capital One’s storage space at the Cloud Computing Company.”

That cloud computing company, it was later confirmed, was Amazon. The original email, alongside a Slack message purportedly from Thompson, mentioned S3, AWS’ primary storage product. Amazon confirmed this to Bloomberg, adding that the data ‘wasn’t accessed through a breach or vulnerability in AWS systems.’ AWS also confirmed that Thompson had previously been an employee of the company, last working there in 2016.

Capital One is a well-known AWS customer; the company selected Amazon as its ‘predominant cloud infrastructure provider’ in 2016, with the news announced in conjunction with AWS’ re:Invent customer gathering. The financial services provider said at the time it was advocating a cloud-first mindset, with plans to migrate the majority of its core business and customer applications to AWS over the coming five years.

From Capital One’s perspective, the company praised its cloud-first system for the speed at which it was able to remediate the incident. Putting together a specific question-and-answer on the subject in its press materials, Capital One wrote: “This type of vulnerability is not specific to the cloud. The elements of infrastructure involved are common to both cloud and on-premises data centre environments.

“The speed with which we were able to diagnose and fix this vulnerability, and determine its impact, was enabled by our cloud operating model.”

Capital One noted that no credit card account numbers or login credentials were compromised, and that fewer than 1% of social security numbers were affected. The press materials curiously noted that ‘no bank account numbers or social security numbers were compromised, other than… about 140,000 social security numbers of… credit card customers.’

Alex Heid, chief research officer at SecurityScorecard, described the company’s response as ‘commendable’, particularly in its disclosure and bug hunting practices, but added a caveat. “From the standpoint of any business handling large amounts of data, the use of third-party hosting services within cloud computing environments is an unavoidable reality of the modern era,” said Heid. “The attack perimeter of a network goes beyond the organisation itself and is often intertwined with a collection of third-party vendors.

“In addition to making use of a continuous monitoring service for all external assets is an important part of understanding the scope, implementing a bug bounty reporting program will go a long way in making sure there’s always an ‘extra set of eyes’ on assets of value.”

You can take a look at the Capital One page dedicated to the incident here.


Global IaaS market increased by 31.3% in 2018 to $32.4bn


Daniel Todd

30 Jul, 2019

The infrastructure as a service (IaaS) market grew by 31.3% in 2018, the latest research from Gartner has revealed, rising from $24.7 billion in 2017 to $32.4 billion.

According to the research firm, Amazon continued to lead from the front last year, retaining its status as the number one vendor in the IaaS market, followed by Microsoft, Alibaba, Google and IBM.

In fact, the top five vendors increased their collective dominance, accounting for almost 77% of the worldwide market compared to less than 73% back in 2017.

“Despite strong growth across the board, the cloud market’s consolidation favours the large and dominant providers, with smaller and niche providers losing share,” commented Sid Nag, research vice president at Gartner. “This is an indication that scalability matters when it comes to the public cloud IaaS business.

“Only those providers who invest capital expenditure in building out data centres at scale across multiple regions will succeed and continue to capture market share. Offering rich feature functionality across the cloud technology stack will be the ticket to success, as well.”

Gartner added that market consolidation will continue through 2019, driven by the continued high rate of growth for the top providers. From 2017 to 2018, these big-name vendors experienced aggregate growth of 39% compared with 11% for all other providers for the period.

“Consolidation will occur as organisations and developers look for standardised, broadly supported platforms for developing and hosting cloud applications,” Nag said.

Accounting for almost half the global IaaS market alone, market leader Amazon racked up an estimated $15.5 billion of revenue in 2018, up 27% from the previous year, as it continues to “aggressively” expand into new IT markets via new services, acquisitions and growth of its core cloud business.

For the second-placed Microsoft, which delivers its IaaS capabilities via its Azure offering, revenue surpassed $5 billion last year, up from the $3.1 billion recorded in 2017.

Elsewhere, China’s dominant IaaS provider Alibaba Cloud recorded the strongest growth out of the big five vendors, expanding by 92.6% in 2018. Having built up an ecosystem of MSPs and independent software vendors, its success has been driven by aggressive R&D investment in its portfolio of offerings, with the firm also boasting the financial capacity to continue this trend and invest in global expansion.

Google placed in the fourth spot, growing 60.2% in revenue from 2017, with Gartner suggesting its cloud offering is “something to keep an eye on” thanks to its new leadership focus on customers and the enterprise.

“As the cloud business continues to gather momentum and hyperscale cloud providers consolidate the market, product managers at cloud MSPs must look at other ways to differentiate, such as focusing on vertical industries and getting certified in the hyperscale cloud provider partner programmes in order to drive revenue,” Nag commented.

How data and analytics benefits need to be driven by cultural change

Managing big data apps is a challenge for many IT organisations. Moreover, chief data officers (CDOs) and their data and analytics (DA) teams are not achieving the best balance required to deliver superior performance, according to the latest market study by Gartner.

"CDOs are generally focused upon the right things, but they do not have the right mix of activities," said Debra Logan, vice president at Gartner.

Data and analytics market development

The Gartner survey found that while the creation of a data-driven culture was ranked the number one critical factor to the DA team, there were conflicting rankings for technical and nontechnical activities (data integration and data skills training), and strategic and tactical activities (enterprise information management [EIM] program and architecting a DA platform).

While the implementation of a DA strategy was ranked the number three most-critical success factor by 28% of CDOs, another strategic activity – creating a data literacy program – was ranked only 12th.

This was despite the fact that, in the same survey, ‘poor data literacy’ was rated the number one roadblock to creating a data-driven culture and realising its business benefits.

"The low ranking of strategic activities can be explained because the majority of organisations are at maturity level 3 or higher for EIM and business intelligence and analytics," said Logan.

While the survey shows that information governance is important, especially master data management (MDM), CDOs should never lose sight of the business outcomes they are trying to achieve. Focusing exclusively on governance, even MDM, is not enough to succeed as a CDO.

A majority of CDO respondents rated machine learning (ML) and artificial intelligence (AI) as critical at 76% and 67%, respectively. 65% of respondents were using or piloting ML, while 53% were using or piloting AI.

However, a relatively small percentage of CDOs that were surveyed are already using or piloting smart contracts (18%) or blockchain (16%).

In terms of measuring the value of their organisation’s information and data assets, only 8% of CDOs were measuring the financial value of DA.

45% of CDOs reported they produce some data quality metrics – such as accuracy, completeness, scale and usage – while 29% said they measure the impact of key information and data assets on business processes, such as KPIs.

The Gartner survey also found that the majority of CDOs generated value from information assets to improve internal processes (60%) and increase the value of products and services (57%), with a focus on efficiency.

Outlook for DA applications innovation

Half of CDOs reported a focus on enhancing new offerings by innovating with information. Other means to realise value from information assets also lagged. 19% of CDO respondents were selling or licensing information via data brokers or online marketplaces and only 17% were selling or licensing to others for cash.

Overall, respondents using information and data assets to generate indirect economic benefits were more likely to report superior organisational performance when engaged in improving or developing new offerings, in increasing the value of their products or services, and in exchanging information with business partners for goods, services or favourable contract terms.


From the Peloton to the Cloud: How data keeps the Tour de France running


Steve McCaskill

30 Jul, 2019

To suggest the Tour de France is the most famous cycling race in the world borders on understatement. Founded in 1903 as a vehicle to sell more newspapers, the event attracts millions of spectators from around the world on television and at the roadside every summer.

In the early years of the race, fans would find out the result of each stage in the following day’s newspaper. But as times changed, this became increasingly unacceptable.

The first live radio broadcast of the race aired in the 1920s and highlights were made available to television and cinema newsreels the day after a stage. In 1948, the Tour was broadcast live on television, although only certain stages were shown initially.

A complex operation

Fast-forward to 2019 and public demand has changed once again. Every stage is shown live, while the rise of the smartphone and social media means fans are demanding more coverage and more information about every facet of the race.

The Tour is a national obsession in France and is of huge cultural importance. But its organiser, the Amaury Sports Organisation (ASO), is acutely aware of the need to ensure the race retains its relevance and attracts new audiences – many of whom may be oblivious to the quirks of professional cycling.

As with so many other sports, digital is now a priority for the ASO. But for an event that comprises 3,408km across 21 stages and is contested by 176 riders, it’s an extremely complex operation. Big Data, artificial intelligence (AI) and the cloud are all integral to the Tour de France’s digital transformation.

Increasing engagement

In 2015, the ASO partnered with NTT to improve the level of data that can be extracted from the bikes and the course, which can then be analysed and turned into insights that drive engagement through race commentary and on digital platforms.

The Tour is littered with jargon such as domestique and peloton, while its race rules and cycling tactics differ from those of many other sports. These need to be explained to casual observers, while more seasoned fans will want in-depth analysis from television coverage. Fans at the roadside, meanwhile, will want real-time stats to improve their experience.

Data is therefore key to engaging all groups. Organisers have been able to record data such as bike speed and position for ten years but lacked the ability to collect it in real time, making it relatively uninteresting for the media.

“In 1999, the ASO added trackers because it knew sport would become digitised. The issue was the underlying network problem – how do you get the data in real time?” says Noelani Wilson, part of the NTT team at the Tour de France.

Previously, the only data available to spectators was a chalkboard carried on a motorcycle featuring approximate time gaps calculated by a stopwatch.

“It was the most rudimentary form of data collection and only catered to people watching on TV,” Wilson explains. “The fans got to see some of it from the roadside but it didn’t reach the masses and didn’t capture the experience of the Tour.”

Solving the network challenge

Once NTT got on board in 2015, it managed to find a way to extract this information. Every bike has a mobile phone-sized device fitted underneath the seat, housing a GPS chip, radio and battery. A reading is taken every second from every rider and analysed in the cloud.

The radio transmitters on the bikes, and on the race vehicles, create a mesh network to transmit the readings using white spaces – gaps in spectrum reserved for television. Because the RF equipment used is low power, the risk of any interference between it and the TV broadcasts is low.

The live readings are sent from the bicycles to helicopters used to provide television pictures and then relayed to an antenna on a cherry picker located next to NTT’s tech truck at the stage finish line.

Orange, another partner of the ASO, has committed to installing fibre at every finish location. This means NTT can send the data – coming in at 150 million data points per stage – to the cloud for analysis within 400ms of receiving it.

The power of the cloud

As the data comes in, it’s sent to data centres in Amsterdam where NTT’s algorithms can get to work.

Aside from real-time information for the broadcasters, the data is visualised for social media and digital platforms. This includes heat maps that visualise sprint finishes better, average speed graphics, and animations. A big moment for the team was in 2015 when a visual showing the speeds of riders involved in a major crash went viral. All of this is posted on a dedicated “Le Tour Data” Twitter account.

“We put the data through to different channels depending on who needs it,” explains Wilson. “TV needs real time data so there is minimal visualisation.

“We pilot new innovations on social media as it’s a quick way to see if something works: It helps fans understand more and adds context to the race. Once we’ve got something right, we’ll send it to broadcast.”

Data is also fed to the official website and mobile application of the Tour de France, meaning roadside spectators also benefit. For example, the app uses a smartphone’s GPS and pairs it with race speed data to give the user an accurate prediction of when the race will reach their location.
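
The underlying calculation is simple. A toy sketch, with invented numbers and Python standing in for whatever the app actually runs, might look like this:

```python
# Toy sketch of the roadside ETA feature: pair the fan's position along the
# route with the race's current position and average speed. All figures are
# invented for illustration.
race_position_km = 142.0       # how far along the stage the peloton is
spectator_position_km = 163.5  # where the fan is standing on the route
average_speed_kmh = 43.0       # current average speed from the trackers

gap_km = spectator_position_km - race_position_km
eta_minutes = gap_km / average_speed_kmh * 60
print(f"Peloton expected in roughly {eta_minutes:.0f} minutes")
```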

The tech truck, meanwhile, is located in the broadcasting zone at the finishing line, meaning conversations with the host broadcaster are easy.

Thanks to the cloud, there is also much less equipment on site than there would be otherwise. Whereas previously multiple systems and myriad cables would have needed to be installed, all the tech truck needs to operate are screens, computers and a link to the fibre network.

Machine learning

The cloud also gives NTT the ability to experiment with machine learning. For example, the “Man v Machine” algorithm crunches rider and stage data to predict the ten most likely winners of a certain stage.

It has a 70% success rate but will never be perfect because it’s impossible to predict events like crashes or a sprinter getting boxed in near the finish.

This was in evidence during Stage 19 in 2019, when NTT predicted Frenchman Thibaut Pinot would win the stage, only for him to withdraw from the race that day due to injury. In fact, there was no winner at all because organisers abandoned the stage part way through due to a hailstorm causing flooding and mudslides – a freak occurrence that could never be reasonably accounted for.

NTT also provides advanced video analytics for the broadcasters. It’s possible to overlay positioning data onto a video feed in real time, helping commentators to identify riders in the peloton or to show the speeds of riders in an attack.

For this feature to be useful, it was necessary to introduce three-dimensional tracking so riders descending a winding road could be placed in the same group – one of the biggest challenges that NTT faced.

“It’s about understanding patterns,” says Etienne Reinecke, NTT Group CTO. “Machine learning can do far more than what we can. Over the course of a Tour de France, the software makes 100 trillion decisions. You can’t do that with people. Getting real time accuracy through data is huge.”

Machine learning is now being used to predict the probability of key events. “Le Buzz” analyses video footage to predict attacks, a change in pace or even a crash. For example, an accident might be more likely on cobbled roads or near a feeding station.

The organisational edge

This raises a question about ethics. If NTT can predict the likelihood of a crash, then shouldn’t it warn the riders?

“We’re not interfering in the race,” replies Reinecke. “We send data to the ASO, but racing is always dangerous.”

Away from fan engagement, the data helps the ASO manage the race. As mentioned, the Tour is constantly moving and organisers have to make on-the-fly decisions without having a full idea of what’s happening across the race.

NTT provides the data to official race cars through an in-app dashboard so they can make more informed decisions. Rather than sending data to Amsterdam, NTT makes use of edge computing to improve reliability and lower latency even further.

“Because we don’t want to rely on the overall processing route, we put a smaller version of the dashboard on a car with an edge processor,” explains Reinecke.

An example of this in previous years was when organisers saw that Mark Cavendish, the holder of the green jersey (awarded to the best sprinter), was in danger of missing the cut after a difficult mountain stage.

The ASO couldn’t have a situation when the jersey holder was forced to leave the race, so it adjusted the cut-off point to allow more riders to finish the stage.

Looking to the future

The partnership has been a success for both parties. For NTT, the Tour de France is a useful testbed for new innovations and a demonstrator for what it can do for other businesses, while the ASO has dramatically expanded its digital reach.

Since 2015, the Tour de France has increased social media followers from 2.7 million to 8 million, while video news viewers have increased 1,000%. Television viewing figures are stable, but there is a recognition that the ASO needs to move beyond the traditional broadcast revenue model in the future. Digital platforms – and the monetisation of data – are central to that goal.

However, the amount of data that the ASO is able to collect will likely be dictated by politics and ethics. The biggest question is who owns the data. Is it the ASO, because it organises the race, or the teams because they compete? Maybe it belongs to the individual riders? It’s a question that will likely be raised in other industries – and other walks of life – in the near future.

The two parties have extended their partnership until at least 2024 and NTT is already working on additional innovations. It has held internal Code de France hackathons to identify new ways to drive engagement, as well as building a prototype augmented reality (AR) application that overlays real-time positioning data on a 3D model of the stage topography.

As part of their future plans, NTT and the ASO want to devise a “smart stadium” concept for an event that takes place on ordinary roads.

“Analytics, data and AI are leading cycling from analogue era into a digital era,” Reinecke says.

For a race that initially started as a publicity stunt to sell more copies of L’Auto, it’s no surprise that the Tour de France still yearns for coverage in the digital age.


VMware strikes public cloud partnership with Google Cloud


Keumars Afifi-Sabet

30 Jul, 2019

Google Cloud Platform (GCP) will support VMware workloads as part of a partnership between the two companies to generate additional options for customers looking to run a hybrid cloud strategy.

Until now, Google’s cloud arm was the only major public cloud provider not to support VMware. From later this year, however, enterprise customers will be able to run VMware workloads on the platform.

The Google Cloud VMware Solution, as it’s dubbed, will use VMware’s software-defined data centre tools – NSX networking, vSAN storage and vSphere compute – running on GCP and managed through CloudSimple.

The partnership has not yet been formally announced, a spokesperson told Cloud Pro, but is being widely reported by a host of US titles including Bloomberg.

VMware customers will benefit from the flexibility to move workloads from their own servers to the public cloud while keeping their existing VMware tools, policies and practices, according to Google Cloud CEO Thomas Kurian.

VMware’s customers will also be given access to Google’s artificial intelligence (AI), machine learning and analytics tools, as well as being able to deploy their apps to regions where Google has data centres. Moreover, these enterprises will be able to run networking tools through GCP as well as virtualisation software.

The partnership between GCP and VMware is similar in nature to other agreements struck between the virtualisation firm and rival public cloud providers, including Amazon Web Services (AWS).

These two companies, for instance, struck an agreement in late 2017 in which businesses could migrate their processes and apps to the public cloud. This was extended to Europe in March last year.

In April, meanwhile, Microsoft introduced native VMware support for its Azure cloud platform. The announcement meant customers were able to run their workloads in native environments, also through tools like vSphere, vSAN, vCenter and NSX, with workloads ported to Azure with relative ease.

VMware’s latest partnership with GCP points to its strengthening position in the public cloud arena, as it aims to offer greater flexibility to its enterprise customers.

How big data can give you a competitive edge in sports


Cloud Pro

30 Jul, 2019

When the athletes dramatised in the 1981 Oscar-winning film Chariots of Fire were competing nearly a century ago, a stopwatch was one of the few devices producing data to measure their sporting achievement. But these days sport is all about measurement and analysing the data it produces. Whether it’s tracking your location, heart rate, oxygen saturation, or nutrition, huge amounts of information are being collected from athletes, including amateurs. But professionals in particular are finding that data collection and analysis could be what gives them the edge in competition.

Sports sensors sit where two of the biggest trends in contemporary computing meet: big data and the Internet of Things. The latter is a significant driver of the former. On the one hand, you need the connected devices that can track the relevant parameters to assess the factors behind sporting excellence and measure improvements. On the other hand, you need powerful data analytics to take the information produced, make sense of it, find trends, and help inform how athletes can do better.

Sales of running watches alone are growing five per cent every year, according to MarketWatch. These have now gone well beyond pedometer wristbands like the original Fitbit, and can include a GPS, heart rate monitor, and even a pulse oximeter to measure blood oxygen levels. They can also link wirelessly to cadence sensors in running shoes and on bikes, monitor your sleep patterns, and then automatically transfer the data collected to the internet. Some sports watches can use an accelerometer to detect which stroke you are using when swimming and when you push off at the beginning of a length, so the number of lengths you swim can be counted automatically.
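
The push-off trick is essentially peak detection. The toy sketch below counts a new length each time total acceleration spikes above a threshold after settling back down; the threshold and samples are invented purely for illustration.

```python
# Toy sketch: count swimming lengths from accelerometer samples by spotting
# the acceleration spike of each push-off. Values are invented for illustration.
accel_g = [1.0, 1.1, 1.0, 3.2, 1.2, 1.0, 1.1, 3.5, 1.1, 1.0]  # total acceleration, in g
PUSH_OFF_THRESHOLD = 2.5

lengths = 0
in_spike = False
for sample in accel_g:
    if sample > PUSH_OFF_THRESHOLD and not in_spike:
        lengths += 1      # rising edge = push-off at the start of a new length
        in_spike = True
    elif sample <= PUSH_OFF_THRESHOLD:
        in_spike = False

print("Lengths counted:", lengths)
```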

Even consumer-grade systems can tell you useful things about your exercise ability that can guide how you train, such as VO2 Max, which measures the maximum amount of oxygen a person can utilise during intense exercise. This provides an assessment of cardiovascular fitness and can help you track your progress getting fit as you implement a distance-running programme. However, whilst the amateur trend for tracking exercise is driving device ubiquity and the sheer volume of data, professionals have access to systems that can provide much greater levels of detail, and with it a real edge in performance.

Data gets results in the beautiful game

For example, data is being used to improve football performance by analysing opposing team strategy and finding ways to combat it. Scottish football team Hearts used information from the InStat database to predict that a high-pressing game using players who could keep the pressure on for many kilometres of running would help them beat Celtic – and it worked. They are not alone, as more than 1,500 clubs and national teams use InStat, giving the company information on more than 400,000 players.

But clubs also build up data on their own players using sophisticated devices from companies such as Catapult Sports that can collect up to 1,000 data points per second. Similarly, the STATsports Apex can calculate more than 50 metrics at once, such as max speed, heart rate, step balance, high metabolic distance and dynamic stress load. This goes beyond the pure numbers but adds interpretation about how this is affecting an individual athlete. The data is collected in the cloud for historical comparative use. Teams now use this information to help decide which players to purchase to achieve their objectives for the season, employing services like InStat and Opta to provide the details they need.

InStat collects data for football, ice hockey, and basketball, whilst Opta includes these plus cricket, rugby union, baseball, golf, motorsport, tennis and handball amongst others. Although team games have many variables that can make player statistics only part of the picture, sports that focus on individual performance such as athletics can rely heavily on data to provide clear insights on how to aid improvement. This goes well beyond GPS tracking of outdoor events. Wearable devices with accelerometers, magnetometers and gyroscopes can track hundreds of data points to describe an athlete’s physical motion.

Stryd’s running shoe attachment can capture cadence (steps or cycles per minute) and ground contact. This can be used to analyse running style, which can be compared to previous sessions and other athletes. This information can spot nascent talent or help an athlete hone their style so they can emulate what makes the most successful sportspeople win. It can also detect warning factors like asymmetric movements that might cause a future injury or imply an impending one. EliteHRV’s sensors can provide high levels of detail on heart rate variability to see the physical effects of different levels of performance, so that athletes can recover adequately from their sessions and not over-train.

The secrets of the perfect golf swing

Another individual sport that is gaining considerable benefit from wearable sensors and data analytics is golf. Any sport using a bat, racquet or club can gain benefit from analysing a player’s swing, but in golf, the swing and body posture are particularly constrained, without also having to take into account additional factors like cross-court movement or ball spin, although atmospheric conditions will have an effect. Systems collecting golf performance data include Opta and ShotLink. The latter has results data dating back to 1983 and tracks 93 events a year.

GolfTEC, in contrast, is more focused on how an individual achieves their performance. The company has developed a SwingTRU Motion Study database of 13,000 pro and amateur golfers that includes information on 48 different body motions per swing. The analysis found six key areas that indicate an excellent player. These factors were discovered by correlating swing data with performance. Similarly, TrackMan uses cameras to analyse a player’s swing to aid training.

Rather than just analysing the past, predicting the future is where the application of big data analytics to sport will prove particularly valuable. As with every area of big data analytics, this will be dramatically affected by the application of AI and machine learning. Any massive store of unstructured data can potentially benefit from AI technology, which can help structure the data and find patterns in it proactively. First you feed in performance data, physical metrics during the activities, nutrition, sleep, atmospherics, plus anything else available. Then AI-empowered analytics will look for patterns that could provide strategies that make a difference, particularly when the margins for winning can be so small.

The systems we’ve discussed here are just the beginning. Data-driven sports are still only in their infancy, with much more to come in the next few years to help athletes find a competitive edge. Sport is just one area where big data and analytics are having a major influence, too. Healthcare, smart cities, and our understanding of the natural world are all seeing dramatic contributions.
