Exploring the 2019 cloud computing jobs market: Salaries, locations, and the best companies to work for

  • $146,350 is the median salary for cloud computing professionals in 2018
  • There are 50,248 open cloud computing positions in the U.S. today from 3,701 employers, and 101,913 open positions worldwide
  • Oracle, Deloitte and Amazon have the highest number of open cloud computing jobs today
  • Java, Linux, Amazon Web Services (AWS), software development, DevOps, Docker and infrastructure as a service (IaaS) are the most in-demand skills
  • Washington, DC-Arlington-Alexandria, VA; San Francisco-Oakland-Hayward, CA; New York-Newark-Jersey City, NY; San Jose-Sunnyvale-Santa Clara, CA; and Chicago-Naperville-Elgin, IL are the top five metro areas for cloud computing jobs today and will be in 2019

Demand for cloud computing expertise continues to grow rapidly and will accelerate in 2019. To better understand the current and future direction of cloud computing hiring trends, I utilised Gartner TalentNeuron, an online talent market intelligence portal with real-time labor market insights, including custom role analytics and executive-ready dashboards and presentations. It also supports a range of strategic initiatives covering talent, location, and competitive intelligence.

Gartner TalentNeuron maintains a database of more than one billion unique job listings and collects hiring trend data from more than 150 countries across six continents, acquiring 143GB of raw data daily. In response to many readers’ requests for recommendations on where to find a job in cloud computing, I contacted Gartner to gain access to TalentNeuron.

Key takeaways include the following:

$146,350 is the median salary for cloud computing professionals in 2018

Cloud computing salaries have soared in the last two years, up $22,050 from 2016’s median of $124,300. The following graphic shows the distribution of salaries for the 50,248 cloud computing jobs currently available in the U.S. alone. Please click on the graphic to expand for easier reading.

The Hiring Scale is 78 for jobs that require cloud computing skill sets, with the average job post staying open 46 days

The higher the Hiring Scale score, the more difficult it is for employers to find the right applicants for open positions. Nationally an average job posting for an IT professional with cloud computing expertise is open 46 days. Please click on the graphic to expand for easier reading.

Washington, DC – Arlington-Alexandria, VA leads the top twenty metro areas that have the most open positions for cloud computing professionals today

Gartner TalentNeuron supports mapping the distribution of job volume, salary range, candidate supply, posting period and hiring scale by Metropolitan Statistical Area (MSA), state or county. The following graphic shows the distribution of talent or candidate supply: these are the markets with the highest supply of talent with cloud computing skills.

Oracle (NYSE: ORCL), Deloitte and Amazon (NASDAQ: AMZN) have the highest number of open cloud computing jobs today

IBM, VMware, Capital One, Microsoft, KPMG, Salesforce, PricewaterhouseCoopers (PwC), U.S. Bank, Booz Allen Hamilton, Raytheon, SAP, Capgemini, Google, Leidos and Nutanix all have over 100 open cloud computing positions today.

Docker and Microsoft team up in initiative for greater container control

Docker has its DockerCon event in Barcelona this week while Microsoft is running its online Connect() conference – and the two have combined on a new product which aims to help empower developers working in container environments.

The product, known as Cloud Native Application Bundles (CNAB), is an open source specification for packaging and running distributed applications, enabling a single all-in-one packaging format across any combination of environments.
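The draft specification centres on a JSON bundle descriptor. A minimal sketch, built here in Python, might look like the following – the field names follow the draft spec, but the schema may change before it is finalised, so treat this as illustrative only:

```python
import json

# A minimal CNAB-style bundle descriptor, sketched from the draft
# specification; exact field names may differ in the final spec.
bundle = {
    "name": "helloworld",
    "version": "0.1.0",
    "description": "An illustrative distributed application bundle",
    "invocationImages": [
        {
            # The invocation image carries the installation logic
            # (e.g. Terraform plans, Helm charts) for the app.
            "imageType": "docker",
            "image": "example/helloworld-cnab:0.1.0",
        }
    ],
    "images": [],  # application images the bundle manages
}

descriptor = json.dumps(bundle, indent=2, sort_keys=True)
print(descriptor)
```

A runtime that understands the format can then install the same bundle to Kubernetes, Swarm, or an air-gapped environment without repackaging.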

As containerisation matured, the theory of a truly distributed application – one which can run on multiple computers at the same time, whether on servers or in the cloud – edged closer. Yet, as is always the way when technologies mature, there are bumps in the road, and the rise of Kubernetes meant this vision took a step back. Microsoft and Docker are trying to steer the ship back in the right direction; think of it as a container for containers.

“By design, it is cloud agnostic,” wrote Matt Butcher, Microsoft principal engineer, in a blog post explaining the move. “It works with everything from Azure to on-prem OpenStack, from Kubernetes to Swarm, and from Ansible to Terraform. It can execute on a workstation, a public cloud, an air-gapped network, or a constrained IoT environment. And it is flexible enough to accommodate an array of platform needs, from customer-facing marketplaces to internal build pipelines.”

Patrick Chanezon, chief developer advocate at Docker, put it in similar terms. “As more organisations pursue cloud-native applications and infrastructures for creating modern software environments, it has become clear that there is no single solution in the market for defining and packaging these multi-service, multi-format distributed applications,” Chanezon wrote. “Real-world applications can now span on-premises infrastructure and cloud-based services. Each of these needs to be managed separately.”

For Docker, this makes for an interesting development considering the various avenues its competitors have recently gone down. Heptio’s acquisition by VMware and Red Hat’s purchase by IBM have led some to consider whether Docker has missed the boat. At the very end of last year, Chris Short, product marketing manager at Ansible, wrote a personal blog arguing that 2017 would be seen as ‘the year Docker, a great piece of software, was completely ruined by bad business practices leading to its end in 2018.’

Perhaps this collaborative aspect is its future? “At the heart of great developer innovation is community, and that’s why open source is so important,” added Scott Guthrie, EVP of Microsoft’s cloud and enterprise group. “We’re committed to empowering developers at every stage of the development lifecycle – from ideation to collaboration to deployment.

“Our announcements are not only about open-sourcing more of our own products for community collaboration and contribution, but how we are also actively investing in collaborating on initiatives with others.”

You can take a look at the draft specification for CNAB here.

Interested in hearing industry leaders discuss subjects like this and sharing their Cyber Security & Cloud use-cases? Attend the Cyber Security & Cloud Expo World Series events with upcoming shows in Silicon Valley, London and Amsterdam to learn more.

Kubernetes flaw could allow hackers to gain dangerous admin privileges if left unpatched


Bobby Hellard

5 Dec, 2018

A serious vulnerability in Kubernetes could enable an attacker to gain full administrator privileges over the open source container system’s compute nodes.

The bug, CVE-2018-1002105, is a privilege escalation flaw in Red Hat’s OpenShift open source Kubernetes platform and was spotted by Darren Shepherd, founder of Rancher Labs.

The flaw effectively allows hackers to gain full administrator privileges on Kubernetes compute nodes, the physical and virtual machines on which Kubernetes containers run.

Once those privileges have been gained, hackers can steal data, inject corrupt code or even delete applications and workloads.

The flaw can be exploited in two ways. The first is through a ‘normal’ user gaining elevated privileges over a Kubernetes pod – a group of one or more containers that share network and storage resources and run in a shared context – from which they could wreak havoc.

The second involves the exploitation of API extensions that connect a Kubernetes API server to a backend server. While a hacker will need to craft a tailored network request to harness the vulnerability in this context, once done they could send requests over the network connection to the backend of the OpenShift deployment.

From there the attacker has ‘cluster-level’ admin privileges – clusters are a collection of nodes – and therefore escalated privileges on any node. This would allow said attacker to alter existing brokered services to deploy malicious code.

Because connections to the Kubernetes API server are authenticated with security credentials, malicious connections and unauthenticated users with admin privileges appear above board. This makes the flaw and its exploitation difficult to identify, as would-be hackers appear as authorised users.

According to the advisory on GitHub, this makes the vulnerability a critical flaw, mostly because it allows anyone with access to cause damage, but also because of its invisibility: abusing the flaw leaves no traces in system logs.

“There is no simple way to detect whether this vulnerability has been used. Because the unauthorized requests are made over an established connection, they do not appear in the Kubernetes API server audit logs or server log. The requests do appear in the kubelet or aggregated API server logs, but are indistinguishable from correctly authorized and proxied requests via the Kubernetes API server.”

There are fixes and remedies for this flaw, but they mostly come down to upgrading the version of Kubernetes you run – now. Patched versions include v1.10.11, v1.11.5, v1.12.3 and v1.13.0-rc.1, and it is recommended that you stop using Kubernetes v1.0.x-1.9.x altogether.

For those that cannot move up, there are mitigations: suspend use of aggregated API servers and remove pod permissions from users that should not have full access to the kubelet API.
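As a sketch of that upgrade check, the patched releases listed above can be encoded as the minimum safe version in each maintained minor series (this assumes simple ‘v1.x.y’ version strings; a production check should query the cluster’s real version endpoint rather than a hard-coded table):

```python
# Decide whether a running Kubernetes version is patched against
# CVE-2018-1002105, using the patched releases named in the article.

def parse(v):
    # Strip the leading "v" and any pre-release suffix, e.g. "-rc.1".
    core = v.lstrip("v").split("-")[0]
    return tuple(int(part) for part in core.split("."))

# First patched release in each maintained minor series.
PATCHED = {(1, 10): (1, 10, 11), (1, 11): (1, 11, 5), (1, 12): (1, 12, 3)}

def is_patched(version):
    major, minor, patch = parse(version)
    if (major, minor) >= (1, 13):          # v1.13.0-rc.1 and later
        return True
    fixed = PATCHED.get((major, minor))
    if fixed is None:                      # v1.0.x-1.9.x: stop using
        return False
    return (major, minor, patch) >= fixed

print(is_patched("v1.10.11"))  # True
print(is_patched("v1.12.2"))   # False: upgrade to v1.12.3 or later
print(is_patched("v1.9.6"))    # False: series no longer maintained
```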

How to choose the right cloud supplier for your business


David Mitchell

13 Dec, 2018

In the last century, businesses wanting to centralise their data storage had to invest in an on-site file server, and maintain and manage it themselves. Nowadays, you can save a great deal of money and simplify your management overhead by taking advantage of cloud storage.

In fact, “cloud storage” undersells the service, as today’s cloud providers offer much more than a remote repository for your data. There’s a wealth of features on offer to allow your staff to easily and safely share documents, as well as collaborate with colleagues all over the world, using their own choice of device.

Syncing services, meanwhile, ensure that employees are working on the latest versions of documents, without having to wait around for files to download on demand. Big email attachments also become a thing of the past, with most providers allowing you to send secure download links to team members or clients outside of your business.

Unfortunately, there are a ton of options out there, with each including a diverse range of sharing and collaboration features. In order to make sense of it all, we’ve put together a quick guide on what to look out for in order to make the right buying decision.

Making plans

All cloud providers offer a range of storage plans, so you can choose the one that best fits your requirements – and your budget. There’s normally a free tier, but these invariably have capacity restrictions, along with limits on the size of files you can upload. Go with one of these and you might regret it when you suddenly find that crucial files can’t be synced in the middle of a project.

1&1 HiDrive provides a public cloud folder for team members to share

A paid service will give you much more data headroom, and will also include an administration console, allowing you to decide precisely who is allowed to access your cloud storage, and enforce file-level access controls. Tiered administration is a good thing to look out for as well: this allows selected employees to be granted some administrative rights, so they can manage features such as access requests, adding new users and password changes.

For all these reasons, we strongly recommend you go with a paid service, and prices start at just a few pounds a month. If your business needs to work with unusually large volumes of data, consider moving up to an advanced plan that provides unlimited cloud storage for all your users. You might be surprised at how affordable these services are.

Cost controls

We’ve emphasised the low cost of cloud storage, but it’s still worth weighing up your needs and making sure you’re not buying more gigabytes than you need – any more is just a waste of money.

Think about who needs access, too. Pricing is based on the number of users, and not everyone in your organisation needs to be included. Most plans can be dynamically upgraded if you want to add users in future, but downgrading isn’t always so simple: it’s worth checking before you sign up.

If you choose a plan with per-user limits on cloud storage, it’s a good idea to publish a policy as to what employees can store there to avoid filling up your storage allocation with personal files. If you can’t curb their enthusiasm, choose a provider that can enforce account usage controls such as storage quotas.

On that subject, a word of warning: we’ve seen starter business plans use slightly misleading advertising, implying that they’re offering a certain amount of storage per user when in fact the quoted figure is the total amount of storage for your whole business. If a plan doesn’t specifically state that a storage allowance is per user, it probably isn’t.

You may also be able to save money by making a longer commitment. All providers will accept payment on a monthly basis, but there are substantial discounts on offer if you sign up for a yearly contract.
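As a worked example of that trade-off, suppose a plan costs £8 per user per month on monthly billing, or the equivalent of £80 per user per year on an annual contract (both figures are hypothetical, purely for illustration):

```python
# Hypothetical pricing: compare monthly vs annual billing for a team.
users = 10
monthly_rate = 8.0    # £ per user per month (hypothetical)
annual_rate = 80.0    # £ per user per year  (hypothetical)

cost_monthly = monthly_rate * users * 12   # pay-as-you-go for a year
cost_annual = annual_rate * users          # yearly contract up front
saving = cost_monthly - cost_annual

print(f"Monthly billing: £{cost_monthly:.2f} per year")
print(f"Annual billing:  £{cost_annual:.2f} per year")
print(f"Saving: £{saving:.2f} ({saving / cost_monthly:.0%})")
```

At these example rates the annual contract saves around 17% – always worth checking against each provider’s actual price list before committing.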

Agents of change

Sync agents keep your files up to date, and some can apply bandwidth limits

Real-time file syncing makes cloud storage fantastically convenient, and getting set up is easy. In most cases, the administration console can be used to send an email invitation to users, containing a link they can click to download a synchronisation agent and join the collaboration party.

The Box Drive agent provides easy access to all your files in the cloud

Note, though, that the sign-up process requires the user to provide a password for their cloud account. All the usual caveats apply here: the password should be secure and not easily guessed. Good providers offer admin controls that allow you to set policies that ensure passwords adhere to a specific format and strength.
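A sketch of the kind of format-and-strength check such a policy might enforce follows; the specific rules (minimum length, required character classes) are assumptions for illustration, not any vendor’s defaults:

```python
import re

# Illustrative password policy: minimum length plus one character
# from each of four classes. Real admin consoles make these rules
# configurable, and rules vary by provider.
def meets_policy(password, min_length=12):
    checks = [
        len(password) >= min_length,
        re.search(r"[a-z]", password),         # a lowercase letter
        re.search(r"[A-Z]", password),         # an uppercase letter
        re.search(r"\d", password),            # a digit
        re.search(r"[^A-Za-z0-9]", password),  # a symbol
    ]
    return all(bool(c) for c in checks)

print(meets_policy("hunter2"))             # False: too short, too simple
print(meets_policy("Corr3ct-Horse-Batt"))  # True
```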

Once the agent is installed and the user is logged in, the latest files will start automatically downloading to each user’s computer. If you’re working with lots of large documents, look for an agent that lets users decide what should sync automatically and what should live in the cloud. Be warned, though: this can leave staff stymied if they lose internet access for any reason. We were one of many UK businesses hit by a recent Virgin Media service outage, and were unable to access our cloud files for over four hours.

The Box admin portal gives an overview of all file sharing activities

Assuming all’s working well, cloud services also provide personal web portals, so each user can view their data, upload or download files and share folders with team members. File versioning is a very worthwhile feature, allowing deleted files to be recovered or reverted to an earlier version.

Tresorit provides a rich desktop app with easy access to documents

One last valuable feature is the ability to send large files to other users or external clients simply by creating a web link in the portal and emailing it to them. For increased security, look for a service that allows you to password protect links, and apply download limits and expiry dates.
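The enforcement logic behind such protected links can be sketched as follows; the class and field names here are hypothetical for illustration, not any provider’s actual API:

```python
from datetime import datetime, timedelta

# Hypothetical share link with a download limit and expiry date.
class ShareLink:
    def __init__(self, url, max_downloads=5, valid_days=7, now=None):
        now = now or datetime.utcnow()
        self.url = url
        self.max_downloads = max_downloads
        self.expires_at = now + timedelta(days=valid_days)
        self.downloads = 0

    def allow_download(self, now=None):
        # Refuse once the link has expired or the limit is reached.
        now = now or datetime.utcnow()
        if now > self.expires_at or self.downloads >= self.max_downloads:
            return False
        self.downloads += 1
        return True

link = ShareLink("https://example.com/s/abc123", max_downloads=2)
print(link.allow_download())  # True (1st download)
print(link.allow_download())  # True (2nd download)
print(link.allow_download())  # False: limit reached
```

Password protection would add a hash comparison before the checks above; the principle is the same.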

Some providers also enable external clients to send files to your cloud repository by invitation, without needing their own credentials.

Cloud nine

Business cloud storage services bring a wealth of benefits. They can save you money and increase productivity, and many offer integration with other apps and cloud services such as Office 365.

It’s worth noting that mobile support, although generally good with apps for Android and iOS, can vary considerably in terms of capabilities, so it’s worth checking each provider to see which have the features you need.

IBM chosen to push cloud-native banking platform


Bobby Hellard

4 Dec, 2018

UK software firm Thought Machine has chosen IBM to accelerate the implementation of its cloud-native banking platform.

The system is called Vault and Thought Machine said it was created to give traditional banks with legacy systems and business constraints a platform to meet the demands of modern banking. The partnership has already had some success as Lloyds Banking Group has begun exploring the Vault platform.

The cloud-native core banking system allows banks to fully realise the benefits of IBM Cloud and has been developed to be highly flexible, giving banks the ability to quickly add new products, accommodate shifts in strategy or react to external changes in the market, according to Thought Machine.

“Most banks are constrained by their legacy core banking systems. Vault offers the first core banking platform built in the cloud from the ground up, massively scalable and with the flexibility to create and launch new products and services in days versus months, at unprecedented levels of cost and speed,” said Jesus Mantas, managing partner of IBM Global Business Services.

“Our relationship with Thought Machine enables us to provide a lower risk option for banks to reduce the cost of operations and increase customer service and speed to market.”

This alliance between Thought Machine and IBM Services will see the creation of a global practice headquartered in London, which the pair hope will bring together banking transformation and implementation expertise from existing consultants and new hires to support the demand from banks for the transformation of their core infrastructure.

“Building the new foundations for banking is Thought Machine’s mission and Vault is Thought Machine’s next-generation core banking platform. This specially designed cloud-native platform has been written to bring a modern alternative to current platforms that many banks around the world are struggling to maintain,” said Paul Taylor, CEO of Thought Machine.

IBM has plenty of experience with banks undergoing digital transformation and cloud migrations, with the tech giant being called in to help TSB as it endured an almost endless nightmare after a failed attempt to leave its legacy IT infrastructure behind.

A retrospective on Diane Greene’s tenure as Google Cloud CEO


Keumars Afifi-Sabet

4 Dec, 2018

News of Diane Greene’s departure from Google Cloud after three years in charge came as a surprise given the power and remit she was given to transform the business, and bring it in line with its competitors.

Back then, and to some extent even now, what is considered the ‘big four’ of cloud – Amazon Web Services (AWS), Microsoft Azure, IBM and Google Cloud – actually resembled more of a two-horse race between AWS and Microsoft Azure.

The VMware co-founder was brought in to change all this: not only to overtake closest rival IBM in terms of market share, but to grow the business to the extent it could reasonably compete with AWS and Azure. She was expected to seize the initiative in what could only be seen as an uphill struggle, as the wider company pivoted from hardware and advertising towards cloud services.

Chasing the pack

When Greene was first brought in to lead Google Cloud in 2015, the company’s market share stood at little over 4%, chasing the likes of AWS (31%), Azure (9%) and IBM (7%). But the challenge wasn’t seen as insurmountable by any means, despite Amazon and Microsoft having a near ten-year head start over Google, which had only started getting serious about cloud in 2016.

Since her appointment, the platform has grown from having “only two significant customers” in Spotify and Snapchat to a handful of major corporates and large enterprises. These include 20th Century Fox, HSBC, Verizon and Disney, with Netflix’s business (using Google Cloud for disaster recovery) a solid endorsement given its loyalty to AWS.

The rapid growth of Google Cloud’s case study base is seen by many as a testament to the platform’s superior technology against AWS and Azure. Greene, in her letter, said the company differentiates itself in areas such as security, AI, and its G Suite portfolio of workplace apps. She argued that this growth is reflected in a ten-fold increase in attendance at this year’s flagship Google Cloud Next conference in San Francisco compared with 2016.

The CEO said her organisation had worked hard to reform its approach after being subjected to harsh industry criticism when she first joined, confirming Google Cloud had taken regulator and analyst advice to heart. Ed Anderson, research VP and distinguished analyst at Gartner, told Cloud Pro during the event that the firm has proven it can deliver successfully and boasts a growing list of die-hard enterprise fans.

Thunderstorms on high

Despite its commercial wins, the firm has made slow progress in terms of market share, trailing at 3% versus AWS’ gargantuan 41.5% stake and Azure’s 29.4% (by application workloads). Why, then, is Google Cloud so optimistic as it moves into 2019 and fresh leadership under Oracle’s now-former chief Thomas Kurian?

That confidence comes from its ability to generate revenue. AWS may have carved out almost half of the market’s install base to date, but Google has proven successful at attracting high-value contracts. As a result, Google Cloud claimed in February to be the ‘fastest growing’ public cloud provider after declaring its first billion-dollar financial quarter. Based on this trajectory, and a reinvigorated strategy under Kurian, Google Cloud will hope to continue making sufficient progress.

And yet questions linger over the nature of Diane Greene’s departure, following a relatively short-lived tenure, with several reports pointing towards internal conflict as a key driver. The ex-VMware chief made clear in her letter that she had initially expected to spend just two years in the role, extending her stay to three – so could it be as simple as that?

Conflicting reports, however, suggest Greene was at the heart of major disagreements with Google CEO Sundar Pichai over acquisition strategy and military contracts, for instance. Meanwhile, dissatisfaction might have brewed over the amount of money Google was pouring into its cloud arm in return for very little payoff in terms of market share.

A major challenge for Google Cloud was an unclear direction, underlined by tensions between Greene and Pichai – particularly over the US Department of Defense’s Project Maven contract. As part of the deal, Google lent its AI technology to the Pentagon to analyse footage using computer vision algorithms and improve performance.

Sources claim Pichai was sympathetic to the protests, primarily led by more than 3,000 Google employees, while Greene resisted calls to break the Pentagon relationship as it was both a lucrative deal in itself, as well as a stepping stone to further government work.

Disagreements also brewed over sales strategy as Greene’s representatives increasingly joined other Google teams, like advertising and maps, to attempt to bundle cloud into wider offerings – with these efforts a source of frustration to these departments’ chiefs.

Google Cloud in 2019

Kurian now faces a daunting task, but one that will almost certainly be more closely guided by Google’s Pichai. Perhaps we will see the company shift focus somewhat once Greene departs early next year. What’s more certain is that Google Cloud will continue its mission to further establish itself as a reliable and committed enterprise cloud platform, though this is unlikely to be evidenced by any substantial market share gain.

Google Cloud CTO Brian Stevens told Cloud Pro that he wants to see the company shift away from corporate messages and instead focus on customer case studies. He believes the company’s future lies in its ability to let happy customers sell its platform – a smart move for a company that’s unlikely to chip away at AWS’ market share.

Google’s next stage of growth is therefore likely to involve consolidation of the share it does have, and the poaching of those bigger, prize-winning fish from the AWS and Azure pools.

How the ‘severe’ cloud skills gap will impact on company culture

Another day, another story focused on cloud skillsets – or the lack of them. A new survey from artificial intelligence platform provider OpsRamp has found the vast majority of respondents continue to struggle to find the right talent for cloud environments.

The research, which was conducted at Gartner’s Infrastructure, Operations and Cloud Strategies conference, found 94% of the 124 respondents were having a ‘somewhat difficult’ time finding candidates with the right technology and business skills to drive digital innovation. Nine out of 10 hiring managers polled said the digital skills gap was either ‘somewhat big’, ‘quite big’, or ‘huge.’

The survey findings appear in a report which focuses on how the battle for cloud talent is now well and truly joined. With the ominous subtitle ‘from a cloud-native skills gap to a full-blown crisis’, there are plenty of warnings in place.

Even when talent is available, it takes a while to secure it. A quarter of those polled said it takes more than three months to fill an open role in IT operations, DevOps, or IT engineering. The report cites Glassdoor figures which found more than 260,000 unfilled technology jobs cost the US economy almost $20 billion in 2017. Yet it is not for want of trying: 41% of organisations polled said they were ‘willing’ to pay competitively for skilled IT professionals, while 28% said they were ‘very willing.’

Ultimately, the biggest effect the report notes is on organisational culture. With a skills gap, companies are challenged by an unwillingness to change pre-existing attitudes towards technology more than anything else. Other concerns are a limited ability to adopt new technologies, as well as limitations in workforce productivity. “Unless veteran IT staff gain the core competencies to succeed in a digitally driven environment, organisations will not be able to confidently navigate their digital landscape,” the report noted.

Initiatives are taking place with regards to closing the skills gap. In October Cloud Academy announced the launch of two new products which aim to provide a greater picture of the jobs and skills landscape. Cloud Roster is a job roles matrix which analyses thousands of public job postings per week to provide trending tech skills, while Cloud Catalog provides a stack ranking of technologies based on developer community data.
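The kind of aggregation a tool like Cloud Roster performs can be sketched as a simple mention count over job postings. The postings and skill list below are made up purely for illustration; the real product’s methodology is not public in this detail:

```python
from collections import Counter

# Made-up postings standing in for the thousands a real crawler
# would analyse each week.
postings = [
    "Seeking DevOps engineer with AWS, Docker and Kubernetes",
    "Cloud architect: AWS, Terraform, Linux",
    "Backend developer, Java and AWS experience preferred",
]

TRACKED_SKILLS = ["aws", "docker", "kubernetes", "terraform", "linux", "java"]

# Count how many postings mention each tracked skill.
counts = Counter()
for post in postings:
    text = post.lower()
    for skill in TRACKED_SKILLS:
        if skill in text:
            counts[skill] += 1

for skill, n in counts.most_common(3):
    print(f"{skill}: mentioned in {n} of {len(postings)} postings")
```

Ranking skills by mention frequency over time is what turns raw postings into a trend signal.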

“We knew that a skills gap existed, but we didn’t truly understand its severity until now,” said Jordan Sher, director of corporate marketing at OpsRamp. “Enterprise IT leaders would like to make the leap to cloud-native technologies but are struggling to adjust their workforce to transform their digital DNA.”

The findings of this report may not be news to some executives. As this publication reported last month from a study by software provider Domo, almost three quarters (71%) of respondents said a lack of data access and skills could put their organisation at risk, but only one in five was willing to invest in training for existing staff or in recruiting employees with strong digital skills.

“Enterprises will need to invest in skill building and retraining programs to equip their internal teams to manage hybrid workloads effectively, given how expensive and difficult it is to hire the right external talent,” added Sher.


NHS patient records to be stored in AWS cloud platform


Connor Jones

3 Dec, 2018

EMIS Group, one of the UK’s major healthcare suppliers, will migrate one of its core services to Amazon’s cloud service.

Among other things, EMIS makes EMIS Web, a flagship product which 56% of all GPs in the country rely on to provide care to patients. The service will be migrated to Amazon Web Services (AWS) as EMIS-X, a new and optimised cloud-based version of the software.

Packed with new features, EMIS-X uses a range of new technologies, including AI-driven voice recognition to automatically interpret patient-clinician conversations and respond with appropriate data from the patient’s records or to provide suggestions for treatment.

“We see millions of hours currently spent by patients and staff in repeating information at each stage of the patient’s healthcare journey being eliminated and the management of medicines in pharmacy being revolutionised by better insight and more efficient services delivered through EMIS-X,” said Andy Thorburn, EMIS Group CEO in a statement.

It will also support a video consultation feature, following the rollout of Babylon, another NHS service of this type, allowing patients to have a GP appointment remotely without leaving their home.

But how will the firm safely transfer 40 million sensitive patient records to the new platform?

Speaking to IT Pro, a spokesperson said: “For security reasons, we cannot disclose exactly how the records will be migrated, but can report that the method is highly secure, fully certified, and has been used for migration of critical data by the U.K. and other government departments. Copying records will take only a few weeks, but EMIS will operate dual data sources (with bi-directional updating) for some time to ensure service continuity.”

The software, which is used by over 10,000 UK organisations, will reportedly migrate gradually on a module-by-module basis, which suggests EMIS is taking the protection of data seriously.

Thorburn emphasised the importance the firm is placing on data protection, citing the “unprecedented levels of protection for patient data, including strong encryption of sensitive data” that the service will provide.

“From the start, EMIS Group has led the way in interoperability and we have been working closely with clinicians and other customers during 2018 to develop EMIS-X. We believe it is the blueprint for the future of connected healthcare in the UK.”

Federated Appointments is another new feature that will be rolled out in the new cloud-based variant. It allows clinicians to more easily search for appointments, such as an MRI scan, at a location convenient for the patient, waving goodbye to appointments made in other counties with unrealistic travel times.

The feature also works across other healthcare software: if your local hospital doesn’t run EMIS-X but has the closest available appointment, that appointment can still be booked through your EMIS-X-equipped GP.
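The scheduling logic described above can be sketched as picking the earliest slot within a patient’s travel limit, regardless of which system the provider runs. The data layout and figures below are illustrative assumptions, not EMIS’s actual design:

```python
# Hypothetical federated appointment data:
# (provider, runs_emis_x, minutes_travel, days_until_slot)
slots = [
    ("City Hospital", False, 20, 2),
    ("County Imaging", True, 90, 1),
    ("Regional Clinic", True, 35, 5),
]

def best_slot(slots, max_travel_minutes=45):
    # Keep only slots within the patient's travel limit.
    reachable = [s for s in slots if s[2] <= max_travel_minutes]
    # Earliest appointment wins; the booking system bridges to
    # providers that don't run EMIS-X themselves.
    return min(reachable, key=lambda s: s[3]) if reachable else None

print(best_slot(slots))  # ('City Hospital', False, 20, 2)
```

Note that the winning provider here does not run EMIS-X, mirroring the cross-system booking the article describes.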

EMIS Web, its current software, allows medical professionals to make alterations to Electronic Patient Records (EPRs) whether they work in primary, secondary or specialist healthcare organisations, promoting more consistent care for patients between healthcare providers.

It’s not clear at this time how long it will take to implement the new cloud-based service, as the proposal needs to be approved by NHS Digital, but from what the company is saying, it sounds like it will be a gradual migration process.

The NHS has embarked on a massive digital transformation in the past year, delivering new technology as a result of increased funding in a variety of areas. The controversial Babylon chatbot app, the NHS app and the new multi-million pound application of AI to cancer detection and treatment have all made headlines in an effort to make healthcare in the UK more efficient.

“Late diagnosis of otherwise treatable illnesses is one of the biggest causes of avoidable deaths,” PM Theresa May said in a speech on the government’s industrial strategy. “The development of smart technologies to analyse great quantities of data quickly and with a higher degree of accuracy than is possible by human beings opens up a whole new field of medical research and gives us a new weapon in our armoury in the fight against disease.”

Earlier this year, the NHS also announced a slew of smaller, separate initiatives geared towards transforming aspects of the health service. These include a huge funding injection for local councils to help social care recipients and a commitment to use private medical data to augment its own patient records.

Why 86% of enterprises are increasing their IoT spending in 2019

  • Enterprises increased their investments in IoT by 4% in 2018 over 2017, spending an average of $4.6m this year.
  • 38% of enterprises have company-wide IoT deployments in production today.
  • 84% of enterprises expect to complete their IoT implementations within two years.
  • 82% of enterprises share information from their IoT solutions with employees more than once a day; 67% are sharing data in real-time or near real-time.

These and many other fascinating insights are from Zebra Technologies’ second annual Intelligent Enterprise Index (PDF, 25 pp., no opt-in). The index is based on criteria created during the 2016 Strategic Innovation Symposium: The Intelligent Enterprise, hosted by the Technology and Entrepreneurship Center at Harvard (TECH). “An Intelligent Enterprise is one that leverages ties between the physical and digital worlds to enhance visibility and mobilise actionable insights that create better customer experiences, drive operational efficiencies or enable new business models,” according to Tom Bianculli, Vice President, Technology, Zebra Technologies.

The metrics comprising the index are designed to interpret where companies are on their journeys to becoming Intelligent Enterprises. The following are the 11 metrics that are combined to create the Index: IoT Vision, Business Engagement, Technology Solution Partner, Adoption Plan, Change Management Plan, Point of use Application, Security & Standards, Lifetime Plan, Architecture/Infrastructure, Data Plan and Intelligent Analysis. An online survey of 918 IT decision makers from global enterprises competing in healthcare, manufacturing, retail and transportation and logistics industries was completed in August 2018. IT decision makers from nine countries were interviewed, including the U.S., U.K./Great Britain, France, Germany, Mexico, Brazil, China, India, and Australia/New Zealand. Please see pages 24 and 25 for additional details regarding the methodology.

Key insights gained from the Intelligent Enterprise Index include the following:

86% of enterprises expect to increase their spending on IoT in 2019 and beyond

Enterprises increased their investments in IoT by 4% in 2018 over 2017, spending an average of $4.6m this year. Nearly half of enterprises globally (49%) interviewed are aggressively pursuing IoT investments with the goal of digitally transforming their business models this decade. 38% of enterprises have company-wide IoT deployments today, and 55% have an IoT vision and are currently executing their IoT plans.

49% of enterprises are on the path to becoming an Intelligent Enterprise, scoring between 50 and 75 points on the index

The percentage of enterprises scoring 75 or higher on the Intelligent Enterprise Index posted the greatest gain of all categories in the last 12 months, increasing from 5% to 11% of all respondents. The majority of enterprises are improving how well they scale the integration of their physical and digital worlds to enhance visibility and mobilise actionable insights. The more real-time the integration unifying the physical and digital worlds of their business models, the better the customer experiences and operational efficiencies attained.

The majority of enterprises (82%) share information from their IoT solutions with employees more than once a day, and 67% are sharing data in real-time or near real-time

43% of enterprises say information from their IoT solutions is shared with employees in real-time, up from 38% in last year’s index. 76% of survey respondents are from retailing, manufacturing, and transportation & logistics. Greater reporting accuracy across supplier networks, improved product quality visibility and more real-time data from distribution channels are the growth catalysts companies competing in these industries need. These findings reflect how enterprises are using real-time data monitoring to drive quicker, more accurate decisions and be more discerning in which strategies they choose. Please click on the graphic to expand to view specifics.

Enterprises continue to place a high priority on IoT network security and standards with real-time monitoring becoming the norm

58% of enterprises are monitoring their IoT networks constantly, up from 49%, and a record number of enterprises (69%) have a pre-emptive, proactive approach to IT security and network management. It’s time enterprises consider every identity a new security perimeter, including IoT sensors, smart, connected products, and the on-premise and cloud networks supporting them. Enterprises need to pursue a “never trust, always verify, enforce least privilege” approach and are turning to Zero Trust Privilege (ZTP) to solve this challenge today. ZTP grants least privilege access based on verifying who is requesting access, the context of their request, and ascertaining the risk of the access environment. Designed to secure infrastructure, DevOps, cloud, containers, Big Data, and scale to protect a wide spectrum of use cases, ZTP is replacing legacy approaches to Privileged Access Management by minimising attack surfaces, improving audit and compliance visibility, and reducing risk, complexity, and costs for enterprises. Leaders in this field include Centrify for Privileged Access Management, Idaptive (a new company soon to be spun out from Centrify) for Next-Gen Access, as well as Cisco, F5 and Palo Alto Networks in networking.
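The “who, context, and risk” decision described above can be illustrated with a toy policy check. This is a hedged sketch, not any vendor’s API: the class, function and risk thresholds below are assumptions chosen purely to show the shape of a least-privilege decision.

```python
# Toy Zero Trust Privilege check (illustrative only, no vendor API).
# Access is denied by default and only the minimum level justified
# by identity, context and environment risk is granted.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool   # e.g. MFA passed (assumption)
    context_expected: bool    # e.g. known device, usual hours (assumption)
    environment_risk: str     # "low", "medium" or "high" (assumption)

def grant_level(req: AccessRequest) -> str:
    """Return the least privilege level justified by the request."""
    if not req.identity_verified:
        return "deny"               # never trust by default
    if req.environment_risk == "high":
        return "deny"               # risky environment: block outright
    if not req.context_expected or req.environment_risk == "medium":
        return "read-only"          # step privileges down, don't grant admin
    return "least-privilege"        # scoped access, never blanket rights

print(grant_level(AccessRequest(True, True, "low")))   # least-privilege
print(grant_level(AccessRequest(True, False, "low")))  # read-only
```

The design choice worth noticing is the default: every branch that cannot positively justify access falls through to denial or reduced privilege, which is the “never trust, always verify, enforce least privilege” posture in miniature.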

Analytics and security dominate enterprises’ IoT management plans this year

66% of enterprises are prioritising analytics as their highest IoT data management priority this year, and 63% are actively investing in IoT security. The majority are replacing legacy approaches to Privileged Access Management (PAM) with ZTP. Enterprises competing in healthcare and financial services are leading ZTP adoption today, in addition to government agencies globally. Enterprises investing in lifecycle management solutions increased 11% between 2017 and 2018. Please click on the graphic to expand to view specifics.

Interested in hearing industry leaders discuss subjects like this and sharing their IoT use-cases? Attend the IoT Tech Expo World Series events with upcoming shows in Silicon Valley, London and Amsterdam to learn more.

Unsecured server leaks details of 32 million Sky Brazil subscribers


Clare Hopping

3 Dec, 2018

Thirty-two million Sky Brasil customers have been the subject of a data breach caused by an unsecured ElasticSearch server.

The subscription TV provider left one of its servers without a password, meaning the information was indexed by search engine Shodan and exposed on the internet.

The leak was uncovered by Fabio Castro, a Brazilian security researcher, ZDNet reported.

Castro explained he wasn’t sure how long the server had been left open, but it had been indexed since mid-October.

He found the ElasticSearch server via two IP addresses before knowing who it belonged to; only after examining the data did he discover it was one of Sky Brasil’s servers.

The data stored on the device was API information and included 28.7GB of log files and 429.1GB of API data, with the details of both personal and business customers.

Data included names, home addresses, phone numbers, birth dates, billing details, and encrypted passwords.

After telling Sky Brasil about the leak, Castro said the server has now been secured with a password. Although the data is still indexed, no one can view it.

ElasticSearch servers have been flagged by security researchers as a vulnerable storage option for the last year, following a number of data leaks and breaches.

In the last few months, FitMetrix and an unidentified data analytics firm have both been involved in data leaks because of unsecured servers. On both occasions, the administrators failed to add password protection to their devices so anyone could access and take the data residing on them.

However, Elastic, the company behind ElasticSearch, said its servers are designed for use only in internal networks, which is why password protection isn’t a requirement during setup.
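Elastic’s point cuts both ways: because ElasticSearch shipped without authentication in the versions current at the time, keeping an instance off the public internet came down to its network binding. A hedged sketch of the relevant `elasticsearch.yml` setting follows; option names are from the 6.x-era documentation and should be verified against the version in use:

```yaml
# elasticsearch.yml -- minimal sketch of keeping a node off the
# public internet (6.x-era option name; verify for your version).
network.host: 127.0.0.1   # bind to localhost only, not 0.0.0.0
# If remote clients must connect, put an authenticating reverse
# proxy in front, or use the (then-commercial) X-Pack security
# features, rather than exposing port 9200 directly.
```

Had Sky Brasil’s server been bound this way, crawlers such as Shodan could never have indexed it in the first place.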