India to create more than one million cloud computing jobs by 2022, report notes

Cloud computing will continue to be a business driver across the globe – and according to a new report, India will see significant growth in the coming years.

Great Learning, an ed-tech platform provider, argues that more than one million cloud computing jobs will be created in India by 2022. The potential salaries for cloud professionals in the country represent a step up from those of traditional IT engineers, the report adds.

Moving up to a mid-level managerial role could see employees ‘easily command’ upwards of Rs 20 lakh (£22,000) per year as a salary, while at the very highest level, the largest cloud companies can pay senior professionals up to Rs 1 crore (£110,000) per annum.

Naturally, Great Learning has an immersive program on offer in cloud computing – with an application deadline of November 29 – which covers various infrastructure models and technologies, from Amazon Web Services (AWS) to Microsoft Azure, to containers and big data. Harish Subramaniam, program director for Great Learning’s cloud portfolio, noted similar interest from senior industry engineers in the program, alongside budding professionals.

According to the most recent Cloud Readiness Index from the Asia Cloud Computing Association (ACCA) back in April, India placed a disappointing 12th out of 14 nations analysed, representing no change from the previous analysis and ranking above only China and Vietnam. The report noted how cloud infrastructure was ‘the weakness weighing India down’, arguing that while progress was being made, the country’s vast expanse – much like China’s – meant many areas were still not up to speed.

“Lack of access to quality broadband and sustainable power remain serious issues throughout India, making it difficult for even the most polished security and governance frameworks to drive cloud adoption,” the report noted. “Much like other emerging economies, this is slowly but steadily improving, but the sheer scale of the task poses a serious challenge.”

The largest cloud infrastructure providers do have a presence in India. AWS has a Mumbai region with two availability zones, while Google opened up its doors in November last year. Alibaba, recognised by industry research as the second largest player in the Asia Pacific region, opened its first Mumbai data centre in January of this year, with a second arriving in September.

Analysts have argued serious growth in the Indian data centre market is only a matter of time. Commercial real estate firm Cushman & Wakefield said in April that the market will reach $7 billion by 2020, with interest expanding from Mumbai to New Delhi, Chennai, and Bangalore.

The ACCA report recommended India needed to ‘accelerate digital literacy and support IT startups to ensure its workforce drives cloud adoption in the public and private sectors.’ If Great Learning’s report comes to fruition, stakeholders are preparing to do exactly that.


What to expect: AWS re:Invent 2018


Bobby Hellard

26 Nov, 2018

It’s a case of Viva Las Vegas for cloud computing this week, as the current king, Amazon Web Services, takes over the casino strip for AWS re:Invent. 

While the Amazon e-commerce division will still be heavily busy with Black Friday and Cyber Monday, its technology arm will be welcoming 40,000 people to Sin City for all things cloud.

As AWS expands its cloud footprint, so too does it expand its presence in Vegas, taking over the Venetian, Mirage, MGM Grand, Bellagio and Aria hotels.

Unlike the expos of its biggest rivals, Google Cloud and Microsoft Azure, AWS saves a lot of announcements for re:Invent, which is likely to include many feature releases and version updates. We expect keynotes from CEO Andy Jassy and CTO Dr Werner Vogels to cover some of these.

Hybrid Cloud

One of the areas AWS will need to address this year is its hybrid cloud services. The company is the dominant force in public cloud, but hybrid services have always been downplayed beyond its VMware partnership and its Snowball Edge device. More and more companies are turning to hybrid clouds, however, looking to keep certain workloads on-premises as they take steps towards digital transformation.

In this instance, AWS is behind most of its rivals, such as Microsoft with its Azure Stack and IBM, which recently bought Red Hat specifically to target hybrid cloud. AWS will want to remain dominant, but it won’t do that with VMware alone, so we expect some news on this front.

AI and Machine Learning

Both Microsoft and Google made AI the big theme of their 2018 events. Google Cloud had ‘AI for everyone’ as its tagline, so it makes sense for AWS to follow the trend, and we can expect some sort of AI announcement, or an update at the very least.

In the lead-up to the event, the company announced an expansion to its machine learning offerings, with new capabilities for its text-to-speech service Amazon Polly, its translation service Amazon Translate and its multi-language transcription service Amazon Transcribe. Some of these offerings were announced last year at re:Invent 2017, so we expect further expansions of their capabilities to be announced during the event.
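
For readers unfamiliar with these services, the short sketch below shows roughly how two of them can be called through the AWS SDK for Python (boto3). It is illustrative only: the region, voice and sample text are placeholder values and are not tied to any re:Invent announcement.

    # Illustrative only: calling Amazon Translate and Amazon Polly via boto3.
    # Assumes AWS credentials are configured and the chosen region offers both services.
    import boto3

    translate = boto3.client("translate", region_name="eu-west-1")
    polly = boto3.client("polly", region_name="eu-west-1")

    # Translate a short string into German, then synthesise the result as speech.
    translated = translate.translate_text(
        Text="Welcome to re:Invent",
        SourceLanguageCode="en",
        TargetLanguageCode="de",
    )["TranslatedText"]

    speech = polly.synthesize_speech(
        Text=translated,
        OutputFormat="mp3",
        VoiceId="Vicki",  # one of Polly's German-language voices
    )

    with open("welcome_de.mp3", "wb") as audio_file:
        audio_file.write(speech["AudioStream"].read())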

Defying data gravity: How can organisations escape cloud vendor lock-in?

The process of deriving the maximum possible business value from the data you hold is not a new challenge, but it is one that all too many organisations are still learning to address in the most sustainable way.

The concept of ‘data gravity’, coined by software engineer Dave McCrory in 2010, refers to the ability of bodies of data to attract applications, services and other data. The larger the amount of data, the more applications, services and other data will be ‘attracted’ to it and the faster they will be drawn. As the amount of data increases exponentially it gains mass, and becomes far more rooted in place. In a business context, it becomes harder and harder for that data to be moved to different environments.

Crucially, data has far more mass than the compute instances utilising it – moving 1,000 virtual machines to the cloud, for example, is far easier than moving 1,000GB of data, and the same is true when migrating out of the cloud. Data gravity therefore makes it more important than ever where that data resides, and how ‘portable’ it can really be if it is to be utilised to its full potential. Increasingly, the ‘where’ for many businesses is the cloud.

Locked-in to the cloud?

Most forward-looking businesses agree that it is no longer enough to rely solely on the tools in their own, on-premises datacentres and thrive. While cloud security is still among the top concerns for CIOs, the cloud is starting to play a key role for organisations. Yet the ease of migrating data to the cloud creates a common trap: the assumption that by moving data and compute to one cloud provider, the digital transformation journey is complete. On the contrary.

Cloud providers are constantly leapfrogging each other in their ability to provide the ‘next big thing’, so businesses need to define a clear strategy to ensure that data gravity is not tying them down to one cloud provider. Thinking back to data gravity, this is easier said than done. At present, the structure of the cloud market and the volumes of data we’re dealing with have brought many to a position where they are stuck using the compute functionality of the provider they’ve been using for data storage, due to the sheer cost and complexity of extracting and moving that data to another cloud provider. Rather than gaining flexibility and agility and letting the cloud vendors compete for their business, businesses are back in a state of lock-in and can only gain the level of agility that their particular provider has chosen to give them. In many ways their competitive advantage is in the hands of the cloud provider.

Remaining competitive in the age of digital transformation means being able to respond and adapt to the latest technologies available to you as a business. So what is the next step in breaking away from cloud vendor lock-in?

Regain the sovereignty of your data

The next development in cloud is the disaggregation of storage and compute, with the introduction of a sovereign storage cloud, which is provider-agnostic and ‘neutral’ while at the same time physically located within the same building as the cloud providers to avoid adding latency.

When Google wrote the famous 2003 white paper that laid the foundations for big data, it argued that placing the data inside the compute nodes was the only option, as no external storage could handle hundreds of terabytes at the time. Times have changed, however, and 15 years later this assumption no longer holds true. Taking data out of the servers (disaggregating data and compute) has not only become possible, it also holds the key to reducing cost and improving the efficiency of your clusters.

The data within this neutral cloud-adjacent storage can be utilised for compute instances on any cloud platform or environment, allowing businesses to pick and choose cloud compute instances based on which service provides the functionality they need at the time. This is the next iteration of a sustainable multi-cloud strategy for large enterprises, where data is immune to this lock-in.
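
To make the idea concrete, here is a minimal sketch of what provider-agnostic, cloud-adjacent storage can look like from the compute side: the same S3-compatible client code pointed at a neutral endpoint, so workloads on any cloud can read the same data. The endpoint URL, bucket name and credentials are placeholder assumptions rather than any specific product.

    # Sketch: compute running in any cloud reading from one provider-neutral,
    # S3-compatible storage endpoint. Endpoint, bucket and keys are placeholders.
    import boto3

    storage = boto3.client(
        "s3",
        endpoint_url="https://neutral-storage.example.com",  # cloud-adjacent, vendor-neutral storage
        aws_access_key_id="EXAMPLE_KEY",
        aws_secret_access_key="EXAMPLE_SECRET",
    )

    # The identical call works whether this code runs on AWS, Azure, GCP or on-premises,
    # which is what keeps the data itself free of any single provider's gravity.
    obj = storage.get_object(Bucket="shared-data", Key="customers/2018-11.parquet")
    payload = obj["Body"].read()
    print(f"Fetched {len(payload)} bytes from neutral storage")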

With businesses gaining the ability to pick and choose cloud computing services whilst entrusting their data to neutral cloud-adjacent storage, this new model will also usher in a greater level of real-time competition for customer workloads between the public cloud providers themselves. The net result of this competition is a win-win for businesses in terms of greater choice, flexibility, and cost benefits.

By regaining the sovereignty of their data whilst still allowing for total flexibility and freedom to adopt the latest innovations in the cloud sphere, forward-thinking businesses will emerge head and shoulders above competitors.


How Hive keeps the lights on with VMware and AWS


Adam Shepherd

22 Nov, 2018

If you’re a techie living in the UK, you’re almost certainly familiar with Hive.

This home-grown smart home firm was created in 2012 by parent company Centrica – which also owns British Gas – as a dedicated division to handle its burgeoning connected heating project. While it’s technically part of Centrica, it’s run as a separate company, operating as a lean startup independent of the rest of the business.

The venture has clearly proved successful; in the six years since it launched, Hive has expanded its portfolio to include smart lighting, motion sensors, surveillance cameras and more, and in May this year the company reached one million customers. However, supporting one million connected homes and counting requires a robust and scalable IT infrastructure.

Hybrid can still be costly

As you’d expect from a modern smart tech company, Hive’s infrastructure is now entirely cloud-based, running on AWS and VMware. This wasn’t always the case, however, as Hive’s infrastructure has evolved as the business and its needs changed over time.

According to Hive’s head of site reliability engineering, Chris Livermore – the man responsible for provisioning and maintaining the infrastructure on which Hive’s software engineers deploy their code – the company started out with a hybrid model. The team used cloud environments to build and deliver Hive’s mobile applications but also maintained a physical data centre.

The main reason for this, Livermore says, is that AlertMe – a key partner that provided Hive with a platform for remote monitoring and automation services – only supported on-prem deployments, forcing Hive to run its own infrastructure.

“The data centre we had, we put a virtualisation platform on it, we used OpenStack, but we did that to allow our dev teams to interact with it in a cloud-type manner,” explains Livermore. “We wanted them to be able to spin up a virtual environment to work on without having to stop and wait for somebody in my team to do it. It’s all about moving that empowerment to the developers.”

Hive was investing a lot of time, effort and manpower in maintaining its data centres, Livermore says, and the company ultimately decided to shutter them around two years ago.

“All of those guys still work for me, they just don’t run a data centre any more – they do other stuff,” he explains. “It’s very interesting. We’ve done a lot of consolidation work, but none of it has been from a cost reduction point-of-view, it’s just been a better deployment of resources.”

IoT built on IoT

Now that it’s ditched its data centres, Hive is all-in on cloud; the company runs exclusively on AWS, with anywhere from 18,000 to 22,000 virtual machines running on VMware’s virtualisation software. It’s also a big user of Lambda, AWS’ serverless computing platform, as well as its IoT platform.

The fact that Hive uses Amazon’s IoT service may sound a little odd, given that Hive actually owns and operates its own IoT platform, but the deal allows the company to focus entirely on its own products, and leave much of the overhead management to AWS.
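
As an illustration of the general pattern described here (and not Hive’s actual code), the sketch below shows a minimal AWS Lambda handler that an AWS IoT rule could invoke: the managed IoT platform delivers the device message and the function only has to react to it. The payload fields and threshold are hypothetical.

    # Hypothetical sketch of a Lambda handler triggered by an AWS IoT rule.
    # The event shape (device_id, temperature) is an assumed payload, not Hive's schema.
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def handler(event, context):
        device_id = event.get("device_id", "unknown")
        temperature = event.get("temperature")

        logger.info("Reading from %s: %s", device_id, temperature)

        # The kind of lightweight, product-specific logic a serverless function might own,
        # while fleet management and message routing stay with the managed IoT platform.
        if temperature is not None and temperature > 30:
            return {"action": "turn_heating_off", "device_id": device_id}
        return {"action": "none", "device_id": device_id}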

“At the time, it was a means to an end,” Livermore explains. “Five years ago when we started, you couldn’t go out to market and find an IoT platform provider, so in order to deliver Hive we partnered with AlertMe; they had an IoT platform. We subsequently acquired AlertMe and acquired an IoT platform, but then you have all the overhead of maintaining and evolving that IoT platform.”

Some products, like the relatively complicated Hive heating system, benefit from running on a custom-made platform, but for simpler devices like smart lights and motion sensors, Livermore says that it makes sense to find a platform provider “and let them do all the hard work… we will wherever possible use best-of-breed and buy-in services”.

Hive has completely embraced the concept of business agility, and is not shy about periodically reinventing its IT. For example, despite the fact that its entire infrastructure runs on AWS, the company is considering moving portions of its workloads from the cloud to the edge, having the device process more instructions locally rather than pushing them to the cloud and back.

This would mean a reduction in Hive’s usage of AWS, but as with the data centre consolidation efforts from previous years, Livermore stresses that this is about technological efficiency rather than cost-cutting. More on-device processing means lower latency for customers, and a better user experience. «There are certain things that make sense to be a lot closer to the customer,» he says.

Building for scale

This constant pace of change may seem chaotic, but according to Livermore, it’s an essential part of scaling a company. «That presents opportunities to reevaluate what we’re doing and say ‘are there any new suppliers or new services that we can leverage?’.»

«We’re part-way through a re-architecting of our platform,» he tells Cloud Pro, «and we now need to be building a platform that will scale with the business aspirations. You get to these milestones in scaling. Up to half a million customers, the system will scale, [but] then you get to bits where you realise the code isn’t quite right, or that database technology choice you’ve made doesn’t work.»

For Livermore, his role is fundamentally about giving Hive’s developers as easy and seamless an experience as possible.

“Essentially, my job is to give my dev teams a platform where they can deploy their code and do their job with the minimum of fuss,” he says. “It’s all about empowering the developers to spend as much time as possible on solving customer problems and as little time as possible worrying about where the server’s going to come from or where they’re going to put their log files or where their monitoring and telemetry goes.”

British Airways sues data centre supplier for 2017 outage


Connor Jones

22 Nov, 2018

British Airways has filed a lawsuit against CBRE after blaming it for a 2017 IT failure that left 75,000 passengers stranded.

A fault in a system belonging to CBRE, an American outsourcing company that manages BA’s data centres, is thought to have led to the massive outage at Heathrow Airport that forced the cancellation of dozens of flights last year. BA has started legal proceedings against the company, which will be heard at the High Court, according to reports.

The outage in May 2017 resulted in the cancellation of 672 flights and left tens of thousands of passengers stranded. Passenger check-in and operating systems were also affected, and disruption to communications meant that the airline also struggled to locate and contact staff.

In a separate incident earlier this year, BA suffered further IT issues at Heathrow’s terminal five which led to the complete halt of all its flights. Passengers were advised to book overnight accommodation; the airline’s online check-in service was also down.

Willie Walsh, CEO of International Airlines Group, parent company of British Airways, estimated the incident may have cost BA as much as £58 million.

At the time Walsh claimed an engineer had mistakenly switched off the power supply to one of the company’s data centres, which was then turned back on in an uncontrolled fashion, according to the Financial Times.

A BA spokesperson told IT Pro at the time of the incident: “There was a loss of power to the UK data centre which was compounded by the uncontrolled return of power which caused a power surge taking out our IT systems. So we know what happened, we just need to find out why. It was not an IT failure and had nothing to do with outsourcing of IT, it was an electrical power supply which was interrupted.”

Speaking at a transport conference in Mexico, Walsh said “it’s very clear to me that you can make a mistake in disconnecting the power … It’s difficult for me to understand how to make a mistake in reconnecting the power”.

BA swiftly announced a thorough investigation was to be carried out into the incident to determine the true cause of the outage.

Speaking to IT Pro at the time, CBRE said “we are the manager of the facility for our client BA and fully support its investigation. No determination has been made yet regarding the causes of the incident on May 27”.

BA has appointed London law firm Linklaters to bring the case against CBRE. BA and Linklaters have reportedly declined to comment, and CBRE also refused to comment.

How DevOps and a hybrid model can make the most out of legacy applications

Do you have an application marked with cryptic warning signs and a wealth of cobwebs that is running on legacy hardware hidden away in the back corner of your data centre? If you’re in enterprise IT, chances are high that you do. These old platforms are often considered a bane to IT. More importantly, legacy applications can present a real headache when attempting to uplift the people, processes and tooling needed to embrace a hybrid cloud model.

Self-service, orchestrated workflows and delivery pipelines are just a few of the signature attributes of a hybrid cloud. Imagine trying to apply these ideas to legacy technology, where the interface requires using an archaic console and software that looks like it was written for Windows 3.1. It’s not fun and often derails any efforts to deliver services living in an on-premises software-defined data centre (SDDC) or public cloud environment. It also splits up a team trying to work within a DevOps cultural model, because concepts such as infrastructure as code, stateless configuration and collaborative code check-ins crumble.

For most folks I speak with, legacy applications remain with a small skeleton crew that keeps the lights on and the old hardware humming.

Hybridity is a reality for most infrastructures rather than being purely in the public cloud or purely on-premises. This is where the importance of DevOps and embracing an API-first mentality is multiplied. Having an API-first approach isn’t necessarily about having a single code repository or a single application that does “all the things.” It is about leveraging APIs across multiple repositories and multiple applications to weave together a single, programmatic software fabric in which everything can communicate and integrate regardless of whether legacy or cloud-native.
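
One common way to put that API-first mentality into practice is to wrap a legacy system behind a small HTTP facade, so modern services and pipelines can reach it programmatically instead of through its archaic console. The sketch below assumes Flask and a hypothetical query_legacy_orders() integration function; it is an illustration of the shape, not a prescription for any particular stack.

    # Minimal sketch of a REST facade in front of a legacy application, using Flask.
    # query_legacy_orders() is a hypothetical stand-in for however the legacy system
    # is really reached (ODBC, a message queue, screen scraping, etc.).
    from flask import Flask, jsonify

    app = Flask(__name__)

    def query_legacy_orders(customer_id):
        # Placeholder for the real legacy integration.
        return [{"order_id": "A-1001", "customer_id": customer_id, "status": "shipped"}]

    @app.route("/customers/<customer_id>/orders")
    def get_orders(customer_id):
        # Cloud-native services consume this endpoint rather than touching the
        # legacy console or database directly.
        return jsonify(query_legacy_orders(customer_id))

    if __name__ == "__main__":
        app.run(port=8080)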

The way to end the chaos induced by the complexity of hybridity is a progressive, policy-driven approach, properly implemented.

Here are four tips on how to integrate your legacy crew with your DevOps team to create a high-functioning hybrid cloud model:

Find tools that play well with others

Legacy applications do not have to be the anchor holding you back from adopting a DevOps strategy. Find solutions that treat legacy applications, modern applications and cloud-native applications as if they were all first-class citizens. Legacy applications tend to run on old OSes, including some that are no longer supported. Eventually this configuration becomes increasingly fragile, requiring manual care and feeding. Adopting an automation strategy can reduce risk associated with build, testing, deployment, remediation and monitoring.

Strive for simplicity – don’t create silos

Make sure the workflows that deliver services are applicable to the vast majority of use cases. Historically there has been little to no incentive to build mechanisms that are shared across the enterprise. Systems thinking reminds us that there are dependencies: if you only improve one application in the cluster, there will be no benefit; the entire cluster must be addressed for successful progress. The use of RESTful APIs is turning the tables, allowing a single set of tooling to work across many applications, platforms and services. Share data across the organisation; leverage APIs to streamline how the team interacts with legacy workloads by extracting data from the legacy architecture.
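
As a small, hedged example of that last point, the sketch below pulls data out of a legacy system through a REST facade and forwards it to a shared, organisation-wide store, so one set of HTTP tooling serves both legacy and cloud-native sources. Both URLs are placeholders.

    # Sketch: extracting data from a legacy system via its REST facade and pushing it
    # to a shared data platform. The endpoints below are illustrative placeholders.
    import requests

    LEGACY_API = "http://legacy-facade.internal:8080"
    SHARED_STORE_API = "https://data-platform.example.com/ingest"

    def sync_customer_orders(customer_id):
        # The same HTTP tooling works for legacy and cloud-native sources alike.
        resp = requests.get(f"{LEGACY_API}/customers/{customer_id}/orders", timeout=10)
        resp.raise_for_status()

        ingest = requests.post(SHARED_STORE_API, json=resp.json(), timeout=10)
        ingest.raise_for_status()

    if __name__ == "__main__":
        sync_customer_orders("C-42")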

Embrace a one-team mentality

Don’t form multiple teams or tiers of teams. If the decision is made to adopt DevOps, then it must be embraced by the entire organisation. As the old adage goes: “There are no legacy systems, just legacy thinking.” DevOps isn’t just about developers and operations folks collaborating; it’s about two separate silos becoming one team. Focus on improving communication and set aside time for learning.

Avoid infrastructure-specific solutions

Abstract storage and compute; work instead on the application layer and how to deliver services for those applications. Legacy applications are often coupled tightly to the underlying hardware, making it challenging and risky to manage any application component individually. This means maintenance and upgrades are incredibly time-consuming, difficult and even expensive. The adoption of infrastructure-as-code, whether on-premises or in the cloud, gives teams the permissions and tooling to provision infrastructure on demand. As you can imagine, not every legacy application can be easily migrated to this type of infrastructure, but many can.
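
To illustrate what on-demand, repeatable provisioning can look like, here is a minimal, idempotent sketch using boto3; in practice teams would more often express this declaratively in a dedicated infrastructure-as-code tool, and the AMI ID, instance type and tags below are placeholders.

    # Sketch of idempotent, on-demand provisioning with boto3. Real infrastructure-as-code
    # usually lives in a declarative tool; the AMI, size and tag values are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    def ensure_app_server(name):
        """Create the tagged instance only if it does not already exist."""
        existing = ec2.describe_instances(
            Filters=[
                {"Name": "tag:Name", "Values": [name]},
                {"Name": "instance-state-name", "Values": ["pending", "running"]},
            ]
        )
        for reservation in existing["Reservations"]:
            for instance in reservation["Instances"]:
                return instance["InstanceId"]  # already provisioned

        created = ec2.run_instances(
            ImageId="ami-00000000000000000",  # placeholder AMI
            InstanceType="t3.small",
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[
                {"ResourceType": "instance", "Tags": [{"Key": "Name", "Value": name}]}
            ],
        )
        return created["Instances"][0]["InstanceId"]

    print(ensure_app_server("legacy-batch-worker"))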

Using automation increases the organisation’s agility by reducing human interaction and orchestrating dependencies across the organisation. Self-service consumption of infrastructure further eliminates silos. DevOps is not incompatible with legacy applications, but it requires an organisation to evaluate what this implementation actually means and how to embark upon this transformation.

Never fear: You can still depend on a legacy application and embrace DevOps. It requires additional design considerations, elbow grease and a little bit of creativity. The fundamental element is to apply these principles in a consistent and efficient manner in the context of all applications, both legacy and modern.


The CRM applications which will matter most in 2019 – with AI at the forefront of change

According to recent research by Gartner,

  • Marketing analytics continues to be hot for marketing leaders, who now see it as a key business requirement and a source of competitive differentiation
  • Artificial intelligence (AI) and predictive technologies are of high interest across all four CRM functional areas, and mobile remains in the top 10 in marketing, sales and customer service.
  • It’s in customer service where AI is receiving the highest investments in real use cases rather than proofs of concept (POCs) and experimentation.
  • Sales and customer service are the functional areas where machine learning and deep neural network (DNN) technology is advancing rapidly.

These and many other fascinating insights are from Gartner’s What’s Hot in CRM Applications in 2018 by Ed Thompson, Adam Sarner, Tad Travis, Guneet Bharaj, Sandy Shen and Olive Huang, published on August 14, 2018. Gartner clients can access the study here (10 pp., PDF, client access required).

Gartner continually tracks and analyses the areas their clients have the most interest in and relies on that data to complete their yearly analysis of CRM’s hottest areas. Inquiry topics initiated by clients are an excellent leading indicator of relative interest and potential demand for specific technology solutions. Gartner organises CRM technologies into the four category areas of marketing, sales, customer service, and digital commerce.

The report includes a graphic illustrating the top CRM application priorities in marketing, sales, customer service and digital commerce.

Key insights from the study include the following:

Marketing analytics continues to be hot for marketing leaders, who now see it as a key business requirement and a source of competitive differentiation

In my opinion, and based on discussions with CMOs, interest in marketing analytics is soaring as they all look to quantify their teams’ contribution to lead generation, pipeline growth and revenue. I see analytics- and data-driven clarity as the new normal. I believe that knowing how to quantify marketing contributions and performance requires CMOs and their teams to stay constantly on top of the latest marketing, mobile marketing and predictive customer analytics apps and technologies. The metrics marketers choose today define who they will be tomorrow and beyond.

Artificial intelligence (AI) and predictive technologies are of high interest across all four CRM functional areas, and mobile remains in the top 10 in marketing, sales and customer service

It’s been my experience that AI and machine learning are revolutionising selling by guiding sales cycles, optimising pricing and enabling CPQ to define and deliver smart, connected products. I’m also seeing CMOs and their teams gain value from Salesforce Einstein and comparable intelligent agents that exemplify the future of AI-enabled selling.

CMOs are saying that Einstein can scale across every phase of customer relationships. Based on my previous consulting in CPQ and pricing, it’s good to see the decades-old core technologies underlying Price Optimisation and Management getting a much-needed refresh with state-of-the-art AI and machine learning algorithms, which is one of the factors driving their popularity today.

Using Salesforce Einstein and comparable AI-powered apps, I see sales teams get real-time guidance on the most profitable products to sell, the optimal price to charge and which deal terms have the highest probability of closing deals. Across manufacturers globally, sales teams are now taking a strategic view of Configure, Price, Quote (CPQ) as encompassing integration with ERP, CRM, PLM, CAD and price optimisation systems, and I’ve seen global manufacturers that take this strategic view of integration grow far faster than their competitors.
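
As a toy illustration of the kind of model behind such guidance (not Einstein’s actual implementation), the sketch below fits a logistic regression to estimate the probability of a deal closing. The features and training rows are invented purely for demonstration.

    # Toy sketch of deal-close probability scoring with scikit-learn.
    # Features and training rows are invented purely for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: discount_pct, deal_size_k_usd, days_in_pipeline
    X_train = np.array([
        [5, 20, 30],
        [25, 150, 120],
        [10, 45, 60],
        [30, 200, 180],
        [0, 10, 15],
        [15, 80, 90],
    ])
    y_train = np.array([1, 0, 1, 0, 1, 1])  # 1 = deal closed

    model = LogisticRegression().fit(X_train, y_train)

    # Score an open deal: 12% discount, $60k in size, 75 days in the pipeline.
    open_deal = np.array([[12, 60, 75]])
    print(f"Estimated close probability: {model.predict_proba(open_deal)[0, 1]:.2f}")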

In my opinion, CPQ is one of the core technologies forward-thinking manufacturers are relying on to launch their next generation of smart, connected products.

It’s in customer service where AI is receiving the highest investments in real use cases rather than proofs of concept (POCs) and experimentation

It’s fascinating to visit with CMOs and see the pilots and full production implementations of AI being used to streamline customer service. One CMO remarked how effective AI is at providing greater contextual intelligence and suggesting recommendations to customers based on their previous buying and service histories.

It’s interesting to watch how CMOs are attempting to tie AI and its associated technologies, including chatbots, to their contribution to Net Promoter Scores (NPS). Every senior management team running a marketing organisation today has strong opinions on NPS. They all agree that greater insights gained from predictive analytics and AI will help to clarify the true value of NPS as it relates to Customer Lifetime Value (CLV) and other key metrics of customer profitability.

Sales and customer service are the functional areas where machine learning and deep neural network (DNN) technology is advancing rapidly

It’s my observation that machine learning’s potential to revolutionise sales is still nascent, with many high-growth use cases completely unexplored. When I spoke recently with the vice president of sales for a medical products manufacturer, she said her biggest challenge is hiring sales representatives who will have longer than a 19-month tenure with the company, which is their average today. Imagine, she said, knowing the ideal attributes and strengths of their top performers and using machine learning and AI to find the best possible new sales hires. She and I discussed the spectrum of companies taking on this challenge, with Eightfold being one of the leaders in applying AI and machine learning to talent management challenges.

Source: Gartner, What’s Hot in CRM Applications in 2018, by Ed Thompson, Adam Sarner, Tad Travis, Guneet Bharaj, Sandy Shen and Olive Huang, published on August 14, 2018.

Lincolnshire Police enlists Motorola in cloud upgrade deal


Clare Hopping

22 Nov, 2018

Lincolnshire Police has signed a 10-year agreement with Motorola to transform its control room, replacing the constabulary’s legacy contact management, computer-aided dispatch, mapping and call logging system.

This is the first time a UK force has run its entire control room tech stack on the cloud, according to Lincolnshire Police, and it will have significant benefits for the public sector organisation, including lower costs and heightened staff productivity.

Motorola will implement its CommandCentral Control Room Solution (CRS) to handle calls in the £6 million deal. Those overseeing the deal hope that its scalable platform will mean people will be able to get in contact with a call handler far faster than currently possible, even at peak times.

Because the solution scales according to demand, the force and its IT teams don’t have to manually increase capacity or work out likely call volumes in advance.

“The new command and control system is one important step in that journey and will allow the chief to get assistance to those in need quicker than ever before – and armed with the right information to handle the situation,” said Marc Jones, police and crime commissioner for Lincolnshire Police.

“It has been a high priority for me to ensure frontline officers can be deployed quickly, with the right equipment, and to spend as much time as possible in the field reassuring communities, preventing and fighting crime.”

The implementation of Motorola’s cloud-based contact centre platform was overseen by G4S Policing Services.

“The G4S strategic partnership with Lincolnshire Police will enable Lincolnshire to be at the forefront of technology, supporting officers and staff to make the best decisions by being better informed,” G4S Policing Services’ managing director, John Whitwam said.

“This is the best of a private and public partnership, with G4S, Motorola Solutions and Lincolnshire Police working together to provide sustainable and effective policing services to the public of Lincolnshire.”

The announcement follows a series of initiatives launched by the UK government to modernise policing and embrace cloud-based technologies. In August the Home Office announced it was seeking partners to help it migrate Police and Public Protection systems over from its own data centres to those operated by Amazon Web Services (AWS).

The Met Police also revealed in September that it had chosen Microsoft Cloud partner New Signature to help it modernise its own infrastructure by moving over to Microsoft Azure, considered to be one of G-Cloud’s largest procurements.

Avoiding vendor lock-in is ‘crucial to cloud success’


Clare Hopping

22 Nov, 2018

A new report has unpacked the elements it thinks a business needs to implement in order to take advantage of the cloud, highlighting the importance of AI, adopting a multi-cloud strategy and using open-source technologies.

Cloud firm Amido said that to have the biggest impact on their business, companies must accept that cloud-native applications will become the new norm and so should be open to the switch from legacy platforms.

They should use cloud-powered mobile apps where possible, build data lakes with the assistance of AI to help with data science projects and adopt a multi-cloud approach – implementing open-source tech where possible, according to the report.

Amido argued that the recent open source push by the largest cloud vendors means that there has never been a better time to adopt more than one cloud vendor, avoiding lock-in and ensuring better uptime as a result.

“What’s fascinating right now is the pace at which open source projects, from the likes of Google and Apache, are being embraced as managed offerings by all the big cloud vendors,” said Simon Evans, CTO of Amido.

“These proven and open technologies are rapidly replacing the pioneering first movers in the cloud; projects like Kubernetes, Apache Kafka and Apache Spark are regularly available ‘as a service’ on the big cloud providers, and this is, without doubt, a good thing for the world. This convergence is the key to avoiding vendor lock-in while still enabling a business to focus on their digital USP. It is the enabler for a multi-cloud strategy.”
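
As a small illustration of why that convergence helps with lock-in, the sketch below uses the open-source kafka-python client: in principle, moving to another provider’s managed Kafka service means changing little more than the bootstrap servers. The broker address and topic are placeholders.

    # Sketch: the same open-source Kafka producer code can target any provider's
    # managed Kafka offering; only the bootstrap servers (placeholder here) change.
    import json

    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers=["kafka.example-cloud.com:9092"],
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    producer.send("customer-events", {"event": "signup", "customer_id": "C-42"})
    producer.flush()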

Amido’s report revealed that businesses are more readily adopting cloud-native strategies and are fully prepared to leave legacy technologies behind in favour of the rapidly evolving cloud – a standpoint supported by the Cloud Industry Forum.

“[Amido’s] Cloud Futures: 2020 report confirms our recent survey findings that UK businesses are clearly recognising the need for transformation and are gradually leaving legacy technologies behind in favour of next-generation ones to pursue competitive advantage,” said Alex Hilton, CEO of the Cloud Industry Forum.

“Cloud is critical to this shift, thanks not only to the flexibility of the delivery model, but also the ease with which servers can be provisioned, which reduces financial and business risk. Furthermore, cloud’s ability to explore the value of vast unstructured data sets is second to none, which is essential for AI and IoT.”

Why cloud infrastructure is an increasingly exclusive club – with only a few having the cash to get in

The figures keep going up and up for the hyperscalers – new data from Synergy Research shows another record for hyperscale operator capex in Q3 2018.

Hyperscaler capex was at more than $26 billion for the most recent quarter, with spending for the first three quarters of 2018 up by 53% when compared with this time last year. This quarter’s figure is the second highest of all time; Q1 still takes the honours, yet Synergy ascribes that to Google’s ‘one-off’ $2 billion purchase of Manhattan’s Chelsea Market building in March.

Nevertheless, aside from a minor blip at the beginning of 2017, it has been a steady upward curve since 2015, with spending now almost double that of three years ago. For the big five – in this instance, Google, Microsoft, Facebook, Apple and Amazon – these are heady days. Alibaba, IBM and Tencent are among those in the second tier, although Alibaba’s capex spend ‘leapt’ during the most recent quarter.

“Business at the hyperscale operators is booming,” said John Dinsdale, a chief analyst at Synergy. “Over the last four quarters their year-on-year revenue growth has averaged 24% and they are investing an ever-growing percentage of their revenues in capex.

“That is a real boon for data centre technology vendors and for colocation/wholesale data centre operators, but it has created a huge barrier for companies wishing to meaningfully compete with those hyperscale firms,” added Dinsdale. “This is a game of massive scale and only a few can play that game.”

For a lot of cloud infrastructure research, the trends remain the same, but the figures are going up. Take Synergy’s exploration of public cloud leadership by region, issued earlier this week. The news was that there was, erm, no news. AWS continued to lead across the board, with Microsoft at #2 and Google at #3, except in Asia Pacific, where Alibaba is the second-placed player – a position that puts it at #4 overall.

Competing against the biggest players for a slice of their pie is therefore not an option, with even Alibaba’s ultra-aggressive approach and huge resources yielding only a moderate global dividend thus far. “There will remain opportunities for smaller cloud providers to serve niche markets, especially focused on single countries or local regions, but those companies cannot hope to challenge the market leaders,” said Dinsdale, noting China to be a key exception to the rule.
