Managing cloud lifecycles


Steve Cassidy

20 Feb, 2020

Hardly anybody talks about lifecycles in IT, least of all me. I don’t see the end of use of any device as a special occasion to be marked and celebrated: I still have working PCs from the late 1990s. Even so, I had to stop and pay attention when I heard a senior exec from Arm – the world’s most popular CPU maker no less – mention that major cloud players are now reinvesting in their data centres on a yearly basis.

This is an incredibly short lifecycle, but when it comes to the cloud there are multiple things that might need to be retired, upgraded or otherwise cycled. One is the data centre hardware itself; this might seem like a very fundamental refresh, and it could transform the customer experience, making things either faster or slower. But, in these days of virtual machines and serverless design, it might equally be completely invisible from the outside, except where it leads to a change in tariffs.

Then there are upgrades to the orchestrator or container OS. These tend to happen with little or no notice, excused by the urgency of applying the latest security updates. As a result, any dependencies on old code or deprecated features may only come to light on the day of the switch. As a savvy cloud customer, your best defences against such upheaval are to spread your systems across multiple suppliers, maintain portfolios of containers running different software versions and take a strong DevOps approach to your own estate.
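As a sketch of what that container-portfolio defence might look like in practice, the script below smoke-tests the same workload against an old and a candidate base image before the provider forces the change. It assumes Docker is installed; the registry path, tags and test command are hypothetical stand-ins for your own.

```python
# Sketch: smoke-test one workload against several container base
# versions before a provider-forced upgrade does it for you.
import subprocess

IMAGE = "registry.example.com/myapp"         # hypothetical image
TAGS = ["3.9-bullseye", "3.11-bookworm"]     # current vs. candidate base
TEST_CMD = ["python", "-m", "pytest", "-q"]  # your real test suite here

for tag in TAGS:
    print(f"--- testing {IMAGE}:{tag} ---")
    result = subprocess.run(
        ["docker", "run", "--rm", f"{IMAGE}:{tag}", *TEST_CMD],
        capture_output=True, text=True,
    )
    print(f"{tag}: {'OK' if result.returncode == 0 else 'FAILED'}")
    if result.returncode != 0:
        print(result.stdout[-2000:])  # tail of the test output
```

Run routinely in CI, a matrix like this surfaces dependencies on deprecated features on your own schedule rather than the provider's.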

Other scenarios include the sort of big move when a beta site is finally promoted and becomes the main site, and the eventuality of a cloud provider being taken over by another, resulting in a burst of service changes and tariff renegotiation. Remember, lots of high-tech businesses operate with the express intention of being acquired at some point, once they have a good portfolio of customers, a steady revenue stream and hence a high share price. Such a strategy is music to the ears of venture capitalist backers, eager to recoup their investment and profits; I will leave you to consider whether it’s well suited to cloud services, which place a high emphasis on continuous and uninterrupted service. There’s a reason why many cloud company contracts are all about inhibiting customer mobility.

Migration patterns

It’s clear that, when we’re talking about the cloud, “lifecycle” entails a spread of quite different activities, and bringing them all together under one banner doesn’t do you much good: the lessons learnt from going through one of the above events won’t do much to help with others. 

However, the situation doesn’t have to be complicated – at least not if you actually have developers, and aren’t just stuck with a narrow selection of package providers. If you are in this lucky position, and you’ve been listening to at least the tone of your development team’s comments on the various fads and fashions in development, there’s a fair chance that your IT portfolio will have been built with the sorts of tools that produce nice, mobile and tablet-friendly, infinitely resizeable, bandwidth-aware, cloud-scalable websites. If that’s what you’re working with, it can be relatively easy to ride out lifecycle events.

Unfortunately, this is by no means universally the case, especially not for systems that have been around long enough for large parts of the business to have been built on them. If you already have a code base that works, it can be tough to secure the development time and cost commitment to move it from (say) QuickBASIC or COBOL onto Ruby on Rails, Java or PHP. 

Yet this is itself one of the most significant lifecycle events, or at least part of one. It may seem a stretch to refer to code migration as a lifecycle end, but when you first unleash your prototype on a public cloud platform, nobody really knows how it’s going to perform, or how resource-hungry it might be, and your production systems person is not going to want those kinds of unknowns messing up their carefully controlled production space. The requirements for releasing that prototype into the big bad world thus emerge from the development and testing process. 

That output ought to, at least, incorporate a statement about what needs to be done, and after how long, with an eye on three quite distinct systems. First, there’s the prototype in its current state, which at this point is probably still languishing on Amazon or Azure. Then, of course, there’s the predecessor system, which is going to hang around for a couple of quarters at least as your fallback of last resort. Then there’s the finished, deployed product – which, despite your diligent testing, will still have bugs that need finding and fixing. Redevelopment involves managing not one, but three overlapping lifecycles.

If you’re wondering how much of this is specific to the cloud, you have a good point. You would have had very similar concerns as a project manager in 1978, working in MACRO-11 or FORTRAN. Those systems lack the dynamic resource management aspect of a cloud service, but while cloud suppliers may seek to sell the whole idea of the “journey to the cloud”, for most businesses reliability, rather than flexibility, remains the priority. 

The question, indeed, is whether your boringly constant compute loads are actually at the end of their unglamorous lifecycle at all. It’s possible to bring up some very ancient operating systems and app loads entirely in cloud-resident servers, precisely because many corporates have concluded that their code doesn’t need reworking. Rather, they have chosen to lift and shift entire server rooms of hardware into virtual machines, in strategies that can only in the very loosest sense be described as “cloud-centric”.

Fun with the law

Despite the best efforts of man and machine, cloud services go down. And when it happens, it’s remarkable how even grizzled business people think that legally mandated compensation will be an immediate and useful remedy. Yes, of course, you will have confirmed your provider’s refund and compensation policy before signing up, but remember that when they run into a hosting issue, or when their orchestrator software is compromised by an infrastructure attack, they will suddenly be obliged to pay out not just for you, but for everybody on their hosting platform. What’s the effect going to be on their bottom line, and on future charges?

If you’ve been good about developing a serverless platform, hopping from one cloud host to another isn’t going to be a big issue. Even if you’re in the middle of a contract, you may be able to reduce your charges from the cloud provider you’re leaving, simply by winding down whatever you were previously running on their platform. After all, the whole point of elastic cloud compute is that you can turn the demand up and down as needed.
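As a rough illustration of that wind-down, the sketch below uses the AWS SDK for Python to scale a hypothetical Auto Scaling group to zero rather than tearing it down, so charges drop while the contract runs out. It assumes boto3 is installed and credentialled; the region and group name are invented.

```python
# Sketch: winding down workloads on a provider you are leaving.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Scale to zero rather than deleting, so the old estate stays ready as
# a fallback while charges drop to (near) nothing.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="legacy-web-asg",  # hypothetical group
    MinSize=0,
    DesiredCapacity=0,
)
print("legacy-web-asg wound down to zero instances")
```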

Sometimes you might end up in the opposite situation, where you reach the end of a fixed-term contract and have no option but to move on. This comes up more often than your classic techie or development person imagines, thanks to the provider’s imperative to get the best value out of whatever hardware is currently sitting in the hosting centre. If there’s spare capacity in the short term, it makes sense for the vendor to cut you a time-limited deal, perhaps keeping your cloud portfolio on a hosting platform from a few years ago and thereby not overlapping the reinvestment costs on their newer – possibly less compatible – platform.

Hardware and software changes

For some reason that nobody seems minded to contest, it’s assumed in the cloud industry that customers will be agile enough to handle cloud vendors making root-and-branch changes to the software platform with effectively no notice. You come into the office with your coffee and doughnuts, to be greeted by a “please wait” or a similarly opaque error, which means that your cloud login and resources are now being managed by something quite new, and apparently untested against at least your password database, if not the contents of your various memberships and virtual machines. 

Most people active in IT operations management would not want to characterise this as a lifecycle opportunity. That particular field of business is particularly big on control and forward planning, which are somewhat at odds with the idea of giant cloud suppliers changing environments around without warning. When you and 100 million other users are suddenly switched to an unfamiliar system, the behaviour you have to adopt comes not from the cloud vocabulary, but rather from the British government: we’re talking about cyber-resilience. 

If that sounds like a buzzword, it sort of is. Cyber-resilience is a new philosophy, established more in the UK than the US, which encourages you to anticipate the problem of increasingly unreliable cloud services. It’s not a question of what plan B might look like: it is, rather, what you can say about plan Z. And that’s sound sense, because finding your main cloud supplier has changed software stack could be as disastrous for your business as a ransomware attack. It can also mark a very sharp lifecycle judgement, because your duty isn’t to meekly follow your provider’s software roadmap: it’s to make sure that a rational spread of cloud services, and a minimalist and functionally driven approach to your own systems designs, gives you the widest possible range of workable, reachable, high-performance assets. 
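A minimal sketch of that plan-Z thinking: a standard-library Python probe that checks whether your assets on each provider are reachable and how quickly they answer. The endpoints are hypothetical placeholders for your own health-check URLs.

```python
# Sketch: a plan-Z reachability check across a spread of providers.
import time
import urllib.request

ENDPOINTS = {
    "provider-a": "https://app.example-a.com/health",  # hypothetical
    "provider-b": "https://app.example-b.net/health",  # hypothetical
}

for name, url in ENDPOINTS.items():
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ms = (time.monotonic() - start) * 1000
            print(f"{name}: HTTP {resp.status} in {ms:.0f} ms")
    except Exception as exc:
        print(f"{name}: unreachable ({exc})")
```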

Don’t panic!

If you’re already invested in cloud infrastructure, this talk might seem fanciful; in reality, few businesses experience the full force of all these different scenarios. The biggest difficulties with the cloud usually involve remembering where you left all your experiments, who has copies of which data sets, and how to identify your data once it skips off to the dark web. The dominant mode here is all about things that live on too long past their rightful end, and that’s slightly more manageable than the abrupt cessations of access or service we’ve been discussing.

Even so, it’s important to carry out the thought experiments, and to recognise that lifecycles can be chaotic things that require a proactive mindset. One could even say that the lifecycle of the “lifecycle” – in the sense of a predictable, manageable process – is coming to an end, as the new era of resilience dawns.

Partnerships key for public cloud vendors to succeed in IoT analytics, says ABI Research

The hyperscale cloud providers are looking at Internet of Things (IoT) offerings and connectivity amid a swath of emerging technologies – and according to a new note from ABI Research, cloud suppliers will grow their share of IoT data and analytics management revenues from $6 billion (£4.6bn) to $56bn (£43bn) in the next six years.

The way to do this, the analyst firm notes, is through partnerships. As ABI sees it, public cloud vendor revenues, while impressive, still come primarily through streaming, storage, and data orchestration. Analytics services across cloud vendors, however, are less differentiated – and collaboration is therefore key for now.

One area in which public clouds are doing it for themselves is streaming. ABI said this was the one analytics technology that all cloud vendors were building into their solution portfolios, with Amazon Web Services (AWS), Microsoft, Google, IBM and Oracle all touting proprietary offerings, while Cloudera, Teradata et al built solutions leveraging open source technology.

Ultimately, a lot of strategy right now is focused on co-opetition in the IoT space. AWS and Azure, for instance, have partnered with Seeq for its advanced analytics capabilities, while Oracle, Cisco and Huawei are expanding their edge portfolios.

“The overall approach shown by cloud suppliers in their analytics services reflects the dilemma they face in the complex IoT partnership ecosystem,” said Kateryna Dubrova, ABI research analyst. “Effectively, do they rely on partners for analytics services, or do they build analytics services that compete with them?

“Ultimately, businesses are moving to an analytics-driven business model which will require both infrastructure and services for continuous intelligence,” Dubrova added. “Cloud vendor strategies need to align with this reality to take advantage of analytics value and revenues that will transition to predictive and prescriptive solutions.”


Don’t forget to budget for business objectives to gain digital transformation ROI

Digital transformation is hugely important and extremely difficult. No matter what form an organisation’s transformation takes, it’s bound to introduce new and unanticipated challenges and costs even as it leads to positive long-term outcomes.

While increased challenges and costs are a normal part of digital transformation, they can trigger C-suite leaders to recalibrate their thinking about transformation initiatives and ask to see immediate proof of ROI. When this question inevitably comes up, leaders tend to look for obvious ways to deliver a positive ROI, no matter how early in the transformation process they may be.

Too often, the “obvious” solution is to reduce staff whose work is scheduled to be made redundant by new digital processes. In reality, though, such a simplistic answer to a problem as complex as calculating the ROI of a digital transformation rarely works. In fact, in our experience, immediate downsizing of experienced staff is one of the biggest possible ROI killers in digital transformation initiatives. 

Here’s a look at why, and what organisations can do to realise a sustainable and positive ROI.

Why downsizing tanks the ROI of digital transformation

The simple reason downsizing ends up costing more money than a company will save is that it causes an immediate and dramatic loss of institutional knowledge. We tend to underestimate the value of institutional knowledge, mostly because it’s not often discussed and it’s nearly impossible to quantify. However, underestimating its value can prove expensive in short order.

What usually happens is that, when transformation initiatives aim to digitise processes, internal leaders assume they no longer need employees who used to handle those processes manually. These people are downsized. Short-term ROI numbers look amazing: the savings resulting from a change in headcount make the digital transformation initiative look like an immediate success.

In reality, though, what typically happens is that, shortly after downsizing or right-sizing a particular team, the organisation realises it still needs certain key skills and that those skills were directly provided by the people who were let go. If the organisation is lucky, it’s able to rehire these people – as contractors – at a significant premium. If not, the downsized workers aren’t interested in returning and the organisation ends up paying their old salary (or more) for a less-experienced person to fill the role.

All told, we usually find it 30% to 50% less expensive to maintain existing staffing levels during digital transformation initiatives than to recruit and train new employees. Let’s take a look at why.
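A back-of-envelope sketch of where that gap comes from; every figure below is an illustrative assumption, not data from our research.

```python
# Back-of-envelope retain-vs-replace comparison (all figures invented).
salary = 60_000        # annual salary, either way
retrain_cost = 10_000  # courses and ramp-up for an existing employee
recruit_cost = 15_000  # agency fees, interviews, onboarding
ramp_months = 9        # months before a new hire is fully productive
lost_output = salary * (ramp_months / 12) * 0.5  # assume 50% productivity

retain = salary + retrain_cost
replace = salary + recruit_cost + lost_output

print(f"retain:  £{retain:,.0f}")   # £70,000
print(f"replace: £{replace:,.0f}")  # £97,500
print(f"replacing costs {replace / retain - 1:.0%} more in year one")
```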

The ROI-positive alternative to downsizing

A more cost-effective alternative to downsizing current staff at the start of a digital transformation initiative is to retain and reskill your best employees. While you may not need all of the skills you have on staff today, it’s rarely wise to let that talent go when you don’t yet know what the final state of your organisation will be post-transformation. 

We are constantly amazed at how adaptive teams are to new challenges and opportunities when given the chance to make meaningful contributions in new areas of the business – and they do so with a base of institutional knowledge that can’t easily be recreated.

What’s more, when you maintain your existing workforce, the only cost you incur is that of retraining. You won’t be training for corporate culture or industry knowledge, as you would with a new employee. Once your existing employees learn the skills necessary to do the work your organisation now needs, they’ll be able to execute more efficiently than new employees would, thanks to their deep institutional knowledge.

Today, with the presence of employer review sites like Glassdoor, it’s important to remember that your current and former employees have a voice and that voice can and will impact your reputation. Get enough negative reviews about how you handled the execution of a strategic initiative, and you could find yourself struggling to recruit the talent you need to get your new digital processes off the ground. This is an all-too-common situation where nobody wins.

ROI-positive digital transformation demands a big-picture view

Even in the best of times, calculating the ROI of a digital transformation initiative can be difficult. The amount of change introduced into a business during these initiatives is substantial. But when the alternative to transformation is risking your relevance with customers, partners and employees, the upfront work and the complex management needed to execute the transformation successfully will pay for themselves many times over as the business executes in new ways.

While the most successful digital transformation initiatives define goals from the beginning, many initiatives launch without any clear, measurable goals. If this describes your organisation, take heart: it’s not too late to calculate ROI. Keep in mind, though, that to do so in a way that accurately reflects your financial reality for the long term, you’ll have to look beyond the simplistic short-term measure of cutting costs by downsizing your staff.


Bee-based AI software could power next-generation drones


Roland Moore-Colyer

18 Feb, 2020

Next-generation drones could be powered by artificial intelligence software inspired by how bees adapt to and navigate their surroundings. 

Scientists from Sheffield University have demonstrated how they are reverse engineering bee brains to create a drone prototype influenced by the flying insects’ ability to navigate accurately over several kilometres, learn environmental features on the fly and then find their way back to their hive. 

Professor James Marshall from Sheffield University presented his team’s work at the American Association for the Advancement of Science conference in Seattle, detailing how they aim to create small drones that can effectively navigate their surroundings as bees do.

“Bees are really consummate visual navigators,” said Marshall, according to the Financial Times. “They can navigate a complex 3D environment with minimal learning very robustly, using only a million neurons in a cubic millimetre of brain.” 

“For us they’re at a sweet spot for brain size and intelligence,” added Marshall. Despite their tiny size, bee brains can multitask and optimise the distances the bee flies from its nest to forage for nectar, meaning the brain learns and adapts to new scenarios very quickly. 

Currently, AI systems used for image processing can’t compute what they see anywhere near as quickly as some of the smallest natural brains. 

To try to replicate how bees navigate, the researchers have split the project into two experiments.

The first involves attaching radar transponders to bees and analysing their flight paths so that the researchers can gain insight into their neural processes. 

The second experiment involves the more gruesome process of inserting a tethered electrode into a bee’s brain and then observing its movements around a virtual reality environment. By analysing the neural signals, it is hoped the scientists will gain a deeper look into bee movements. 

“We’ve modelled maybe 25 per cent of the honeybee brain, maybe a touch more,” said Marshall. “We have bee-like robots which can fly around the lab behaving as a bee would, extracting information from the world.” 

The researchers have two drones, a 600g model and a 250g one; the latter is still much bigger than a bee, but small enough to hold all the computational equipment it needs to navigate like a bee, according to Marshall. 

The project has been funded by a £4.8 million grant from UK Research and Innovation, the government’s research funding agency. That has meant the research is moving towards becoming a commercial venture, with a spinout company called Opteran Technologies aiming to eventually sell the AI software to drone companies and businesses, such as logistics companies that use drones for delivery purposes. 

Not only does this research show there’s a healthy appetite for AI development in the UK, but it’s also a sign of the cutting-edge tech to come that will help fuel digital transformation in enterprises keen to adopt such AI technology.

Mastercard bolsters fraud fighting with Europe Cyber Resilience Centre


Bobby Hellard

18 Feb, 2020

Mastercard is developing a European cyber security hub as it looks to drive greater collaboration from both the public and private sectors to fight fraud and online threats. 

The European Cyber Resilience Centre will be based at the company’s HQ in Waterloo, Belgium and aims to address the threats facing European payment ecosystems. 

The centre will bring together a number of organisations, banks and law enforcement agencies, including Interpol and the UK’s National Crime Agency (NCA) and the National Cyber Security Centre (NCSC). 

An interim centre will launch in the spring, according to Mastercard, with the official facility expected to be ready in 2021. 
 
“Financial services will always be at the top of the target list for attackers due to the vast pool of customer data and credentials under our responsibility,” said Javier Perez, president Europe at Mastercard. 
 
“Our European Cyber Resilience Centre improves collaboration amongst key organisations, helping to ensure businesses and individuals feel secure when sharing information online.”

The centre will aim to improve prevention and mitigation practices against international cyber crime by bringing together both cyber and physical security experts. As part of its strategy, it will also aim to shorten the lines of communication between internal Mastercard teams and its customers, partners and stakeholders.
 
The Belgium-based centre will also provide a hub of knowledge and best practice sharing for law enforcement agencies and policymakers. 
                                                             
“Fraudsters and hackers know no borders or nationalities, so threats can strike from every corner of the world,” Perez added. “Only a joint effort that involves all parties will be able to place Europe on the frontline of enterprise resilience. 

“This new centre will synchronise our global resources and partners to constantly seek and adopt the best practices for us and our customer network.”

An example of the type of threat faced by European financial institutions was seen a year ago, when Malta’s oldest bank took its entire IT system down to counter an active foreign cyber attack in which hackers attempted to steal €13 million. 

3M goes all-in on AWS cloud migration


Bobby Hellard

18 Feb, 2020

American conglomerate 3M is moving its enterprise IT infrastructure to AWS as part of a digital transformation project.

The firm said it will migrate systems for accounting, manufacturing, e-commerce and more into the tech giant’s cloud platform, in a bid to improve its global operations.

3M, formerly the Minnesota Mining and Manufacturing Company, is a 100-year-old US corporation that provides a diverse range of services in markets such as healthcare, automotive, manufacturing and a number of other areas.

Its 96,000 global employees use 51 different technology platforms, according to the company. Moving forward, its plans are to tap into AWS’ portfolio of services, such as machine learning, analytics, storage, security and databases to streamline its business processes and meet changing customer demands.

“AWS, with its proven experience and highly performant global infrastructure, will deliver the agility, speed, and scalability 3M needs to launch new business processes and service models,” said John Turner, CIO at 3M.

“We look forward to expanding our use of AWS’s portfolio of services, including analytics and machine learning, to gain greater insights and become an even more agile company in the cloud.”

3M is one of a number of large organisations to go all-in on AWS over the last couple of years, following the likes of the NFL and BP. The cloud giant is also locked in a legal battle with the Pentagon over the department’s decision to award its JEDI contract to Microsoft – which is seen as Amazon’s closest rival in the cloud space.

In January, a Goldman Sachs survey suggested that Microsoft had an edge over AWS, with IT executives predicting it would win the so-called ‘cloud wars’ over the next three years. However, Amazon’s cloud division continues to score heavily with big organisations, and its legal challenge has seen Microsoft’s JEDI contract paused.

How financial services can stay secure in the cloud: A guide

It was only a few years ago that an air of trepidation surrounded the cloud. In the present day, however, there is no question that, having got through what Gartner termed the ‘trough of disillusionment’, retail financial services firms see the immense value that cloud can bring. What’s more, with the implementation of the second Payment Services Directive (PSD2), the rise of fintech competitors and the emergence of blockchain technologies, many banks are realising that the cloud can be a viable route to future success.

The latest Nutanix ‘Enterprise Cloud Index Report’ for the financial services sector revealed a 21% adoption rate of hybrid cloud among financial services organisations, outpacing the global average of 18.5%. 

This should be welcomed. With the cloud in their hands, financial services organisations have a real chance to transform their industry from where it currently stands. However, as any superhero fan will know, ‘with great power comes great responsibility’. Banks are aware that storing their most sensitive data in a technology they do not yet fully understand could prove detrimental. In our 2019 public cloud survey, respondents exhibited reluctance towards hosting highly sensitive data in the cloud, with customer information (53%) and internal financial data (55%) topping the list of concerns.

The reason for their hesitation? Over half of respondents (56%) confessed that they had doubts about how compliant their cloud set-up was, 47% pointed to the ongoing cybersecurity skills shortage, and a lack of visibility within the cloud was a worry for 42%. Financial services organisations, now more than ever, are striving to understand how to operate fully in the cloud and to recognise the potential security challenges cloud computing can present if not properly leveraged and secured.  

However, in today’s ever-evolving cloud landscape, confusion remains. When it comes to deploying the cloud, excessive regulation surrounding data classification and security remains a central and legitimate concern. Many banks struggle to understand what information needs to be retained on the private cloud, what can be kept in the public cloud, how different tiers of data need to be secured and who is ultimately responsible for confidential data. This confusion only deepens as regulations and sophisticated online threats continue to proliferate.

In listening to customer concerns, we have come to understand the importance of giving financial services organisations a way to leverage the cloud without having to worry about how to allocate security resources across the globe. Barracuda security solutions, for example, are engineered for the cloud, offering dynamic scaling, API-based configuration and integration with Azure Active Directory and Azure App Service, meaning customers can scale the solutions to fit their specific needs and leverage the cloud to protect their customers’ data.   

What to look for in order to stay secure

Full visibility into applications, and user awareness of current threats – such as attacks exploiting cloud misconfiguration – are of paramount importance. As cloud misconfigurations continue to leave organisations vulnerable, financial services firms need to find a way to close this attack window on potential cybercriminals. One approach is to build secure multi-tier architectures in Azure, where customers can keep a level of segregation between tiers to ensure optimal security within their cloud management stack. 
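As a sketch of what that tier segregation can look like when driven from code, the snippet below uses Azure's Python management SDK to add a network security group rule that lets only the web tier subnet reach the data tier on the database port. The subscription, resource group, NSG name and address ranges are all hypothetical, and it assumes the azure-identity and azure-mgmt-network packages.

```python
# Sketch: tier segregation enforced as an NSG rule.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

client.security_rules.begin_create_or_update(
    "finserv-rg",       # hypothetical resource group
    "data-tier-nsg",    # NSG guarding the data tier
    "allow-web-to-db",
    {
        "protocol": "Tcp",
        "source_address_prefix": "10.0.1.0/24",       # web tier subnet
        "destination_address_prefix": "10.0.2.0/24",  # data tier subnet
        "source_port_range": "*",
        "destination_port_range": "1433",
        "access": "Allow",
        "direction": "Inbound",
        "priority": 100,
    },
).result()
```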

When looking for a cloud solution that can keep up with the evolving threatscape, financial services need to consider using a highly scalable security solution that protects applications from targeted and automated attacks, including data breaches, defacement, OWASP Top 10 attacks, and application-layer DDoS.  

Investing in a solution that automates security policy compliance in the public cloud will give you visibility into your distributed cloud environment while ensuring it remains compliant. Such a solution continually scans your infrastructure to detect misconfigurations, actively enforces security best practices, and remediates violations automatically before they become risks.
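To make that scan-and-remediate loop concrete, here is a minimal sketch against one common misconfiguration – S3 buckets missing a public access block – using boto3. It assumes credentials with permission to inspect and update every bucket, and is illustrative rather than a substitute for a compliance product.

```python
# Sketch: scan every bucket and remediate missing public access blocks.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BLOCK_ALL = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        compliant = all(cfg["PublicAccessBlockConfiguration"].values())
    except ClientError:  # no public access block configured at all
        compliant = False
    if not compliant:
        print(f"remediating {name}")
        s3.put_public_access_block(
            Bucket=name, PublicAccessBlockConfiguration=BLOCK_ALL
        )
```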

Overall, committing to such a solution will stand you in good stead to be able to fully leverage the benefits of the cloud, while maintaining the required security and control. To paraphrase the old adage: take care in the cloud and the pounds will take care of themselves.


Why predictive analytics is a top benefit of cloud for SMBs


Sandra Vogel

18 Feb, 2020

Many smaller organisations resist moving to the cloud because they can find it difficult to envision the benefits it brings. This means they miss out on having access to predictive analytics, arguably one of cloud’s most important features. Predictive analytics can open up whole new areas of knowledge, helping any business gain deeper understanding of its products and services, know more about its markets and users, and plan more effectively for its future.

Opening up the box

Predictive analytics encompasses a very wide range of tools. Bryan Betts, Principal Analyst at Freeform Dynamics, explains: “Predictive analytics is a catch-all term for a set of extremely powerful statistical tools. In business they can support planning and decision making – for example, predictive analytics can look for patterns in data and use those to estimate the risk of a customer defaulting, of a machine-tool breaking down, or of traffic patterns on your local network being indicative of a malware or ransomware infestation.”
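To make Betts' customer-default example concrete, here is a minimal sketch using scikit-learn: a logistic regression fitted on synthetic customer records, with invented features standing in for whatever a real ledger would supply.

```python
# Sketch: estimate default risk from (synthetic) customer records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# columns: [months_as_customer, missed_payments] -- invented features
X = rng.integers(0, 36, size=(200, 2)).astype(float)
y = (X[:, 1] > 12).astype(int)  # toy rule standing in for real labels

model = LogisticRegression().fit(X, y)

new_customer = np.array([[24.0, 3.0]])
risk = model.predict_proba(new_customer)[0, 1]
print(f"estimated default risk: {risk:.1%}")
```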

In short, anything an organisation thinks it would like to know can become available to it. In addition, predictive analytics often opens up new avenues of information that may not even have been considered before. 

A third revolution in analysing data

Organisations have been doing a version of predictive analytics since the days when they totted up sales figures in manual ledgers and worked out what generated the most – and least – income for them, what sold best at different times of the year, what effect changes in pricing had, and so on. 

When spreadsheets came along it became possible to ‘do the maths’ more quickly, to do more complex analyses, and even to automate the production of periodic reports. This was a revolution. It was possible to learn more, though not necessarily easy to mine the truly useful information out of the mountain generated from ever-growing amounts of raw data. 

What’s happened with cloud is another revolution – bringing a myriad more ways to look at data, adding in tools to help organisations see easily what’s important and useful, and producing really complex results with lightning speed. And it’s not just old-fashioned maths that’s used. ‘Fuzzy’ logic, sentiment analysis and more can provide new types of insights. 

All of this means that organisations can gain a far greater depth of insight into their data. As Bryan Betts put it: “This stuff thrives – no, depends – on having loads of data to play with. So as well as trends from data within your own organisation, you can get predictions based on data from the wider community too.”

Why not do it in-house?

Tools that are used by cloud services to analyse data in these multivariate and complex ways could be employed in-house, but there are very strong reasons why cloud is preferable for any organisation. Those same reasons are even more pronounced for smaller organisations, for whom the benefits of cloud are even greater. 

Bill Hammond, founder of Big Data LDN, explains: “Small organisations can take advantage of the massive compute power in the cloud to perform batches of analysis which would be almost impossible with the consumer-level hardware many small businesses use day-to-day.” He continues, saying that the upfront cost of buying the amount of compute necessary to match cloud computing power “would be prohibitive for a lot of small businesses, meaning they simply wouldn’t be able to afford the insight predictive analytics would offer them”.

Bryan Betts echoes this, saying the biggest advantage of using cloud to smaller organisations is “you can get access to enterprise-grade technology quickly and easily”.

Speculate to accumulate

There is no getting away from the fact that an organisation will have to spend money to get access to cloud. But if it does so, the benefit of sophisticated, leading-edge tools that take an in-depth look at the masses of data it collects, and extract pearls of wisdom from that data, can be vital to strategic planning. 

“The advantages of cloud analytics include helping businesses more efficiently process and report data findings, enhance collaboration, and provide decision-makers faster access to business intelligence,” Hammond says. He gives a very practical example: “With small businesses having small budgets to match, every item on the shelves needs to be bought and predictive analytics can help forecast what customers will be looking for based on previous years.”
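A toy version of that stocking forecast, using pandas to average each calendar month across previous years of daily sales; the data here is synthetic, where in practice it would come from the cloud data platform.

```python
# Sketch: forecast next year's monthly demand from past years' sales.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("2017-01-01", "2019-12-31", freq="D")
seasonal = 100 + 30 * np.sin(2 * np.pi * dates.dayofyear / 365)
sales = pd.Series(seasonal + rng.normal(0, 5, len(dates)), index=dates)

# Expected units per day for each month of the coming year.
forecast = sales.groupby(sales.index.month).mean().round(1)
print(forecast)
```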

All this means that buying into cloud-based predictive analytics gives smaller businesses a key advantage. As Bill Hammond concludes: “With the capability to predict unknown future events and get vital planning data, small businesses can grow by acting smart. They can punch above their weight, giving them a fighting chance against larger competitors with traditional resources.”

Alibaba Cloud breaks $1.5bn in revenues amid hope of eCommerce migration encouragement

Alibaba Cloud hit more than RMB10 billion (£1.18bn) in revenue in its most recent quarter, with revenues up 62% year over year.

Total revenues for Alibaba were RMB161.4bn (£17.7bn) for Q4 2019, representing yearly growth of 38%, meaning Alibaba Cloud comprises 6.7% of the China-based retail giant’s overall revenues. Cloud revenues for Q2 and Q3 were $1.13bn (£867m) and $1.3bn (£997m) respectively.

On a wider scale, major focus was placed on the emerging coronavirus epidemic, with chairman and CEO Daniel Zhang admitting that it will ‘present near-term challenges’ to Alibaba, already having a ‘significant impact’ on China’s economy. “At the same time, we will see opportunities created by the forces of change,” Zhang told analysts.

Perhaps the best showcase for Alibaba’s cloud infrastructure is the 11.11 one-day shopping festival. Alibaba noted that its infrastructure was ‘scalable, reliable and secure’, handling almost $40 billion in transactions at a peak of more than 544,000 orders per second without disruption. Alongside this, Alibaba migrated its core eCommerce system to its public cloud. Zhang noted that the move should help ‘encourage others’ to adopt Alibaba Cloud for their infrastructure.

Among recent highlights for the company was the launch of its Alink machine learning algorithm to GitHub in November, as well as joining the Confidential Computing Consortium, a Linux Foundation cloud and edge security initiative, in August. Other inaugural members include Google Cloud, IBM, and Microsoft.

The company appears to be targeting media as an industry of interest. Last month, Alibaba Cloud was certified with the Trusted Partner Network (TPN) certification, an initiative between the Motion Picture Association (MPA) and the Content Delivery & Security Association (CDSA), touted as the first cloud provider to get such an award.

According to the most recent figures from Synergy Research, Alibaba holds 5% of the cloud infrastructure market globally, in fifth position behind AWS (33%), Microsoft (18%), Google (8%) and IBM (6%). Not surprisingly, Alibaba continues to dominate the Chinese market, yet its focus on the wider Asia Pacific (APAC) region appears to be paying off.

Figures from Synergy in May found that due to China’s growing spend, Alibaba was the entrenched #2 player across APAC, behind only AWS. Alibaba Cloud issued what the PR industry calls a ‘momentum’ release – read: showing off – in December saying APAC client base growth had been ‘exceptional’ in 2019. Media was cited as a key industry, alongside fintech, retail, gaming, and agriculture.



Amazon wins injunction to temporarily halt Microsoft JEDI contract award – reports

A US judge has temporarily paused Microsoft’s $10 billion JEDI cloud computing contract following an appeal from Amazon, signalling a significant win for the latter.

According to various reports, Judge Patricia Campbell-Smith of the US Federal Claims Court agreed to the initial step. While the existence of the injunction can be made public, the documents pertaining to it are currently sealed.

Amazon has been ordered to pay a $42 million bond – petty cash considering Amazon Web Services (AWS) alone hit almost $10 billion in revenues for its most recent quarter – to cover costs should the court find the motion was filed wrongfully.

The awarding of the JEDI (Joint Enterprise Defense Infrastructure) cloud computing contract to Microsoft by the Department of Defense (DoD) elicited surprise and even derision from industry watchers. AWS has been running the CIA’s cloud for the past five years with little complaint following a lengthy battle with IBM for the contract.

Given the DoD insisted on a single cloud provider throughout the majority of the procurement process, the smart money was always on AWS as the cloud infrastructure market leader. Yet in the press release confirming the award of the contract to Microsoft in October, the DoD said it ‘continued… [the] strategy of a multi-vendor, multi-cloud environment… as the department’s needs are diverse and cannot be met by any single supplier.’

In its appeal, reports of which first came to light two weeks afterwards, AWS alleged that potential presidential interference – Amazon CEO Jeff Bezos owns the Washington Post – had made the contract process ‘very difficult’ for government agencies. Around the time Oracle’s legal challenge over its elimination from the process was dismissed, President Trump announced he was looking into the contract, citing ‘tremendous complaints’ from other companies. A CNBC article reported that a book by James Mattis, the former secretary of defence, alleged Trump told him to ‘screw Amazon’ out of the contract.

The company has separately filed paperwork to depose the President and current secretary of defence Mark Esper.

A Microsoft spokesperson told Ars Technica that “while we are disappointed with the additional delay, we believe that we will ultimately be able to move forward with the work to make sure those who serve our country can access the new technology they urgently require.”
