Microsoft doubles internal carbon tax to drive green data centres


Clare Hopping

17 Apr, 2019

Microsoft has vowed to “do more” to cut its carbon footprint, and its latest initiative doubles the internal carbon fee it charges on all emissions to $15 per metric ton, making its departments accountable for cutting emissions.

It means that each department’s emissions will be continuously scrutinised and budget must be set aside for the fee. The money raised will be re-injected into the company’s carbon neutrality efforts and used to fund new technology initiatives that boost sustainability.
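
The mechanics of the fee itself are simple: a department’s charge is its reported emissions multiplied by the $15-per-tonne rate. The figures in the sketch below are invented for illustration; only the rate comes from Microsoft’s announcement.

```python
# Hypothetical illustration of an internal carbon fee calculation.
# The emissions figures are invented; only the $15/metric ton rate
# comes from Microsoft's announcement.
CARBON_FEE_PER_TONNE = 15  # USD per metric ton of CO2e

department_emissions = {           # metric tons CO2e, illustrative only
    "Cloud data centres": 120_000,
    "Devices manufacturing": 45_000,
    "Business travel": 8_500,
}

for dept, tonnes in department_emissions.items():
    fee = tonnes * CARBON_FEE_PER_TONNE
    print(f"{dept}: {tonnes:,} t CO2e -> ${fee:,} internal fee")
```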

Some of these funded projects include the development of sustainable campuses and data centres, including its headquarters in Redmond, Washington.

“In practice, this means we’ll continue to keep our house in order and improve it, while increasingly addressing sustainability challenges around the globe by engaging our strongest assets as a company – our employees and our technologies,” said Microsoft president Brad Smith.

Microsoft will invest in sustainability data research, analysing information to advise scientists about the state of the planet via schemes such as the AI For Earth Programme.

The Redmond company said it’s going to help its customers build more sustainable futures too, helping them identify how they can be more conscious of the environment and helping them build and reach their own carbon footprint targets.

Finally, the company will push for change. Microsoft announced it’s joined the Climate Leadership Council to help identify and drive forward worldwide initiatives to save our planet.

“Addressing these global environmental challenges is a big task,” Smith added. “Meeting this raised ambition will take the work of everyone across Microsoft, as well as partnerships with our customers, policymakers and organizations around the world. This road map is far from complete, but it’s a first step in our renewed commitment to sustainability.”

The unforgiving cycle of cloud infrastructure costs – and the CAP theorem which drives it

Long read: Modern enterprises that need to ship software are constantly caught in a race for optimisation, whether in terms of speed (time to ship/deploy), ease of use or, inevitably, cost. What makes this a never-ending cycle is that these goals are often at odds with each other.

The choices that organisations make usually inform what they’re optimising for. At a fundamental level, these are the factors that drive, for example, whether enterprises use on-premises infrastructure or public clouds, open source or closed source software, or even certain technologies – such as containers versus virtual machines.

With this in mind, it is prudent to take a deeper look into the factors that drive this cyclical nature of cloud infrastructure optimisations, and how enterprises move through the various stages of their optimisation journey.

Stage #1: Legacy on-premises infrastructure

Large enterprises often operate their own data centres. They’ve already optimised their way here by getting rid of rooms full of racks and virtualising most – if not all – of their workloads. That consolidation and virtualisation resulted in great unit economics. This is where we start our journey.

Unfortunately, their development processes are now starting to get clunky: code is built using some combination of build automation tools and home-grown utilities, and deployment usually requires IT teams to requisition virtual machines.

In a world where public clouds let developers go from code to deployed and operating within minutes, this legacy process is too cumbersome, even though the infrastructure is presented to developers as a ‘private cloud’ of sorts. The length of the process has a direct impact on business outcomes: it stretches cycle times and response times, limits how frequently code updates can be released and – ultimately – slows time to market.

As a result, enterprises look to optimise for the next challenge: time to value.

Public clouds are evidently a great solution here, because there is no delay associated with getting the infrastructure up and running. No requisition process is needed, and the entire pipeline from code to deploy can be fully automated.

Stage #2: Public cloud consumption

At this point, an enterprise has started using public clouds – typically, and importantly, a single public cloud – to take advantage of the time-to-value benefit that they offer. As this use starts expanding over time, more and more dependencies are introduced on other services that the public cloud offers.

For example, once you start using EC2 instances on AWS, pretty quickly your cloud-native application also starts relying on EBS for block storage, RDS for database instances, Elastic IPs, Route53, and many others. You also double down further by relying on tools like CloudWatch for monitoring and visibility.
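
To make that proliferation concrete, here is a hedged sketch using the standard boto3 SDK (all identifiers, names and sizes are placeholders) showing how one ‘simple’ application ends up depending on half a dozen AWS services:

```python
# Sketch: one "simple" app quickly accumulates AWS service dependencies.
# Identifiers (AMI, zone IDs, names, passwords) are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")
dns = boto3.client("route53")
cw  = boto3.client("cloudwatch")

# 1. Compute: an EC2 instance
ec2.run_instances(ImageId="ami-0123456789abcdef0",
                  InstanceType="t3.medium", MinCount=1, MaxCount=1)

# 2. Block storage: an EBS volume
ec2.create_volume(AvailabilityZone="eu-west-1a", Size=100, VolumeType="gp2")

# 3. A static Elastic IP
ec2.allocate_address(Domain="vpc")

# 4. Managed database: an RDS instance
rds.create_db_instance(DBInstanceIdentifier="app-db",
                       DBInstanceClass="db.t3.micro", Engine="postgres",
                       AllocatedStorage=20, MasterUsername="appadmin",
                       MasterUserPassword="change-me")

# 5. DNS: a Route53 record
dns.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",
    ChangeBatch={"Changes": [{"Action": "UPSERT",
                              "ResourceRecordSet": {"Name": "app.example.com",
                                                    "Type": "A", "TTL": 300,
                                                    "ResourceRecords": [{"Value": "203.0.113.10"}]}}]})

# 6. Monitoring: a CloudWatch alarm
cw.put_metric_alarm(AlarmName="app-cpu-high", Namespace="AWS/EC2",
                    MetricName="CPUUtilization", Statistic="Average",
                    Period=300, EvaluationPeriods=2, Threshold=80.0,
                    ComparisonOperator="GreaterThanThreshold")
```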

In the early stages of public cloud use, the ease of use when going with a single public cloud provider can trump any other consideration an enterprise may have, especially at reasonably manageable scale. But as costs continue to grow with increased usage at scale, cost control becomes almost as important. You then start to look at other cloud cost management tools to keep these skyrocketing costs in check – ironically from either the cloud provider itself (AWS Budgets, Cost Explorer) or from independent vendors (Cloudability, RightScale and many others). This is a never-ending cycle until, at some point, the public cloud infrastructure flips from being a competitive enabler to a commodity cost centre.
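
As a concrete example of the visibility these tools are built around, the sketch below pulls a per-service breakdown of one month’s spend using AWS’s own Cost Explorer API via boto3 (the dates are placeholders):

```python
# Sketch: pull one month's spend grouped by service via the Cost Explorer API.
import boto3

ce = boto3.client("ce")
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-03-01", "End": "2019-04-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```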

At a particular tipping point, the linear cost of paying per VM becomes more expensive than creating and managing an efficient data centre with all its attendant costs. A study by 451 Research pegged this tipping point at roughly 400 VMs managed per engineer, assuming an internal, private IaaS cloud.
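
As a back-of-the-envelope illustration of that tipping point, consider the arithmetic below; every figure except the 400 VMs-per-engineer ratio is an assumption invented for the example, not a number from the 451 Research study:

```python
# Back-of-the-envelope tipping-point arithmetic. All inputs are assumptions
# chosen for illustration; only the 400 VMs/engineer ratio echoes the article.
VMS_PER_ENGINEER = 400             # from the 451 Research figure quoted above
engineer_cost_per_month = 12_000   # fully loaded salary, assumed
private_infra_per_vm = 45          # power, space, hardware amortisation, assumed
public_cloud_per_vm = 90           # on-demand instance price, assumed

private_per_vm = engineer_cost_per_month / VMS_PER_ENGINEER + private_infra_per_vm
print(f"Private cloud cost per VM/month: ${private_per_vm:.2f}")
print(f"Public cloud cost per VM/month:  ${public_cloud_per_vm:.2f}")
print("Private cheaper" if private_per_vm < public_cloud_per_vm else "Public cheaper")
```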

Thus there is a tension between the ease of using as many services as you can from a one-stop-shop and the cost of being locked into a single vendor. The cost associated with this is two-fold:

  • Being at the mercy of a single vendor, and subject to any cost or pricing changes it makes. Depending on a single vendor reduces your leverage in price negotiations, not to mention exposing you to further cross-sells and up-sells of related offerings that perpetuate the lock-in. This is an even larger problem with the public cloud model because of the ease with which multiple services can proliferate
  • Switching costs. Moving away from the vendor incurs a substantial switching cost that keeps consumers locked in to this model. It also inhibits consumers’ ability to choose the right solution for their problem

In addition to vendor lock-in, another concern with the use of public clouds is the data security and privacy issues associated with off-premises computing that may, in itself, prove to be a bridge too far for some enterprises.

One of the recent trends in the software industry in general, and cloud infrastructure solutions in particular, is the rise of open source technology solutions that help address this primary concern of enabling ease of use alongside cost efficiency, and lock-in avoidance. Open source software gives users the flexibility to pay vendors for support – either initially, or for as long as it is cost-effective – and switch to other vendors or to internal teams when it is beneficial (or required, for various business reasons).

Note that there are pitfalls here too – it is sometimes just as easy to get locked in to a single open source software vendor as it is with closed source software. A potential mitigation is to follow best practices for open source infrastructure consumption, and avoid vendor-specific dependencies – or forking, in the case of open source – as much as possible.

Stage #3: Open source hell

You’ve learned that managing your data centre with your own homegrown solutions kills your time-to-value, so you tried public clouds that gave you exactly the time-to-value benefit you were looking for. Things went great for a while, but then the scale costs hit you hard, made worse by vendor lock-in, and you decided to bring computing back in-house. Except this time you were armed with the best open source tools and stacks available that promised to truly transform your data centre into a real private cloud (unlike in the past), while affording your developers that same time-to-value benefit they sought from public clouds.

If you belong to the forward-looking enterprises that are ready to take advantage of open source solutions as a strategic choice, then this should be the cost panacea you’re looking for. Right?

Wrong. Unfortunately, most open source frameworks that would be sufficient to support your needs are extremely complex not only to set up, but also to manage at any reasonable scale.

This results in another source of hidden operational costs (OPEX) – management overhead, employee cost, learning curve and ongoing admin – which all translate to a lot of time spent, not only on getting the infrastructure to a consumable state for the development teams, but also keeping it in that state. This time lost due to implementation delays, and associated ongoing maintenance delays, is also costly; it means you cannot ship software at a rate that you need to stay competitive in your industry.

Large enterprises usually have their own data centres, and administration and operations teams, and will build out a private cloud using open source stacks that are appropriately customised for their use. There are many factors that go into setting this up effectively, including typical data centre metrics like energy efficiency, utilisation, and redundancy. The cost efficiency of going down this path is directly dependent on optimising these metrics. More importantly, however, this re-introduces our earliest cost factor: the bottom line impact of slow time-to-value and the many cycles and investment spent not just on simply getting your private cloud off the ground, but having it consumable by development teams, and in an efficient manner.

You have now come full circle, back to the original problem you were trying to optimise for.

The reason we’re back here is that the three sources of cost we’ve covered in this post – lock-in, time-to-value and infrastructure efficiency – seemingly form the cloud infrastructure equivalent of the famous CAP theorem in computer science theory. You can usually have one, or two, but not all three simultaneously. In order to complete the picture, let’s introduce solutions that solve for some of these costs together.

Approach #1: Enabling time-to-value and lock-in avoidance (in theory)

This is where an almost seminal opportunity in terms of cloud infrastructure standardisation comes in: open source container orchestration technologies, especially Kubernetes.

Kubernetes offers not only an open source solution that circumvents the dreaded vendor lock-in, but also provides another layer of optimisation beyond virtualisation, in terms of resource utilisation. The massive momentum behind this technology, along with the community behind it, has resulted in all major cloud vendors having to agree on this as a common abstraction for the first time. Ever. As a result, AWS, Azure and Google Cloud all offer managed Kubernetes solutions as an integral part of their existing managed infrastructure offerings.

While Kubernetes can also be used locally, it is notoriously difficult to deploy and even more complex to operate at scale on-premises. This means that, just as with the IaaS offerings of the public clouds, many are choosing a managed Kubernetes-as-a-service (KaaS) offering from a public cloud to get the fastest time-to-value out of open source Kubernetes. This achieves time-to-value and, in theory, lock-in avoidance, since presumably you could port your application to a different provider at any point.

Only, the chances are you never will. In reality, you’re risking being dependent, once more, on the rest of the cloud services offered by the public cloud. The dependency is not just in the infrastructure choice, but is felt more in the application itself and all the integrated services. It goes without saying too that if you go with a Kubernetes service offered by the public clouds, then these solutions have the same problem that IaaS solutions do at scale in the public cloud – around rising costs – along with the same privacy and data security concerns.

In practice, the time-to-value here is essentially tied to Kubernetes familiarity, assuming you’re going with the public cloud offering, or advanced operational expertise, assuming you’re attempting to run Kubernetes at scale, on-prem.

From the perspective of day one operations (bootstrapping), if your team is already familiar with, and committed to, Kubernetes as its application deployment platform, then it can get up and running quickly. There is a big caveat here – this assumes your application is ready to be containerised and can be deployed within an opinionated framework like Kubernetes. If this isn’t the case, there is another source of hidden costs that will add up in re-architecting or redesigning the application to be more container-friendly. On a side note, there are enabling technologies out there that aim to reduce this ramp-up time to productivity or redesign, such as serverless or FaaS technologies.
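
For a team in that happy position, day one really is small. The sketch below – using the official Kubernetes Python client against whatever cluster your kubeconfig points at, with illustrative names and a generic nginx image – creates a three-replica Deployment in a few lines, whether the cluster is a managed KaaS offering or on-prem:

```python
# Sketch: a minimal day-one deployment using the official Kubernetes Python client.
# Works against any cluster reachable via your kubeconfig; names are illustrative.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

container = client.V1Container(
    name="web",
    image="nginx:1.15",
    ports=[client.V1ContainerPort(container_port=80)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```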

The complexities of day two operations with Kubernetes for large scale, mission critical applications that span on-premises or hybrid environments are enormous, and a topic for another time. But suffice it to say that if you’re able to deploy your first cluster quickly with any open source tool – for example, the likes of Rancher or Kops among others – to achieve fast time-to-value for day one, you’re still nowhere close to achieving time-to-value as far as day two operations are concerned.

Operations around etcd, networking, logging, monitoring, access control, and all the many management burdens of Kubernetes for enterprise workloads have made it almost impossible to go on-prem without planning for an army of engineers to support your environments, and a long learning curve and skills gap to overcome.
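
Even a thin slice of that day-two burden – simply answering ‘are my nodes and pods healthy right now?’ – turns into home-grown tooling on-prem, where a managed service would surface it for you. A minimal sketch with the same Python client:

```python
# Sketch: the kind of basic health check that becomes home-grown tooling on-prem.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Nodes not reporting Ready
for node in core.list_node().items:
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    if ready != "True":
        print(f"Node {node.metadata.name} is not Ready (status={ready})")

# Pods stuck outside Running/Succeeded
bad_pods = core.list_pod_for_all_namespaces(
    field_selector="status.phase!=Running,status.phase!=Succeeded")
for pod in bad_pods.items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```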

Approach #2: Enabling time-to-value and infrastructure efficiency

This is where hyperconverged infrastructure solutions come in. These solutions offer the promise of better time-to-value outcomes because of their turnkey nature, but the consumer pays for this by, once again, being locked in to a single vendor and their entire ecosystem of products – which makes these solutions more expensive. For example, Nutanix offers not only their core turnkey hyperconverged offering, but also a number of ‘essentials’ and ‘enterprise’ services around this.

Approach #3: Enabling infrastructure efficiency and lock-in avoidance (in theory)

We can take an open source approach to hyperconverged infrastructure as well, via solutions like Red Hat HCI for instance. These provide the efficiency promise of hyperconverged, while also offering an open source alternative to single-vendor lock-in. Like any other complex open source infrastructure solutions, though, they suffer from a poor time-to-value for consumers.

This then is the backdrop against which most ‘hybrid cloud’ efforts are framed – how to increase time-to-value and enable portability between environments, while improving unit efficiency and data centre costs. Most hybrid cloud implementations end up being independent silos of point solutions that, once more, can only optimise against one or two of the CAP theorem axes. These silos of infrastructure and operations have further impact on overhead, and hence the cost, of management.

Breaking this cloud infrastructure-oriented CAP theorem would require a fundamentally different approach to delivering such systems. ‘Cloud-managed infrastructure’ helps deliver the time-to-value, user experience and operational model of the public cloud to on-premises and hybrid data centres too. Utilising open infrastructure to ensure portability and future-proof systems and applications can help remediate costs as well.

Altaro VM Backup 8.3 review: Drag and drop, straight to the top


Dave Mitchell

16 Apr, 2019

Protecting your virtual machines doesn’t get easier than this

Price 
£445 exc VAT

There may be a wealth of backup solutions aimed at securing virtualized environments but many offer this as an additional feature, so SMEs may find themselves paying through the nose for excess baggage. Not so with Altaro VM Backup: this software product is designed from the ground up to protect VMware and Hyper-V VMs (virtual machines).

Another bonus is its pricing structure, because unlike many products that use the number of sockets or CPUs, Altaro bases costs purely on the number of hosts. The Standard edition starts at a mere £445 per host and this allows you to schedule backups for up to five VMs per host.

The Unlimited edition begins at £545 and, unsurprisingly, supports unlimited VMs per host but also enables high-efficiency inline deduplication, cluster support, GFS (grandfather, father, son) archiving and Exchange item-level restore. Moving up to a still very affordable £685 per host, the Unlimited Plus edition brings Altaro’s cloud management console into play and adds WAN-optimised replication, CDP (continuous data protection) and support for offsite backups to Microsoft Azure.

Altaro VM Backup 8.3 review: Deployment

Altaro claims it’ll take you 15 minutes to install the software and get your first backup running, and it’s not wrong. It took us 5 minutes to install it on a Windows Server 2016 host, after which we declared our first Hyper-V host, added a Qnap NAS appliance network share as our primary backup location, picked a VM from the list presented and manually ran the job.

The console is very easy to use and we also declared the lab’s VMware ESXi host and another Hyper-V host running our Exchange 2013 and SQL Server 2014 services. Along with NAS appliance shares, Altaro supports a good choice of destinations including local storage, iSCSI targets, USB and eSATA external devices, UNC share paths and RDX cartridges.

For secondary off-site locations, you can copy data to Altaro’s free Offsite Server (AOS) app, which supports Windows Server 2012 upwards. The AOS app can also be hosted in the cloud using a range of providers including Microsoft Azure.

Altaro VM Backup 8.3 review: User interface

Creating VM backup strategies doesn’t get any easier as most operations are drag and drop. We viewed all VMs presented by our Hyper-V and VMware hosts and simply dragged them across and dropped them on our primary backup location.

At this stage, you can run them manually with a single click, but applying a schedule is just another drag and drop procedure. Altaro provides two predefined schedules and we could create our own with custom start times plus weekly and monthly recurrences.

A set of default data retention policies is provided and you can easily create new ones for on-site and off-site backup locations. Choose how many versions you want to keep, decide whether older ones are deleted or archived, and then just drop VMs onto them to apply the policy.

You can add off-site copies to a schedule at any time by – you’ve guessed it – dragging and dropping VMs onto the secondary backup location icon. CDP can be enabled on selected VMs and scheduled to run as often as every 5 minutes, while application consistent backups can be applied to VMs running VSS-aware apps such as Exchange and SQL Server.

Altaro VM Backup 8.3 review: Replication and restoration

Selected VMs can be replicated to the remote AOS host where CDP defaults to updating them every 5 minutes, or less frequently if you want. Depending on which type of VMs are being replicated, AOS requires access to local Hyper-V and VMware hosts, where it manages VM creation and handles all power up and shutdown commands.

Both the Altaro primary and AOS hosts must be running identical OSes and for the latter, we defined iSCSI storage for off-site copies and declared local Hyper-V and VMware hosts to provide VM replication. Initial off-site copies can be sped up as Altaro provides an option to copy the data to a removable device for seeding the remote vault.

General recovery features are excellent: you can restore a virtual hard disk, clone a VM or boot one straight from a backup to its original host or to another one. We tested the Boot from Backup feature and Altaro provisioned a SQL Server 2014 VM from its latest backup on a new Hyper-V host and had it running and waiting at the Windows login screen in one minute.

Altaro’s Sandbox feature takes the worry out of recovery by verifying the integrity of selected backups. Along with checking the data stored in backups, it clones VM backups to the same host to make sure they will boot when needed – and it does this all in the background.

GRT (granular recovery technology) restores are provided for recovering files, folders and Exchange items. Exchange GRT is undemanding; we selected this for our Exchange 2013 VM, chose a backup and its virtual hard disk, browsed for the EDB file and viewed our users and mailboxes plus items such as individual emails, contacts and calendars. The console creates a PST file containing the recovered items and we used the Exchange Admin Center web app to grab the file and import its contents into the relevant user’s mailbox.

Altaro VM Backup 8.3 review: Verdict

During testing, we were very impressed with Altaro VM Backup’s fast deployment and extreme ease of use. The clever console design makes it easy to create backup strategies for VMs plus it offers a wealth of valuable recovery and replication features. Protecting your Hyper-V and VMware virtualized environments really doesn’t get any easier or more affordable, making Altaro VM Backup a top choice for SMEs.

Jamf unveils Apple management cloud platform in UK


Clare Hopping

16 Apr, 2019

Jamf has brought its Jamf Premium Cloud service for Apple management to UK servers, meaning customers using the service in the region will be able to continue using it without a hitch following the UK’s split from the EU.

Bringing the Jamf Premium Cloud product to a UK server means businesses in highly regulated industries such as finance and education will be able to ensure their data is stored securely and firmly within the region.

It allows businesses to manage and grant access to IP and network addresses, and allows certain users to log into the server without interruption.

Jamf’s premium cloud product also allows businesses to white-label their own server address, not only making sure it fits with company branding but also offering users peace of mind that they’re on the right server.

Previously, the firm’s closest data centre to the UK was in Germany (its only European facility), with additional servers in the US, Japan and Australia. This latest move means Jamf can better tailor its products and services to a UK-focused audience.

“We are excited to bring Jamf Premium Cloud to the U.K. as it will bring flexibility to our customers here,” said Mark Ollila, director, cloud operations at Jamf.

“Jamf users will be able to reap the benefits of the cloud securely, as well as experience additional layers of security and customised branding. As the U.K. prepares to part ways with the EU, the cloud will be an essential tool for British businesses to compete globally, which is why we have launched Jamf Premium Cloud tailored for those in the U.K.”

Qualcomm sees a key part of the market up for grabs with Cloud AI 100 chip launch

The race for artificial intelligence (AI) and the cloud continues to be well and truly joined. AI, alongside the cloud, can be seen as two of the technologies that, in tandem, will power the next cycle of business. As VMware CEO Pat Gelsinger put it last year: cloud enables mobile connectivity, mobile creates more data, more data makes AI better, AI enables more edge use cases, and more edge means more cloud is needed to store the data and do the computing.

As this publication has frequently argued, for those at the sharp end of the cloud infrastructure market, AI – along with blockchain, quantum and edge, to name three more – represents the next wave of cloud services and where the new battle lines are being drawn. Yet there is a new paradigm afoot.

Qualcomm went into the fray last week with the launch of the Qualcomm Cloud AI 100. “Built from the ground up to meet the explosive demand for AI inference processing in the cloud, the Qualcomm Cloud AI 100 utilises the company’s heritage in advanced signal processing and power efficiency,” the press materials blazed. “With this introduction, Qualcomm Technologies facilitates distributed intelligence from the cloud to the client edge and all points in between.”

While the last dozen or so words in that statement may seem like the key takeaway, it is the power efficiency side which makes most sense. That is Qualcomm’s heritage, in terms of using its technology to power millions of smartphones, but it has not had the same impact when it comes to the data centre. In December, the company announced it would lay off almost 270 staff, confirming it was ‘reducing investments’ in the data centre business.

Its competition in this field, chiefly Intel but also NVIDIA, is particularly strong. Yet Kevin Krewell, principal analyst at Tirias Research, told Light Reading last week that “to fit more easily into existing rack servers, new inference cards need to be low power and compact in size.” This, therefore, is where Qualcomm sees its opportunity.

With Cloud AI 100, Qualcomm promises more than 10 times greater performance per watt than the industry’s most advanced AI inference solutions deployed today, and a chip ‘specifically designed for processing AI inference workloads.’

“Our all-new Qualcomm Cloud AI 100 accelerator will significantly raise the bar for the AI inference processing relative to any combination of CPUs, GPUs, and/or FPGAs used in today’s data centres,” said Keith Kressin, Qualcomm SVP product management. “Furthermore, Qualcomm Technologies is now well positioned to support complete cloud-to-edge AI solutions all connected with high speed and low-latency 5G connectivity.”

Crucially, this is an area where cooperation, rather than competition, with the big cloud infrastructure providers may be key. Microsoft was unveiled as a partner, with the two companies’ visions similar and collaboration continuing ‘in many areas.’

Writing for this publication in November, Dr. Wanli Min, chief machine intelligence scientist at Alibaba Cloud, noted how this rise was evolutionary rather than revolutionary. “For many organisations it has been a seamless integration from existing systems, with AI investment gathering pace quickly,” he wrote. “Over the next few years we can expect to see the industry continue to boom, with AI driving cloud computing to new heights, while the cloud industry helps bring the benefits of AI to the mainstream.”

Why the antidote for multi-cloud complexity is a unified management strategy

Start counting off the benefits of multi-cloud and you soon run out of fingers: avoiding vendor lock-in, matching the right tool to the job, democratised access for stakeholders, balancing performance and cost, and geographically aligning workloads, to name just five.

Yet there’s always a catch. With management of multiple clouds, one of the big gotchas is complexity. A recent MIT Technology Review/VMware study (pdf) found that 57% of senior IT managers surveyed report technical and skills challenges were ‘critical learnings’ from their multi-cloud implementations. In a recent Morpheus Data and 451 Research webinar, it was revealed that 90% of IT leaders reported skills shortages in cloud-related disciplines, up from 50% just a few years ago.

What to do when you reach the multi-cloud ‘tipping point’

Many organisations still struggle to do one cloud right. When looking at multi-cloud deployments alongside cloud skills gaps, it’s clear that IT teams are in trouble. Sooner or later, your multi-cloud setup will reach what cloud industry expert David Linthicum refers to as the ‘tipping point’ at which ‘the number of services you use exceeds your ability to properly manage them.’ The exact tipping point varies based on your company’s size, the complexity of the services you use, security and governance, as well as your staff’s skill set.

Linthicum lists four factors that indicate your multi-cloud will benefit from a third-party cloud management platform such as Morpheus:

  • Are your developers unhappy about how long it takes for them to allocate resources to their applications?
  • Are your managers uncertain about who is responsible for the security of specific cloud resources?
  • Are your users griping about performance glitches, many of which are caused by applications not getting the cloud resources they need?
  • Are you unable to charge back cloud costs to the appropriate departments and users?

If the answer to any of these questions is “yes,” you should consider using a multi-cloud management platform (CMP). Your developers benefit by being able to allocate various cloud resources to their apps directly and on-demand via GUI or API/CLI. A CMP also makes it easy to track who is provisioning specific resources and confirm that they are properly securing the workloads.
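
Under the hood, whether via a CMP or hand-rolled scripts, provisioning accountability and chargeback usually reduce to consistent tagging at provision time plus tag-based cost reporting. A hedged sketch on AWS (the ‘team’ and ‘owner’ tag keys, and all identifiers, are assumptions for illustration):

```python
# Sketch: tag resources with owner/team at provision time, then report spend
# per team. Assumes the "team" tag is activated as a cost allocation tag;
# all identifiers and tag keys here are illustrative.
import boto3

ec2 = boto3.client("ec2")
ce = boto3.client("ce")

# 1. Stamp ownership onto the resource when it is created
ec2.run_instances(
    ImageId="ami-0123456789abcdef0", InstanceType="t3.micro",
    MinCount=1, MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "team", "Value": "payments"},
                 {"Key": "owner", "Value": "jane.doe"}],
    }],
)

# 2. Later, group the bill by the same tag for chargeback
report = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-03-01", "End": "2019-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)
for group in report["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```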

The smart folks over at Gartner have spent hundreds of hours talking to customers and vendors to come up with a pretty slick framework for thinking about the CMP space. Their capability ‘wheel’ sets out the core categories of capability. There are tools that provide one of these capabilities across multiple cloud platforms. There are also tools that provide a range of these features within a narrow set of platforms. And then there are the unicorns: truly multi-function, multi-platform CMPs that are agnostic and not tied to a legacy hypervisor or hardware vendor. Gartner goes into detail on this space in its 2019 Magic Quadrant for Cloud Management Platforms.

Your multi-cloud strategy must meet the needs of multiple stakeholders

In the modern multi-cloud world, companies need a way to move between public and private clouds quickly, simply, and reliably. The only way to accomplish this is by cutting through the inherent complexity of multiple individual services, as BusinessWorld‘s David Webster explains. The key is to shift your focus to collaboration: place the customer experience in the centre by creating “new customer engagement models.”

Improving the customer experience, managing costs, and enhancing DevOps velocity are all possible with the right multi-cloud orchestration approach – one that treats infrastructure teams, developers and business users as equal citizens. Collaboration and partnerships are easier to establish when all parties share the platform that delivers the apps and underlying analytics that drive the business forward.

These personas have different needs, however, so it’s key to strike a balance that delivers on each group’s key need without compromising those of the others. For example, IT operations teams have KPIs around security and service levels, which tends to lead to more conservative approaches to technology adoption. Developer teams, on the other hand, are all about velocity and continuous innovation. Business teams care about differentiation and innovation, but not at the expense of reputation or cost.

Business and IT operations: Security, cost, and cross-cloud management

TechRepublic‘s Alison DeNisco Rayome reports that 86 percent of cloud technology decision makers at large enterprises have a multi-cloud strategy. The benefits cited by the executives include improved IT infrastructure management and flexibility (33 percent), improved cost management (33 percent), and enhanced security and compliance (30 percent).

Transitioning to a cloud-first IT operation is bound to entail overcoming inertia, adjusting to changing roles, and learning new skills. Realising multi-cloud benefits requires overcoming challenges in three areas in particular, according to CloudTech‘s Gaurav Yadav:

  • Public cloud security: While the security of the public cloud is considered robust, the transit of data from on-premises infrastructure to the public cloud needs to be carefully planned and implemented
  • Cost accounting: Multi-cloud commoditises cloud resources by letting users choose the services that best meet their specific needs. To accomplish this, enterprise IT must transition from vendor-enforced workflows to a vendor-agnostic infrastructure
  • Unified cross-cloud view: The goal is to give users a single management platform that lets them visualise and implement workloads using multiple cloud services that are viewed as a single resource rather than as “isolated entities”

Developers: New kids with new demands

What do developers need out of the multi-cloud management equation? They are interested in full API/CLI access, infrastructure as code, and speed of deployment. As David Feuer writes on Medium, the proliferation of developer products and services is matched by increases in use cases and backend technical complexity. Feuer recommends building your multi-cloud strategy from the ground up, putting APIs and developers first.
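
‘Infrastructure as code’ here simply means developers describe the resources they need in ordinary code and let tooling reconcile reality with that description. As one illustrative option among many (Terraform, CloudFormation and others play the same role), a minimal Pulumi programme in Python might look like this:

```python
# Minimal infrastructure-as-code sketch using Pulumi's Python SDK.
# The bucket name is illustrative; running `pulumi up` would create it.
import pulumi
import pulumi_aws as aws

# Declare the desired state: one S3 bucket for application assets
bucket = aws.s3.Bucket("app-assets")

# Publish the generated bucket name as a stack output for other tools to consume
pulumi.export("bucket_name", bucket.id)
```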

Developers want to use cutting-edge tools to create modern apps. The results of the 2018 Stack Overflow Developer Survey show that when choosing an employer, developers’ second-highest priority – after salary and benefits – is the languages, frameworks and other technologies they will be working with. Considering that more than half of the developers surveyed have been in their current job for less than two years, it pays for companies to give talented developers access to the tools they need to excel.

Dynatrace extends AI-powered software intelligence platform to hybrid mainframe environments


Clare Hopping

15 Apr, 2019

Dynatrace’s Davis AI engine has been updated with native support for IBM Z – including CICS, IMS and middleware – to offer end-to-end data visibility in hybrid environments.

It means businesses running a combination of data environments are now able to track performance, whether they’re using a cloud-based set-up, traditional mainframe or both to run their apps.

“While enterprises are moving applications to modern cloud stacks for agility and competitive advantage, these applications often still depend on critical transactions and ‘crown jewels’ customer data residing on IBM Z mainframes,” said Steve Tack, senior vice president of products at Dynatrace.

“This puts pressure on these resources to perform tasks that were not envisioned when the mainframes were launched.”

Although the mainframe still underpins the world’s transaction-led infrastructure – powering 30 billion transactions a day – Dynatrace recognises firms are increasingly moving to the cloud to transform their processes. And as they make the move, Dynatrace explained, organisations are finding “blind spots” in their existing back-end monitoring technology.

A slip-up or missed data could mean huge costs for a business, not to mention performance issues for end users, lost transactions and other problems that can affect its reputation as well as its bottom line.

“Because Dynatrace provides end-to-end hybrid visibility, customers can optimise new services, catch performance degradations before user impact, and understand exactly who has been impacted by an incident,” Tack added.

“This enables customers to confidently innovate applications that leverage data from mainframes to increase revenue, build brand loyalty, and create competitive advantage.”

Email security market to boom as firms head to the cloud


Clare Hopping

15 Apr, 2019

Businesses are stepping up their email security game as hackers ramp up their email-based attacks, according to analyst firm Frost & Sullivan.

The company revealed that spending on email security worldwide rose by 15.9% year on year in 2018, and predicts the market will grow at a compound annual growth rate (CAGR) of 9.9% through 2022.

Frost & Sullivan explained that this is presenting a huge opportunity for security vendors, suggesting if they’re not already offering an email security product, they certainly should explore it, taking advantage of the growing popularity of Office 365 and Google G-Suite.

“[Vendors] will also be looking to build out global data centres to meet data privacy regulations, strengthen cloud resilience, and engage with public cloud (AWS, Azure) for higher scalability,” said Tony Massimini, the firm’s senior industry analyst for Digital Transformation.

“Furthermore, they may invest in a global threat intelligence network in order to leverage threat intelligence and analytics for advanced threat detection and other functions for email security.”

To date, the email security marketplace has been a crowded and fragmented sector, but now vendors looking to differentiate themselves from the competition should be looking to collaborate with other firms and offer something truly unique.

For example, they should be introducing automation to help fill the security skills gap, focusing on GDPR compliance and ensuring they offer all-in-one security solutions that address a business’s entire infrastructure.

Other areas expected to drive revenue growth include malware-less threat detection using threat analytics and behavioural analysis. Data loss prevention is also a growing area that firms expect to invest in.

“Already, vendors like Mimecast offer a fully integrated suite of proprietary cloud services, while the Symantec Email Security solution tightly integrates with security environments via the Symantec Integrated Cyber Defense platform,” Massimini added.

Uncovering the insight behind Gartner’s $331 billion public cloud forecast

Gartner is predicting the worldwide public cloud services market will grow from $182.4 billion in 2018 to $214.3bn in 2019, a 17.5% jump in just a year.

  • Gartner predicts the worldwide public cloud service market will grow from $182.4bn in 2018 to $331.2bn in 2022, attaining a compound annual growth rate (CAGR) of 12.6%
  • Spending on infrastructure as a service (IaaS) is predicted to increase from $30.5bn in 2018 to $38.9bn in 2019, growing 27.5% in a year
  • Platform as a service (PaaS) spending is predicted to grow from $15.6bn in 2018 to $19bn in 2019, growing 21.8% in a year
  • Business intelligence, supply chain management, project and portfolio management and enterprise resource planning (ERP) will see the fastest growth in end-user spending on SaaS applications through 2022

Gartner’s annual forecast of worldwide public cloud service revenue was published last week, and it includes many interesting insights into how the research firm sees the current and future landscape of public cloud computing.

By the end of 2019, more than 30% of technology providers’ new software investments will shift from cloud-first to cloud-only, further reducing license-based software spending and increasing subscription-based cloud revenue.

[Graphic: worldwide public cloud service revenue by segment, 2018 to 2022]

Comparing compound annual growth rates (CAGRs) of worldwide public cloud service revenue segments from 2018 to 2022 reflects IaaS’ anticipated rapid growth.

Gartner provided a full data table this week as part of its announcement.

BI, supply chain management, project and portfolio management and ERP will see the fastest growth in end-user spending on SaaS applications through 2022

Gartner is predicting end-user spending on business intelligence SaaS applications will grow at a 23.3% CAGR between 2017 and 2022. Over the same period, spending on SaaS-based supply chain management applications will grow at a 21.2% CAGR, project and portfolio management SaaS applications at 20.9%, and SaaS ERP systems at 19.2%.
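
For readers who want to sanity-check figures like these, a compound annual growth rate is simply the constant yearly rate that links a start value to an end value over n years; the values in the snippet below are invented, not Gartner’s:

```python
# CAGR = (end / start) ** (1 / years) - 1; the example values are invented.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# A hypothetical market growing from $10bn to $20bn over 5 years:
print(f"{cagr(10, 20, 5):.1%}")   # -> 14.9%
```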

Sources: Gartner Forecasts Worldwide Public Cloud Revenue to Grow 17.5 Percent in 2019 and Forecast: Public Cloud Services, Worldwide, 2016-2022, 4Q18 Update (Gartner client access)

Outlook.com hack much worse than initially thought


Bobby Hellard

15 Apr, 2019

A hack that Microsoft said affected “some” of its users’ email accounts is much worse than initially thought, according to reports.

On Saturday, the company confirmed that some users of its email services had been targeted by hackers. But the issue is thought to be much worse than Microsoft initially acknowledged, with the hackers reportedly able to access email content from a large number of Outlook, MSN and Hotmail email accounts.

The tech giant has been notifying Outlook.com users that the hackers were able to access their accounts for the first three months of this year after it discovered that a support agent’s credentials were compromised for its webmail services. This resulted in unauthorised access to accounts between 1 January and 28 March 2019.

According to Microsoft’s initial notification, the hackers could have viewed account email addresses, folder names and the subject lines of emails – but not the content of the emails or any attachments.

“We addressed this scheme, which affected a limited subset of consumer accounts, by disabling the compromised credentials and blocking the perpetrators’ access,” said a Microsoft spokesperson in an email to Tech Crunch.

However, in March – before the company publicly announced the attack – an unnamed source told Motherboard that this abuse of customer support portals allowed the hackers to gain access to any email account as long as it wasn’t a corporate-level one.

“We have identified that a Microsoft support agent’s credentials were compromised, enabling individuals outside Microsoft to access information within your Microsoft email account,” a Microsoft email posted on Reddit said.

It’s not clear how many users have been affected by the breach, or who the hackers are, but they weren’t able to steal login details or other personal information. As a cautionary measure, Microsoft is recommending that affected users reset their passwords.

“Microsoft regrets any inconvenience caused by this issue,” says the security notification. “Please be assured that Microsoft takes data protection very seriously and has engaged its internal security and privacy teams in the investigation and resolution of the issue, as well as additional hardening of systems and processes to prevent such recurrence.”

This latest security incident comes just weeks after a former security researcher pleaded guilty at Blackfriars Crown Court to hacking into Microsoft and Nintendo servers. Microsoft’s Windows development servers were also breached for a number of weeks in January 2017, allowing hackers across Europe to access pre-release versions of the OS.

Interestingly, the time frame for this latest hack means it was going on while Microsoft’s Office 365 cloud-powered productivity suite suffered outages across Europe, with users reporting issues connecting to the cloud-hosted email servers back in January.