How Abbots Care gained greater assurances around data security with a revamped DR and backup strategy

Case study: All data is equal, but for some industries, data is more equal than others. As a result, great care needs to be taken to keep that data secure, whether in the cloud or anywhere else.

Healthcare, across its various channels, is a classic example. Some healthcare organisations are moving towards the cloud with less trepidation than before. In February, for instance, a study from Nutanix found that more than one in three healthcare organisations polled expected to be deploying hybrid cloud solutions by 2021. At the start of this year, pharmacy giant Walgreens Boots Alliance selected Microsoft as its primary cloud provider, with the majority of its infrastructure moving across to Azure.

Regardless of where it is hosted, the non-negotiables for healthcare providers are that data can be accessed whenever it is needed and that its integrity is beyond question.

Abbots Care, a home care company based in Hertfordshire, falls, like any responsible UK provider, under the regulatory jurisdiction of the Care Quality Commission. As managing director Camille Leavold puts it, one data breach could mean the company’s licence being taken away.

Leavold therefore wanted more assurance of how secure her company’s data was – and as a result she turned to managed IT services provider Fifosys.

“About two years ago, we were at a stage where we had quite a lot of data,” Leavold tells CloudTech. “Although we were using a company that said our data was secure and safe, we actually didn’t have any way of being able to evidence that.

“Obviously we’re quite in a compliant sector, and we needed to be able to evidence it. That started us looking,” she adds. “We were also looking for a company that was 24/7, because we are too.”

Mitesh Patel, managing director of Fifosys, went through the standard detailed audit when the work originally went out to tender. Basic questions around the backing up of data, recovery times and the sign-off process highlighted risks which ‘weren’t acceptable’ to Leavold, as Patel puts it. Fifosys’ solution ties into the company’s partnership with business continuity provider Datto, whose technology, according to Fifosys technical director James Moss, is ‘effectively a mini-DR test every day.’

Fifosys runs two official recovery tests a year, with the results sent to Leavold, who can then present them to the board. “It’s no longer something hidden where you’ve gone ‘okay, there’s a vendor dealing with it, we’re going to be blind to it’,” Patel tells CloudTech. “The recovery process… they get a report, that’s discussed – is this timeframe acceptable? – [and] are there any tests they want to do outside of this?”
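In practice, the evidence Patel describes can be as simple as timing a test restore against an agreed recovery time objective (RTO) and recording the result. The sketch below is purely illustrative; the restore_from_backup() hook and the four-hour RTO are assumptions, not part of Datto’s or Fifosys’ tooling.

```python
import time

RTO_SECONDS = 4 * 60 * 60  # illustrative recovery time objective


def restore_from_backup(snapshot_id: str) -> None:
    """Placeholder for a vendor-specific restore call (hypothetical)."""
    raise NotImplementedError("wire this up to your backup/DR provider")


def run_recovery_test(snapshot_id: str) -> dict:
    """Time a test restore and report whether it met the agreed RTO."""
    started = time.time()
    restore_from_backup(snapshot_id)
    elapsed = time.time() - started
    return {
        "snapshot": snapshot_id,
        "recovery_seconds": round(elapsed),
        "rto_seconds": RTO_SECONDS,
        "within_rto": elapsed <= RTO_SECONDS,
    }
```

A report built from output like this is the sort of evidence that can be taken to a board: a timestamped record of whether the test restore came in under the agreed recovery window.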

Like many healthcare providers, Abbots Care also needs a good ERP system to ensure all its strands are tied up – particularly with care workers out in the field, checking on their tablets and devices which patients they need to see, their medication, and the service which needs to be provided at that time. "There's a lot for Abbots Care that they need to have up and running, and when you're scheduling so many people out in the field, these systems need to be up," says Patel.

Another consoling aspect is that the company’s backup and disaster recovery is all in one place. “[If] you can’t answer the [audit] questions and you’ve got five or six different vendors involved in delivering your backup, your continuity, applications, recovery… it’s fine you’ve got these vendors in, but your recovery time is extended continuously,” explains Patel. “Who’s actually responsible? Whose neck is on the line in the event that something does happen?”

Outages are unfortunately a fact of life, as even the largest cloud providers will testify, but can be mitigated with the right continuity processes in place. “Continuity was a big, big part for them, and then it’s all in terms of protecting the data and having versions of it,” explains Patel.

“There are organisations who say they’ve got four sites, and [they’re] just going to replicate across those four sites and invest in the same infrastructure on all four. That’s very difficult to maintain, administer and manage,” Patel adds. “When you are testing, you find people are only testing one of their sites rather than all four.

“You should be doing four tests at least twice a year – but the time involved in doing that, many people underestimate [it] and then start compromising.”



Why adaptability is critical to meet future data centre demands

The next wave of technology innovation is already here with new applications transforming the way we live, work and travel. With the emergence of the Internet of Things (IoT), artificial intelligence (AI), cloud-based services, 4K videos and 5G networks, data centre operators must provide more data capacity and higher computing power if they hope to keep up with the unprecedented demands.

The sheer scale and scope of the gap the industry faces demand that network operators rethink the way they have traditionally organised the design and deployment of networks and data centres.

According to Gartner, by 2025, the number of micro data centres will quadruple, due to technological advances such as 5G, new batteries, hyperconverged infrastructure (HCI) and various software-defined systems (SDx). By the same year, enterprise data centres will have five times more computational capacity per physical area than today.

With demands for data increasing at such an unprecedented rate worldwide, operators are under mounting pressure to develop and build data centres that can handle the increased connectivity and bandwidth demands these new technologies bring. Data centres are being forced to adapt to ever-changing environments and, in order to prepare for future demands, operators need to invest in technology that will grow alongside them.

Change needs to happen in order to be prepared for the future

With technologies continuing to advance at a rapid rate, the availability of computing and storage with ultra-low latency needs to be at the forefront of operators’ minds. Data centre downtime carries enormous cost implications for an operator, making it crucial that fibre management solutions are in place to make day-to-day operations as seamless as possible.

If data demands continue at the current rate, it is predicted that hyperscale data centres would need to be upgraded every two years to keep up with the bandwidth and storage demands. The cost and time implications for operators to overhaul the entire system every couple of years would be astronomical. Instead, operators need to be doing all they can to invest in technology that future-proofs their investments.

The evolution of data centre infrastructure starts with simplification. Data centre infrastructures are changing from predominantly complex or proprietary systems to repeatable, predictable designs standardised around commercial off-the-shelf (COTS) infrastructure. In addition, the fast-paced adoption of advances such as hyperconverged, software-defined and composable infrastructure is adding momentum to standardisation, rationalisation and consolidation initiatives.

Cabling is critical to the future of data centres

Fibre management systems are crucial in organising the cabling in a clear and concise way, reducing the risk of damage and downtime. With such large costs associated with inaccessible data, downtime is simply not a possibility for a data centre operator. With hyperscale data centres containing hundreds of servers, it is absolutely crucial that there is no error when it comes to moves, adds and changes (MACs) of fibre connectivity.

Due to the complexity of fibre cabling and the implications that can occur through error, operators need to deploy flexible, comprehensive fibre management systems that can be managed with ultimate ease and maximum efficiency.

In order to achieve the ultimate level of protection and maximum ease, operators should install fibre optic systems in the meet-me-room (MMR) or main distribution area (MDA) that are high-density with a clear demarcation point. Selecting a dense cross-connect fibre management system is best. By having all the connections in one location, there are fewer reasons for someone to interact with the active equipment in the data centre and subsequently cause an error.

With its market-leading modular LISA Double Access fibre management system, HUBER+SUHNER has revolutionised how structured cabling in data centres works worldwide. Short time-to-market turnarounds of this kind will increasingly be required: with ever-increasing bandwidth and connectivity demands, it is critical to adapt quickly and select a fibre management system with interchangeable modules that can be easily installed, exchanged and removed.

This modular approach makes the cabling structure in a data centre incredibly flexible. The fibre management system can be installed in a multitude of positions whether that be against a wall, the end of a row or back to back in a row. With a variety of layouts possible, the system can easily be implemented in the main distribution area (MDA) but also in the horizontal distribution area (HDA).

With the pressure on data centres only set to increase, operators need to consider all the options available to them and remain flexible and ready for any eventualities that may arise. There are many fibre management systems on the market, but by choosing one with a modular, pay-as-you-grow infrastructure, data centres gain the capability to continuously evolve and adapt to ever-increasing future demands.

Preparing for the unknown

Over the last fifty years, enterprise data centres have been responsible for storing and processing critical business information and have evolved gradually and conservatively during that time. However, traditional data centres are now feeling the impact of disruption from cloud, edge computing, advances in colocation and hosting services. In addition, advances in the areas of power, cooling, telecommunications, AI, operations, hardware and software are transforming enterprise data centres as never before. Traditional on-premises data centre models must evolve to play a role in modern enterprise information management.

If data centre operators are to stand a good chance of keeping up with these unprecedented demands, they need to take advantage of the systems that enable them to do their jobs effectively. As technological innovation continues, the pressure on data centres is only going to mount further. Simple, high-density fibre management systems with easy-handling designs that clearly show incoming and outgoing connectivity will be critical to the future of data centres.

Picture credit: HUBER+SUHNER


View From the Airport: VMworld US 2019


Adam Shepherd

30 Aug, 2019

I think it’s fairly safe to say that I picked a good year to visit VMworld US for the first time. While I’ve been to its European equivalent, this was the first year I went to the main event and it was something of a doozy.

Not only did we get a nice bit of pre-conference sizzle with the news that VMware is acquiring Carbon Black and Pivotal, but the entire show was also a festival of product updates and previews. More than anything else, it felt like a statement of intent from Gelsinger and his comrades, setting out the company’s stall for the future.

The big focus of the show – and of VMware’s main announcements – was Kubernetes. The company is betting big on the container technology as the future of application development, with plans to weave it into vSphere with Project Pacific, and use Pivotal and Bitnami’s technology to make VMware even more attractive to Kubernetes developers. Virtually every main-stage announcement featured Kubernetes in some capacity, and VMware veteran Ray O’Farrell is being put in charge of getting that side of the business (including the forthcoming Pivotal integrations) running smoothly.

All the new Kubernetes-based products – Project Pacific, Tanzu and the like – are still in tech preview with no release date in sight and, honestly, that’s probably a good thing. I’m really not sure how many of VMware’s customers are ready to start deploying containers at scale. Mind you, making Kubernetes management a core part of VMware’s capabilities may well go a long way towards encouraging adoption.

It feels like a future-proofing measure more than anything else. Gelsinger is a sharp guy and when he says that containers are the future, he’s not wrong. It may not have reached mass adoption yet, but it’s growing fast, which isn’t surprising given the technology’s proven benefits. This isn’t a pivot though; VMs aren’t going anywhere, as Gelsinger himself has been quick to point out. He notes that all the companies operating Kubernetes at scale – Google, Microsoft, Amazon, et cetera – operate them inside VMs. More to the point, it’ll be a long time yet before Kubernetes gets anywhere close to rivalling VMs in terms of the number of production workloads.

Between the new possibilities promised by Project Pacific, the increasing focus on multi-cloud infrastructures and the forthcoming integration of Carbon Black’s technology into the product line, VMware looks like a company at the absolute top of its game, cementing its dominance of the virtualisation market and paving the way for that dominance to continue long into the future. If Gelsinger, O’Farrell and the rest of the team can pull off everything they’ve promised, then customers and admins have a lot to look forward to.

The continuing rise of Kubernetes analysed: Security struggles and lifecycle learnings

Analysis: Container technology, DevOps practices and microservices application architectures are three of the key drivers of modern digital transformation, and all three are being adopted rapidly. Whether built in the cloud, on-premises or in hybrid environments, containerisation has proved to offer significant advantages in terms of scalability, portability, and continuous development and improvement.

More recently, organisations have begun to standardise on Kubernetes as their container orchestrator. Tinder recently announced it is moving its infrastructure to Kubernetes. Soon after, Twitter announced its own migration from Mesos to Kubernetes.

While the reasons behind such rapid adoption of Kubernetes have been well documented, security remains one of the biggest concerns for organisations. Ignore container and Kubernetes security and you might find yourself in the headlines for all the wrong reasons: just ask Tesla.

To better understand the trends in container and Kubernetes security and adoption, we conducted a survey of over 200 IT security and operations decision makers in November of 2018. We recently repeated the survey across nearly 400 individuals in security, DevOps, and product teams to gain additional insights into how organisations are adopting container technologies and how their security concerns have evolved.

The results are aligned with the prediction from Gartner that by 2022 more than 75% of global organisations will be running containerised applications in production – a significant increase from fewer than 30% today.

Kubernetes adoption grows by 50% in first half of 2019

Originally built by Google—based on the lessons learned from the Borg and Omega projects—Kubernetes was open-sourced in 2014 as a platform for automating deployment, scaling, and management of containerised applications. Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF) to manage the Kubernetes open-source project.

In an early sign of Kubernetes going mainstream, in 2016 Niantic released the massively popular mobile game Pokémon Go, which was built on Kubernetes and deployed in Google Container Engine. At launch, the game suffered usability issues caused by massive user interest in the US: the number of users logging in ended up being 50x the original estimate, and 10x the worst-case prediction. Thanks to the inherent scalability of Kubernetes, Pokémon Go went on to launch successfully in Japan two weeks later, despite traffic tripling what was experienced during the US launch.

Since then, Kubernetes usage has taken off. In our original survey conducted in November of 2018, 57% of respondents said they were orchestrating their containers with Kubernetes, which was at the time already more than any other orchestrator in the market. When we conducted the survey again in July 2019, the percentage of survey respondents who said they use Kubernetes as their orchestrator grew from 57% to 86% – a 50% increase.

And despite the fact that all the major cloud providers offer managed Kubernetes services, with ease of use as the primary value proposition, a sizeable portion of Kubernetes users opt to self-manage their clusters. Self-managed Kubernetes gives them greater flexibility to port an existing Kubernetes application to another environment running Kubernetes.

Kubernetes and container security concerns increase in lockstep with adoption

Security concerns continue to be one of the primary constraints on using containers and Kubernetes. 2019 saw the discovery of several high-severity container and Kubernetes vulnerabilities, including the runC flaw, a Kubernetes privilege escalation vulnerability, a denial-of-service vulnerability, and several others announced earlier this month as part of a CNCF audit.

Most respondents identify inadequate investment in security as their biggest concern about their company’s container strategy. Moving to a containerised/microservices architecture introduces several new container and Kubernetes security considerations, and existing security tooling isn’t suitable to address them.

Organisations need dedicated security controls purpose-built for containers, Kubernetes and microservices to meet their security and compliance obligations. For example, unlike the traditional waterfall method of application development, modern development methodologies rely on continuous integration and continuous delivery (CI/CD), and security controls must be embedded deeply in the CI/CD pipeline to be effective.
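As a concrete illustration of embedding a control in the pipeline, a build stage can refuse to publish an image that fails a vulnerability scan. The sketch below is a generic example rather than anything drawn from the survey; it assumes a scanner CLI such as Trivy is installed and that IMAGE names the image the pipeline has just built.

```python
import subprocess
import sys

IMAGE = "registry.example.com/myapp:latest"  # hypothetical image name

# Fail this pipeline stage if the scanner reports HIGH or CRITICAL findings.
# Trivy's --exit-code option turns the scan result into the process exit status.
result = subprocess.run(
    ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", IMAGE]
)

if result.returncode != 0:
    print("Image failed the vulnerability gate; blocking the deploy stage.")
    sys.exit(1)

print("Image passed the vulnerability gate.")
```

Because the check runs on every build, a vulnerable base image is caught before it ever reaches a cluster, which is the point of embedding the control in the pipeline rather than bolting it on afterwards.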

Once again, respondents identified runtime as the life cycle phase that organisations are most worried about; however, most organisations understand that runtime failures are a function of missed security best practices during the build and deploy phases. Not surprisingly, more than half (57%) of respondents are more worried about what happens during the build and deploy phases. In other words, users realise they must "shift left" in their application of security best practices to build it right the first time.
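“Shifting left” often begins with checks on the deployment manifests themselves, before anything reaches a cluster. The sketch below is a minimal illustration, not a product feature: it assumes PyYAML is available and a file named deployment.yaml exists, and it flags two common misconfigurations, privileged containers and containers not forced to run as a non-root user.

```python
import yaml  # PyYAML


def check_manifest(path: str) -> list:
    """Return warnings for common pod security misconfigurations."""
    warnings = []
    with open(path) as fh:
        for doc in yaml.safe_load_all(fh):
            if not doc or doc.get("kind") != "Deployment":
                continue
            pod_spec = doc["spec"]["template"]["spec"]
            for container in pod_spec.get("containers", []):
                name = container.get("name", "<unnamed>")
                ctx = container.get("securityContext", {})
                if ctx.get("privileged"):
                    warnings.append(f"{name}: runs privileged")
                if not ctx.get("runAsNonRoot"):
                    warnings.append(f"{name}: runAsNonRoot not set")
    return warnings


if __name__ == "__main__":
    for warning in check_manifest("deployment.yaml"):
        print("WARNING:", warning)
```

Run in the build or deploy stage, a check like this turns a runtime worry into a failed build, which is exactly the trade the survey respondents say they want to make.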

Containers and Kubernetes are running everywhere

One of the interesting findings of the survey report was how diverse container and Kubernetes environments tend to be. While 70% of respondents run at least some of their containers on-premises, 75% of those running on-premises are also running some in the cloud, which means that any workable security solution has to span both environments.

Today, more than half of respondents (53%) are running in hybrid mode compared to 40% at the end of 2018. As a result, the percentage of organisations running containers only on-premises has dropped nearly in half (from 31% to just 17%), while cloud-only deployments have remained steady.

As expected, AWS continues its market dominance in container deployments, followed by Azure. Google comes in third but has gained considerable market share, growing from 18% to 28% over six months.

DevSecOps – not just a catchy term

Traditional security processes can become a barrier when building software using DevOps principles. The increasing complexity of the security threats facing enterprises is leading to DevSecOps playing a crucial role.

Across all operations roles, the allocation of management responsibility by role remained consistent, but the jump in those citing DevSecOps as the responsible operator for container security is significant.

When isolating only those survey respondents who are in a security or compliance role, there is an even larger jump in allocation of responsibility to DevSecOps – 42% of respondents in a security or compliance role view DevSecOps as the right organisation to run container security programs.

Final thoughts

Despite the fact that container security is a significant hurdle, containerisation is not slowing down. The advantages of leveraging containers and Kubernetes, allowing engineers and DevOps teams to move fast, deploy software efficiently and operate at unprecedented scale, are clearly outweighing the anxiety around security concerns.

Organisations are charging ahead with moving their containers to production. The percentage of organisations with more than 50% of their containers running in production environments has increased from 13% to 22% – a growth rate of 70%. In the same six months, those running less than 10% of their containers in production has fallen from 52% to 39%.

Organisations shouldn’t treat security as an afterthought. Unlocking the benefits of cloud-native technologies while maintaining strong security for mission critical application development infrastructure requires protecting the full container life cycle – across build, deploy and runtime phases.


Vodafone and IBM help National Express get on the digital transformation bus


Rene Millman

29 Aug, 2019

National Express has signed an eight-year IT modernisation deal with IBM and Vodafone Business to help the coach company with its hybrid cloud plans.

Under the agreement, the Vodafone Business and IBM venture will modernise National Express’ IT estate by moving to IBM Cloud and implementing a hybrid cloud strategy, building on the existing connectivity services provided by Vodafone Business.

IBM and Vodafone said that this would allow National Express to manage multiple clouds in different locations and from different vendors, as well as letting it scale up and down to support spikes in usage.

There will also be extra security and risk management to protect National Express’s infrastructure.

The agreement covers the provision of cloud and digital services that will underpin National Express’ ‘digital first’ approach: using the latest technologies to raise customer and safety standards, drive efficiencies and grow its business.

The deal will also mean that National Express can start to develop customer-focused innovations, such as personalised passenger experiences, flexible payment options and always-connected vehicles.

The coach firm will also have access to other cloud services and new technologies such as 5G, IoT, edge computing and analytics.

Debbie O’Shea, group chief information officer for National Express said that the partnership enables the company to “move to a cloud environment giving us a future-proofed platform with increased flexibility that will better support our business”.

“It also will provide access to emerging and innovative new technologies,” she added.

Anne Sheehan, business director at Vodafone UK, said that cloud services and connectivity are now “inseparable”.

“We will provide National Express with the holistic solution it requires to drive digital innovation across its business – faster, simpler and at scale,” she said.

Carbon Black execs reveal post-acquisition plans


Adam Shepherd

29 Aug, 2019

Last week, VMware threw the tech industry a curveball when CEO Pat Gelsinger announced that not only would it be acquiring Pivotal, as had been announced the previous week, it was also snapping up security firm Carbon Black. While the deal isn’t expected to close until the end of January next year, the company has devoted a substantial chunk of this year’s VMworld conference to discussing the acquisition, and what it means for the future of both companies.

For Carbon Black CEO Patrick Morley, the acquisition presents a huge opportunity for the company to expand its capabilities, and he sees a number of areas where being part of VMware can help it protect its customers in new ways.

“Pending close, I think there’s a number of opportunities,” he tells Cloud Pro. “The biggest one’s end user computing. Management and security go hand in hand, so end user computing is a huge opportunity for us.”

A substantial part of this is integration with Workspace ONE, VMware’s desktop virtualisation product. It’s one of four key integrations with its existing portfolio that VMware has already identified as priorities once the deal goes through. It makes sense from a customer use-case perspective, and VMware COO Sanjay Poonen pointed out that many Workspace ONE customers are also Carbon Black customers, a fact which supposedly influenced the decision to acquire the company.

While both VMware and Carbon Black executives have indicated that the company intends to keep the Carbon Black brand alive once the deal closes, and there are no immediate plans to shutter any of its services, Gelsinger told Cloud Pro that the goal is eventually to weave Carbon Black’s technology into VMware’s platform rather than offering it via standalone applications.

“The plans are to bring these integrated solutions together,” he says. “You could imagine you’re going to buy your Workspace ONE with Carbon Black. And these just end up being features. The thing is, we don’t want customers to be ‘buying point products for point use cases’ – buy a platform that gives you lots of those benefits.”

“Customers today will have a Tanium agent, right? And they’ll have a McAfee agent, and they’ll have a Qualys agent. They’ll also have a Workspace ONE agent for management. So I’ve got four agents on the client. I have customers, literally, who have 17 agents on every PC. 17 agents. What are you talking about? One was our goal, as we collapse all of those use cases into one underlying agent.”

Don’t wait, integrate

Being owned by VMware will make Carbon Black a de facto part of the Dell Technologies family, which also opens up other avenues for expanding its endpoint protection.

“Obviously, the Dell family is another capability, because Dell increasingly is providing security to its customers, as part of the laptops and other hardware that they’re providing. And so if we can build security right into that, it’s hugely advantageous too,” Morley says. “You will certainly see us work with Dell – again, pending close – to actually give customers the option to be able to put security right onto the machine, if they so choose.”

If Dell’s business laptops come preloaded with a free subscription to Carbon Black’s endpoint detection and response (EDR) service, this could be hugely beneficial for organisations. The more exciting prospect, however, is the potential impact Carbon Black’s technology can have on application security for VMware customers.

“The second piece is the work that’s already been done around app defence, which is actually building security hooks right into vSphere,” Morley explains.

This integration would enable agentless protection of applications running in vSphere, improving both application performance and detection rates. It would be groundbreaking and, if successfully integrated, could radically improve the security of organisations running vSphere.

Elsewhere, VMware is planning to integrate Carbon Black’s technology into its NSX platform to provide more in-depth network security analytics, as well as partnering it with another recent acquisition – Secure State – to address security configuration challenges. However, while the acquisition will allow Carbon Black to expand into new kinds of protection, the company executives are also extremely excited about its potential to supercharge its existing services.

One of the linchpins of Carbon Black’s technology is the collection and analysis of security data from all of the endpoints that are running its agent. At the moment, that consists of 15 million endpoints, but if Carbon Black’s agent is incorporated into vSphere or Workspace ONE, that total significantly increases overnight.

Room for growth

“We’re super excited to be able to leverage the reach that Dell EMC and VMware bring to the equation here. I mean, there’s 70,000 partners that we’re going to be able to tap into,” says Carbon Black’s senior vice president of corporate and business development Tom Barsi. “That’s really where you’re talking about adding a zero to the number of customers we’re touching.”

In addition to improving its protection capabilities, this increase in footprint and telemetry will give more fuel than ever to Carbon Black’s Threat Analysis Unit (TAU), which conducts research into security trends as well as analysing new and emerging threat actors and attack methodologies. This research, Morley promises, will most certainly continue and will in fact likely expand once the company joins VMware.

Carbon Black’s executives seem to be exceedingly positive about the prospect of joining the VMware family, which should come as no surprise. Carbon Black has been a technically-focused company since its inception – Morley notes that the company was founded by a team of actual hackers – and this emphasis on technology and engineering is at the core of its new owner’s values.

“I’m really excited,” Carbon Black CTO Scott Lundgren told Cloud Pro. “As CTO, it’s pretty amazing to have an opportunity to work with a highly technical leadership team. It starts with Pat. As ex-CTO of Intel, he’s got a great reputation, and he deserves it. He’s fantastically technical, so he understands the problem. He knows what it takes to actually address it, he can act with confidence, because he knows what’s going on under the hood. But it isn’t just Pat, the whole team is deeply technical [and has] a lot of expertise in a wide variety of technical fields across the board. It’s really great to see.”

“We’re going to have some work to do, obviously, to scale up but it’s very tractable, as long as you’ve got the right mindset at the top – and Pat has that.”

Alibaba, Google Cloud and Microsoft among inaugural members of cloud security consortium

The Linux Foundation has announced the launch of a new community of tech all-stars focused on advancing trust and security for cloud and edge computing.

The open source community, dubbed the Confidential Computing Consortium (CCC), has 10 initial members: Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent.

“Current approaches in cloud computing address data at rest and in transit but encrypting data in use is considered the third and possibly most challenging step to providing a fully encrypted lifecycle for sensitive data,” the foundation noted in its press materials. “Confidential computing will enable encrypted data to be processed in-memory without exposing it to the rest of the system and reduce exposure for sensitive data and provide greater control and transparency for users.”
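To make the distinction concrete: encrypting data at rest or in transit is routine, but the moment a program needs to compute on that data it must hold the plaintext in ordinary memory, which is the exposure confidential computing targets. The Python sketch below uses the cryptography package purely for illustration; it is not drawn from any of the consortium’s projects.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

# Data at rest or in transit: stored and moved only in encrypted form.
ciphertext = f.encrypt(b"patient-record-12345")

# Data in use: to do anything useful with it, the process must decrypt it
# into ordinary memory, where the OS, the hypervisor or a sufficiently
# privileged co-tenant could in principle read it.
plaintext = f.decrypt(ciphertext)
print(plaintext.decode().split("-")[-1])

# A TEE-based approach keeps this decrypt-and-compute step inside a
# hardware-isolated enclave rather than in general process memory.
```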

Members are encouraged to bring their own projects to the consortium, with Microsoft offering Open Enclave SDK, a framework which allows developers to build trusted execution environment (TEE) applications using a single enclaving abstraction. Intel’s Software Guard Extensions (SGX) SDK aims to help app developers protect select code and data from disclosure or modification at the hardware layer, while Red Hat’s Enarx provides hardware independence for securing applications using TEEs.

This is by no means the only cross-industry collaboration taking place in the cloud space right now. In March, Intel led a group of companies in launching Compute Express Link (CXL), an emerging high-speed interconnect standard aimed at improving data centre performance.

Alibaba, Google and Microsoft are, alongside Intel, members of both initiatives. The three pretenders to the cloud infrastructure throne all made the right noises at the launch.

“We hope the [Open Enclave SDK] can put the tools in even more developers’ hands and accelerate the development and adoption of applications that will improve trust and security across cloud and edge computing,” said Mark Russinovich, Microsoft CTO.

“As the open source community introduces new projects like Asylo and Open Enclave SDK, and hardware vendors introduce new CPU features that change how we think about protecting programs, operating systems, and virtual machines, groups like the CCC will help companies and users understand its benefits and apply these new security capabilities to their needs,” said Royal Hansen, Google vice president for security.

The FAQ section also provides some interesting titbits. Under the question of ‘why does this require a cross-industry effort?’, the CCC responds with the following. “Of the three data states, ‘in use’ has been less addressed because it is arguably the most complicated and difficult. Currently confidential computing solutions are manifesting in different ways in hardware, with different CPU features and capabilities, even from the same vendor.

“A common, cross-industry way of describing the security benefits, risks, and features of confidential computing will help users make better choices for how to protect their workloads in the cloud,” it adds.

One notable absentee from the CCC party is Amazon Web Services (AWS). The launch venue, the Open Source Summit, may be something of a clue. While AWS promotes its open source initiatives through its @AWSOpen Twitter handle, among others, several in the community feel differently about AWS’ relationship with open source players. The launch in January of DocumentDB, a database offering compatible with MongoDB, prompted TechCrunch to run the brazen headline that AWS had ‘[given] open source the middle finger’. Yet, as reported by Business Insider in June, the company is increasingly ‘listening’ to the community.



Equinix ties up with VMware to speed up hybrid cloud deployments


Rene Millman

28 Aug, 2019

Equinix has expanded its partnership with VMware to jointly develop solutions to speed up enterprise hybrid cloud transformations. The partnership will see VMware Cloud on Dell EMC (VCDE) hardware running within Equinix data centres worldwide.

According to an announcement by both companies, enterprises will be able to use hybrid, multicloud infrastructures and network connectivity to address the increasing volume and complexity of their application workload needs.

VMware will support Equinix as a global colocation provider for VMware Cloud on Dell EMC, combining systems and storage in hybrid cloud infrastructures.

With the Equinix Cloud Exchange Fabric (ECX Fabric) interconnection service on Platform Equinix, the colocation firm said enterprises can take advantage of private multicloud connectivity and deploy hybrid cloud infrastructures.

ECX Fabric is an on-demand, SDN-enabled interconnection service that helps meet the digital transformation needs of enterprises today by allowing any business to connect between its own distributed infrastructure and any other company’s distributed infrastructure, including the world’s largest cloud providers, on Platform Equinix.

Users will also gain access to potentially thousands of new global partners they can interconnect with via ECX Fabric.

Pat Gelsinger, CEO of VMware, said that the expanded partnership “will enable our mutual customers to gain the benefit of the Equinix enterprise capabilities and the world-class VMware Cloud on Dell EMC solution”.

Rick Villars, research vice president of Datacenter & Cloud at IDC, said digital businesses require IT transformation and a complete end-to-end workload modernisation plan that enables full automation and continuous optimisation of applications.

“Shifting to an interconnected, hybrid cloud model that enables optimal placement and easy movement of workloads across multiple shared and dedicated cloud environments based on latency, resiliency and data security requirements is the critical first step in this transformation. Solutions like VMware Cloud on Dell EMC, integrated with the Equinix interconnection platform, provide businesses with a well-connected, hybrid cloud-ready foundation to quickly address the increasing complexity and volume of applications in a digital world,” he added.

Why it continues to make sense for IT ops to move to the cloud: A guide

There’s been a lot of movement in the IT operations management (ITOM) business lately, from the acquisition of SignalFx by Splunk to the PagerDuty IPO, and all signs point to a Datadog IPO in the future. What’s with all this consolidation? I believe we’re seeing the rise of a future-state of ITOM; that is to say, it’s the rise of SaaS-based ITOM. And it’s easy to see why.

In my previous consulting career as a lead enterprise systems architect, our team had an impeccable record in designing and implementing well-architected hybrid infrastructure solutions, with near-flawless customer satisfaction. By project sign-off, our job was always done. And yet, returning to the same solutions six months later told a different story entirely.

Well-architected solutions are similar to human bodies: They are perfect when they’re born but need constant care and feeding. These same solutions that satisfied SLAs, exceeded expectations and transformed organizational efficiency can easily degenerate, and just like our bodies have their nervous systems to monitor, brains to send alerts, and tissue to self-heal, well-architected systems need operational maintenance to keep them humming.

The traditional approach to this eternal need was, and still is, to design and implement a well-architected IT Operations Management (ITOM) solution around a well-architected infrastructure. And yet this is a paradox, because the ITOM solution itself needs the same care and feeding.

There’s a problem on-premise

ITOM is a broad term encompassing application and operating system (OS) performance, alerts, log management, notification, asset configuration, incident management and more. It typically involves purchasing a suite of on-premise point tools addressing each need, and then developing an internal framework to help those tools interoperate in a meaningful way. While that is possible conceptually, the facts on the ground reflect a very different reality:

  • Multi-vendor tools are often not designed to work together
  • Creating an internal logical framework that orchestrates various teams and technologies can be very complex in large enterprises
  • It’s near-impossible to create technical integrations flexible enough to accommodate the inevitable organizational and technological changes that will affect this logical framework
  • Predicting cost-of-ownership is nearly impossible since each tool is controlled by a different vendor, and the internal integration effort is often unknown
  • Predicting the cost of the manpower required is also very difficult, as each tool requires its own set of specialists, in addition to integration specialists to make it all work together
  • Upkeep is often overwhelming, as vendors offload software patches, upgrades, and on-premises hardware costs to the customer

In the face of all these challenges, the end result is often unrealized value, overwhelmed operational teams, loss of service, and an inability to accommodate new technologies, resulting in business service disruptions.
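The internal framework described above often boils down to brittle glue code shuttling events between tools. The sketch below is deliberately simplified and entirely hypothetical (the endpoint URLs and payload fields are invented), but it shows the kind of forwarding script that every new tool or API change forces a team to revisit.

```python
import requests

# Hypothetical endpoints; every vendor exposes a different API shape, so
# each integration like this has to be written and maintained separately.
MONITORING_API = "https://monitoring.example.internal/api/alerts?state=open"
TICKETING_API = "https://ticketing.example.internal/api/incidents"


def forward_open_alerts() -> int:
    """Pull open alerts from one tool and raise incidents in another."""
    alerts = requests.get(MONITORING_API, timeout=10).json()
    created = 0
    for alert in alerts:
        payload = {
            "title": alert.get("summary", "unknown alert"),
            "severity": alert.get("severity", "medium"),
            "source": "monitoring",
        }
        requests.post(TICKETING_API, json=payload, timeout=10)
        created += 1
    return created


if __name__ == "__main__":
    print(f"Raised {forward_open_alerts()} incidents")
```

Multiply this by every pair of tools in the suite, and by every schema change a vendor ships, and the unpredictable cost of ownership described above follows naturally.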

Why cloud? Why now?

It’s historically been near-impossible to build a traditional ITOM platform on-premise. Vendors typically sell a collection of white-labeled tools cobbled together through acquisitions, which is far from a platform. The complexity and the rate of technological change make it difficult to provide consistent quality and value across the various product lines. This leaves the IT ops team in a bind: how can they ride the wave of a changing environment without relying on static tool suites?

The future is flexible

Enterprise IT operations is stretched now more than ever: there is a serious skills gap, a shortage in the IT workforce, and ever-increasing technical complexity. Time and resources are precious, and enterprise IT operations needs simplicity and predictability along with flexibility and control.

Enter SaaS ITOM. By moving the ITOM function to a SaaS orientation, the responsibilities, workloads, and daily tasks can transform according to the needs of the organisation:

  • Keeping up with the business: SaaS ITOM can keep up with technological change, and keep pace with cloud, DevOps, artificial intelligence and more. In the world of SaaS, change is an accepted constant and not an inconvenience. What’s more, SaaS ITOM is infinitely more consumable than the tool suites of legacy past, and that reduces the learning curves associated with running IT operations
     
  • Keeping up with industry needs: A SaaS ITOM platform will be able to deliver a framework that’s both flexible and governed, and can accommodate technical and organizational complexities. This agility is a feature of modern SaaS. SaaS ITOM can also integrate features running on a single code base supported completely by the SaaS vendor, who will absorb maintenance and upgrade cycles, freeing considerable and valuable time back to the operator. All of this results in a more predictable total cost of ownership, improved service quality and more value to the business user

It’s not news that the world of IT is moving to the cloud. It is news, however, that cloud can offer such transformational benefits in ways we’ve never seen before.


Google to shut down Hire service from 2020


Keumars Afifi-Sabet

28 Aug, 2019

Google is shutting down its dedicated recruitment tracking service Hire from next year in order to focus resources on other areas of its cloud portfolio.

Aimed primarily at SMBs, Hire serves as a job application tracking app that integrates functions like applicant search and scheduling interviews into the wider G Suite.

Google has decided to discontinue the service from 1 September 2020, however, despite describing the app as “successful” in an official notice.

This announcement has also been made just shy of six months since the HR automation platform fully launched in the UK.

Customers will continue to receive support throughout the duration of existing contracts, and no additional charges will be levied for usage once contracts expire up until the end-of-life date. Contracts can also be terminated without penalty.

Meanwhile, there will be no new features developed for the service and all experimental features that have not been officially launched will be switched off within the next month.

Its features included letting recruiters contact potential hires via Gmail, schedule interviews and induction days through Google Calendar, and track progress through Google Sheets.

The Candidate Discovery function, in which hiring managers can trawl through several information sources to learn about potential recruits, was also considered one of the biggest draws.

Hire was initially released in 2017 following the $380 million acquisition of Bebop, the company founded by former Google Cloud CEO Diane Greene. The app’s closure also comes just a few months after Greene’s departure from Google’s cloud computing arm.

Despite targeting SMBs in the main, Hire is also used by a number of larger companies such as Cloudera and Atom Group.

The app’s closure doesn’t mark Google’s withdrawal from the recruitment tech sector entirely, however, with the company still committed to its Google for Jobs search tool, which is intended to rival the likes of Indeed.

Cloud Pro approached Google for comment but the firm did not respond at the time of writing.