Looking to the ‘HyPE’ of cloud storage: How HPE aims to help with hybrid cloud

Analysis Cloud storage is old hat, right? It’s the simpler part of cloud and is, after all, just storage. So how can cloud storage be more interesting to explore and deliver greater value to the customer?

Having recently received an expert and exclusive briefing from inside HPE in my position as an industry cloud influencer, I believe there is a powerful and relevant story to tell: one that forms a base of HPE’s cloud strategy, and one that those working in cloud should be cognisant of for the opportunity it presents.

We live in a time of cloud, and hybrid cloud is rapidly becoming the norm, if it is not already. As we approach 2020, an average of about a third of companies’ IT budgets is reported as going towards cloud services and, according to Forbes, around 83% of enterprise workloads are expected to be in the cloud by the end of 2020. Of course, these will be spread across varying cloud form factors and offerings encompassing SaaS, PaaS and IaaS, across private, public and hybrid clouds.

Hybrid cloud is where the greatest challenge appears to lie. Whether by accident or by strategy, most firms are unable to align to a single cloud provider and one platform to meet all the needs of their business, users and customers. Much like in days past, when businesses mixed Unix, NetWare, LAN Manager and other operating systems to support the applications the business required, today this mix has become a hybrid cloud environment.

Industry trends validate this, with Gartner reporting that 81% of public cloud users choose between two or more providers, and with Canalys taking this deeper, citing that Amazon, Microsoft and Google combined accounted for 57% of the global cloud computing market in 2018. The average business is running 38% of workloads in public clouds and 41% in private clouds, with hybrid cloud running at a 58% adoption rate, according to RightScale industry figures.

This growth of hybrid is driving an increasing challenge for the CTO/CxO: that of data portability. “How do I maintain resiliency and security across hybrid and multi-cloud environments and get the benefits of cloud with the on-premises values I enjoyed?”… “How do I have storage in the cloud behave in a way I am used to from on-premises?” The desire for consistency of data services, and the ability to move data back out of the cloud if and when wanted, is also a key driver.

We have seen cloud make it easy to spin up and speed forwards, with Agile and DevOps as attractive rewards. However, as customers’ cloud usage has rapidly matured, demands on the platforms and the pressures of mobility and portability have driven greater demands for storage flexibility.

The customer focus when moving applications to the cloud has revolved mostly around the selection of the compute platform and the lift and shift, leaving storage-focused issues to rear their heads later, with many experiencing the later shock of cost and tie-in problems. We have also seen customers’ maturing use of, and demands on, cloud platforms drive innovation in peripheral cloud services, as evidenced here in the area of storage.

So, what do you do when public cloud storage does not meet all your storage needs? Let’s start from the offering of a true commercial utility-based model aligned with a focus on performance and storage needs. HPE is allowing you to abstract your data store from the public cloud in a storage-as-a-service offering that frees you from ties to any single public cloud. Put your data in and then decide which public cloud(s) you want to access the data set from, knowing you can move data in and out as you want to. The key is that the storage becomes decoupled from the compute, a positive step towards true portability across the major public cloud compute offerings.

Imagine combining public cloud compute, with its high compute SLA, with a data storage set carrying a 99.9999% SLA, and having the ability to easily switch compute providers if and when you choose while leaving the data set intact. Moving compute more easily between AWS, Azure and Google Compute is the panacea for many. In fact, in Turbonomic’s 2019 State of Multicloud report, 83% cited expecting workloads to eventually move freely between clouds. We are seeing the first steps here towards that expectation becoming a reality.

The clever move that will prove attractive here is that the commercial offering will deliver one flat, clear billing model across all clouds with no egress charges. Both technically and commercially, HPE Cloud Volumes is setting out to make the complex and critical needs of storage simple and increasingly affordable, flexible and, importantly, portable. Through this, HPE is setting out its stall to be a key cloud transformation partner for business.

HPE is stepping up its game through acquired technologies to service, support and supplement the needs of high-growth public cloud consumption. The offering will not be right for every customer in every public cloud, but for its specific use case it offers a valuable option. As would be expected, the offering is for block and not object storage, but it nonetheless addresses a large segment of the cloud workload storage requirements of most corporate entities.

The promise is portable cloud storage across compute platforms with on-the-fly commercial transparency. This removes the tie-in to any one public cloud offering such as AWS, Azure or Google Compute. You do, of course, tie your storage into HPE Cloud Volumes (although without the egress charges), but by making your storage cloud-agnostic you gain greater flexibility to mix, match and change between the major cloud platforms, something lacking for many today.

Are we going to see the question change from “where is my data?” to “where do you want it to be?” The HPE offering is one of portability and operability, bringing on-premises flexibility, security and portability to cloud storage.

Separating storage from compute workloads is an enabling factor for the flexibility of moving between cloud compute offerings for DR, for testing or simply when a switch is wanted. To deliver a solution without introducing latency, HPE has had to align its locations with those of the mainstream public cloud providers. As would be expected, both Docker and Kubernetes are inherently supported, key to making the offering fit the increasingly open DevOps world of public cloud.

The abstraction of storage is a smart presentation of value from HPE to the exploding cloud market and to customers’ needs for greater flexibility and portability. We should not forget that one of the drivers for cloud adoption is the capability to access data easily from anywhere at any time; indeed, according to a Sysgroup study, “Providing access to data from anywhere is the main reason for cloud adoption.”

We also heard about InfoSight – the hidden gem in the HPE kingdom. In simple terms, this is an offering that uses AI on telemetry data to advise customers of a forthcoming issue, and what to do about it, before it has an impact. Apply this to Cloud Volumes and you have a compounding value: maximising your storage when and where you need it, with maximum reliability and predictability.

Customers are seeking increased data mobility and portability – the panacea promised by cloud solutions – and the ability to move to and from compute offerings from varying vendors quickly and easily. Excitingly, HPE has strategised that by 2020 everything it sells will be available ‘as a service’. Do we see a new ‘HPEaaS’ ahead? This forms a strong foundation for HPE to make a big noise alongside the explosive growth of the public cloud space, and positions a much-needed new offering at the centre of the public cloud battle as it continues.


Here is every major Microsoft Teams update from Ignite 2019


Dale Walker

6 Nov, 2019

Microsoft Ignite has delivered an enormous number of updates across the company’s product portfolio and in particular its collaboration platform, Teams.

To make life a little easier for readers we’ve rounded up the most important changes, some of which are available now, with others being accessible through Microsoft’s preview programme.

Simple sign-in for Microsoft 365

Microsoft has added a bunch of authentication features to its 365 platform that will also filter down to Teams. Most of these are aimed at what the company calls “firstline workers”, defined as those employees who act as the first point of contact for a business, typically retail staff.

Firstly, SMS sign-in is coming to Teams, allowing users to log onto the service with their phone number and an SMS authentication code, effectively removing the need to remember passwords. Likewise, a Global Sign-out feature is also on its way for Android devices that lets workers sign out of all their 365 apps, including Teams, with one click.

Firstline managers will also soon be able to manage user credentials directly. The goal here is to reduce the strain on IT support by allowing managers to, for example, help employees reset passwords without having to turn to support teams.

All of these authentication features are currently in preview but are expected to become generally available before the end of the year.

Content management and Yammer integrations

There are also a bunch of new features designed to make it easier to access and manage files and tasks across its Microsoft 365 portfolio from inside the Teams app.

Aside from a few minor tweaks to permission options, some bigger changes include being able to preview files from 320 different file types, support for relevant People cards and recent files that are now displayed alongside chat windows, and the ability to sync Team files to a PC or Mac.

Customers will also be able to migrate conversations they’ve had in Outlook into the Teams app before the end of this year. In the first quarter of 2020, the company plans to add a new UI for tasks created in Teams, Planner or To Do.

Finally, Yammer, the enterprise communications app that at one time looked like it was going to be scrapped, will be integrated into Teams in a preview state by the end of the year, before rolling out generally in 2020.

The app has received a complete redesign, being the first to be built from the ground up using the Fluent Design System, Microsoft’s latest design language. For Teams, this means the app can be accessed directly from the left rail, although this new version is unavailable until its private preview in December, with general availability at some point in 2020.

Emergency calling

Speaking of communications, US customers are now able to use a feature called Dynamic Emergency Calling, which also provides the current location of the user to emergency services. Currently, the feature supports those on Microsoft’s Calling Plan, although Direct Routing users will also be supported before the end of the year.

There were also a number of smaller announcements for the calling function. Music on Hold is a fairly self-explanatory new feature, offering the option to play music for any caller placed on hold or put into a queue. Location Based Routing allows customers to control the routing of calls between VoIP and PSTN endpoints. And finally, larger organisations with a global footprint can now choose the nearest Microsoft media processor to handle their calls, which should improve overall performance.

Upgrade to Microsoft Teams Rooms

Microsoft Teams Rooms (MTR), the successor to Skype Room Systems, also received a handful of updates, the biggest of which is compatibility with Cisco WebEx and Zoom, allowing Teams users to connect to these services directly. This should be available starting in early 2020, beginning with WebEx.

The second big announcement is the launch of a new managed service called Managed Meeting Rooms. This will provide cloud-based IT management and security monitoring for meetings, as well as support onsite services through a partner network. This service is available now in private preview and is expected to launch generally during spring 2020.

Enhanced security

There have also been a tonne of security and compliance updates for IT admins.

Firstly, Microsoft’s Advanced Threat Protection has been extended to messages within Teams, offering real-time protection against malware and malicious scripts. Similarly, the option to apply policies that block groups from sharing files with each other now also covers files stored within SharePoint.

Admins have also been given more options concerning the length of data policies, with the option to enforce restrictions for a single day, as well as new PowerShell functions designed to make it easier to assign policies across larger Teams groups. It will also soon be possible to manage device usage and deployments through a single management portal.

Private chat and customised conversations

The option to chat privately inside a Team is now generally available to all users, allowing customers to create separate chat windows that can be viewed and accessed by a select few of the team’s members.

The multiwindow feature is also coming to Teams from the first quarter of 2020, allowing users to pop out chats and meetings into a separate window. Users will also be able to pin specific channels to the left rail, allowing them to keep track of most-commonly used conversations.

Finally, the Teams client is also heading to Linux, with a public preview being made available before the end of 2019.

Virtual consultations

A new feature called Virtual Consults is available in private preview, designed to make it easier for organisations that rely on sensitive consultations with their customers, such as healthcare providers or customer service agencies, to arrange calls. The feature brings with it built-in meeting transcription, cloud recording and the option for participants to blur their backgrounds if they are in a location that would otherwise distract from the meeting.

Developer tools

A healthy chunk of upgrades were also reserved for the Teams developer community.

Firstly, Microsoft has made the tools available as part of its Power Platform more accessible in the Teams environment. Power Apps developers are now able to publish their creations as Teams apps directly to the company’s app library, making it easier for users to find them. The whole process of adding apps has also been made easier. As for users, they will eventually be able to pin these custom apps to the left rail, once this comes into play before the end of 2019.

Power BI is also receiving some updates that will translate to Teams next year, including the ability to create interactive cards within Teams conversations for users to engage with. The Power BI tab in Teams will also be given an upgrade, making it easier to select the right reports.

IGEL partners with Citrix and Ingram Micro for simplified access to Azure workspaces


Daniel Todd

6 Nov, 2019

Cloud software provider IGEL has teamed up with Citrix and Ingram Micro to launch a new software bundle that will simplify user access to Azure-delivered cloud workspaces.

The package includes “best-in-breed” products from both IGEL and Citrix that will simplify the delivery of high performance end user computing with “anywhere access” in the cloud, IGEL said.

Ideal for businesses needing to address aging Windows 7 endpoints before support ends on 14 January 2020, the unified solution simplifies the migration of Windows desktops to Azure — allowing them to realise the benefits of Windows 10 without the usual migration pain points.

“With our combined solution, IGEL, Citrix and Ingram Micro are making it easy to streamline end user computing in Azure to power cloud-based Windows Virtual Desktops that will simplify endpoint management, improve security, lower costs and keep workers productive,” said Jed Ayres, president and CEO of IGEL North America.

“And for those who have already moved to Windows 10, they too can easily migrate those desktops to the cloud, with this combined offer virtually eliminating all the headaches associated with managing and maintaining hundreds or thousands of endpoints running full-blown Windows.”

With the new bundle, organisations can leverage public cloud desktop-as-a-service (DaaS) workspaces from the Azure cloud in the form of Windows Virtual Desktops (WVDs).

Businesses will have access to the Citrix Workspace platform for managing and automating activities across all locations and devices, as well as IGEL’s Workspace Edition software which combines the IGEL OS and its Universal Management Suite (UMS) for endpoint control.

The combined solution is available exclusively via distributor Ingram Micro now, delivered as a single offering to further streamline the adoption of WVD DaaS workspaces.

“This new offer from Ingram Micro combines the unique strengths of both Citrix and IGEL to enable organisations to realise the full benefits of Windows 10 without the typical pain of migration,” commented Nabeel Youakim, vice president of Product and Solutions Architecture at Citrix. “In particular, the new Windows 10 multi-session entitlements of Windows Virtual Desktops offer easy access to Windows 10 along with great economy for those looking to move to the Azure cloud.

“With Ingram Micro’s new offer, Citrix and IGEL are playing a key role in making Windows 10 from the cloud the new reality.”

IBM to develop public cloud banking platform


Bobby Hellard

6 Nov, 2019

IBM is working with Bank of America to develop what it claims is the world’s first financial services-ready public cloud.

Named “the financial services-ready public cloud”, the platform, IBM said, could enable independent software vendors and software-as-a-service (SaaS) providers to focus on deploying their core services to financial institutions, with the controls for the platform already put in place.

The aim is to give financial institutions an opportunity to efficiently assess the security, resiliency and compliance of technology vendors, through the platform’s security validation. Only independent software vendors or SaaS providers that demonstrate compliance with the platform’s policies will be eligible to deliver services through it.

The financial services-ready public cloud is expected to run on the tech giant’s public cloud and will be built with Red Hat‘s OpenShift. It will include more than 190 API-driven, cloud-native platform services where users will be able to create applications.

The company said the project has been developed with the aid of financial services experts in IBM’s networks, including some of the largest financial institutions in the world.

According to Bank of America’s CTO Cathy Bessant, it’s one of the most important collaborations in the financial services industry cloud space.

“This industry-first platform will allow Bank of America to use the public cloud, putting data security, resiliency, privacy and customer information safety needs at the forefront of decision making,” said Bessant. “By setting a standard that addresses the concern of hosting highly-confidential information, we aim to drive the public cloud to a safety level that is unmatched.”

How Johnson & Johnson boosted its performance by lifting Teradata to AWS


Lindsay Clark

6 Nov, 2019

Data has become the engine that drives modern business, and collating and analysing that data is a crucial component of many IT departments’ duties. Most turn to Enterprise Data Warehouse (EDW) technologies, which offer platforms that allow business to centralise their data for easier analysis and processing.

Teradata is among the most well-known EDW platforms on the market, having spent the last 40 years building its reputation providing on-premise EDW hardware and software for customers including General Motors, P&G, eBay and Boeing. It has now transitioned to a cloud-first model and is now available on all three major public cloud providers, following the addition of Google Cloud Platform support on 22 October 2019.

Back in 2017, however, the company’s cloud credentials were not so well established. That’s why, when healthcare and pharmaceuticals giant Johnson & Johnson (J&J) decided to move its data stores to a Teradata-powered cloud infrastructure, the plan was met with surprise and scepticism. In the years leading up to the project, J&J’s senior manager for data and analytics Irfan Siddiqui says, the company became aware its on-premise platform would not support its burgeoning data analytics requirements at an affordable price for very much longer.

“We [had] been experiencing some challenges and thinking about how we transform the traditional data warehouse into a more modern service, particularly around the flexibility, scalability and cost, and we were searching for a solution,” he told a Teradata conference in Denver, Colorado earlier this year.

And so, in 2017 it started to look at migrating its enterprise data warehouse (EDW) system to the cloud, eventually landing on Teradata as the most promising solution provider for its problems.

At that time, the offer of Teradata on AWS was not widely considered mature enough for an enterprise environment, Siddiqui tells Cloud Pro.

Six lessons from Johnson & Johnson’s EDW cloud migration

Identify all the stakeholders involved and begin discussions to identify potential challenges

Start with a small proof of concept to test all aspects of the potential solution

Understand as early as possible the network bandwidth and latency between your on-premise and cloud solutions

Expect some things to go wrong the first time you try them

Engage a strong project manager, who is good with timelines and risk, to be the single point of contact for communicating progress

Practise processes over and over again, including failure scenarios

“When Teradata released its first machine on AWS, and I said I wanted to do a proof of concept for Teradata in the cloud, people who knew Teradata, their first reaction was, ‘What? Why? Really?’.”

However, the commitment from Teradata to show its systems could work in the cloud was so strong that Siddiqui found the confidence to go ahead with a proof of concept. Initial trials showed promise.

The 80-terabyte a-ha moment

“Most of us know doing a capacity expansion or migration to new hardware takes in the order of six months but [with AWS] we were able to spin up a formal system with 80TB of data in just 20 minutes. That was one of the ‘a-ha moments’ for us which became the driving force for us to take another step,” he says.

J&J set itself five goals in lifting Teradata to the cloud, Siddiqui says: to migrate three data environments and all its applications by the halfway point of 2019; to offer the same or improved performance compared with the on-premise system; and to increase flexibility and scalability while reducing cost.

This posed a sizeable challenge for Siddiqui’s team, which aimed to move about 300TB of storage, 50 business applications and 2,500 analytics users onto a system capable of handling more than 200 million queries per month.

It also raised some significant questions.

“How are our applications going to perform? How do we migrate? What happens with downtime, and stability and security?” he says. “We had to address these questions, not just for our leadership team, but all the stakeholders across J&J. We had to show how it would benefit each one of us.”

Most applications stay on-prem

Although all the data warehouse workloads would be in the cloud, most of the related analytics applications and data visualisation tools, including Qlik, Talend, Informatica, and Tibco, remained on-premise.

Some applications were split between the cloud and on-premise servers. For example, J&J wanted to spin up application development environments in the cloud when they were required and only pay when using them. “That is the flexibility we did not have [with] our own servers,” Siddiqui says.

Given the migration had to follow an upgrade to the data warehouse production environment, deadlines became tight. The team worked for three months more or less continuously. But by the end of June of 2019, it was able to decommission the on-premise data warehouse hardware systems.

The hard work has paid off for Siddiqui and his team. Extract-transform-load jobs now take half the time compared to the on-premise system. Large Tableau workload performance has improved by 60% and another application’s data loading was cut from more than three hours to 50 minutes.

Beware the desktop data hoarders

Claudia Imhoff, industry analyst and president of Intelligence Solutions, says it makes sense to put enterprise data warehousing in the cloud in terms of scalability and performance, but there are caveats.

“It’s a wonderful place if you have all the data in there. But, unless you’re a greenfield company, nobody has all of their data in the cloud. Even if most operational systems are in the cloud, there are so many little spreadsheets that are worth gold to the company, and they’re on somebody’s desktop,” she says.

“There are arguments for bringing the data into the cloud. It is this amorphous thing, and you don’t even know where the data is being stored. And you don’t care, as long as you get access to it. Some of it’s in Azure, some of it’s in AWS, and some of it is in fill-in-the-blank cloud. And, by the way, some of it is still on-premise. Can you bring the data together virtually and analyse it? Good luck with that,” she adds.

To succeed in getting data warehousing and analytics into the cloud, IT must convince those hoarding data on desktop systems that it is in their interest to share their data. The cloud has to do something for them, she says.

Despite the challenges, enterprise IT managers can expect to see more data warehouse deployments in the cloud. In April, IDC found the market for analytics tools and EDW software hosted on the public cloud would grow by 32% annually to represent more than 44% of the total market in 2022. These organisations will have plenty to learn from J&J’s data warehouse journey.

Meet Azure Arc, a Microsoft platform for those that want a bit of everything


Dale Walker

5 Nov, 2019

Microsoft kicked off its Ignite conference this week with the reveal of its Azure Arc platform, a set of tools designed to simplify the management of deployments across multiple clouds, on-premises, and edge.

The platform also allows for Azure services and management tools to be expanded to new infrastructures, including Linux and Windows Server, as well as all Kubernetes clusters spread across multiple cloud types.

The idea is to create a single, centralised hub so that users can apply existing tools, such as Azure Resource Manager, Azure Shell, Azure Portal, Azure API, as well as its policy and security protocols, across all deployments. This effectively allows customers to run Azure services irrespective of where the deployment resides.

The move symbolises Microsoft’s effort to accommodate customers that are reluctant, or unable, to move entirely to the cloud by allowing them to be more flexible in how they use Azure tools. It also extends many of the benefits of cloud to those parts of a business still reliant on on-premises infrastructure or private data centres, which can now be plugged into Azure.

Specifically, the Azure Portal tool within Arc will give customers a unified view of all Azure data services running across all on-premises and cloud deployments, and the Azure Kubernetes Service can be used to spin up new clusters if they run out of on-premise capacity, the company claims.

What will perhaps matter most is that this expanded availability will also allow customers to make use of all the security and compliance tools integrated into Azure, including controlled access and company policy enforcement.

“With Azure Arc, and with it, the arrival of multi-cloud management in Azure, we are now seeing perhaps the biggest shift yet in Azure’s strategic evolution,” argues Nick McQuire, VP and head of Enterprise and Artificial Intelligence Research at CCS. “It means that Microsoft is becoming more attentive to customer needs, but it is also an indication that battlelines of competition in cloud are shifting towards managing the control plane.”

“In embracing hybrid multi-cloud, Azure Arc also validates the big investments made by key competitors over the past 12-18 months, most notably IBM’s acquisition of Red Hat and Google Cloud’s Anthos. The stage is set for AWS to follow suit next month at re:Invent.”

Microsoft just redesigned its Edge browser to be an essential business tool


Dale Walker

5 Nov, 2019

Microsoft has released a major update for its Edge browser, which introduces a new logo and a host of business-focused features designed to fuse together intranet and internet search.

It’s arguably the first major change of direction for Edge since it migrated to the Chromium source code late last year, and is a clear attempt to reassert itself as a relevant browser.

Currently in preview, the new update introduces the ability to access company intranet directories from within the Edge browser. For example, entering the name ‘Sofia’ into the Edge search bar will bring up the details of the colleagues that the user is most likely to be searching for, based on previous interactions or similar projects.

Another example Microsoft gave was an employee searching for how many days they are allowed to take off for jury duty, with the top result being the company’s own policy taken directly from the organisation’s intranet.

Given that the new version of Edge is built on the Chromium source code, it’s unsurprising to hear that Edge now has performance parity with Google’s Chrome and is a perfect match in terms of website compatibility. However, the company explained that it was keen to innovate beyond that to remain competitive.

“We see a unique opportunity to bridge the tradeoffs of today’s web search with more complete solutions that Microsoft can uniquely address,” explained Yusef Mehdi, corporate VP of Microsoft’s Modern Life & Devices division. “The irony is that it is easier to find an obscure piece of information on the much larger internet, than it is to find a simple document on your company’s intranet such as a paystub portal, a pet at work policy, or the office location of a fellow employee.”

Employees will be able to use natural language search to find colleague titles, team names, office locations, floor plans, definitions for company acronyms, and a wide set of internal company information, Microsoft explained.

“As company information continues to expand to terabytes, petabytes and zettabytes of information, this will only get more complex,” added Mehdi. “We will unite the internet with your intranet with Microsoft Search in Bing so that you can increasingly access more of your important data in a single browse and search experience.”

Drag and drop search

Another feature, known as Collections, allows workers, such as those involved in procurement, to drag and drop items from search results into a list that can be shared to others, complete with all the appropriate images and metadata for those items. It’s also possible to export this list into Excel, which will automatically input the metadata into a spreadsheet.

A product demo also revealed that each user will be given a personalised homepage, largely influenced by their account being logged into Azure Active Directory. This meant that links and data from the company’s intranet could be displayed in place of trending news stories.

InPrivate and baseline cookie blocking

Alongside the business update, the company was also keen to showcase its new privacy protections, including default anti-tracking filters, which it has been working on since June, and an incognito mode dubbed InPrivate, which the company claims offers the most effective protection on the market.

“We’re taking a new, more protective stance to help you on the web. ‘Balanced’, which is what we do by default… gives you more protections than any other browser. If you really want to have your data and privacy secure, you’re going to want to [go] with Microsoft Edge.”

These options can be tweaked in the Edge settings. By default, Edge will block all trackers originating from sites you haven’t actually visited, but a ‘Strict’ option is also available, which will block the “majority of trackers from all sites”, and potentially break some website functions that rely on cookies.

The second feature, the InPrivate mode, is pitched as being a more robust version of Chrome’s Incognito. Mehdi explained that Chrome’s version “keeps your browsing safe and private, but what you don’t know is that you can be accidentally logged in on Gmail and your search is not private”, referencing this story from earlier in the year.

“When you navigate to a page (in Edge), we will actually prevent you from being accidentally logged in, and all those searches are kept on the machine, they don’t go back to the server.”

Release schedule

A handful of smaller announcements also accompanied the new Edge launch, including an expansion to the App Assure program to cover the new browser, as well as an expansion of the FastTrack deployment program to rollout Edge in Q1 of 2020.

The release candidate for the new Edge browser is available to download now for both Windows and macOS, with the company aiming for general availability by 15th January.

VMware announces Carbon Black partnership with Dell


Bobby Hellard

5 Nov, 2019

VMware has made a slew of announcements at its annual European conference, starting with a partnership that pairs Carbon Black’s cloud-based security with Dell’s PCs and hardware.

The Dell-owned company said it was expanding its enterprise endpoint security portfolio to include Carbon Black Cloud to make organisations more resilient against advanced cyber attacks.

The announcements were made as part of the company’s vision of “intrinsic security”, which is about making security more automated, proactive and pervasive across the entire distributed enterprise.

Rahul Tikoo, Dell’s senior VP of Commercial Client, said that cyber criminals are constantly pushing the limits with difficult-to-discover attack vectors, especially those targeting endpoint devices.

“We have to take a multi-layered approach to security,” he said. “With the addition of VMware Carbon Black Cloud as the preferred endpoint security solution for Dell Trusted Devices and Secureworks, our customers can be more secure while doing their best work.”

The company called it a “unique combination of threat prevention”. It said that detection and response functions from Secureworks use AI and machine learning to proactively detect and block endpoint attacks, while security experts can hunt for threats across the endpoint, network and cloud.

“As we continue to build on VMware’s vision for intrinsic security, it’s clear that we are all stronger when we combine the right people and the right technology,” said Patrick Morley, general manager of the Security Business Unit at VMware. “Dell’s selection of VMware Carbon Black Cloud as its preferred endpoint security, in combination with Dell Trusted Devices and Secureworks, serves as continued validation that we are providing a comprehensive form of endpoint protection. We now have the opportunity to work together and further expand our collective ability to keep worldwide customers protected from advanced cyberattacks.”

Along with Carbon Black, there were also updates to the recently unveiled VMware Tanzu portfolio of products and services. These were aimed at transforming how enterprises build, run and manage software on Kubernetes.

Updates included the rollout of a beta program for Project Pacific, as well as the debut of a new VMware Cloud Native Master Services Competency that helps customers build Kubernetes-based platforms.

There were also previews of two brand new offerings, Project Path and Project Maestro. Project Path is designed to help cloud providers and managed service providers adopt new business models and bring new value, revenue and improved margins to their cloud businesses.

Project Maestro, meanwhile, promises a cloud-first service that delivers a unified approach to modelling and managing virtual network functions and services.

Why businesses fail to maximise the value of data visualisation

Data visualisation has become one of the hottest tools in data-driven business management over the past few years. As business intelligence software becomes a more central part of companies’ toolkits and data practices, visualisations have improved, becoming more precise and versatile.

Even so, not every case of a business implementing BI software and data visualisation is a success. Although they are meant to streamline data analysis and comprehension, they can sometimes produce the opposite effect.

A recent survey by Ascend2 revealed that despite their best intentions, many companies fumble their data visualisation implementations and end up doing more harm than good. While this has not necessarily affected the popularity of BI and data visualisation, it does raise some interesting questions about what companies can do right.

The survey shows that while many have had success with their data visualisation and data dashboard strategies, a majority have only been somewhat successful, or worse, unsuccessful. 

Regardless, dashboards and visualisation confer significant benefits for organisations, so they are not likely to go anywhere.

Why some visualisations are less successful

The survey responses indicate that while data dashboards are still being used and developed, the number of companies that are experiencing strong success with them has dropped. When asked about the overall effectiveness of their data dashboard strategies, only 43% of those surveyed described it as very successful. Meanwhile, 54% called it somewhat successful, while 3% were unsuccessful in deploying data visualisations and dashboards.

One of the biggest challenges is that fewer respondents believed they had consistent access to the data they required. A major benefit of dashboards is that they provide only the data that is relevant to each user and exhibits it in an easily digestible manner. However, dashboard design can sometimes go awry and become either too cluttered or too sparse, obscuring important information in the process.

Indeed, the number of respondents who claimed they frequently or always had the right data to make business decisions fell from 44% in 2017 to 43% in 2018.


Nevertheless, it does appear that visualisations and dashboards are gaining popularity. The survey found that a total of 84% of respondents planned to increase their overall budgets for data dashboards and visualisations to some extent, although most only plan on increasing it moderately.

This is because despite the challenge of successfully implementing a data visualisation strategy, visual language has been proven to improve productivity and efficiency in the workplace.

Why companies will keep investing in visualisations

One big reason many companies undergo less-than-optimal implementations is that they do not have an effective answer to the question, “What is data visualisation?” For many, the definition is as simple as charts made from spreadsheets and basic diagrams. However, today’s business intelligence tools offer a significant variety of visuals that can make almost any data easier to comprehend and actionable.

A report by the American Management Association (AMA) found that visualisation tends to improve several aspects of companies’ decision making. According to the AMA, 64% of participants made decisions faster when using a visualisation tool, while another study found that visual language can shorten work meetings by up to 24%.

More importantly, the AMA report cites additional third-party studies demonstrating that visual language helps problem solving, improving efficiency by 19% while overall producing 22% higher results in 13% less time.

With that in mind, however, the report by Ascend2 may be cause for concern, or at least a call to action, for many companies employing data dashboards. The importance of design and precision cannot be overstated when planning a data visualisation strategy.

In some cases, a focus on a specific type of visualisation can misrepresent data or make it harder to understand. Other times, a strong focus on one type of data—such as structured data—can exclude up to 80% of a company’s full data stream.

Having a clear deployment strategy that reflects an organisation’s specific needs and objectives can also make the process easier. The Ascend2 study found that focusing on the objectives that are more important, rather than on those that are more challenging but less critical, can also help organisations increase their success with data dashboards and visualisations.

Coursing the right plot

Data visualisation will continue to be a central part of organisations’ data practices. The improvements it offers for decision-making, consensus, problem-solving and more make it a key part of business success. Still, companies should focus their efforts on building data visualisation strategies and data dashboards that give their teams the information they need, and deliver it consistently.

Editor's note: This article was written in association with StudioWorks.

A guide to enterprise cloud cost management – understanding and reducing costs

For the enterprise, managing cloud costs has become a huge problem. Public cloud continues to grow in popularity, and top providers such as Amazon Web Services, Microsoft Azure and Google offer competitive prices to attract enterprises. But your search to save money shouldn't stop there. There are many factors – some of which IT teams initially overlook – that can increase a public cloud bill. Fortunately, organisations can avoid any unwanted billing surprises with a smart cloud cost management strategy.

Enterprises progressing through their cloud adoption need to ensure that they have cost management strategies in place to control their spend as they continue to migrate services to cloud providers. Let’s examine some cloud cost management strategies that you can use to reduce your cloud costs immediately.

The challenges of managing cloud costs

Cloud infrastructure offers many benefits for organisations but it also presents a variety of challenges. The benefits are easily seen – scalability, control, security etc. – but it's also important to understand how moving to the cloud impacts your organisation. A major factor that contributes to the challenge of cloud cost management is the difficulty that organisations have in tracking and forecasting usage. Unpredictable budget costs can be one of the biggest cloud management pain points.

The ability to scale up and down on demand has allowed resource procurement to transition from the sole ownership of the finance or procurement team to stakeholders across IT, DevOps and other functions. This democratisation of procurement has created an ever-growing group of cost-conscious stakeholders who are now responsible for understanding, managing and optimising costs.

Before you move your infrastructure to the cloud, it is important to evaluate how much the public cloud will cost. Like any IT service, the public cloud can introduce unexpected charges.

The first step of a cloud cost management strategy is to look at the public cloud providers' billing models. Take note of how much storage, CPU and memory your applications require, and which cloud instances would meet those requirements. Then, estimate how much those applications will cost in the cloud. Compare your estimates to how much it currently costs to run those apps on premises. Some workloads are more cost-effective when in-house due to data location and other factors.
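To make that comparison concrete, here is a rough Python sketch of the kind of back-of-the-envelope estimate involved. The instance counts, hourly rates, storage price and on-premises figure are purely illustrative assumptions, not real provider pricing; swap in numbers from your provider's price list and your own cost model.

```python
# Rough monthly cost comparison: cloud estimate vs. current on-premises spend.
# All rates, counts and the on-prem figure below are illustrative assumptions,
# not real provider pricing.

HOURS_PER_MONTH = 730  # average hours in a month

# Hypothetical workloads: (instance count, hourly compute rate in USD, storage in GB)
workloads = {
    "web-frontend": (4, 0.096, 200),
    "reporting-db": (2, 0.192, 2000),
    "batch-jobs":   (6, 0.048, 500),
}

STORAGE_RATE_PER_GB_MONTH = 0.10   # assumed block storage price
CURRENT_ON_PREM_MONTHLY = 4200.00  # assumed fully loaded on-prem cost

def monthly_cloud_estimate(workloads: dict) -> float:
    """Sum estimated compute and storage cost per workload for one month."""
    total = 0.0
    for name, (count, hourly_rate, storage_gb) in workloads.items():
        compute = count * hourly_rate * HOURS_PER_MONTH
        storage = storage_gb * STORAGE_RATE_PER_GB_MONTH
        print(f"{name:>14}: compute ${compute:,.2f}  storage ${storage:,.2f}")
        total += compute + storage
    return total

if __name__ == "__main__":
    estimate = monthly_cloud_estimate(workloads)
    print(f"Estimated cloud bill:  ${estimate:,.2f}/month")
    print(f"Current on-prem spend: ${CURRENT_ON_PREM_MONTHLY:,.2f}/month")
```

Even a first pass like this makes it obvious which workloads drive the bill and which may be cheaper left in-house.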

When using multiple public cloud providers, integration and other factors can lead to unexpected fees. Think ahead and plan application deployments to see where you might incur additional costs. Also, look at your cloud bill and see what you are charged for access, CPU and storage. The ability to track spending across more than one cloud is invaluable.

Before you commit to a cloud vendor, you have to understand your business requirements and examine what a certain vendor is offering. At first glance, most vendors have similar packages and prices, but when you examine them in detail, you might discover, for example, that one vendor has a dramatically lower price for certain types of workloads.

Organisations should also avoid vendor lock-in. Moving workloads from one cloud vendor to another can sometimes be difficult. Organisations sometimes end up paying higher prices than necessary because they didn't do their homework upfront and it is subsequently too difficult to migrate applications or workloads after they are in production.

Key areas where you can cut your cloud costs

To reduce your cloud costs, you must first identify waste by uncovering inefficient use of cloud resources. Cloud cost management is not a one-and-done process, but you can immediately start saving money on your cloud infrastructure costs if you address key areas that account for the majority of wasted cloud spend and budget overruns.

Ensure teams have the direct ability to see what they are spending. It’s easy to get carried away spinning up services if you don't know exactly what you are already spending. Identify what you have, and who owns it. Tag resources with user ownership, cost centre information and creation time to give you a better handle on where the spend originates. This information can then be used to track usage through detailed billing reports.
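As a minimal sketch of that tagging step, assuming an AWS environment with boto3 installed and credentials configured, the snippet below applies owner, cost-centre and creation-time tags to a couple of EC2 instances. The instance IDs and tag values are placeholders for illustration.

```python
# Minimal sketch: tag EC2 instances with ownership and cost-centre metadata
# so spend can be attributed in detailed billing reports.
# Assumes an AWS account, boto3 installed and credentials/region configured.
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2")

# Placeholder instance IDs and tag values for illustration.
instance_ids = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

ec2.create_tags(
    Resources=instance_ids,
    Tags=[
        {"Key": "Owner", "Value": "data-platform-team"},
        {"Key": "CostCentre", "Value": "CC-1234"},
        {"Key": "CreatedTime", "Value": datetime.now(timezone.utc).isoformat()},
    ],
)
print(f"Tagged {len(instance_ids)} instances")
```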

Once you have a handle on what your spend is, set budgets per account. Doing this after establishing a baseline ensures that you are setting practical and realistic budgets based on actual usage. Look to whitelist instance types (for RDS and EC2) so that only instances of specific types (e.g. t2.medium), classes (e.g. t2-*) or sizes (e.g. *-micro, *-small, *-medium) are allowed.
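One way to enforce such a whitelist on AWS is an IAM deny policy on ec2:RunInstances conditioned on the instance type. The sketch below builds that policy document in Python; the allowed patterns are illustrative assumptions, so verify the condition key and patterns against your own account before attaching anything.

```python
# Sketch of an IAM-style policy that denies launching EC2 instances whose
# type is not on an approved whitelist. The allowed patterns below are
# illustrative; adapt them (and verify the condition key) for your account.
import json

ALLOWED_INSTANCE_TYPES = ["t2.*", "t3.micro", "t3.small", "t3.medium"]

whitelist_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonWhitelistedInstanceTypes",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotLike": {"ec2:InstanceType": ALLOWED_INSTANCE_TYPES}
            },
        }
    ],
}

# The JSON printed below could then be attached via IAM (console, CLI or
# boto3's iam.create_policy) to the roles your teams use to launch instances.
print(json.dumps(whitelist_policy, indent=2))
```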

Prevent staff from provisioning unapproved virtual instances from the marketplace that include software licence costs, or from using specific OS or database engines from vendors with whom you do not have enterprise agreements in place or which are too costly to run at scale.

Review the regions in which you have services running. The cost of services per region can vary by as much as 60%, so you need to balance the need to run services in a given region against the cost of doing so.

You can also use instance scheduling to start and stop instances on a planned schedule. Shutting down environments on nights and weekends can help save you around 70% of runtime costs. Look to determine which environments need 24×7 availability, and schedule the rest.
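As a hedged illustration of the scheduling idea, the Python sketch below stops any running EC2 instances carrying an assumed Schedule=office-hours tag whenever it runs outside an assumed 08:00-19:00, Monday-to-Friday window. In practice something like this would be triggered from a scheduled job (cron, a Lambda function and so on) rather than run by hand, and the tag name and hours would follow your own convention.

```python
# Minimal sketch: stop EC2 instances tagged Schedule=office-hours outside
# working hours (nights and weekends). Assumes AWS credentials/region are
# configured and that the tag name/value match your own convention.
from datetime import datetime

import boto3

ec2 = boto3.client("ec2")

def outside_office_hours(now: datetime) -> bool:
    # Office hours assumed to be 08:00-19:00, Monday to Friday.
    return now.weekday() >= 5 or not (8 <= now.hour < 19)

def stop_office_hours_instances() -> None:
    if not outside_office_hours(datetime.now()):
        return
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"] for r in reservations for inst in r["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
        print(f"Stopped {len(instance_ids)} instances: {instance_ids}")

if __name__ == "__main__":
    stop_office_hours_instances()
```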

Manage your storage lifecycle by ensuring that you rotate logs and snapshots regularly, and back up and remove any storage volumes that are no longer in use. Ensure that you are using only one CloudTrail configuration, adding additional ones only when absolutely necessary. Also, ensure that sandbox or trial accounts are used only for exploration purposes and only for the duration committed.
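That housekeeping can be scripted too. The sketch below, again assuming AWS and boto3, simply reports unattached EBS volumes and snapshots older than an assumed 90-day retention window so they can be reviewed before being backed up and removed; it deliberately does not delete anything.

```python
# Sketch: report unattached EBS volumes and snapshots older than a retention
# window so they can be reviewed (and then backed up/removed) manually.
# The 90-day retention figure is an illustrative assumption.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
RETENTION = timedelta(days=90)
cutoff = datetime.now(timezone.utc) - RETENTION

# Volumes in the 'available' state are not attached to any instance.
unattached = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in unattached:
    print(f"Unattached volume {vol['VolumeId']} ({vol['Size']} GiB)")

# Snapshots owned by this account that are older than the retention window.
snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
for snap in snapshots:
    if snap["StartTime"] < cutoff:
        print(f"Old snapshot {snap['SnapshotId']} from {snap['StartTime']:%Y-%m-%d}")
```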

Another technological solution that can help to reduce operating expenses is the use of containers. Often used by IT teams taking DevOps approaches, containers package applications together with all their dependencies, making them easier to deploy, manage and/or migrate from one environment to another.

Last, but not least, consider using a cloud cost management vendor. Many organisations decide that tackling these cost optimisation chores on their own takes too much time and skill; instead, they leverage services from a reputable cloud cost management vendor. Cloud cost management is one of the major pain points organisations face when migrating to the cloud, and cloud costs can be difficult to estimate due to the complexity of cloud infrastructure.
