VMware announces data centre-as-a-service offering


Adam Shepherd

30 Apr, 2019

VMware has formally announced its data centre-as-a-service offering, allowing customers to deploy on-premise infrastructure on a subscription basis, completely managed by VMware.

Previously teased at VMworld last year under the codename ‘Project Dimension’, the service was unveiled at Dell Technologies World in Las Vegas, and is officially known as VMware Cloud on Dell EMC. The offering will consist of pre-packaged bundles of VMware’s vSphere, vSAN and NSX management products, running on three or more of Dell EMC’s VxRail hyperconverged infrastructure nodes, along with two switches and SD-WAN appliances, as well as an uninterruptible power supply.

The product offers a cloud-like experience, not just in terms of pricing, but also with regard to deployment and management. Customers can order new infrastructure capacity via an API call or via the VMware Cloud management console, and Dell EMC will construct, deliver, install and configure it. Not only that, but VMware will fully manage the infrastructure from that point on, monitoring for performance issues, applying patches and upgrades on the customer’s behalf and automatically deploying a Dell EMC engineer in the event of hardware problems.

Customers will pay a single monthly price based on the number of hosts per rack, with no added charges for support or services. Software and hardware fees are also included in the cost.

“You have the agility with the hands-off simplicity of the public cloud, while retaining that predictable and controlled environment of your own data center, delivered as a fully managed solution by Dell Technologies,” VMware CEO Pat Gelsinger explained in a keynote speech. “Also, we are responsible for doing software patching, upgrading and lifecycle management. We take care of that, as well as the hardware service upgrade and firmware management, as well. And it’s fully bought as a purchase subscription, just like you would a cloud service. It’s just your on premise hardware, your environment, in your datacenter or branch.”

VMware Cloud on Dell EMC falls under the umbrella of the newly-launched Dell Technologies Cloud brand, a subset of the company’s portfolio that aims to use VMware as a consistent management and infrastructure layer for hybrid and multi-cloud deployments. VMware Cloud Foundation on VxRail – which was announced at VMworld last year and started shipping last month – also comes under this banner. This, by contrast, uses a more traditional VMware deployment model, and is managed by the customer.

“In going for the future there will be additional routes,” said Tom O’Reilly, Dell EMC’s EMEA field CTO for cloud, converged and hyperconverged infrastructure, “so we’ll have not just VxRail but we’ll have converged infrastructure that can be delivered on premise, we’ll have bundles and ready solutions that can deliver it, if you don’t want to go with a converged or hyperconverged experience. So we’ll have multiple routes to deliver Dell Technologies Cloud, but the experience, the software layer, the operational and management experience, will be consistent across all these.”

Dell has been a major proponent of the multi-cloud model, so unsurprisingly, VMware Cloud on Dell EMC also allows customers to manage their VMware workloads in the public cloud, migrating them between public and private as necessary. This also includes Azure workloads, following the announcement that Microsoft was introducing native support for the full range of VMware capabilities on Azure.

This new service dovetails with Dell’s new Unified Workspace service, which offers a comprehensive endpoint management suite for customers, spanning the initial ordering and configuration process all the way up to patching and ongoing support.

VMware Cloud on Dell EMC is currently available in beta, with full US availability scheduled for the second half of this year. While no timeline has been given for European availability, O’Reilly told Cloud Pro that the region will be next in line after North America.

“The way Dell EMC divides the world is into different tiers of countries,” he said, “and Northern Europe is tier one. So, this will be the first to get it outside of the US.”

Google’s cloudy head count and revenues remain on the up – but specifics are still a while off

Google is still not quite ready to divulge specifics around its cloud – yet the numbers continue to rise, whether it is from revenues or employees.

Alphabet’s Q1 earnings, published yesterday, saw total revenues of $36.3 billion (£27.8bn) for the most recent three months, an uptick of 16% on the previous year. Google’s other revenues, of which Google Cloud forms a part, shifted up 25% to $5.4bn (£4.19bn).

Naturally, the biggest dent in Alphabet’s figures was the €1.49bn (£1.28bn) fine levied by the European Commission in March for breaching online advertising antitrust rules. With the fine included, operating income for the quarter fell to $6.6bn from $8.3bn. More importantly for investors, it ensured that earnings per share (EPS) fell from $11.90 to $9.50 – well below Wall Street’s expectations of $10.58.

Looking at the cloud side, CFO Ruth Porat told analysts that the biggest increase was in R&D expenses ‘with headcount growth in cloud as the largest driver’. “In terms of product areas, the most sizeable headcount increases were in cloud for both technical and sales roles,” Porat added.

This was also the first quarter in which Google Cloud CEO Thomas Kurian had his feet fully under the table. At Next in San Francisco earlier this month, a variety of announcements were made, from cloud services platform Anthos to an open source partner jamboree featuring Confluent, MongoDB, Redis Labs, and more.

“Thomas [Kurian] has really hit the ground running,” Google CEO Sundar Pichai told analysts. “I was excited to announce Anthos, which gives customers a very elegant solution to both hybrid cloud and multi-cloud in a single technology stack. We are also deeply committed to becoming the most customer-centric cloud provider for enterprise customers and making it easier for companies to do business with us.”

Many of the same stats from Next were referenced here again – nine in 10 of the biggest media companies, seven of the 10 largest retailers, and more than half of the 10 largest companies in manufacturing and financial services are using Google’s cloud.

Yet what about the specifics? Amazon Web Services (AWS) disclosed revenues of $7.4bn in its most recent filings. Why couldn’t Google offer something similar?

Heather Bellini, analyst at Goldman Sachs, had the opportunity to directly ask the top brass the question much of the media had been fixated on for some time. Despite the momentum, Bellini asked, when will Google be able to share similar updates and growth rates to its biggest competitors?

“[At] the high level, the key differentiators which we are focused on and which we hear from customers are security and reliability, being really open about hybrid multi-cloud – customers don’t want to be locked into any one cloud provider,” said Pichai. “I think we are building a strong business across all our verticals, and we are definitely seeing a strong momentum, and look forward to being able to share more at the appropriate time.”

This committed non-committal was of course to be expected, but it chimed with what analysts had previously told this publication. Speaking to CloudTech in February, following Google’s Q418 results, Paul Miller, senior analyst at Forrester Research, explained there was a wider picture to look at.

“All of the major players carve their portfolio up in different ways, and all of them have different strengths and weaknesses,” Miller said at the time. “Make it too easy to pick out G Suite’s revenue and it would look small in comparison to Microsoft’s Office revenue. Make it too easy to pick out GCP, and it would look small in comparison to AWS.

“Neither of those are really apples-to-apples comparisons,” Miller added. “The real value for Google – and for most of the others – is in the way that these different components can be assembled and reassembled to deliver value to their customers. That should be the story, not whether their revenue in a specific category is growing 2x, 3x, or 10x.”

It is a sentiment with which Google appears to heartily agree.

You can read the full financial report here.


Dell unveils cloud-based endpoint management platform


Adam Shepherd

30 Apr, 2019

Dell is aiming to take the hassle out of configuring and deploying laptops, with the launch of a new endpoint management platform that brings together a number of the company’s technologies and services.

The platform, dubbed the Dell Technologies Unified Workspace, is designed to give IT departments a simple and automated platform for managing devices.

Based on VMware’s Workspace One product, the Unified Workspace allows IT departments to order devices which are imaged, configured and provisioned with all of the customer’s business applications before they leave the factory, including the ability to personalise which applications are installed on a per-user basis. When customers receive their devices, Dell said, end users will be able to start working in minutes, as opposed to hours.

The platform also supports endpoint management tasks over the entire lifecycle of corporate devices, including automated patch deployment, device health and status information, and cloud-based policy tools. In line with Dell’s emphasis on the importance of data analysis, the Unified Workspace will collect and collate data from customers’ device fleets, which will allow IT departments to analyse usage patterns and identify their most widely-used apps.

To ensure security, the Unified Workspace platform integrates with tools from SecureWorks and CrowdStrike, including off-host BIOS storage and verification, threat intelligence data, behavioural analytics and more. In addition, the platform includes integrated support capabilities to allow IT to shorten the time it takes to resolve helpdesk tickets.

Customers can also spread the cost over monthly instalments via Dell Financial Services’ PC-as-a-Service offering, which offers a cloud-style consumption-based payment model for physical devices.

“No setup, no imaging, no provisioning, no installation,” said Dell vice chairman of products and operations Jeff Clarke. “No configuration is, we like to say, no problem.”

These capabilities aren’t new, however; the company already offers all of them, in the form of services like the Dell ProDeploy Client Suite and ProSupport.

Rather, the Dell Technologies Unified Workspace combines all of these functions into a single, unified console.

The value for customers comes from the simplicity and time savings that this centralisation can bring, along with the benefits of rolling all of the various costs into one monthly fee.

Alongside this new service, Dell also unveiled a brand new Data Centre-as-a-Service offering, VMware Cloud on Dell EMC. Coming as part of the newly-launched Dell Technologies Cloud portfolio, the offering is a fully managed VMware cloud solution, controlled through VMware’s cloud management console and deployed on Dell EMC hardware within the customer’s own data centre.

The aim is to allow customers to seamlessly move their workloads between public cloud, on-premise infrastructure and edge installations, with VMware acting as a central, consistent infrastructure layer.

Dell set to triple its AMD server offering


Adam Shepherd

30 Apr, 2019

Dell EMC is planning to triple the number of AMD-based servers in its portfolio, following the success of the chip manufacturer’s EPYC range.

AMD spent a long time in the wilderness, playing second fiddle to main rival Intel across both the desktop and server markets. Its Zen microarchitecture, however, has been met with widespread acclaim, with Zen-based chips offering a noticeably lower TCO than equivalent Intel parts. In our tests, EPYC-based servers from Dell EMC, Broadberry and HPE all showcased phenomenal per-core performance for an excellent price.

This has not gone unnoticed by Dell. The company currently offers three server platforms that use AMD chips, but Dominique Vanhamme, the company’s EMEA vice president and general manager for storage and compute, told IT Pro that it is planning to triple the number of AMD-based platforms it offers by the end of the year.

“Out of, let’s say, 50 or so platforms that we have today,” he said, “three of them are AMD – we’ll probably triple that by the end of this year.”

He also confirmed that Dell EMC will be launching servers powered by AMD’s newest architecture – a 7nm architecture codenamed ‘Rome’ – in the second half of 2019.

While AMD will still be a minority among Dell’s server platforms, this planned expansion is in contrast to comments made by Dell EMC CTO John Roese last year, who told Cloud Pro that Intel was still “the big player” in the market and that the company had no plans to substantially increase its AMD offering, stating “don’t expect it to be a duopoly any time soon”.

A significant barrier to AMD’s growth in the server market, as Vanhamme pointed out, is that any workloads that currently run on Intel servers will need to be re-validated to run on AMD-powered hardware. Given Intel’s relative stranglehold on the market, this means that a full AMD migration is likely to be a major project for any sizeable company.

One of the primary driving factors behind this expansion of AMD platforms is a growing demand from customers, according to Vanhamme. The lower TCO offered by AMD’s EPYC chips is a large factor, he said: along with a cheaper list price, many EPYC chips use fewer cores and sockets to match the performance of equivalent Intel systems, which means that CIOs can save money on per-core and per-socket licensing costs. Lower power consumption is also attractive, he added.
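
To see why fewer sockets and cores can matter for software licensing, here is a back-of-the-envelope sketch. The core counts, socket counts and licence fees are hypothetical placeholders rather than Dell, AMD or software-vendor pricing; the point is only that per-core and per-socket fees scale with the hardware needed to hit a given performance target.

    # Hypothetical comparison of per-core and per-socket licensing costs
    # for two server configurations assumed to deliver similar performance.
    # All figures are illustrative assumptions, not vendor pricing.

    def licensing_cost(sockets, cores_per_socket, per_core_fee, per_socket_fee):
        """Total annual software licensing cost for one server."""
        total_cores = sockets * cores_per_socket
        return total_cores * per_core_fee + sockets * per_socket_fee

    PER_CORE_FEE = 1_500     # assumed per-core licence fee (USD/year)
    PER_SOCKET_FEE = 4_000   # assumed per-socket licence fee (USD/year)

    # Hypothetical dual-socket Intel box vs single-socket EPYC box
    intel_cost = licensing_cost(sockets=2, cores_per_socket=24,
                                per_core_fee=PER_CORE_FEE,
                                per_socket_fee=PER_SOCKET_FEE)
    epyc_cost = licensing_cost(sockets=1, cores_per_socket=32,
                               per_core_fee=PER_CORE_FEE,
                               per_socket_fee=PER_SOCKET_FEE)

    print(f"Intel config: ${intel_cost:,}/year")            # $80,000/year
    print(f"EPYC config:  ${epyc_cost:,}/year")             # $52,000/year
    print(f"Saving:       ${intel_cost - epyc_cost:,}/year")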

One thing that surprised Vanhamme was the demand for EPYC servers from general-purpose customers. High-performance computing, for example, was expected to be the biggest revenue driver, due to the per-core and per-socket performance advantages, but general demand has been surprisingly strong.

“So in the original plan, we were thinking that it will be a few first verticals that will pick up, like service providers,” he said. “We thought that maybe there are some hosters that may want to have that extra capacity when they provide IaaS services. We clearly see HPC, but we also see general customers for sure.”

You’re not seeing the savings you expected from multi-cloud – so what do you do now?

A recent Gartner report estimated that 80 percent of organisations will overshoot their cloud budgets by 2020 because of a lack of optimisation. Obviously, this frustrates executives. It’s particularly hard to deal with in the case of multi-cloud environments, where each provider’s bill might be a million-plus lines long.

If your organisation is among those with surprisingly high multi-cloud costs, rest assured: it’s possible to align costs with expectations. Here, I outline a strategy that will help you do so by addressing the root causes.

First: Why multi-cloud?

Before tackling cost, it’s essential to understand the underlying strategic business reasons for using a multi-cloud environment. Common reasons include:

  • Avoiding vendor lock-in: Many organisations opt for multi-cloud because they want to avoid being tied to a single cloud provider. While this is a fine strategy, it leaves out an important fact: many of the most valuable features of any cloud environment cannot be accessed unless you’re all in. It’s important to understand that there are opportunity costs here, too
  • Customising the cloud to business requirements: If various apps in your business require different functionalities, a multi-cloud solution may let you maintain existing functionalities and processes rather than adapting to fit the capabilities of your cloud provider
  • Mitigating risks from potential cloud outages: When you’re in multi-cloud, the outage of any one cloud platform is less likely to harm the business overall

If you didn’t have a particular strategic reason for choosing multi-cloud – or if your main reason was that you hoped to save money – you may want to consider moving to a single-cloud setup. With a single cloud provider, you may be a big enough customer to get discounts or freebies (like security services), which can help keep costs down.

You may also, as I mentioned above, be able to use features that could improve your business in various ways.

If, however, your strategic reason for choosing multi-cloud is still relevant – meaning you want to maintain your multi-cloud setup – it’s time to consider two things: the architecture of your cloud environment and your total cost of ownership.

Architecture: Make sure you’re not double-paying

Moving an in-house data centre to the cloud requires more than a simple lift-and-shift. That’s doubly true for multi-cloud, where certain architectures can substantially increase what you pay for cloud services.

For example, if you have an app that straddles different clouds and sends data between environments, you may incur bandwidth charges every time the app sends a request to a different environment.

Depending on the app’s functionality, this could add up to a lot of unnecessary costs. It’s common for thousands to tens of thousands of dollars a month to be eaten up by inter-application data transfer costs. The solution: examine how your apps are structured within the clouds you’re using and adjust that structure to limit double dipping.
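
As a rough illustration of how quickly these charges accumulate, the sketch below estimates a monthly inter-cloud transfer bill from a cross-cloud request rate, an average payload size and a per-GB egress rate. All of the inputs are assumptions for illustration only; real egress pricing varies by provider, region and volume tier.

    # Rough estimate of monthly inter-cloud data transfer costs for an app
    # that sends requests between two cloud environments.
    # Every input below is an illustrative assumption, not real pricing.

    REQUESTS_PER_SECOND = 200        # cross-cloud calls made by the app
    AVG_PAYLOAD_MB = 0.5             # average data sent per call, in MB
    EGRESS_RATE_PER_GB = 0.09        # assumed per-GB egress charge (USD)
    SECONDS_PER_MONTH = 60 * 60 * 24 * 30

    monthly_gb = (REQUESTS_PER_SECOND * AVG_PAYLOAD_MB * SECONDS_PER_MONTH) / 1024
    monthly_cost = monthly_gb * EGRESS_RATE_PER_GB

    print(f"Data transferred: {monthly_gb:,.0f} GB/month")
    print(f"Estimated egress bill: ${monthly_cost:,.0f}/month")   # ~$23,000/month

Because the cost scales linearly with the volume of data crossing cloud boundaries, co-locating chatty components in the same environment cuts the bill in direct proportion.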

In some cases, using software like CloudCheckr or CloudHealth can help this process; these apps analyse costs and make recommendations about how to lower them. But keep in mind that recommendations are not business-specific. Many will be red herrings. You’ll need a knowledgeable cloud professional to evaluate the recommendations in light of your specific business goals.

Total cost of ownership

Even if you can reduce cloud-specific costs with improved architecture, don’t forget to consider your infrastructure’s total cost of ownership. TCO is often higher in a multi-cloud setup, even when cloud-specific costs are lower. Why? There are three main reasons, pulled together in a rough cost sketch after this list:

  • In a multi-cloud environment, your IT team has to learn multiple clouds. That means more training time and less doing time, longer onboarding for new hires, and longer time to proficiency in each cloud environment. This shouldn’t necessarily be a deal breaker, but it’s an important consideration
  • In a single-cloud setup, some providers will offer discounts. I mentioned above that many cloud providers offer freebies, like security features, for customers in a single-cloud environment. In a multi-cloud setup, you’ll have to pay a third party for security software and other products that you might have otherwise been able to secure through your cloud provider
  • In multi-cloud, you’ll miss out on certain capabilities. The cost of avoiding vendor lock-in is losing access to the features and capabilities that are only available when your business is all-in on a single cloud provider
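
As a minimal sketch of how those items shift the comparison, the model below adds assumed training, third-party security and discount figures to an assumed raw cloud bill. Every number is an illustrative placeholder, not a benchmark; the point is that a slightly cheaper multi-cloud bill can still carry a higher total cost of ownership.

    # Hypothetical annual TCO comparison: single-cloud vs multi-cloud.
    # Every figure is an illustrative assumption, not real pricing.

    def annual_tco(cloud_bill, training, third_party_security, discount):
        """Cloud spend plus operational overheads, minus provider discounts."""
        return cloud_bill + training + third_party_security - discount

    single_cloud = annual_tco(
        cloud_bill=1_000_000,         # assumed annual spend with one provider
        training=40_000,              # one platform for the IT team to learn
        third_party_security=0,       # security features bundled by the provider
        discount=100_000,             # assumed volume / committed-use discount
    )

    multi_cloud = annual_tco(
        cloud_bill=950_000,           # slightly lower raw cloud spend
        training=90_000,              # two platforms to train and hire for
        third_party_security=60_000,  # security tooling bought separately
        discount=20_000,              # smaller commitment with each provider
    )

    print(f"Single-cloud TCO: ${single_cloud:,}")   # $940,000
    print(f"Multi-cloud TCO:  ${multi_cloud:,}")    # $1,080,000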

These costs, of course, may be acceptable depending on your strategic reason for maintaining a multi-cloud setup. The key is to consider them in a strategic context.

Cloud decisions are business decisions

As with any cloud decision, the choice of whether to stick with a multi-cloud setup should be driven by larger business goals. The cloud infrastructure should serve the business – not the other way around. If you’re unsure where to start in evaluating your cloud architecture and spend, talk with a knowledgeable cloud consultant, who can make recommendations based on your business goals.


CompTIA exam aims to boost cloud comprehension in business


Clare Hopping

30 Apr, 2019

CompTIA has unveiled an overhaul of its Cloud Essentials+ beta exam, ensuring those undertaking the qualification have up-to-date skills to sell the cloud into their businesses.

The exam features between 80% and 90% new content, addressing the scenarios and skills needed to help businesses make decisions about cloud products and services.

“The cloud has sparked an evolution in thinking about the role of technology; from a behind-the-scenes tactical tool to a valuable strategic asset that turns businesses into digital organisations and makes greater innovation possible,” said Dr James Stanger, chief technology evangelist at CompTIA.

The CompTIA Cloud Essentials+ beta exam includes content such as how to conduct a comprehensive cloud assessment; the business, financial and operational implications of moving to the cloud; and security, risk management and compliance threats and solutions.

It also includes a section on new technologies businesses may want to integrate into their cloud strategies, such as data analytics, the Internet of Things and blockchain.

“To unlock its true value, decision-makers must have a clear understanding about cloud technologies and their potential business impacts,” Stanger continued.

“Individuals who are CompTIA Cloud Essentials+ certified have demonstrated that they have the knowledge and skills to make informed decisions and recommendations on the business case for the cloud.”

CompTIA said that, as a completely vendor-neutral qualification, its certification differs greatly from those of its competitors.

Candidates wishing to undertake the exam should have between six and 12 months’ experience working as an analyst in an IT environment, with some cloud exposure.

What automation can learn from DevOps – and why the future is automation with iteration

A recent survey from Capgemini revealed that while enterprise-scale automation is still in its infancy, IT automation projects are moving along (below). IT is starting to view automation less tactically, and more strategically.

Figure 1. IT leads automation implementation (respondents were asked to select all that apply: “In which of the following functions has your organisation implemented automation initiatives?”). Source: Capgemini Research Institute, Automation Use Case Survey; July 2018, N=705 organisations that are experimenting with or implementing automation initiatives.

The Capgemini survey also showed that IT automation can be responsible for several quick wins, including self-healing, event correlation, diagnostics, application releases, cybersecurity monitoring, and storage and server management tasks. These projects not only lead to massive IT cost savings but, more importantly, to an increase in reliability and responsiveness to customer demands and business services. That would indicate that, while automation is a great solution for manual work, it’s also a part of a high-level, strategic IT plan to innovate the business.

But as DevOps practices like agile methodology and continuous deployment and optimisation start to take hold within the modern enterprise, a question arises: can automation be agile as well? This is the promise of artificial intelligence for IT operations, or AIOps, but if that’s not a possibility for your IT organisation today, it’s important to make sure that your automation practices are continuously optimised to fit the task. Setting and forgetting was a practice of the server era, and in a world of on-demand infrastructure, automation ought to be continuously optimised and evaluated for maximum benefit.

The new expectations of automation

IT automation projects can have serious ramifications if anything goes wrong, because when the machines execute a policy, they do it in a big way. This is perhaps the chief argument as to why it’s critical that progressive steps are used to define and evaluate both the process being automated and the automation itself – they mitigate the seriousness of any issue that can arise. This is why it’s important to consider the following:

  • Is this a good process, and is it worth automating?
  • How often does this process happen?
  • When it happens, how much time does it take?
  • Is there a human element that can't be replaced by automation?

Let’s break these questions down and see how they can provide the basis for an iterative approach to automation:

Is this a good process?

This may seem like a rudimentary question, but in fact, processes and policies are often set and forgotten, even as things change dramatically. Proper continuous optimisation or agile automation development will force an IT team to revisit existing policies and identify whether each is still right for the business service goals.

Some processes are delicate and automation may threaten their integrity, whereas others are high-level and automation neglects the routine tasks that underlie the eventual results. A good automation engineer understands what tasks are the best candidates for automation and sets policies accordingly.

How often does this process happen?

Patching, updating, load balancing, or orchestration can follow an on-demand or time-series schedule. As workloads become more ephemeral, moving to serverless, cloud-native infrastructure, these process schedules will change as well. An automation schedule ought to be continuously adapted to the workload need, customer demand, and infrastructure form. Particularly as the business continues the march toward digital transformation, the nature and schedule of particular work may become more dynamic.

When it happens, how much time does it take?

This also depends on the underlying infrastructure. Some legacy systems require updates that may take hours, and some orchestration of workloads will be continuous. Automation must be tested to be efficient and effective on the schedule and frequency of the manual task.

Is there a human element that’s irreplaceable?

As much as you may want to, it’s difficult for automation to shift left (to more experienced tasks and teams) without the help of artificial intelligence or machine learning. Many times there is a human element involved in deriving insights, creating new workflows, program management or architecture. When building an iterative automation practice, make sure you identify where human interaction must occur to evaluate and optimise. In our lifetime, technology has advanced at lightning speed, with robots now completing jobs that were once held by people. However, there are times when a machine just cannot deliver the same quality a human can.
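
One way to pull the four questions together is to rank candidate processes by the manual hours automation would recover, while flagging any that fail the ‘good process’ or ‘human element’ tests. The sketch below is a hypothetical illustration of that triage; the processes, frequencies, durations and verdicts are invented for the example and are not drawn from the Capgemini research.

    # Hypothetical scoring of automation candidates using the four questions above.
    # All processes, frequencies and durations are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Process:
        name: str
        runs_per_year: int        # how often does this process happen?
        minutes_per_run: float    # when it happens, how much time does it take?
        good_process: bool        # is this a good process, worth automating?
        needs_human: bool         # is there a human element that can't be replaced?

    def hours_recovered(p: Process) -> float:
        """Annual hours of manual effort that automation could recover."""
        return p.runs_per_year * p.minutes_per_run / 60

    candidates = [
        Process("OS patching", runs_per_year=365, minutes_per_run=30,
                good_process=True, needs_human=False),
        Process("Capacity planning", runs_per_year=12, minutes_per_run=240,
                good_process=True, needs_human=True),
        Process("Legacy report export", runs_per_year=52, minutes_per_run=20,
                good_process=False, needs_human=False),
    ]

    for p in sorted(candidates, key=hours_recovered, reverse=True):
        if not p.good_process:
            verdict = "fix the process before automating it"
        elif p.needs_human:
            verdict = "automate the routine parts, keep a human in the loop"
        else:
            verdict = "strong automation candidate"
        print(f"{p.name}: ~{hours_recovered(p):.0f} hours/year -> {verdict}")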

Automation for all

Automation is perhaps one of the most defining signatures of the future of IT operations management. It relieves teams of routine work and helps improve overall efficiency, all while driving quick wins that turn an IT team into heroes. But don’t let automation be the end goal. Instead, consider it a tool, like any other tool, that can drive action from data. And until AI is an everyday option, it’s incumbent on the IT professional to continuously optimise the data that drive that action.


What to expect from Dell Technologies World 2019


Adam Shepherd

29 Apr, 2019

It’s the end of April, and that can mean only one thing: Michael Dell is getting ready to emerge, bear-like, from his winter slumber and ravenously tear into something; in this case, insufficiently transformed data centres.

For the next week, Las Vegas’ Sands expo centre will be swarming with Dell’s customers, partners and technologists, all eager to hear what the company has in store over the next 12 months.

The answer, I suspect, will not be a surprise. Dell has been beating the ‘digital transformation’ drum for several years, and it shows no signs of letting up.

As such, we can expect to hear all about the company’s favourite talking-points (sing along if you know the words), including the importance of multi-cloud architectures, the growing role of data analytics in business and how software-defined storage and networking can unlock ‘the data centre of the future’. All important topics, to be sure, but nothing that we haven’t heard Dell talking about at its previous conferences.

In the years since the arrival of cloud to the enterprise market, Dell has pivoted rather impressively to being a full-stack provider, broadening its focus from on-premise infrastructure to hybrid cloud, edge computing and IoT in a way that its rivals haven’t been able to match quite as effectively. Realistically, there are very few parts of the modern data centre that Dell doesn’t touch in some way, and the company will be making full use of this position.

Edge computing and IoT, in particular, will almost certainly play a major role in this year’s conference; both areas are a key part of the so-called ‘4th industrial revolution’, and are supported by Dell’s product portfolio.

Analytics will likely be a major theme too – data-crunching is increasingly vital for companies, and by happy coincidence, the high performance and low-latency storage of Dell’s equipment makes it well-suited to this task.

Expect to hear the phrase ‘multi-cloud’ a lot, as well. Dell likes to emphasise how well the full suite of Dell Technologies brands (primarily VMware, Dell EMC and Pivotal) lend themselves to mixed estates, lest anyone think of it solely as a tin shop.

What all of these areas mean for customers, in practical terms, is another matter. We’d be surprised to see any major announcements from VMware – they’re usually saved for VMware’s own conference later in the year – but CEO Pat Gelsinger is all but certain to deliver his usual keynote. This will probably be where 5G, IoT and Edge receive the most airtime.

VMware aside, any cloud announcements are likely to come from Pivotal Cloud Foundry, which is one of Dell Technologies’ major entry-points to the cloud market.

The meat of Dell’s announcements is most likely to focus on new hardware. We’re a little too early in the lifespan of Dell’s latest range of 14G PowerEdge servers to expect a whole new generation. But what we could well see are some newer, more powerful products. Intel has recently announced a swathe of new Xeon server processors, so we’re expecting Dell to show off some fancy new iron that actually makes use of them.

These servers will, no doubt, be touted as an ideal way to accelerate your machine learning and/or data analytics deployments; a position that neatly dovetails with Dell’s preferred messaging.

Equally, don’t be shocked to see Dell unveiling some new servers running on AMD’s EPYC architecture. It’s less likely than new Xeon-powered models – Intel is a major Dell partner with a lot of behind-the-scenes pull, so the company will want to avoid antagonising Intel – but at the same time, Dell’s run by smart people. It’s no secret that Intel’s 10nm development has hit a bit of a brick wall, while AMD has sailed merrily past it onto the 7nm process node.

The results speak for themselves, too: Dell EMC servers fitted with AMD’s EPYC processors can match the performance of Xeon-based equivalents for a considerably lower price point, and that can’t have gone unnoticed. How much attention AMD gets (both on stage and in the halls) should give a good indication of whether the tide is starting to turn in its favour.

Long story short, keep an eye out for more AMD servers than usual – although they won’t be sporting the company’s newest Rome architecture, as it’s still too early for production servers to be ready.

We wouldn’t discount the possibility of Dell launching some new storage or networking hardware, either. We’re not betting the farm on this as it’s usually not a ‘sexy’ enough area to get much attention at the company’s major league show, not to mention that both portfolios got a full refresh not long after the close of the EMC acquisition. That being said, both storage and networking are key areas of data centre transformation, and Dell has been impressing in both categories recently.

Additionally, it’s worth noting that this may well be a more cautious show than we’ve seen in recent years, as it’s the first annual conference since Michael Dell took the company back onto the public market last year. This means that, for the first time since 2013, he’s once again answerable to shareholders. Dell is riding high on the successful execution of its roadmap, but tech investors are a notoriously skittish bunch, so the pressure will be focussed on not spooking them with any controversial announcements.

Thematically speaking, then, this year’s Dell World is set to be more of the same. Regular attendees probably aren’t going to find themselves surprised by the company’s agenda, and although we may have some excitement on hand in the form of new hardware releases, the looming spectre of public investors makes any big shocks fairly unlikely.

Still, for customers and partners, it’s an opportunity to get a closer look at the latest products and services rolling out of Dell’s development facilities; at the end of the day, that’s what it’s really all about.

What can you do with deep learning?


Cloud Pro

29 Apr, 2019

If there’s one resource the world isn’t going to run out of anytime soon it’s data. International analyst firm IDC estimates the ‘Global Datasphere’ – or the total amount of data stored on computers across the world – will grow from 33 zettabytes in 2018 to 175 zettabytes in 2025. Or to put that in a more relatable form, 175 billion of those terabyte hard disks you might find inside one of today’s PCs.

That data pool is an enormous resource, but one that’s far too big for humans to exploit. Instead, we’re going to need to rely on deep learning to make sense of all that data and discover links we don’t even know exist yet. The applications of deep learning are, according to Intel’s AI Technical Solution Specialist, Walter Riviera, “limitless”.

“The coolest application for deep learning is yet to be invented,” he says.

So, what is deep learning and why is it so powerful?

Teaching the brain

Deep learning is a subset of machine learning and artificial intelligence. It is specifically concerned with neural networks – computer systems that are designed to mimic the behaviour of the human brain.

In the same way that our brains make decisions based on multiple sources of ‘data’ – i.e. sight, touch, memory – deep learning also relies on multiple layers of data. A neural network is composed of layers of “virtual or digital neurons,” says Riviera. “The more layers you have, the deeper you go, the cleverer the algorithm.”

There are two key steps in deep learning: training and inference. The first is teaching that virtual brain to do something, the second is deploying that brain to do what it’s supposed to do. Riviera says the process is akin to playing a guitar. When you pick up a guitar, you normally have to tune the strings. So you play a chord and see if it matches the sound of the chord you know to be correct. “Unconsciously, you match the emitted sound with the expected one,” he says. “Somehow you’re measuring the error – the difference between the two.”

If the two chords don’t match, you twiddle the tuning pegs and strum the chord again, repeating the process until the sound from the guitar matches the one in your head. “It’s an iterative process and after a while you can basically drop the guitar, because that’s ready to go,” says Riviera. “What song can you play? Whatever, because it’s good to go.”

In other words, once you’ve trained a neural network to work out what’s right and wrong, it can be used to solve problems that it doesn’t already know the answer to. “In the training phase of a neural network, we provide data with the right answer… because we know what is the expected sound. We allow the neural network to play with that data until we are happy with the expected answer,” says Riviera.

“Once we’re ready to go, because we think the guitar is playing well, so the neural network is actually giving the expected answer or the error is very close to zero, it’s time to take that brain and put it in a camera, or to take decisions in a bank system to tell us that it’s a fraud behaviour.”
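
Riviera’s guitar analogy maps directly onto the standard training loop: make a prediction, measure the error against the known answer, adjust the parameters, and repeat until the error is close to zero, after which the trained model can be used for inference on new inputs. The toy sketch below illustrates that loop for a single weight learned by gradient descent; it is a deliberately minimal illustration of the idea, not code from any particular deep learning framework.

    # Toy illustration of the train-then-infer loop described above:
    # adjust a single weight until predictions match the known answers,
    # then use the trained "brain" on data it has not seen before.

    # Training data where the right answer is known: y = 3 * x
    examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

    weight = 0.0            # the untuned "string"
    learning_rate = 0.01

    # Training: iterate until the measured error is close to zero
    for step in range(1000):
        total_error = 0.0
        for x, expected in examples:
            predicted = weight * x
            error = predicted - expected          # difference between the two "sounds"
            weight -= learning_rate * error * x   # nudge the tuning peg
            total_error += abs(error)
        if total_error < 1e-6:
            break

    # Inference: the trained model answers a question it wasn't trained on
    print(f"learned weight ~ {weight:.3f}")          # ~3.0
    print(f"prediction for x=10: {weight * 10:.2f}") # ~30.0
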
Deep learning as a concept isn’t new – indeed, the idea has been around for 40 years. What makes it so exciting now is that we finally have all the pieces in place to unlock its potential.

“We had the theory and the research papers, we had all the concepts, but we were missing two important components, which were the data to learn from and the compute power,” says Riviera. “Today, we have all of these three components – theory, data and infrastructures – but what we’re missing is the fourth pillar, which is creativity. We still don’t know what we can and can’t achieve with deep learning.”

Deeper learning

That’s not to say that deep learning isn’t already being put to amazingly good use.

Any regular commuter will know the sheer fist-thumping frustration of delays and cancelled trains. However, Intel technology is being used to power Resonate’s Luminate platform, which helps one British train company better manage more than 2,000 journeys per day.

Small, Intel-powered gateways are placed on the trackside, monitoring the movements of trains across the network. That is married with other critical data, such as timetables, temporary speed restrictions and logs of any faults across the network. By combining all this data and learning from past behaviour, Luminate can forecast where problems might occur on the network and allow managers to simulate revised schedules without disrupting live rail passengers. The system can also make automatic adjustments to short-term schedules, moving trains to where they are most needed.

The results have been startling. On-time arrivals have increased by 9% since the adoption of the system, with 92% of trains now running to schedule.

Perhaps just as annoying as delayed trains is arriving at the supermarket to find the product you went there for is out of stock. Once again, Intel’s deep learning technology is being used to avert this costly situation for supermarkets.

The Intel-powered Vispera ShelfSight system has cameras mounted in stores, keeping an eye on the supermarket shelves. Deep-learning algorithms are used to train the system to identify individual products and to spot empty spaces on the shelves, or even products accidentally placed in the wrong areas by staff.

Staff are alerted to shortages using mobile devices, so that shelves can be quickly restocked and lost sales are kept to a minimum. And because all that data is fed back to the cloud, sales models can be adjusted and the chances of future shortages of in-demand products are reduced.

Only the start

Yet, as Riviera said earlier, these applications of deep learning are really only the start. He relays the story of the Italian start-up that is using deep learning to create a system where drones carry human organs from hospital to hospital, eliminating the huge disadvantages of helicopters (too costly) and ambulances (too slow) when it comes to life-critical transplants.

It’s not the only life-saving application he can see for the technology, either. “I’d like to see deep learning building an autonomous system – robots – that can go and collect plastic from the oceans,” he says. “We do have that capability, it’s just about enabling it and developing it.”

“The best [use for deep learning] is yet to be invented,” he concludes.

Discover more about data innovations at Intel.co.uk

How augmented analytics is turning big data into smart data

Smart data is created by filtering out the noise from the big data generated by media, business transactions, the Internet of Things (IoT), and the data exhaust of online activity. Smart data can uncover valuable commercial insights by improving the efficiency and effectiveness of data analytics.

Furthermore, vast amounts of unstructured big data can be converted into smart data using enhanced data analytics tools that utilise artificial intelligence (AI) and machine learning (ML) algorithms.

Advancements in data processing tools and the adoption of next-generation technologies – such as augmented analytics used to extract insights from big data – are expected to drive the smart data market toward $31.5 billion by 2022.

Augmented analytics market development

Augmented analytics automates data insights gathering and provides clearer information, which is not possible with traditional analysis tools. Companies such as Datameer, Xcalar, Incorta, and Bottlenose are already focusing on developing end-to-end smart data analytics solutions to obtain valuable insights from big data.

"Markets such as the US, the UK, India, and Dubai have rolled out several initiatives to use AI and ML-powered data analytics tools to generate actionable insights from open data,” said Naga Avinash, research analyst at Frost & Sullivan.

Smart data will help businesses reduce the risk of data loss and improve a range of activities such as operations, product development, predictive maintenance, customer experience and innovation.

Frost & Sullivan’s recent worldwide market study uncovered key market developments, technologies used to convert big data to smart data, government programs, and the IT organizations applying data analytics. It also found use cases for smart data applications.

"The evolution of advanced data analytics tools and self-service analytics endows business users instead of just data scientists with the ability to conduct analyses," noted Avinash.

Technology developers can ensure much wider adoption of their solutions by offering in-built security mechanisms that can block attackers in real time. They could also develop new business models such as shared data economy and even sell data-based products or utilities.

Outlook for augmented analytics application growth

As an example of other application scenarios, various governments have already begun to use data analytics on 'open data' sets to solve issues related to smart city and municipal water crises. Other important growth opportunities for smart data solution providers include:

  • Employing augmented analytics and self-service data analytics tools, as they enable any business user to make queries, analyse data, and create customized reports and visualisations
  • Leveraging a data monetisation approach, as it allows businesses to utilize and bring value at every point in the data value chain
  • Adding new data analytics services to existing offerings, driven by enterprise CIOs and CTOs
  • Partnering with innovative smart data solutions providers (emerging startups) across the world. This will help companies enhance their implementation capabilities by leveraging open-source smart data solutions focused on enterprise data management and analytics
  • Collaborating with the government to address the digital transformation talent shortage and setting clear investment and data strategy goals
