Four business benefits of cloud data warehousing


Grace Halverson

10 Apr, 2019

As technology continues to advance, so does the amount of data generated on a daily basis, both internally – through marketing, sales, production, and finance, among others – and externally, from sources like the internet of things.

Storing and analysing all this data requires a dedicated system that can integrate a variety of data types, as well as provide information and insight. Many traditional, on-site data warehouses are no longer up to the task. Because of this, cloud data warehousing has emerged, opening doors for organisations of all sizes and types.


‘Cloud data warehousing for dummies’ is a must-read whitepaper for any organisation looking for tips on migrating, considerations for choosing a new system, data trends, and more.



Like a traditional data warehouse, a cloud data warehouse is a computer system dedicated to storing and analysing data to find patterns and correlations that lead to information and insight. Data warehouses also store and integrate data from multiple sources in varying formats. However, a cloud data warehouse is based in the cloud rather than in a traditional, on-premises location, and as such can be bought from and managed by a vendor as an as-a-service product.

With that in mind, here are four business benefits of cloud data warehousing.

1: Cloud data warehousing meets current and future needs

Flexibility is an important factor of cloud data warehousing. Organisations can scale compute and storage independently, depending on the company's needs. So, if a business needs more storage today, it won't also be forced to add more compute. And if its situation changes in the future, it can adjust as needed.
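As a toy illustration of what independent scaling means in practice (this is not any vendor's actual API, just a sketch with invented names), storage and compute are separate knobs that can be turned at different times:

```python
# Toy model of a cloud data warehouse where storage and compute are
# provisioned independently. Real vendors expose this through their own
# SQL commands or management consoles; the class below is illustrative only.

class CloudWarehouse:
    def __init__(self, storage_tb=1, compute_nodes=1):
        self.storage_tb = storage_tb
        self.compute_nodes = compute_nodes

    def add_storage(self, tb):
        # Growing storage does not force the business to buy extra compute.
        self.storage_tb += tb

    def scale_compute(self, nodes):
        # Compute can be resized separately, when query load changes.
        self.compute_nodes = nodes


wh = CloudWarehouse(storage_tb=10, compute_nodes=4)
wh.add_storage(5)       # more data arrives; compute is untouched
wh.scale_compute(8)     # later, a busy reporting period needs more compute
print(wh.storage_tb, wh.compute_nodes)  # 15 8
```

The point of the sketch is simply that the two resources change on independent schedules, which is what a fixed on-premises appliance cannot offer.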

2: Data is accommodated and integrated in one place

With the help of data analytics, semi-structured data can provide next-level insights beyond what traditional data offers. But semi-structured data must be loaded and transformed before an organisation can analyse it, a process most traditional data warehouses can't handle but cloud data warehouses can.

This ability to support diverse data without performance issues ensures all of an organisation’s data can be loaded and integrated in one location. This not only increases flexibility, but it also means all data can be managed and maintained in one system, reducing costs.
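A minimal sketch of that load-and-transform step: flattening a nested, semi-structured JSON record (the field names here are invented for illustration) into the flat rows a warehouse table expects:

```python
import json

# A semi-structured record, e.g. an IoT event with nested fields.
raw = '{"device": "sensor-7", "reading": {"temp_c": 21.5, "humidity": 40}}'

def flatten(record, prefix=""):
    """Flatten nested dicts into a single row with dotted column names."""
    row = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            row.update(flatten(value, prefix=name + "."))
        else:
            row[name] = value
    return row

row = flatten(json.loads(raw))
print(row)  # {'device': 'sensor-7', 'reading.temp_c': 21.5, 'reading.humidity': 40}
```

Cloud warehouses typically do this kind of transformation natively at load time; a traditional system would need the data pre-processed outside the warehouse first.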

3: Cloud data warehousing saves money

Between licensing fees, hardware, set-up, management, securing and backing up data, and more, conventional data warehouses can cost millions. Building one that can hold the variety and volume demanded by today's standards raises the cost even further.

However, using cloud data warehousing as a service (as it is commonly used today) can cut costs significantly, while keeping all the same features. Relying on service providers to maintain systems and only purchasing the amount of support needed helps organisations stretch their budgets further and avoid paying for unnecessary features.


Explore how your organisation can easily and affordably harness massive amounts of data and transform it into valuable insights in this comprehensive guide to cloud data warehousing.



4: Data is secured at rest and in transit

An important aspect of storing and analysing data is keeping that data safe. In addition to vendor-conducted penetration tests that check the system for vulnerabilities, modern data warehouses do this through confidentiality and integrity measures.

Confidentiality practices prevent unauthorised access to data. They are usually implemented through role-based access control, which grants access only to those with permission, and multi-factor authentication, which requires users to enter a code (usually one sent to a mobile phone) so that a stolen username and password alone can't be used to access the system.

Integrity measures guarantee data isn't modified or corrupted, and entail the use of encryption practices and encryption keys to protect data from unauthorised prying eyes.
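As a hedged sketch of one common integrity measure (the key and record below are invented), a keyed hash such as an HMAC lets a system detect whether stored data was modified between write and read:

```python
import hmac
import hashlib

key = b"example-integrity-key"            # illustrative key, not a real secret
data = b"quarterly_sales,2019-04,1532000"  # a record as it was written

# Compute a MAC when the data is stored...
tag = hmac.new(key, data, hashlib.sha256).hexdigest()

# ...and verify it when the data is read back.
def verify(key, data, tag):
    expected = hmac.new(key, data, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, tag)

assert verify(key, data, tag)                    # untouched data passes
assert not verify(key, data + b"tampered", tag)  # any modification is detected
```

Real warehouses layer this kind of check with encryption of the data itself, at rest and in transit; the sketch shows only the tamper-detection half.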

Schneider Electric’s EcoStruxure IT aims to ease data centre deployment


Clare Hopping

10 Apr, 2019

Businesses can now up their data centre management game, thanks to the launch of Schneider Electric’s EcoStruxure IT Advisor, an easy to deploy data centre management platform.

IT Advisor combines cloud-based planning and modelling to uncover where businesses can make savings and improve uptime through optimising their facilities.

It can also analyse the business impact that decisions are having on the entire environment, automating workflows to ensure the data centre is running at its optimum.

“Hybrid data centre architectures are driving the industry to rethink the way their data centre infrastructure is managed and operated,” said Kim Povlsen, vice president and general manager of Digital Services and Software at Schneider Electric.

“EcoStruxure IT Advisor addresses this need by offering customers a powerful cloud-based or on-premise data centre planning and modelling software, accessible from anywhere, and delivered with a flexible subscription model.”

Schneider Electric’s EcoStruxure IT Advisor features asset management, laying out data in each environment and enabling teams to view device details and asset attributes in a logical way.

In-depth risk planning models potential incidents and demonstrates the impact they may have on devices and infrastructure, while change management is supported by automated workflows, reducing the possibility of human error and ensuring best practices are put in place.

Schneider Electric has also integrated features to help co-location facilities understand how data is distributed.

It maps out areas, cages and racks with assets in a “floor view”, displaying how racks are being utilised so organisations know where there is space and can make sure they’re using their resources in the most logical and efficient manner.

The IT Advisor system is an expansion of Schneider Electric’s EcoStruxure IT platform, which also includes EcoStruxure IT Expert, software that allows for the monitoring of physical IoT assets, and EcoStruxure Asset Advisor, a 24/7 monitoring service provided through its partner network.

HPE and Nutanix join forces to deliver hybrid cloud as a service


Clare Hopping

10 Apr, 2019

HPE and Nutanix have expanded their partnership with the launch of an integrated hybrid cloud-as-a-service platform, combining HPE’s GreenLake and Nutanix’s Enterprise Cloud OS software.

The companies said that offering their products together will serve up a hybrid infrastructure fully managed by HPE, at a lower cost than on-premise solutions. It also offers more flexibility, running in either customer data centres or a co-location facility.

The offering has been designed for businesses wanting to scale-up their infrastructure to operate mission-critical workloads and big data applications. It supports SAP, Oracle, and Microsoft environments, plus virtualised big data applications, such as Splunk and Hadoop.

The announcement is in response to a problem of overcomplexity from legacy hardware, and concerns over vendor lock-in when attempting to move to a hybrid model, according to the companies.

“HPE created the modern on-premises, as a service consumption market with HPE GreenLake. Hundreds of global customers now leverage HPE GreenLake to get the benefits of a cloud experience combined with the security, governance, and application performance of an on-premises environment, while paying for the service based on actual consumption,” said Antonio Neri, president and CEO, HPE.

“Today, HPE is expanding its leadership in this market by providing additional choice to customers seeking a hybrid cloud alternative that promises greater agility at lower costs.”

As part of the agreement, Nutanix’s channel partners will gain access to HPE’s servers, giving them the opportunity to sell the hardware alongside Nutanix’s software, offering customers a fully-integrated solution.

“Our customers tell us that it’s their applications that matter most. Our partnership with HPE will provide Nutanix customers with another choice to make their infrastructure invisible so they can focus on business-critical apps, not the underlying technology,” said Dheeraj Pandey, founder, CEO and chairman of Nutanix.

“We are delighted to partner with HPE for the benefit of enterprises looking for the right hybrid cloud solution for their business.”

What is Anthos? Google’s brand new multi-cloud platform


Connor Jones

10 Apr, 2019

Google has revealed its Cloud Services Platform has been rebranded to Anthos, a vendor-neutral app development platform that will work in tandem with rival cloud services from Microsoft and AWS.

It was developed as a result of customers wanting a single programming model that gave them the choice and flexibility to move workloads to both Google Cloud and other cloud platforms such as Azure and AWS without any change.

The news was announced during Thomas Kurian’s first keynote speech at a Google Cloud Next event as the company’s new CEO, following Diane Greene’s departure in November last year.

The announcement was met with the loudest cheer of the day from the thousands-strong crowd in attendance, who seemed to share the enthusiasm of industry analysts who have been telling Google that 88% of businesses will undergo a multi-cloud transformation in the coming years.

That could be some way off though, considering global market intelligence firm IDC said last year less than 10% of organisations are ready for multi-cloud, with most sticking to just one vendor.

Anthos will allow customers to deploy Google Cloud in their own datacentres for a hybrid cloud setup, and to manage workloads within their datacentre, on Google Cloud, or on other cloud providers, in what’s being described as the world’s first truly cloud-agnostic setup.

“The only way to reduce risk is by going cloud-agnostic,” according to Eyal Manor, VP of engineering for Anthos. He said that managing hybrid clouds is too complex and challenging, which is why as much as 80% of workloads are still not in the cloud.

As it’s entirely software-based and requires no special APIs or time spent learning different environments, Manor said you can install Anthos and start running it in less than 3 hours.

It became generally available Tuesday both on GCP with Google Kubernetes Engine (GKE), and in customers’ datacentres with GKE On-Prem.

The announcement marks Google’s apparent move to make managing infrastructure much simpler for its customers so they can focus on improving their business.

By using Anthos, enterprises can depend on automation so they can “focus on what’s happening further up the stack and take the infrastructure almost for granted”, said Manor. “You should be able to deploy new and existing apps running on-premise and in the cloud without constantly having to retrain your developers – you can truly double down on delivering business value”.

Some of the world’s leading businesses have already been given early access to the platform, such as HSBC, which needs a managed cloud platform for its hybrid cloud strategy.

“At HSBC, we needed a consistent platform to deploy both on-premises and in the cloud,” says Darryl West, group CIO, HSBC. “Google Cloud’s software-based approach for managing hybrid environments provided us with an innovative, differentiated solution that was able to be deployed quickly for our customers.”

GCP customers have already invested heavily in their infrastructure and forged relationships with their vendors, which is why Google has launched Anthos with an ecosystem of leading vendors, so users can start using the new platform from day one.

Cisco, VMware, Dell EMC, HPE, Intel and Atos are just a few that have committed to delivering Anthos on their own hyperconverged infrastructure for their customers.

Nvidia delivers cutting-edge graphics rendering to any Google Cloud device


Connor Jones

10 Apr, 2019

Nvidia’s Quadro Virtual Workstation (QvWS) will be available on Google Cloud Platform (GCP) by the end of the month, marking the first time any platform has supported RTX technology for virtual workstations.

Cloud workloads are becoming increasingly compute-intensive, and support for Nvidia’s QvWS is expected to accelerate the development and deployment of AI services and vastly improve batch rendering from any device in an organisation.

Enterprises that rely on powerful graphics rendering would otherwise have to invest huge amounts in the hardware needed to perform such compute-heavy tasks on-premise. With GPU-enabled virtual workstations, businesses can also forget about the cost and complexity of managing datacentres for the task.

Instead of running up to 12 T4 GPUs in their on-premise infrastructure, an endeavour that would cost thousands up front, businesses can spend much less by using the infrastructure as a service via GCP.

“You can spin an instance up [on GCP] for less than $3 per hour,” said Anne Hecht, senior director, product marketing at Nvidia. “The QvWS is about 20 cents per minute and then you need to buy the other infrastructure depending on how much storage, memory and CPU that you want”.

That cost can drop during ‘peak times’ through a process called pre-emption: if a customer is willing to lose the service within an hour, for example by assigning resources to a workload that will complete quickly, the instance can be rented for half the price.
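Using the figures quoted above (roughly $3 per hour for an instance, halved for pre-emptible capacity), a back-of-the-envelope comparison of the two options looks like this; real GCP pricing varies by region, GPU count and the storage, memory and CPU attached:

```python
# Back-of-the-envelope cost comparison using the prices quoted in the
# article; actual cloud pricing depends on the full instance configuration.

ON_DEMAND_PER_HOUR = 3.00    # "less than $3 per hour"
PREEMPTIBLE_DISCOUNT = 0.5   # pre-empted capacity "for half the price"

def render_job_cost(hours, preemptible=False):
    rate = ON_DEMAND_PER_HOUR * (PREEMPTIBLE_DISCOUNT if preemptible else 1.0)
    return hours * rate

print(render_job_cost(40))                    # 120.0 for a 40-hour batch render
print(render_job_cost(40, preemptible=True))  # 60.0 if interruption is acceptable
```

The trade-off is availability: the cheaper instance can be reclaimed by the provider, so it suits short or restartable batch jobs rather than interactive workstation sessions.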

Edward Richards, director & solution architect at Nvidia, told Cloud Pro that the service can be accessed from any device that can connect to GCP.

“You can plug your own tablet, plug in your mouse of choice, keyboard of choice – you just don’t think about it,” he said. “I use one on my desk at work, I’ve almost chained all my day-to-day to it and every once in a while I forget that I’m actually remoting to it from the other side of the country… it’s just that seamless”.

Nvidia is the biggest name in the graphics processing market, and its flagship Turing architecture is used in its T4 GPU, which can perform graphics-hungry ray tracing, AI and simulation processes in real time. It’s the first time ray tracing has been made available for graphics processing in a cloud instance.

Azure customers can already utilise Nvidia’s graphics processing through VMs, but only on the older Volta-based V100 GPU.

The virtual workstations will benefit more sectors than just game development. Engineers and car manufacturers run computer-aided design (CAD) applications that are critical to their businesses’ success. Video editors and broadcasters also stand to benefit from high-performance graphics processing while away on set.

“When spikes in production demand occur, particularly around major broadcast events like the Olympic Games or the Soccer World Cup, time and budget to set up new temporary editors are a big problem,” said Alvaro Calandra, consultant at ElCanal.com. “With Quadro vWS on NVIDIA T4 GPUs in the GCP marketplace, I can use critical applications on demand like Adobe Premiere Pro, apply GPU-accelerated effects, stabilise, scale and colour correct clips with a native, workstation-like experience.”

It’s been widely known that Nvidia’s QvWS has been through the alpha and private beta phases in the last few months, but it will be made generally available at the end of the month on GCP.

Announcing @Darktrace “Silver Sponsor” of @CloudEXPO | #HybridCloud #CIO #AI #AIOps #MachineLearning #SmartCities

Darktrace is the world’s leading AI company for cyber security. Created by mathematicians from the University of Cambridge, Darktrace’s Enterprise Immune System is the first non-consumer application of machine learning to work at scale, across all network types, from physical, virtualized, and cloud, through to IoT and industrial control systems. Installed as a self-configuring cyber defense platform, Darktrace continuously learns what is ‘normal’ for all devices and users, updating its understanding as the environment changes.


Google Cloud stresses hybrid and multi-cloud at Next – as well as sealing a major open source deal

Analysis: Google Cloud means open, hybrid and multi-cloud. The company took the keynote at Next in San Francisco to offer more flexibility with using other vendors such as Amazon Web Services (AWS) – but that’s where the familiarity ended.

The biggest product news to come out of the session was moving its cloud services platform, rebranded as Anthos, to accommodate AWS and Microsoft Azure. Anthos lets users “build and manage modern hybrid applications across environments” through Kubernetes, as the official page puts it. With a cast list as long as one’s arm – more than 35 partners were cited on a slide, with Cisco and VMware, more on whom shortly, among the highlights – the goal was for Anthos to be ‘simple, flexible and secure.’

Google Cloud chief exec Thomas Kurian – who has only been in the role for 10 weeks but, as boss Sundar Pichai put it, whose productivity was stretching Calendar and G Suite – noted the multi-cloud move came through listening to customers. Customers wanted three things: firstly hybrid, secondly multi-cloud, and finally a platform that “allows them to operate this infrastructure without complexity and to secure and manage across multiple clouds in a consistent way”, he added. The live demonstration came with a twist: the workload was being run on AWS.

In terms of partner news, the best was saved until last. Google Cloud announced partnerships with seven open source vendors (below) in what Kurian described as the ‘first integrated open source ecosystem.’ “What this allows you to do, as a developer or customer, is to use the best of breed open source technology, procure them using standard Google Cloud credits, and get a single console [and] bill from Google,” said Kurian. “We support these along with partners and, as you grow, as you use these technologies, you share this success with our partners.”

The CEOs of six of these companies – Confluent, DataStax, Elastic, InfluxData, MongoDB and Neo4j – appeared in a video extolling Google’s approach to open source. The seventh, Redis Labs chief executive Ofer Bengal, appeared on stage with Kurian. “This is great for us because, as you know, monetising open source was always a very big challenge for open source vendors and more so in the cloud era,” he said, adding that Google had taken a ‘different approach’ from other cloud vendors.

Why this matters is, as regular readers of this publication will know, because of a long-running grumble between the open source companies and cloud vendors. Late last year, Confluent announced it was changing certain aspects of its license. Users could still download, modify and redistribute the code, but not – with one eye on the big cloud vendors – use it to build software as a service.

In February, Redis announced a further modification to its license. Speaking to CloudTech at the time, Bengal noted that, AWS aside, ‘the mood [was] trying to change’ and, as this publication put it, ‘inferred that partnerships between some cloud providers and companies behind open source projects may not be too far away.’

With that question now solved, it was interesting to note the way Google approached discussing its customer base – and it is here where another potential flashpoint could be analysed.

Google frequently cited three industries as key to its customer base: retail, healthcare, and financial services. More than once, the company noted it worked with seven of the top 10 retailers. This is noteworthy because, as many industry watchers will recall, retail organisations have made noises about moving away from AWS for fears over Amazon’s retail arm. This has ranged from a full-throated roar in the case of Walmart, to more of a mid-range mew from Albertson’s after the latter signed a deal with Microsoft in January.

Kurian cited this ‘industry cloud’ capability as one of Google’s three bulwarks with regard to its strategy. Building out its global infrastructure was seen as key, with Google CEO Sundar Pichai announcing two new data centre locations in Seoul and Salt Lake City. Pichai added, to illustrate the scale of Google’s expansion, that in 2013 the company’s planned footprint amounted to two Eiffel Towers in terms of steel. Today, this has been expanded to at least 20. The other aspect was around offering a digital transformation path augmented by Google’s AI and machine learning expertise.

From the partner side David Goeckeler, EVP and GM of Cisco’s networking and security business, noted how the two companies had a similar forward-looking feel to cloud deployments. Cloud had traditionally been very application-centric, which was a fair strategy, he noted. But the move has gone from there to having apps in the data centre, at the edge and more – and connecting all these users means enterprises have had to rearchitect for the demands of cloud.

“We start with the premise of hybrid and multi-cloud – the realities of the environment where all of our customers are living today,” he said. Sanjay Poonen, chief operating officer at VMware, noted VMware and Google had ‘embraced Kubernetes big time’, particularly through the acquisition of Heptio, and that alongside the deal for VeloCloud, there was a rosy future for the two companies in network. Poonen added many of the benefits of Anthos will extend to hyperconverged infrastructure – an area he had been recently grandstanding in typically ebullient style.

Various new customers were also announced, from retail in the shape of Kohl’s, to healthcare in the form of Philips, and Chase and ANZ Bank from finance. Philips group CIO Alpna Doshi took to the stage to say it had put 2000 apps on Google’s cloud.

Kurian made his speaking debut as Google Cloud boss in February at a Goldman Sachs conference in San Francisco. The talk focused predominantly on Google’s enterprise-laden focus, with Kurian calling out larger, more traditional companies – a continual weakness for the company’s cloud arm – as well as exploring deals with systems integrators.

In November, when it was announced that Diane Greene would step down and Kurian would replace her, consensus at the time predominantly revolved around Google’s lack of penetration relative to the top two in cloud infrastructure – namely Azure and AWS. However, this wasn’t an exclusive view. Speaking to this publication at the time, Nick McQuire, VP enterprise at CCS Insight, argued that Greene had “laid some pretty good foundations for Kurian to come in…we’ll see where they go from there.”

It would seem from today’s keynote that a much clearer path has been set. “Thomas Kurian’s message from day one is loud and clear: Google Cloud is taking hybrid and now multi-cloud very seriously,” said McQuire. “Enterprises continue to question whether to fully embrace a single public cloud – which workloads are best to ‘lift and shift’ from a cost, security and compliance perspective – or how to avoid supplier lock-in, one of their biggest concerns at the moment.

“With the arrival of Anthos and in particular its support of open source, particularly Kubernetes, Google is now taking a much more realistic path in meeting customers where they are on their cloud journeys and is aiming to become the standard in hybrid, multi-cloud services in this next phase of the cloud market,” McQuire added.

You can find out more about Google Next 19 here.

Picture credit: Google Next/Screenshot

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

Kubernetes at @CloudEXPO Silicon Valley | #CloudNative #Containers #DevOps #Monitoring #Serverless #Docker #Kubernetes

As you know, enterprise IT conversations over the past year have often centered upon the open-source Kubernetes container orchestration system. In fact, Kubernetes has emerged as the key technology — and even primary platform — of cloud migrations for a wide variety of organizations.

Kubernetes is critical to forward-looking enterprises that continue to push their IT infrastructures toward maximum functionality, scalability, and flexibility.

As they do so, IT professionals are also embracing the reality of Serverless architectures, which are critical to developing and operating real-time applications and services. Serverless is particularly important as enterprises of all sizes develop and deploy Internet of Things (IoT) initiatives.


Atmosera Named “Technology Sponsor” of @CloudEXPO | @Atmosera #HybridCloud #CIO #DataCenter #Serverless #Monitoring

Atmosera delivers modern cloud services that maximize the advantages of cloud-based infrastructures. Offering private, hybrid, and public cloud solutions, Atmosera works closely with customers to engineer, deploy, and operate cloud architectures with advanced services that deliver strategic business outcomes. Atmosera’s expertise simplifies the process of cloud transformation and our 20+ years of experience managing complex IT environments provides our customers with the confidence and trust that they are being taken care of.


Data Loss Prevention Techniques at @CloudEXPO | @ShieldXNetworks #Cloud #AI #AIOps #Serverless #DevSecOps #DataCenter

ShieldX’s CEO and Founder, Ratinder Ahuja, believes that traditional security solutions are not designed to be effective in the cloud. The role of Data Loss Prevention must evolve in order to combat the challenges of changing infrastructure associated with modernized cloud environments. Ratinder will call out the notion that security processes and controls must be equally dynamic and able to adapt for the cloud. Utilizing four key factors of automation, enterprises can remediate issues and improve their security posture by maximizing their investments in legacy DLP solutions. The factors include new infrastructures opening up, public cloud, fast services and appliance models to fit in the new world of cloud security.
