TechTarget “Media Sponsor” of @CloudExpo | @TechTarget #DevOps #IoT #AI

TechTarget storage websites are the best online information resource for news, tips and expert advice for the storage, backup and disaster recovery markets. By creating abundant, high-quality editorial content across more than 140 highly targeted technology-specific websites, TechTarget attracts and nurtures communities of technology buyers researching their companies’ information technology needs. By understanding these buyers’ content consumption behaviors, TechTarget creates the purchase intent insights that fuel efficient and effective marketing and sales activities for clients around the world.

read more

Cloud Academy to Exhibit at @CloudExpo | @CloudRank #AWS #Docker #Azure

SYS-CON Events announced today that Cloud Academy will exhibit at SYS-CON’s 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. Cloud Academy is the industry’s most innovative, vendor-neutral cloud technology training platform. Cloud Academy provides continuous learning solutions for individuals and enterprise teams for Amazon Web Services, Microsoft Azure, Google Cloud Platform, and the most popular cloud computing technologies. Get certified, manage the full lifecycle of your cloud-based resources, and build your knowledge base using Cloud Academy’s expert-created content, comprehensive Learning Paths, and innovative Hands-on Labs.

read more

[session] How to IoTify | @ThingsExpo #BigData #IoT #M2M #AI #ML #PAaaS

Almost two-thirds of companies either have or soon will have IoT as the backbone of their business. However, IoT is far more complex than most firms expected, and a majority of IoT projects have failed. How can you avoid these pitfalls? In his session at @ThingsExpo, Tony Shan, Chief IoTologist at Wipro, will introduce a holistic method of IoTification, which is the process of IoTifying existing technology portfolios and business models to adopt and leverage IoT. He will delve into the components of this framework: Anatomy, Ramp-up, Use case, Business case, Architecture, Technology selection, Implementation, and Platform (ARUBA TIP). The interdisciplinary techniques and anti-patterns of this method will be discussed, along with best practices and lessons learned, to help organizations transform to and enable IoT more effectively.

read more

Goodbye, Open Source: A New DNS for a New Internet | @CloudExpo #BI #API #SaaS #Cloud

BYOD. IoT. Cloud computing. DevOps. IT professionals today have more demands (and more acronyms) on their time than ever. Application development and delivery are changing rapidly and increasing in complexity, revealing the limitations of traditional DNS approaches in achieving modern network goals.
The IT department has made use of open source platforms for decades to provide DNS and traffic management in their internal networks and for their public, internet-facing services. The majority of DNS servers, both on the internet and in enterprise intranets, are open source-based solutions such as BIND, djbdns, PowerDNS, gdnsd and NSD.

read more

Alibaba Expands Again

The cloud computing industry is moving at a rapid pace, and every player is under a lot of pressure to constantly expand and innovate to keep pace with the intense competition. Recently, Google opened Spanner, the flagship database that has been driving its own operations for years, to cloud users. Microsoft is coming up with a slew of new features such as Azure IP Advantage and Azure AD B2B. AWS, the market leader, is entering into lucrative partnerships, with Snap being its latest customer. Not to be left behind is Alibaba – the Chinese giant that is making a big impact in the global cloud computing industry.

The company is already pursuing an aggressive expansion plan, having opened new data centers in Australia, Japan and Dubai to meet the growing needs of its customers in those regions. Besides setting up new centers, Alibaba has more than doubled its existing facility in Hong Kong to better serve its Asia-Pacific customers. Primarily, this expansion will meet demand in the areas of disaster recovery, data storage and analytics, middleware, and cloud security services. At present, Alibaba is the largest cloud computing service provider in China.

The company has announced that this expansion is in tune with its strategy to become the preferred cloud service provider the world over. It sees itself as a competitor to Amazon Web Services (AWS) – the largest provider in the world today.

So, why are companies scrambling so hard to dominate the cloud industry? The cloud industry grew by 16.5 percent in 2016, and Gartner sees the same level of growth in 2017. Organizations around the world, regardless of their size, want to move away from legacy systems to cloud services to tap into the many benefits that come with them. Currently, AWS and Microsoft account for the lion’s share, though other companies like Google and IBM are catching up.

Alibaba wants to join this party too, and this is why it is embarking on a “build” approach. In this model, the company builds new facilities and expands existing ones, in the belief that such grand facilities will bring in users. This strategy is slightly different from the one followed by companies like Google, which aim to widen their offerings so customers get the best value for their money.

So far, this strategy has paid rich dividends for Alibaba, which reported growth of 115 percent over the last year. Though its revenue of $254 million is a drop in the bucket compared to the revenues of AWS, Microsoft and Google, that growth has come in a short time. Over the last year alone, its customer base increased by 100 percent. For the sake of comparison, Amazon’s revenue was $10 billion in 2016.

Currently, Alibaba has operations in 14 global centers, including Dubai, London, Sydney, Hong Kong and mainland China. At this rate of growth and expansion, we can soon expect Alibaba to give the market leaders stiff competition.

The post Alibaba Expands Again appeared first on Cloud News Daily.

Software-defined storage, hybrid clouds, and getting the performance right: A guide

(c)iStock.com/4x-image

In today’s data-intensive, hyper-connected world, storage solutions built on the vertical scaling model have become impractical and expensive. Enterprises are driven by the need to remain competitive while storing and managing petabytes of data. In this vein, ESG senior analyst Mark Peters believes that there is at last a straight line between how storage is configured and business value – if organisations can get it right. Vertical scaling is a legacy approach that cannot provide the performance or cost-effectiveness organisations need today, but the adoption of software-defined storage is now enabling data centres to scale competitively.

Another development assists in this goal. Hybrid cloud offers a way for organisations to gain the maximum amount of business flexibility from cloud architectures, which helps maximise budget efficiency and performance at the same time. Many storage professionals are still on the learning curve with hybrid cloud architectures because they are so new, and are just beginning to grasp the benefits and challenges associated with deploying a hybrid cloud approach.

This article will offer design elements to present to your customers so that their hybrid clouds can deliver the performance, flexibility and scalability they need.

How scale-out NAS factors in

The linchpin that will make this hybrid cloud storage solution possible is a scale-out NAS. Since hybrid cloud architectures are relatively new to the market—and even newer in full-scale deployment—many organisations are unaware of the importance of consistency in a scale-out NAS (network attached storage).

Many environments are eventually consistent, meaning that files written to one node are not immediately accessible from other nodes. This can be caused by an improper implementation of the protocols, or by integration with the virtual file system that is not tight enough. The opposite is being strictly consistent: files are accessible from all nodes at the same time. Compliant protocol implementations and tight integration with the virtual file system are a good recipe for success.
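
To make the distinction concrete, here is a toy Python sketch – not tied to any particular NAS product, with hypothetical class names – that contrasts a strictly consistent cluster, where a write is visible on every node before it is acknowledged, with an eventually consistent one, where other nodes only catch up later:

```python
class Node:
    """A hypothetical storage node holding its local view of the file system."""
    def __init__(self, name):
        self.name = name
        self.files = {}  # path -> contents

    def read(self, path):
        return self.files.get(path)


class StrictCluster:
    """Strictly consistent: a write is applied on every node before it is
    acknowledged, so a read from any node immediately sees the new file."""
    def __init__(self, nodes):
        self.nodes = nodes

    def write(self, path, data):
        for node in self.nodes:  # synchronous update everywhere
            node.files[path] = data


class EventualCluster:
    """Eventually consistent: a write lands on one node and propagates later,
    so other nodes may temporarily not see a freshly written file."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.pending = []  # updates awaiting propagation

    def write(self, path, data):
        self.nodes[0].files[path] = data
        self.pending.append((path, data))

    def propagate(self):
        for path, data in self.pending:
            for node in self.nodes[1:]:
                node.files[path] = data
        self.pending.clear()


a, b = Node("a"), Node("b")
StrictCluster([a, b]).write("/projects/report.txt", "v1")
print(b.read("/projects/report.txt"))    # 'v1' straight away

c, d = Node("c"), Node("d")
cluster = EventualCluster([c, d])
cluster.write("/projects/report.txt", "v1")
print(d.read("/projects/report.txt"))    # None until propagation catches up
cluster.propagate()
print(d.read("/projects/report.txt"))    # 'v1' only after the catch-up
```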

In an ideal set-up, a scale-out NAS hybrid cloud architecture will be based on three layers. Each server in the cluster will run a software stack based on these layers, as sketched in code after the list below.

  • The persistent storage layer is layer one. It is based on an object store, which provides advantages like extreme scalability. However, the layer must be strictly consistent in itself.
  • Layer two is the virtual file system, which is the core of any scale-out NAS. It is in this layer that features like caching, locking, tiering, quota and snapshots are handled.
  • Layer three holds the protocols like SMB and NFS but also integration points for hypervisors, for example.
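
As a rough illustration of this layering – with purely hypothetical class names rather than any vendor’s API – the sketch below shows a request entering through a protocol front end, passing through the virtual file system, and landing in a strictly consistent object store:

```python
class ObjectStore:
    """Layer one: persistent, strictly consistent object store."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


class VirtualFileSystem:
    """Layer two: the core of the scale-out NAS. A real implementation would
    also handle caching, locking, tiering, quotas and snapshots here."""
    def __init__(self, store):
        self.store = store

    def write_file(self, path, data):
        self.store.put("file:" + path, data)

    def read_file(self, path):
        return self.store.get("file:" + path)


class NFSFrontend:
    """Layer three: protocol layer (SMB, NFS, hypervisor integration points)
    exposing the virtual file system to clients."""
    def __init__(self, vfs):
        self.vfs = vfs

    def handle_write(self, path, data):
        self.vfs.write_file(path, data)

    def handle_read(self, path):
        return self.vfs.read_file(path)


# Each server in the cluster runs this same stack.
frontend = NFSFrontend(VirtualFileSystem(ObjectStore()))
frontend.handle_write("/vm-images/web01.img", b"disk image bytes")
print(frontend.handle_read("/vm-images/web01.img"))
```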

It is very important to keep the architecture symmetrical and clean. If organisations manage to do that, many future architectural challenges will be much easier to solve.

The storage layer deserves closer examination. Because it is based on an object store, we can easily scale our storage solution, and with a clean and symmetrical architecture we can reach exabytes of data and trillions of files.

The storage layer is responsible for ensuring redundancy, so a fast and effective self-healing mechanism is needed. To keep the data footprint low in the data centre, the storage layer needs to support different file encodings. Some are good for performance and some for reducing the footprint.
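
As a purely illustrative comparison of two common encodings – full replication, which favours performance, and erasure coding, which reduces the footprint – the snippet below works out the raw-capacity overhead of each; the exact schemes a given product supports will differ:

```python
def replication_overhead(copies):
    """Raw capacity needed per usable byte when keeping full copies (fast, simple)."""
    return copies


def erasure_overhead(data_fragments, parity_fragments):
    """Raw capacity per usable byte with an erasure code (smaller footprint,
    more CPU work to encode/decode and to self-heal)."""
    return (data_fragments + parity_fragments) / data_fragments


for label, raw in [("3x replication", replication_overhead(3)),
                   ("8+3 erasure code", erasure_overhead(8, 3))]:
    print(f"{label}: {raw:.2f}x raw capacity ({(raw - 1) * 100:.1f}% overhead)")

# 3x replication: 3.00x raw capacity (200.0% overhead)
# 8+3 erasure code: 1.38x raw capacity (37.5% overhead)
```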

Dealing with metadata

Why is metadata such a vital component of the virtual file system? In a virtual file system, metadata are pieces of information that describe the structure of the file system. For example, one metadata file can contain information about what files and folders are contained in a single folder in the file system. That means that we will have one metadata file for each folder in our virtual file system. As the virtual file system grows, we will get more and more metadata files.

Metadata can be stored centrally, which may be fine for smaller set-ups, but here we are talking about scale-out. So, let’s look at where not to store metadata: keeping metadata on a single server leads to poor scalability, poor performance and poor availability. Since our storage layer is based on an object store, a better place to store all our metadata is in the object store itself – particularly when we are talking about high quantities of metadata. This ensures good scalability, good performance and good availability.
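
A minimal sketch of that idea – one metadata object per folder, kept in the same object store as the data – is shown below; the key scheme and layout are illustrative only:

```python
import json


class ObjectStore:
    """Stand-in for the strictly consistent object store from layer one."""
    def __init__(self):
        self._objects = {}

    def put(self, key, value):
        self._objects[key] = value

    def get(self, key):
        return self._objects.get(key)


def write_folder_metadata(store, folder_path, entries):
    """One metadata object per folder, listing its files and subfolders.
    Spreading these objects across the store avoids a single metadata server."""
    store.put("meta:" + folder_path, json.dumps({"entries": entries}))


def list_folder(store, folder_path):
    raw = store.get("meta:" + folder_path)
    return json.loads(raw)["entries"] if raw else []


store = ObjectStore()
write_folder_metadata(store, "/projects", ["report.docx", "data/"])
write_folder_metadata(store, "/projects/data", ["q1.csv", "q2.csv"])
print(list_folder(store, "/projects"))       # ['report.docx', 'data/']
print(list_folder(store, "/projects/data"))  # ['q1.csv', 'q2.csv']
```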

Meeting the need for speed

Performance can be an issue with software-defined storage solutions, so they need caching devices to compensate. From a storage solution perspective, both speed and size matter – as does price; finding the sweet spot is important. For an SDS solution, it is also important to protect the data at a higher level by replicating it to another node before destaging it to the storage layer.
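
The write path described above – land the write in a fast cache, replicate it to a peer node for protection, then destage it to the storage layer – can be sketched as follows; the classes are hypothetical and ignore failure handling:

```python
class CacheDevice:
    """Stand-in for a fast caching device (e.g. an SSD) on one node."""
    def __init__(self):
        self.entries = {}

    def store(self, path, data):
        self.entries[path] = data

    def evict(self, path):
        return self.entries.pop(path)


class StorageLayer:
    """Stand-in for the object-store-backed persistent layer."""
    def __init__(self):
        self.objects = {}

    def put(self, path, data):
        self.objects[path] = data


class NodeWritePath:
    def __init__(self, local_cache, peer_cache, storage):
        self.local_cache = local_cache  # caching device on this node
        self.peer_cache = peer_cache    # cache on another node, for protection
        self.storage = storage

    def write(self, path, data):
        # 1. Land the write in the local cache for speed.
        self.local_cache.store(path, data)
        # 2. Replicate to a peer node's cache before acknowledging,
        #    so a single node failure cannot lose the data.
        self.peer_cache.store(path, data)
        return "ack"

    def destage(self, path):
        # 3. Later, flush the cached write down to the storage layer
        #    and drop the now-redundant cache copies.
        self.storage.put(path, self.local_cache.evict(path))
        self.peer_cache.evict(path)


storage = StorageLayer()
node = NodeWritePath(CacheDevice(), CacheDevice(), storage)
node.write("/db/wal-0001", b"log records")
node.destage("/db/wal-0001")
print("/db/wal-0001" in storage.objects)  # True
```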

As the storage solution grows in both capacity and features, particularly in virtual or cloud environments, supporting multiple file systems and domains becomes more important. Different applications and use cases also prefer different protocols, and sometimes the same data needs to be accessible across different protocols.

NAS, VMs and SDS

Hypervisors will also need support in the hybrid cloud, so the scale-out NAS needs to be able to run hyper-converged as well; being software-defined makes sense here.

When there are no external storage systems, the scale-out NAS must be able to run as a virtual machine and make use of the hypervisor host’s physical resources. The guest virtual machines’ (VMs’) own images and data will be stored in the virtual file system that the scale-out NAS provides. The guest VMs can use this file system to share files among themselves, making it a good fit for VDI environments as well.

Now, why is it important to support many protocols? Well, in a virtual environment there are many different applications running, each with different protocol needs. By supporting many protocols, we keep the architecture flat and, to some extent, gain the ability to share data between applications that speak different protocols.

What we end up with is a very flexible and useful storage solution. It is software-defined, supports both fast and energy-efficient hardware, has an architecture that allows us to start small and scale up, supports bare-metal as well as virtual environments, and has support for all major protocols.

Sharing the file system

Because there are multiple sites, each of them will have its own independent file system. A likely scenario is that different offices have a need for both a private area and an area that they share with other branches. So only parts of the file system will be shared with others.

Allowing one part of the file system to be mounted at any given point in the other sites’ file systems provides the flexibility needed to scale the file system outside the four walls of the office. Synchronisation should happen at the file system level so that all sites have a consistent view of the file system. Being able to specify different file encodings at different sites is also useful, for example if one site is used as a backup target.
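
One way to picture this shared-versus-private split is as a per-site policy that maps a shared area into each site’s otherwise independent namespace; the layout below is purely illustrative:

```python
# Hypothetical per-site layout: each office keeps a private area and mounts
# the shared area at a point of its own choosing in its local file system.
SITES = {
    "london": {
        "private_root": "/lon/private",
        "shared_mount": "/lon/shared",           # where the shared area appears
        "encoding": "performance",               # fast encoding for active work
    },
    "sydney": {
        "private_root": "/syd/private",
        "shared_mount": "/syd/projects/shared",
        "encoding": "performance",
    },
    "backup-site": {
        "private_root": "/bak/private",
        "shared_mount": "/bak/shared",
        "encoding": "compact",                   # footprint-reducing encoding for a backup target
    },
}


def is_shared(site, path):
    """True if a path at the given site falls inside the shared area and must
    therefore be synchronised at the file system level across sites."""
    return path.startswith(SITES[site]["shared_mount"])


print(is_shared("london", "/lon/shared/specs/plan.pdf"))  # True
print(is_shared("sydney", "/syd/private/notes.txt"))      # False
```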

Flexible, scale-out storage

When all of the above considerations are implemented, they create a next-generation hybrid cloud system. One file system spans all servers so that there are multiple points of entry to prevent performance bottlenecks. The solution offers flash support for high performance and native support of protocols. Scale-out is flexible; just add a node. The solution is clean and efficient, enabling linear scaling up to exabytes of data. It is an agile, cost-efficient approach to data centre expansion.

Where is the true state of the cloud in 2017? Analysing two influential reports

(c)iStock.com/Serjio74

Two reports have hit this publication’s inbox around the ‘state of the cloud’ in recent days; a study from RightScale argues the market is growing at a solid clip, while the summary from Bessemer Venture Partners’ (BVP) Byron Deeter found an industry which struggled at the start of 2016 but has since roared back.

While the two reports differ – RightScale focuses on adoption and vendors, while Deeter looks almost exclusively at the financial angle – there are common threads. Here, we look at the standout takeaways from both studies:

IPOs down but M&A skyrockets

Deeter affirms a point this publication has previously been making; IPOs in the cloud space have run dry. The total figure of five – Twilio, the standout, alongside Blackline, Coupa, Apptio and Everbridge – is the lowest since the financial crisis of 2008. Yet this, compounded with the resurgence of the industry towards the end of last year, means a huge amount of merger and acquisition activity as companies jockey for position.

Companies acquired represent 40% of the $300 billion market cap, including LinkedIn (bought by Microsoft), NetSuite (bought by Oracle) and AppDynamics (acquired by Cisco just last month).


The top 100 private cloud companies, as noted by Forbes in September with Slack, Dropbox and DocuSign at the summit, represent more than $100 billion of private enterprise value alone, Deeter adds.

AWS stays flat while Microsoft builds momentum

RightScale’s report surveyed more than 1,000 technology professionals across a wide range of industries, and covered the full gamut, from vendors to DevOps tools to multi-cloud strategy. Regarding the IaaS race, the report found that while Amazon Web Services (AWS) usage stayed flat year over year, used by 57% of respondents, Azure adoption went up from 20% to 34%.

This trend has been apparent for several months, with Microsoft edging ahead of the pack and clearly into second place, and Amazon’s gargantuan share slowly being eaten away. AWS posted $3.5bn in revenue for the most recent quarter, up from $3.2bn in Q3.

As Synergy Research put it in their analysis earlier this month, a few cloud providers are growing at ‘extraordinary’ rates yet AWS ‘has no intention of letting its crown slip’. According to those polled, 41% of workloads on average run in the public cloud compared to 38% in private – although this number still marginally favours private when it comes to enterprises – with overall private cloud usage falling from 77% to 72% of respondents year on year.

On another note, Workday, one of the key bellwethers in Deeter’s market cap analysis – and particularly so given LinkedIn’s acquisition – migrated over to AWS as its preferred public cloud supplier back in November.

Follow Dropbox and Slack’s lead if you want to grow

As this publication reported earlier this month, Dropbox announced it had become the fastest SaaS company to hit the $1 billion revenue run rate threshold. Unlike its contemporary Box, Dropbox is ‘not in any rush’ to go public any time soon, according to a Business Insider report. Yet a slide from BVP argues this is ‘the new growth standard’.

Looking further down the scale, BVP argues that if you want to be the best, your company needs to take no more than two years to get to $10m in annualised run rate, and five years to move to $100m. Part of this is down to the astonishing growth of Slack.

A report from Okta in January analysing enterprise applications described the company’s outlook as ‘nothing short of jaw-dropping’. One executive this reporter spoke with – formerly of Microsoft’s parish – said there may be an element of not using Microsoft products because of the brand heritage and thinking ‘what is another enterprise tool which isn’t based around email but around communications…oh, Slack’ about its success, although adding the company had “done extremely well [with a] useful product.”

DevOps salad days and skills gap challenges

RightScale found that 30% of enterprise respondents are today adopting DevOps throughout the whole company; a number which went up from 21% this time last year. Docker, with 35% of the vote, was the most popular tool, ahead of Chef (28%), Puppet (28%), and Kubernetes (14%).

The question of DevOps and how organisations are doing it usually leads to concerns over a lack of skills to realise the implementation. Going against the grain of other recent research, fewer RightScale survey respondents in 2017 said lack of resources and expertise was their biggest challenge, down to 25% from 32%.

According to a report from Claranet earlier this month, financial services organisations are the trailblazers when it comes to DevOps, while Robert Half Technology found that almost three quarters of UK CIOs polled frequently encounter IT professionals who do not meet their requirements.

You can find out more about the RightScale report here, and the BVP report here.

Main picture credits: Bessemer Venture Partners

What is Google Cloud Spanner?

Google is going all-out with its arsenal to take on Microsoft and AWS in a growing cloud war. Its latest product is Cloud Spanner, a globally distributed database that has helped drive Google to become one of the best tech companies in the world. While Spanner itself is not new – it has been running inside Google for years – this is the first time Google is opening it up to its customers.

Spanner is the database that has been powering Google for years. It all started when a group of engineers came together to create a database and a system that seemed to defy all logic. Called Spanner, this database was a mechanism to store information across millions of machines spanning dozens of data centers on multiple continents. The best part is that Spanner behaves as a single database, even though it is spread across the world.

Today, it is the underlying technology for all of Google’s services such as Gmail, Google Photos and its most important revenue-generating product, Adwords.

Now, for the first time ever, Google is opening up this technology to the world by branding it Cloud Spanner. Customers can rent capacity on Cloud Spanner and use it for their own apps and products – exactly the same system Google uses for its in-house operations.

Spanner uses SQL for querying, so most programmers who have worked with popular relational databases such as SQL Server, Oracle and DB2 should be familiar with it. This translates to little or no training to use Cloud Spanner, and customers can start making the most of it from day one. At the same time, Spanner is a flexible database that can expand to hold any amount of data, so scalability is never an issue.
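
For illustration, querying Cloud Spanner looks much like querying any other SQL database. The sketch below uses the google-cloud-spanner Python client; the instance, database and table names are placeholders:

```python
# pip install google-cloud-spanner
from google.cloud import spanner

# Placeholder identifiers: substitute your own instance and database.
client = spanner.Client()
instance = client.instance("my-instance")
database = instance.database("my-database")

# Familiar SQL syntax, executed against a globally distributed database.
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(
        "SELECT CustomerId, Name FROM Customers WHERE Region = @region",
        params={"region": "APAC"},
        param_types={"region": spanner.param_types.STRING},
    )
    for row in results:
        print(row)
```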

On top of that, Cloud Spanner is hosted in Google’s data centers, which means that, like other Google products, it is protected against hardware failures and cyber attacks. In other words, customers get to use Google’s patented technology for their own apps and products, at a fraction of the cost of developing such a massive system themselves.

For customers who don’t want to build apps on Cloud Spanner, Google offers a product called Cloud SQL, which is similar to traditional database software. In addition, customers can use the BigQuery data analysis engine alongside both platforms for big data queries. With this move, Google has empowered its customers in a big way, as they can choose either Cloud Spanner or Cloud SQL depending on their business needs.

This is a significant move by Google, and one that can shake up the cloud market. Over the last few months, Google has tried a range of strategies to counter the dominance of AWS and the fast-growing Microsoft Azure, but has seen only limited success. Since Spanner is exclusively available on Google Cloud, there is an increased chance that customers will choose Google Cloud over Azure and AWS.

The post What is Google Cloud Spanner? appeared first on Cloud News Daily.