Intel adds cloud support for Unite collaboration platform


Keumars Afifi-Sabet

10 Jun, 2019

Intel will target small and medium-sized businesses (SMBs) with a significant cloud upgrade to its flagship Unite communications platform.

The four-year-old system has traditionally required customers to install physical hardware, at a cost, in order to integrate Intel’s collaboration and video conferencing tools. From Wednesday 12 June, however, the firm hopes to eliminate these barriers and pave the way for smaller companies to adopt the platform.

The firm is also seeking to expand into new areas such as schools and hospitals. One example might be a doctor using pre-installed screens to communicate information to a patient instead of relying on handwritten notes or a tablet device.

The Unite platform itself is built on Intel vPro PCs, CPUs, chipsets and Wi-Fi components, which allow for a secure hardware encryption engine as well as remote management. It will also support a wider array of integrated apps, ranging from unified communications tools like Cisco Webex to AV systems such as Panacast.

Fundamentally, Intel wants to introduce a baseline level of technology across an organisation, in rooms of varying sizes, to ensure workflows are continuous and colleagues can collaborate anywhere. These areas include huddle spaces, medium-sized collaboration spaces and the boardroom.

The largest change involves adding a cloud-powered rotating PIN service that provides managed security and login between the Unite hub PC and a device running the Unite app. This has been designed to ensure that only people meant to attend a meeting hosted by Unite are allowed access to it, and bypasses the need for an on-premise server to handle PIN orchestration.  
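Intel has not published the underlying mechanism, but conceptually a rotating PIN can be thought of as a time-based one-time code derived from a secret shared between the Unite hub and the cloud service. The sketch below is purely illustrative and is not Intel’s implementation; the shared secret, rotation interval and PIN length are all assumptions.

import hmac, hashlib, struct, time

def rotating_pin(shared_secret: bytes, interval: int = 300, digits: int = 6) -> str:
    """Derive a PIN that changes every `interval` seconds (HOTP/TOTP-style)."""
    counter = int(time.time()) // interval               # current time window
    mac = hmac.new(shared_secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation, as in RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hub and client app derive the same PIN from the same (hypothetical) secret; in a
# cloud-managed design that secret would be distributed and rotated by the service
# rather than by an on-premises server.
# print(rotating_pin(b"example-shared-secret"))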

“This is going to obviously give more deployment choice for existing customers,” said Tom Loza, the company’s global director for sales of Unite. “It will provide potentially, for those customers that are on-prem to move to the cloud, a lower maintenance cost of the solution. And just give a broader, more simple managed solution to our small business customers.”

Launched as a wireless sharing platform in 2015, Unite has since added a host of additional capabilities over time, including full client device support and moderator controls. Intel said these changes are all the result of user feedback, as is the cloud launch.

The upgrade not only opens new markets to Intel, Loza noted, but enables further scaling through channel partners, and expands the capabilities of these firms by signing them up to dedicated training programmes.

How the combination of cloud and AI is influencing IT investment strategy

The pace of change from a traditional capital-intensive IT infrastructure model to a more flexible hybrid multi-cloud services model is influencing enterprise spending trends across the globe.

Worldwide IT spending is forecast to total $3.79 trillion in 2019 — that's an increase of just 1.1 percent from 2018, according to the latest global market study by Gartner.

IT infrastructure market development

"Currency headwinds fuelled by the strengthening US dollar have caused us to revise our 2019 IT spending forecast down from the previous quarter," said John-David Lovelock, vice president at Gartner. "Through the remainder of 2019, the US dollar is expected to trend stronger, while enduring tremendous volatility due to uncertain economic and political environments and trade wars."

Technology product managers will have to get more strategic with their portfolio mix in 2019, balancing products and services that will post growth with those larger markets that will trend flat to down.

According to the Gartner assessment, successful IT product managers in 2020 will be those who took a long-term view of the changes made in 2019.

The data centre systems segment will experience the largest decline in 2019 with a decrease of 2.8 percent. This is mainly due to the expected lower average selling prices (ASPs) in the server market driven by adjustments in the pattern of expected component costs.

Moreover, the shift of enterprise IT spending from traditional (non-cloud) offerings to new, cloud-based alternatives is continuing to drive growth in the enterprise software market.

In 2019, the market is forecast to reach $427 billion; that's up 7.1 percent from $399 billion in 2018. The largest cloud shift has so far occurred in application software.

However, Gartner expects increased growth for the infrastructure software segment in the near-term, particularly in integration platform as a service (iPaaS) and application platform as a service (aPaaS).

"The choices CIOs make about technology investments are essential to the success of a digital business. Disruptive emerging technologies, such as artificial intelligence (AI), will reshape business models as well as the economics of public- and private-sector enterprises. AI is having a major effect on IT spending, although its role is often misunderstood," said Mr. Lovelock.

Outlook for AI applications spending growth

Gartner believes that AI is not a product; rather, it is a set of techniques, or a computer engineering discipline. As such, AI is being embedded in many existing products and services, as well as being central to new development efforts in every industry.

Gartner’s AI business value forecast predicts that organisations will receive $1.9 trillion worth of benefit from the use of AI this year alone.

Delusions of infrastructure grandeur: How cloud-native brings its own complexity

It is a truth universally acknowledged that managing fewer things is easier than managing lots of things. Yet, why do so many of us in tech exalt "scale" as a paramount virtue? The cloud-native arena is a particularly interesting focal point for this exact debate.

The cloud-native community is disrupting many long-held technological conventions, making us rethink how we should build the systems of tomorrow. However, many of the tools, platforms, and practices coming out of that community have been extracted from the largest technology companies on Earth. These companies dominate the cloud-native computing landscape: its technology, its evangelism and its revenue.

It should therefore surprise nobody to find that cloud-native architectures introduce a ton of new complexity to the uninitiated (see Conway's Law). Everything is built and packaged up as containers, everything is scaled-out, everything is distributed, with radically different ways to deploy, operate, debug, and optimize the system. This is why platforms like Kubernetes are so critical to managing it all.

But Kubernetes wasn't designed for the masses. It came from Google, designed by Google engineers to help other Google engineers solve mostly Google-scale problems. If you have lots of overlap on that particular Venn diagram, then it's a clear, great choice. But what about those who don't?

Simplicity is always in fashion

The first question you should ask yourself is what your needs truly are. Do your apps genuinely need a massive level of scale to succeed, with all the complexity that implies? Do you need 100 servers when five powerful ones would do? Do you need to break up your app into microservices, or would refactoring and tweaking your monolith suffice? Do you need Kubernetes, or would a PaaS work? Do you have the people and skills on hand to make any of these initiatives succeed?

Infrastructure shouldn't exist just to exist; it exists to run something useful on top of it. Scale is a means to an end. Taking a step back and understanding what your applications genuinely need to thrive, and what the tradeoffs are with each possible approach, is essential.

Why orchestration is key to limiting complexities

In the cloud-native world, nearly every single task involves touching more than one "target." Higher-level abstractions make many things easier, but their inherently distributed nature makes many things more involved. The question becomes less about "what" is being managed and more about "how" to orchestrate an activity across lots of different domains – such as container platforms, build tooling, storage, networking, databases, monitoring, third party ticketing and deployment systems.
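As a loose illustration of that ‘one activity, many targets’ idea, the sketch below chains a few hypothetical domain-specific steps into a single orchestrated run; none of the function names correspond to a real tool’s API.

from typing import Callable, Dict, List, Tuple

def build_image(ctx: Dict[str, str]) -> None:
    print(f"build: {ctx['app']}:{ctx['version']}")

def update_ticket(ctx: Dict[str, str]) -> None:
    print(f"ticketing: move {ctx['ticket']} to 'deploying'")

def deploy_containers(ctx: Dict[str, str]) -> None:
    print(f"platform: roll out {ctx['app']}:{ctx['version']}")

def register_monitoring(ctx: Dict[str, str]) -> None:
    print(f"monitoring: add checks for {ctx['app']}")

def orchestrate(steps: List[Tuple[str, Callable[[Dict[str, str]], None]]], ctx: Dict[str, str]) -> None:
    """Run each domain-specific step in order, stopping on the first failure."""
    for name, step in steps:
        try:
            step(ctx)
        except Exception as exc:
            raise RuntimeError(f"step '{name}' failed; later steps skipped") from exc

orchestrate(
    [("build", build_image), ("ticketing", update_ticket),
     ("platform", deploy_containers), ("monitoring", register_monitoring)],
    {"app": "shop", "version": "1.4.2", "ticket": "OPS-123"},   # hypothetical values
)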

IT teams need to focus on finding orchestration tools that integrate with the things they have, cloud-native or not. Breadth of automation is critical; it's the foundation upon which you can solve all kinds of higher-level problems. And once you get to a certain level of complexity, automation becomes non-negotiable.

Day two and beyond

It's easy to focus on the architectural and deployment benefits of cloud-native infrastructure, yet forget that it's only after you've deployed your application that its life truly begins. Provisioning tools are great for handling day one of your application's life. But what about day two, and beyond? How do you reconfigure your application? How do you deploy a new version? How do you handle security breaches? How do you make changes in third party services your app relies upon?

Platforms like Kubernetes offer some really nice primitives for some of these issues. But they may not capture all the nuances of how your particular application needs to be operated, and they may not even apply to services running outside the platform (third-party logging, monitoring, or networking). These platforms can do a lot, but they can't magically make your applications manage themselves.
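One such primitive is the rolling update a Deployment performs when its pod template changes. The sketch below uses the official Kubernetes Python client to trigger one; the deployment name, namespace and image are hypothetical, and it assumes the container name matches the deployment name.

from kubernetes import client, config

def roll_out_new_version(deployment: str, namespace: str, image: str) -> None:
    """Patch a Deployment's container image; Kubernetes then performs a rolling update."""
    config.load_kube_config()                     # read credentials from the local kubeconfig
    apps = client.AppsV1Api()
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": deployment, "image": image}      # assumption: container name == deployment name
    ]}}}}
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

# roll_out_new_version("web", "default", "registry.example.com/web:2.0")   # hypothetical names

Useful as that is, it says nothing about the application-specific steps, such as schema migrations, cache warm-up or third-party reconfiguration, that day-two operations so often involve.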

As the cloud-native movement puts more of the application stack in the hands of developers to control, we'd all benefit from learning from the problems operations personnel have dealt with for years.

Joyent bids farewell to the public cloud in ‘difficult’ decision

It was one of the most innovative early-stage cloud vendors – but Joyent’s public cloud offering will be no more.

The company announced its departure from the public cloud space in a blog post today, narrowing its focus to customers of its single-tenant cloud offering.

Affected customers have five months to find a new home; a documentation page confirmed the Joyent Triton public cloud will reach end of life on November 9, while the company has separately put together a list of available partners, including Microsoft Azure and OVH.

Steve Tuck, Joyent president and chief operating officer (COO), cited strained resources in developing both its public cloud and single-tenant cloud as the reason behind a ‘difficult’ decision.

“To all of our public cloud customers, we will work closely with you over the coming five months to help you transition your applications and infrastructure as seamlessly as possible to their new home,” Tuck wrote. “We are truly grateful for your business and the commitment that you have shown us over the years; thank you.”

Joyent had been acquired by Samsung in 2016 after the Korean giant had explored Manta, the company’s object storage system, for implementation. Samsung liked the product so much that it bought the company outright; as Bryan Cantrill, CTO of Joyent, explained at the time, Samsung offered hardware to Joyent after the scale of its proposal proved too much for the startup to cope with.

Prior to the days of public cloud and infrastructure as a service (IaaS) domination from Amazon Web Services (AWS), Microsoft, Google, and other hyperscalers with frighteningly deep pockets, Joyent enjoyed a stellar reputation. The company was praised by Gartner, in its 2014 IaaS Magic Quadrant, for having a “unique vision”, as well as previously being the corporate steward of Node.js, growing it into a key standard for web, mobile, and Internet of Things (IoT) architectures.

“By providing [an] easy on-ramp to on-demand cloud infrastructure, we have had the good fortune to work with an amazing array of individuals and companies, big and small,” added Tuck.

Organisations need to ‘acknowledge challenges’ in not keeping 100% uptime, argues Veeam

It’s the big downtime downturn; according to a new study from Veeam, three in four organisations admit they are not able to meet users’ demands for uninterrupted access to applications and data.

The findings appear in the company’s latest Cloud Data Management Report, which surveyed more than 1,500 senior business and IT leaders across 13 countries. Ultimately, more sophisticated data management is an area in which Veeam positions itself as an expert – the company describes itself as the leader in ‘cloud data management’ – yet the stats are interesting.

In particular, the research found that lost data from mission-critical application downtime costs organisations more than $100,000 per hour on average, while app downtime translates to a cost of $20.1 million globally in lost revenue and productivity.

Evidently, the research has noted how organisations are struggling with their current data management methods. 44% of those polled said more sophisticated data management was critical to their organisation’s success in the coming two years. Four in five respondents said better data management strategies led to greater productivity, while two thirds found greater stability.

Perhaps surprisingly, software as a service (SaaS) adoption was not completely saturated among those polled; just over three quarters (77%) said they were already using it, with this number set to rise to 93% by the end of 2019. The golden nugget is how long it takes organisations to see the dividend of adopting new technologies: financial benefits arrive after nine months on average, with operational benefits arriving after approximately seven months.

“We are living in a data-driven age, and organisations need to wake up and take action to protect their data,” said Ratmir Timashev, Veeam co-founder and EVP sales and marketing. “Businesses must manage their data in a way that always delivers availability and leverage its value to drive performance. This is no longer a luxury, but a business necessity.

“There is a significant opportunity and competitive advantage for those who effectively manage their data,” Timashev added. “Ask yourself – are you confident that your business data will always be available? If you are unsure it’s time to act – and our study shows that many are not acting fast enough.”

You can find out more about the Veeam report here (email required).

Microsoft and Oracle team up on multi-cloud service


Bobby Hellard

6 Jun, 2019

Microsoft and Oracle have announced a partnership that will see them offer a combined service for customers wanting to migrate their workloads to the cloud.

The combined services will allow users of Oracle’s autonomous databases to connect with Microsoft services such as Azure analytics and AI. Users will be able to log on to services from either firm with a single joint user name.

According to Oracle, the majority of the world’s largest enterprises use its databases along with Microsoft services, running them side by side in their on-premises data centres. But migrating these workloads to the cloud can often leave them stranded on multiple cloud islands, with little ability to share data between the two.

With this alliance, both companies are aiming to give customers the ability to use multiple clouds seamlessly and with much greater effectiveness. It promises nimble apps that can shift from cloud to cloud easily, and even individual apps that span multiple clouds.

“With Oracle’s enterprise expertise, this alliance is a natural choice for us as we help our joint customers accelerate the migration of enterprise applications and databases to the public cloud,” said Scott Guthrie, head of Microsoft’s cloud unit.

Oracle’s vision for the partnership is that customers can run applications in separate clouds with consistent controls, and that these applications can span clouds, typically with the database layer in one cloud and the app and web tiers in another. This is enabled by a low-latency connection between the clouds that lets customers choose their preferred components for each application.

“Oracle is tremendously excited to give our customers the ability to leverage our technology alongside that of another industry leader with dramatically reduced friction,” said Vinay Kumar, VP of product management at Oracle.

“We see this as a first step down the path of greater choice, flexibility, and effectiveness for enterprise cloud usage. We’re eager to see what our customers will build with this new capability and where this alliance will take us and the industry.”

Currently, that industry is dominated by AWS, which has started a process of moving away from using Oracle databases. Back in August 2018, it was reported that this transition could take up to two years.

The cloud awakens: What needs to happen now to move from teenage kicks to adulthood

Having worked in the cloud computing arena for approaching 14 years, I have seen many changes in technology, strategy and clients’ views of cloud technology platforms and solutions.

Attitudes to cloud adoption have changed, going through many phases from ‘we’ll never go cloud’ to ‘we’ll use it in simple, non-critical areas’, through to today’s cloud-committed firms pushing to leverage cloud compute power across all possible areas. Alongside this has come a change in diligence and questions: fewer ‘why should we consider cloud?’ conversations and more mature questions relating to data security, access controls, portability and scalability.

Businesses have an increased focus on moving away from the world of custom code wherever possible to more repeatable cloud offerings, where configuration replaces custom development, reducing operational and maintenance costs and driving a faster time to market.

Cloud has changed the customer-to-vendor landscape dramatically in several ways, including:

  • Flattening of the market: Not so long ago, solutions designed for the enterprise required infrastructure, hardware and implementation that priced them out of reach for the average firm; cloud has removed this barrier, giving all firms equal access to the same rich power and functionality
  • Relationships: Many traditional vendors only engaged with their clients through resale channels; with cloud, the delivery mechanism allows vendors to reach customers directly and globally, and an increasing number of customers now have direct cloud vendor relationships
  • Financial: Cloud has changed models from an upfront capex approach to an opex subscription model, changing how the business views its IT assets and investment
  • Installation: Installing old solutions was a necessary evil with no true value in itself, simply a prerequisite to reaching the start line of configuring for your business. With cloud this is removed; deployment is near instantaneous, and all focus switches to the more valuable work of configuring to business needs and processes

For the enterprise vendors such as Oracle, this leads to a wider addressable market, where the cloud offering is affordable and applicable to all from very small to the largest of enterprises. Brands traditionally seen as expensive or addressing a specific market size segment can now broaden their appeal and value.

Cloud empowers removal of the ‘tech debt’ of focusing spend on keeping the lights on and maintaining the status quo, allowing a refocus on innovation and progression. The understanding and reasons to adopt cloud have moved from the infancy stage to the teenage years, moving past the ‘it’s cheaper’ mantra often sold in the early days to a more mature position of consideration. 

Today businesses may lead with cloud for a plethora of reasons from greater agility, a refocus of core efforts from keeping the lights on to focused innovation, through to making the business more attractive to the new employee economy where skilled millennials and ‘Zs’ look to join forward thinking agile firms. 

Often cloud is also adopted as a conduit to greater flexibility where businesses are acquiring and merging and need to unify processes across workforces quickly and at lower cost. Cloud makes absorption and growth easier: buy a company and extend your platforms to those users in minutes and hours, not weeks and months. Another driver comes from a need for organisational value, with investors favouring organisations that are agile, cloud-ready and using leading cloud brands for market advantage.

We have to remember that cloud encompasses SaaS, PaaS and IaaS alongside internal apps, so a multi-cloud approach is becoming the norm. Here customers have a breadth of options, from traditional brand names to newer born-in-the-cloud vendors. With exceptions, such as Oracle, most vendors play in only one or two of these cloud form factors, and a mix of cloud relationships will develop for the customer.

However, the path is still not cleared for easy and fast full cloud adoption, and before we enter the ‘adult stage’ of cloud we need to see some further progression. One example is burying the legacy tech mindset, in which a preference to develop and install locally often remains, protecting legacy technical skills and accreditations, perceived job security, and political and emotional drivers.

Cloud is the underlying enabler for so much, from big data and AI to IoT, that long term resistance is futile and the new generation entering business will look back wondering why it took so long for the barriers to come down. 

Microsoft and Oracle partner up to interconnect clouds – with retail customers cited

Here’s proof that cloudy collaboration can happen even at the highest levels: Microsoft and Oracle have announced an ‘interoperability partnership’ aimed at helping customers migrate and run mission-critical enterprise workloads across Microsoft Azure and Oracle Cloud.

Organisations that are customers of both vendors will be able to connect Azure and Oracle Cloud seamlessly. The Oracle Ashburn data centre and the Azure US East facilities are the only ones available for connection at this stage, although both companies plan to expand to additional regions.

The two companies will also offer unified identity and access management to manage resources across Azure and Oracle Cloud, while Oracle’s enterprise applications, such as JD Edwards EnterpriseOne and Hyperion, can be deployed on Azure with Oracle databases running in Oracle’s cloud.

“As the cloud of choice for the enterprise, with over 95% of the Fortune 500 using Azure, we have always been first and foremost focused on helping our customers thrive on their digital transformation journeys,” said Scott Guthrie, executive vice president for Microsoft’s cloud and AI division in a statement. “With Oracle’s enterprise expertise, this alliance is a natural choice for us as we help our joint customers accelerate the migration of enterprise applications and databases to the public cloud.”

This move may come as a surprise to some who see Microsoft and Oracle as competitors in public cloud, but it is by no means the most surprising cloud partnership – that honour still goes to Oracle and Salesforce’s doomed romance in 2013.

Indeed, the rationale is a potentially interesting one. The press materials gave mention to three customers. Aside from energy services firm Halliburton, the other two – Albertsons and Gap Inc – are worth considering. Albertsons, as regular readers of this publication will know, moved over to Microsoft earlier this year. At the time, CIO Anuj Dhanda told CNBC the company went with Azure because of its ‘experience with big companies, history with large retailers and strong technical capabilities, and because it [wasn’t] a competitor.’

Gap was announced as a Microsoft customer in a five-year deal back in November. Again speaking with CNBC – and as reported by CIO Dive – Shelley Branston, Microsoft corporate VP for global retail and consumer goods, said retailers shied away from Amazon Web Services (AWS) because they want ‘a partner that is not going to be a competitor of theirs in any other parts of their businesses.’

Albertsons said in a statement that the Microsoft/Oracle alliance would allow the company ‘to create cross-cloud solutions that optimise many current investments while maximising the agility, scalability and efficiency of the public cloud’, while Gap noted the move would help ‘bring [its] omnichannel experience closer together and transform the technology platform that powers the Gap Inc. brands’.

Yet it’s worth noting that the retail cloud ‘war’ may be a little overplayed. Following the Albertsons move Jean Atelsek, digital economics unit analyst at 451 Research, told CloudTech: “It’s easy to get the impression that retailers are fleeing AWS. Microsoft’s big partnership with Walmart seems to be the example that everyone wants to universalise the entire cloud space. However since a lot of retailers also sell through/on AWS, they’re less likely than Walmart to see Amazon (and by extension AWS) as the devil.”

Time is running out for SQL Server 2008/R2 support – here’s what to do about it

Extended support for SQL Server 2008 and 2008 R2 will end in July 2019, giving database and system administrators precious little time to make some necessary changes. Upgrading the software to the latest version is always an option, of course, but for a variety of reasons, that may not be viable for some applications. So Microsoft is providing an alternative: Get three more years of free Extended Security Updates by migrating to the Azure cloud.

While their 2008 vintage may designate these as “legacy” applications, many may still be mission-critical and require some form of high availability (HA) and/or disaster recovery (DR) protections. This article provides an overview of the options available within and for the Azure cloud, and highlights two common HA/DR configurations.

Availability options within the Azure cloud

The Azure cloud offers redundancy within datacenters, within regions and across multiple regions. Redundancy within datacenters is provided by Availability Sets that distribute servers across different Fault Domains residing in different racks to protect against failures at the server and rack levels. Within regions, Azure is rolling out Availability Zones (AZs), which consist of at least three datacenters inter-connected via high-bandwidth, low-latency networks capable of supporting synchronous data replication. For even greater resiliency, Azure offers Region Pairs, where a region gets paired with another within the same geography (e.g. US or Europe) to protect against widespread power or network outages, and major natural disasters.
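As a rough aide-memoire, the three constructs above map to three failure scopes. The helper below is illustrative only and is not an Azure SDK or CLI call; it simply restates the paragraph as a lookup.

# Illustrative mapping only; the strings paraphrase the options described above.
FAILURE_SCOPE_TO_OPTION = {
    "server_or_rack": "Availability Set (fault domains across racks in one datacenter)",
    "datacenter": "Availability Zones (3+ datacenters in a region, synchronous replication)",
    "region": "Region Pair (second region in the same geography, e.g. US or Europe)",
}

def redundancy_option(failure_scope: str) -> str:
    """Return the Azure construct that protects against the given failure scope."""
    return FAILURE_SCOPE_TO_OPTION.get(failure_scope, "unknown failure scope")

# print(redundancy_option("datacenter"))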

Administrators should be fully aware, however, that even with the 99.99% uptime assurances afforded by AZs, what counts as downtime excludes many common causes of failure at the application level. Two quite common causes of failure explicitly excluded from the Azure Service Level Agreement are the use of software not provided by Microsoft and what could be called “operator error”—those mistakes mere mortals inevitably make. In effect, the SLA only guarantees “dial tone” for the servers, leaving it up to the customer to ensure uptime for the applications.

Achieving satisfactory HA protection for mission-critical applications is problematic in the Azure cloud, however, owing to the lack of a storage area network (SAN) or other shared storage needed for traditional failover clustering. Microsoft addressed this limitation with Storage Spaces Direct (S2D), a virtual shared storage solution. But S2D support began with Windows Server 2016 and only supports SQL Server 2016 and later. SQL Server’s more robust Always On Availability Groups feature, which was introduced in 2012, is also not an option for the 2008 versions.

Satisfactory DR protection is possible for some applications using Azure Site Recovery (ASR), Microsoft’s DR as a service (DRaaS) offering. While ASR automatically replicates entire VM images from the active instance to a standby instance in another datacenter, it requires manual outage detection and failover. The service is usually able to accommodate Recovery Point Objectives (RPOs) ranging from a few minutes to a few seconds, and Recovery Time Objectives (RTOs) of under one hour.

Third-party failover clustering solutions

With SQL Server’s Failover Cluster Instances (FCIs) requiring shared storage, and with no shared storage available in the Azure cloud, a third-party cluster storage solution is needed. Microsoft recognizes this need for HA protection and includes instructions for configuring one such solution in its documentation: High Availability for a file share using WSFC, ILB and 3rd-party Software SIOS DataKeeper.

Third-party cluster storage solutions include, at a minimum, real-time data replication and seamless integration with Windows Server Failover Clustering. Their design overcomes the lack of shared storage by making locally attached drives appear as clustered storage resources that can be shared by SQL Server’s FCIs. The block-level data replication occurs synchronously between or among instances in the same Azure region and asynchronously across regions.

The cluster is capable of immediately detecting failures at the application level regardless of the cause and without the exceptions cited in the Azure SLA. As a result, this option is able to ensure not only server dial tone, but also the application’s availability, making it suitable for even the most mission-critical of applications.

Two common configurations

With HA provisions for legacy SQL Server 2008/R2 applications being problematic in the Azure cloud, the only viable option is a third-party storage clustering solution. For DR, by contrast, administrators have a choice of using Azure Site Recovery or the failover cluster for both HA and DR. Here is an overview of both configurations.

Combining failover clustering for HA with ASR for DR affords a cost-effective solution for many SQL Server applications. The shared storage required by FCIs is provided by third-party clustered storage resources in the SANless HA failover cluster, and ASR replicates the cluster’s VM images to another region in a Region Pair to protect against widespread disasters. But like all DRaaS offerings, ASR has some limitations. For example, WAN bandwidth consumption cannot exceed 10 megabytes per second, which might be too low for high-demand applications.
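To make that bandwidth constraint concrete, a quick back-of-the-envelope check is sketched below; the 10 MB/s cap comes from the paragraph above, while the change rate and peak factor are hypothetical assumptions.

ASR_MAX_MB_PER_SEC = 10  # replication throughput cap cited above

def fits_within_asr(avg_change_mb_per_day: float, peak_factor: float = 3.0) -> bool:
    """True if the estimated peak write rate stays under the ASR throughput cap."""
    avg_mb_per_sec = avg_change_mb_per_day / 86_400       # seconds per day
    return avg_mb_per_sec * peak_factor <= ASR_MAX_MB_PER_SEC

# Example: a database churning ~2 TB/day averages roughly 23 MB/s and would exceed the cap.
# print(fits_within_asr(2_000_000))   # False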

More robust DR protection is possible by using the failover clustering solution in a three-node HA/DR configuration. Two of the nodes provide HA protection with rapid, automatic failover, while the third node, located in a different Azure region within a Region Pair, adds DR protection.

This configuration uses a third-party cluster storage solution to provide both HA and DR protections across Azure Availability Zones and a Region Pair, respectively.

The main advantage of using the failover cluster for both HA and DR is the ability to accommodate even the most demanding RPOs. Another advantage is that administrators have a single, combined HA/DR solution to manage rather than two separate solutions. The main disadvantage is the slight increase in cost for licensing for the third node.

With two cost-effective solutions for HA/DR protection in the Azure cloud, your organization will now be able to get three more years of dependable service from those legacy SQL Server 2008/R2 applications.

NASCAR revs up its video business with AWS


Connor Jones

5 Jun, 2019

The National Association for Stock Car Auto Racing (NASCAR) has partnered with AWS to utilise the cloud giant’s artificial intelligence and machine learning tools to automate the database categorisation of 70 years’ worth of video.

In the run-up to the airing of its online series ‘This Moment in NASCAR History’, the sport that packs deafening stadiums has 18 petabytes of video to migrate to an AWS archive, where the processing will take place.

“Speed and efficiency are key in racing and business which is why we chose AWS – the cloud with unmatched performance, the most comprehensive set of services, and the fastest pace of innovation – to accelerate our migration to the cloud,” said Craig Neeb, executive vice president of innovation and development, NASCAR.

“Leveraging AWS to power our new video series gives our highly engaged fans a historical look at our sport while providing a sneak peek at the initial results of this exciting collaboration,” he added.

Using Amazon Rekognition, the platform’s AI-driven image and video analysis tool, NASCAR hopes to automate the tagging of video metadata for its huge catalogue of multimedia to save time searching for specific clips.

Metadata is attributed to stored multimedia files, making it easier for someone to find them in a database. For example, the metadata attributed to a given video would include the race date, competition, drivers involved, location and other information that differentiates it from other clips.
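As a rough sketch of how such tagging could be automated, the snippet below uses the Rekognition video label-detection API via boto3; the bucket, object key and confidence threshold are hypothetical, and polling for job completion is omitted for brevity.

import boto3

rekognition = boto3.client("rekognition")

def start_tagging(bucket: str, key: str) -> str:
    """Kick off asynchronous label detection for one archived clip stored in S3."""
    resp = rekognition.start_label_detection(
        Video={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=80,
    )
    return resp["JobId"]

def collect_labels(job_id: str) -> list:
    """Fetch the detected labels once the job has finished (polling omitted here)."""
    resp = rekognition.get_label_detection(JobId=job_id)
    return sorted({item["Label"]["Name"] for item in resp["Labels"]})

# job = start_tagging("nascar-archive-example", "1987/pass-in-the-grass.mp4")   # hypothetical names
# print(collect_labels(job))   # e.g. ['Car', 'Crowd', 'Person', ...]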

Making a series that joins together race clips from across the years would otherwise require a long, manual search through petabytes of video.

“By using AWS’s services, NASCAR expects to save thousands of hours of manual search time each year, and will be able to easily surface flashbacks like Dale Earnhardt Sr.’s 1987 ‘Pass in the Grass’ or Denny Hamlin’s 2016 Daytona 500 photo finish, and quickly deliver these to fans via video clips on NASCAR.com and social media channels,” read an AWS statement.

NASCAR also plans to use Amazon SageMaker to train deep learning models against its footage spanning decades to enhance the metadata tagging and video analytics capabilities.

The sport will also use Amazon Transcribe, an automatic speech recognition service, to caption and timestamp every word of speech in the archived videos, which will make them even easier to search.
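A similarly hedged sketch of that captioning step with Amazon Transcribe might look like the following; the job name, bucket, key and media format are illustrative assumptions.

import boto3

transcribe = boto3.client("transcribe")

def caption_clip(bucket: str, key: str, job_name: str) -> None:
    """Start an asynchronous transcription job; the output includes word-level timestamps."""
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="mp4",
        LanguageCode="en-US",
    )

# caption_clip("nascar-archive-example", "2016/daytona-500-finish.mp4", "daytona-500-2016")   # hypothetical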

“AWS’s unmatched portfolio of cloud services gives NASCAR the most flexible and powerful tools to bring new elements of the sport to live broadcasts of races,” said Mike Clayville, vice president, worldwide commercial sales at AWS.