Data Loss Prevention | @BigDataExpo #BigData #Security #Analytics

Digital Guardian, based in Waltham, Massachusetts, analyzes both structured and unstructured data to predict and prevent loss of data and intellectual property (IP) with increased accuracy.
To learn how data recognition technology supports network and endpoint forensic insights for enhanced security and control, we’re joined by Marcus Brown, Vice President of Corporate Business Development for Digital Guardian. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

read more

Shared Economy #DigitalTransformation | @ThingsExpo #IoT #AI #ML #DL

Last week I mentioned to one of my U.S.-based colleagues that I would be out of office for three days at the Irish National Digital Week (NDW16), taking the opportunity because the event was being held in Skibbereen, just a few kilometres down the road from my home office. He asked, "Surely an event of that type would be held in Dublin?" And I replied, "Well, there's a story here that needs to be told; one that will have profound implications for business in Ireland and probably internationally."

read more

[slides] The Future of #IoT | @ThingsExpo #M2M #AI #ML #MachineLearning

Connected devices and the industrial internet are growing exponentially every year, with Cisco expecting 50 billion devices to be in operation by 2020. In this period of growth, location-based insights are becoming invaluable to many businesses as they adopt new connected technologies. Knowing when and where these devices connect from is critical for a number of scenarios in supply chain management, disaster management, emergency response, M2M, location marketing and more.
In his session at @ThingsExpo, Mario Proietti, CEO of LocationSmart, discussed how Internet connectivity enables location-based services to take advantage of everything available on the World Wide Web and how this data helps expand the possibilities for how we communicate, collaborate and commute.

read more

[slides] From IT Service Management to #DevOps | @DevOpsSummit #AI #ML

CIOs and those charged with running IT Operations are challenged to deliver secure, audited, and reliable compute environments for the business's applications and data. Behind the scenes, these tasks are often accomplished by following onerous, time-consuming processes, and the management of these environments and processes is frequently outsourced to multiple IT service providers. In addition, the division of work is often siloed into traditional "towers" that are not well integrated for cross-functional purposes. So, when traditional IT Service Management (ITSM) meets the cloud, and equally DevOps, there is invariably going to be conflict.

read more

What is Intercloud?

Intercloud, as the name suggests, is a network of clouds that are connected with each other in some form. This includes private, public, and hybrid clouds that come together to provide a seamless exchange of data, infrastructure, and computing capabilities. In many ways, it is similar to the Internet: the network of networks that powers the world today.

The Intercloud concept started as a research project at Cisco in 2008 and was later taken up by the Institute of Electrical and Electronics Engineers (IEEE). It is based on the idea that no single cloud can provide all the infrastructure and computing capability needed for the entire world. Also, if a cloud does not have a presence in a particular geographic region but gets a request for storage or computation, it should still be in a position to fulfill it. In addition, if a particular cloud has reached its maximum capacity and yet more resources are needed, it should be able to borrow from another cloud seamlessly, so the user has no idea whether the service is coming from a single cloud or from a combination of clouds. The Intercloud is seen as the best way to achieve these objectives.

So, how does it work? Let's look at a practical scenario. You're traveling to a foreign country, and you use your cell phone to make a call. The call will go through and you'll talk to the person you want, even if your service provider does not have a presence in the country in which you're traveling. How is this possible? Cell phone providers enter into agreements with providers in other countries to use their networks for routing calls. In technical terms, this is called inter-carrier interoperability. As a result of these agreements, the call you make is routed through the partner's network. The best part is that you have no idea how your call is routed, and you don't care about the technical aspects either, as long as you're able to make the call you want. The same principle applies to the Intercloud.

When a cloud is saturated or gets a request from a new geographical region, it simply taps into its partners' infrastructure to provide the service you want. Here too, you'll never know which cloud provider is serving you, as long as you're able to store or access what you want. In fact, this convenience can help cloud providers offer a more comprehensive service to their customers. For these reasons, more cloud providers are looking to enter into strategic partnerships with other cloud providers that have a strong presence in local regions.
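
To make the federation logic just described more concrete, here is a minimal sketch in Python. The provider names, regions, and capacities are purely hypothetical; a real Intercloud would rely on standardised protocols for discovery, trust, and billing rather than a simple in-memory broker like this one.

from dataclasses import dataclass, field

@dataclass
class Cloud:
    name: str
    regions: set                       # regions where this provider has a presence
    capacity_gb: int                   # remaining storage capacity
    partners: list = field(default_factory=list)

    def can_serve(self, region, size_gb):
        return region in self.regions and self.capacity_gb >= size_gb

    def place(self, region, size_gb):
        """Serve the request locally if possible, otherwise borrow from a partner cloud."""
        if self.can_serve(region, size_gb):
            self.capacity_gb -= size_gb
            return self.name
        for partner in self.partners:
            if partner.can_serve(region, size_gb):
                partner.capacity_gb -= size_gb
                return partner.name    # the user never sees which cloud actually served it
        raise RuntimeError("no federated cloud can satisfy the request")

# Hypothetical providers: the home cloud has no presence in 'eu-west'.
partner = Cloud("partner-cloud", {"eu-west", "ap-south"}, capacity_gb=500)
home = Cloud("home-cloud", {"us-east"}, capacity_gb=100, partners=[partner])

print(home.place("us-east", 50))    # served by home-cloud
print(home.place("eu-west", 200))   # transparently served by partner-cloud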

Currently, this technology is in its nascent stage, as it requires substantial effort and technological advances to improve interoperability and sharing among cloud providers. The good news is that many companies, and organizations such as the IEEE, have started working towards it, so we can expect the concept of the Intercloud to become a reality soon.


How to support video with mitigated latency


nScreenMedia claims that "data from Ericsson and FreeWheel paints a rosy picture for mobile video. Mobile data volume is set to increase sevenfold over the next six years, with video's share increasing from 50% to 70%. The smartphone looks to be in the driver's seat."

To top this, Forbes reported in September 2015 that “Facebook users send on average 31.25 million messages and view 2.77 million videos every minute, and we are seeing a massive growth in video and photo data, where every minute up to 300 hours of video are uploaded to YouTube alone.”

Cisco also finds that "annual global IP traffic will pass the zettabyte (ZB; 1,000 exabytes [EB]) threshold by the end of 2016, and will reach 2.3 ZB per year by 2020. By the end of 2016, global IP traffic will reach 1.1 ZB per year, or 88.7 EB per month, and by 2020 global IP traffic will reach 2.3 ZB per year, or 194 EB per month." The firm also predicts that video traffic will grow fourfold from 2015 to 2020, a CAGR of 31 percent.
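
As a quick sanity check, those headline figures hang together; the short calculation below uses only the numbers quoted above and standard unit conversions (small differences are expected because traffic grows month by month, so an annual total divided by twelve is not exactly the end-of-year monthly rate).

# Rough consistency check of the Cisco traffic figures quoted above.
ZB_TO_EB = 1000.0

for year, zb_per_year in (("2016", 1.1), ("2020", 2.3)):
    eb_per_month = zb_per_year * ZB_TO_EB / 12
    print(year, round(eb_per_month, 1), "EB per month on average")
    # -> 2016: ~91.7 (Cisco quotes 88.7); 2020: ~191.7 (Cisco quotes 194)

# Fourfold video growth between 2015 and 2020 implies a compound annual growth rate of:
cagr = 4 ** (1 / 5) - 1
print(round(cagr * 100), "percent")    # ~32%, in line with the 31 percent Cisco cites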

More recently, a blog post by FPV Blue claims that it can solve the latency problems that dog many marketers and consumers, stating that glass-to-glass video latency is now under 50 milliseconds.

Previously, the company announced that this video latency figure stood at 80 milliseconds. To reduce this latency, the firm needed to undertake a hardware revision.

The blog post nevertheless questions the industry standard for measuring first-person view (FPV) latency.

Defining latency

FPV Blue defines latency as follows:

“Before measuring it, we better define it. Sure, latency is the time it takes for something to propagate in a system, and glass to glass latency is the time it takes for something to go from the glass of a camera to the glass of a display.

However, what is that something? If something is a random event, is it happening in all of the screen at the same time, or is it restricted to a point in space?

If it is happening in all of the camera’s lenses at the same time, do we consider latency the time it takes for the event to propagate in all of the receiving screen, or just a portion of it? The difference between the two might seem small, but it is actually huge.”
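
In practice, a common way to measure glass-to-glass latency is to point the camera at a display showing a high-resolution clock and compare the clock value visible inside a captured frame with the live clock at the moment that frame is rendered. The sketch below (Python with OpenCV, assuming a webcam is available at index 0) generates such a test pattern; the latency reading itself is taken by eye or from a screenshot of the two clocks, and a real FPV rig would capture the remote receiver's screen rather than a local window.

import time
import cv2   # assumption: OpenCV is installed and a camera is available

# Point the camera at this window. It shows a live millisecond clock overlaid on the
# camera's own view of that clock; the difference between the two readings approximates
# glass-to-glass latency, including capture, encode/transport, and render time.
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    now_ms = int(time.monotonic() * 1000) % 100000
    cv2.putText(frame, f"live clock: {now_ms:05d} ms", (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("glass-to-glass test pattern", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()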

Therefore, whether the video is being used for flying drones or for other purposes, people need to consider how they can accurately measure and mitigate the effects of video latency, not least because video traffic in general is increasing exponentially.

Cisco’s Visual Networking Index claims: “It would take more than 5 million years to watch the amount of video that will cross global IP networks each month in 2020. Every second, a million minutes of video content will cross the network by 2020.”
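
Those two statements are consistent with each other, as a short back-of-the-envelope calculation shows; the only inputs are Cisco's "million minutes per second" figure and calendar arithmetic.

# One million minutes of video crossing global IP networks every second in 2020.
video_minutes_per_second = 1_000_000
seconds_per_month = 365 * 24 * 3600 / 12            # average month, ~2.63 million seconds
video_minutes_per_month = video_minutes_per_second * seconds_per_month

minutes_of_viewing_per_year = 365 * 24 * 60         # 525,600 viewing minutes in a year
years_to_watch_one_month = video_minutes_per_month / minutes_of_viewing_per_year
print(f"{years_to_watch_one_month:,.0f} years")     # ~5,000,000 years, matching the claim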

Cisco’s findings also indicate that video will account for 82% of all Internet Protocol (IP) traffic, across both businesses and consumers.

Video: The TV star

Michael Litt, CEO and co-founder of Vidyard, also argues that the future of the internet is television: more and more people are using streaming services for entertainment, which means the major broadcasters are having to play catch-up.

At this juncture, it’s worth noting that BBC Three has moved online to meet the demands of a younger, digital-device-savvy audience.

Election coverage

Speaking of Facebook, Mashable reports that its livestream coverage of the third US presidential debate had one big advantage over everyone else’s.

“Facebook was delivering its stream at a 13 second delay, on average, compared to radio,” writes Kerry Flynn. The network with the highest latency was Bloomberg, at an arduous 56 seconds.

She rightly adds that the disparity between the different networks should worry the traditional broadcast networks: “Watching the debate on Facebook meant that a viewer not only did not have a TV or pay for cable, they also had the fastest stream accompanied by real-time commentary and reactions.”

The surprise was that Facebook, according to the findings of Wowza Media Systems, managed to – pardon the pun – trump the satellite and cable networks for some viewers.

“Facebook’s livestream setup isn’t that different from what other companies use. Cable systems, however, tend to outsource livestreaming to content delivery networks (CDNs) that are easy to integrate and reliable — but also relatively slow”, writes Flynn.

With large volumes of streaming data, you have to get the CDN closer to the viewers to improve the user experience, which leaves you with the problem of getting the content to the CDN in the first place.

When the CDN is a long way from the centralised source, the latency will be considerably higher, which in turn reduces the data throughput to the CDN. And because this rich media is already compressed, traditional WAN optimisation techniques are ineffective.
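
The reason latency caps throughput is that a single TCP flow can only keep one window of unacknowledged data in flight per round trip, so throughput is roughly bounded by window size divided by round-trip time (RTT). A small illustration of that standard bound, using assumed window sizes and RTTs:

# Throughput ceiling of a single TCP flow: window size divided by round-trip time.
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

WINDOW = 64 * 1024           # a common 64 KB receive window (assumption)
for rtt in (5, 50, 150):     # same-city, cross-continent, intercontinental RTTs (assumed)
    print(f"RTT {rtt:>3} ms -> at most {max_throughput_mbps(WINDOW, rtt):6.1f} Mbps")
# RTT   5 ms -> ~104.9 Mbps; RTT  50 ms -> ~10.5 Mbps; RTT 150 ms -> ~3.5 Mbps

This is why pushing content to a distant CDN ingest point over a high-latency path can crawl even when plenty of raw bandwidth is available.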

The problem: Latency

With the increasing proliferation of video content, why should anyone be concerned about the volume of video being produced and about latency?

High viewing figures can, after all, lead to higher advertising revenues for broadcasters.

From a competitive advantage perspective, increasing volumes of video data mean that there is more noise to contend with in order to get marketing messages across to one’s target audiences.

So there is more pressure on the internet and on content delivery services as demand increases and play-out quality rises, although on the whole many of these capabilities have already been addressed, even for seamlessly stitched advertising services.

If latency impinges on livestream services, too, then the viewer is likely to choose the network with the fastest stream.

The key problem is that video and audio can be impeded by the effects of network latency. Slow networks can leave the reputations of customers – whose own ‘consumers’ use video for a variety of reasons – tarnished.

In a commercial situation, this could lead to lost business; a fast network from any datacentre will, in contrast, engender confidence. The underlying latency cannot simply be accelerated away, because data travels across the network at a fixed speed, so its effects have to be mitigated instead.

It’s about video in general: there are so many different applications for video, and all of them can be affected by bandwidth or latency, or both. How we produce, consume, and store information has changed dramatically over the past few years as the YouTube and Facebook generation has grown up.

Supporting video

To support video, companies using it for broadcasting, advertising, video-conferencing, marketing, or other purposes need to avoid settling for traditional WAN optimisation.

Instead, they should employ more innovative solutions driven by machine intelligence, such as PORTrockIT, which accelerates data and reduces packet loss while mitigating the effects of latency.

Adexchanger offers some more food for thought about why this should concern marketers in particular: video on a landing page can increase conversion rates by 80%, and 92% of mobile video consumers share videos with others.

Marketers should therefore ask their IT departments to invest in solutions that enable them to deliver marketing messages without their conversations being interrupted by network latency.

Similarly, broadcasters should invest in systems that mitigate the impact that latency can have on their viewers to maintain their loyalty.

High viewing figures can, after all, lead to higher advertising revenues for the many broadcasters, social media networks and publishers who offer video content as part of their service.

They may also need to transfer and back up large and uncompressed video files around the world quickly – that’s a capability which WAN optimisation often fails to deliver, but it can be achieved with the right solution.

It is therefore important to review the alternative options that exist on the market.

Unlocking Your Digital Business Architecture | @CloudExpo #Cloud #DevOps #DigitalTransformation

IT leaders face a monumental challenge. They must figure out how to sort through the cacophony of new technologies, buzzwords, and industry hype to find the right digital path forward for their organizations.
And they simply cannot afford to fail.
Those organizations that are fastest to the right digital path will be the ones that win.
The path forward, however, is strewn with the legacy of decisions made long ago — often before any of the current leadership team assumed their roles. While it’s fun to think about the future with a green-field mindset, that’s not reality for IT leaders sitting in the trenches.

read more

Top @CloudExpo Sponsor | #IoT #AI #ML #DL #DevOps #BigData #FinTech

@CloudExpo and @ThingsExpo, two of the most important technology events in the world, have hosted hundreds of sponsors and exhibitors since their launch eight years ago. In this blog post, I provide 10 tips on how our sponsors and exhibitors can maximize their participation at our events. But before reading my top 10 tips for our sponsors and exhibitors, please take a moment and watch this brief Sandy Carter video.

read more

Cohesity Launches DataPlatform CE for AWS and Azure

Cohesity has recently launched a cloud extension of its existing product, called DataPlatform Cloud Edition, which can run on both Amazon Web Services and Microsoft Azure. The California-based startup specializes in providing hyperconverged secondary storage for clients around the world.

This announcement is good news for companies that depend on the cloud for their services, as well as for the cloud market as a whole. Currently, the storage market is highly fragmented, so it is almost impossible for companies to achieve seamless data portability from their on-premises environments to the cloud. In most cases, moving data to the cloud requires special software or gateways, not to mention the data format conversions that come with them. This makes the entire migration process cumbersome and can lead to a lot of frustration for companies looking to leverage the power of the cloud.

To overcome this problem, Cohesity provides native replication for all data, making it easy to move from on-premises DataPlatform deployments to DataPlatform CE deployments in the cloud. The CloudReplicate feature of Cohesity’s product ensures that replication happens instantly from on-premises sites to remote cloud locations. In addition, the integration with both AWS and Azure makes it convenient for companies to tap into the scalability and reduced cost that come with cloud computing and storage. As a result, more companies are expected to move their operations to the cloud, which augurs well not just for Cohesity but for the cloud market as a whole.
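
As an illustration of what such a replication workflow looks like in principle, here is a minimal sketch in Python. The class, cluster names, and schedule are hypothetical and do not represent Cohesity's actual API; the point is simply that snapshots move in the platform's native format, with no gateway or format conversion in between.

from dataclasses import dataclass

@dataclass
class ReplicationJob:
    """Hypothetical replication policy: ship local snapshots to a cloud target."""
    source_cluster: str           # on-premises DataPlatform cluster (illustrative name)
    target_cluster: str           # DataPlatform CE deployment in a public cloud
    interval_minutes: int = 60    # how often new snapshots are shipped

    def run_once(self, snapshots):
        shipped = []
        for snap in snapshots:
            # A real system would stream the snapshot in its native format,
            # so no special gateway or conversion step is needed.
            print(f"replicating {snap}: {self.source_cluster} -> {self.target_cluster}")
            shipped.append(snap)
        return shipped

job = ReplicationJob(source_cluster="onprem-dc1",
                     target_cluster="dataplatform-ce-aws-us-east-1")
job.run_once(["vm-backup-2016-11-01", "fileshare-archive-2016-11-01"])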

Another advantage of Cohesity’s DataPlatform Cloud Edition is that it consolidates backup, archive, and DevOps workloads into an efficient and scalable architecture that can run completely on the cloud. Also, customers can make the most of the existing Hadoop-based analytics resources available in DataPlatform Cloud Edition.

Further, this product makes the transition between private and public clouds a lot easier, as it handles bottlenecks such as the conversion of data between different formats. Since many companies prefer a hybrid environment for their data and applications, Cohesity’s DataPlatform could turn out to be a sought-after product in the near future. What’s more, it can also be licensed through public cloud service providers such as AWS and Azure.

This edition is currently in preview mode, and can be accessed through an early access program. A complete version is expected to be rolled out during the first half of 2017.

Cohesity was founded in 2013 by Mohit Aron, who is also credited with co-founding another cloud company, Nutanix. With about 108 employees and $70 million in venture funding from investors such as Google Ventures and Sequoia Capital, the company has made rapid strides over the last three and a half years. Its first product, a web platform to consolidate and manage secondary storage and all its associated workflows, was launched in July 2015. Its flagship products are DataPlatform, which was launched in October 2015, and the recently announced DataPlatform Cloud Edition.


Rackspace becomes latest cloud firm to build Frankfurt data centre


Managed cloud services provider Rackspace has announced it is to open a data centre in Germany, citing the strict data protection laws in the DACH (Germany, Austria and Switzerland) region as key to the move.

While it is Rackspace’s first foray into continental Europe in this manner, the company will join a host of other players in Frankfurt, including Amazon Web Services (AWS), Microsoft, and IBM. Alibaba announced a similar move earlier this week, alongside expansions in Australia, Japan, and the Middle East.

“With the opening of our data centre in Germany, we can provide the highest level of availability, security, performance and management, and also help our customers address data protection requirements by providing them with multi-cloud deployment options,” said Alex Fuerst, who heads up Rackspace’s DACH operations. “As the demand for managed services increases in the German-speaking region, companies of all sizes in all verticals are embracing multi-cloud approaches to IT, so that each of their workloads runs on the platform where it can achieve the best performance and cost efficiency.

“More and more of those companies are turning to Rackspace expertise and support for their critical IT services and data,” Fuerst added.

It has been a busy few months for Rackspace, which, with the new German site, will now have 12 data centres worldwide. Back in August, it was confirmed that the company was to be acquired by private equity firm Apollo Global Management for $4.3 billion, with the deal approved and signed off earlier this month. Writing in a blog post at the time, CEO Taylor Rhodes said that the company’s board was “mindful that Rackspace faces a big opportunity as the early leader in the fast-growing managed cloud services industry.”

The new data centre is expected to become operational by mid-2017.