10 Secrets of @CloudExpo’s #DigitalTransformation Sponsors | #AI #DX #IoT #DevOps #FinTech

The best way to leverage your Cloud Expo presence as a sponsor and exhibitor is to plan your news announcements around our events. The press covering Cloud Expo and @ThingsExpo will have access to these releases and will amplify your news announcements. More than two dozen cloud companies have either closed deals at our shows or announced their mergers and acquisitions at Cloud Expo. Product announcements during our show give your company the greatest reach through our targeted audiences.


AWS aims at enterprise data migration with Migration Hub and Glue launches

At the AWS Summit in New York, Amazon Web Services focused predominantly on enterprise migration, launching two new products aimed at taking the difficulty out of data analysis and transfer.

The cloud infrastructure giant announced the launch of AWS Migration Hub, a tool which aims to help organisations migrate their assets from on-prem data centres to Amazon’s cloud, as well as the general availability of AWS Glue, a product first announced in December last year which eases the process of moving data between data stores.

“Companies want to be able to fly from some of the constraints and break free from lock-in, and some of the relationships they have,” Adrian Cockcroft, AWS VP of cloud architecture, told attendees. “What we’ve been hearing from our customers is they want the freedom to build things quickly, unshackle from current database vendors, drive costs down, and have good ways to migrate out.”

This led to a discussion around relational database engine Aurora, AWS’ fastest growing product, which was launched in 2014. This time last year, AWS managed services partner Logicworks, writing for this publication, explained the reason for its success. “As cloud adoption matures, expect more companies to make a (slow) migration over to cloud-native systems,” the company wrote. “Because in the end, it is not just about licensing costs. It is about removing management burden from IT – and choosing to focus engineering talent on what really matters.”

More than 34,000 databases have been migrated since the product’s launch; Cockcroft noted that he had given this talk several times this year, and that the figure was continually being updated. An example of a company using Aurora to its advantage is Expedia, which performs 300 million writes a day on the engine, tracking how many hotel rooms are available across every hotel in the world.

For organisations taking everything out of a data centre – not just the greenfield apps, nor only the mission-critical ones – AWS Migration Hub is the tool for the job, Cockcroft added. The product is generally available today, hosted in AWS’ US West 2 region in Oregon, but with a global reach.

Glue, on the other hand, is positioned as a fully managed data catalogue and ETL (extract, transform, load) service to take the fuss out of those “ubiquitous, and extremely tedious” workloads, as Dr. Matt Wood, general manager for artificial intelligence at AWS, put it.

Wood first riffed on the importance of AWS’ plethora of data handling tools, from the previously mentioned Aurora, to ElastiCache, to Redshift. “This approach, where we have a broad set of tools, each with a deep set of functionality, allows you to find the right tool for the job,” he said. “You don’t see Formula 1 engineers try and fix Formula 1 cars with Swiss Army knives.”

An example from Redshift Spectrum, which enables running SQL queries against exabytes of data in Amazon S3, was presented to the audience. Running a complex query against an exabyte dataset would take Hive, on a 1,000-node cluster, around five years – an estimate, naturally, rather than the result of letting it run its course – whereas Spectrum completed it in just over two and a half minutes.

Wood said that up to three quarters of data scientists’ and data warehouse managers’ time was spent running ETL workloads. “Nobody goes to work in the morning and wants to write another ETL script,” he added. Through Glue, and its entirely serverless system and what Wood described as “by far the simplest UI [he had] ever shown to an audience of this size”, AWS aims for that to be a thing of the past.
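To make the “nobody wants to write another ETL script” point concrete, here is a minimal, self-contained sketch of the extract-transform-load pattern that a managed service like Glue automates. The record shapes and store names are invented for illustration; this is not Glue’s actual API, just the shape of the tedious work it takes away.

```python
# A hypothetical ETL step: pull raw records from a source, normalise
# their fields, and load them into a target store. Both "stores" are
# in-memory lists so the sketch is self-contained.

def extract(source):
    """Read raw records from the source store."""
    return list(source)

def transform(records):
    """Normalise field names and types."""
    return [
        {"hotel_id": int(r["id"]), "city": r["city"].strip().title()}
        for r in records
    ]

def load(records, target):
    """Append cleaned records to the target store; return the row count."""
    target.extend(records)
    return len(records)

source = [{"id": "101", "city": " new york "}, {"id": "102", "city": "oslo"}]
warehouse = []
loaded = load(transform(extract(source)), warehouse)
print(loaded)                # 2
print(warehouse[0]["city"])  # New York
```

A managed service replaces the glue code above with a crawled data catalogue and generated, serverless transform jobs – the developer specifies sources and targets rather than hand-writing each pipeline.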

Wood also discussed the machine learning projects being undertaken on AWS’ infrastructure. “The reason [machine learning] has started to stick in this iteration is that the cloud has enabled machine learning and customers to overcome the single largest point of friction, which is almost always around scale,” he explained. “Much like we did in the early days of AWS… we want to put this magical technology into the hands of every developer.”

Among the most interesting examples of the many companies exploring machine learning were Stanford, which trained a deep learning model to help prevent diabetic blindness; Arterys, which has put together the first FDA-approved use of neural networks in medical imaging; and Wolfram Alpha. The latter, best known as the service Siri refers to when she is stumped by a question, uses machine learning on AWS to build a computational knowledge engine. “When we’re talking about the challenges of handling inference at scale, with complicated deep learning models, this is the sort of scale you can achieve today through AWS,” said Wood.

Elsewhere, AWS announced a new customer in the shape of Hulu. The media company is moving away from its previous strategy of managing its own infrastructure and data centres – “everything we have, you name it, we built it” as the company put it to attendees – to help cover its various bets, from streaming content, to subscription systems, to live television.

“While we’ve experimented with cloud before, this became our first large scale production deployment,” said Rafael Soltanovich, VP of software development at Hulu.

“Building live TV is really hard, especially when you’re trying to do it in a radically different way,” he added, giving an example of just one of the issues Hulu had to sort out when rebuilding its entire tech stack: disambiguating titles. Take The Avengers, the film series based on the Marvel comic book characters, versus the unrelated 1998 film and 1960s UK TV series of the same name. Having the right name, and the right image, for each product is vital to capture the attention of the viewer, Soltanovich said.

The recent Game of Thrones premiere was another example of Hulu’s nimble infrastructure in action; balancing between video on demand and live streams, between data centre and cloud, the company was able to normalise the load on its infrastructure to keep up with ‘massive’ user demand.

You can find out more about AWS Migration Hub here.

Picture credits: AWS/Screenshots

[slides] #IoT and Digitizing Operations | @ThingsExpo @RedHatNews @GHaff #AI #DX

Internet-of-Things discussions can end up either going down the consumer gadget rabbit hole or focusing on the sort of data logging that industrial manufacturers have been doing forever. In fact, companies today are already using IoT data both to optimize their operational technology and to improve the experience of customer interactions in novel ways. In his session at @ThingsExpo, Gordon Haff, Red Hat Technology Evangelist, shared examples from a wide range of industries – including energy, transportation, and retail – of using IoT to create new business opportunities and improve efficiency.


Assessing the key reasons behind a multi-cloud strategy

Everyone who follows cloud computing agrees that we are starting to see more businesses utilise a multi-cloud strategy. The question this raises is: why is a multi-cloud strategy important from a functional standpoint, and why are enterprises deploying this strategy?

To answer this, let’s define “multi-cloud” since it means different things to different people. I personally like this one, as seen on TechTarget:

“the concomitant use of two or more cloud services to minimise the risk of widespread data loss or downtime due to a localised component failure in a cloud computing environment… a multi-cloud strategy can also improve overall enterprise performance by avoiding “vendor lock-in” and using different infrastructures to meet the needs of diverse partners and customers”

From my conversations with some cloud gurus and our customers, a multi-cloud strategy boils down to:

  • Risk mitigation – low priority
  • Managing vendor lock-in (price protection) – medium priority
  • Optimising where you place your workloads – high priority

Let’s look at each one.

Risk mitigation 

Looking at our own infrastructure at ParkMyCloud, we run on AWS, using services including RDS, Route 53, SNS and SES. In a risk mitigation exercise, would we look for like-for-like services in Azure, and go through the technical work of mapping a 1:1 fit and building a hot failover there? Or would we simply use a different AWS region – which takes fewer resources and less time?

You don’t actually need multi-cloud to do hot failovers, as you can instead use different regions within a single cloud provider. But that’s betting on the fact that those regions won’t go down simultaneously. In our case we would have major problems if multiple AWS regions went down simultaneously, but if that happens we certainly won’t be the only one in that boat.
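The region-based failover described above boils down to simple priority logic: prefer the primary region, and fall back to a secondary if its health check fails. A hedged sketch, with stubbed health results – in practice these would come from real probes or a DNS failover service such as Route 53:

```python
# Hypothetical region-failover selection. The health results are
# hard-coded for illustration; a real deployment would drive them
# from health checks.

def choose_region(regions, is_healthy):
    """Return the first region, in priority order, that passes its health check."""
    for region in regions:
        if is_healthy(region):
            return region
    raise RuntimeError("no healthy region available")

health = {"us-east-1": False, "us-west-2": True}
active = choose_region(["us-east-1", "us-west-2"], lambda r: health[r])
print(active)  # us-west-2
```

The same priority-list logic applies whether the fallbacks are regions of one provider or entirely different clouds; what changes is the integration cost of keeping each fallback ready, which is the point made below.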

Furthermore, a hot failover from one cloud provider to another (say, between AWS and Google) would require a degree of interoperability between providers, and of infrastructure and application integration, that is not widely available today.

Ultimately, risk mitigation just isn’t the most significant driver for multi-cloud.

Vendor lock-in

What happens when your cloud provider changes its pricing? Or when your CIO declares that the company will never again be beholden to a single IT infrastructure vendor, as it once was to Cisco on the network or HP in the data centre? Without a credible alternative provider, you lose your negotiating leverage on price and support.

On the other hand, look at Salesforce. How many enterprises use multiple CRMs?

Do you then have to design and build your applications for multi-cloud from the get-go, so that transitioning everything to a different cloud provider is a relatively simple undertaking? The complexity of moving your applications across clouds over a couple of months is nothing compared to the complexity of doing a real-time hot failover when your service is down. For enterprises this might be doable, given enough resources and time. Frankly, we don’t see much of this.

Instead, I see customers using a multi-cloud strategy to design and build applications in the clouds best suited for optimising their applications. By the way — you can then use this leverage to help prevent vendor lock-in.

Workload optimisation

Hot failovers may come to mind first when considering why you would go multi-cloud, but what about normal operations, when your infrastructure is running smoothly? Having access to multiple cloud providers lets your engineers pick the one most appropriate for the workload they want to deploy. By avoiding an “all or nothing” approach, IT leaders gain greater control over their different cloud services. They can pick and choose the product, service or platform that best fits their requirements, in terms of time-to-market or cost effectiveness – then integrate those services. This approach may also help avoid problems that arise when a single provider runs into trouble.

A multi-cloud strategy addresses several interrelated problems. It is not just a technical avenue for hot failover; it includes vendor relationship management and the ability to optimise your workloads based on the strengths of your teams and each CSP’s infrastructure.

By the way – when you deploy your multi-cloud strategy, make sure you have a management plan in place upfront. Too often, I hear from companies that deploy on multiple clouds but don’t have a way to see or compare them in one place. So make sure you have a multi-cloud dashboard that provides visibility across cloud providers, their locations and your resources, for proper governance and control. This will help you get the most benefit out of a multi-cloud infrastructure.

Embracing Conflict to Fuel Digital Innovation | @ThingsExpo #DX #IoT #M2M #BigData

When talking to clients about their business goals, most business executives are pretty clear as to what they want to accomplish, such as reducing customer churn, reducing inventory costs, improving quality of care or improving product line profitability. But these “one dimensional” business initiatives really don’t push the organization’s innovative thinking. For example, I can easily reduce marketing costs if I significantly reduce advertising and promotional spending. Or I can easily improve product line profitability by cutting all marketing and advertising spending and laying off anyone not directly involved in manufacturing and selling products.

The post Embracing Conflict to Fuel Digital Innovation appeared first on InFocus Blog | Dell EMC Services.


What’s New in Google’s Cloud Speech API?

Speech recognition is software that converts spoken audio into text. This means you can speak into your phone and your words will be converted into text that you can share with others through email or social media. Cool, right?

Many tech companies, especially the owners of mobile operating systems, have been working extensively to improve the quality of speech recognition software. Google is one of the pioneers in this field, and its Cloud Speech API is one of the most advanced and sophisticated products of its kind.

If you’re wondering what in the world the Cloud Speech API is, it’s simply an interface that allows third-party companies and their developers to integrate Google’s speech recognition technology into their own products.

You can do a ton of things with the Cloud Speech API, such as recognizing audio, integrating with storage, filtering inappropriate content and much more. One of the most widely used applications of Google’s Cloud Speech API is in contact centers, where a call can be routed to the relevant department based on what the customer is saying.

Many companies have been using this API to give their users a better experience. A case in point is Twilio, which uses the API to convert speech into text across its products, giving users the flexibility to talk directly to the software instead of going through the more laborious process of typing.

Due to the growing use of this product, Google has been working to enhance its functionality. Recently, it announced many changes to the Cloud Speech API to make it more usable and even boost its adoption around the world.

One of the notable changes is word-level time offsets, more popularly known as timestamps. So, what’s the use of this feature? It makes it easier than ever to find the exact spot where a particular word occurs. For example, say you have the audio of an important person’s interview and you want to hear just what they said on a particular topic. In the past, you had to go through the entire audio to identify where a particular statement was made. With this new feature, you can simply search for a keyword in the audio and it will bring up all the timestamps where that word was uttered.
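The keyword search this enables is straightforward once each transcribed word carries a start time, which is the shape of data a timestamped transcript gives you. A small sketch, with an invented transcript (the words and offsets below are purely illustrative, not real API output):

```python
# Given a transcript of (word, start_time_seconds) pairs, find every
# point at which a keyword was uttered.

def find_keyword_offsets(words, keyword):
    """Return the start times (in seconds) at which `keyword` occurs."""
    return [start for word, start in words if word.lower() == keyword.lower()]

transcript = [("the", 0.0), ("cloud", 0.4), ("changes", 0.9),
              ("everything", 1.5), ("cloud", 3.2)]
print(find_keyword_offsets(transcript, "cloud"))  # [0.4, 3.2]
```

With the offsets in hand, a player can jump straight to each occurrence rather than scrubbing through the whole recording.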

This way, you’ll spend less time finding what you want, thereby increasing your productivity. What’s more, you can even enable text to be displayed while the audio is playing in real time. It’s similar to the closed captions you see while a video is playing, except that those are mostly pre-written; here, you get the text as you hear it.

According to product manager Dan Aharon, this feature was something customers had been requesting for some time, so Google worked to deliver it.

In addition, the new version also supports longer files: instead of the previous maximum of 80 minutes, you can now have up to 180 minutes of audio transcribed.

All these are sure to add to the appeal of Google’s Cloud Speech API.

The post What’s New in Google’s Cloud Speech API? appeared first on Cloud News Daily.

Intel runs rule over new data centre storage design

It is not quite available yet – but Intel has shed some light on its plans in the data centre storage space with the announcement of a new form factor which could enable up to one petabyte of storage in a 1U rack unit.

The new ‘ruler’ form factor, named as such for self-evident reasons, “shifts storage from the legacy 2.5 inch and 3.5 inch form factors that follow traditional hard disk drives” and “delivers on the promise of non-volatile storage technologies to eliminate constraints on shape and size”, in Intel’s words. The company adds that the product will come to market ‘in the near future’.

1U rackmounts are predominantly 19” wide and 1.75” high, although the depth can vary from 17.7” to 21.5”. Height scales with the U number – each U adds another 1.75” – so a 5U mount can measure 19.1” by 8.75” by 26.4”, while a 7U mount is 17” by 12.2” by 19.8”. To put one petabyte into perspective, it is enough storage to hold 300,000 HD movies.
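A quick sanity check on that 300,000-movie comparison, sketched as arithmetic: at one petabyte (10^15 bytes), it works out to roughly 3.3 GB per film, a plausible size for an HD movie.

```python
# Back-of-the-envelope check: bytes per movie if one petabyte holds
# 300,000 HD films.

petabyte = 10 ** 15            # bytes (decimal definition)
movies = 300_000
per_movie_gb = petabyte / movies / 10 ** 9
print(round(per_movie_gb, 1))  # 3.3
```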

Intel also had room for a couple more announcements. The company is targeting hard disk drive (HDD) replacement in the data centre with an updated SATA family of solid state drives (SSDs), aiming to reduce power and cooling requirements and increase server efficiency. It also announced dual port Intel Optane SSDs and Intel 3D NAND SSDs to replace SAS SSDs and HDDs; the former are available now, with the latter coming in the third quarter of this year.

Bill Leszinske, Intel vice president, said the company was driving forward an era of ‘major data centre transformation’. “These new ‘ruler’ form factor SSDs and dual port SSDs are the latest in a long line of innovations we’ve brought to market to make storing and accessing data easier and faster, while delivering more value to customers,” he said in a statement.

“Data drives everything we do – from financial decisions to virtual reality gaming, and from autonomous driving to machine learning – and Intel storage innovations like these ensure incredibly quick, reliable access to that data,” Leszinske added.

According to a study from Intel and HyTrust released in April last year, two thirds of C-suite respondents said they expect increased adoption in the software defined data centre (SDDC) space.

Picture credit: Intel

Strategies for a Digital Age | @ThingsExpo #IoT #M2M #DigitalTransformation

Technologies help us deliver on a business strategy. Without a strategy, there is no rationale for deploying technologies. Likewise, there is no rationale for digital transformation unless there is a need for business transformation. If you believe this as we do, then strategy development will be a priority. Strategies, however, are developed under the guidance of a doctrine. The purpose of a doctrine is to create a high-level understanding of what we want to achieve with our strategy, and the concepts that must be employed to achieve it. An organization’s doctrine will guide strategy development, and the tactics needed to achieve a goal.


Announcing @GGU to Exhibit at @CloudExpo Silicon Valley | #AI #ML #Cloud #Analytics

SYS-CON Events announced today that Golden Gate University will exhibit at SYS-CON’s 21st International Cloud Expo®, which will take place on Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA. Since 1901, non-profit Golden Gate University (GGU) has been helping adults achieve their professional goals by providing high quality, practice-based undergraduate and graduate educational programs in law, taxation, business and related professions. Many of its courses are taught by faculty actively working in their field of expertise, providing students with skills that can be applied immediately. The new MS in Business Analytics, like most of its programs, is available fully online or in-person in downtown SF.


[slides] Enabling Business Transformation | @ThingsExpo #AI #DX #IoT #FinTech #SmartCities

In his session at @ThingsExpo, Arvind Radhakrishnen discussed how IoT offers new business models in banking and financial services organizations, with the capability to revolutionize products, payments, channels, business processes and asset management built on a strong architectural foundation. The session covered how IoT stands to impact various business parameters – including customer experience, cost and risk management – within BFS organizations.
