[session] IoT and the Rise of Hyper-Contextual Marketing | @ThingsExpo #IoT #M2M #RTC #InternetOfThings

Consumer IoT applications provide data about the user that just doesn’t exist in traditional PC or mobile web applications. This rich data, or “context,” enables the highly personalized consumer experiences that characterize many consumer IoT apps. This same data is also providing brands with unprecedented insight into how their connected products are being used, while, at the same time, powering highly targeted engagement and marketing opportunities.
In his session at @ThingsExpo, Nathan Treloar, President and COO of Bebaio, will explore examples of brands transforming their businesses by tapping into this powerful aspect of the IoT. He will also review some emerging techniques being used for hyper-contextual engagement in consumer IoT apps.

[session] WebRTC Business Models By @Kurentoms | @ThingsExpo #IoT #M2M #API #RTC #WebRTC

WebRTC services have already permeated corporate communications in the form of videoconferencing solutions. However, WebRTC has the potential to go beyond calls and catalyze a new class of services with capabilities such as mass-scale real-time media broadcasting, enriched and augmented video, and person-to-machine and machine-to-machine communications.
In his session at @ThingsExpo, Luis Lopez, CEO of Kurento, will introduce the technologies required for implementing these ideas, along with some early experiments performed in the Kurento open source software community in areas such as entertainment, video surveillance, interactive media broadcasting, gaming and advertising. He will conclude with a discussion of their potential business applications beyond plain call models.
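
For orientation, here is a minimal sketch of the standard browser WebRTC APIs such services build on; the signaling transport is application-defined, and the Kurento server side discussed in the session is not shown.

```typescript
// Minimal sketch of a WebRTC caller using the standard W3C browser APIs.
// The signaling channel (here a WebSocket) is an application choice.
async function startCall(signaling: WebSocket): Promise<void> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Capture the local camera and microphone, and add the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  for (const track of stream.getTracks()) pc.addTrack(track, stream);

  // Relay ICE candidates to the remote peer through the signaling channel.
  pc.onicecandidate = (e) => {
    if (e.candidate) signaling.send(JSON.stringify({ candidate: e.candidate }));
  };

  // Create and send the SDP offer; the remote answer arrives via signaling.
  await pc.setLocalDescription(await pc.createOffer());
  signaling.send(JSON.stringify({ sdp: pc.localDescription }));
}
```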

The FT discusses app and cloud strategy

BCN caught up with Christy Ross, Head of Application and Publishing Services, Technology at the Financial Times, to get some insight into the company’s approach to digital publishing, mobile apps and the cloud.

BCN: From a digital perspective, what is the FT currently focussed on?

Christy Ross: Print has been written off for years now, no pun intended, but we’re still doing very well. However, our main interest these days – rather than investing in the print product – is in identifying and supplying other means of content delivery, and then actually making some money from that. Over the past few years we’ve done things to help us maintain a direct relationship with our subscribers, such as building our own web app rather than placing anything on the App Store or Play Store.

We have also done a lot around building APIs, so that we can provide distinct feeds of information to businesses, enabling them to come to us and say, ‘we are particularly interested in these areas of news, or analysis, and will pay you for that’. Of course we’ve also seen mobile take off massively, so probably over 50% of our new subscription revenue comes from mobile, rather than from the browser or tablets.
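
As an illustration of the kind of content-feed API being described, here is a hedged sketch; the endpoint, parameters and response shape are hypothetical and are not the FT’s actual API.

```typescript
// Hypothetical content-feed client; none of these names are the FT's real API.
interface Article {
  id: string;
  headline: string;
  topics: string[];
  publishedAt: string;
}

async function fetchTopicFeed(apiKey: string, topic: string): Promise<Article[]> {
  // A business subscribing to a feed for a specific area of news or analysis.
  const url = `https://api.example-publisher.com/v1/content?topic=${encodeURIComponent(topic)}`;
  const res = await fetch(url, { headers: { "X-Api-Key": apiKey } });
  if (!res.ok) throw new Error(`Feed request failed: ${res.status}`);
  return (await res.json()) as Article[];
}
```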

Why is the FT able to be so confident when asking for revenue from its readers?

We’ve been quite lucky. We were one of the first UK newspapers – if not the first – to introduce a paywall. A lot has been made of the fact that paywalls ‘don’t work,’ and we’ve seen a number of other daily national papers put them up and pull them back down again, but we are very wedded to ours.

That’s because we are a niche product. If you like, we’re ‘the business world’s second newspaper.’ So in the UK someone will have, say, their Times or the Telegraph (or in the US they’ll have the Washington Post or the New York Times), but then their second newspaper will be the Financial Times. You can’t get our content anywhere else, particularly not the analysis we provide. While we are interested in breaking news and do follow it, our key differentiator is analysis and commentary on what is going on in the world and what it means long term. People are able to use these insights in their business decisions – and people are prepared to pay for that.

Is there anything unique about your current mobile application in itself?

At the end of the day we are a content provider. It’s about getting the content out as quickly as we can, and providing the tools to our editorial users so they can concentrate on writing and not worry so much about layout. We’re doing a lot more around templating, metadata, and making our content much richer, so that, when a reader comes on, the related stories actually mean something to them, and it’s easier for them to navigate through our considerable archive on the same people and companies and form a much more rounded opinion.

What about internal technical innovation?

We’ve built our own private cloud, and we’re also heavily investigating and starting to use AWS, so we’re doing a lot to support the public cloud. One of our strategy points is that for any new application or functionality we look to bring online, we have to start by seeing whether we can host and provide it on the public cloud, and there has to be a very good technical reason for not doing so. We’re pushing it much more that way.

We have also borrowed a concept from Netflix, their Chaos Monkey approach, where every now and then we deliberately break parts of our estate to see how resilient applications are, how we react to some of our applications not being available, and what that means for our user base. Just a couple of weekends ago we completely turned off one of our UK data centres, where we’d put most of our publishing and membership applications in advance, to see what it did, and also to see whether we could bring up the applications in our other data centres – how long it took us and what it meant for things like our recovery time objectives.
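
A minimal sketch of the chaos-testing idea Ross describes, in the spirit of Netflix’s Chaos Monkey: pick one instance from a deliberately tagged test group and terminate it, then observe how the surrounding systems cope. The tag name, group name and region are assumptions for illustration.

```typescript
import {
  EC2Client,
  DescribeInstancesCommand,
  TerminateInstancesCommand,
} from "@aws-sdk/client-ec2";

// Terminate one randomly chosen running instance from an opt-in "chaos" group.
async function terminateRandomInstance(): Promise<void> {
  const ec2 = new EC2Client({ region: "eu-west-1" }); // assumed region

  const { Reservations = [] } = await ec2.send(
    new DescribeInstancesCommand({
      Filters: [
        { Name: "tag:chaos-group", Values: ["resilience-test"] }, // assumed tag
        { Name: "instance-state-name", Values: ["running"] },
      ],
    })
  );

  const ids = Reservations.flatMap((r) => r.Instances ?? [])
    .map((i) => i.InstanceId)
    .filter((id): id is string => Boolean(id));
  if (ids.length === 0) return; // nothing in the test group

  const victim = ids[Math.floor(Math.random() * ids.length)];
  await ec2.send(new TerminateInstancesCommand({ InstanceIds: [victim] }));
  console.log(`Chaos test: terminated ${victim}`);
}
```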

Christy Ross will be appearing at Apps World Europe (18–19 November, ExCeL, London)

IBM launches cloud-based Internet of Things service for electronics industry

IBM has launched the first in a series of cloud-based, industry-specific services for the Internet of Things (IoT), with an offering for the electronics industry. Its debut IoT service will gather data from individual sensors to provide instant analysis of the production processes of electronics manufacturers.

Meanwhile, IBM said it has integrated its IoT system, IBM IoT Foundation, with the firmware of chipmaker ARM, so that all devices driven by ARM chips will be able to release data for analysis. IBM said the fusion will allow ‘huge quantities’ of data from industrial appliances, weather sensors and wearable monitoring devices to be gathered, analysed and acted upon.

The IBM IoT Foundation, a cloud-hosted offering, aims to simplify the complexity of analysing masses of machine-to-machine (M2M) data. It offers tools for analysing large quantities of fast-moving data and provides access to Bluemix, IBM’s service for managing and prioritising data flows. It also promises to secure confidential financial, IP and strategy information.

During the integration process, products powered by ARM’s ‘mbed-enabled’ chips will automatically register with the IBM IoT Foundation and connect with IBM analytics services. This unification means that information gathered from sensors in any connected device is delivered to the cloud for analysis. The IoT connection also means that commands can be pushed to devices, with actions being taken on the basis of the intelligence gathered.
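
As a hedged sketch of how such a device-to-cloud connection might look: the IoT Foundation speaks MQTT, so an mbed-class device could publish telemetry events and subscribe to commands roughly as follows. The organization ID, device identifiers and token are placeholders, and the exact connection conventions should be taken from IBM’s documentation rather than from this sketch.

```typescript
import mqtt from "mqtt";

// Placeholder identifiers; substitute values from your IoT Foundation organization.
const org = "myorg6";
const deviceType = "assembly-sensor";
const deviceId = "line-42";

const client = mqtt.connect(
  `mqtts://${org}.messaging.internetofthings.ibmcloud.com:8883`,
  {
    clientId: `d:${org}:${deviceType}:${deviceId}`,
    username: "use-token-auth",
    password: "DEVICE_AUTH_TOKEN", // placeholder device token
  }
);

client.on("connect", () => {
  // Publish a telemetry event for the cloud-side analytics to consume...
  client.publish("iot-2/evt/status/fmt/json", JSON.stringify({ d: { temp: 71.3 } }));
  // ...and listen for commands pushed back down, e.g. an emergency stop.
  client.subscribe("iot-2/cmd/+/fmt/json");
});

client.on("message", (topic, payload) => {
  console.log(`Command received on ${topic}: ${payload.toString()}`);
});
```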

If an alarm message is triggered on a machine in a manufacturing plant, the machine can now be automatically shut down and an engineer dispatched to troubleshoot the disruption, IBM said. This cost-saving damage limitation is best achieved by combining the knowledge and communications protocols of different vendors at different levels of the ICT stack, according to IBM’s General Manager for the Internet of Things, Pat Toole.

“The IoT is now at an inflection point and it needs the big data expertise of IBM and little data expertise of ARM to ensure it reaches its global potential,” said Toole.

Original design manufacturers and OEMs, like Ionics, are already seeing value in the chip level architecture harmonisation, said Krisztian Flautner, the General Manager of ARM’s IoT business. “Deploying IoT technology has to be easy, secure and scalable for it to feel like a natural extension of a company’s business,” said Flautner.

Why hybrid cloud is so important – and why the market prediction is so large

Now that we’ve had a few years of cloud adoption under our belts, it’s a good time to look at how some of the models are performing. Public cloud has its own great case study in Amazon AWS, and private clouds have strong supporters among forward-thinking IT teams. But there is another model that is winning over IT teams, the hybrid cloud, and with good reason.

With the rise of cloud models, we’ve heard a lot about the benefits of public and private clouds. Public clouds gave organisations the ability to leverage low-cost services, such as Amazon AWS, to help them transition to cloud models. Private clouds were built in-house to take advantage of the same technologies that make public clouds so attractive, but sadly the economies of scale often don’t work for small organisations, because the upfront costs of purchasing hardware and licences can exceed the cost of simply leveraging cloud services from a third-party provider.

Hybrid clouds came out of the evolution of data centres into cloud environments. IT folks weren’t 100% sold on the idea of moving everything into a cloud environment, whether public or private, due to the perceived risks around security, availability and, most importantly, control. But here we are a few years later, and IDC predicts the global hybrid cloud market will grow from over $25 billion in 2014 to a staggering $84 billion by 2019. Very impressive for a model that wasn’t expected to see adoption on the scale of public or private cloud.

So why is hybrid cloud so important? And why is the market prediction so large? Well, first let’s start with the benefits of hybrid cloud. Simply put, hybrid clouds provide all the benefits of a regular cloud environment such as integration, networking, management and security, but applied to a partially internal environment.

This means an organisation can start with in-house computing resources, add external cloud resources to scale up, and then either go back and replace those cloud resources with more on-premise infrastructure, or continue to leverage cloud solutions to balance manageability and security against the low-cost benefits of outsourcing to cloud providers where it makes sense.
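
A toy sketch of that bursting pattern (all names hypothetical, not modelled on any particular product): jobs prefer the in-house pool and spill over to a pay-per-use cloud pool only when local capacity is exhausted.

```typescript
// Hypothetical scheduler illustrating cloud bursting between an on-premise
// pool and a public cloud pool.
interface Pool {
  name: string;
  capacity: number;   // maximum concurrent jobs
  running: number;    // jobs currently executing
  submit(jobId: string): void;
}

function scheduleJob(jobId: string, onPrem: Pool, cloud: Pool): string {
  // Prefer the in-house pool: the hardware is already paid for and
  // stays under direct management.
  if (onPrem.running < onPrem.capacity) {
    onPrem.submit(jobId);
    return onPrem.name;
  }
  // Burst to the pay-per-use cloud pool only when local capacity is full.
  cloud.submit(jobId);
  return cloud.name;
}
```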

By combining in-house private and public clouds, organisations benefit from not just the standardisation of shared services, but also scalability, pay per use models, and the ability to launch new services more efficiently. By tacking on external services and connecting them through UI technologies, APIs and publishing services, these hybrid models make it easier to use the cloud services as a true extension of in-house data centres.

Imagine using external storage services as if they were sitting in your data centre, but without the care and feeding requirements such as patching, maintenance and backups. Cloud computing can also be leveraged to help with data processing or development, and help reduce not just the capital investments associated with building the environment, but also the costs of resources sitting idle between projects.

The best part of hybrid cloud is that it’s a solution that can be used in so many different contexts, from cloud security and networking to integration, management, and consulting. Plus, it applies to just about every vertical, including media and entertainment, complex computing, healthcare, government, education, and analytics-driven organisations. It’s a great way to augment your IT team and resources where you may not have the luxury of building up teams and skill sets or purchasing new infrastructure.

The market is at a point now where the complexities that originally came from designing, implementing and maintaining a hybrid environment are now mostly solved.  This means organisations have more solutions to choose from, more supported vendors and availability of providers, and increased simplicity when it comes to ensuring visibility, connectivity and stability between multiple environments.

Salesforce says its Health Cloud is about building relationships, not records

Salesforce has unveiled a new cloud-based system aimed at helping clinicians build stronger relationships with patients. The launch comes in the same week that UK health secretary Jeremy Hunt announced plans to give patients in England access to their entire medical record by 2018, and to let them read and add to their GP record using their smartphone within a year.

Salesforce Health Cloud (SHC) is a cloud-based patient relationship manager that aims to give health service providers a more complete picture of each patient, by integrating data from electronic medical records, wearables and other sources, such as general practitioner and hospital notes.

The service was developed in the US, where recent legislation – such as the Affordable Care Act (ACA) – aims to put more emphasis on improving the patient experience. According to Salesforce, wearable technology has changed the way health services are administered, and new cloud apps must cater for the new expectations of patients. The SHC is designed to meet the demands of a generation of digital natives who grew up with iPhones, Facebook and FitBits and expect to use technology to manage their care. According to Salesforce’s research, 71 per cent of ‘millennials’ (those reaching adulthood around the year 2000) want their doctors to provide a mobile app to actively manage their health. Salesforce claims that 63 per cent of them want health data extracted from their wearables to be available to their doctors.

The Health Cloud was developed with input from a variety of healthcare organisations, including Centura Health, DJO Global, Radboud University Medical Center, Philips and the University of California, San Francisco. Development partners included Accenture, Deloitte Digital, PwC, MuleSoft and Persistent Systems, who collectively integrated records and customised content.

Features include a Patient Caregiver Map, which can map household relationships, as well as all providers and specialists involved in a patient’s care. A ‘Today’ screen alerts caregivers to timely issues, such as missed appointments or the need to refill medications. The logic of the system is that fewer patients will fall through the cracks in any health service, an issue that Salesforce Chatter – an internal social networking tool – aims to combat through a review process for internal health service conversations.
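
For a flavour of how such patient data might be reached programmatically, here is a hedged sketch using Salesforce’s standard REST query API; the object and field names are hypothetical, since the article does not cover Health Cloud’s actual data model.

```typescript
// Query patient records via the standard Salesforce REST API.
// Patient__c and its fields are hypothetical illustration names.
async function getPatients(instanceUrl: string, accessToken: string) {
  const soql = "SELECT Id, Name, Care_Team__c FROM Patient__c LIMIT 10";
  const res = await fetch(
    `${instanceUrl}/services/data/v35.0/query/?q=${encodeURIComponent(soql)}`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!res.ok) throw new Error(`SOQL query failed: ${res.status}`);
  return (await res.json()).records;
}
```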

“The era of precision healthcare is upon us,” said Joshua Newman, Chief Medical Officer for Salesforce Healthcare and Life Sciences.

Network Simulation in Parallels Desktop Pro Edition

One of the additional features in Parallels Desktop Pro Edition is the ability for a developer or tester to simulate various network conditions. The speed of the network can be artificially adjusted, and the quality of the connection can purposely be degraded to see how an app being developed responds in those conditions. In this […]

BAE Systems brings Cloud Security Solutions to New Zealand and Australia

BAE Systems Applied Intelligence is bringing its cloud security solutions to New Zealand and Australia for the first time. It is introducing security products intended to defend users against email-based threats such as targeted and ‘Zero Day’ attacks. The solutions are said to reduce integration time and complexity while offering an alternative to on-premise software and hardware.

According to Blount, BAE Systems Applied Intelligence’s director of cyber security products, many businesses are naïve when it comes to email-based threats, as many cyber attacks begin with an email. “Whether this is a targeted spear-phishing campaign or a shotgun-approach distribution of ransomware, the likelihood of success is unfortunately very high in the absence of the necessary protection,” Blount stated.

One of the first services being offered in the New Zealand and Australia region is the Email Protection Service, which provides extensive protection against some of the most advanced threats. “With 70 to 90 per cent of malware being unique to any single organization, the most difficult attacks to defend against are Zero Day attacks,” Blount said. “These are attacks that are unknown or have not previously been seen and that, as a consequence, require advanced defense.” The main element of the Email Protection Service is Zero Day Prevention, which analyzes emails within the cloud for malicious content before it reaches the recipient.

One of the largest risks to businesses is the accidental or intentional leak of data. Because many companies are quite unprepared for this type of issue, BAE Systems provides an Insider Threat Prevention service, which makes it considerably easier to find and investigate such issues.

Blount added: “Our cloud-based cyber security solutions leverage BAE Systems’ expertise as a leader in risk analytics and cyber defense. With this launch, we are introducing A/NZ businesses to a new kind of protection against sophisticated cyber-attacks. Because the solutions are cloud-based, they are easy to buy, consume and manage, with a short delivery time frame. And they have the inherent flexibility to scale up or down as required, so companies can assess what they need and have a service which can grow with their organization.”

Why the Tomb Raider publishers created their own database as a service

One of the most proprietary technologies to come to market in the past 25 years is cloud computing. That’s the claim I made to the editor of this very publication back in July. The cloud’s promise of flexibility may prove to be a Trojan horse of vendor lock-in as you move up each layer of the vendor’s stack, consuming not just infrastructure, but also software and services.

In this article, I’d like to explain why there’s a risk of cloud lock-in and one robust tactic for avoiding it.

In the beginning

All the major cloud vendors began with infrastructure as a service (IaaS) offerings with two irresistible features: dramatically reduced infrastructure provisioning time, and the advantage of a pay-as-you-go elastic pricing model. This was incredibly well received by the market, and today it’s hard to imagine that most enterprise workloads won’t eventually be deployed on these offerings.

With a captive audience, these same vendors realised they could simply move up the stack, putting an ‘aaS’ on every layer. The most valued and most critical software component of all, the database, is very much the end game here as a database as a service (DBaaS). Amazon, Microsoft, and Google, among others, have developed wonderfully simple DBaaS offerings that eliminate much of the complexity and headache from running your own deployment in the cloud. The challenge is that the data always outlives the applications.

There is nothing wrong with the idea of DBaaS. Your business is probably using some of it right now.  Most organisations are resource constrained, especially when it comes to database admins. They are happy to give up some control and flexibility for convenience. In some cases their choice may be as stark as to either build their application on a DBaaS or not to build at all.

Many organisations are just recovering from an era when vendors used punitive and rigid licensing to force inflexible and outdated products on people. In the past 15 years we’ve seen the unstoppable march of Linux, as well as open source alternatives for every layer of the technology stack. While these options were initially viewed as inferior to their proprietary competitors, today open source is not only legitimate, it has become the innovator in many categories of technology.

Cloud vendors developed most of their offerings on an open source stack, and for good reason. It would be easy to view this as a continuation of the move away from vendor lock-in, but the truth is if you take a closer look at the pricing models, the egress charges, the interfaces, the absence of source code and so on, you’ll notice a familiar whiff coming from many of the cloud contracts. Prediction: DBaaS is going to be the new lock-in that everyone complains viciously about.

So here’s your challenge: you want to offer your team of developers the convenience of a DBaaS, but you want to keep complete control of your stack to avoid lock in and maximise flexibility. You also want to avoid an unsightly invoice from *insert cloud giant here* stapled to your forehead every month. What do you do?

The third way

Square Enix is one of the world’s leading providers of gaming experiences, publishing iconic titles like Tomb Raider and Final Fantasy. Collectively Square Enix games have sold hundreds of millions of units worldwide. It’s not just in gaming distribution that Square Enix is an innovator though. The operations team has also taken a progressive approach to delivering infrastructure to its army of designers and developers.  

Every game has its own set of functionality, so each team of developers uses dedicated infrastructure in a public cloud to store unique data sets for their game. Some functions are used across games, such as leaderboards, but most functions are specific to a given title. For example, Hitman Absolution introduced the ability for players to create their own contracts and share those with other players.

As the number and complexity of online games grew, Square Enix found it could not scale its infrastructure, which at that time was based on a relational database. The operations team needed to overcome that scaling issue and provide all the gaming studios with access to a high performance database. To do this, they migrated to a non-relational database and built a multi-tenant platform they call Online Suite. Online Suite is deployed as one instance of infrastructure that is shared across the company and studios. Essentially, the ops team built their own MongoDB as a service (MDaaS) which is delivered to all of Square Enix’s studios and developers.

The Online Suite provides an API that allows the studios to use the MDaaS to store and manage metrics, player profiles, info cast information, leaderboards and competitions. The MDaaS is also used to let players share messages across all supported platforms, such as PlayStation, Xbox, PC, web, iOS, and Android. Essentially, the Online Suite supports any functionality that is shared across multiple games.
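
As an illustrative sketch (all names hypothetical) of the kind of multi-tenant call such a platform might expose, here is a leaderboard query with the MongoDB Node.js driver, using one database per game to keep tenants isolated on the shared, centrally operated deployment.

```typescript
import { MongoClient } from "mongodb";

// Hypothetical multi-tenant leaderboard lookup: each game gets its own
// database on shared MongoDB infrastructure operated by a central team.
async function topScores(uri: string, gameId: string, limit = 10) {
  const client = await MongoClient.connect(uri);
  try {
    const scores = client.db(`game_${gameId}`).collection("leaderboard");
    return await scores
      .find({}, { projection: { _id: 0, player: 1, score: 1 } })
      .sort({ score: -1 })  // highest scores first
      .limit(limit)
      .toArray();
  } finally {
    await client.close();
  }
}
```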

This gives them the best of both worlds: control and convenience. They are able to maintain full control of their self-managed environment, with the convenience that comes from a management platform consumed as a service from the cloud.

Square Enix can now scale dozens of database clusters on-demand and deliver 24×7 availability to its developers around the world, all with a single operations staffer. By adopting a multi-tenant DBaaS, Square Enix has been able to consolidate its database instances. This has improved performance and reliability while simplifying life for developers.

The way forward

Crucially, Square Enix has not lost any control. The ops team can still access the code throughout the stack, but they’ve hidden that complexity from their users. As far as the developers are concerned, they have the irresistible cloud experience that is flexible and elastic, but Square Enix has protected itself from lock-in by keeping ownership of the stack.

I’m not crazy. I don’t think this approach would work in every single organisation. I do hope that the example is instructive though. It’s not always a simple dichotomy between the burden of running your own stack or losing control and getting locked-in.

Cloud computing is dramatically changing the way we create services and products. It is a great tool but it’s also a Siren’s call of flexibility and cost savings which has the potential to trap you and to limit your options. But if you learn the lessons from our friends behind Tomb Raider, you might just be able to navigate a course out of cloud cuckoo land.

Painful Breakups: The Beatles, Ben & Jen, Now Symantec & Veritas

You probably saw the rumors come across Twitter, Facebook or on the newsstands in a checkout aisle. Perhaps, like me, you never thought it would actually happen, but the day is coming. Grab a tissue, Symantec and Veritas are breaking up.

Years ago, Symantec, an anti-virus company, merged with Veritas, a backup company known for such products as Backup Exec and NetBackup, forming a superpower of sorts. This, however, is changing. Although Symantec and Veritas have been a staple in our lives for many years, starting next month they will be separated.

We’ve seen some tough breakups in the past: The Beatles, Ben and Jen, Britney and Justin, Ross and Rachel, Belichick and Revis, Peaches & Herb (though I think they reunited). Yet this Symantec and Veritas drama really stings. Like all good relationships, this one is coming to an end.

What’s the Deal?

Starting Friday, October 2nd, all backup-related products like Backup Exec and NetBackup will change. That will be the last day to order these products under the current Symantec pricing and part number model.

On Monday, October 5th, any existing open quotes for Backup Exec and NetBackup will need to be re-quoted using Veritas’s part numbers and pricing. The new Veritas SKUs won’t be visible until October 5th, so, unless that changes, new Veritas quotes can’t be created before then. With so much change taking place, there is a good possibility that pricing, at least on certain products, could increase.

Renewals: big change here. With Veritas, there will no longer be a 30-day grace period to get your renewal orders in. Any Backup Exec and NetBackup renewals will have to be placed prior to their expiration date; otherwise Veritas will apply reinstatement fees. This will be strictly enforced.

There are no changes to Symantec (i.e. AV) products.

Dates to know:

Friday, October 2nd – Last day to use Symantec-related quotes for Backup Exec and NetBackup. This includes new and renewal quotes.

Monday, October 5th – The new Veritas SKUs become available. Any open quotes will need to change over to the new part numbers. Pricing will likely change as well.

What Next?

If you’re working with GreenPages, we will provide you with a new Veritas quote; however, because we don’t currently know whether there will be a price increase, we’d recommend placing your order prior to Friday, October 2nd. GreenPages also has fantastic backup and retention solution architects and engineers, so if you have any questions about Veritas or any other vendors you could potentially switch to, please let us know.

By Rob O’Shaugnessy, Director of Software Sales & Renewals