
More than just a low sticker price: Three key factors for a successful SaaS deployment

One of the key challenges for businesses when evaluating new technologies is understanding what a successful return on investment (ROI) looks like.

In its infancy, the business benefits of the cloud-based Software-as-a-Service (SaaS) model were simple: save on expensive infrastructure while remaining agile enough to scale up or down with demand. Yet as cloud-based tools have become ubiquitous, both inside and outside the workplace, measuring success has extended beyond simple infrastructure savings.

In theory, the ability to launch new projects in hours and replace high infrastructure costs with a low monthly subscription should deliver substantial ROI. But what happens to that ROI when the IT team discovers, six months after deployment, that end-user adoption is as low as 10 per cent? If businesses calculated the real “cost per user” in these instances, the benefits promised by the cloud would simply evaporate. This is becoming a real issue for businesses that bought on the promise of scalability or reduced infrastructure costs.

In reality, success demands real organisational change, not just a cheap licensing fee. That’s why IT buyers must take the time to look beyond the basic “sticker price” and begin to understand the end-user.

Aiming for seamless collaboration

As the enterprise workplace becomes ever-more fragmented, a “collaborative approach” is becoming increasingly important to business leaders. Industry insight, experience and understanding are all things that can’t be easily replicated by the competition. Being able to easily share this knowledge across an entire organisation is an extremely valuable asset – especially when trying to win new customers. That said, in organisations where teams need to operate across multiple locations (be it in different offices or different countries), this can be difficult to implement: collaboration becomes inefficient, content gets lost and confidential data is exposed – harming reputation and reducing revenue opportunities.

Some cloud-based SaaS solutions are quite successful in driving collaboration, improving the agility of teams and the security of their content. For example, Baker Tilly International – a network of 157 independent accountancy and business advisory firms, with 27,000 employees across 133 countries – significantly improved efficiency and created more time to bid for new business by deploying a cloud-based collaboration platform with government-grade security. However, not all organisations experience this success when deploying new cloud technologies. Some burden themselves with services that promise big ROI through innovation, but struggle with employee adoption.

Here are the three key considerations all IT buyers must weigh when evaluating a SaaS deployment:

  1. Building awareness and confidence for better user experience

All enterprise systems, cloud or otherwise, need ownership and structure. IT teams need to understand how users and information move between internal systems. The minute workflows become broken, users will abandon the tool and default back to what has worked for them in the past. The result: poor user adoption and even increased security risks as users try to circumvent the new process. Building awareness and confidence in cloud technologies is the key to curbing this.

While cloud-based SaaS solutions are sold on their ease of use, end-user education is paramount to ensuring an organisation sees this value. The truth is, media scaremongering around data breaches has resulted in a fear of “the cloud”, causing many employees, especially those that don’t realise the consumer products they use are cloud-based, to resist using these tools in the workplace. In addition to teaching employees how to use services, IT teams must be able to alleviate employee concerns – baking change management into a deployment schedule.

These change management services aren’t often included within licensing costs, making the price-per-user seem artificially low. IT teams must be sure to factor in education efforts for driving user adoption and build an ROI not against price-per-user, but the actual cost-per-user.

  2. Data security isn’t just about certifications

There’s a fine line between usability and security. If forced to choose, security must always come first. However, be aware that in the age of citizen IT, too much unnecessary security can actually increase risk. That may seem contradictory, but if usability is compromised too deeply, users will default to legacy tools and shadow IT, or avoid processes altogether.

Many businesses still struggle with the concept of their data being stored offsite. However, for some this mindset is changing and the focus for successful SaaS implementations is enablement. In these businesses, IT buyers not only look for key security credentials – robust data hosting controls, application security features and secure mobile working – to meet required standards and compliance needs, but also a quality user experience. The most secure platform in the world serves no purpose if employees don’t bother to use it.

Through clear communication and a well-thought-out on-boarding plan for end users, businesses can ensure all employees are trained and adequately supported as they begin using the solution.

  3. Domain expertise

One of the key advantages of cloud-based software is its ability to scale quickly and drive business agility. Today, scale is not only a measure of infrastructure but also a measure of user readiness.

This requires SaaS vendors to respond quickly to a business’s growth by delivering all of the things that help increase user adoption, including adequate user training, new-user on-boarding, and even monitoring usage data and feedback to deliver maximum value as businesses begin to scale.

Yes, SaaS removes the need for big upgrade costs, but without support from a seasoned expert, poor user adoption puts ROI at risk.

SaaS is about service

Cloud-based SaaS solutions can deliver a flexible, efficient and reliable way to deploy software into an organisation, helping to deliver ROI through reduced deployment time and infrastructure savings. However, these businesses must never forget that the second “S” in SaaS stands for service, and that successful deployments require more than just a low “sticker price”.

Written by Neil Rylan, VP of Sales EMEA, Huddle

Will containers change the world of cloud?

The rise of containers as a technology has been glorious and confusing in equal measure. While touted by some as the saviour of developers, and by others as the end of VMs, the majority simply don’t understand containers as a concept or a technology.

In the simplest of terms, containers let you pack more computing workloads onto a single server. In theory, that means you can buy less hardware, build or rent less data centre space, and hire fewer people to manage that equipment.

“In the earlier years of computing, we had dedicated servers which later evolved with virtualisation,” says Giri Fox, Director of Technical Services at Rackspace. “Containers are part of the next evolution of servers, and have gained large media and technologist attention. In essence, containers are the lightest way to define an application and to transport it between servers. They enable an application to be sliced into small elements and distributed on one or more servers, which in turn improves resource usage and can even reduce costs.”

There are some clear differences between containers and virtual machines, though. Linux containers give each application its own isolated environment in which to run, but multiple containers share the host server’s operating system. Since you don’t have to boot an operating system, you can create containers in seconds rather than the minutes a virtual machine can take. They are faster, require less memory, offer lightweight isolation and are highly portable.

“Containers are more responsive and can run the same task faster,” adds Fox. “They increase the velocity of application development, and can make continuous integration and deployment easier. They often offer reduced costs for IT; testing and production environments can be smaller than without containers. Plus, the density of applications on a server can be increased which leads to better utilisation.

“As a direct result of these two benefits, the scope for innovation is greater than with previous technologies. This can facilitate application modernisation and allow more room to experiment.”

So the benefits are pretty open-ended: speed of deployment, flexibility to run anywhere, no more expensive licences, greater reliability and more opportunity for innovation.

Which all sounds great, doesn’t it?

That said, a recent survey from the Cloud & DevOps World team brought out some very interesting statistics, first and foremost on understanding of the technology: 76% of respondents agreed with the statement “Everyone has heard of containers, but no-one really understands what containers are”.

While containers have the potential to be the next big thing in the cloud industry, unless those in the ecosystem understand the concept and perceived benefits, they are unlikely to take off.

“Containers are evolving rapidly and present an interesting runtime option for application development,” says Joe Pynadath, ‎GM of EMEA for Chef. “We know that with today’s distributed and lightweight apps, businesses, whether they are new start-ups or traditional enterprises, must accelerate their capabilities for building, testing, and delivering modern applications that drive revenue.

“One result of the ever-greater focus on software development is the use of new tools to build applications more rapidly and it is here that containers have emerged as an interesting route for developers. This is because they allow you to quickly build applications in a portable and lightweight manner. This provides a huge benefit for developers in speeding up the application building process. However, despite this, containers are not able to solve the complexities of taking an application from build through test to production, which presents a range of management challenges for developers and operations engineers looking to use them.”

There is certainly potential for containers within the enterprise environment, but as with all emerging technologies there is a certain level of confusion as to how they will integrate within the current business model, and how the introduction will impact the IT department on a day-to-day basis.

“Some of the questions we’re regularly asked by businesses looking to use containers are: ‘How do you configure and tune the OS that will host them? How do you adapt your containers at run time to the needs of the dev, test and production environments they’re in?’” comments Pynadath.

“While containers allow you to use discovery services or roll your own solutions, the need to monitor and manage them in an automated way remains a challenge for IT teams. At Chef, we understand the benefits containers can bring to developers and are excited to help them automate many of the complex elements that are necessary to support containerized workflows in production.”

Vendors are confident that the introduction of containers will drive further efficiencies and speed within the industry, though we’re yet to see a firm commitment from the mass market to demonstrate the technology will take off. The early-adopter uptake is promising, and there are case studies to demonstrate the much-lauded potential, but it’s still early days.

In short, containers are good, but most people just need to learn what they are.

Overcoming the data integration challenge in hybrid and cloud-based environments


Industry experts estimate that data volumes are doubling in size every two years. Managing all of this is a challenge for any enterprise, but it’s not the volume of data so much as the variety that presents a problem. With SaaS and on-premises applications, machine data, and mobile apps all proliferating, we are seeing the rise of an increasingly complicated value-chain ecosystem. IT leaders need to adopt a portfolio-based approach and combine cloud and on-premises deployment models to sustain competitive advantage. Improving the scale and flexibility of data integration across both environments to deliver a hybrid offering is necessary to provide the right data to the right people at the right time.

The evolution of hybrid integration approaches creates requirements and opportunities for converging application and data integration. The definition of hybrid integration will continue to evolve, but its current trajectory is clearly headed to the cloud.

According to IDC, cloud IT infrastructure spending will grow at a compound annual growth rate (CAGR) of 15.6 per cent between now and 2019, at which point it will reach $54.6 billion. In line with this, customers need to advance their hybrid integration strategy to best leverage the cloud. At Talend, we have identified five phases of integration, starting from the oldest and most mature right through to the most bleeding-edge and disruptive. Here we take a brief look at each and show how businesses can optimise the approach as they move from one step to the next.

Phase 1: Replicating SaaS Apps to On-Premises Databases

The first stage in developing a hybrid integration platform is to replicate SaaS applications to on-premises databases. Companies in this stage typically either need analytics on some of the business-critical information contained in their SaaS apps, or they are sending SaaS data to a staging database so that it can be picked up by other on-premises apps.

In order to increase the scalability of existing infrastructure, it’s best to move to a cloud-based data warehouse service within AWS, Azure, or Google Cloud. The scalability of these cloud-based services means organisations don’t need to spend cycles refining and tuning the databases. Additionally, they get all the benefits of utility-based pricing. However, with the myriad of SaaS apps today generating even more data, they may also need to adopt a cloud analytics solution as part of their hybrid integration strategy.
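As a sketch of this replication step, the snippet below upserts records pulled from a hypothetical SaaS API into a staging table. SQLite stands in for the cloud data warehouse, and all names and the schema are illustrative rather than any particular vendor’s:

```python
import sqlite3

# Hypothetical records as returned by a SaaS app's REST API
# (illustrative field names, not a real vendor schema).
saas_records = [
    {"id": 1, "account": "Acme Ltd", "mrr": 1200.0},
    {"id": 2, "account": "Globex", "mrr": 800.0},
]

def replicate(records, conn):
    """Upsert SaaS records into a staging table so analytics and
    other on-premises apps can pick them up; re-running is idempotent."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS saas_accounts ("
        "id INTEGER PRIMARY KEY, account TEXT, mrr REAL)"
    )
    conn.executemany(
        "INSERT INTO saas_accounts (id, account, mrr) "
        "VALUES (:id, :account, :mrr) "
        "ON CONFLICT(id) DO UPDATE SET "
        "account = excluded.account, mrr = excluded.mrr",
        records,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
replicate(saas_records, conn)
print(conn.execute("SELECT SUM(mrr) FROM saas_accounts").fetchone()[0])  # 2000.0
```

Because the load is an upsert keyed on the SaaS record’s id, the same sync job can run on a schedule without duplicating rows.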

Phase 2: Integrating SaaS Apps Directly with On-Premises Apps

Each line of business has its preferred SaaS app: sales departments have Salesforce, marketing has Marketo, HR has Workday, and finance has NetSuite. However, these SaaS apps still need to connect to a back-office, on-premises ERP system.

Due to the complexity of back-office systems, there isn’t yet a widespread SaaS solution that can serve as a replacement for ERP systems such as SAP R/3 and Oracle EBS. Businesses would be best advised not to try to integrate with every single object and table in these back-office systems – but rather to accomplish a few use cases really well so that their business can continue running, while also benefiting from the agility of cloud.
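A minimal sketch of this “few use cases done well” approach: map one well-chosen object (invoices, say) from a hypothetical SaaS payload onto the field names an ERP staging layer expects, rather than integrating with every table. All field names here are illustrative, not a real SAP or Oracle integration:

```python
# Illustrative mapping from a hypothetical SaaS invoice payload to
# ERP-style staging fields (names chosen for flavour only).
FIELD_MAP = {
    "invoice_id": "BELNR",  # ERP document number (illustrative)
    "customer":   "KUNNR",  # ERP customer key (illustrative)
    "amount":     "WRBTR",  # ERP amount field (illustrative)
}

def to_erp(saas_invoice):
    """Map one SaaS invoice onto the ERP staging schema, keeping the
    integration surface deliberately small: one object, three fields."""
    return {erp: saas_invoice[saas] for saas, erp in FIELD_MAP.items()}

print(to_erp({"invoice_id": "INV-42", "customer": "C-7", "amount": 99.5}))
# {'BELNR': 'INV-42', 'KUNNR': 'C-7', 'WRBTR': 99.5}
```

Keeping the mapping to a handful of fields is exactly the discipline the paragraph above recommends: the business keeps running while the rest of the ERP surface stays untouched.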

Phase 3: Hybrid Data Warehousing with the Cloud

Databases and data warehouses on a cloud platform are geared toward supporting data warehouse workloads: low-cost, rapid proof-of-value and ongoing data warehouse solutions. As the volume and variety of data increase, enterprises need a strategy to move their data from on-premises warehouses to newer, Big Data-friendly cloud resources.

While they take time to decide which Big Data protocols best serve their needs, they can start by creating a Data Lake in the cloud with a cloud-based service such as Amazon Web Services (AWS) S3 or Microsoft Azure Blobs. These lakes can relieve cost pressures imposed by on-premises relational databases and act as a “demo area”, enabling businesses to process information using their Big Data protocol of choice and then transfer it into a cloud-based data warehouse. Once enterprise data is held there, the business can enable self-service with Data Preparation tools, capable of organising and cleansing the data prior to analysis in the cloud.
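The “organise and cleanse” step might look like the minimal sketch below: normalise keys, cast types and drop incomplete rows before anything is loaded into the warehouse. The raw records and cleansing rules are illustrative, not any particular Data Preparation tool’s behaviour:

```python
import csv
import io

# Raw, messy records as they might land in a cloud object store
# ("data lake") before analysis; contents are illustrative.
raw = io.StringIO(
    "city,consumption\n"
    " palo alto ,120\n"
    "PALO ALTO,130\n"
    "san jose,\n"  # missing value: this row gets dropped
)

def prepare(fh):
    """Cleanse records prior to loading a warehouse: trim and
    normalise the city name, cast the reading to int, skip
    rows with missing values."""
    out = []
    for row in csv.DictReader(fh):
        if not row["consumption"]:
            continue
        out.append({
            "city": row["city"].strip().title(),
            "consumption": int(row["consumption"]),
        })
    return out

print(prepare(raw))
# [{'city': 'Palo Alto', 'consumption': 120}, {'city': 'Palo Alto', 'consumption': 130}]
```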

Phase 4: Real-time Analytics with Streaming Data

Businesses today need insight at their fingertips in real time. In order to prosper from the benefits of real-time analytics, they need an infrastructure to support it. These infrastructure needs may change depending on the use case – whether it be to support weblogs, clickstream data, sensor data or database logs.

As big data analytics and ‘Internet of Things’ (IoT) data processing moves to the cloud, companies require fast, scalable, elastic and secure platforms to transform that data into real-time insight. The combination of Talend Integration Cloud and AWS enables customers to easily integrate, cleanse, analyse, and manage batch and streaming data in the Cloud.
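To illustrate the idea behind streaming analytics (this is a toy sketch, not Talend’s or AWS’s actual APIs), a fixed-size sliding window over a stream of sensor readings yields a continuously updated statistic as each event arrives:

```python
from collections import deque

class WindowAverage:
    """Toy streaming aggregation: a fixed-size sliding window over a
    stream of readings, standing in for the kind of real-time insight
    a streaming platform computes at scale."""

    def __init__(self, size):
        # deque with maxlen evicts the oldest reading automatically
        self.window = deque(maxlen=size)

    def push(self, value):
        """Ingest one event and return the current windowed average."""
        self.window.append(value)
        return sum(self.window) / len(self.window)

agg = WindowAverage(size=3)
for reading in [10, 20, 30, 40]:
    latest = agg.push(reading)
print(latest)  # average of the last three readings: 30.0
```

The same pattern, distributed across partitions and fed by a message bus, is essentially what managed streaming services provide.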

Phase 5: Machine Learning for Optimized App Experiences

In the future, every experience will be delivered as an app through mobile devices. In providing the ability to discover patterns buried within data, machine learning has the potential to make applications more powerful and more responsive. Well-tuned algorithms allow value to be extracted from disparate data sources without the limits of human thinking and analysis. For developers, machine learning offers the promise of applying business critical analytics to any application in order to accomplish everything from improving customer experience to serving up hyper-personalised content.

To make this happen, developers need to:

  • Be “all-in” with the use of Big Data technologies and the latest streaming big data protocols
  • Have large enough data sets for the machine-learning algorithm to recognize patterns
  • Create segment-specific datasets using machine-learning algorithms
  • Ensure that their mobile apps have properly-built APIs to draw upon those datasets and provide the end user with whatever information they are looking for in the correct context
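As a toy illustration of the third point above, even a minimal one-dimensional k-means can split a user base into segment-specific datasets. The figures and the fixed initial centroids are illustrative (and are assumed to leave both segments non-empty, so the sketch skips the empty-cluster guard a production implementation would need):

```python
def kmeans_1d(values, c0, c1, iters=10):
    """Minimal 1-D, two-cluster k-means for creating segment-specific
    datasets (e.g. splitting users by activity level). Initial
    centroids c0 and c1 are passed in to keep the sketch deterministic."""
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        seg0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        seg1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        # Update step: move each centroid to its segment's mean.
        c0 = sum(seg0) / len(seg0)
        c1 = sum(seg1) / len(seg1)
    return seg0, seg1

# Illustrative "sessions per week" figures for a small user base.
light, heavy = kmeans_1d([1, 2, 2, 3, 20, 22, 25], c0=0.0, c1=30.0)
print(light, heavy)  # [1, 2, 2, 3] [20, 22, 25]
```

An app could then serve each segment its own model or content, which is the “segment-specific datasets” step in practice.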

Making it Happen with iPaaS

In order for companies to reach this level of ‘application nirvana’, they will need to have first achieved or implemented each of the four previous phases of hybrid application integration.

That’s where we see a key role for integration platform-as-a-service (iPaaS), which is defined by analysts at Gartner as ‘a suite of cloud services enabling development, execution and governance of integration flows connecting any combination of on premises and cloud-based processes, services, applications and data within individual or across multiple organisations.’

The right iPaaS solution can help businesses achieve the necessary integration, and even bring in native Spark processing capabilities to drive real-time analytics, enabling them to move through the phases outlined above and ultimately successfully complete stage five.

Written by Ashwin Viswanath, Head of Product Marketing at Talend

Cloud academy: Rudy Rigot and his new Holberton School

Business Cloud News talks to Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA) keynote Rudy Rigot about his new software college, which opens today.

Business Cloud News: Rudy, first of all – can you introduce yourself and tell us about your new Holberton School?

Rudy Rigot: Sure! I’ve been working in tech for the past 10 years, mostly in web-related stuff. Lately, I’ve worked at Apple as a full-stack software engineer for their localization department, which I left this year to found Holberton School.

Holberton School is a 2-year, community-driven and project-oriented school, training software engineers for the real world. No classes, just real-world hands-on projects designed to optimize their learning, in close contact with volunteer mentors who all work for small companies or large ones like Google, Facebook and Apple. One of the other two co-founders is Julien Barbier, formerly the Head of Community, Marketing and Growth at Docker.

Our first batch of students started last week!

What are some of the challenges you’ve had to anticipate?

Since we’re a project-oriented school, students are mostly graded on the code they turn in, which they push to GitHub. Some of this code is graded automatically, so we needed to be able to run each student’s code (or each team’s code) automatically in a fair and equal way.

We needed to get information on the “what” (what is returned in the console), but also on the “how”: how long does the code take to run? How much resource is consumed? What is the return code? Also, since Holberton students are trained on a wide variety of languages, how do you ensure you can grade a Ruby project, and later a C project, and later a JavaScript project, and so on, with the same host while minimizing issues?

Finally, we had to accept that a student can commit code that is as malicious as they want: we can’t have a human check it before running it, and it should only break their own program, not the whole host.

So how on earth do you negotiate all these?

Our project-oriented training concept is new in the United States, but it’s been successful for decades in Europe. The European schools, which built their programs before containers became mainstream, typically run the code directly on a host system that has all of the software they need installed, and then simply run a chroot before running the student’s code. This didn’t solve all of the problems, while containers did, in a very elegant way; so we took the container road!

HolbertonCloud is the solution we built to that end. It fetches a student’s code on command, then runs it based on a Dockerfile and a series of tests, and finally returns information about how that went. The information is then used to compute a score.
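In spirit, the run-and-report step might look like the sketch below. This is purely illustrative, not HolbertonCloud’s actual code: a real grader would execute the submission inside a container with CPU and memory limits, while this sketch shows only the measurement side (the “what” and part of the “how”):

```python
import subprocess
import sys
import tempfile
import time

def grade(source, timeout=5):
    """Run an untrusted student script and report the 'what' (stdout,
    return code) and part of the 'how' (wall-clock seconds). In a real
    grader this would run inside a resource-limited container."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    start = time.monotonic()
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return {"stdout": proc.stdout, "returncode": proc.returncode,
                "seconds": time.monotonic() - start}
    except subprocess.TimeoutExpired:
        # Runaway code is killed; report no output and no return code.
        return {"stdout": "", "returncode": None,
                "seconds": time.monotonic() - start}

result = grade('print("Hello, Holberton")')
print(result["stdout"].strip(), result["returncode"])  # Hello, Holberton 0
```

A scoring algorithm like the one described above would then turn these measurements, plus per-exercise test results, into a grade.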

What’s amazing about it is that by using Docker, building the infrastructure has been trivial; the hard part has been about writing the tests, the scoring algorithm … basically the things that we actively want to be focused on!

So you’ve made use of containers. How much disruption do you expect their development to engender over the coming years?

Since I’m personally more on the “dev” end of devops, I see how striking it is that containers restore focus on actual development for my peers. So I’m mostly excited by the innovation that software engineers will be focusing on instead of the issues that containers are taking care of for them.

Of course, it will be very hard to measure which of those innovations were able to exist because containers are involved; but that also makes them innovations in virtually every corner of the tech industry, so that’s really exciting!

What effect do you think containers are going to have on the delivery of enterprise IT?

I think one takeaway from the very specific HolbertonCloud use case is that cases where code can be run trivially in production are getting rare, and one needs guarantees that only containers can bring efficiently.

Also, a lot of modern architectures fulfil needs with systems that are made of more and more micro-services, since we now have enough hindsight to see the positive outcomes for their resilience. Each micro-service may have different requirements and may therefore best be built with different technologies, so managing a growing set of different software configurations is becoming increasingly relevant. Considering the positive outcomes, this trend will only keep growing, making the need for containers keep growing as well.

You’re delivering a keynote at Container World. What’s the main motivation for attending?

I’m tremendously excited by the stellar line-up! We’re all going to get amazing insight from many different and relevant perspectives, that’s going to be very enlightening!

The very existence of Container World is exciting too: it’s remarkable how far containers have come in the span of just a few years.

Click here to learn more about Container World (February 16 – 18, 2016 Santa Clara Convention Center, USA)

The IoT in Palo Alto: connecting America’s digital city

Palo Alto is not your average city. Established by the founder of Stanford University, it was the soil from which Google, Facebook, Pinterest and PayPal (to name a few) sprang forth. Indeed, Palo Alto has probably done more to transform human life in the last quarter century than any other city. So, when we think of how the Internet of Things is going to affect life in the coming decades, we can be reasonably sure where much of the expected disruption will originate.

All of which makes Palo Alto a great place to host the first IoT Data Analytics & Visualization event (February 9 – 11, 2016). Additionally fitting: the event is set to be kicked off by Dr. Jonathan Reichental, the city’s Chief Information Officer. Reichental is the man entrusted with the hefty task of ensuring the city is as digital, smart and technologically up-to-date as befits a place that has been called home by the likes of Steve Jobs, Mark Zuckerberg, Larry Page and Sergey Brin.

Thus far, Reichental’s tenure has been a great success. In 2013, Palo Alto was credited with being the number one digital city in the US, and has made the top five year upon year – in fact, it so happens that, following our long and intriguing telephone interview, Reichental is looking forward to a small celebration to mark its latest nationwide ranking.

BCN: Jonathan, you’ve been Palo Alto’s CIO now for four years. What’s changed most during that time span?

Dr Jonathan Reichental: I think the first new area of substance would be open government. I recognise open government’s been a phenomenon for some time, but over the course of the last four years, it has become a mainstream topic that city and government data should be easily available to the people. That it should be machine readable, and that an API should be made available to anyone that wants the data. That we have a richer democracy by being open and available.

We’re still at the beginning however. I have heard that there are approximately 90,000 public agencies in the US alone. And every day and week I hear about a new federal agency or state or city of significance who are saying, ‘you can now go to our data portal and you can access freely the data of the city or the public agency.’ The shift is happening but it’s got some way to go.

Has this been a purely technical shift, or have attitudes had to evolve as well?

I think if you kind of look at something like cloud, cloud computing and cloud as a capability for government – back when I started ‘cloud’ was a dirty word. Many government leaders and government technology leaders just weren’t open to the option of putting major systems off-premise. That has begun to shift quite positively.

I was one of the first to say that cloud computing is a gift to government. Cloud eliminates the need to have all the maintenance that goes with keeping systems current and keeping them backed up and having disaster recovery. I’ve been a very strong proponent of that.

Then there’s social media  – government has fully embraced that now, having been reluctant early on. Mobile is beginning to emerge though it’s still very nascent. Here in Palo Alto we’re trying to make all services that make sense accessible via smart phone. I call it ‘city in a box.’ Basically, bringing up an app on the smart phone you should be able to interact with government – get a pet license, pay a parking fee, pay your electrical bill: everything should really be right there on the smartphone, you shouldn’t need to go to City Hall for many things any more.

The last thing I’d say is there has been an uptake in community participation in government. Part of it is it’s more accessible today, and part of it is there’s more ways to do so, but I think we’re beginning also to see the fruits of the millennial generation – the democratic shift in people wanting to have more of a voice and a say in their communities. We’re seeing much more in what is traditionally called civic engagement. But ‘much more’ is still not a lot. We need to have a revolution in this space for there to be significant change to the way cities operate and communities are effective.

Palo Alto is hosting the IoT Data Analytics & Visualization in February. How have you innovated in this area as a city?

One of the things we did with data is make it easily available. Now we’re seeing a community of people in the city and beyond, building solutions for communities. One example of that is a product called Civic Insight. This app consumes the permit data we make available and enables users to type in an address and find out what’s going on in their neighbourhood with regard to construction and related matters.

That’s a clear example of where we didn’t build the thing, we just made the data available and someone else built it. There’s an economic benefit to this. It creates jobs and innovation – we’ve seen that time and time again. We saw a company build a business around Palo Alto releasing our budget information. Today they are called OpenGov, and they sell the solution to over 500 cities in America, making it easy for communities to understand where their tax payer dollars are being spent. That was born and created in Palo Alto because of what we did making our data available.

Now we get to today, and the Internet of Things. We’re still – like a lot of folks, especially in the government context – defining this. It can be as broad or as narrow as you want. There’s definitely a recognition that when infrastructure systems can begin to share data between each other, we can get better outcomes.

The Internet of Things is obviously quite an elastic concept, but are there areas you can point to where the IoT is already very much a reality in Palo Alto?

The clearest example I can give of that today is our traffic signal system here in the city. A year-and-a-half ago, we had a completely analogue system, not connected to anything other than a central computer, which would have created a schedule for the traffic signals. Today, we have a completely IP based traffic system, which means it’s basically a data network. So we have enormous new capability.

For example, we can have schedules that are very dynamic. When schools are being let out, traffic signals work one way; at night they can work another way – you can have very granular information. Next, you can start to have traffic signals communicate with each other. If there is a long strip of road and, five signals down, there is some congestion, all the other traffic signals can dynamically change to try and make the flow better.

It goes even further than this. Now we can start to take that data – recording, for example, the frequency and volume of vehicles, as well as weather, and other ambient characteristics of the environment – and we can start to send this to the car companies. Here at Palo Alto, almost every car company has their innovation lab. Whether it’s Ford, General Motors, Volkswagen, BMW, Google (who are getting into the car business now) – they’re all here and they all want our data. They’re like: ‘this is interesting, give us an API, we’ll consume it into our data centres and then we’ll push into cars so maybe they can make better decisions.’

You have the Internet of Things, you’ve got traffic signals, cloud analytics solutions, APIs, and cars as computers and processors. We’re starting to connect all these related items in a way we’ve never done before. We’re going to follow the results.

What’s the overriding ambition would you say?

We’re on this journey to create a smart city vision. We don’t really have one today. It’s not a product or a service, it’s a framework. And within that framework we will have a series of initiatives that focus on things that are important to us. Transportation is really important to us here in Palo Alto. Energy and resources are really important: we’re going to start to put sensors on important flows of water so we can see the amount of consumption at certain times but also be really smart about leak detection, potentially using little sensors connected to pipes throughout the city. We’re also really focused on the environment. We have a chief sustainability officer who is putting together a multi-decade strategy around what PA needs to do to be part of the solution around climate change.

That’s also going to be a lot about sensors, about collecting data, about informing people and creating positive behaviours. Public safety is another key area. Being able to respond intelligently to crimes, terrorism or natural disasters. A series of sensors again sending information back to some sort of decision system that can help both people and machines make decisions around certain types of behaviours.

How do you expect this whole IoT ecosystem to develop over the next decade?

Bill Gates has a really good saying on this: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”  It’s something that’s informed me in my thinking. I think things are going to move faster and in more surprising ways in the next ten years for sure: to the extent that it’s very hard to anticipate where things are headed.

We’re disrupting the taxi business overnight, the hotel business, the food business. Things are happening at lightning speed. I don’t know if we have a good sense of where it’s all headed. Massive disruption across all domains, across work, play, healthcare, every sort of part of our lives.

It’s clear that – I can say this – ten years from now won’t be the same as today. I think we’ve yet to see the full potential of smart phones – I think they are probably the most central part of this ongoing transformation.

I think we’re going to connect many more things than we’re saying right now. I don’t know what the number will be: I hear five billion, twenty billion in the next five years. It’s going to be more than that. It’s going to become really easy to connect. We’ll stick a little communication device on anything. Whether it’s your key, your wallet, your shoes: everything’s going to be connected.

Palo Alto and the IoT Data Analytics & Visualization event look like a great matchup. What are you looking forward to about taking part?

It’s clearly a developing area and so this is the time when you want to be acquiring knowledge, networking with some of the big thinkers and innovators in the space. I’m pleased to be part of it from that perspective. Also from the perspective of my own personal learning and the ability to network with great people and add to the body of knowledge that’s developing. I’m going to be kicking it off as the CIO for the city.

BT and the IoT

It is often said that the Internet of Things is all about data. Indeed, at its absolute heart, the whole ecosystem could even be reduced to four distinct layers, ones that are essentially applicable to any vertical.

First of all, you have the sensing layer: somehow (using sensors, Wi-Fi, beacons: whatever you can!) you have to collect the data in the first place, often in harsh environments. From there you need to transport the data on a connectivity layer. This could be mobile or fixed, Wi-Fi or something altogether more cutting edge.

Thirdly, you need to aggregate this data, to bring it together and allow it to be exchanged. Finally, there’s the crucial matter of analytics, where the raw data is transformed into something useful.
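The four layers can be sketched as a toy pipeline. Everything here – function names, sensor IDs, sample values – is invented purely to illustrate the flow from sensing through to analytics:

```python
# Illustrative sketch of the four IoT layers: sense -> connect ->
# aggregate -> analyse. All names and values are made up.

def sense() -> list[dict]:
    """Sensing layer: collect raw readings (hard-coded samples here)."""
    return [
        {"sensor": "junction-12", "vehicles": 40},
        {"sensor": "junction-13", "vehicles": 55},
    ]

def transport(readings: list[dict]) -> list[dict]:
    """Connectivity layer: mobile, fixed or Wi-Fi in reality;
    a simple pass-through in this sketch."""
    return list(readings)

def aggregate(readings: list[dict]) -> dict:
    """Aggregation layer: bring the data together so it can be exchanged."""
    return {r["sensor"]: r["vehicles"] for r in readings}

def analyse(aggregated: dict) -> dict:
    """Analytics layer: turn the raw data into something useful."""
    total = sum(aggregated.values())
    busiest = max(aggregated, key=aggregated.get)
    return {"total_vehicles": total, "busiest": busiest}

result = analyse(aggregate(transport(sense())))
```

Each stage only depends on the output of the one before it, which is why operators like BT can focus on the first three layers and leave the application layer to others.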

Operators such as BT sense the opportunities in this process – particularly in the first three stages. Some telcos may have arrived a little late to the IoT table, but there’s no question that – with their extensive background in developing vast, secure infrastructures – they enjoy some fundamental advantages.

“I see IoT as a great opportunity,” says Hubertus von Roenne, VP Global Industry Practices, BT Global Services. “The more the world is connected, the more you have to rely on a robust infrastructure, whether it’s connectivity or data centres, and the more you have to rely on a secure and reliable environment. That’s our home turf. We are already active on all four layers, not only through our global network infrastructure, but also via our secure cloud computing capabilities and a ‘Cloud of Clouds’ technology vision that enables real-time data crunching and strategic collaboration across very many platforms.”

An example of how BT is positioning itself can be seen in Milton Keynes, a flagship ‘smart city’ in the UK, with large public and private sector investment. BT is one of over a dozen companies from various industries testing out different use cases for a smarter, more connected city.

“In Milton Keynes we are the technology partner that’s collecting the data. We’ve created a data hub where we allow the information to be passed on, but also make it compatible and usable. The governance body of this Milton Keynes project decided very early to make it open source, open data, and allow small companies or individuals to play around with the data and turn it into applications. Our role is not necessarily to go onto the application layer – we leave that to others – our role is to allow the collection and transmission of data, and we help turn data into usable information.”

One use case BT is involved in is smart parking – figuring out how to help traffic management, reduce carbon footprint, and help the council to reduce costs and better plan for parking availability. “Lots of ideas which can evolve as you collect the data, and that’s BT’s role.”

Another good example of how BT can adapt its offerings to different verticals is its work in telecare and telehealth, where the telco currently partners with the NHS, providing the equipment, monitoring system, and certain administrative and operational units, leaving the medical part to the medical professionals.

While BT’s established UK infrastructure makes it well positioned to assume these kinds of roles in developing smarter cities and healthcare, in other, more commercial areas there are no place-specific constraints.

“Typically our core customer base for global services are the large multinational players,” says von Roenne, “and these operate around the world. We are bringing our network and cloud integration capabilities right down to the manufacturing lines or the coal face of our multinational customers. Just a few weeks ago, we announced a partnership with Rajant Corporation, who specialise in wireless mesh deployments, to enable organisations to connect and gather data from thousands of devices such as sensors, autonomous vehicles, industrial machinery, high-definition cameras and others.”

Indeed, there are countless areas where data can be profitably collated and exploited, and next month von Roenne will be attending Internet of Things World Europe in Berlin, where he will be looking to discover new businesses and business opportunities. “I think there is already a lot of low hanging fruit out there if we just do some clever thinking about using what’s out there,” he says, adding that, often, the area in which the data could really be useful is not necessarily the same as the one it’s being collected in.

The capacity to take a bird’s eye view, bringing together different sectors of the economy for everyone’s mutual benefit, is another advantage BT will point to as it positions itself for the Internet of Things.

Make your Sunday League team as ‘smart’ as Borussia Dortmund with IoT

IoT can help make your football team smarter

How, exactly, is IoT changing competitive sports? And how might you, reader, go about making your own modest Sunday League team as ‘smart’ as the likes of AC Milan, Borussia Dortmund and Brazil?

We asked Catapult, a world leader in the field and responsible for connecting all three (as well as Premier League clubs including Tottenham, West Brom, Newcastle, West Ham and Norwich), exactly how the average sporting Joe could go about it. Here’s what the big teams are increasingly doing, in five easy steps.

Link-up play

The technology itself consists of a small wearable device that sits (a little cyborg-y) at the top of the spine under the uniform, measuring every aspect of an athlete’s movement using a GPS antenna and motion sensors. The measurements include acceleration, deceleration, change of direction and strength – as well as more basic things like speed, distance and heart rate.

Someone’s going to have to take a bit of time off work though! You’ll be looking at a one- or two-day installation on-site with the team, where a sports scientist would set you up with the software.

Nominate a number cruncher

All the raw data you’ll collect is then put through algorithms that provide position-specific and sport-specific data output to a laptop. Many of Catapult’s Premier League and NFL clients hire someone specifically to analyse the amassed data. Any of your team-mates work in IT or accountancy?

Tackle number crunching

Now that you’ve selected your data analyst, you’ll want to start them off on the simpler metrics. Everyone understands distance, for instance (probably the easiest way to understand how hard an athlete has worked). From there you can look at speed. Combine the two and you’ll have a fuller picture of how much of a shift Dean and Dave have really put in (hangovers notwithstanding).

Beyond this, you can start looking at how quickly you and your team-mates accelerate (not very, probably), and the effect of deceleration on your intensity afterwards. Deceleration usually poses the greatest risk of soft-tissue injury.
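As a rough illustration of how such metrics fall out of raw positional data, here is a toy calculation over made-up samples – Catapult's actual algorithms are proprietary and far more sophisticated:

```python
import math

# Toy example: deriving distance, top speed and hard decelerations from
# timestamped position samples, as a tracking device's software might.
# Samples are (seconds, x_metres, y_metres); the data is invented.

samples = [(0, 0.0, 0.0), (1, 4.0, 0.0), (2, 12.0, 0.0), (3, 13.0, 0.0)]

def metrics(samples, decel_threshold=3.0):
    dist = 0.0
    speeds = []  # average speed (m/s) over each interval
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        dist += step
        speeds.append(step / (t1 - t0))
    # Count a "hard deceleration" whenever speed drops by more than the
    # threshold (in m/s per interval) between consecutive intervals.
    hard_decels = sum(
        1 for v0, v1 in zip(speeds, speeds[1:]) if v0 - v1 > decel_threshold
    )
    return {"distance_m": dist, "top_speed_ms": max(speeds), "hard_decels": hard_decels}
```

For the sample run above, the player covers 13 metres, peaks at 8 m/s, and logs one hard deceleration in the final second.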

Higher still up the spectrum of metrics, you’ll encounter a patented algorithm called inertial movement analysis, used to capture ‘micro-movements’ and the like.

Pay up!

Don’t worry, you won’t have to actually buy all the gear (which could well mean your entire team re-mortgaging its homes): most of Catapult’s clients rent the devices…

However, you’ll still be looking at about £100 per unit/player per month, a fairly hefty additional outlay.

Surge up your Sunday League!

However, if you are all sufficiently well-heeled (not to mention obsessively competitive) to make that kind of investment, the benefits could be significant.

Florida State Football’s Jimbo Fisher recently credited the technology with reducing injuries by 88 per cent. It’s one of a number of similarly impressive success stories: reducing injuries is Catapult’s biggest selling point, meaning player shortages and hastily arranged stand-ins could be a thing of the past.

Of course if the costs sound a bit too steep, don’t worry: although the timescale is up in the air, Catapult is ultimately planning to head down the consumer route.

The day could yet come, in the not too distant future, when every team is smart!

How will the wearables market continue to change and evolve? Jim Harper (Director of Sales and Business Development, Bittium) will be leading a discussion on this very topic at this year’s Internet of Things World Europe (Maritim Pro Arte, Berlin, 6th – 7th October 2015).

A tale of two ITs

Werner Knoblich, senior vice president and general manager of Red Hat in EMEA

Gartner calls it ‘bimodal IT’; Ovum calls it ‘multimodal IT’; IDC calls it the ‘third platform’. Whatever you choose to call it, these are all names for the same evolution in IT: a shift towards deploying more user-centric, mobile-friendly software and services that are more scalable, flexible and easily integrated than the previous generation of IT services. And while the cloud has evolved as an essential delivery mechanism for the next generation of services, it’s also prompting big changes in IT, says Werner Knoblich, senior vice president and general manager of Red Hat in EMEA.

“The challenge with cloud isn’t really a technology one,” Knoblich explains, “but the requirements of how IT needs to change in order to support these technologies and services. All of the goals, key metrics, ways of doing business with vendors and service providers have changed.”

Most of what Knoblich is saying will resonate with any large organisation managing a large legacy estate that wants to adopt more mobile and cloud services; the ‘two ITs’ can be quite jarring.

The chief goal used to be reliability; now it’s agility. In the traditional world of IT the focus was on price for performance; now it’s about customer experience. In traditional IT the most common approach to development was the classic ‘waterfall’ approach – requirements, design, implementation, verification, maintenance; now it’s all about agile and continuous delivery.

Most assets requiring management were once physical; now they’re all virtualised machines and microservices. The applications being adopted today aren’t monolithic beasts as they were traditionally, but modular, cloud-native apps running in Linux containers or platforms like OpenStack (or both).

It’s not just the suppliers that have changed, but also the way they are sourced. In the traditional world, long-term, large-scale multifaceted deals were the norm; now there are lots of young, small suppliers, contracted on short terms or on a pay-as-you-go basis.

“You really need a different kind of IT, and people who are very good in the traditional mode aren’t necessarily the ones that will be good in this new hybrid world,” he says. “It’s not just hybrid cloud but hybrid IT.”

The challenges are cultural, organisational, and technical. According to the 2015 BCN Annual Industry Survey, which polled over 700 senior IT decision makers, over 67 per cent of enterprises plan to implement multiple cloud services over the next 18 months, but close to 70 per cent were worried about how those services would integrate with other cloud services and 90 per cent were concerned about how they will integrate those cloud services with their legacy or on-premise services.

That said, open source technologies that also make use of open standards play a massive role in ensuring cloud-to-cloud and cloud-to-legacy integrations are achievable and, where possible, seamless – one of the main reasons why Linux containers are gaining so much traction and mind share today (workload portability). And open source technology is something Red Hat knows a thing or two about.

Beyond its long history in server and desktop operating systems (Red Hat Enterprise Linux) and middleware (JBoss), the company is a big sponsor and early backer of OpenStack, the increasingly popular cloud-building software built on a Linux foundation. It helped create an open source platform as a service, OpenShift. The company is also working on Atomic Host, a slimmed-down version of RHEL designed for hosting containers, with support for other open source container technologies including Kubernetes and Docker, the darlings of the container community.

“Our legacy in open source is extremely important and even more important in cloud than the traditional IT world,” Knoblich says.

“All of the innovation happening today in cloud is open source – think of Docker, OpenStack, Cloud Foundry, Kubernetes, and you can’t really think of one pure proprietary offering that can match these in terms of the pace of innovation and the rate at which new features are being added,” he explains.

But many companies, mostly the large supertankers, don’t yet see themselves as ready to embrace these new technologies and platforms – not just because they don’t have the type or volume of workloads to migrate, but because they require a huge cultural and organisational shift. And cultural as well as organisational shifts are typically rife with political struggles, resentment, and budgetary wrestling.

“You can’t just install OpenStack or Dockerise your applications and ‘boom’, you’re ready for cloud – it just doesn’t work that way. Many of the companies that are successfully embracing these platforms and digitising their organisations set up a second IT department that operates in parallel to the traditional one, and can only seed out the processes and practices – and technologies – they’ve embraced when critical mass is reached. Unless that happens, they risk getting stuck back in the traditional IT mentality.”

An effective open hybrid approach ultimately means not only embracing the open source solutions and technologies, but recognising that some large, monolithic applications – say, Cobol-based mainframe apps – won’t make it into this new world; neither will the processes needed to maintain those systems.

“For some industries, like insurance for instance, there isn’t a recognised need to ditch those systems and processes. But for others, particularly those being heavily disrupted, that’s not the case. Look at Volkswagen. They don’t just see Mercedes, BMW and Tesla as competitors – they see Google and Apple as competitors too because the car becomes a technology platform for services.”

“No industry is secure from disruption, particularly from players that scarcely existed a few years ago, which is why IT will be multi-modal for many, many years to come,” he concludes.

This interview was developed in partnership with Red Hat

Jennifer Kent of Parks Associates on IoT and healthcare

BCN spoke to Jennifer Kent, Director of Research Quality and Product Development at Parks Associates, on the anticipated impact IoT will have on healthcare.

BCN: Can you give us a sense of how big an impact the Internet of Things could have on health in the coming years?

Jennifer Kent: Because the healthcare space has been slow to digitize records and processes, the IoT stands to disrupt healthcare to an even greater extent than will be experienced in other industries. Health systems are just now getting to a point where medical record digitization and electronic communication are resulting in organizational efficiencies.

The wave of new data that will result from the mass connection of medical and consumer health devices to the Internet, as part of the IoT, will give care providers real insight for the first time into patients’ behaviour outside of the office. Parks Associates estimates that the average consumer spends less than 1% of their time interacting with health care providers in care facilities. The rest of consumers’ lives are lived at home and on-the-go, engaging with their families, cooking and eating food, consuming entertainment, exercising, and managing their work lives – all of which impact their health status. The IoT can help care providers bridge the gap with their patients, and can potentially provide insight into the sources of motivation and types of care plans that are most effective for specific individuals.

 

Do you see IoT healthcare as an essentially self-enclosed ecosystem, or one that will touch consumer IoT?

IoT healthcare will absolutely touch consumer IoT, at least in healthcare markets where consumers have some responsibility for healthcare costs, or in markets that tie provider payments to patients’ actual health outcomes. In either scenario, the consumer is motivated to take a greater interest in their own self-care, driving up connected health device and application use. While user-generated data from consumer IoT devices will be less clinically accurate or reliable, this great flood of data still has the potential to result in better outcomes, and health industry players will have an interest in integrating that data with data produced via IoT healthcare sources.

 

Medical data is very well protected – and quite rightly – but how big a challenge is this to the development of effective medical IoT, which after all depends on the ability to effectively share information?

All healthcare markets must have clear regulations that govern health data protection, so that all players can ensure that their IoT programs are in compliance with those regulations. Care providers’ liability concerns, along with the investments in infrastructure that are necessary to protect such data, have created the opportunity for vendors to create solutions that take on the burden of regulatory compliance for their clients. Furthermore, application and device developers on the consumer IoT side that border very closely on the medical IoT vertical can seek regulatory approval – even if not required – as a means of attaining premium brand status with consumers and differentiation from the many untested consumer-facing applications on the market.

Finally, consumers can be motivated to permit their medical data to be shared, for the right incentive. Parks Associates data show that no less than 40% of medical device users in the US would share the data from their devices in order to identify and resolve device problems. About a third of medical device users in the US would share data from their devices for a discount on health insurance premiums. Effective incentives will vary, depending on each market’s healthcare system, but care providers, device manufacturers, app developers, and others who come into contact with medical device data should investigate whether potential obstacles related to data protection could be circumvented by incentivizing device end-users to permit data sharing.

 

You’re going to be at Internet of Things World Europe (5 – 7 October 2015 Maritim proArte, Berlin). What are you looking forward to discussing there and learning about?

While connected devices have been around for decades, the concept of the Internet of Things – in which connected devices communicate in a meaningful way across silos – is at a very early and formative stage. Industry executives can learn much from their peers and from visionary thinkers at this stage, before winners and losers have been decided, business plans hardened, and innovation slowed. The conversations among attendees at events like Internet of Things World Europe can shape the future and practical implementation of the IoT. I look forward to learning how industry leaders are applying lessons learned from early initiatives across markets and solution types.

Enabling smart cities with IoT

The Internet of Things will help make cities smarter

The population of London swells by an additional 10,000 a month, a tendency replicated in cities across the world. To an extent such growth reflects the planet’s burgeoning wider population, and there is even an interesting argument that cities are an efficient way of providing large numbers with their necessary resources. What we know as the ‘smart city’ may well prove to be the necessary means to manage this latest shift at scale.

Justin Anderson is sympathetic to this assessment. As the chairman of Flexeye, vice chair of techUK’s Internet of Things Council, and a leader of government-funded tech consortium Hypercat and London regeneration project Old Oak Common, he is uniquely positioned to comment on the technological development of our urban spaces.

“We are in an early stage of this next period of the evolution of the way that cities are designed and managed,” he says. “The funny thing about ‘smart’ of course, is that if you look back 5000 years, and someone suggested running water would be a good idea, that would be pretty smart at the time. ‘Smart’ is something that’s always just over the horizon, and we’re just going through another phase of what’s just over the horizon.”

There’s some irony in the fact that Anderson finds himself so profoundly involved in laying the foundations for smarter cities, since architects have been in his family for 400 years, and he intended to go in that direction himself before falling into the study of mathematics – which then led to a career in technology.

“There are lots of similarities between the two,” he says. “Stitching lots of complex things together and being able to visualise how the whole thing might be before it exists. And of course the smart city is a world comprised of both the physical and virtual aspects of infrastructure, both of which need to be tied together to be able to manage cities in a more efficient way.”

Like many of the great urban developments, the smart city is mostly going to be something invisible, something we quickly take for granted.

“We’re not necessarily all going to be directly feeling the might of processing power all around us. I think we’ll see a lot of investment on the industrial level coming into the city that’s going to be invisible to the citizen, but ultimately they will benefit because it’s a bit more friction taken out of their world. It’ll be a gradual evolution of things just working better – and that will have a knock-on effect of not having to queue for so long, and life just being a little bit easier.”

There are, however, other ways connectivity could change urban life in the coming years: by reversing the process of urban alienation, and allowing online communities to come together and effect real world change.

“If you can engage citizens in part of that process as a way that they live, and make sure that they feel fully accountable for what the city might be, then there’s also a lot of additional satisfaction that could come from being a part of that city, rather than just a pawn in a larger environment where you really have no say and just have to do what you’ve got to do. Look at something like air quality – to be able to start to get that united force and be able to then put more pressure upon the city authorities to do something about it. Local planning policy is absolutely core in all of this.”

Anderson sees technology as an operative part of the trend towards devolution, with cities and their citizens gaining more and more control of their destiny. “If you build that sort of nuclear community around issues rather than just around streets or neighbourhoods, you get new levels of engagement.” For such changes to be effected, however, there is plenty that still needs doing on the technical level – a message Anderson will be bringing to the Internet of Things World Europe event in Berlin this October.

“I think the most important thing right now is that technology companies come together to agree on a common urban platform that is interoperable, allowing for different components to be used appropriately, and that we don’t find ourselves locked into large systems that mean cities can’t evolve in a flexible and fluid way in the future. We have to have that flexibility designed into this next stage of evolution that comes from interoperability. My drive is to make sure everyone is a believer in interoperability.”