Category Archives: Features

How SMEs are benefitting from hybrid cloud architecture to get the best of both worlds

Hybrid cloud architecture has been a while maturing, but now offers businesses unparalleled flexibility, ROI and scalability. The smaller the business, the more vital these traits are, making hybrid cloud the number one choice for SMEs in 2016.

It’s been more than two years since Gartner predicted that, by 2017, 50 per cent of enterprises would be using a hybrid of public and private cloud operations. This prediction was based on growing private cloud deployment coupled with interest in hybrid cloud but, back in 2013, a lack of actual uptake. “Actual deployments [of hybrid cloud] are low, but aspirations are high”, said Gartner at the time.

It’s fair to say that Gartner’s prediction has been borne out, with hybrid cloud services rapidly becoming a given for a whole range of businesses. Perhaps less predictably, the value of hybrid is being felt most keenly in the SME sector, where speed, ROI and overall flexibility are most intensely valued. As enterprise data requirements continue to rocket – overall business data volume is growing at more than 60 per cent annually – it’s not hard to see why the hybrid cloud market is burgeoning.

Data protection is no longer optional

Across the board, from major corporations through to SMEs in particular, there’s now clear recognition that data protection is no longer merely a “nice-to-have”; it’s a basic requirement for doing business. Not being able to access customer, operational or supply-chain data for even short periods can be disastrous, and every minute of downtime impacts on ROI. Critically, losing data permanently threatens to damage operational function, as well as business perception. The latter point is particularly important in terms of business relationships with suppliers and customers that may have taken years to develop, but can be undone in the course of a few hours of unexplained downtime. It’s never been easier to take business elsewhere, so the ability to stay up and running irrespective of hardware failure or an extreme weather event is essential.

Speed and cost benefits combined

Perhaps the most obvious benefit of hybrid cloud technology (a combination of on-premises and off-premises deployment models) is that it presents SMEs with enterprise-class IT capabilities at a much lower cost. SMEs that outsource the management of IT services to a managed service provider (MSP) pay per seat, gain immediate scalability and avoid the complexity of managing the same systems in-house. This model also removes the requirement for capital investment, allowing SMEs to avoid large upfront costs while still enjoying the benefits – such as data protection in the case of hybrid cloud data backup.

One UK business that saved around £200,000 in lost revenue thanks to these benefits is Mandarin Stone, a natural stone and tile retailer. Having implemented a hybrid cloud disaster recovery system from Datto, the company suffered an overheated main server just months later, but was able to switch operations to a virtualised cloud server in just hours while replacement hardware was set up – in contrast to a previous outage that took days to resolve. “Datto was invaluable,” said Alana Preece, Mandarin Stone’s Financial Director, “and the device paid for itself in that one incident. The investment [in a hybrid cloud solution] was worth it.”

The considerable upside of the hybrid model is that where immediate access to data or services is required, local storage devices can make this possible without any of the delay associated with hauling large datasets down from the cloud. SMEs in particular are affected by bandwidth concerns as well as costs. In the event of a localised hardware failure or loss of a business mobile device, for example, data can be locally restored in just seconds.

Unburden the network for better business

Many hybrid models use network downtime to back up local files to the cloud, lowering the impact on bandwidth during working hours while also ensuring that there is an off-premises backup in place in the event of a more serious incident, such as extreme weather. This network management isn’t a new idea, but with a hybrid cloud setup it’s much more efficient. In a cloud-only implementation, the SME’s server will have one or more agents running to dedupe, compress and encrypt each backup, using the server’s resources. A local device taking on this workload leaves the main server to deal with day-to-day business unhindered, and means that backups can be made efficiently as they’re required, then uploaded to the cloud when bandwidth is less in demand.
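
To make the idea concrete, below is a minimal sketch of how such an appliance-style workflow might be scheduled. The staging directory, the hash-based dedupe and the 22:00–06:00 off-peak window are assumptions for illustration, not a description of any particular vendor’s product.

```python
import gzip
import hashlib
import shutil
from datetime import datetime
from pathlib import Path

LOCAL_STAGING = Path("backup_staging")   # stand-in for the local backup appliance
SEEN_HASHES = set()                      # in practice this index would be persisted

def stage_backup(source):
    """Deduplicate and compress a file onto local storage immediately."""
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    if digest in SEEN_HASHES:
        return None                      # unchanged file: nothing new to store
    SEEN_HASHES.add(digest)
    LOCAL_STAGING.mkdir(parents=True, exist_ok=True)
    target = LOCAL_STAGING / f"{source.name}.{digest[:12]}.gz"
    with source.open("rb") as src, gzip.open(target, "wb") as dst:
        shutil.copyfileobj(src, dst)     # compressed before it ever leaves the site
    return target

def off_peak():
    """Treat 22:00-06:00 as the low-bandwidth window (an assumption)."""
    hour = datetime.now().hour
    return hour >= 22 or hour < 6

def upload_pending(upload):
    """Push staged archives to the cloud only while bandwidth is cheap."""
    for archive in sorted(LOCAL_STAGING.glob("*.gz")):
        if not off_peak():
            break                        # stop as soon as the working day starts
        upload(archive)                  # e.g. a call into the provider's API
        archive.unlink()

if __name__ == "__main__":
    sample = Path("orders.db")
    sample.write_bytes(b"example business data")   # stand-in for a real file
    stage_backup(sample)                 # a local restore point exists within seconds
    upload_pending(lambda path: print("uploading", path))
```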

Of course, since Gartner’s original prediction there’s been considerable consumer uptake of cloud-based backups such as Apple’s iCloud and Google’s Drive, which has de-stigmatised the cloud and driven acceptance and expectations. SMEs have been at the forefront of this revolution, making cloud technology far more widely accepted as reliable, cost-effective, low-hassle and scalable. The fact that Google Apps and Microsoft Office 365 are both largely cloud-based shows just how far the adoption barriers have fallen since 2013, which makes reassuring SME decision-makers considerably easier for MSPs.

Compliance resolved

Compliance can be particularly onerous for SMEs, especially where customer data is concerned. Standards such as PCI DSS, or HIPAA (for those with North American operations), impose specific requirements for data storage, retention and recovery. Hybrid solutions can help smooth this path by providing compliant off-premises backup storage for retention, protecting data from corruption and providing a ‘paper trail’ of documentation that establishes a solid data recovery process.

Good news for MSPs

Finally, hybrid cloud offers many benefits on the MSP side of the coin, delivering sustainable recurring revenues, not only via the core backup services themselves, which will tend to grow over time as data volumes increase, but also via additional services. New value-add services might include monitoring the SME’s environment for new backup needs, or periodic business continuity drills, improving the MSP’s customer retention and helping its business grow.

Written by Andrew Stuart, Managing Director, EMEA, Datto

 

About Datto

Datto is an innovative provider of comprehensive backup, recovery and business continuity solutions used by thousands of managed service providers worldwide. Datto’s 140+ PB private cloud and family of software and hardware devices provide Total Data Protection everywhere business data lives. Whether your data is on-prem in a physical or virtual server, or in the cloud via SaaS applications, only Datto offers end-to-end recoverability and single-vendor accountability. Founded in 2007 by Austin McChord, Datto is privately held and profitable, with venture backing by General Catalyst Partners. In 2015 McChord was named to the Forbes “30 under 30” ranking of top young entrepreneurs.

Big Data looks inwards to transform network management and application delivery

We’ve all heard of the business applications touted by big data advocates – data-driven purchasing decisions, enhanced market insights and actionable customer feedback. These are undoubtedly of great value to businesses, yet organisations only have to look inwards to find further untapped potential. Here Manish Sablok, Head of Field Marketing NWE at ALE, explains the two major internal IT processes that can benefit greatly from embracing big data: network management and application delivery.

SNS Research estimated that Big Data investments reached $40 billion worldwide this year. Industry awareness and reception are equally impressive: ‘89% of business leaders believe big data will revolutionise business operations in the same way the Internet did.’ But big data is no longer simply large volumes of unstructured data, or just a tool for refining external business practices – the applications continue to evolve. The advent of big data analytics has paved the way for smarter network and application management, and big data can ultimately be leveraged internally to deliver cost-saving efficiencies and the optimisation of network management and application delivery.

What’s trending on your network?

Achieving complete network visibility has been a primary concern of CIOs in recent years – and now the arrival of tools to exploit big data provides a lifeline. Predictive analytics techniques enable a transition from a reactive to proactive approach to network management. By allowing IT departments visibility of devices – and crucially applications – across the network, the rise of the Bring Your Own Device (BYOD) trend can be safely controlled.

The newest generation of switch technology has advanced to the stage where application visibility capability can now be directly embedded within the most advanced switches. These switches, such as the Alcatel-Lucent Enterprise OmniSwitch 6860, are capable of providing an advanced degree of predictive analytics. The benefits of these predictive analytics are varied – IT departments can establish patterns of routine daily traffic in order to swiftly identify anomalies hindering the network. Put simply, the ability to detect what is ‘trending’ – be it backup activities, heavy bandwidth usage or popular application deployment – has now arrived.
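
The baseline-and-anomaly idea behind that kind of capability can be illustrated with a short, generic sketch. This is not ALE’s algorithm; the window size, warm-up length and threshold are assumptions.

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=288, threshold=3.0):
    """Flag traffic samples that deviate sharply from the recent baseline.

    window: samples kept as the 'routine traffic' baseline (288 five-minute
            samples is roughly one day -- an assumption).
    threshold: how many standard deviations count as an anomaly.
    """
    history = deque(maxlen=window)

    def check(sample_mbps):
        is_anomaly = False
        if len(history) >= 5:            # wait for a minimal baseline first
            baseline, spread = mean(history), stdev(history)
            if spread and abs(sample_mbps - baseline) > threshold * spread:
                is_anomaly = True        # e.g. a backup storm or a bandwidth hog
        if not is_anomaly:
            history.append(sample_mbps)  # keep the baseline limited to routine traffic
        return is_anomaly

    return check

check = make_anomaly_detector()
for mbps in [120, 125, 118, 122, 119, 940]:   # a sudden spike on one port
    if check(mbps):
        print(f"anomalous traffic sample: {mbps} Mbps")
```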

More tasks can be automated than ever before, with a dynamic response to network and user needs becoming standard practice. High-priority users, such as internal teams requiring continued collaboration, can be allocated the necessary network capacity in real time.

Effectively deploy, monitor and manage applications

Effective application management has its own challenges, such as the struggle to enforce flexible but secure user and device policies. Big data provides the business intelligence necessary to closely manage application deployment by analysing data streams, including application performance and user feedback. Insight into how employees or partners are using applications allows IT departments to identify redundant features or little used devices and to scale back or increase support and development accordingly.

As a result of the increasing traffic from voice, video and data applications, new network management tools have evolved alongside the hardware. The need to reduce the operational costs of network management, while at the same time providing increased availability, security and multimedia support has led to the development of unified management tools that offer a single, simple window into applications usage. Centralised management can help IT departments predict network trends, potential usage issues and manage users and devices – providing a simple tool to aid business decisions around complex processes.

Through the effective deployment of resources based on big data insight, ROI can be maximised. Smarter targeting of resources makes for a leaner IT deployment, and reduces the need for investment in further costly hardware and applications.

Networks converging on the future

Big data gathering, processing and analytics will all continue to advance and develop as more businesses embrace the concept and the market grows. But while the existing infrastructure in many businesses is capable of using big data to a limited degree, a converged network infrastructure, by providing a simplified and flexible architecture, will maximise the benefits and at the same time reduce Total Cost of Ownership – and meet corporate ROI requirements.

By introducing this robust network infrastructure, businesses can ensure a future-proof big data operation is secure. The advent of big data has brought with it the ability for IT departments to truly develop their ‘smart network’. Now it is up to businesses to seize the opportunity.

Written by Manish Sablok, Head of Field Marketing NWE at Alcatel Lucent Enterprise

Securing Visibility into Open Source Code

The Internet runs on open source code. Linux, Apache Tomcat, OpenSSL, MySQL, Drupal and WordPress are built on open source. Everyone, every day, uses applications that are either open source or include open source code; commercial applications typically contain only 65 per cent custom code. Development teams can easily use 100 or more open source libraries, frameworks, tools and code snippets when building an application.

The widespread use of open source code to reduce development times and costs makes application security more challenging. That’s because the bulk of the code contained in any given application is often not written by the team that develops or maintains it. For example, the 10 million lines of code incorporated in the GM Volt’s control systems include open source components. Car manufacturers like GM are increasingly taking an open source approach because it gives them broader control of their software platforms and the ability to tailor features to suit their customers.

Whether for the Internet, the automotive industry, or for any software package, the need for secure open source code has never been greater, but CISOs and the teams they manage are losing visibility into the use of open source during the software development process.

Using open source code is not a problem in itself, but not knowing what open source is being used is dangerous, particularly when many components and libraries contain security flaws. The majority of companies exercise little control over the external code used within their software projects. Even those that do have some form of secure software development lifecycle tend to only apply it to the code they write themselves – 67 per cent of companies do not monitor their open source code for security vulnerabilities.

The Path to Better Code

Development frameworks and newer programming languages make it much easier for developers to avoid introducing common security vulnerabilities such as cross-site scripting and SQL injection. But developers still need to understand the different types of data an application handles and how to properly protect that data. For example, session IDs are just as sensitive as passwords, but are often not given the same level of attention. Access control is notoriously tricky to implement well, and most developers would benefit from additional training to avoid common mistakes.
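
As a small illustration of those two points, parameterised queries stop hostile input from being interpreted as SQL, and a session ID can be stored hashed just as a password would be. The sketch below uses Python’s built-in sqlite3 module; the schema is hypothetical.

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (session_id_hash TEXT PRIMARY KEY, user_id INTEGER)")

def store_session(session_id, user_id):
    # Treat the session ID like a password: persist only a hash of it.
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    conn.execute(
        "INSERT INTO sessions (session_id_hash, user_id) VALUES (?, ?)",  # placeholders,
        (digest, user_id),                                                # never string-building
    )

def lookup_user(session_id):
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    row = conn.execute(
        "SELECT user_id FROM sessions WHERE session_id_hash = ?", (digest,)
    ).fetchone()
    return row[0] if row else None

store_session("abc123' OR '1'='1", 42)   # hostile-looking input is handled as plain data
print(lookup_user("abc123' OR '1'='1"))  # -> 42; the placeholder prevents injection
```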

Mike Pittenger, VP of Product Strategy at Black Duck Software

Developers need to fully understand how the latest libraries and components work before using them, so that these elements are integrated and used correctly within their projects. One reason people feel safe using the OpenSSL library and take the quality of its code for granted is its FIPS 140-2 certificate. But in the case of the Heartbleed vulnerability, the TLS heartbeat extension at fault was outside the scope of the FIPS validation. Development teams may have read the documentation covering secure use of OpenSSL call functions and routines, but how many realised that the entire codebase was not certified?

Automated testing tools will certainly improve the overall quality of in-house developed code. But CISOs must also ensure the quality of an application’s code sourced from elsewhere, including proper control over the use of open source code.

Maintaining an inventory of third-party code through a spreadsheet simply doesn’t work, particularly with a large, distributed team. For example, the spreadsheet method can’t detect whether a developer has pulled in an old version of an approved component, or added new, unapproved ones. It doesn’t ensure that the relevant security mailing lists are monitored or that someone is checking for new releases, updates, and fixes. Worst of all, it makes it impossible for anyone to get a full sense of an application’s true level of exposure.
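
A first step beyond the spreadsheet is to generate the inventory programmatically from the environment itself and compare it against an approved list; real tools go much further, cross-referencing vulnerability feeds and watching for new disclosures. A minimal sketch, with a hypothetical approved list:

```python
from importlib import metadata

# A hypothetical approved list that an organisation might maintain centrally.
APPROVED = {
    "requests": "2.31.0",
    "flask": "3.0.0",
}

def audit_environment():
    """Compare what is actually installed against the approved inventory."""
    findings = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        version = dist.version
        if name not in APPROVED:
            findings.append(f"UNAPPROVED: {name} {version}")
        elif version != APPROVED[name]:
            findings.append(f"VERSION DRIFT: {name} {version} (approved {APPROVED[name]})")
    return findings

if __name__ == "__main__":
    for finding in audit_environment():
        print(finding)
```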

Know Your Code

Developing secure software means knowing where the code within an application comes from, that it has been approved, and that the latest updates and fixes have been applied, not just before the application is released, but throughout its supported life.

While using open source code makes business sense for efficiency and cost reasons, open source can undermine security efforts if it isn’t well managed. Given the complexity of today’s applications, the management of the software development lifecycle needs to be automated wherever possible to allow developers to remain agile enough to keep pace, while reducing the introduction and occurrence of security vulnerabilities.

For agile development teams to mitigate security risks from open source software, they must have visibility into the open source components they use, select components without known vulnerabilities, and continually monitor those components throughout the application lifecycle.

Written by Mike Pittenger, VP of Product Strategy at Black Duck Software.

More than just a low sticker price: Three key factors for a successful SaaS deployment

One of the key challenges for businesses when evaluating new technologies is understanding what a successful return on investment (ROI) looks like.

In its infancy, the business benefits of the cloud-based Software-as-a-Service (SaaS) model were simple: save on expensive infrastructure, while remaining agile enough to scale up or down depending on demand. Yet as cloud-based tools have become ubiquitous, both inside and outside the workplace, measuring success has extended beyond simple infrastructure savings.

In theory the ability to launch new projects in hours and replace high infrastructure costs with a low monthly subscription should deliver substantial ROI benefits. But what happens to that ROI when the IT team discovers, six months after deployment, that end-user adoption is as low as 10 per cent? If businesses calculated the real “cost per user” in these instances, the benefits promised by cloud would simply diminish. This is becoming a real issue for businesses that bought on the promise of scalability, or reduced infrastructure costs.

In reality, success demands real organisational change, not just a cheap licensing fee. That’s why IT buyers must take time to look beyond the basic “sticker price” and begin to understand the end user.

Aiming for seamless collaboration

As the enterprise workplace becomes ever more fragmented, a “collaborative approach” is becoming increasingly important to business leaders. Industry insight, experience and understanding are all things that can’t be easily replicated by the competition. Being able to easily share this knowledge across an entire organisation is an extremely valuable asset – especially when trying to win new customers. That said, in organisations where teams need to operate across multiple locations (be it in different offices or different countries), this can be difficult to implement: collaboration becomes inefficient, content is lost and confidential data exposed – harming reputation and reducing revenue opportunities.

Some cloud-based SaaS solutions are quite successful in driving collaboration, improving the agility of teams and the security of their content. For example, Baker Tilly International – a network of 157 independent accountancy and business advisory firms, with 27,000 employees across 133 countries – significantly improved efficiency and created more time to bid for new business by deploying a cloud-based collaboration platform with government-grade security. However, not all organisations experience this success when deploying new cloud technologies. Some burden themselves with services that promise big ROI through innovation, but struggle with employee adoption.

Here are the three key considerations all IT buyers must look at when evaluating a successful SaaS deployment:

  1. Building awareness and confidence for better user experience

All enterprise systems, cloud or otherwise, need ownership and structure. IT teams need to understand how users and information move between internal systems. The minute workflows become broken, users will abandon the tool and default back to what has worked for them in the past. The result: poor user adoption and even increased security risks as users try to circumvent the new process. Building awareness and confidence in cloud technologies is the key to curbing this.

While cloud-based SaaS solutions are sold on their ease of use, end-user education is paramount to ensuring an organisation sees this value. The truth is, media scaremongering around data breaches has resulted in a fear of “the cloud”, causing many employees, especially those that don’t realise the consumer products they use are cloud-based, to resist using these tools in the workplace. In addition to teaching employees how to use services, IT teams must be able to alleviate employee concerns – baking change management into a deployment schedule.

These change management services aren’t often included within licensing costs, making the price-per-user seem artificially low. IT teams must be sure to factor in education efforts for driving user adoption and build an ROI not against price-per-user, but the actual cost-per-user.

  2. Data security isn’t just about certifications

There’s a fine line between usability and security. If forced to choose, security must always come first. However, be aware that in the age of citizen IT too much unnecessary security can actually increase risk. That may seem contradictory, but if usability is compromised too deeply, users will default to legacy tools, shadow IT or even avoid processes altogether.

Many businesses still struggle with the concept of their data being stored offsite. However, for some this mind-set is changing and the focus for successful SaaS implementations is enablement. In these businesses, IT buyers not only look for key security credentials – robust data hosting controls, application security features and secure mobile working – to meet required standards and compliance needs; but also quality user experience. The most secure platform in the world serves no purpose if employees don’t bother to use it.

Through clear communication and a well-thought-out on-boarding plan for end users, businesses can ensure all employees are trained and adequately supported as they begin using the solution.

  3. Domain expertise

One of the key advantages of cloud-based software is its ability to scale quickly and drive business agility. Today, scale is not only a measure of infrastructure but also a measure of user readiness.

This requires SaaS vendors to respond quickly to a business’s growth by delivering all of the things that help increase user adoption, including adequate user training, managing new user on-boarding, and even monitoring usage data and feedback to deliver maximum value as businesses begin to scale.

Yes, SaaS removes the need for big upgrade costs but without support from a seasoned expert, poor user adoption puts ROI at risk.

SaaS is about service

Cloud-based SaaS solutions can deliver a flexible, efficient and reliable way to deploy software into an organisation, helping to deliver ROI through reduced deployment time and infrastructure savings. However, these businesses must never forget that the second “S” in SaaS stands for service, and that successful deployments require more than just a low “sticker price”.

Written by Neil Rylan, VP of Sales EMEA, Huddle

Will containers change the world of cloud?

The rise of containers as a technology has been glorious and confusing in equal measure. While touted by some as the saviour of developers, and by others as the end of VMs, the majority simply don’t understand containers as a concept or a technology.

In the simplest of terms, containers let you pack more computing workloads onto a single server. In theory, that means you can buy less hardware, build or rent less data centre space, and hire fewer people to manage that equipment.

“In the earlier years of computing, we had dedicated servers which later evolved with virtualisation,” says Giri Fox, Director of Technical Services at Rackspace. “Containers are part of the next evolution of servers, and have gained large media and technologist attention. In essence, containers are the lightest way to define an application and to transport it between servers. They enable an application to be sliced into small elements and distributed on one or more servers, which in turn improves resource usage and can even reduce costs.”

There are some clear differences between containers and virtual machines, though. Linux containers give each application its own isolated environment in which to run, but multiple containers share the host server’s operating system. Since you don’t have to boot an operating system, you can create containers in seconds rather than minutes, as with virtual machines. They are faster, require less memory, offer a high level of isolation and are highly portable.
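
The “seconds, not minutes” point is easy to demonstrate from a script, assuming the Docker CLI is installed and the alpine image has already been pulled:

```python
import subprocess
import time

start = time.perf_counter()
result = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],  # start, run and remove a container
    capture_output=True, text=True, check=True,
)
elapsed = time.perf_counter() - start

# The kernel release printed inside the container matches the host's, because
# containers share the host kernel rather than booting an operating system of their own.
print(f"container ran in {elapsed:.2f}s, kernel inside: {result.stdout.strip()}")
```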

“Containers are more responsive and can run the same task faster,” adds Fox. “They increase the velocity of application development, and can make continuous integration and deployment easier. They often offer reduced costs for IT; testing and production environments can be smaller than without containers. Plus, the density of applications on a server can be increased which leads to better utilisation.

“As a direct result of these two benefits, the scope for innovation is greater than with previous technologies. This can facilitate application modernisation and allow more room to experiment.”

So the benefits are pretty open-ended: speed of deployment, flexibility to run anywhere, no more expensive licences, improved reliability and more opportunity for innovation.

Which all sounds great, doesn’t it?

That said, a recent survey from the Cloud & DevOps World team brought out some very interesting statistics, first and foremost on the understanding of the technology: 76% of respondents agreed with the statement “Everyone has heard of containers, but no-one really understands what containers are”.

While containers have the potential to be the next big thing in the cloud industry, unless those in the ecosystem understand the concept and perceived benefits, the technology is unlikely to take off.

“Containers are evolving rapidly and present an interesting runtime option for application development,” says Joe Pynadath, GM of EMEA for Chef. “We know that with today’s distributed and lightweight apps, businesses, whether they are new start-ups or traditional enterprises, must accelerate their capabilities for building, testing, and delivering modern applications that drive revenue.

“One result of the ever-greater focus on software development is the use of new tools to build applications more rapidly and it is here that containers have emerged as an interesting route for developers. This is because they allow you to quickly build applications in a portable and lightweight manner. This provides a huge benefit for developers in speeding up the application building process. However, despite this, containers are not able to solve the complexities of taking an application from build through test to production, which presents a range of management challenges for developers and operations engineers looking to use them.”

There is certainly potential for containers within the enterprise environment, but as with all emerging technologies there is a certain level of confusion as to how they will integrate within the current business model, and how the introduction will impact the IT department on a day-to-day basis.

“Some of the questions we’re regularly asked by businesses looking to use containers are: how do you configure and tune the OS that will host them? How do you adapt your containers at run time to the needs of the dev, test and production environments they’re in?” comments Pynadath.

“While containers allow you to use discovery services or roll your own solutions, the need to monitor and manage them in an automated way remains a challenge for IT teams. At Chef, we understand the benefits containers can bring to developers and are excited to help them automate many of the complex elements that are necessary to support containerized workflows in production.”

Vendors are confident that the introduction of containers will drive further efficiencies and speed within the industry, though we’re yet to see a firm commitment from the mass market to demonstrate that the technology will take off. The early-adopter uptake is promising, and there are case studies to demonstrate the much-lauded potential, but it’s still early days.

In short, containers are good, but most people just need to learn what they are.

Overcoming the data integration challenge in hybrid and cloud-based environments


Industry experts estimate that data volumes are doubling in size every two years. Managing all of this is a challenge for any enterprise, but it’s not so much the volume of data as the variety of data that presents a problem. With SaaS and on-premises applications, machine data, and mobile apps all proliferating, we are seeing the rise of an increasingly complicated value-chain ecosystem. IT leaders need to adopt a portfolio-based approach and combine cloud and on-premises deployment models to sustain competitive advantage. Improving the scale and flexibility of data integration across both environments to deliver a hybrid offering is necessary to provide the right data to the right people at the right time.

The evolution of hybrid integration approaches creates requirements and opportunities for converging application and data integration. The definition of hybrid integration will continue to evolve, but its current trajectory is clearly headed to the cloud.

According to IDC, cloud IT infrastructure spending will grow at a compound annual growth rate (CAGR) of 15.6 per cent between now and 2019, at which point it will reach $54.6 billion. In line with this, customers need to advance their hybrid integration strategy to best leverage the cloud. At Talend, we have identified five phases of integration, starting from the oldest and most mature right through to the most bleeding-edge and disruptive. Here we take a brief look at each and show how businesses can optimise their approach as they move from one step to the next.

Phase 1: Replicating SaaS Apps to On-Premises Databases

The first stage in developing a hybrid integration platform is to replicate SaaS applications to on-premises databases. Companies in this stage typically either need analytics on some of the business-critical information contained in their SaaS apps, or they are sending SaaS data to a staging database so that it can be picked up by other on-premises apps.

In order to increase the scalability of existing infrastructure, it’s best to move to a cloud-based data warehouse service within AWS, Azure, or Google Cloud. The scalability of these cloud-based services means organisations don’t need to spend cycles refining and tuning the databases. Additionally, they get all the benefits of utility-based pricing. However, with the myriad of SaaS apps today generating even more data, they may also need to adopt a cloud analytics solution as part of their hybrid integration strategy.
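
Conceptually, this first phase is an extract-and-load job: pull records from the SaaS provider’s API and land them in a staging store. A minimal sketch, using a hypothetical CRM endpoint and SQLite as a stand-in for the staging database or warehouse:

```python
import sqlite3

import requests

# Hypothetical SaaS endpoint and token; any CRM-style "list records" API fits this shape.
SAAS_URL = "https://api.example-crm.com/v1/accounts"
API_TOKEN = "replace-me"   # supplied from a secrets store in practice

def replicate_accounts(db_path="staging.db"):
    """Pull records from a SaaS API and land them in a staging database."""
    response = requests.get(
        SAAS_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    records = response.json()   # assumed shape: [{"id": ..., "name": ..., "updated_at": ...}]

    conn = sqlite3.connect(db_path)   # stand-in for the staging database or cloud warehouse
    conn.execute(
        "CREATE TABLE IF NOT EXISTS accounts (id TEXT PRIMARY KEY, name TEXT, updated_at TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO accounts (id, name, updated_at) VALUES (?, ?, ?)",
        [(r["id"], r["name"], r["updated_at"]) for r in records],
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    replicate_accounts()
```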

Phase 2: Integrating SaaS Apps directly with on-premises apps

Each line of business has its preferred SaaS app of choice: sales departments have Salesforce, marketing has Marketo, HR has Workday, and finance has NetSuite. However, these SaaS apps still need to connect to a back-office, on-premises ERP system.

Due to the complexity of back-office systems, there isn’t yet a widespread SaaS solution that can serve as a replacement for ERP systems such as SAP R/3 and Oracle EBS. Businesses would be best advised not to try to integrate with every single object and table in these back-office systems – but rather to accomplish a few use cases really well so that their business can continue running, while also benefiting from the agility of cloud.

Phase 3: Hybrid Data Warehousing with the Cloud

Databases or data warehouses on a cloud platform are geared toward supporting data warehouse workloads: low-cost, rapid proof-of-value and ongoing data warehouse solutions. As the volume and variety of data increase, enterprises need a strategy to move their data from on-premises warehouses to newer, Big Data-friendly cloud resources.

While they take time to decide which Big Data protocols best serve their needs, they can start by creating a data lake in the cloud with a cloud-based service such as Amazon Web Services (AWS) S3 or Microsoft Azure Blobs. These lakes can relieve cost pressures imposed by on-premises relational databases and act as a “demo area”, enabling businesses to process information using their Big Data protocol of choice and then transfer it into a cloud-based data warehouse. Once enterprise data is held there, the business can enable self-service with data preparation tools capable of organising and cleansing the data prior to analysis in the cloud.
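
Landing raw extracts in a cloud object store is the simplest form of such a data lake. Below is a minimal sketch using boto3 against S3; the bucket name and partitioning scheme are assumptions, and AWS credentials are expected to be configured in the environment.

```python
import boto3

s3 = boto3.client("s3")
LAKE_BUCKET = "example-data-lake"   # hypothetical bucket name

def land_in_lake(local_path, dataset, partition_date):
    """Drop a raw extract into the lake, partitioned by dataset and date."""
    filename = local_path.rsplit("/", 1)[-1]
    key = f"raw/{dataset}/dt={partition_date}/{filename}"
    s3.upload_file(local_path, LAKE_BUCKET, key)
    return f"s3://{LAKE_BUCKET}/{key}"

# Example: stage a CRM extract; a downstream job (a Big Data engine or a cloud
# warehouse's bulk-load command) would then pick it up from this location.
print(land_in_lake("exports/accounts.csv", "crm_accounts", "2016-02-01"))
```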

Phase 4: Real-time Analytics with Streaming Data

Businesses today need insight at their fingertips in real-time. In order to prosper from the benefits of real-time analytics, they need an infrastructure to support it. These infrastructure needs may change depending on use case—whether it be to support weblogs, clickstream data, sensor data or database logs.

As big data analytics and ‘Internet of Things’ (IoT) data processing moves to the cloud, companies require fast, scalable, elastic and secure platforms to transform that data into real-time insight. The combination of Talend Integration Cloud and AWS enables customers to easily integrate, cleanse, analyse, and manage batch and streaming data in the Cloud.

Phase 5: Machine Learning for Optimized App Experiences

In the future, every experience will be delivered as an app through mobile devices. In providing the ability to discover patterns buried within data, machine learning has the potential to make applications more powerful and more responsive. Well-tuned algorithms allow value to be extracted from disparate data sources without the limits of human thinking and analysis. For developers, machine learning offers the promise of applying business critical analytics to any application in order to accomplish everything from improving customer experience to serving up hyper-personalised content.

To make this happen, developers need to:

  • Be “all-in” with the use of Big Data technologies and the latest streaming big data protocols
  • Have large enough data sets for the machine algorithm to recognize patterns
  • Create segment-specific datasets using machine-learning algorithms (see the sketch after this list)
  • Ensure that their mobile apps have properly-built APIs to draw upon those datasets and provide the end user with whatever information they are looking for in the correct context
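
The segment-specific point can be made concrete with a short clustering sketch. It uses scikit-learn’s KMeans on synthetic engagement data purely for illustration; a production pipeline would run on real and much larger datasets.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic per-user engagement features (sessions per week, average session minutes),
# standing in for a real, sufficiently large dataset.
rng = np.random.default_rng(seed=0)
features = np.vstack([
    rng.normal(loc=[2, 5], scale=1.0, size=(100, 2)),    # occasional users
    rng.normal(loc=[15, 30], scale=2.0, size=(100, 2)),  # power users
])

# Cluster users into segments, then split the data by segment so that each segment
# can drive its own model or personalised content feed.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
segment_datasets = {s: features[segments == s] for s in np.unique(segments)}

for segment, rows in segment_datasets.items():
    print(f"segment {segment}: {len(rows)} users, mean profile {rows.mean(axis=0).round(1)}")
```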

Making it Happen with iPaaS

In order for companies to reach this level of ‘application nirvana’, they will need to have first achieved or implemented each of the four previous phases of hybrid application integration.

That’s where we see a key role for integration platform-as-a-service (iPaaS), which is defined by analysts at  Gartner as ‘a suite of cloud services enabling development, execution and governance of integration flows connecting any combination of on premises and cloud-based processes, services, applications and data within individual or across multiple organisations.’

The right iPaaS solution can help businesses achieve the necessary integration, and even bring in native Spark processing capabilities to drive real-time analytics, enabling them to move through the phases outlined above and ultimately successfully complete stage five.

Written by Ashwin Viswanath, Head of Product Marketing at Talend

Cloud academy: Rudy Rigot and his new Holberton School

Business Cloud News talks to Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA) keynote speaker Rudy Rigot about his new software college, which opens today.

Business Cloud News: Rudy, first of all – can you introduce yourself and tell us about your new Holberton School?

Rudy Rigot: Sure! I’ve been working in tech for the past 10 years, mostly in web-related stuff. Lately, I’ve worked at Apple as a full-stack software engineer for their localization department, which I left this year to found Holberton School.

Holberton School is a 2-year community-driven and project-oriented school, training software engineers for the real world. No classes, just real-world hands-on projects designed to optimize their learning, in close contact with volunteer mentors who all work for small companies or large ones like Google, Facebook, Apple, … One of the other two co-founders is Julien Barbier, formerly the Head of Community, Marketing and Growth at Docker.

Our first batch of students started last week!

What are some of the challenges you’ve had to anticipate?

Since we’re a project-oriented school, students are mostly graded on the code they turn in, which they push to GitHub. Some of this code is graded automatically, so we needed to be able to run each student’s code (or each team’s code) automatically in a fair and equal way.

We needed to get information on the “what” (what is returned in the console), but also on the “how”: how long does the code take to run? How much resource is being consumed? What is the return code? Also, since Holberton students are trained on a wide variety of languages, how do you ensure you can grade a Ruby project, and later a C project, and later a JavaScript project, etc., with the same host while minimizing issues?

Finally, we had to make sure that a student can commit code that is as malicious as they want without a human having to check it before it runs; it should only break their program, not the whole host.

So how on earth do you negotiate all these?

Our project-oriented training concept is new in the United States, but it’s been successful for decades in Europe, and we knew that the European schools, which built their programs before containers became mainstream, typically run the code directly on a host system that has all of the software they need installed, and then simply run a chroot before running the student’s code. This didn’t solve all of the problems, while containers did in a very elegant way; so we took the container road!

HolbertonCloud is the solution we built to that end. It fetches a student’s code on command, then runs it based on a Dockerfile and a series of tests, and finally returns information about how that went. The information is then used to compute a score.

What’s amazing about it is that by using Docker, building the infrastructure has been trivial; the hard part has been about writing the tests, the scoring algorithm … basically the things that we actively want to be focused on!
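
As a rough illustration of the kind of run Rigot describes, the sketch below launches a submission in a throwaway container with resource limits and turns the console output, exit code and elapsed time into a score. The image name, limits and scoring weights are invented for the example; they are not Holberton’s actual values.

```python
import subprocess
import time

def grade_submission(image, command, expected_output, timeout=10):
    """Run a student's code inside a disposable container and score the result.

    image: a per-project image built from the project's Dockerfile (assumed to exist).
    command: how to run the submission inside that image.
    """
    start = time.perf_counter()
    try:
        proc = subprocess.run(
            ["docker", "run", "--rm",
             "--memory", "256m", "--net", "none",   # malicious code stays contained
             image] + command,
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return {"score": 0, "stdout": "", "returncode": None, "seconds": timeout}
    elapsed = time.perf_counter() - start

    correct = proc.stdout.strip() == expected_output
    # Naive scoring: correctness dominates; a clean exit and a fast run add a little.
    score = (70 if correct else 0) \
        + (20 if proc.returncode == 0 else 0) \
        + (10 if elapsed < timeout / 2 else 0)
    return {"score": score, "stdout": proc.stdout,
            "returncode": proc.returncode, "seconds": round(elapsed, 2)}

# Example: a C project image whose binary should print "Hello, Holberton"
print(grade_submission("student42/hello-c", ["./hello"], "Hello, Holberton"))
```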

So you’ve made use of containers. How much disruption do you expect their development to engender over the coming years?

Since I’m personally more on the “dev” end of devops, I see how striking it is that containers restore focus on actual development for my peers. So, I’m mostly excited by the innovation that software engineers will be focusing on instead of the issues that containers are taking care of for them.

Of course, it will be very hard to measure which of those innovations were able to exist because containers are involved; but it also makes them innovations about virtually every corner of the tech industry, so that’s really exciting!

What effect do you think containers are going to have on the delivery of enterprise IT?

I think one takeaway from the very specific HolbertonCloud use case is that cases where code can be run trivially in production are getting rare, and one needs guarantees that only containers can bring efficiently.

Also, a lot of modern architectures fulfil needs with systems that are made of more and more micro-services, since we now have enough hindsight to see the positive outcomes for their resilience. Each micro-service may have different requirements and may therefore be best built with different technologies, so managing a growing set of different software configurations is getting increasingly relevant. Considering the positive outcomes, this trend will only keep growing, making the need for containers keep growing as well.

You’re delivering a keynote at Container World. What’s the main motivation for attending?

I’m tremendously excited by the stellar line-up! We’re all going to get amazing insight from many different and relevant perspectives, that’s going to be very enlightening!

The very existence of Container World is exciting too: it’s crazy how far containers have come over the span of just a few years.

Click here to learn more about Container World (February 16 – 18, 2016 Santa Clara Convention Center, USA)

The IoT in Palo Alto: connecting America’s digital city

Palo Alto is not your average city. Established by the founder of Stanford University, it was the soil from which Google, Facebook, Pinterest and PayPal (to name a few) have sprung forth. Indeed, Palo Alto has probably done more to transform human life in the last quarter century than any other city. So, when we think of how the Internet of Things is going to affect life in the coming decades, we can be reasonably sure where much of the expected disruption will originate.

All of which makes Palo Alto a great place to host the first IoT Data Analytics & Visualization event (February 9 – 11, 2016). Fittingly, the event is set to be kicked off by Dr. Jonathan Reichental, the city’s Chief Information Officer. Reichental is the man entrusted with the hefty task of ensuring the city is as digital, smart and technologically up to date as a place should be that has been called home by the likes of Steve Jobs, Mark Zuckerberg, Larry Page and Sergey Brin.

Thus far, Reichental’s tenure has been a great success. In 2013, Palo Alto was credited with being the number one digital city in the US, and has made the top five year upon year – in fact, it so happens that, following our long and intriguing telephone interview, Reichental is looking forward to a small celebration to mark its latest nationwide ranking.

BCN: Jonathan, you’ve been Palo Alto’s CIO now for four years. What’s changed most during that time span?

Dr Jonathan Reichental: I think the first new area of substance would be open government. I recognise open government’s been a phenomenon for some time, but over the course of the last four years, it has become a mainstream topic that city and government data should be easily available to the people. That it should be machine readable, and that an API should be made available to anyone that wants the data. That we have a richer democracy by being open and available.

We’re still at the beginning, however. I have heard that there are approximately 90,000 public agencies in the US alone. And every day and week I hear about a new federal agency or state or city of significance who are saying, ‘you can now go to our data portal and you can access freely the data of the city or the public agency.’ The shift is happening but it’s got some way to go.

Has this been a purely technical shift, or have attitudes had to evolve as well?

I think if you kind of look at something like cloud, cloud computing and cloud as a capability for government – back when I started ‘cloud’ was a dirty word. Many government leaders and government technology leaders just weren’t open to the option of putting major systems off-premise. That has begun to shift quite positively.

I was one of the first to say that cloud computing is a gift to government. Cloud eliminates the need to have all the maintenance that goes with keeping systems current and keeping them backed up and having disaster recovery. I’ve been a very strong proponent of that.

Then there’s social media  – government has fully embraced that now, having been reluctant early on. Mobile is beginning to emerge though it’s still very nascent. Here in Palo Alto we’re trying to make all services that make sense accessible via smart phone. I call it ‘city in a box.’ Basically, bringing up an app on the smart phone you should be able to interact with government – get a pet license, pay a parking fee, pay your electrical bill: everything should really be right there on the smartphone, you shouldn’t need to go to City Hall for many things any more.

The last thing I’d say is there has been an uptake in community participation in government. Part of it is it’s more accessible today, and part of it is there’s more ways to do so, but I think we’re beginning also to see the fruits of the millennial generation – the democratic shift in people wanting to have more of a voice and a say in their communities. We’re seeing much more in what is traditionally called civic engagement. But ‘much more’ is still not a lot. We need to have a revolution in this space for there to be significant change to the way cities operate and communities are effective.

Palo Alto is hosting the IoT Data Analytics & Visualization event in February. How have you innovated in this area as a city?

One of the things we did with data is make it easily available. Now we’re seeing a community of people in the city and beyond, building solutions for communities. One example of that is a product called Civic Insight. This app consumes the permit data we make available and enables users to type in an address and find out what’s going on in their neighbourhood with regard to construction and related matters.

That’s a clear example of where we didn’t build the thing, we just made the data available and someone else built it. There’s an economic benefit to this. It creates jobs and innovation – we’ve seen that time and time again. We saw a company build a business around Palo Alto releasing our budget information. Today they are called OpenGov, and they sell the solution to over 500 cities in America, making it easy for communities to understand where their tax payer dollars are being spent. That was born and created in Palo Alto because of what we did making our data available.

Now we get to today, and the Internet of Things. We’re still – like a lot of folks, especially in the government context – defining this. It can be as broad or as narrow as you want. There’s definitely a recognition that when infrastructure systems can begin to share data with each other, we can get better outcomes.

The Internet of Things is obviously quite an elastic concept, but are there areas you can point to where the IoT is already very much a reality in Palo Alto?

The clearest example I can give of that today is our traffic signal system here in the city. A year-and-a-half ago, we had a completely analogue system, not connected to anything other than a central computer, which would have created a schedule for the traffic signals. Today, we have a completely IP based traffic system, which means it’s basically a data network. So we have enormous new capability.

For example, we can have schedules that are very dynamic. When schools are being let out, traffic signals run one way; at night they can run another way – you can have very granular information. Next, you can start to have traffic signals communicate with each other. If there is a long strip of road and, five signals down, there is some congestion, all the other traffic signals can dynamically change to try and make the flow better.

It goes even further than this. Now we can start to take that data – recording, for example, the frequency and volume of vehicles, as well as weather, and other ambient characteristics of the environment – and we can start to send this to the car companies. Here at Palo Alto, almost every car company has their innovation lab. Whether it’s Ford, General Motors, Volkswagen, BMW, Google (who are getting into the car business now) – they’re all here and they all want our data. They’re like: ‘this is interesting, give us an API, we’ll consume it into our data centres and then we’ll push into cars so maybe they can make better decisions.’

You have the Internet of Things, you’ve got traffic signals, cloud analytics solutions, APIs, and cars as computers and processors. We’re starting to connect all these related items in a way we’ve never done before. We’re going to follow the results.

What’s the overriding ambition would you say?

We’re on this journey to create a smart city vision. We don’t really have one today. It’s not a product or a service, it’s a framework. And within that framework we will have a series of initiatives that focus on things that are important to us. Transportation is really important to us here in Palo Alto. Energy and resources are really important: we’re going to start to put sensors on important flows of water so we can see the amount of consumption at certain times but also be really smart about leak detection, potentially using little sensors connected to pipes throughout the city. We’re also really focused on the environment. We have a chief sustainability officer who is putting together a multi-decade strategy around what PA needs to do to be part of the solution around climate change.

That’s also going to be a lot about sensors, about collecting data, about informing people and creating positive behaviours. Public safety is another key area. Being able to respond intelligently to crimes, terrorism or natural disasters. A series of sensors again sending information back to some sort of decision system that can help both people and machines make decisions around certain types of behaviours.

How do you expect this whole IoT ecosystem to develop over the next decade?

Bill Gates has a really good saying on this: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”  It’s something that’s informed me in my thinking. I think things are going to move faster and in more surprising ways in the next ten years for sure: to the extent that it’s very hard to anticipate where things are headed.

We’re disrupting the taxi business overnight, the hotel business, the food business. Things are happening at lightning speed. I don’t know if we have a good sense of where it’s all headed. Massive disruption across all domains, across work, play, healthcare, every sort of part of our lives.

It’s clear that – I can say this – ten years from now won’t be the same as today. I think we’ve yet to see the full potential of smart phones – I think they are probably the most central part of this ongoing transformation.

I think we’re going to connect many more things than we’re saying right now. I don’t know what the number will be: I hear five billion, twenty billion in the next five years. It’s going to be more than that. It’s going to become really easy to connect. We’ll stick a little communication device on anything. Whether it’s your key, your wallet, your shoes: everything’s going to be connected.

Palo Alto and the IoT Data Analytics & Visualization event look like a great matchup. What are you looking forward to about taking part?

It’s clearly a developing area and so this is the time when you want to be acquiring knowledge, networking with some of the big thinkers and innovators in the space. I’m pleased to be part of it from that perspective. Also from the perspective of my own personal learning and the ability to network with great people and add to the body of knowledge that’s developing. I’m going to be kicking it off as the CIO for the city.

BT and the IoT

It is often said that the Internet of Things is all about data. Indeed, at its absolute heart, the whole ecosystem could even be reduced to four distinct layers, ones that are essentially applicable to any vertical.

First of all, you have the sensing layer: somehow (using sensors, Wi-Fi, beacons: whatever you can!) you have to collect the data in the first place, often in harsh environments. From there you need to transport the data on a connectivity layer. This could be mobile or fixed, Wi-Fi or something altogether more cutting edge.

Thirdly, you need to aggregate this data, to bring it together and allow it to be exchanged. Finally, there’s the crucial matter of analytics, where the raw data is transformed into something useful.

Operators such as BT sense the opportunities in this process – particularly in the first three stages. Some telcos may have arrived a little late to the IoT table, but there’s no question that – with their copious background developing vast, secure infrastructures – they enjoy some fundamental advantages.

“I see IoT as a great opportunity,” says Hubertus von Roenne, VP Global Industry Practices, BT Global Services. “The more the world is connected, the more you have to rely on a robust infrastructure, whether it’s connectivity or data centres, and the more you have to rely on secure and reliable environment. That’s our home turf. We are already active on all four layers, not only through our global network infrastructure, but also via our secure cloud computing capabilities and a ‘Cloud of Clouds’ technology vision that enables real time data crunching and strategic collaboration across very many platforms.”

An example of how BT is positioning itself can be seen in Milton Keynes, a flagship ‘smart city’ in the UK, with large public and private sector investment. BT is one of over a dozen companies from various industries testing out different use cases for a smarter, more connected city.

“In Milton Keynes we are the technology partner that’s collecting the data. We’ve created a data hub where we allow the information to be passed on, but also make it compatible and usable. The governance body of this Milton Keynes project decided very early to make it open source, open data, and allow small companies or individuals to play around with the data and turn it into applications. Our role is not necessarily to go onto the application layer – we leave that to others – our role is to allow the collection and transmission of data, and we help turn data into usable information.”

One use case BT is involved in is smart parking – figuring out how to help traffic management, reduce carbon footprint, and help the council to reduce costs and better plan for parking availability. “Lots of ideas which can evolve as you collect the data, and that’s BT’s role.”

Another good example of how BT can adapt its offerings to different verticals is its work in telecare and telehealth, where the telco currently partners with the NHS, providing the equipment, monitoring system, and certain administrative and operational units, leaving the medical part to the medical professionals.

While BT’s established UK infrastructure makes it well positioned to assume these kinds of roles in developing smarter cities and healthcare, in other, more commercial areas there are no place-specific constraints.

“Typically our core customer base for global services are the large multinational players,” says von Roenne, “and these operate around the world. We are bringing our network and cloud integration capabilities right down to the manufacturing lines or the coal face of our multinational customers. Just a few weeks ago, we announced a partnership with Rajant Corporation, who specialise in wireless mesh deployments, to enable organisations to connect and gather data from thousands of devices such as sensors, autonomous vehicles, industrial machinery, high-definition cameras and others.”

Indeed, there are countless areas where data can be profitably collated and exploited, and next month von Roenne will be attending Internet of Things World Europe in Berlin, where he will be looking to discover new businesses and business opportunities. “I think there is already a lot of low hanging fruit out there if we just do some clever thinking about using what’s out there,” he says, adding that, often, the area in which the data could really be useful is not necessarily the same as the one it’s being collected in.

The capacity to take a bird’s eye view, bringing together different sectors of the economy for everyone’s mutual benefit, is another advantage BT will point to as it positions itself for the Internet of Things.

Make your Sunday League team as ‘smart’ as Borussia Dortmund with IoT

IoT can help make your football team smarter

How, exactly, is IoT changing competitive sports? And how might you, reader, go about making your own modest Sunday League team as ‘smart’ as the likes of AC Milan, Borussia Dortmund and Brazil?

We asked Catapult, a world leader in the field and responsible for connecting all three (as well as Premier League clubs including Tottenham, West Brom, Newcastle, West Ham and Norwich), exactly how the average sporting Joe could go about it. Here’s what the big teams are increasingly doing, in five easy steps.

Link-up play

The technology itself consists of a small wearable device that sits (a little cyborg-y) at the top of the spine, under the uniform, measuring every aspect of an athlete’s movement using a GPS antenna and motion sensors. The measurements include acceleration, deceleration, change of direction and strength – as well as more basic things like speed, distance and heart rate.

Someone’s going to have to take a bit of time off work though! You’ll be looking at a one- or two-day installation on-site with the team, where a sports scientist would set you up with the software.

Nominate a number cruncher

All the raw data you’ll collect is then put through algorithms that provide position-specific and sport-specific data output to a laptop. Many of Catapult’s Premier League and NFL clients hire someone specifically to analyse the amassed data. Any of your team-mates work in IT or accountancy?

Tackle number crunching

Now you’ve selected your data analyst, you’ll want to start them out on the more simple metrics. Everyone understands distance, for instance (probably the easiest way to understand how hard an athlete has worked). From there you can look at speed. Combine the two and you’ll have a fuller picture of how much of a shift Dean and Dave have really put in (hangovers notwithstanding).

Beyond this, you can start looking at how quickly you and your team-mates accelerate (not very, probably), and the effect of deceleration on your intensity afterwards. Deceleration is usually the most damaging in terms of tissue injuries.
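
For your newly appointed number cruncher, the basic metrics fall out of simple arithmetic on the raw samples. A toy sketch with synthetic positions (not Catapult’s algorithms):

```python
import math

# One sample per second: (seconds, x metres, y metres) on a local pitch grid.
samples = [(0, 0.0, 0.0), (1, 3.1, 0.4), (2, 6.9, 1.1), (3, 9.2, 1.5), (4, 9.8, 1.6)]

def metrics(track):
    """Derive basic load metrics from raw positions: distance, speed, deceleration."""
    distance, speeds = 0.0, []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        distance += step
        speeds.append(step / (t1 - t0))                    # metres per second
    accelerations = [v1 - v0 for v0, v1 in zip(speeds, speeds[1:])]   # m/s per second
    return {
        "distance_m": round(distance, 1),
        "top_speed_ms": round(max(speeds), 1),
        "hardest_deceleration": round(min(accelerations), 1),   # the injury-risk number
    }

print(metrics(samples))
```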

Higher still up the spectrum of metrics, you can encounter a patented algorithm called inertial movement analysis, used to capture ‘micro-movements’ and the like.

Pay up!

Don’t worry, you won’t have to actually buy all the gear (which could well mean your entire team re-mortgaging its homes): most of Catapult’s clients rent the devices…

However, you’ll still be looking at about £100 per unit/player per month, a fairly hefty additional outlay.

Surge up your Sunday League!

However, if you are all sufficiently well-heeled (not to mention obsessively competitive) to make that kind of investment, the benefits could be significant.

Florida State Football’s Jimbo Fisher recently credited the technology with reducing injuries by 88 per cent. It’s one of a number of similarly impressive success stories: reducing injuries is Catapult’s biggest selling point, meaning player shortages and hastily arranged stand-ins could be a thing of the past.

Of course if the costs sound a bit too steep, don’t worry: although the timescale is up in the air, Catapult is ultimately planning to head down the consumer route.

The day could yet come, in the not too distant future, when every team is smart!

How will the wearables market continue to change and evolve? Jim Harper (Director of Sales and Business Development, Bittium) will be leading a discussion on this very topic at this year’s Internet of Things World Europe (Maritim Pro Arte, Berlin, 6th – 7th October 2015).