All posts by Guest Author

The top three cloud security myths: BUSTED

The rise in global cyber-attacks, and the subsequent high-profile press coverage, understandably makes businesses question the security of the cloud. After all, the dangers of hosting anything in an environment where data loss or system failure events are attributed to an outside source are magnified. As a result, many CIOs are still struggling to identify and implement the cloud services most suitable for their business. In fact, research finds that over three quarters (79%) of CIOs find it a challenge to balance the productivity needs of employees against potential security threats, and 84% of CIOs worry that cloud causes them to lose control over IT.

But is cloud really more vulnerable than any other infrastructure? And how can organisations mitigate any risk they encounter? The reality is that all systems have vulnerabilities that can be exploited, whether on-premise, in the cloud or a hybrid of the two. It’s safe to say that people fear what they don’t understand – and with cloud becoming increasingly complex, it’s not surprising that there are so many myths attached to it. It’s time to clear up some of these myths.

Myth 1: Cloud technology is still in its infancy and therefore inherently insecure

Cloud has been around for much longer than we often think and can be traced as far back as the 1970s. The rapid pace of cloud development, coupled with an awakening realisation of what cloud can do for businesses, has thrust it into the limelight in recent years.

The biggest issue CIOs have with cloud is their increasing distance from the physical technology involved. Indeed, many CIOs feel that if they cannot walk into a data centre and see comforting lights flashing on the hardware, then it is beyond their reach. As a result, many organisations overlook instrumentation in the cloud, and don't look at the data or systems they put there in the same way they would if they were on a physical machine. Organisations then forget to apply their own security standards, as they would in their own environment, and it is this complacency that gives rise to risk and exposure.

Myth 2: Physical security keeps data safe

It is a common misconception that having data stored on premise and on your own servers is the best form of protection. However, the location of data is not the only factor to consider. The greatest form of defence you can deploy with cloud is a combination of strict access rights, diligent data stewardship and strong governance.

Common security mistakes include not performing full due diligence on the cloud provider and assuming that the provider will be taking care of all security issues. In addition, it is still common for organisations to not take into account the physical location of a cloud environment and the legal ramifications of storing data in a different country. Indeed, a recent European Court of Justice ruling found the Safe Harbour accord was invalid as it failed to adequately protect EU data from US government surveillance. Cloud providers rushed to assure customers they were dealing with the situation, but the main takeaway from this is to not believe that a cloud provider will write security policy for you – organisations need to take ownership.

Myth 3: Cloud security is the provider’s responsibility

All of the major public clouds have multiple certifications (ISO 27001, ISO 27018, ENISA IAF, FIPS 140-2, HIPAA, PCI DSS) attained by proving they have controls to ensure data integrity.

The real risk comes when organisations blindly park data, thinking that security is just implicit. Unless the data is protected with encryption, firewalls, access lists etc., organisations remain vulnerable. The majority of cloud exposures can in fact be traced back to a failure in policy or controls not being applied correctly – look at the TalkTalk hack for example, and consider the alternate outcome had the database been encrypted.
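To make the encryption point concrete, here is a minimal Python sketch of encrypting a record client-side before it is parked in any cloud store, so a leaked dump on its own is unreadable. It assumes the cryptography package; the record is invented and key management (KMS/HSM, rotation) is deliberately out of scope.

```python
# Minimal sketch, assuming the "cryptography" package is installed; key
# management (KMS/HSM, rotation) is deliberately out of scope.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, fetch this from a key management service
cipher = Fernet(key)

customer_record = b'{"name": "Jane Doe", "account": "12345678"}'
encrypted = cipher.encrypt(customer_record)            # this is what gets parked in the cloud
assert cipher.decrypt(encrypted) == customer_record    # only key holders can read it back
```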

Education and ownership is the future

The speed at which cloud is evolving can understandably cause a few teething problems. But it is the responsibility of providers and clients alike to take ownership of their own elements and apply security policies which are right for their business, their risk profile and the data which they hold. As with any technological change, many interested parties quickly jumped on the cloud bandwagon. But the allure of a technology can inhibit critical thinking, and the broader view of choosing the right application at the right cost, with appropriate security to mitigate risk, is lost. Remember, the cloud is not inherently secure and, given that it stands to underpin enterprise operations for years to come, it's worth approaching it not as a bandwagon but as an important part of enterprise infrastructure.

Written by Mark Ebden, Strategic Consultant, Trustmarque

How SMEs are benefitting from hybrid cloud architecture in the best of both worlds

Hybrid cloud architecture has taken a while to mature, but now offers businesses unparalleled flexibility, ROI and scalability. The smaller the business, the more vital these traits are, making hybrid cloud the number one choice for SMEs in 2016.

It’s been more than two years since Gartner predicted that, by 2017, 50 per cent of enterprises would be using a hybrid of public and private cloud operations. This prediction was based on growing private cloud deployment coupled with interest in hybrid cloud, but a lack of actual uptake – back then in 2013. “Actual deployments [of hybrid cloud] are low, but aspirations are high”, said Gartner at the time.

It’s fair to say that Gartner’s prediction has been borne out, with hybrid cloud services rapidly becoming a given for a whole range of businesses, but perhaps less predictably the value of hybrid is being most felt in the SME sector, where speed, ROI and overall flexibility are most intensely valued. As enterprise data requirements continue to rocket – indeed overall business data volume is growing at a rate of more than 60 per cent annually – it’s not hard to see why this sector is burgeoning.

Data protection is no longer an option

Across the board, from major corporations through to SMEs in particular, there's now clear recognition that data protection is no longer merely a "nice-to-have"; it's a basic requirement for doing business. Not being able to access customer, operational or supply-chain data for even short periods can be disastrous, and every minute of downtime impacts on ROI. Critically, losing data permanently threatens to damage operational function, as well as business perception. The latter point is particularly important in terms of business relationships with suppliers and customers that may have taken years to develop, but can be undone in the course of a few hours of unexplained downtime. It's never been easier to take business elsewhere, so the ability to stay up and running irrespective of hardware failure or an extreme weather event is essential.

Speed and cost benefits combined

Perhaps the most obvious benefit of hybrid cloud technology (a combination of on-premises and off-premises deployment models) is that SMEs are presented with enterprise-class IT capabilities at a much lower cost. SMEs that outsource the management of IT services through Managed Service Providers (MSPs) pay per seat for immediate scalability and, what's more, avoid the complexity of managing the same systems in-house. This model also removes the requirement for capital investment, allowing SMEs to avoid large upfront costs but still enjoy the benefits – such as data protection in the example of hybrid cloud data backup.

One UK business that saved around £200,000 in lost revenue thanks to these benefits is Mandarin Stone, a natural stone and tile retailer. Having implemented a hybrid cloud disaster recovery system from Datto, the company suffered an overheated main server just months later, but was able to switch operations to a virtualised cloud server in just hours while replacement hardware was set up, in contrast to a previous outage that took days to resolve. "Datto was invaluable," said Alana Preece, Mandarin Stone's Financial Director, "and the device paid for itself in that one incident. The investment [in a hybrid cloud solution] was worth it."

The considerable upside of the hybrid model is that where immediate access to data or services is required, local storage devices can make this possible without any of the delay associated with hauling large datasets down from the cloud. SMEs in particular are affected by bandwidth concerns as well as costs. In the event of a localised hardware failure or loss of a business mobile device, for example, data can be locally restored in just seconds.

Unburden the network for better business

Many hybrid models use network downtime to back up local files to the cloud, lowering the impact on bandwidth during working hours, but also ensuring that there is an off-premises backup in place in the event of a more serious incident such as extreme weather, for example. Of course, this network management isn't a new idea, but with a hybrid cloud setup it's much more efficient – for example, in a cloud-only implementation the SME's server will have one or more agents running to dedupe, compress and encrypt each backup, using the server's resources. A local device taking on this workload leaves the main server to deal with the day-to-day business unhindered, and means that backups can be made efficiently as they're required, then uploaded to the cloud when bandwidth is less in demand.
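As an illustration of that local-agent workload, the following Python sketch shows the general shape of a staging step that deduplicates by content hash, compresses and encrypts a backup, then defers the off-site transfer to an assumed off-peak window. The upload function and timings are hypothetical placeholders, not any vendor's API.

```python
# Illustrative only: dedupe by content hash, compress, encrypt, and hold the
# result for an assumed off-peak upload window. upload_to_cloud() is a stub.
import gzip
import hashlib
from datetime import datetime
from typing import Optional
from cryptography.fernet import Fernet

cipher = Fernet(Fernet.generate_key())   # key handling simplified for the sketch
seen_hashes = set()                      # naive dedupe index

def stage_backup(raw: bytes) -> Optional[bytes]:
    digest = hashlib.sha256(raw).hexdigest()
    if digest in seen_hashes:            # identical data already staged
        return None
    seen_hashes.add(digest)
    return cipher.encrypt(gzip.compress(raw))

def upload_to_cloud(blob: bytes) -> None:
    print(f"uploading {len(blob)} bytes off-site")   # placeholder for the real transfer

def upload_if_off_peak(blob: bytes) -> None:
    if 1 <= datetime.now().hour < 5:     # assumed off-peak window
        upload_to_cloud(blob)
```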

Of course, since Gartner’s original prediction there’s been considerable consumer uptake of cloud-based backups such as Apple’s iCloud and Google’s Drive, which has de-stigmatised the cloud and driven acceptance and expectations. SME’s have been at the forefront of this revolution, making cloud technology far more widely accepted as being reliable, cost-effective, low-hassle and scalable. The fact that Google Apps and Microsoft Office 365 are both largely cloud-based show just how the adoption barriers have fallen since 2013, which makes reassuring SME decision-makers considerably easier for MSPs.

Compliance resolved

Compliance can be particularly onerous for SMEs, especially where customer data is concerned. For example, a standard like PCI DSS, or HIPAA (for those with North American operations), demands specific standards of care in terms of data storage, retention and recovery. Hybrid solutions can help smooth this path by providing compliant backup storage off-premises for retention, protecting data from corruption and providing a 'paper trail' of documentation that establishes a solid data recovery process.

Good news for MSPs

Finally, hybrid cloud offers many benefits on the MSP side of the coin, delivering sustainable recurring revenues, not only via the core backup services themselves, which will tend to grow over time as data volumes increase, but also via additional services. New value-add services might include monitoring the SME's environment for new backup needs, or periodic business continuity drills, for example, improving the MSP's customer retention and helping their business grow.

Written by Andrew Stuart, Managing Director, EMEA, Datto

 

About Datto

Datto is an innovative provider of comprehensive backup, recovery and business continuity solutions used by thousands of managed service providers worldwide. Datto’s 140+ PB private cloud and family of software and hardware devices provide Total Data Protection everywhere business data lives. Whether your data is on-prem in a physical or virtual server, or in the cloud via SaaS applications, only Datto offers end-to-end recoverability and single-vendor accountability. Founded in 2007 by Austin McChord, Datto is privately held and profitable, with venture backing by General Catalyst Partners. In 2015 McChord was named to the Forbes “30 under 30” ranking of top young entrepreneurs.

CIOs look to the cloud for seamless M&A


Sebastian Krause, General Manager for IBM Cloud Europe

For senior CIOs, knowing how to respond to an M&A and divestiture situation is key, as mergers, acquisitions and divestitures are a critical component of business strategy.

Projections for European M&A transactions show total deal values are set to rise from US$621 billion in 2014 to US$936 billion by 2017. M&A activity is likely to be bolstered by continued positive monetary policy, with additional cross-border M&A activity likely to take place as a result of a strong US dollar, primarily in Spain, Germany, and Italy.

Increasingly, businesses are using M&A to grow their organisation, achieve economies of scale, expand product portfolios, globalise and diversify.

In the intense negotiations around this business change, IT operations are likely to face dramatic reorganisation as various stakeholders analyse existing systems and look at the potential for efficiencies.

This is about survival and the IT division is likely to be under intense scrutiny during this period, under pressure to perform critical functions such as the integration or separation of critical systems and data, the provision of an uninterrupted service during the transition period, and the prompt delivery of synergy targets. IT strategy is therefore core to any successful M&A or divestiture plan and a critical contributor to its success or failure.

Increasingly, CIOs are under pressure to meet these challenges quickly and at lower cost. Their ability to do so can even impact the way analysts assess potential deals. IT-dependent synergies have been found to be responsible for a large proportion (30 to 60%) of M&A benefits, but 70% of M&As fail to meet their synergy targets in the planned timeframe.

Realising these M&A and divestiture targets for enterprise IT environments is complex and requires a holistic approach that considers public, private, IaaS, PaaS, and SaaS as well as non-cloud delivery models.

Some CIOs may approach the situation by simply making adjustments to the existing IT landscape – from CRM and ERP through to office applications.

This can involve singling out certain components of an established Enterprise Resource Planning (ERP) system, cloning the existing ERP environment, deploying existing systems into the acquired business asset or transferring data between differing systems with the expectation that no issues with integration will arise. These approaches have certainly worked in the past, but can be costly, challenging to implement and disruptive.

This is why many CIOs are looking at a move towards cloud-based applications and infrastructure, which can take the pain out of the M&A process. Broadly, the drivers for moving to cloud services are increased agility, speed, innovation and lowering costs.

They can help organisations going through mergers and acquisitions to realise synergy benefits more quickly, simplify integration and accelerate the change programme, reduce costs through efficiencies, mitigate costly migration investments and encourage financial flexibility.

Top cloud benefits for M&A:

  • Achieving synergy more quickly: Cloud enabled applications simplify portability, integration and deployment.
  • Lowering costs: The cloud can provide temporary burst capacity for the migration.
  • Increased financial flexibility: Cloud provides a flexible cost model, allowing organisations to easily move between CAPEX and OPEX to impact EBITA and cash flow.
  • Simplifying changes: Cloud simplifies the creation of APIs to hide the underlying complexity of multiple, overlapping systems (a minimal illustration follows this list).
When preparing for an M&A or divestiture, it's worth considering what the future IT model will look like, which APIs are needed to simplify required activities and how applications can be cloud enabled for portability and deployment.
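As a hedged illustration of the API point above, the short Python sketch below puts a single facade in front of two hypothetical legacy CRM systems with differing interfaces, so that callers in the merged business see one stable customer lookup while consolidation continues behind it.

```python
# Hypothetical systems and data throughout; the point is the facade shape.
class LegacyCrmA:
    def find(self, email):
        return {"email": email, "segment": "enterprise"}   # stubbed record

class LegacyCrmB:
    def lookup_customer(self, email):
        return None                                        # stubbed miss

class CustomerApiFacade:
    """Single lookup the merged business calls; callers never see which system answered."""
    def __init__(self):
        a, b = LegacyCrmA(), LegacyCrmB()
        # adapters normalise each system's differing call into one list of lookups
        self.lookups = [a.find, b.lookup_customer]

    def get_customer(self, email):
        for lookup in self.lookups:
            record = lookup(email)
            if record:
                return record
        return None

print(CustomerApiFacade().get_customer("jane@example.com"))
```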

Developing a repeatable platform that delivers these benefits and simplifies M&A activities will greatly improve an organisation’s ability to grow and be successful. It may even open up new opportunities that might not have been possible without the cost, flexibility, and scalability benefits that cloud solutions can deliver.

With businesses already realising real benefits, the cloud’s role in M&A is only set to grow. By building a cloud model that works, organisations can avoid reorganising IT operations for each merger or acquisition and ensure a much more seamless transition.

Through implementing an approach that can speed the execution and success of these deals, CIOs can look to deliver value from the IT department that goes far beyond just support, to true business leadership.

Written by Sebastian Krause, General Manager for IBM Cloud Europe

Big Data looks inwards to transform network management and application delivery

We've all heard of the business applications touted by big data advocates – data-driven purchasing decisions, enhanced market insights and actionable customer feedback. These are undoubtedly of great value to businesses, yet organisations only have to look inwards to find further untapped potential. Here Manish Sablok, Head of Field Marketing NWE at ALE explains the two major internal IT processes that can benefit greatly from embracing big data: network management and application delivery.

SNS Research estimated Big Data investments reached $40 billion worldwide this year. Industry awareness and reception is equally impressive – ‘89% of business leaders believe big data will revolutionise business operations in the same way the Internet did.’ But big data is no longer simply large volumes of unstructured data or just for refining external business practices – the applications continue to evolve. The advent of big data analytics has paved the way for smarter network and application management. Big data can ultimately be leveraged internally to deliver cost saving efficiencies, optimisation of network management and application delivery.

What’s trending on your network?

Achieving complete network visibility has been a primary concern of CIOs in recent years – and now the arrival of tools to exploit big data provides a lifeline. Predictive analytics techniques enable a transition from a reactive to proactive approach to network management. By allowing IT departments visibility of devices – and crucially applications – across the network, the rise of the Bring Your Own Device (BYOD) trend can be safely controlled.

The newest generation of switch technology has advanced to the stage where application visibility capability can now be directly embedded within the most advanced switches. These switches, such as the Alcatel-Lucent Enterprise OmniSwitch 6860, are capable of providing an advanced degree of predictive analytics. The benefits of these predictive analytics are varied – IT departments can establish patterns of routine daily traffic in order to swiftly identify anomalies hindering the network. Put simply, the ability to detect what is ‘trending’ – be it backup activities, heavy bandwidth usage or popular application deployment – has now arrived.
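A minimal sketch of that 'trending' idea, assuming invented traffic figures and a crude threshold: learn a baseline from routine samples and flag readings that deviate sharply. Real switch analytics are considerably richer than this.

```python
# Invented baseline figures and a simple threshold, purely to show the idea.
from statistics import mean, stdev

baseline_mbps = [120, 130, 125, 118, 122, 127, 131, 119]   # routine samples for one hourly slot
mu, sigma = mean(baseline_mbps), stdev(baseline_mbps)

def is_anomalous(observed_mbps, k=3.0):
    """Flag traffic more than k standard deviations away from the learned baseline."""
    return abs(observed_mbps - mu) > k * sigma

print(is_anomalous(128))   # False - looks like routine load
print(is_anomalous(410))   # True  - e.g. an unscheduled backup or heavy BYOD usage
```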

More tasks can be automated than ever before, with a dynamic response to network and user needs becoming standard practice. High-priority users, such as internal teams requiring continued collaboration, can be allocated the necessary network capacity in real time.

Effectively deploy, monitor and manage applications

Effective application management has its own challenges, such as the struggle to enforce flexible but secure user and device policies. Big data provides the business intelligence necessary to closely manage application deployment by analysing data streams, including application performance and user feedback. Insight into how employees or partners are using applications allows IT departments to identify redundant features or little used devices and to scale back or increase support and development accordingly.

As a result of the increasing traffic from voice, video and data applications, new network management tools have evolved alongside the hardware. The need to reduce the operational costs of network management, while at the same time providing increased availability, security and multimedia support has led to the development of unified management tools that offer a single, simple window into applications usage. Centralised management can help IT departments predict network trends, potential usage issues and manage users and devices – providing a simple tool to aid business decisions around complex processes.

Through the effective deployment of resources based on big data insight, ROI can be maximised. Smarter targeting of resources makes for a leaner IT deployment, and reduces the need for investment in further costly hardware and applications.

Networks converging on the future

Big data gathering, processing and analytics will all continue to advance and develop as more businesses embrace the concept and the market grows. But while the existing infrastructure in many businesses is capable of using big data to a limited degree, a converged network infrastructure, by providing a simplified and flexible architecture, will maximise the benefits and at the same time reduce Total Cost of Ownership – and meet corporate ROI requirements.

By introducing this robust network infrastructure, businesses can ensure a future-proof big data operation is secure. The advent of big data has brought with it the ability for IT departments to truly develop their ‘smart network’. Now it is up to businesses to seize the opportunity.

Written by Manish Sablok, Head of Field Marketing NWE at Alcatel Lucent Enterprise

Securing Visibility into Open Source Code

The Internet runs on open source code. Linux, Apache Tomcat, OpenSSL, MySQL, Drupal and WordPress are built on open source. Everyone, every day, uses applications that are either open source or include open source code; commercial applications typically contain only 65 per cent custom code. Development teams can easily use 100 or more open source libraries, frameworks, tools and code snippets when building an application.

The widespread use of open source code to reduce development times and costs makes application security more challenging. That's because the bulk of the code contained in any given application is often not written by the team that develops or maintains it. For example, the 10 million lines of code incorporated in the GM Volt's control systems include open source components. Car manufacturers like GM are increasingly taking an open source approach because it gives them broader control of their software platforms and the ability to tailor features to suit their customers.

Whether for the Internet, the automotive industry, or for any software package, the need for secure open source code has never been greater, but CISOs and the teams they manage are losing visibility into the use of open source during the software development process.

Using open source code is not a problem in itself, but not knowing what open source is being used is dangerous, particularly when many components and libraries contain security flaws. The majority of companies exercise little control over the external code used within their software projects. Even those that do have some form of secure software development lifecycle tend to only apply it to the code they write themselves – 67 per cent of companies do not monitor their open source code for security vulnerabilities.

The Path to Better Code

Development frameworks and newer programming languages make it much easier for developers to avoid introducing common security vulnerabilities such as cross-site scripting and SQL injection. But developers still need to understand the different types of data an application handles and how to properly protect that data. For example, session IDs are just as sensitive as passwords, but are often not given the same level of attention. Access control is notoriously tricky to implement well, and most developers would benefit from additional training to avoid common mistakes.
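As a concrete reminder of one of those vulnerability classes, the sketch below contrasts an injectable query with a parameterised one; sqlite3 is used purely so the example is self-contained.

```python
# sqlite3 is used only to keep the sketch self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'jane@example.com')")

user_input = "jane@example.com' OR '1'='1"   # a classic injection attempt

# Vulnerable pattern (do not do this): string formatting lets the attacker rewrite the query.
#   conn.execute(f"SELECT * FROM users WHERE email = '{user_input}'")

# Safe pattern: the driver binds the value, so the input is treated as data, not SQL.
rows = conn.execute("SELECT * FROM users WHERE email = ?", (user_input,)).fetchall()
print(rows)   # [] - the injection attempt matches nothing
```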


Mike Pittenger, VP of Product Strategy at Black Duck Software

Developers need to fully understand how the latest libraries and components work before using them, so that these elements are integrated and used correctly within their projects. One reason people feel safe using the OpenSSL library and take the quality of its code for granted is its FIPS 140-2 certificate. But in the case of the Heartbleed vulnerability, the affected Heartbeat extension was outside the scope of the FIPS validation. Development teams may have read the documentation covering secure use of OpenSSL call functions and routines, but how many realised that the entire codebase was not certified?

Automated testing tools will certainly improve the overall quality of in-house developed code. But CISOs must also ensure the quality of an application’s code sourced from elsewhere, including proper control over the use of open source code.

Maintaining an inventory of third-party code through a spreadsheet simply doesn’t work, particularly with a large, distributed team. For example, the spreadsheet method can’t detect whether a developer has pulled in an old version of an approved component, or added new, unapproved ones. It doesn’t ensure that the relevant security mailing lists are monitored or that someone is checking for new releases, updates, and fixes. Worst of all, it makes it impossible for anyone to get a full sense of an application’s true level of exposure.
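A deliberately small sketch of what automated inventory checking adds over a spreadsheet: on every build, compare the components actually pulled in against an approved list and a known-vulnerability feed. The approved list, feed and build manifest below are invented for illustration (the OpenSSL 1.0.1f / Heartbleed pairing is the one real-world detail).

```python
# Invented approved list, vulnerability feed and build manifest.
APPROVED = {"openssl": "1.0.2g", "tomcat": "8.0.32"}
KNOWN_VULNS = {("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)"}

def audit(build_manifest):
    findings = []
    for name, version in build_manifest.items():
        if name not in APPROVED:
            findings.append(f"{name} {version}: not an approved component")
        elif version != APPROVED[name]:
            findings.append(f"{name} {version}: differs from approved {APPROVED[name]}")
        if (name, version) in KNOWN_VULNS:
            findings.append(f"{name} {version}: {KNOWN_VULNS[name, version]}")
    return findings

print(audit({"openssl": "1.0.1f", "leftpad": "0.1"}))
```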

Know Your Code

Developing secure software means knowing where the code within an application comes from, that it has been approved, and that the latest updates and fixes have been applied, not just before the application is released, but throughout its supported life.

While using open source code makes business sense for efficiency and cost reasons, open source can undermine security efforts if it isn’t well managed. Given the complexity of today’s applications, the management of the software development lifecycle needs to be automated wherever possible to allow developers to remain agile enough to keep pace, while reducing the introduction and occurrence of security vulnerabilities.

For agile development teams to mitigate security risks from open source software, they must have visibility into the open source components they use, select components without known vulnerabilities, and continually monitor those components throughout the application lifecycle.

Written by Mike Pittenger, VP of Product Strategy at Black Duck Software.

More than just a low sticker price: Three key factors for a successful SaaS deployment

One of the key challenges for businesses when evaluating new technologies is understanding what a successful return on investment (ROI) looks like.

In its infancy, the business benefits of the cloud-based Software-as-a-Service (SaaS) model were simple: save on expensive infrastructure, while remaining agile enough to scale up or down depending on demand. Yet as cloud-based tools have become ubiquitous, both inside and outside the workplace, measuring success has extended beyond simple infrastructure savings.

In theory, the ability to launch new projects in hours and replace high infrastructure costs with a low monthly subscription should deliver substantial ROI benefits. But what happens to that ROI when the IT team discovers, six months after deployment, that end-user adoption is as low as 10 per cent? If businesses calculated the real "cost per user" in these instances, the benefits promised by cloud would simply diminish. This is becoming a real issue for businesses that bought in on the promise of scalability or reduced infrastructure costs.

In reality, success demands real organisational change, not just a cheap licensing fee. That's why IT buyers must take time to look beyond the basic "sticker price" and begin to understand the end-user.

Aiming for seamless collaboration

As the enterprise workplace becomes ever-more fragmented, a "collaborative approach" is becoming increasingly important to business leaders. Industry insight, experience and understanding are all things that can't be easily replicated by the competition. Being able to easily share this knowledge across an entire organisation is an extremely valuable asset – especially when trying to win new customers. That said, in organisations where teams need to operate across multiple locations (be it in different offices or different countries), this can be difficult to implement: collaboration becomes inefficient, content gets lost and confidential data exposed – harming reputation and reducing revenue opportunities.

Some cloud-based SaaS solutions are quite successful in driving collaboration, improving the agility of teams and the security of their content. For example, Baker Tilly International – a network of 157 independent accountancy and business advisory firms, with 27,000 employees across 133 countries – significantly improved efficiency and created more time to bid for new business by deploying a cloud-based collaboration platform with government-grade security. However, not all organisations experience this success when deploying new cloud technologies. Some burden themselves with services that promise big ROI through innovation, but struggle with employee adoption.

Here are the three key considerations all IT buyers must look at when evaluating successful SaaS deployment:

  1. Building awareness and confidence for better user experience

All enterprise systems, cloud or otherwise, need ownership and structure. IT teams need to understand how users and information move between internal systems. The minute workflows become broken, users will abandon the tool and default back to what has worked for them in the past. The result: poor user adoption and even increased security risks as users try to circumvent the new process. Building awareness and confidence in cloud technologies is the key to curbing this.

While cloud-based SaaS solutions are sold on their ease of use, end user education is paramount to ensuring an organization sees this value. The truth is, media scaremongering around data breaches has resulted in a fear of “the cloud”, causing many employees, especially those that don’t realise the consumer products they use are cloud-based, to resist using these tools in the workplace. In addition to teaching employees how to use services, IT teams must be able to alleviate employee concerns – baking change management into a deployment schedule.

These change management services aren’t often included within licensing costs, making the price-per-user seem artificially low. IT teams must be sure to factor in education efforts for driving user adoption and build an ROI not against price-per-user, but the actual cost-per-user.

  2. Data security isn’t just about certifications

There’s a thin line drawn between usability and security. If forced to choose, security must always come first. However, be aware that in the age of citizen IT too much unnecessary security can actually increase risk. That may seem contradictory but if usability is compromised too deeply, users will default to legacy tools, shadow IT or even avoid processes altogether.

Many businesses still struggle with the concept of their data being stored offsite. However, for some this mind-set is changing and the focus for successful SaaS implementations is enablement. In these businesses, IT buyers not only look for key security credentials – robust data hosting controls, application security features and secure mobile working – to meet required standards and compliance needs; but also quality user experience. The most secure platform in the world serves no purpose if employees don’t bother to use it.

Through clear communication and a well-thought-out on-boarding plan for end users, businesses can ensure all employees are trained and adequately supported as they begin using the solution.

  3. Domain expertise

One of the key advantages of cloud-based software is its ability to scale quickly and drive business agility. Today, scale is not only a measure of infrastructure but also a measure of user readiness.

This requires SaaS vendors to respond quickly to a business’s growth by delivering all of the things that help increase user adoption, including adequate user training, managing new user on-boarding, and even monitoring usage data and feedback to deliver maximum value as the business begins to scale.

Yes, SaaS removes the need for big upgrade costs but without support from a seasoned expert, poor user adoption puts ROI at risk.

SaaS is about service

Cloud-based SaaS solutions can deliver a flexible, efficient and reliable way to deploy software into an organisation, helping to deliver ROI through reduced deployment time and infrastructure savings. However, these businesses must never forget that the second “S” in SaaS stands for service, and that successful deployments require more than just a low “sticker price”.

Written by Neil Rylan, VP of Sales EMEA, Huddle

The flexible working phenomenon – what’s holding us back?

We live in a world where the 9-5 office job is rapidly becoming obsolete. The office worker is no longer chained to a desk, thanks to the rapid rise and swift adoption of technologies which enable work to take place at home, on the move, or anywhere with an internet or mobile connection.

At least, that’s what the world would have you believe. According to the latest research from UC EXPO, many workers still aren’t aware that they have the right to request flexible working from their employers. Even more worryingly, many office-based workers say that not all employees have access to these seemingly universal policies. So what’s going on at an employee level? Is the flexible working revolution really as advanced as it seems?

A flexible revolution – embracing new working ideals

It can’t be denied that the workplace and attitudes towards the traditional office-based role is changing. In a sharp increase on previous years, 27% of UK office workers now regularly work outside their base, and just under that (22%) say that they have worked at home, remotely, or elsewhere with flexible hours more in 2015 than they did in previous years.

It’s clear that the option to work flexible hours is seen as a right nowadays, but interestingly, so is remote working. The right to request flexible working became law in 2014, but 74% of the UK’s office-based workforce think that requesting remote working should be a right too.

It’s not just the ability to ‘be your own boss’ which makes flexible working so attractive. 82% of UK workers are much more likely to take a job that offers flexible working benefits than one that doesn’t, which presents an issue for businesses that don’t adhere to this. Whilst workers whose job roles demand a strict 9-5 presence are excluded, the benefits of flexible working are more widely recognised than a year ago, with a whopping 90% of those surveyed citing flexible working as essential to maintaining a better work/life balance. So much so, in fact, that it is valued higher than any other benefit, including a season ticket loan and daily free breakfast!

What’s stalling the flexible phenomenon?

Despite the widespread acknowledgment and appreciation of flexible working policies, it seems that total adoption is still a long way away. The concerns of recent years are still prevalent, including questions around BYOD security and the ability to trust employees to actually work when they are out of the office on company time. Yet 67% of UK office workers believe that productivity levels either increase or stay the same when working remotely.

Although the concerns around productivity and security are decreasing, thanks to the increasingly secure technologies available, a worrying number of UK office workers are still not aware of their right to request flexible working. In 2015, 50% of workers were unaware of this law, whereas in early 2016, around 39% are still unaware. So, despite a decrease, it’s still a significant proportion of the workforce who are potentially missing out on adopting the work style that suits them best.

The future of UC

Unified Communications technologies are helping to stimulate the growth of flexible working culture – most of us have used video conferencing at some point, in addition to other cloud-based collaboration tools. This is starting to become more sophisticated, and eventually, we will see a much more fluid working policy for the majority of UK businesses. As UC EXPO exhibitor Tim Bishop of Unify comments: “The office as we know it faces an uncertain future. According to our research, 69% of knowledge workers say that having a single office as a physical workplace is less important than it was in the past, and 49% report that their organizations operate through technology and communication (such as virtual teams) rather than through offices and locations”.

Whilst Unify, and many others, argue that this will be a good thing, until the concerns around security are truly resolved, and we have a foolproof method of ensuring productivity and security when employees work remotely, there will always be something holding us back to some extent. That said, it’s clear that this is the future of the workforce – time for businesses and technology providers alike to get on board and embrace the change.

Written by Bradley Maule-ffinch, Director of Strategy at UC EXPO


About UC EXPO

UC EXPO is Europe’s largest unified communications & collaboration (UC&C) event, for those looking to find out how the latest unified communications can drive and support their business. The event showcases brand new exclusive content and senior level insights from across the industry. UC EXPO 2016, together with Unified Communications Insight (www.ucinsight.com) and the world’s largest UC&C LinkedIn group, delivers news, insight and knowledge throughout the year. Attending UC EXPO 2016 will help to ensure business decisions are being made based on the latest best practice for improved communications and collaboration, and organisations are able to continue, or start, their journey in enabling workforce mobility.

UC EXPO 2016 will take place on 19-20 April 2016 at Olympia, London.

For full details of the event, or to register for free, visit www.ucexpo.co.uk or follow UC EXPO on Twitter using the hashtag #UCEXPO.

Head in the clouds: Four key trends affecting enterprises

Cloud is changing the way businesses function, providing a new and improved level of flexibility and collaboration. Companies worldwide are realising the cloud’s capabilities to generate new business models and promote sustainable competitive advantage, and the impact of this is becoming very apparent: a Verizon report recently revealed that 69 per cent of businesses that have used cloud have put it to use to significantly reengineer one or more of their business processes. It’s easy to see why there’s still so much hype around cloud. We’ve heard so much about cloud computing over the last few years that you could be forgiven for thinking that it is now universally adopted, but the reality is that we are still only just scratching the surface, as cloud is still very much in a period of growth and expansion.

Looking beyond the horizon

At present, the majority of corporate cloud adoption is around Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS) offerings such as AWS, Azure, Office 365 and Salesforce.com. These services offer cheap buy-in and a relatively painless implementation process, which remains separate from the rest of corporate IT. Industry analyst Gartner says IaaS spending is set to grow 38.4 per cent over the course of 2016, while worldwide SaaS spending is set to grow 20.3 per cent over the year, reaching $37.7 billion. However, the real promise of cloud is much more than IaaS, PaaS or SaaS: it’s a transformative technology moving compute power and infrastructure between on-premise resources, private cloud and public cloud.

As enterprises come to realise the true potential of cloud, we’ll enter a period of great opportunity for enterprise IT, but there will be plenty of adoption-related matters to navigate. Here are four big areas enterprises will have to deal with as cloud continues to take the world by storm:

  1. Hybrid cloud will continue to dominate

Hybrid cloud will rocket up the agenda, as businesses and providers alike continue to realise that there is no one-size-fits-all approach to cloud adoption. Being able to mix and match public and private cloud services from a range of different providers enables businesses to build an environment that meets their unique needs more effectively. To date, this has been held back by interoperability challenges between cloud services, but a strong backing for open application programming interfaces (APIs) and multi-cloud orchestration platforms is making it far easier to integrate cloud services and on-premise workloads alike. As a result, we will continue to see hybrid cloud dominate the conversation.

  2. Emergence of iPaaS

The drive towards integration of on-premises applications and the cloud is giving rise to Integration Platform as a Service (iPaaS). Cloud integration still remains a daunting task for many organisations, but iPaaS is a cloud-based integration solution that is slowly and steadily gaining traction within enterprises. With iPaaS, users can develop integration flows that connect applications residing in the cloud or on premises, and deploy them without installing any hardware or software. Although iPaaS is relatively new, categories of iPaaS vendors are beginning to emerge, including ecommerce/B2B integration and cloud integration. With integration challenges still a huge issue for enterprises using cloud, demand for iPaaS is only set to grow over the coming months.

  3. Containers will become reality

To date, a lot of noise has been made about the possibilities of container technology, but in reality its use has yet to fully kick off. That’s set to change as household-name public clouds such as Amazon, Microsoft and Google are now embracing containers; IBM’s Bluemix offering in particular is set to make waves with its triple-pronged Public, Dedicated and Local delivery model. Building a wave of momentum for many application and OS technology manufacturers to ride, it will now become increasingly realistic for them to construct support services around container technology. This does present a threat to the traditional virtualisation approach, but over time a shift in hypervisors is on the cards and container technology can only improve from this point.

  4. Cloud will be used for Data Resiliency/Recovery services

With cloud storage prices coming down drastically and continuous improvements being made to cloud gateway platforms, the focus is set to shift to cloud-powered backup and disaster recovery services. We are in an age where everything is being offered ‘as a service’; the idea of cloud-powered on-demand usability suits backup and disaster recovery services very well because they do not affect the immediate production data. As such, this should be an area where cloud use will dramatically increase over the next year.

With all emerging technologies, it takes time to fully figure out what they actually mean for enterprises, and these four cloud trends reflect that. In reality we’re only just getting started with cloud. Now that enterprises understand how it works, the time has come for them to turn the screw and begin driving even more benefits from it.

Written by Kalyan Kumar, Chief Technologist at HCL Technologies.

Digital Transformation: Seven Big Traps to avoid in Implementing Bimodal IT

‘Bimodal IT’ is a term coined by Gartner. It describes an approach for keeping the lights on with mission-critical but stable core IT systems (Mode 1), whilst taking another route (Mode 2) to deliver the innovative new applications required to digitally transform and differentiate the business.

Both streams of IT are critical. Mode 1 requires highly specialised programmers and long, detailed development cycles; control, detailed planning and process adherence are the priorities. Projects are technical and require little involvement from business teams. Mode 2 requires a high degree of business involvement, fast turnaround and frequent updates; effectively a quick sprint to rapidly transform business ideas into applications.

According to a recent survey by the analyst group, nearly 40 per cent of CIOs have embraced bimodal IT, with the majority of the remainder planning to follow in the next three years. Tellingly, those yet to implement bimodal IT were also those who fared worst in terms of digital strategy performance.

If you’re one of the recently converted, you won’t want to rush blindly into bimodal IT, oblivious to the mistakes made by those who have already ventured down that path.

Based on experience over many customer projects, here are seven mistakes and misconceptions I’ve learned firms need to avoid when implementing bimodal IT:

1. Thinking bimodal IT impacts only IT – In transforming how IT operates, bimodal IT changes the way the business operates too. Mode 2 is about bringing IT and business together to collaboratively bring new ideas to market. This requires the business to be much more actively involved, as well as take different approaches to planning, budgeting and decision making.

2. Lacking strong (business) leadership – Strong IT and business leadership is absolutely critical to implementing bimodal IT. The individual responsible for operationally setting up Mode 2 needs to be a strong leader, and ideally even a business leader. That’s because the goals and KPIs of Mode 2 are completely different from those of Mode 1. When Mode 2 is set up by someone with a Mode 1 mind-set, they tend to focus on the wrong things (e.g. upfront planning vs. learning as you go, technical evaluations vs. business value etc.), limiting the team’s chance of success.

3. Confusing Mode 2 with ‘agile’ – One of the biggest misconceptions about bimodal IT is that Mode 2 is synonymous with agile. Don’t get me wrong; iterative development is a key part of it. Because requirements for digital applications are often fuzzy, teams need to work in short, iterative cycles, creating functionality, releasing it, and iterating continually based on user feedback. But the process element extends beyond agile, encompassing DevOps practices (to achieve the deployment agility required for continuous iteration) and new governance models.

4. Not creating dedicated teams for Mode 1/2 – Organisations that have one team serving as both Mode 1 and Mode 2 will inevitably fail. For starters, Mode 1 always takes precedence over Mode 2. When your SAP production instance goes down, your team is going to drop everything to put out the fire, leaving the innovation project on the shelf. Second, Mode 1 and Mode 2 require a different set of people, processes and platforms. By forcing one team to perform double duty, you’re not setting yourself up for success.

5. Overlooking the Matchmaker role – When building your Mode 2 team, it’s important to identify the individual(s) that will help cultivate and prioritise new project ideas through a strong dialog with the business. These matchmakers have a deep understanding of, and trusted relationship with the business, which they can leverage to uncover new opportunities that can be exploited with Mode 2. Without them, it’s much harder to identify projects that deliver real business impact.

6. Keeping Mode 1 and 2 completely separate – While we believe Mode 1 and Mode 2 teams should have separate reporting structures, the two teams should never be isolated from each other. In fact, the two should collaborate and work closely together, whether to integrate a Mode 2 digital application with a system of record or to transfer maintenance of a digital application to Mode 1 once it becomes mission critical, requiring stability and security over speed and agility.

7. Ignoring technical debt – Mode 2 is a great way to rapidly bring new applications to market. However, you can’t move fast at the expense of accumulating technical debt along the way. It is important to ensure maintainability, refactoring applications over time as required.

While 75 per cent of IT organisations will have a bimodal capability by 2017, Gartner predicts that half of those will make a mess of it. Don’t be one of them! Avoid the mistakes above so that you implement bimodal IT properly and sustainably, with a focus on the right business outcomes that drive your digital innovation initiatives forward.

Written by Roald Kruit, Co-founder at Mendix

Overcoming the data integration challenge in hybrid and cloud-based environments

Industry experts estimate that data volumes are doubling in size every two years. Managing all of this is a challenge for any enterprise, but it’s not just the volume of data so much as the variety of data that presents a problem. With SaaS and on-premises applications, machine data, and mobile apps all proliferating, we are seeing the rise of an increasingly complicated value-chain ecosystem. IT leaders need to take a portfolio-based approach and combine cloud and on-premises deployment models to sustain competitive advantage. Improving the scale and flexibility of data integration across both environments to deliver a hybrid offering is necessary to provide the right data to the right people at the right time.

The evolution of hybrid integration approaches creates requirements and opportunities for converging application and data integration. The definition of hybrid integration will continue to evolve, but its current trajectory is clearly headed to the cloud.

According to IDC, cloud IT infrastructure spending will grow at a compound annual growth rate (CAGR) of 15.6 per cent between now and 2019, at which point it will reach $54.6 billion. In line with this, customers need to advance their hybrid integration strategy to best leverage the cloud. At Talend, we have identified five phases of integration, starting from the oldest and most mature right through to the most bleeding edge and disruptive. Here we take a brief look at each and show how businesses can optimise the approach as they move from one step to the next.

Phase 1: Replicating SaaS Apps to On-Premise Databases

The first stage in developing a hybrid integration platform is to replicate SaaS applications to on-premises databases. Companies in this stage typically either need analytics on some of the business-critical information contained in their SaaS apps, or they are sending SaaS data to a staging database so that it can be picked up by other on-premise apps.

In order to increase the scalability of existing infrastructure, it’s best to move to a cloud-based data warehouse service within AWS, Azure, or Google Cloud. The scalability of these cloud-based services means organisations don’t need to spend cycles refining and tuning the databases. Additionally, they get all the benefits of utility-based pricing. However, with the myriad of SaaS apps today generating even more data, they may also need to adopt a cloud analytics solution as part of their hybrid integration strategy.
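A minimal sketch of the Phase 1 pattern, assuming a hypothetical SaaS REST endpoint and token, and using sqlite3 as a stand-in for the target database, so the extract-and-stage shape is visible without tying it to any particular warehouse or integration tool.

```python
# Endpoint, token and table are hypothetical; sqlite3 stands in for the
# warehouse connection so only the shape of the job is shown.
import sqlite3
import requests

def extract(endpoint, token):
    page = 1
    while True:
        resp = requests.get(endpoint, params={"page": page},
                            headers={"Authorization": f"Bearer {token}"}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("records", [])
        if not batch:
            break
        yield from batch
        page += 1

def load(records, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS staging_contacts (id TEXT, email TEXT)")
    conn.executemany("INSERT INTO staging_contacts VALUES (?, ?)",
                     [(r["id"], r["email"]) for r in records])
    conn.commit()

# load(extract("https://api.example-saas.com/v1/contacts", "TOKEN"),
#      sqlite3.connect("staging.db"))
```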

Phase 2: Integrating SaaS Apps directly with on-premises apps

Each line of business has its preferred SaaS app of choice: sales departments have Salesforce, marketing has Marketo, HR has Workday, and finance has NetSuite. However, these SaaS apps still need to connect to a back-office ERP on-premises system.

Due to the complexity of back-office systems, there isn’t yet a widespread SaaS solution that can serve as a replacement for ERP systems such as SAP R/3 and Oracle EBS. Businesses would be best advised not to try to integrate with every single object and table in these back-office systems – but rather to accomplish a few use cases really well so that their business can continue running, while also benefiting from the agility of cloud.

Phase 3: Hybrid Data Warehousing with the Cloud

Databases or data warehouses on a cloud platform are geared toward supporting data warehouse workloads: low-cost, rapid proof-of-value and ongoing data warehouse solutions. As the volume and variety of data increases, enterprises need a strategy to move their data from on-premises warehouses to newer, Big Data-friendly cloud resources.

While they take time to decide which Big Data protocols best serve their needs, they can start by trying to create a Data Lake in the cloud with a cloud-based service such as Amazon Web Services (AWS) S3 or Microsoft Azure Blobs. These lakes can relieve cost pressures imposed by on-premise relational databases and act as a “demo area”, enabling businesses to process information using their Big Data protocol of choice and then transfer into a cloud-based data warehouse. Once enterprise data is held there, the business can enable self-service with Data Preparation tools, capable of organising and cleansing the data prior to analysis in the cloud.
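For the landing-zone step, here is a short boto3 sketch (the bucket name, prefix layout and sample records are assumptions) showing raw extracts being written cheaply to S3 so they can be processed later by whichever big data engine the business settles on.

```python
# Bucket name, prefix layout and sample records are assumptions.
import json
import datetime
import boto3

s3 = boto3.client("s3")

def land_raw_extract(records, source, bucket="example-data-lake"):
    key = f"raw/{source}/{datetime.date.today():%Y/%m/%d}/extract.json"
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(records).encode("utf-8"))
    return key

# land_raw_extract([{"id": 1, "clicks": 42}], source="weblogs")
```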

Phase 4: Real-time Analytics with Streaming Data

Businesses today need insight at their fingertips in real-time. In order to prosper from the benefits of real-time analytics, they need an infrastructure to support it. These infrastructure needs may change depending on use case—whether it be to support weblogs, clickstream data, sensor data or database logs.

As big data analytics and ‘Internet of Things’ (IoT) data processing moves to the cloud, companies require fast, scalable, elastic and secure platforms to transform that data into real-time insight. The combination of Talend Integration Cloud and AWS enables customers to easily integrate, cleanse, analyse, and manage batch and streaming data in the Cloud.
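The following is not Talend's or AWS's reference architecture, just a minimal producer sketch (stream name and event fields are assumed) showing the shape of pushing clickstream events onto a stream for downstream real-time processing.

```python
# Stream name and event fields are assumptions; error handling omitted.
import json
import boto3

kinesis = boto3.client("kinesis")

def publish_click(event, stream="clickstream-events"):
    kinesis.put_record(
        StreamName=stream,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("session_id", "anonymous")),
    )

# publish_click({"session_id": "abc123", "page": "/pricing", "ts": 1461000000})
```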

Phase 5: Machine Learning for Optimized App Experiences

In the future, every experience will be delivered as an app through mobile devices. In providing the ability to discover patterns buried within data, machine learning has the potential to make applications more powerful and more responsive. Well-tuned algorithms allow value to be extracted from disparate data sources without the limits of human thinking and analysis. For developers, machine learning offers the promise of applying business critical analytics to any application in order to accomplish everything from improving customer experience to serving up hyper-personalised content.

To make this happen, developers need to:

  • Be “all-in” with the use of Big Data technologies and the latest streaming big data protocols
  • Have large enough data sets for the machine algorithm to recognize patterns
  • Create segment-specific datasets using machine-learning algorithms (see the sketch after this list)
  • Ensure that their mobile apps have properly-built APIs to draw upon those datasets and provide the end user with whatever information they are looking for in the correct context
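As a hedged illustration of the segmentation bullet above, this scikit-learn sketch clusters invented usage features into segments that segment-specific datasets or content could then be built around.

```python
# Feature values are invented; columns might be sessions/week, minutes/session, purchases.
import numpy as np
from sklearn.cluster import KMeans

usage = np.array([
    [2, 5, 0], [3, 6, 1], [20, 35, 4], [18, 40, 5], [7, 12, 1], [22, 30, 6],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(usage)
for user_row, segment in zip(usage, model.labels_):
    print(user_row, "-> segment", segment)   # each segment can then get its own dataset/content
```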

Making it Happen with iPaaS

In order for companies to reach this level of ‘application nirvana’, they will need to have first achieved or implemented each of the four previous phases of hybrid application integration.

That’s where we see a key role for integration platform-as-a-service (iPaaS), which is defined by analysts at Gartner as ‘a suite of cloud services enabling development, execution and governance of integration flows connecting any combination of on premises and cloud-based processes, services, applications and data within individual or across multiple organisations.’

The right iPaaS solution can help businesses achieve the necessary integration, and even bring in native Spark processing capabilities to drive real-time analytics, enabling them to move through the phases outlined above and ultimately successfully complete stage five.

Written by Ashwin Viswanath, Head of Product Marketing at Talend