
What’s New in Parallels Desktop 13

Are you curious about what's new in our latest version of Parallels Desktop 13? Parallels Desktop® for Mac enables you to run Windows, Linux, and other popular operating systems without rebooting your Mac®. For over 11 years, Parallels has stood tall as the #1 desktop virtualization solution for millions of users. Parallels Desktop 13 for Mac has […]


Parallels Toolbox features

Support team guest blog author: Ajith Mamolin. If you already use Parallels Toolbox, or are planning to give it a shot soon, you might be interested to learn more about the awesome Parallels Toolbox features we introduced in a past blog post. The Parallels development team is constantly expanding Toolbox functionality, and it now includes around […]


Storage Wars: Cloud vs. the Card for Storing Mobile Content

In May, Samsung announced what it describes as the world's highest-capacity microSD card. The Samsung EVO+ 256GB microSD card has enough space to store more than 55,000 photos, 46 hours of HD video or 23,500 MP3 files. It can be used in phones, tablets, video cameras and even drones, and is set to be available in 50 countries worldwide.

The announcement of Samsung's new card comes at a time when the amount of mobile content that consumers create, consume and store on their smartphones and mobile devices is increasing at an exponential rate. The growing number of connected devices with advanced features, including high-resolution cameras, 4K video recording and faster processors, is fuelling a global 'content explosion'. The content being created today is richer and heavier than ever, placing a growing strain on device storage capacities, which can put data at risk and impair the user experience.

Earlier this year, 451 Research and Synchronoss Technologies charted the growth of smartphone content and found that the average smartphone user now generates 911MB of new content every month. At this rate, a typical 16GB smartphone – which already holds almost 11GB of user content – will fill up in less than two months. Given that a high proportion of smartphone owners have low-capacity devices (31% have 16GB, and 57% have 32GB or less), many will, if they haven't already, quickly find themselves having to make difficult decisions. At the moment, this means frequently removing photos, videos and apps to make room for new ones.

It's also surprising that almost half of smartphone users have no off-device storage in place at all, despite the variety of storage options available. One option is a hardware solution like a memory card. Samsung claims its new microSD card delivers a seamless user experience when accessing, sharing and storing content between different devices (depending on compatibility, of course). Samsung's suggested price for this experience is $250. However, there is another storage option for end users: the cloud.

Cloud-based storage arguably provides a more flexible and secure method for end-users to back up, transfer and restore their precious content. A memory card, like a phone, can be damaged, lost or stolen. In contrast, the cloud is an ever-present secure repository that retains and restores consumers’ files, photos and media, even if they lose or damage their device or card. However, even in the US, the most mature market for consumer uptake of cloud storage services, more than half of smartphone users are not currently using the cloud to manage their smartphone content.

But why should operators care? 

Subscriber loyalty to operators is being tested. Rather than receive a subsidised handset as part of a contract with an operator, growing numbers of people now buy their devices directly from the manufacturer on a regular subscription plan. And rather than commit to a long-term contract, these consumers enter into no-obligation, rolling connectivity-only agreements with their operator.

Offering consumers access to a personal cloud platform is an important opportunity for operators to re-engage with these consumers and keep them tied to their services. Helping subscribers manage the spiralling volume of their content could do far more than faddish offers and promotional bundles to keep subscribers connected to an operator's brand and ecosystem.

There is already plenty of cloud competition in the market, from Google Drive and iCloud to Dropbox and Box, but hosted storage and access has the potential to be much more than a "me too" play for operators, or simply an answer to churn.

Cloud services can be a viable revenue generator for operators in their own right. They equip operators with an attractive channel for brand partnerships and developers to reach subscribers with an expanded ecosystem of services. Considerable productivity and profitability benefits can also be found, including reducing time on device-to-device content transfer and freeing up operators’ in-store staff for more in-depth customer engagement.

Operators shouldn't approach the provision of cloud technology with unease. After all, their core business is providing secure wireless transport for voice, and increasingly data, quickly, at scale and across a wide range of mobile phones and other connected devices. Cloud storage and access is a natural extension of this business. Of course, given the current climate of heightened awareness around privacy and security, it's crucial to work with a vendor with a strong track record. However, operators should realise they're in a stronger position than they think when it comes to providing cloud services.

Written by Ted Woodbery, VP, Marketing & Product Strategy at Synchronoss Technologies

Benefits of cloud communications in a crisis situation

Nick Hawkins, Managing Director EMEA of Everbridge, discusses how, in crisis situations, organisations can use cloud-based platforms to communicate with employees anywhere in the world: to identify which employees may be affected, communicate instructions quickly, and receive responses verifying who may be at risk.

In November 2015, the people of Paris were victims of a series of co-ordinated terrorist attacks that targeted several locations and venues across the city.  Whilst emergency services were left to deal with the aftermath of the deadliest attack on the French capital since the Second World War, businesses across Europe were once again reminded of the importance of having effective emergency planning procedures that help to protect employees in the event of a crisis.

In the event of any emergency or crisis situation—such as the attacks in Paris—secure, effective and reliable communication is crucial.  Modern workforces are mobile, so it is vital for businesses of all sizes to ensure that the bilateral lines of communication between management and staff remain open in any situation.  It can be difficult for organisations to manually keep track of everyone’s locations, schedules and travel plans at all times.  The solution is to utilise the power of a critical communications platform to implement crisis management plans that will help to keep businesses operational and effective in the event of an emergency, and ensure that staff are safe and protected.

Location Data

The benefits of opting to use a cloud-based platform in the event of a crisis are twofold. Firstly, location-mapping functions can easily be installed on employees' smartphones, meaning that businesses can receive regular alerts and updates on their employees' last known locations. This wealth of data is then readily accessible should a crisis situation develop, ensuring that management is not only able to locate all of its staff but can also coordinate a more effective response, prioritising and deploying resources to help those employees deemed to be at risk. Without this location-mapping function, businesses are left in the dark, forced to rely solely on traditional routes of communication to find out whether their staff are in danger.

For example, if you had a mobile sales force out at various events across London when a series of terrorist attacks disabled the GSM network and made traditional mobile communication virtually impossible, what would you do? How would you know if your staff were safe?

Organisations with crisis management plans that include a cloud-based location-mapping tool can instantly see that Employee A is out of the impact zone and safe, while Employee B is at the epicentre of the crisis and likely to be in danger, making communication with them the top priority.
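To make the Employee A and Employee B distinction concrete, here is a minimal, purely illustrative sketch (in Python, not Everbridge's platform) of how last-known locations reported by a location-mapping app could be checked against an incident's impact zone. The function names, coordinates and the 2 km radius are all invented for the example.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def employees_at_risk(last_known_locations, incident, radius_km=2.0):
    """Return the employees whose last known location falls inside the impact zone."""
    return [
        name for name, (lat, lon) in last_known_locations.items()
        if haversine_km(lat, lon, incident[0], incident[1]) <= radius_km
    ]

# Hypothetical last-known locations reported by the smartphone app.
locations = {
    "Employee A": (51.4700, -0.4543),   # Heathrow, well outside the zone
    "Employee B": (51.5079, -0.0877),   # London Bridge, inside the zone
}
incident_epicentre = (51.5055, -0.0865)

print(employees_at_risk(locations, incident_epicentre))  # ['Employee B']
```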

The common alternative to using cloud-based software to track the location of employees is to use GPS tracking devices.  However, not only are these expensive and liable to be lost or stolen, but they are also unable to be turned off.  The advantage of using application-based software installed on an employee’s smartphone is that the location alert function can be turned off whilst they are not travelling.  The most proactive businesses agree hostile areas and travel restrictions with staff as a key part of their emergency planning procedures, with staff agreeing to make sure that location-mapping is always turned on whilst traveling and in areas that are deemed to be at risk.  This allows the function to be switched off when an employee is in a safe-zone, providing a balance between staff privacy and protection.

Secure, Two-way Messaging

The second advantage of implementing secure, cloud-based communication platforms into a business's emergency communications plan is that they enable users to quickly and reliably send secure messages to all members of staff, individual employees or specific target groups of people. These crisis notifications are sent out through multiple contact paths, including SMS messages, emails, VoIP calls, voice-to-text alerts, app notifications and many more. In fact, with cloud-based software installed on an employee's smartphone, there are more than 100 different contact paths that management can use to communicate and send secure messages to their workforce, wherever they may be in the world. This is a crucial area where cloud-based platforms have an advantage over other crisis communication tools; unlike the SMS blasters of the past, emergency notifications are not only sent out across all available channels and contact paths, but continue to be sent until the recipient acknowledges them.

This two-way polling feature means that businesses can design bespoke templates to send out to staff in the event of an emergency, which allows them to quickly respond and inform the company as to their current status and whether they are in need of any assistance.   Being able to send out notifications and receive responses, all within a few minutes, means businesses can rapidly gain visibility of an incident and react more efficiently to an unfolding situation.
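The pattern described above, notifications that cycle through every available contact path until the recipient responds, can be sketched in a few lines. The snippet below is a simplified illustration with stub senders standing in for real SMS, email and push gateways; it is not Everbridge's API.

```python
import time

# Stub senders standing in for real contact paths (SMS gateway, email, VoIP, push...).
def send_sms(contact, msg):   print(f"SMS   -> {contact}: {msg}")
def send_email(contact, msg): print(f"EMAIL -> {contact}: {msg}")
def send_push(contact, msg):  print(f"PUSH  -> {contact}: {msg}")

CONTACT_PATHS = [send_push, send_sms, send_email]   # tried in priority order

def notify_until_acknowledged(contact, message, acknowledged, retry_seconds=30, max_rounds=3):
    """Cycle through every contact path until the recipient acknowledges or we give up."""
    for _round in range(max_rounds):
        for send in CONTACT_PATHS:
            send(contact, message)
        if acknowledged(contact):          # two-way response, e.g. "reply 1 if safe"
            return True
        time.sleep(retry_seconds)
    return False                           # escalate to a human coordinator

# Example: a fake acknowledgement check that succeeds on the second round.
responses = iter([False, True])
ok = notify_until_acknowledged("employee-b", "Are you safe? Reply 1 = safe, 2 = need help",
                               acknowledged=lambda _c: next(responses), retry_seconds=0)
print("acknowledged:", ok)
```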

Power of Wi-Fi Enabled Devices

By utilising cloud computing and capitalising on the capabilities of the one device an employee is most likely to have on or near their persons at all times—their smartphone—lines of communication can remain open, even when more traditional routes are out of order. For example, during the recent terrorist attacks in Brussels in March 2016 the GSM network went offline, making standard mobile communication impossible.  The citizens of the Belgian capital were unable to send messages to family, friends and work colleagues.  The team at Brussels Airport made its public Wi-Fi discoverable and free of a network key, allowing anyone with a Wi-Fi enabled device to connect and send messages. For crisis management and business continuity, this ability to remain in contact with employees is essential to ensuring that both a business and its staff are protected and capable of handling an emergency.

In crisis situations businesses need to have a plan that works in real life, not just on paper.  Secure, cloud-based communications platforms enable a business to react and protect itself and its staff from any harm, ensuring that the organisation is best prepared to face the challenges of the future.

What is the promise of big data? Computers will be better than humans

Big data as a concept has in fact been around longer than computer technology, which would surprise a number of people.

Back in 1944, Wesleyan University librarian Fremont Rider wrote a paper estimating that American university libraries were doubling in size every sixteen years, meaning that by 2040 the Yale Library would occupy over 6,000 miles of shelves. This is not big data as most people would know it, but the vast and rapid increase in the quantity and variety of information in the Yale library rests on the same principle.

The concept was not known as big data back then, but technologists today face a similar challenge in handling such a vast amount of information: not necessarily how to store it, but how to make use of it. The promise of big data, and of data analytics more generally, is to provide intelligence, insight and predictability, but only now are we reaching a stage where technology is advanced enough to capitalise on the vast amount of information available to us.

Back in 2003 and 2004, Google published papers on the Google File System and MapReduce, which are generally credited as the starting point of the Apache Hadoop platform. At that point, few people could have anticipated the explosion of technology we've since witnessed. Cloudera Chairman and CSO Mike Olson is one of those few, and he leads a company regularly cited as one of the go-to organizations for the Apache Hadoop platform.
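The programming model those papers described is easiest to see with the classic word-count example: a map step emits key-value pairs, a shuffle groups them by key, and a reduce step aggregates each group. The sketch below runs the whole thing in-process in Python purely to illustrate the idea; real Hadoop jobs distribute the same steps across a cluster.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (key, value) pair for every word.
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce: combine the values for one key into a single result.
    return key, sum(values)

documents = ["big data is big", "data drives decisions"]
pairs = (pair for doc in documents for pair in map_phase(doc))
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)   # {'big': 2, 'data': 2, 'is': 1, 'drives': 1, 'decisions': 1}
```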

“We’re seeing innovation in CPUs, in optical networking all the way to the chip, in solid state, highly affordable, high performance memory systems, we’re seeing dramatic changes in storage capabilities generally. Those changes are going to force us to adapt the software and change the way it operates,” said Olson, speaking at the Strata + Hadoop event in London. “Apache Hadoop has come a long way in 10 years; the road in front of it is exciting but is going to require an awful lot of work.”

Analytics was previously seen as an opportunity for companies to look back at their performance over a defined period and develop lessons for employees on how future performance could be improved. Today, advanced analytics is applied to improving performance in real time. A company can react in real time to shift the focus of a marketing campaign, or alter a production line to improve the outcome. The promise of big data and IoT is predictability and data-defined decision making, which can shift a business from a reactive position to a predictive one. Understanding trends can create proactive business models which advise decision makers on how to steer a company. But what comes next?

Cloudera Chairman and CSO Mike Olson

For Olson, machine learning and artificial intelligence are where the industry is heading. We're at a stage where big data and analytics can be used to automate processes and replace humans for simple tasks. In a short period of time, we've seen some significant advances in applications of the technology, most notably Google's AlphaGo beating world Go champion Lee Se-dol and Facebook's use of AI in picture recognition.

Computers taking on humans at games of strategy is not a new PR stunt; IBM's Deep Blue defeated chess world champion Garry Kasparov in 1997. But this is a very different proposition. While chess is a game which relies on strategy, Go is another beast: due to the vast number of permutations available, strategies within the game rely on intuition and feel, a far more complex task for the Google team. The fact that AlphaGo won the match demonstrates how far researchers have progressed in making machine learning and artificial intelligence a reality.

"In narrow but very interesting domains, computers have become better than humans at vision and we're going to see that piece of innovation absolutely continue," said Olson. "Big Data is going to drive innovation here."

This may be difficult for a number of people to comprehend, but big data has entered the business world; true AI and automated, data-driven decision making may not be too far behind. Data is driving the direction of businesses, whether through a better understanding of the customer, increased organisational security or a clearer view of the risk associated with any business decision. Big data is no longer a theory, but an established business strategy.

Olson is not saying computers will replace humans, but the number and variety of processes which can be handed over to machines is certainly growing, and growing faster every day.

How SMEs are benefitting from the best of both worlds with hybrid cloud architecture

Hybrid cloud architecture has taken a while to mature, but now offers businesses unparalleled flexibility, ROI and scalability. The smaller the business, the more vital these traits are, making hybrid cloud the number one choice for SMEs in 2016.

It's been more than two years since Gartner predicted that, by 2017, 50 per cent of enterprises would be using a hybrid of public and private cloud operations. The prediction, made back in 2013, was based on growing private cloud deployment coupled with interest in hybrid cloud, but a lack of actual uptake. "Actual deployments [of hybrid cloud] are low, but aspirations are high," said Gartner at the time.

It’s fair to say that Gartner’s prediction has been borne out, with hybrid cloud services rapidly becoming a given for a whole range of businesses, but perhaps less predictably the value of hybrid is being most felt in the SME sector, where speed, ROI and overall flexibility are most intensely valued. As enterprise data requirements continue to rocket – indeed overall business data volume is growing at a rate of more than 60 per cent annually – it’s not hard to see why this sector is burgeoning.

Data protection is no longer an option

Across the board, from major corporations through to SMEs in particular, there’s now clear recognition that data protection is no longer merely a “nice-to-have”, it’s a basic requirement for doing business. Not being able to access customer, operational or supply-chain data for even short periods can be disastrous, and every minute of downtime impacts on ROI. Critically, losing data permanently threatens to damage operational function, as well as business perception. The latter point is particularly important in terms of business relationships with suppliers and customers that may have taken years to develop, but can be undone in the course of a few hours of unexplained downtime. It’s never been easier to take business elsewhere, so the ability to keep up and running irrespective of hardware failure or an extreme weather event is essential.

Speed and cost benefits combined

Perhaps the most obvious benefit of hybrid cloud technology (a combination of on-premises and off-premises deployment models) is that SMEs are presented with enterprise-class IT capabilities at a much lower cost. SMEs that outsource the management of IT services to managed service providers (MSPs) pay per seat, gain immediate scalability and avoid the complexity of managing the same systems in-house. This model also avoids the requirement for capital investment, allowing SMEs to sidestep large upfront costs while still enjoying the benefits – such as data protection in the example of hybrid cloud data backup.

One UK business that saved around £200,000 in lost revenue thanks to these benefits is Mandarin Stone, a natural stone and tile retailer. Having implemented a hybrid cloud disaster recovery system from Datto, the company suffered an overheated main server just months later, but was able to switch operations to a virtualised cloud server in just hours while replacement hardware was set up, in contrast to a previous outage that took days to resolve. "Datto was invaluable," said Alana Preece, Mandarin Stone's Financial Director, "and the device paid for itself in that one incident. The investment [in a hybrid cloud solution] was worth it."

The considerable upside of the hybrid model is that where immediate access to data or services is required, local storage devices can make this possible without any of the delay associated with hauling large datasets down from the cloud. SMEs in particular are affected by bandwidth concerns as well as costs. In the event of a localised hardware failure or loss of a business mobile device, for example, data can be locally restored in just seconds.

Unburden the network for better business

Many hybrid models use network downtime to back up local files to the cloud, lowering the impact on bandwidth during working hours while also ensuring that there is an off-premises backup in place in the event of a more serious incident, such as extreme weather. Of course, this kind of network management isn't a new idea, but with a hybrid cloud setup it's much more efficient. In a cloud-only implementation, the SME's server will have one or more agents running to dedupe, compress and encrypt each backup, using the server's resources. A local device taking on this workload leaves the main server to deal with day-to-day business unhindered, and means that backups can be made efficiently as they're required, then uploaded to the cloud when bandwidth is less in demand.
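As a rough illustration of that local-appliance workflow, the hypothetical sketch below compresses and encrypts a backup on the local device, skips the job if nothing has changed since the last run (a crude stand-in for dedupe), and only pushes the result to the cloud during an assumed off-peak window. It uses the Python standard library plus the third-party cryptography package; the paths, key handling and upload window are placeholders, not how any particular vendor's appliance works.

```python
import datetime, hashlib, pathlib, tarfile
from cryptography.fernet import Fernet   # third-party: pip install cryptography

KEY = Fernet.generate_key()              # in practice, a key held by the backup appliance
OFF_PEAK_HOURS = range(22, 24)           # assumed upload window: 22:00-23:59
seen_fingerprints = set()                # states that have already been backed up

def fingerprint(source_dir: str) -> str:
    """Hash file names and contents: a crude whole-backup change check
    (a real appliance would dedupe at block level)."""
    digest = hashlib.sha256()
    for path in sorted(pathlib.Path(source_dir).rglob("*")):
        if path.is_file():
            digest.update(str(path).encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

def make_backup(source_dir: str, out_path: str) -> pathlib.Path | None:
    """Compress source_dir into a gzipped tar and encrypt it at rest;
    skip the backup entirely if nothing has changed since last time."""
    fp = fingerprint(source_dir)
    if fp in seen_fingerprints:
        return None                      # nothing new to protect
    seen_fingerprints.add(fp)
    archive = pathlib.Path(out_path)
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=pathlib.Path(source_dir).name)
    encrypted = archive.with_suffix(archive.suffix + ".enc")
    encrypted.write_bytes(Fernet(KEY).encrypt(archive.read_bytes()))
    archive.unlink()                     # keep only the encrypted copy on the local device
    return encrypted

def upload_if_off_peak(path: pathlib.Path) -> bool:
    """Defer the cloud copy until bandwidth demand is low."""
    if datetime.datetime.now().hour in OFF_PEAK_HOURS:
        print(f"uploading {path.name} to cloud storage...")   # cloud provider API call goes here
        return True
    print(f"{path.name} queued for the off-peak upload window")
    return False
```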

Of course, since Gartner's original prediction there's been considerable consumer uptake of cloud-based backups such as Apple's iCloud and Google Drive, which has de-stigmatised the cloud and driven acceptance and expectations. SMEs have been at the forefront of this revolution, making cloud technology far more widely accepted as reliable, cost-effective, low-hassle and scalable. The fact that Google Apps and Microsoft Office 365 are both largely cloud-based shows just how far the adoption barriers have fallen since 2013, which makes reassuring SME decision-makers considerably easier for MSPs.

Compliance resolved

Compliance can be particularly onerous for SMEs, especially where customer data is concerned. For example, standards like PCI DSS, or HIPAA (for those with North American operations), demand specific levels of care in terms of data storage, retention and recovery. Hybrid solutions can help smooth this path by providing compliant off-premises backup storage for retention, protecting data from corruption and providing a 'paper trail' of documentation that establishes a solid data recovery process.

Good news for MSPs

Finally, hybrid cloud offers many benefits on the MSP side of the coin, delivering sustainable recurring revenues, not only via the core backup services themselves, which tend to grow over time as data volumes increase, but also via additional services. New value-add services might include monitoring the SME's environment for new backup needs, or running periodic business continuity drills, improving the MSP's customer retention and helping their business grow.

Written by Andrew Stuart, Managing Director, EMEA, Datto

 

About Datto

Datto is an innovative provider of comprehensive backup, recovery and business continuity solutions used by thousands of managed service providers worldwide. Datto’s 140+ PB private cloud and family of software and hardware devices provide Total Data Protection everywhere business data lives. Whether your data is on-prem in a physical or virtual server, or in the cloud via SaaS applications, only Datto offers end-to-end recoverability and single-vendor accountability. Founded in 2007 by Austin McChord, Datto is privately held and profitable, with venture backing by General Catalyst Partners. In 2015 McChord was named to the Forbes “30 under 30” ranking of top young entrepreneurs.

Big Data looks inwards to transform network management and application delivery

We've all heard of the business applications touted by big data advocates – data-driven purchasing decisions, enhanced market insights and actionable customer feedback. These are undoubtedly of great value to businesses, yet organisations only have to look inwards to find further untapped potential. Here Manish Sablok, Head of Field Marketing NWE at ALE, explains the two major internal IT processes that can benefit greatly from embracing big data: network management and application delivery.

SNS Research estimated that Big Data investments reached $40 billion worldwide this year. Industry awareness and reception is equally impressive: 89% of business leaders believe big data will revolutionise business operations in the same way the Internet did. But big data is no longer simply large volumes of unstructured data, or just a tool for refining external business practices – the applications continue to evolve. The advent of big data analytics has paved the way for smarter network and application management, and big data can ultimately be leveraged internally to deliver cost-saving efficiencies and the optimisation of network management and application delivery.

What’s trending on your network?

Achieving complete network visibility has been a primary concern of CIOs in recent years – and now the arrival of tools to exploit big data provides a lifeline. Predictive analytics techniques enable a transition from a reactive to proactive approach to network management. By allowing IT departments visibility of devices – and crucially applications – across the network, the rise of the Bring Your Own Device (BYOD) trend can be safely controlled.

The newest generation of switch technology has advanced to the stage where application visibility capability can now be directly embedded within the most advanced switches. These switches, such as the Alcatel-Lucent Enterprise OmniSwitch 6860, are capable of providing an advanced degree of predictive analytics. The benefits of these predictive analytics are varied – IT departments can establish patterns of routine daily traffic in order to swiftly identify anomalies hindering the network. Put simply, the ability to detect what is ‘trending’ – be it backup activities, heavy bandwidth usage or popular application deployment – has now arrived.
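As a simple illustration of what 'establishing patterns of routine daily traffic' can mean in practice, the sketch below builds a per-hour baseline from historical readings and flags anything far outside it. It is a generic Python example with invented numbers, not how the OmniSwitch itself implements its analytics.

```python
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(history):
    """history: list of (hour, mbps) samples from routine days -> per-hour (mean, stdev)."""
    by_hour = defaultdict(list)
    for hour, mbps in history:
        by_hour[hour].append(mbps)
    return {h: (mean(v), stdev(v)) for h, v in by_hour.items() if len(v) > 1}

def is_anomaly(baseline, hour, mbps, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from that hour's norm."""
    mu, sigma = baseline[hour]
    return abs(mbps - mu) > threshold * max(sigma, 1e-9)

# Invented routine traffic for 09:00 over several days, then one suspicious spike.
history = [(9, m) for m in (410, 395, 402, 388, 420, 405)]
baseline = build_baseline(history)
print(is_anomaly(baseline, 9, 404))    # False: within the daily pattern
print(is_anomaly(baseline, 9, 1350))   # True: likely a backup job or bandwidth hog
```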

More tasks can be automated than ever before, with a dynamic response to network and user needs becoming standard practice. High-priority users, such as internal teams requiring continued collaboration, can be allocated the necessary network capacity in real time.

Effectively deploy, monitor and manage applications

Effective application management has its own challenges, such as the struggle to enforce flexible but secure user and device policies. Big data provides the business intelligence necessary to closely manage application deployment by analysing data streams, including application performance and user feedback. Insight into how employees or partners are using applications allows IT departments to identify redundant features or little used devices and to scale back or increase support and development accordingly.

As a result of the increasing traffic from voice, video and data applications, new network management tools have evolved alongside the hardware. The need to reduce the operational costs of network management, while at the same time providing increased availability, security and multimedia support has led to the development of unified management tools that offer a single, simple window into applications usage. Centralised management can help IT departments predict network trends, potential usage issues and manage users and devices – providing a simple tool to aid business decisions around complex processes.

Through the effective deployment of resources based on big data insight, ROI can be maximised. Smarter targeting of resources makes for a leaner IT deployment, and reduces the need for investment in further costly hardware and applications.

Networks converging on the future

Big data gathering, processing and analytics will all continue to advance and develop as more businesses embrace the concept and the market grows. But while the existing infrastructure in many businesses is capable of using big data to a limited degree, a converged network infrastructure, by providing a simplified and flexible architecture, will maximise the benefits and at the same time reduce Total Cost of Ownership – and meet corporate ROI requirements.

By introducing this robust network infrastructure, businesses can ensure their big data operation is secure and future-proof. The advent of big data has brought with it the ability for IT departments to truly develop their 'smart network'. Now it is up to businesses to seize the opportunity.

Written by Manish Sablok, Head of Field Marketing NWE at Alcatel-Lucent Enterprise

Securing Visibility into Open Source Code

The Internet runs on open source code. Linux, Apache Tomcat, OpenSSL, MySQL, Drupal and WordPress are built on open source. Everyone, every day, uses applications that are either open source or include open source code; commercial applications typically contain only 65 per cent custom code. Development teams can easily use 100 or more open source libraries, frameworks, tools and code snippets when building an application.

The widespread use of open source code to reduce development times and costs makes application security more challenging. That's because the bulk of the code contained in any given application is often not written by the team that develops or maintains it. For example, the 10 million lines of code incorporated in the GM Volt's control systems include open source components. Car manufacturers like GM are increasingly taking an open source approach because it gives them broader control of their software platforms and the ability to tailor features to suit their customers.

Whether for the Internet, the automotive industry, or for any software package, the need for secure open source code has never been greater, but CISOs and the teams they manage are losing visibility into the use of open source during the software development process.

Using open source code is not a problem in itself, but not knowing what open source is being used is dangerous, particularly when many components and libraries contain security flaws. The majority of companies exercise little control over the external code used within their software projects. Even those that do have some form of secure software development lifecycle tend to only apply it to the code they write themselves – 67 per cent of companies do not monitor their open source code for security vulnerabilities.

The Path to Better Code

Development frameworks and newer programming languages make it much easier for developers to avoid introducing common security vulnerabilities such as cross-site scripting and SQL injection. But developers still need to understand the different types of data an application handles and how to properly protect that data. For example, session IDs are just as sensitive as passwords, but are often not given the same level of attention. Access control is notoriously tricky to implement well, and most developers would benefit from additional training to avoid common mistakes.
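To illustrate the point about session IDs deserving password-level care, here is a small, framework-agnostic Python sketch: generate a high-entropy token, store only a hash of it server-side, and compare tokens in constant time. It is an illustrative pattern, not a drop-in implementation for any particular web stack.

```python
import hashlib, hmac, secrets

def new_session_id() -> str:
    """High-entropy, unguessable session identifier (treat it like a password)."""
    return secrets.token_urlsafe(32)

def store_session(store: dict, session_id: str, user: str) -> None:
    """Store only a hash server-side, so a leaked session table can't be replayed directly."""
    store[hashlib.sha256(session_id.encode()).hexdigest()] = user

def lookup_session(store: dict, presented_id: str):
    """Constant-time comparison avoids leaking information through timing differences."""
    presented_hash = hashlib.sha256(presented_id.encode()).hexdigest()
    for stored_hash, user in store.items():
        if hmac.compare_digest(stored_hash, presented_hash):
            return user
    return None

sessions: dict = {}
sid = new_session_id()
store_session(sessions, sid, "alice")
print(lookup_session(sessions, sid))               # 'alice'
print(lookup_session(sessions, "guessed-token"))   # None
```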

Mike Pittenger, VP of Product Strategy at Black Duck Software

Developers need to fully understand how the latest libraries and components work before using them, so that these elements are integrated and used correctly within their projects. One reason people feel safe using the OpenSSL library, and take the quality of its code for granted, is its FIPS 140-2 certificate. But in the case of the Heartbleed vulnerability, the affected heartbeat extension falls outside the scope of the FIPS validation. Development teams may have read the documentation covering secure use of OpenSSL call functions and routines, but how many realised that the entire codebase was not certified?

Automated testing tools will certainly improve the overall quality of in-house developed code. But CISOs must also ensure the quality of an application’s code sourced from elsewhere, including proper control over the use of open source code.

Maintaining an inventory of third-party code through a spreadsheet simply doesn’t work, particularly with a large, distributed team. For example, the spreadsheet method can’t detect whether a developer has pulled in an old version of an approved component, or added new, unapproved ones. It doesn’t ensure that the relevant security mailing lists are monitored or that someone is checking for new releases, updates, and fixes. Worst of all, it makes it impossible for anyone to get a full sense of an application’s true level of exposure.
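By way of contrast, the sketch below shows the kind of automated check a spreadsheet can never give you: read a pinned Python dependency list and ask the public OSV.dev vulnerability database about each component. It is a minimal illustration (assuming a requirements.txt of name==version lines and network access), not Black Duck's product.

```python
import json, urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"   # public vulnerability database API

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Ask OSV.dev for known vulnerabilities affecting one pinned component."""
    query = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    request = urllib.request.Request(
        OSV_QUERY_URL, data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response).get("vulns", [])

def audit_requirements(path: str = "requirements.txt") -> None:
    """Walk a pinned dependency list (name==version per line) and report anything flagged."""
    for line in open(path):
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        vulns = known_vulnerabilities(name, version)
        status = ", ".join(v["id"] for v in vulns) if vulns else "no known issues"
        print(f"{name} {version}: {status}")

if __name__ == "__main__":
    audit_requirements()
```

Run regularly (for example in a CI pipeline), a check like this keeps the inventory and its exposure visible throughout the application's supported life, rather than only at release time.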

Know Your Code

Developing secure software means knowing where the code within an application comes from, that it has been approved, and that the latest updates and fixes have been applied, not just before the application is released, but throughout its supported life.

While using open source code makes business sense for efficiency and cost reasons, open source can undermine security efforts if it isn’t well managed. Given the complexity of today’s applications, the management of the software development lifecycle needs to be automated wherever possible to allow developers to remain agile enough to keep pace, while reducing the introduction and occurrence of security vulnerabilities.

For agile development teams to mitigate security risks from open source software, they must have visibility into the open source components they use, select components without known vulnerabilities, and continually monitor those components throughout the application lifecycle.

Written by Mike Pittenger, VP of Product Strategy at Black Duck Software.

More than just a low sticker price: Three key factors for a successful SaaS deployment

One of the key challenges for businesses when evaluating new technologies is understanding what a successful return on investment (ROI) looks like.

In its infancy, the business benefits of the cloud-based Software-as-a-Service (SaaS) model were simple: save on expensive infrastructure while remaining agile enough to scale up or down depending on demand. Yet as cloud-based tools have become ubiquitous, both inside and outside the workplace, measuring success has extended beyond simple infrastructure savings.

In theory the ability to launch new projects in hours and replace high infrastructure costs with a low monthly subscription should deliver substantial ROI benefits. But what happens to that ROI when the IT team discovers, six months after deployment, that end-user adoption is as low as 10 per cent? If businesses calculated the real “cost per user” in these instances, the benefits promised by cloud would simply diminish. This is becoming a real issue for businesses that bought on the promise of scalability, or reduced infrastructure costs.

In reality, success demands real organisational change, not just a cheap licensing fee. That's why IT buyers must take time to look beyond the basic "sticker price" and begin to understand the end-user.

Aiming for seamless collaboration

As the enterprise workplace becomes ever more fragmented, a "collaborative approach" is becoming increasingly important to business leaders. Industry insight, experience and understanding are all things that can't be easily replicated by the competition. Being able to easily share this knowledge across an entire organisation is an extremely valuable asset – especially when trying to win new customers. That said, in organisations where teams need to operate across multiple locations (be it in different offices or different countries), this can be difficult to implement: collaboration becomes inefficient, content gets lost and confidential data is exposed – harming reputation and reducing revenue opportunities.

Some cloud-based SaaS solutions are quite successful in driving collaboration, improving the agility of teams and the security of their content. For example, Baker Tilly International – a network of 157 independent accountancy and business advisory firms, with 27,000 employees across 133 countries – significantly improved efficiency and created more time to bid for new business by deploying a cloud-based collaboration platform with government-grade security. However, not all organisations experience this success when deploying new cloud technologies. Some burden themselves with services that promise big ROI through innovation, but struggle with employee adoption.

Here are the three key considerations all IT buyers must look at when evaluating a successful SaaS deployment:

1. Building awareness and confidence for better user experience

All enterprise systems, cloud or otherwise, need ownership and structure. IT teams need to understand how users and information move between internal systems. The minute workflows become broken, users will abandon the tool and default back to what has worked for them in the past. The result: poor user adoption and even increased security risks as users try to circumvent the new process. Building awareness and confidence in cloud technologies is the key to curbing this.

While cloud-based SaaS solutions are sold on their ease of use, end user education is paramount to ensuring an organization sees this value. The truth is, media scaremongering around data breaches has resulted in a fear of “the cloud”, causing many employees, especially those that don’t realise the consumer products they use are cloud-based, to resist using these tools in the workplace. In addition to teaching employees how to use services, IT teams must be able to alleviate employee concerns – baking change management into a deployment schedule.

These change management services aren’t often included within licensing costs, making the price-per-user seem artificially low. IT teams must be sure to factor in education efforts for driving user adoption and build an ROI not against price-per-user, but the actual cost-per-user.

2. Data security isn't just about certifications

There’s a thin line drawn between usability and security. If forced to choose, security must always come first. However, be aware that in the age of citizen IT too much unnecessary security can actually increase risk. That may seem contradictory but if usability is compromised too deeply, users will default to legacy tools, shadow IT or even avoid processes altogether.

Many businesses still struggle with the concept of their data being stored offsite. However, for some this mind-set is changing and the focus for successful SaaS implementations is enablement. In these businesses, IT buyers not only look for key security credentials – robust data hosting controls, application security features and secure mobile working – to meet required standards and compliance needs; but also quality user experience. The most secure platform in the world serves no purpose if employees don’t bother to use it.

Through clear communication and a well-thought-out on-boarding plan for end users, businesses can ensure all employees are trained and adequately supported as they begin using the solution.

3. Domain expertise

One of the key advantages of cloud-based software is its ability to scale quickly and drive business agility. Today, scale is not only a measure of infrastructure but also a measure of user readiness.

This requires SaaS vendors to respond quickly to a business's growth by delivering all of the things that help increase user adoption, including adequate user training, managing new user on-boarding, and even monitoring usage data and feedback to deliver maximum value as the business begins to scale.

Yes, SaaS removes the need for big upgrade costs but without support from a seasoned expert, poor user adoption puts ROI at risk.

SaaS is about service

Cloud-based SaaS solutions can deliver a flexible, efficient and reliable way to deploy software into an organisation, helping to deliver ROI through reduced deployment time and infrastructure savings. However, these businesses must never forget that the second "S" in SaaS stands for service, and that successful deployments require more than just a low "sticker price".

Written by Neil Rylan, VP of Sales EMEA, Huddle

Will containers change the world of cloud?

The rise of containers as a technology has been glorious and confusing in equal measure. While touted by some as the saviour of developers, and by others as the end of VMs, the majority simply don't understand containers as a concept or a technology.

In the simplest of terms, containers let you pack more computing workloads onto a single server; in theory, that means you can buy less hardware, build or rent less data centre space, and hire fewer people to manage that equipment.

"In the earlier years of computing, we had dedicated servers, which later evolved with virtualisation," says Giri Fox, Director of Technical Services at Rackspace. "Containers are part of the next evolution of servers, and have gained large media and technologist attention. In essence, containers are the lightest way to define an application and to transport it between servers. They enable an application to be sliced into small elements and distributed on one or more servers, which in turn improves resource usage and can even reduce costs."

There are some clear differences between containers and virtual machines, though. Linux containers give each application its own isolated environment in which to run, but multiple containers share the host server's operating system. Since you don't have to boot up an operating system, you can create containers in seconds rather than the minutes a virtual machine can take. They are faster, require less memory, offer higher-level isolation and are highly portable.
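That start-up speed is easy to demonstrate. The hypothetical sketch below uses the Docker SDK for Python to run a throwaway container and time it; it assumes a running Docker daemon and the docker package, and the image name is just an example (the first run will also spend time pulling the image).

```python
import time
import docker   # third-party Docker SDK for Python: pip install docker

client = docker.from_env()          # talks to the local Docker daemon

started = time.perf_counter()
# Run a throwaway container: no guest OS to boot, just an isolated process
# sharing the host kernel, so this typically completes in seconds.
output = client.containers.run(
    "python:3.12-alpine",
    ["python", "-c", "print('hello from inside a container')"],
    remove=True,                    # clean up the container when it exits
)
elapsed = time.perf_counter() - started

print(output.decode().strip())
print(f"container ran and exited in {elapsed:.1f}s")
```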

“Containers are more responsive and can run the same task faster,” adds Fox. “They increase the velocity of application development, and can make continuous integration and deployment easier. They often offer reduced costs for IT; testing and production environments can be smaller than without containers. Plus, the density of applications on a server can be increased which leads to better utilisation.

“As a direct result of these two benefits, the scope for innovation is greater than its previous technologies. This can facilitate application modernisation and allow more room to experiment.”

So the benefits are pretty open-ended: speed of deployment, the flexibility to run anywhere, no more expensive licences, greater reliability and more opportunity for innovation.

Which all sounds great, doesn’t it?

That said, a recent survey from the Cloud & DevOps World team brought out some very interesting statistics, first and foremost on understanding of the technology: 76% of respondents agreed with the statement "Everyone has heard of containers, but no-one really understands what containers are".

While containers have the potential to be the next big thing in the cloud industry, unless those in the ecosystem understand the concept and perceived benefits, it is unlikely to take off.

"Containers are evolving rapidly and present an interesting runtime option for application development," says Joe Pynadath, GM of EMEA for Chef. "We know that with today's distributed and lightweight apps, businesses, whether they are new start-ups or traditional enterprises, must accelerate their capabilities for building, testing and delivering modern applications that drive revenue.

“One result of the ever-greater focus on software development is the use of new tools to build applications more rapidly and it is here that containers have emerged as an interesting route for developers. This is because they allow you to quickly build applications in a portable and lightweight manner. This provides a huge benefit for developers in speeding up the application building process. However, despite this, containers are not able to solve the complexities of taking an application from build through test to production, which presents a range of management challenges for developers and operations engineers looking to use them.”

There is certainly potential for containers within the enterprise environment, but as with all emerging technologies there is a certain level of confusion as to how they will integrate within the current business model, and how the introduction will impact the IT department on a day-to-day basis.

"Some of the questions we're regularly asked by businesses looking to use containers are: 'How do you configure and tune the OS that will host them? How do you adapt your containers at run time to the needs of the dev, test and production environments they're in?'" comments Pynadath.

"While containers allow you to use discovery services or roll your own solutions, the need to monitor and manage them in an automated way remains a challenge for IT teams. At Chef, we understand the benefits containers can bring to developers and are excited to help them automate many of the complex elements that are necessary to support containerized workflows in production."

Vendors are confident that the introduction of containers will drive further efficiencies and speed within the industry, though we're yet to see a firm commitment from the mass market to demonstrate the technology will take off. The early-adopter uptake is promising, and there are case studies to demonstrate the much-lauded potential, but it's still early days.

In short, containers are good, but most people just need to learn what they are.