Gaining elastic visibility into your clouds – without the strain

(c)iStock.com/VisualCommunications

How do you see what isn’t physically there? That question is emerging as a major problem for enterprises as cloud increasingly becomes the new normal.

On-premises networks and physical data centres are disappearing fast, with core business applications and processes being migrated to cloud architectures. The latest Cisco Global Cloud Index states that by 2020, 92% of workloads will be processed in public and private cloud data centres, and just 8% in physical data centres.

The rapid move to cloud is understandable because the benefits are so alluring: cloud environments are elastic, scalable, and cost less to operate and manage, all of which enhances business agility. But in engineering there is no such thing as getting something for nothing: you always have to give something up in order to gain something. The trick, of course, is to gain something you value greatly while giving up something that matters less to you. In cloud migration, these benefits are being realised at the expense of visibility and insight into those cloud environments.

When we surveyed a range of businesses on their virtualisation practices, just 37% monitored their virtualised environments with the same rigour as their physical networks. So there is a big visibility gap when it comes to the cloud. While that may seem like an acceptable trade-off today, it may feel very different if and when things go wrong. Most organisations cannot take that chance, so they need to bridge the visibility gap quickly. They need that visibility for better control, to maintain security no matter where their data goes, and of course to ensure the reliability of their core business applications.

It might seem that this visibility gap could be fixed easily by inserting virtual network taps into the virtualised environment and sending the traffic to your monitoring, analytics, and security tools. Unfortunately, doing this would quickly flood those tools with data, because internal East-West traffic typically represents 80% of the total traffic in virtual data centres. It would be like connecting a lawn sprinkler to a fire hydrant. Identifying and extracting only relevant traffic is key, but how can you do that efficiently? Moreover, how can your virtual taps handle scaling up and down as virtual machines emerge and dissolve? Let’s take a closer look at the key requirements for visibility and monitoring in virtual environments.
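
Before turning to those requirements, it helps to make the “relevant traffic only” idea concrete. The sketch below shows one way a software tap could apply a Berkeley Packet Filter expression at the capture point, so that only traffic of interest is ever copied towards the tools. It is a minimal Python/Scapy illustration, not any vendor’s implementation; the interface names tap0 and tool0 and the filter expression are assumptions made for the example.

```python
# Minimal sketch: capture only relevant traffic at the tap and relay it to a
# monitoring interface, instead of mirroring every East-West packet.
# Assumes Scapy is installed and the script runs with capture privileges; the
# interface names are illustrative, not a reference to any specific product.
from scapy.all import sniff, sendp

TAP_IFACE = "tap0"      # hypothetical virtual tap interface
TOOL_IFACE = "tool0"    # hypothetical interface feeding the monitoring tool

# BPF filter applied at capture time: only HTTPS and database traffic from the
# application subnet is kept, so irrelevant East-West chatter never reaches
# the downstream tools.
RELEVANT = "net 10.0.0.0/16 and tcp and (port 443 or port 3306)"

def forward(pkt):
    """Relay a matching packet towards the analytics/security tools."""
    sendp(pkt, iface=TOOL_IFACE, verbose=False)

if __name__ == "__main__":
    sniff(iface=TAP_IFACE, filter=RELEVANT, prn=forward, store=False)
```

In a real deployment the filter would be generated from policy and pushed to every tap instance, but the principle is the same: discard irrelevant traffic as early as possible.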

Expanding your (visibility) horizon

There are four key points to consider when deploying virtual taps so you end up with meaningful, granular access to critical application traffic on virtualised networks. 

Horizontal scale: Cloud environments are attractive because they can scale up and down rapidly as user demands and workloads change. When placing the virtual taps in the virtual network, you need to be sure they can scale up to accommodate rapid growth in traffic volumes as well as user numbers and data interactions.  The taps should do this automatically, without needing IT intervention.  Virtualisation means agility, so if an application or service expands to handle 10x or 100x the number of users, make sure the virtual tap you are using can scale elastically – without impacting application performance. 

Securing in the dark: Virtualised networks are typically segmented using virtual firewalls to protect key applications and services from attack, and to prevent lateral movement in the virtualised environment that could compromise data or resources. So the virtual taps you use need to be able to see the application and network traffic flowing between segments. With this comprehensive insight, you can ensure that the appropriate security rules and policies governing each segment are being enforced.

More containers: As virtual machine use grows, container use multiplies even faster – by as much as 10-fold or more, as each application may employ multiple containers.  If your organisation is using container-based virtualisation to boost application performance, the virtual tap must be able to access traffic in the container environment.
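
To give a rough sense of what accessing traffic in a container environment involves in practice, the sketch below enters a running container’s network namespace from the host and starts a capture there. It assumes a Docker host with nsenter and tcpdump installed and root privileges; the container name billing-api is purely hypothetical, and the mechanics shown are illustrative rather than any particular product’s approach.

```python
# Rough sketch: capture traffic inside a container's network namespace.
# Assumes Docker, nsenter and tcpdump are present and the script runs as root;
# "billing-api" is a hypothetical container name.
import subprocess

def container_pid(name):
    """Ask Docker for the container's init process PID on the host."""
    result = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Pid}}", name],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

def capture(name, packet_count=100):
    """Run tcpdump inside the container's network namespace via nsenter."""
    pid = container_pid(name)
    subprocess.run(
        ["nsenter", "-t", pid, "-n",
         "tcpdump", "-i", "any", "-c", str(packet_count), "-w", f"{name}.pcap"],
        check=True,
    )

if __name__ == "__main__":
    capture("billing-api")
```

A production tap would do this continuously and per workload rather than as a one-off capture, but the namespace boundary it has to cross is the same.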

DevOps elasticity: When your DevOps team puts out a new build – which, remember, doesn’t just cover new applications and services, but also updates to existing ones – that update propagates across the virtual environment. Individual virtual machines, containers, and by association their hosted applications, have shorter and shorter lifespans, which requires continual awareness of the actual state of the environment. It is vital that these changes neither block the entire traffic path nor take the virtual tap down with them. As an example, consider how you would archive and retrieve monitored traffic from a container that no longer exists. The tap is your sentinel, and it has to maintain pervasive access to traffic so you can see what is happening on the virtual network: it must be fault-tolerant, even if the application it is monitoring fails.
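
One way to maintain that continual awareness is to subscribe to the orchestrator’s own event stream and attach or detach tap sessions as workloads appear and dissolve, which also helps with the horizontal-scale point above. The sketch below assumes a Kubernetes environment and the official Python client; attach_tap and detach_tap are hypothetical stand-ins for whatever hooks your visibility platform actually exposes.

```python
# Illustrative sketch: keep tap coverage in step with short-lived workloads by
# watching the orchestrator's event stream (Kubernetes assumed here).
# attach_tap/detach_tap are hypothetical hooks into a visibility platform.
from kubernetes import client, config, watch

def attach_tap(pod_name, namespace):
    print(f"attach tap -> {namespace}/{pod_name}")   # placeholder action

def detach_tap(pod_name, namespace):
    print(f"detach tap -> {namespace}/{pod_name}")   # placeholder action

def run():
    config.load_kube_config()   # use load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    for event in watch.Watch().stream(v1.list_pod_for_all_namespaces):
        pod = event["object"]
        name, ns = pod.metadata.name, pod.metadata.namespace
        if event["type"] == "ADDED":
            attach_tap(name, ns)    # new workload: start monitoring it
        elif event["type"] == "DELETED":
            detach_tap(name, ns)    # workload gone: release the tap session

if __name__ == "__main__":
    run()
```

Because the tap reacts to the platform’s events rather than to manual tickets, monitoring coverage follows the workload automatically, wherever it lands.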

These four points apply when monitoring any virtualised environment, whether public cloud, private cloud or software defined wide-area networks (SD-WANs):  the virtual taps and the overall visibility solution need to be completely environment-agnostic. 

Elastic visibility

Once the virtual taps have been deployed to extract traffic from the virtual machines in your environments, you are ready to start processing packets. This volume of traffic needs to be filtered and controlled using a network packet broker, which keeps duplicate data from overwhelming monitoring and security tools and ensures they scale up and down as needed. Data traffic should be broken into manageable pieces using packet filtering, grooming and brokering processes, so that the security systems and analytics tools see everything they need without being swamped.
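
As a simplified illustration of the deduplication step a packet broker performs, the toy sketch below hashes each packet and forwards any given packet to the tools only once within a short window, no matter how many tap points saw it. It models the concept only; it is not a description of any particular broker.

```python
# Toy illustration of packet-broker deduplication: identical packets seen at
# multiple tap points are forwarded to the tools only once per time window.
import hashlib
import time

WINDOW_SECONDS = 2.0
_seen = {}  # packet digest -> last time the packet was forwarded

def should_forward(packet_bytes, now=None):
    """Return True if this packet has not been forwarded within the window."""
    now = time.monotonic() if now is None else now
    digest = hashlib.sha1(packet_bytes).hexdigest()
    # Drop expired entries so the lookup table stays bounded.
    for key in [k for k, t in _seen.items() if now - t > WINDOW_SECONDS]:
        del _seen[key]
    if digest in _seen:
        return False            # duplicate within the window: suppress it
    _seen[digest] = now
    return True

if __name__ == "__main__":
    pkt = b"example-captured-frame-bytes"   # stand-in for a captured packet
    print(should_forward(pkt))  # True  - first copy reaches the tools
    print(should_forward(pkt))  # False - duplicate is suppressed
```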

Elastically scalable access to all the data crossing your virtual networks and clouds is achievable, along with intelligent distribution of that data to analytics and compliance tools. Leaving your data unmonitored is not smart business. You do not have to give up visibility to gain cloud speed and cost advantages: with the right architecture you can have both, so do not compromise.

Spreadsheets, Clouds and Finance-Owned Solutions | @CloudExpo #Cloud #FinTech

It’s that time of year again! Finance teams are putting the finishing touches on next year’s corporate budgets and are likely looking back on 2016 to see what to change or improve in the year ahead.
Here at Vena, we’re making our annual predictions in finance and corporate performance management for 2017. We pulled these predictions together after countless conversations with customers, partners, industry analysts and more. Buckle your seatbelts – I’m sure 2017 will be a wild ride!

Six Cloud Trends to Watch in 2017 | @CloudExpo @NaviSite #Cloud #MachineLearning

While cloud may have formally entered the enterprise in 2006, it’s now a reality for nearly every company. The Cloud Era got its start with software-as-a-service (SaaS) offerings that paved the way for innovations in CRM and email deployments, creating new opportunities for the likes of Salesforce and Microsoft Office 365. Still in its “early stages,” cloud now encompasses legacy applications, backups, disaster recovery, security/audit log management and much more. 2017 will bring continued maturation and give way to a whole new generation of cloud-native applications.

IBM and Delos to Create Healthy Indoor Environments

One of the major concerns during any winter season is the quality of indoor air. Since most of us have to shut our windows and turn on the heating to protect against the cold, indoor air quality suffers greatly, leading to an increased chance of respiratory and other illnesses in winter. To prevent these problems and to promote healthy indoor environments, cloud giant IBM has partnered with Delos, a wellness, real estate and technology firm.

Under the terms of this partnership, Delos will tap into IBM’s Watson and its cloud infrastructure to understand the impact of the indoor environment on human health. Specifically, it will create cognitive computing-based apps with Watson and the Bluemix platform to give construction engineers and architects insight into the existing problem, and may also provide solutions that can be incorporated into the design and construction of homes and offices. Through these apps, both companies want to drive home the point that a healthy indoor environment is essential for better living and working conditions. Many companies already struggle with low productivity and frequent sick leave during the winter season, so they are sure to take steps to reduce this absenteeism and increase productivity.

Are you wondering why IBM chose Delos for this partnership? Well, for starters, Delos is already in the process of collecting massive amounts of data to understand the relationship between indoor air quality and health. In this sense, IBM is simply providing the right technology to help Delos make sense of the data it has collected. In many ways, it’s a natural partnership, because IBM has the perfect platform and technological tools to help Delos identify the right patterns in its vast data. In addition to its own data, Delos is also tapping into the database of Mayo Clinic to fill in any gaps that may arise.

Delos has already set up a Wellness Lab in collaboration with Mayo Clinic to simulate a wide variety of indoor environments in real time. It has set up sensors in homes and offices across different cities in the US to give its scientists and researchers greater insight into everyday conditions. With this information, researchers can identify the impact of different factors – such as indoor light, temperature, acoustics and dust levels – on the health of those living in these conditions.

With IBM’s Watson and Bluemix, Delos can also look into the historical data, including the many studies that have been done in this regard. Eventually, it can combine all this information to understand what impacts indoor air, and in turn, what effect it has on human beings.

This partnership is a classic example of applying IT advancements to improve our own well-being. Though this is not the first experiment of its kind, the fact that it taps into so many sources of information and uses such advanced technology makes it truly unique.

It won’t be long before we start breathing healthy air – even during winters!

How to keep downtime to a minimum with the right cloud computing support

(c)iStock.com/fazon1

The cloud has transformed the way we do business today, improving infrastructure scalability and cost models for everything from software to data storage to disaster recovery. As with any IT solution, however, cloud computing isn’t without its risks.

In 2012, the International Working Group on Cloud Computing Resiliency (IWGCR) claimed that cloud downtime had cost £45 million over five years. Although a new five-year report on cloud outage costs hasn’t yet been released, we do know that application downtime is costing enterprises across the globe an estimated $16 million (approximately £12.9 million) annually.

So how can businesses reap the benefits of the cloud while minimising the risk of downtime? The solution is having the right support. The following steps are key starting points for mitigating cloud risk:

Assess cybersecurity

Symantec reported that, in 2015, there was a new zero-day vulnerability discovered every week. Not surprisingly, spear-phishing campaigns targeting employees increased 55%, and ransomware increased 35%.

Technology is constantly evolving to thwart these attacks, but security software cannot be treated as a set-it-and-forget-it solution. It must be complemented with monitoring, patch management and routine maintenance.

The challenge is that nearly half of businesses admit that there is a talent shortage in security. ESG research indicated that 46% of organisations say that, in 2016, they have a “problematic shortage” of cybersecurity skills, while a surprising third (33%) admitted their biggest deficiency was in cloud security specialists. Based on these figures, incident detection and responses to cloud-based cyber threats would undoubtedly be a problem for those organisations, as they have inadequate staff available to manage any cybersecurity risks that may arise.

This is a major problem, as malware infections are commonly the result of inadequate patching, carelessness, misconfiguration, human error or negligence. These errors can have costly ramifications if malware infiltrates the network and corrupts backup data.

As such, businesses might require a managed firewall service that can keep their network secure while freeing up their staff to focus on day-to-day responsibilities. Different organisations will require different levels of support, but one advantage of a cloud-based firewall service is that it is scalable and can be changed to meet ever-increasing demand and usage, both now and into the future.

Regardless of whether cybersecurity is managed in-house or outsourced, it should feature advanced security capabilities such as intrusion detection and prevention, and a safe tunnel for remote employee access. It is imperative that these features integrate with one another to allow for timely incident response or prevention. If data is breached or a system goes down, time is of the essence.

Make a data backup and recovery plan

If an organisation’s facility is impacted, it must have a plan for how to access its data. Businesses using disaster recovery as a service (DRaaS) have the advantage of being able to access their backups from anywhere, even if their primary facility has been affected. As capacity grows, they have the potential to leverage various cloud models – private, community and public cloud – depending on the use case. When recovery is necessary, stored data can be restored to either virtual or physical machines.

What many cloud providers tend to de-emphasise, however, is that while the environment might be available, bandwidth limitations can extend recovery times, especially when recovering a large amount of data and applications. For this reason, many businesses are complementing cloud backups with an on-site storage appliance, which allows data to be recovered within hours or even minutes.
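
To see why bandwidth matters, a back-of-the-envelope calculation helps. The figures used below (data volume, link speed and efficiency) are illustrative assumptions rather than vendor numbers.

```python
# Back-of-the-envelope restore-time estimate for a cloud-only recovery.
# All inputs are illustrative assumptions.
def restore_hours(data_terabytes, link_gbps, efficiency=0.8):
    """Hours needed to pull the data back over a WAN link of the given speed."""
    bits_to_move = data_terabytes * 1e12 * 8          # decimal TB -> bits
    effective_bps = link_gbps * 1e9 * efficiency      # usable throughput
    return bits_to_move / effective_bps / 3600

# Example: 5 TB over a 1 Gbps link at 80% efficiency is roughly 13.9 hours,
# versus minutes or a few hours from a local appliance on the LAN.
print(round(restore_hours(5, 1), 1))
```

Even with generous assumptions, a multi-terabyte restore over the WAN is measured in hours or days, which is exactly the gap an on-site appliance is meant to close.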

If the business’s facility is impacted, recovering the data stored on the appliance would require either accessing an alternate backup stored at an off-site location or waiting until the business regains access to the facility, assuming it’s still intact.

With the right support, however, a hybrid approach to disaster recovery reduces the overall risk of downtime. Some DRaaS providers, but not all, can assist with recovering the data and applications stored on the appliance through the cloud. Others will provide the appliance but leave maintenance up to the client. The key is to know upfront what level of support the vendor can provide and plan accordingly.

Ensure ongoing monitoring

Even if a business has invested in top-of-the-line cybersecurity solutions and backed up data to multiple targets, the organisation still risks downtime if the entire environment isn’t properly monitored. To assess whether or not a business has the resources required for adequate oversight of the environment, it should consider the following questions:

  • Is there any period of time when the environment is unmonitored (e.g. during shift changes or holidays)?
  • Do any on-site IT personnel lack the skills required to manage software settings, remediate failures, and so on?
  • When considering past downtime events or security threats, were there occasions when systems were not brought back online, or threats were not mitigated, within the required time frame?

The greater the number of yes responses, the greater the risk of downtime. Some businesses might indeed have the resources required for ongoing monitoring. For those that don’t, it is worth considering outsourcing cybersecurity monitoring and DRaaS. Vendors offering these services should provide service level agreements (SLAs), 24/7/365 support and the services of qualified engineers.

Cloud computing offers the potential for greater business agility, but unless a business has the right support, it is all but guaranteed to experience downtime. 

Read more: Top 10 disasters reaffirm need for cloud business continuity strategy

The key takeaways from Gartner’s 2016 CPQ application suites market guide

(c)iStock.com/Thampapon

  • Gartner estimates the CPQ application suites market was $570M in 2015, attaining 20% year-on-year growth between 2014 and 2015.
  • Cloud-based CPQ revenue was $157M in 2015, attaining 46% year-over-year growth.
  • Gartner predicts CPQ will continue to be one of the hottest enterprise apps for the foreseeable future, predicting a 20% annual growth rate through 2020 with the majority being from cloud-based solutions. Legacy on-premise vendors including SAP’s Variant Configurator (VC) are going to face increasingly strong headwinds in the market as a result.
  • SaaS and Cloud solutions are driving the majority of CPQ market growth today, fueling greater innovation in the market.
  • The CPQ market continues to grow as companies replace legacy on-premise CPQ apps and outdated ERP quoting and ordering apps with cloud-based CPQ solutions.

These and many other insights are from the recently published Gartner Market Guide for Configure, Price and Quote Application Suites (PDF, client access required) by Mark David Lewis and Guneet Bharaj, published on October 27 of this year. CPQ selling strategies are part of the broader Quote-To-Cash (QTC) business process, which encompasses quotes, contracts, order management and billing.

CPQ market leaders also are offering solutions that support the creation of quotes and capturing of orders across multiple channels of customer interaction (such as direct sales, contact center, resellers and self-service). Cloud- and SaaS-based CPQ systems scale faster across multiple channels and often have higher adoption rates than their legacy on-premise counterparts due to more intuitive app designs and better integration with Cloud-based Customer Relationship Management (CRM), Sales Force Automation (SFA) and incentives systems.

What makes the Market Guide so noteworthy is that it is the first research piece on CPQ published by a major analyst firm in several years.

Key takeaways from the study include the following:

  • Microsoft Azure and the Salesforce platform are benefiting the most from the intense competition in the CPQ market today. Microsoft Azure is emerging as the enterprise leader from a platform perspective, evidenced by the points made in my previous post, Seven Ways Microsoft Redefined Azure For The Enterprise And Emerged A Leader. Being able to scale globally and provide greater control over security and openly address Total Cost of Ownership (TCO) concerns of enterprises are a few of the many factors driving Azure’s adoption.  Salesforce has gone in a different direction in the CPQ market, choosing to acquire SteelBrick earlier this year. Salesforce in effect became a competitor with its partners in the CPQ market by doing this. According to Gartner, SteelBrick is a good solution for high-tech assemble to order (ATO) and software companies.  Last month Salesforce founder and CEO Marc Benioff was interviewed at the Intel Capital Global Summit, and the video is available here.  At 11 min., 20 seconds, he says that “Steelbrick is not for all customers, so Apttus still has a tremendous opportunity.” Earlier this year Apttus announced their entire QTC suite is now available on Microsoft Dynamics, showing just how critical it is for CPQ engineering teams to move fast from a platform strategy perspective to keep their companies growing.
  • Omnichannel and digital commerce is a high-growth area of CPQ as companies seek to improve buying experiences across all customer-facing channels. For many companies, their omnichannel selling strategies and initiatives are proliferating, driven by how quickly customers are changing the channels they buy through. Leading CPQ and QTC suites are now offering digital commerce and omnichannel apps integrated into their main app platforms. They are having initial success in B2B selling scenarios where self-service configuration is needed.  Gartner mentions Apttus, Oracle CPQ Cloud, and SAP as having the most robust digital commerce offerings today.
  • CPQ vendors are attempting to reinvent themselves by innovating faster and more broadly than before. Relying on machine learning to recommend the optimal incentives, pricing, and terms to close more deals and increase up-sell and cross-sell revenues through guided selling apps is a fascinating area of innovation today. Apttus’ Intelligent Quote-to-Cash Agent Max, Salesforce’s Einstein and others exemplify this area of development. Rapid advances and improvements in visualization, 3D modeling and Configuration Lifecycle Management (CLM) from Configit also illustrate how quickly innovation is changing the landscape. Gartner also mentions intelligent negotiation guidance, mobile configuration support, estimated compensation, verticalization, and deeper integration with back-end fulfillment systems as being additional areas where innovation is redefining the competitive landscape.
  • Improving promotion, incentive and rebate performance across a multitier selling network based on machine learning algorithms is redefining the QTC competitive landscape. Eighteen CPQ vendors are profiled in the market guide, many of them selling into industries that rely on complex multitier distribution, selling and support networks for the majority of their revenue. It’s clear many are moving in the direction of using machine learning to improve the effectiveness of promotions, incentives, and rebates across all selling channels. Being able to provide the best possible incentive to a distributor, dealer or 3rd party sales person defines which manufacturer wins the deal. Look to see more emphasis in this area in 2017 as CPQ vendors work to provide companies with the chance to steer more deals their way in channels they don’t directly control.
  • The CPQ landscape will continue to consolidate as the race for new customers accelerates, driven by the need companies have to improve QTC performance. Gartner notes that there have been major acquisitions over the last four years, including Big Machines being acquired by Oracle, SteelBrick by Salesforce, Configure One by AutoDesk, and Cameleon Software by Pros. Many other CPQ vendors are privately for sale right now, each looking to find an acquirer or merger partner that can best complement its core technologies. Look for the pace of acquisitions to accelerate in the next year.
  • I’m looking to see which CPQ vendors further distance themselves from competitors with modern and intuitive user experience (UX) design. CPQ, while a necessary foundation piece for B2B use cases, is evolving into the broader Quote-to-Cash umbrella. To attain its full market potential, I believe CPQ vendors must excel at UX across all products and app experiences. I look forward to seeing which vendors invest in modern and intuitive UX to drive this change in the market and deliver great experiences to customers as a result.
  • From the enosiX blog, Key Takeaways From Gartner’s Market Guide For Configure, Price and Quote (CPQ) Application Suites, 2016

Mac Security: Managing your Macs

Waaaaaaaay back in 2013, Gartner forecast that about half of the world’s enterprises would adopt bring your own device (BYOD) programs by 2017. With that deadline only weeks away, how are you feeling about your own company’s BYOD or CYOD policy? Have you seen your IT administration evolve to manage the growing mix of hardware […]

IoT Is Real | @ThingsExpo @BanyanHills #IoT #M2M #AI #ML #API #Sensors

It wasn’t that long ago that the first smartphone came out, and we saw the pace of connected devices and associated mobile applications accelerate beyond what anyone could have imagined. Shortly after that, something incredible happened: we reached the point where there were more connected devices than people on the planet. Since then, we’ve used this to measure the growth of the Internet of Things (IoT). Research predicts there will be as many as 50 to 100 billion, and possibly even 200 billion, connected devices by 2020.

Some businesses are already set up for IoT, like operators of large networks of devices such as self-service kiosks and vending machines. These devices are communicating information back to the enterprise, and for those that aren’t, installing the proper hardware and software to do so is more cost-efficient today than ever before. Any business that is going to scale is going to look for opportunities to automate its processes. We’ve seen history repeat itself from the Industrial Revolution to the automotive industry and, more recently, with Netflix automating the distribution of those little red envelopes. Netflix transformed its operations through the success of its streamlined mail business, becoming the premier provider of streamed digital media into the home. Anything that has successfully scaled has used automation effectively.

Furthermore, we are seeing traditional retail and brick-and-mortar stores look for ways to combat increasing wages and the cost of maintaining a physical environment. For many, that means implementing kiosks and vending technology; we see the same in hospitality with self-service ordering. Businesses are finding ways to automate through technology while improving the customer experience. An operator needs a central place to monitor and manage their business – one-time, real-time.

The Perfect Storm

The reality of increasing wages and rising real-estate costs, combined with the decreasing cost of sensors, computation and storage, has created the perfect storm for our next technology revolution. As a result, automation through the implementation of IoT is becoming more desirable and achievable. When taking a closer look at the specific drivers of these trends, it’s clear that the economics of IoT are very compelling. It is much less expensive to automate now than before, thanks to the decreasing costs of micro-electronics such as microcontrollers and sensors. The cost of data processing and computation is also falling at a pace we’ve never seen before. Additionally, the ability to process and persist data in the cloud has helped alleviate concerns about rapid growth, through the dynamic, on-demand scaling nature of cloud infrastructure. These technology drivers are a few of the many signs that IoT can help solve real business problems. Economic headwinds and more affordable technology have created the perfect storm for IoT to transform traditional physical environments through automation.

Automation is just the tip of the IoT iceberg

Once you’ve automated your operations, the next step is having the right set of management tools for your business. If you are operating a vast network of devices, having the ability to …
