Moving the help desk to the cloud: Companies reluctant to adopt SaaS models


More than two thirds of respondents in a survey from Software Advice currently use on-premise help desk systems despite the prevalence of cloud-based systems in the market.

Of the more than 200 IT staff and management respondents, 68% have on-premise deployments, compared with 18% utilising vendor-hosted cloud and 13% hosting on a leased server.

The most frequently used help desk software functionality, according to those polled, is ticket management (66%), followed by reporting and analytics (51%) and live chat integration (45%). Respondents also noted that the software was having a positive impact on performance across the rest of the company; the vast majority of options saw a more than 90% swing towards positive, with software problem resolution time (95%), first contact resolution (94%) and support staff productivity (93%) the most popular.

For 2015, more than two thirds (68%) said they expected a ‘moderate’ increase, with 16% predicting a significant increase and the remainder expecting a drop. The report argues this is the case for various reasons; not only do staff expect greater productivity, but customers also have higher expectations for the service they receive.

The report, independently funded and conducted by Software Advice, argues that while the CRM software market is dynamic, this dynamism makes for a varied landscape that can confuse first-time buyers. As a consequence, the company arrives at a list of best practice tips:

  • Define the scope of use: Working out whether it will be an internal employee-facing or external customer-facing service is important, as more specialised solutions could provide a better fit
  • Identify which business goals the software must address: Is the software going to address specific KPIs, or have a broader goal of improving the overall customer experience?
  • Determine integration requirements: Will you use a CRM suite that offers help desk functionality baked in alongside other applications, or a variety of best-of-breed software tools?

The most intriguing point, however, concerns the consideration of both SaaS and on-premise options. Even though the majority of survey respondents go with on-prem, cloud is becoming more of a factor and has to be considered, the researchers argue, while noting that companies that stick with on-prem will have specific reasons for doing so, such as complex integrations with other software platforms.

You can find the full report here.

The cloud service provider and security vulnerabilities: Three steps to prevention


IT departments worldwide face a dizzying array of security threats, whether they manage traditional or NextGen/cloud-based environments. IT security experts report some very frightening statistics:

  • Approximately 400,000 new malware instances are recognised daily
  • New kinds of malware are gaining prominence, including ransomware, scareware and banking malware
  • New attack vectors include public cloud and software-as-a-service provider environments, third-party service providers and mobile devices
  • Reports of politically or cause-sponsored terrorism and corporate espionage are on the rise globally
  • Hackers are making increased use of botnets and other automated attack tools
  • A number of attacks on “foundational internet technologies” have recently had a rebirth, including Heartbleed, Shellshock and POODLE
  • The targets seem to get bigger and the damage more costly as time goes by (e.g. Sony, Target and eBay)

The question is: how can today’s cloud service providers protect themselves and their customers while enabling the technology innovation, data access, performance, scalability, flexibility and elasticity that are the hallmarks of the NextGen/cloud world?

One of the great martial arts movies of all time has the main character, played by Bruce Lee, explaining that his style of martial arts is fighting without fighting, and then talking his would-be opponent into a small boat. Cloud providers would do well to emulate the intent of this “style” of combat.

Cloud service providers need to be ever mindful that they are targets, and must monitor network traffic, application activity and data movement at all levels of the cloud environment without being intrusive or over-burdening customer environments. For example, as in any technical environment, keeping operating system and patch levels up to date is critical, but doing so in an efficient and non-customer-impacting manner is the trick.

Another important item to keep in mind is speed of reaction. In a cloud environment, more so than in traditional IT environments, the speed of discovery and closure of vulnerabilities as well as reaction to monitored attacks, is critical. 

Identify the layers

The first step in architecting a cloud security solution is to identify the layers of the environment to be protected. The following diagram shows the layers of a generalised cloud environment.

The layers of the environment mostly look like any tiered infrastructure, but the contents and components, as shown above, are quite different. Some of these areas, specific to cloud environments, are:

  • Hypervisor
  • Virtual networking
  • Virtual storage
  • Orchestration and automation control plane
  • Software defined networking (SDN) components
  • Self-service portal
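
Since the report’s diagram is not reproduced here, the short Python sketch below shows one way the layers and their cloud-specific components might be catalogued ahead of a security review. The component groupings are assumptions made for illustration, not the contents of the original diagram.

    # Illustrative catalogue of the cloud-environment layers to protect.
    # Layer names follow the article; the component groupings are assumptions.
    CLOUD_LAYERS = {
        "physical": ["servers", "network switches", "storage arrays"],
        "hypervisor": ["VM hosts", "hypervisor management agents"],
        "virtual networking": ["virtual switches", "SDN components"],
        "virtual storage": ["virtual volumes", "storage pools"],
        "control plane": ["orchestration and automation services"],
        "customer-facing": ["self-service portal", "public APIs"],
    }

    for layer, components in CLOUD_LAYERS.items():
        print(f"{layer}: {', '.join(components)}")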

Choose your protection methodology and tools

The next step is to determine how to provide protection for each layer. As an overall methodology the cloud instance should be considered a contained secure zone, as defined by firewall or proxy boundaries (virtual or physical). Once the secure boundaries are defined the majority of the remaining methods encompass monitoring and remediation of suspicious activity and malware protection.

An important point to keep in mind is that, as service providers, we cannot use tools or methods that access the customers’ VMs. The best example of this is malware protection: hypervisor-based malware protection is critical, but actually touching the customer VMs breaches the boundary between provider and customer. There are many options for the tools and configurations used to deliver the solutions presented below.

  • VIP: Providing a virtual IP address allows for network address separation between the external and internal networks.
  • SSL: Secure Sockets Layer provides encrypted transport of HTTP network packets.
  • Perimeter security (firewalls, load balancers, proxies): Creation of the boundaries of the secure zone and control of traffic flow between the external and internal networks. Firewalls, load balancers and proxy servers can be physical devices, dedicated appliances or virtual devices.
  • Virtual services: A secure appliance or virtual machine that provides update, patch and deployment services from within the secure zone.
  • Network activity monitoring/IDS: Monitoring of traffic flows for intrusion detection purposes. In the cloud environment, specialised tools to collect network data on a VM-by-VM basis need to be acquired or developed.
  • File change tracking: Tracking of changes to important configuration files at the control plane and hypervisor layers (a minimal sketch follows this list).
  • Log tracking and analysis: Centralised tracking of events from log files (e.g. syslog, cloud management component logs), and analysis of those events for trending and detection of malicious activity.
  • Hypervisor-based malware protection: Specialised software (many options are already on the market) to detect and clean malware at the hypervisor and physical device layers.
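
To make the file change tracking point concrete, here is a minimal Python sketch of one hash-based approach: it records a baseline of SHA-256 digests for a set of configuration files and reports any file whose digest later differs. The file paths and baseline location are assumptions for illustration, not recommendations from the article; in practice the baseline would live outside the monitored hosts and the scan results would feed the centralised monitoring described later.

    import hashlib
    import json
    from pathlib import Path

    # Paths are illustrative assumptions; a real deployment would track the
    # control plane and hypervisor configuration files relevant to its own stack.
    TRACKED_FILES = ["/etc/nova/nova.conf", "/etc/libvirt/libvirtd.conf"]
    BASELINE_PATH = Path("config_baseline.json")

    def sha256_of(path):
        # Hash the file contents so any modification changes the digest.
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def build_baseline():
        # Record the current digests as the known-good baseline.
        baseline = {f: sha256_of(f) for f in TRACKED_FILES}
        BASELINE_PATH.write_text(json.dumps(baseline, indent=2))

    def detect_changes():
        # Compare current digests against the baseline and return changed files.
        baseline = json.loads(BASELINE_PATH.read_text())
        return [f for f in TRACKED_FILES if sha256_of(f) != baseline.get(f)]

    if __name__ == "__main__":
        if not BASELINE_PATH.exists():
            build_baseline()
        for path in detect_changes():
            print(f"ALERT: {path} differs from baseline")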

Looking at the above diagram and list of concepts, the reader may notice that there is no mention of inter-layer firewalls (e.g. between the control plane and the compute layer). This is because of the need to minimise intrusiveness and performance impacts in the customer’s environment.

Developing a security management strategy

The most important element of securing any environment, not just a cloud environment, is an ongoing and ever evolving plan for maintaining the security aspects of the environment. This includes:

  • Regular software updates – Software vendors, including cloud management, hypervisor and security component providers, will update their software regularly. These changes must be evaluated and appropriate updates implemented promptly.
  • Regular patching – As security patches and bug fixes are released by software vendors, they must be a high priority for evaluation and implementation in your cloud environment.
  • Centralised activity and security component monitoring – A centralised team of people, processes and tools to monitor and evaluate activity and security alerts in the environment. Centralisation allows for rapid event recognition and remediation.
  • Scheduled and post-update vulnerability testing – Never rest on your laurels. An environment that is deemed secure today can be attacked in a whole new way tomorrow. Regularly scheduled vulnerability testing, and testing after an update is applied, can be critical in keeping the environment secure.
  • Change management procedures and tracking – Tracking changes and comparing them to the results of file change scans is one step in identifying malicious updates (see the sketch after this list). This will also assist in general issue resolution as well as remediation of a security event.
  • Regular governance reviews – Proper governance of the overall environment requires that processes and procedures, especially around security, are reviewed regularly and adjusted as appropriate.
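
Tying the change management and file change tracking points together, the Python sketch below flags any detected file change that has no matching approved change record. The record formats, file names, dates and ticket number are invented for the example; a real implementation would read from the change-management system and the file change scanner.

    from datetime import date

    # Illustrative records only; a real system would pull these from the
    # change-management database and the file change tracking tool.
    approved_changes = [
        {"file": "/etc/nova/nova.conf", "date": date(2015, 3, 2), "ticket": "CHG-1042"},
    ]
    detected_changes = [
        {"file": "/etc/nova/nova.conf", "date": date(2015, 3, 2)},
        {"file": "/etc/libvirt/libvirtd.conf", "date": date(2015, 3, 3)},
    ]

    # Any detected change with no approved record is a candidate security event.
    approved_files = {c["file"] for c in approved_changes}
    for change in detected_changes:
        if change["file"] not in approved_files:
            print(f"UNAPPROVED CHANGE: {change['file']} on {change['date']}")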

Conclusion

There are three key steps to preventing security vulnerabilities in a cloud environment:

  • Identification of the layers of a cloud environment that could be vulnerable to attack
  • Definition of methodologies and tools to monitor, manage and secure each of the identified layers
  • Creation of a management environment to maintain the secure implementation of the cloud provision environment

No environment can be completely secure forever. Our goal is to reach a high level of security through the above methods and implement new policies and methodologies as time goes by, to attempt to keep up with the ever-changing threat landscape.

Microservices Unplugged By @DavidSprott | @DevOpsSummit [#DevOps]

Right off the bat, Newman advises that we should “think of microservices as a specific approach for SOA in the same way that XP or Scrum are specific approaches for Agile Software development”. These analogies are very interesting because my expectation was that microservices is a pattern. So I might infer that microservices is a set of process techniques as opposed to an architectural approach. Yet in the book, Newman clearly includes some elements of concept model and architecture as well as process and organization.

read more

SBT, Scrooge, Intellij and You By @VictorOps | @DevOpsSummit [#DevOps]

Here at VictorOps, we are going through the process of overhauling the serialization framework that we use for inter- and intra-process communication. We evaluated several great options including Kryo (and its Scala companion Chill), Protocol Buffers, and Thrift.

In the process, we put together a little test project to help us decide which of these choices was right for us. Testing Thrift posed some challenges as the tooling, while quite mature, wasn’t quite as plug-and-play with our local development environment as we would have liked. The following instructions assume a basic knowledge of Scala/SBT, Thrift, and (optionally) IntelliJ.

read more

Cultivate Trust in Your Organization by @IanKhanLive | @CloudExpo [#Cloud]

Trust is the fundamental building block of any relationship. Whether it’s personal or business, trust is something that cannot be replaced by anything else. There are hundreds of books available on how to cultivate business relationships, how to maintain them, how to leverage the best from them and so on. But what forms the basis of a business relationship, and what are the fundamental blocks of building trust? Here are three things that will get you started when thinking about using, building and maintaining trust.

read more

Is Eclipse Faster than NetBeans? By @OmniProf | @DevOpsSummit [#DevOps]

While putting my test code up on GitHub and writing the readme.md, I ran my NetBeans test code on my early 2011 MacBook Pro. To my surprise, the times for both embedded and remote testing were between 25 and 35 seconds. My original blog was based on working on a much, much faster Windows 8.1 system that took 16 seconds for embedded but 100 seconds for remote. So where does the blame lie? Some very bright people will be looking at the code and hopefully they will have an explanation for why remote server testing on Windows 8.1 performs so badly.

read more

Microeconomics and Application Performance By @Ruxit | @DevOpsSummit [#DevOps]

In my last post, I talked about how I keep my approach to application performance simple – I use my one semester’s worth of microeconomics knowledge to continuously evaluate the supply and demand sides of my application architecture. What can go wrong, right? 🙂

One of the points we’ll explore further in this post is that supply shouldn’t be viewed simply as a measure of hardware capacity. Better to view supply as that which is demanded (see what I did there…?). In a complex environment, supply can be measured as the number of connections, available SOA services, process space, and hardware. Here’s a good visualization of what I’m talking about:
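
The original visualization is not reproduced here, but the rough Python sketch below captures the idea of evaluating demand against supply per resource and flagging the tier that looks constrained. The resource names, capacities and the 80% threshold are assumptions made for the example, not figures from the post.

    # Supply is "that which is demanded": connections, services, processes, hosts.
    # All capacities, demand figures and the 80% threshold are invented.
    supply = {"db_connections": 200, "soa_service_instances": 12, "worker_processes": 64, "hosts": 8}
    demand = {"db_connections": 185, "soa_service_instances": 7, "worker_processes": 30, "hosts": 5}

    for resource, capacity in supply.items():
        utilization = demand[resource] / capacity
        flag = "  <-- likely constraint" if utilization > 0.8 else ""
        print(f"{resource}: {utilization:.0%} of supply in use{flag}")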

read more

IBM’s Cloud Services

IBM has been restructuring its business to accommodate better growth opportunities by boosting profitability and focusing on new ventures, including expanding its cloud services. Recently, IBM announced its hybrid cloud technology, which extends client control, visibility and security into the private cloud, as well as allowing developers to work across any IT cloud. By 2018, the company hopes to see $40 billion in revenue from services such as cloud, big data and security, among others.


Cloud computing provides convenient, on-demand access to a shared pool of computing resources, such as servers, storage, applications and services, which can be rapidly provisioned and released with minimal management effort on the customer or service provider side. Cloud computing is made up of three main services: software as a service (SaaS), infrastructure as a service (IaaS) and platform as a service (PaaS). SaaS is expected to grow the fastest, followed by IaaS, though all three categories are going to be in high demand in the near future. Growth is expected due to the global demand for technology-based services. The global cloud computing market is expected to reach almost $200 billion by 2020.




The three services previously mentioned are interconnected and depend on one another to provide cost-effective solutions for clients. Most cloud services use a multi-tenant structure: a shared infrastructure, spread across many locations, that leverages the advantages of remote access to deliver new business and services. SaaS software is delivered over the internet; a software company licenses an application to customers through a subscription-based model. Another approach that has recently emerged gives users free access to the most basic functions and requires payment for more advanced ones. IBM has 120 SaaS offerings covering a wide array of capabilities, from big data analytics to human resources administration.


IaaS delivers on-demand cloud computing infrastructure through the use of secure IP-based connectivity. Clients can buy resources such as servers, software and data center space as a fully outsourced, on-demand service. IaaS is built on virtualization, and users are responsible for managing the applications, data and middleware. IBM’s lead IaaS offering is based on a global cloud infrastructure called SoftLayer, which provides machine virtualization services that can run both advanced operating systems and analytics software, and is priced on a pay-as-you-go model.
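
As a rough illustration of how a pay-as-you-go model adds up charges per resource, here is a small Python sketch. The rates, resource names and usage figures are invented for the example and are not SoftLayer’s actual pricing.

    # Invented unit rates; not SoftLayer's actual price list.
    rates = {"virtual_server_hours": 0.10, "block_storage_gb_months": 0.10, "outbound_bandwidth_gb": 0.09}

    # Invented usage for one billing period, in the units named above.
    usage = {"virtual_server_hours": 4 * 720, "block_storage_gb_months": 500, "outbound_bandwidth_gb": 300}

    total = 0.0
    for resource, quantity in usage.items():
        cost = rates[resource] * quantity
        total += cost
        print(f"{resource}: {quantity} x ${rates[resource]} = ${cost:,.2f}")

    print(f"Total for the period: ${total:,.2f}")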


PaaS is the most complex layer. It is a computing platform that allows software applications to be created quickly, without the complexity of buying and maintaining the underlying software and infrastructure. The software created is then delivered over the internet. In this layer, IBM has the Bluemix platform, which offers developers a single environment to develop and deploy applications across many domains.


Cloud services have come to the forefront for companies of all sizes that are looking to improve their business through the use of IT solutions or services. The advantages of this approach are the scalability and accessibility of new applications, resources and services, as well as a lower initial cost. IBM’s cloud service vision allows customers to subscribe not only to standalone applications, but also to interact easily with SoftLayer’s infrastructure, on-site applications and SaaS offerings. IBM has invested over $1 billion to expand its footprint in cloud centers that are accessible to every major financial network around the world. These investments should lead to big revenue in the years ahead.

The post IBM’s Cloud Services appeared first on Cloud News Daily.

It’s Time to Break Up with Your Legacy System By @Infor | @CloudExpo [#Cloud]

In every era of innovation, disruptive technologies have had their disbelievers who resisted adopting new concepts. But in today’s business world, “being disruptive” has empowered many businesses to make tough decisions and investments, and to lead the changes that need to be seen organization-wide. Now, many disruptive technologies, like cloud, are truly the foundation for growth.
Unfortunately, the same pioneering attitude from the executive team doesn’t always trickle down to internal IT departments. For example, the companies that manufacture, supply, and service some of the most remarkable high-tech equipment in the world often rely heavily on outdated IT systems to run their own internal processes.

read more

Implementing Mobile and Analytics By @VAIsoftware | @CloudExpo [#Cloud]

To remain competitive in 2015, midmarket companies must consider the advantages of adding mobile capabilities and analytics to their existing ERP systems. Investing in mobile and analytics technology can lead to improved cost and operational efficiencies, expanded real-time collaboration with customers, vendors and partners, and faster, more personalized customer experiences. With technological advances making mobile, analytics, and business intelligence more accessible to the midmarket, now is the time for organizations to update their ERP systems to support the latest technical applications and capabilities.

read more