Organisations continue to move from monolithic to microservices architectures – but observability is a struggle

We've all heard the exhortations from vendors and industry players alike – get into DevOps, containers and serverless for IT and application delivery before it's too late. But while the benefits are evident, companies are getting bogged down in troubleshooting and debugging software issues.

That's according to a new report from Scalyr. The study, which polled 155 software development practitioners, found that the majority (71%) are getting used to DevOps-type practices, pushing code into production at least weekly. Almost a third of respondents said they did so at least once a day.

Yet while greater efficiencies are emerging at one end, the observability and analysis burden is piling up at the other. Almost half of those polled said they have five or more observability tools, a figure that rises to 58% among respondents in a DevOps role. When it comes to log management, the biggest concern for software developers is ad-hoc query speed; this was cited by 54% of overall respondents, rising to 68% for those who mostly work in microservices.

Two in five (40%) said their companies' engineers spent the majority of their time troubleshooting software issues, with the number rising to almost three quarters (73%) at companies that push out code daily. A quarter said they spent the majority of their log management investigation time waiting for queries to complete.

The figures show that time saved in one area is essentially being spent in another – developers are still losing a lot of time elsewhere. "As organisations make the shift from more traditional architectures to microservices and deliver code more frequently, they spend more time troubleshooting and debugging software, working to understand complex data sources, and operational visibility is paramount," wrote Jamie Barnett, Scalyr chief customer officer, in a blog post.

"Our takeaway is that organisations that are experiencing this shift to modern software delivery need to take a hard look at their observability tools and processes to make sure they can keep pace and evolve to support modern, high-speed, distributed software engineering practices," Barnett added.

This chimes with other recent research on the topic. According to the Ponemon Institute last month, the gap between what organisations ideally want from DevOps and microservices practices and what they are actually able to deliver is costing them millions each year.

Here’s where business intelligence is truly delivering value in 2018

  • Executive management, operations, and sales are the three primary roles driving Business Intelligence (BI) adoption in 2018.
  • Dashboards, reporting, end-user self-service, advanced visualization, and data warehousing are the top five most important technologies and initiatives strategic to BI in 2018.
  • Small organizations with up to 100 employees have the highest rate of BI penetration or adoption in 2018.
  • Organizations successful with analytics and BI apps define success in business results, while unsuccessful organizations concentrate on adoption rate first.
  • 50% of vendors offer perpetual on-premises licensing in 2018, a notable decline over 2017. The number of vendors offering subscription licensing continues to grow for both on-premises and public cloud models.
  • Fewer than 15% of respondent organizations have a Chief Data Officer, and only about 10% have a Chief Analytics Officer today.

These and many other fascinating insights are from Dresner Advisory Services' 2018 Wisdom of Crowds® Business Intelligence Market Study. Now in its ninth annual edition, the study provides a broad assessment of the business intelligence (BI) market and a comprehensive look at key user trends, attitudes, and intentions. The latest edition adds Information Technology (IT) analytics, sales planning, and GDPR, bringing the total to 36 topics under study.

“The Wisdom of Crowds BI Market Study is the cornerstone of our annual research agenda, providing the most in-depth and data-rich portrait of the state of the BI market,” said Howard Dresner, founder and chief research officer at Dresner Advisory Services. “Drawn from the first-person perspective of users throughout all industries, geographies, and organization sizes, who are involved in varying aspects of BI projects, our report provides a unique look at the drivers of and success with BI.” Survey respondents include IT (28%), followed by Executive Management (22%), Finance (19%), Sales/Marketing (8%), and the Business Intelligence Competency Center (BICC) (7%). Please see page 15 of the study for specifics on the methodology.

Key takeaways from the study include the following:

Executive management, operations, and sales are the three primary roles driving business intelligence (BI) adoption in 2018

Executive management teams are taking a more active ownership role in BI initiatives in 2018, as this group replaced Operations as the leading department driving BI adoption this year. The study found the greatest percentage changes among functional areas driving BI adoption in Human Resources (7.3%), Marketing (5.9%), BICC (5.1%) and Sales (5%).

Making better decisions, improving operational efficiencies, growing revenues and increasing competitive advantage are the top four BI objectives organizations have today

Additional goals include enhancing customer service and attaining greater degrees of compliance and risk management. The study rank-orders the importance of BI objectives in 2018 and compares it with the percentage change in each objective between 2017 and 2018. Enhanced customer service is the fastest-growing objective enterprises adopt BI to accomplish, followed by revenue growth (5.4%).

Dashboards, reporting, end-user self-service, advanced visualization, and data warehousing are the top five most important technologies and initiatives strategic to BI in 2018

The study found that second-tier initiatives, including data discovery, data mining/advanced algorithms, data storytelling, integration with operational processes, and enterprise and sales planning, are also critical or very important to enterprises participating in the survey. Technology areas that are heavily hyped today, including the Internet of Things, cognitive BI, and in-memory analysis, rank relatively low for now, yet are growing; edge computing, for example, increased 32% as a priority between 2017 and 2018. The results indicate that the core priorities of using BI to drive better business decisions and more revenue still dominate for most businesses today.

Sales & marketing, business intelligence competency center (BICC) and executive management have the highest level of interest in dashboards and advanced visualization

Finance has the greatest interest in enterprise planning and budgeting. Operations (including manufacturing, supply chain management, and services) leads interest in data mining, data storytelling, integration with operational processes, mobile device support, data catalogs and several other technologies and initiatives. It’s understandable that BICC leaders most advocate end-user self-service and attach high importance to many other categories, as they are internal service bureaus to all departments in an enterprise. It’s been my experience that BICCs are always looking for ways to scale BI adoption and enable every department to gain greater value from analytics and BI apps. BICCs in the best-run companies are knowledge hubs that encourage and educate all departments on how to excel with analytics and BI.

Insurance companies most prioritize dashboards, reporting, end-user self-service, data warehousing, data discovery and data mining

Business Services lead the adoption of advanced visualization, data storytelling, and embedded BI. Manufacturing most prioritizes sales planning and enterprise planning but trails in other high-ranking priorities. Technology prioritizes Software-as-a-Service (SaaS) given its scale and speed advantages. The retail & wholesale industry is going through an analytics and customer experience revolution today. Retailers and wholesalers lead all others in data catalog adoption and mobile device support.

Insurance, technology and business services vertical industries have the highest rate of BI adoption today

The Insurance industry leads all others in BI adoption, followed by the Technology industry, where 40% of organizations report 41% or greater adoption or penetration. Industries whose BI adoption is above average also include Business Services and Retail & Wholesale.

Dashboards, reporting, advanced visualization, and data warehousing are the highest priority investment areas for companies whose budgets increased from 2017 to 2018

Beyond dashboards and reporting, additional high-priority areas of investment include advanced visualization and data warehousing. The study found that less well-funded organizations are the most likely to invest in open source software to reduce costs.

Small organizations with up to 100 employees have the highest rate of BI penetration or adoption in 2018

Factors contributing to the high adoption rate for BI in small businesses include business models that need advanced analytics to function and scale, new hires who bring the latest analytics and BI skills to fast-growing businesses, and fewer barriers to adoption compared to larger enterprises. BI adoption tends to be more pervasive in small businesses, as a greater percentage of employees use analytics and BI apps daily.

Executive management is most familiar with the type and number of BI tools in use across the organization

The majority of executive management respondents say their teams are using one or two BI tools today. Business Intelligence Competency Centers (BICCs) consistently report a higher number of BI tools in use than other functional areas, given their heavy involvement in all phases of analytics and BI project execution. IT, Sales & Marketing and Finance are likely to have more BI tools in use than Operations.

Enterprises rate BI application usability and product quality & reliability at an all-time high in 2018

Other areas of major improvement on the part of vendors include ease of implementation, online training, forums and documentation, and completeness of functionality. Dresner’s research team found that between 2017 and 2018, ratings for integration of components within products dropped, as did scalability. The study concludes the drop in integration is due to an increasing number of software company acquisitions aggregating dissimilar products from different platforms.

How the Reykjavik City Council Got Its IT to Toe the Line

Halldór-Ingi H. Guðmundsson lives in Reykjavik, Iceland, and when he isn’t off fishing or cycling he looks after the town hall’s IT in his home town. Iceland’s capital and the island’s largest town, Reykjavik has an extensive infrastructure and offers numerous critical services for its citizens. If anything is out of order here, there are […]

The post How the Reykjavik City Council Got Its IT to Toe the Line appeared first on Parallels Blog.

Keep off the cloud, warns EU financial regulator


Clare Hopping

5 Jul, 2018

The European Banking Authority (EBA) has warned that financial institutions moving to the cloud risk losing their freedom by being locked into a particular vendor’s service and being forced to onboard subcontractors from “high risk areas”.

The EBA’s report “on the prudential risks and opportunities arising for institutions from fintech” highlighted cloud services as one of seven key risks and opportunities for financial institutions, alongside other technologies such as blockchain and Big Data.

The report explained that businesses choosing to use the cloud are putting both their own organisations and others in the sector at risk because “large suppliers of cloud services could become a single point of failure should many institutions rely on them”.

“Additionally, a possible impact on the wider operational risk could arise from issues with data security, systems and banking secrecy, especially when cloud services are hosted in jurisdictions subject to different laws and regulations from the institution,” the report continued.

The EBA advises financial businesses to use cloud technologies only if security is not a primary concern, and recommends that businesses intent on using cloud services consider a private cloud set-up rather than public cloud services.

“[Private cloud] allows the most flexibility in data processing and security. On the other hand, private clouds are typically less scalable and more expensive than public clouds,” the report said.

It also warned against financial firms using subcontractors, because a business that cannot control the technological infrastructure used by a cloud provider increases its ICT outsourcing risk.

Academics: Full cloud is like Netflix, bursting is just boring old iPlayer


Keri Allan

12 Jul, 2018

It’s easy to see why cloud bursting – where an application is run in a private cloud or data centre and then ‘bursts’ into a public cloud when demand dictates – could appeal to research universities.

It can provide institutions with an escape valve when their in-house resources are fully committed, helping to potentially speed up research and save costs.
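
To make the mechanics concrete, here is a purely illustrative Python sketch of the bursting decision – not any university's actual scheduler; the cluster names, slot counts and job names are all invented. The point is simply that jobs prefer the already-paid-for local cluster and overflow to rented public cloud capacity on demand.

```python
# Conceptual sketch of cloud bursting: prefer the on-premise cluster,
# overflow ("burst") to a public cloud when local capacity is exhausted.
from dataclasses import dataclass


@dataclass
class Cluster:
    name: str
    free_slots: int

    def submit(self, job: str) -> None:
        print(f"{job} -> {self.name}")
        self.free_slots -= 1


def schedule(job: str, local: Cluster, cloud: Cluster) -> None:
    # The sunk-cost local cluster comes first; the cloud is the escape valve.
    target = local if local.free_slots > 0 else cloud
    target.submit(job)


local = Cluster("on-prem-hpc", free_slots=2)
cloud = Cluster("public-cloud", free_slots=10**6)  # effectively elastic

for job in ["job-a", "job-b", "job-c"]:
    schedule(job, local, cloud)  # job-c "bursts" to the cloud
```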

In recent years adoption of cloud computing has been transforming research and education, and although change within academia can be slow, the latest UK Research & Innovation (UKRI) e-infrastructure report has shown a growing interest in community and public clouds.

“We also see that scientific computing teams at universities and research institutes are starting to look very seriously at virtualising their in-house compute clusters,” says Martin Hamilton, a member of UKRI’s Cloud for Research working group.

Although educational researchers tend to “thrash kit within an inch of its life”, Hamilton says there’s a “growing recognition that having the option of running a virtual machine (VM) image can make it easier for researchers to share and re-use code.”

However, there are divergent opinions within the research community as to how cloud resources are best deployed. While bursting remains a go-to choice for some, others either remain hesitant or have avoided the technology entirely in favour of a full-fledged cloud.

Cloud bursting advocates

Two of the world’s biggest champions of the cloud bursting approach are the University of Cambridge and, on the other side of the world, the National University of Singapore (NUS).

“NUS has a wide range of computing requirements, making it impractical for all resources and capacity requirements to be provided in-house,” says Tommy Hor, NUS’ chief information technology officer, speaking to Cloud Pro.

The National University of Singapore deploys cloud bursting to support its research projects

“Our researchers occasionally have ad-hoc service demands that require dedicated computing resources to speed up their work. We have started migrating our in-house pay-per-use service to the cloud, and this will give us greater financial agility and economies of scale.”

The University of Cambridge has gone as far as providing its own cloud bursting capabilities. Its Research Computing Services (RCS) operation has a dedicated private ‘public sector’ cloud designed specifically for scientific and technical computing.

“Researchers from across Cambridge University, plus UK universities and companies, use RCS for cloud bursting,” says Dr Paul Calleja, the university’s director of Research Computing. “Research undertaken includes large-scale genomic analysis for clinical diagnosis and simulations of jet engines.”

Cloud bursting challenges and limitations

But while cloud bursting has potential benefits, there are still problems to be ironed out. These include interoperability issues between environments, pricing models and security.

“We recently saw a number of Docker images laden with malware removed from the public registry, opening backdoors onto users’ machines and running cryptocurrency mining processes,” says UKRI’s Martin Hamilton.

“Things like this take on an even greater significance when we are talking about compute jobs to calculate stresses on airframes, analyse CT images looking for tumours, or model the effect a new drug will have on the human body.”

For the University of Bristol, cloud bursting is seen as a highly restrictive approach to deployment, one that needlessly increases the complexity of a network.

“In my opinion cloud bursting limits the use of the cloud to being just an extension of a local on-premise compute cluster,” says Dr Christopher Woods, leader of the university’s Research Software Engineering group, which is fully in the cloud.

“It also means you get the worst of both worlds – you’re running both a cluster and a cloud, so have twice the complexity.”

He adds that, in his experience, bursting can introduce problems when it comes to moving data between on-premise and the cloud, and that the “up-front-investment ‘batch queue’ way of using a cluster” isn’t always compatible with the on-demand way of paying for cloud computing services.

A stepping-stone to cloud

Cloud providers and organisations like Jisc are looking to address some of these issues by negotiating data egress waivers and special pricing agreements for universities.

However, as Dr Woods notes, universities may struggle with a change of payment model.

“The biggest issue is the money side. Universities are terribly slow at moving money around so it’s difficult to work out how the money would make its way from a researcher’s grant to the provider.

“A big question is how do they go from CAPEX to OPEX? Maybe this is why cloud bursting can be a good stepping-stone, as it lets universities effectively turn cloud into a CAPEX investment that’s been prepaid for.

“It’s a way to dip their toes in the water and get their heads around new contracts and procurement models,” he says.

Woods considers cloud bursting a “sticking plaster solution” that will disappear as more organisations trust their data to cloud providers and the option becomes cheaper than on-premise.

“My feeling is that the cost of cloud will be competitive by 2020 and that most universities will be fully on cloud by the end of 2025,” he says.

The iPlayer of cloud deployment

Woods says that cloud bursting, by definition, only offers a slice of the flexibility that full cloud deployment brings, something he suggests can be compared to TV streaming services.

“You get to run interactive simulations, interactive data analysis and publish interactive papers that can be re-run and re-used by others. The best way to describe the difference is that the cloud is the ‘Netflix of simulation’, while on-premise is like watching the BBC following a TV schedule.

“Cloud bursting is like iPlayer – a hybrid mix of terrestrial TV and on-demand streaming that’s unsatisfactory compared to just binge-watching whatever you want on Netflix on demand.”

The importance of engineers

Research software engineers like Woods at the University of Bristol are a relatively new kind of academic, using their DevOps mindset and technical knowledge to support other researchers.

Hamilton believes that this new mindset is going to be essential for research in the years to come, helping “researchers get to grips with the tools available and develop their scientific computing applications.”

In Woods’ experience, cloud providers are frequently working only with those institutions that are able to support projects with in-house research software engineers.

“You need to have that skill set within the university to make it work,” says Woods. “Academics want to solve a genome – they have no interest in putting together the supercomputer that will do that. You really need that layer of person to lead the way.

“Those institutions that have people that understand software and hardware – and can bring the two together – will be the ones to prosper and take advantage of everything cloud offers,” he adds.

IBM looks to further European cloud expansion with new customers and availability zones

IBM is looking to build upon recent cloud momentum – and the company is expanding in Europe after securing several new customers in healthcare, logistics, energy and more.

The announcements showcase how IBM’s newest clients are utilising the company’s cloud for its artificial intelligence, machine learning and blockchain capabilities. Credit Mutuel, a French bank, is deploying IBM Watson virtual assistants across all of its business lines – run on IBM’s cloud in France with a backup in Germany – while Koopman Logistics, based in the Netherlands, aims to track and trace consignments across its supply chain through IBM’s blockchain.

Alongside them are Gruppo 24 Ore, a media firm based in Italy, Spanish digital health provider Teckel Medical, UK-based RS Components and lighting solutions firm Osram AG, based in Germany.

Last month at CeBIT, IBM announced 18 new availability zones across North America, Europe and Asia Pacific, among other launches designed around security and privacy. “Our new availability zones and regions architecture is the next step in the evolution of our public cloud platform, and it’ll immediately reinforce and supplement the broad portfolio of infrastructure, platform, and software services that our clients trust to fuel their businesses,” wrote Andrew Hately, VP, DE and chief architect, IBM Watson and Cloud Platform, at the time.

Speaking to this publication back in February, John Considine, IBM general manager of cloud infrastructure services, cited extracting data to glean actionable insights for businesses as key – with the emerging technologies forming part of these extraction methods.

“One of our theories leading into the cloud, for the past few years, is that data is enormously important for the enterprises – and given more than 80% of the world’s data is still maintained behind the corporate firewall, our focus has been how… we enable the businesses to take advantage of that data, to combine it with new processing techniques, new data sets, and new capabilities,” Considine said.

“[It’s about] all the things associated with machine learning and deep learning, analytics and bringing all of these things together in a form that allows them to tap into those resources and deliver not only application modernisation, but really even process reinvention,” he added.

It is important to note, as Considine did, how much data sits in less-than-easy spots. IBM put out a similar statistic – that almost 80% of all enterprise data is still managed on the mainframe – when a new partnership with CA Technologies was announced last month.

The DNA of adaptability: How Kubernetes hosts and manages a plethora of different workloads

We live in an environment where everything is changing. Business requirements are changing. User demands are constantly in flux and always evolving. And our infrastructure is also continually changing. Frankly, the infrastructure has always been in a constant state of change, but in the past we pretended that we could get it to a point of stability — that we could reach a state of “done.” Once we finished setting up that totally stable infrastructure, then we could run everything on top with no problems, right?

IT is perpetually in firefighting mode because it treats change as the exception, not the rule. Yet, change is the only constant in our world.

The increased use of containers in recent years has come largely out of the value that the container image brought — having a deployable artefact (the Docker image) that bundled together all dependencies, from the operating system through middleware and the application components, enabled significant advancements in development and operational (DevOps) efficiencies. And the speed with which containers could be launched helped to expand and refine practices around infrastructure as code and immutable infrastructure. But containers alone do not address the need for constant adaptation.

Just like the infrastructure virtualisation that was ushered in by VMware 20 years ago and delivered as a service starting with AWS, the introduction and early adoption of containers has left much of the way IT works unchanged. Automation has increased — that is the very essence of infrastructure as code — but much of it has simply automated existing practices: a script to install the Docker runtime on three hosts, another to “docker run” three different microservice images, and another to adjust firewall rules to allow traffic through.
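
As an illustration only, that kind of run-once script might look like the following Python sketch; the image names are hypothetical and it assumes a host with the Docker CLI installed. Note that nothing here reacts if a container later dies – which is precisely the stability assumption called out below.

```python
# Imperative, run-once automation: launch three microservice containers.
# Hypothetical image names; assumes the Docker CLI is on this host.
import subprocess

MICROSERVICE_IMAGES = [
    "example/web:1.0",
    "example/catalogue:1.0",
    "example/cart:1.0",
]

for image in MICROSERVICE_IMAGES:
    # Equivalent to typing "docker run -d <image>" by hand.
    subprocess.run(["docker", "run", "-d", image], check=True)

# After this loop we are "done" -- there is no process watching for
# failed containers or unavailable hosts.
```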

This automation still assumes a level of stability; after running the scripts we are “done” and things will just keep humming along. But when, for example, two of the docker hosts are suddenly unavailable, the team is once again in firefighting mode.

Enter container orchestration. The most popular container orchestration system in the industry today is Kubernetes, and with good reason. What makes Kubernetes and similar systems really shine is that they operate in a mode that anticipates constant change.

The Kubernetes model is so effective because it allows a user to say “here’s my desired state. I want 2 instances of my user-facing web page, 3 instances of my catalogue service and 10 instances of my shopping cart service” and Kubernetes just makes it so. It is a declarative model for defining complex systems. Kubernetes constantly monitors the actual state of the system and any time it differs from the desired state it’ll remediate. Kubernetes has change-tolerance built into its DNA.
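
A minimal sketch of that declaration, using the official Kubernetes Python client and the example replica counts above (the image names and namespace are assumptions):

```python
# Declare desired state; Kubernetes reconciles actual state toward it.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig
apps = client.AppsV1Api()


def deployment(name, image, replicas):
    """Build a minimal Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }


# The example above: 2 web, 3 catalogue, 10 shopping cart instances.
for name, image, replicas in [
    ("web", "example/web:1.0", 2),
    ("catalogue", "example/catalogue:1.0", 3),
    ("cart", "example/cart:1.0", 10),
]:
    apps.create_namespaced_deployment(namespace="default",
                                      body=deployment(name, image, replicas))
```

If a pod dies, the controller notices that actual state (say, nine cart pods) no longer matches desired state (ten) and starts a replacement; no operator intervention is required.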

Another thing that taxes an IT team is the variability in its infrastructure. There are different server and storage platforms and an arguably even more varied set of networking solutions. Increasingly, enterprises are going hybrid, leveraging a combination of on-premises and public cloud infrastructures. This means that not only must IT teams become experts in the management interfaces for many different clouds, but the scripts they write to automate the myriad of different tasks must also be written and maintained for each different infrastructure.

Kubernetes addresses this by providing abstractions over the top of the varied infrastructure assets, allowing Kubernetes consumers to leverage that infrastructure through common entities such as workloads (pods and replica sets), networks and network policies (NetworkPolicy) and storage (Storage Classes, Persistent Volume Claims). Kubernetes is designed to adapt to the infrastructure.
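
For instance, a workload can claim storage by class and size without knowing which array or cloud disk sits underneath. A sketch, again with the Python client and assumed names:

```python
# Request storage abstractly; the cluster's StorageClass decides how
# (and on which backend) the volume is actually provisioned.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "catalogue-data"},   # hypothetical name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "standard",       # assumes this class exists
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```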

Finally, and perhaps the thing that gets me most excited about Kubernetes, is its extensibility. Out of the box, Kubernetes already delivers a whole host of resource types — pods, storage classes, roles and so much more — and functionality to lifecycle-manage those resources, such as replica sets, daemon sets and stateful sets. But stateful workloads like databases, caches, or indexing services each have unique needs; the way that MongoDB protects the data it stores is quite different from the way that MySQL does, for example. Kubernetes allows custom resource definitions (CRDs) and associated behaviours (one of the most popular means for this is via operators) to be added, effectively extending the reach of the platform. That is, Kubernetes can be adapted to host and manage a virtually endless set of different types of workloads.
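
As a sketch of what registering such an extension looks like (the group, kind and field are invented for illustration; real database operators, such as those for MongoDB or MySQL, define considerably richer schemas):

```python
# Register a hypothetical MongoCluster custom resource type. Once the
# CRD exists, an operator can watch MongoCluster objects and encode
# MongoDB-specific behaviour for backups, scaling and failover.
from kubernetes import client, config

config.load_kube_config()
ext = client.ApiextensionsV1Api()

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "mongoclusters.example.com"},  # hypothetical group
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "mongoclusters",
                  "singular": "mongocluster",
                  "kind": "MongoCluster"},
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,
            "schema": {"openAPIV3Schema": {
                "type": "object",
                "properties": {"spec": {
                    "type": "object",
                    "properties": {"members": {"type": "integer"}},
                }},
            }},
        }],
    },
}
ext.create_custom_resource_definition(body=crd)
```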

When you look at the abstractions that Kubernetes provides it’s easy to think of it as a new API for infrastructure — its base primitives are compute, storage, and network, just as with server virtualisation. It is its tolerance for change that sets it apart.

Who is Kubernetes for?

Just like Docker and server virtualisation before that, initially Kubernetes has captured the mindshare of the developer. Particularly now that those developers are increasingly responsible for keeping their software running well in production, having an intelligent, autonomous system that helps them with those operational tasks is hugely valuable. App operations involves not only the day 1 task of deployment but also maintenance in the face of infrastructure changes, security vulnerabilities, and more.

It’s rare these days that I speak to an enterprise that does not have some container-centric effort, sometimes a substantial one, going on. Often it has grown out of a development group that has built its practices around containers. They’re building Docker images for their apps but, because the enterprise does not already have a production platform that can run those images, the same app teams end up managing the container platform. Just as enterprise IT provides centralised, secure, compliant, and resilient virtualised infrastructure environments, the time has come for it to provide secure, compliant, and resilient container platforms.

As Kubernetes becomes mission critical

With the capabilities that it brings for running and managing mission-critical workloads, Kubernetes itself must be equally resilient to change. If a security vulnerability is found that requires Kubernetes be upgraded, it must be patched quickly and with zero downtime for the workloads it is hosting.

If application capacity requirements suddenly spike, the Kubernetes capacity must be quickly expanded to meet the need. When the spike has passed, Kubernetes needs to be right-sized again to keep IT infrastructure costs in check.

These are exactly the challenges that Kubernetes is addressing for containerised workloads. The key is to use the same principles and techniques that Kubernetes uses for workloads to manage Kubernetes itself.

Google confirms private Gmail messages can be read by third parties


Bobby Hellard

4 Jul, 2018

Google has responded to a report by The Wall Street Journal highlighting how common it is for third-party developers to view user Gmail messages.

The publication had previously reported that Google has a “dirty secret”: developers can sift through Gmail messages because users have granted permission for third parties to do so.

Google said it makes it possible for applications from other developers – such as email clients, trip planners and customer relationship management systems – to integrate with Gmail so that users have options around how they access and use email.

As a result of this, private messages in Gmail can be read not only by third-party systems but also by humans not intended to be the recipients of such emails.

The search giant stressed that it continuously works to vet developers and their apps that integrate with Gmail before opening them up for general access. It said it also provides both enterprise admins and individual consumers with transparency and control over how their data is used.

“A vibrant ecosystem of non-Google apps gives you choice and helps you get the most out of your email,” said Suzanne Frey, Google Cloud’s director of security, trust and privacy.

“However, before a published, non-Google app can access your Gmail messages, it goes through a multi-step review process that includes automated and manual review of the developer, assessment of the app’s privacy policy and homepage to ensure it is a legitimate app, and in-app testing to ensure the app works as it says it does.”

To pass Google’s review process, non-Google apps must meet two key requirements. First, apps must not misrepresent their identity and must be clear about how they use your data; second, they must request only the data they need for their specific function, nothing more, and be clear about how they are using it.

The WSJ story did not unearth any wrongdoing from third-party apps or services using Gmail, but it has shone a light on a previously discreet industry practice that is under heavier scrutiny since Facebook’s Cambridge Analytica data privacy scandal.

Google is now taking steps to actively defend its own data management and user privacy practices to convince users and businesses that it is a responsible steward of sensitive user data.

Force Quit on a Mac: 3 Easy Ways to Close Frozen Applications

Ok, I get it. There is no equivalent to the PC’s Ctrl+Alt+Del shortcut on a Mac® to force quit an application. So how do I quit that annoying program that’s not responding? Luckily, Apple® has you covered and gives you multiple options. The shortcut actually exists, and moreover, there are a few other extremely convenient […]

The post Force Quit on a Mac: 3 Easy Ways to Close Frozen Applications appeared first on Parallels Blog.
