
Get smart: Achieving data-driven insights through a modernised IT infrastructure

Many organisations are beginning to rely on real-time insights to drive mission-critical business decisions, but a recent study by Accenture and HfS Research found that nearly 80% of companies can’t make data-driven decisions due to a lack of skills and technology. Respondents to the global survey reported that 50% to 90% of their data is unstructured and largely inaccessible.

As Debbie Polishook, group chief executive of Accenture Operations, has commented: "Organisations need to maximise the use of ‘always on’ intelligence to sense, predict and act on changing customer and market developments."

In part, that means companies need to develop a data-driven backbone that can capitalise on the explosion of structured and unstructured data from multiple sources, turning it into new insights that deliver stronger outcomes. It also means leveraging integrated automation and analytics to understand business challenges and then applying the right combination of tools to find the right answers.

All good in theory, but how? Especially given the challenges of today’s distributed data environments, which often comprise on-premises infrastructure, private clouds, public and hybrid clouds, and colocation facilities spread across a single enterprise with multiple, geographically dispersed locations.

Visibility and control, no matter the data environment

A straightforward solution to this problem is modernising your IT infrastructure to support the optimal mix of distributed data environments while gaining the visibility and control needed to surface the insights that drive peak performance.

Cloud infrastructure tools provide IT staff with greater visibility and real-time insight into power usage, thermal consumption, server health, and utilisation. The key benefits are better operational control, infrastructure optimisation, and reduced costs – benefits that enhance an organisation’s business operations and its bottom line, regardless of its specific cloud strategy or whether its infrastructure resides on-premises or at a colocation facility.

Let’s consider an organisation weighing whether to migrate its data to the public cloud. Its IT staff would first need to assess how its systems perform internally and then determine the needs of its applications, including memory, processing power, and operating systems. By virtue of their ability to collect and normalise data, cloud infrastructure tools help IT teams better understand their current on-premises implementation, empowering them to make data-driven decisions about what to provision in the cloud.
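To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical servers, metrics and instance sizes, since no specific tooling is named here) of how collected utilisation data might be normalised into a cloud-provisioning suggestion; a real assessment would draw on far richer telemetry.

```python
# Hypothetical sketch: normalise on-premises utilisation data and suggest
# a cloud instance size per workload. Names and figures are illustrative only.

# Peak utilisation observed for each on-premises server (illustrative data).
servers = [
    {"name": "app-01", "cpu_cores": 16, "peak_cpu_pct": 35, "ram_gb": 64,  "peak_ram_pct": 40},
    {"name": "db-01",  "cpu_cores": 32, "peak_cpu_pct": 70, "ram_gb": 256, "peak_ram_pct": 80},
]

# Hypothetical catalogue of cloud instance sizes: (name, vCPUs, memory in GB).
instance_sizes = [("small", 2, 8), ("medium", 8, 32), ("large", 16, 64), ("xlarge", 32, 256)]

def suggest_instance(server, headroom=1.25):
    """Pick the smallest instance that covers observed peak demand plus headroom."""
    needed_vcpus = server["cpu_cores"] * server["peak_cpu_pct"] / 100 * headroom
    needed_ram = server["ram_gb"] * server["peak_ram_pct"] / 100 * headroom
    for name, vcpus, ram in instance_sizes:
        if vcpus >= needed_vcpus and ram >= needed_ram:
            return name
    return "no standard size fits; review this workload"

for s in servers:
    print(f"{s['name']}: suggested instance -> {suggest_instance(s)}")
```

The headroom factor simply reflects the common practice of provisioning above observed peaks; the right value depends on how variable each workload is.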

Power and cooling, solved

Today’s high-density servers generate more heat than ever, overstressing cooling systems and consuming more power than legacy equipment. Data centre and colocation facility teams sometimes deploy solutions to measure and manage power at the rack and PDU levels but have little visibility at the server level. There are also challenges in managing power at the appropriate times and in determining the optimal target temperature for every section of the data centre. The traditional method of cooling data centres does not take into account the actual needs of the servers, so cooling devices operate inefficiently because they cannot accurately anticipate cooling requirements.

Cloud infrastructure tools give data centre managers real-time power consumption data, providing the clarity needed to lower power usage, increase rack density, and prolong operation during outages, and they also offer insight into thermal levels, airflow and utilisation. By retrieving server inlet air temperature and feeding it to the building management system that controls cooling, these tools help data centre managers reduce energy consumption by precisely matching the cooling delivered to what is actually required. They also aggregate server-level information at the rack, row, and room levels, calculating efficiency metrics, building three-dimensional thermal maps of the data centre, and determining optimal temperatures. The derived metrics and indexes, along with server sensor information, help identify and address data centre energy efficiency issues such as hotspots.
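As an illustration of that aggregation step, the following sketch rolls hypothetical per-server inlet temperatures up to rack level and flags potential hotspots. The readings and the 27 °C threshold are assumptions chosen only for the example; the threshold sits near the upper end of commonly cited recommended inlet ranges.

```python
# Hypothetical sketch: aggregate per-server inlet temperatures to rack level
# and flag hotspots. Readings and the threshold are illustrative only.
from collections import defaultdict
from statistics import mean

# (rack, server, inlet temperature in degrees C) - illustrative readings.
readings = [
    ("rack-A", "srv-1", 23.5), ("rack-A", "srv-2", 24.1),
    ("rack-B", "srv-3", 27.8), ("rack-B", "srv-4", 28.4),
]

HOTSPOT_THRESHOLD_C = 27.0  # assumed upper bound for acceptable inlet temperature

by_rack = defaultdict(list)
for rack, _server, temp in readings:
    by_rack[rack].append(temp)

for rack, temps in sorted(by_rack.items()):
    avg, peak = mean(temps), max(temps)
    status = "HOTSPOT" if peak > HOTSPOT_THRESHOLD_C else "ok"
    print(f"{rack}: avg inlet {avg:.1f} C, peak {peak:.1f} C -> {status}")
```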

Automated health monitoring, not spreadsheets

Through ongoing monitoring, analytics, diagnostics and remediation, data centre operators can take a health management approach to the risk of costly downtime and outages. According to a recent survey, more than half of the IT staff who automate data centre health, continuously monitoring and flagging issues across their complex environments in real time, can identify and remedy those issues within 24 hours. Data centre managers who perform health checks manually — either by walking the floor, squinting at a spreadsheet, or worse, only after an outage event — are denying themselves the real-time insights cloud infrastructure tools can provide to keep their facilities up and running and their business reputation intact, to say nothing of the financial repercussions of extended downtime.
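A minimal sketch of that automated approach might look like the following, where the health feed, field names and thresholds are all stand-ins rather than any particular product's API; the point is simply that issues are surfaced by continuous polling rather than by inspecting a spreadsheet.

```python
# Hypothetical sketch of automated health monitoring: poll a (stubbed) health
# feed and flag anything that needs attention. Fields and thresholds are
# illustrative assumptions, not a real product's interface.
import time

def fetch_health_snapshot():
    """Stand-in for whatever API or agent actually supplies server health data."""
    return [
        {"server": "srv-1", "power_w": 310, "inlet_c": 24.0, "fan_failures": 0},
        {"server": "srv-2", "power_w": 505, "inlet_c": 29.5, "fan_failures": 1},
    ]

def check(snapshot, max_power_w=450, max_inlet_c=27.0):
    """Return human-readable alerts for out-of-range readings."""
    alerts = []
    for s in snapshot:
        if s["power_w"] > max_power_w:
            alerts.append(f"{s['server']}: power draw {s['power_w']} W above {max_power_w} W")
        if s["inlet_c"] > max_inlet_c:
            alerts.append(f"{s['server']}: inlet {s['inlet_c']} C above {max_inlet_c} C")
        if s["fan_failures"]:
            alerts.append(f"{s['server']}: {s['fan_failures']} fan failure(s) reported")
    return alerts

if __name__ == "__main__":
    for _ in range(3):              # in practice this loop would run continuously
        for alert in check(fetch_health_snapshot()):
            print("ALERT:", alert)
        time.sleep(5)               # polling interval, illustrative only
```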

Cloud infrastructure tools deliver the visibility and operational control needed to optimise on-premises, private cloud, colocation, and public and hybrid cloud models. These software solutions keep constant watch on power, thermal consumption, server health and utilisation, enabling better data-driven decision-making no matter your distributed data environment.

The clouds are rolling in: Three reasons to take the cloud plunge in 2018

In the past year, cloud technologies have dominated the headlines, and not just offerings from the major players: new and emerging platforms promise to become mainstream in the very near future.

As more and more enterprises seek out the capabilities a cloud solution can provide, including data distribution, improved performance and controlled costs, we can expect this trend to continue into the new year. This is apparent from the thousands of companies that already have a fully baked cloud strategy in place or have one in the works. According to IDG’s 2016 Cloud Computing Survey, cloud technologies are now used by at least 70 percent of U.S. businesses, and Gartner predicts 90 percent of organisations worldwide will adopt hybrid infrastructure management capabilities within the next two years.

Still, whether private, public or a hybrid mix of cloud computing models, there are many organisations that have yet to take the leap. So, let’s look at three compelling motivations for businesses to implement a cloud strategy in 2018.

Motivation #1: Reducing operational overheads

Most companies fail to account for the total cost of IT ownership, which includes support, additional hardware, maintenance and more. In fact, Gartner’s 2017 IT Budget reveals that healthcare companies often spend nearly 75 percent of their IT budgets on maintaining internal systems alone.

Since cloud services operate on a subscription model, companies pay only for what they use over time rather than committing their entire budget up front. Moving to the cloud means a decrease in rack space, power usage and IT requirements, which results in lower installation, maintenance, hardware, upgrade and support costs. It also frees IT teams to focus on building the business in more progressive ways instead of becoming bogged down in maintenance and support tasks.

Motivation #2: Increasing control and flexibility

Many fast-growing enterprises face a bottleneck when all operations are kept on-premises. By using a hybrid approach, data centre managers and IT teams alike can enhance innovation and create new systems that scale as demand increases, while retaining the flexibility to turn these cloud environments up, down, or off depending upon circumstances or needs.

Through a single network that connects an on-premise data centre to several cloud environments, companies can effectively manage both standard and critical workloads to move the business strategy forward.

Motivation #3: Boosting your data centre’s security

With a growing number of companies moving to the cloud, security measures are evolving to account for this growth, pushing providers to offer higher levels of security and data integrity. However, with the role of IT in business now spanning strategic planning, revenue generation, efficient management of resources, and advancing innovation, organisations have little time left for comparatively mundane tasks such as managing security in-house.

It’s important to realise that the cloud is no less secure than servers managed internally by most companies. In fact, there’s a very good chance that cloud infrastructures are more secure. By storing data in the cloud, businesses can securely and remotely access it from any location or device, as well as manage any potential breaches, including deleting or moving at-risk sensitive data in real time.

Now that we understand several compelling reasons why more companies will move their data to the cloud, including reduced operational overhead, increased control and flexibility, and improved data centre security, let’s briefly examine how IT staff can best use a mixed cloud strategy to realise a company’s full data potential.

By providing greater visibility and real-time insight into power usage, thermal consumption, server health and utilisation, cloud infrastructure tools help IT teams better understand how their cloud is performing. Whether an organisation’s cloud computing model is private, public or hybrid, the major benefits are improved operational control, infrastructure optimisation and reduced costs.

Especially as a business transitions from private to public or hybrid clouds, an organisation’s IT staff needs to understand how its systems perform internally. Understanding the needs of its mission-critical applications — including memory, processing power and operating systems — should determine what to provision in the cloud. By collecting and normalising data to help IT staff better understand their current on-premises implementation, cloud infrastructure tools enable them to make intelligent decisions concerning the requirements of a new cloud configuration.

Cloud infrastructure tools can also identify idle or under-used servers. These so-called “ghost servers” can draw as much as half the power used during peak workloads, and at any point in time 10 to 15 percent of servers can fall into this category. Cloud infrastructure tools can therefore help data centre managers consolidate and virtualise these servers to avoid wasted energy and space, which is essential in a hybrid cloud environment.
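The sketch below illustrates the idea with made-up numbers: it flags servers whose average utilisation falls below an assumed idle threshold and estimates the power tied up in them, using the rule of thumb above that an idle server can draw roughly half its peak power.

```python
# Hypothetical sketch: flag "ghost" servers from average CPU utilisation and
# estimate the power tied up in them. All figures are illustrative.
servers = [
    {"name": "srv-1", "avg_cpu_pct": 2,  "peak_power_w": 400},
    {"name": "srv-2", "avg_cpu_pct": 55, "peak_power_w": 400},
    {"name": "srv-3", "avg_cpu_pct": 4,  "peak_power_w": 350},
]

IDLE_THRESHOLD_PCT = 5     # below this average utilisation, treat as a ghost server
IDLE_POWER_FRACTION = 0.5  # idle draw as a fraction of peak, per the rule of thumb

ghosts = [s for s in servers if s["avg_cpu_pct"] < IDLE_THRESHOLD_PCT]
reclaimable_w = sum(s["peak_power_w"] * IDLE_POWER_FRACTION for s in ghosts)

print("Ghost server candidates:", [s["name"] for s in ghosts])
print(f"Estimated power tied up in idle servers: {reclaimable_w:.0f} W")
```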

Additionally, given that energy costs are the fastest-rising expense for today’s data centres, cloud infrastructure tools deliver real-time power and thermal consumption data, providing IT staff with the clarity needed to lower power usage, increase rack density and prolong operation during outages.

There’s no question that hybrid infrastructure creates new challenges for IT staff. By ensuring that energy, equipment and floor space are used as efficiently as possible, cloud infrastructure tools help IT staff optimise their organisation’s multi-cloud environment.