Many organisations are beginning to rely on real-time insights to drive mission-critical business decisions, yet a recent study by Accenture and HfS Research found that nearly 80% of companies can’t make data-driven decisions because they lack the skills and technology to do so. In the same global survey, respondents reported that between 50% and 90% of their data is unstructured and largely inaccessible.
As Debbie Polishook, group chief executive of Accenture Operations, has commented: "Organisations need to maximise the use of ‘always on’ intelligence to sense, predict and act on changing customer and market developments."
In part, that means companies need to develop a data-driven backbone that can capitalise on the explosion of structured and unstructured data from multiple sources, turning it into new insights and stronger outcomes. It also means leveraging integrated automation and analytics to understand business challenges and then applying the right combination of tools to answer them.
All good in theory, but how? Especially given the challenges of today’s distributed data environments, which often span on-premises systems, private, public and hybrid clouds, and colocation facilities across multiple, geographically dispersed locations within a single enterprise.
Visibility and control, no matter the data environment
A simple solution to this problem is to modernise your IT infrastructure to support the optimal mix of distributed data environments while gaining the visibility and control needed to act on the insights that drive performance.
Cloud infrastructure tools give IT staff greater visibility and real-time insight into power usage, thermal output, server health, and utilisation. The key benefits are better operational control, infrastructure optimisation, and reduced costs – benefits that strengthen an organisation’s operations and its balance sheet regardless of its specific cloud strategy, or whether its infrastructure resides on-premises or at a colocation facility.
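As an illustration, below is a minimal sketch of the kind of server-level telemetry collection these tools perform, assuming a BMC that exposes the DMTF Redfish API; the address, credentials and chassis ID are hypothetical placeholders, not any particular vendor’s defaults:

```python
import requests

# Hypothetical BMC endpoint and credentials -- substitute your own.
BMC = "https://10.0.0.42"
AUTH = ("admin", "password")

def read_power_and_thermal(chassis_id: str = "1") -> dict:
    """Poll one server's power draw and inlet temperature via the
    DMTF Redfish API exposed by most modern BMCs."""
    # verify=False only because many BMCs ship with self-signed certificates.
    power = requests.get(f"{BMC}/redfish/v1/Chassis/{chassis_id}/Power",
                         auth=AUTH, verify=False, timeout=10).json()
    thermal = requests.get(f"{BMC}/redfish/v1/Chassis/{chassis_id}/Thermal",
                           auth=AUTH, verify=False, timeout=10).json()
    # Find the inlet sensor among the chassis temperature readings.
    inlet = next((t["ReadingCelsius"] for t in thermal.get("Temperatures", [])
                  if "Inlet" in t.get("Name", "")), None)
    return {"watts": power["PowerControl"][0].get("PowerConsumedWatts"),
            "inlet_c": inlet}

print(read_power_and_thermal())
```

Fed into a time-series store, readings like these become the power, thermal and utilisation history that the decisions discussed below depend on.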
Let’s consider an organisation weighing whether to migrate its data to the public cloud. Its IT staff would first need to assess how its systems perform internally and then determine the needs of its applications, including memory, processing power, and operating systems. Because they collect and normalise data, cloud infrastructure tools help IT teams better understand their current on-premises implementation, empowering them to make data-driven decisions about what to provision in the cloud.
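To make that concrete, here is a simple right-sizing sketch: it maps each server’s observed peak utilisation, plus a safety margin, onto the smallest instance type that fits. The instance catalogue and the 30% headroom figure are illustrative assumptions, not any provider’s actual offerings:

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cores: int
    ram_gb: int
    peak_cpu_pct: float  # observed peak CPU utilisation, from monitoring
    peak_ram_pct: float  # observed peak memory utilisation

# Hypothetical instance catalogue: name -> (vCPUs, GiB of RAM).
CATALOGUE = {"small": (2, 8), "medium": (4, 16),
             "large": (8, 32), "xlarge": (16, 64)}

def right_size(srv: Server, headroom: float = 1.3) -> str:
    """Pick the smallest instance covering observed peak demand plus a
    30% margin, rather than mirroring on-premises specs one-to-one."""
    need_vcpu = srv.cores * (srv.peak_cpu_pct / 100) * headroom
    need_ram = srv.ram_gb * (srv.peak_ram_pct / 100) * headroom
    for name, (vcpu, ram) in sorted(CATALOGUE.items(), key=lambda kv: kv[1]):
        if vcpu >= need_vcpu and ram >= need_ram:
            return name
    return "no single fit – consider splitting the workload"

# A 16-core, 128 GB server peaking at 22% CPU and 35% RAM needs far less
# than its nameplate specs suggest.
print(right_size(Server("db01", cores=16, ram_gb=128,
                        peak_cpu_pct=22, peak_ram_pct=35)))
```

The point of the normalised telemetry is exactly this: provisioning against measured demand rather than against the hardware you happen to own.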
Power and cooling, solved
Today’s high-density servers generate more heat than ever, overstressing cooling systems and consuming more power than legacy equipment. Data centre and colocation facility teams sometimes deploy solutions to measure and manage power at the rack and PDU levels, but have little visibility at the server level. They also struggle to manage power at the appropriate times and to determine the optimal target temperature for each section of the data centre. Traditional approaches to cooling take no account of what the servers actually need, so cooling devices run inefficiently because they cannot accurately anticipate cooling requirements.
Cloud infrastructure tools give data centre managers real-time power consumption data – the clarity needed to lower power usage, increase rack density, and prolong operation during outages – along with insight into thermal levels, airflow and utilisation. By retrieving server inlet air temperatures and feeding them to the building management system that controls cooling, these tools let managers cut energy consumption by supplying precisely the amount of cooling required. They also aggregate server-level information at the rack, row, and room levels, calculating efficiency metrics, building three-dimensional thermal maps of the data centre, and determining optimal temperatures. The derived metrics and indexes, combined with server sensor data, help identify and address energy efficiency issues such as hotspots.
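A rough sketch of that rack-level aggregation and hotspot detection follows. The sensor feed is made up; the 27°C ceiling is the top of ASHRAE’s recommended inlet range of 18–27°C:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sensor feed: (rack_id, server_id, inlet temperature in Celsius).
readings = [
    ("rack-A", "srv-01", 24.1), ("rack-A", "srv-02", 25.3),
    ("rack-B", "srv-03", 29.8), ("rack-B", "srv-04", 31.2),
]

ASHRAE_MAX_INLET_C = 27.0  # top of ASHRAE's recommended 18-27 C inlet range

by_rack = defaultdict(list)
for rack, _, temp in readings:
    by_rack[rack].append(temp)

# Aggregate per rack and flag any rack whose worst inlet breaches the ceiling.
for rack, temps in sorted(by_rack.items()):
    avg, worst = mean(temps), max(temps)
    status = "HOTSPOT" if worst > ASHRAE_MAX_INLET_C else "ok"
    print(f"{rack}: avg {avg:.1f} C, max {worst:.1f} C -> {status}")
```

Extending the grouping key from rack to row and room yields the coarser aggregates described above, and the per-rack deltas are what drive targeted cooling rather than blanket over-cooling.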
Automated health monitoring, not spreadsheets
Through ongoing monitoring, analytics, diagnostics and remediation, data centre operators can take a health management approach to the risk of costly downtime and outages. According to a recent survey, more than half of the IT staff who automate data centre health – continuously monitoring and flagging issues across their complex environments in real time – can identify and remedy those issues within 24 hours. Data centre managers who perform health checks manually – walking the floor, squinting at a spreadsheet, or worse, reacting only after an outage – deny themselves the real-time insights cloud infrastructure tools provide to keep their facilities running and their business reputation intact, to say nothing of the financial repercussions of extended downtime.
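As a minimal sketch of what automated, continuous health checking means in practice, the loop below polls a telemetry source and flags any metric that breaches a threshold. The poll_sensors stub and the thresholds are hypothetical stand-ins for a real feed (Redfish, IPMI, SNMP) and for site-specific limits:

```python
import logging
import time

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

# Hypothetical health rules: metric name -> predicate that must hold.
CHECKS = {
    "cpu_pct":      lambda v: v < 90,
    "inlet_temp_c": lambda v: v < 27,
    "fan_rpm":      lambda v: v > 1000,
    "psu_ok":       lambda v: v is True,
}

def poll_sensors() -> dict:
    """Stand-in for a real telemetry call (Redfish, IPMI, SNMP, ...)."""
    return {"cpu_pct": 42.0, "inlet_temp_c": 28.4, "fan_rpm": 5200, "psu_ok": True}

def run_health_loop(interval_s: float = 60, cycles: int = 3) -> None:
    """Continuously evaluate every rule and flag breaches as they happen."""
    for _ in range(cycles):
        sample = poll_sensors()
        for metric, healthy in CHECKS.items():
            if not healthy(sample[metric]):
                logging.warning("FLAG: %s=%s breached its threshold",
                                metric, sample[metric])
        time.sleep(interval_s)

run_health_loop(interval_s=1, cycles=1)  # sample data trips the inlet-temperature rule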
Cloud infrastructure tools deliver the visibility and operational control needed to optimise on-premises, private cloud, colocation, and public and hybrid cloud models. These solutions keep a constant watch over power, thermal output, server health and utilisation, enabling better data-driven decision-making no matter your distributed data environment.