Category archive: Data center

Cloud Data Center Draw is Often Power

Interesting trend reported on by James Glanz of the New York Times. Ample access to electrical power is driving up data center rents across the river in New Jersey — to levels higher than the trophy skyscrapers in Manhattan.

…electrical capacity is often the central element of lease agreements, and space is secondary.

Read “Landlords Double As Energy Brokers”.

Measurement, Control and Efficiency in the Data Center

Guest Post by Roger Keenan, Managing Director of City Lifeline

To control something, you must first be able to measure it.  This is one of the most basic principles of engineering.  Once there is measurement, there can be feedback.  Feedback creates a virtuous loop in which the output changes to better track the changing input demand.  Improving data centre efficiency is no different.  If efficiency means better adherence to the demand from the organisation for lower energy consumption, better utilisation of assets, faster response to change requests, then the very first step is to measure those things, and use the measurements to provide feedback and thereby control.
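The measure-feedback-control loop described above can be illustrated with a toy proportional controller; the numbers and the gain below are invented and purely schematic:

```python
def control_step(measured: float, demand: float, gain: float = 0.5) -> float:
    """One feedback iteration: nudge the output toward the demand."""
    error = demand - measured
    return measured + gain * error

value, demand = 100.0, 60.0   # e.g. kW currently drawn vs. kW the organisation wants
for _ in range(6):
    value = control_step(value, demand)
print(round(value, 1))  # → 60.6, converging on the demand
```

Each iteration measures the gap between output and demand and feeds half of it back, which is exactly the virtuous loop the paragraph describes: no measurement, no error signal, no control.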

So what do we want to control?  We can divide it into three areas: the data centre facility, the use of compute capacity and the communications between the data centre and the outside world.  The relative importance of these will differ from organisation to organisation.

There are all sorts of data centres, ranging from professional colocation facilities to the server-cupboard-under-the-stairs found in some smaller enterprises.  Professional data centre operators focus hard on the energy efficiency of the total facility.  The most common measure of energy efficiency is PUE, originally defined by the Green Grid organisation.  It is simple: the total energy going into the facility divided by the energy used to power the electronic equipment.  Although the metric is sometimes gamed (a nice example is the data centre that powered its facility lighting over PoE, Power over Ethernet, thereby making the lighting count as part of the ‘electronic equipment’), it is widely understood and used world-wide.  It provides visibility and focus for the process of continuous improvement.  It is easy to measure at facility level, as it only needs monitors on the mains feeds into the building and on the UPS outputs.
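The PUE calculation itself is a one-liner; a minimal sketch, assuming hypothetical meter readings from the mains feed and the UPS outputs:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / energy delivered to IT equipment."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example readings (invented): 1,500 kWh into the building,
# of which 1,000 kWh reaches the IT gear.
print(round(pue(1500, 1000), 2))  # → 1.5
```

A PUE of 1.0 would mean every watt entering the building powers IT equipment; real facilities sit above that because of cooling, lighting and distribution losses.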

Power efficiency can be managed at multiple levels: at the facility level, at the cabinet level and at the level of ‘useful work’.  This last is difficult to define, let alone measure, and there are various working groups around the world trying to decide what ‘useful work’ means.  It may be compute cycles per kW, revenue generated within the organisation per kW or application run time per kW, and it may differ between organisations.  Whatever it is, it has to be properly defined and measured before it can be controlled.
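Whatever definition an organisation settles on, the normalisation step is the same: divide the chosen ‘useful work’ figure by average power draw. A trivial sketch, with invented workload figures:

```python
def per_kw(work: float, avg_power_kw: float) -> float:
    """Normalise a 'useful work' figure by average power draw."""
    return work / avg_power_kw

avg_power_kw = 40.0        # average draw over the measurement window (assumed)
compute_cycles = 2.0e12    # total compute cycles delivered in that window (assumed)
revenue = 120_000.0        # revenue attributed to the workload (assumed)

print(per_kw(compute_cycles, avg_power_kw))  # cycles per kW
print(per_kw(revenue, avg_power_kw))         # → 3000.0 revenue per kW
```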

DCIM (data centre infrastructure management) systems provide a way to measure the population and activity of servers and particularly of virtualised machines.  In large organisations, with potentially many thousands of servers, DCIM provides a means of physical inventory tracking and control.  More important than the question “how many servers do I have?” is “how much useful work do they do?”  Typically a large data centre will have around 10% ghost servers – servers which are powered and running but which do not do anything useful.  DCIM can justify its costs and the effort needed to set it up on those alone.
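Flagging ghost servers from DCIM-style inventory data might look like the sketch below; the record fields and the thresholds are assumptions for illustration, not a real DCIM API:

```python
from dataclasses import dataclass

@dataclass
class ServerRecord:
    name: str
    powered_on: bool
    avg_cpu_pct: float   # average CPU utilisation over, say, the last 30 days
    net_io_mb: float     # network traffic over the same window

def ghost_servers(records, cpu_threshold=2.0, net_threshold=10.0):
    """Return names of powered-on servers doing essentially no useful work."""
    return [r.name for r in records
            if r.powered_on
            and r.avg_cpu_pct < cpu_threshold
            and r.net_io_mb < net_threshold]

fleet = [
    ServerRecord("web-01", True, 35.0, 5000.0),
    ServerRecord("old-batch-07", True, 0.4, 1.2),   # a likely ghost
    ServerRecord("decom-03", False, 0.0, 0.0),      # powered off, not counted
]
print(ghost_servers(fleet))  # → ['old-batch-07']
```

At roughly 10% of a large fleet, a list like this is often the fastest payback a DCIM deployment can show.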

Virtualisation brings its own challenges.  It has taken us away from the days when a typical server ran at 10-15% utilisation, but most data centres are still a long way from operating efficiently with virtualisation.  Users will often over-specify server capacity for an application, using more CPUs, memory and storage than really needed, just to be on the safe side and because they can.  Users see the data centre as a sunk cost – it’s already there and paid for, so we might as well use it.  This creates ‘VM sprawl’.  The way out of this is to measure, quote and charge.  If a user is charged for the machine time used, that user will think more carefully about wasting it and about piling contingency allowance upon contingency allowance ‘just in case’, which leads to inefficient stranded capacity.  And if the user is given a real-time quote for the costs before committing to them, they will think harder about how much capacity is really needed.
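The ‘measure, quote and charge’ idea amounts to showing a real-time quote before capacity is committed. A hedged sketch, with invented rates and sizings:

```python
# Hypothetical chargeback rates; real figures vary by facility.
RATES = {"vcpu_hour": 0.03, "gb_ram_hour": 0.01, "gb_disk_month": 0.05}

def monthly_quote(vcpus: int, ram_gb: int, disk_gb: int, hours: int = 730) -> float:
    """Quote a month's cost for a requested VM size (730 ≈ hours per month)."""
    return round(
        vcpus * hours * RATES["vcpu_hour"]
        + ram_gb * hours * RATES["gb_ram_hour"]
        + disk_gb * RATES["gb_disk_month"], 2)

# Generous 'just in case' sizing vs. a right-sized VM for the same workload:
print(monthly_quote(16, 64, 500))  # over-specified request
print(monthly_quote(4, 8, 100))    # right-sized request
```

Seeing the two quotes side by side is precisely what makes users stop piling contingency on contingency.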

Data centres do not exist in isolation.  Every data centre is connected to other data centres and often to multiple external premises, such as retail shops or oil rigs.  Often those have little redundancy and may well not operate efficiently.  Again, to optimise efficiency and reliability of those networks, the first requirement is to be able to measure what they are doing.  That means having a separate mechanism at each remote point, connected via a different communications network back to a central point.  The mobile phone network often performs that role.

Measurement is the core of all control and efficiency improvement in the modern data centre.  If the organisation demands improved efficiency (and if it can define what that means) then the first step to achieving it is measurement of the present state of whatever it is we are trying to improve.  From measurement comes feedback.  From feedback comes improvement and from improvement comes control.  From control comes efficiency, which is what we are all trying to achieve.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier neutral colocation data centre in Central London, as managing director in 2005.  His main responsibilities are to oversee the management of all business and marketing strategies and profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he fully established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.

Colocation: 55+ living for your IT equipment

I recently sat on a planning call with an extremely smart and agreeable client. We had discussed a modest “data center” worth of equipment to host the environment he’s considering putting into production. I asked the simple enough question of “where are you going to deploy this gear?” I have to admit not being very surprised when he responded: “Well, I’ve cleaned out a corner of my office.” Having spent some early days of my IT career working in a server closet, I knew that if the hum of the equipment fans didn’t get to him quickly, the heat output would for sure. This is not an uncommon conversation. Clearly the capital expense of building out a “data center” onsite was not an appealing topic. So, if building isn’t an option, why not rent?

In a similar vein, not too far back I watched several “senior” members of my family move into 55+ communities after years of resisting. Basically, they did a “capacity planner” and realized the big house was no longer needed. They figured somebody else could worry about the landscaping, snow plowing and leaky roofs. The same driving forces should have many IT pros considering a move into a colocation facility.

The opportunities to move into a hosted data center (colo facility) are plentiful today. You simply don’t have as much gear any longer (assuming you’re mostly virtualized). Your desire to “do it all” yourself has waned (let someone else worry about keeping the lights on and network connected). The added bonus of providing redundant network paths, onsite security and almost infinite expansion are driving many “rental” conversations today. Colos are purpose-built facilities which are ideal for core data center gear such as servers, storage (SANs), routers and core switches, to name a few.  Almost all of them have dual power feeds, backup battery systems and generators. HVAC (heating, ventilation, and air-conditioning) units keep appropriate environmental conditions for the operation of this critical equipment.

Many businesses don’t fully realize just how much power is required to operate a data center. The energy bills incurred for the cooling component alone can leave many IT managers, well, frosted. Still, the need to see the healthy green blinking status lights is like a digital comfort blanket. Speaking with many IT execs, we hear over and over: “This was the best move we could have made.” From our own experience, we’ve seen our internal IT team shift focus to strategic initiatives and end user support.

While it is certainly not a one-size-fits-all endeavor, there is something for most organizations when it comes to colo. Smaller organizations with one rack of equipment have seen tremendous advantages as have clients approaching the “enterprise” size with dozens of cabinets of gear. Redundancy, security, cost control, predictable budgets and 7x24x365 support are all equally attractive reasons to move into a “colo.” Call it a “colominium” if you will. Colo could be the right step toward a more efficient and effective IT existence.

 

Wired Profiles a New Breed of Internet Hero, the Data Center Guru

The whole idea of cloud computing is that mere mortals can stop worrying about hardware and focus on delivering applications. But cloud services like Amazon’s AWS, and the amazingly complex hardware and software that underpin all that power and flexibility, do not happen by chance. This Wired article about James Hamilton paints a picture of a new breed of folks the Internet has come to rely on:

…with this enormous success comes a whole new set of computing problems, and James Hamilton is one of the key thinkers charged with solving such problems, striving to rethink the data center for the age of cloud computing. Much like two other cloud computing giants — Google and Microsoft — Amazon says very little about the particulars of its data center work, viewing this as the most important of trade secrets, but Hamilton is held in such high regard, he’s one of the few Amazon employees permitted to blog about his big ideas, and the fifty-something Canadian has developed a reputation across the industry as a guru of distributed systems — the kind of massive online operations that Amazon builds to support thousands of companies across the globe.

Read the article.

 

Contrarian: Building, Colocating Your Own Servers, No Cloud Involved

Jeff Atwood has a great post at Coding Horror about his penchant for building his own servers to rack at a colo. He compares it to the Amazon EC2 alternative, all the while admitting it’s pretty much apples and oranges.

I want to make it clear that building and colocating your own servers isn’t (always) crazy, it isn’t scary, heck, it isn’t even particularly hard. In some situations it can make sense to build and rack your own servers, provided …

  • you want absolute top of the line server performance without paying thousands of dollars per month for the privilege
  • you are willing to invest the time in building, racking, and configuring your servers
  • you have the capital to invest up front
  • you desire total control over the hardware
  • you aren’t worried about the flexibility of quickly provisioning new servers to handle unanticipated load
  • you don’t need the redundancy, geographical backup, and flexibility that comes with cloud virtualization

It’s worth reading in its own right, but also because he does a pretty good job of outlining the pros and cons of cloud versus self-hosting. It’s also a good thing to remember that no matter how “virtual” we get, there’s still gotta be a bunch of hardware somewhere to make it all go.


McAfee Launches New Data Center Security Suites


McAfee today announced four new Data Center Security Suites to help secure servers and databases in the data center. The suites offer a unique combination of whitelisting, blacklisting and virtualization technologies for protecting servers and virtual desktops. These solutions provide optimal security for servers and databases in physical, virtualized and cloud-based data centers, with minimal impact on server resources, which is a key demand for data centers.

“Performance and security are key concerns for servers in the physical, virtualized or cloud-based data centers,” said Jon Oltsik, Senior Principal Analyst, Information Security and Networking at Enterprise Security Group. “The new server security suites from McAfee, based on its application whitelisting, virtualization and blacklisting and AV technologies, provide an enhanced security posture while maintaining the high server performance needs of the data center.”

The suites offer customers the ability to protect their physical and virtual servers and virtual desktops with a unique combination of technologies in a single solution.

  • McAfee Data Center Security Suite for Server provides a complete set of blacklisting, whitelisting, and optimized virtualization support capabilities for basic security on servers of all types
  • McAfee Data Center Security Suite for Server–Hypervisor Edition provides the same blacklisting, whitelisting, and virtualization support capabilities and is licensed per hypervisor
  • McAfee Data Center Security Suite for Virtual Desktop Infrastructure provides comprehensive security for virtual desktop deployments without compromising performance or the user experience
  • McAfee Database Server Protection provides database activity monitoring and vulnerability assessment in a single suite, for all major database servers in the data center

“McAfee is leading the industry with these new solutions for protecting servers in the data center,” said Candace Worley, senior vice president and general manager of endpoint security at McAfee. “The combination of whitelisting, blacklisting and virtualization in a single solution, offers an optimal security posture for protecting servers in the data centers. These solutions address the need in the industry to offer solutions that provide the highest level of protection with minimal impact on the resources they are deployed on and in a wide range of customized licensing options.”

 


Mellanox Introduces SwitchX-2 Software Defined Networking VPI Switch

Mellanox Technologies today announced SwitchX-2, the next generation of its switch silicon optimized for Software Defined Networking (SDN). SwitchX-2 includes advanced capabilities of remote configurable routing tables, lossless and congestion free networks, efficient control planes, and SDN-optimized software interfaces. SwitchX-2 enables IT managers to program and centralize their server and storage interconnect management and dramatically reduce their operational expenses by completely virtualizing their data center network. According to IDC, the broader SDN/OpenFlow market is expected to see rapid growth, reaching $2 billion by 2016, a significant portion of which will be network infrastructure.

SwitchX-2 is based on Mellanox’s leading Virtual Protocol Interconnect® (VPI) technology which allows for simultaneous connection to InfiniBand or Ethernet with integrated gateways to legacy data center and storage systems. Utilizing industry-first, RDMA-based 56Gb/s Ethernet and InfiniBand, SwitchX-2 is the world’s fastest, most scalable SDN switch with unmatched 4Tb/s switching capacity (50 percent higher than closest competition), the industry’s lowest power consumption, extremely low 170ns latency, hardware-based L2/L3 congestion management for highest efficiency and hardware-based data error correction for highest reliability. SwitchX-2’s advanced feature set enables the creation of larger flat SDN networks with lower cost and higher performance.

“Software Defined Networking is rapidly emerging as a key architectural element for next generation cloud, Web 2.0 and scalable data centers. As a building block for SDN-enabled network infrastructure, switches with high throughput, low latency and low power consumption are expected to be instrumental in realizing the goal of reducing operational expense while enabling data center scalability and flexibility,” said Rohit Mehra, vice president, Enterprise and Datacenter Networks, IDC. “Technologies such as Mellanox SwitchX-2, when built into next-generation data centers, will enable IT to benefit from the promise of Software Defined Networking by delivering improved throughput, latency and power, along with enhanced programmability, automation and control.”

“Mellanox’s SwitchX-2 VPI switch leads the industry with the highest throughput capacity, low latency with nearly zero jitter, as well as advanced SDN interfaces for control and management,” said David Barzilai, vice president of marketing at Mellanox Technologies. “SDN technology has been a critical component of the InfiniBand scalable architecture and has been proven worldwide in data centers and clusters of tens-of-thousands of servers. Now, with SwitchX-2, Mellanox provides the most efficient SDN solution for both InfiniBand and Ethernet data centers. Mellanox’s fast, RDMA-based interconnect technology leads the competition in terms of performance, SDN technology and return-on-investment advantages it brings to IT and application managers.”


Cloud Data Centers in Rural Locations — Gobbling Electricity, Throwing Their Weight Around

Very interesting in-depth article in the New York Times today on the sprawling, electricity-hungry data centers spawned by cloud computing.

Internet-based industries have honed a reputation for sleek, clean convenience based on the magic they deliver to screens everywhere. At the heart of every Internet enterprise are data centers, which have become more sprawling and ubiquitous as the amount of stored information explodes, sprouting in community after community.

…the gee-whiz factor of such a prominent high-tech neighbor wore off quickly. First, a citizens group initiated a legal challenge over pollution from some of the nearly 40 giant diesel generators that Microsoft’s facility — near an elementary school — is allowed to use for backup power.

Then came a showdown late last year between the utility and Microsoft, whose hardball tactics shocked some local officials.

These data centers are apparently not always good neighbors, and of course, since they are there to serve our cloud needs, we’re all complicit to some degree.


Report: Green Data Center Market $45 Billion by 2016

The combination of rising energy costs, increasing demand for computing power, environmental concerns, and economic pressure has made the green data center a focal point for the transformation of the IT industry as a whole. According to a recent report from Pike Research, a part of Navigant’s Energy Practice, the worldwide market for green data centers will grow from $17.1 billion in 2012 to $45.4 billion by 2016 – at a compound annual growth rate of nearly 28 percent.
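The report’s headline rate is easy to check: growing from $17.1 billion to $45.4 billion over the four years from 2012 to 2016 works out to a compound annual growth rate of about 27.6 percent, i.e. “nearly 28 percent”:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

print(round(cagr(17.1, 45.4, 4) * 100, 1))  # → 27.6
```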

“There is no single technology or design model that makes a data center green,” says research director Eric Woods. “In fact, the green data center is connected to the broader transformation that data centers are undergoing—a transformation that encompasses technical innovation, operational improvements, new design principles, changes to the relationship between IT and business, and changes in the data center supply chain.”

In particular, two powerful trends in IT are shaping the evolution of data centers, Woods adds: virtualization and cloud computing. Virtualization, the innovation with the greatest impact on the shape of the modern data center, is also recognized as one of the most effective steps toward improving energy efficiency in the data center. In itself, however, virtualization may not lead to reduced energy costs. To gain the maximum benefits from virtualization, other components of the data center infrastructure will need to be optimized to support more dynamic and higher-density computing environments. Cloud computing, meanwhile, has many efficiency advantages, but new metrics and new levels of transparency are required if its impact on the environment is to be adequately assessed, the report finds.

The report, “Green Data Centers”, explores global green data center trends with regional forecasts for market size and opportunities through 2016. The report examines the impacts of global economic and political factors on regional data center growth, along with newly adopted developments in power and cooling infrastructure, servers, storage, and data center infrastructure management software tools across the industry. The research study profiles key industry players and their strategies for expansion and technology adoption. An Executive Summary of the report is available for free download on the Pike Research website.


Cloud Isn’t Social, It’s Business

Adopting a cloud-oriented business model for IT is imperative to successfully transforming the data center to realize ITaaS.

Much like devops is more about a culture shift than the technology enabling it, cloud is as much or more about shifts in business models as it is about technology. Just as service providers (including cloud providers) need to move toward a business model based on revenue per application (as opposed to revenue per user), enterprise organizations need to look hard at their business model as they begin to move toward a more cloud-oriented deployment model.

While many IT organizations have long since adopted a “service oriented” approach, this approach has focused on the customer, i.e. a department, a business unit, a project. This approach is not wholly compatible with a cloud-based approach, as the “tenant” of most enterprise (private) cloud implementations is an application, not a business entity. As a “provider of services”, IT should consider adopting a more service provider business model view, with subscribers mapping to applications and services mapping to infrastructure services such as rate shaping, caching, access control, and optimization.

By segmenting IT into services, IT can not only more effectively transition toward the goal of ITaaS, but realize additional benefits for both business and operations.

A service subscription business model:

  • Makes it easier to project costs across the entire infrastructure
    Because functionality is provisioned as services, it can more easily be charged for on a pay-per-use model. Business stakeholders can clearly estimate the costs based on usage for not just application infrastructure but network infrastructure as well, providing management and executives with a clearer view of actual operating costs for given projects, and enabling them to essentially line-item veto services based on the projected value added to the business by the project.
  • Easier to justify cost of infrastructure
    Having a detailed set of usage metrics over time makes it easier to justify investment in upgrades or new infrastructure, as it clearly shows how cost is shared across operations and the business. Being able to project usage by applications means being able to tie services to projects in earlier phases and clearly show value added to management. Such metrics also make it easier to calculate the cost per transaction (the overhead, which ultimately reduces profit margins) so that business can understand what’s working and what’s not.
  • Enables business to manage costs over time 
    Instituting a “fee per hour” enables business customers greater flexibility in costing, as some applications may only use services during business hours and only require them to be active during that time. IT that adopts such a business model will not only encourage business stakeholders to take advantage of such functionality, but will offer more awareness of the costs associated with infrastructure services and enable stakeholders to be more critical of what’s really needed versus what’s not.
  • Easier to start up a project/application and ramp up over time as associated revenue increases
    Projects assigned limited budgets that project revenue gains over time can ramp up services that enhance performance or delivery options as revenue increases, more in line with how green-field start-up projects manage growth. If IT operations is service-based, then projects can rely on IT for service deployment in an agile fashion, adding new services rapidly to keep up with demand or, if predictions fail to come to fruition, removing services to keep the project in line with budgets.
  • Enables consistent comparison with off-premise cloud computing
    A service-subscription model also provides a more compatible business model for migrating workloads to off-premise cloud environments – and vice-versa. By tying applications to services – not solutions – the end result is a better view of the financial costs (or savings) of migrating outward or inward, as costs can be more accurately determined based on services required.

The concept remains the same as it did in 2009: infrastructure as a service gives business and application stakeholders the ability to provision and eliminate services rapidly in response to budgetary constraints as well as demand.

That’s cloud, in a nutshell, from a technological point of view. While IT has grasped the advantages of such technology and its promised benefits in terms of efficiency, it hasn’t necessarily taken the next step and realized that the business model has a great deal to offer IT as well.

One of the more common complaints about IT is its inability to prove its value to the business. Taking a service-oriented approach to the business and tying those services to applications allows IT to prove its value and costs very clearly through usage metrics. Whether actual charges are incurred or not is not necessarily the point; it’s the ability to clearly associate specific costs with delivering specific applications that makes the model a boon for IT.


