Category archive: Data center

Gartner Data Center Conference: Success in the Cloud & Software Defined Technologies

I just returned from the Gartner Data Center conference in Vegas and wanted to convey some of the highlights of the event. This was my first time attending a Gartner conference, and I found it pretty refreshing, as they take an agnostic approach to all of their sessions, unlike a typical vendor-sponsored event such as VMworld, EMC World, or Cisco Live. Most of the sessions I attended were around cloud and software defined technologies. Below, I’ll bullet out what I consider to be highlights from a few of the sessions.

Building Successful Private/Hybrid Clouds –


  • Gartner sees the majority of private cloud deployments as unsuccessful. Here are some common reasons for that…
    • Focusing on the wrong benefits. It’s not all about cost in $$; in the cloud, true ROI is measured in agility rather than dollars and cents
    • Doing too little. A virtualized environment does not equal a private cloud. You must have automation, self-service, monitoring/management, and metering in place at a minimum.
    • Doing too much. Putting applications/workloads in the private cloud that don’t make sense to live there. Not everything is a fit, nor can everything take full advantage of what cloud offers.
    • Failure to change operational models. It’s like being trained to drive an 18 wheeler then getting behind the wheel of a Ferrari and wondering why you ran into that tree.
    • Failure to change the funding model. You must, at a minimum, have a showback mechanism so the business will understand the costs; otherwise they’ll just throw the kitchen sink into the cloud.
    • Using the wrong technologies. Make sure you understand the requirements of your cloud and choose the proper vendors/technologies. Incumbents may not necessarily be the right choice in all situations.
  • Three common use cases for building out a private cloud include outsourcing commodity functions, renovating infrastructure and operations, and innovation/experimentation…but you have to have a good understanding of each of these to be successful (see above).
  • There is a big difference between doing cloud to drive bottom line (cost) savings vs top line (innovation) revenue expansion. Know ‘why’ you are doing cloud!
  • On the hybrid front, it is very rare today to see fully automated environments that span private and public as the technology still has some catching up to do. That said, it will be reality within 24 months without a doubt.
  • In most situations, only 20-50% of all applications/workloads will (or should) live in the cloud infrastructure (private or public) with the remaining living in traditional frameworks. Again, not everything can benefit from the goodness that cloud can bring.

Open Source Management Tools (Free or Flee) –


  • Organizations with fewer than 2,500 employees typically look at open source tools to save on cost, while larger organizations are interested in competitive advantage and improved security.
  • The largest adoption is in the areas of monitoring and server configuration, while cloud management platforms (e.g. OpenStack), networking (e.g. OpenDaylight), and containers (e.g. Docker) are gaining momentum.
  • When considering one of these tools, it’s very important to look at how active the community is to ensure the tool stays relevant.
  • Where is open source being used in the enterprise today? Almost half (46%) of deployments are departmental, while only about 12% of deployments are considered strategic to the overall organization.
  • The best slide I saw at the event, which pretty much sums up open source…

[Slide image: Gartner Data Center Conference]

If this makes you excited, then maybe open source is for you.  If not, then perhaps you should run away!

3 Questions to Ask Your SDN Vendor –

  • First, a statistic…organizations that fail to properly integrate their virtualization and networking teams will see a 3x longer mean time to resolution (MTTR) of issues than those that do properly integrate the teams
  • There are approximately 500 true production SDN deployments in the world today
  • The questions to ask…
    • How do you prevent network congestion caused by dynamic workload placement?
    • How do you connect to bare metal (non-virtualized) servers?
    • How do you integrate management and visibility between the underlay and overlay?
  • There are numerous vendors in this space; it’s not just VMware and Cisco.
  • Like private cloud, you really have to do SDN for the right reasons to be successful.
  • Last year at this conference, there were 0 attendees who indicated they had investigated or deployed SDN. This year, 14% of attendees responded positively.


If you’re interested in a deeper discussion around what I heard at the conference, let me know and I’ll be happy to continue the dialogue.


By Chris Ward, CTO. Follow Chris on Twitter @ChrisWardTech. You can also download his latest whitepaper on data center transformation.


Riding on the Cloud – The Business Side of New Technologies

For the last couple of years “The Cloud” has been a buzzword all over the business and IT world.

What is The Cloud? Basically, it is the ability to use remote servers to handle your processing, storage and other IT needs. In the olden days you only had the resources that you physically had on your computer; these days that’s not the case. You can “outsource” resources from another computer in a remote location and use them anywhere. This has opened so many doors for the world of business and has helped bring new companies onto the internet.

Why? Because of how much it reduces the cost of being on the internet. A server is a costly piece of equipment and not everybody can afford it. Between the initial cost and upkeep of the hardware, you could easily spend a few thousand pounds every year.

The cloud has given rise to the Virtual Private Server, which gives you all the benefits of an actual server without the hefty price tag. A hosting company will rent out a piece of its processing capacity to your company and create a server environment for you. You only pay for what you use and you don’t have to worry about things like hardware failure, power costs or having room for a couple of huge server racks.

But what if your business grows? One of the biggest advantages of the cloud is that it can grow along with your business and your needs. It’s highly scalable and flexible, so if you ever need some extra storage or extra bandwidth, it’s a really easy fix that does not require you to purchase new equipment.

Since your own personal business cloud is by definition a remote solution, you can access it from anywhere and everywhere as long as you have an internet connection. Want to make changes to your server? You can probably do it without leaving your house, even from the comfort of your own bed.

The same applies to your staff. If anyone ever needs to work from home or from another machine that’s not their work computer, all of the important files and resources they could possibly need can be hosted in the cloud, making those files accessible from anywhere. If someone’s office computer breaks there’s a backup and no data is lost.

The Cloud also makes sharing files between members of your staff a lot easier. Since none of the files are hosted on a local machine, everybody has access to the files they require. Files update in real time, applications are shared and you can create a business environment that’s exponentially more effective.

Of course, the cloud still offers security and access control so you can keep track of who can see which files. A good cloud services provider also provides protection against malware and other security risks, to make sure that no pesky interlopers get into your files.

If your business is growing and so are your IT needs, then the cloud is an option worth exploring. Embrace the future, adopt new technologies and take your business to the next level.

Amazon, Google: a Battle to Dominate the Cloud

The cloud is just a vast mass of computers connected to the internet, on which people or companies can rent processing power or data storage as they need it.

All the warehouses of servers that run the whole of the internet, all the software used by companies the world over, and all the other IT services companies hire others to provide, or which they provide internally, will be worth some $1.4 trillion in 2014, according to Gartner Research—some six times Google and Amazon’s combined annual revenue last year.

Eventually, all the world’s business IT needs will be delivered as a service, like electricity; you won’t much care where it is generated, as long as the supply is reliable.

Way back in 2006, Amazon had the foresight to start renting out portions of its own, already substantial cloud—the data centers on which it was running Amazon.com—to startups that wanted to pay for servers by the hour, instead of renting them individually, as was typical at the time. Because Amazon was so early, and so aggressive—it has lowered prices for its cloud services 42 times since first unveiling them, according to the company—it first defined and then swallowed whole the market for cloud computing and storage.

Even though Amazon’s external cloud business is much bigger than Google’s, Google still has the biggest total cloud infrastructure—the most servers and data centers. Tests of Amazon’s and Google’s clouds show that by one measure at least—how fast data is transferred from one virtual computer to another inside the cloud—Google’s cloud is seven to nine times faster than Amazon’s.

The question is, is Amazon’s lead insurmountable?


Take a Photo Tour of Facebook’s Amazing Cold Storage Datacenter

There’s a fascinating photo tour of Facebook’s Oregon data center on ReadWrite today.

Facebook (arguably) owns more data than God.

But how to store a cache of user data collected at the scale of omniscience? If you’re Facebook, just build another custom-crafted server storage locker roughly the size of the USS Abraham Lincoln on top of a breezy plateau in the Oregon high desert. The company’s new Prineville, Ore., data center employs an ultra-green “cold storage” plan designed from the ground up to meet its unique—and uniquely huge—needs.

The piece also includes useful links on the tech behind the data center, shingled drive tech, and the Open Compute project that led to the innovations on display here.

What’s Missing from Today’s Hybrid Cloud Management – Leveraging Brokerage and Governance

By John Dixon, Consulting Architect, LogicsOne

Recently GreenPages and our partner Gravitant hosted a webinar on Cloud Service Broker technology. Senior Analyst Dave Bartoletti gave a preface to the webinar with Forrester’s view on cloud computing and emerging technology. In this post we’ll give some perspective on highlights from the webinar. In case you missed it, you can also watch a replay of the webinar here: http://bit.ly/12yKJrI

Ben Tao, Director of Marketing for Gravitant, kicks off the discussion by describing the traditional data center sourcing model. Two key points here:

  1. Sourcing decisions, largely based on hardware selection, are separated by years
  2. In a cloud world, sourcing decisions can be separated by months or even weeks


The end result is that cloud computing can drive the benefit of a multi-sourcing model for IT, where sourcing decisions are made in close proximity to the use of services. This has the potential to enable organizations to adjust their sourcing decisions more often to best suit the needs of their applications.

Next, Dave Bartoletti describes the state of cloud computing and the requirements for hybrid cloud management. The core of Dave’s message is that the use of cloud computing is on the rise, and that cloud is being leveraged for more and more complex applications – including those with sensitive data.

Dave’s presentation is based on the statement, “what IT must do to deliver on the hybrid cloud promise…”

Some key points here:

  • Cloud is about IT services first, infrastructure second
  • You won’t own the infrastructure, but you’ll own the service definitions; take control of your own service catalog
  • The cloud broker is at the center of the SaaS provider, cloud VAR, and cloud integrator
  • Cloud brokers can accelerate the cloud application lifecycle


Dave does an excellent job of explaining the things that IT must do in order to deliver on the hybrid cloud promise. Often, conversations on cloud computing are purely about technology, but I think there’s much more at stake. For example, Dave’s first two points above really resonate with me. You can also read “cloud computing” as ITIL-style sourcing. Cloud computing puts service management back in focus. “Cloud is about IT services first, infrastructure second,” and “You won’t own the infrastructure […]” also suggest that cloud computing may influence a shift in the makeup of corporate IT departments – fewer core technologists and more “T-shaped” individuals. So-called T-shaped individuals have knowledge and experience with a broad set of technologies (the top of the “T”), but have depth in one or more areas like programming, Linux, or storage area networking. My prediction is that there will still be a need for core technologists, but that some of them may move into roles to do things like define customer-facing IT services. For this reason, our CMaaS product also includes optional services to deal with this type of workforce transformation. This is an example of a non-technical decision that must be made when considering cloud computing. Do you agree? Do you have other non-technical considerations for cloud computing?

Chris Ward, CTO of LogicsOne, then dives in to the functionality of the Cloud Management as a Service, or CMaaS offering. The GreenPages CMaaS product implements some key features that can be used to help customers advance to the lofty points that Dave suggests in his presentation. CMaaS includes a cloud brokerage component and a multi-cloud monitoring and management component. Chris details some main features from the brokerage tool, which are designed to address the key points that Dave brought up:

  • Collaborative Design
  • Customizable Service Catalog
  • Consistent Access for Monitoring and Management
  • Consolidated Billing Amongst Providers
  • Reporting and Decision Support

Chris then gives an example from the State of Texas and the benefits that they realized from using cloud through a broker. Essentially, with the growing popularity of e-voting and the use of the internet as an information resource on candidates and issues, the state knew the demand for IT resources would skyrocket on election day. Instead of throwing away money to buy extra infrastructure to satisfy a temporary surge in demand, Texas utilized cloud brokerage to seamlessly provision IT resources in real time from multiple public cloud sources to meet the variability in demand.

All in all, the 60-minute webinar is time well spent and gives clients some guidance to think about cloud computing in the context of a service broker.

To view this webinar in its entirety, click here, or download this free whitepaper to learn more about hybrid cloud management.


Cloud Data Center Draw is Often Power

An interesting trend reported on by James Glanz of the New York Times: ample access to electrical power is driving up data center rents across the river in New Jersey, to levels higher than those of the trophy skyscrapers in Manhattan.

…electrical capacity is often the central element of lease agreements, and space is secondary.

Read “Landlords Double As Energy Brokers”.

Measurement, Control and Efficiency in the Data Center

Guest Post by Roger Keenan, Managing Director of City Lifeline

To control something, you must first be able to measure it.  This is one of the most basic principles of engineering.  Once there is measurement, there can be feedback.  Feedback creates a virtuous loop in which the output changes to better track the changing input demand.  Improving data centre efficiency is no different.  If efficiency means better adherence to the demand from the organisation for lower energy consumption, better utilisation of assets, faster response to change requests, then the very first step is to measure those things, and use the measurements to provide feedback and thereby control.

So what do we want to control? We can divide it into three: the data centre facility, the use of compute capacity and the communications between the data centre and the outside world. The balance of importance among these three will differ from organisation to organisation.

There are all sorts of data centres, ranging from professional colocation data centres to the server-cupboard-under-the-stairs found in some smaller enterprises. Professional data centre operators focus hard on the energy efficiency of the total facility. The most common measure of energy efficiency is PUE, defined originally by the Green Grid organisation. It is simple: the total energy going into the facility divided by the energy used to power the electronic equipment. Although it is often abused – a nice example is the data centre that powered its facility lighting over PoE (Power over Ethernet), thus making the lighting part of the ‘electronic equipment’ – it is widely understood and used world-wide. It provides visibility and focus for the process of continuous improvement. It is easy to measure at facility level, as it only needs monitors on the mains feeds into the building and on the UPS outputs.
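As a back-of-the-envelope illustration, here is a minimal sketch of that calculation; the meter readings are invented for this example, not taken from any real facility:

    # Minimal PUE sketch using hypothetical meter readings (figures in kW).
    # A real facility would pull these from its mains-feed and UPS-output monitors.

    total_facility_power_kw = 1200.0   # measured at the mains feeds into the building
    it_equipment_power_kw = 800.0      # measured at the UPS outputs feeding the IT load

    # PUE = total facility energy / energy used to power the electronic equipment
    pue = total_facility_power_kw / it_equipment_power_kw

    print(f"PUE = {pue:.2f}")  # 1.50 here; 1.0 would mean every watt reaches the IT load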

Power efficiency can be managed at multiple levels: at the facility level, at the cabinet level and at the level of ‘useful work’. This last is difficult to define, let alone measure, and there are various working groups around the world trying to decide what ‘useful work’ means. It may be compute cycles per kW, revenue generated within the organisation per kW or application run time per kW, and it may be different for different organisations. Whatever it is, it has to be properly defined and measured before it can be controlled.

DCIM (data centre infrastructure management) systems provide a way to measure the population and activity of servers and particularly of virtualised machines.  In large organisations, with potentially many thousands of servers, DCIM provides a means of physical inventory tracking and control.  More important than the question “how many servers do I have?” is “how much useful work do they do?”  Typically a large data centre will have around 10% ghost servers – servers which are powered and running but which do not do anything useful.  DCIM can justify its costs and the effort needed to set it up on those alone.
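To make that concrete, here is a minimal sketch of the kind of check a DCIM or monitoring export makes possible; the field names and the utilisation thresholds below are assumptions for illustration, not features of any particular DCIM product:

    # Hypothetical sketch: flag candidate "ghost" servers from exported utilisation data.
    # Assumes each record carries 30-day averages; the thresholds are invented cut-offs.

    servers = [
        {"name": "app-01",    "avg_cpu_pct": 42.0, "avg_net_kbps": 900.0},
        {"name": "legacy-07", "avg_cpu_pct": 0.8,  "avg_net_kbps": 2.0},
        {"name": "db-03",     "avg_cpu_pct": 18.5, "avg_net_kbps": 450.0},
    ]

    CPU_THRESHOLD_PCT = 2.0    # assumed cut-off for "not doing anything useful"
    NET_THRESHOLD_KBPS = 5.0

    ghosts = [
        s["name"]
        for s in servers
        if s["avg_cpu_pct"] < CPU_THRESHOLD_PCT and s["avg_net_kbps"] < NET_THRESHOLD_KBPS
    ]

    print(f"Candidate ghost servers to investigate: {ghosts}")  # ['legacy-07']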

Virtualisation brings its own challenges. Virtualisation has taken us away from the days when a typical server operated at 10-15% efficiency, but we are still a long way from most data centres operating efficiently with virtualisation. Often users will over-specify server capacity for an application, using more CPUs, memory and storage than really needed, just to be on the safe side and because they can. Users see the data centre as a sunk cost – it’s already there and paid for, so we might as well use it. This creates ‘VM Sprawl’. The way out of this is to measure, quote and charge. If a user is charged for the machine time used, that user will think more carefully about wasting it and about piling contingency allowance upon contingency allowance ‘just in case’, leading to inefficient stranded capacity. And if the user is given a real-time quote for the costs before committing to them, they will think harder about how much capacity is really needed.
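A real-time quote does not need to be sophisticated to change behaviour. Here is a minimal sketch of the idea; the rate card is invented for illustration and would in practice come from the organisation’s own cost model:

    # Hypothetical real-time quote for a requested virtual machine.
    # The hourly rates below are invented for illustration only.

    RATE_PER_VCPU_HOUR = 0.03       # per vCPU per hour (assumed)
    RATE_PER_GB_RAM_HOUR = 0.01     # per GB of memory per hour (assumed)
    RATE_PER_GB_DISK_HOUR = 0.0002  # per GB of storage per hour (assumed)

    def monthly_quote(vcpus: int, ram_gb: int, disk_gb: int, hours: float = 730.0) -> float:
        """Estimate the monthly cost of the requested capacity."""
        hourly = (
            vcpus * RATE_PER_VCPU_HOUR
            + ram_gb * RATE_PER_GB_RAM_HOUR
            + disk_gb * RATE_PER_GB_DISK_HOUR
        )
        return hourly * hours

    # Show the requester the difference between a "safe side" spec and a right-sized one.
    print(f"8 vCPU / 64 GB RAM / 1 TB disk : {monthly_quote(8, 64, 1000):.2f} per month")
    print(f"2 vCPU / 8 GB RAM / 100 GB disk: {monthly_quote(2, 8, 100):.2f} per month")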

Data centres do not exist in isolation.  Every data centre is connected to other data centres and often to multiple external premises, such as retail shops or oil rigs.  Often those have little redundancy and may well not operate efficiently.  Again, to optimise efficiency and reliability of those networks, the first requirement is to be able to measure what they are doing.  That means having a separate mechanism at each remote point, connected via a different communications network back to a central point.  The mobile phone network often performs that role.

Measurement is the core of all control and efficiency improvement in the modern data centre.  If the organisation demands improved efficiency (and if it can define what that means) then the first step to achieving it is measurement of the present state of whatever it is we are trying to improve.  From measurement comes feedback.  From feedback comes improvement and from improvement comes control.  From control comes efficiency, which is what we are all trying to achieve.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier neutral colocation data centre in Central London, as managing director in 2005.  His main responsibilities are to oversee the management of all business and marketing strategies and profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he fully established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.

Colocation: 55+ living for your IT equipment

I recently sat on a planning call with an extremely smart and agreeable client. We had discussed a modest “data center” worth of equipment to host the environment he’s considering putting into production. I asked the simple enough question of “where are you going to deploy this gear?” I have to admit not being very surprised when he responded: “Well, I’ve cleaned out a corner of my office.” Having spent some early days of my IT career working in a server closet, I knew that if the hum of the equipment fans didn’t get to him quickly, the heat output would for sure. This is not an uncommon conversation. Clearly the capital expense of building out a “data center” onsite was not an appealing topic. So, if building isn’t an option, why not rent?

In a similar vein, not too far back I watched several “senior” members of my family move into 55+ communities after years of resisting. Basically, they did a “capacity planner” and realized the big house was no longer needed. They figured somebody else could worry about the landscaping, snow plowing and leaky roofs. The same driving forces should have many IT pros considering a move into a colocation facility.

The opportunities to move into a hosted data center (colo facility) are plentiful today. You simply don’t have as much gear any longer (assuming you’re mostly virtualized). Your desire to “do it all” yourself has waned (let someone else worry about keeping the lights on and network connected). The added bonus of providing redundant network paths, onsite security and almost infinite expansion are driving many “rental” conversations today. Colos are purpose-built facilities which are ideal for core data center gear such as servers, storage (SANs), routers and core switches, to name a few.  Almost all of them have dual power feeds, backup battery systems and generators. HVAC (heating, ventilation, and air-conditioning) units keep appropriate environmental conditions for the operation of this critical equipment.

Many businesses don’t fully realize just how much power is required to operate a data center. The energy bills for the cooling component alone can leave many IT managers, well, frosted. Even so, the need to see the healthy green status lights blinking is like a digital comfort blanket. Speaking with many IT execs, we hear over and over, “This was the best move we could have made.” From our own experience, we’ve seen our internal IT team shift focus to strategic initiatives and end user support.

While it is certainly not a one-size-fits-all endeavor, there is something for most organizations when it comes to colo. Smaller organizations with one rack of equipment have seen tremendous advantages as have clients approaching the “enterprise” size with dozens of cabinets of gear. Redundancy, security, cost control, predictable budgets and 7x24x365 support are all equally attractive reasons to move into a “colo.” Call it a “colominium” if you will. Colo could be the right step toward a more efficient and effective IT existence.


Wired Profiles a New Breed of Internet Hero, the Data Center Guru

The whole idea of cloud computing is that mere mortals can stop worrying about hardware and focus on delivering applications. But cloud services like Amazon’s AWS, and the amazingly complex hardware and software that underpins all that power and flexibility, do not happen by chance. This Wired article about James Hamilton paints a picture of a new breed of folks the Internet has come to rely on:

…with this enormous success comes a whole new set of computing problems, and James Hamilton is one of the key thinkers charged with solving such problems, striving to rethink the data center for the age of cloud computing. Much like two other cloud computing giants — Google and Microsoft — Amazon says very little about the particulars of its data center work, viewing this as the most important of trade secrets, but Hamilton is held in such high regard, he’s one of the few Amazon employees permitted to blog about his big ideas, and the fifty-something Canadian has developed a reputation across the industry as a guru of distributed systems — the kind of massive online operations that Amazon builds to support thousands of companies across the globe.

Read the article.


Contrarian: Building, Colocating Your Own Servers, No Cloud Involved

Jeff Atwood has a great post at Coding Horror about his penchant for building his own servers to rack at a colo. He compares it to the Amazon EC2 alternative, all the while admitting it’s pretty much apples and oranges.

I want to make it clear that building and colocating your own servers isn’t (always) crazy, it isn’t scary, heck, it isn’t even particularly hard. In some situations it can make sense to build and rack your own servers, provided …

  • you want absolute top of the line server performance without paying thousands of dollars per month for the privilege
  • you are willing to invest the time in building, racking, and configuring your servers
  • you have the capital to invest up front
  • you desire total control over the hardware
  • you aren’t worried about the flexibility of quickly provisioning new servers to handle unanticipated load
  • you don’t need the redundancy, geographical backup, and flexibility that comes with cloud virtualization

It’s worth reading in its own right, but also because he does a pretty good job of outlining the pros and cons of cloud versus self-hosting. It’s also a good thing to remember that no matter how “virtual” we get, there’s still gotta be a bunch of hardware somewhere to make it all go.