Tag Archives: DataCenter

Nokia creates foundations for launching telcos into the cloud

Nokia’s Data Center Services division has unveiled plans to launch mobile telcos into the cloud. The plans include a custom-made multivendor infrastructure to support its transformation consulting services.

These services aim to help telecoms operators re-shape their people and processes for the new cloud-centric comms industry. In a statement, Nokia explained that its new managed cloud operations aim to make the introduction of multi-vendor hybrid operations, cloud data centres and virtualised network functions (VNFs) as painless as possible.

The networking vendor is expanding its cloud services portfolio with the launch of three professional services. Nokia Data Center services will offer development and operations (DevOps) services, with a brief to help telcos use cloud technology to launch services as quickly as possible.

Secondly, the Nokia Cloud Transformation Consulting services aim to help operators make the fullest use of telco cloud opportunities. Nokia said it is using expertise from the Bell Labs Consulting practice to support operators and enterprises in addressing cloud transformation.

Finally, the Managed Cloud Operations managed service will help telcos run hybrid operations across hardware, cloudware and application layer management, without the build-up of information silos that has traditionally hamstrung telcos turned comms service providers.

To support the data centre services, Nokia is creating a design facility in the UK, supported by delivery depots across the globe. To complement its services portfolio, Nokia has also invited partners, such as global supply chain company Sanmina, to focus on Data Center services.

The service is needed because 62% of operators are very likely to rely on network equipment providers for data centre transformation, according to Heavy Reading research figures quoted by Nokia.

Meanwhile, in a related announcement Nokia said it will simplify networks with a new Shared Data Layer, a central point of storage for all the data used by virtualised network functions. This could free VNFs from the need to manage their own data, creating so-called stateless VNFs that are simpler and can expand or contract rapidly.

The result is a more flexible, programmable network for 5G that can minimise latency and maximise network speeds in order to cater for the Internet of Things (IoT). The network also becomes more reliable as a failed stateless VNF can instantly activate and provide access to the shared data to maintain seamless service continuity.
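
To make the stateless VNF idea concrete, here is a minimal Python sketch (the class names and in-memory store are hypothetical illustrations, not Nokia's Shared Data Layer API) of a network function that keeps its session state in a shared store, so any instance can serve any session and a failed instance can be replaced without losing state:

    # Minimal sketch of a "stateless" VNF backed by a shared data layer.
    # SharedDataLayer and PacketGatewayVNF are illustrative names only.

    class SharedDataLayer:
        """Stands in for the central store holding all VNF session state."""
        def __init__(self):
            self._sessions = {}

        def get(self, session_id):
            return self._sessions.get(session_id, {})

        def put(self, session_id, state):
            self._sessions[session_id] = state

    class PacketGatewayVNF:
        """A VNF instance that keeps no local state between requests."""
        def __init__(self, name, data_layer):
            self.name = name
            self.data_layer = data_layer

        def handle(self, session_id, nbytes):
            state = self.data_layer.get(session_id)          # fetch state
            state["bytes"] = state.get("bytes", 0) + nbytes  # update it
            self.data_layer.put(session_id, state)           # write it back
            return state["bytes"]

    sdl = SharedDataLayer()
    vnf_a = PacketGatewayVNF("vnf-a", sdl)
    vnf_a.handle("sess-1", 1000)

    # vnf_a "fails"; a fresh instance picks up the same session seamlessly
    vnf_b = PacketGatewayVNF("vnf-b", sdl)
    print(vnf_b.handle("sess-1", 500))  # 1500: the state survived the failover

Because every instance reads and writes the same store, scaling out, scaling in or replacing a failed instance is just a matter of starting or stopping workers.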

Google signs five deals for green powering its cloud services

Cloud service giant Google has announced five new deals to buy 781MW of renewable energy from suppliers in the US, Sweden and Chile, according to a report on Bloomberg.

The deals add up to the biggest purchase of renewable energy ever made by a company that is not a utility, according to Michael Terrell, Google’s principal of energy and global infrastructure.

Google will buy 200 megawatts of power from Renewable Energy Systems Americas’ Bluestem wind project in Oklahoma. Another 200 megawatts from the same US state will come from the Great Western wind project run by Electricite de France. In addition, Google will power its cloud services with 225 megawatts of wind power from independent power producer Invenergy.

Google’s data centres and cloud services in South America could become carbon free when the 80 megawatts of solar power that it has ordered from Acciona Energia’s El Romero farm in Chile comes online.

In Scandinavia the cloud service provider has agreed to buy 76 megawatts of wind power from Eolus Vind’s Jenasen wind project to be built in Vasternorrland County, Sweden.
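
For reference, the five purchases quoted above do add up to the headline figure; a quick check in Python:

    # The five renewable energy deals quoted in the article, in megawatts
    deals_mw = {
        "RES Americas Bluestem (Oklahoma, wind)": 200,
        "EDF Great Western (Oklahoma, wind)": 200,
        "Invenergy (US, wind)": 225,
        "Acciona Energia El Romero (Chile, solar)": 80,
        "Eolus Vind Jenasen (Sweden, wind)": 76,
    }
    print(sum(deals_mw.values()))  # 781, matching the announced 781MW total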

In July, Google committed to tripling its purchases of renewable energy by 2025. At the time, it had contracts to buy 1.1 GW of sustainably sourced power.

Google’s first ever green power deal came in 2010, when it agreed to buy power from a wind farm in Iowa. Last week, it announced plans to purchase 61 megawatts from a solar farm in North Carolina.

90 Second Tech News Recap for the Week of 2/3/2014


Get your weekly technology news recap for the week of 1/27 in 90 seconds!


http://www.youtube.com/watch?v=BXOIAD_gFik


Download our whitepaper to learn how corporate IT can manage its environment as if it were “deployed to the cloud,” so that if and when different parts of the environment are deployed to the cloud, day-to-day management remains unchanged regardless of where it is running: on premises or at a service provider.

The 2013 Tech Industry – A Year in Review

By Chris Ward, CTO, LogicsOne

As 2013 comes to a close and we begin to look forward to what 2014 will bring, I wanted to take a few minutes to reflect back on the past year.  We’ve been talking a lot about that evil word ‘cloud’ for the past 3 to 4 years, but this year put a couple of other terms up in lights, including Software Defined X (Datacenter, Networking, Storage, etc.) and Big Data.  Like ‘cloud,’ these two newer terms can easily mean different things to different people, but, put in simple terms, in my opinion there are some generic definitions which apply in almost all cases.  Software Defined X is essentially the concept of taking any ties to specific vendor hardware out of the equation and providing a central, vendor-agnostic point of configuration (vendor agnostic, of course, except for the vendor providing the Software Defined solution :) ).  I define Big Data simply as the ability to find a very specific and small needle of data in an incredibly large haystack within a reasonably short amount of time. I see both of these technologies becoming more widely adopted in short order, with Big Data technologies already well on the way.
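
As a toy illustration of that “needle in a haystack” definition (purely a sketch, not any particular Big Data product; the file and field names are made up), the point is being able to pull one specific record out of a very large volume of data quickly, which real Big Data stacks do by distributing the same scan across many machines:

    # Toy "needle in a haystack": stream a huge CSV and return one record
    # without loading the whole file into memory. Hadoop, Spark and friends
    # parallelize this same kind of scan across a cluster.
    import csv

    def find_needle(path, needle_id):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):       # reads one row at a time
                if row["transaction_id"] == needle_id:
                    return row
        return None

    # Hypothetical usage:
    # match = find_needle("transactions.csv", "TX-0042")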

As for our friend ‘the cloud,’ 2013 did see a good amount of growth in consumption of cloud services, specifically in the areas of Software as a Service (SaaS) and Infrastructure as a Service (IaaS).  IT has adopted a ‘virtualization first’ strategy over the past 3 to 4 years when it comes to bringing any new workloads into the datacenter, and I anticipate we’ll begin to see a ‘SaaS first’ approach adopted in short order, if it isn’t out there already.  However, I can’t necessarily say the same for ‘IaaS first.’  While IaaS is a great solution for elastic computing, I still see most usage confined to application development or super-large scale-out applications (think Netflix).  The mass adoption of IaaS for simply forklifting existing workloads out of the private datacenter and into the public cloud hasn’t happened.  Why? My opinion is that for traditional applications neither the cost nor the operational model makes sense, yet.

In relation to ‘cloud,’ I did see a lot of adoption of advanced automation, orchestration, and management tools, and thus an uptick in ‘private clouds.’  There are some fantastic tools now available both commercially and open source, and I absolutely expect this adoption trend to continue, especially in the Enterprise space.  Datacenters with a vast amount of change occurring, whether in production or test/dev, can greatly benefit from these solutions. However, this comes with a word of caution: just because you can doesn’t mean you should.  I say this because I have seen several instances where customers have wanted to automate literally everything in their environments. While that may sound good on the surface, I don’t believe it’s always the right thing to do.  There are times when a human touch remains the best way to go.

As always, there were some big-time announcements from major players in the industry. Here are some posts we did with news and update summaries from VMworld, VMware Partner Exchange, EMC World, Cisco Live and Citrix Synergy. Here’s an additional video from September where Lou Rossi, our VP, Technical Services, explains some new Cisco product announcements. We also hosted a webinar (which you can download here) about VMware’s Horizon Suite, as well as a webinar on our own Cloud Management as a Service offering.

The past few years have seen various predictions about the unsustainability of Moore’s Law, which states that processors will double in computing power every 18-24 months, and 2013 was no exception.  The latest prediction is that by 2020 we’ll reach the 7nm mark and the exponential growth Moore’s Law describes will stall.  The interesting part is that this prediction is not based on technical limitations but rather economic ones: getting below the 7nm mark will be extremely expensive from a manufacturing perspective and, hey, 64K of RAM is all anyone will ever need, right?  :)
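
For a rough sense of what that 18-24 month doubling cadence implies between 2013 and the predicted 2020 wall, here is a quick back-of-the-envelope calculation (my own figures, not part of the original prediction):

    # Rough projection of Moore's Law over the 7 years from 2013 to 2020
    months = 7 * 12                       # 84 months
    for doubling_period in (18, 24):      # the commonly quoted range
        growth = 2 ** (months / doubling_period)
        print(f"{doubling_period}-month doubling: ~{growth:.0f}x by 2020")
    # Prints roughly 25x at 18 months and 11x at 24 months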

Probably the biggest news of 2013 was the revelation that the National Security Agency (NSA) had undertaken a massive program and seemed to be capturing every packet of data coming in or out of the US across the Internet.   I won’t get into any political discussion here, but suffice it to say this is probably the largest example of ‘big data’ that exists currently.  It also has large potential ramifications for public cloud adoption: security and data integrity have been two of the major roadblocks to adoption, so it certainly doesn’t help that customers may now be concerned about the NSA eavesdropping on everything going on within public datacenters.  It is estimated that public cloud providers may lose as much as $22-35B over the next 3 years as customers slow adoption because of this.  The only good news, at least for now, is that it’s very doubtful the NSA or anyone else on the planet has the means to actually mine anywhere close to 100% of the data they are capturing.  However, like anything else, it’s probably only a matter of time.

What do you think the biggest news/advancements of 2013 were?  I would be interested in your thoughts as well.

Register for our upcoming webinar on December 19th to learn how you can free up your IT team to be working on more strategic projects (while cutting costs!).


Moving Our Datacenter: An IT Director’s Take

An Interview with Matt Mock, IT Director, GreenPages Technology Solutions

Journey to the Cloud’s Ben Stephenson sat down with GreenPages’ IT Director Matt Mock to discuss GreenPages’ recent datacenter move.

Ben: Why did GreenPages decide to move its datacenter?

Matt: Our contract was up, so we started evaluating new facilities, looking for a robust, redundant facility to house our equipment and meet specific objectives around our business continuity plan. In addition, we were looking for cost savings.

Ben: Where did you move the datacenter to and from?

Matt: Geographically, we stayed close by. We moved it from Charlestown, MA, a couple of miles down the road to downtown Boston. Staying within a close area certainly made the physical move quicker and easier.

Ben: What were the benefits of moving the datacenter?

Matt: Ultimately, we were able to get into an extremely redundant and secure datacenter that provided us with cost savings. Furthermore, the datacenter is a large carrier hotel, which gives us additional savings on circuit costs. With this move we’re better able to deliver to our customers 24/7.

{Register for our upcoming webinar on 11/7 on key announcements from VMworld 2013}

Ben: Tell us about the process of the move. What had to happen ahead of time to ensure a smooth transition?

Matt: The most important parts were planning, testing, and communication. We put together an extremely detailed plan that broke every phase of the move down into 15-minute increments. We devised teams for the specific phases, each with its own communication plan. We also devised a backup emergency plan in the event that we hit any issues the night of the move.

Ben: What happened the night of the move?

Matt: The night of the move we leveraged the excellent facilities at Markley to run a command center led by one of our project managers. In the room, we had multiple conference bridges running the different work streams to ensure smooth and constant communication. We also used Huddle, our internal collaboration tool, to communicate, as our internal systems were down during the move.

Ben: Anything else you had to factor in?

Matt: Absolutely. The same night of the move we were also changing both voice and data providers at three different locations, which added another layer of complexity. We had to work closely with our new providers to ensure a smooth transition. Because we have a 24/7 Managed Services division at GreenPages, we needed to continue to offer customers the same support during the move that we do on a day-to-day basis.

Ben: Did you experience unexpected events during the move? If so, what were they and how did you handle them?

Matt: With any complex IT project you’re going to experience unexpected events. A couple that we experienced were some hardware failures and unforeseen configuration issues. Fortunately, our detailed plan accounted for these issues, and we were able to address them with the teams on hand and remain on schedule.

Ben: You used an all GreenPages team to accomplish this, right?

Matt: Correct. We did not use any outside vendors for this move – all services were rendered by the GreenPages team. Last time we used outside providers and this time we had a much better experience. I’m in the unique position where I have access to an entire team of project managers and technical resources that made doing this possible. In fact, this is something we offer our customers (from consulting to project management to the actual move) so our team is very, very good at it.

Ben: What advice do you have for other IT Directors who are considering moving their datacenters?

Matt: Detailed planning and constant communication are critical: have a plan in place for every possible scenario, and have an emergency plan ready so that in the middle of the night you’re not scrambling to figure out how to address unforeseen issues.

Ben: Congratulations on the successful move. See you Monday after the Patriots crush your Steelers.

Would you like to learn more about how GreenPages can help you with your datacenter needs?

Disaster Recovery in the Cloud, or DRaaS: Revisited

By Randy Weis

The idea of offering Disaster Recovery services has been around as long as SunGard or IBM BCRS (Business Continuity & Resiliency Services). Disclaimer: I worked for the company that became IBM Information Protection Services in 2008, a part of BCRS.

It seems inevitable that Cloud Computing and Cloud Storage should have an impact on the kinds of solutions that small, medium and large companies would find attractive and that would fit their requirements. Those cloud-based DR services are not taking the world by storm, however. Why is that?

Cloud infrastructure seems perfectly suited for economical DR solutions, yet I would bet that none of the people reading this blog has found a reasonable selection of cloud-based DR services in the market. That is not to say that there aren’t DR “As a Service” companies, but the offerings are limited. Again, why is that?

Much like Cloud Computing in general, the recent emergence of enabling technologies was preceded by a relatively long period of commercial product development. In other words, virtualization of computing resources promised “cloud” long before we actually could make it work commercially. I use the term “we” loosely…Seriously, GreenPages announced a cloud-centric solutions approach more than a year before vCloud Director was even released. Why? We saw the potential, but we had to watch for, evaluate, and observe real-world performance in the emerging commercial implementations of self-service computing tools in a virtualized datacenter marketplace. We are now doing the same thing in the evolving solutions marketplace around derivative applications such as DR and archiving.

I looked into helping put together a DR solution leveraging cloud computing and cloud storage offered by one of our technology partners that provides IaaS (Infrastructure as a Service). I had operational and engineering support from all parties in this project, and we ran into several significant obstacles that do not seem to have been resolved in the industry.

Bottom line:

  1. A DR solution in the cloud, involving recovering virtual servers in a cloud computing infrastructure, requires administrative access to the storage as well as the virtual computing environment (like being in vCenter).
  2. Equally important, if the solution involves recovering data from backups, is the requirement that there be a high speed, low latency (I call this “back-end”) connection between the cloud storage where the backups are kept and the cloud computing environment. This is only present in Amazon at last check (a couple of months ago), and you pay extra for that connection. I also call this “locality.”
  3. The Service Provider needs the operational workflow to do this. Everything I worked out with our IaaS partners was a manual process that went way outside normal workflow and ticketing. The interfaces for the customer to access computing and storage were separate and radically different. You couldn’t even see the capacity you had consumed in cloud storage without opening a ticket. From the SP side, there was no mechanism for the customer to trigger or track the DR tasks the provider would need to perform. When you get to billing, forget it. Everyone admitted that this was not planned for at all in the cloud computing and operational support design.

Let me break this down:

  • Cloud Computing typically has high speed storage to host the guest servers.
  • Cloud Storage typically has “slow” storage, on separate systems and sometimes separate locations from a cloud computing infrastructure. This is true with most IaaS providers, although some Amazon sites have S3 and EC2 in the same building and they built a network to connect them (LOCALITY).

Scenario 1: Recovering virtual machines and data from backup images

Scenario 2: Replication based on virtual server-based tools (e.g. Veeam Backup & Replication) or host-based replication

Scenario 3: SRM, array or host replication

Scenario 1: Backup Recovery. I worked hard on this with a partner. This is how it would go:

  1. Back up VMs at customer site; send backup or copy of it to cloud storage.
  2. Set up a cloud computing account with an AD server and a backup server.
  3. Connect the backup server to the cloud storage backup repository (first problem)
    • Unless the cloud computing system has a back-end connection at LAN speed to the cloud storage, this is a showstopper. It would take days to do this without a high degree of locality (see the rough restore-time calculation at the end of this scenario).
    • The provider’s proposed solutions when asked about this:
      • Open a trouble ticket to have the backups dumped to USB drives, shipped or carried to the cloud computing area and connected into the customer workspace. Yikes.
      • We will build a back end connection where we have both cloud storage and cloud computing in the same building—not possible in every location, so the “access anywhere” part of a cloud wouldn’t apply.

  4. Restore the data to the cloud computing environment (second problem)

    • What is the “restore target”? If the DR site were a typical hosted or colo site, the customer backup server would have the connection and authorization to recover the guest server images to the datastores, and the ability to create additional datastores. In vCenter, the Veeam server would have the vCenter credentials and access to the vCenter storage plugins to provision the datastores as needed and to start up the VMs after restoring/importing the files. In a Cloud Computing service, your backup server does NOT have that connection or authorization.
    • How can the customer backup server get the rights to import VMs directly into the virtual VMware cluster? The process to provision VMs in most cloud computing environments is to use your templates, their templates, or “upload” an OVF or other type of file format. This won’t work with a backup product such as Veeam or CommVault.

  5. Recover the restored images as running VMs in the cloud computing environment (third problem), tied to item #4.

    • Administrative access to provision datastores on the fly and to turn on and configure the machines is not there. The customer (or GreenPages) doesn’t own the multitenant architecture.
    • The use of vCloud Director ought to be an enabler, but the storage plugins, and rights to import into storage, don’t really exist for vCloud. Networking changes need to be accounted for and scripted if possible.
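
To put some rough numbers behind the locality problem in step 3, here is an illustrative restore-time calculation (the backup size and link speeds are my own assumptions, not measurements from any provider):

    # Why locality matters: time to restore the same backup set over
    # different links. All figures are illustrative assumptions.
    backup_tb = 10                                    # size of the backup set
    backup_bits = backup_tb * 8 * 10**12              # terabytes -> bits

    links_bps = {
        "100 Mbps WAN (no locality)": 100 * 10**6,
        "1 Gbps metro link": 1 * 10**9,
        "10 Gbps back-end LAN": 10 * 10**9,
    }

    for name, bps in links_bps.items():
        hours = backup_bits / bps / 3600
        print(f"{name}: ~{hours:.1f} hours")
    # ~222 hours (over 9 days) at 100 Mbps versus ~2.2 hours at 10 Gbps,
    # which is why a restore without a back-end connection "would take days"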

Scenario 2: Replication by VM. This has cost issues more than anything else.

    • If you want to replicate directly into a cloud, you will need to provision the VMs and pay for their resources as if they were “hot.” It would be nice if there were a lower “DR Tier” for pricing: if the VMs are for DR, you don’t get charged full rates until you turn them on and use them for production (see the rough cost sketch after this list).
      • How do you negotiate that?
      •  How does the SP know when they get turned on?
      • How does this fit into their billing cycle?
    • If it is treated as a hot site (or warm), then the cost of the DR site equals that of production until you solve these issues.
    • Networking is an issue, too, since you don’t want to turn that on until you declare a disaster.
      • Does the SP allow you to turn up networking without a ticket?
      • How do you handle DNS updates if your external access depends on root server DNS records being updated—really short TTL? Yikes, again.
    • Host-based replication (e.g. WANsync, VMware)—you need a host you can replicate to. Your own host. The issues are cost and scalability.
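
As a rough sketch of the cost problem raised above (the VM count, rate and “DR tier” discount are all assumptions for illustration, not any provider’s actual pricing):

    # Cost of keeping warm DR replicas in the cloud: full "hot" rates versus
    # a hypothetical discounted DR tier. All figures are assumptions.
    vms = 50
    hot_rate_per_vm_month = 150.0      # assumed full production rate, USD
    dr_tier_discount = 0.70            # assumed 70% discount while VMs are idle

    hot_cost = vms * hot_rate_per_vm_month * 12
    dr_tier_cost = hot_cost * (1 - dr_tier_discount)

    print(f"Hot-replica pricing:   ${hot_cost:,.0f}/year")      # $90,000
    print(f"Hypothetical DR tier:  ${dr_tier_cost:,.0f}/year")  # $27,000
    # Without some kind of DR tier, the standby environment costs as much as
    # production, which is exactly the economic objection raised in Scenario 2.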

Scenario 3: SRM. This should be baked into any serious DR solution, from a carrier or service provider, but many of the same issues apply.

    • SRM based on array replication has complications. Technically, this can be solved by the provider by putting (for example) EMC VPLEX and RecoverPoint appliances at every customer production site so that you can replicate from dissimilar storage to the SP IDC. But they need to set up this many-to-one relationship on arrays that are part of the cloud computing solution, or at least a DR cloud computing cluster. Most SPs don’t have this. There are other brands/technologies to do this, but the basic configuration challenge remains: many-to-one replication into a multi-tenant storage array.
    • SRM based on VMware host replication has administrative access issues as well. SRM at the DR site has to either accommodate multi-tenancy, or each customer gets their own SRM target. Also, you need a host target. Do you rent it all the time? You have to, since you can’t do that in a multi-tenant environment. Cost, scalability, again!
    • Either way, now the big red button gets pushed. Now what?
      • All the protection groups exist on storage and in cloud computing. You are now paying for a duplicate environment in the cloud, not an economically sustainable approach unless you have a “DR Tier” of pricing (see Scenario 2).
      • All the SRM scripts kick in—VMs come up in order in protection groups, IP addresses and DNS are updated, CPU loads and network traffic climb… what is the impact?
      • How does that button get pushed? Does the SP need to push it? Can the customer do it?

These are the main issues as I see them, and there is still more to it. Using vCloud Director is not the same as using vCenter. Everything I’ve described was designed to be used in a vCenter-managed system, not a multi-tenant system with fenced-in rights and networks and shared storage infrastructure. The APIs are not there, and if they were, imagine the chaos and impact of random DR tests on production cloud computing systems not managed and controlled by the service provider. What if a real disaster hit New England and a hundred customers needed to spin up all their VMs in a few hours? They aren’t all in one datacenter, but if one provider that set this up had dozens of such customers, that is a huge hit. Providers need to have all the capacity in reserve, or syndicate it like IBM or SunGard do. That is the equivalent of thin-provisioning your datacenter.

This conversation, as many I’ve had in the last two years, ends somewhat unsatisfactorily with the conclusion that there is no clear solution—today. The journey to discovering or designing a DRaaS is important, and it needs to be documented, as we have done here with this blog and in other presentations and meetings. The industry will overcome these obstacles, but the customer must remain informed and persistent. The goal of an economically sustainable DRaaS solution can only be achieved by market pressure and creative vendors. We will do our part by being your vigilant and dedicated cloud services broker and solution services provider.


Guest Post: A Wrinkle in the IT Universe

By Kai Gray, VP of Operations at Carbonite

I feel like tectonic plates are shifting beneath the IT world. I’ve been struggling to put my finger on what it is that is making me feel this way, but slowly things have started to come into focus. These are my thoughts on how cloud computing has forever changed the economics of IT by shifting the balance of power.

The cloud has fundamentally changed business models; it has shifted time-to-market, entry points and who can do what. These byproducts of massive elasticity are wrapped up in an even greater evolutionary change that is occurring right now: The cloud is having a pronounced impact on the supply chain, which will amount to a tidal wave of changes in the near-term that will cause huge pain for some and spawn incredible innovation and wealth for others. As I see it, the cloud has started a chain of events that will change our industry forever:

1) Big IT used to rule the datacenter. Not long ago, large infrastructure companies were at the heart of IT. The EMCs, Dells, Ciscos, HPs and IBMs were responsible for designing, sourcing, supplying and configuring the hardware behind nearly all of the computing and storage power in the world. Every server closet was packed full of name-brand equipment, and the datacenter was no different. A quick tour of any datacenter would – and still will – showcase the wares of these behemoths of the IT world. These companies developed sophisticated supply and sales channels that produced great-margin businesses built on some very good products. This stretched from the OEMs and ODMs that produced the bent metal to the VARs and distributors who then sold the finished products. Think of DeBeers, the diamond mine owner and distributor. What are the differences between a company like HP and DeBeers? Not many, but the cloud began to change all that.

2) Cloud Computing. Slowly we got introduced to the notion of cloud computing. We started using products that put the resource away from us, and (slowly) we became comfortable with not needing to touch the hardware. Our email “lived” somewhere else, our backups “lived” somewhere else and our computing cycles “lived” somewhere else. With each incremental step, our comfort levels rose until it stopped being a question and turned into an expectation. This process set off a dramatic shift in supply chain economics.

3) Supply Chain Economics. The confluence of massive demand and near-free products (driven by a need to expand customer acquisition) changed how people had to think about infrastructure. All of a sudden, cloud providers had to think about infrastructure in terms of true scalability. This meant acquiring and managing massive amounts of infrastructure at the lowest possible cost. This was, and is, fundamentally different from the way the HPs and Dells and Ciscos thought about the world. All of a sudden, those providers were unable to address the needs of this new market in an effective way. This isn’t to say that the big IT companies can’t, just that it’s hard for them. It’s hard to accept shrinking margins and “openness.”  The people brave enough to promote such wild ideas are branded as heretics and accused of rocking the boat (even as the boat is sinking). Eventually the economic and scale requirements forced cloud providers to tackle the supply chain and go direct.

4) Going Direct. As cloud providers developed strong supply chain relationships and built up their competencies around hardware engineering and logistics, they became more ingrained with the ODMs (http://en.wikipedia.org/wiki/Original_design_manufacturer) and other primary suppliers. Huge initiatives came into existence from the likes of Amazon, Google and Facebook, focused on driving down the cost of everything. For example, Google began working directly with Intel and AMD to develop custom chipsets that allow it to run at efficiency levels never before seen, and Facebook started the Open Compute Project, which seeks to open-source design schematics that were once locked in vaults.

In short, the supply chain envelope gets pushed by anyone focused on cost and large-scale.

…and here it gets interesting.

Cloud providers now account for more supplier revenue than the Big IT companies. Or, maybe better stated, cloud providers account for more hope of revenue (HoR) than Big IT. So, what does that mean? It means that the Big IT companies no longer receive the biggest discounts available from the suppliers. The biggest discounts are going to the end users and the low-margin companies built solely on servicing the infrastructure needs of cloud providers. This puts Big IT at even more of a competitive disadvantage than it already was. The cycle is now in full swing. If you think this isn’t what is happening, just look at HP and Dell right now. They don’t know how to interact with a huge set of end users without caving in their margins and cannibalizing their existing businesses. Some will choose to amputate while others will go down kicking, but margin declines and openness of information will take their toll with excruciating pain.

What comes of all this? I don’t know. But here are my observations:

1) Access to the commodity providers (ODMs and suppliers) is relatively closed. To be at all interesting to ODMs and suppliers you have to be doing things at enough volume that it is worthwhile for them to engage with you. That will change. The commodity suppliers will learn how to work in different markets, and there will be huge opportunity for companies that help them get there. When access to ODMs and direct suppliers gets opened up to traditional Enterprise companies, so they can truly and easily take advantage of commodity hardware through direct access to suppliers, then, as they say, goodnight.

2) Companies that provide some basic interfaces between the suppliers and the small(er) consumers will do extremely well. For me, this means configuration management of some sort, but it could be anything that helps accelerate the linkage between supplier and end user. The day will come when small IT shops have direct access to suppliers and are able to custom-build hardware in the same way that huge cloud providers do today. Some might argue that there is no need for small shops to do this — that they can use other cloud providers, that it’s too time-consuming to do it on their own, and that their needs are not unique enough to support such a relationship. Yes, yes, and yes… for right now. Make it easy for companies to realize the cost and management efficiencies of direct supplier access, and I don’t know of anyone who wouldn’t take you up on that. Maybe this is the evolution of the “private cloud” concept, but all I know is that, right now, the “private cloud” talk is being dominated by the Big IT folks, so the conflict of interest is too great.

3) It’s all about the network. I don’t think the network is being addressed in the same way as other infrastructure components. I almost never hear about commodity “networks,” yet I constantly hear about commodity “hardware.” I’m not sure why. Maybe Cisco and Juniper and the other network providers are good at deflecting, or maybe it’s too hard a problem to solve, or maybe the cost isn’t a focal point (yet). Whatever the reason, I think this is a huge problem/opportunity. Without the network, everything else can just go away. Period. The entire conversation driving commodity-whatever is predicated on delivering lots of data to people at very low cost. The same rules that drive commoditization need to be applied to the network, and right now I only know of one or two huge companies that are even thinking in these terms.

There are always multiple themes in play at any given time that, when looking back, we summarize as change. People say that the Internet changed everything. And, before that, the PC changed everything. What we’re actually describing is a series of changes that happened over a period of time that have the cumulative effect of making us say, “How did we ever do X without Y?” I believe that the commoditization of infrastructure is just one theme among the change that will be described as Cloud Computing. I contend, however, the day is almost upon us when everybody, from giant companies to the SMB, will say, “Why did we ever buy anything but custom hardware directly from the manufacturer?”

This post originally appeared on kaigray.com.  It does not necessarily reflect the views or opinions of GreenPages Technology Solutions.

To learn more about GreenPages’ Cloud Computing Practice, click here.

Is data really safe in a cloud service?

Many users see the cloud as something infallible, where their data will never disappear and their service will always be online. But is that really true?

Contrary to many opinions, the term “cloud services” has nothing to do with the term “service guarantee.” The quality and guarantee of a service do not depend on its name; they depend directly on the quality, expertise and investment of the provider offering it.

Continue reading Is data really safe in a cloud service?

High availability: hardware virtualization or operating system virtualization?

Virtualization since the 1960s

Virtualization is one of the most complex topics we encounter today when designing data and application platforms. It covers practically every possibility and requirement needed to build a data center, whether a large DataCenter with thousands of servers, an SDC with dozens, or a single server installed in a company’s offices.

It is the foundation of high-availability service and manages, in one way or another, all the components that make up each of the main servers or nodes, as well as the interoperability between them: it handles the memory, the CPU, the communications and the disk access of each node. This means adding one or more “control and administration” layers on top of each component it manages, and when we talk about high availability we also have to keep in mind that an additional software layer is a new source of potential problems.

There are several types of virtualization, but broadly they can be grouped into two large families: hardware virtualization and operating system virtualization. Continue reading High availability: hardware virtualization or operating system virtualization?