Tag Archive: cloud

Cloud Corner Series -The Networking & Storage Challenges Around Clustered Datacenters



www.youtube.com/watch?v=fRl-KDveZQg

In this new episode of Cloud Corner, Director of Solutions Architecture Randy Weis and Solutions Architect Nick Phelps sit down to talk about clustered datacenters from both a networking and storage perspective. They discuss the challenges, provide some expert advice, and talk about what they think will be in store for the future. Check it out and enjoy!


Mind the Gap – Transitioning Your IT Management Methodology

At the recent GreenPages’ Summit, I presented on a topic that I believe will be key to our (for those of us in IT management) success as we re-define IT in the “cloud” era.  In the past, I have tried to define the term “cloud,” and have described it as anything from “an ecosystem of compute capabilities that can be delivered upon demand from anywhere to anywhere” to “IT in 3D.”  In truth, its definition is not really that important, but how we enable the appropriate use of it in our architectures is.

One barrier to adopting cloud as a part of an IT strategy is how we will manage the resources it provides us.  In theory, cloud services are beyond our direct control.  But are they beyond our ability to evaluate and influence?

IT is about enablement.  Enabling our customers or end users to complete the tasks that drive our businesses forward is our true calling.  Enabling the business to gain intelligence from its data is our craft.    So, we must strive to enable, where appropriate and effective, the use of cloud services as part of our mission.  What then is the impact to IT management?

There are the obvious challenges.  Cloud services are provided by, and managed by, those whom we consume them from.  Users utilizing cloud services may do so outside of IT control.  And, what happens when data and services step into that void where we cannot see?

In order to manage effectively in this brave new world of enablement, we must start to transition our methodologies and change our long-standing assumptions of what is critical.  Sure, we still have to manage and maintain our own datacenters (unless you go 100% service provider).  However, our concept of a datacenter has to change.  For one thing, datacenters are not really “centers” anymore. Once you leverage external resources as part of your overall architecture, you step outside of the hardened physical/virtual platforms that exist within your own facilities.  A datacenter is now “a flexible, secure and measurable compute utility comprised of delivery mechanisms, consumption points, and all connectivity in between.”

And so, we need to change how we manage our IT resources.  We need to expand our scope and visibility to include both the cloud services that are part of our delivery and connectivity mechanisms, and the end points used to consume our data and services.  This leads to a fundamental shift in daily operations and management.  Going forward, we need to be able to measure our service effectiveness end to end, even if in between they travel through systems that are not our own to devices we did not provision.

This is a transition, not a light-switch event.  Over the next few blogs I hope to focus some attention on several of the gaps that will exist as we move forward.  As a sneak peek, consider these statements:

Consumerization of technical innovation

Service-oriented management focus

Quality of Experience

“Greatest Generation” of users

Come on by and bring your imagination.  There is not one right or wrong answer here, but a framework for us to discuss what changes are coming like a speeding train, and how we need to mind the gap if we don’t want to be run over.

Cloudscape 2012: WhatsUp at GreenPages? Journey to success!

Guest Post from Caitlin Buxton, Director of North American Channel Sales, WhatsUp Gold Network Management Division of Ipswitch, Inc.

The WhatsUp Gold team attended the GreenPages Annual Technology Summit this week on the scenic New Hampshire/Maine Seacoast. This event was one of the most valuable technology summits we have participated in this year. The three-day event showcased all of GreenPages’ exemplary talent, skill, and professionalism that the organization brings to the IT community for both clients and vendor partners.

During the Partner Pavilion, we exhibited WhatsUp Gold’s Suite of Network Management and Log Management solutions and showed attendees how these solutions install, discover, and map network connected assets in minutes. We also showcased the powerful SNMP, WMI and SSH monitoring, alerting and notification capabilities, and web-based management which gives organizations a complete picture of an entire network infrastructure in real-time.

The entire GreenPages staff worked very closely with our team both in pre-event planning and during the event to make sure our investment and time was well spent by engaging with their clients, learning their challenges, and understanding how our solutions can make life easier. The GreenPages Account Managers were fantastic in providing insight into their clients’ needs and facilitating productive conversations.

I was also impressed by how many clients raved about the incredible value they receive from GreenPages. Repeatedly, I was told how hard the GreenPages team works to understand their individual business needs and helps to deliver solutions and information specific to their needs. They are always looking out for their customers’ best interests.

This is not surprising given that 100 IT Executives with limited time and budgets would not have travelled from all over the country for this event if they did not get significant value from it. However, it was refreshing to hear directly from the customers. It validates the pride I have in our GreenPages partnership knowing such a quality organization is on our team representing the WhatsUp Gold family of solutions.

Well done GreenPages! Thank you!

The Operational Consistency Proxy

#devops #management #webperf Cloud makes more urgent the need to consistently manage infrastructure and its policies regardless of where that infrastructure might reside


While the potential for operational policy (performance, security, reliability, access, etc.) diaspora is often mentioned in conjunction with cloud, it remains a very real issue within the traditional data center as well. Introducing cloud-deployed resources and applications only serves to exacerbate the problem.

F5 has long offered a single-pane-of-glass management solution for F5 systems with Enterprise Manager (EM) and recently introduced significant updates that increase its scope into the cloud and broaden its capabilities to simplify the increasingly complex operational tasks associated with managing security, performance, and reliability in a virtual world.

AUTOMATE COMMON TASKS

The latest release of F5 EM includes enhancements to its ability to automate common tasks such as configuring and managing SSL certificates, managing policies, and enabling/disabling resources. This assists in automating provisioning and de-provisioning processes, as well as what many might consider mundane – and yet critical – maintenance-window operations.

Updating policies, too, assists in maintaining operational consistency across all F5 solutions – whether in the data center or in the cloud. This is particularly important in the realm of security, where control over access to applications is often far less under the control of IT than even the business would like. Combining F5’s cloud-enabled solutions such as F5 Application Security Manager (ASM) and Access Policy Manager (APM) with the ability for F5 EM to manage such distributed instances in conjunction with data center deployed instances provides for consistent enforcement of security and access policies for applications regardless of their deployment location. For F5 ASM specifically, this extends to Live Signature updates, which can be downloaded by F5 EM and distributed to managed instances of F5 ASM to ensure the most up-to-date security across enterprise concerns.

The combination of centralized management with automation also ensures rapid response to activities such as the publication of CERT advisories. Operators can quickly determine from the centralized inventory the impact of such a vulnerability and take action to redress the situation.

INTEGRATED PERFORMANCE METRICS

F5 EM also includes an option to provision a Centralized Analytics Module. This module builds on F5’s visibility into application performance based on its strategic location in the architecture – residing in front of the applications for which performance is a concern. Individual instances of F5 solutions can be directed to gather a plethora of application performance related statistics, which is then aggregated and reported on by application in EM’s Centralized Analytics Module.

These metrics enable capacity planning and troubleshooting, and can be used in conjunction with broader business intelligence efforts to understand the performance of applications and their related impact, whether those applications are in the cloud or in the data center. This global monitoring extends to F5 device health and performance, to ensure infrastructure services scale along with demand.

Monitoring includes:

  • Device Level Visibility & Monitoring
  • Capacity Planning
  • Virtual Level & Pool Member Statistics
  • Object Level Visibility
  • Near Real-Time Graphics
  • Reporting

In addition to monitoring, F5 EM can collect actionable data upon which thresholds can be determined and alerts can be configured.

Alerts include:

  • Device status change
  • SSL certificate expiration
  • Software install complete
  • Software copy failure
  • Statistics data threshold
  • Configuration synchronization
  • Attack signature update
  • Clock skew

When thresholds are reached, triggers send an alert via email, SNMP trap or syslog event. More sophisticated alerting and inclusion in broader automated, operational systems can be achieved by taking advantage of F5’s control-plane API, iControl. F5 EM is further able to proxy iControl-based applications, eliminating the need to communicate directly with each BIG-IP deployed.
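The threshold-and-alert pattern described above (collect a metric, compare it against a configured threshold, fire a notification) can be sketched generically. Everything here is an illustrative assumption for exposition, not the actual F5 EM or iControl API:

```python
def check_thresholds(metrics, thresholds, notify):
    """Minimal sketch of threshold-based alerting.

    `metrics` and `thresholds` map metric name -> numeric value; `notify`
    is any callable that delivers the alert (in a real system: email,
    an SNMP trap, or a syslog event, as the post describes).
    """
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value >= limit:
            alert = f"ALERT: {name}={value} breached threshold {limit}"
            notify(alert)
            alerts.append(alert)
    return alerts

# Only the CPU metric breaches its threshold here, so one alert fires.
sent = []
check_thresholds({"cpu_pct": 95, "disk_pct": 40},
                 {"cpu_pct": 90, "disk_pct": 85},
                 sent.append)
```

In a real deployment the `notify` callable would be swapped for whatever transport the operations team already consumes.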

OPERATIONAL CONSISTENCY PROXY

By acting as a centralized management and operational console for BIG-IP devices, F5 EM effectively proxies operational consistency across the data center and into the cloud. Its ability to collect and aggregate metrics provides a comprehensive view of application and infrastructure performance across the breadth and depth of the application delivery chain, enabling more rapid response to incidents whether performance or security related.

F5 EM ensures consistency in both infrastructure configuration and operational policies, and actively participates in automation and orchestration efforts that can significantly decrease the pressure on operations when managing the critical application delivery network component of a highly distributed, cross-environment architecture.

Happy Managing!



News on Windows 2012, Office 365 and Canadian Police

I had the pleasure of attending the Microsoft Worldwide Partner Conference in Toronto, Canada earlier this month, where some 16,000 attendees squeezed into the Air Canada Centre for Microsoft’s morning keynote speeches.  That’s the most that arena has seen inside its snug confines since Vince Carter was dunking on opposing players, or, I guess, since Vince Carter could dunk, period.  It was a week Microsoft spent making some big announcements, covering some important changes, and showcasing some new products, “Eh.”

The first major announcement concerned Microsoft’s Office 365 cloud solution, which later this year will be available for purchase under the Open Licensing Program.  Office 365 was released last summer and has been available solely for purchase online; although partners like GreenPages would assist with quoting the subscription, customers ultimately purchased the monthly subscription directly from Microsoft, which could be painstaking and confusing.  Now that Office 365 will be available through volume licensing, we’ll be able to invoice the customer directly, as we would with an on-premise product, making the process much simpler for you and giving you another avenue to purchase the subscription.  Most likely it will be available through the Open Value program; details are still being ironed out, so be on the lookout, as we’ll provide the latest information on when this will be available through volume licensing.

The other news is the announcement of Windows 8, set to be released to manufacturing in August with general availability in October.  Microsoft is very excited about this new release, saying it is the most anticipated release they’ve had since XP.  They showcased some pretty nifty touchscreen laptops with Windows 8 Professional loaded on, which I would have loved to bring back to the States, and I would have, assuming the Royal Canadian Mounted Police didn’t finally catch up with me at the border.

The biggest news is the upcoming release of Windows 2012, which is scheduled for General Availability in early September and will offer new enhancements centered around Hyper-V. Along with the new features there are some major licensing changes, the loss of an edition (nice knowing you, Enterprise), and upgrade paths if you have current Software Assurance.

The first change with Windows 2012 is that it will move to a more consistent licensing model in which every edition has the exact same common features; however, the editions have been reduced.  With Windows 2012 there will only be two editions: Standard and Datacenter. Windows Enterprise, on the other hand, has been cut from the team and will not be at training camp when Windows 2012 debuts.  So you’re probably wondering: if Standard and Datacenter have the exact same features and can perform the same tasks, then what is the difference between the two?  It’s all in the licensing, but before we get into the licensing, let’s check out the new features in Windows 2012 Standard edition which previously were only available in the premium editions.

Both Windows Standard and Datacenter will include these features, among others:

-Windows Server Failover Clustering

-BranchCache Hosted Cache Server

-Active Directory Federated Services

-Additional Active Directory Certificate Services capabilities

-Distributed File Services

-DFS-R Cross-File Replication

Along with the new features there is a new licensing model for Windows 2012.  Both Windows 2012 Standard and Datacenter will now be licensed by the processor; the days of per-server licensing are gone, and the biggest reason for that is virtualization.  What differentiates the two editions is the number of Virtual Machines (VMs) each edition is entitled to run.  A Standard edition license will entitle you to run up to two VMs on up to two processors.  A Datacenter edition license will entitle you to run an unlimited number of VMs on up to two processors.  Each license of Standard and Datacenter covers two processors, so, for example, if you have a quad-processor host, you would purchase 2 x Two-Processor licenses.  The Two-Processor license cannot be split up, meaning you can’t put one processor license on one server and the other on another, nor can you combine a Standard and Datacenter license on the same host.  The processor license does not include CALs; Windows CALs would still have to be purchased separately.
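The license-counting rules just described can be sketched as a small helper. The function name and the "stacking" behavior for Standard (buying extra licenses to cover more VMs, as mentioned later in this post) are my own illustrative reading of the rules:

```python
import math

def win2012_licenses(edition, processors, vms):
    """Two-processor licenses needed for a single host.

    Standard: each two-processor license entitles you to run 2 VMs.
    Datacenter: each two-processor license allows unlimited VMs.
    Licenses cover two processors and cannot be split across hosts.
    """
    proc_licenses = math.ceil(processors / 2)  # every 2 CPUs need a license
    if edition == "datacenter":
        return proc_licenses
    # Standard: may need extra ("stacked") licenses to cover the VM count
    vm_licenses = math.ceil(vms / 2)
    return max(proc_licenses, vm_licenses)

# A quad-processor host running 2 VMs needs 2 Standard licenses (CPU-bound);
# a two-processor host running 6 VMs needs 3 Standard licenses (VM-bound);
# a quad-processor Datacenter host needs 2 licenses regardless of VM count.
```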

OK, now that I have dropped this knowledge on you, what should you expect moving forward?  Let’s talk about pricing and what this new model is going to cost you.  A Two-Processor license of Datacenter will retail for $4,809, which breaks down to $2,405 a CPU.  The current retail price for a Windows 2008 R2 Datacenter per-processor license is $2,405, so nothing has changed there.  For Windows 2012 Standard, a Two-Processor license retails for $882.  Those of you accustomed to purchasing Windows 2008 R2 Enterprise for $2,358 MSRP in order to use the 4 VMs that came with it will notice that the price to get 4 VMs of Windows 2012 (2 x Two-Processor Windows 2012 Standard = $1,764) is actually less than what Windows 2008 R2 Enterprise cost.  The issue will be for those who need Windows Standard for a physical server.  Since there is no per-server Windows 2012 license, you’ll have to purchase the Two-Processor license.  Currently, Windows 2008 R2 Standard edition runs for $726 retail, so you will be paying more to use Windows on physical servers.
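The pricing arithmetic above checks out with the numbers quoted in this post:

```python
# MSRP figures quoted in this post (two-processor licenses unless noted)
DATACENTER_2012_2PROC = 4809   # = $2,405 per CPU, same as 2008 R2 Datacenter
STANDARD_2012_2PROC = 882
ENTERPRISE_2008R2 = 2358       # included rights to 4 VMs
STANDARD_2008R2 = 726          # per-server license

# Cost of 4 VM entitlements on one host under Windows 2012 Standard:
four_vms_2012 = 2 * STANDARD_2012_2PROC  # two stacked licenses = 4 VMs

# 2 x $882 = $1,764, cheaper than the old Enterprise route,
# while a physical-only server gets more expensive ($882 vs $726).
```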

Once Windows 2012 is released, you’ll still be able to use prior versions, which is known as downgrade rights.  Windows 2012 Datacenter edition can downgrade to any prior version or lower edition.  Windows 2012 Standard edition gives you rights to downgrade to any prior version of Standard or Enterprise edition.

In addition, if you have current Software Assurance (SA) on your Windows 2008 R2 license you are entitled to Windows 2012.  If you have Software Assurance on Datacenter edition you will be entitled to Windows 2012 Datacenter edition.  Today a Datacenter edition license covers one processor and a Datacenter 2012 license will cover two processors, so for every two current Datacenter licenses with Software Assurance, you will receive one Windows 2012 Datacenter edition license.  If you have Software Assurance on Enterprise edition, you will be entitled to receive 2 x Two-Processor Standard 2012 edition licenses, so that you still have coverage for 4 VMs.  Lastly, if you have Software Assurance on Standard edition you’ll receive one Windows 2012 Standard edition license for each Standard edition license you own.

As you’re taking this news in, there are a few things I’d recommend considering.  First, if you’re looking to purchase Windows over the next couple of months prior to Windows 2012’s release, you should look at purchasing it with Software Assurance, because that will give you new-version rights to Windows 2012 once it ships.  Keep in mind you don’t have to load Windows 2012 right away, but having Software Assurance will give you access when you decide to. Also, there may be instances where you need to add VMs to your host, specifically those running Windows Standard, and the only way to add more VMs is to purchase additional Windows Standard licenses.  Second, if you think you’ll be adding a substantial number of VMs in the future but don’t want to invest in Datacenter today, you can purchase Windows Standard with Software Assurance through these participating license programs: Open Value, Select, and Enterprise Agreement.  By doing so you will be eligible to “Step-Up” your Standard license to Datacenter.  Step-Up is Microsoft’s term for an upgrade.  This Step-Up license will allow you to upgrade from your Standard edition license to Datacenter edition, thus providing you unlimited VMs on that host.  Again, the Standard license would have to have current Software Assurance and be purchased through the aforementioned licensing programs.

Obviously this is big news that will create many more questions, and we’re here to assist and guide you through the purchase process, so feel free to reach out to your GreenPages Account Executive for more details.

Automation & Orchestration Part 1: What’s In A Name? That Which We Call a “Service”…

The phrases “service,” “abstraction,” & “automation & orchestration” are used a lot these days. Over the course of the next few blogs, I am going to describe what I think each phrase means and in the final blog I will describe how they all tie in together.

Let’s look at “service.” To me, when you trim off all the fat, that word means “something (from whom) that provides a benefit to something (to whom).” The first thing that comes to mind when I think of who provides me a service is a bartender. I like wine. They have wine behind the bar. I will pay them the price of a glass + 20% for them to fill that glass & move it from behind the bar to in front of me. It’s all about services these days: Software-as-a-Service, Infrastructure-as-a-Service, and Platform-as-a-Service. Professional services. Service level agreements. No shirt, no shoes, no service.

Within a company, there are many people working together to deliver a service. Some to external people & some to internal people. I want to examine an internal service because those tend to be much more loosely defined & documented. If a company sells an external service to a customer, chances are that service is very well defined, because that company needs to describe in very clear terms to the customer exactly what they are getting when the customer shells out money. If that service changes, careful consideration needs to be paid to what ways that service can add more benefit (i.e., make the company more money) and in what ways parts of that service will change or be removed. Think about how many “Terms of Service & Conditions” pamphlets you get from a credit card company and how many pages each one is.

It can take many, many hours as a consultant to understand a service as it exists in a company today. Typically, the “something” that provides a benefit is the many people who work together to deliver that service. In order to define the service and its scope, you need to break it down into manageable pieces…let’s call them “tasks.” And those tasks can be complex, so you can break those down into “steps.” You will find that each task, with its one or more steps, is usually performed by the same person over and over again. Or, if the task is performed a lot (many times per day), then it can usually be executed by any member of a team and not just a single person. Having the capability internally for more than one person to perform a task also protects the company when Bob in accounting takes a sick day, or when Bob in accounting takes home a pink slip. I’ll throw in a teaser for when I cover automation and orchestration…it would be ideal that not only Bob can do a task, but a computer as well (automation). That also may play into Bob getting a pink slip…but, again, more on that later. For now Bob doesn’t need to update his resume.

A lot of companies have not documented many, if any, of the internal services they deliver. I’m sure there is someone who knows the service from soup to nuts, but it’s likely they don’t know how to do every task (can’t), or may not have the authority or permission to do it (shouldn’t). Determining who in a company performs what task(s) can be a big undertaking in and of itself. And then, once you find Bob (sorry to pick on you, Bob), it takes a lot of time for him to describe all the steps he performs to complete a task. And once you put it on paper & show Bob, he remembers that he missed a step. And once you’ve pieced it all together and Bob says, “Yup, that about covers it,” you ask Bob what happens when something goes wrong and he looks at you and says, “Oh man, where do I begin?”

That last part is key. When things go well I call it the “Happy Day Scenario.” But things don’t always go well (ask the Yankees after the 2004 season) and just as, if not more, important in understanding a service is to know what to do when the Bob hits the fan. This part is almost never documented. Documentation is boring to lots of people and it’s hard enough for people to capture what the service *should* do let alone what it *could* do if something goes awry. So it’s a challenge to get people to recall and also predict what could go wrong. Documenting and regurgitating the steps of a business service “back” to the company is a big undertaking and very valuable to that company. Without knowing what Bob does today, it’s extremely hard to tell him how he can do it better.
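The service → task → step breakdown described above can be pictured as a simple data model. Everything here (class names, fields, the example service) is an illustrative sketch of the idea, not a formal methodology:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    owners: list                         # more than one owner protects against Bob's sick day
    steps: list = field(default_factory=list)
    failure_steps: list = field(default_factory=list)  # the rarely documented "what if" part

@dataclass
class Service:
    name: str
    tasks: list = field(default_factory=list)

    def single_points_of_failure(self):
        """Tasks only one person knows how to perform."""
        return [t.name for t in self.tasks if len(t.owners) == 1]

# A toy internal service with one under-staffed task:
onboarding = Service("new-hire onboarding", tasks=[
    Task("create accounts", owners=["Bob"],
         steps=["create AD user", "assign mailbox"]),
    Task("provision laptop", owners=["Bob", "Alice"],
         steps=["image machine", "ship to hire"]),
])
# onboarding.single_points_of_failure() flags "create accounts"
```

Simply enumerating a company's services this way, including the `failure_steps`, is much of the documentation value the post describes.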

Fun with Neologism in the Cloud Era

Having spent the last several blog posts on more serious considerations about cloud computing and the new IT era, I decided to lighten things up a bit.  The term “cloud” has bothered me from the first time I heard it uttered, as the concept and definition are as nebulous as, well, a cloud.  In the intervening years, while thoroughly boring my wife and friends with shop talk about the “cloud,” I came to realize that in order for cloud computing to become mainstream, “it” needs to have some way to translate to the masses.

Neologism is the process of creating new words using existing words, or combinations of existing words, to form a more descriptive term.  In our industry neologisms have been used extensively, although many of us do not realize how these terms got coined.  For example, the word “blog” is a combination of “web” and “log.”  “Blog” was formed over time as the lexicon was adopted.  It began with a new form of communicating across the Internet, known as a web log.  “Web log” became “we blog” simply by moving the space between the words one character to the left.  Now, regardless of who you talk to, the term “blog” is pretty much a fully formed concept.  Similarly, the term “Internet” is a combination of “inter” (between) and “network,” hence meaning between networks.

Today, the term “cloud” has become so overused that confusion reigns (get it?) over everyone.  So, in that spirit, here are a few neologisms of my own:

Cloudable – something that is conducive to leveraging the cloud.  As in: “My CRM application is cloudable” or “We want to leverage data protection that includes cloudable capabilities.”

Cloudiac – someone who is a huge proponent of cloud services.  A combination of “cloud” and “maniac,” as in: “There were cloudiacs everywhere at Interop.”  In the not-too-distant future, we very well may see parallels to the “Trekkie” phenomenon.  Imagine a bunch of middle-aged IT professionals running around in costumes made of giant cotton balls and cardboard lightning bolts.

Cloudologist – an expert in cloud solutions.  Different from a Cloudiac, the Cloudologist actually has experience in developing and utilizing cloud based services.   This will lead to master’s degree programs in Cloudology.

Cloutonomous –  maintaining your autonomy over your systems and data in the cloud.  “I may be in the Cloud but I make sure I’m cloutonomous.”  Could refer to the consumer of the cloud services not being tied into long term services commitments that may inhibit their ability to move services in the event of a vendor failing to hit SLAs.

Cloud crawl – actions related to monitoring or reviewing your various cloud services.  “I went cloud crawling today and everything was sweet.” Off-take of the common “pub crawl,” just not as fun and with no lingering after-effects.

Counter-cloud – a reference to the concept of “counter culture,” which dates back to hippie days of the 60s and 70s.  In this application, it would describe a person or business that is against utilizing cloud services mainly because it is the new trend, or because they feel that it’s the latest government conspiracy to control the world.

Global Clouding – IT’s version of Global Warming, except in this case the world isn’t becoming uninhabitable, IT is just becoming a bit fuzzy around the edges.  What will IT be like with the advent of Global Clouding?

Clackers – Cloud and Hacker.  Clackers are those nefarious, shadowy figures that focus on disruption of cloud services.  This “new” form of hacker will concentrate on capturing data in transit, traffic disruption/re-direction (i.e. DNS Changer anyone?), and platform incursion.

Because IT is so lexicon heavy, building up a stable of Cloud-based terminology is inevitable, and potentially beneficial in focusing the terminology further.  Besides, as Cloudiacs will be fond of saying… “resistance is futile.”

Do you have any Neologisms of your own? I’d love to hear some!

RECAP: HP Discover 2012 Event

If you are going to do something, make it matter.  That was the key phrase that was posted throughout the conference at HP Discover 2012 in Las Vegas a couple weeks ago.  With some of the new announcements, HP did just that.

One of the biggest announcements, in my opinion, is HP Virtual Connect Direct-Attached Fibre Channel Storage for 3PAR. In a nutshell, it helps to reduce your SAN infrastructure by eliminating switches and HBAs: you connect your BladeSystem servers directly to the 3PAR array.  This gives you a single-layer FC storage network.  Since you won’t have a fabric to manage, you can speed up your provisioning process by as much as 2.5X.  Also, by removing the fabric layer, you can reduce latency by up to 55%.

This will allow organizations to reduce costs by eliminating the SAN fabric, saving on both capital expenditure and ongoing operating costs.  It also scales with the “pay as you grow” methodology, allowing you to purchase only what you need.

Complexity is greatly decreased with the wire-once strategy.  If new servers are added to the Blade Chassis, they simply access the storage through the already connected cabling.

Virtual Connect Manager allows for a single pane of glass approach.  It can be used through a web interface or CLI, for those UNIX lovers.

The new trend in IT is Big Data.  Some of the biggest customer challenges are the velocity and volume of data, the large variety and disparate sources of data, and the complex analytics required for maximizing the value of information.  HP introduced Vertica 6, which addresses all of these.

Vertica 6 FlexStore has been expanded to allow access to any data, stored at any location, through any interface.  You can connect to Hadoop File Systems, existing databases, and data warehouses.  You can also access unstructured analysis platforms such as HP/Autonomy IDOL.

It also includes high-performance data analytics for the R statistical tool, natively and in parallel, without the in-memory and single-threaded limitations of R. Vertica 6 has also expanded its C++ SDK to add secure sandboxing of user-defined code.

Workload Management simplifies the user experience by enabling more diverse workloads.  Some users experienced up to a 40X speed increase on their queries.  Regardless of size, Workload Management balances all system resources to meet SLAs.

Vertica 6 software will run on the HP public cloud.  Web and mobile applications generate a ton of data.  This will allow business intelligence to quickly spot any trends that are developing and act accordingly.

Not to be overlooked are the enhancements made to the core components that are already part of the system.

Over the past few years, there has been a big interest in disk to disk backup and deduplication.  HP’s latest solution in this space is the B6200 with StoreOnce Catalyst software.  It has over 50 patents that deliver world record performance of 100TB/hr backups and 40TB/hr restores.  This claims to be 3X and 5X faster, respectively, than the next leading competitor.

The hardware is scalable.  It starts at 48TB (32TB usable) and can grow to 768TB (512TB usable).  With a typical deduplication rate of 20X, the system can provide extended data protection for up to 10PBs.
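The capacity claim above works out arithmetically: at the quoted 20X deduplication ratio, the maximum usable capacity stretches to roughly 10PB of protected data:

```python
usable_tb = 512      # maximum usable capacity quoted for the B6200
dedupe_ratio = 20    # "typical" deduplication rate from the post

# Logical data protected = physical usable capacity x dedupe ratio
protected_tb = usable_tb * dedupe_ratio
protected_pb = protected_tb / 1024   # TB -> PB (binary convention)

# 512 TB x 20 = 10,240 TB, i.e. ~10 PB, matching the post's figure
```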

This is a federated backup solution that allows you to move data from remote sites to multiple datacenters without having to deduplicate it again.  It integrates with HP Data Protector, Symantec NetBackup, and Symantec BackupExec, giving the administrator one console to manage all deduplication, backup, and disaster recovery operations.

The portfolio also includes smaller units for SMB customers. They take advantage of the same type of technologies allowing companies to meet those pesky backup windows.

As a leading HP Partner, GreenPages can assist you with these or any of the products in the HP portfolio.

By Mark Mychalczuk

The Private Cloud Strikes Back

Having read JP Rangaswami’s argument against private clouds (and his obvious promotion of his version of cloud), I have only to say that he’s looking for oranges in an apple tree.  His entire premise is based on the idea that enterprises are wholly concerned with cost and sharing risk, when that couldn’t be further from the truth.  Yes, cost is indeed a factor, as is sharing risk, but a bigger and more important factor facing the enterprise today is agility and flexibility…something that the monolithic, leviathan-like enterprise IT systems of today definitely are not. He then jumps from cost to the social enterprise as if there were a causal relationship there when, in fact, they are two separate discussions.  I don’t doubt that if you are a consumer-facing (not just customer-facing) organization, it’s best to get on that social enterprise bandwagon, but if your main concern is how to better equip and provide the environment and tools necessary to innovate within your organization, the whole social thing is a red herring for selling you things that you don’t need.

Traditional status quo within IT is deeply encumbered by mostly manual processes—optimized for people carrying out commodity IT tasks such as provisioning servers and OSes—that cannot be optimized any further, therefore a different, much better way had to be found.  That way is the private cloud which takes those commodity IT tasks and elevates them to automated and orchestrated, well defined workflows and then utilizes a policy-driven system to carry them out.  Whether these workflows are initiated by a human or as a result of a specific set of monitored criteria, the system dynamically creates and recreates itself based on actual business and performance need—something that is almost impossible to translate into the public cloud scenario.
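The policy-driven, workflow-based automation described above can be sketched as a toy example: a policy inspects monitored criteria and selects a well-defined workflow to execute. All names, thresholds, and workflow steps here are invented for illustration:

```python
def provision_policy(metrics):
    """Toy policy: scale out when load is high, scale in when low."""
    if metrics["cpu_pct"] > 80:
        return "add_server"
    if metrics["cpu_pct"] < 20 and metrics["servers"] > 1:
        return "remove_server"
    return "no_op"

# Commodity IT tasks, elevated to well-defined, orchestrated workflows:
WORKFLOWS = {
    "add_server": ["provision VM", "install OS", "join pool"],
    "remove_server": ["drain connections", "leave pool", "deprovision VM"],
    "no_op": [],
}

def run(metrics):
    """Steps the private cloud would execute automatically, whether the
    trigger is a human request or a monitored threshold."""
    return WORKFLOWS[provision_policy(metrics)]
```

The point of the sketch is the shape, not the thresholds: the system recreates itself from monitored criteria rather than waiting on a manual ticket queue.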

Not that the public cloud cannot be leveraged where appropriate, but the enterprise’s requirements are much more granular and specific than any public cloud can or should allow…which speaks to JP’s point that providers must share the risk among many players, and that risk is generic by definition within the public cloud.  Once you start creating one-off specific environments, the commonality is lost and the cost benefit with it, because now you are simply utilizing a private cloud whose assets are owned by someone else…sound like co-lo?

Finally, I wouldn’t expect someone whose main revenue source is based on the idea that a public cloud is better than a private cloud to say anything different than what JP has said, but I did expect some semblance of clarity as to where his loyalties lie…and it looks like it’s not with the best interests of the enterprise customer.