Benefits of Cloud Computing

The Cloud has taken over technology. Put simply, ‘Cloud’ is the latest buzzword for freedom from clogged-up computers: saving files directly to the internet. So how does it work?
Essentially, instead of saving files to a hard disk or using software installed directly on a computer, the Cloud offers the ability to quickly and easily access those files over an Internet connection. It all sounds very swish, but what are the actual benefits?
Cloud storage services can provide businesses with storage space they simply wouldn’t otherwise have access to. Because storage is provided on vast remote servers, businesses can pay a relatively small amount of money (compared with the cost of equivalent physical hardware) to receive a phenomenal amount of storage space.


NJVC Cloudcuity Management Portal Provides Secure Cloud Brokerage Services

NJVC® will lead efforts to provide secure cloud brokerage services to the Network Centric Operations Industry Consortium using its first-to-market Cloudcuity™ Management Portal during a series of 2013 geospatial community cloud demonstrations that will be conducted on behalf of the National Geospatial-Intelligence Agency (NGA). NJVC’s partners are The Aerospace Corporation, The Boeing Company and Open Geospatial Consortium (OGC).
“NJVC is honored that NCOIC chose the Cloudcuity Management Portal as its brokerage of choice to offer cloud services to the geospatial community,” said Kevin L. Jackson, vice president and general manager of NJVC Cloudcuity. “This NCOIC demonstration is significant as it will represent a new procurement model for the geospatial community, which is at the frontlines of protecting and responding to national and international crises in times of conflict. Procurement of cloud services will be less complicated, making deployments to the cloud easier and more efficient than ever before. This will benefit first responders and other non-traditional NGA users.”


The nephew effect on SMB IT

By Adam Bogobowicz, Sr. Director of Product Marketing for Service Providers, Parallels

Just like I normally do, last Friday I went to Big-Bob’s-Cuts for my quarterly haircut. During the haircut Bob and I usually have a conversation about his IT infrastructure. I like to keep an eye on changes in small business IT, and it keeps Bob away from discussing his gout problems.

On previous occasions it was a short conversation, just about matching the trim time, since Bob’s IT consisted of a cellphone, a notepad (made of paper) and a credit card reader. But not this Friday, as Bob has apparently decided to join the 21st century.

Even before plugging in his trimmer, Bob proudly announced to me that he had gone online! At this point I fully expected that he would have an account with a local hoster and a website in a shared hosting environment, with a big picture of Bob and his phone number. But I underestimated Ralphy, his 16-year-old nephew.

If you are a hoster, you must be thinking that Ralph got Big Bob’s Cuts onto a VPS or a cloud server, installed Panel software on it and, using an Application Marketplace, installed all the apps Bob needed to conduct his business online.

Well, not exactly. Apparently brought up on iPhones, Facebook and Xbox, Ralph does not know anything about hosting, panels or SMB applications. Instead, Ralph created a full SMB IT infrastructure for Bob out of an iPad and a few SaaS services he found via Google.

First, Ralphy got rid of Bob’s notepad with services from https://www.schedulicity.com/ (well, not really; the notepad was still there …) and at the same time got Bob’s Cuts into an online catalog of services. Next he took care of in-shop payments by attaching a card reader (https://squareup.com/) to the iPad, which also got rid of all the paper (except Bob’s notepad) in the shop.

But that is not all: he then used http://www.bigcommerce.com/ not only to set up a website but also to set up a store featuring Bob’s famous $3.99 shampoo-and-shaving-cream bundle. I am a bit skeptical about this venture’s success, but hey, it costs $25/month, so it breaks even with just a few dozen sales, and he already got one after my shopping spree. $3.99 is a really good deal; you should try it out.

I wrote this blog because I finally understood why small businesses are not buying (more) apps from hosters. They do not need or want apps. Instead, small businesses are choosing solutions to their business problems on the web – solutions, not separate applications that may or may not work together.

SaaS providers are a serious challenge to our industry, and we cannot ignore them or pretend that they represent a different market. These companies exemplify precisely how the one trillion dollars of SMB spend will be distributed.

Far from ignoring them, traditional hosters need to learn from these new players and incorporate their best ideas into how we conduct business online. Here are a few things I think hosters should borrow from the likes of Bigcommerce and Schedulicity.

  1. Focus on the small business problem you are solving, not on memory, CPU, cores or storage. Do you think Bob or Ralphy care?
  2. Make it simple. SMB IT admins are sons and nephews much more often than paid-by-the-hour system integrators. If Ralph cannot set it up for Bob, your business prospects are limited.
  3. Showcase your solution. Go to schedulicity.com and see the home page of this service: it is not about hosting, it is all about customers. Go to Bigcommerce.com and see the customer stories on the home page. And read my previous blog on differentiation – what Bigcommerce is doing is differentiation through customer stories. Brilliant.

Full disclosure: Bigcommerce.com uses Parallels Plesk Panel as infrastructure for delivering its solution; Parallels is not associated with the other two solutions mentioned.

Enterprise Cloud Trends

2012 proved to be the year that pundits had predicted for almost half a decade: enterprises turned in droves to cloud computing solutions. With the rapid growth and acceptance of robust open source solutions like Red Hat Enterprise Virtualization and others, the cloud is starting to make sense in that market sector.
Accordingly, here are some trends we think we’re likely to see over the next three to five years in the enterprise cloud space:
The growth of the hybrid cloud. According to one study, about half of all new money enterprises spend on IT by 2015 will go to hybrid cloud technology. Another 40% will go to pure cloud solutions. Third-party integrators will increase support and offerings for hybrid solutions, because let’s face it: not everything you want exists in the public cloud.


Rackspace Buys More Cloud Developers’ Tools

Rackspace has acquired Exceptional Cloud Services in what it calls a strategic move to add to its toolset for developers deploying and managing applications in the open cloud.
The deal will bring it error-tracking and Redis-as-a-Service solutions that are currently used by more than 50,000 app developers.
Acquisition terms were not disclosed.
Rackspace said it’s getting “technology and expertise that will provide start-ups and cloud developers with the tools that help them deliver more reliable customer experiences and to bring the next generation of cloud-based apps to market faster.”
As part of the agreement, Rackspace gets Exceptional.io, which tracks errors in over 6,000 web applications in real time so they can be fixed faster.


Measurement, Control and Efficiency in the Data Center

Guest Post by Roger Keenan, Managing Director of City Lifeline

To control something, you must first be able to measure it. This is one of the most basic principles of engineering. Once there is measurement, there can be feedback. Feedback creates a virtuous loop in which the output changes to better track the changing input demand. Improving data centre efficiency is no different. If efficiency means better adherence to the organisation’s demands for lower energy consumption, better utilisation of assets and faster response to change requests, then the very first step is to measure those things, and to use the measurements to provide feedback and thereby control.

So what do we want to control? We can divide it into three areas: the data centre facility, the use of compute capacity, and the communications between the data centre and the outside world. The balance of importance among these will differ from organisation to organisation.

There are all sorts of types of data centres, ranging from professional colocation data centres to the server-cupboard-under-the-stairs found in some smaller enterprises. Professional data centre operators focus hard on the energy efficiency of the total facility. The most common measure of energy efficiency is PUE, defined originally by the Green Grid organisation. This is simple: the total energy going into the facility divided by the energy used to power the electronic equipment. Although it is often abused (a nice example is the data centre that powered its facility lighting over PoE, Power over Ethernet, thus making the lighting part of the ‘electronic equipment’), it is widely understood and used worldwide. It provides visibility and focus for the process of continuous improvement. It is easy to measure at facility level, as it only needs monitors on the mains feeds into the building and monitors on the UPS outputs.
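
To make the definition concrete, here is a minimal Python sketch of the calculation from those two measurement points; the function and the meter readings are my own illustration, not any standard monitoring tool:

    # PUE from the two measurement points described above: the mains feeds
    # into the building and the UPS outputs. All readings are invented.

    def pue(facility_kwh: float, it_equipment_kwh: float) -> float:
        """Power Usage Effectiveness: total facility energy / IT energy."""
        if it_equipment_kwh <= 0:
            raise ValueError("IT equipment energy must be positive")
        return facility_kwh / it_equipment_kwh

    mains_feed_kwh = 1500.0   # measured at the mains feeds into the building
    ups_output_kwh = 1000.0   # measured at the UPS outputs (IT load)

    print(f"PUE = {pue(mains_feed_kwh, ups_output_kwh):.2f}")   # PUE = 1.50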

Power efficiency can be managed at multiple levels: at the facility level, at the cabinet level and at the level of ‘useful work’. This last is difficult to define, let alone measure, and there are various working groups around the world trying to decide what ‘useful work’ means. It may be compute cycles per kW, revenue generated within the organisation per kW or application run time per kW, and it may be different for different organisations. Whatever it is, it has to be properly defined and measured before it can be controlled.
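
As a hedged illustration of how those candidate definitions diverge, the following sketch computes all three for one hypothetical facility; every figure is invented:

    # Three candidate 'useful work' metrics for the same hypothetical
    # facility; all input figures below are invented for illustration.
    power_kw = 500.0              # average IT power draw
    compute_cycles = 4.2e15       # compute cycles delivered in the period
    revenue = 120000.0            # revenue attributed to the facility
    app_runtime_hours = 36000.0   # total application run time

    print("compute cycles per kW:", compute_cycles / power_kw)
    print("revenue per kW:       ", revenue / power_kw)
    print("run-time hours per kW:", app_runtime_hours / power_kw)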

DCIM (data centre infrastructure management) systems provide a way to measure the population and activity of servers and particularly of virtualised machines.  In large organisations, with potentially many thousands of servers, DCIM provides a means of physical inventory tracking and control.  More important than the question “how many servers do I have?” is “how much useful work do they do?”  Typically a large data centre will have around 10% ghost servers – servers which are powered and running but which do not do anything useful.  DCIM can justify its costs and the effort needed to set it up on those alone.
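
As a sketch of the sort of check a DCIM inventory makes possible, flagging candidate ghost servers might look like the following; the field names and threshold are assumptions for illustration, not any particular DCIM product’s schema:

    # Flag candidate 'ghost' servers from a DCIM-style inventory export.
    # Field names and the utilisation threshold are illustrative only.
    servers = [
        {"name": "web-01", "powered": True, "avg_cpu_pct": 35.0},
        {"name": "db-07",  "powered": True, "avg_cpu_pct": 0.4},
        {"name": "old-42", "powered": True, "avg_cpu_pct": 0.0},
    ]

    GHOST_CPU_THRESHOLD = 1.0   # % average CPU below which we investigate

    ghosts = [s["name"] for s in servers
              if s["powered"] and s["avg_cpu_pct"] < GHOST_CPU_THRESHOLD]
    print(f"{len(ghosts)} of {len(servers)} servers look like ghosts:", ghosts)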

Virtualisation brings its own challenges. Virtualisation has taken us away from the days when a typical server operated at 10-15% efficiency, but we are still a long way from most data centres operating efficiently with virtualisation. Often users will over-specify server capacity for an application, using more CPUs, memory and storage than they really need, just to be on the safe side and because they can. Users see the data centre as a sunk cost – it’s already there and paid for, so we might as well use it. This creates ‘VM sprawl’. The way out of this is to measure, quote and charge. If a user is charged for the machine time used, that user will think more carefully about wasting it and about piling contingency allowance upon contingency allowance ‘just in case’, which leads to inefficient stranded capacity. And if the user is given a real-time quote for the costs before committing to them, they will think harder about how much capacity is really needed.
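
A minimal sketch of the ‘measure, quote and charge’ idea: show the user a cost before capacity is committed. All rates and the requested sizes are invented for illustration:

    # 'Measure, quote and charge': give the user a real-time cost quote
    # before capacity is committed. Rates and sizes are invented.
    RATE_PER_VCPU_HOUR = 0.04      # currency units per vCPU-hour
    RATE_PER_GB_RAM_HOUR = 0.01
    RATE_PER_GB_DISK_MONTH = 0.10

    def monthly_quote(vcpus, ram_gb, disk_gb, hours_per_month=730):
        return (vcpus * RATE_PER_VCPU_HOUR * hours_per_month
                + ram_gb * RATE_PER_GB_RAM_HOUR * hours_per_month
                + disk_gb * RATE_PER_GB_DISK_MONTH)

    # The 'just in case' request versus what measurement says is needed.
    print(f"over-specified (16 vCPU, 64 GB, 1000 GB): {monthly_quote(16, 64, 1000):.2f}")
    print(f"measured need  ( 4 vCPU, 16 GB,  200 GB): {monthly_quote(4, 16, 200):.2f}")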

Data centres do not exist in isolation.  Every data centre is connected to other data centres and often to multiple external premises, such as retail shops or oil rigs.  Often those have little redundancy and may well not operate efficiently.  Again, to optimise efficiency and reliability of those networks, the first requirement is to be able to measure what they are doing.  That means having a separate mechanism at each remote point, connected via a different communications network back to a central point.  The mobile phone network often performs that role.

Measurement is the core of all control and efficiency improvement in the modern data centre.  If the organisation demands improved efficiency (and if it can define what that means) then the first step to achieving it is measurement of the present state of whatever it is we are trying to improve.  From measurement comes feedback.  From feedback comes improvement and from improvement comes control.  From control comes efficiency, which is what we are all trying to achieve.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier neutral colocation data centre in Central London, as managing director in 2005.  His main responsibilities are to oversee the management of all business and marketing strategies and profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he fully established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.

Evolve IP Adds Enterprise-Class Virtual Data Center to Cloud Portfolio

“The Evolve IP Virtual Data Center aligns price and predictability for IT decision makers looking to take advantage of consumption-based computing,” said Joe Corvaia, Vice President of Cloud Computing for Evolve IP, as Evolve IP released its Virtual Data Center service. “It couples guaranteed capacity and high availability with self-provisioning while still providing IT leaders the flexibility to control densities and subscription levels within their own private cloud environment.”
The Evolve IP Virtual Data Center allows the rapidly growing segment of cloud computing customers to deploy guaranteed reserved resource pools at Evolve IP’s SOC III compliant, geographically redundant data centers. In conjunction with the company’s dedicated, private cloud service, vServer, and its award-winning unified cloud platform, it further elevates Evolve IP’s position as the nation’s leading cloud services provider.


The Importance of Private Clouds

A few days ago I noticed a question on a LinkedIn group that made me think about how important the notion of private clouds is. First, let’s briefly look at the differences between public, private and community clouds as well as hybrid clouds. Once again, these are very well defined in the NIST Definition of Cloud Computing, but stated in simple words they are:
Private Cloud is cloud infrastructure that belongs to a single organization (an enterprise, a university, a government organization, etc.), is hosted either on or off premises, and is managed by the organization or by a third party contracted by the organization. The key point for a private cloud is that the infrastructure is dedicated to that particular organization. Very often, though, you will notice that the term is used for cloud infrastructure hosted in the organization’s own datacenter.


EMC Leads the Storage Market for a Reason

By Randy Weis, Consulting Architect, LogicsOne

There are reasons that EMC is a leader in the market. Is it because they come out first with the latest and greatest technological innovation? No, or at least not commonly. Is it because they rapidly turn over their old technology and do sweeping replacements of their product lines with the new stuff? No. It’s because they invest significantly in working through what will work commercially and what won’t, and in how best to integrate the stuff that passes that test into traditional storage technology and evolving product lines.

Storage admins and enterprise datacenter architects are notoriously conservative and resistant to change. It is purely economics that drives most of the change in datacenters, not the open source geeks (I mean that with respect), mad scientists and marketing wizards who are churning out and hyping revolutionary technology. The battle for market leadership and ever greater profits will always dominate the storage technology market. Why is anyone in business but to make money?

Our job as consulting technologists and architects is to match the technology with the business needs, not to deploy the cool stuff because we think it blows the doors off of the “old” stuff. I’d venture to say that most of the world’s data sits on regular spinning disk, and a very large chunk of that behind EMC disk. The shift to new technology will always be led by trailblazers and startups, people who can’t afford the traditional enterprise datacenter technology, people that accept the risk involved with new technology because the potential reward is great enough. Once the technology blender is done chewing up the weaker offerings, smart business oriented CIOs and IT directors will integrate the surviving innovations, leveraging proven manufacturers that have consistent support and financial history.

Those manufacturers that cling to the old ways of doing business (think enterprise software licensing models) are doomed to see ever-diminishing returns until they are blown apart into more nimble and creative fragments that can then begin to re-invent themselves into more relevant, yet reliable, technology vendors. EMC has avoided the problems that have plagued other vendors and continued to evolve and grow, although they will never make everyone happy (I don’t think they are trying to!). HP has had many ups and downs, and perhaps more downs, due to a lack of consistent leadership and vision. Are they on the right track with 3PAR? It is a heck of a lot more likely than it was before the acquisition, but they need to get a few miles behind them to prove that they will continue to innovate and support the technology while delivering business value, continued development and excellent post-sales support. Dell’s investments in Compellent, particularly, bode very well for the re-invention of the commodity manufacturer into a true enterprise solution provider and manufacturer. The Compellent technology, revolutionary and “risky” a few years ago, is proving to be a very solid technology that innovates while providing proven business value. Thank goodness for choices and competition! EMC is better because they take the success of their competitors at HP and Dell seriously.

If I were starting up a company now, using Kickstarter or other venture investment capital, I would choose the new products, the brand new storage or software that promises the same performance and reliability as the enterprise products at a much lower cost, knowing that I am exposed to these risks:

  • the company may not last long (poor management, acts of God, fickle investors), or
  • the support might frankly suck, or
  • engineering development will diminish as the vendor’s investors wait for an acquisition to get a quick payoff.

Meanwhile, large commercial organizations are starting to adopt cloud, flash and virtualization technologies precisely for all the above reasons. Their leadership needs datacenter technologies that increase speed to market and improve profitability. As the bleeding edge becomes the smart bet, brought to market by the market-leading vendors, we will continue to see success where business value and innovation intersect.

MaaS implements small data and enables personal clouds

Abstract – MaaS™ (Model as a Service) introduces a new concept for ordering and classifying data modeling design and deployment to the Cloud. MaaS changes the way data is moved to the Cloud because it allows the data’s taxonomy, size and contents to be defined. Starting from data model design, MaaS can guide the DaaS (Database as a Service) lifecycle, providing data granularity and duty rules; as a consequence, MaaS implements the new concept of Small Data.

In fact, Small Data answers the need to control “on-premise” data dimension and granularity. Small Data is not, however, a limitation on data volume: it affords full configuration of the data model and provides two main advantages, data model scale and data ownership, which enable assigned data deployment and, finally, data deletion in the Cloud.

Introduction

The inheritance of the past forces us to manage big data as a consequence of multiple integrations and aggregations of data systems …