Offering Cloud Services: Practice What You Preach

By Gerrit-Jan van Wieren, Vice President Business Development

IASO Cloud Backup

As an IT service provider, you probably started years ago with the idea that you would do a better job than the rest. With a lot of enthusiasm and energy you have built a company of reasonable size that offers all kinds of different services. With the rise of the cloud, you are in the ideal position to tell your customers not to buy and own things, and not to treat IT staff as part of their core business. You ask them the question: is this making you money, or can we take it off your hands? They used to invest in hardware because it looked cheaper in the long term, but they ended up with IT staff they don’t want on the payroll.

 

Did you ever ask yourself that question? As an IT service provider, providing services is in your DNA. But is investing also in your DNA, or is it about making money? Owning stuff is not your core business. When you choose a solution, whether it is hosted backup or hosted e-mail, the question should always be whether you could do a better job yourself. Look at all the aspects that come with hosting it yourself. It might look cheaper: buy some disks, forget to calculate the costs for staff and electricity, and watch the money come in.

 

Is it that easy? There is some risk involved. Disks can break, 24×7 availability actually costs money, and your staff can (accidentally) mismanage the platform into serious downtime. When we take a closer look at the business case, we also see that the calculation ignores reality. You always start at 0 GB, and it takes time to reach 100% utilization of all the terabytes you invested in. And once you reach 80%, you know you are due for the next round of cash out.
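
To make that last point concrete, here is a minimal back-of-the-envelope sketch. Every price, capacity, and growth figure below is a made-up assumption for illustration, not a vendor number:

```python
# Hypothetical illustration: effective cost per stored GB when you buy
# capacity up front and fill it gradually. Every number here is made up.

CAPEX_PER_TB = 400.0        # purchase price per usable TB (disks, chassis, ...)
OPEX_PER_TB_MONTH = 25.0    # staff, power, cooling, floor space per TB per month
PURCHASED_TB = 100          # capacity bought on day one
MONTHLY_GROWTH_TB = 5       # how fast customer data actually arrives
DEPRECIATION_MONTHS = 36    # period over which the purchase is written off

def effective_cost_per_gb(month: int) -> float:
    """Cost per *used* GB in a given month of the build-out."""
    used_tb = min(MONTHLY_GROWTH_TB * month, PURCHASED_TB)
    if used_tb == 0:
        return float("inf")
    monthly_cost = (CAPEX_PER_TB * PURCHASED_TB / DEPRECIATION_MONTHS
                    + OPEX_PER_TB_MONTH * PURCHASED_TB)
    return monthly_cost / (used_tb * 1000)  # TB -> GB

for month in (1, 6, 12, 18):
    print(f"month {month:2d}: ${effective_cost_per_gb(month):.3f} per used GB")
# At low utilization the per-GB cost is many times the "cheap disk" sticker
# price, and at roughly 80% utilization the next purchase cycle starts.
```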

 

Will you do a better job than the manufacturer? Will it bring you more money?

 

Always go for pay-as-you-grow, without up-front investments. Add value to the proposition with your knowledge and well-educated staff. Start making money from day one. And remember what you tell your end customers about investing in IT: practice what you preach.

What’s in Store for Technology in 2013

As we get into the full swing of the new year and put another CES behind us, it’s time to take a look at the current trends in technology and get a feel for some of the developments we can expect in 2013. There are a lot of interesting products just around the corner – some of them will seem revolutionary while others will be a big update to current technology – and many of them are well worth a look.
The release of Windows 8, which was optimized for touchscreens, is encouraging a lot of companies to really innovate with their laptops. Last year we saw the trend toward ultrabook computers – all sleek design and impressive form factor – and now the new operating system is giving them a chance to develop even more. Touchscreen ultrabooks are going to provide the simple convenience of a tablet while offering the power and capabilities of a complete computer.

read more

Best Practices: The Role of API Management

We are in the midst of an API revolution. Countless major enterprises are opening up access to their core information systems, allowing innovative third-party developers to build new business opportunities through collaboration and community. However, this remarkable movement puts pressure on IT to manage APIs. The goal is to ensure optimal business outcomes through APIs without inadvertently creating security and system management problems or running up unsustainable costs.
In his General Session at Cloud Expo Silicon Valley, Alistair Farquharson, CTO at SOA Software, addresses this challenge by exploring some proven best practices for API management.

read more

As I Work in the Cloud, I Encounter an 80/20 Rule

I’m developing software – yikes. I first came to this industry in the 1980s as a non-geek music major, and was forced to learn how to run a Unix-based box from Callan Data Systems that I fondly called The Antichrist.

All these years later, I’m still a non-geek, albeit one who has learned more than I ever cared to learn about dealing with the vast skunkworks project known as Windows, the Stalinesque Apple operating systems, neo-Byzantine network protocols, and now, the ethereal nebulousness of cloud computing.

So I find myself developing a workflow system – or at minimum, the outlines of one – for a vertical-market client who eventually wants this stuff to run on any number of phones, tablets, and hybrids. Android or Win8? Do we need to write for the Mac and iOS, too?

But this responsibility is just part of my job. The client, like most companies, seems to spend most of its time chasing its tail, bogged in the mud, in the doldrums, etc. However you want to put it. Take the industry cliché that 80% of IT is devoted to routine maintenance and ops, with only 20% for innovation, and you get the picture. It may be more like 90/10.

The Obsolete 47%
This is why I was struck this morning by a tweet from Congressman Darrell Issa (R-CA) that 47% of the federal IT budget is devoted to what he calls “obsolete/deficient” systems. I like Darrell because of his intransigent stand against SOPA and related nonsense; this stand alone makes me overlook other aspects of his public career.

So I hope he’s not trying to make some cheap political point about liberal inefficiency or some other manner of pandering. I hope he’s merely pointing out that a lot of federal IT spending is COBOL-related, for example, rather than focused on the latest cloud developments. It could be that this 47% is above the average in business.

Meanwhile, I’ve expressed disappointment in the current Federal CIO, who is seemingly doing nothing about the government’s Cloud First vision, and who is not responding to my inquiries about it. (No, my feelings are not hurt by this, but it would be great for the industry if he woke up, in my opinion.)

So here I am, trying to create some snappy workflow in the cloud, while also trying to get some new onsite laptops to communicate with the various printers scattered throughout this office. Experts and gurus talk about reversing this 80/20 ratio – but in the real world, it’s probably not going to happen soon. The fact is, if I can squeeze out 20% of my time to do the innovative stuff, I’ll be happy.

read more

NetDNA EdgeRules Gives Websites Control over CDN Content

NetDNA today announced EdgeRules, an instantaneous HTTP caching rules service, giving site managers rapid and granular control over their web content for a better user experience, improved security, lower bandwidth costs and the ability to better monetize content by preventing hotlinking.

EdgeRules is an add-on service to NetDNA’s EdgeCaching and EdgeCaching for Platforms.  Both of these HTTP caching services place site content in NetDNA’s worldwide network of edge servers and peering partners for superior web performance optimization.

Using the EdgeRules control panel, site managers can make changes to their content rules and see them enacted in less than one minute – with no review needed from the NetDNA engineering team. This makes it possible for the first time to test, tweak and deploy very granular controls over how and when content is served.

“EdgeRules truly gives website managers the ability to manage their CDN services their way and to finely tune their pull zone content in a way that they never could before,” said David Henzel, NetDNA vice president of marketing.  “NetDNA is well known for giving site managers unprecedented control over their CDN service through our Control Panel.  With EdgeRules, we are at the forefront of CDN self-provisioning again.”

A site manager can use EdgeRules to keep certain files from being proxied and thus protect them from exposure on the Internet. For example, EdgeRules can prevent the exposure of directory indices due to misconfiguration, which is a common problem on cloud services such as Amazon’s S3 service.

The service allows different rules to be set for different files or classes of data so that frequently updated files can be classed differently from more static data.  This reduces calls to the origin server, which lowers bandwidth charges.

Site managers can also use the service to blacklist certain IP addresses, for example blocking web robots that are scraping data from the site.
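
The rule definitions themselves live in NetDNA’s control panel, but conceptually the capabilities described above come down to matching each request against an ordered list of conditions and picking an action. The following minimal sketch illustrates that idea only; it is not NetDNA’s actual rule syntax or API:

```python
# Hypothetical sketch of edge caching rules: match a request against an
# ordered rule list and decide how (or whether) to serve it from cache.
# This is NOT NetDNA's actual rule syntax or API, just the general idea.
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Rule:
    pattern: str          # glob over the request path
    action: str           # "cache", "bypass", or "deny"
    ttl_seconds: int = 0  # how long the edge may keep a cached copy

RULES = [
    Rule("*/", "deny"),                       # block directory-index requests
    Rule("*.jpg", "cache", 7 * 24 * 3600),    # static images: cache for a week
    Rule("*.css", "cache", 24 * 3600),        # stylesheets: cache for a day
    Rule("/api/*", "bypass"),                 # frequently updated data: go to origin
]

BLOCKED_IPS = {"203.0.113.7"}                 # e.g. a scraping robot's address

def decide(path: str, client_ip: str) -> tuple[str, int]:
    """Return (action, ttl) for a request; first matching rule wins."""
    if client_ip in BLOCKED_IPS:
        return ("deny", 0)
    for rule in RULES:
        if fnmatch(path, rule.pattern):
            return (rule.action, rule.ttl_seconds)
    return ("bypass", 0)                      # default: fetch from origin

print(decide("/images/logo.jpg", "198.51.100.20"))   # ('cache', 604800)
print(decide("/private/", "198.51.100.20"))          # ('deny', 0)
```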

The EdgeRules service can also read the operating system of a device and serve up optimized content for that device.  For example, a smartphone-optimized image can be served up instead of a large image when the service detects a request from an Android or iOS device.
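
As a rough illustration only (the real detection runs on NetDNA’s edge servers, presumably from request headers such as User-Agent; this is not their implementation), device-aware content selection amounts to mapping the requesting device to an asset variant:

```python
# Rough illustration of device-aware content selection based on the
# User-Agent header. Not NetDNA's implementation; the real detection
# runs on the edge servers.

MOBILE_MARKERS = ("android", "iphone", "ipad", "ipod")

def pick_image_variant(user_agent: str, base_name: str) -> str:
    """Return a smaller image for mobile devices, the full-size one otherwise."""
    ua = user_agent.lower()
    if any(marker in ua for marker in MOBILE_MARKERS):
        return f"{base_name}-mobile.jpg"   # e.g. a pre-scaled, smaller asset
    return f"{base_name}-full.jpg"

print(pick_image_variant(
    "Mozilla/5.0 (Linux; Android 4.2) AppleWebKit/535.19", "hero"))
# -> hero-mobile.jpg
```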

EdgeRules is now available for all NetDNA EdgeCaching customers.  For more information email sales@netdna.com or go to: http://www.netdna.com/products/add-ons/edgerules/.

Disaster Recovery in the Cloud, or DRaaS: Revisited

By Randy Weis

The idea of offering Disaster Recovery services has been around as long as SunGard or IBM BCRS (Business Continuity & Resiliency Services). Disclaimer: I worked for the company that became IBM Information Protection Services in 2008, a part of BCRS.

It seems inevitable that Cloud Computing and Cloud Storage should have an impact on the kinds of solutions that small, medium, and large companies would find attractive and that would fit their requirements. Yet cloud-based DR services are not taking the world by storm. Why is that?

Cloud infrastructure seems perfectly suited for economical DR solutions, yet I would bet that none of the people reading this blog has found a reasonable selection of cloud-based DR services in the market. That is not to say that there aren’t DR “As a Service” companies, but the offerings are limited. Again, why is that?

Much like Cloud Computing in general, the recent emergence of enabling technologies was preceded by a relatively long period of commercial product development. In other words, virtualization of computing resources promised “cloud” long before we actually could make it work commercially. I use the term “we” loosely…Seriously, GreenPages announced a cloud-centric solutions approach more than a year before vCloud Director was even released. Why? We saw the potential, but we had to watch for, evaluate, and observe real-world performance in the emerging commercial implementations of self-service computing tools in a virtualized datacenter marketplace. We are now doing the same thing in the evolving solutions marketplace around derivative applications such as DR and archiving.

I looked into helping put together a DR solution leveraging cloud computing and cloud storage offered by one of our technology partners that provides IaaS (Infrastructure as a Service). I had operational and engineering support from all parties in this project and we ran into a couple of significant obstacles that do not seem to be resolved in the industry.

Bottom line:

  1. A DR solution in the cloud, involving recovering virtual servers in a cloud computing infrastructure, requires administrative access to the storage as well as the virtual computing environment (like being in vCenter).
  2. Equally important, if the solution involves recovering data from backups, is the requirement that there be a high speed, low latency (I call this “back-end”) connection between the cloud storage where the backups are kept and the cloud computing environment. This is only present in Amazon at last check (a couple of months ago), and you pay extra for that connection. I also call this “locality.”
  3. The Service Provider needs the operational workflow to do this. Everything I worked out with our IaaS partners was a manual process that went way outside normal workflow and ticketing. The interfaces the customer used to access computing and storage were separate and radically different. You couldn’t even see the capacity you consumed in cloud storage without opening a ticket. From the SP side, there was no mechanism to notify them of the DR tasks the customer needed them to perform. When you get to billing, forget it. Everyone admitted that none of this had been planned for in the cloud computing and operational support design.

Let me break this down:

  • Cloud Computing typically has high speed storage to host the guest servers.
  • Cloud Storage typically has “slow” storage, on separate systems and sometimes separate locations from a cloud computing infrastructure. This is true with most IaaS providers, although some Amazon sites have S3 and EC2 in the same building and they built a network to connect them (LOCALITY).

Scenario 1: Recovering virtual machines and data from backup images

Scenario 2: Replication based on virtual server-based tools (e.g. Veeam Backup & Replication) or host-based replication

Scenario 3: SRM, array or host replication

Scenario 1: Backup Recovery. I worked hard on this with a partner. This is how it would go:

  1. Back up VMs at customer site; send backup or copy of it to cloud storage.
  2. Set up a cloud computing account with an AD server and a backup server.
  3. Connect the backup server to the cloud storage backup repository (first problem)
    • Unless the cloud computing system has a back-end connection at LAN speed to the cloud storage, this is a showstopper. It would take days to do this without a high degree of locality (see the back-of-the-envelope sketch after this list).
    • The provider’s proposed solutions, when asked about this:
      • Open a trouble ticket to have the backups dumped to USB drives, shipped or carried to the cloud computing area and connected into the customer workspace. Yikes.
      • We will build a back end connection where we have both cloud storage and cloud computing in the same building—not possible in every location, so the “access anywhere” part of a cloud wouldn’t apply.

  4. Restore the data to the cloud computing environment (second problem)

    • What is the “restore target”? If the DR site were a typical hosted or colo site, the customer backup server would have the connection and authorization to recover the guest server images to the datastores, and the ability to create additional datastores. In vCenter, the Veeam server would have the vCenter credentials and access to the vCenter storage plugins to provision the datastores as needed and to start up the VMs after restoring/importing the files. In a Cloud Computing service, your backup server does NOT have that connection or authorization.
    • How can the customer backup server get the rights to import VMs directly into the virtual VMware cluster? The process to provision VMs in most cloud computing environments is to use your templates, their templates, or “upload” an OVF or other type of file format. This won’t work with a backup product such as Veeam or CommVault.

  5. Recover the restored images as running VMs in the cloud computing environment (third problem), tied to item #4.

    • Administrative access to provision datastores on the fly and to turn on and configure the machines is not there. The customer (or GreenPages) doesn’t own the multitenant architecture.
    • The use of vCloud Director ought to be an enabler, but the storage plugins, and rights to import into storage, don’t really exist for vCloud. Networking changes need to be accounted for and scripted if possible.
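
To see why the lack of locality in step 3 is a showstopper, here is the back-of-the-envelope sketch referenced above. The link speeds and the size of the backup set are illustrative assumptions, not measurements from any particular provider:

```python
# Back-of-the-envelope restore times for pulling backup data from cloud
# storage into the cloud computing environment. Speeds and data size are
# assumptions for illustration, not measured provider figures.

DATA_TB = 10  # size of the backup set to restore

LINKS_MBPS = {
    "100 Mbps WAN link": 100,
    "1 Gbps metro link": 1_000,
    "10 Gbps back-end (same building / locality)": 10_000,
}

for name, mbps in LINKS_MBPS.items():
    seconds = (DATA_TB * 8_000_000) / mbps   # 1 TB ~= 8,000,000 megabits
    print(f"{name}: {seconds / 3600:.1f} hours to move {DATA_TB} TB")

# 100 Mbps  -> ~222 hours (more than nine days)
# 1 Gbps    -> ~22 hours
# 10 Gbps   -> ~2.2 hours, which is why a LAN-speed back-end connection
#              between cloud storage and cloud compute is a prerequisite.
```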

Scenario 2: Replication by VM. This has cost issues more than anything else.

    • If you want to replicate directly into a cloud, you will need to provision the VMs and pay for their resources as if they were “hot.” It would be nice if there were a lower “DR Tier” for pricing—if the VMs are for DR, you don’t get charged full rates until you turn them on and use them for production.
      • How do you negotiate that?
      • How does the SP know when they get turned on?
      • How does this fit into their billing cycle?
    • If it is treated as a hot site (or warm), then the cost of the DR site equals that of production until you solve these issues; a rough cost comparison follows this list.
    • Networking is an issue, too, since you don’t want to turn that on until you declare a disaster.
      • Does the SP allow you to turn up networking without a ticket?
      • How do you handle DNS updates if your external access depends on root server DNS records being updated—really short TTL? Yikes, again.
    • Host-based replication (e.g. WANsync, VMware)—you need a host you can replicate to. Your own host. The issues are cost and scalability.
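
To put rough numbers on the cost point, here is the comparison referenced in the list above. The rates, VM count, and the notional “DR tier” pricing are all hypothetical, not any provider’s actual price list:

```python
# Hypothetical cost comparison: replicating into a cloud at full "hot" VM
# rates versus a notional DR tier that charges mostly for storage until a
# disaster is declared. All rates and counts are made-up illustrations.

VM_COUNT = 50
HOT_RATE_PER_VM_MONTH = 150.0      # full compute + storage rate
DR_TIER_PER_VM_MONTH = 25.0        # storage + replication only, VMs powered off
DECLARED_MONTHS_PER_YEAR = 0.5     # e.g. one two-week DR test or event per year

hot_site_year = VM_COUNT * HOT_RATE_PER_VM_MONTH * 12

dr_tier_year = (
    VM_COUNT * DR_TIER_PER_VM_MONTH * (12 - DECLARED_MONTHS_PER_YEAR)
    + VM_COUNT * HOT_RATE_PER_VM_MONTH * DECLARED_MONTHS_PER_YEAR
)

print(f"hot-site pricing:  ${hot_site_year:,.0f} per year")
print(f"DR-tier pricing:   ${dr_tier_year:,.0f} per year")
# Without a DR tier, the standby environment costs as much as production;
# with one, the SP still needs a way to detect when the VMs are powered on
# so it can switch billing rates.
```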

Scenario 3: SRM. This should be baked into any serious DR solution, from a carrier or service provider, but many of the same issues apply.

    • SRM based on host array replication has complications. Technically, this can be solved by the provider by putting (for example) EMC VPLEX and RecoverPoint appliances at every customer production site so that you can replicate from dissimilar storage to the SP IDC. But, they need to set up this many-to-one relationship on arrays that are part of the cloud computing solution, or at least a DR cloud computing cluster. Most SPs don’t have this. There are other brands/technologies to do this, but the basic configuration challenge remains—many-to-one replication into a multi-tenant storage array.
    • SRM based on VMware host replication has administrative access issues as well. SRM at the DR site has to either accommodate multi-tenancy, or each customer gets their own SRM target. Also, you need a host target. Do you rent it all the time? You have to, since you can’t do that in a multi-tenant environment. Cost, scalability, again!
    • Either way, now the big red button gets pushed. Now what?
      • All the protection groups exist on storage and in cloud computing. You are now paying for a duplicate environment in the cloud, not an economically sustainable approach unless you have a “DR Tier” of pricing (see Scenario 2).
      • All the SRM scripts kick in—VMs are coming up in order in protection groups, IP addresses and DNS are being updated, CPU loads and network traffic climb…what impact is this?
      • How does that button get pushed? Does the SP need to push it? Can the customer do it?

These are the main issues as I see them, and there is still more to it. Using vCloud Director is not the same as using vCenter. Everything I’ve described was designed to be used in a vCenter-managed system, not a multi-tenant system with fenced-in rights and networks and shared storage infrastructure. The APIs are not there, and if they were, imagine the chaos and impact of random DR tests running on production cloud computing systems not managed and controlled by the service provider. What if a real disaster hit New England and a hundred customers needed to spin up all their VMs in a few hours? They aren’t all in one datacenter, but if one provider that set this up had dozens of such customers, that is a huge hit. The provider would need to hold all that capacity in reserve, or syndicate it the way IBM and SunGard do. That is the equivalent of thin-provisioning your datacenter.

This conversation, like many I’ve had in the last two years, ends somewhat unsatisfactorily with the conclusion that there is no clear solution—today. The journey toward discovering or designing DRaaS is important, and it needs to be documented, as we have done here in this blog and in other presentations and meetings. The industry will overcome these obstacles, but customers must remain informed and persistent. The goal of an economically sustainable DRaaS solution can only be achieved through market pressure and creative vendors. We will do our part by being your vigilant and dedicated cloud services broker and solution services provider.

Analysing Cloud Computing and the Healthcare Industry

Cloud computing has touched many industries and is increasingly being adopted in many ways, from easily accessible data storage to business application solutions and reduction in hardware investment. The healthcare industry is no exception: here we outline some of the ways in which cloud computing can be of benefit in the future.

1. The secure storage of patient records

Doctors and medical staff are bound by oaths, especially with regard to patient confidentiality. Having secure cloud storage is therefore paramount.

The first generation of cloud computing had security issues; however, now that these have been ironed out, the cloud is more secure than ever. In the US, hospitals must also adhere to the Health Insurance Portability and Accountability Act (HIPAA).

2. Reducing the cost of data storage

Utilising the cloud can save the healthcare service a lot of money. Subscriptions can equate to approximately a 90% saving on hardware investments, especially when …

TwinStrata Reports Record Year

TwinStrata, Inc., on Tuesday highlighted results for its fiscal year ended December 31, 2012. The year was punctuated by a series of corporate and product milestones.
TwinStrata CEO Nicos Vekiarides noted that the company’s “tremendous growth this year validates not only the TwinStrata value proposition, but also the increasing traction of cloud storage as a whole.”
TwinStrata ended the year with substantial growth across all measures of its business:
Throughout 2012, TwinStrata experienced sequential bookings growth of 35 percent or more quarter over quarter.

read more

Hybrid Cloud Power

As the hybrid cloud becomes the cloud of choice for the enterprise, you can expect cloud integration to eventually replace cloud migration as a solution of choice. While migration supports moving apps into public clouds, cloud integration supports cloud migration, cloud failover, devtest cloud (or cloud cloning) and cloud bursting. Migration is a […]

read more