All posts by GreenPages

VMworld 2017: NSX Cloud, AppDefense + VMware’s New Direction

Enterprise Consultant Chris Williams recently returned from VMworld 2017 and gives his take on a few of the exciting announcements made at the event. AppDefense, VMware’s newest security solution, monitors the steady state of servers and stops infiltration at the application layer. It’s a cloud offering rather than an on-premises solution. VMware also announced NSX Cloud, which lets you define a security policy once and deploy it everywhere, providing companies with a common networking and security model across clouds. To learn more about the key news from VMworld and hear from an experienced technologist, check out the video above.

By Jake Cryan, Digital Marketing Specialist

Tech News Recap for the Week of 09/04/17

If you had a busy week and need to catch up, here’s a tech news recap of articles you may have missed for the week of 09/04/2017!

How to adapt to digital disruption with Microsoft. Rethinking the software-defined storage market. How ISPs use your data. Giant ransomware email campaign could cause some problems. Azure App Service now available on Linux and more top news this week you may have missed! Remember, to stay up-to-date on the latest tech news throughout the week, follow @GreenPagesIT on Twitter.

Tech News Recap


IT Operations

[Interested in SD-WAN? Download What to Look For When Considering an SD-WAN Solution.]



  • VMware gets closer to the cloud at VMworld 2017
  • VMware choosing Frame to deliver cloud app streaming says a lot
  • Hybrid IT and cyber security drive disruption at VMworld 2017



By Jake Cryan, Digital Marketing Specialist

CIOs and factors overlooked when changing your cloud

By Clint Gilliam, Virtual CIO, GreenPages Technology Solutions

The topic of cloud computing currently ranks in the top-five of IT articles published for IT professionals. Daily we hear about the benefits of this new world, the range of exciting new services now available, and of course how to make the transition.

Even with the valuable insights provided by these articles, there is one critical aspect given too little attention or even overlooked entirely: how to plan for a breakup.

If one accepts the old dictum that change is the only universal constant, then ask yourself why most of us do not plan as carefully for unwinding a cloud/SaaS arrangement as we do for setting one up. The details of ending an arrangement can be tricky and are not immediately self-evident.

These issues go beyond standard legal provisions for exit clauses, terms and conditions, and related matters. They deal with practicality and preparedness.

Take this as an example: imagine you use a SaaS system to provide secure e-mail for corresponding with people outside your organization. Even in the world of TLS, many organizations still need services that provide mailbox-to-mailbox encryption for both e-mails and attachments.

Should you decide to terminate this service, you might be in for some unexpected challenges. If your service provider does not provide bulk decrypt and export tools, you could be in for a painful process.

In a previous role, I ran into this exact situation. We had to write custom scripts to go through each mailbox, e-mail by e-mail, to decrypt and export; it was slow and costly.

Even without terminating the service, data export tools can be useful in the course of normal business. Consider the situation when your organization is involved in litigation. As part of the legal Discovery process, you might have to produce e-mails for specific individuals covering specific subjects and dates. Should the list be significant or the filters complex, you can again run into unexpected workloads.
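As a rough illustration of the Discovery workload described above, a filter over message metadata might look like the sketch below. The field names (`custodian`, `sent`, `subject`) and criteria are assumptions for the example, not any real e-discovery schema:

```python
# Sketch: select messages matching a custodian list, a date window,
# and a set of subject keywords, as a legal Discovery request might require.

from datetime import date

def discovery_filter(messages, custodians, start, end, keywords):
    hits = []
    for m in messages:
        if m["custodian"] not in custodians:
            continue                       # wrong person
        if not (start <= m["sent"] <= end):
            continue                       # outside the date window
        subject = m["subject"].lower()
        if any(k.lower() in subject for k in keywords):
            hits.append(m)                 # keyword match
    return hits
```

The filter itself is trivial; the unexpected workload comes when the provider only lets you evaluate it one message at a time over the wire.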

Another example is data offloading. Many services, AWS included, offer excellent tools for migration / uploading large volumes into their cloud services. In some cases, particularly with large datasets, such approaches are the only feasible or timely solution.

But what happens when you elect to move those datasets to another cloud provider? Don’t assume the comprehensive set of options you have for bringing data into your provider’s cloud is symmetric on the way out. You might find that making the change is a long, slow process.

In both examples, specific industry or regulatory requirements such as security, data location, and privacy can compound the challenge.

IT professionals have a lot of experience with managing proprietary solutions and data. The key is leveraging that knowledge when considering cloud-based solutions. Personally, I have found two methods for reducing these risks.

The first is to run some tabletop simulations on what happens in various scenarios, to develop and expand your punch list over time. Scenarios to consider might include migration, legal requests, disaster recovery or other matters specific to your industry.

My second approach is to network: it is a knowledge and experience multiplier that’s second to none. Check with colleagues; get their advice and listen to their own experiences.

Of course, you don’t know what you don’t know, but thinking of the end, as well as the beginning, should put you in a better spot.

Putting the ‘converged’ in hyperconverged support: What to do on the second day


By Geoff Smith, Senior Manager, Managed Services Business Development

Today’s hyperconverged technologies are here to stay it seems.  I mean, who wouldn’t want to employ a novel technology approach that ‘consolidates all required functionality’ into a single infrastructure appliance that provides an ‘efficient, elastic pool of x86’ resources controlled by a ‘software-centric’ architecture?  I mean, outside of the x86 component, it’s not like we haven’t seen this type of platform before (hello, mainframe anyone?).

But this is not about the technology behind HCI (hyperconverged infrastructure), nor about whether this technology is the right choice for your IT demands. It’s more about what you need to consider on day two, after your new platform is happily spinning away in your data centre.  Assuming you have determined that the hyperconverged path will deliver technology and business value for your organisation, why wouldn’t you extend that belief system to how you plan on operating it?

Today’s hyperconverged vendors offer very comprehensive packages that include some advanced support offerings.  They have spent much time and energy (and VC dollars) in creating monitoring and analytics platforms that are definitely an advancement over traditional technology support packages. 

While technology vendors such as HP, Dell/EMC, Cisco and others have for years provided phone-home monitoring and utilisation/performance reporting capabilities, hyperconverged vendors have pushed these capabilities further with real-time analytics and automation workflows, such as Nutanix Prism, SimpliVity OmniWatch, and OmniView.  Additionally, these vendors have aligned support plans to business outcomes – ‘mission critical’, ‘production’, ‘basic’, and so on.

Now you are asking: “Okay Mr. Know-It-All, didn’t you just debunk your own argument?” Au contraire I say – I have just reinforced it.

Each hyperconverged vendor technology requires its own separate platform for monitoring and analytics.  And these tools are restricted to just what is happening internally within the converged platform.  Sure, that covers quite a bit of your operational needs, but is it the complete story?

Let’s say you deploy SimpliVity for your main data centre.  You adopt the ‘mission critical’ support plan, which comes with OmniWatch and OmniView.  You now have great insight into how your OmniCube architecture is operating, and you can delve into the analytics to understand how your SimpliVity resources are being utilised.  In addition, you get software support with one-, two-, or four-hour response (depending on the channel you use – phone, email, web ticket).  You also get software updates and RCA reports.  It sounds like a comprehensive, ‘converged’ set of required support services.

And it is, for your selected hyperconverged vendor.  What these services do not provide is a holistic view of how the hyperconverged platforms are operating within the totality of your environment.  How effective is the networking that connects it to the rest of the data centre?  What about non-hyperconverged based workloads, either on traditional server platforms or in the cloud?  And how do you measure end user experience if your view is limited to hyperconverged data-points?  Not to mention, what happens if your selected hyperconverged vendor is gobbled up by one of the major technology companies or, worse, closes when funding runs dry?

Adopting hyperconverged as your next-generation technology play is certainly something to consider carefully, and has the potential to positively impact your overall operational maturity.  You can reduce the number of vendor technologies and management interfaces, get more proactive, and make decisions based on real data analytics. But your operations teams will still need to determine whether the source of impact is within the scope of the hyperconverged stack and covered by the vendor support plan, or whether it’s symptomatic of an external influence.

Beyond the awareness of health and optimised operations, there will be service interruptions.  If there weren’t, we would all be in the unemployment line.  Will a one-hour response be sufficient in a major outage?  Is your operational team able to respond 24×7 with hyperconverged skills?  And how will you consolidate governance and compliance reporting between the hyperconverged platform and the rest of your infrastructure?

Hyperconverged platforms can certainly enhance and help mature your IT operations, but they provide only part of the story.  Consider carefully whether their operational and support offerings are sufficient for overall IT operational effectiveness.  Look for ways to consolidate the operational information and data provided by hyperconverged platforms with the rest of your management interfaces into a single control plane, where your operations team can work more efficiently.  If you’re looking for help, GreenPages can provide this support via its Cloud Management as a Service (CMaaS) offering.

Convergence at this level is even more critical to ensure maximum support of your business objectives. 

Four ways SMBs can take advantage of the cloud

(Image Credit: iStockPhoto/pixdeluxe)

While cloud adoption among SMBs continues to rise, there are still plenty of SMB customers I speak with who are reluctant to take advantage of what the cloud has to offer. Below are four examples of how cloud adoption can help SMBs excel:

Access to enterprise class features

The cloud gives SMBs access to enterprise-class features that many couldn’t normally take advantage of. Geo-location and load balancing are both great examples. If an SMB puts its website up on Microsoft Azure, a click of a button can automatically keep three copies locally and three more in three different geographic locations.

This way, if something happens at one of the locations, all of the data is already at another data centre, ready to spin up. Doing this without the cloud would be extremely costly and quite unrealistic for the budgets of most SMB organisations.

Disaster recovery as a service (DRaaS)

DRaaS is a cost-effective insurance policy for SMBs. Instead of having to buy and maintain separate servers, SAN storage, networking, firewalls, rack space, and so on, I can take my backups and load them up to the cloud (Azure, vCloud Air, Cirrity, etc.). This gives me a way to fail my infrastructure over in the event of a disaster.

SMBs that go this route can pay less per month for this capability than it would cost to buy on-prem equipment, and buying the equipment may mean you aren’t using all of it anyway.

Desktops in the cloud

Another way SMBs can use the cloud is to host desktops. Doing this means you don’t have to buy or maintain physical desktops, and it allows for greater scalability. There are plenty of companies where users change frequently, so internal IT is tasked with adding or removing users on a regular basis, which means manually building out desktops each time.

By hosting your desktops in the cloud, you can automatically spin up or down when needed. This not only provides cost savings, but will also save your IT department a significant amount of time.

Application scalability

If you are running, say, Microsoft Azure, you can set Azure to keep CPU utilisation between 25% and 75%. When utilisation climbs above 75%, Azure automatically turns up more servers and load balances them; if it dips below 25%, it decommissions servers. This allows for automatic scaling based on user activity. Doing this traditionally is much more expensive and in many cases not possible for SMBs.
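The 25-75% rule described above boils down to a simple decision function. This is a sketch of the threshold logic only, not the Azure autoscale API; the bounds and instance limits are illustrative:

```python
# Sketch of threshold-based autoscaling: scale out above the high-water
# mark, scale in below the low-water mark, otherwise hold steady.

def autoscale_decision(cpu_percent, instances, low=25, high=75, min_n=1, max_n=10):
    """Return the new instance count for the next scaling interval."""
    if cpu_percent > high and instances < max_n:
        return instances + 1   # turn up another server and rebalance
    if cpu_percent < low and instances > min_n:
        return instances - 1   # decommission one server
    return instances           # within the band: do nothing
```

In practice a real autoscaler also smooths the metric over a window and enforces cooldown periods so one busy minute doesn’t cause flapping, but the core rule is exactly this comparison.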

The bottom line is SMBs should take a closer look at cloud options that can increase efficiencies and drive down costs. The corporate IT department is evolving. Has yours kept pace?

Can you think of further key ways for SMBs to take advantage of the cloud? Let us know in the comments.

Analysing emerging technologies across the cloud storage landscape

There has been an influx of emerging technologies across the storage landscape. Many vendors are using the exact same hardware but are figuring out ways to do a lot of smarter things with the software. In this post, I’ll cover a handful of vendors who are doing a great job innovating at the software layer to improve storage technology and performance.


Nimble was founded by some of the same people behind Data Domain, the data deduplication company whose success led EMC to buy it in 2009. Data Domain is known for its massively popular backup targets; it was one of the first to compress and deduplicate data as it was being stored, greatly reducing the amount of data that needed to be kept.

Essentially, Nimble takes commodity solid state drives and slow 7,200 RPM spinning disks and turns them into an extremely fast, well-performing hybrid SAN, while delivering excellent compression ratios and the best support team in the business. Very simply, they’re doing smarter things with the same technology everyone else is using. It’s highly scalable and well designed. For example, you can change the controllers on the array during business hours with no interruptions, as opposed to having to wait until off hours as companies have traditionally been forced to do.


What’s interesting about DataGravity is that they have taken an entirely different approach to traditional storage. They make arrays that perform on par with just about everyone else’s, yet their secret sauce is taking unstructured, uncategorized data and categorizing it at the time it’s written. Why is this important? A lot of companies have to keep track of Social Security numbers, credit card numbers, and so on.

Traditionally, you have to buy expensive software to do this. DataGravity does it at the time the data is written, so you don’t need to invest in any additional software. That sounds too good to be true, right? Every modern SAN has two storage controllers, typically running active/passive or active/active. DataGravity has one controller handling traditional storage duties while the other categorizes data and handles data management functions. This eliminates the need for expensive compliance and data protection management software.
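A minimal sketch of the write-time classification idea described above: scan each payload for patterns such as Social Security and credit card numbers and tag it before it lands on disk. Real classifiers are far more sophisticated; these regexes are deliberately simple illustrations:

```python
# Sketch: tag incoming data with compliance-relevant categories at write
# time, instead of scanning the whole array with separate software later.

import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")           # e.g. 123-45-6789
CARD = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")     # e.g. 4111 1111 1111 1111

def classify_on_write(payload: str) -> set:
    """Return the set of sensitive-data tags found in a payload."""
    tags = set()
    if SSN.search(payload):
        tags.add("ssn")
    if CARD.search(payload):
        tags.add("credit_card")
    return tags
```

Because the tags are attached as the data arrives, a later compliance query ("show me everything containing an SSN") becomes a metadata lookup rather than a full-array scan.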

Who should take advantage?

Any company that has to deal with regulatory compliance (healthcare, finance).


SimpliVity offers hyperconverged infrastructure similar to Nutanix, EVO: Rail, and Dell VRTX. The piece that makes them unique is their dedication to reducing IO. They compress and deduplicate all data at ingestion, once and forever. This means that if I write a data block that is already on the storage system, there is zero IO; I don’t have to rewrite it.

Furthermore, I can migrate virtual machines from one data center to another. It’s easy to migrate a 5 GB virtual machine and write less than 100 MB across the WAN. Also, when I clone a machine, there is no IO. IO at that scale is something companies can’t normally absorb during work hours because it takes up too many resources and would bring systems to their knees; you can’t do it without impacting the business. When you have SimpliVity, there is no need for a third-party backup vendor. Data is spread across nodes, and only unique blocks are written. It’s easy to have petabytes of backups living on terabytes of storage.
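The "zero IO for a block that already exists" idea can be illustrated with a toy content-addressed store: blocks are keyed by their hash, so a duplicate write costs nothing. This is a sketch of the general deduplication technique, not SimpliVity’s actual implementation:

```python
# Toy content-addressed block store: a block already present in the
# store is never written again; callers keep only a hash reference.

import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}   # digest -> block bytes
        self.writes = 0    # physical writes actually performed

    def put(self, block: bytes) -> str:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.blocks:    # only unique blocks hit the disk
            self.blocks[digest] = block
            self.writes += 1
        return digest                    # reference stored in place of data
```

This is also why cloning or migrating a deduplicated VM is cheap: most of its blocks already exist at the destination, so only the references and the few unique blocks move.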

Who should take advantage?

We have a client in Massachusetts who is looking to move to a colocation facility in Florida. For this use case, SimpliVity is a quick and easy way to migrate that data geographically without huge impacts on bandwidth, WAN costs, etc.

Pure Storage

If you’re looking for ridiculously fast storage, Pure Storage could be the solution for you. They use the same flash technology as everyone else, but they read and write to it differently, so it’s much more efficient, optimized, and suited to how flash actually works. Typically, vendors have been writing to flash drives the same way they treated spinning disk.

Who should take advantage?

If your organization has applications that require tremendously fast storage, this could be a good fit for you. One example would be extremely demanding Oracle, SAP, or SQL applications.


VMware brings a lot of great benefits to the table with EVO: Rail. EVO: Rail is essentially VMware Virtual SAN with prebuilt hardware that can be deployed very quickly and easily. It’s a scalable, software-defined data center building block that provides compute, networking, storage and management. Furthermore, it’s highly resilient.

Who should take advantage?

This is a good fit for organizations that have branch offices where there is a need for smaller VMware environments at multiple locations. It’s a quick, inexpensive way to manage them all centrally from vCenter.

Be sure to keep your eyes out for HP, which is making innovations in flash storage. More on that soon.

Have you used any of these solutions? How have your experiences been? If you would like to talk more about this, send us an email at

What to move to the cloud: A more mature model for SMEs

Picture credit: iStockPhoto

By Chris Chesley, Solutions Architect

Many SMBs struggle with deciding if and what to move to the cloud. Whether it’s security concerns, cost, or lack of expertise, it’s often difficult to map out the best possible solution. Here are eight applications and services to consider when your organization is looking to move to the cloud and reduce its server footprint.

What to move to the cloud

1. Email

Obviously in this day and age email is a requirement in virtually every business. A lot of businesses continue to run Exchange locally. If you are thinking about moving portions of your business out to the cloud, email is a good place to start. Why should you move to the cloud?

Simple: it’s pretty easy to do, and at this point it’s been well documented that mail runs very well in the cloud. It takes a special skill set to run Exchange beyond just adding and managing users. If something goes wrong and you have an issue, it can often be very complicated to fix. It can also be pretty complicated to install. A lot of companies do not have access to high-quality Exchange skills.

Moving to the cloud solves those issues. Having Exchange in the cloud also gets your company off the 3-5 year refresh cycle for the hardware needed to run Exchange, as well as the upfront cost of the software.

Quick Tip – Most cloud e-mail providers offer anti-spam/anti-virus functionality as part of their offering. You can also take advantage of cloud-based AS/AV providers like McAfee’s MX Logic.

2. File shares

Small to medium sized businesses have to deal with sharing files securely and easily among their users. Typically, that’s a file server running locally in your office or at multiple offices. This can present a challenge of making sure everyone has the correct access and that there is enough storage available.

Why should you move to the cloud? There are easy alternatives in the cloud to avoid dealing with those challenges. Such alternatives include Microsoft OneDrive, Google Drive, or a file server in Microsoft Azure. In most cases you can use Active Directory as the central repository of rights, managing passwords and permissions in one place.

Quick Tip – OneDrive is included with most Office 365 subscriptions. You can use Active Directory authentication to provide access through that.

3. Instant messaging/online meetings

This one is pretty self-explanatory. Instant messaging can often be a quicker and more efficient form of communication than email. There are many platforms out there that can be used including Microsoft Lync, Skype and Cisco Jabber. A lot of these can be used for online meetings as well including screen sharing. Your users are looking for these tools and there are corporate options. With a corporate tool like Lync or Jabber, you can be in control. You can make sure conversations get logged, are secure and can be tracked. Microsoft Lync is included in Office 365.

Quick Tip – If you have the option, you might as well take advantage of it!

4. Active Directory

It is still a best practice to keep an Active Directory domain controller locally at each physical location to speed the login and authentication process, even when some or most of your applications or services are based in the cloud. This still leaves most companies with an issue if their site or sites go down for any reason. Microsoft now provides the ability to run a domain controller in its cloud with Azure Active Directory, giving SMBs the redundancy that many of them currently lack.

Quick Tip – Azure Active Directory is pre-integrated with Salesforce, Office 365 and many other applications. Additionally, you can setup and use multi-factor authentication if needed.

5. Web servers

Web servers are another very easy workload to move to the cloud, whether it’s Rackspace, Amazon, Azure, VMware, etc. The information is not highly confidential, so there is much lower risk than putting extremely sensitive data up there. By moving your web servers to the cloud, you keep your website’s traffic off your local connection; it all goes to the cloud instead.

Quick Tip – Most cloud providers offer SQL Server back-ends as part of their offerings. This makes it easy to tie the web server into a backend database. Make sure you ask your provider about this.

6. Backup 

A lot of companies are looking for alternate locations to store backup files. It’s easy to back up locally on disk or tape and then move offsite. It’s often cheaper to store in the cloud and it helps eliminate the headache of rotating tapes.

Quick Tip – Account for bandwidth needs when you start moving backups to the cloud. This can be a major factor.

7. Disaster recovery

Now that you have your backups offsite, it’s possible to have the capacity to run virtual machines or servers in the cloud in the event of a disaster. Instead of moving data to another location, you can pay to run your important apps in the cloud in case of disaster. It’s usually going to cost less to do this.

Quick Tip – Make sure you look at your bandwidth closely when backing up to the cloud. Measure how much data you need to back up, and then calculate the bandwidth you will need. Most enterprise-class backup applications allow you to throttle backups so they do not impact the business.
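The bandwidth math suggested in this tip can be sanity-checked with a small helper: given a backup size and a nightly window, how many Mbps does the link need? (Decimal units are assumed here; real planning should add headroom for protocol overhead and retries.)

```python
# Sketch: minimum sustained throughput needed to push a backup of a
# given size through a fixed time window.

def required_mbps(backup_gb: float, window_hours: float) -> float:
    bits = backup_gb * 8 * 1000 ** 3     # GB -> bits (decimal units)
    seconds = window_hours * 3600
    return bits / seconds / 1_000_000    # bits per second -> Mbps
```

For example, pushing a 100 GB nightly backup through an 8-hour window needs roughly 28 Mbps of sustained throughput, before any compression or deduplication savings.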

8. Applications

A lot of common applications that SMBs use are offered as a cloud service, for example Salesforce and Microsoft Dynamics. These companies build and host the product so that you don’t have to run it onsite. You can take advantage of the application for a fraction of the cost and headache.

In conclusion, don’t be afraid to move different portions of your environment to the cloud. For the most part, it’s less expensive and easier than you may think. If I were starting a business today, the only things I would run locally would be an AD controller and a file server. The business can be faster and leaner without the IT infrastructure overhead that was needed to run a business ten years ago.

Looking for more tips? Download this whitepaper written by our CTO Chris Ward, “8 Point Checklist for a Successful Data Center Move.”