Category archive: Featured

Microsoft Ignite 2015: Top News & Announcements

This week I was fortunate to be able to attend my first-ever Microsoft Ignite Conference, held at McCormick Place in Chicago. Me and 23,000 of my closest friends. We all gathered in one of the most cavernous buildings I have ever been in to see what Microsoft would unveil, and we were not disappointed. Satya Nadella, Joe Belfiore and Gurdeep Singh Pall brought us insight into what is to come and began to showcase the innovation being delivered in the latest Microsoft miracles, miracles meant to empower IT pros in companies all over the globe.

It quickly became apparent that Microsoft has made significant strides in reinventing productivity for people and organizations. The new and upcoming Office 2016 features will enable companies to create effective communication flows between folks on premises and teleworkers. From my perspective, how can the co-creation feature in Office 2016 not turn individual productivity into collective value? Quite literally, you see folks type, letter by letter and word by word, from anywhere in the world. Gone are the days of running this effort over email, painstakingly waiting for Jim to respond and then emailing the document to Jennifer. In today's new IT integrator world, this means we can share documents with prospective customers via Skype for Business and mark them up live, with the customer adding to the flow in real time, in the actual Word document, not just on a whiteboard. Enable Track Changes and you can see what each contributor is doing, then merge the changes at the end.

This leads to faster turnaround on important Statements of Work, BAAs or other sales documents, speeding the rate of close on a particular opportunity.

For GreenPages, and for our fellow IT pros in their respective customer organizations, this is our collective opportunity to create better and more adaptable infrastructures. No longer are we burdened by hardware lead times and costs that blow up our budgets just to add capacity for DevOps. The Microsoft Cloud makes it possible to create virtual datacenters on the fly, edit documents live, store them in the Microsoft Cloud and recall them from anywhere at a moment's notice, all at a lower cost than ever before. I also want to highlight that this week at Ignite was not just about Azure, Office 365 and Office 2016. We also saw in-depth walkthroughs of Skype for Business, Windows Server 2016, Exchange 2016 and SharePoint 2016 for the first time. One word… Impressive.

 

Now, let’s talk about what Microsoft sees as the new online work experience.

Teams

Work used to be cube-based: do your own thing and don't lift your head (unless you smell food). Now it's communal. People still work individually on their own devices, in their own space, often on their own time, but teams now deliver projects to customers more effectively. With the foundation of the new Office 365 Groups, they can work in communal, virtual teams, again anytime, anywhere. The ability to quickly bring people together to solve a complex business problem must be simple and lightweight, and it must let team members work the way they want to (much like the millennial worker wants, or soon will). It is this ubiquitous team element that allows organizations such as GreenPages to listen to customers, take notes, create content, video chat, IM and tweet, and to ensure our practices and our customers are part of the OneTeam approach driving collaborative context.

As a Microsoft VTSP, I have access to Microsoft's Office 365 portal as my communication and knowledge-base toolset. I have often lamented to customers during presentations that I wished Microsoft would release Office Delve to the consumer. What a great real-time presentation of data: pertinent to what you are working on, in a single-pane-of-glass experience. Well, voilà, we saw the preview of the Office 365 Groups "hub" in Office Delve, and Delve itself has now been released into production in Office 365. We also saw the ability to have group conversations in email via Outlook 2016.

Human Mobility

Today, work is what we do, not where we go. My mission at GreenPages is to help develop a next-generation VAR that ensures people can be productive wherever they are, using whatever device they have, resulting in exemplary service to all of our customers. This includes both GreenPages' employees and GreenPages' customers. Many reports say that 80% of time spent on phones and tablets is within native applications, so Microsoft takes the logical next step and releases Word, PowerPoint, Excel, Outlook, OneNote, Skype, OneDrive, Yammer and more across all devices and platforms. These newly christened Office universal applications for Windows 10 are another great step on this journey. So, I immediately updated my Microsoft Surface Pro 3 to Windows 10 and Office 2016. So far, so cool.

I am now a mobility monster. Maybe I should change my Microsoft Surface Type Cover to green. No… the whole thing should be green. I'll show you a picture in my next blog.

Meetings

At GreenPages today, our meetings are as often ad hoc as they are pre-scheduled, and there are very few meetings where everyone is in the room. Most meetings, even those with customers, include one or more remote attendees. But I live for body language; I need to see how a person is reacting to the information I'm offering so that I can adapt and make sure they are comfortable with it. The physical cue is imperative for me. Virtual attendees don't offer body language. They don't offer cues, and most of the time you hit it out of the park, but sometimes you miss that shift in the chair and don't find out you were off base until a follow-up from the customer crushes your record of successful delivery. I believe, as does Microsoft, that moving forward every meeting scheduled in Office 365 will automatically be a Skype for Business meeting, so customers and fellow employees don't have to do anything extra to hold video meetings. With Microsoft's roll-out of the new Skype for Business experience, it's easy to get a meeting up and running in a few clicks, and video just works. There's no need for plugins or special software; it is part of the default experience. Now add in great hardware integration across the Surface Hub, Skype Room Systems and offerings from vendors like Cisco, Logitech and Polycom, and you can have smart meeting rooms on the fly.

Content co-creation

One of the more exciting things we saw in the Office 2016 Public Preview release was content co-creation. I tried this in practice once my upgrade was complete. All Office content is, by design and by default, saved to and shared from OneDrive or OneDrive for Business (OD4B). This content can be created and edited with real-time co-authoring in Word 2016. Email attachments are also becoming a thing of the past: Outlook's new attachments are simply shared from the cloud, much like you would share a link from Microsoft SharePoint.

I think this is an unprecedented period in Microsoft history: a full-on charge at the cloud, better yet the Microsoft Cloud, and finally a rich Office package that makes the cloud feel like the hard drive on your desktop, laptop, tablet, iPad, Surface or Mac. It was a very exciting week, and this just begins the build-up to WPC in Orlando this year. I am sure more is to come from this next evolution.

Have you been dragging your feet leading up to the Windows Server 2003 End of Life date? Read David’s whitepaper to get a better idea of migration options available to organizations.

 

By David Barter, Practice Manager – Microsoft Technologies

EMC World 2015: Event Recap

After EMC World 2015, I’m languishing in airports today in post-conference burnout – an ideal time to deliver a report on the news, announcements and my prognostications on what this means to our business.

The big announcements were delivered in General Sessions on Monday (EMC Information Infrastructure & VCE) and on Tuesday (Federation: VMware & Pivotal). The Federation announcements are more developer and futures oriented, although important strategically, so I’ll pass on that for now.

EMC and VCE have updated their converged and hyperconverged products pretty dramatically. Yes, VSPEX Blue is hyperconverged, however unfortunate it is that the name links an EVO:RAIL appliance to a reference-architecture product line.

The products can be aligned as:

  1. Block
  2. Rack
  3. Appliances


The VCE Vblock product line adheres to its core value proposition closely.

  1. Time from order to completely deployed on the data center floor in 45 days. (GreenPages will provide the Deploy & Implementation services. We have three D&I engineers on staff now.)
  2. Cross-component unified upgrades through a Release Candidate Matrix: every single bit of hardware is tested in major and minor upgrades to ensure compatibility across storage, switches, blades and add-ons (RecoverPoint, Avamar, VPLEX).
  3. Unified support – one call to VCE, not to all the vendors in the build

However, VCE is adding options and variety to make the product less monolithic.

  1. VxBlock – this is the XtremIO version, intended for large VDI or mission-critical transactional deployments (trading, insurance, national healthcare claims processing). The Beast is a Vblock of eight 40 TB X-Brick nodes: 320 TB before dedupe and compression, or nearly 2 PB assuming a realistic data reduction ratio of roughly 6:1. Yes, that is two petabytes of all-flash array. Remote replication is now fully supported with RecoverPoint.
  2. VxRack – this is a Vblock without an array, but it isn't VSAN either. It is… ScaleIO, a software storage solution that pools server storage into a shared pool. The minimum configuration is 100 compute nodes, which can be dense performance (a 4-node form factor in a 2U chassis) or capacity oriented. The nodes can be bare metal or run a hypervisor of any sort. This can scale to 328 petabytes. Yes, petabytes. This is web-scale, but they call it "Rack Scale" computing (first generation). More on that later…
  3. Vscale – networking! This is leaf-and-spine networking in a rack to tie a VxRack or Vblock deployment together, at scale. "One Ring to Rule Them All." This is big, literally. Imagine ordering a petabyte installation of VxBlock, VxRack and Vscale, and rolling it onto the floor in less than two months.

So, that is Block and Rack. What about Appliance?

Enter VSPEX Blue, the EMC implementation of EVO:RAIL. This has definite value in…

  • Pricing
  • Unified management & support
  • The “app store” with
    • integrated backup (VDPA)
    • replication (vRPA)
    • Cloud Array integration (TwinStrata lives!), a virtual iSCSI controller that will present cloud storage to the system as a backup target or a capacity tier.

This post from Mike Colson provides a good explanation.

Future apps will include virus scanning, links to Public IaaS and others.

I set one up in the lab in 15 minutes, as advertised, although I had to wait for the configuration wizard to churn away after I initialized it and entered all the networking details. Professional Services will be required, as EMC mandates PS for implementation. Our team is prepared to deploy this, and we can discuss how it compares to other hyperconverged appliances. Contact us for more information.

There are other announcements, some in sheer scale and some in desirable new features.

Data Domain Beast: DD9500, 58.7 TB/hr. and 1.7 PB of capacity. This is rated at 1.5x the performance and 4x the scalability of the nearest competitor.

VPLEX News: The VPLEX Witness can now be deployed in the public Cloud (naturally EMC recommends the EMC Hybrid Cloud or vCloud Air). The Witness has to be outside the fault domains of any protected site, so where better than the Cloud? It is a very lightweight VM.

CloudArray (TwinStrata's Cloud Array Controller) is integrated with VPLEX. You can have a distributed volume spanning on-premises and cloud storage. I'm still trying to grasp the significance of this. The local cache for the CloudArray controller can be very fast, so even latency-sensitive applications aren't necessarily ruled out. The things you could do…

VPLEX is now available in a Virtual Edition (VPLEX/VE). This will obviously come with some caveats and restrictions, but this also is a fantastic new option for smaller organizations looking for the high availability that VPLEX provides, as well as data mobility and federation of workloads across metro distances.

VVOL: Chuck Hollis (@chuckhollis) led an entertaining and informative ‘Birds of a Feather’ session for VVOLs. Takeaway – this is NOT commonly deployed yet. Only a handful of people have even set it up, and mostly for test. This was in a room with at least 150 people, so high interest, but low deployment. Everyone sees the potential and is looking forward to real world policy based deployments on industry standard storage. This is an emerging technology that will be watched closely.

VNX/VNXe: I didn't see or hear many striking features or upgrades in this product line, but an all-flash VNXe was trumpeted. I'll be looking at its performance and design specifications more closely to see how it might fit targeted use cases or general-purpose storage for SMB and commercial customers. There is talk of virtualizing the VNX array, as well as Isilon, so pretty soon nearly every controller or device in the EMC portfolio will be available as a virtual appliance. This leads me to…

ViPR Controller and ViPR SRM: Software Defined Storage

ViPR Controller is definitely a real product with real usefulness. It is the automation and provisioning tool for a wide variety of infrastructure elements, allowing for the creation of virtual arrays with policy-based provisioning and leveraging every data service imaginable: dedupe, replication, snapshots, file services, block services and so on.

ViPR SRM is the capacity reporting and monitoring tool that provides the management of capacity that is needed in an SDS environment. This is a much improved product with a very nice GUI and more intuitive approach to counters and metrics.

I’d recommend a Storage Transformation Workshop for people interested in exploring how SDS can change the way (and cost) of how you manage your information infrastructure.

More on EVO:RAIL/VSPEX Blue

I met with Mike McDonough, the mastermind behind EVO:RAIL. He is indeed a mastermind. The story of the rise of EVO:RAIL as a separate business unit is interesting enough (300 business cases submitted, 3 approved, and he won't say what the other mystery products are), but the implementation, strategy and vision are what matter to us. The big factor here was boiling down the support cases to the 370 most common reasons for support calls, all around configuration, management and hardware. The first version of EVO:RAIL addressed 240 of those issues. Think of it as a safety rail around a vSphere appliance that prevents these common and easily avoidable issues without restricting flexibility too much. The next version will most likely incorporate NSX; security and inspection are the emphases for the next iteration.

Partners and distributors were chosen carefully. GreenPages is one of only 9 national partners chosen for this, based on our long history as a strategic partner and our thought leadership! The tightly controlled hardware compatibility list is a strength, as future regression tests for software and other upgrades will keep the permutations to a minimum. (By the way, the EMC server platform is Intel for VxRack and VSPEX Blue, and I believe for all of their compute modules across all their products.) The competitive implication is that appliance vendors buying white-box hardware on commodity contracts, with flexibility in drives, memory and CPU, will face an exponentially more difficult task maintaining the growing permutations of hardware versions over time.

Final Blue Sky note:

Rack Scale is an Intel initiative that promises an interesting future of increased hardware awareness for hypervisors, but it is a very forward-looking project. Read Scott Lowe's thoughts on this.

 

As always, contact us for more details and in-depth conversations about how we can help you build the data center of the future, today.

 

By Randy Weis, Practice Manager, Information Infrastructure

VMware NSX vs. Cisco ACI: Which SDN solution is right for me?

In a video I did recently, I discussed steps organizations need to take to prepare their environments to be able to adopt software defined technologies when the time comes. In this video, I talk about VMware NSX and Cisco ACI.

VMware NSX and Cisco ACI are both really hot technologies generating a lot of conversation. Both are API-driven SDN solutions, each very good in its own areas and each coming at the problem from a unique perspective. While they are very different solutions, they do have overlapping functionality.

http://www.youtube.com/watch?v=xtdfHGnCovA

 

Are you interested in talking with Nick about VMware NSX or Cisco ACI? Let’s set up some time!

 

By Nick Phelps, Principal Architect

8 Things You May Not Know About Docker

It's possible that containers and container management tools like Docker will be the single most important thing to happen to the data center since the mainstream adoption of hardware virtualization in the 90s. In the past 12 months, the technology has matured beyond powering large-scale startups like Twitter and Airbnb and found its way into the data centers of major banks, retailers and even NASA. When I first heard about Docker a couple of years ago, I started off as a skeptic. I blew it off as skillful marketing hype around the old concept of Linux containers. But after incorporating it successfully into several projects at Spantree, I am now a convert. It has saved my team an enormous amount of time, money and headaches and has become the underpinning of our technical stack.

If you’re anything like me, you’re often time crunched and may not have a chance to check out every shiny new toy that blows up on Github overnight. So this article is an attempt to quickly impart 8 nuggets of wisdom that will help you understand what Docker is and why it’s useful.

 

Docker is a container management tool.

Docker is an engine designed to help you build, ship and execute application stacks and services as lightweight, portable and isolated containers. The Docker engine sits directly on top of the host operating system. Its containers share the kernel and hardware of the host machine with roughly the same overhead as processes launched directly on the host.

But Docker itself isn't a container system; it merely piggybacks off the container facilities baked into the OS, such as LXC on Linux. These facilities have existed in operating systems for many years, but Docker provides a much friendlier image management and deployment system for working with them.
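As a quick illustration (a minimal sketch assuming a Linux host with Docker installed; the image tag is just an example), you can see the shared-kernel behavior for yourself:

```
# Pull a small base image from the public registry.
docker pull ubuntu:14.04

# Run a one-off container. uname -r prints the *host* kernel version,
# because the container is just an isolated process, not a guest OS.
docker run --rm ubuntu:14.04 uname -r
```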

 

Docker is not a hardware virtualization engine.

When Docker was first released, many people compared it to virtualization hypervisors like VMware, KVM and VirtualBox. While Docker solves a lot of the same problems and shares many of the same advantages as hypervisors, it takes a very different approach. Virtual machines emulate hardware. In other words, when you launch a VM and run a program that hits disk, it's generally talking to a "virtual" disk. When you run a CPU-intensive task, those CPU commands need to be translated into something the host CPU understands. All these abstractions come at a cost: two disk layers, two network layers, two processor schedulers, even two whole operating systems that need to be loaded into memory. These limitations typically mean you can only run a few virtual machines on a given piece of hardware before you start to see an unpleasant amount of overhead and churn. On the other hand, you can theoretically run hundreds of Docker containers on the same host machine without issue.

All that being said, containers aren’t a wholesale replacement for virtual machines. Virtual machines provide a tremendous amount of flexibility in areas where containers generally can’t. For example, if you want to run a Linux guest operating system on top of a Windows host, that’s where virtual machines shine.

 

Docker uses a layered file system.

As mentioned earlier, one of the key design goals for Docker is to provide image management on top of existing container technology. In Docker terms, an image is a static, immutable snapshot of a container’s file system. But Docker rather cleverly takes this snapshotting concept a step further by incorporating a copy-on-write filesystem into its design. I’ve found the best way to explain this is by example:

Let's say you want to build a Docker image to run your Java web application. You might start with one of the official Docker base images that have Java 8 pre-installed. In your Dockerfile (a text file that tells Docker how to build your image), you'd specify that you're extending the Java 8 image, which instructs Docker to pull down the pre-built snapshot associated with it. Now, let's say you execute a command that downloads, extracts and configures Apache Tomcat into /opt/tomcat. This command will not affect the state of the original Java 8 image. Instead, it starts writing to a brand new filesystem layer. When a container boots up, it merges these filesystems together: it may load /usr/bin/java from one layer and /opt/tomcat/bin from another. In fact, every step in a Dockerfile produces a new filesystem layer, even if only one file is changed. If you're familiar with the Git version control system, this is similar to a commit tree. In Docker, this gives users tremendous flexibility to compose application stacks iteratively.
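Here's a rough sketch of the Dockerfile just described (the base image tag, Tomcat version and download URL are illustrative assumptions, not a tested recipe):

```
# Extending the official Java 8 base image pulls down its pre-built
# snapshot; the layers beneath this line are never modified.
FROM java:8

# Download Tomcat, then unpack it into /opt/tomcat. Each instruction
# below writes to its own brand new filesystem layer.
ADD http://archive.apache.org/dist/tomcat/tomcat-8/v8.0.21/bin/apache-tomcat-8.0.21.tar.gz /tmp/tomcat.tar.gz
RUN tar xzf /tmp/tomcat.tar.gz -C /opt && \
    mv /opt/apache-tomcat-8.0.21 /opt/tomcat && \
    rm /tmp/tomcat.tar.gz

EXPOSE 8080
CMD ["/opt/tomcat/bin/catalina.sh", "run"]
```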

At Spantree, we have a base image with Tomcat pre-installed, and on each application release we merely copy the latest deployable asset into a new image, tagging the Docker image to match the release version. Since the only variation in these images is the very last layer, a 90MB WAR file in our case, each image is able to share the same ancestors on disk. This means we can keep our old images around and roll back on demand at very little added cost. Furthermore, when we launch several instances of these applications side by side, they share the same read-only filesystems.
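A hedged sketch of that release flow (the image names, tag and WAR path here are invented for illustration):

```
# Release Dockerfile: a single thin layer on top of a shared base image.
#   FROM spantree/tomcat-base:latest
#   COPY myapp.war /opt/tomcat/webapps/ROOT.war

# Build the release image, tagging it to match the release version:
docker build -t spantree/myapp:1.4.2 .

# Rolling back is just launching the previous tag, which still shares
# all of its ancestor layers on disk:
docker run -d -p 8080:8080 spantree/myapp:1.4.1
```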

 

Docker can save you time.

Many years ago, I was working on a project for a major restaurant chain and on the first day I was handed a 12 page Word document describing how to get my development environment set up to develop against all the various applications. I had to install a local Oracle database, a specific version of the Java runtime, along with a number of other system and library dependencies and tooling. The whole setup process cost each member of my team approximately a day of productivity, which unfortunately translated to thousands of dollars in sunk costs for our client. Our client was used to this and considered this part of the cost of doing business when onboarding new team members, but as consultants we would have much rather spent that time building useful features that add value to our client’s business.

Had Docker existed at the time, we could have cut this process from a day to mere minutes. With Docker, you can express servers and services as code, similar to configuration management tools like Puppet, Chef, Salt and Ansible. But, unlike these tools, Docker goes a step further by actually pre-executing these steps for you during its build process, snapshotting the output as an indexed, shareable disk image. Need to compile Node.js from source? No problem. The Docker runtime will do that on build and simply snapshot the output for you at the end. Furthermore, because Docker containers sit directly on top of the Linux kernel, there's no risk of environmental variations getting in the way.

Nowadays, when we bring a new team member into a client project, they merely have to run `docker-compose up`, grab a cup of coffee, and by the time they're back they should have everything they need to start working.
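For instance, a minimal docker-compose.yml for such a project might look like this (hypothetical services, written in the flat format Compose used at the time):

```
# The application and its database, defined as code and started together.
web:
  build: .            # build the app image from the project's Dockerfile
  ports:
    - "8080:8080"     # publish the app on the developer's machine
  links:
    - db              # the database is reachable inside "web" as host "db"
db:
  image: postgres:9.4 # official image, pulled automatically from Docker Hub
  environment:
    POSTGRES_PASSWORD: devpassword
```

A single `docker-compose up` in that directory builds the web image, pulls Postgres and starts both containers, wired together.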

 

Docker can save you money.

Of course, time is money, but Docker can also save you hard, physical dollars in infrastructure costs. Studies from Gartner and McKinsey put average data center utilization between 6% and 12%. Quite a lot of that underutilized capacity is due to static partitioning. With physical machines, or even hypervisors, you need to defensively provision CPU, disk and memory based on the high watermark of possible usage. Containers, on the other hand, allow you to share unused memory and disk between instances. This lets you pack many more services onto the same hardware, spinning them down when they're not needed without worrying about the cost of bringing them back up again. If it's 3am and no one is hitting your Dockerized intranet application but you need a little extra horsepower for your Dockerized nightly batch job, you can simply shift resources between the two applications running on common infrastructure.
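As a small sketch of that idea (the image names are invented), the standard docker run resource flags express soft limits rather than static partitions:

```
# The intranet app gets a modest memory cap and a low CPU weight...
docker run -d --memory=512m --cpu-shares=256 example/intranet-app

# ...while the batch job gets a higher weight. --cpu-shares is relative:
# when the intranet app is idle at 3am, the batch job can consume the
# idle cycles instead of leaving them stranded.
docker run -d --memory=2g --cpu-shares=1024 example/nightly-batch
```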

 

Docker has a robust ecosystem of existing images.

At the time of writing, there are over 14,000 public Docker images available on the web, most of them shared through Docker Hub. Similar to how GitHub has largely become the home of most major open-source projects, Docker Hub is the de facto resource for sharing and working with public Docker images. These images can serve as building blocks for your application or database services. Want to test-drive the latest version of that hot new graph database you've been hearing about? Someone's probably already gone to the trouble of Dockerizing it. Need to build and host a simple Rails application with a special version of Ruby? It's now at your fingertips in a single command.
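That "single command" experience looks like this in practice, using two official images that live on Docker Hub:

```
# A disposable PostgreSQL server, published on the host's port 5432:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres

# A Redis instance for a quick experiment:
docker run -d -p 6379:6379 redis
```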

 

Docker helps you avoid production bugs.

At Spantree, we're big fans of "immutable infrastructure." That is to say, wherever possible we avoid doing upgrades or changes on live servers. Instead, we build out new servers from scratch, apply the new application code directly to a pristine image, roll the new release servers into the load balancer when they're ready, and retire the old server instances after all our health checks pass. This gives us the ability to cleanly roll back if something goes wrong. It also gives us the ability to promote the same master images from dev to QA to production with no risk of configuration drift. By extending this approach all the way to the developer machine with Docker, we can also avoid the "it works on my machine" problem, because each developer is able to test their build locally in a parallel, production-like environment.

 

Docker only works on Linux (for now).

The technologies powering Docker are not necessarily new, but many of them, like LXC and cgroups, are specific to the Linux kernel. This means that, at the time of writing, Docker is only capable of hosting applications and services that can run on Linux.  That is likely to change in the coming years as Microsoft has recently announced plans for first-class container support in the next version of Windows Server. Microsoft has been working closely with Docker to achieve this goal. In the meantime, tools like boot2docker and Docker Machine make it possible to run and proxy Docker commands to a lightweight Linux VM on Mac and Windows environments.
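For example, the Docker Machine workflow at the time looked roughly like this (the driver and VM name are illustrative, and the exact flags may have changed across early releases):

```
# Provision a lightweight Linux VM that runs the Docker daemon:
docker-machine create --driver virtualbox dev

# Point the local docker client at the VM (exports DOCKER_HOST, etc.):
eval "$(docker-machine env dev)"

# Commands now proxy transparently to the VM:
docker run --rm ubuntu:14.04 echo "hello from the Linux VM"
```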

Have you used Docker? What has your experience been like? If you’re interested in learning more about how Spantree and GreenPages can help with your application development initiatives, please reach out!

 

By Cedric Hurst, Principal, Spantree Technology Group, LLC

Cedric Hurst is Principal at Spantree Technology Group, a boutique software engineering firm based primarily out of Chicago, Illinois that focuses on delivering scalable, high-quality solutions for the web. Spantree provides clients throughout North America with deep insights, strategies and development around cloud computing, devops and infrastructure automation. Spantree is partnered with GreenPages to provide high-value application development and devops enablement to their growing enterprise client base. In his spare time, Cedric speaks at technical meetups, makes music, mentors students and hangs out with his daughter. To stay up to date with Spantree, follow them on Twitter @spantreellc

How to Prepare Your Environment for the Software Defined Networking Era

Whether it's VMware NSX or Cisco ACI, adopting any software defined networking solution requires a lot of backend work. Before you get into the weeds around specific products, take a step back. To be successful, you're going to need a level of understanding of your applications that you've never needed before. The key is to take the proper steps now so that you can adopt software defined networking technologies when the time comes.

 


 

http://www.youtube.com/watch?v=Y6pVmNrOnCA

 

 

If you’re interested in speaking to Nick in more detail about software defined technology, reach out!

 

 

By Nick Phelps, Principal Architect

If You Could Transform Your IT Strategy, Would You?

As you may know, GreenPages recently launched our Transformation Services Group, a new practice dedicated to providing customers with the agility, flexibility and innovation they need to compete in the modern era of cloud computing.

This move was designed to allow us to help companies think beyond the near term and cast a vision for the business. As we look at the market, we see a need to help organizations take a more revolutionary and accelerated approach to embracing what we call "New World" IT architectures. While this is something we have been helping companies with for many years, we believe this is a logical evolution of our business, one that builds on our legacy of quality and competency in deploying advanced virtualization and cloud solutions.

When we think about some of the great work we have done over the years, many examples come to mind. One of these is a complex project we completed for The Channel Company that helped them truly transform their business. Coming off a management buyout from its parent company, UBM, The Channel Company had to migrate off the parent company's centralized IT infrastructure under a very tight timeline.

Faced with this situation, the company was presented with a very compelling question: "If you had the opportunity to start from scratch, to transform your IT department, resources and strategy, what would you do?"

Essentially, as a result of their situation, The Channel Company had the opportunity to leapfrog traditional approaches.  They had the opportunity to become more agile, and more responsive. And, more importantly, they took it!

As opposed to simply moving to a traditional baseline on-prem solution, The Channel Company saw this as an opportunity to fundamentally rethink its entire IT strategy and chose GreenPages to help lead them through the process.

Through a systematic approach and advanced methodology, we were able to help The Channel Company achieve its aggressive objectives.  Specifically, in less than six months, we led a transformation project that entailed the installation of new applications, and a migration of the company’s entire infrastructure to the cloud.  This included moving six independent office locations from a shared infrastructure to a brand-new cloud platform supporting their employees, as well as new cloud-based office and ERP applications.

In addition to achieving the independence and technical autonomy The Channel Company needed, the savings benefits and operational efficiencies achieved were truly transformational from a business standpoint.

It is these types of success stories that drove us to formalize our Transformation Services Group. We have seen first-hand the benefits that organizations can achieve by transforming inflexible siloed IT environments into agile organizations, and we’re proud to be able to offer the expertise and end-to-end capabilities required to help customers achieve true business transformation.

In our view, the need for business agility and innovation has never been greater. The question is no longer "Is transformation necessary?" but rather "If you had the opportunity to start from scratch and achieve business transformation, would you take it?"

If you’re interested in hearing more about how GreenPages has helped companies like The Channel Company transform their IT operations, please reach out.

 

By Ron Dupler, CEO

Launching GreenPages’ Transformation Services Group

I am excited to announce that today, with the launch of GreenPages' Transformation Services Group, GreenPages took a major step in the continuing evolution of our company. This evolution was done for one reason, and one reason only: to meet the needs of our customers as they strive to compete in today's rapidly changing business and technology environment.

GreenPages’ new Transformation Services Group is a practice dedicated to providing customers with the agility, flexibility and innovation they need to compete in the modern era of cloud computing.  We see the establishment of this focused practice area as a way to help clients take a revolutionary, accelerated approach to standing up New World, Modern IT architectures and service delivery models that enable business agility and innovation.

Disrupt or be Disrupted

With each day's latest business headlines, we learn of new upstart companies finding new ways to compete in what was once a mature market. You know the names: it's Uber, it's Airbnb. These companies have found a way to leverage advanced technologies as a strategic weapon and have completely turned existing industries on their heads without even owning cabs or hotels, respectively.

How’d they do it?  They were agile enough from a business standpoint to understand the disruptive force that technology can play, and they were fortunate enough not to be encumbered by existing infrastructure, policies and procedures.  While these companies clearly were smart and innovative, they were also fortunate — they had a blank slate and could start from scratch with an offensive game plan capable of delivering value to customers in new ways.

These market disrupters share the benefit of not being encumbered by legacy technologies, platforms and processes, and as a result they are out-performing and out-executing their larger competitors. These companies were born to be agile organizations, capable of "turning on a dime" when their competitors cannot.

To compete effectively in today’s environment, every company needs to find a way to become more agile.  Business leaders have the choice, play defense and respond to disruption, or play offense and become the disruptor.  The need for business agility has never been greater.  To support this needed agility and innovation, enterprises need nimble, agile IT platforms, as legacy platforms cannot meet this need.

If it were just about technology, modernizing IT would be a more straightforward situation, but it’s about more than that. This is more than a technology problem. This is a people and process problem. It’s about command, control and compliance… Needless to say, “high velocity change” is no walk in the park.

Fortunately, helping companies achieve transformational change is something we have been doing for many years and is an area where we have deep domain expertise. Throughout our history as a company, we have become adept at guiding companies through IT and business transformation. What we are doing today is formalizing this expertise, forged working with our customers in the trenches and in the boardrooms, into a unique Transformation Services practice. Transformation Services represents the next logical evolution of GreenPages and builds on our legacy of quality and competency in deploying advanced virtualization and cloud solutions.

Our Approach

We have always believed that while many companies face similar challenges, no two scenarios are identical. Through our more than 20 years of experience we have established a methodology that we use in each engagement, regardless of the challenge, that allows us to identify the best solution for each customer, drive organizational and technical change, and create positive outcomes for the business.

We hope that you share our excitement about this unique moment in the IT industry and our continued evolution as a company.   We all know that technology can produce tangible benefits, but sometimes the road to deployment can be daunting.  Transformation Services was founded to ensure our customers are able to successfully navigate that road with agility and velocity.

If you’re interested in learning more about our new Transformation Services Group, please reach out!

 

By Ron Dupler, CEO

Disaster Recovery as a Service: Does it make sense for you?

Does disaster recovery as a service make sense for your organization? It is often more cost-effective and less of a headache than traditional disaster recovery options. As information infrastructure and applications grow in importance, disaster recovery becomes more and more critical to a company's success. In this video, I break down the benefits of Disaster Recovery as a Service and discuss how to find a solution that fits your needs. Benefits include:

  • You can get up and running in almost no time, cutting implementation time from six months to a year down to a month or even a few weeks.
  • Shift from CapEx to OpEx
  • More affordable
  • No hardware refreshes
  • No software support

If you’re interested in learning more about Disaster Recovery as a Service and how it could impact your organization, reach out!

 


http://www.youtube.com/watch?v=8kYOIGxhBRc

 

 

By Randy Weis, Practice Manager, Information Infrastructure

Tech News Recap for the Week of 3/30/2015

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 3/30/2015.

Google has banned China's website certification authority after a security breach. A study revealed that almost half of smartphone users in the U.S. said they cannot live without their phones. Obama authorized sanctions against hackers. Cisco will purchase SDN startup Embrane.

Tech News Recap

If you’re looking to stay up-to-date on the top industry news throughout the week, follow @GreenPagesIT on Twitter!

By Ben Stephenson, Emerging Media Specialist

 

vCloud Air: Helping a customer move to a hybrid cloud environment

As you most likely know, vCloud Air is VMware's offering in the hybrid/public cloud space. In my opinion, it's a great offering. It allows you to take existing virtual machines and migrate them up to the cloud so that you can manage everything with your existing vCenter. It's also a very good option for disaster recovery.

I worked on a project recently where the client wanted to know what they needed to do with their infrastructure. They were looking for solid options to build a foundation for their business, whether it was on-prem, a cloud-based offering, or a hybrid approach.

In this project, we ended up taking their VMs and physical servers and putting a brand-new host on site running VMware, hosting a domain controller and a file server. We put the rest of the production servers and the test/dev environment in vCloud Air. Additionally, this helped them address their disaster recovery needs: it gave them a place where they could recover their VMs in the event of a disaster, without a lot of upfront money.

 

http://www.youtube.com/watch?v=OP3qO-SI6SY

 

Are you interested in learning more about vCloud Air? Reach out!

 

By Chris Chesley, Solutions Architect