10 Storage Predictions for 2014

By Randy Weis, Consulting Architect, LogicsOne

As we wrap up 2013 and head into the New Year, I wanted to share my 10 predictions for the storage market in 2014.

  1. DRaaS will be the hottest sector of cloud-based services: Deconstructing the cloud means breaking out specific services that fit the definition of a cloud service, such as Disaster Recovery as a Service (DRaaS) and other specialized, targeted uses of shared multi-tenant compute and storage. Capital expenditures, time to market, and staff training are all issues that prevent companies from developing a disaster recovery strategy and actually implementing it. I predict that DRaaS will be the hottest sector of cloud-based services for small-to-medium businesses and commercial companies, and that it will drive secondary storage purchases.
  2. Integration of flash storage technology will explode: The market for flash storage is maturing and consolidating. EMC has finally entered the market, and Cisco has purchased Whiptail to integrate flash into its Unified Computing System. PCIe flash cards, server flash drives at different tiers of performance and endurance, hybrid flash arrays, and all-flash arrays will all continue to drive the adoption of solid state storage in mainstream computing.
  3. Storage virtualization – software-defined storage on the rise: VMware will make its Virtual SAN (VSAN) technology generally available at the beginning of Q2 2014. This promises to create a brand new tier of datacenter storage for virtual desktop solutions, disaster recovery, and other specific use cases. EMC has shipped its first release of a software-defined storage product, ViPR. It has a ways to go before it really addresses software-defined storage requirements, but it is a huge play in the sense that it validates a segment of the market that has long had a minuscule share. DataCore has been the only major player in this space for 15 years, and it sees EMC’s announcement as a validation of its approach: decoupling storage management software from commodity hard drives and proprietary array controllers.
  4. Network Attached Storage (NAS) Revolution: We’re undergoing a revolution with the introduction and integration of scale-out NAS technologies. One of the most notable examples is Isilon, which, since its purchase by EMC, is appearing as a more fully integrated and widely available solution for a wide variety of applications. Meanwhile, NetApp continues to innovate in the traditional scale-up NAS market with increasing adoption of ONTAP 8.x. New NAS systems also support SMB 3.0, Microsoft’s significant overhaul of its Windows file-sharing protocol (also known as CIFS). This has a significant impact on the design of Hyper-V storage and Windows file sharing in general: client- and server-side failover are now possible with SMB 3.0, enabling the kind of high availability and resiliency for Hyper-V that VMware has long enjoyed as a competitive advantage.
  5. Mobile Cloud Storage – File Sharing Will Never Be the Same: Dropbox, Box, Google Drive, Huddle, and other smartphone-friendly ways to access data anywhere are revolutionizing the way individual consumers get to their data. This creates security headaches for IT admins, but the vendors are responding with better and better security built into their products. At the enterprise level, Syncplicity, Panzura, Citrix ShareFile, Nasuni, and other cloud storage and shared storage technologies provide deep integration with Active Directory and enable transfer of large files across long distances quickly and securely. These technologies integrate with on-premises NAS systems and with cloud storage. Plain and simple, file sharing will never be the same again.
  6. Hyper-Converged Infrastructure Will Be a Significant Trend: Nutanix, SimpliVity (based in Westborough, MA), and VMware’s VSAN technology will all change the way shared storage is viewed in datacenters of every size. These products will not replace shared storage arrays; instead, they provide an integrated, flexible, and modular way to scale virtualized application deployments such as VDI and virtual servers. They integrate compute, storage, networking (at different levels), and even data protection, eliminating multiple expenditures and multiple points of management. Most importantly, hyper-converged infrastructure will allow new deployments to begin small and then scale out without large up-front purchases. This will not work for every tier of application or every company, but it will be a significant trend in 2014.
  7. Big Data Will Spread Throughout Industries: Big Data has become as much a buzzword as cloud, but actual use of the technologies we call big data is growing rapidly. Adoption is happening not only at internet giants like Google and companies that track online behavior, but also in industries such as insurance, life sciences, and retail. Integrating big data technologies (e.g., Hadoop and MapReduce) with more traditional SQL database technology allows service providers of any type to extract data from traditional databases and process it at huge scale, more efficiently and more quickly, while still keeping the advantages of structured databases. This trend will continue to spread through the many industries that need to manage large amounts of structured and unstructured data.
  8. Object based storage will grow: Cloud storage will be big news in 2014 for two major reasons. The first stems from the shock waves of Nirvanix going out of business: corporate consumers of cloud storage will be much more cautious and will demand better SLAs in order to hold cloud storage providers accountable. The second has to do with the adoption of giant, geographically dispersed data sets. Object based storage has been a little-known but important development in storage technology that allows data sets on the scale of petabytes to be stored and retrieved both by the people who generate the data and by those who consume it. These monstrous data sets can’t be protected by traditional RAID technologies, however. Providers such as Cleversafe have developed a means of spreading data across multiple locations that preserves its integrity and improves resiliency while continuing to scale to massive amounts (see the first sketch after this list).
  9. More Data Growth: This may seem redundant, but business data is predicted to double every two years. While that may sound like great news for traditional storage vendors, it is even better news for those who provide data storage on a massive scale, and for the technology firms that enable mobile access to that data anywhere while integrating well with existing storage systems. This exponential data growth will lead to advances in file system technologies, object storage integration, deduplication, high-capacity drives, and storage resource/lifecycle management tools.
  10. Backup and Data Protection Evolution + Tape Will Not Die: The data protection market continues to change rapidly as more servers and applications are virtualized or converted to SaaS. Innovations in backup technology include the rapid rise of Veeam as a dominant backup and replication technology, not only for businesses but also for service providers. The Backup as a Service market seems to have stalled because feature sets are limited; however, the appliance model for backups and backup services continues to show high demand. The traditional market leaders face very strong competition from the new players and from longtime competitor CommVault, which has evolved into a true storage resource management play and is rapidly gaining market share as an enterprise solution. Data deduplication has evolved from appliances such as Data Domain into a software feature included in almost every backup product out there; CommVault, Veeam, Backup Exec, and others all offer server-side deduplication, client-side deduplication, or both (see the second sketch after this list). The appliance model for disk-based backups continues to be popular, with Data Domain, ExaGrid, and Avamar as leading examples; EMC dominates this market while the competition keeps trying to capture share. Symantec has even entered the game with its own backup appliances, which are essentially servers preconfigured with its popular software and internal storage. And tape will not die: long-term, high-capacity archives still require tape, primarily for economic reasons. The current generation of tape technology, such as LTO-6, can hold up to 6 TB of data on a single tape (with compression), and tape drives are now routinely made with built-in encryption to avoid the data breaches that were more common in the past with unencrypted tape.
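To make prediction 8 concrete, here is a toy Python sketch of the dispersal idea: an object is split into shards plus a parity shard, so the loss of any one location loses no data. Real dispersed storage (Cleversafe’s, for example) uses k-of-n erasure coding rather than simple XOR parity, and all of the names below are illustrative:

```python
# Toy illustration of dispersed storage: split an object into two data
# shards plus an XOR parity shard, so any two of the three "sites" can
# reconstruct the original. Real dispersal systems use k-of-n erasure
# coding, not plain XOR; names here are illustrative only.

def disperse(data: bytes) -> dict:
    if len(data) % 2:
        data += b"\x00"                  # pad to an even length
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return {"site_a": a, "site_b": b, "site_p": parity}

def reconstruct(shards: dict) -> bytes:
    a, b, p = (shards.get(k) for k in ("site_a", "site_b", "site_p"))
    if a is not None and b is not None:
        return a + b
    if a is not None:                    # site_b lost: recover it from parity
        return a + bytes(x ^ y for x, y in zip(a, p))
    return bytes(x ^ y for x, y in zip(b, p)) + b   # site_a lost

shards = disperse(b"petabyte-scale object!")
shards.pop("site_b")                     # simulate losing an entire site
assert reconstruct(shards) == b"petabyte-scale object!"
```

The point is the one in the prediction: protection comes from dispersal and parity math across sites rather than from RAID inside a single array.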
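And for the deduplication mentioned in prediction 10, a minimal sketch of content-hash dedup, assuming fixed-size chunks; shipping products use variable-length chunking and far more robust indexes:

```python
import hashlib

# Toy content-hash deduplication: each unique chunk is stored once, and
# every backup keeps only a "recipe" of chunk hashes. Fixed-size chunks
# keep the sketch short; real products use variable-length chunking.
CHUNK = 4096
store = {}                               # chunk hash -> chunk bytes

def backup(data: bytes) -> list:
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only new chunks consume space
        recipe.append(digest)
    return recipe

def restore(recipe: list) -> bytes:
    return b"".join(store[d] for d in recipe)

night1 = backup(b"A" * 8192 + b"B" * 4096)
night2 = backup(b"A" * 8192 + b"C" * 4096)  # mostly unchanged data
print(len(store))                        # 3 unique chunks stored, not 6
assert restore(night1) == b"A" * 8192 + b"B" * 4096
```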

 

So there you have it, my 2014 storage predictions. What do you think? Which do you agree with/disagree with? Did I leave anything off that you think will have a major impact next year? As always, reach out if you have any questions!

 

The 2013 Tech Industry – A Year in Review

By Chris Ward, CTO, LogicsOne

As 2013 comes to a close and we begin to look forward to what 2014 will bring, I wanted to take a few minutes to reflect back on the past year. We’ve been talking a lot about that evil word ‘cloud’ for the past 3 to 4 years, but this year put a couple of other terms up in lights, including Software Defined X (datacenter, networking, storage, etc.) and Big Data. Like ‘cloud,’ these two newer terms can easily mean different things to different people, but put in simple terms, in my opinion, there are some generic definitions which apply in almost all cases. Software Defined X is essentially the concept of taking any ties to specific vendor hardware out of the equation and providing a central, vendor-agnostic point of configuration (vendor agnostic except, of course, for the vendor providing the Software Defined solution :) ). I define Big Data simply as the ability to find a very specific and small needle of data in an incredibly large haystack within a reasonably short amount of time. I see both of these technologies becoming more widely adopted in short order, with Big Data technologies already well on the way.

As for our friend ‘the cloud,’ 2013 did see a good amount of growth in the consumption of cloud services, specifically in the areas of Software as a Service (SaaS) and Infrastructure as a Service (IaaS). IT has adopted a ‘virtualization first’ strategy over the past 3 to 4 years when it comes to bringing any new workloads into the datacenter, and I anticipate we’ll see a ‘SaaS first’ approach adopted in short order, if it is not out there already. However, I can’t say the same for ‘IaaS first.’ While IaaS is a great solution for elastic computing, I still see most usage confined to application development or super-large scale-out applications (think Netflix). The mass adoption of IaaS for simply forklifting existing workloads out of the private datacenter and into the public cloud simply hasn’t happened. Why? My opinion is that for traditional applications, neither the cost model nor the operational model makes sense, yet.

In relation to ‘cloud,’ I did see a lot of adoption of advanced automation, orchestration, and management tools, and thus an uptick in ‘private clouds.’ There are some fantastic tools now available, both commercial and open source, and I absolutely expect this adoption trend to continue, especially in the enterprise space. Datacenters with a high rate of change, whether in production or test/dev, can greatly benefit from these solutions. However, this comes with a word of caution: just because you can doesn’t mean you should. I say this because I have seen several instances where customers wanted to automate literally everything in their environments. While that may sound good on the surface, I don’t believe it’s always the right thing to do. There are still times when a human touch remains the best way to go.

As always, there were some big-time announcements from major players in the industry. Here are some posts we did with news and updates from VMworld, VMware Partner Exchange, EMC World, Cisco Live, and Citrix Synergy. Here’s an additional video from September where Lou Rossi, our VP, Technical Services, explains some new Cisco product announcements. We also hosted a webinar (which you can download here) about VMware’s Horizon Suite, as well as a webinar on our own Cloud Management as a Service offering.

The past few years have seen various predictions about the unsustainability of Moore’s Law, which states that processors will double in computing power every 18-24 months, and 2013 was no exception. The latest prediction is that by 2020 we’ll reach the 7nm mark and Moore’s Law will cease to be an exponential function. The interesting part is that this prediction is based not on technical limitations but on economic ones: getting below the 7nm mark will be extremely expensive from a manufacturing perspective and, hey, 640K of RAM is all anyone will ever need, right?  :)

Probably the biggest news of 2013 was the revelation that the National Security Agency (NSA) had undertaken a massive program and seemed to be capturing every packet of data coming in or out of the US across the Internet. I won’t get into any political discussion here, but suffice it to say this is probably the largest example of ‘big data’ that currently exists. It also has large potential ramifications for public cloud adoption: security and data integrity have been two of the major roadblocks to adoption, so it certainly doesn’t help that customers may now be concerned about the NSA eavesdropping on everything going on within public datacenters. It is estimated that public cloud providers may lose as much as $22-35B over the next 3 years as customers slow adoption because of this. The only good news, at least for now, is that it’s very doubtful the NSA or anyone else on the planet has the means to actually mine anywhere close to 100% of the data being captured. However, like anything else, it’s probably only a matter of time.

What do you think the biggest news/advancements of 2013 were?  I would be interested in your thoughts as well.

Register for our upcoming webinar on December 19th to learn how you can free up your IT team to be working on more strategic projects (while cutting costs!).


Cloud Management, Business Continuity & Other 2013 Accomplishments

By Matt Mock, IT Director

It was a very busy year at GreenPages for our internal IT department. With 2013 coming to a close, I wanted to highlight some of the major projects we worked on over the course of the year. The four biggest projects we tackled were implementing a cloud management solution, improving our business continuity plan, moving our datacenter, and creating and implementing a BYOD policy.

Cloud Management as a Service

GreenPages now offers a Cloud Management as a Service (CMaaS) solution to our clients. We implemented the solution internally late last year, but really started utilizing it as a customer would this year by increasing what was being monitored and managed. We decided to put Exchange under the “Fully Managed” package of CMaaS. Exchange requires a lot of attention and effort; instead of hiring a full-time Exchange admin, we were able to offload that piece with CMaaS, as our Managed Services team does all the health checks to make sure any new configuration changes are correct. This resulted in considerable cost savings.

Having access to the team 24/7 is a colossal luxury. Before using CMaaS, if an issue popped up at 3 in the morning, we would find out about it the next morning and have to try to fix the problem during business hours. I don’t think I need to explain to anyone the hassle of fixing an issue with frustrated coworkers who are unable to do their jobs. If an issue arises now in the middle of the night, the problem has already been fixed before anyone shows up to start working. The Managed Services team also researches and remediates bugs that come up. This happened to us when we ran into some issues with Apple iOS calendaring: the Managed Services team did the research to determine the cause and went in and fixed the problem. If my team had tried to do this, it would have cost us 2-3 days of wasted time. Instead, we could focus on our other strategic projects. In fact, we are holding a webinar on December 19th that will cover strategies for and benefits of being the ‘first-to-know,’ and we will also provide a demo of the CMaaS Enterprise Command Center. We also went live with fully automated patching, which requires zero intervention from my team, and we leveraged CMaaS to spin up a fully managed Linux environment. It’s safe to say that if we hadn’t implemented CMaaS, we would not have been able to accomplish all of our strategic goals for this year.

{Download this free whitepaper to learn more about how organizations can revolutionize the way they manage hybrid cloud environments}

Business Continuity Plan

We also determined that we needed to upgrade our disaster recovery plan to a truly robust business continuity plan. A main driver was our increasingly diverse office model: not only were more people working remotely as our workforce expanded, but we now have office locations up and down the east coast in Kittery, Boston, Attleboro, New York City, Atlanta, and Tampa. We needed to ensure that we could continue to provide top-quality service to our customers if an event were to occur. My team took a careful look at our then-current infrastructure. After examining our policies and plans, we generated new ones around the optimal outcome we wanted and then adjusted the infrastructure to match. A large part of this included changing providers for our data and voice, which included moving our datacenter.

Datacenter Move

In 2013 we wanted more robust datacenter facilities. Ultimately, we were able to get into an extremely redundant and secure datacenter at the Markley Group in Boston that provided us with cost savings. Markley is also a large carrier hotel, which gives us additional savings on circuit costs. With this move we’re able to further our capability of delivering to our customers 24/7. Another benefit of the new datacenter is excess office space: if there were ever an event at one of our GreenPages locations, we would have a place to send people to work. I recently wrote a post which describes the datacenter move in more detail.

BYOD Policy

As 2013 ends, we are finishing our first full year with our BYOD policy. We are taking this time to look back, identify any issues with the policies or procedures, and adjust for the next year. Our plan is to ensure that year two is even more streamlined. I answered questions in a recent Q&A explaining our BYOD initiative in more detail.

I’m pretty happy looking back at the work we accomplished in 2013. As with any year, there were bumps along the way and things we didn’t get to that we wanted to. All in all though, we accomplished some very strategic projects that have set us up for success in the future. I think that we will start out 2014 with increased employee satisfaction, increased productivity of our IT department, and of course noticeable cost savings. Here’s to a successful 2014!

Is your IT team the first-to-know when an IT outage happens? Or, do you find out about it from your end users? Is your expert IT staff stretched thin doing first-level incident support? Could they be working on strategic IT projects that generate revenue? Register for our upcoming webinar to learn more!

 

Trick or Treat: Top 5 Fears of a CTO

By Chris Ward, CTO

Journey to the Cloud’s Ben Stephenson recently sat down with Chris Ward, CTO of GreenPages-LogicsOne, to get his take on what the top 5 fears of a CTO are.

Ben: Chief Technology Officer is obviously an extremely strategic, important, and difficult role within an organization. Since it’s almost Halloween, and since you’re an active (and successful) CTO yourself, I thought we would talk about your Top 5 Fears of a CTO. You also have the unique perspective of seeing how GreenPages uses technology internally, as well as how GreenPages advises clients to utilize different technologies.

Chris: Sounds good. I think a major fear is “Falling Behind the Trends.” In this case, it’s not necessarily that you couldn’t see what was coming down the path. You can see it there and know it’s coming, but can you get there with velocity? Can you get there before the competition does?

Ben: Do you have any examples of when you have avoided falling behind the trends?

Chris: At GreenPages, we were fortunate to catch virtualization early on when a lot of others didn’t. We had a lot of customers who were not sold on virtualization for 2-4 years. Those customers are now very far behind the competition and are trying to play catch up. In some cases, I’m sure it’s meant the CTO is out of a job. We also utilized virtualization internally early on and reaped the benefits. Another example is our CMaaS Brokerage and Governance offering. We recognize the significance of cloud brokerage and the paradigm shift towards a hybrid cloud computing model. In this case we are out ahead of the market.

Ben: How about a time when GreenPages did fall behind a trend?

Chris: I would say we fell behind a trend when we began our managed services business. It was traditional, old school managed services. It definitely took us some time to figure out where we wanted to go and where we wanted to be. While we may have fallen behind initially, we recognized change was needed and our Cloud Management as a Service offering has transformed us. Instead of sitting back and missing the boat, we are now in a great spot. This will be a huge help to our customers – but will (and does already) help us significantly internally as well.

Ben: How about fear number 2?

Chris: Fear number two is not seeing around the bend. From my perspective as the CTO at a solutions provider, things move so fast in this industry, and GreenPages offers such a wide variety and breadth of products and services to customers, that it can be very difficult to keep up. If we focused on only one area it would be a lot easier, but since we focus on cloud, virtualization, end user computing, security, storage, datacenter transformation, networking, and more, it can be quite challenging. A corporate CTO, by contrast, is allowed to be a market follower, which can be somewhat of an advantage. While you don’t want to fall behind, you do have partners, like GreenPages and others out there, that you can count on.

Ben: That makes sense. What about a 3rd fear?

Chris: Another large fear for CTOs is making a wrong turn. CTOs can get the crystal ball out and see a couple of things coming down the road…but what happens if you turn left and everyone else turns right? What happens if you make the wrong decision, or make the decision too early?

Ben: Can you give us an example?

Chris: A good example of taking a turn too early in the Cloud era is with the company Nirvanix. Cloud storage is extremely important, but what happens when a business model has not been properly vetted? This is one of the “gotchas” of being an early adopter. To be successful you need a good mix. You can’t be too conservative, but you can’t jump all in any time a new company pops up – the key is balance.

Ben: Do you have any advice for CTOs about this?

Chris: Sure – just because you can doesn’t mean you should!

Ben: I’ve heard you say that one before…

Chris: For example, software-defined networking stacks, with products like Cisco’s Insieme and VMware NSX, are very cool new technologies. I personally, and we at GreenPages, think this is going to be the next big thing. But we’re at a crossroads…who should use these? Who will gain the benefits? Maybe it makes sense for the enterprise but not for small businesses? This is something major that I have to determine: who is this a good fit for?

Ben: How about fear number 4?

Chris: Fear number 4 revolves around retaining my talent. I want my team to feel like they are always learning something new. I want them to know they are always on the bleeding edge of IT. I want to give them a world that changes very quickly. In my experience, most people that are stellar employees in a technical capacity want to be challenged constantly and to try new things and look at different ways of doing things.

Ben: What should CTOs do to try and retain talent?

Chris: Really take the time and focus on building a culture and environment that harnesses what I mentioned above. If not, you’re at serious risk of losing top talent.

Ben: Before I get too scared let’s get to number 5 and finish this up.

Chris: I’d say my fifth fear is determining whether I am working with the right technologies and the right vendors. IT can often feel like walking a tightrope between vendors, from both technical and business perspectives. From my perspective, I need to make sure we are providing our customers with the right technology from the right vendor to meet their needs. I need to determine if the technology works as advertised. Is it something that is reasonable to implement? Is there money in this for GreenPages?

Ben: What about from a customer’s perspective?

Chris: The customer also needs to make sure they align themselves with the right partners.  CTOs want to find partners that are looking towards the future, who will advise them correctly, and who will allow the business to stay out ahead of the competition. If a CTO looks at a partner or technology and doesn’t think it’s really advancing the business, then it’s time to reevaluate.

Ben: Thanks for the time Chris – and good luck!

What are your top fears as an IT decision maker? Leave them in the comment section!

Download this free ebook on the evolution of the corporate IT department. Where has the IT department been, where is it now, and where should it be headed?


Moving Email to the Cloud Part 2

By Chris Chesley, Solutions Architect

My last blog post was part 1 of moving your email to the cloud with Office 365. Here’s the next installment in the series, in which I cover the 3 methods of authenticating your users for Office 365. This is a very important consideration and will have a large impact on your end users and their day-to-day activities.

The first method of authenticating your users into Office 365 is to do so directly. This has no ties to your Active Directory. The benefit here is that your users get mail, messages, and SharePoint access regardless of your site’s online status. The downside is that your users may have a different password than the one they use to get into their desktops/laptops, and this can get very messy if you have a large number of users.

The second way of authenticating your users is full Active Directory integration. I will refer to this as the “Single Sign On” method. In this method, your Active Directory is the authoritative source of authentication for your users. Users log into their desktop/laptop and can access all of the Office 365 applications without typing their password again, which is convenient. You DO need a few servers running locally to make this happen: an Active Directory Federation Services (ADFS) server and an Azure Active Directory Sync server, which together federate authentication and sync your AD user information to Office 365. The con of this method is that you need a redundant AD setup, because if it’s down your users are not going to be able to access mail or anything else in the cloud. You can address this by hosting a Domain Controller, along with the other two systems I mentioned, in a cloud or at another of your locations, if you have one.
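To illustrate the trust relationship at the heart of this method, here is a toy Python simulation. The real exchange uses WS-Federation/SAML tokens issued by ADFS; the shared key, function names, and token format below are simplified stand-ins:

```python
import base64
import hashlib
import hmac
import json

# Toy model of federated sign-on: the on-premises "ADFS" signs a token
# after the user authenticates against Active Directory, and the cloud
# side verifies the signature. The cloud never sees the user's password.
FEDERATION_KEY = b"shared-trust-key"     # stand-in for the federation trust

def adfs_issue_token(user: str) -> str:  # runs on-premises, after AD login
    claims = json.dumps({"upn": user}).encode()
    sig = hmac.new(FEDERATION_KEY, claims, hashlib.sha256).hexdigest()
    return base64.b64encode(claims).decode() + "." + sig

def o365_accept_token(token: str) -> str:  # runs in the cloud service
    claims_b64, sig = token.rsplit(".", 1)
    claims = base64.b64decode(claims_b64)
    expected = hmac.new(FEDERATION_KEY, claims, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("token not issued by a trusted federation server")
    return json.loads(claims)["upn"]

token = adfs_issue_token("jsmith@contoso.com")
print(o365_accept_token(token))          # jsmith@contoso.com
```

This is also why the redundant AD setup matters: in the real flow, if ADFS or AD is unreachable, no tokens can be issued and cloud sign-in stops.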

The third option is what I will refer to as “Single Password.” In this setup, you install an Azure Active Directory Sync server in your environment but do not need an ADFS server. The Sync tool hashes your users’ passwords and sends the hashes to Office 365. When a user tries to access any of the Office 365 services, they are asked to type in their password; it is then hashed, and the hash is compared to the stored one. If they match, the user is let in. This does require users to type their password again, but it allows them to use their existing Active Directory password, and any time that password changes, it is synced to the cloud.
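A minimal sketch of that hash-and-compare idea in Python; Microsoft’s actual hashing scheme differs in its details, and the store and function names here are illustrative:

```python
import hashlib
import hmac
import os

# Toy model of the "Single Password" method: the sync tool sends a salted
# password hash (never the password itself) to the cloud. At sign-in the
# typed password is hashed the same way and the two hashes are compared.
cloud_store = {}                         # upn -> (salt, stored hash)

def sync_user(upn: str, ad_password: str) -> None:
    """Runs on the directory sync server whenever the AD password changes."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", ad_password.encode(), salt, 100_000)
    cloud_store[upn] = (salt, digest)

def cloud_sign_in(upn: str, typed_password: str) -> bool:
    """Runs in the cloud service; compares hashes, never contacts AD."""
    salt, stored = cloud_store[upn]
    attempt = hashlib.pbkdf2_hmac("sha256", typed_password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, stored)

sync_user("jsmith@contoso.com", "P@ssw0rd!")
print(cloud_sign_in("jsmith@contoso.com", "P@ssw0rd!"))  # True
print(cloud_sign_in("jsmith@contoso.com", "guess"))      # False
```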

The choice of method has a big impact on your users as well as on how you manage them. Knowing these choices and choosing the one that meets your business goals will set you on the path to successfully moving your services to the cloud.

 

Download this free ebook on the evolution of the corporate IT department

 

My VMworld Breakout Session: Key Lessons Learned from Deploying a Private Cloud Service Catalog

By John Dixon, Consulting Architect, LogicsOne

 

Last month, I had the special privilege of co-presenting a breakout session at VMworld with our CTO Chris Ward. The session’s title was “Key Lessons Learned from Deploying a Private Cloud Service Catalog,” and we had a full house for it. Overall, the session went great and we had a lot of good questions. In fact, due to demand, we ended up giving the presentation twice.

In the session, Chris and I discussed a recent project where we built a private cloud for a financial services firm, front-ended by a service catalog. A service catalog really enables self-service; it is one way corporate IT can partner with the business. In a service catalog, the IT department publishes the menu of services it is willing to provide and (sometimes) the price it charges for those services. For example, we published a “deploy VM” service in the catalog, and the base offering was priced at $8.00 per day; additional storage or memory beyond the basic spec was available at an additional charge. When the customer requests “deploy VM,” the following happens:

  1. The system checks to see if there is capacity available on the system to accommodate the request
  2. The request is forwarded to the individual’s manager for approval
  3. The manager approves or denies the request
  4. The requestor is notified of the approval status
  5. The system fulfills the request – a new VM is deployed
  6. A change record and a new configuration item are created to document the new VM
  7. The system emails the requestor with the hostname, IP address, and login credentials for the new VM

This sounds fairly straightforward, and it is. Implementation is another matter, however. It turns out that we had to integrate with vCenter, Active Directory, the client’s ticketing system, the client’s CMDB, an approval system, and the provisioned OS in order to automate the fulfillment of this simple request. As you might guess, documenting this workflow up front was incredibly important to the project’s success. We documented the workflow and assessed it against the theoretical request-approval-fulfillment paradigm to identify the systems we needed to integrate. One of the main points that Chris and I made at VMworld was to build this automation incrementally instead of tackling it all at once. That is, just get the automation suite to talk to vCenter before tying in AD, the ticketing system, and all the rest.
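For a sense of what the top-level orchestration looked like, here is a stripped-down Python sketch of the seven-step flow above. Each stub stands in for a real integration point (vCenter, the approval system, the ticketing system, the CMDB); none of the names come from the actual project:

```python
# Stripped-down sketch of the request -> approval -> fulfillment flow.
# Every helper below is a stub for a real integration (vCenter, the
# approval system, the ticketing system, the CMDB); names are illustrative.

def capacity_available(spec: dict) -> bool:       # step 1: capacity check
    return True                                   # would query vCenter

def manager_approves(requestor: str, spec: dict) -> bool:  # steps 2-3
    return True                                   # would call the approval system

def provision_vm(spec: dict) -> dict:             # step 5: deploy the VM
    return {"hostname": "vm-0042", "ip": "10.0.0.42"}

def record_change_and_ci(vm: dict) -> None:       # step 6: ticketing + CMDB
    pass

def notify(who: str, message: str) -> None:       # steps 4 and 7: email
    print(f"[mail to {who}] {message}")

def deploy_vm_request(requestor: str, spec: dict) -> dict:
    if not capacity_available(spec):
        return {"status": "rejected", "reason": "no capacity"}
    if not manager_approves(requestor, spec):
        notify(requestor, "request denied")
        return {"status": "denied"}
    notify(requestor, "request approved")
    vm = provision_vm(spec)
    record_change_and_ci(vm)
    notify(requestor, f"VM ready: {vm['hostname']} / {vm['ip']}")
    return {"status": "fulfilled", "vm": vm}

print(deploy_vm_request("jdoe", {"cpu": 2, "ram_gb": 8}))
```

The incremental advice falls out of this shape naturally: start by wiring provision_vm to vCenter while the other stubs stay manual, then bring the remaining systems online one at a time.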

Download this on-demand webinar to learn more about how you can securely enable BYOD with VMware’s Horizon Suite

Self-service, automation, and orchestration all drove real value during this deployment. We were able to eliminate or reduce at least three manual handoffs via this single workflow. Previously, these handoffs were made either by phone or through the client’s ticketing system.

During the presentation we also addressed which systems we integrated, which procedures we selected to automate, and what we plan to have the client automate next. You can check out the actual VMworld presentation here. (If you’re looking for more information around VMworld in general, Chris wrote a recap blog of Pat Gelsinger’s opening keynote as well as one on Carl Eschenbach’s General Session.)

Below are some of the questions we got from the audience:

Q: Did the organization have ITSM knowledge beforehand?

A: The group had very limited knowledge of ITSM but left our project with a real-world perspective on ITIL and ITSM.

Q: What did we do if we needed a certain system in place to automate something?

A: We did encounter this, and we either labeled it as a risk or used “biomation” (self-service is available, fulfillment is manual, and the customer doesn’t know the difference) until the necessary systems were made available.

Q: Were there any knowledge gaps at the client? If so, what were they?

A: Yes. Both a developer mentality and a service management mentality are needed to complete a service catalog project effectively. Traditional IT engineering and operations groups do not typically have a developer mentality or experience with languages like JavaScript.

Q: Who was the primary group at the client driving the project forward?

A: IT engineering and operations were involved with IT engineering driving most of the requirements.

Q: At which level was the project sponsored?

A: VP of IT Engineering with support from the CIO

All in all, it was a very cool experience to get the chance to present a breakout session at VMworld. If you have any other questions about key takeaways we got from this project, leave them in the comment section. As always, if you’d like more information you can contact us. I also just finished an ebook on “The Evolution of the Corporate IT Department” so be sure to check that out as well!

The Evolution of Your Corporate IT Department

By John Dixon, Consulting Architect, LogicsOne

 

Corporate IT departments have progressed from keepers of technology to providers of complex solutions that businesses truly rely on. Even a business with an especially strong core competency simply cannot compete without information systems to provide key technologies such as communication and collaboration systems (e.g., email). Many corporate IT departments have become adept providers of technology solutions. We at GreenPages think that corporate IT departments should be recognized as providers of services, and that emerging technology and management techniques are creating an especially competitive market of IT service providers. Professional business managers will no doubt recognize that their internal IT department is just one more competitor in this market for IT services. Could the business choose to source its systems from a provider other than internal corporate IT?

IT departments large and small already have services deployed to the cloud. We think that organizations should prepare to deploy services to the cloud provider that meets their requirements most efficiently, and eventually, move services between providers to continually optimize the environment. As we’ll show, one of the first steps to enabling this Cloud Management is to use a tool that can manage resources in different environments as if they are running on the same platform. Corporate IT departments can prepare for cloud computing without taking the risk of moving infrastructure or changing any applications.

In this piece, I will describe the market for IT service providers, the progression of corporate IT departments from technology providers to brokers of IT services, and how organizations can take advantage of behavior emerging in the market for IT services. This is not a cookbook of how to build a private cloud for your company; instead, it offers a perspective on how tools and management techniques, namely Cloud Management as a Service (CMaaS), can be adopted to take advantage of cloud computing, whatever it turns out to become. In the following pages, we’ll answer these questions:

  1. Why choose a single cloud provider? Why not position your IT department to take advantage of any of them?
  2. Why not manage your internal IT department as if it is already a cloud environment?
  3. Can your corporate IT department compete with a firm whose core competency is providing infrastructure?
  4. When should your company seriously evaluate an application for deployment to an external cloud service provider? Which applications are suitable to deploy to the cloud?

 

To finish reading, download John’s free ebook

Moving Email to the Cloud, Part 1

By Chris Chesley, Solutions Architect

Many of our clients are choosing not to manage Exchange day to day or upgrade it every 3-5 years. They do this by having Microsoft host their mail in Office 365. Is this right for your business? How do you tie this into your existing infrastructure and still have access to email regardless of the status of your onsite services?

The different plans for Microsoft Office 365 can be confusing, but regardless of which plan you get, the Exchange Online choices boil down to two options. Exchange Plan 1 offers 50GB mailboxes per user, ActiveSync, Outlook Web Access, calendaring, and all of the other features you currently get with an on-premises Exchange implementation. You also get antivirus and antispam protection. All of this for 4 dollars per user per month.

Exchange Plan 2 offers the exact same features as Plan 1, with the additions of unlimited archiving, legal hold capabilities, compliance support tools, and advanced voice support. This plan is 8 dollars per user per month.

All of the other Office 365 plans that include Exchange use either Plan 1 or Plan 2. For example, the E3 plan (Enterprise Plan 3) includes Exchange Plan 2, SharePoint Plan 2, Lync Plan 2, and Office Professional Plus for 5 devices per user. You can take any plan and break it down into its component parts to fully understand what you’re getting, as the sketch below shows.
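To make that decomposition concrete, here is a small Python sketch using the 2013 prices quoted above. Only the two Exchange plans are priced, since per-component prices for SharePoint and Lync aren’t given here:

```python
# The two Exchange Online building blocks described above (2013 pricing).
EXCHANGE_PLANS = {
    "Exchange Plan 1": {
        "price_per_user_month": 4,
        "features": ["50GB mailbox", "ActiveSync", "Outlook Web Access",
                     "calendar", "antivirus", "antispam"],
    },
    "Exchange Plan 2": {
        "price_per_user_month": 8,
        "features": ["everything in Plan 1", "unlimited archiving",
                     "legal hold", "compliance tools", "advanced voice"],
    },
}

def annual_cost(plan: str, users: int) -> int:
    return EXCHANGE_PLANS[plan]["price_per_user_month"] * users * 12

for plan in EXCHANGE_PLANS:
    print(f"{plan} for 100 users: ${annual_cost(plan, 100):,}/year")
# Exchange Plan 1 for 100 users: $4,800/year
# Exchange Plan 2 for 100 users: $9,600/year
```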

If you are looking to move email to the cloud and are currently using Exchange, who better to host your Exchange than Microsoft?  Office 365 is an even better choice if you are using, or plan on using, SharePoint or Lync.  All of these technologies are available in the current plans or individually through Office 365.

I’ve helped many clients make this transition so if you have any questions or if there’s any confusion around the Office 365 plans feel free to reach out.

My next blog will be on the 3 different authentication methods in Office 365.

Top 10 Ways to Kill Your VDI Project

By Francis Czekalski, Consulting Architect, LogicsOne

Earlier this month I presented at GreenPages’ annual Summit Event. My breakout presentation this year was an End User Computing Super Session. In this video, I summarize the ‘top 10 ways to kill your VDI project.’

If you’re interested in learning more, download this free on-demand webinar where I share some real world VDI battlefield stories.

http://www.youtube.com/watch?v=y9w1o0O8IaI


A Guide to Successful Cloud Adoption

Last week, I met with a number of our top clients near the GreenPages HQ in Portsmouth, NH at our annual Summit event to talk about successful adoption of cloud technologies. In this post, I’ll summarize my cloud adoption advice and cover some of the feedback I heard from customers during those discussions. Here we go…

The Market for IT Services

I see compute infrastructure looking more and more like a commodity, and there is intense competition in the market for IT services, particularly Infrastructure-as-a-Service (IaaS). Consider these data points:

  1. “Every day, Amazon installs as much computing capacity in AWS as it used to run all of Amazon in 2002, when it was a $3.9 billion company.” – CIO Journal, May 2013
  2. “[Amazon] has dropped the price of renting dedicated virtual server instances on its EC2 compute cloud by up to 80 percent […]  from $10 to $2 per hour” – ZDNet,  July 2013
  3. “…Amazon cut charges for some of its services Friday, the 25th reduction since its launch in 2006.” – CRN, February 2013

I think that the first data point here is absolutely stunning, even considering that it covers a time span of 11 years. Of course, a simple Google search will return a number of other similar quotes. How can Amazon and others continue to drop their prices for IaaS, while improving quality at the same time? From a market behavior point of view, I think that the answer is clear – Amazon Web Services and others specialize in providing IaaS. That’s all they do. That’s their core business. Like any other for-profit business, IaaS providers prefer to make investments in projects that will improve their bottom line. And, like any other for-profit business, those investments enable companies like AWS to effectively compete with other providers (like Verizon/Terremark, for example) in the market.

Register for our upcoming webinar on 8/22 to learn how to deal with the challenges of securely managing corporate data across a broad array of computing platforms. 

With network and other technologies as they are, businesses now have a choice of where to host the infrastructure that supports their applications. In other words, the captive corporate IT department may be the preferred provider of infrastructure (for now), but it is now effectively competing with outside IaaS providers. Why, then, would the business not choose the lowest-cost provider? The answer to that question is quite the debate in cloud computing (we’ll put it aside for now). Suffice it to say that we think internal corporate IT departments are now competing with outside providers to deliver IaaS and other services to the business, and that this will become more apparent as technology advances (e.g., as workloads become more portable, network speeds increase, and storage becomes less costly).

Now here’s the punch line and the basis for our guidance on cloud computing: how should internal corporate IT position itself to stay competitive? At our annual Summit event last week, I discussed the progression of the corporate IT department from a provider of technology to a provider of services (see my whitepaper on cloud management for detail). The common thread is that corporate IT evolves by getting closer and closer to the requirements of the business, and it may even be able to anticipate those requirements or suggest emerging technology to benefit the business. To take advantage of cloud computing, one thing corporate IT can do is source commodity services from outside providers where it makes sense. Fundamentally, this has been commonplace in other industries for some time, manufacturing being one example. OEM automotive manufacturers like GM and Ford do not produce the windshields and brake calipers that are necessary for a complete automobile; it just isn’t worth it for them to produce those things. They source windshields, brake calipers, and other components from companies who specialize. GM, Ford, and others are then left with more resources to invest in designing, assembling, and marketing a product that appeals to end users like you and me.

So, it comes down to this: how do internal corporate IT departments make intelligent sourcing decisions? We suggest that the answer is in thinking about packaging and delivering IT services to the business.

GreenPages Assessment and Design Method

So, how does GreenPages recommend that customers take advantage of cloud computing? Even if you are not considering external cloud at this time, I think it makes sense to prepare your shop for it; eventually, cloud may make sense for your shop even if there is no fit for it today. The guidance here is to take a methodical look at how your department is staffed and operated. ITIL v2 and v3 provide a good guide to what should be examined:

  • Configuration Management
  • Financial Management
  • Incident and Problem Management
  • Change Management
  • Service Level and Availability, and Service Catalog Management
  • Lifecycle Management
  • Capacity Management
  • Business Level Management

 

Assigning a score to each of these areas in terms of repeatability, documentation, measurement, and continuous improvement will paint the picture of how well your department can make informed sourcing decisions. Conducting an assessment and making some housekeeping improvements where needed will serve two purposes:

  1. Plans for remediation could form one cornerstone of your cloud strategy
  2. Doing things according to good practice will add discipline to your IT department – which is valuable regardless of your position on cloud computing at this time

When and if cloud computing services look like a good option for your company, your department will be able to make an informed decision on which services to use at which times. And, if you’re building an internal private cloud, the processes listed above will form the cornerstone of the way you will operate as a service provider.
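As a rough illustration of that scoring exercise, here is a small Python sketch over the ITIL areas listed above. The scores are hypothetical; a real assessment would rate each area on repeatability, documentation, measurement, and continuous improvement:

```python
# Hypothetical maturity scorecard for the ITIL areas listed above,
# scored 1 (ad hoc) to 5 (measured and continuously improved).
scores = {
    "Configuration Management": 3,
    "Financial Management": 2,
    "Incident and Problem Management": 4,
    "Change Management": 3,
    "Service Level, Availability, and Service Catalog Management": 2,
    "Lifecycle Management": 3,
    "Capacity Management": 2,
    "Business Level Management": 1,
}

average = sum(scores.values()) / len(scores)
print(f"Overall maturity: {average:.1f}/5")
for area, score in sorted(scores.items(), key=lambda kv: kv[1]):
    if score <= 2:                       # remediation candidates
        print(f"Remediate: {area} ({score}/5)")
```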

Case Study: Service Catalog and Private Cloud

By implementing a service catalog, corporate IT departments can take a solid first step toward becoming a service provider and staying close to the requirements of the business. This year at VMworld in San Francisco, I’ll be leading a session to present a case study of a recent client that did exactly this with our help. If you’re going to be out at VMworld, swing by and listen in to my session!


Free webinar on 8/22: Horizon Suite – How to Securely Enable BYOD with VMware’s Next Gen EUC Platform.

With a growing number of consumer devices proliferating in the workplace, lines of business turning to cloud-based services, and people demanding more mobility in order to be productive, IT administrators are faced with a new generation of challenges for securely managing corporate data across a broad array of computing platforms.