Tag Archives: cloud

Why Nirvanix Doesn’t Mean the End of Cloud Storage

By Randy Weis, Practice Manager, Virtualization & Data Management

By now everyone is familiar with the Nirvanix fiasco. Now that the dust has settled, I decided to talk about the implications this has had, and will have, on the cloud storage market as well as to highlight some silver linings organizations can take away from the meltdown.

http://www.youtube.com/watch?v=mtQmGQBWzbc

If you’re looking for more content around storage and information management check out my recent posts “A Guide to Successful Big Data Adoption” as well as “10 Storage Predictions for 2014.”

Do you have questions for Randy about storage & data management? Email us at socialmedia@greenpages.com

The PaaS Market as We Know it Will Not Die Off

I’ve been hearing a lot about Platform as a Service (PaaS) lately as part of the broader discussion of cloud computing from both customers and in articles across the web. In this post, I’ll describe PaaS, discuss a recent article that came out on the subject, and take a shot at sorting out IaaS, PaaS, and SaaS.

What is PaaS?

First a quick trip down memory lane for me. As an intern in college, one of my tours of duty was through the manufacturing systems department at an automaker. I came to work the first day to find a modest desktop computer loaded with all of the applications I needed to look busy, and a nicely printed sheet with logins to various development systems. My supervisor called the play: “I tell you what I want, you code it up, I’ll take a look at it, and move it to test if it smells ok.” I and ten other ambitious interns were more than happy to spend the summer with what the HR guy called “javaweb.” The next three months went something like this:

Part I: Set up the environment…

  1. SSH to abcweb01dev.company.com, head over to /opt/httpd/conf/httpd.conf, and configure AJP to point to abcapp01dev and abcapp02dev.company.com
  2. SSH to abcapp01dev.company.com, reinstall the Java SDK to the right version, install the proper database JARs, and set up the JDBC connection pool in /opt/tomcat/conf/context.xml
  3. SSH to abcdb01dev.company.com, create a user, and grant rights so the app server can talk to the database
  4. Write something simple to test everything out
  5. Debug the environment to make sure everything works

Part II: THEN start coding…

  1. SSH to abcweb01dev.company.com, head over to /var/www/html, and work on my HTML login page for starters, other things down the road
  2. SSH to abcapp01dev.company.com, head over to /opt/tomcat/webapps/jpdwebapp/servlet, and code up my Java servlet to process my logins
  3. Open another window, login to abcweb01dev, and tail -f /var/www/access_log to see new connections being made to the web server
  4. Open another window, login to abcapp01dev, and tail -f /opt/tomcat/logs/catalina.out to see debug output from my servlet
  5. Open another window, login to abcapp01dev, and just keep /opt/tomcat/conf/context.xml open
  6. Open another window, login to abcapp01dev, and run /opt/tomcat/bin/shutdown.sh; sleep 5; /opt/tomcat/bin/startup.sh (every time I make a change to the servlet)

(Host names and directory names have been changed to protect the innocent)
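For concreteness, the Part I wiring amounted to something like the two snippets below. These are illustrative reconstructions rather than the originals; the host names are the same made-up ones, and the database driver, credentials, and pool sizes are placeholders.

    # /opt/httpd/conf/httpd.conf on abcweb01dev: send servlet requests
    # to both app servers over AJP (mod_proxy + mod_proxy_balancer)
    <Proxy balancer://appcluster>
        BalancerMember ajp://abcapp01dev.company.com:8009
        BalancerMember ajp://abcapp02dev.company.com:8009
    </Proxy>
    ProxyPass /servlet balancer://appcluster/servlet

    <!-- /opt/tomcat/conf/context.xml on the app servers: the JDBC
         connection pool the servlets use (driver/URL are placeholders) -->
    <Context>
        <Resource name="jdbc/appDB" auth="Container"
                  type="javax.sql.DataSource"
                  driverClassName="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://abcdb01dev.company.com:3306/appdb"
                  username="appuser" password="********"
                  maxActive="20" maxIdle="5" maxWait="10000"/>
    </Context>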

Setting up the environment was a little frustrating. And I knew that there was more to the story; some basic work, call it Part 0, to get some equipment in the datacenter, the OS installed, and IP addresses assigned. Part I, setting up the environment, is the work you would do to set up a PaaS platform. As a developer, the work in Part I was to enable me and my department to do the job in Part II – and we had a job to do – to get information to the guys in the plants who were actually manufacturing product!

 

So, here’s a rundown:

Part 0: servers, operating systems, patches, IPs… IaaS

Part I: middleware, configuration, basic testing… PaaS

Part II: application development

So, to me, PaaS is all about taking the bits and pieces provided by IaaS, configuring them into a usable platform, and delivering that platform to developers so that they can deliver software to the business. And hopefully the business is better off because of our software. In this case, our software helped the assembly plant identify and reduce “in-system damage” to vehicles – damage that happens as a result of the manufacturing process.

Is the PaaS market as we know it dead?

I’ve read articles predicting the demise of PaaS altogether and others just asking the question about its future. A recent Network World article titled “Is the PaaS market as we know it dying?” discussed the subject. The article makes three main points, citing 451 Research, Gartner, and other sources.

  1. PaaS features are being swallowed up by IaaS providers
  2. The PaaS market has settled down while the IaaS and SaaS markets have exploded
  3. Pure-play PaaS providers may be squeezed from the market by IaaS and SaaS

 

I agree with point #1. The evidence is in Amazon Web Services features like Auto Scaling, RDS, SQS, etc. These are fantastic features, but interfacing with them locks developers into AWS as their single IaaS provider. The IaaS market is still very active, and I think there is a lot to come even though AWS is ahead of other providers at this point. IaaS is a commodity, and embedding specialized (read: PaaS) features in an otherwise-IaaS offering is a tool to get customers to stick around.
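To illustrate the lock-in behind point #1, here is a minimal sketch using boto3, today’s AWS SDK for Python (the queue name is made up). Every call below is AWS-specific, so moving this application to another IaaS provider means rewriting this whole layer:

    # Convenient, but proprietary: none of this is portable to another
    # IaaS provider. The queue name is hypothetical.
    import boto3

    sqs = boto3.resource("sqs", region_name="us-east-1")
    queue = sqs.create_queue(QueueName="orders-dev")        # AWS-specific API
    queue.send_message(MessageBody="order #42 received")    # AWS-specific API

    for message in queue.receive_messages(WaitTimeSeconds=5):
        print(message.body)
        message.delete()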

I disagree with point #2. The PaaS market has not settled down – it hasn’t even started yet! The spotlight has been on IaaS and SaaS because they are relatively simple to understand, given the recent boom in server virtualization. SaaS is also familiar territory: it used to be known as what ASPs (Application Service Providers) delivered, so many people already know the model. PaaS and its concepts are still finding their place.

I also disagree with point #3: the time and opportunity for pure-play PaaS providers is now. The IaaS market is sorting itself out, and IaaS is clearly a commodity item. As highlighted earlier, solutions from PaaS providers can ride on top of IaaS. I think that PaaS will be the key to application portability amongst different IaaS providers – kind of like Java: write once, run on any JVM (kind of). And portability is a key goal of NIST’s cloud computing standards work.

Portability is key. I think PaaS will remain its own concept apart from IaaS and SaaS, and that we’ll see some emergence of PaaS in 2014. Why? PaaS is the key to portable applications: once an application is written to a PaaS platform, it can be deployed on different IaaS platforms. It’s also important to note that while AWS is almost always associated with IaaS, it has started to look a lot like a PaaS provider (I touched on this in a blog earlier this month). An application written to use AWS features like Auto Scaling is great, but not very portable. Lastly, the PaaS market is ripe for innovation. Barriers to entry are low, as is required startup capital (there is no need to build a datacenter to build a useful PaaS platform).

This is just my opinion on PaaS — I think the next few years will see a growing interest in PaaS, possibly even over IaaS. I’m interested in hearing what you think about PaaS; feel free to leave me a comment here, find me on Twitter at @dixonjp90, or reach out to us at socialmedia@greenpages.com.

To hear more from John, download his whitepaper on hybrid cloud computing or his ebook on the evolution of the corporate IT department!


Grading the Internet’s 2014 Tech Predictions

 

The time is here for bloggers across the internet to make their tech predictions for 2014 and beyond (we have made some ourselves around storage and cloud). In this post, a couple of our authors have weighed in to grade predictions made by others across the web.

Prioritizing Management Tool Consolidation vs. New Acquisitions

Enterprise customers will want to invest in new tools only when necessary. They should look for solutions that can address several of their needs so that they do not have to acquire multiple tools and integrate them. The ability to cover multiple areas of management (performance, configuration, and availability), to support multiple technologies (e.g., application tiers), and to operate across multiple platforms (Unix, Windows, virtual) will be important criteria for enterprises to assess which management tools will work for them. (eweek)

Agree – I have been saying this for a while. If you want a new tool, get rid of five, consolidate, and use what you have now, or get one that really works. (Randy Becker)

 

Bigger big data spending

IDC predicts spending of more than $14 billion on big data technologies and services or 30% growth year-over-year, “as demand for big data analytics skills continues to outstrip supply.” The cloud will play a bigger role with IDC predicting a race to develop cloud-based platforms capable of streaming data in real time. There will be increased use by enterprises of externally-sourced data and applications and “data brokers will proliferate.” IDC predicts explosive growth in big data analytics services, with the number of providers to triple in three years. 2014 spending on these services will exceed $4.5 billion, growing by 21%. (Forbes)

Absolutely agree with this.  Companies of all sizes are constantly looking to garner more intelligence from the data they have.  Even here at GreenPages we have our own big data issues and will continue to invest in these solutions to solve our own internal business needs. (Chris Ward)

 

Enterprises Will Shift From Silo to Collaborative Management

 In 2014, IT organizations will continue to feel increased pressure from their lines of business. Collaborative management will be a key theme, and organizations will be looking to provide a greater degree of performance visibility across their individual silo tiers to the help desk, so it is easier and faster to troubleshoot problems and identify the tier that is responsible for a problem. (eweek)

Agree – cross domain technology experts are key!  (Randy Becker)

 

New IT Will Create New Opportunities

Mobility, bring-your-own device (BYOD) and virtual desktops will all continue to gain a foothold in the enterprise. The success of these new technologies will be closely tied to the performance that users can experience when using these technologies. Performance management will grow in importance in these areas, providing scope for innovation and new solutions in the areas of mobility management, VDI management and so on. (eweek)

Disagree – This is backwards. The business is driving change and accountability.  It is not IT that creates new opportunities – it is the business demanding apps that work and perform for the people using them. (Randy Becker)

 

Here comes the Internet of Things

By 2020, the Internet of Things will generate 30 billion autonomously connected end points and $8.9 trillion in revenues. IDC predicts that in 2014 we will see new partnerships among IT vendors, service providers, and semiconductor vendors that will address this market. Again, China will be a key player:  The average Chinese home in 2030 will have 40–50 intelligent devices/sensors, generating 200TB of data annually. (Forbes)

Totally agree with this one. Everything and everybody is eventually going to be connected. I wish I were building a new home right now because there are so many cool things you can do by having numerous household items connected. I also love it because I know that in 10 years, when my daughter turns 16, I’ll no doubt know in real time where she is and what she is doing. However, I doubt she’ll appreciate the ‘coolness’ of that. Although very cool, this concept does introduce some very real challenges around management of all of these devices. Think about 30 billion devices connected to the net… We might actually have to start learning about IPv6 soon… (Chris Ward)

 

Cloud service providers will increasingly drive the IT market

As cloud-dedicated datacenters grow in number and importance, the market for server, storage, and networking components “will increasingly be driven by cloud service providers, who have traditionally favored highly componentized and commoditized designs.” The incumbent IT hardware vendors will be forced to adopt a “cloud-first” strategy, IDC predicts. 25–30% of server shipments will go to datacenters managed by service providers, growing to 43% by 2017. (Forbes)

Not sure I agree with this one for 2014, but I do agree with it in the longer term. As more and more applications/systems get migrated to public cloud providers, that means less and less hardware/software purchased directly by end-user customers and thus more consolidation at the cloud providers. This could be a catch-22 for a lot of the traditional IT vendors like HP and Dell. When’s the last time you walked into an Amazon or Google datacenter and saw racks and racks of HP or Dell gear? Probably not too recently, as these providers tend to ‘roll their own’ from a hardware perspective. One thing is for sure…this will get very interesting over the next 24 to 36 months… (Chris Ward)

 

End-User Experience Will Determine Success

Businesses will expect IT to find problems before their users do, pinpoint the root cause of the problem and solve the problem as early as possible. IT organizations will seek solutions that will allow them to provide great user experience and productivity. (eweek)

Agree – 100% on this one. Need a good POC and Pilot that is well managed with clear goals and objectives. (Randy Becker)

 

Amazon (and possibly Google) to take on traditional IT suppliers

Amazon Web Services’ “avalanche of platform-as-a-service offerings for developers and higher value services for businesses” will force traditional IT suppliers to “urgently reconfigure themselves.” Google, IDC predicts, will join in the fight, as it realizes “it is at risk of being boxed out of a market where it should be vying for leadership.” (Forbes)

I agree with this one to an extent. Amazon has certainly captured a good share of the market in two categories – developers and large scale-out applications – and I see them continuing to dominate in these two spaces. However, anyone who thinks that customers are forklift-moving traditional production business applications from the datacenter to the public cloud/Amazon should really get out in the field and talk to CIOs and IT admins, as this simply isn’t happening. I’ve had numerous conversations with our own customers around this topic, and when you do the math it just doesn’t make sense in most cases – assuming the customer has an existing investment in hardware/software and some form of datacenter to house it. That said, where I have seen uptake of Amazon and other public cloud providers is with startups or companies being spun out of a larger parent. Bottom line: Amazon and others will absolutely compete with traditional IT suppliers, just not in a ubiquitous manner. (Chris Ward)

 

The digitization of all industries

By 2018, 1/3 of share leaders in virtually all industries will be “Amazoned” by new and incumbent players. “A key to competing in these disrupted and reinvented industries,” IDC says, “will be to create industry-focused innovation platforms (like GE’s Predix) that attract and enable large communities of innovators – dozens to hundreds will emerge in the next several years.” Concomitant with this digitization of everything trend, “the IT buyer profile continues to shift to business executives. In 2014, and through 2017, IT spending by groups outside of IT departments will grow at more than 6% per year.” (Forbes)

I would have to agree with this one as well.  The underlying message here is that IT spending decisions continue to shift away from IT and into the hands of the business.  I have seen this happening more and more over the past couple of years and can’t help but believe it will continue in that direction at a rapid pace. (Chris Ward)

What do you think about these predictions? What about Chris and Randy’s take on them?

Download this free eBook about the evolution of the corporate IT department.


5 Cloud Predictions for 2014

By John Dixon, LogicsOne

 

Here are my 5 Cloud Predictions for 2014. As always, leave a comment below and let me know what you think!

1. IaaS prices will drop by at least 20%

Amazon has continued to reduce its pricing since it first launched its cloud services back in 2006. In February of last year, Amazon dropped its prices for the 25th time. By April, prices had dropped for the 30th time, and by the summer it was up to 37 times, including a 37% drop in hourly costs for dedicated on-demand instances. Microsoft has announced that it will follow AWS’s lead on price cuts. I expect this trend to continue in 2014 and likely 2015. I cover these price changes, and the impact they will have on the market as more organizations embrace the public cloud, in more detail in my eBook.

2. We’ll see signs of the shift to PaaS

Amazon is already starting to look more like a PaaS provider than an IaaS provider. Just consider pre-packaged, pre-engineered features like Auto Scaling, CloudWatch, SQS, and RDS, among other services. An application hosted with AWS that uses all of these features looks more like an AWS application and less like a cloud application. Using proprietary features is very convenient, but don’t forget how application portability is impacted. I expect continued innovation in the PaaS market with new providers and technology, while downward price pressure in the IaaS market remains high. Could AWS (focusing on PaaS innovation) one day source its underlying infrastructure from a pure IaaS provider? This is my prediction for the long term — large telecoms like AT&T, Verizon, BT, et al. will eventually own the IaaS market, while Amazon, Google, and Microsoft focus on PaaS innovation and use infrastructure provided by those telecoms. This of course leaves room for startup, niche PaaS providers to build something innovative and leverage quality infrastructure delivered by the telecoms. This is already happening with smaller PaaS providers. Look for signs of this continuing in 2014.

3. “The cloud” will not be regulated

Recently, there have been rumblings about regulating “the cloud,” especially in Europe, along with claims that European clouds are safer than American clouds. If we stick with the concept that cloud computing is just another way of running IT (I call it the supply chain for IT service delivery), then the same old data classification and security rules apply. Only now, if you use cloud computing concepts, the need to classify and secure your data appropriately becomes even more important. An attempt to regulate cloud computing would certainly have far-reaching economic impacts. This is one to watch, but I don’t expect any legislative action to happen here in 2014.

4. More organizations will look to cloud as enabling DevOps

It’s relatively easy for developers to head out to the cloud, procure needed infrastructure, and get to work quickly. When developers behave like this, they not only write code and test new products, but they become the administrators of the platforms they own (all the way from underlying code to patching the OS) — development and operations come together. This becomes a bit stickier as things move to production, but the same concept can work (see prediction #5).
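To make that concrete, here is a minimal sketch of the self-service this implies, using boto3 (today’s AWS SDK for Python). The region, AMI ID, and key pair name are placeholders:

    # A developer provisioning their own box through an API call instead
    # of a ticket queue. Sketch only: AMI ID and key name are made up.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instances = ec2.create_instances(
        ImageId="ami-12345678",        # placeholder image
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="dev-keypair",         # placeholder key pair
    )
    print("provisioned:", instances[0].id)

From here, the developer owns the instance end to end, patching included.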

5. More organizations will be increasingly interested in governance as they build a DevOps culture

Because developers can quickly bypass traditional procurement processes and controls, new governance concepts will be needed. Notice how I wrote “concepts” and not “controls.” Part of the new role of the IT department is to stay a step ahead of these movements and offer developers new ways to govern their own platforms. For example, a real-time chart showing used vs. budgeted resources will influence a department’s behavior much more effectively than a cold process that ends with “You’re over budget; you need to get approval from an SVP (expected wait time: 2-8 weeks).”
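As a toy illustration of governance-as-a-concept, the hypothetical check below nudges a team as it approaches its budget instead of gating it behind approvals. All names and numbers are made up:

    # Hypothetical governance nudge: show teams where they stand rather
    # than blocking them with an approval process.
    BUDGETS = {"collaboration": 10_000, "dev-sandbox": 2_500}  # $/month

    def spending_nudge(service: str, spent: float) -> str:
        pct = 100 * spent / BUDGETS[service]
        if pct < 80:
            return f"{service}: {pct:.0f}% of budget used - all good"
        if pct < 100:
            return f"{service}: {pct:.0f}% used - consider scaling back"
        return f"{service}: over budget - flagged to the service owner"

    print(spending_nudge("dev-sandbox", 2_100))  # 84% used: a nudge, not a gate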

[Images: DevOps CIO Dashboard and Service Owner Dashboard]

The numbers pictured are fictitious. With the concept of Service Owners, the owner of collaboration services can get a view of the applications and systems that provide the service. The owner can then see that VoIP spending is a little above the others and drill down to see where resources are being spent (on people, processes, or technology). Different ITBM applications display these charts differently, but the premise is the same – real-time visibility into spend. With cloud usage in general gaining steam, it is now possible to adjust the resources allocated to these services. With this type of information available to developers, it is possible to take proactive steps to avoid compromising the budget allocated to a particular application or service. By the same token, this information also exposes opportunities to make informed investments in certain areas.

So there you have it, my 2014 cloud predictions. What other predictions do you have?

To hear more from John, download his eBook “The Evolution of Your Corporate IT Department” or his whitepaper “Cloud Management, Now.”


10 Storage Predictions for 2014

By Randy Weis, Consulting Architect, LogicsOne

As we wrap up 2013 and head into the New Year, I wanted to give 10 predictions I have for the storage market for 2014.

  1. DRaaS will be the hottest sector of cloud-based services: Deconstructing the cloud means breaking out specific services that fit the definition of a cloud-type service, such as Disaster Recovery as a Service (DRaaS), along with other specialized and targeted uses of shared, multi-tenant computing and storage services. Capital expenditures, time to market, and staff training are all issues that prevent companies from developing a disaster recovery strategy and actually implementing it. I predict that DRaaS will be the hottest sector of cloud-based services for small-to-medium businesses and commercial companies. This will impact secondary storage purchases.
  2. Integration of flash storage technology will explode: The market for flash storage is maturing and consolidating. EMC has finally entered the market, and Cisco has purchased Whiptail to integrate it into its Unified Computing System. PCI flash, server flash drives at different tiers of performance and endurance, hybrid flash arrays, and all-flash arrays will all continue to drive the adoption of solid state storage in mainstream computing.
  3. Storage virtualization – software defined storage on the rise: VMware is going to make its Virtual SAN (VSAN) technology generally available at the beginning of Q2 2014. This promises to create a brand new tier of storage in datacenters for virtual desktop solutions, disaster recovery, and other specific use cases. EMC has shipped the first release of its software defined storage product, ViPR. It has a ways to go before it really begins to address software defined storage requirements, but it is a huge play in the sense that it validates a segment of the market that has long had a minuscule share. DataCore has been the only major player in this space for 15 years, and it sees EMC’s announcement as a validation of its approach of decoupling storage management and software from commodity hard drives and proprietary array controllers.
  4. Network Attached Storage (NAS) Revolution: We’re undergoing a revolution with the introduction and integration of scale-out NAS technologies. One of the most notable examples is Isilon, purchased by EMC and now appearing as a more fully integrated and widely available solution with a wide variety of applications. Meanwhile, NetApp continues to innovate in the traditional scale-up NAS market with increasing adoption of ONTAP 8.x. New NAS systems feature support for SMB 3.0, Microsoft’s significant overhaul of its Windows file-sharing protocol (also known as CIFS). This has a significant impact on the design of Hyper-V storage and Windows file sharing in general. Client- and server-side failover are now possible with SMB 3.0, which enables the kind of high availability and resiliency for Hyper-V that VMware has enjoyed as a competitive advantage.
  5. Mobile Cloud Storage – File Sharing Will Never Be the Same: Dropbox, Box, Google Drive, Huddle, and other access-your-data-anywhere services (especially from smartphones) are revolutionizing the way individual consumers access their data. This creates security headaches for IT admins, but the vendors are responding with better and better security built into their products. At the enterprise level, Syncplicity, Panzura, Citrix ShareFile, Nasuni, and other cloud storage and shared storage technologies are providing deep integration with Active Directory and enabling transfer of large files across long distances quickly and securely. These technologies integrate with on-premises NAS systems and cloud storage. Plain and simple, file sharing will never be the same again.
  6. Hyper-Converged Infrastructure Will Be a Significant Trend: Nutanix, SimpliVity (based in Westborough, MA), and VMware’s VSAN technology will all change the way shared storage is viewed in datacenters of every size. These products will not replace shared storage arrays but instead provide an integrated, flexible, and modular way to scale virtualized application deployments, such as VDI and virtual servers. These technologies all integrate compute and storage, networking (at different levels), and even data protection technology to eliminate multiple expenditures and multiple points of management. Most importantly, hyper-converged infrastructure will allow new deployments to begin small and then scale out without large up-front purchases. This will not work for every tier of application or every company, but it will be a significant trend in 2014.
  7. Big Data Will Spread Throughout Industries: Big Data has become as much a buzzword as cloud, but actual use of the technologies we call big data is growing rapidly. This adoption is happening not only at internet giants like Google and companies that track online behavior, but also in industries such as insurance, life sciences, and retail. Integration of big data technologies (e.g., Hadoop, MapReduce) with more traditional SQL database technology allows service providers of any type to extract data from traditional databases and process it at huge scale more efficiently and more quickly, while still gaining the advantages of more structured databases. This trend will continue to spread throughout the many industries that need to manage large amounts of structured and unstructured data.
  8. Object-based storage will grow: Cloud storage will be big news in 2014 for two major reasons. The first stems from the shock waves of Nirvanix going out of business: corporate consumers of cloud storage will be much more cautious and will demand better SLAs in order to hold cloud storage providers accountable. The second has to do with the adoption of giant, geographically dispersed data sets. Object-based storage has been a little-known but important development in storage technology that allows data sets on the scale of petabytes to be stored and retrieved by the people who generate data and those who consume it. However, these monstrous data sets can’t be protected by traditional RAID technologies. Providers such as Cleversafe have developed a means to spread data across multiple locations, preserving its integrity and improving resiliency while continuing to scale to massive amounts (see the sketch after this list for the basic idea behind dispersal).
  9. More Data Growth: This may seem redundant, but it is predicted that business data will double every two years. While this may seem like great news for traditional storage vendors, it is even better news for people who provide data storage on a massive scale, and for those technology firms that enable mobile access to that data anywhere while integrating well with existing storage systems. This exponential data growth will lead to advances in file system technologies, object storage integration, deduplication, high capacity drives and storage resource/lifecycle management tool advances.
  10. Backup and Data Protection Evolution + Tape Will Not Die: The data protection market continues to change rapidly as more servers and applications are virtualized or converted to SaaS. Innovations in backup technology include the rapid rise of Veeam as a dominant backup and replication technology – not only for businesses but also for service providers. The Backup as a Service market seems to have stalled out because feature sets are limited; however, the appliance model for backups and backup services continues to show high demand. The traditional market leaders face very strong competition from the new players and from longtime competitor CommVault, which has evolved into a true storage resource management play and is rapidly gaining market share as an enterprise solution. Data deduplication has evolved from appliances such as Data Domain into a software feature set that’s included in almost every backup product out there; CommVault, Veeam, Backup Exec, and others all offer server-side deduplication, client-side deduplication, or both. The appliance model for disk-based backups continues to be popular, with Data Domain, ExaGrid, and Avamar as leading examples; EMC dominates this market, and the competition is still trying to capture share. Symantec has even entered the game with its own backup appliances, which are essentially servers preconfigured with its popular software and internal storage. And tape will not die: long-term, high-capacity archives still require tape, primarily for economic reasons. The current generation of tape technology, LTO-6, can hold up to 6.25 TB of compressed data on a single tape, and tape drives are now routinely made with built-in encryption to avoid the data breaches that were more common in the past with unencrypted tape.
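Since prediction #8 leans on dispersal, here is a toy sketch of the idea in Python. Real dispersed storage (Cleversafe and the like) uses far more robust erasure coding than a single XOR parity slice, so treat the scheme and names as illustrative only:

    # Toy dispersal: two data slices plus an XOR parity slice, each
    # stored at a different site; any single slice can be lost.
    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def disperse(data: bytes):
        half = len(data) // 2               # assumes even-length data
        s1, s2 = data[:half], data[half:]
        return s1, s2, xor_bytes(s1, s2)

    def recover(s1, s2, parity):
        if s1 is None:                      # rebuild a lost slice
            s1 = xor_bytes(s2, parity)
        elif s2 is None:
            s2 = xor_bytes(s1, parity)
        return s1 + s2

    s1, s2, p = disperse(b"petabyte-scale data!")
    print(recover(s1, None, p))             # site 2 is down; data survives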

 

So there you have it, my 2014 storage predictions. What do you think? Which do you agree with/disagree with? Did I leave anything off that you think will have a major impact next year? As always, reach out if you have any questions!

 

The 2013 Tech Industry – A Year in Review

By Chris Ward, CTO, LogicsOne

As 2013 comes to a close and we begin to look forward to what 2014 will bring, I wanted to take a few minutes to reflect back on the past year. We’ve been talking a lot about that evil word ‘cloud’ for the past 3 to 4 years, but this year put a couple of other terms up in lights, including Software Defined X (Datacenter, Networking, Storage, etc.) and Big Data. Like ‘cloud,’ these two newer terms can easily mean different things to different people, but put in simple terms, in my opinion, there are some generic definitions which apply in almost all cases. Software Defined X is essentially the concept of taking any ties to specific vendor hardware out of the equation and providing a central, vendor-agnostic point for configuration – agnostic, of course, except for the vendor providing the Software Defined solution :). I define Big Data simply as the ability to find a very specific and small needle of data in an incredibly large haystack within a reasonably short amount of time. I see both of these technologies becoming more widely adopted in short order, with Big Data technologies already well on the way.

As for our friend ‘the cloud,’ 2013 did see a good amount of growth in consumption of cloud services, specifically in the areas of Software as a Service (SaaS) and Infrastructure as a Service (IaaS). IT has adopted a ‘virtualization first’ strategy over the past 3 to 4 years when it comes to bringing any new workloads into the datacenter, and I anticipate we’ll see a ‘SaaS first’ approach being adopted in short order, if it is not out there already. However, I can’t necessarily say the same for ‘IaaS first.’ While IaaS is a great solution for elastic computing, I still see most usage confined to application development or super-large scale-out applications (think Netflix). The mass adoption of IaaS for simply forklifting existing workloads out of the private datacenter and into the public cloud simply hasn’t happened. Why? My opinion is that for traditional applications, neither the cost model nor the operational model makes sense, yet.

In relation to ‘cloud,’ I did see a lot of adoption of advanced automation, orchestration, and management tools and thus an uptick in ‘private clouds.’  There are some fantastic tools now available both commercially and open source, and I absolutely expect to see this adoption trend to continue, especially in the Enterprise space.  Datacenters, which have a vast amount of change occurring whether in production or test/dev, can greatly benefit from these solutions. However, this comes with a word of caution – just because you can doesn’t mean you should.  I say this because I have seen several instances where customers have wanted to automate literally everything in their environments. While that may sound good on the surface, I don’t believe it’s always the right thing to do.  There are times still where a human touch remains the best way to go. 

As always, there were some big time announcements from major players in the industry. Here are some posts we did with news and updates summaries from VMworld, VMware Partner Exchange, EMC World, Cisco Live and Citrix Synergy. Here’s an additional video from September where Lou Rossi, our VP, Technical Services, explains some new Cisco product announcements. We also hosted a webinar (which you can download here) about VMware’s Horizon Suite as well as a webinar on our own Cloud Management as a Service Offering

The past few years have seen various predictions about the unsustainability of Moore’s Law, which holds that processors will double in computing power every 18-24 months, and 2013 was no exception. The latest prediction is that by 2020 we’ll reach the 7nm mark and Moore’s Law will no longer be an exponential function. The interesting part is that this prediction is based not on technical limitations but on economic ones: getting below that 7nm mark will be extremely expensive from a manufacturing perspective, and, hey, 640K of RAM is all anyone will ever need, right? :)

Probably the biggest news of 2013 was the revelation that the National Security Agency (NSA) had undertaken a massive program and seemed to be capturing every packet of data coming in or out of the US across the Internet. I won’t get into any political discussion here, but suffice it to say this is probably the largest example of ‘big data’ that exists currently. It also has large potential ramifications for public cloud adoption: security and data integrity have been two of the major roadblocks to adoption, so it certainly doesn’t help that customers may now be concerned about the NSA eavesdropping on everything going on within public datacenters. It is estimated that public cloud providers may lose as much as $22-35B over the next 3 years as a result of customers slowing adoption because of this. The only good news, at least for now, is that it’s very doubtful the NSA or anyone else on the planet has the means to actually mine anywhere close to 100% of the data they are capturing. However, like anything else, it’s probably only a matter of time.

What do you think the biggest news/advancements of 2013 were?  I would be interested in your thoughts as well.

Register for our upcoming webinar on December 19th to learn how you can free up your IT team to be working on more strategic projects (while cutting costs!).


Cloud Management, Business Continuity & Other 2013 Accomplishments

By Matt Mock, IT Director

It was a very busy year at GreenPages for our internal IT department. With 2013 coming to a close, I wanted to highlight some of the major projects we worked on over the course of the year. The four biggest projects we tackled were using a cloud management solution, improving our business continuity plan, moving our datacenter, and creating and implementing a BYOD policy.

Cloud Management as a Service

GreenPages now offers a Cloud Management as a Service (CMaaS) solution to our clients. We implemented the solution internally late last year, but really started utilizing it as a customer would this year by increasing what was being monitored and managed. We decided to put Exchange under the “Fully Managed” package of CMaaS. Exchange requires a lot of attention and effort; instead of hiring a full-time Exchange admin, we were able to offload that piece with CMaaS, as our Managed Services team does all the health checks to make sure any new configuration changes are correct. This resulted in considerable cost savings.

Having access to the team 24/7 is a colossal luxury. Before using CMaaS, if an issue popped up at 3 in the morning, we would find out about it the next morning and have to try to fix the problem during business hours. I don’t think I need to explain to anyone the hassle of trying to fix an issue with frustrated coworkers who are unable to do their jobs. If an issue arises now in the middle of the night, the problem has already been fixed before anyone shows up to start working. The Managed Services team also researches and remediates bugs that come up. This happened to us when we ran into some issues with Apple iOS calendaring: the Managed Services team did the research to determine the cause and went in and fixed the problem. If my team had tried to do this, it would have taken us 2-3 days of wasted time. Instead, we could focus on our other strategic projects. In fact, we are holding a webinar on December 19th that will cover strategies and benefits of being the ‘first-to-know,’ and we will also provide a demo of the CMaaS Enterprise Command Center.

We also went live with fully automated patching, which requires zero intervention from my team, and we leveraged CMaaS to spin up a fully managed Linux environment. It’s safe to say that if we hadn’t implemented CMaaS, we would not have been able to accomplish all of our strategic goals for this year.

{Download this free whitepaper to learn more about how organizations can revolutionize the way they manage hybrid cloud environments}

Business Continuity Plan

We also determined that we needed to upgrade our disaster recovery plan to a true, robust business continuity plan. A main driver of this was our more diverse office model. Not only were more people working remotely as our workforce expanded, but we now have office locations up and down the east coast in Kittery, Boston, Attleboro, New York City, Atlanta, and Tampa. We needed to ensure that we could continue to provide top-quality service to our customers if an event were to occur. My team took a careful look at our infrastructure setup at the time; after examining our policies and plans, we generated new ones around the optimal outcome we wanted and then adjusted the infrastructure to match. A large part of this included changing providers for our data and voice, which included moving our datacenter.

Datacenter Move

In 2013 we wanted to have more robust datacenter facilities. Ultimately, we were able to get into an extremely redundant and secure datacenter at the Markley Group in Boston that provided us with cost savings. Furthermore, Markley is a large carrier hotel, which gives us additional savings on circuit costs. With this move we’re able to further our capability to deliver to our customers 24/7. Another benefit our new datacenter offers is excess office space, so if there ever were an event at one of our GreenPages locations, we would have a place to send people to work. I recently wrote a post which describes the datacenter move in more detail.

BYOD Policy

As 2013 ends, we are finishing our first full year with our BYOD policy. We are taking this time to look back, see where there were any issues with the policies or procedures, and adjust for the next year. Our plan is to ensure that year two is even more streamlined. I answered questions in a recent Q&A explaining our BYOD initiative in more detail.

I’m pretty happy looking back at the work we accomplished in 2013. As with any year, there were bumps along the way and things we didn’t get to that we wanted to. All in all though, we accomplished some very strategic projects that have set us up for success in the future. I think that we will start out 2014 with increased employee satisfaction, increased productivity of our IT department, and of course noticeable cost savings. Here’s to a successful 2014!

Is your IT team the first-to-know when an IT outage happens? Or, do you find out about it from your end users? Is your expert IT staff stretched thin doing first-level incident support? Could they be working on strategic IT projects that generate revenue? Register for our upcoming webinar to learn more!

 

Trick or Treat: Top 5 Fears of a CTO

By Chris Ward, CTO

Journey to the Cloud’s Ben Stephenson recently sat down with Chris Ward, CTO of GreenPages-LogicsOne, to get his take on what the top 5 fears of a CTO are.

Ben: Chief Technology Officer is obviously an extremely strategic, important, and difficult role within an organization. Since it’s almost Halloween, and since you’re an active (and successful) CTO yourself, I thought we would talk about your Top 5 Fears of a CTO. You also have the unique perspective of seeing how GreenPages uses technology internally, as well as how GreenPages advises clients to utilize different technologies.

Chris: Sounds good. I think a major fear is “Falling Behind the Trends.” In this case, it’s not necessarily that you couldn’t see what was coming down the path. You can see it there and know it’s coming, but can you get there with velocity? Can you get there before the competition does?

Ben: Do you have any examples of when you have avoided falling behind the trends?

Chris: At GreenPages, we were fortunate to catch virtualization early on when a lot of others didn’t. We had a lot of customers who were not sold on virtualization for 2-4 years. Those customers are now very far behind the competition and are trying to play catch up. In some cases, I’m sure it’s meant the CTO is out of a job. We also utilized virtualization internally early on and reaped the benefits. Another example is our CMaaS Brokerage and Governance offering. We recognize the significance of cloud brokerage and the paradigm shift towards a hybrid cloud computing model. In this case we are out ahead of the market.

Ben: How about a time when GreenPages did fall behind a trend?

Chris: I would say we fell behind a trend when we began our managed services business. It was traditional, old school managed services. It definitely took us some time to figure out where we wanted to go and where we wanted to be. While we may have fallen behind initially, we recognized change was needed and our Cloud Management as a Service offering has transformed us. Instead of sitting back and missing the boat, we are now in a great spot. This will be a huge help to our customers – but will (and does already) help us significantly internally as well.

Ben: How about fear number 2?

Chris: Fear number two is not seeing around the bend. From my perspective as the CTO at a solutions provider, things move so fast in this industry, and GreenPages offers such a wide variety and breadth of products and services to customers that it can be very difficult to keep up. If we focused on only one area it would be a lot easier, but since we focus on cloud, virtualization, end user computing, security, storage, datacenter transformation, networking, and more, it can be quite challenging. A corporate CTO is allowed to be a market follower, which can be somewhat of an advantage. While you don’t want to fall behind, you do have partners, like GreenPages and others out there, that you can count on.

Ben: That makes sense. What about a 3rd fear?

Chris: Another large fear for CTOs is making a wrong turn. CTOs can get the crystal ball out, and there may be a couple of things coming down the road…but what happens if you turn left and everyone else turns right? What happens if you make the wrong decision, or make the decision too early?

Ben: Can you give us an example?

Chris: A good example of taking a turn too early in the Cloud era is with the company Nirvanix. Cloud storage is extremely important, but what happens when a business model has not been properly vetted? This is one of the “gotchas” of being an early adopter. To be successful you need a good mix. You can’t be too conservative, but you can’t jump all in any time a new company pops up – the key is balance.

Ben: Do you have any advice for CTOs about this?

Chris: Sure – just because you can doesn’t mean you should!

Ben: I’ve heard you say that one before…

Chris: For example, software defined networking stacks, with products like Cisco’s Insieme and VMware NSX, are very cool new technologies. I personally, and we at GreenPages, think this is going to be the next big thing. But we’re at a crossroads…who should use these? Who will gain the benefits? Maybe it makes sense for the enterprise but not for small businesses? This is something major that I have to determine – who is this a good fit for?

Ben: How about fear number 4?

Chris: Fear number 4 revolves around retaining my talent. I want my team to feel like they are always learning something new. I want them to know they are always on the bleeding edge of IT. I want to give them a world that changes very quickly. In my experience, most people that are stellar employees in a technical capacity want to be challenged constantly and to try new things and look at different ways of doing things.

Ben: What should CTOs do to try and retain talent?

Chris: Really take the time and focus on building a culture and environment that harnesses what I mentioned above. If not, you’re at serious risk of losing top talent.

Ben: Before I get too scared let’s get to number 5 and finish this up.

Chris: I’d say the fifth fear of mine is determining if I am working with the right technologies and the right vendors. IT can often be walking a tightrope between vendors from technical and business perspectives. From my perspective, I need to make sure we are providing our customers with the right technology from the right vendor to meet their needs. I need to determine if the technology works as advertised. Is it something that is reasonable to implement? Is there money in this for GreenPages?

Ben: What about from a customer’s perspective?

Chris: The customer also needs to make sure they align themselves with the right partners.  CTOs want to find partners that are looking towards the future, who will advise them correctly, and who will allow the business to stay out ahead of the competition. If a CTO looks at a partner or technology and doesn’t think it’s really advancing the business, then it’s time to reevaluate.

Ben: Thanks for the time Chris – and good luck!

What are your top fears as an IT decision maker? Leave them in the comments section!

Download this free ebook on the evolution of the corporate IT department. Where has the IT department been, where is it now, and where should it be headed?


Moving Email to the Cloud Part 2

By Chris Chesley, Solutions Architect

My last blog post was part 1 of moving your email to the cloud with Office 365. Here’s the next installment in the series, in which I will cover the three methods of authenticating your users for Office 365. This is a very important consideration and will have a large impact on your end users and their day-to-day activities.

The first method of authenticating your users into Office 365 is to do so directly. This has no ties to your Active Directory. The benefit here is that your users get mail, messages, and SharePoint access regardless of your site’s online status. The downside is that your users may have a different password than the one they use to get into their desktops/laptops, and this can get very messy if you have a large number of users.

The second way of authenticating your users is full Active Directory integration. I will refer to this as the “Single Sign-On” method. In this method, your Active Directory is the authoritative source of authentication for your users. Users log into their desktop/laptop and can access all of the Office 365 applications without typing their password again, which is convenient. You DO need a few servers running locally to make this happen: an Active Directory Federation Services (ADFS) server and an Azure Active Directory Sync server, both of which are needed to sync your AD and user information to Office 365. The con of this method is that you need a redundant AD setup, because if it’s down, your users are not going to be able to access mail or anything else in the cloud. You can do this by hosting a Domain Controller, and the other two systems I mentioned, in a cloud or at one of your other locations, if you have one.

The third option is what I will refer to as “Single Password.” In this setup, you install an Azure Active Directory Sync server in your environment but do not need an ADFS server. The Sync tool will hash your users’ passwords and send the hashes to Office 365. When a user tries to access any of the Office 365 services, they are asked to type in their password; it is then hashed and compared to the stored hash, and they are let in if the two match. This does require users to type their password again, but it allows them to use their existing Active Directory password, and anytime that password changes, it is synced to the cloud.
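Conceptually, the hash-and-compare flow looks like the Python sketch below. This is an illustration only; the actual hashing Microsoft’s sync tooling performs is more involved, and the password, salt handling, and iteration count here are made up:

    # Illustration of "sync a hash, never the password." Not the real
    # Azure AD algorithm; parameters are hypothetical.
    import hashlib, hmac, os

    def sync_hash(password: str, salt: bytes) -> bytes:
        # A salted, iterated hash is what gets sent to the cloud,
        # never the cleartext password itself.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    salt = os.urandom(16)
    stored = sync_hash("Summer2014!", salt)      # published by the sync tool

    attempt = sync_hash("Summer2014!", salt)     # hashed again at sign-in
    print(hmac.compare_digest(stored, attempt))  # True -> let the user in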

The choice of which method you use has a big impact on your users as well as how you manage them.  Knowing these choices and choosing one that meets your business goals will set you on the path of successfully moving your services to the cloud.

 

Download this free ebook on the evolution of the corporate IT department