Category Archives: Cloud computing

Cloud Spending Will Increase 1 Billion% by 2014

By Ben Stephenson, Journey to the Cloud

It seems like every week a new study comes out analyzing cloud computing growth. Whether it’s public cloud services spending reaching $47.4B in 2013, global SaaS spending growing from $13.5B in 2011 to $32.8B in 2016, the public cloud services market forecast to grow 18.5 percent in 2013, or cloud spending at Dunder Mifflin increasing 200% by 2020, the indication is the same: cloud adoption and spending are on the rise. But how is that relevant to you?

Does it matter to the everyday CIO that cloud spending at midsized companies west of the Mississippi is going to increase by 15% over the next 3 years? The relevant question isn’t how much cloud adoption and spending will increase, but why. It’s the “why” that matters to the business. If you understand the why, it becomes easier to put context around the statistics coming out of these studies. It comes down to a shift in the industry – a shift in the economics of how a modern-day business operates. This shift revolves around the way IT services are being delivered.

To figure out where the industry is going, and why spending and adoption are increasing, you need to look at where the industry has come from. The shift from on-premise IT to public cloud began with SaaS-based technologies. Companies like Salesforce.com realized that organizations were wasting a lot of time and money buying and deploying hardware for their CRM solutions. Why not use the internet to let organizations pay a subscription fee instead of owning their entire infrastructure? This, however, was not true cloud computing. Next came IaaS with Amazon’s EC2 initiative. Essentially, Amazon realized it had excess compute capacity and decided to rent it out to people who needed it. IaaS put an enormous amount of pressure on corporate IT because App Dev teams no longer had to wait weeks or months to test and deploy environments. Instead, they could start up right away and become much more efficient. Finally, PaaS came about with initiatives such as Microsoft Azure.

{Free ebook: The Evolution of Your Corporate IT Department}

The old IT paradigm, or a private cloud environment, consists of organizations buying hardware and software and keeping it in their datacenter behind their own firewalls. While a private cloud environment doesn’t need to be fully virtualized, it does need to be automated, and very few organizations are actually operating in a true private cloud environment. Ideally, a true private cloud lets internal IT compete with public cloud providers by providing a similar level of speed and agility. While the industry is starting to shift towards public cloud, the private cloud is not going away. Public cloud will not be the only way to operate IT, or even the majority of the way, for a long time. This brings us to the hybrid cloud computing model, the direct result of this shift. Hybrid cloud is the combination of private and public cloud architectures. It’s about the ability to seamlessly transition workloads between private and public – in other words, moving on-premise workloads to rented platforms, where you don’t own anything, in order to leverage services.

So why are companies shifting towards a hybrid cloud model? It all comes down to velocity, agility, efficiency, and elasticity. IT delivery methodology is no longer a technology discussion, but, rather, it’s become a business discussion. CIOs and CFOs are starting to scratch their heads wondering why so much money is being put towards purchasing hardware and software when all they are reading about is cloud this and cloud that.

{Free Whitepaper: Revolutionizing the Way Organizations Manage Hybrid Cloud Environments}

The spending and adoption rates of cloud computing are increasing because the shift in the industry is no longer just talk – it’s real and it’s here now. The bottom line? We’re past hypothetical discussions. There is a major shift in the industry that business decision makers need to be taking seriously. If you’re not modernizing your IT operations by moving towards a hybrid cloud model, you’re going to be missing out on the agility and cost savings that can give your organization a substantial competitive advantage.  This is why cloud adoption and spending are on the rise. This is why you’re seeing a new study every month on the topic.

Moving Our Datacenter: An IT Director’s Take

An Interview with Matt Mock, IT Director, GreenPages Technology Solutions

Journey to the Cloud’s Ben Stephenson sat down with GreenPages’ IT Director Matt Mock to discuss GreenPages’ recent datacenter move.

Ben: Why did GreenPages decide to move its datacenter?

Matt: Our contract was up, so we started evaluating new options, looking for a robust, redundant facility to house our equipment. We needed a facility that could meet specific objectives around our business continuity plan. We were also looking for cost savings.

Ben: Where did you move the datacenter to and from?

Matt: Geographically, we stayed in a close area. We moved it from Charlestown, MA a couple of miles down the road into downtown Boston. Staying within a close area certainly made the physical move quicker and easier.

Ben: What were the benefits of moving the datacenter?

Matt: Ultimately, we were able to get into an extremely redundant and secure datacenter that provided us with cost savings. The datacenter is also a large carrier hotel, which gives us additional savings on circuit costs. With this move, we’re better able to deliver to our customers 24/7.

{Register for our upcoming webinar on 11/7 on key announcements from VMworld 2013}

Ben: Tell us about the process of the move. What had to happen ahead of time to ensure a smooth transition?

Matt: The most important parts were planning, testing, and communication. We put together an extremely detailed plan that broke every phase of the move down into 15-minute increments. We put together teams for the specific phases, each with its own communication plan, and we had a backup emergency plan in case we hit any issues the night of the move.

Ben: What happened the night of the move?

Matt: The night of the move we leveraged the excellent facilities at Markley to set up a command center run by one of our project managers. In the room, we had multiple conference bridges to run the different work streams and ensure smooth, constant communication. We also utilized Huddle, our internal collaboration tool, to communicate as our internal systems were down during the move.

Ben: Anything else you had to factor in?

Matt: Absolutely. The same night of the move we were also changing both voice and data providers at three different locations, which added another layer of complexity. We had to work closely with our new providers to ensure a smooth transition. Because we have a 24/7 Managed Services division at GreenPages, we needed to continue to offer customers the same support during the move that we do on a day-to-day basis.

Ben: Did you experience unexpected events during the move? If so, what were they and how did you handle them?

Matt: With any complex IT project you’re going to experience unexpected events. A couple that we experienced were some hardware failures and unforeseen configuration issues. Fortunately, our detailed plan accounted for these issues, and we were able to address them with the teams on hand and remain on schedule.

Ben: You used an all GreenPages team to accomplish this, right?

Matt: Correct. We did not use any outside vendors for this move – all services were rendered by the GreenPages team. Last time we used outside providers and this time we had a much better experience. I’m in the unique position where I have access to an entire team of project managers and technical resources that made doing this possible. In fact, this is something we offer our customers (from consulting to project management to the actual move) so our team is very, very good at it.

Ben: What advice do you have for other IT Directors who are considering moving their datacenters?

Matt: Detailed planning and constant communication are critical – having a plan in place for every possible scenario, and having an emergency plan ready so that in the middle of the night you’re not scrambling to figure out how to address unforeseen issues.

Ben: Congratulations on the successful move. See you Monday after the Patriots crush your Steelers.

Would you like to learn more about how GreenPages can help you with your datacenter needs?

Moving Email to the Cloud Part 2

By Chris Chesley, Solutions Architect

My last blog post was part 1 of moving your email to the cloud with Office 365.  Here’s the next installment in the series, in which I will cover the 3 methods of authenticating your users for Office 365.  This is a very important consideration and will have a large impact on your end users and their day-to-day activities.

The first method of authenticating your users into Office 365 is to do so directly.  This has no ties to your Active Directory.  The benefit here is that your users get mail, messages, and SharePoint access regardless of your site’s online status.  The downside is that your users may have a different password than the one they use to get into their desktops/laptops, and this can get very messy if you have a large number of users.

The second way of authenticating your users is full Active Directory integration.  I will refer to this as the “Single Sign On” method.  In this method, your Active Directory is the authoritative source of authentication for your users.  Users log into their desktop/laptop and can access all of the Office 365 applications without typing their password again, which is convenient. You DO need a few servers running locally to make this happen.  You need an Active Directory Federation Server (ADFS) and an Azure Active Directory Sync server. Both of these services are needed to sync your AD and user information to Office 365. The con of this method is that you need a redundant AD setup, because if it’s down, your users won’t be able to access mail or anything else in the cloud.  You can do this by hosting a Domain Controller, and the other 2 systems I mentioned, in a cloud or at one of your other locations, if you have one.

The third option is what I will refer to as “Single Password.”  In this setup, you install an Azure Active Directory Sync server in your environment but do not need an ADFS server.  The Sync tool will hash your users’ passwords and send the hashes to Office 365.  When a user tries to access any of the Office 365 services, they are asked to type in their password.  The password is then hashed and compared to the stored hash, and they are let in if the hashes match.  This does require the user to type their password again, but it allows them to use their existing Active Directory password, and anytime that password changes, it is synced to the cloud.
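
To make the “Single Password” mechanism a bit more concrete, here is a minimal conceptual sketch in Python. It is purely illustrative – the actual Azure Active Directory Sync tooling uses its own hashing scheme and protocols, and the function names below are assumptions – but it shows the basic idea of storing and comparing a salted password hash rather than the password itself.

```python
import hashlib
import hmac
import os

# Conceptual illustration only: the on-premises sync tool stores a salted hash
# of each user's AD password in the cloud directory, never the password itself.
# (The real Azure AD Sync implementation uses its own scheme; this is a sketch.)

def make_password_hash(password, salt=None):
    """Hash a password with a random salt (illustrative, not the real sync algorithm)."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

# 1. The sync server hashes the Active Directory password and sends (salt, hash) to the cloud.
cloud_salt, cloud_hash = make_password_hash("CorrectHorseBatteryStaple!")

# 2. At sign-in, the cloud side hashes what the user typed with the same salt and compares.
def verify(typed_password):
    _, attempt = make_password_hash(typed_password, cloud_salt)
    return hmac.compare_digest(attempt, cloud_hash)

print(verify("CorrectHorseBatteryStaple!"))  # True  -> user is let in
print(verify("wrong password"))              # False -> access denied
```

The key point for end users is that the password they type is the same one they use on-premises; the cloud side only ever holds a hash of it.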

The choice of which method you use has a big impact on your users as well as how you manage them.  Knowing these choices and choosing one that meets your business goals will set you on the path of successfully moving your services to the cloud.

 

Download this free ebook on the evolution of the corporate IT department

 

My VMworld Breakout Session: Key Lessons Learned from Deploying a Private Cloud Service Catalog

By John Dixon, Consulting Architect, LogicsOne

 

Last month, I had the special privilege of co-presenting a breakout session at VMworld with our CTO Chris Ward. The session’s title was “Key Lessons Learned from Deploying a Private Cloud Service Catalog,” and we had a full house for it. Overall, the session went great and we had a lot of good questions. In fact, due to demand, we ended up giving the presentation twice.

In the session, Chris and I discussed a recent project we did for a financial services firm where we built a private cloud, front-ended by a service catalog. A service catalog really enables self-service – it is one component of corporate IT’s opportunity to partner with the business. In a service catalog, the IT department can publish the menu of services that it is willing to provide and (sometimes) the price that it charges for those services. For example, we published a “deploy VM” service in the catalog, and the base offering was priced at $8.00 per day. Additional storage or memory beyond the basic spec was available at an additional charge. When the customer requests “deploy VM,” the following happens:

  1. The system checks to see if there is capacity available on the system to accommodate the request
  2. The request is forwarded to the individual’s manager for approval
  3. The manager approves or denies the request
  4. The requestor is notified of the approval status
  5. The system fulfills the request – a new VM is deployed
  6. A change record and a new configuration item are created to document the new VM
  7. The system emails the requestor with the hostname, IP address, and login credentials for the new VM

This sounds fairly straightforward, and it is. Implementation, however, is another matter. It turns out that we had to integrate with vCenter, Active Directory, the client’s ticketing system, the client’s CMDB, an approval system, and the provisioned OS in order to automate the fulfillment of this simple request. As you might guess, documenting this workflow upfront was incredibly important to the project’s success. We documented the workflow and assessed it against the request-approval-fulfillment theoretical paradigm to identify the systems we needed to integrate. One of the main points that Chris and I made at VMworld was to build this automation incrementally instead of tackling it all at once. That is, just get the automation suite to talk to vCenter before tying in AD, the ticketing system, and all the rest.
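
To make the incremental approach concrete, here is a minimal Python sketch of the request-approval-fulfillment flow described above. Every function name below is a hypothetical stand-in for the real integrations (vCenter, Active Directory, the ticketing system, the CMDB, and the approval system); this illustrates the shape of the workflow, not the actual orchestration code we deployed.

```python
# Hypothetical sketch of the "deploy VM" request-approval-fulfillment workflow.
# Every integration point below is a stub; in the real project these were
# vCenter, Active Directory, the ticketing system, the CMDB, and an approval
# system, wired in one at a time.

def deploy_vm_workflow(request):
    if not has_capacity(request):                         # 1. capacity check
        notify(request["requestor"], "Denied: insufficient capacity")
        return

    approved = get_manager_approval(request)              # 2-3. manager approves or denies
    notify(request["requestor"], f"Approval status: {approved}")  # 4. notify requestor
    if not approved:
        return

    vm = provision_vm(request)                            # 5. fulfill: deploy the VM
    record_change_and_ci(vm)                              # 6. change record + configuration item
    notify(request["requestor"],                          # 7. send connection details
           f"VM ready: {vm['hostname']} / {vm['ip']} / {vm['credentials']}")

# --- stubs standing in for the real system integrations (illustrative only) ---
def has_capacity(request): return True
def get_manager_approval(request): return True
def provision_vm(request):
    return {"hostname": "vm-001", "ip": "10.0.0.15", "credentials": "sent separately"}
def record_change_and_ci(vm): pass
def notify(user, message): print(f"to {user}: {message}")

deploy_vm_workflow({"requestor": "jdoe", "size": "base", "extra_storage_gb": 0})
```

The value of sketching the workflow this way is that each stub can be wired to a real system one at a time – exactly the incremental build-out we recommended in the session.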

Download this on-demand webinar to learn more about how you can securely enable BYOD with VMware’s Horizon Suite

Self-service, automation, and orchestration all drove real value during this deployment. We were able to eliminate or reduce at least three manual handoffs via this single workflow. Previously, these handoffs were made either by phone or through the client’s ticketing system.

During the presentation we also addressed which systems we integrated, which procedures we selected to automate, and what we plan to have the client automate next. You can check out the actual VMworld presentation here. (If you’re looking for more information around VMworld in general, Chris wrote a recap blog of Pat Gelsinger’s opening keynote as well as one on Carl Eschenbach’s General Session.)

Below are some of the questions we got from the audience:

Q: Did the organization have ITSM knowledge beforehand?

A: The group had very limited knowledge of ITSM but left our project with a real-world perspective on ITIL and ITSM.

Q: What did you do if you needed a certain system in place to automate something?

A: We did encounter this and either labeled it as a risk or used “biomation” (self-service is available, fulfillment is manual, customer doesn’t know the difference) until the necessary systems were made available

Q: Were there any knowledge gaps at the client? If so, what were they?

A: Yes. A developer mentality and a service management mentality are both needed to complete a service catalog project effectively. Traditional IT engineering and operations do not typically have a developer mentality or experience with languages like JavaScript.

Q: Who was the primary group at the client driving the project forward?

A: IT engineering and operations were involved with IT engineering driving most of the requirements.

Q: At which level was the project sponsored?

A: VP of IT Engineering with support from the CIO

All in all, it was a very cool experience to get the chance to present a breakout session at VMworld. If you have any other questions about key takeaways we got from this project, leave them in the comment section. As always, if you’d like more information you can contact us. I also just finished an ebook on “The Evolution of the Corporate IT Department” so be sure to check that out as well!

The Evolution of Your Corporate IT Department

By John Dixon, Consulting Architect, LogicsOne

 

Corporate IT departments have progressed from keepers of technology to providers of complex solutions that businesses truly rely on. Even a business with an especially strong core competency simply cannot compete without information systems to provide key pieces of technology such as communication and collaboration systems (e.g., email). Many corporate IT departments have become adept providers of technology solutions. We, at GreenPages, think that corporate IT departments should be recognized as providers of services. Also, we think that emerging technology and management techniques are creating an especially competitive market of IT service providers. Professional business managers will no doubt recognize that their internal IT department is perhaps another competitor in this market for IT services. Could the business choose to source their systems to a provider of services other than internal corporate IT?

IT departments large and small already have services deployed to the cloud. We think that organizations should prepare to deploy services to the cloud provider that meets their requirements most efficiently, and eventually, move services between providers to continually optimize the environment. As we’ll show, one of the first steps to enabling this Cloud Management is to use a tool that can manage resources in different environments as if they are running on the same platform. Corporate IT departments can prepare for cloud computing without taking the risk of moving infrastructure or changing any applications.

In this piece, I will describe the market for IT service providers, the progression of corporate IT departments from technology providers to brokers of IT services, and how organizations can take advantage of behavior emerging in the market for IT services. This is not a cookbook of how to build a private cloud for your company—this instead offers a perspective on how tools and management techniques, namely Cloud Management as a Service (CMaaS), can be adopted to take advantage of cloud computing, whatever it turns out to become. In the following pages, we’ll answer these questions:

  1. Why choose a single cloud provider? Why not position your IT department to take advantage of any of them?
  2. Why not manage your internal IT department as if it is already a cloud environment?
  3. Can your corporate IT department compete with a firm whose core competency is providing infrastructure?
  4. When should your company seriously evaluate an application for deployment to an external cloud service provider? Which applications are suitable to deploy to the cloud?

 

To finish reading, download John’s free ebook

 

 

 

 

 

 

How IT Operations is Like Auto Racing

By John Dixon, Consulting Architect, LogicsOne

 

If you’ve ever tried your hand at auto racing like I did recently at Road Atlanta, you’ll know that putting up a great lap time is all about technique. If you’ve ever been to a racing school, you’ll also remember that being proactive and planning your corners is absolutely critical to driving safely. Let’s compare IT operations to auto racing. Everyone knows how to, essentially, drive a car – just as every company, essentially, knows how to run IT. What separates a good driver from a great driver? Technique, preparation, and knowing the capabilities of your driver and equipment.

 

The driver = your capabilities

The car = your technology

The track = your operations as the business changes

 

Preparation

Let’s spend a little bit of time on “preparation.” As we all know, preparation time is often a luxury. From what I have seen consulting over the past few years, preparation simply isn’t instilled in the culture of IT. But we’d all agree that more preparation leads to better outcomes (for almost everything, really). So, how do we get more preparation time? This is where the outsourcing trend gained momentum – outsource the small stuff to get more time back to work on strategic projects. Well, this didn’t always work out very well, as typical outsourcing arrangements moved large chunks of IT to an outside provider. Why didn’t we move smaller chunks first? That’s what we do in auto racing – the reconnaissance lap! Now we have the technology and arrangements to do a reconnaissance lap of sorts. For example, our Cloud Management as a Service (CMaaS) has this philosophy built in – we can manage certain parts of the infrastructure that you select and leave others alone. Maybe you’d like to have your Exchange environment fully managed but not your SAP environment. We’ve built CMaaS with the flexible technology and arrangements to do just that.

Technique

 

Here’s how the parallels line up between auto racing and IT operations:

  • Auto racing: Safety first! Check your equipment before heading out and let the car warm up before increasing speed. IT operations: Make sure your IT shop can perform as a partner with the business.
  • Auto racing: Know where to go slow! You can’t take every turn at full throttle. Even if you can, it’s worth it to “throw away” some corners in preparation for straight sections. IT operations: Know where to allocate investment in IT – it’s all about producing results for the business.
  • Auto racing: First lap: reconnaissance (stay on the track). IT operations: Avoid trying to tackle very complex problems with brand new technology (e.g., did you virtualize Exchange on your very first P2V?).
  • Auto racing: Last lap: cool down (stay on the track). IT operations: An easy one – manage the lifecycle of your applications and middleware to avoid being caught by a surprise required upgrade.
  • Auto racing: Know where to go fast! You can be at full throttle without any brake or steering inputs (as in straight sections), so dig in! IT operations: Recognize established techniques and technologies and use them to maximum advantage.
  • Auto racing: Smooth = fast. Never stab the throttle or the brakes! Sliding all over the track with abrupt steering and throttle inputs is not the fastest way (but it IS fun and looks cool). IT operations: Build capabilities gradually and incrementally instead of looking to install a single technology to solve all problems today.
  • Auto racing: Know the capabilities of your car – brakes, tires, clutch, handling. Exceed the capabilities of your equipment and see what happens. IT operations: Take the time to know your people, processes, and technology – which things work well and which could be improved? This depends greatly on your business, but there are some best practices for running a modern IT shop.
  • Auto racing: Improve your time with each lap. IT operations: This is all about continuous improvement – many maneuvers in IT should be repeatable (like handling a trouble ticket), so do them better every time.
  • Auto racing: Take a deep breath, check your gauges, check your harnesses, check your helmet. IT operations: Monitoring is important, but it is not an endgame for most of us. Be aware of things that could go wrong, how you could mitigate risk, which workarounds you could implement, etc.
  • Auto racing: Carry momentum around the track. A high-horsepower car with a novice driver will always lose to a great driver in a sedan. IT operations: Technology doesn’t solve everything. You need proper technique and preparation.
  • Auto racing: Learn from your mistakes – they aren’t the end of the world. IT operations: With well-instrumented monitoring, performance blips or mistakes are opportunities to improve.

 

Capabilities

A word on capabilities. Capabilities are not something you simply install with software or infrastructure, just as an aspiring racecar driver can’t obtain the capability required to win a professional F1 race by taking a weekend class. You need assets (e.g., infrastructure, applications, data) and resources (e.g., dollars) to build capabilities. What exactly is a capability? In racing, it’s the ability to get around a track, any track, quickly and safely. In IT, it would be the ability to handle a helpdesk call and resolve the issue to completion, for a basic example. An advanced IT capability in a retail setting might be producing a report on how frequently shoppers from a particular zip code purchase a certain product. Or, perhaps, it’s an IT governance capability to understand the costs of providing a particular IT service. One thing I’ve seen in consulting with various shops is that organizations could do a better job of understanding their capabilities.

Now picture yourself in the driver’s seat (of your IT shop). Know your capabilities, but really think about your technique and continuously improving your “lap times.”

  1. Where are your straight sections – where you can just “floor it” and hang on? These might be well-established processes, projects, or tasks that pay obvious benefits. Can you take some time to create more straight sections?
  2. How much time do you have for preparation? How much time do you spend “studying the track” and “knowing your equipment?” Do you know your capabilities? Can you create time that you can use for preparation?
  3. Where are your slow sections? The processes that require careful attention to detail. This is probably budget planning time for many of us. Hiring time is probably another slow section.
  4. Do you understand your capabilities? Defining the IT services that you provide your customer is a great place to start. If you haven’t done this yet, you should — especially if you’re looking at cloud computing. GreenPages and our partners have some well-established techniques to help you do this successfully.

 

As always, feel free to reach out if you’d like to have a conversation just to toss around some ideas on this topic.

 

Now for the fun part, a video that a classmate of mine recorded of a hot lap around Road Atlanta. The video begins in turn 11 (under the bridge in this video).

  1. Turn 11 is important because it is a setup to the front straight section. BUT, it is pretty dangerous too as it leads downhill to turn 12 (the entrance to the straight). Position the car under the RED box on the bridge and give a small amount of right steering input. Build speed down the hill.
  2. Clip the apex of turn 11 and pull the car into turn 12. Be gentle with turn 12 – upset the car over the gators and you could easily lose control.
  3. Under the second bridge and onto the front straight section. Grab 5th gear if you can. Up to ~110mph. Position the car out to the extreme left side of the track for turn 1.
  4. Show no mercy to the brakes for turn 1! Engage ABS, downshift, then trail brake into the right hander, pull the car in to the apex of the turn in 4th gear, carrying 70-80mph.
  5. Uphill for turn 2. Aim the nose of the car at the telephone pole in the distance, as turn 2 is blind. Easy on the throttle!
  6. Collect the apex at turn 2 and downhill for turn 3. Use a dab of brakes to adjust speed as you turn slight right for turn 3.
  7. Turn slight left for turn 4, hug the inside
  8. Track out and downhill for “the esses” – roll on the throttle easily, you’ve got to keep momentum for the uphill section at turn 5.
  9. The esses are a fast part of the track but be careful not to upset the car
  10. Brake slightly uphill for turn 5. It is the entrance to a short straight section where you can gain some speed
  11. Stay in 4th gear for turn 6 and bring the car to the inside of the turn
  12. Track way out to the left for the crucial turn 7 – a slow part of the track. Brake hard and downshift to third gear. Get this one right as it is the entrance to the back straight section.
  13. Build speed on the straight – now is the time to floor it!
  14. Grab 5th gear midway down the straight for 110+ mph. Take a deep breath! Check your gauges and harnesses.
  15. No mercy for the brakes at turn 10a! Downshift to 4th gear, downshift to 3rd gear and trail brake as you turn left
  16. Slight right turn for turn 10b and head back uphill to the bridge – position the car under the RED box and take another lap!

 

Moving Email to the Cloud, Part 1

By Chris Chesley, Solutions Architect

Many of our clients are choosing not to manage Exchange day to day or upgrade it every 3-5 years.  They do this by having Microsoft host their mail in Office 365.  Is this right for your business?  How do you tie this into your existing infrastructure and still have access to email regardless of the status of your onsite services?

The different plans for Microsoft Office 365 can be confusing. Regardless of which plan you get, the Exchange Online choices boil down to two options.  Exchange Plan 1 offers 50GB mailboxes per user, ActiveSync, Outlook Web Access, Calendar, and all of the other features you are currently getting with an on-premises Exchange implementation.  You also get antivirus and antispam protection.  All of this for 4 dollars a month per user.

Exchange Plan 2 offers the exact same features as Plan 1, with the addition of unlimited archiving, legal hold capabilities, compliance support tools, and advanced voice support.  This plan is 8 dollars per user per month.

All of the other Office 365 plans that include Exchange are either Plan 1 or Plan 2.  For example, the E3 plan (Enterprise plan 3) includes Exchange Plan 2, SharePoint Plan 2, Lync Plan 2, and Office Professional Plus for 5 devices per user.  You can take any plan, break it down into its component parts, and fully understand what you’re getting.
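
As a quick back-of-the-envelope illustration, here is a small Python sketch that compares the per-user figures mentioned above across a user base. It uses only the $4 and $8 prices from this post; actual Microsoft pricing and plan lineups vary by agreement and change over time, so treat the numbers as an example of the math rather than a quote.

```python
# Rough monthly/annual cost comparison for Exchange Online Plan 1 vs. Plan 2,
# using the per-user list prices mentioned in this post (illustrative only).

PLAN_PRICES = {"Exchange Plan 1": 4.00, "Exchange Plan 2": 8.00}  # USD per user per month

def estimate_costs(user_count):
    for plan, per_user in PLAN_PRICES.items():
        monthly = per_user * user_count
        print(f"{plan}: ${monthly:,.2f}/month (${monthly * 12:,.2f}/year) for {user_count} users")

estimate_costs(250)
# Exchange Plan 1: $1,000.00/month ($12,000.00/year) for 250 users
# Exchange Plan 2: $2,000.00/month ($24,000.00/year) for 250 users
```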

If you are looking to move email to the cloud and are currently using Exchange, who better to host your Exchange than Microsoft?  Office 365 is an even better choice if you are using, or plan on using, SharePoint or Lync.  All of these technologies are available in the current plans or individually through Office 365.

I’ve helped many clients make this transition so if you have any questions or if there’s any confusion around the Office 365 plans feel free to reach out.

My next blog will be on the 3 different authentication methods in Office 365.

Journey to the Cloud: An Insider’s Perspective

By Ben Stephenson, Journey to the Cloud

Our Journey to the Cloud blog has been live for a little over two years now, and I’ve had the privilege of running and managing it from the start. I wanted to touch base about the site, share my unique perspective from managing it, and hear from our readers about what we can do to make it even better.

Our goal from the very beginning was to establish ourselves as thought leaders in the industry by providing high-quality content that was relevant and beneficial to IT decision makers. We wanted to make sure we let our authors keep their opinions and voice, while at the same time taking an unbiased, agnostic approach. The last thing we wanted to do was start blathering on about what a great company GreenPages is or bragging about the most recent award we won (it was being named to the Talkin’ Cloud 100, if you were wondering…).  Over the course of those two years, we’ve posted over 200 blogs and seen the number of page views and shares across various social media sites increase drastically. We’ve brought in some big-time guest bloggers such as ConnectEDU CTO Rick Blaisdell, CA’s Andi Mann, Gravitant’s Director of Advanced Analytics and Sr. Research Scientist, and more. We’ve incorporated a lot of video as well – in fact, for whatever strange reason, someone thought it was a good idea to let me host our Cloud Corner Series. We’ve covered topics including cloud, virtualization, end user computing, BYOD, network infrastructure, storage, disaster recovery, shadow IT, project management, and much more.

Have there been challenges along the way? Absolutely. Have I had to go after people and chase them down, scratching and clawing until I get a blog to post? Yes. Have tears been shed? Has blood been shed? We’ll keep that to ourselves as it’s generally frowned upon by HR. And, yes, I have had to give William Wallace-like speeches to attempt to rally the troops. While there have been some challenges, all in all there’s been a great amount of enthusiasm and support from our writers to produce a high quality publication. For me, being in the industry for two years now with no previous technological background, the amount I’ve learned is ridiculous. Before starting at GreenPages, I would have rather listened to a Ben Stein Lecture or Bill Lumbergh explaining TPS Reports than read an article on software defined networking and the impact it will have on businesses in the next 5-10 years. I can see why our customers get excited to work with our consultants because they truly love and believe in the technology they talk about. I completely buy into their enthusiasm and passion and it makes me genuinely interested in the topics we cover. I’m in my mid-twenties and have, sadly, found myself out drinking at a bar with my friends having a great time before somehow winding up in a heated debate over the pros and cons of moving to a hybrid cloud architecture.

 

So, in case you haven’t read all 200 of our posts (for whatever deranged reason), I’m going to list out my top ten from the past two years (in no particular order). Take a look and let me know what you think:

 

 

To close this out…I want to hear from you. What can we do to make Journey to the Cloud better? Are there any specific topics you’d like to hear more about? Any specific authors you’d like to hear more from? How about any features or functionality of the site you’d like added, changed or improved? What have you seen on other sites that you like that we don’t have? Leave a comment here or tweet us at @GreenPagesIT or @benstephenson1

Rapid Fire Summary of Carl Eschenbach’s General Session at VMworld 2013

By Chris Ward, CTO, LogicsOne

I wrote a blog on Monday summarizing the opening keynote at VMworld 2013. Checking in again quickly to summarize Tuesday’s General Session. VMware’s COO Carl Eschenbach took the stage and informed the audience that there are 22,500 people in attendance, which is a new record for VMware. This makes it the single largest IT infrastructure event of the year. 33 of these attendees have been to all 10 VMworlds, and Carl is one of them.

Carl started the session by providing a recap of Monday’s announcements around vSphere/vCloud Suite 5.5, NSX, vSAN, vCHS, and Cloud Foundry. The overall mantra of the session revolved around IT as a Service. The following points were key:

  • Virtualization extends to ALL of IT
  • IT management gives way to automation
  • Compatible hybrid cloud will be ubiquitous
  • Foundation is SDDC

After this came a plethora of product demos. If you would like to check out the demos, you can watch the full presentation here: http://www.vmworld.com/community/conference/us/learn/generalsessions

vCAC Demo

  • Started by showing the service catalog and the options to deploy an app to a private or public cloud. It also showed the cost of each option
    • I’m assuming this is showing integration between vCAC & ITBM, although that was not directly mentioned
    • Next they displayed the database options as part of the app – assuming this is vFabric Data Director (DB as a Service)
    • Showed the auto-scale option
    • Showed the health of the application after deployment…this appears to be integration with vCOPS (again, not mentioned)
    • The demo showed how the product provided self-service, transparent pricing, governance, and automation

NSX Demo

  • Started with a conversation around why networking has become the ball and chain of the VM. After that, Carl discussed the features and functions that NSX can provide. Some key ones were:
    • Route, switch, load balance, VPN, firewall, etc.
  • Displayed the vSphere web client & looked at the automated actions that happened via vCAC and NSX  during the app provisioning
  • What was needed to deploy this demo you may ask? L2 switch, L3 router, firewall, & load balancer. All of this was automated and deployed with no human intervention
  • Carl then went through the difference in physical provisioning vs. logical provisioning with NSX & abstracting the network off the physical devices.
  • WestJet has deployed NSX, and we got to hear a little about their experiences
  • There was also a demo to show you how you can take an existing VMware infrastructure and convert/migrate to an NSX virtual network. In addition, it showed how vMotion can make the network switch with zero downtime

The conversation then turned to storage. They covered the following:

  • Requirements of SLAs, policies, management, etc. for mission critical apps in the storage realm
  • vSAN discussion and demo
  • Storage policy can be attached at the VM layer so it is mobile with the VM
  • Showcased adding another host to the cluster and the local storage is auto-added to the vSAN instance
  • Resiliency – can choose how many copies of the data are required

IT Operations:

  • Traditional management silos have to change
  • Workloads are going to scale to massive numbers and be spread across numerous environments (public and private)
  • Conventional approach is scripting and rules which tend to be rigid and complex –> Answer is policy based automation via vCAC
  • Showed an example in vCOPS of a performance issue and drilled into the problem…then showed performance improving automatically due to an automated, proactive response to detected issues (autoscaling in this case)
  • Discussing hybrid and seamless movement of workloads to/from private/public cloud
  • Displayed vCHS plugin to the vSphere web client
  • Showed template synchronization between private on prem vSphere environment up to vCHS
  • Provisioned an app from vCAC to public cloud (vCHS)  (it shows up inside of vSphere Web client)

 

Let me know if there are questions on any of these demos.

Rapid Fire Summary of Opening Keynote at VMworld 2013

By Chris Ward, CTO, LogicsOne

For those of you who aren’t out in San Francisco at the 10th annual VMworld event, here is a quick overview of what was covered in the opening keynote delivered by CEO Pat Gelsinger:

  • Social, Mobile, Cloud & Big Data are the 4 largest forces shaping IT today
  • Transitioned from Mainframe –> Client Server –> Mobile Cloud
  • Pat set the stage for the theme of this year’s event – networking – basically a lead-in to a ton of Nicira/NSX information. I think VMware sees networking as the core of the software-defined datacenter, and they are in a very fast race to beat out the competition in that space
  • Pat also mentioned that his passion is to get every x86 application/workload 100% virtualized. He drew parallels to Bill Gates saying his dream was a PC on every desk in every home that runs Microsoft software.
  • Next came announcements around vSphere 5.5 & vCloud Suite 5.5…here are some of the highlights:
    • 2x CPU and Memory limits and 32x storage capacity per volume to support mission critical and big applications
    • Application Aware high availability
    • Big Data Extensions – multi-tenant Hadoop capability via Serengeti
    • vSAN officially announced as public beta and will be GA by 1st half of 2014
    • vVOL is now in tech preview
    • vSphere Flash Read Cache included in vSphere 5.5

Next, we heard from Martin Casado. Martin is the CTO – Networking at VMware and came over from the Nicira acquisition and was speaking about VMware NSX. NSX is a combination of vCloud Network and Security (vCNS) and Nicira. Essentially, NSX is a network hypervisor that abstracts the underlying networking hardware just like ESX abstracts underlying server hardware.

Other topics to note:

  • IDC names VMware #1 in Cloud Management
  • VMware hypervisor fully supported as part of OpenStack
  • Growing focus on hybrid cloud. VMware will have 4 datacenters soon (Las Vegas, Santa Clara, Sterling, & Dallas). Also announcing partnerships with Savvis in NYC & Chicago to provide vCHS services out of Savvis datacenters.
  • End User Computing
    • Desktop as a Service on vCHS is being announced (I have an EUC Summit Dinner later tonight, so I will be able to go into more detail after that).

So, all in all, a good start to the event. Network virtualization/NSX is clearly the focus of this conference, and vCHS is a not-too-distant 2nd. Something that was omitted from the keynote was the rewritten SSO engine for vCenter 5.5. The SSO piece was weak in 5.1 and has been vastly improved in 5.5…this could be addressed tomorrow, as most of the tech staff is in Tuesday’s general session.

If you’re at the event…I’ll actually be speaking on a panel tomorrow at 2:30 about balancing agility with service standardization. I’ll be joining Khalid Hakim and Kurt Milne of VMware, along with Dave Bartoletti of Forrester Research and Ian Clayton of Service Management 101. I will also be co-presenting on Wednesday with my colleague John Dixon from 2:30-3:30 in Moscone West Room 2011 about deploying a private cloud service catalog. Hopefully you can swing by.

More to come soon!