Category archive: hybrid cloud

Hybrid cloud issues are cultural first, technical second – Ovum

CIOs are still struggling with their hybrid cloud strategies

This week has seen a number of hybrid cloud deals that suggest the industry is making significant progress in delivering the platforms, services and tools necessary to make hybrid cloud practical. But they also serve as a reminder that IT will forever be multimodal, which creates challenges that begin with people, not technology, explains Ovum’s principal analyst of infrastructure solutions Roy Illsley.

There has been no shortage of hybrid cloud deals this week.

Rackspace and Microsoft announced a deal that would see the hosting and cloud provider expand its Fanatical Support to Microsoft Azure-based hybrid cloud platforms.

Google announced both that it would support Windows technologies on its cloud platform and that it would formally sponsor the OpenStack Foundation – a move aimed at supporting container portability across multiple cloud platforms.

HP announced it would expand its cloud partner programme to include CenturyLink, which runs much of its cloud platform on HP technology, in a move aimed at bolstering HP’s hybrid cloud business and CenturyLink’s customer reach.

But one of the more interesting hybrid cloud stories this week came from the enterprise side of the industry. Copper and gold producer Freeport-McMoRan announced it is embarking on a massive overhaul of its IT systems. In a bid to become more agile, the firm said it would deploy its entire application estate on a combination of private and public cloud platforms – though, somewhat ironically, the company said the entire project would wrap up in five years (which, being pragmatic about IT overhauls, could mean far later).

“The biggest challenge with hybrid cloud isn’t the technology per se – okay, so you need to be able to have one version of the truth, one place where you can manage most of the platforms and applications, one place where to the best of your abilities you can orchestrate resources, and so forth,” Illsley explains.

Of course you need all of those things, he says. There will be some systems that won’t fit into that technology model and will likely be left out (mainframes, for example). But there are tools out there to fit current hybrid use cases.

“When most organisations ‘do’ hybrid cloud, they tend to choose where their workloads will sit depending on their performance needs, scaling needs, cost and application architecture – and then the workloads sit there, with very little live migration of VMs or containers. Managing them while they sit there isn’t the major pain point. It’s about the business processes; it’s the organisational and cultural shifts in the IT department that are required in order to manage IT in a multimodal world.”
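That placement decision is easy to picture in code. Below is a minimal, hypothetical sketch of the one-off venue choice Illsley describes – score each venue against a workload’s performance, scaling and cost needs, pick the best fit, and leave the workload there. The venues, weights and scores are all invented for illustration.

```python
# Hypothetical sketch of the one-off workload placement described above.
# Venues, weights and scores are invented for illustration.

WORKLOAD = {"performance": 0.5, "scaling": 0.2, "cost": 0.3}  # what this app values

VENUES = {
    "private_cloud": {"performance": 9, "scaling": 4, "cost": 5},
    "public_cloud":  {"performance": 6, "scaling": 9, "cost": 7},
}

def place(workload, venues):
    """Pick the venue with the best weighted score; the workload then stays put."""
    def score(attrs):
        return sum(weight * attrs[k] for k, weight in workload.items())
    return max(venues, key=lambda name: score(venues[name]))

print(place(WORKLOAD, VENUES))  # -> public_cloud (6.9 vs 6.8 for this weighting)
```

The interesting part, per Illsley, is everything the sketch leaves out: once placed, the hard work is organisational, not computational.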

“What’s happening in hybrid cloud isn’t terribly different from what’s happening with DevOps. You have developers and you have operations, and sandwiching them together in one unit doesn’t change the fact that they look at the world – and the day-to-day issues they need to manage or solve – in their own developer or operations-centric ways. In effect they’re still siloed.”

The way IT is financed can also create headaches for CIOs intent on delivering a hybrid cloud strategy. Typically IT is funded in an ‘everyone pitches into the pot’ sort of way, but one of the things that led to the rise of cloud in the first place was lines of business allocating their own budgets and going out to procure their own services.

“This can cause both a systems challenge – shadow IT and the security, visibility and management issues that come with that – and a cultural challenge, one where LOB heads see little need to fund a central organisation that is deemed too slow or inflexible to respond to customer needs. So as a result, the central pot doesn’t grow.”

While vendors continue to ease hybrid cloud headaches on the technology front with resource and financial (i.e. chargeback) management tools, app stores or catalogues, and standardised platforms that bridge the on-prem and public cloud divide, it’s less likely the cultural challenges associated with hybrid cloud will find any straightforward solutions in the short term.

“It will be like this for the next ten or fifteen years at least. And the way CIOs work with the rest of the business as well as the IT department will define how successful that hybrid strategy will be, and if you don’t do this well then whatever technologies you put in place will be totally redundant,” Illsley says.
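Of the technology-side fixes mentioned above, chargeback is the simplest to illustrate. Here is a minimal sketch, with invented rates and usage records, of metering line-of-business consumption and billing it back – the mechanism hybrid management tools package up so that the central pot keeps growing.

```python
# Minimal chargeback sketch: meter each line of business's usage, bill it back.
# Rates and usage figures are invented for illustration.

RATES = {"vm_hours": 0.05, "gb_storage": 0.02}  # price per unit consumed

usage = [
    {"lob": "marketing", "vm_hours": 1200, "gb_storage": 500},
    {"lob": "finance",   "vm_hours": 300,  "gb_storage": 2000},
]

def chargeback(records, rates):
    """Return a per-LOB bill so central IT's costs are visibly recovered."""
    bills = {}
    for rec in records:
        bills[rec["lob"]] = round(sum(rec[m] * r for m, r in rates.items()), 2)
    return bills

print(chargeback(usage, RATES))  # {'marketing': 70.0, 'finance': 55.0}
```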

Cloud News Daily 2015-07-14 05:38:21

Rackspace Hosting has paired up with Microsoft to manage and offer technical support for Microsoft’s public cloud computing platform, Azure. Azure support and managed services are currently available, and expansion to overseas customers will begin in 2016.

Rackspace has struggled to compete with larger companies and their cloud platforms, such as Amazon Web Services, and this agreement with Microsoft marks its first major deal to support public cloud services other than its own.

Rackspace Chief Technology Officer, John Engates, has said, “Stay tuned. As part of our managed cloud strategy, a tenet of that is we want to support industry-leading technologies. Our strategy really gives us the opportunity to apply fanatical support to any leading industry platform in the future. So stay tuned in terms of announcements.”


Rackspace hopes to improve profit margins and reduce capital spending by offering managed services and technical support for public clouds, and it is starting with Microsoft’s Azure. Rackspace’s main strength has been providing fanatical service, training and technical support to smaller businesses.

Rackspace technical support will be available directly to clients through Microsoft. Rackspace may also resell Microsoft’s IaaS services to its customers. In the fourth quarter of 2014, IaaS services accounted for thirty-one percent of Rackspace’s total revenue.

Engates also added that Rackspace will help customers build apps that run in hybrid, private-public cloud environments. Many companies are becoming interested in the public-private cloud model, with important business apps running on private servers while accessing public IaaS providers on an as-needed basis.


Living in a hybrid world: From public to private cloud and back again

Orlando Bayter, chief exec and founder of Ormuco

The view often propagated by IT vendors is that public cloud is already capable of delivering a seamless extension between on-premise private cloud platforms and public, shared infrastructure. But Orlando Bayter, chief executive and founder of Ormuco, says the industry is only at the outset of delivering a deeply interwoven fabric of private and public cloud services.

Demand for that kind of seamlessness hasn’t been around for very long, admittedly. It’s no great secret that in the early days of cloud, demand for public cloud services was spurred largely by the slow pace at which traditional IT organisations often move. As a result, every time a developer wanted to build an application they would simply swipe the credit card and go, billing back to IT at some later point. So the first big use case for hybrid cloud emerged when developers needed to bring their apps back in-house, where they would live and probably die.

But as the security practices of cloud service providers continue to improve, along with enterprise confidence in cloud more broadly, cloud bursting – the ability to use a mix of public and private cloud resources to fit the utilisation needs of an app – has become more widely talked about. It’s usually cost-prohibitive and far too time-consuming to scale private cloud resources quickly enough to meet the changing demands of today’s increasingly web-based apps, so cloud bursting is the natural next step in the hybrid cloud world.
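As a rough illustration of that bursting decision, the sketch below overflows instances into a public cloud once private utilisation passes a threshold, and drains them back when demand falls. The threshold and the provision/release steps are hypothetical stand-ins, not any particular vendor’s API.

```python
# Hypothetical cloud-bursting sketch: overflow to public capacity when the
# private cloud nears saturation, drain back when demand falls.

PRIVATE_CAPACITY = 100   # instances the private cloud can host
BURST_THRESHOLD = 0.9    # burst once private utilisation passes 90%

public_instances = 0

def rebalance(demand):
    """Split demand between private capacity and burst capacity in public cloud."""
    global public_instances
    private_target = min(demand, int(PRIVATE_CAPACITY * BURST_THRESHOLD))
    public_target = demand - private_target
    if public_target > public_instances:
        print(f"bursting: provisioning {public_target - public_instances} public instances")
    elif public_target < public_instances:
        print(f"draining: releasing {public_instances - public_target} public instances")
    public_instances = public_target
    return private_target, public_target

rebalance(80)    # fits privately: (80, 0)
rebalance(150)   # bursts: (90, 60)
rebalance(95)    # drains back: (90, 5)
```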

Orlando will be speaking at the Cloud World Forum in London June 24-25. Click here to register.

There are, however, still precious few platforms that offer this kind of capability in a fast and dynamic way. Open source projects like OpenStack, and more proprietary variants like VMware’s vCloud or Microsoft’s Azure Stack (along with all the tooling around these platforms and architectures), are at the end of the day being developed with a view towards supporting the deployment and management of workloads that can exist in as many places as possible, whether on-premise or in a cloud vendor’s datacentre.

“Let’s say as a developer you want to take an application you’ve developed in a private cloud in Germany and move it onto a public cloud platform in the US. Even for the more monolithic migration jobs you’re still going to have to do all sorts of re-coding, re-mapping and security upgrades, to make the move,” Bayter says.

“Then when you actually go live, and have apps running in both the private and public cloud, the harsh reality is most enterprises have multiple management and orchestration tools – usually one for the public cloud and one for the private; it’s redundant, and inefficient.”

Ormuco is one company trying to solve these challenges. It has built a platform based on HP Helion OpenStack and offers both private and public instances, which can be managed through a single pane of glass (it has built its own layer in between to abstract the resources underneath).

It has multiple datacentres in the US and Europe from which it offers both private and public instances, as well as the ability to burst into its cloud platform using on-premise OpenStack-based clouds. The company is also a member of the HP Helion Network, which Bayter says gives it a growing channel and the ability to offer more granular data protection tools to customers.

“The OpenStack community has been trying to bake some of these capabilities into the core open source code, but the reality is it only achieved a sliver of these capabilities by May this year,” he said, alluding to the recent OpenStack Summit in Vancouver where new capabilities around federated cloud identity were announced and demoed.

“The other issue is simplicity. A year and a half ago, everyone was talking about OpenStack but nobody was buying it. Now service providers are buying but enterprises are not. Specifically with enterprises, the belief is that OpenStack will be easier and easier as time goes on, but I don’t think that’s necessarily going to be the case,” he explains.

“The core features may become a bit easier, but the whole solution may not; there are so many things going into it that it’s likely to get clunkier, more complex and more difficult to manage. It could become prohibitively complex.”

That’s not to say federated identity or cloud federation is a lost cause – on the contrary, Bayter says it’s the next horizon for cloud. The company is currently working on a set of technologies that would enable any organisation whose infrastructure lies significantly underutilised for long periods to rent that infrastructure out in a federated model.

Ormuco would verify and certify the infrastructure, and allocate a performance rating that would change dynamically along with the demands being placed on that infrastructure – like an AirBnB for OpenStack cloud users. Customers renting cloud resources in this market could also choose where their data is hosted.
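Ormuco hasn’t published how the rating would be calculated, but a minimal sketch of the idea might look like this: the certified baseline rating of a rented site is discounted as its measured load rises, steering renters toward kit with genuine headroom. All numbers are invented.

```python
# Hypothetical dynamic performance rating for rented infrastructure: the
# certified baseline is discounted as measured utilisation climbs, so the
# marketplace steers renters toward sites with real headroom.

def dynamic_rating(baseline, utilisation):
    """Scale a 0-10 certified rating down as utilisation (0.0-1.0) climbs."""
    return round(baseline * (1.0 - utilisation), 1)

sites = {
    "university_lab": {"baseline": 8.0, "utilisation": 0.15},  # mostly idle
    "science_lab":    {"baseline": 9.0, "utilisation": 0.85},  # mid-experiment
}

for name, s in sites.items():
    print(name, dynamic_rating(s["baseline"], s["utilisation"]))
# university_lab 6.8  <- idle kit rates highly and attracts renters
# science_lab 1.4     <- busy kit is effectively withdrawn from the market
```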

“Imagine a university or a science lab that scales and uses its infrastructure at very particular times; the rest of the time that infrastructure is fairly underused. What if they could make money from that?”

There are still many unanswered questions – whether the returns for renting organisations would justify the extra costs (energy, for instance) associated with running that infrastructure, where the burden of support lies (enterprises need solid SLAs for production workloads), and how that influences what kinds of workloads end up on rented kit. But the idea is interesting, and definitely consistent with the line of thinking being promoted by the OpenStack community among others in open source cloud.

“Imagine the power, the size of that cloud,” says Bayter. “That’s the cloud that will win out.”

This interview was produced in partnership with Ormuco

Is force of habit defining your hybrid cloud destiny?

Experience breeds habit, which isn’t necessarily the best thing strategically

I’ve been playing somewhat of a game over recent months. It’s a fun game for all the family and might be called “Guess my job”. It’s simple to play. All you need to do is ask someone the question: “What is a hybrid cloud?” Then, based upon their answer, you make your choice. Having played this for a while, I’m now pretty good at predicting someone’s viewpoint from their job role, or vice versa.

And the point of all this? Simply that people’s viewpoints are constrained by their experiences and what keeps them busy day-to-day, so they often miss an opportunity to do something different. For those working day-to-day in a traditional IT department, keeping systems up and running, hybrid cloud is all about integrating an existing on-site system with an off-site cloud. This is a nice, easy one to grasp in principle, but the reality is somewhat harder to realize.

The idea of connecting an on-site System of Record to a cloud-based System of Engagement, pulling data from both to generate new insights, is conceptually well understood – for example, combining historical customer transaction information with real-time geospatial, social and mobile data, then applying analytics to uncover new sales potential. That said, organisations making production use of such arrangements are few and far between. For many, the challenge of granting access to existing enterprise systems is simply too great: security concerns, the speed of change required and the difficulty of extracting the right data in a form that is immediately usable by the analytical tools may be a hurdle too high. Indeed, many clients I’ve worked with have stated that they’re simply not going to do this. They understand the benefits, but the pain they see themselves having to go through to get them makes the whole thing unattractive to pursue.

So, if this story aligns with your view of hybrid cloud and you’ve already put it in the “too hard” box then what is your way forward?

For most organizations, no single cloud provider is going to provide all of the services they might want to consume. Implicitly then, if they need to bring data from these disparate cloud services together, there is a hybrid cloud use case: linking cloud to cloud. Even in the on-site to off-site case there are real differences between a static relationship and dynamically bursting in and out of off-site capacity. Many organizations are looking to cloud as a more effective and agile platform for backup and archiving, or for disaster recovery. All of these are hybrid cloud use cases too, but if you’ve already written off ‘hybrid’ then you’re likely missing very real opportunities to do what is right for the business.

Regardless of the hybrid cloud use case, you need to keep in mind three key principles:

  1. Portability – the ability to run and consume services and data from wherever it is most appropriate to do so, be that cloud or non-cloud, on-site or off-site.
  2. Security, visibility and control – to be assured that end-to-end, regardless of where the ‘end’ is, you are running services in such a way that they are appropriately secure, well managed and their characteristics are well understood.
  3. Developer productivity – developers should be focused on solving business problems and not be constrained by needing to worry about how or when supporting infrastructure platforms are being deployed.  They should be able to consume and integrate services from many different sources to solve problems rather than having to create everything they need from scratch.

Business applications need to be portable such that they can both run and consume other services from wherever is most appropriate. To do that, your developers need to be unconstrained by the underlying platform(s), so they can develop for any cloud or on-site IT platform. All this needs to be done in a way that allows enterprise controls, visibility and security to be extended to the cloud platforms that are being used.

If you come from that traditional IT department background, you’ll be familiar with the processes that are in place to ensure that systems are well managed, change is controlled and service levels are maintained. These processes may not be compatible with the ways that clouds open up new opportunities. This leads to the need to look at creating a “two-speed” IT organisation: providing the rigor needed for the Systems of Record whilst enabling rapid change and delivery in the Systems of Engagement space.

Cloud generates innovation and hence diversity. Economics, regulation and open communities drive standardization, and it is this – in particular open standards – that facilitates integration in all of these hybrid cases.

So, ask yourself: with more than 65 per cent of enterprise IT organizations making commitments on hybrid cloud technologies before 2016, are you ensuring that your definitions – and hence your technology choices – reflect future opportunities rather than past prejudices?


Written by John Easton, IBM distinguished engineer and leading cloud advisor for Europe

Cisco Systems Inc. Announces Plans for Intercloud

A year ago, Cisco Systems Inc. announced its plans to invest one billion dollars in a cloud computing business to compete with the six-billion-dollar Amazon Web Services. This plan was christened with the name Intercloud.

At the Cisco Live annual conference, the company revealed its plans to take Intercloud a step further. Cisco will not offer this cloud itself from its own data centers, but will instead unify smaller cloud service providers onto a large platform of mutually compatible products. Intercloud will prevent the smaller providers from losing customers to Amazon while allowing Cisco to continue to sell those providers hardware as they grow and develop. So, the aim of Intercloud is to enable these smaller providers and Cisco to unite and compete with Amazon.


In addition, Cisco is opening the Intercloud marketplace, an app store that gives customers the tools, software and technology they need to quickly and efficiently use their cloud. Cisco is partnering with tech companies like Hortonworks and Docker for this marketplace, which will launch with 35 apps.

Cisco also announced the development of the Intercloud Fabric, which will allow customers to manage and control their data centers and Intercloud at the same time. The Intercloud Fabric makes it easier for customers to manage what can be a very tough technology. Cloud service providers like Datalink, Peak 10 and Sungard Availability Services have already backed Cisco’s plan to develop the Intercloud Fabric.

Cisco insists that while Amazon Web Services may have a head start in the cloud computing market, cloud computing still has much room to grow, making it anyone’s game.


Ormuco taps HP Helion for mid-market hybrid cloud offering

Ormuco is partnering with HP on hybrid cloud

Ormuco is partnering with HP to launch a Helion OpenStack-based hybrid cloud solution that the company said is designed specifically with workload portability in mind.

Hybrid cloud is still high on the agenda for many CIOs, but challenges abound: security and compliance management, service automation and orchestration and, of course, workload portability. The company is relying on HP’s implementation of OpenStack to solve some of those challenges, and said its ConnectedCloud offering will help enterprises move their workloads across OpenStack-based private and public clouds.

“Ormuco is entering the cloud services market since there is a vital need for a hybrid cloud solution with streamlined functionality for enterprise customers,” said Ormuco chief executive Orlando Bayter. “HP Helion OpenStack and Ormuco’s new data centres enable us to create environments that focus on service delivery regardless of the underlying infrastructure.”

Ormuco has datacentres in Dallas, Texas; Sunnyvale, California and Montreal, Quebec, and said it has others planned for New York and Seattle as well as an expansion into Europe with datacentres in Frankfurt and London.

The company is a member of HP’s Helion Partner Network, a global federation of HP-certified OpenStack clouds that private cloud users can burst into; for the time being, this is primarily how the company delivers scale.

“Ormuco requires extensive geographic reach and the ability to meet customers’ in-country or cross-border cloud requirements,” said Steve Dietch, vice president, HP Helion, HP. “With HP Helion OpenStack and the HP Helion Network, Ormuco’s Connected Cloud customers will have access to hybrid cloud services from a global, open ecosystem of service providers.”

EMC World 2015: Event Recap

After EMC World 2015, I’m languishing in airports today in post-conference burnout – an ideal time to deliver a report on the news, announcements and my prognostications on what this means to our business.

The big announcements were delivered in General Sessions on Monday (EMC Information Infrastructure & VCE) and on Tuesday (Federation: VMware & Pivotal). The Federation announcements are more developer and futures oriented, although important strategically, so I’ll pass on that for now.

EMC and VCE have updated their converged and hyperconverged products pretty dramatically. Yes, VSPEX Blue is hyperconverged, however unfortunate it is that the name links an EVO:RAIL appliance to a reference-architecture brand.

The products can be aligned as:

  1. Block
  2. Rack
  3. Appliances


The VCE Vblock product line adheres to its core value proposition closely.

  1. Time from order to completely deployed on the data center floor in 45 days. (GreenPages will provide the Deploy & Implementation services. We have three D&I engineers on staff now.)
  2. Cross-component unified upgrade through a Release Candidate Matrix – every single bit of hardware is tested in major and minor upgrades to ensure compatibility: storage, switch, blade, add-ons (RecoverPoint, Avamar, VPLEX).
  3. Unified support – one call to VCE, not to all the vendors in the build

However, VCE is adding options and variety to make the product less monolithic.

  1. VXblock – this is the XtremIO version, intended for large VDI or mission-critical transactional deployments (trading, insurance, national healthcare claims processing). The Beast is a Vblock of eight 40 TB XtremIO X-Brick nodes: 320 TB before dedupe and compression, or nearly 2 PB with realistic data reduction (a quick sanity check of that arithmetic follows this list). Yes, that is two petabytes of all-flash array. Remote replication is now fully supported with RecoverPoint.
  2. VXRack – this is a Vblock without an array, but it isn’t VSAN either. It is… ScaleIO, a software storage solution that pools server storage into a shared pool. The minimum configuration is 100 compute nodes, which can be dense performance (4-node form factor in a 2U chassis) or capacity. The nodes can be bare metal or run any hypervisor. This can scale to 328 petabytes. Yes, petabytes. This is web-scale, but they call it “Rack Scale” computing (first generation). More on that later…
  3. Vscale – networking! This is leaf-and-spine networking in a rack to tie a VXRack or Vblock deployment together, at scale. “One Ring to Rule Them All”. This is big, literally. Imagine ordering a petabyte installation of VXblock, VXRack and Vscale, and rolling it onto the floor in less than two months.
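As flagged in the VXblock item, here is a quick sanity check of the quoted capacity figures. The roughly 6:1 data-reduction ratio is my assumption, chosen because it reproduces “nearly 2 PB”; EMC doesn’t state the ratio.

```python
# Worked arithmetic for the VXblock figures above. The 6:1 reduction ratio is
# an assumption that reproduces "nearly 2 PB"; it is not an EMC specification.

nodes, tb_per_xbrick = 8, 40
raw_tb = nodes * tb_per_xbrick              # 320 TB raw, as quoted
assumed_reduction = 6                       # dedupe + compression, assumed
effective_pb = raw_tb * assumed_reduction / 1000
print(f"{raw_tb} TB raw -> {effective_pb} PB effective")  # 320 TB -> 1.92 PB
```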

So, that is Block and Rack. What about Appliance?

Enter VSPEX Blue, the EMC implementation of EVO:RAIL. This has definite value in…

  • Pricing
  • Unified management & support
  • The “app store” with
    • integrated backup (VDPA)
    • replication (vRPA)
    • Cloud Array integration (TwinStrata lives!), a virtual iSCSI controller that will present cloud storage to the system as a backup target or a capacity tier.

This post from Mike Colson provides a good explanation.

Future apps will include virus scanning, links to Public IaaS and others.

I set one up in the lab in 15 minutes, as advertised, although I had to wait for the configuration wizard to churn away after I initialized it and input all the networking. Professional Services will be required, as EMC is requiring PS to implement. Our team is and will be prepared to deploy this. We can discuss how this compares to other Hyperconverged appliances. Contact us for more information.

There are other announcements, some in sheer scale and some in desirable new features.

Data Domain Beast: DD9500, 58.7 TB/hr and 1.7 PB of capacity. This is rated at 1.5x the performance and 4x the scalability of the nearest competitor.

VPLEX News: The VPLEX Witness can now be deployed in the public Cloud (naturally EMC recommends the EMC Hybrid Cloud or vCloud Air). The Witness has to be outside the fault domains of any protected site, so where better than the Cloud? It is a very lightweight VM.

CloudArray (TwinStrata’s Cloud Array Controller) is integrated with VPLEX. You can have a distributed volume spanning on premise and cloud storage. I’m still trying to grasp the significance of this. The local cache for the CloudArray controller can be very fast, so this isn’t limited to low latency applications. The things you could do…

VPLEX is now available in a Virtual Edition (VPLEX/VE). This will obviously come with some caveats and restrictions, but this also is a fantastic new option for smaller organizations looking for the high availability that VPLEX provides, as well as data mobility and federation of workloads across metro distances.

VVOL: Chuck Hollis (@chuckhollis) led an entertaining and informative ‘Birds of a Feather’ session for VVOLs. Takeaway – this is NOT commonly deployed yet. Only a handful of people have even set it up, and mostly for test. This was in a room with at least 150 people, so high interest, but low deployment. Everyone sees the potential and is looking forward to real world policy based deployments on industry standard storage. This is an emerging technology that will be watched closely.

VNX/VNXe: I didn’t see or hear many striking features or upgrades in this product line, but an all flash VNXe was trumpeted. I’ll be looking at the performance and design specifications of this more closely to see how it might fit targeted use cases or general purpose storage for SMB and commercial level customers. There is talk around the virtualization of the VNX array, as well as Isilon, so pretty soon nearly every controller or device in the EMC portfolio will be available as a virtual appliance. This leads me to…

ViPR Controller and ViPR SRM: Software Defined Storage

ViPR Controller is definitely a real product with real usefulness. This is the automation and provisioning tool for a wide variety of infrastructure elements, allowing for creation of virtual arrays with policy based provisioning, leveraging every data service imaginable: dedupe, replication, snapshots, file services, block services and so on.

ViPR SRM is the capacity reporting and monitoring tool that provides the management of capacity that is needed in an SDS environment. This is a much improved product with a very nice GUI and more intuitive approach to counters and metrics.

I’d recommend a Storage Transformation Workshop for people interested in exploring how SDS can change the way (and cost) of how you manage your information infrastructure.

More on EVO:RAIL/VSPEX Blue

I met with Mike McDonough, the mastermind behind EVO:RAIL. He is indeed a mastermind. The story of the rise of EVO:RAIL as a separate business unit is interesting enough (300 business cases submitted, 3 approved, and he won’t say what the other mystery products are), but the implementation, strategy and vision are what matter to us. The big factor here was boiling down the support cases to find the 370 most common reasons for support calls, all around configuration, management and hardware. The first version of EVO:RAIL addressed 240 of those issues. Think of this as a safety rail around a vSphere appliance that prevents these common and easily avoidable issues without restricting flexibility too much. The next version will most likely incorporate NSX; security and inspection are the emphases for the next iteration.

Partners and distributors were chosen carefully. GreenPages is one of only 9 national partners chosen for this, based on our long history as a strategic partner and our thought leadership! The tightly controlled hardware compatibility list is a strength, as future regression tests for software and other upgrades will keep the permutations to a minimum. (By the way, the EMC server platform is Intel for VxRack and VSPEX Blue, and I think for all of their compute modules across their products.) The competitive implication is that appliance vendors buying white-box hardware on commodity contracts, which allow flexibility in drives, memory and CPU, will face an exponentially more difficult task maintaining the growing permutations of hardware versions over time.

Final Blue Sky note:

Rack Scale is an Intel initiative that promises an interesting future of increased hardware awareness for hypervisors, but it is a very forward-looking project. Read Scott Lowe’s thoughts on this.


As always, contact us for more details and in-depth conversations about how we can help you build the data center of the future, today.


By Randy Weis, Practice Manager, Information Infrastructure

vCloud Air: Helping a customer move to a hybrid cloud environment

As you most likely know, vCloud Air is VMware’s offering in the hybrid/public cloud space. In my opinion, it’s a great offering. It allows you to take existing virtual machines and migrate them up to the cloud so that you can manage everything with your existing vCenter. It’s also a very good option for disaster recovery.

I worked on a project recently where the client wanted to know what they needed to do with their infrastructure. They were looking for solid options to build a foundation for their business, whether it was on-prem, a cloud-based offering, or a hybrid approach.

In this project, we ended up taking their VMs and physical servers and putting a brand-new host on site running VMware, with a domain controller and a file server on it. We put the rest of the production servers and the test/dev environment in vCloud Air. Additionally, this helped them address their disaster recovery needs: it gave them a place to take their systems without a lot of upfront money, and a place to recover their VMs in the event of a disaster.


http://www.youtube.com/watch?v=OP3qO-SI6SY


Are you interested in learning more about vCloud Air? Reach out!


By Chris Chesley, Solutions Architect

CIO Focus Interview: Kevin Hall, GreenPages-LogicsOne

For this segment of our CIO Focus Interview Series, I sat down with our CIO and Managing Director, Kevin Hall. Kevin has a unique perspective, as he serves as GreenPages’ CIO as well as the Managing Director of our customer-facing Professional Services and Managed Services divisions.


Ben: Can you give me some background on your IT experience?

Kevin: I’ve been a CIO for 17+ years, holding roles in both consulting organizations and internal IT. The position I have at GreenPages is very interesting because I am both a Managing Partner of our services business and the CIO of our organization. This is the first time I have held both jobs at the same time in one company.

Ben: What are your primary responsibilities for each part of your role then?

Kevin: As CIO, I’m responsible for all aspects of information services. This includes traditional data center functions, engineering functions, operations functions, and app dev functions. As Managing Director I am responsible for our Professional Services and Managed Services divisions. These divisions help our customers with the same sorts of projects that I am undertaking as CIO.

Ben: Does it help you being in this unique position? Does it allow you to get a better understanding of what GreenPages’ customers are looking for since you experience the same challenges as CIO?

Kevin: Yes, I think it is definitely an advantage. The CIO role is crucial in this era. It has certainly been a challenging job for a long time, and that has been magnified in recent years by the fundamental shift and explosion in the capabilities available to modern-day CIOs. Because I am in this rather crazy position, it does help me understand the needs of our customers better. If I were just on the consulting side of the house, I’m not sure I could fully understand or appreciate how difficult some of the choices CIOs face are. I can relate to that feeling of being blocked or trapped because I’ve experienced it. The good news is our CTO and Architects provide real-world lessons right here at home for both me and our IT Director.

Interestingly enough, on the services side of my role, in both the Professional Services and Managed Services division, we are entering our 3rd year of effort to realign those divisions in a way that helps CIOs solve those same demanding needs that I am facing. We’re currently helping companies with pure cloud, hybrid cloud and traditional approaches to information services. I’m both a provider of our services to other organizations as well as a customer of those services. Our internal IT team is actually a customer of our Professional and Managed Services division. We use our CMaaS platform to manage and operate our computing platforms internally. We also use the same help desk team our customers do. Furthermore, we use various architects and engineers that serve our customers to help us with internal projects. For example, we have recently engaged our client-facing App Dev team to help GreenPages reimagine our internal Business Intelligence systems and are underway on developing our next generation BI tools and capabilities. Another example would be a project we recently completed to look at our networking and security infrastructure in order to be prepared to move workloads from on-prem or colo facilities to the cloud. We had to add additional capabilities to our network and went out and got the SOC 2 Type 2 certification which really speaks to the importance we place on security. What I love about working here is that we don’t just talk about hybrid cloud; we are actively and successfully using those models for our own business.

Ben: What are some of your goals for 2015?

Kevin: On the internal IT side, I’m engaged, like many of my colleagues around the globe, in assessing what the new computing paradigm means for our organization. We’ve embarked on looking at every aspect of our environment, along with our ability to deliver services to the GreenPages organization. Our goal is to figure out a way to do this in a cost-effective, scalable, and flexible way that meets the needs of our organization.

Ben: Any interesting projects you have going on right now?

Kevin: As we assess our workloads and start trying to understand the best execution venues for them, it’s become pretty clear that we are going to be using more than a single venue. For example, one big execution venue for us is VMware’s vCloud Air. We have some workloads that are excellent candidates for that venue. Other workloads are great fits for Microsoft Azure. We have some initiatives, like the BI project, that are going to be built on open source; we’ll be utilizing things like Docker and Hadoop that are most likely going to be highly optimized around Amazon’s capabilities. This is giving me insight into the notion that there are many different capabilities between clouds. The important thing is to make sure every workload is optimized for the right cloud. This is an important ongoing exercise for us in 2015.

Ben: Which area of IT would you say interests you the most?

Kevin: What interests me most about IT is the organizational aspect. How do you organize in a way that creates value for the company? How do you prioritize in terms of people, process and technology? For me, it’s not about one particular aspect; it’s about the entire program and how it all functions.

Ben: What are you looking forward to in 2015 from a technology perspective?

Kevin: I’m really looking forward to our annual Summit event in August. I think it is going to be the best one yet. If you look back several years ago, very few attendees raised their hand when asked if they thought the cloud was real. Last year, most of the hands in the room went up. What will make it especially interesting this year is that we have many customers deeply involved with these types of projects. Four years ago the only option was to sit and listen to presentations, but now our customers will have the opportunity to talk to their peers about how they are actually going about doing cloud. It will be a great event and a fantastic learning opportunity.

Are you looking for more information around the transformation of corporate IT? Download this eBook from our Director of Cloud Services John Dixon to learn more!


By Ben Stephenson, Emerging Media Specialist

How Software Defined Networking is Enabling the Hybrid Cloud

By Nick Phelps, Practice Manager, Network & Security


Networking expert Nick Phelps discusses how software defined networking is enabling the hybrid cloud and creating the networks of tomorrow.


http://www.youtube.com/watch?v=VMIBY1wnUzU


Interested in learning more about software defined networking? Email us at socialmedia@greenpages.com to set up a conversation with Nick!