Category Archives: Private Cloud

Converged OpenStack cloud pioneer Nebula closes its doors

Nebula, an OpenStack pioneer, is closing its doors

Converged infrastructure vendor Nebula, one of the first companies to pioneer integrated OpenStack-based private cloud hardware, announced it will close its doors this week.

A notice posted by the Nebula management team on its website says the company had no choice but to cease operations after exhaustively searching for alternative arrangements that would allow the company to keep operating.

“When we started this journey four years ago, we set out to usher in a new era of cloud computing by curating and productizing OpenStack for the enterprise. We are incredibly proud of the role we had in establishing Nebula as the leading enterprise cloud computing platform. At the same time, we are deeply disappointed that the market will likely take another several years to mature. As a venture backed start up, we did not have the resources to wait.”

“Nebula private clouds deployed at customer sites will continue to operate normally, however support will no longer be available. Nebula is based on OpenStack and is compatible with OpenStack products from vendors including Red Hat, IBM, HP and others, providing customers with a number of choices moving forward.”

One of the original players behind the OpenStack codebase, Nebula offered Nebula Cosmos, a fast and secure deployment, management, and monitoring tool for enterprise-grade OpenStack private clouds, as well as the Nebula One, a converged infrastructure solution based on x86 servers running OpenStack.

Nearly five years after the creation of OpenStack, the market is clearly still in its early stages despite loads of vendor hype and a flurry of acquisitions in this space. Indeed, the first challenge for independents like Nebula is gaining critical mass and maintaining operations, at least before being acquired by firms like Cisco, Red Hat, and HP, which have snapped up OpenStack startups in recent years in a bid to grow their portfolios around the open source platform. The second challenge is, of course, competing with the Ciscos, Red Hats and HPs of the world, which is no small feat.

UK MoD launches dedicated private cloud for internal apps

The UK MoD is using a hosted private cloud for internal shared services apps

The UK’s Ministry of Defence (MOD) Information Systems and Services (ISS) has deployed a private cloud based in CGI’s South Wales datacentre, which is being used to host internal applications for the public sector authority.

The ISS said it received Approval to Operate for the new Foundation Application Hosting Environment (FAHE), which is hosted as a private cloud instance in CGI’s facilities, and that the first applications have successfully transitioned onto the new platform.

The hosting environment was procured through the G-Cloud framework, the UK government’s cloud-centric procurement framework, and the contract will run for at least two years.

“FAHE provides the foundation of our Applications Services approach and a future-proofed platform for secure application hosting. Our vision is that ISS will be the Defence provider of choice for applications development, hosting, and management,” said Keith Jefferies, ISS Programmes, EMPORIUM deputy head, UK Ministry of Defence.

“FAHE is the first delivery contract under the broader banner of the Applications Programme and we have selected CGI on their ability to deliver a secure environment coupled with a flexible commercial model that allows us to rapidly up and down-scale in line with future demand,” Jefferies said.

Steve Smart, UK vice president of space, defence, national and cyber security at CGI said: “MOD ISS is taking an important step towards delivering the Government’s vision of using flexible cloud services. The CGI platform is compliant to Defence and pan-Government ICT strategies and architectures. It will provide multi-discipline services from the most appropriate source with the agility and cost of industry best practice.”

The move comes just a few months after the MoD contracted with Ark to design a new state-of-the-art datacentre in Corsham, Wiltshire, a move that will allow the department to decommission its Bath facility and save on energy and operations costs.

Toy retailer The Entertainer taps Rackspace for managed private cloud

The Entertainer has moved onto Rackspace’s managed private cloud platform

UK toy retailer The Entertainer has moved onto Rackspace’s managed private cloud platform in a bid to improve how the company’s site and databases handle traffic spikes.

Working with omni-channel retail consultancy Conexus, The Entertainer sought to enhance its website and databases to cope with rising seasonal demand.

The company, which has about 100 stores in the UK, said that in the five weeks leading up to last Christmas it saw a 60 per cent sales increase over the same period in 2013 (it generates half of its annual revenues between November and December).

“In addition to the scalability that’s available through the Rackspace Private Cloud, the high performance it offers is also very important to us. It has allowed the business to deploy a Click and Collect service, which has improved the customer experience and boosted sales,” said Ian Pulsford, head of IT services, The Entertainer.

“A crucial aspect of Click and Collect is having an effective stock management system, which we also power by the cloud. Every evening between midnight and 4 a.m. we monitor the stock available in each store, collecting data on our 17,000 products. This ensures that the availability we offer our Click and Collect customers is accurate and updated in real time,” Pulsford said.

“However, as we’ve learned in the past with previous hosting providers, the technology alone is not enough if we don’t have access to a high level of support and expertise to keep it running smoothly,” he added.

Jeff Cotten, managing director of Rackspace International said: “Multi-channel retailing is highly competitive, which means both the in-store and online experiences have to be excellent to keep customers coming back. It’s been great working with The Entertainer and Conexus to build a Private Cloud environment that is high performing and highly scalable, so The Entertainer can focus on developing new services and increasing its presence across a growing number of ecommerce channels.”

Gartner Data Center Conference: Success in the Cloud & Software Defined Technologies

I just returned from the Gartner Data Center conference in Vegas and wanted to convey some of the highlights of the event. This was my first time attending a Gartner conference, and I found it pretty refreshing, as Gartner takes an agnostic approach to all of its sessions, unlike a typical vendor-sponsored event such as VMworld, EMC World, or Cisco Live. Most of the sessions I attended were around cloud and software defined technologies. Below, I’ll bullet out what I consider to be highlights from a few of the sessions.

Building Successful Private/Hybrid Clouds –

 

  • Gartner sees the majority of private cloud deployments being unsuccessful. Here are some common reasons for that…
    • Focusing on the wrong benefits. It’s not all about cost in $$. In cloud, true ROI is measured in agility vs dollars and cents
    • Doing too little. A virtualized environment does not equal a private cloud. You must have automation, self-service, monitoring/management, and metering in place at a minimum.
    • Doing too much. Putting applications/workloads in the private cloud that don’t make sense to live there. Not everything is a fit nor can take full advantage of what cloud offers.
    • Failure to change operational models. It’s like being trained to drive an 18 wheeler then getting behind the wheel of a Ferrari and wondering why you ran into that tree.
    • Failure to change funding model. You must, at a minimum, have a show back mechanism so the business will understand the costs, otherwise they’ll just throw the kitchen sink into the cloud.
    • Using the wrong technologies. Make sure you understand the requirements of your cloud and choose the proper vendors/technologies. Incumbents may not necessarily be the right choice in all situations.
  • Three common use cases for building out a private cloud include outsourcing commodity functions, renovating infrastructure and operations, and innovation/experimentation…but you have to have a good understanding of each of these to be successful (see above).
  • There is a big difference between doing cloud to drive bottom line (cost) savings vs top line (innovation) revenue expansion. Know ‘why’ you are doing cloud!
  • On the hybrid front, it is very rare today to see fully automated environments that span private and public as the technology still has some catching up to do. That said, it will be reality within 24 months without a doubt.
  • In most situations, only 20-50% of all applications/workloads will (or should) live in the cloud infrastructure (private or public) with the remaining living in traditional frameworks. Again, not everything can benefit from the goodness that cloud can bring.
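
The showback point above can be made concrete with a minimal metering sketch. This is a hypothetical Python illustration, not any specific product; the rate card and usage figures are invented for the example.

```python
# Minimal showback sketch: meter each business unit's consumption and
# report the implied cost, without actually billing anyone.
# Rates and usage numbers below are illustrative assumptions.

RATES = {"vcpu_hours": 0.04, "gb_ram_hours": 0.01, "gb_storage_days": 0.002}

def showback(usage: dict) -> float:
    """Return the implied cost of one department's metered usage."""
    return round(sum(RATES[metric] * amount for metric, amount in usage.items()), 2)

# One month of metered usage for a hypothetical department:
marketing = {"vcpu_hours": 2000, "gb_ram_hours": 8000, "gb_storage_days": 50000}
# 2000*0.04 + 8000*0.01 + 50000*0.002 = 80 + 80 + 100 = 260.00
```

Even a report this simple gives the business visibility into what its workloads cost, which is the minimum needed to keep the kitchen sink out of the cloud.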

Open Source Management Tools (Free or Flee) –

 

  • Organizations with fewer than 2500 employees typically look at open source tools to save on cost while larger organizations are interested in competitive advantage and improved security.
  • Adoption is largest in the areas of monitoring and server configuration, while cloud management platforms (e.g. OpenStack), networking (e.g. OpenDaylight), and containers (e.g. Docker) are gaining momentum.
  • When considering one of these tools, it is very important to look at how active the community is to ensure the tool remains relevant.
  • Where is open source being used in the enterprise today? Almost half (46%) of deployments are departmental while only about 12% of deployments are considered strategic to the overall organization.
  • The best slide I saw at the event, which pretty much sums up open source…

 

[Slide from the Gartner Data Center Conference]

 

If this makes you excited, then maybe open source is for you.  If not, then perhaps you should run away!

3 Questions to Ask Your SDN Vendor –

  • First, a statistic: organizations that fail to properly integrate their virtualization and networking teams will see a 3x longer MTTR (mean time to resolution) for issues versus those that do properly integrate the teams
  • There are approximately 500 true production SDN deployments in the world today
  • The questions to ask…
    • How to prevent network congestion caused by dynamic workload placement
    • How to connect to bare metal (non-virtualized) servers
    • How to integrate management and visibility between the underlay/overlay
  • There are numerous vendors in this space, it’s not just VMware and Cisco.
  • Like private cloud, you really have to do SDN for the right reasons to be successful.
  • Last year at this conference, there were 0 attendees who indicated they had investigated or deployed SDN. This year, 14% of attendees responded positively.

 

If you’re interested in a deeper discussion around what I heard at the conference, let me know and I’ll be happy to continue the dialogue.

 

By Chris Ward, CTO. Follow Chris on Twitter @ChrisWardTech. You can also download his latest whitepaper on data center transformation.

 

 

Managing Resources in the Cloud: How to Control Shadow IT & Enable Business Agility

 

In this video, GreenPages CTO Chris Ward discusses the importance of gaining visibility into Shadow IT and how IT Departments need to offer the same agility to its users that public cloud offerings like Amazon can provide.

 

http://www.youtube.com/watch?v=AELrS51sYFY

 

 

If you would like to hear more from Chris, download his on-demand webinar, “What’s Missing in Today’s Hybrid Cloud Management – Leveraging Cloud Brokerage”

You can also download this ebook to learn more about the evolution of the corporate IT department & changes you need to make to avoid being left behind.

 

 

 

Lessons Learned from Running My Own Cloud From My Kitchen (Clive Thompson)

From Wired comes a good opinion read on the author’s experiences and insights derived from setting up his own private cloud server using Tonido.

It’s Time for You to Take the Cloud Back From Corporations (Clive Thompson)

“…it’s a “personal cloud”: I own and run the hardware. The simple act of building and running it has given me a glimpse of a possible alternate future for the Internet. It’s an increasingly popular one too.”

“Another outcome: You realize that, holy Moses, putting stuff online is not rocket science anymore.”

“Granted, personal clouds create new problems. A blizzard knocked out my DSL for a day, taking my cloud with it. A house fire destroys not just your laptop but your cloud backup as well.”

Read it all

 

My VMworld Breakout Session: Key Lessons Learned from Deploying a Private Cloud Service Catalog

By John Dixon, Consulting Architect, LogicsOne

 

Last month, I had the special privilege of co-presenting a breakout session at VMworld with our CTO Chris Ward. The session’s title was “Key Lessons Learned from Deploying a Private Cloud Service Catalog,” and we had a full house for it. Overall, the session went great and we had a lot of good questions. In fact, due to demand, we ended up giving the presentation twice.

In the session, Chris and I discussed a recent project we did for a financial services firm where we built a private cloud, front-ended by a service catalog. A service catalog really enables self-service – it is one component of corporate IT’s opportunity to partner with the business. In a service catalog, the IT department can publish the menu of services that it is willing to provide and (sometimes) the price that it charges for those services. For example, we published a “deploy VM” service in the catalog, and the base offering was priced at $8.00 per day. Additional storage or memory beyond the base spec was available at an additional charge. When the customer requests “deploy VM,” the following happens:

  1. The system checks to see if there is capacity available on the system to accommodate the request
  2. The request is forwarded to the individual’s manager for approval
  3. The manager approves or denies the request
  4. The requestor is notified of the approval status
  5. The system fulfills the request – a new VM is deployed
  6. A change record and a new configuration item is created to document the new VM
  7. The system emails the requestor with the hostname, IP address, and login credentials for the new VM
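
The steps above can be sketched as a simple state machine. The following is a hypothetical Python illustration, not the orchestration suite used on the project; the class names, rate figures, and capacity constant are all assumptions for the example.

```python
from dataclasses import dataclass, field

CAPACITY_GB = 512  # assumed cluster RAM capacity for the capacity check

def daily_price(ram_gb: int, base_ram_gb: int = 4) -> float:
    """Base 'deploy VM' offering at $8.00/day; assumed $0.50/day per extra GB of RAM."""
    return 8.00 + 0.50 * max(0, ram_gb - base_ram_gb)

@dataclass
class VMRequest:
    requestor: str
    manager: str
    ram_gb: int = 4
    status: str = "submitted"
    audit: list = field(default_factory=list)  # stands in for change records and notifications

def process(request: VMRequest, used_gb: int, manager_approves: bool) -> VMRequest:
    # 1. Check capacity on the system to accommodate the request
    if used_gb + request.ram_gb > CAPACITY_GB:
        request.status = "rejected: no capacity"
        return request
    # 2-4. Forward to the requestor's manager for approval and record the outcome
    request.audit.append(f"approval requested from {request.manager}")
    if not manager_approves:
        request.status = "denied"
        return request
    # 5. Fulfill the request (a real system would call vCenter here)
    # 6. Create a change record and configuration item for the new VM
    request.audit.append("change record and CI created")
    # 7. Email the requestor with the hostname, IP address, and credentials
    request.audit.append(f"connection details emailed to {request.requestor}")
    request.status = "fulfilled"
    return request
```

The value of writing the flow down this way is that every branch (capacity rejection, manager denial, fulfillment) becomes explicit before any integration work starts.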

This sounds fairly straightforward, and it is. Implementation is another matter, however. It turns out that we had to integrate with vCenter, Active Directory, the client’s ticketing system, the client’s CMDB, an approval system, and the provisioned OS in order to automate the fulfillment of this simple request. As you might guess, documenting this workflow upfront was incredibly important to the project’s success. We documented the workflow and assessed it against the request-approval-fulfillment theoretical paradigm to identify the systems we needed to integrate. One of the main points that Chris and I made at VMworld was to build this automation incrementally instead of tackling it all at once. That is, just get the automation suite to talk to vCenter before tying in AD, the ticketing system, and all the rest.

Download this on-demand webinar to learn more about how you can securely enable BYOD with VMware’s Horizon Suite

Self-service, automation, and orchestration all drove real value during this deployment. We were able to eliminate or reduce at least three manual handoffs via this single workflow. Previously, these handoffs were made either by phone or through the client’s ticketing system.

During the presentation we also addressed which systems we integrated, which procedures we selected to automate, and what we plan to have the client automate next. You can check out the actual VMworld presentation here. (If you’re looking for more information around VMworld in general, Chris wrote a recap blog of Pat Gelsinger’s opening keynote as well as one on Carl Eschenbach’s General Session.)

Below are some of the questions we got from the audience:

Q: Did the organization have ITSM knowledge beforehand?

A: The group had very limited knowledge of ITSM but left our project with a real-world perspective on ITIL and ITSM.

Q: What did we do if we needed a certain system in place to automate something?

A: We did encounter this and either labeled it as a risk or used “biomation” (self-service is available, fulfillment is manual, customer doesn’t know the difference) until the necessary systems were made available

Q: Were there any knowledge gaps at the client? If so, what were they?

A: Yes. Both a developer mentality and a service management mentality are needed to complete a service catalog project effectively. Traditional IT engineering and operations teams do not typically have a developer mentality or experience with languages like JavaScript.

Q: Who was the primary group at the client driving the project forward?

A: IT engineering and operations were involved with IT engineering driving most of the requirements.

Q: At which level was the project sponsored?

A: VP of IT Engineering with support from the CIO

All in all, it was a very cool experience to get the chance to present a breakout session at VMworld. If you have any other questions about key takeaways we got from this project, leave them in the comment section. As always, if you’d like more information you can contact us. I also just finished an ebook on “The Evolution of the Corporate IT Department” so be sure to check that out as well!

A Guide to Successful Cloud Adoption

Last week, I met with a number of our top clients near the GreenPages HQ in Portsmouth, NH at our annual Summit event to talk about successful adoption of cloud technologies. In this post, I’ll give a summary of my cloud adoption advice, and cover some of the feedback that I heard from customers during my discussions. Here we go…

The Market for IT Services

I see compute infrastructure looking more and more like a commodity, and there is intense competition in the market for IT services, particularly Infrastructure-as-a-Service (IaaS).

  1. “Every day, Amazon installs as much computing capacity in AWS as it used to run all of Amazon in 2002, when it was a $3.9 billion company.” – CIO Journal, May 2013
  2. “[Amazon] has dropped the price of renting dedicated virtual server instances on its EC2 compute cloud by up to 80 percent […] from $10 to $2 per hour” – ZDNet, July 2013
  3. “…Amazon cut charges for some of its services Friday, the 25th reduction since its launch in 2006.” – CRN, February 2013

I think that the first data point here is absolutely stunning, even considering that it covers a time span of 11 years. Of course, a simple Google search will return a number of other similar quotes. How can Amazon and others continue to drop their prices for IaaS, while improving quality at the same time? From a market behavior point of view, I think that the answer is clear – Amazon Web Services and others specialize in providing IaaS. That’s all they do. That’s their core business. Like any other for-profit business, IaaS providers prefer to make investments in projects that will improve their bottom line. And, like any other for-profit business, those investments enable companies like AWS to effectively compete with other providers (like Verizon/Terremark, for example) in the market.

Register for our upcoming webinar on 8/22 to learn how to deal with the challenges of securely managing corporate data across a broad array of computing platforms. 

With network and other technologies as they are, businesses now have a choice of where to host infrastructure that supports their applications. In other words, the captive corporate IT department may be the preferred provider of infrastructure (for now), but they are now effectively competing with outside IaaS providers. Why, then, would the business not choose the lowest cost provider? Well, the answer to that question is quite the debate in cloud computing (we’ll put that aside for now). Suffice to say that we think that internal corporate IT departments are now competing with outside providers to provide IaaS and other services to the business and that this will become more apparent as technology advances (e.g., as workloads become more portable, network speeds increase, storage becomes increasingly less costly, etc.).

Now here’s the punch line and the basis for our guidance on cloud computing: how should internal corporate IT position itself to stay competitive? At our annual Summit event last week, I discussed the progression of the corporate IT department from a provider of technology to a provider of services (see my whitepaper on cloud management for detail). The common thread is that corporate IT evolves by becoming closer and closer to the requirements of the business – and may even be able to anticipate requirements of the business or suggest emerging technology to benefit the business. To take advantage of cloud computing, one thing corporate IT can do is source commodity services to outside providers where it makes sense. Fundamentally, this has been commonplace in other industries for some time – manufacturing being one example. OEM automotive manufacturers like GM and Ford do not produce the windshields and brake calipers that are necessary for a complete automobile – it just isn’t worth it for GM or Ford to produce those things. They source windshields, brake calipers, and other components from companies who specialize. GM, Ford, and others are then left with more resources to invest in designing, assembling and marketing a product that appeals to end users like you and me.

So, it comes down to this: how do internal corporate IT departments make intelligent sourcing decisions? We suggest that the answer is in thinking about packaging and delivering IT services to the business.

GreenPages Assessment and Design Method

So, how does GreenPages recommend that customers take advantage of cloud computing? Even if you are not considering external cloud at this time, I think it makes sense to prepare your shop for it; cloud may eventually make sense for your shop even where there is no fit for it today. The guidance here is to take a methodical look at how your department is staffed and operated. ITIL v2 and v3 provide a good guide to what should be examined:

  • Configuration Management
  • Financial Management
  • Incident and Problem Management
  • Change Management
  • Service Level and Availability, and Service Catalog Management
  • Lifecycle Management
  • Capacity Management
  • Business Level Management

 

Assigning a score to each of these areas in terms of repeatability, documentation, measurement, and continuous improvement will paint the picture of how well your department can make informed sourcing decisions. Conducting an assessment and making some housekeeping improvements where needed will serve two purposes:

  1. Plans for remediation could form one cornerstone of your cloud strategy
  2. Doing things according to good practice will add discipline to your IT department – which is valuable regardless of your position on cloud computing at this time
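
As a sketch of the assessment above, each process area can be scored on the four dimensions and averaged, with the lowest-scoring areas becoming the remediation candidates. This is a hypothetical Python illustration; the areas chosen and the 1-5 scores below are invented sample data, not a real assessment.

```python
# Hypothetical capability scoring for a few of the ITIL process areas
# listed above. Scores (1-5) per dimension are invented sample data.

DIMENSIONS = ("repeatability", "documentation", "measurement", "improvement")

def maturity(scores: dict) -> dict:
    """Average the four dimension scores for each process area."""
    return {area: sum(dims.values()) / len(dims) for area, dims in scores.items()}

sample = {
    "Change Management":    dict(zip(DIMENSIONS, (4, 3, 2, 2))),
    "Capacity Management":  dict(zip(DIMENSIONS, (2, 2, 1, 1))),
    "Financial Management": dict(zip(DIMENSIONS, (3, 4, 3, 2))),
}

# Rank from weakest to strongest; the front of the list drives the
# remediation plan that feeds the cloud strategy.
ranked = sorted(maturity(sample).items(), key=lambda kv: kv[1])
```

In this sample, Capacity Management scores lowest and would be the first housekeeping item before making sourcing decisions.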

When and if cloud computing services look like a good option for your company, your department will be able to make an informed decision on which services to use at which times. And, if you’re building an internal private cloud, the processes listed above will form the cornerstone of the way you will operate as a service provider.

Case Study: Service Catalog and Private Cloud

By implementing a Service Catalog, corporate IT departments can take a solid first step toward becoming a service provider and staying close to the requirements of the business. This year at VMworld in San Francisco, I’ll be leading a session to present a case study of a recent client that did exactly this with our help. If you’re going to be out at VMworld, swing by and listen in to my session!

 

 

Free webinar on 8/22: Horizon Suite – How to Securely Enable BYOD with VMware’s Next Gen EUC Platform.

With a growing number of consumer devices proliferating in the workplace, lines of business turning to cloud-based services, and people demanding more mobility in order to be productive, IT administrators are faced with a new generation of challenges for securely managing corporate data across a broad array of computing platforms.

 

IBM Acquiring SoftLayer for Private Cloud Infrastructure

IBM is acquiring SoftLayer, a privately held cloud infrastructure provider. IBM hopes the acquisition will enable it to marry the security, privacy and reliability of private clouds with the economy and speed of a public cloud, with Fortune 500 companies as the target market.

IBM says the majority of the Fortune 500 have concerns about how cloud will work with the IT investments they have already made, and many have been waiting for a cloud that is better than “good enough.” As a result, although cloud is growing quickly, it’s still only a small part of the total IT spend. There’s a lot of opportunity for IBM to capitalize on.

SoftLayer has a breakthrough capability that provides an easy “on ramp” to cloud adoption, especially for the Fortune 500. And for SoftLayer’s born-on-the-cloud customers, IBM opens a new market into the enterprise. Specifically, SoftLayer allows cloud services to be created very quickly on dedicated servers rather than virtual ones, which are the norm in the public cloud.

By building out a cloud on a dedicated server, a client no longer has to worry about sharing computing resources with other companies, thereby improving privacy, security and overall computing performance. Dedicated servers also mean that software built for on-premise use can be ported to the cloud more easily: it doesn’t have to go through as much heavy configuration as it does on a virtual server, which it was not developed to work with.

This capability will be added to IBM’s SmartCloud portfolio. IBM SmartCloud offers 100 cloud-based solutions for line-of-business execs, including Watson Engagement Advisor; hybrid solutions such as IBM PureSystems; mission-critical cloud services for SAP on SmartCloud Enterprise+; and what IBM describes as the best private cloud solutions in the market.

Headquartered in Dallas, SoftLayer serves 21,000 customers with a global cloud infrastructure platform spanning 13 data centers in the U.S., Asia and Europe. SoftLayer excels at running cloud-centric, performance-intensive applications in mobile, social, gaming and analytics.

IBM is also announcing today the formation of a new Cloud Services division that combines SoftLayer with IBM SmartCloud into a global platform, reporting to SVP Erich Clementi, IBM Global Technology Services.

Financial terms of the deal have not been disclosed and the acquisition is expected to close later in 2013 following standard regulatory review.