From one cloud to many: The current trend of cloud adoption


Over the next 12 months, the majority of business applications will be deployed to the cloud, with most of these being deployed to multiple clouds across multiple geographies.

That’s the key trend from a survey conducted by Equinix, in which the overwhelming majority of the 659 global respondents (77%) said they planned to deploy to multiple clouds in the next year, and a similar number (74%) expected a larger budget for cloud services in 2015.

Respondents expect 91% of new cloud-based offerings to be deployed within their organisations over the coming 12 months, while 45% of new cloud-based apps will be deployed at a third party colocation provider.

It’s clear that multiple cloud deployments are the way forward, but the report also explained how this is going to take place: through interconnected data centres. 87% of respondents indicated that interconnection is required to meet cloud performance objectives, while 85% argued that direct connections to cloud providers are “highly valued.”

The report found that globally, 11% of firms are planning to deploy more than 10 cloud services in the next 12 months, compared with only 6% for North America. It’s an interesting trend, and shows how cloud deployments are catching up outside the US. 38% of firms globally are going to deploy between three and five services, compared to 30% for North America.

Globally, 7% said they were expecting to deploy to multiple clouds over the next three months; this again compares favourably with just 1% for North America. Only a minimal number (15% globally, 17% North America) said the process will take more than 12 months. Three quarters (74%) of global respondents say they will be deploying in multiple countries in some capacity.

“What surprised us about this survey is how quickly multi-cloud strategies are becoming the norm worldwide,” commented Ihab Tarazi, Equinix CTO. “Businesses have discovered that colocation provides a meaningful ROI for WAN optimisation, and it is clear that multi-cloud deployment will improve the ROI even more.”

The survey sample comprised IT decision makers: IT executives (44%), IT managers (54%) and IT service providers (2%).

Reference Architecture, Converged, & Hyper-Converged Infrastructure: A Pizza Analogy

This morning, our CTO Chris Ward delivered an internal training session that did a great job breaking down reference architecture, converged infrastructure, and hyper-converged infrastructure. To get his point across, Chris used the analogy of eating a pizza. He also discussed the major players in each space and when it makes sense for organizations to use each approach. Below is a recap of what Chris covered in the training. You can hear more from Chris in his brand new whitepaper – an 8 Point Checklist for a Successful Data Center Move. You can also follow him on Twitter.

Reference Architecture

According to Chris, reference architecture is like getting a detailed recipe and making your own pizza. You need to go out and buy the ingredients, make the dough, add the toppings, and bake it at the perfect temperature. With reference architecture, you essentially get an instruction book. If you’re highly technical, following the recipe is manageable. However, if you are more of a technology generalist, or if you’re newer to the field, it may be difficult to follow, and the chances of getting lost in the recipe are fairly high. The benefit here is that you have the flexibility to make the pizza the way you want it. The downside is that it doesn’t save you a ton of time. You still need to order the equipment, wait for the order to arrive, and then put it together.

The Players

  • EMC’s VSPEX – EMC storage, Cisco UCS compute, Cisco networking
  • Nimble – Nimble storage, Cisco UCS compute, Cisco networking (a newer offering)
  • FlexPod – NetApp Storage, Cisco UCS compute, Cisco networking

There are several use cases when it makes sense to utilize reference architecture. These include when an organization:

  • Has disparate vendors, meaning converged or hyper-converged infrastructure may not be an option, and the organization is not open to a vendor switch
  • Requires more flexibility in components than converged infrastructure provides (i.e. you can add some extra garlic to your pizza and not have it be a big deal).
  • Doesn’t have a hardware refresh cycle between storage, compute and networking that is in alignment (i.e. you do not want to double up on servers you just bought last year)

Converged Infrastructure

Converged infrastructure is like a take-home pizza you buy at a grocery store (it’s not delivery, it’s DiGiorno!). Converged infrastructure is more prepackaged than reference architecture. The dough has been made and the toppings have been added, but you still have to put it in the oven and bake it. Vendors do the physical rack, stack and cabling at the factory and ship the result directly to the customer. Customers can typically expect delivery within 30-45 days of placing the order, rather than waiting months for all the parts to ship and then assembling everything themselves. However, the infrastructure is set in stone. If your IT shop runs different technology than what comes in the converged infrastructure package, you can’t mix and match. There is also still some integration work that comes with converged infrastructure.

The Players

There are several use cases when it makes sense to utilize converged infrastructure. These include when an organization:

  • Requires fast time to market (typically 30-45 days from order to assembled delivery; keep in mind there is additional time on the front end, before the order, to plan the solution out).
  • Is building out application PODs or a private cloud. This is typically more of a use case in the enterprise space – for example, rolling out a new SAP environment and having, say, a Vblock solely dedicated to running that one app. Another example is a larger VDI project.
  • Requires known, guaranteed and predictable performance out of the infrastructure. With Vblock, VCE guarantees a level of performance that you do not get with reference architecture.
  • Requires large scalability – you can add to it over time. Keep in mind you need to have a clear direction of where you are headed before you start.
  • Is stuck in the mud with operations and/or maintenance validation tasks. Again, this is a more relevant use case in the enterprise space. Say an IT department needs to upgrade from vSphere 5.1 to 5.5 in a cloud environment. It could take them 3-4 months to complete all the testing, and by the time they get everything together a new update could already be on its way out. That IT department is always 2-3 upgrades behind because of all the manual work. With converged infrastructure, vendors do that validation work for you.

Hyper-Converged Infrastructure

A hyper-converged infrastructure is the equivalent of a fine dining pizza experience: you can sit back and have a glass of wine while your meal is served to you on a silver platter. Hyper-converged infrastructure is an in-a-box offering. It’s one physical unit – no cabling or wiring necessary. The only integration is uplinking it into your existing infrastructure. If you choose to go this route, you can place the order, have it shipped overnight and expect it on your floor within 48 hours, which is obviously a very fast time to market. As the newest space of the three, it’s a little less mature in terms of scalability. Hyper-converged infrastructure often makes the most sense for midmarket companies. Keep in mind, hyper-converged infrastructure is a take-it-or-leave-it, all-or-nothing deal.

The Players

It makes the most sense to utilize hyper-converged infrastructure when companies:

  • Have storage and compute refresh cycles that are roughly in sync
  • Are looking for out-of-the-box data protection (Simplivity)
  • Require known/guaranteed/predictable performance
  • Are looking for rack space and power consolidation savings
  • Require a small amount of scalability
  • Want a plug-and-play approach to infrastructure.

Which way makes the most sense for you to eat your pizza?



By Ben Stephenson, Emerging Media Specialist

Interoute opens up second virtual data centre zone in Germany


Cloud services provider Interoute has announced it will open up a new virtual data centre (VDC) zone in Frankfurt on December 1.

The new VDC zone is the seventh to be launched in 2014, the 14th overall, and the second in Germany, alongside the zone in Berlin launched in 2012. The firm launched data centres in Milan, Hong Kong, New York, London, Slough and Madrid this year.

It’s another step towards keeping European customers’ data within EU borders, and it offers ultra-low in-country latency on a resilient, fibre-connected platform. It also plays into the trend towards managed services: customers can spin capacity up or down depending on their needs, either keeping hands-on control of the IT infrastructure or opting for a fully managed service.

The company is also committed to startups, with the first year of access free via the JumpStart-up program.

“Expanding the application universe that can be moved to the cloud is central to Interoute’s approach,” said Matthew Finnie, Interoute CTO. “Critical from a German perspective is data control and location.”

The data centres are closely linked to Interoute’s CloudStore, a one stop shop for enterprises to deploy applications on the firm’s VDC. Speaking to CloudTech at the launch of the Hong Kong VDC, CloudStore general manager Lee Myall noted how it enabled smaller customers to buy IT on a ‘help themselves’ basis.

With ongoing concerns over data being held on US soil, many cloud providers are expanding their operations in Europe. Salesforce announced the opening of its first European data centre in London last month, with France and Germany in its sights, while Amazon Web Services launched a Frankfurt data centre region last month to great fanfare.

Alongside the virtual data centres, Interoute’s portfolio also comprises 12 data centres, 31 colocation centres, and over 67,000km of lit fibre.

Data centres: To buy or not to buy – that is the question


By Nick Razey, CEO, Next Generation Data

When the CIO of a company recently contacted us about data space for housing 25 racks with the possibility of growing to 80 racks, he mentioned in passing that he was thinking about building his own data centre as an alternative. “We’re not experts in data centres but it would be much cheaper than buying from you,” he added.

Well, on the face of it, he could be right: assume a build cost of £2.5m, depreciate it over 10 years and you have a cost of £250K per annum – or, across 80 racks, £3,125 per rack per year. That does sound cheap.

But how about the cost of capital, say an average of £100K pa, and equipment maintenance – perhaps another £50K pa? And of course rent and rates, say £90K pa, plus staffing, where even a bare minimum will need £150K for salaries including all uplifts. So now the total is £640K pa, or £8,000 per rack. Not so cheap after all.

But that’s the least of his problems. Remember, he only wanted 25 racks initially. Those 25 racks now work out at an average of £25,600 pa each. Still, within a couple of years the data centre will be full, so his price per rack will get back down to the lower level. But what if he keeps growing? What if he needs 85 racks? Where do the extra five racks go? Does he build another 80-rack data hall for a further £2.5m?
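
Pulling the arithmetic together – using the illustrative estimates above, not quoted prices – the per-rack cost works out like this:

```python
# Rough own-build cost model using the illustrative figures above (GBP).
BUILD_COST = 2_500_000        # one-off construction cost, 80-rack data hall
DEPRECIATION_YEARS = 10

annual_costs = {
    "depreciation":    BUILD_COST / DEPRECIATION_YEARS,  # £250K pa
    "cost of capital": 100_000,
    "maintenance":      50_000,
    "rent and rates":   90_000,
    "staffing":        150_000,
}

total_pa = sum(annual_costs.values())  # £640K pa

for racks in (80, 25):
    print(f"{racks} racks occupied: £{total_pa / racks:,.0f} per rack pa")

# 80 racks occupied: £8,000 per rack pa   (full occupancy)
# 25 racks occupied: £25,600 per rack pa  (initial requirement)
```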


The foregoing analysis also assumes that he has an available site which is suitable (secure, safe from flooding and flightpaths), planning permission, a source of power, connectivity options, a quality design and build contractor and plenty of cash to invest.

By comparison, housing the same number of initial and potential additional racks in a modern tier 3 UK colocation data centre will cost between £5,000 and £10,000 per rack pa depending on location (with London/inner M25 locations being at the higher end of the scale). This includes space, power, cooling and associated infrastructure.

This illustration shows the own-build option looks cheaper than colocation only if the cost of real estate and staff are ignored and full occupancy is assumed. Some of the new single site ‘mega’ data centres – over 250,000 square feet – can further reduce the cost per rack by delivering even greater economies of scale. This is achieved by all data hall construction taking place within one building and the utilisation of a common facilities infrastructure (power supply, HVAC plant, fibre cross connects, security).

So why do so many companies consider building their own data centres? A charitable explanation is that they feel more secure with an in-house data centre and it’s more efficient for staff if the data centre is on-site. A more cynical explanation is that a data centre is the ultimate vanity project.

But as the saying goes, “Revenue is vanity, profit is sanity.”

NetSuite goes aggressive in ad campaign, targets Sage


“All Sage lines terminate here.” This is the opening statement of an advert placed in yesterday’s Financial Times and Metro by NetSuite.

Parodying a railway network, the ad asks: “Has your business come to the end of the line with Sage?”, adding that NetSuite has poached 500 customers from its rival.

CEO Zach Nelson approved the campaign, seeing it as an ideal time to further NetSuite’s European expansion. Nelson is planning to move to the UK for a period next year to help with this, as well as to oversee the acquisition of UK-based commerce provider Venda, completed back in July.

When the two companies spoke to CloudTech as the deal was being completed, they agreed they both “shared the same DNA”, with Pete Daffern, EMEA president at NetSuite, arguing: “It’s just us doubling down on our European investment.”

Sage’s SMB segment managing director Steve Attwell said in response to the advert: “In Sage’s world, businesses don’t run in a straight line and end. They adapt, evolve and grow.”

He added: “If Sage ran the underground, it wouldn’t be under the ground, it would not have an end of line and there would be no gaps to mind. You would get to your destination quicker, there would be no delays and you would get there with confidence.”

NetSuite is planning to announce a variety of new customers and launches at its SuiteConnect conference in London on November 10. Nelson will be delivering a keynote speech: “The Next Disruption: Collision of Product and Services Business”.

Despite its ambition, there’s still lots of work for NetSuite to do if it wants to topple Sage. The Newcastle-based firm recorded revenues of £657m for the first half of this year, compared to NetSuite’s £160m.

This feud looks like it will run and run – what do you think?

SDN Technologies: No Need to Pick the Winner, Just Get in the Game

With SDN, there are a lot of complementary technologies. Will the future be Change or Die? Or will it be Adopt & Co-mingle? In this short two-minute video, GreenPages Solutions Architect Dan Allen discusses software-defined networking. You can hear more from Dan in this video blog about Cisco ASA updates and this video blog discussing wireless strategy.


SDN Technologies

http://www.youtube.com/watch?v=p6qgBY10SyY


Would you like to speak with Dan about SDN strategy or implementation? Email us at socialmedia@greenpages.com!


Dropbox and Microsoft partner for Office 365 storage: What does this mean for OneDrive?

Cloud storage provider Dropbox has announced a strategic partnership with Microsoft whereby the service is being integrated more closely with Microsoft Office 365.

Dropbox customers will be able to access their accounts directly from Office apps, edit Office files from the Dropbox app, and share Dropbox links from within Office.
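
Dropbox hasn’t detailed how the Office integration works under the hood, but the link-sharing piece mirrors what its public HTTP API already exposes. Here is a minimal, hypothetical sketch – the access token and file path are placeholders, and it calls the generic Dropbox API rather than anything Office-specific:

```python
# Hypothetical sketch: create a shareable link for a Dropbox file,
# the kind of link the Office integration lets users drop into documents.
# ACCESS_TOKEN and the file path are placeholders.
import requests

ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"

resp = requests.post(
    "https://api.dropboxapi.com/2/sharing/create_shared_link_with_settings",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"path": "/Reports/q3-figures.docx"},  # hypothetical file
)
resp.raise_for_status()
print(resp.json()["url"])  # a www.dropbox.com/... share link
```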

In a blog post on the Dropbox website, head of product Ilya Fushman wrote: “We know that much of the world relies on a combination of Dropbox and Microsoft Office to get work done. That’s why we’re partnering with Microsoft to help you do more on your phones, tablets, and the web.”

The new features will be rolled out to iOS and Android users, with Dropbox also confirming plans for a Windows Phone and Windows tablet app.

Satya Nadella, Microsoft CEO, has long used the terms ‘mobile-first’ and ‘cloud-first’ in his missives – almost as much for continuity now as anything else. In this instance, the emphasis is on enterprise and collaboration. “In our mobile-first and cloud-first world, people need easier ways to create, share and collaborate regardless of their device or platform,” he said.

“Together, Microsoft and Dropbox will provide our shared customers with flexible tools that put them at the centre of the way they live and work today.”

The news has come as a bit of a shock to many commentators, but it’s an interesting partnership from both ends. Dropbox has been looking for more of an enterprise focus, especially as it has been losing that battle to Box, which has secured customer wins such as General Electric in recent months.

Similarly, Microsoft will be relishing this partnership, as it allows the tech giant to show it will “play nicely” with competitors. Gartner analyst Jeff Mann told the Guardian: “Both of them decided that they’re not really a threat. If they can work with each other, against the common enemy Google, primarily, or to some extent Box, or the other competitors that are in this market.”

What’s your view?

Salesforce customers: Learn from Code Spaces’ swift demise


A benchmark report by Adallom into the uptake of software as a service (SaaS) applications has found that Salesforce customers have the highest percentage of privileged access users – and warned about the problems that may cause businesses.

On average 7% of users on Salesforce accounts are privileged or have admin access, compared with 4% for Google Apps, 2% for Box and 1% for Office 365, the other three services analysed.

The report gave a grave warning over the prevalence of “super admin” accounts – ones with complete and unrestricted access to the SaaS. “A compromised ‘super admin’ account represents a much greater threat to an organisation because it has access not only to view and edit privileged data, but also to modify access rights of other privileged users,” the report notes.
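
The report doesn’t prescribe tooling, but auditing for over-privileged users is straightforward to sketch. A hypothetical example using the third-party simple-salesforce library, treating the “Modify All Data” permission as a rough proxy for super admin access (the credentials are placeholders):

```python
# Hypothetical audit: list active Salesforce users whose profile grants
# "Modify All Data" -- one common proxy for a "super admin" account.
from simple_salesforce import Salesforce  # third-party library

sf = Salesforce(
    username="admin@example.com",   # placeholder credentials
    password="...",
    security_token="...",
)

result = sf.query(
    "SELECT Username, Profile.Name FROM User "
    "WHERE IsActive = true AND Profile.PermissionsModifyAllData = true"
)

for record in result["records"]:
    print(record["Username"], "-", record["Profile"]["Name"])

print(result["totalSize"], "privileged users found")
```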

Regular readers of CloudTech will remember the unfortunate story of Code Spaces, the code hosting provider which had to wave the white flag in June this year after a DDoS attack escalated. While its service ran on Amazon Web Services EC2, the hackers got into the admin control panel, created backup logins, and then deleted data, backups and machine configurations.

“Customers, not vendors, are responsible for risk management,” the report notes. “While most enterprise SaaS providers have built-in support for two-factor authentication and IP restrictions that can be used with user accounts, sophisticated attackers can circumvent those controls through session hijacks and targeted malware.”

One customer in the study found over 100 Salesforce users with admin privileges. But that’s not the biggest problem.

11% of SaaS accounts are ‘zombie’ accounts, according to the study – accounts which haven’t been touched for three months. There are perfectly good reasons why this could be the case, such as maternity leave. Yet 80% of companies still have at least one account on their systems belonging to a suspended or terminated employee.

These dormant accounts are the perfect opening for hackers, the report argues. “An inactive account does not only represent a security risk, it’s also a financial burden on the company,” it notes. “In many of the organisations we protect, we often see double digit percentages of zombies – these are licenses which the company is paying for even though they aren’t being used.”
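
The zombie check itself is simple enough to sketch. A minimal example – the user records and 90-day threshold are purely illustrative, and in practice the data would come from the SaaS provider’s admin or audit API:

```python
# Flag "zombie" accounts: no login for 90+ days, or owned by someone
# who has left the company. Records below are invented for illustration.
from datetime import datetime, timedelta

users = [
    {"email": "alice@example.com", "last_login": datetime(2014, 11, 1), "employed": True},
    {"email": "bob@example.com",   "last_login": datetime(2014, 5, 2),  "employed": True},
    {"email": "carol@example.com", "last_login": datetime(2014, 9, 30), "employed": False},
]

cutoff = datetime(2014, 11, 10) - timedelta(days=90)

for user in users:
    if not user["employed"]:
        print(f"zombie: {user['email']} (terminated employee)")
    elif user["last_login"] < cutoff:
        print(f"zombie: {user['email']} (no login in 90+ days)")
```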

Similarly scary is the finding that the average company shares its files with 393 external domains, while 29% of employees share an average of 98 corporate files with their personal email accounts. This can happen unintentionally through sync agents, but again it represents a serious security risk.

What’s more, 92% of respondents in a recent Forrester survey indicated their security controls for SaaS applications were effective. “Security professionals with this mindset are rolling the dice with their sensitive data,” said Forrester’s Andras Cser. “Perimeter and endpoint protections provide minimal protection against new, emerging and largely unknown threats.”

Earlier this week a report from Databarracks found that human error was responsible for one in five data loss incidents.

More than three quarters of workloads will be through cloud data centres by 2018


By 2018, 78% of workloads will be processed by cloud data centres, while annual global cloud IP traffic will reach 6.5 zettabytes (ZB).

These are the two main takeaways from Cisco’s latest Cloud Index study, which aims to show the extent of growth of global data centre and cloud-based IP traffic.

Annual global data centre IP traffic will reach 8.6 ZB by the end of 2018, up significantly from last year’s total of 3.1 ZB. Workload density for cloud data centres – the average number of workloads per physical server – will reach 7.5, up from 5.2, while global cloud IP traffic will nearly quadruple over the next five years.
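
As a quick sanity check on those endpoints – and assuming the “nearly quadruple” claim puts 2013 cloud traffic at roughly 6.5 / 4 ≈ 1.6 ZB, a figure derived here rather than quoted from the report – the implied compound annual growth rates are:

```python
# Compound annual growth rates implied by the Cloud Index endpoints.
def cagr(start, end, years):
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Total data centre IP traffic: 3.1 ZB (2013) -> 8.6 ZB (2018)
print(f"data centre traffic CAGR: {cagr(3.1, 8.6, 5):.1%}")      # ~22.6%

# Cloud IP traffic "nearly quadruples" to 6.5 ZB by 2018,
# implying a 2013 baseline of roughly 6.5 / 4 = 1.625 ZB.
print(f"cloud traffic CAGR:       {cagr(6.5 / 4, 6.5, 5):.1%}")  # ~32.0%
```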

By 2018, 69% of global cloud workloads will be in private cloud data centres, with the remaining 31% in public clouds. More than half (59%) will be software as a service (SaaS) workloads, compared to 28% for infrastructure as a service (IaaS) and 13% for platform as a service (PaaS). This represents a downturn for IaaS, which currently accounts for 44% of cloud workloads; however, over five years each sector will still see a significant compound annual growth rate (CAGR) – 33% for SaaS, 21% for PaaS and 13% for IaaS.

The report notes that private cloud deployments are predominantly IaaS and PaaS, while the public cloud sees predominantly SaaS deployments.

2 billion people will be using cloud storage in 2018, while the global data created by Internet of Things devices will top 403 ZB each year. These are huge numbers, but the current figures are already substantial – 113.4 ZB of IoT data and 922 million cloud storage users in 2013.

According to the index, significant drivers of cloud traffic growth include the rapid adoption of cloud architectures, as well as the ability of cloud data centres to handle higher traffic loads.

Elsewhere, the research found that IPv6 adoption will fuel cloud growth. Globally, the report shows that nearly a quarter (24%) of internet users will be IPv6-ready by 2018, with nearly half of all fixed and mobile devices IPv6 capable. As of May this year, more than 96% of internet traffic worldwide was still carried over the elderly IPv4 protocol.

The public versus private cloud discussion in the report will certainly be of interest to those who wrote Verizon’s yearly offering on the state of enterprise cloud computing, which decried that particular debate as “inadequate to describe the massive variety of cloud services available today.”

You can find the full 41-page report here.

7 reasons why cloud governance is a challenge: Should we eradicate shadow IT?


Another day, another report bemoaning shadow IT for cloud computing. SafeNet’s Challenges of Cloud Information Governance study, conducted by the Ponemon Institute, is the latest to put the blame of compromising data at the door of unapproved IT activity.

Shadow IT, which involves employees bypassing company policy on website and technology usage, has meant cloud security is “stormy”, according to the report. More than half (55%) of the 1,864 IT and IT security practitioners surveyed admitted they were “not confident” that IT knows all the cloud computing services in use at their company.

Respondents added that payment information (56%) was the data that presented the greatest security risk, ahead of customer information (50%), consumer data (34%) and email (23%). Payment info, however, was the least likely to be stored in the cloud, probably as a result of this risk.

Part of the problem for IT managers is that conventional security methods are difficult to enforce with cloud apps and products: 71% of respondents agreed with that statement, while around half (48%) believe it’s more difficult to control or restrict end-user access. Similarly, 61% said cloud increases compliance risk, compared to only 8% who think it decreases it.

Another problem, as the survey revealed, is the age-old question of who is responsible for cloud data: the end user, or the cloud provider? It still hasn’t been answered: 33% argued it is the cloud user’s responsibility, 32% said the provider’s, while 35% said responsibility is shared.

Similarly, there is a lack of encryption in software as a service (SaaS) applications. Three quarters of respondents say they use document sharing and online backup tools, but only 28% say their organisation encrypts sensitive data directly within these apps.

With enterprise cloud usage set to increase in the coming years, the full 30-page report (pdf here) paints a fairly bleak picture. SafeNet goes through seven reasons why cloud governance is a challenge:

  • Uncertainty about who is accountable for safeguarding confidential or sensitive information stored in the cloud
  • IT is out of the loop when companies make decisions on the usage of cloud resources
  • IT functions are not confident they know all the cloud resources being used
  • Companies say encryption is important, but aren’t walking the walk on protecting apps
  • An inability to control how employees and third parties handle sensitive data makes compliance more difficult
  • More employees are using cloud apps without appropriate security training
  • Third parties are allowed to access sensitive data without security reinforcement, such as multi-factor authentication

Shadow IT is often blamed for this lapse in security. Can you be certain as a CIO or senior manager that your workforce isn’t using Dropbox to ping over collaborative documents, for instance? A blog from MobileIron back in March pondered the question: “If an auditor had full access to your Dropbox account right now, would they find a single bit of corporate data that shouldn’t be there?”

In almost all cases, it’s difficult to say no. So what’s the solution? Blacklisting apps is a brute-force method, although innovative employees can find many ways around the system, whether for malicious purposes or just in an honest attempt to be more productive. As a CloudTech article mused yesterday, your employees are a bigger risk to data loss than cybercriminals.

Education and increased visibility into cloud app usage are key to mitigating the risk of shadow IT, the report concludes – and that’s a good starting point. If you keep your head in the sand and pretend there isn’t a problem, your data could be seriously at risk.
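
As a concrete starting point for that visibility, here is a hypothetical sketch that tallies cloud-app domains from a web proxy log and flags anything outside the sanctioned list. The log format, domains and whitelist are all invented for illustration:

```python
# Hypothetical shadow IT discovery: count requests per cloud-app domain
# in a proxy log and report domains IT never sanctioned.
from collections import Counter
from urllib.parse import urlparse

SANCTIONED = {"office365.com", "salesforce.com"}  # illustrative whitelist

def shadow_it_report(log_lines):
    """Return {domain: request_count} for unsanctioned domains."""
    hits = Counter()
    for line in log_lines:
        url = line.rsplit(" ", 1)[-1]       # assume the URL ends each line
        domain = urlparse(url).netloc.removeprefix("www.")
        hits[domain] += 1
    return {d: n for d, n in hits.items() if d not in SANCTIONED}

sample_log = [
    "2014-11-10 10:32 GET https://www.dropbox.com/home",
    "2014-11-10 10:33 GET https://www.salesforce.com/login",
    "2014-11-10 10:35 GET https://www.dropbox.com/share",
]

print(shadow_it_report(sample_log))  # {'dropbox.com': 2}
```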