Category Archives: Opinion

ISO 27018 and protecting personal information in the cloud: a first year scorecard

ISO 27018 has been around for a year – but is it effective?

A year after it was published, ISO 27018 – the first international standard focusing on the protection of personal data in the public cloud – continues, unobtrusively and out of the spotlight, to move centre stage as the battle for cloud pre-eminence heats up.

At the highest level, this is a competitive field for those with the longest investment horizons and the deepest pockets – think million-square-foot data centres with 100,000+ servers using enough energy to power a city. According to research firm Synergy, the cloud infrastructure services market – Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and private and hybrid cloud – was worth $16bn in 2014, up 50 per cent on 2013, and is predicted to grow 30 per cent to over $21bn in 2015. Synergy estimated that the four largest players accounted for 50 per cent of this market, with Amazon at 28 per cent, Microsoft at 11 per cent, IBM at 7 per cent and Google at 5 per cent. Of these, Microsoft’s 2014 revenues almost doubled over 2013, whilst Amazon’s and IBM’s were each up by around half.

Significantly, the proportion of computing sourced from the cloud compared to on-premise is set to rise steeply: enterprise applications in the cloud accounted for one fifth of the total in 2014 and this is predicted to increase to one third by 2018.

This growth represents a huge year-on-year increase in the amount of personal data (PII, or personally identifiable information) going into the cloud, and in the number of cloud customers contracting for the various and growing types of cloud services on offer. But as the cloud continues to grow at these startling rates, the biggest inhibitor to cloud services growth – trust about the security of personal data in the cloud – continues to hog the headlines.

Under data protection law, the Cloud Service Customer (CSC) retains responsibility for ensuring that its PII processing complies with the applicable rules.  In the language of the EU Data Protection Directive, the CSC is the data controller.  In the language of ISO 27018, the CSC is either a PII principal (processing her own data) or a PII controller (processing other PII principals’ data).

Where a CSC contracts with a Cloud Service Provider (CSP), Article 17 of the EU Data Protection Directive sets out how the relationship is to be governed. The CSC must have a written agreement with the CSP; must select a CSP providing ‘sufficient guarantees’ over the technical security measures and organizational measures governing PII in the cloud service concerned; must ensure compliance with those measures; and must ensure that the CSP acts only on the CSC’s instructions.

As the pace of migration to the cloud quickens, the world of data protection law continues both to be fragmented – 100 countries have their own laws – and to move at a pace driven by the need to mediate all competing interests rather than the pace of market developments.

In this world of burgeoning cloud uptake, ISO 27018 is proving effective at bridging the gap between the dizzying pace of cloud market development and the slow and uncertain rate of legislative change by providing CSCs with a workable degree of assurance in meeting their data protection law responsibilities. Almost a year on from publication of the standard, Microsoft has become the first major CSP (in February 2015) to achieve ISO 27018 certification for its Microsoft Azure (IaaS/PaaS), Office 365 (PaaS/SaaS) and Dynamics CRM Online (SaaS) services (verified by BSI, the British Standards Institution) and its Microsoft Intune SaaS services (verified by Bureau Veritas).

In the context of privacy and cloud services, ISO 27018 builds on other information security standards within the ISO 27000 family. This layered, interlocking approach is proving supple enough in practice to deal with the increasingly wide array of cloud services. For example, it is not tied to any particular kind of cloud service and, as Microsoft’s certifications show, applies to IaaS (Azure), PaaS (Azure and Office 365) and SaaS (Office 365 and Intune). If, as shown in the graphic below, you consider computing services as a stack of layered elements ranging from networking (at the bottom of the stack) up through equipment and software to data (at the top), and that each of these elements can be carried out on premise or from the cloud (from left to right), then ISO 27018 is flexible enough to cater for all situations across the continuum.

Software as a Licence to Software as a Service: the cloud continuum

Indeed, the standard specifically states at Paragraph 5.1.1:

“Contractual agreements should clearly allocate responsibilities between the public cloud PII processor [i.e. the CSP], its sub-contractors and the cloud service customer, taking into account the type of cloud service in question (e.g. a service of an IaaS, PaaS or SaaS category of the cloud computing reference architecture).  For example, the allocation of responsibility for application layer controls may differ depending on whether the public cloud PII processor is providing a SaaS service or rather is providing a PaaS or IaaS service upon which the cloud service customer can build or layer its own applications.”

Equally, CSPs will generally not know whether their CSCs are sending PII to the cloud and, even if they do, they are unlikely to know whether or not particular data is PII. Here, another strength of ISO 27018 is that it applies regardless of whether particular data is, or is not, PII: certification simply assures the CSC that the service the CSP is providing is suitable for processing PII in relation to the performance by the CSP of its PII legal obligations.

Perhaps the biggest practical boon to the CSC however is the contractual certainty that ISO 27018 certification provides.  As more work migrates to the cloud, particularly in the enterprise space, the IT procurement functions of large customers will be following structured processes in order to meet the requirements of their business and, in certain cases, their regulators. In their requests for information, proposals and quotations from prospective CSPs, CSCs now have a range of interlocking standards including ISO 27018 to choose from in their statements of requirements for a particular Cloud procurement.  As well as short-circuiting the need for CSCs to spend time in writing up detailed specifications of their own requirements, verified compliance with these standards for the first time provides meaningful assurance and protection from risk around most aspects of cloud service provision. Organisations running competitive tenders can benchmark bidding CSPs against each other on their responses to these requirements, and then include as binding commitments the obligations to meet the requirements of the standards concerned in the contract when it is let.
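By way of illustration, here is a minimal sketch – in Python, with entirely hypothetical requirement names, weights and bid data, not drawn from any real procurement – of how a tendering team might benchmark bidding CSPs against a standards-based statement of requirements:

```python
# A minimal scoring sketch for comparing CSP bids against a statement of
# requirements built from named standards. All names, weights and bids
# below are hypothetical illustrations.

REQUIREMENTS = {
    "ISO 27001 certified": 3,
    "ISO 27018 certified": 3,
    "EU model clauses offered": 2,
    "Independent audit reports available": 1,
}

bids = {
    "CSP A": {"ISO 27001 certified": True, "ISO 27018 certified": True,
              "EU model clauses offered": True},
    "CSP B": {"ISO 27001 certified": True, "EU model clauses offered": True,
              "Independent audit reports available": True},
}

def score(bid: dict) -> int:
    """Sum the weights of the requirements a bidder claims to meet."""
    return sum(weight for req, weight in REQUIREMENTS.items() if bid.get(req))

max_score = sum(REQUIREMENTS.values())
for name, bid in sorted(bids.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(bid)}/{max_score}")
```

The winning requirements can then be carried into the contract as binding commitments, exactly as the procurement process described above envisages.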

In the cloud contract lifecycle, the flexibility provided by ISO 27018 certification, along with the contract and the CSP’s policy statements, goes beyond this to give the CSC a framework for discussing with the CSP, on an ongoing basis, the cloud PII measures taken and their adequacy.

In its first year, it is emerging that complying, and being seen to comply, with ISO 27018 is providing genuine assurance for CSCs in managing their data protection legal obligations.  This reassurance operates across the continuum of cloud services and through the procurement and contract lifecycle, regardless of whether or not any particular data is PII.  In customarily unobtrusive style, ISO 27018 is likely to go on being a ‘win’ for the standards world, cloud providers and their customers, and data protection regulators and policy makers around the world.

 

Giving employees the cloud they want

Businesses are taking the wrong approach to their cloud policies

There is an old joke about the politician who is so convinced she is right when she goes against public opinion, that she states, “It’s not that we have the wrong policies, it’s that we have the wrong type of voters!” The foolishness of such an attitude is obvious and yet, when it comes to mandating business cloud usage, some companies are still trying to live by a similar motto despite large amounts of research to the contrary.

Cloud usage has grown rapidly in the UK, with adoption rates shooting up over 60% in the last four years, according to the latest figures from Vanson Bourne. This reflects the increasing digitalisation of business and society and the role cloud has in delivering that.  Yet, there is an ongoing problem with a lack of clarity and understanding around cloud policies and decision making within enterprises at all levels. This is only natural, as there is bound to be confusion when the IT department and the rest of the company have differing conceptions about what the cloud policy is and what it should be. Unfortunately, this confusion can create serious security issues, leaving IT departments stuck between a rock and a hard place.

Who is right? The answer is, unsurprisingly, both! Increasingly, non-IT decision-makers and end-users are best placed to determine the value of new services to the business; but IT departments have long experience and expertise in the challenges of technology adoption and the implications for corporate data security and risk.

Cloud policy? What cloud policy?

Recent research from Trustmarque found that more than half (56 per cent) of office workers said their organisation didn’t have a cloud usage policy, while a further 28 per cent didn’t even know if one was in operation. Despite not knowing their employer’s cloud policy, nearly 1 in 2 office workers (46 per cent) said they still used cloud applications at work. Furthermore, 1 in 5 cloud users admitted to uploading sensitive company information to file sharing and personal cloud storage applications.

When employees aren’t sure how to behave in the cloud and companies don’t know what information employees are disseminating online, the question of a security breach becomes one of when, not if. Moreover, with 40 per cent of cloud users admitting to knowingly using cloud applications that haven’t been sanctioned or provided by IT, it is equally clear that employee behaviour isn’t about to change. Therefore, company policies must change instead – which often is easier said than done. On the one hand, cloud applications are helping increase productivity for many enterprises, and on the other, the behaviour of some staff is unquestionably risky. The challenge is maintaining an IT environment that supports employees’ changing working practices, but at the same time is highly secure.

By ignoring cloud policies, employees are also contributing to cloud sprawl. More than one quarter of cloud users (27 per cent) said they had downloaded cloud applications they no longer use. The sheer number and variety of cloud applications being used by employees means costs can quickly spiral out of control. This presents another catch-22 for CIOs seeking balance, as they look to keep costs down, ensure information security and empower employees to use the applications needed to work productively.

The road to bad security is paved with good intentions

The critical finding from the research is that employees know what they are doing is not sanctioned by their organisation and still engage in that behaviour. However, it’s important to recognise that this is generally not due to malicious intent, but rather because they see the potential benefits for themselves or their organisation and security restrictions mean their productivity is hampered – so employees look for a way around those barriers.

It is not in the interest of any business to constrain the impulse of employees to try to be more efficient. Instead, businesses should be looking for the best way to channel that instinct while improving security. There is a real opportunity for those businesses that can marry employees’ desire to use the cloud productively with the appropriate security precautions, and so get the very best out of the cloud for the enterprise.

Stop restricting and start empowering

The ideal solution for companies is to move towards an integrated cloud adoption/security lifecycle that links measurement, risk/benefit assessment and policy creation, policy enforcement, education and app promotion, so that there is a positive feedback loop reinforcing both cloud adoption and good security practices. This means an organisation will gain visibility into employees’ activity in the cloud so that it can allow their favourite applications to be used while blocking specific risky activity. This is far more effective than a blanket ban as it doesn’t compromise the productive instincts of employees, but instead encourages good behaviour and promotes risk-aware adoption. For this change to be effected, IT departments need to alter their mindset and become brokers of services such as cloud, rather than builders of constricting systems. If organisations can empower their users by, for example, providing cloud-enabled self-service, single sign-on and improved identity lifecycle management, they can simultaneously simplify adoption and reduce risk.
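As a rough illustration of the “allow the application, block the risky activity” idea, here is a minimal sketch in Python; the app names, action names and event shape are hypothetical, not any vendor’s actual API:

```python
# A minimal sketch of policy evaluation that permits sanctioned apps while
# blocking specific risky activity. App names, actions and the event shape
# are hypothetical illustrations.

from dataclasses import dataclass

SANCTIONED_APPS = {"Dropbox", "Salesforce", "Office 365"}
RISKY_ACTIONS = {"upload_sensitive", "share_external"}

@dataclass
class CloudEvent:
    user: str
    app: str
    action: str

def evaluate(event: CloudEvent) -> str:
    # Unknown apps are flagged for risk/benefit assessment, not banned
    # outright: visibility feeds the policy-creation loop described above.
    if event.app not in SANCTIONED_APPS:
        return "flag_for_review"
    # Sanctioned app, risky activity: block the action and educate the user,
    # without taking the whole application away.
    if event.action in RISKY_ACTIONS:
        return "block_and_educate"
    return "allow"

print(evaluate(CloudEvent("alice", "Dropbox", "share_external")))  # block_and_educate
print(evaluate(CloudEvent("bob", "Dropbox", "sync")))              # allow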

Ignorance of cloud policies among staff significantly raises the possibility of data loss, account hijacking and other cloud-related security threats. Yet since the motivation is, by and large, the desire to be productive rather than malicious, companies need to find a way to blend productivity and security instead of having them square off against each other. It is only through gaining visibility into cloud usage behaviour that companies can get the best of both worlds.

Written by James Butler, chief technology officer, Trustmarque

Why did anyone think HP was in it for public cloud?

HP president and chief executive officer Meg Whitman (pictured right) is leading HP’s largest restructuring ever

Many have jumped on a recently published interview with Bill Hilf, the head of HP’s cloud business, as a sign HP is finally coming to terms with its inability to make a dent in Amazon’s public cloud business. But what had me scratching my head is not that HP would so blatantly seem to cede ground in this segment, but why many assume it wanted that ground in the first place.

For those of you who didn’t see the NYT piece, or the subsequent pieces from the hordes of tech insiders and journalists more or less toeing the “I told you so” line, Hilf was quoted as candidly saying: “We thought people would rent or buy computing from us. It turns out that it makes no sense for us to go head-to-head [with AWS].”

HP has made mistakes in this space – the list is long, and others have done a wonderful job of fleshing out the classic “large incumbent struggles to adapt to new paradigm” narrative that the company’s story, so far, smacks of.

I would only add that it’s a shame HP didn’t pull a “Dell” and publicly get out of the business of directly offering public cloud services to enterprise users – a move that served Dell well. Standing up public cloud services is by most accounts an extremely capital-intensive exercise that a company like HP, given its current state, is simply not best positioned to see through.

But it’s also worth pointing out that a number of interrelated factors have been pushing HP towards private and hybrid cloud for some time now, and despite HP’s insistence that it still runs the largest OpenStack public cloud – a claim other vendors have made in the past – its dedication to public cloud has always seemed superficial at best (particularly if you’ve had the, um, privilege, of sitting through years of sermons from HP executives at conferences and exhibitions).

HP’s heritage is in hardware – desktops, printers and servers – and servers still represent a reasonably large chunk of the company’s revenue, something it has no choice but to keep in mind as it seeks to move up the stack in other areas (its NFV and cloud workload management-focused acquisitions of late attest to this, beyond the broader industry trend). According to the latest Synergy Research figures the company still has a lead in the cloud infrastructure market, but primarily in private cloud.

It wants to keep that lead in private cloud, no doubt, but it also wants to bolster its pitch to the scale-out market specifically (where telcos are quite keen to play) without alienating its enterprise customers. This also means delivering capabilities that are starting to see increased demand among that segment, like hybrid cloud workload management, security and compliance tools, and offering a platform that has enough buy-in to ensure a large ecosystem of applications and services will be developed for it.

Whether OpenStack is the best way of hitting those sometimes competing objectives remains to be seen – HP hasn’t had these products in the market very long, and take-up has been slow – but that’s exactly what Helion is to HP.

Still, it’s worth pointing out that OpenStack, while trying to evolve capabilities that would whet the appetites of communications services providers and others in the scale-out segment (NFV, object storage, etc.), is seeing much more take-up from the private cloud crowd. Indeed, one of the key benefits of OpenStack is easy burstability into, and (more of a work in progress) federatability between, OpenStack-based public and private clouds. The latter, by the way, is definitely consistent with the logic underpinning HP’s latest cloud partnership with the European Commission, which looks at – among other things – the potential federatability of regional clouds that have strong security and governance requirements.

Even HP’s acquisition strategy – particularly its purchase of Eucalyptus, a software platform that makes it easy to shift workloads between on premise systems and AWS – seems in line with the view that a private cloud needs to be able to lean on someone else’s datacentre from time to time.

HP has clearly chosen its mechanism for doing just that, just as VMware looked at the public cloud and thought much the same in terms of extending vSphere and other legacy offerings. Like HP, it wanted to hedge its bets and stand up its own public cloud platform because, apart from the “me too” aspect, it thought doing so was in line with where users were heading, and to a much lesser extent didn’t want to let AWS, Microsoft and Google have all the fun if it didn’t have to. But public cloud definitely doesn’t seem front-of-mind for HP, or VMware, or most other vendors coming at this from an on-premise heritage (HP’s executives mentioned “public cloud” just once in the past three quarterly results calls with journalists and analysts).

Funnily enough, even VMware has come up with its own OpenStack distribution, and now touts a kind of “one cloud, any app, any device” mantra that has hybrid cloud written all over it – ‘hybrid cloud service’ being what the previous incarnation of its public cloud service was called.

All of this is of course happening against the backdrop of the slow crawl up the stack with NFV, SDN, cloud resource management software, PaaS, and so forth – not just at HP. Cisco, Dell and IBM are all looking to make inroads in software, while at the same time on the hardware side fighting off lower-cost Asian ODMs that are – with the exception of IBM – starting to significantly encroach on their turf, particularly in the scale-out markets.

The point is, HP, like many old-hat enterprise vendors, knows that what ultimately makes AWS so appealing isn’t its cost (it can actually be quite expensive, though prices – and margins – are dropping) or ease of procurement as an elastic hosting provider. It’s the massive ecosystem of services that gives the platform so much value, and the ability to tap into them fairly quickly. HP has bet the farm on OpenStack’s capacity to evolve into a formidable competitor to AWS in that sense (IBM and Cisco are also, to varying degrees, toeing a similar line), and it shouldn’t be dismissed outright given the massive buy-in that open source community has.

But – and some would view this as part of the company’s problem – HP’s bread and butter has been, and continues to be, offering the technologies and tools to stand up predominantly private clouds (or, in the case of service providers, very large private clouds – it’s also big on converged infrastructure) and supporting those technologies and tools, which really isn’t – directly – the business that AWS is in, despite there being substantial overlap in the enterprise customers they go after.

However, while it started in this space as an elastic hosting provider offering CDN and storage services, AWS has more or less evolved into a kind of application marketplace, where any app can be deployed on almost infinitely scalable compute and storage platforms. Interestingly, AWS’s messaging has shifted from outright hostility towards the private cloud crowd (and private cloud vendors) towards being more open to the idea that some enterprises simply don’t want to expose their workloads or host them on shared infrastructure – in part because it understands there’s growing overlap, and because it wants them to on-board their workloads onto AWS.

HP’s problem isn’t that it tried and failed at the public cloud game – you can’t really fail at something if you don’t have a proper go at it; and on the private cloud front, Helion is still quite young, as are OpenStack, Cloud Foundry and many of the technologies at the core of its revamped strategy.

Rather, it’s that HP, for all its restructuring efforts, talk of change and trumpeting of cloud, still risks getting stuck in its old-world thinking, which could ultimately hinder the company further as it seeks to transform itself. AWS senior vice president Andy Jassy, who hit out at tech companies like HP at the unveiling of Amazon’s Frankfurt-based cloud service last year, hit the nail on the head: “They’re pushing private cloud because it’s not all that different from their existing operating model. But now people are voting with their workloads… It remains to be seen how quickly [these companies] will change, because you can’t simply change your operating model overnight.”

The Internet of Things: Where hope tends to triumph over common sense

The Internet of Things is coming. But not anytime soon.

The excitement around the Internet of Things (IoT) continues to grow, and even more bullish predictions and lavish promises will be made about and on behalf of it in the coming months. 2015 will see us reach “peak oil” in the form of increasingly outlandish predictions and plenty of over-enthusiastic venture capital investments.

But the IoT will not change the world in 2015. It will take at least 10 years for the IoT to become pervasive enough to transform the way we live and work, and in the meantime it’s up to us to decode the hype and figure out how the IoT will evolve, who will benefit, and what it takes to build an IoT network.

Let’s look at the predictions that have been made for the number of connected devices. The figure of 1 trillion has been used several times by a range of incumbents and can only have been arrived at using a very, very relaxed definition of what a “connected thing” is. Of course, if you’re willing to include RFID tags in your definition this number is relatively easy to achieve, but it doesn’t do much to help us understand how the IoT will evolve. At Ovum, we’re working on the basis of a window of between 30 billion and 50 billion connected devices by 2020. The reason for the large range is that there are simply too many factors at play to be any more precise.

Another domain where enthusiasm appears to be comfortably ahead of common sense is in discussions about the volume of data that the IoT will generate. Talk of an avalanche of data is nonsense. There will be no avalanche; instead we’ll see a steadily rising tide of data that will take time to become useful. When building IoT networks the “data question” is one of the things architects spend a lot of time thinking and worrying about. In truth, the creators of IoT networks are far more likely to be disappointed that their network is taking far longer than expected to reach the scale of deployment necessary to produce the volumes of data they had boasted about to their backers.
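A quick back-of-envelope calculation shows why “rising tide” is the better metaphor. The device count and message size below are illustrative assumptions, not Ovum figures:

```python
# Back-of-envelope arithmetic: a million sensors, each sending a 100-byte
# reading every minute. Both figures are illustrative assumptions.

devices = 1_000_000
reading_bytes = 100
readings_per_day = 24 * 60  # one reading per minute

bytes_per_day = devices * reading_bytes * readings_per_day
print(f"{bytes_per_day / 1e9:.0f} GB/day")           # 144 GB/day
print(f"~{bytes_per_day * 365 / 1e12:.0f} TB/year")  # ~53 TB/year
```

Even a million devices reporting every minute produce volumes that a modest storage cluster absorbs comfortably – a steady tide, not an avalanche.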


Even the question of who will make money out of the IoT, and where they will make it, is being influenced too much by hope and not enough by common sense. The future of the IoT does not lie in the connected home or in bracelets that count your steps and measure your heartbeat. The vast majority of IoT devices will not beautify our homes or help us with our personal training regime. Instead they will be put to work performing very mundane tasks like monitoring the location of shipping containers, parcels, and people. The “Industrial IoT”, which spans manufacturing, utilities, distribution and logistics, will make up by far the greatest share of the IoT market. These devices will largely remain unseen by us, most will be of an industrial grey colour, and only a very small number of them will produce data that is of any interest whatsoever outside a very specific and limited context.

Indeed, the “connected home” is going to be one of the biggest disappointments of the Internet of Things, as its promoters learn that the ability to change the colour of your living room lights while away on business doesn’t actually amount to a “life changing experience”. That isn’t to say that our homes won’t be increasingly instrumented and connected; they will. But the really transformational aspects of the IoT lie beyond the home.

There are two other domains where IoT will deliver transformation, but over a much longer timescale than enthusiasts predict. In the world of automotive, cars will become increasingly connected and increasingly smart. But it will take over a decade before the majority of cars in use can boast the levels of connectivity and intelligence we are now seeing in experimental form. The other domain that will be transformed over the long-term is healthcare, where IoT will provide us with the ability to monitor and diagnose conditions remotely, and enable us to deliver increasingly sophisticated healthcare services well beyond the boundaries of the hospital or the doctor’s surgery.


But again, we are in the earliest stages of research and experimentation, and proving some of the ideas are practical, safe and beneficial enough to merit broader roll-out will take years, not months. The Internet of Things will transform the way we understand our environment as well as the people and things that exist within it, but that transformation will barely have begun by 2020.

Gary Barnett is Chief Analyst, Software with Ovum and also serves as the CTO for a non-profit organisation that is currently deploying what it hopes will become the world’s biggest urban air quality monitoring network.

How to achieve success in the cloud

To cloud or not to cloud? With the right strategy, it need not be the question.

There are two sides to the cloud coin: one positive, the other negative, and too many people focus on one at the expense of the other for a variety of reasons ranging from ignorance to wilful misdirection. But ultimately, success resides in embracing both sides and pulling together the capabilities of both enterprises and their suppliers to make the most of the positive and limit the negative.

Cloud services can either alleviate or compound the business challenges identified by Ovum’s annual ICT Enterprise Insights program, based on interviews with 6,500 senior IT executives. On the positive side, both public and private clouds, and everything in between, help:

Boost ROI at various levels: From squeezing more utilization from the underlying infrastructure to making it easier to launch new projects with the extra resources exposed as a result.

Deal with the trauma of major organisational/structural changes, as they can adapt to the ups and downs of requirements evolution.

Improve customer/citizen experience, and therefore satisfaction: This has been one of the top drivers for cloud adoption. Cloud computing is at its heart user experience-centric. Unfortunately many forget this, preferring instead to approach cloud computing from a technical perspective.

Deal with security, security compliance, and regulatory compliance: An increasing number of companies acknowledge that public cloud security and compliance credentials are at least as good as, if not better than, their own, particularly in a world where security and compliance challenges are evolving so rapidly. Similarly, private clouds require security to shift from reactive and static to proactive and dynamic, whereby workloads and data are secured as they move in and out of internal IT’s boundaries.

On the other hand, cloud services have the potential to compound business challenges. For instance, the rise of public cloud adoption contributes to challenges related to increasing levels of outsourcing. It is all about relationship management, and therefore relates to another business challenge: improving supplier relationships.

In addition to having to adapt to new public cloud offerings (rather than the other way round), once the right contract is signed (another challenging task), enterprises need to proactively manage not only their use of the service but also their relationships with the service provider, if only to be able to keep up with their fast-evolving offerings.

Similarly, cloud computing adds to the age-old challenge of aligning business and IT at two levels: cloud-enabling IT, and cloud-centric business transformation.

From a cloud-enabling IT perspective, the challenge is to understand, manage, and bridge a variety of internal divides and convergences, including consumer versus enterprise IT, developers versus IT operations, and virtualisation ops people versus network and storage ops. As the pace of software delivery accelerates, developers and administrators need not only to learn from and collaborate with one another, but also to deliver the right user experience – not just the right business outcomes. Virtualisation ops people tend to be much more in favour of the software-defined datacentre, storage, and networking (SDDC, SDS, SDN) than network and storage ops people, with a view to increasingly taking control of datacentre and network resources. The storage and network ops people, however, are not so keen on letting the virtualisation people in.

When it comes to cloud-centric business transformation, IT is increasingly defined in terms of business outcomes within the context of its evolution from application siloes to standardised, shared, and metered IT resources, from a push to a pull provisioning model, and more importantly, from a cost centre to an innovation engine.

The challenge, then, is to understand, manage, and bridge a variety of internal divides and convergences including:

Outside-in (public clouds for green-field application development) versus inside-out (private cloud for legacy application modernization) perspectives. Supporters of the two approaches can be found on both the business and IT sides of the enterprise.

Line-of-business executives (CFO, CMO, CSO) versus CIOs regarding cloud-related roles, budgets, and strategies: The up-and-coming role of chief digital officer (CDO) exemplifies the convergence between technology and business C-level executives. All CxOs can potentially fulfil this role, with CDOs increasingly regarded as “CEOs in waiting”. In this context, there is a tendency to describe the role as the object of a war between CIOs and other CxOs. But what digital enterprises need is not CxOs battling each other, but CxOs coordinating their IT investments and strategies. This is easier said than done since, beyond the usual political struggles, there is a disparity between all sides in terms of knowledge, priorities, and concerns.

Top executives versus middle management: Top executives who are broadly in favour of cloud computing in all its guises, versus middle management who are much less eager to take it on board, but need to be won over since they are critical to cloud strategy execution.


Shadow IT versus Official IT: Official IT increasingly acknowledges the benefits of shadow IT (it makes an organisation more responsive and capable of delivering products and services that official IT cannot currently support) as well as its shortcomings (in terms of costs, security, and lack of coordination, for example). However, too much focus on control at the expense of user experience and empowerment perpetuates shadow IT.

Only by bridging these divides will your organisation manage to balance both sides of the cloud coin.

Laurent Lachal leads Ovum Software Group’s cloud computing research. Besides Ovum, where he has spent most of his 20-year career as an analyst, Laurent has also been European software market group manager at Gartner Ltd.

A Big, Perhaps Watershed Week of Cloud Announcements

  • Google harmonized its cloud computing business to a single entity, with a pricing model intended to hold customers by enticing them to build ever cheaper and more complex software. 
  • Cisco announced it would spend $1 billion on a “cloud of clouds” project. 
  • Microsoft’s new CEO made his first big public appearance, offering Office for the Apple iPad, partly as a way to sell more of its cloud-based Office 365 product.
  • Amazon Web Services announced the general release of its cloud-based desktop computing business, as well as a deal to offer cloud-based enterprise software tools to industries like healthcare and manufacturing.


Will Disks Using Shingled Magnetic Recording Kill Tape for Cold Storage?

We previously reported on the rumored Seagate/eVault “cold storage” tech initiative seeking to use disks to supplant tape libraries.

Now comes this analysis from The Register.

We know Facebook’s Open Compute Project has a cold storage vault configuration using shingled magnetic recording drives. Both Google (mail backup and more) and Amazon (Glacier) have tape vaults in their storage estate. Shingled drives could change that equation because, probably, the cost/GB of a 6TB shingled drive is a lot less than that of a 4TB drive and, over, say, 500,000 drives, that saving turns into a big sum of dollars.
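To see how that saving compounds, here is a back-of-envelope sketch; the drive prices are invented placeholders, not Seagate list prices, but they show the shape of the sum:

```python
# Illustrative cost/GB comparison. The prices below are hypothetical
# placeholders; only the structure of the calculation matters.

smr_6tb_price = 250.0           # USD, assumed 6TB shingled drive
conventional_4tb_price = 220.0  # USD, assumed 4TB conventional drive

cost_per_gb_smr = smr_6tb_price / 6000
cost_per_gb_conv = conventional_4tb_price / 4000

# Saving per 6TB of cold-storage capacity, scaled to a 500,000-drive estate
drives = 500_000
saving = (cost_per_gb_conv - cost_per_gb_smr) * 6000 * drives

print(f"SMR: ${cost_per_gb_smr:.4f}/GB, conventional: ${cost_per_gb_conv:.4f}/GB")
print(f"Estate-level saving: ${saving:,.0f}")  # ~$40,000,000 on these assumptions
```

Even a cost/GB gap of a cent or so, multiplied across half a million drives’ worth of capacity, lands in the tens of millions of dollars.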

What are shingled drives, you ask? Briefly, shingled magnetic recording overlaps adjacent tracks like roof shingles, squeezing more capacity from the same platter at the cost of slower rewrites – a trade-off well suited to cold storage. Seagate has a video explaining the technique.

Survey Shows Extent of NSA/PRISM’s Damage to US Cloud Companies

A survey by the Cloud Security Alliance found that 56% of non-US residents were now less likely to use US-based cloud providers, in light of recent revelations about government access to customer information.

During June and July of 2013, news of a whistleblower, US government contractor Edward Snowden, dominated global headlines. Snowden provided evidence of US government access to information from telecommunications and Internet providers via secret court orders as specified by the Patriot Act. The subsequent news leaks indicated that allied governments of the US may have also received some of this information and acted upon it in unknown ways. As this news became widespread, it led to a great deal of debate and soul-searching about appropriate access to an individual’s digital information, both within the United States and elsewhere.

CSA initiated this survey to collect a broad spectrum of member opinions about this news, and to understand how this impacts attitudes about using public cloud providers.

Hey Network Solutions, New Rule: Use Social During an Outage

Network Solutions is in trouble today. Rumor has it DNS issues are the root cause, but it’s unclear. What is clear is that if your site is hosted by NetSol, it is unreachable.

If you dig really hard you can find links to their blog, which might offer more detail. But… it’s unreachable (duh).

I picture NetSol personnel happily posting critical updates to a blog only they can reach.

New Rule: If your servers/DNS/routers/network are experiencing problems, use your Twitter and Facebook accounts to communicate with customers. Don’t want your dirty laundry messing up your marketing? Set up dedicated Twitter/Facebook support accounts.