Category archive: Opinion

Can Safe Harbour stay afloat?

When the European Court of Justice declared the US-EU Safe Harbour framework invalid in the case of Schrems v Data Protection Commissioner, some 4,500 companies began to panic. Many are still struggling to decide what to do: should they implement an alternative method of transferring personal data from the EEA to the US, or should they simply wait to see what happens next?

Waiting is a risky game, as the European data protection authorities’ (DPAs) grace period extends only until 31 January 2016, by which time companies must have their cross-Atlantic data transfers in order. After this date, enforcement action may be taken against those transferring personal data without a suitable mechanism in place to ensure adequate protection of personal data. Although the slow churning of US and EU authorities negotiating a replacement for Safe Harbour can be heard in the distance, no timeline has yet been set for its implementation. There is also the added complication of the newly approved EU General Data Protection Regulation, which is likely to muddy the waters of an already murky negotiation.

Will Safe Harbour 2.0 come to the rescue?

According to the European Commissioner for Justice, Consumers and Gender Equality (the Commissioner), the negotiations on ‘Safe Harbour 2’ continue, undoubtedly under added pressure following the invalidation of the original Safe Harbour framework. Whilst both sides understand the sense of urgency, no proposal has yet met the needs of both the national security services and the European DPAs.

In autumn 2013, the European Commission produced a report setting out 13 recommendations for improving Safe Harbour. Recommendation 13 required that the Safe Harbour national security exception be used only to the extent strictly necessary. This recommendation remains a sticking point in negotiations. Human rights and privacy organisations have little hope that these hurdles will be effectively overcome: in November 2015, a letter was sent to the Commissioner from EU and US NGOs, urging politicians to commit to a comprehensive modernisation of data protection laws on both sides of the Atlantic.

Of course, the real bridge to cross is US law reform, which the Commissioner sees as more about guaranteeing EU rules in the US than changing US law. It seems the ball is very much in the North American court.

Do not, however, be fooled by the House of Representatives passing the Judicial Redress Act, which allows foreign citizens to bring legal suits in the US for alleged violations of their privacy rights. Reform is not easy, and it is now for the Senate to decide whether to follow suit, or to find a way to water down the Act. The govtrack.us website, which follows the progress of bills through Capitol Hill, gives the Act a 22% chance of success. With odds like these, maybe we shouldn’t bet on cross-Atlantic privacy reform in the immediate future.

The future of global surveillance

Whilst there have been positive noises coming from the White House regarding the privacy rights of non-Americans, it is unlikely in a post-9/11 world that any government will allow itself to be prevented from accessing data of either its own or foreign nationals.

In light of recent terror attacks all over the world, the Snowden debate is more relevant than ever. How far should government intelligence agencies go towards monitoring communications? Snowden forced governments to think twice about their surveillance practices, but recent attacks may have the opposite effect. Although their so-called ‘snooping’ may breach citizens’ fundamental rights, it may be more a question of how many civil liberties citizens are willing to exchange for safety and security.

The British Government has suggested that fast-track, aggressive surveillance proposals (dubbed ‘the Snoopers’ Charter’) are the way forward in helping prevent acts of terror. This new emphasis on drones and cyber-experts marks a big shift from 2010’s strategic defence review. This is a war fought online and across borders, and one cannot ignore Safe Harbour’s place in that context.

The implications for global e-commerce

Hindering cross-border data transfer impedes e-commerce and could potentially cause huge industries to collapse. By 2017, over 45 percent of the world’s population is expected to be engaging in online commerce. A clear path across the Atlantic is essential.

The Information Technology and Innovation Foundation put it bluntly in stating that, aside from taking an axe to the undersea fibre optic cables connecting Europe to the US, it is hard to imagine a more disruptive action to transatlantic digital commerce than a stalemate on data transfer – a global solution must be reached, and soon.

The future of global cross-border data transfer

Time is running out on the Safe Harbour negotiations, and creating frameworks such as this is not simple – especially when those negotiating are starting so far apart and one side (the EU) does not speak with a unified voice.

Most of the 28 European Member States have individual national DPAs, not all of whom agree on the overall approach to reform. If the DPAs could speak in one voice, there could be greater cooperation with the Federal Trade Commission, which could hasten agreements on suitable frameworks for cross-Atlantic data transfers. In the US, much will come down to the law makers and, with an election brewing, it is worth considering the different scenarios.

Even though the two main parties in the US stand at polar ends of the spectrum on many policies, they may not be so distant when it comes to global surveillance. In the wake of the Snowden revelations, Hillary Clinton defended US global surveillance practices. The Republican Party has also been seen to favour increased surveillance of certain target groups. The question remains: if either party, when elected, is happy to continue with the current surveillance programme, how will the US find common ground with the EU?

Conclusion

Europe seems prepared to act alone in protecting the interests of EU citizens, and the CJEU’s decision in Schrems was a bold and unexpected move on the court’s part. However, with the ever increasing threat to EU citizens’ lives through organised terror, the pressure may be mounting on the EU to relax its stance on data privacy, which could mean that finding common ground with the US may not be so difficult after all. We shall have to wait and see how the US-EU negotiations on Safe Harbour 2 evolve, and whether the European Commission will stand firm and require the US to meet its ‘equivalent’ standard.

 

Written by Sarah Pearce, Partner & Jane Elphick, Associate at Cooley (UK) LLP.

Deciding between private and public cloud

Innovation and technological agility are now at the heart of an organization’s ability to compete. Companies that rapidly onboard new products and delivery models gain competitive advantage, not by eliminating the risk of business unknowns, but by learning quickly and fine-tuning based on the experience gathered.

Yet traditional IT infrastructure models hamper an organization’s ability to deliver the innovation and agility it needs to compete. Enter the cloud.

Cloud-based infrastructure is an appealing prospect to address the IT business agility gap, characterized by the following:

  1. Self-service provisioning. Aimed at reducing the time to solution delivery, cloud allows users to choose and deploy resources from a defined menu of options.
  2. Elasticity to match demand.  Pay for what you use, when you use it, and with flexible capacity.
  3. Service-driven business model.  Transparent support, billing, provisioning, etc., allows consumers to focus on the workloads rather than service delivery.

There are many benefits to this approach: cloud or “infrastructure as a service” providers typically let users pay only for what they consume, when they consume it, while offering fast, flexible infrastructure deployment and low-risk trial and error for new solutions.

Public cloud or private cloud – which is the right option?

A cloud model can exist either on-premises, as a private cloud, or via public cloud providers.

In fact, the most common model is a mix of private and public clouds.  According to a study published in the RightScale 2015 State of the Cloud Report, enterprises are increasingly adopting a portfolio of clouds, with 82 percent reporting a multi-cloud strategy as compared to 74 percent in 2014.

With that in mind, each workload you deploy (e.g. tier-1 apps, test/dev, etc.) needs to be evaluated to see if it should stay on-premises or be moved offsite.

So what are the tradeoffs to consider when deciding between private and public cloud?  First, let’s take a look at the considerations for keeping data on-premises.

  1. Predictable performance.  When consistent performance is needed to support key business applications, on-premises IT can deliver performance and reliability within tight tolerances.
  2. Data privacy.  It’s certainly possible to lose data from a private environment, but for the most part, on-premises IT is seen as a better choice for controlling highly confidential data.
  3. Governance and control.  The private cloud can be built to guarantee compliance with country restrictions, chain-of-custody requirements, or security clearance constraints.

Despite these tradeoffs, there are instances in which a public cloud model is ideal, particularly cloud bursting, where an organization experiences temporary demand spikes (such as seasonal influxes).  The public cloud can also offer an affordable option for disaster recovery and backup/archiving.
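Cloud bursting is easier to picture with a concrete, if simplified, example. The sketch below is purely illustrative: the capacity figure, threshold and placement function are hypothetical assumptions, not any vendor’s API.

```python
# Illustrative sketch only: a simplified cloud-bursting placement policy.
# The capacity figure, threshold and functions are hypothetical assumptions.

ON_PREM_CAPACITY_VMS = 200   # assumed fixed private-cloud capacity
BURST_THRESHOLD = 0.85       # start bursting once utilisation passes 85%

def place_workload(current_on_prem_vms: int, requested_vms: int) -> dict:
    """Decide how many requested VMs stay on-premises and how many burst out."""
    headroom = ON_PREM_CAPACITY_VMS - current_on_prem_vms
    utilisation = current_on_prem_vms / ON_PREM_CAPACITY_VMS

    if utilisation < BURST_THRESHOLD and requested_vms <= headroom:
        return {"on_prem": requested_vms, "public_cloud": 0}

    # Keep whatever still fits on-premises; the overflow (e.g. a seasonal
    # spike) bursts to the public cloud.
    on_prem = max(0, min(requested_vms, headroom))
    return {"on_prem": on_prem, "public_cloud": requested_vms - on_prem}

# Example: a seasonal spike of 60 VMs arrives while 180 are already running.
print(place_workload(current_on_prem_vms=180, requested_vms=60))
# -> {'on_prem': 20, 'public_cloud': 40}
```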

Is your “private cloud” really a cloud at all?

There are many examples of the same old legacy IT dressed up with a thin veneer of cloud paint.  The fact is, traditional IT’s complexity and inefficiency make it unsuitable for delivering a true private cloud.

Today, hyperconverged infrastructure is one of the fastest growing segments in the $107B IT infrastructure market, in part because of its ability to enable organizations to deliver a cloud-operating model with on-premises infrastructure.

Hyperconvergence surpasses the traditional IT model by incorporating IT infrastructure and services below the hypervisor onto commodity x86 “building blocks”.  For example, SimpliVity hyperconverged infrastructure is designed to work with any hypervisor on any industry-standard x86 server platform. The combined solution provides a single, shared resource pool across the entire IT stack, including built-in data efficiency and data protection, eliminating point products and inefficient siloed IT architectures.

Some of the key characteristics of this approach are:

  • Single vendor for deploying and supporting infrastructure.  Traditional IT requires users to integrate more than a dozen disparate components just to support their virtualized workloads.  This causes slow deployments, finger pointing and performance bottlenecks, and limits how the infrastructure can be reused for changing workloads. Alternatively, hyperconvergence is architected as a single atomic building block, ready to be deployed when the customer unpacks the solution.
  • The ability to start small and scale out without penalty.  Hyperconvergence eliminates the need for resource allocation guesswork.  Simply start with the resources needed now, then add more, repurpose, or shut down resources with demand—all with minimal effort and cost, and no performance degradation.
  • Designed for self-service provisioning. Hyperconvergence offers the ability to create policies, provision resources, and move workloads, all at the VM-level, without worrying about the underlying physical infrastructure.  Because they are software defined, hyperconverged solutions can also integrate with orchestration and automation tools like VMware vRealize Automation and Cisco UCS Director.
  • Economics of public cloud. By converging all IT infrastructure components below the hypervisor and reducing operating expenses through simplified, VM-centric management, hyperconverged offerings deliver a cost model that closely rivals the public cloud. SimpliVity, for example, is able to deliver a cost-per-VM that is comparable to AWS, including associated operating expenses and labour costs (a simplified cost-per-VM comparison follows this list).
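To make the cost-per-VM comparison concrete, here is a minimal back-of-the-envelope sketch. Every figure is a hypothetical placeholder rather than SimpliVity or AWS pricing; the point is only the shape of the calculation (amortised capex plus opex and labour, spread across the VM count, set against an on-demand hourly rate).

```python
# Illustrative sketch only: a back-of-the-envelope cost-per-VM comparison.
# All figures are hypothetical placeholders, not SimpliVity or AWS pricing.

def on_prem_cost_per_vm_month(hardware_capex: float, monthly_opex: float,
                              monthly_labour: float, vm_count: int,
                              amortisation_months: int = 36) -> float:
    """Amortise capex over the period and spread all monthly costs across VMs."""
    monthly_total = (hardware_capex / amortisation_months
                     + monthly_opex + monthly_labour)
    return monthly_total / vm_count

def public_cloud_cost_per_vm_month(hourly_rate: float,
                                   hours_per_month: int = 730) -> float:
    """Simple on-demand cost for one always-on instance."""
    return hourly_rate * hours_per_month

on_prem = on_prem_cost_per_vm_month(hardware_capex=300_000, monthly_opex=4_000,
                                    monthly_labour=6_000, vm_count=200)
cloud = public_cloud_cost_per_vm_month(hourly_rate=0.10)
print(f"on-prem ~${on_prem:.0f}/VM/month vs public cloud ~${cloud:.0f}/VM/month")
```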

It’s clear that the cloud presents a compelling vision of improved IT infrastructure, offering the agility required to support innovation, experimentation and competitive advantage.  For many enterprises, public cloud models are non-starters due to regulatory, security, performance and control drawbacks; for others, the public cloud or infrastructure as a service is an ideal way to quickly increase resources.

Hyperconvergence is also helping enterprises increase their business agility by offering all the cloud benefits without added risks or uncertainty. Today, technology underpins competitive advantage, and organizations must choose what works best for their business and their applications, making an approach that combines public cloud with a private cloud built on hyperconverged infrastructure an even more viable solution.

Written by Rich Kucharski, VP Solutions Architecture, SimpliVity.

Cloud is growing up: from cost saving to competitive advantage

The last decade witnessed one of the most transformational waves of technological change ever to break on the shores of IT, if not the most transformational: cloud computing. Companies vied to position themselves as the key holders to the cloud universe, and customers, too, competed for the honor of being first to market in terms of their use of, and migration to, the various cloud models.

The first phase of cloud was characterised by migration of business to the cloud.  This phase is still happening, with many companies of all shapes and sizes at varying stages along the migration path.

The initial catalyst for cloud adoption was, broadly speaking, cost and efficiency based. Amidst the global economic fluctuations and downturn of the ‘mid-noughties’, the cloud model of IT promised considerable IT efficiencies and thus cost savings. For the early migrators, however, cloud has moved beyond simple cost efficiencies to the next phase of maturity: competitive advantage.

IDC reported earlier in the year that 80% of cloud applications in the future will be data-intensive; therefore, industry know-how and data are the true benefits of the cloud.

The brokerage of valuable data (be it a client’s own proprietary information about inventory or customer behavior, or wider industry data), and the delivery of this critical information as a service, is where the competitive advantage can be truly found – it’s almost now a case of ‘Innovation as a Service’.

The changing modus operandi of cloud has largely been driven by the increasing types, variety and volumes of data streams businesses now require to stay competitive, and the roll-out of cognitive and analytics capabilities within cloud environments is now as important to achieving business goals and competitive advantage as the actual cloud structure itself.

There is almost no better example of this than the symbiotic relationship between Weather.com and its use of the cloud. For a company like Weather.com, extracting maximum value from global weather data was paramount, both to producing accurate forecasting pictures and, through advanced analytics, to managing its data globally.

Through IoT deployments and cloud computing, Weather.com collects data from more than 100,000 weather sensors, aircraft and drones, millions of smartphones, buildings and even moving vehicles. The forecasting system itself ingests and processes data from thousands of sources, resulting in approximately 2.2 billion unique forecast points worldwide, geared to deliver over 26 billion forecasts a day.

By integrating real-time weather insights, Weather.com has been able to improve operational performance and decision-making. Moreover, by shifting its hugely data-intensive services to the cloud and integrating them with advanced analytics, it was not only able to deliver billions of highly accurate forecasts, it was also able to derive added value from this previously unavailable resource, creating new value-added services and revenue streams.

Another great example is Shop Direct: as one of the UK’s largest online retailers, delivering more than 48 million products a year and welcoming over a million daily visitors across a variety of online and mobile platforms, its move to a hybrid cloud model increased flexibility and meant it was able to respond more quickly to changes in demand as it continues to grow.

With a number of digital department stores, including the £800m flagship brand Very.co.uk, the cloud underpins a variety of analytics, mobile, social and security offerings that enable Shop Direct to improve its customers’ online shopping experience while empowering its workforce to collaborate more easily too.

Smart use of cloud has allowed Shop Direct to continue building a preeminent position in the digital and mobile world, and it has been able to innovate and be better prepared to tackle challenges such as high site traffic around Black Friday and the Christmas period.

In the non-conformist, shifting and disruptive landscape of today’s businesses, innovation is the only surety of maintaining a preeminent position and setting a company apart from its competitors – as such, the place of the cloud as the marketplace for this innovation is assured.

Developments in big data, analytics and IoT highlight the pivotal importance of cloud environments as enablers of innovation, while cognitive capabilities like Watson (in conjunction with analytics engines) add informed intelligence to business processes, applications and customer touch points along every step of the business journey.

While many companies recognise that migration to the cloud is now a necessity, it is more important to be aware that the true, long-term business value can only be derived from what you actually operate in the cloud, and this is the true challenge for businesses and their IT departments as we look towards 2016 and beyond.

Written by Sebastian Krause, VP IBM Cloud Europe

Containers at Christmas: wrapping, cloud and competition

As anyone who’s ever been disappointed by a Christmas present will tell you – shiny packaging can be very misleading. As we hear all the time, it’s what’s inside that counts…

What then, are we to make of the Docker hype, centred precisely on shiny, new packaging? (Docker is the vendor that two years ago found a way to containerise applications; other types of containers, operating system containers, have been around for a couple of decades.)

It is not all about the packaging, of course. Perhaps we should say that it is what the package is placed on, and how it is managed (amongst other things), that matters most?

Regardless, containers are one part of a changing cloud, data centre and enterprise IT landscape, with the ‘cloud native’ movement widely seen as driving a significant shift in enterprise infrastructure and application development.

What the industry is trying to figure out, and what could prove the most disruptive angle to watch as more and more enterprises roll out containers into production, is the developing competition within this whole container/cloud/data centre market.

The question of competition is a very hot topic in the container, devops and cloud space.  Nobody could have thought the OCI co-operation between Docker and CoreOS meant they were suddenly BFFs. Indeed, the drive to become the enterprise container of choice now seems to be at the forefront of both companies’ plans. Is this, however, the most dynamic relationship in the space? What about the Google-Docker-Mesos orchestration game? It would seem that Google’s trusted container experience is already allowing it to gain favour with enterprises, with Kubernetes taking a lead. And with CoreOS in bed with Google’s open source Kubernetes, placing it at the heart of Tectonic, does this mean that CoreOS has a stronger play in the enterprise market than Docker? We will wait and see…

We will also wait and see how the Big Cloud Three will come out of the expected container-driven market shift. Somebody described AWS as ‘a BT’ to me…that is, the incumbent who will be affected most by the new disruptive changes brought by containers, since it makes a lot of money from an older model of infrastructure….

Microsoft’s container ambition is also being watched closely. There is a lot of interest from both the development and IT Ops communities in its play in the emerging ecosystem. At a recent meet-up, an Azure evangelist had to field a number of deeply technical questions regarding exactly how Microsoft’s containers fare next to Linux’s. The question is whether, when assessing who will win the largest piece of the enterprise pie, this will prove the crux of the matter.

Containers are not merely changing the enterprise cloud game (with third-place Google seemingly getting it very right) but also driving the IT Ops’ DevOps dream to reality; in fact, many are predicting that it could eventually prove a bit of a threat to Chef and Puppet’s future…

So, maybe kids at Christmas have got it right….it is all about the wrapping and boxes! We’ll have to wait a little longer than Christmas Day to find out.

Written by Lucy Ashton, Head of Content & Production, Container World

The end of the artisan world of IT computing

We are all working toward an era of autonomics ‒ a time when machines not only automate key processes and tasks, but truly begin to analyse and make decisions for themselves. We are on the cusp of a golden age for our ability to utilise the capacity of the machines that we create.

There is a lot of research about autonomic cloud computing and therefore there are a lot of definitions as to what it is. The definition from Webopedia probably does the best job of describing autonomic computing.

It is, it says: “A type of computing model in which the system is self-healing, self-configured, self-protected and self-managed. Designed to mimic the human body’s nervous system in that the autonomic nervous system acts and reacts to stimuli independent of the individual’s conscious input.

“An autonomic computing environment functions with a high level of artificial intelligence while remaining invisible to the users. Just as the human body acts and responds without the individual controlling functions (e.g., internal temperature rises and falls, breathing rate fluctuates, glands secrete hormones in response to stimulus), the autonomic computing environment operates organically in response to the input it collects.”

Some of the features of autonomic computing are available today for organisations that have completed – or at least partly completed – their journey to the cloud. The more information that machines can interpret, the more opportunity they have to understand the world around them.
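To ground the idea, here is a minimal, purely illustrative sketch of the kind of closed “monitor, analyse, act” loop such a self-managing system runs continuously. The metric, threshold and provisioning hook are hypothetical stand-ins rather than any real product’s API.

```python
# Illustrative sketch only: a toy self-managing control loop in the spirit of
# autonomic computing. Metric, threshold and remediation hook are hypothetical.
import time

def read_cpu_utilisation() -> float:
    """Stand-in for a real monitoring probe; returns utilisation in [0, 1]."""
    return 0.93  # hypothetical reading

def add_capacity(extra_nodes: int) -> None:
    """Stand-in for a real provisioning call (e.g. scaling out a cluster)."""
    print(f"Provisioning {extra_nodes} extra node(s)")

def autonomic_loop(cycles: int = 3, threshold: float = 0.85) -> None:
    for _ in range(cycles):
        utilisation = read_cpu_utilisation()   # monitor
        overloaded = utilisation > threshold   # analyse
        if overloaded:                         # plan
            add_capacity(extra_nodes=1)        # execute, without human input
        time.sleep(1)  # a real system would run this loop continuously

autonomic_loop()
```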

It spells the death of the artisan IT worker – a person working exclusively with one company, maintaining the servers and systems that kept a company running. Today, the ‘cloud’ has turned computing on its head. Companies can access computing services and storage at the click of a button, providing scalability, agility and control to exactly meet their needs. Companies pay for what they get and can scale up or down instantly. What’s more, they don’t need their army of IT artisans to keep the operation running.

This, of course, assumes that the applications that leverage the cloud have been developed to be native using a model like the one developed by Adam Wiggins, who co-founded Heroku. However, many current applications and the software stacks that support them can also use the cloud successfully.

More and more companies are beginning to realise the benefit that cloud can provide, either private, public or hybrid. For start-ups, the decision is easy. They are ‘cloud first’ businesses with no overheads or legacy IT infrastructure to slow them down. For CIOs of larger organisations, it’s a different picture. They need to move from a complex, heterogeneous IT infrastructure into the highly orchestrated and automated – and ultimately, highly scalable and autonomic – homogeneous new world.

CIOs are looking for companies with deep domain expertise as well as infrastructure at scale. In the switch to cloud services, the provision of managed services remains essential. To ensure a smooth and successful journey to the cloud, enterprises need a company that can bridge the gap between the heterogeneous and homogeneous infrastructure.

Using a trusted service provider to bridge that gap is vital to maintain a consistent service level to the business users that use or consume the application being hosted. But a cloud user has many more choices to make in the provision of their services. Companies can take a ‘do it myself’ approach, where they are willing to outsource their web platform but keep control of testing and development. Alternatively, they can take a ‘do it with me’ approach, working closely with a provider in areas such as managed security and managed application services. This spreads the responsibility between the customer and provider, which can be decided at the outset of the contract.

In the final ‘do it for me’ scenario, trust in the service provider is absolute. It allows the enterprise customer to focus fully on the business outcomes. As more services are brought into the automation layer, delivery picks up speed which in turn means quick, predictable and high-quality service.

Hybrid cloud presents a scenario of the ‘best of both worlds’. Companies are secure in the knowledge that their most valuable data assets are still either on premise in the company’s own private servers or within a trusted hosting facility utilising isolated services. At the same time, they can rely on the flexibility of cloud to provide computing services that can be scaled up or down at will, at a much better price point than would otherwise be the case.

Companies who learn to trust their service provider will get the best user experience. In essence, the provider must become an extension of the customer’s business and not operate on the fringes as a vendor.

People, processes and technology all go together to create an IT solution. But they need to integrate between the company and the service provider as part of a cohesive solution to meet the company’s needs. The solution needs to be relevant for today but able to evolve in the future as business priorities change. Only then can we work toward a future where autonomics begins to play a much bigger part in our working lives.

Eventually, autonomic computing can evolve almost naturally, much like human intelligence has over the millennia. The only difference is that with cloud computing the advances will be made in years, not thousands of years. We are not there yet, but watch this space. In your lifetime, we are more than likely to make that breakthrough to lead us into a brave new world of cloud computing.

 

Written by Jamie Tyler, CenturyLink’s Director of Solutions Engineering, EMEA

How Silicon Valley is disrupting space

We tend to think of the Space Industry as quintessentially cutting edge. As such it feels awfully strange to hear somebody compare it to the pre-Uber taxi industry – nowadays the definition of an ecosystem ripe for seismic technological disruption.

Yet comparing the two is exactly what Sean Casey (Founder and Managing Director of the Silicon Valley Space Centre) is doing, during a phone conversation ahead of his appearance at February’s IoT Data Analytics & Visualization event in Palo Alto.

“With all Silicon Valley things there’s kind of a standard formula that involves disruptive technologies and large markets. Uber’s that way. Airbnb is the same,” says Casey. “Space is dominated by a bunch of large companies, making big profits from the government and not really interested in disrupting their business. The way they’re launching their rockets today is the same way they’ve been doing it over the last forty years. The reliability has increased, but the price hasn’t come down.”

Nowadays, however, a satellite needn’t cost hundreds of millions of dollars. On the contrary, costs have even come down to as little as $150,000. Talk about economising! “Rather than spending hundreds of millions of dollars on individual satellites, we can fly hundreds of satellites at a greatly reduced cost and mitigate the risk of a single failure,” Casey says. In addition, he explains that these satellites have tremendous imaging and communications capabilities – technology leveraged from a very everyday source. “The amount of processing power that you can fly in a very small satellite comes from a tremendous processing power that we all have in our cell phones.”

Entrepreneur Elon Musk was one of the first to look at this scenario, founding SpaceX. “Maybe he was bringing some new technology to the table,” says Casey, “but he’s basically just restructured his business to make launch costs cheaper.”

However, due perhaps in part to the historical proximity of the US government and the Space Industry, regulatory opposition to newcomers has been particularly strident. It is a fact that clearly irritates Casey.

“Elon Musk has had to fight regulatory obstructions put up by people in Washington that said we want to keep you out of the business – I mean, how un-American is that? We’re supposed to be a capitalist country that embraces new opportunity and change. Get a grip! That stuff is temporary, it’s not long term. The satellite industry is often reluctant to fly new technologies because they don’t think they can sell that approach to their government customers.”

Lower prices, meanwhile, open the door to new customers and new use cases – often moving hand-in-hand with developments in analytics. This brings us to perhaps the most interesting aspect of a very interesting discussion. There are, on the one hand, a number of immediately feasible use cases that come to Casey’s mind – analysing the flow of hospital visits to anticipate epidemics, for example, not to mention a host of economic usages, such as recording and analysing shipping, resources, harvests and more…

On the other hand, while these satellites will certainly offer clients a privileged vantage point from which to view and analyse the world (we don’t refer to the ‘bird’s eye view’ for nothing), precisely what discoveries and uses will be discovered up there in the coming years remains vague – albeit in a tantalising sort of way.

“It’s one of those things that, if you’ve never looked at it, if you’ve never had that data before, you kind of don’t know what you’re going to find. After this is all played out, you’ll see that this was either a really big step forward or it was kind of a bust and really didn’t amount to anything.  It’s sort of like asking the guys at Twitter to show that their company’s going to be as big as it became after they’d done their Series A financing – because that’s where these satellite companies are. Most of them are Series A, some of them are Series B – SpaceX is a lot further on.”

One thing that looks certain is that Silicon Valley is eyeing up space as its Final Frontier. From OneWeb and O3b founder Greg Wyler’s aspiration to connect the planet, to Google’s acquisition of Skybox and Monsanto’s acquisition of Climate Corp – plus a growing number of smaller investments in space-focussed start-ups, not to mention the aforementioned SpaceX and Amazon’s more overt investment in rocket science – capitalism is coming to the cosmos.

Sean Casey will be appearing at IoT Data Analytics & Visualization (February 9 – 11, 2016 Crowne Plaza Palo Alto). Click here to register.

Happy (belated) birthday, OpenStack: you have much to look forward to

Now past the five-year anniversary of OpenStack’s creation, the half-decade milestone provides an opportunity to look back on how far the project has come in that time – and to peer thoughtfully into OpenStack’s next few years. At present, OpenStack represents the collective efforts of hundreds of companies and an army of developers numbering in the thousands. Their active engagement in continually pushing the project’s technical boundaries and implementing new capabilities – demanded by OpenStack operators – has defined its success.

Companies involved with OpenStack include some of the most prestigious and interesting tech enterprises out there, so it’s no surprise that this past year has seen tremendous momentum surrounding OpenStack’s Win the Enterprise program. This initiative – central to the future of the OpenStack project – garnered displays of the same contagious enthusiasm demonstrated in the stratospheric year-over-year growth in attendance at OpenStack Summits (the most recent version of the event, held in Tokyo, being no exception). The widespread desire of respected and highly-capable companies and individuals to be involved with the project is profoundly reassuring, and underscores the recognition of OpenStack as a frontrunner for the title of most innovative software and development community when it comes to serving enterprises’ needs for cloud services.

With enterprise adoption front of mind, these are the key trends now propelling OpenStack into its next five years:

Continuing to Redefine OpenStack

The collaborative open source nature of OpenStack has successfully provided the project with many more facets and functionalities than could be dreamt of initially five years ago, and this increase in scope (along with the rise of myriad new related components) has led to the serious question: “What is OpenStack?” This is not merely an esoteric query – enterprises and operators must know what available software is-and-is-not OpenStack in order to proceed confidently in their decision-making around the implementation of consistent solutions in their clouds. Developers require clarity here as well, as their applications may potentially need to be prepared to operate across different public and private OpenStack clouds in multiple regions.

If someone were to look up OpenStack in the dictionary (although not yet in Webster’s), what they’d see there would be the output of OpenStack’s DefCore project, which has implemented a process that now has a number of monthly definition cycles under its belt. This process bases the definition of a piece of software as belonging to OpenStack on core capabilities, implementation code and APIs, and utilizes RefStack verification tests. Now OpenStack distributions and operators have this DefCore process to rely on in striving for consistent OpenStack implementations, especially for enterprise.

Enterprise Implementation Made Easy

The OpenStack developer community is operating under a new “big tent” paradigm, tightening coordination on project roadmaps and releases through mid-cycle planning sessions and improved communication. The intended result? A more integrated and well-documented stack. Actively inviting new major corporate sponsors and contributors (for example Fujitsu, a new Gold member of OpenStack as of this July) has also helped make it easier for enterprises to get on board with OpenStack.

Of course, OpenStack will still require expertise to be implemented for any particular use case, as it’s a complicated, highly configurable piece of software that can run across distributed systems – not to mention the knowledge needed to select storage sub-systems and networking options, and to manage a production environment at scale. However, many capable distribution and implementation partners have arisen worldwide to provide for these needs (Mirantis, Canonical, Red Hat, Aptira, etc), and these certainly have advantages over proprietary choices when looking at the costs and effort it takes to get a production cloud up and running.

The OpenStack Accelerator

A positive phenomenon that enterprises experience when enabling their developers and IT teams to work within the OpenStack community is seen in the dividends gained from new insights into technologies that can be valuable within their own IT infrastructure. The open collaborations at the heart of OpenStack expose contributors to a vast ecosystem of OpenStack innovations, which enterprises then benefit from internalizing. Examples of these innovations include network virtualization software (Astara, MidoNet), software-defined storage (Swift, Ceph, SolidFire), configuration management tools (Chef, Puppet, Ansible), and a new world of hardware components and systems offering enough benefit to make enterprises begin planning how to take advantage of them.

The pace of change driven by OpenStack’s fast-moving platform is now such that it can even create concern in many quarters of the IT industry. Enterprise-grade technology that evolves quickly and attracts a lot of investment interest will always have its detractors. Incumbent vendors fear erosion of market share. IT services providers fear retooling their expertise and workflows. Startups (healthily) fear the prospect of failure. But the difference is that startups and innovators choose to embrace what’s new anyway, despite the fear. That drives technology forward, and fast. And even when innovators don’t succeed, they leave behind a rich legacy of new software, talent, and tribal knowledge that we all stand on the shoulders of today. This has been so in the OpenStack community, and speaks well of its future.

 

Stefano Maffulli is the Director of Cloud and Community at DreamHost, a global web hosting and cloud services provider whose offerings include the cloud computing service DreamCompute powered by OpenStack, and the cloud storage service DreamObjects powered by Ceph.

How data classification and security issues are affecting international standards in public sector cloud

Cloud technology is rapidly becoming the new normal, replacing traditional IT solutions. The revenues of top cloud service providers are doubling each year, at the start of a predicted period of sustained growth in cloud services. The private sector is leading this growth in workloads migrating to the cloud. Governments, however, are bringing up the rear, with under 5 percent of a given country’s public sector IT budget being dedicated to cloud spending. Once the public sector tackles the blockers that are preventing uptake, spending looks likely to increase rapidly.

The classic NIST definition of the Cloud specifies Software (SaaS), Platform (PaaS) and Infrastructure (IaaS) as the main Cloud services (see figure 1 below), where each is supplied via network access on a self-service, on-demand, one-to-many, scalable and metered basis, from a private (dedicated), community (group), public (multi-tenant) or hybrid (load balancing) Cloud data centre.

Figure 1: Customer Managed to Cloud Service Provider Managed: The Continuum of Cloud Services

The benefits of the Cloud are real and evidenced, especially when comparing private and public cloud, where public cloud economies of scale, demand diversification and multi-tenancy are estimated to drive down costs by up to ninety percent compared with an equivalent private cloud.

Equally real, however, are the blockers to public sector cloud adoption, where studies consistently show that management of security risk is at the centre of practical, front-line worries about cloud take-up, and that removing those worries will be indispensable to unlocking the potential for growth.  Demonstrating effective management of cloud security to and for all stakeholders is therefore central to cloud adoption by the public sector and a key driver of government cloud policy.

A number of governments have been at the forefront of developing an effective approach to cloud security management, especially the UK, which has published a full suite of documentation covering the essentials.  (A list of the UK government documentation – which serves as an accessible ‘how to’ for countries that do not want to reinvent this particular wheel – is set out in the Annex to our white paper, Seeding the Public Cloud: Part II – the UK’s approach as a pathfinder for other countries.)  The key elements for effective cloud security management have emerged as:

  • a transparent and published cloud security framework based on the data classification;
  • a structured and transparent approach to data classification; and
  • the use of international standards as an effective way to demonstrate compliance with the cloud security framework.

Data classification enables a cloud security framework to be developed and mapped to the different kinds of data. Here, the UK government has published a full set of cloud security principles, guidance and implementation dealing with the range of relevant issues from data-in-transit protection through to security of supply chain, personnel, service operations and consumer management. These cloud security principles have been taken up by the supplier community, and tier one providers like Amazon and Microsoft have published documentation based on them in order to assist UK public sector customers in making cloud service buying decisions consistent with the mandated requirements.

Data classification is the real key to unlocking the cloud. This allows organisations to categorise the data they possess by sensitivity and business impact in order to assess risk. The UK has recently moved to a three tier classification model (OFFICIAL → SECRET → TOP SECRET) and has indicated that the OFFICIAL category ‘covers up to ninety percent of public sector business’ like most policy development, service delivery, legal advice, personal data, contracts, statistics, case files, and administrative data. OFFICIAL data in the UK ‘must be secured against a threat model that is broadly similar to that faced by a large UK private company’ with levels of security controls that ‘are based on good, commercially available products in the same way that the best-run businesses manage their sensitive information’.
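As a purely illustrative sketch of how a classification scheme can drive a security framework, the mapping below pairs each tier with example hosting options and controls. The entries are simplified assumptions for illustration only, not the UK government’s published framework.

```python
# Illustrative sketch only: mapping classification tiers to example hosting
# options and controls. Simplified assumptions, not the UK published framework.
CLASSIFICATION_CONTROLS = {
    "OFFICIAL": {
        "hosting": ["assured public cloud", "private cloud"],
        "controls": ["TLS for data in transit", "commercial-grade encryption at rest",
                     "role-based access control"],
    },
    "SECRET": {
        "hosting": ["accredited private or community cloud"],
        "controls": ["enhanced personnel vetting", "stronger cryptography",
                     "restricted network connectivity"],
    },
    "TOP SECRET": {
        "hosting": ["dedicated, highly controlled environments"],
        "controls": ["highest levels of vetting, physical and technical security"],
    },
}

def allowed_hosting(tier: str) -> list:
    """Return the hosting options assumed acceptable for a given tier."""
    return CLASSIFICATION_CONTROLS[tier]["hosting"]

print(allowed_hosting("OFFICIAL"))  # ['assured public cloud', 'private cloud']
```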

Compliance with the published security framework, in turn based on the data classification, can then be evidenced through procedures designed to assess and certify achievement of the cloud security standards. The UK’s cloud security guidance on standards references ISO 27001 as a standard against which to assess implementation of its cloud security principles.  ISO 27001 sets out control objectives, and the controls themselves, for managing information security, against which an organisation can be certified, audited and benchmarked.  Organisations can request third-party certification assurance, and this certification can then be provided to the organisation’s customers.  ISO 27001 certification is generally expected for approved providers of UK G-Cloud services.

Allowing the public sector cloud to achieve its potential will take a combination of comprehensive data classification, effective cloud security frameworks, and the pragmatic assurance provided by evidenced adherence to generally accepted international standards. These will remove the blockers on the public sector cloud, unlocking the clear benefits.

Written by Richard Kemp, Founder of Kemp IT Law

Bringing the enterprise out of the shadows

Ian McEwan, VP and General Manager, EMEA at Egnyte discusses why IT departments must provide employees with secure, adaptive cloud-based file sync and share services, or run the risk of ‘shadow IT’ — inviting major security vulnerabilities and compliance issues within organisations.

The advent of cloud technology has brought a wide range of benefits to businesses of all sizes, improving processes by offering on-demand, distributed access to the information and applications that employees rely on. This change has not only made IT easier for businesses, it is also fueling new business models and leading to increased revenues for those making best use of the emerging technology.

The cloud arguably offers a business the greatest benefit when used for file sync and share services, allowing users to collaborate on projects in real time, at any time, on any device, from any geographic location. File sync and share makes email attachments redundant, allowing businesses to reclaim and reduce the daily time employees spend on email, as well as reducing the chances of files being lost, leaked or overwritten. If used correctly, IT departments can have a comprehensive overview of all the files and activity on the system, enabling considerably better file management and organisation.

Employees ahead of the corporate crowd

Unfortunately, business adoption of file sharing services is often behind where employees would like it to be, and staff are turning to ‘shadow IT’ – unsanctioned, consumer-grade file sharing solutions. These services undermine the security and centralised control of IT departments. Businesses lose visibility over who has access to certain files and where they are being stored, which can lead to serious security and compliance problems.

CIOs need to protect their companies from the negative impact of unsanctioned cloud applications by implementing a secure solution that monitors all file activity across their business.

Secure cloud-based file sharing

To satisfy both the individual user and the business as a whole, IT departments need to identify file sharing services that deliver the agility that comes with storing files in the cloud. It starts with ensuring that a five-pronged security strategy is in place that can apply consistent, effective control and protection over corporate information throughout its lifecycle. This strategy should cover the five areas below (a simplified sketch of how such a policy might be modelled follows the list):

  • User Security – controlling who can access which files, what they can do with them and how long their access will last.
  • Device Security – protecting corporate information at the point of consumption on end user devices.
  • Network Security – protecting data in transit (over encrypted channels) to prevent eavesdropping and tampering.
  • Data Centre Security – providing a choice of deployment model that offers storage options both on premises and in the cloud and total control over where the data is stored.
  • Content Security – attaching policies to the content itself to ensure it can’t leave the company’s controlled environment even when downloaded to a device.
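As a purely illustrative sketch of how these five areas might be expressed as a single, enforceable policy, consider the following. The field names and checks are hypothetical, not Egnyte’s product or API; the point is that every prong is evaluated before a file leaves the controlled environment.

```python
# Illustrative sketch only: the five security areas expressed as one policy
# object evaluated before a file is shared. Hypothetical fields, not a real API.
from dataclasses import dataclass, field

@dataclass
class SharePolicy:
    allowed_roles: set = field(default_factory=lambda: {"employee"})   # user security
    require_managed_device: bool = True                                # device security
    require_tls: bool = True                                           # network security
    allowed_regions: set = field(default_factory=lambda: {"EU"})       # data centre security
    link_expiry_days: int = 30                                         # content security

def can_share(policy: SharePolicy, user_role: str, device_managed: bool,
              over_tls: bool, storage_region: str) -> bool:
    """Deny sharing if any prong of the policy fails."""
    return (user_role in policy.allowed_roles
            and (device_managed or not policy.require_managed_device)
            and (over_tls or not policy.require_tls)
            and storage_region in policy.allowed_regions)

policy = SharePolicy()
print(can_share(policy, user_role="employee", device_managed=True,
                over_tls=True, storage_region="EU"))   # True
```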

A solution that addresses these security areas will allow efficient collaboration without sacrificing security, compliance and control.

A user friendly, business ready solution

Furthermore, the selected solution and strategy will need to keep up with business demands and industry regulations. Flexibility can be achieved if businesses consider adaptive file sharing services that give them access to files regardless of where they are stored – in the cloud, on premises or in a hybrid approach. This enables a business to adapt the service to its own changing business preferences, as well as to industry standards that can dictate where data is stored and how it is shared. Recent changes to the US-EU Safe Harbour regulations, which determine how businesses from the US and EU must share and keep track of data, highlight the necessity for businesses to have an adaptive file sharing solution in place to meet the demands of new regulations, or else risk heavy fines and reputational damage.

The final hurdle towards successful implementation of a cloud-based file sharing service is ensuring user adoption through simple functionality. If a service isn’t easy to use, staff may find themselves falling back on shadow IT services due to convenience. It is important, therefore, that IT seeks solutions that can be accessed across all devices, and can be integrated with other popular applications already in use within an organisation.

The integrity and privacy of a business’ information requires a secure, adaptive cloud-based file sharing solution that gives organisations comprehensive visibility and control across the lifecycle of its data. Overlooking the security implications of shadow IT services can result in a company incurring significant costs – not just in financial terms, but for a company’s brand, reputation and growth potential. It’s time for IT departments to act now and adopt cloud services that enable efficient collaboration, mitigate any chances of risk and lift the shadow from corporate data.

3 approaches to a successful enterprise IT platform rollout strategy

Executing a successful enterprise IT platform rollout is as much about earning widespread support as it is about proper pacing. It’s necessary to sell the rollout within the organization, both to win budget approval and to gain general acceptance so that adoption of the new platform goes smoothly.

Each group being asked to change their ways and learn this new platform must have the value of the rollout identified and demonstrated for them. The goal of the rollout process is to see the platform solution become successfully adopted, self-sustaining, efficient in assisting users, and, ultimately, seamlessly embedded into the organization’s way of doing business.

Deploying a new solution for use across an organization boils down to three approaches, each with their advantages and drawbacks: rolling out slowly (to one department at a time), rolling out all at once (across the entire organization), or a cleverly targeted mix of the two.

Vertical Rollouts (taking departments one at a time, slow and steady)

This strategy involves selecting a single department or business function within the organization (e.g. customer support, HR) for an initial targeted rollout, then deploying the new platform in phases to each vertical, one at a time. The benefit here is a greater focus on the specific needs and usage models within the department receiving full attention during its phase of the rollout, yielding advantages in the customization of training and tools to best fit those users.

For example, the tools and interfaces used daily by customer service personnel may be entirely irrelevant to HR staff or to engineers, who will appreciate that their own solutions are being streamlined and that their time is being respected, rather than needing to accept a crude one-size-fits-all treatment and have to work to discover what components apply to them. It’s then more obvious to each vertical audience what the value added is for them personally, better garnering support and fast platform adoption. Because this type of rollout is incremental, it’s ripe for iterative improvements and evolution based on user feedback.

Where vertical, phased rollouts are less effective is in gaining visibility within the organization, and in lacking the rallying cry of an all-in effort. This can make it difficult to win over those in departments that aren’t offered the same immediate advantages, and to achieve the critical mass of adoption necessary to launch a platform into a self-sustaining orbit (even for those tools that could benefit any user regardless of their department).

Horizontal Rollouts (deploying to everyone at the same time)

Delivering components of a new platform across all departments at once comes with the power of an official company decree: “get on board because this is what we’re doing now.” This kind of large-scale rollout makes everyone take notice, and often makes it easier not only to get budget approval (for one large scale project and platform rather than a slew of small ones), but also to fold the effort into an overall company roadmap and present it as part of a cohesive strategy. Similar organizational roles in the company can connect and benefit from each other with a horizontal rollout, pooling their knowledge and best practices for using certain relevant tools and templates.

This strategy of reaching widely with the rollout helps to ensure continuity within the organization. However, big rollouts come with big stakes: the organization only gets one try to get the messaging and the execution correct – there aren’t opportunities to learn from missteps on a smaller scale and work out the kinks. Users in each department won’t receive special attention to ensure that they receive and recognize value from the rollout. In the worst-case scenario, a user may log in to the new platform for the first time, not see anything that speaks to them and their needs in a compelling way, and not return, at least not until the organization wages a costly revitalization campaign to try and win them over properly.  Even in this revitalization effort, a company may find users jaded by the loss of their investment in the previous platform rollout.

The Hybrid Approach to Rollouts

For many, the best rollout strategy will borrow a little from both of the approaches above. An organization can control the horizontal and the vertical aspects of a rollout to produce a two-dimensional, targeted deployment, with all the strengths of the approaches detailed above and fewer of the weaknesses. With this approach, each phase of a rollout can engage more closely with the specific vertical groups that the tools being deployed most affect, while simultaneously casting a wide horizontal net to increase visibility and convey the rollouts as company initiatives key to overall strategy and demanding of attention across departments. Smartly targeting hybrid rollouts to introduce tools valuable across verticals – while focusing on the most valuable use case within each vertical – is essential to success with them. In short, hybrid rollouts offer something for many, and a lot specifically for the target user being introduced to the new platform.

In executing a hybrid rollout of your enterprise IT platform, begin with a foundational phase that addresses horizontal use cases, while enticing users with the knowledge that more is coming. Solicit and utilize user feedback, and put this information to work in serving more advanced use cases as the platform iterates and improves. Next, start making the case for why the vertical group with the most horizontally applicable use cases should embrace the platform. With that initial group of supporters won over, you have a staging area to approach other verticals with specific hybrid rollouts, putting together the puzzle of how best to approach each while showcasing a wide scope and specific value added for each type of user. Importantly, don’t try to sell the platform as immediately being all things to all people. Instead, define and convey a solid vision for the platform, identify the purpose of the existing release, and let these hybrid rollouts take hold at a natural pace. This allows the separate phases to win their target constituents and act as segments to a cohesive overall strategy.

If properly planned and executed, your enterprise IT platform rollout will look not like a patchwork quilt with benefits for some and not others, but rather a rich tapestry of solutions inviting to everyone, and beneficial to the organization as a whole.

 

Written by Roguen Keller, Director of Global Services at Liferay, an enterprise open source portal and collaboration software company.