All posts by Guest Author

Head in the clouds? What to consider when selecting a hybrid cloud partner

The benefits of any cloud solution rely heavily on how well it's built and how much advance planning goes into the design. Developing an organisation's hybrid cloud infrastructure is no small feat: there are many facets at play, from hardware selection to resource allocation. So how do you get the most from your hybrid cloud provider?

Here are six important considerations when designing and building out your hybrid cloud:

  1. Right-sizing workloads

One of the biggest advantages of a hybrid cloud service is the ability to match each IT workload to the environment that best suits it. You can build out hybrid cloud solutions with incredible hardware and impressive infrastructure, but if you don't tailor your IT infrastructure to the specific demands of your workloads, you may end up with performance snags, improper capacity allocation, poor availability or wasted resources. Dynamic or more volatile workloads are well suited to the hyper-scalability and speedy provisioning of hybrid cloud hosting, as are any cloud-native apps your business relies on. Performance workloads that require high IOPS (input/output operations per second) and CPU utilisation are typically much better suited to a private cloud infrastructure if they also have elastic qualities or self-service requirements. More persistent workloads almost always deliver greater value and efficiency on dedicated servers in a managed hosting or colocation environment. Another key benefit of choosing a hybrid cloud configuration is that the organisation only pays for extra compute resources as required.
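
One way to make this matching explicit is to score each workload on the characteristics described above (volatility, IOPS, persistence, self-service needs) and derive a recommendation from them. The sketch below is purely illustrative; the workload attributes and thresholds are assumptions, not a provider's formula.

```python
# Illustrative sketch: map workloads to hosting environments by their traits.
# Attribute names and thresholds are assumptions for demonstration only.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    volatile: bool          # bursty / unpredictable demand
    cloud_native: bool      # built for horizontal scaling
    iops_required: int      # sustained storage IOPS needed
    persistent: bool        # long-running, steady-state workload

def recommend_environment(w: Workload) -> str:
    if w.volatile or w.cloud_native:
        return "public/hybrid cloud (fast provisioning, hyper-scale)"
    if w.iops_required > 20_000:
        return "private cloud (predictable high I/O performance)"
    if w.persistent:
        return "dedicated servers (managed hosting or colocation)"
    return "review case-by-case"

if __name__ == "__main__":
    for w in [
        Workload("campaign web front end", True, True, 2_000, False),
        Workload("OLTP database", False, False, 50_000, True),
        Workload("archive file server", False, False, 500, True),
    ]:
        print(f"{w.name}: {recommend_environment(w)}")
```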

  2. Security and compliance: securing data in a hybrid cloud

Different workloads may also have different security or compliance requirements, which dictate a certain type of IT infrastructure hosting environment. For example, your most confidential data shouldn't be hosted in a multi-tenant environment, especially if your business is subject to Health Insurance Portability and Accountability Act (HIPAA) or PCI compliance requirements. It might seem obvious, but when right-sizing your workloads, don't overlook what data must be isolated, and be sure to encrypt any data you opt to host in the cloud. Whilst cloud hosting providers can't achieve compliance for you, most offer an array of managed IT security solutions. Some even offer a third-party-audited Attestation of Compliance to help you document for auditors how their best practices validate against your organisation's compliance needs.
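
As a concrete illustration of encrypting any data you opt to host in the cloud, the snippet below encrypts a file client-side before upload, using the Python cryptography package's Fernet recipe. It is a minimal sketch under assumed file names; key management (KMS, HSM, escrow) is deliberately left out.

```python
# Minimal sketch of client-side encryption before uploading data to a cloud host.
# Key storage/rotation is out of scope here; paths and key handling are placeholders.

from cryptography.fernet import Fernet

def encrypt_file(plaintext_path: str, ciphertext_path: str, key: bytes) -> None:
    f = Fernet(key)
    with open(plaintext_path, "rb") as src:
        token = f.encrypt(src.read())      # AES-128-CBC plus HMAC under the hood
    with open(ciphertext_path, "wb") as dst:
        dst.write(token)

if __name__ == "__main__":
    key = Fernet.generate_key()            # in practice: load from a KMS/HSM, never hard-code
    encrypt_file("patient_records.csv", "patient_records.csv.enc", key)
    print("encrypted copy written; upload only the .enc file to the cloud")
```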

  3. Data centre footprint: important considerations

There are myriad reasons an organisation may wish to outsource its IT infrastructure: shrinking its IT footprint, driving greater efficiencies, securing capacity for future growth, or simply streamlining core business functions. The bottom line is that data centres require massive amounts of capital expenditure to both build and maintain, and legacy infrastructure becomes obsolete over time. This can place a huge upfront capital strain on any mid-to-large-sized business's expenditure planning.

But data centre consolidation takes discipline, prioritisation and solid growth planning. The ability to migrate workloads to a single, unified platform consisting of a mix of cloud, hosting and datacentre colocation provides your IT Ops with greater flexibility and control, enabling a company to migrate workloads on its own terms and with a central partner answerable for the result.

  4. Hardware needs

For larger workloads, should you host on premises, in a private cloud, or through colocation, and what performance do you need from your hardware suppliers? A truly hybrid IT outsourcing solution enables you to deploy the best mix of enterprise-class, brand-name hardware that you either manage yourself or consume fully managed from a cloud hosting service provider. Performance requirements, configuration characteristics, your organisation's access to specific domain expertise (in storage, networking, virtualisation, etc.) and the state of your current hardware often dictate the infrastructure mix you adopt. It may be the right time to review your inventory and decommission any hardware reaching end of life. Document the server decommissioning and migration process thoroughly to ensure no data is lost mid-migration, and follow your lifecycle plan through when decommissioning servers.

  5. Personnel requirements

When designing and building any new IT infrastructure, it's easy to get so caught up in the technology that you forget about the people who manage it. With cloud and managed hosting, you benefit from your provider's expertise and their SLAs, so you don't have to dedicate your own IT resources to maintaining those particular servers. This frees up valuable staff bandwidth so that your team can focus on tasks core to business growth, or train for the skills they'll need to handle the trickier configuration issues you introduce to your IT infrastructure.

  6. When to implement disaster recovery

A recent study by Databarracks found that 73% of UK SMEs have no proper disaster recovery plan in place in the event of data loss, so it's well worth considering what your business continuity plan is for a sustained outage. Building redundancy and failover into your cloud environment is an essential part of any defined disaster recovery service.

For instance, you might wish to mirror a dedicated server environment on cloud virtual machines, paying a small storage fee to house the redundant environment but only paying for compute if you actually have to fail over. That's just one of the ways a truly hybrid solution can work for you. When updating your disaster recovery plans to accommodate your new infrastructure, it's essential to determine your Recovery Point Objective and Recovery Time Objective (RPO/RTO) on a workload-by-workload basis, and to design your solution with those priorities in mind.
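
One practical way to apply the workload-by-workload RPO/RTO advice is to record the objectives alongside each workload and derive a replication approach from them. The helper below is a hypothetical planning sketch; the tier boundaries and workload names are assumptions, not part of any provider's service.

```python
# Hypothetical DR planning helper: pick a replication strategy per workload
# from its stated RPO/RTO. Tier thresholds are illustrative assumptions.

def dr_strategy(rpo_minutes: int, rto_minutes: int) -> str:
    if rpo_minutes <= 5 and rto_minutes <= 15:
        return "continuous replication to a warm cloud mirror (failover-ready VMs)"
    if rpo_minutes <= 60:
        return "hourly snapshot replication to cloud storage, scripted VM recovery"
    return "nightly backup to low-cost cloud storage, rebuild on demand"

workloads = {
    "payments API":      (5, 15),
    "internal wiki":     (60, 240),
    "archive reporting": (1440, 2880),
}

for name, (rpo, rto) in workloads.items():
    print(f"{name}: RPO {rpo} min / RTO {rto} min -> {dr_strategy(rpo, rto)}")
```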

Written by Annette Murphy, Commercial Director for Northern Europe at Zayo Group

The economics of disaster recovery

Companies increasingly need constant access to data, and the cost of losing this access – downtime – can be catastrophic. Large organizations can quickly find themselves in the eye of a storm when software glitches strike, resulting in lost revenue, shaken customer loyalty and significant reputational damage.

In August 2013, the NASDAQ electronic exchange went down for 3 hours 11 minutes, shutting down trading in the stocks of Apple, Facebook, Google and 3,200 other companies. It resulted in the loss of millions of dollars, paralyzing trading in stocks with a combined value of more than $5.9 trillion. The Royal Bank of Scotland has had five outages in three years, including one on the most popular shopping day of the year. Bloomberg also experienced a global outage in April 2015, rendering its terminals unavailable worldwide. Disaster recovery for these firms is not a luxury but an absolute necessity.

Yet whilst the costs of downtime are significant, disaster recovery is becoming ever more expensive as companies accumulate more data to protect: by 2020 the average business will have to manage fifty times more information than it does today. Downtime costs companies on average $5,600 per minute, and yet the costs of disaster recovery systems can be crippling as companies build redundant storage systems that rarely get used. As a result, disaster recovery has traditionally been a luxury only deep-pocketed organizations could afford, given the investment in equipment, effort and expertise required to formulate a comprehensive disaster recovery plan.
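
To make those figures concrete: at the quoted average of $5,600 per minute, an outage the length of NASDAQ's (3 hours 11 minutes) would cost a typical firm roughly $1.07 million. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope downtime cost using the article's average figure.
COST_PER_MINUTE = 5_600          # USD per minute, average quoted above
outage_minutes = 3 * 60 + 11     # NASDAQ outage: 3 hours 11 minutes

print(f"Estimated cost: ${COST_PER_MINUTE * outage_minutes:,}")
# -> Estimated cost: $1,069,600
```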

Cloud computing is now making disaster recovery available to all by removing the need for a dedicated remote location and hardware altogether. The fast retrieval of files in the cloud allows companies to avoid fines for missing compliance deadlines. Furthermore, the cloud's pay-for-use model means organizations need only pay for protection when they need it and still have backup and recovery assets standing by. It also means firms can add any amount of data quickly, as well as easily expire and delete data. Compare this to traditional backup methods, where it is easy to miss files, data is only current to the last backup (which is increasingly insufficient as more data is captured via web transactions) and recovery times are longer.
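
The ability to easily expire and delete data maps directly onto object-storage lifecycle rules. The snippet below is a sketch using AWS S3 via boto3 purely as one example; the bucket name, prefix and retention periods are assumptions, and other providers expose equivalent controls.

```python
# Sketch: lifecycle rule that tiers nightly backups to cold storage after 7 days
# and expires them after 90. Bucket, prefix and periods are illustrative assumptions.

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-dr-backups",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-backups",
            "Filter": {"Prefix": "nightly/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 90},
        }]
    },
)
```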

Netflix has now shifted to Amazon Web Services for its streaming service after experiencing an outage in its DVD operation in 2008, when it couldn't ship to customers for three days because of a major database corruption. Netflix says the cloud allows it to meet increasing demand at a lower price than it would have paid if it still operated its own data centres. It has robustly tested the resilience of its systems on Amazon with tools such as "Chaos Monkey", the "Simian Army" and "Chaos Kong", the last of which simulates an outage affecting an entire Amazon region.
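
Chaos Monkey and its relatives are Netflix's own tools, but the underlying idea, deliberately killing instances to prove the system recovers, can be sketched in a few lines. The example below is a hypothetical illustration using boto3 against explicitly tagged, disposable test instances; it is not Netflix's tooling and should never be pointed at production.

```python
# Hypothetical chaos-style test: terminate one random instance from a tagged,
# disposable test group and let monitoring confirm the service self-heals.
# Inspired by the Chaos Monkey idea; this is NOT Netflix's tool.

import random
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:chaos-eligible", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
instances = [
    i["InstanceId"]
    for reservation in resp["Reservations"]
    for i in reservation["Instances"]
]

if instances:
    victim = random.choice(instances)
    print(f"Terminating {victim} to test recovery")
    ec2.terminate_instances(InstanceIds=[victim])
else:
    print("No chaos-eligible instances found")
```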

Traditionally it has been difficult for organizations like Netflix to migrate to the cloud for disaster recovery, as they have grappled with how to move petabytes of data that is transactional and hence continually in use. With technology such as WANdisco's Fusion active replication making it easy to move large volumes of data to the cloud whilst transactions continue, companies can now move critical applications and processes seamlessly, enabling disaster recovery migration. In certain circumstances a move to the cloud even offers a chance to upgrade security, with industry-recognized audits making it much more secure than on-site servers.

Society's growing reliance on crucial computer systems means that even short periods of downtime can result in significant financial loss or, in some cases, even put human lives at risk. In spite of this, many companies have been reluctant to allocate funding for disaster recovery, as management often does not fully understand the risks. Time and time again network computing infrastructure has proven inadequate. Cloud computing offers an opportunity to step up to a higher level of recovery capability at a cost that is palatable to nearly any sized business. The economics of disaster recovery in the cloud are such that businesses today cannot afford not to use it.

Written by David Richards, Co-Founder, President and Chief Executive of WANdisco.

G-Cloud – why being certified matters

It might surprise you to know that more than £900m worth of sales have now taken place via the G-Cloud platform since its launch. The Government initiated the G-Cloud programme in 2012 to deliver computing-based capability (from fundamental resources such as storage and processing to full-fledged applications) using the cloud, and it has been hugely successful, providing benefits to customers and suppliers alike.

The G-Cloud framework is offered via the Digital Marketplace and is provided by The Crown Commercial Service (CCS), an organisation working to save money for the public sector and the taxpayer. The CCS acts on behalf of the Crown to drive savings for the taxpayer and improve the quality of commercial and procurement activity. The CCS’ procurement services can be used by central government departments and organisations across the public sector, including local government, health, education, not-for-profit and devolved administrations.

G-Cloud approves framework agreements with a number of service providers and lists those services on a publicly accessible portal known as the Digital Marketplace. This way, public sector organisations can approach the services listed on the Digital Marketplace without needing to go through a full tender process.

G-Cloud has substantial benefits for both providers and customers looking to buy services. For vendors the benefit is clear: being awarded a place as an official G-Cloud supplier demonstrates that the company has met, and remains compliant with, the standards laid out in the G-Cloud framework. Furthermore, it opens up exciting new opportunities to supply the UK public sector and help those organisations reduce their costs. Likewise, it brings recognition to the brand and further emphasises the company's position as a reputable provider of digital services.

Where public sector organisations are concerned, G-Cloud gives quick and easy access to a roster of approved and certified suppliers that have been rigorously assessed, cutting down on the time to research and find such vendors in the marketplace. This provides companies with a head start in finding the cloud services that will best address their business and technical needs.

I am proud to say that iland was awarded a place on the G-Cloud framework agreement for supplying Infrastructure-as-a-Service (IaaS) and Disaster-Recovery-as-a-Service (DRaaS) at the end of last year. We deliver flexible, cost-effective and secure Infrastructure-as-a-Service solutions from data centres in London and Manchester, including Enterprise Cloud Services with Advanced Security and Compliance, Disaster-Recovery-as-a-Service and Cloud Backup.

So if you are looking to source a cloud provider, I would recommend that you start your search with those that have been awarded a place on the G-Cloud framework agreement. It is important to then work with prospective providers to ensure their platform, service level agreements, native management tools and support teams can deliver the solutions that best address your business goals as well as your security and compliance requirements. Ask questions up front. Ensure the provider gives you full transparency into your cloud environment. Get a demonstration. You will then be well on your way to capitalizing on the promises of cloud.

Written by Monica Brink, EMEA Marketing Director, iland

The easiest way to explain the cloud to your boss

Today, approximately 90 per cent of businesses are using at least one cloud application. Yet only 32 per cent of these companies are running more than a fifth of their applications in the cloud. The obvious conclusion is that many company executives haven't quite grasped what the cloud can do for them, which is why it is time for IT organisations to take an active role in explaining the cloud to the business.

One of the predominant issues preventing enterprises from realising the benefits of the cloud is their limited understanding of the technology. In simple terms, cloud computing can be defined as a computing environment consisting of pooled IT resources that can be consumed on demand. The ultimate benefit of the approach is that applications can be accessed from any device with an Internet connection.

However, even more commonly, executives are interested in hearing business cases for the implementation of cloud. Now, let’s walk through some of the most compelling pro-cloud arguments with comments from industry experts.

The money argument

“But can we afford it?”

Luckily for you, the numbers are on your side.

As David Goulden, CEO of EMC Infrastructure, explains in a recent interview: “An immediate driver of many implementations is cost reduction. Both McKinsey and EMC analyses have found that enterprises moving to hybrid cloud can reduce their IT operating expense by 24%. That’s a significant number, and in essence can fund the people and process changes that yield the other benefits of hybrid cloud.”

But where do those cost reductions come from? Goulden explains that while lower hardware, software, facilities and telecom costs account for some of the savings, by far the most substantial reductions can be made in OPEX budgets: “The automation of hybrid cloud dramatically reduces the amount of labour needed to deploy new application software, and to monitor, operate, and make adjustments to the infrastructure. Tasks that used to take days are performed in minutes or seconds.”

The agility issue

“But how will it increase our agility?”

When it comes to cloud computing, agility is commonly used to describe the rapid provisioning of computer resources. However, as HyperStratus’ CEO Bernard Golden suggests, the term can be used to refer to two entirely different advantages: IT resource availability and responsiveness to changes in the business.

Furthermore, he argues that although internal IT availability is necessary for success, the ultimate aim of cloud computing efforts should be speeding business innovation to the market: "the ability to surround a physical product or service with supporting applications offers more value to customers and provides competitive advantage to the vendor. And knowing how to take advantage of cloud computing to speed delivery of complementary applications into the marketplace is crucial to win in the future."

The security concern

“But will our information be safe?”

Short answer: that’s completely up to your cloud. The beauty of a well-designed hybrid cloud is that it allows enterprises to allocate their applications and data between different cloud solutions in a way that brings out the benefits of all and the drawbacks of none.

However, as Tech Republic’s Enterprise Editor Conner Forrest explains in a recent article: “One of the raging debates when it comes to cloud security is the level of security offered by private and public clouds. While a private cloud strategy may initially offer more control over your data and easier compliance to HIPAA standards and PCI, it is not inherently more or less secure. True security has more to do with your overall cloud strategy and how you are using the technology.” Thus, a haphazard mix of public and private doesn’t automatically make a hybrid cloud.

The customer angle

“But how will it benefit our customers?”

More recently, the C-suite has woken up to the reality that cloud applications can help them attract and retain customers. A good example of this comes from the University of North Texas, whose CFO Rama Dhuwaraha explains: “The typical student on campus today has about six different devices that need Internet access for parking services we offer, dining, classroom registration and paying bills online. During enrolment, most of them don’t want to go find a lab and then enrol – they want it at their fingertips. We have to extend those services to them.”

Overall, the value proposition of a customised cloud solution should be pretty clear. However, as Goulden emphasises: “Most companies simply don’t realise how quickly they can implement a hybrid cloud, or how much money and capability they’re leaving on the table until they have one”. Therefore, as IT professionals, it is our responsibility to take this message forward to the business and develop cloud strategies that serve the interest of the enterprise.

 

Written by Rob Bradburn, Senior Web Operations Manager, Digital Insights & Demand, EMC – EMEA Marketing

Harnessing the vertical cloud: why regulatory burdens don’t have to feel like an uphill struggle

As cloud adoption continues to grow, business innovation, scalability and agility are not only becoming realistic goals for the modern business, but mandatory requirements to facilitate growth and keep up with the competition. As many companies already have highly virtualised infrastructure in place, their IT strategy is increasingly focused on cloud adoption as a means of driving not just cost efficiencies but also innovation. Increasingly, businesses are looking at ways to ease the burden of meeting regulatory compliance and security requirements by implementing the relevant cloud adoption strategies.

Cloud computing is maturing at a rapid pace, with many "as-a-service" offerings such as infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), desktop-as-a-service (DaaS), disaster-recovery-as-a-service (DRaaS) and software-as-a-service (SaaS). These developments have paved the way for the anything-as-a-service (XaaS) model, which can be seen as the foundation for the next phase of cloud development: the "vertical cloud". The vertical cloud is designed to deliver the core applications, tools and surrounding ecosystem of a specific vertical, allowing organisations to customise cloud services to their specific needs.

The vertical cloud allows enterprises to pick and choose what to operate in the cloud and what to keep on their own premises, based on the security and compliance requirements that govern their businesses. In industries such as banking, finance and insurance, for example, regulatory compliance is the prime driver when choosing the architecture of their IT infrastructure. With major banking and finance regulations in progress, including Basel III and the EU General Data Protection Regulation (GDPR), regulatory compliance will remain a major area of investment throughout 2016.

However, using the vertical cloud shifts the onus of compliance to the cloud provider, on account of their proven and re-usable governance and security frameworks. Vertical cloud offerings can come pre-packaged with the required regulatory obligations and thus offer organisations relief from the burden of ensuring compliance themselves.

Continued growth in cloud and IT infrastructure spending

Analysts foresee that a significant acceleration in global cloud-related spending will continue. During 2016, global spending on IT services is forecast to reach $3.54 trillion as companies continue to adopt cloud services, according to Gartner. This trend is no different in the United Kingdom, where adoption continues to grow. For example, the banking and finance sector is predicted to increase spending on IT in 2016, a big part of which is dedicated to cloud services. Recent guidance issued by the Financial Conduct Authority is likely to perpetuate this trend by endorsing the use of the cloud by financial services organisations, paving the way for firms in this sector to take advantage of cloud services and the innovation they can foster.

The main factors driving cloud adoption are industry competition and the pace of change brought on by digitisation. Businesses need to be nimble and use the cloud to absorb planned and unscheduled changes swiftly and seamlessly. To enable companies to deal with market trends and deviations, the cloud value chain takes a holistic approach to the current business in the context of a changing market. Here is a snapshot of a few such phases which, in rapid evolutionary terms, lead to the adoption of the vertical cloud, a concept that encompasses them all.

The cloud service economy is here. The current trend in cloud adoption is to look beyond asset management and the traditional methods used to accomplish business outcomes (e.g. developing, testing and repairing). Instead, the various flexible 'as-a-service' models offered by cloud firms allow businesses to consume technology solutions directly, freeing IT teams to focus on architectural and advisory services. From the perspective of IT infrastructure, the expectation in the cloud service economy is that 'value' is delivered directly by the investment made, rather than through traditional and laborious ways of realising it.

Anything-as-a-Service as a prelude to vertical cloud. Cloud thinking is spurring some organisations to explore running their entire IT operations on the Anything-as-a-Service (XaaS) model, where costs vary with service consumption. Ultimately, however, it is digital disruption across industries that has added to the complexity of continuing with traditional in-house handling. Businesses are faced with the need to handle next-generation requirements such as big data analytics, cognitive computing, mobility and smart solutions, the Internet of Things (IoT) and other examples of digitisation.

Security and regulatory compliance are complex and exhaustive, requiring IT infrastructure and applications to be constantly ready for ever-evolving demands. Hence, pursuing a vertical cloud strategy can help businesses not only to advance and accelerate business growth and gain competitive advantage, but also to ease the burden of security and regulatory compliance.

 

Written by Nachiket Deshpande, Vice President of Infrastructure Services, Cognizant.

Containers: 3 big myths

Joe Schneider is a DevOps Engineer at Bunchball, a company that offers gamification as a service to the likes of Applebee's and Ford Canada.

This February Schneider is appearing at Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA), where he'll be cutting through the cloudy abstractions to detail Bunchball's real-world experience with containers. Here, exclusively for Business Cloud News, Schneider explodes three myths surrounding the container hype…

One: ‘Containers are contained.’

If you're really concerned about security, or if you're in a really security-conscious environment, you have to take a lot of extra steps. You can't just throw containers into the mix and leave it at that: it's not as secure as a VM.

When we adopted containers, at least, the tools weren't there. Now Docker has made security tools available, but we haven't transitioned from the stance of 'OK, Docker is what it is and recognise that' to a more secure environment. What we have done instead is try to make sure the edges are secure: we put a lot of emphasis on that. At the container level we haven't done much, because the tools weren't there.
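
To make those "extra steps" concrete, the sketch below starts a container with a few common hardening options (non-root user, dropped capabilities, read-only filesystem, resource limits) using the Python Docker SDK. The image name and limits are placeholders, and this is an illustrative baseline rather than a complete security posture.

```python
# Illustrative container-hardening baseline using the Python Docker SDK (docker-py).
# Image name and limits are placeholders; real hardening also needs seccomp/AppArmor
# profiles, image scanning and network policy.

import docker

client = docker.from_env()

container = client.containers.run(
    "example/app:latest",        # placeholder image
    detach=True,
    user="1000:1000",            # don't run as root inside the container
    cap_drop=["ALL"],            # drop all Linux capabilities
    read_only=True,              # read-only root filesystem
    tmpfs={"/tmp": "size=64m"},  # writable scratch space only where needed
    mem_limit="256m",
    pids_limit=100,
    security_opt=["no-new-privileges"],
)
print(container.short_id, container.status)
```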

Two: The myth of the ten thousand container deployment

You'll see the likes of Mesosphere, or Docker Swarm, say, 'we can deploy ten thousand containers in like thirty seconds' – and similar claims. Well, that's a really synthetic test: these kinds of numbers are 100% hype. In the real world such a capacity is pretty much useless. No one cares about deploying ten thousand little apps that do literally nothing, that just go 'hello world.'

The tricky bit with containers is actually linking them together. When you start with static hosts, or even VMs, they don’t change very often, so you don’t realise how much interconnection there is between your different applications. When you destroy and recreate your applications in their entirety via containers, you discover that you actually have to recreate all that plumbing on the fly and automate that and make it more agile. That can catch you by surprise if you don’t know about it ahead of time.
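
The "plumbing" described here is typically recreated automatically with user-defined networks and DNS-based service discovery rather than static addresses. Below is a minimal sketch using the Python Docker SDK; the image names, credentials and ports are illustrative placeholders.

```python
# Minimal sketch: put two containers on a user-defined network so the app can
# reach the database by name ("db") even after either container is recreated.
# Image names, credentials and ports are illustrative placeholders.

import docker

client = docker.from_env()

net = client.networks.create("app-net", driver="bridge")

db = client.containers.run(
    "postgres:13",
    name="db",
    detach=True,
    network="app-net",
    environment={"POSTGRES_PASSWORD": "example"},
)

app = client.containers.run(
    "example/web:latest",        # placeholder application image
    name="web",
    detach=True,
    network="app-net",
    environment={"DATABASE_URL": "postgresql://postgres:example@db:5432/postgres"},
    ports={"8000/tcp": 8000},
)
print("started:", db.short_id, app.short_id)
```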

Three: ‘Deployment is straightforward’

We've been running containers in production for a year now. Before then we were playing around a little bit with some internal apps, but now we run everything except one application on containers in production. And that was a bit of a paradigm change for us. The line that Docker gives is that you can take your existing apps and put them in a container that's going to work in exactly the same way. Well, that's not really true. You have to actually think about it a little bit differently, especially with the deployment process.

An example of a real 'gotcha' for us was that we presumed systemd and Docker would play nicely together, and they don't. That really hit us in the deployment process: we had to delete the old container and start a new one using systemd, and that was always very flaky. Don't try to home-grow your own deployment tooling; actually use something that is designed to work with Docker.


Tackling the resource gap in the transition to hybrid IT

Is hybrid IT inevitable? That's a question we ask customers a lot. From our discussions with CIOs and CEOs there is one overriding response, and that is the need for change. It is very clear that, across all sectors, CEOs are challenging their IT departments to innovate – to come up with something different.

Established companies are seeing new threats coming into the market. These new players are lean, hungry and driving innovation through their use of IT solutions. Our view is that more than 70 percent of all CEOs are putting a much bigger ask on their IT departments than they did a few years ago.

There has never been so much focus on the CIO or IT departmental manager from a strategic standpoint. IT directors need to demonstrate how they can drive more uptime, improve the customer experience, or enhance the e-commerce proposition for instance, in a bid to win new business. For them, it is time to step up to the plate. But in reality there’s little or no increase in budget to accommodate these new demands.

We call the difference between what the IT department is being asked to do and what it is able to do the resource gap. With the rate of change in the IT landscape increasing, the demands placed on CIOs by the business growing, and little or no increase in IT budgets from one year to the next, that gap is only going to get wider.

But by changing their way of working, companies can free up additional resources to go and find their innovative zeal and get closer to meeting their business’ demands. Embracing Hybrid IT as their infrastructure strategy can extend the range of resources available to companies and their ability to meet business demands almost overnight.

Innovate your way to growth

A hybrid IT environment combines an organisation's existing on-premises resources with public and private cloud offerings from a third-party hosting company. Hybrid IT can provide the best of both worlds: sensitive data can still be retained in-house, whilst the cloud, either private or public, provides the resources and computing power needed to scale up (or down) when necessary.

Traditionally, 80 percent of an IT department’s budget is spent just ‘keeping the lights on’. That means using IT to keep servers working, powering desktop PCs, backing up work and general maintenance etc.

But with the CEO now raising the bar, more innovation in the cloud is required. Companies need to keep their operation running but reapportion the budget so they can become more agile, adaptable and versatile to keep up with today’s modern business needs.

This is where Hybrid IT comes in. Companies can mix and match their needs to any type of solution. That can be their existing in-house capability, or they can share the resources and expertise of a managed services provider. The cloud can be private – servers that are the exclusive preserve of one company – or public, sharing utilities with a number of other companies.

Costs are kept to a minimum because the company only pays for what it uses. It can own the computing power, but not the hardware. Crucially, capacity can be switched on or off according to need. So, if there is a peak in demand, a busy time of year or a last-minute rush, the company can turn on this resource to match demand, and off again afterwards.
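
That switch-it-on, switch-it-off model usually boils down to a simple scaling rule tied to a demand metric. The sketch below is a deliberately simplified, hypothetical illustration of that logic; real deployments would rely on the provider's autoscaling service and tuned thresholds rather than hand-rolled code.

```python
# Deliberately simplified sketch of demand-driven capacity: add cloud instances
# when utilisation is high, release them when it falls. Thresholds are assumptions.

def desired_instances(current: int, cpu_utilisation: float,
                      minimum: int = 2, maximum: int = 20) -> int:
    if cpu_utilisation > 0.75:            # peak: switch extra capacity on
        return min(current + 2, maximum)
    if cpu_utilisation < 0.30:            # quiet: switch it off again, stop paying
        return max(current - 1, minimum)
    return current

print(desired_instances(current=4, cpu_utilisation=0.82))  # -> 6
print(desired_instances(current=6, cpu_utilisation=0.22))  # -> 5
```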

This is the journey to the Hybrid cloud and the birth of the agile, innovative market-focused company.

Meeting the market needs

Moving to hybrid IT is a journey.  Choosing the right partner to make that journey with is crucial to the success of the business. In the past, businesses could get away with a rigid customer / supplier relationship with their service provider. Now, there needs to be a much greater emphasis on creating a partnership so that the managed services provider can really get to understand the business. Only by truly getting under the skin of a business can the layers be peeled back to reveal a solution to the underlying problem.

The relationship between customer and managed service provider is now also much more strategic and contextual. The end users are looking for outcomes, not just equipment to plug a gap.

As an example, take an airline company operating in a highly competitive environment. They view themselves as being not in the people transportation sector, but as a retailer providing a full shopping service (with a trip across the Atlantic thrown in). They want to use cloud services to take their customer on a digital experience, so the minute a customer buys a ticket is when the journey starts.

When the passenger arrives at the airport, they need to check in, choose the seats they want, do the bag drop and clear security, all using online booking systems. Once in the lounge, they'll access the Wi-Fi, check their Hotmail, browse Facebook, start sharing pictures and so on. They may also make last-minute adjustments to their journey, like changing their booking or choosing to sit in a different part of the aircraft.

Merely saying "we're going to do this using the cloud" is likely to lead to the project misfiring. A good partner should have experience of building and running both traditional infrastructure environments and new ones based on innovative cloud solutions, so that it can bring 'real world' transformation experience to the partnership. Importantly, it must also have the confidence to demonstrate digital leadership and an understanding of the business and its strategy, to add real value to the customer as it undertakes the journey of digital transformation.

Costs can certainly be rationalised along the way: with a hybrid system you only pay for what you use, and peak periods can cost the same as, or less than, off-peak operating expenses. So, with added security, compute power, speed, cost efficiencies and 'value-added' services, hybrid IT can provide the agility businesses need.

With these solutions, companies have no need to ‘mind the gap’ between the resources they need and the budget they have. Hybrid IT has the ability to bridge that gap and ensure businesses operate with the agility and speed they need to meet the needs of the competitive modern world.

 

Written by Jonathan Barrett, Vice President of Sales, CenturyLink, EMEA

Can Safe Harbour stay afloat?

When the European Court of Justice declared the US-EU Safe Harbour framework invalid in the case of Schrems v Data Protection Commissioner, some 4,500 companies began to panic. Many are still struggling to decide what to do: should they implement an alternative method of transferring personal data from the EEA to the US, or should they simply wait to see what happens next?

Waiting is a risky game, as the European data protection authorities’ (DPAs) grace period extends only until January 31 2016, by which time companies must have their cross-Atlantic data transfers in order. After this date, enforcement action may be taken against those transferring personal data without a suitable mechanism in place to ensure adequate protections to personal data. Although the slow churning of US and EU authorities negotiating a replacement for Safe Harbour can be heard in the distance, no timeline has yet been set for its implementation. There is also the added complication of the newly approved EU General Data Protection Regulation, which is likely to muddy the waters of an already murky negotiation.

Will Safe Harbour 2.0 come to the rescue?

According to the European Commissioner for Justice, Consumers and Gender Equality (the Commissioner), the negotiations on ‘Safe Harbour 2’ continue, undoubtedly under added pressure following the invalidation of the original Safe Harbour framework. Whilst both sides understand the sense of urgency, no proposal has yet met the needs of both the national security services and the European DPAs.

In autumn 2013, the European Commission produced a report providing 13 recommendations for improving Safe Harbour. Number 13 required that the Safe Harbour national security exception be used only to an extent that is strictly necessary. This recommendation remains a sticking point in negotiations. Human rights and privacy organisations have little hope that these hurdles will be effectively overcome: in November 2015, a letter was sent to the Commissioner from EU and US NGOs, urging politicians to commit to a comprehensive modernisation of data protection laws on both sides of the Atlantic.

Of course, the real bridge to cross is on US law reform, which the Commissioner sees as more about guaranteeing EU rules in the US than changing US law. It seems the ball is very much in the North American court.

Do not, however, be fooled by the House of Representatives passing the Judicial Redress Act, which allows foreign citizens to bring legal suits in the US for alleged violations of their privacy rights. Reform is not easy, and it is now for the Senate to decide whether to follow suit, or to find a way to water down the Act. The govtrack.us website, which follows the progress of bills through Capitol Hill, gives the Act a 22% chance of success. With odds like these, maybe we shouldn't bet on cross-Atlantic privacy reform in the immediate future.

The future of global surveillance

Whilst there have been positive noises coming from the White House regarding the privacy rights of non-Americans, it is unlikely in a post-9/11 world that any government will allow itself to be prevented from accessing data of either its own or foreign nationals.

In light of recent terror attacks all over the world, the Snowden debate is more relevant than ever. How far should government intelligence agencies go towards monitoring communications? Snowden forced governments to think twice about their surveillance practices, but recent attacks may have the opposite effect. Although their so-called ‘snooping’ may breach citizens’ fundamental rights, it may be more a question of how many civil liberties citizens are willing to exchange for safety and security.

The British Government has suggested that fast-track aggressive surveillance proposals (dubbed ‘the Snoopers’ Charter’) are the way forward in helping prevent acts of terror. This new emphasis on drones and cyber-experts marks a big shift from 2010’s strategic defence review. This is a war fought online and across borders and one cannot ignore the context of Safe Harbour here.

The implications on global e-commerce

Hindering cross-border data transfer impedes e-commerce and can potentially cause huge industries to collapse. By 2017, over 45 percent of the world is expected to be engaging in online commerce. A clear path across the Atlantic is essential.

The Information Technology and Innovation Foundation put it bluntly in stating that, aside from taking an axe to the undersea fibre optic cables connecting Europe to the US, it is hard to imagine a more disruptive action to transatlantic digital commerce than a stalemate on data transfer – a global solution must be reached, and soon.

The future of global cross-border data transfer

Time is running out on the Safe Harbour negotiations, and creating frameworks such as this is not simple – especially when those negotiating are starting so far apart and one side (the EU) does not speak with a unified voice.

Most of the 28 European Member States have individual national DPAs, not all of whom agree on the overall approach to reform. If the DPAs could speak in one voice, there could be greater cooperation with the Federal Trade Commission, which could hasten agreements on suitable frameworks for cross-Atlantic data transfers. In the US, much will come down to the law makers and, with an election brewing, it is worth considering the different scenarios.

Even though the two main parties in the US stand at polar ends of the spectrum on many policies, they may not be so distant when it comes to global surveillance. In the wake of the Snowden revelations, Hillary Clinton defended US global surveillance practices. The Republican Party has also been seen to favour increased surveillance of certain target groups. The question remains: if either party, when elected, is happy to continue with the current surveillance programme, how will the US find common ground with the EU?

Conclusion

Europe seems prepared to act alone in protecting the interests of EU citizens, and the CJEU’s decision in Schrems was a bold and unexpected move on the court’s part. However, with the ever increasing threat to EU citizens’ lives through organised terror, the pressure may be mounting on the EU to relax its stance on data privacy, which could mean that finding common ground with the US may not be so difficult after all. We shall have to wait and see how the US-EU negotiations on Safe Harbour 2 evolve, and whether the European Commission will stand firm and require the US to meet its ‘equivalent’ standard.

 

Written by Sarah Pearce, Partner & Jane Elphick, Associate at Cooley (UK) LLP.

Deciding between private and public cloud

Innovation and technological agility are now at the heart of an organization's ability to compete. Companies that rapidly onboard new products and delivery models gain competitive advantage, not by eliminating the risk of business unknowns, but by learning quickly and fine-tuning based on the experience gathered.

Yet traditional IT infrastructure models hamper an organization's ability to deliver the innovation and agility it needs to compete. Enter the cloud.

Cloud-based infrastructure is an appealing prospect to address the IT business agility gap, characterized by the following:

  1. Self-service provisioning. Aimed at reducing the time to solution delivery, cloud allows users to choose and deploy resources from a defined menu of options.
  2. Elasticity to match demand.  Pay for what you use, when you use it, and with flexible capacity.
  3. Service-driven business model.  Transparent support, billing, provisioning, etc., allows consumers to focus on the workloads rather than service delivery.

There are many benefits to this approach: cloud or "infrastructure-as-a-service" providers often allow users to pay only for what they consume, when they consume it, and offer fast, flexible infrastructure deployment and low risk when trialling new solutions.

Public cloud or private cloud – which is the right option?

A cloud model can exist either on-premises, as a private cloud, or via public cloud providers.

In fact, the most common model is a mix of private and public clouds.  According to a study published in the RightScale 2015 State of the Cloud Report, enterprises are increasingly adopting a portfolio of clouds, with 82 percent reporting a multi-cloud strategy as compared to 74 percent in 2014.

With that in mind, each workload you deploy (e.g. tier-1 apps, test/dev, etc.) needs to be evaluated to see if it should stay on-premises or be moved offsite.

So what are the tradeoffs to consider when deciding between private and public cloud?  First, let’s take a look at the considerations for keeping data on-premises.

  1. Predictable performance.  When consistent performance is needed to support key business applications, on-premises IT can deliver performance and reliability within tight tolerances.
  2. Data privacy.  It’s certainly possible to lose data from a private environment, but for the most part, on-premises IT is seen as a better choice for controlling highly confidential data.
  3. Governance and control.  The private cloud can be built to guarantee compliance – country restrictions, chain of custody support, or security clearance issues.

Despite these considerations, there are instances in which a public cloud model is ideal, particularly cloud bursting, where an organization experiences temporary demand spikes (such as seasonal influxes). The public cloud can also offer an affordable alternative for disaster recovery and backup/archiving.
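
Pulling the two sets of considerations together, the private-versus-public decision can be captured as a per-workload checklist. The helper below is an illustrative sketch of that reasoning; the attribute names and rules are assumptions rather than a formal assessment framework.

```python
# Illustrative per-workload placement checklist combining the considerations above.
# Attribute names and rules are assumptions for demonstration purposes.

def place_workload(*, confidential: bool, strict_compliance: bool,
                   needs_predictable_performance: bool,
                   bursty_demand: bool, dr_or_archive: bool) -> str:
    if confidential or strict_compliance or needs_predictable_performance:
        return "private cloud / on-premises"
    if bursty_demand:
        return "public cloud (burst capacity)"
    if dr_or_archive:
        return "public cloud (DR / backup / archive)"
    return "either; decide on cost"

print(place_workload(confidential=True, strict_compliance=True,
                     needs_predictable_performance=True,
                     bursty_demand=False, dr_or_archive=False))
# -> private cloud / on-premises
```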

Is your “private cloud” really a cloud at all?

There are many examples of the same old legacy IT dressed up with a thin veneer of cloud paint. The fact is, traditional IT's complexity and inefficiency make it unsuitable for delivering a true private cloud.

Today, hyperconverged infrastructure is one of the fastest growing segments in the $107B IT infrastructure market, in part because of its ability to enable organizations to deliver a cloud-operating model with on-premises infrastructure.

Hyperconvergence surpasses the traditional IT model by incorporating IT infrastructure and services below the hypervisor onto commodity x86 "building blocks".  For example, SimpliVity hyperconverged infrastructure is designed to work with any hypervisor on any industry-standard x86 server platform. The combined solution provides a single, shared resource pool across the entire IT stack, including built-in data efficiency and data protection, eliminating point products and inefficient siloed IT architectures.

Some of the key characteristics of this approach are:

  • Single vendor for deploying and supporting infrastructure.  Traditional IT requires users to integrate more than a dozen disparate components just to support their virtualized workloads.  This causes slow deployments, finger pointing, performance bottlenecks, and limits how it can be reused for changing workloads. Alternatively, hyperconvergence is architected as a single atomic building block, ready to be deployed when the customer unpacks the solution.
  • The ability to start small and scale out without penalty.  Hyperconvergence eliminates the need for resource allocation guesswork.  Simply start with the resources needed now, then add more, repurpose, or shut down resources with demand—all with minimal effort and cost, and no performance degradation.
  • Designed for self-service provisioning. Hyperconvergence offers the ability to create policies, provision resources, and move workloads, all at the VM-level, without worrying about the underlying physical infrastructure.  Because they are software defined, hyperconverged solutions can also integrate with orchestration and automation tools like VMware vRealize Automation and Cisco UCS Director.
  • Economics of public cloud. By converging all IT infrastructure components below the hypervisor and reducing operating expenses through simplified, VM-centric management, hyperconverged offerings deliver a cost model that closely rivals the public cloud. SimpliVity, for example, is able to deliver a cost-per-VM that is comparable to AWS, including associated operating expenses and labour costs.

It's clear that the cloud presents a compelling vision of improved IT infrastructure, offering the agility required to support innovation, experimentation and competitive advantage. For many enterprises, public cloud models are non-starters due to regulatory, security, performance and control drawbacks; for others, the public cloud or infrastructure-as-a-service is an ideal way to quickly increase resources.

Hyperconvergence is also helping enterprises increase their business agility by offering all the cloud benefits, without added risks or uncertainty. Today technology underpins competitive advantage and organizations must choose what works best for their business and their applications, making an approach combining public cloud and private cloud built on hyperconverged infrastructure an even more viable solution.

Written by Rich Kucharski, VP Solutions Architecture, SimpliVity.

Cloud is growing up: from cost saving to competitive advantage

The last decade witnessed one of the most transformational waves of technological change ever to break on the shores of IT: cloud computing. Companies vied to position themselves as the key holders to the cloud universe, and customers, too, competed for the honour of being first to market in their use of, and migration to, the various cloud models.

The first phase of cloud was characterised by migration of business to the cloud.  This phase is still happening, with many companies of all shapes and sizes at varying stages along the migration path.

The initial catalyst for cloud adoption was, broadly speaking, cost and efficiency based. Amidst times of global economic fluctuations and downturn during the ‘mid-noughties’ the cloud model of IT promised considerable IT efficiencies and thus, cost savings. For the early migrators however, cloud has moved beyond simple cost efficiencies to the next phase of maturity: competitive advantage.

IDC reported earlier in the year that 80% of cloud applications in the future will be data-intensive; therefore, industry know-how and data are the true benefits of the cloud.

The brokerage of valuable data (be it a client's own proprietary information about inventory or customer behaviour, or wider industry data) and the delivery of this critical information as a service is where the competitive advantage can truly be found – it's now almost a case of 'Innovation as a Service'.

The changing modus operandi of the cloud has largely been driven by the increasing types, variety and volumes of data streams businesses now require to stay competitive. The roll-out of cognitive and analytics capabilities within cloud environments is now as important to achieving business goals and competitive advantage as the cloud infrastructure itself.

There's almost no better example of this than the symbiotic relationship between Weather.com and its use of the cloud. For a company like Weather.com, extracting maximum value from global weather data was paramount, both for producing accurate forecasts and, through advanced analytics, for managing its data globally.

Through IoT deployments and cloud computing Weather.com collects data from more than 100,000 weather sensors, aircraft and drones, millions of Smartphones, buildings and even moving vehicles. The forecasting system itself ingests and processes data from thousands of sources, resulting in approximately 2.2 billion unique forecast points worldwide, geared to deliver over 26 billion forecasts a day.

By integrating real-time weather insights, Weather.com has been able to improve operational performance and decision-making. By shifting its hugely data-intensive services to the cloud and integrating them with advanced analytics, it was not only able to deliver billions of highly accurate forecasts, it was also able to derive added value from a previously untapped resource, creating new value-added services and revenue streams.

Another great example is Shop Direct. As one of the UK's largest online retailers, delivering more than 48 million products a year and welcoming over a million daily visitors across a variety of online and mobile platforms, its move to a hybrid cloud model increased flexibility and meant it was able to respond more quickly to changes in demand as it continues to grow.

With a number of digital department stores, including its £800m flagship brand Very.co.uk, the cloud underpins a variety of analytics, mobile, social and security offerings that enable Shop Direct to improve its customers' online shopping experience while empowering its workforce to collaborate more easily.

Smart use of the cloud has allowed Shop Direct to continue building a pre-eminent position in the digital and mobile world; it has been able to innovate and to be better prepared for challenges such as high site traffic around Black Friday and the Christmas period.

In the non-conformist, shifting and disruptive landscape of today's businesses, innovation is the only surety of maintaining a pre-eminent position and setting a company apart from its competitors – as such, the place of the cloud as the marketplace for this innovation is assured.

Developments in big data, analytics and IoT highlight the pivotal importance of cloud environments as enablers of innovation, while cognitive capabilities like Watson (in conjunction with analytics engines), add informed intelligence to business processes, applications and customer touch points along every step of the business journey.

While many companies recognise that migration to the cloud is now a necessity, it is more important to be aware that the true, long-term business value can only be derived from what you actually operate in the cloud, and this is the true challenge for businesses and their IT departments as we look towards 2016 and beyond.

Written by Sebastian Krause, VP IBM Cloud Europe