Category archive: Opinion

The Six Myths of Hybrid IT

It is time to dispel some hybrid cloud myths

Many companies face an ongoing dilemma: How to get the most out of legacy IT equipment and applications (many of which host mission-critical applications like their ERP, accounting/payroll systems, etc.), while taking advantage of the latest technological advances to keep their company competitive and nimble.

The combination of cloud and third-party datacentres has caused a shift in the way we approach building and maintaining our IT infrastructure. A best-of-breed approach previously meant a blending of heterogeneous technology solutions into an IT ecosystem. It now focuses on the services and technologies that remain on-premises and those that ultimately will be migrated off-premises.

A hybrid approach to IT infrastructure enables internal IT groups to support legacy systems while retaining the flexibility to optimise service delivery and performance through third-party providers. Reconciling resources leads to improved business agility, more rapid delivery of services, exposure to innovative technologies, and increased network availability and business uptime, without having to make the budget case for CAPEX investment. However, despite its many benefits, a blended on-premises and off-premises operating model is fraught with misconceptions and myths, perpetuating a "what-if?" mentality that often stalls innovation and business initiatives.

Here are the facts behind some of the most widespread hybrid IT myths:

Myth #1: “I can do it better myself.”

If you’re in IT and not aligned with business objectives, you may eventually find yourself out of a job. The hard truth is that you can’t be better at everything. Technology is driving change so rapidly that almost no one can keep up.

So while it's not always easy to say "I can't do everything as well as someone else can," it's perfectly acceptable to stick to what you're good at and then evaluate other opportunities to evolve your business. In this case, that means outsourcing select IT functions where you can realise improved capabilities and value for your business. Let expert IT outsourcing providers do what they do best, managing IT infrastructure for companies 24/7/365, while you concentrate on the IT strategy that keeps your business competitive and strong.

Myth #2: “I’ll lose control in a hybrid IT environment.”

A functional IT leader with responsibility for infrastructure that management wants to outsource may fear the loss of his or her team's jobs. In reality, the day-to-day management of the company's infrastructure might be better handled off-premises, freeing the IT leader to focus on the strategy and direction of the IT functions that differentiate the business in order to stay ahead of fast-moving market innovation and customer demands.

In the early days of IT, it was one size fits all. Today, an IT leader has more control than ever. For example, you can buy a service that comes with little management and spin resources up using embedded API interfaces. The days when you bought a managed service and had no control or visibility over it are gone. With the availability of portals, plug-ins and platforms, internal teams retain control whether they want their environment managed by a third party or want to manage it outright on their own.

Myth #3: “Hybrid IT is too hard to manage.”

Do you want to differentiate your IT capabilities as a means to better support the business? If you want to manage everything on your own, you need the people and processes in place to do so. An alternative is to partner with a service provider offering multiple off-premises options and a more agile operating model than doing all of it yourself. Many providers bundle management interfaces, orchestration, automation and portals with their offerings, giving IT complete transparency and granular control over the outsourced solution. These portals are also API-enabled, so they can be integrated with any internal tools you have already invested in and provide end-to-end visibility across the entire hybrid environment.
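
As a rough illustration of that API-enabled integration, the sketch below polls a provider portal for the state of outsourced services and prints it, ready to be fed into an internal dashboard or CMDB. The endpoint, token and response fields are hypothetical placeholders, not any specific vendor's interface.

```python
import requests

# Hypothetical provider portal API; endpoint, token and response fields
# are placeholders rather than any real vendor's interface.
PORTAL_API = "https://portal.example-provider.com/api/v1/services"
TOKEN = "replace-with-your-api-token"

def fetch_outsourced_inventory():
    """Pull the state of outsourced services so it can be merged into
    an internal CMDB or monitoring dashboard."""
    resp = requests.get(PORTAL_API,
                        headers={"Authorization": f"Bearer {TOKEN}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for svc in fetch_outsourced_inventory():
        print(svc.get("name"), svc.get("status"))
```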

Myth #4: “Hybrid IT is less secure than my dedicated environment.”

In reality, today’s IT service providers are likely more compliant than your business could ever achieve on its own. To be constantly diligent and compliant, a company may need to employ a team of internal IT security professionals to manage day-to-day security concerns. Instead, it makes sense to let a team of external experts worry about data security and provide a “lessons-learned” approach to your company’s security practice.

There are cases where insourcing makes sense, especially when it comes to the business’ mission-critical applications. Some data should absolutely be kept as secure and as close to your users as possible. However, outsourced infrastructure is increasingly becoming more secure because providers focus exclusively on the technology and how it enables their users. For example, most cloud providers will encrypt your data and hand the key to you only. As a result, secure integration of disparate solutions is quickly becoming the rule, rather than the exception.

Myth #5: “Hybrid IT is inherently less reliable than the way we do it now.”

Placing computing closer to users and, in parallel, spreading it across multiple locations results in a more resilient application than running it in a single, fixed location. In fact, the more mission-critical the application, the more you should spread it across multiple providers and locations. For example, if you build an application for the cloud, you are not relying on any single component being up for the application to remain available. This "shared nothing" approach to infrastructure and application design not only makes your critical applications more available, it also adds a level of scalability that is not possible with traditional in-house-only approaches.
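
As a simplified sketch of the "shared nothing" idea, the client below treats several independently deployed copies of an application as interchangeable and routes to the first healthy one. The endpoint URLs are hypothetical placeholders; a real deployment would more likely use DNS or a load balancer for this.

```python
import requests

# Hypothetical, independently hosted copies of the same application,
# e.g. one per cloud provider or region. No single endpoint has to be up.
ENDPOINTS = [
    "https://eu-west.app.example.com/health",
    "https://us-east.app.example.com/health",
    "https://on-prem.app.example.com/health",
]

def first_available(endpoints):
    """Return the first endpoint that answers its health check."""
    for url in endpoints:
        try:
            if requests.get(url, timeout=2).ok:
                return url
        except requests.RequestException:
            continue  # this copy is down or unreachable; try the next one
    raise RuntimeError("no application copy is currently reachable")

if __name__ == "__main__":
    print("routing traffic to", first_available(ENDPOINTS))
```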

Myth #6: “This is too hard to budget for.”

Today’s managed service providers can perform budgeting as well as reporting on your behalf. Again, internal IT can own this, empowering it to recommend whether to insource or outsource a particular aspect of infrastructure based on the needs of the business. However, in terms of registration, costs, and other considerations, partnering with a third-party service can become a huge value-add for the business.

Adopting a hybrid IT model lowers the risk to your IT resources and the business they support. You don't have to make huge investments all at once. You can start incrementally, picking the options that help you in the short term and, as you gain experience, give you the opportunity to jump in with both feet later. Hybrid IT lets you evolve your infrastructure as your business needs change.

If IT and technology have taught us anything, it's that you can't afford to let fear prevent your company from doing what it must to remain competitive.

Written by Mike Bennett, vice president global datacentre acquisition and expansion, CenturyLink EMEA

Will Microsoft’s ‘walled-garden’ approach to virtualisation pay off?

Microsoft’s approach to virtualisation: Strategic intent or tunnel vision?

While the data centre of old played host to an array of physical technologies, the data centre of today and of the future is based on virtualisation, public or private clouds, containers, converged servers, and other forms of software-defined solutions. Eighty percent of workloads are now virtualised with most companies using heterogeneous environments.

As the virtual revolution continues, new industry players are emerging, ready to take on the market's dominant forces. Now is the time for the innovators to strike and stake a claim in this lucrative and growing movement.

Since its inception, VMware has been the 800 lb gorilla of virtualisation. Yet even VMware's market dominance is under pressure from open source offerings like KVM, RHEV-M, OpenStack, Linux Containers and Docker. There can be no doubting the challenge these open virtualisation options present to VMware; among other things, they feature REST APIs that allow easy integration with other management tools and applications, regardless of platform.

I see it as a form of natural selection; new trends materialise every few years and throw down the gauntlet to prevailing organisations – adapt, innovate or die. Each time this happens, some new players will rise and other established players will sink.

VMware is determined to remain afloat and has responded to the challenge by creating an open REST API for vSphere and other components of the VMware stack. While I don't personally believe that this attempt has resulted in the most elegant API, there can be no arguing that it is at least accessible and well documented, allowing for integration with almost anything in a heterogeneous data centre. For that, I must applaud them.
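
That accessibility is easy to see from any scripting language. The sketch below assumes vSphere 6.5 or later with the Automation REST API enabled; the host and credentials are placeholders, and endpoint paths may differ between versions.

```python
import requests

# Assumes vSphere 6.5+ with the Automation REST API enabled.
# Host and credentials are placeholders.
VCENTER = "https://vcenter.example.com"
AUTH = ("administrator@vsphere.local", "secret")

session = requests.Session()
session.verify = False  # lab-only shortcut; use a proper CA bundle in production

# Create an API session; the token comes back in the "value" field.
token = session.post(f"{VCENTER}/rest/com/vmware/cis/session", auth=AUTH).json()["value"]
session.headers["vmware-api-session-id"] = token

# List VMs. The same call works from Linux, macOS or Windows,
# which is the point of a platform-neutral REST API.
for vm in session.get(f"{VCENTER}/rest/vcenter/vm").json()["value"]:
    print(vm["name"], vm["power_state"])
```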

So what of the other giants of yore? Will Microsoft, for example, retain its regal status in the years to come? Not if the Windows-specific API it has lumbered itself with is anything to go by! While I understand why Microsoft has aspired to take on VMware in the enterprise data centre, its API, utilising WMI (Windows Management Instrumentation), only runs on Windows! As far as I’m concerned this makes it as useless as a chocolate teapot. What on earth is the organisation’s end-goal here?
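
To make that constraint concrete, here is roughly what driving Hyper-V through its management API looks like, using the Python wmi package (which wraps pywin32 and therefore only runs on a Windows host). The namespace and class names come from the Hyper-V v2 WMI provider; treat this as a sketch rather than a full management client.

```python
# Requires Windows: the wmi package wraps pywin32/COM, so this script
# cannot be run from a Linux or macOS management host at all.
import wmi

# Hyper-V exposes its management interface in the root\virtualization\v2 namespace.
conn = wmi.WMI(namespace=r"root\virtualization\v2")

# Msvm_ComputerSystem covers the host plus every VM it runs.
for system in conn.Msvm_ComputerSystem():
    print(system.ElementName, system.EnabledState)
```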

There are two possible answers that spring to my mind: first, that this is a strategic move; or second, that Microsoft's eyesight is failing.

Could the Windows-only approach to integrating with Microsoft’s Hyper-V virtualisation platform be an intentional strategic move on its part? Is the long-game for Windows Server to take over the enterprise data centre?

In support of this, I have been taking note of Microsoft sales reps encouraging customers to switch from VMware products to Microsoft Hyper-V. In one exchange on Microsoft's TechNet forum, a user asked how to integrate Hyper-V with a product running on Linux. A Microsoft representative responded, albeit in a veiled way, that you can only interface with Hyper-V using WMI, which only runs on Windows…

But what if this isn’t one part of a much larger scheme? The only alternative I can fathom then is that this is a case of extreme tunnel vision, the outcome of a technology company that still doesn’t really get the tectonic IT disruptions and changes happening in the outside world. If it turns out that Microsoft really does want Windows Server to take over the enterprise data centre…well, all I can say is, good luck with that!

Don't get me wrong. I am a great believer in competition; it is vital for the progression of both technology and markets. And it certainly is no bad thing when an alpha gorilla faces a troop challenger. It's what stops them getting stale, invigorating them and forcing them to prove why they deserve their silver back.

In reality, Microsoft is probably one of the few players that can seriously threaten VMware's near-monopolistic market dominance of server virtualisation. But it won't do it like this. So unless new CEO Satya Nadella's company moves to provide platform-neutral APIs, I am sad to say that its offering will be relegated to the museum of IT applications.

To end with a bit of advice to all those building big data and web-scale applications, with auto-scaling orchestration between applications and virtualisation hypervisors: skip Hyper-V and don’t go near Microsoft until it “gets it” when it comes to open APIs.

Written by David Dennis, vice president, marketing & products, GroundWork

Will datacentre economics paralyse the Internet of Things?

The way data and datacentres are managed may need to change drastically in the IoT era

The statistics predicting what the Internet of Things (IoT) will look like and when it will take shape vary widely. Whether you believe there will be 25 billion or 50 billion Internet-enabled devices by 2020, there will certainly be far more devices than there are today. Forrester has predicted that 82% of companies will be using IoT applications by 2017. But unless CIOs pay close attention to the economics of the datacentre, they will struggle to be successful. The sheer volume of data we expect to manage across these IoT infrastructures could paralyse companies and their investments in technology.

The Value of Information is Relative

ABI Research has calculated that there will be 16 Zettabytes of data by 2020. Consider this next to another industry estimate of 44 Zettabytes by 2020, while others have said that humanity had produced only 2.7 Zettabytes of data up to 2013. Bottom line: the exponential growth in data is huge.

The natural first instinct for any datacentre manager or CIO is to consider where he or she will put that data. Depending on the industry sector, there are regulatory and legal requirements which mean companies will have to be able to collect, process and analyse runaway amounts of data. By 2019, another estimate suggests, that could mean processing 2 Zettabytes a month.

One way to react is to simply buy more hardware. From a database perspective the traditional approach would be to create more clusters in order to manage such huge stores of data. However, a critical element of IoT is that it’s based on low-cost technology, and although the individual pieces of data have a value, there is a limit to that value. For example, you do not need to be told every hour by your talking fridge that you need more milk or be informed by your smart heating system what the temperature is at home.  While IoT will lead to smart devices everywhere, its value is relative to the actionable insight it offers.

A key element of the cost-benefit equation that needs more consideration is the impact of investment requirements at the backend of an IoT data infrastructure. As the IoT creates a world of smart devices distributed across networks, CIOs have to decide whether collection, storage and analytics happen locally, near the device, or are driven to a centralised management system. There can be some logic to keeping the intelligence local, depending on the application, because it can speed up the process of providing actionable insight. The company could use low-cost, commoditised devices to collect information, but it will still become prohibitively expensive if the company has to buy vast numbers of costly database licences to ensure the system performs efficiently, never mind the cost of integrating data from such a distributed architecture.
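
As a rough sketch of what keeping the intelligence local can look like, the snippet below filters raw readings at the edge and only forwards events that cross a threshold to a central system. The sensor source, threshold and central endpoint are assumptions for illustration only.

```python
import json
import random
import time
import urllib.request

# Assumed values for illustration only.
CENTRAL_ENDPOINT = "https://central.example.com/ingest"
ALERT_THRESHOLD = 8.0  # e.g. degrees above the expected temperature

def read_sensor():
    """Stand-in for a real sensor read."""
    return random.gauss(5.0, 2.0)

def forward(event):
    """Send only actionable events upstream, not every raw reading."""
    data = json.dumps(event).encode()
    req = urllib.request.Request(CENTRAL_ENDPOINT, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

# Simple edge loop: most readings are discarded locally and never
# touch the central database at all.
while True:
    value = read_sensor()
    if value > ALERT_THRESHOLD:
        forward({"reading": value, "ts": time.time()})
    time.sleep(60)
```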

As a result, the Internet of Things represents a great opportunity for open source software, thanks to the cost effectiveness of open source versus traditional database solutions. Today, open source-based databases have the functionality, scalability and reliability to cope with the explosion in data that comes with the IoT while transforming the economics of the datacentre, a point which Gartner's recent Open Source Database Management report endorsed: "Open source RDBMSs have matured and today can be considered by information leaders, DBAs and application development management as a standard infrastructure choice for a large majority of new enterprise applications."

The Cost of Integrating Structured and Unstructured

There are other key considerations when calculating the economic impact of the IoT on the datacentre. The world of IoT will be made up of a wide variety of data, structured and unstructured. Already, the need to work with unstructured data has given rise to NoSQL-only niche solutions. The deployment of these types of databases, spurred on by the rise of Internet-based applications and their popularity with developers, is proliferating because they offer the affordability of open source. Yet their use is leading to operational and integration headaches, as data silos spring up all around the IT infrastructure due to limitations in these NoSQL-only solutions. In some cases, such as where ACID properties are required and robust DBA tools are available, it may be more efficient to use a relational database with NoSQL capabilities built in and get the best of both worlds, rather than create yet another data silo. In other cases, such as for very high-velocity data streams, keeping the data in these newer data stores and integrating them may be optimal.
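
In Postgres terms, "NoSQL capabilities built in" usually means the jsonb document type sitting alongside ordinary relational columns in the same table. A minimal sketch, assuming a local database and the psycopg2 driver; the table and column names are illustrative.

```python
import psycopg2

# Connection details are placeholders.
conn = psycopg2.connect("dbname=iot user=postgres")
cur = conn.cursor()

# Structured columns and a schemaless jsonb payload side by side.
cur.execute("""
    CREATE TABLE IF NOT EXISTS device_readings (
        device_id   text        NOT NULL,
        recorded_at timestamptz NOT NULL DEFAULT now(),
        payload     jsonb       NOT NULL
    )
""")

cur.execute(
    "INSERT INTO device_readings (device_id, payload) VALUES (%s, %s::jsonb)",
    ("fridge-42", '{"temperature": 7.5, "alert": true}'),
)

# Relational filter and document filter in one ordinary SQL query.
cur.execute("""
    SELECT device_id, payload->>'temperature'
    FROM device_readings
    WHERE payload @> '{"alert": true}'
""")
print(cur.fetchall())
conn.commit()
```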

A key priority for every CIO is integrating information as economically as possible so organisations can create a complete picture of their business and their customers. The Postgres community has been at the forefront of addressing this challenge with the creation of Foreign Data Wrappers (FDWs), which can integrate data from disparate sources like MongoDB, Hadoop and MySQL. FDWs link external data stores to Postgres databases so users can access and manipulate data from foreign sources as if it were part of the native Postgres tables. Such simple, inexpensive solutions for connecting the new data streams emerging along with the Internet of Everything will be critical to unlocking value from data.
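
In practice an FDW setup is a handful of SQL statements, shown here being run through psycopg2. This sketch assumes the mongo_fdw extension is installed on the Postgres server and a MongoDB instance is reachable; addresses, credentials and the column list are illustrative, and option names vary slightly between FDWs.

```python
import psycopg2

# Connection details are placeholders; mongo_fdw must already be installed on the server.
conn = psycopg2.connect("dbname=analytics user=postgres")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS mongo_fdw")

# Describe the remote MongoDB instance and how to authenticate against it.
cur.execute("""
    CREATE SERVER mongo_events FOREIGN DATA WRAPPER mongo_fdw
        OPTIONS (address '10.0.0.5', port '27017')
""")
cur.execute("""
    CREATE USER MAPPING FOR postgres SERVER mongo_events
        OPTIONS (username 'reader', password 'secret')
""")

# Expose a MongoDB collection as if it were a local Postgres table.
cur.execute("""
    CREATE FOREIGN TABLE mongo_sensor_events (
        device_id text,
        reading   float8
    ) SERVER mongo_events OPTIONS (database 'iot', collection 'events')
""")

# Foreign data can now be queried (and joined to native tables) with plain SQL.
cur.execute("SELECT device_id, avg(reading) FROM mongo_sensor_events GROUP BY device_id")
print(cur.fetchall())
conn.commit()
```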

The Internet of Things promises a great transformation in the ability of enterprises to holistically understand their business and customer environment in real time and deliver superior customer engagement.  It is critical, though, that CIOs understand the economic impact on their datacentre investments.  The IoT creates a number of new challenges, which can be addressed using the right technology strategy.

Written by Pierre Fricke, vice president of product, EnterpriseDB

Cloud banking: lit from within

Financial services companies are warming to cloud services

In a world where, as John Schlesinger, chief enterprise architect at Temenos, argues, servers are about to stop getting cheaper, the advantages of cloud computing in terms of cost and customer experience look more compelling than ever. In the banking market, however, the spread of cloud systems has been slower than elsewhere due to factors including concern about data security, uncertainty about the position regulators will take on cloud technologies and the challenge of managing migration from the in-house, legacy IT systems that currently run banks’ critical functions.

So just how hot is cloud banking right now? A quick temperature check of the financial services industry’s attitude to cloud banking in April triggered a warm response.

There are two sides to every story and never more so than when discussing with banks the shift from in-house technology to on-demand cloud-based services. So in Temenos’ recent survey Cloud-banking heat map, we asked two key questions: what are the benefits you seek from cloud services; and what, if any, are the barriers to adoption you face?

Echoing the results of a similar Ovum survey, The Critical Role for Cloud in the Transformation of Retail Banks, last year, our results show that cloud is no longer just about cost reduction: 50 per cent of respondents see cloud as a means to adopt new technology, and 34 per cent reported the ability to add new business functionality more quickly as a top benefit. This is a very encouraging sign that banks are seeing the adoption of cloud technology as a means to support the delivery of new products and services.

That is not to say that the long-term cost benefits of cloud services are any less important. In fact, the highest-scoring benefit sought from the cloud, cited by 58 per cent of respondents, is to reduce overall IT costs. This is not at all surprising given the profitability hit banks have taken since the financial crisis; cost savings are an obvious driver of a cloud-based IT strategy.

The top reported barriers to adopting cloud services are concerns over data security (55 per cent) and reliability and availability (47 per cent), which are common challenges for financial institutions used to managing and maintaining their own IT. This highlights the need for cloud providers to do more to demonstrate to the industry the robustness of their security controls and availability metrics, as paradoxically we may find that security and reliability are a benefit rather than a barrier to cloud.

Concern over regulatory compliance is another top factor against cloud banking, cited by 45 per cent of respondents. This is no surprise in such a heavily regulated sector, and there is no quick fix, but when talking to lawyers in this space, the feeling is that with a high level of due diligence on the banks' part, and a transparent and collaborative approach on the cloud provider's part, a solution can be found that meets all parties' needs, including those of the regulator.

In response to this, we see cloud software vendors, their platform partners and industry organisations working closely to address security concerns. Co-ordinated efforts such as the Cloud Security Alliance and its Cloud Controls Matrix have set out security principles for cloud vendors and assist prospective customers in assessing security risk at individual cloud providers. Cloud providers themselves are investing heavily in compliance and security expertise, to the extent that many observers argue that a well-implemented migration to the cloud can result in higher levels of security than an in-house system, as well as access to real-time reporting mechanisms that are often superior, too.

As the industry continues to warm up to cloud banking, we will see the same issues raised and discussed again and again. And rightly so: the only way to support the banking industry in any leap in technology and faith is by addressing issues and challenges openly until all parties are convinced of its viability.

However, while clear challenges remain to more rapid adoption of cloud-based technology in banking, it is clear that change is happening. Already, analysts at Gartner predict that by 2016, more than 60 per cent of global banks will process the majority of their transactions in the cloud. Many are already moving less sensitive functions there and developing strategies to enable them to capture the benefit of cloud-based systems for their core operations.

Written by David Arnott, chief executive of Temenos

Is force of habit defining your hybrid cloud destiny?

Experience breeds habit, which isn’t necessarily the best thing strategically

I’ve been playing somewhat of a game over recent months.  It’s a fun game for all the family and might be called “Guess my job”.  It’s simple to play.  All you need to do is ask someone the question; “What is a hybrid cloud?” then based upon their answer you make your choice.  Having been playing this for a while I’m now pretty good at being able to predict their viewpoint from their job role or vice versa.

And the point of all this? Simply that people's viewpoints are constrained by their experiences and what keeps them busy day-to-day, so they often miss an opportunity to do something different. For those people working day-to-day in a traditional IT department, keeping systems up and running, hybrid cloud is all about integrating an existing on-site system with an off-site cloud. This is a nice, easy one to grasp in principle, but the reality is somewhat harder to realise.

The idea of connecting an on-site System of Record to a cloud-based System of Engagement, pulling data from both to generate new insights, is conceptually well understood. That said, organisations making production use of such arrangements are few and far between. An example would be combining historical customer transaction information with real-time geospatial, social and mobile data, then applying analytics to generate new insights which uncover new sales potential. For many organisations, though, the challenge in granting access to the existing enterprise systems is simply too great. Security concerns, the ability to embrace the speed of change that is required and the challenge of extracting the right data in a form that is immediately usable by the analytical tools may simply be a hurdle too high. Indeed, many clients I've worked with have stated that they're simply not going to do this. They understand the benefits, but the pain they see themselves having to go through to get them makes this unattractive to pursue.

So, if this story aligns with your view of hybrid cloud and you’ve already put it in the “too hard” box then what is your way forward?

For most organisations, no single cloud provider is going to provide all of the services they might want to consume. Implicitly, then, if they need to bring data from these disparate cloud services together, there is a hybrid cloud use case: linking cloud to cloud. Even in the on-site to off-site hybrid cloud case, there are real differences between a static relationship and one where you are dynamically bursting in and out of off-site capacity. Many organisations are looking to cloud as a more effective and agile platform for backup and archiving or for disaster recovery. All of these are hybrid cloud use cases too, but if you've already written off 'hybrid' then you're likely missing very real opportunities to do what is right for the business.

Regardless of the hybrid cloud use case, you need to keep in mind three key principles:

  1. Portability – the ability to run and consume services and data from wherever it is most appropriate to do so, be that cloud or non-cloud, on-site or off-site.
  2. Security, visibility and control – to be assured that end-to-end, regardless of where the ‘end’ is, you are running services in such a way that they are appropriately secure, well managed and their characteristics are well understood.
  3. Developer productivity – developers should be focused on solving business problems and not be constrained by needing to worry about how or when supporting infrastructure platforms are being deployed.  They should be able to consume and integrate services from many different sources to solve problems rather than having to create everything they need from scratch.

Business applications need to be portable so that they can both run on, and consume other services from, wherever is most appropriate. To do that, your developers need to be unconstrained by the underlying platform(s) so they can develop for any cloud or on-site IT platform. All this needs to be done in a way that allows enterprise controls, visibility and security to be extended to the cloud platforms being used.

If you come from that traditional IT department background, you'll be familiar with the processes that are in place to ensure that systems are well managed, change is controlled and service levels are maintained. These processes may not be compatible with the ways that clouds open up new opportunities. This leads to the need to look at creating a "two-speed" IT organisation to provide the rigour where needed for the Systems of Record whilst enabling rapid change and delivery in the Systems of Engagement space.

Cloud generates innovation and hence diversity.  Economics, regulation and open communities drive standardization and it is this, and in particular open standards, which facilitates integration in all of these hybrid cases.

So, ask yourself: with more than 65 per cent of enterprise IT organisations making commitments on hybrid cloud technologies before 2016, are you ensuring that your definitions – and hence your technology choices – reflect future opportunities rather than past prejudices?

Written by John Easton, IBM distinguished engineer and leading cloud advisor for Europe

Lessons from the Holborn fire: how disaster recovery as a service helps with business continuity

Disaster recovery is creeping up on the priority list for enterprises

The recent fire in Holborn highlighted an important lesson in business continuity and disaster recovery (BC/DR) planning: when a prompt evacuation is necessary ‒ whether because of a fire, flood or other disaster ‒ you need to be able to relocate operations without advance notice.

The fire, which was caused by a ruptured gas main, led to the evacuation of 5,000 people from nearby buildings, and nearly 2,000 customers experienced power outages. Some people lost Internet and mobile connectivity as well.

While firefighters worked to stifle the flames, restaurants and theatres were forced to turn away patrons and cancel performances, with no way to preserve their revenue streams. The numerous legal and financial firms in the area, at least, had the option to relocate their business operations. Some did, relying on cloud-based services to resume their operations remotely. But those who depended on physical resources on-site were, like the restaurants and theatres, forced to bide their time while the fire was extinguished.

These organisations' disparate experiences reveal the increasing role of cloud-based solutions ‒ particularly disaster recovery as a service (DRaaS) solutions ‒ in BC/DR strategies.

The benefits of DRaaS

Today, an increasing number of businesses are turning to the cloud for disaster recovery. The DRaaS market is expected to experience a compounded annual growth rate of 55.2 per cent from 2013 to 2018, according to global research company MarketsandMarkets.

The appeal of DRaaS solutions is that they provide the ability to recover key IT systems and data quickly, which is crucial to meeting your customers’ expectations for high availability. To meet these demands within the context of a realistic recovery time frame, you should establish two recovery time objectives (RTOs): one for operational issues that are specific to your individual environment (e.g., a server outage) and another for regional disasters (e.g., a fire). RTOs for operational issues are typically the most aggressive (0-4 hours). You have a bit more leeway when dealing with disasters affecting your facility, but RTOs should ideally remain under 24 hours.

DRaaS solutions’ centralised management capabilities allow the provider to assist with restoring not only data but your entire IT environment, including applications, operating systems and systems configurations. Typically systems can be restored to physical hardware, virtual machines or another cloud environment. This service enables faster recovery times and eases the burden on your in-house IT staff by eliminating the need to reconfigure your servers, PCs and other hardware when restoring data and applications. In addition, it allows your employees to resume operations quickly, since you can access the environment from anywhere with a suitable Internet connection.

Scalability is another key benefit of DRaaS solutions. According to a survey by 451 Research, the amount of data that storage professionals manage has grown from 215 TB in 2012 to 285 TB in 2014. To accommodate this storage growth, companies storing backups on physical servers have to purchase and configure additional servers. Unfortunately, increasing storage capacity can be hindered by companies' shrinking storage budgets and, in some cases, a lack of available rack space.

DRaaS addresses this issue by allowing you to scale your storage space as needed. For some businesses, the solution is more cost-effective than dedicated on-premises data centres or colocation solutions, because cloud providers typically charge only for the capacity used. Redundant data elimination and compression maximise storage space and further minimise cost.
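
As a back-of-the-envelope illustration of the pay-for-what-you-use point, the figures below are entirely hypothetical; plug in your own provider's pricing and measured deduplication ratio.

```python
# Hypothetical numbers for illustration only; real prices and ratios vary widely.
protected_data_tb = 20.0        # logical data you need to protect
dedupe_compression_ratio = 3.0  # assumed effective reduction from dedupe + compression
price_per_tb_month = 30.0       # assumed cloud backup price per stored TB per month

stored_tb = protected_data_tb / dedupe_compression_ratio
monthly_cost = stored_tb * price_per_tb_month
print(f"~{stored_tb:.1f} TB actually stored, ~{monthly_cost:.0f} per month, no idle hardware to buy")
```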

When data needs to be maintained on-site

Standard DRaaS delivery models are able to help many businesses meet their BC/DR goals, but what if your organisation needs to keep data or applications on-site? Perhaps you have rigorous RTOs for specific data sets, and meeting those recovery time frames requires an on-premise backup solution. Or maybe you have unique applications that are difficult to run in a mixture of physical and virtual environments. In these cases, your business can leverage a hybrid DRaaS strategy which allows you to store critical data in an on-site appliance, offloading data to the cloud as needed.

You might be wondering, though, what happens to the data stored in an appliance in the event that you have to evacuate your facility. The answer depends on the type of service the vendor provides for the appliance. If you’re unable to access the appliance, recovering the data would require you to either access an alternate backup stored at an off-site location or wait until you regain access to your facility, assuming it’s still intact. For this reason, it’s important to carefully evaluate potential hybrid-infrastructure DRaaS providers.

DRaaS as part of a comprehensive BC/DR strategy

In order for DRaaS to be most effective for remote recovery, the solution must be part of a comprehensive BC/DR strategy. After all, what good is restored data if employees don’t have the rest of the tools and information they need to do their jobs? These additional resources could include the following:

•         Alternate workspace arrangements

•         Provisions for backup Internet connectivity

•         Remote network access solutions

•         Guidelines for using personal devices

•         Backup telephony solution

The Holborn fire was finally extinguished 36 hours after it erupted, but not before landing a blow on the local economy to the tune of £40 million. Businesses using cloud services as part of a larger business continuity strategy, however, were able to maintain continuity of operations and minimise their lost revenue. With the right resources in place, evacuating your building doesn’t have to mean abandoning your business.

By Matt Kingswood, head of managed services, IT Specialists (ITS)

G-Cloud: Much has been achieved, but the programme still needs work

The UK government is ahead of the curve in cloud, but work still needs doing

Thanks to G-Cloud, the once stagnant public sector IT marketplace, previously dominated by a small number of large incumbent providers, is thriving. More and more SMEs are listing their assured cloud services on the framework, which is driving further competition and forcing down costs for public sector organisations, ultimately benefitting each and every UK taxpayer. But the programme still needs work.

G-Cloud initially aimed to achieve annual savings of more than £120m and to account for at least half of all new central Government spend by this year. The Government Digital Service has already estimated that G-Cloud is yielding efficiencies of at least 50 per cent, comfortably exceeding the initial target set when the Government’s Cloud Strategy was published in 2011.

According to the latest figures, total reported G-Cloud sales to date have now exceeded £591m, with 49 per cent of total sales by value, and 58 per cent by volume, awarded to SMEs. 76 per cent of total sales by value were through central Government and 24 per cent through the wider public sector, so while significant progress has been made, more work is clearly needed to educate local Government organisations on the benefits of G-Cloud and assured cloud services.

To provide an example of the significant savings achieved by a public sector organisation following a move to the cloud, the DVLA's 'View driving record' platform, hosted on GOV.UK, opened up secure online access to the driving records of up to 40 million drivers for the insurance industry, which it is hoped will help to reduce premiums. Due to innovative approaches including cloud hosting, the DVLA managed to save 66 per cent against the original cost estimate.

Contracts held within the wider public sector with an estimated total value of over £6bn are coming to an end.  Therefore continued focus must be placed on disaggregating large contracts to ensure that all digital and ICT requirements that can be based on the cloud are based on the cloud, and sourced from the transparent and vendor-diverse Government Digital Marketplace.

Suppliers, especially SMEs and new players who don’t have extensive networks within the sector, also need much better visibility of downstream opportunities. Currently, G-Cloud is less transparent than conventional procurements in this respect, where pre-tender market engagements and prior information notices are now commonplace and expected.

However, where spend controls cannot be applied, outreach and education must accelerate, and G-Cloud terms and conditions must also meet the needs of the wider public sector. The G-Cloud two year contract term is often cited as a reason for not procuring services through the framework, as is the perceived inability for buyers to incorporate local, but mandatory terms and conditions.

The Public Contracts Regulations 2015 introduced a number of changes to EU procurement regulations, and implemented the Lord Young reforms, which aim to make public procurements more accessible and less onerous for SMEs. These regulations provide new opportunities for further contractual innovation, including (but not limited to) dynamic purchasing systems, clarification of what a material contract change means in practice, and giving buyers the ability to take supplier performance into account when awarding a contract.

The G-Cloud Framework terms and conditions must evolve to meet the needs of the market as a whole, introducing more flexibility to accommodate complex legacy and future requirements, and optimising the opportunities afforded by the new public contract regulations. The introduction of the Experian score as the sole means of determining a supplier’s financial health in the G-Cloud 6 Framework is very SME unfriendly, and does not align to the Crown Commercial Service’s own policy on evaluation of financial stability. The current drafting needs to be revisited for G-Cloud 7.

As all parts of the public sector are expected to be subject to ongoing fiscal pressure, and because digitising public services will continue to be a focus for the new Conservative Government, wider public sector uptake of G-Cloud services must continue to be a priority. Looking to the future of G-Cloud, the Government will need to put more focus on educating buyers about G-Cloud procurement and the very real opportunities that G-Cloud can bring, underlined by the many success stories to date, and on ensuring the framework terms and conditions are sufficiently flexible to support the needs of the entire buying community. G-Cloud demonstrates what is possible when Government is prepared to be radical and innovative, and in order to build on the significant progress that has been made, we hope that G-Cloud will be made a priority over the next five years.

Written by Nicky Stewart, commercial director at Skyscape Cloud Services

Google’s IoT land grab: will Brillo help or hinder?

Google’s having a go at the Internet of Things, but how will it sit with developers and device manufacturers?

The long-rumoured Project Brillo, Google's answer to the Internet of Things, was finally unveiled this week at the company's annual I/O conference, and while the project shows promise it comes at a time when device manufacturers and developers are increasingly being forced to choose between IoT ecosystems. Contrary to Google's stated aims, Brillo could – for the same reason – hinder interoperability and choice in IoT rather than facilitate it.

It’s difficult to see Project Brillo as anything more than it really is – an attempt at grabbing highly sought-after ground in the IoT space. It has two key components. There’s Brillo, which is essentially a mini Android OS (made up of some of the services the fully fledged OS abstracts) which Google claims can run on tiny embeddable IP-connected devices (critically, the company hasn’t revealed what the minimum specs for those devices are); and Weave, a proprietary set of APIs that help developers manage the communications layer linking apps on mobile phones to sensors via the cloud.

Brillo will also come with metrics and crash reporting to help developers test and debug their IoT services.

The company claims the Weave programme, which will see manufacturers certify to run Brillo on their embeddable devices in much the same way Google works with handset makers to certify Android-based mobile devices, will help drive interoperability and quality – two things IoT desperately needs.

The challenge is that it's not entirely clear how Google's Brillo will deliver on either front. Full-whack Android is almost a case in point in itself. Despite having had more than a few years to mature, the Android ecosystem is still plagued by fragmentation, which produces its fair share of headaches for developers. As we recently alluded to in an article about Google trying to tackle this problem, developing for a multitude of platforms running Android can be a nightmare; an app running smoothly on an LG G3 can be prone to crashing on a Xiaomi or Sony device because of architectural or resource-constraint differences.

This may be further complicated in the IoT space by the fact that embeddable software is, at least currently, much more difficult to upgrade than Android, likely leading to even more software heterogeneity than one currently finds in the Android ecosystem.

Another thing to consider is that most embeddable IoT devices currently in the market or planned for deployment are so computationally and power-constrained (particularly for industrial applications, which is where most IoT stuff is happening these days) that it’s unclear whether there will be a market for Brillo to tap into anytime soon. This  isn’t really much use for developers – the cohort Google’s trying to go after.

For device manufacturers, the challenge will be whether building to Google's specs will be worth the added cost of building alongside ARM, Arduino, Intel Edison or other IoT architectures. History suggests that it's always cheaper to build to one architecture rather than multiple (which is what's driving standards development in this nascent space). And while Google tries to ease the pain of dealing with different manufacturers on the developer side by abstracting lower-level functions through APIs, it could create a situation where manufacturers have to choose which ecosystem they play in – leading to more fragmentation and, as a result, more frustration for developers. For developers, at least those unfamiliar with Android, it comes at the cost of being locked into a slew of proprietary (or at least Google-owned) technologies and APIs rather than open technologies that could – in a truly interoperable way – weave Brillo and non-Brillo devices together with cloud services and mobile apps.

Don't get me wrong – Google's reasoning is sound. The Internet of Things is the cool new kid on the block, with forecast revenues so vast they could make a grown man weep. There is a fleet of developers building apps and services for Android, and the company has great relationships with pretty much every silicon manufacturer on the planet. It seems reasonable to believe that the company which best captures the embeddable software space stands a pretty good chance of winning out at other levels of the IoT stack. But IoT craves interoperability, choice and standards more than anything, and even in the best of circumstances this can create a tenuous relationship between developers and device manufacturers, whose respective needs stand in opposition. Unfortunately, it's not quite clear whether Brillo or Weave will truly deliver on the needs of either camp.

Five key enterprise PaaS trends to look out for this year

PaaS will see a big shakeup this year according to Rene Hermes, general manager EMEA, Apprenda

The last year has shown that a growing number of enterprises are now choosing Platform as a Service (PaaS) ahead of Infrastructure as a Service (IaaS) as the cornerstone of their private/hybrid cloud strategy. While the enterprise cloud market has obviously experienced a substantial amount of change over the last year, the one thing that’s certain is that this will keep on accelerating over the coming months.

Here are five specific enterprise cloud trends that we believe will prove significant throughout the rest of 2015 and beyond.

The PaaS standard will increasingly be to containerise – While we’ve always committed to the concept of a container-based PaaS, we’re now seeing Docker popularise the concept. The broader enterprise world is now successfully vetting the viability of a container-based architecture, and we’re seeing enterprises move from just asking about containers as a roadmap item to now asking for implementation details. This year won’t necessarily see broad-based customer adoption, but we’re anticipating a major shift as PaaS becomes synonymous with the use of containers.

Practical microservices capabilities will win out over empty posturing – It’s probably fair to say that most of the microservices ‘advice’ offered by enterprise PaaS vendors to date has been questionable at best. Too many vendors have simply repackaged the Service-Oriented Architecture conversation and represented it as their microservices positioning. That’s fine, but it hasn’t helped customers at all as vendors have avoided being held accountable to microservices at both a feature and execution level. This isn’t sustainable, and PaaS and cloud vendors will need to deliver practical guidance driven by core enterprise PaaS features if they are to be taken seriously.

Internet of Things will be a key driver for PaaS implementations – For PaaS implementations to be successful, they need to support core business use cases. However, too many PaaS implementations are deployed just to simplify the IT model so that developers can quickly build cloud-enabled applications. That approach simply isn't going to withstand the pressure caused by the increased take-up of innovations such as the Internet of Things, which will require web-service back-ends that are easy to manage, highly available and massively scalable.

Containerising OpenStack services set to create confusion – The move towards OpenStack being deployed within containers is interesting, but we believe adoption will prove slow. With many now expecting container control and management to sit within the PaaS layer, moves such as containerised OpenStack are likely just to cause confusion. Given that PaaS is becoming the dominant form of cloud assembly, containerised IaaS will stall as it conflicts directly with the continued growth in enterprises deploying private/hybrid PaaS – regardless of whether they’ve built IaaS already.

PaaS buyers to dismiss infrastructure-prescriptive solutions – Many PaaS vendors do a lot of marketing around being portable, but in reality many organisations find that this can increase IT risk and drive lock-in by deliberately creating stack dependencies. We're finding PaaS buyers much keener to challenge vendors on their infrastructure portability as early as the proof-of-concept phase. That's because customers want an enterprise PaaS that doesn't favour one infrastructure over another. To ensure this outcome, customers are now using their RFPs and proofs of concept to insist that PaaS vendors demonstrate that their solutions are portable across multiple infrastructure solutions.

By Rene Hermes, general manager EMEA, Apprenda

The channel must embrace cloud to build for the future

The channel needs to embrace cloud services in order to succeed in IT today

With cloud acceptance growing, more and more businesses are dipping their toes in the water and trying out cloud-based services and applications in a bid to work smarter and lower IT expenditure. But with recent research suggesting that four in ten ICT decision-makers feel their deployment fails to live up to the hype, more needs to be done to ensure cloud migration is a success.

This is where the channel has a vital role to play and can bridge the knowledge gap and help end-users reap the benefits that cloud technology can provide.

With the cloud becoming a mainstream solution for businesses and an integral part of an organisation’s IT strategy, the channel is presented with a huge opportunity. Offering cloud services to the market has the potential to yield high revenues, so it’s vital that the channel takes a realistic approach to adopting cloud within its portfolio, and becomes a trusted advisor to the end user.

We have identified three key reasons why resellers shy away from broadening their offering to encompass cloud for new and existing customers. A common barrier is a simple lack of understanding of the cloud and its benefits. However, if a business is keen to adopt this technology, it is vital that its reseller is able to offer advice and guidance to prevent them looking elsewhere.

Research by Opal back in 2010 found that 40 per cent of resellers admit a sense of ‘fear and confusion’ around cloud computing, with the apprehension to embrace the technology also extending to end users, with 57 per cent reporting uncertainty among their customer bases. This lack of education means they are missing out on huge opportunities for their business. A collaborative approach between the reseller and cloud vendor will help to ensure a seamless knowledge transfer followed by successful partnership and delivery.

The sheer upheaval caused by offering the cloud will see some resellers needing to re-evaluate their own business models and strategies to fulfil the need. Those that are unaccustomed to a service-oriented business model may find that becoming a cloud reseller presents strategic challenges as they rely on out-dated business plans and models that don’t enable this new technology. However, failing to evolve business models could leave resellers behind in the adoption curve, whilst their competitors are getting ahead. Working with an already established partner will help resellers re-evaluate their existing business plans to ensure they can offer cloud solutions to their customers.

Resellers are finding it challenging to provide their customers with quick, scalable cloud solutions due to the fact that moving existing technology services into cloud services can be time consuming, and staff will be focused on working to integrate these within the enterprise. However, this issue can easily be resolved by choosing a trusted cloud provider, and in turn building a successful partnership.

Although resellers will come across barriers when looking at providing their customers with cloud services, these shouldn’t get in the way of progression. In order to enter a successful partnership with a cloud provider, there are some important factors resellers should consider before taking the plunge.

Scalability

Before choosing a prospective partner, resellers need to ensure it has the scalability and technology innovation to provide a simple integration of current IT services into the cloud. Recent research has proved that deploying cloud services from three or more suppliers can damage a company’s business agility. UK businesses state a preference for procuring cloud services from a single supplier for ease of management. It’s important to make sure the chosen provider has the ability to provide one fully encompassed cloud service that can offer everything their customers require.

Brand reputation

Choosing a partner that not only offers a best-of-breed private, public and hybrid cloud solution, but also has the ability to provide the reseller with a branded platform, will give an extra layer of credibility to the business for existing customers and future ones as well. Resellers are more likely to choose a cloud provider that gives them control over the appearance of the cloud platform, as well as support and access to its infrastructure.

Industry experience

It's vital to ensure the cloud provider has extensive industry experience and knowledge, with a proven track record of meeting the required criteria of scalability and performance. The partner must have the knowledge to educate and advise the reseller, who can then pass that knowledge on to their own customers.

By not offering the cloud, resellers will miss out on vast opportunities and in turn, lose potential revenue as well as new and existing customers. The channel must now embrace the cloud and take advantage of the partnerships available in order to succeed.

Written by Matthew Munson, chief technology officer, Cube52