Businesses are ready for cloud – but lack of transparency is limiting its usefulness

Despite common perceptions, cutting costs isn’t the primary reason businesses are choosing cloud these days. The other major advantages are the agility and scalability cloud brings, enabling organisations to quickly respond to business demand. The combination of benefits is driving both IT and lines of business to rely on cloud to serve as a foundation for innovation and enablement.

But the advantages of cloud cannot be fully harnessed if transparency into the environments is compromised. Clouds that limit visibility result in significant operational and financial issues, including performance problems or outages, challenges reporting to management, and unexpected bills. In fact, challenges with transparency restrict 63% of organisations from growing their cloud usage. That’s according to a recent global survey conducted by Forrester Consulting that we commissioned. The survey sought insights from 275 IT executives and decision makers who are experienced cloud customers.

When it comes to data about cloud environments, what are organisations looking for from their providers? Clearly security and compliance information is important. Worryingly, 39% of those surveyed said they lacked security data and 47% said they lacked compliance data. Not surprisingly, the majority said they needed on-demand access to necessary reports to make compliance and audit processes easier.

That said, on-demand reporting technology only goes so far, and many respondents wanted suggestions and/or support from experts on staff at the cloud provider. In light of evolving security risks and corporate compliance concerns – especially as lines of business adopt cloud without IT involvement – cloud providers need to simplify the process for ensuring advanced security and compliance in the cloud, not get in the way.

Beyond security and compliance, performance information, historical information and clear details about costs and upcoming bills are also key. Without this, businesses find it hard to plan for or meet the needs of their end users. It also makes it extremely difficult to budget properly.

Just like with their own servers, organisations need to understand the performance of a cloud service to get the most from it, whether that means making sure resources are running properly, anticipating potential issues or preventing wasteful “zombie virtual machines.” Due to a lack of transparency from their cloud providers, more than a third of the respondents in the survey ended up with bills they hadn’t expected and 39% found they were paying for resources they weren’t actually using.

Cloud customers can use data to make better purchasing decisions. Clear information from a cloud provider will help companies discover where they need more resources, or even where they can regain capacity and maximise their spend.

Once again though, beyond the on-demand data, customers require solid support to ensure they are getting what they need from cloud. In the survey, 60% of respondents said that problems with support were restricting their plans to increase their usage of cloud. Issues like slow response times, lack of human support, lack of expertise of the support personnel and higher-than-expected support costs started with the onboarding process and only continued. Aside from preventing customers from reaping the benefits of cloud, these issues leave businesses feeling that they’re seen more as a source of revenue than as a valued cloud customer.

When it comes down to it, cloud customers should not settle for cloud services that limit visibility into their environments. Compromises in transparency mean sacrificing the very agility, scalability and cost benefits that drive organisations to cloud in the first place. And beyond transparency, customers should not underestimate the human element of cloud. A cloud provider’s customer support plays a huge role in speeding return on cloud investment and, ultimately, in determining the success or failure of a cloud initiative.

As the Forrester study states, “Whether you are a first-time cloud user or looking to grow your cloud portfolio, our research shows that your chances of success are greater with a trusted cloud provider at your side — one that gives you the technology and experts to solve your challenges.”

You can read more about the survey findings in the study, “Is Your Cloud Provider Keeping Secrets? Demand Data Transparency, Compliance Expertise, and Human Support From Your Global Cloud Providers.”

Written by Dante Orsini, senior vice president, iland

Sixth-sensors: The future of the Internet of Things and the connected business

IT departments will soon have to worry about IoT

An IT admin walks into his office and instantly knows something is wrong. He does not even have to look at his dashboard to identify the problem. Instead, he heads straight to the server room to fix the server which is overheating because of a failed fan.

The IT admin does not have a sixth sense. He is alerted to the problem by an internet-enabled thermostat in the server room, which sensed the rise in temperature and automatically changed the lighting to alert the admin through an internet-enabled lightbulb and his smart watch.
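
To make the scenario concrete, here is a minimal sketch of how such an alert chain might be wired together, assuming an MQTT broker and hypothetical topic names; a real deployment would use whatever protocols the thermostat, lightbulb and watch actually expose.

```python
# Minimal sketch of the alert chain described above, assuming an MQTT broker
# and hypothetical topic names; real devices may expose different protocols.
import json
import paho.mqtt.client as mqtt

TEMP_THRESHOLD_C = 35  # assumed safe ceiling for the server room

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    if reading["temperature_c"] > TEMP_THRESHOLD_C:
        # Turn the smart bulb red and ping the admin's watch.
        client.publish("office/serverroom/light/set", json.dumps({"colour": "red"}))
        client.publish("alerts/admin/watch", "Server room overheating - check cooling fan")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.local", 1883)
client.subscribe("office/serverroom/thermostat")
client.loop_forever()
```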

This is not the plot of a futuristic Sci-Fi movie. It is 2015 and just one example of how the Internet of Things (IoT) is already at work in business.

Smart living

Every few years, IT communities become awash with new buzzwords and trends that early adopters declare as the next big thing and sceptics decry as impractical and over-hyped. Over time, some fizzle out because of low industry acceptance, while others go on to really disrupt the industry.

From smart cars to watches and even homes, connected technologies are already changing consumer lives, fuelling growing expectations and apprehensions. Last year, the UK government demonstrated its belief in the future potential of the technology when it pledged to spend £45m to develop the IoT, more than doubling the funds available to the UK technology firms developing everyday devices that can communicate over the internet.

In the consumer market, IoT technology is already being lapped up. Within just a few months of its launch, Apple claimed 75% of the smartwatch market. Self-driving cars have yet to take to Britain’s roads, but with prototypes already being piloted and app developers racing to create everything from connected entertainment to GPS-based automated piloting, IoT could find itself in the driving seat once local councils and city mayors sanction the infrastructure required to make smart cities a reality.

Smart workplaces

Outside of very early prototype projects, IoT does not currently rank highly on the enterprise agenda, which typically lags the general technology adoption cycle by a few years. However, in the not-too-distant future, smart devices will be the norm – IDC estimates the market will be worth $8.9 trillion by 2020, with 212 billion connected devices.

With the promise of enhanced business processes and intelligence, IoT is increasingly being touted as a holy amalgamation of big data, mobility and cloud technology. Despite this, in the short term at least, businesses will be reluctant to let sensitive data flow through such internet-enabled devices due to obvious security concerns. The exception is large businesses that have already explored the potential of machine-to-machine connectivity in their industries, such as automotive and insurance.

Where smart devices are catching up in day-to-day business is in an entirely different function of operations – facilities. What if your management decides to install internet-enabled LED bulbs and thermostats? Will the IoT bring additional responsibilities onto the service desk? A definite yes.

Facilities need to be managed – and IT will need a tool to manage them. That’s just the start. For example, each bulb in a smart, IoT-connected environment must be monitored and checked to confirm it is working.

Assuming there are over 100 such appliances in an office environment, consider all the IP addresses that will need to be allocated. Likewise, a mesh network may also be required, in which connected devices form an ad-hoc network and IP address allocation has to be controlled.
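
As a quick back-of-the-envelope check of that addressing point, the sketch below uses Python’s standard ipaddress module with an assumed device count and subnet; the numbers are illustrative only.

```python
# Back-of-the-envelope check of whether a subnet can hold an office's
# connected appliances; the device count and prefix are assumptions.
import ipaddress

DEVICE_COUNT = 120                      # bulbs, thermostats, sensors, ...
subnet = ipaddress.ip_network("10.20.30.0/24")

usable = subnet.num_addresses - 2       # exclude network and broadcast addresses
print(f"{subnet} offers {usable} usable addresses for {DEVICE_COUNT} devices")
if usable < DEVICE_COUNT:
    print("Subnet too small - plan a larger prefix or a dedicated IoT VLAN")
```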

As previously non-IT facilities start to be connected to the internet, it will be the job of the IT team to make sure they’re working well. As the volume of devices connected to the network grows, securing it will be even more challenging.

Of course, organisations can get around the security challenge by having a local network dedicated only for these devices, but the management of this expanded estate would nonetheless require a dedicated management tool.

Where large organisations have already invested in machine-to-machine (M2M) interactions and deployed connected devices in their facilities, the purpose has typically been to achieve automation and gather more intelligence.

As yet, smaller businesses do not have to worry about automation and logistics at such large scales, and it’s clear that the IoT is not going to transform their business operations overnight. However, before long, IoT will be something all IT departments should learn to manage – especially the new generation of IoT-connected devices which would traditionally have been classed and managed as non-IT assets.

Written by Pradyut Roy, product consultant, ManageEngine

Networking the Future with SDN

SDN will be vital for everything from monitoring to security

SDN will be vital for everything from monitoring to security

The nature of business is constantly changing; customers are demanding faster, more responsive services, and as a result, firms need to ensure that their backend technology is up to scratch. Increasing adoption of the cloud, mobility and big data technologies has encouraged the IT department to address how they can best support these developing trends whilst benefiting the customer and employee experience.

By looking at the heart of their infrastructure, the network, businesses can provide more agile and flexible IT services that can quickly meet user demand.  So what improvements can be made to the networks to satiate customer demand?

A software defined network (SDN) is emerging as an obvious approach for technology decision makers, empowering them to provide a faster, more agile and scalable infrastructure. SDN is considered the next evolution of the network, providing a way for businesses to upgrade their networks through software rather than through hardware – at a much lower cost.

SDN provides holistic network management and the ability to apply more granular unified security policies whilst reducing operational expenses such as the need to use specific vendor hardware and additional technology investments. In fact, IDC recently predicted that this market is set to grow from $960 million in 2014 to more than $8 billion by 2018, globally.

A Growing Trend

Datacentres and service providers have, until now, been the most common adopters of SDN solutions. As a result there has been a notable improvement in customer service and faster response times, with firms deploying new and innovative applications quicker than ever. In the past year, we have seen firms in sectors like healthcare and education take advantage of the technology. However, while SDN is developing quickly, it is still in its early stages, with several industries yet to consider it.

There is a focus on encouraging more firms to recognise the benefits of SDN in the form of the OpenDaylight Project, a collaborative open source project which aims to accelerate the adoption of SDN. Having already laid the foundation for SDN deployments today, it is considered to be the central control component and intelligence that allows customers to achieve network-wide objectives in a much more simplified fashion. The community, which includes more than a dozen vendors, is addressing the need for an open reference framework for programmability and control, enabling accelerated innovation for customers of any size and in any vertical.

Driving Business Insights

Looking ahead to the future for this new way of networking, there are a number of ways SDN can benefit the business. For example, SDN looks set to emerge as the new choice for deploying analytics in an economical and distributed way – in part due to the flexible nature of its infrastructure and the growing prominence of APIs – as the SDN optimized network can be maintained and configured with less staff and at a lower cost.

Data analytics-as-a-service is being tipped as the vehicle that will make big data commoditised and consumable for enterprises in the coming years; analyst house IDC found that by 2017, 80% of the CIO’s time will be focused on analytics – and Gartner predicts that by 2017 most business users and analysts in organisations will have access to self-service tools to prepare data for analysis themselves.

However, the right network environment will be key if data analytics is to flourish. An SDN implementation offers a more holistic approach to network management, with the ability to apply more granular unified security policies while reducing operational expenses. Being able to manage the network centrally is a huge benefit for firms as they look to increase innovation and become more flexible in response to changing technology trends.

Using analytics in tandem with a newly optimized SDN can empower IT to quickly identify any bottlenecks or problems and also help to deploy the fixes. For example, if a firm notices that one of their applications is suffering from a slow response time and sees that part of the network is experiencing a lot of latency at the same time, it could immediately address the issue and re-route traffic to a stronger connection.
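
A hedged sketch of that monitoring loop is shown below. The controller endpoints and field names are hypothetical placeholders rather than any real product’s API; an actual deployment would call the northbound API of whichever SDN controller is in place (OpenDaylight, for instance).

```python
# Sketch of the latency-triggered re-route described above. The REST
# endpoints and JSON fields are hypothetical placeholders, not a real
# controller API.
import requests

CONTROLLER = "https://sdn-controller.example.local"
LATENCY_LIMIT_MS = 50

stats = requests.get(f"{CONTROLLER}/stats/links", timeout=5).json()
for link in stats["links"]:
    if link["latency_ms"] > LATENCY_LIMIT_MS:
        # Ask the controller to steer the affected application's flows
        # onto a backup path.
        requests.put(
            f"{CONTROLLER}/flows/{link['id']}/reroute",
            json={"path": "backup"},
            timeout=5,
        )
        print(f"Re-routed traffic away from congested link {link['id']}")
```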

Realising the Potential of SDN

In order to implement an SDN solution, enterprises must first familiarise themselves with the technology and its components, and create cross-functional IT teams – spanning applications, security, systems and networking – to agree on what they wish to achieve. Secondly, they should investigate best-of-breed vendors that can deliver innovative and reliable SDN solutions which leverage existing investments without the need to overhaul longstanding technologies. This way, businesses can reap the benefits of SDN while saving time as well as money and mitigating risk.

Using analytics and SDN in combination is just one future possibility which could make it far simpler for businesses to deploy servers and support users in a more cost-effective and less resource-intensive way. It can also provide an overall improved user experience. With SDN offering the power to automate and speed up the network, and big data providing the brains behind the operation, it’s an exciting match that could be an enterprise game changer.

Written by Markus Nispel, vice president of solutions architecture and innovation at Extreme Networks

The Six Myths of Hybrid IT

Bennett: It is time to debunk some hybrid cloud myths

Many companies face an ongoing dilemma: How to get the most out of legacy IT equipment and applications (many of which host mission-critical applications like their ERP, accounting/payroll systems, etc.), while taking advantage of the latest technological advances to keep their company competitive and nimble.

The combination of cloud and third-party datacentres has caused a shift in the way we approach building and maintaining our IT infrastructure. A best-of-breed approach previously meant a blending of heterogeneous technology solutions into an IT ecosystem. It now focuses on the services and technologies that remain on-premises and those that ultimately will be migrated off-premises.

A hybrid approach to IT infrastructure enables internal IT groups to support legacy systems with the flexibility to optimise service delivery and performance through third-party providers. Reconciling resources leads to improved business agility, more rapid delivery of services, exposure to innovative technologies, and increased network availability and business uptime, without having to make the budget case for CAPEX investment. However, despite its many benefits, a blended on-premises and off-premises operating model is fraught with misconceptions and myths — perpetuating a “what-if?” mentality that often stalls innovation and business initiatives.

Here are the facts behind some of the most widespread hybrid IT myths:

Myth #1: “I can do it better myself.”

If you’re in IT and not aligned with business objectives, you may eventually find yourself out of a job. The hard truth is that you can’t be better at everything. Technology is driving change so rapidly that almost no one can keep up.

So while it’s not always easy to say “I can’t do everything as well as someone else can,” it’s perfectly acceptable to stick to what you’re good at and then evaluate other opportunities to evolve your business – in this case, outsourcing select IT functionality where you can realise improved capabilities and value for your business. Let expert IT outsourcing providers do what they do best, managing IT infrastructure for companies 24/7/365, while you concentrate on the IT strategy that keeps your business competitive and strong.

Myth #2: “I’ll lose control in a hybrid IT environment.”

A functional IT leader with responsibility over infrastructure that management wants to outsource may fear the loss of his or her team’s jobs. Instead, the day-to-day management of the company’s infrastructure might be better served off-premises, allowing the IT leader to focus on the strategy and direction of the IT functions that differentiate the business in order to stay ahead of fast-moving market innovation and customer demands.

In the early days of IT, it was one size fits all. Today, an IT leader has more control than ever. For example, you can buy a service that comes with little management and spin resources up using embedded APIs. The days when you bought a managed service and had no control, or visibility, over it are gone. With the availability of portals, plug-ins and platforms, internal teams retain control whether they want their environment managed by a third party or want to manage it outright on their own.

Myth #3: “Hybrid IT is too hard to manage.”

Do you want to differentiate your IT capabilities as a means to better support the business? If you do want to manage it on your own, you need to have the people and processes in place to do so. An alternative is to partner with a service provider offering multiple off-premises options and a more agile operating model than doing it all on your own. Many providers bundle management interfaces, orchestration, automation and portals with their offerings, giving IT complete transparency and granular control over the outsourced solution. These portals are also API-enabled, so the tools can be integrated with any internal tools you have already invested in and provide end-to-end visibility into the entire hybrid environment.

Myth #4: “Hybrid IT is less secure than my dedicated environment.”

In reality, today’s IT service providers are likely more compliant than your business could ever achieve on its own. To be constantly diligent and compliant, a company may need to employ a team of internal IT security professionals to manage day-to-day security concerns. Instead, it makes sense to let a team of external experts worry about data security and provide a “lessons-learned” approach to your company’s security practice.

There are cases where insourcing makes sense, especially when it comes to the business’ mission-critical applications. Some data should absolutely be kept as secure and as close to your users as possible. However, outsourced infrastructure is increasingly becoming more secure because providers focus exclusively on the technology and how it enables their users. For example, most cloud providers will encrypt your data and hand the key to you only. As a result, secure integration of disparate solutions is quickly becoming the rule, rather than the exception.

Myth #5: “Hybrid IT is inherently less reliable than the way we do it now.”

Placing computing closer to users and, in parallel, spreading it across multiple locations will result in a more resilient application than if you had it in a fixed, single location. In fact, the more mission-critical the application, the more you should spread it across multiple providers and locations. For example, if you build an application for the cloud, you are not relying on any single component being up for the application to remain available. This “shared nothing” approach to infrastructure and application design not only makes your critical applications more available, it also adds a level of scalability that is not available in traditional in-house-only approaches.

Myth #6: “This is too hard to budget for.”

Today’s managed service providers can perform budgeting as well as reporting on your behalf. Again, internal IT can own this, empowering it to recommend whether to insource or outsource a particular aspect of infrastructure based on the needs of the business. However, in terms of registration, costs, and other considerations, partnering with a third-party service can become a huge value-add for the business.

Adopting a hybrid IT model lowers the risk to your IT resources and the business they support. You don’t have to make huge investments all at once. You can start incrementally, picking the options that help you in the short term and, as you gain experience, allow you the opportunity to jump in with both feet later. Hybrid IT lets you evolve your infrastructure as your business needs change.

If IT and technology have taught us anything, it’s that you can’t afford to let fear prevent your company from doing what it must to remain competitive.

Written by Mike Bennett, vice president global datacentre acquisition and expansion, CenturyLink EMEA

Will Microsoft’s ‘walled-garden’ approach to virtualisation pay off?

Microsoft’s approach to virtualisation: Strategic intent or tunnel vision?

While the data centre of old played host to an array of physical technologies, the data centre of today and of the future is based on virtualisation, public or private clouds, containers, converged servers, and other forms of software-defined solutions. Eighty percent of workloads are now virtualised with most companies using heterogeneous environments.

As the virtual revolution continues, new industry players are emerging, ready to take on the market’s dominating forces. Now is the time for the innovators to strike and to stake a claim in this lucrative and growing movement.

Since its inception, VMware has been the 800 lb gorilla of virtualisation. Yet even VMware’s market dominance is under pressure from open source offerings like KVM, RHEV-M, OpenStack, Linux Containers and Docker. There can be no doubting the challenge to VMware presented by purveyors of such open virtualisation options; among other things, they feature REST APIs that allow easy integration with other management tools and applications, regardless of platform.

I see it as a form of natural selection; new trends materialise every few years and throw down the gauntlet to prevailing organisations – adapt, innovate or die. Each time this happens, some new players will rise and other established players will sink.

VMware is determined to remain afloat and has responded to the challenge by creating an open REST API for vSphere and other components of the VMware stack. While I don’t personally believe that this attempt has resulted in the most elegant API, there can be no arguing that it is at least accessible and well-documented, allowing for integration with almost anything in a heterogeneous data centre. For that, I must applaud them.
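
As a short illustration of why a platform-neutral REST API matters, the sketch below lists virtual machines over plain HTTP from any operating system. The endpoint paths follow the vSphere Automation REST API as commonly documented for vSphere 6.5 and later, but treat them as assumptions to verify against VMware’s documentation; hostnames and credentials are placeholders.

```python
# Illustration of platform-neutral integration against a REST API.
# Endpoint paths follow the vSphere Automation REST API as commonly
# documented (vSphere 6.5+); verify them for your version.
import requests

VCENTER = "https://vcenter.example.local"

# Authenticate and obtain a session token.
session = requests.post(
    f"{VCENTER}/rest/com/vmware/cis/session",
    auth=("administrator@vsphere.local", "password"),  # placeholder credentials
    verify=False,
)
token = session.json()["value"]

# List virtual machines - works from any OS that can speak HTTP.
vms = requests.get(
    f"{VCENTER}/rest/vcenter/vm",
    headers={"vmware-api-session-id": token},
    verify=False,
)
for vm in vms.json()["value"]:
    print(vm["name"], vm["power_state"])
```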

So what of the other giants of yore? Will Microsoft, for example, retain its regal status in the years to come? Not if the Windows-specific API it has lumbered itself with is anything to go by! While I understand why Microsoft has aspired to take on VMware in the enterprise data centre, its API, utilising WMI (Windows Management Instrumentation), only runs on Windows! As far as I’m concerned this makes it as useless as a chocolate teapot. What on earth is the organisation’s end-goal here?

There are two possible answers that spring to my mind: first, that this is a strategic move; or second, that Microsoft’s eyesight is failing.

Could the Windows-only approach to integrating with Microsoft’s Hyper-V virtualisation platform be an intentional strategic move on its part? Is the long-game for Windows Server to take over the enterprise data centre?

In support of this, I have been taking note of Microsoft sales reps encouraging customers to switch from VMware products to Microsoft Hyper-V. In this exchange on Microsoft’s Technet forum, a forum user asked how to integrate Hyper-V with a product running on Linux.  A Microsoft representative then responded saying (albeit in a veiled way) that you can only interface with Hyper-V using WMI, which only runs on Windows…

But what if this isn’t one part of a much larger scheme? The only alternative I can fathom then is that this is a case of extreme tunnel vision, the outcome of a technology company that still doesn’t really get the tectonic IT disruptions and changes happening in the outside world. If it turns out that Microsoft really does want Windows Server to take over the enterprise data centre…well, all I can say is, good luck with that!

Don’t get me wrong. I am a great believer in competition; it is vital for the progression of both technology and markets. And it certainly is no bad thing when an alpha gorilla faces a troop challenger. It’s what stops them getting stale, invigorating them and forcing them to prove why they deserve their silver back.

In reality, Microsoft probably is one of the few players that can seriously threaten VMware’s near-monopolistic dominance of server virtualisation. But it won’t do it like this. So unless new CEO Satya Nadella’s company moves to provide platform-neutral APIs, I am sad to say that its offering will be relegated to the museum of IT applications.

To end with a bit of advice to all those building big data and web-scale applications, with auto-scaling orchestration between applications and virtualisation hypervisors: skip Hyper-V and don’t go near Microsoft until it “gets it” when it comes to open APIs.

Written by David Dennis, vice president, marketing & products, GroundWork

Will datacentre economics paralyse the Internet of Things?

The way data and datacentres are managed may need to change drastically in the IoT era

The statistics predicting what the Internet of Things (IoT) will look like and when it will take shape vary widely. Whether you believe there will be 25 billion or 50 billion Internet-enabled devices by 2050, there will certainly be far more devices than there are today. Forrester has predicted 82% of companies will be using Internet of Things (IoT) applications by 2017. But unless CIOs pay close attention to the economics of the datacentre, they will struggle to be successful. The sheer volume of data we expect to manage across these IoT infrastructures could paralyse companies and their investments in technology.

The Value of Information is Relative

ABI Research has calculated that there will be 16 zettabytes of data by 2020. Consider this next to another industry estimate of 44 zettabytes by 2020, while others have said that humanity had produced only 2.7 zettabytes up to 2013. Bottom line: the exponential growth in data is huge.

The natural first instinct for any datacentre manager or CIO is to consider where he or she will put that data. Depending on the industry sector there are regulatory and legal requirements, which mean companies will have to be able to collect, process and analyse runaway amounts of data. By 2019, another estimate suggests, that could mean processing two zettabytes a month.

One way to react is to simply buy more hardware. From a database perspective the traditional approach would be to create more clusters in order to manage such huge stores of data. However, a critical element of IoT is that it’s based on low-cost technology, and although the individual pieces of data have a value, there is a limit to that value. For example, you do not need to be told every hour by your talking fridge that you need more milk or be informed by your smart heating system what the temperature is at home.  While IoT will lead to smart devices everywhere, its value is relative to the actionable insight it offers.

A key element of the cost-benefit equation that needs more consideration is the impact of investment requirements at the backend of an IoT data infrastructure. As the IoT creates a world of smart devices distributed across networks, CIOs have to decide whether collection, storage and analytics happen locally, near the device, or are driven to a centralised management system. There could be some logic to keeping the intelligence local, depending on the application, because it could speed up the process of providing actionable insight. The company could use low-cost, commoditised devices to collect information, but it will still become prohibitively expensive if the company has to buy vast numbers of costly database licences to ensure the system performs efficiently – never mind the cost of integrating data from such a distributed architecture.

As a result, the Internet of Things represents a great opportunity for open source software, thanks to the cost effectiveness of open source versus traditional database solutions. Today, open source-based databases have the functionality, scalability and reliability to cope with the explosion in data that comes with the IoT while transforming the economics of the datacentre. This is a point which Gartner’s recent Open Source Database Management report endorsed when it said: “Open source RDBMSs have matured and today can be considered by information leaders, DBAs and application development management as a standard infrastructure choice for a large majority of new enterprise applications.”

The Cost of Integrating Structured and Unstructured

There are other key considerations when calculating the economic impact of the IoT on the datacentre. The world of IoT will be made up of a wide variety of data, structured and unstructured. Already, the need to work with unstructured data has given rise to NoSQL-only niche solutions. The deployment of these types of databases, spurred on by the rise of Internet-based applications and their popularity with developers, is proliferating because they offer the affordability of open source. Yet their use is leading to operational and integration headaches as data silos spring up all around the IT infrastructure due to limitations in these NoSQL-only solutions. In some cases, such as where ACID properties are required and robust DBA tools are available, it may be more efficient to use a relational database with NoSQL capabilities built in and get the best of both worlds rather than create yet another data silo. In other cases, such as for very high-velocity data streams, keeping the data in these newer data stores and integrating them may be optimal.
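
To make that “best of both worlds” point concrete, here is a hedged sketch using Postgres’s built-in JSONB type via psycopg2; the table, columns and connection details are invented for illustration.

```python
# Hedged sketch: storing schemaless sensor payloads in a relational table
# using Postgres JSONB, so ACID guarantees and document-style data coexist.
# Table, column and connection details are invented for illustration.
import json
import psycopg2

conn = psycopg2.connect("dbname=iot user=postgres host=localhost")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS device_events (
        id       bigserial PRIMARY KEY,
        device   text NOT NULL,
        payload  jsonb NOT NULL
    );
""")
cur.execute(
    "INSERT INTO device_events (device, payload) VALUES (%s, %s);",
    ("thermostat-7", json.dumps({"temp_c": 41.2, "fan": "failed"})),
)
conn.commit()

# Query inside the JSON document with the ->> operator, alongside SQL filters.
cur.execute("SELECT device FROM device_events WHERE (payload->>'temp_c')::numeric > 40;")
print(cur.fetchall())
```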

A key priority for every CIO is integrating information as economically as possible so organisations can create a complete picture of the business and its customers. The Postgres community has been at the forefront of addressing this challenge with the creation of Foreign Data Wrappers (FDWs), which can integrate data from disparate sources like MongoDB, Hadoop and MySQL. FDWs link external data stores to Postgres databases so users can access and manipulate data from foreign sources as if it were part of the native Postgres tables. Such simple, inexpensive solutions for connecting the new data streams emerging along with the Internet of Everything will be critical to unlocking value from data.
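
The sketch below shows roughly what wiring up such an FDW looks like from Python. It assumes the mongo_fdw extension is installed and reachable; the hostnames, credentials and option names are placeholders that should be checked against the wrapper’s documentation (a user mapping may also be required).

```python
# Minimal sketch of exposing a MongoDB collection inside Postgres via a
# Foreign Data Wrapper. Assumes the mongo_fdw extension is installed;
# hostnames and option names are placeholders to verify against the docs.
import psycopg2

conn = psycopg2.connect("dbname=iot user=postgres host=localhost")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS mongo_fdw;")
cur.execute("""
    CREATE SERVER IF NOT EXISTS mongo_srv
        FOREIGN DATA WRAPPER mongo_fdw
        OPTIONS (address 'mongo.example.local', port '27017');
""")
cur.execute("""
    CREATE FOREIGN TABLE IF NOT EXISTS sensor_readings (
        device_id  text,
        temp_c     numeric,
        recorded   timestamptz
    ) SERVER mongo_srv OPTIONS (database 'telemetry', collection 'readings');
""")
conn.commit()

# Foreign rows can now be queried and joined like native Postgres tables.
cur.execute("SELECT device_id, avg(temp_c) FROM sensor_readings GROUP BY device_id;")
print(cur.fetchall())
```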

The Internet of Things promises a great transformation in the ability of enterprises to holistically understand their business and customer environment in real time and deliver superior customer engagement.  It is critical, though, that CIOs understand the economic impact on their datacentre investments.  The IoT creates a number of new challenges, which can be addressed using the right technology strategy.

Written by Pierre Fricke, vice president of product, EnterpriseDB

Cloud banking: lit from within

Financial services companies are warming to cloud services

In a world where, as John Schlesinger, chief enterprise architect at Temenos, argues, servers are about to stop getting cheaper, the advantages of cloud computing in terms of cost and customer experience look more compelling than ever. In the banking market, however, the spread of cloud systems has been slower than elsewhere due to factors including concern about data security, uncertainty about the position regulators will take on cloud technologies and the challenge of managing migration from the in-house, legacy IT systems that currently run banks’ critical functions.

So just how hot is cloud banking right now? A quick temperature check of the financial services industry’s attitude to cloud banking in April triggered a warm response.

There are two sides to every story and never more so than when discussing with banks the shift from in-house technology to on-demand cloud-based services. So in Temenos’ recent survey Cloud-banking heat map, we asked two key questions: what are the benefits you seek from cloud services; and what, if any, are the barriers to adoption you face?

Echoing the results of a similar Ovum survey, The Critical Role for Cloud in the Transformation of Retail Banks, last year, our results show that cloud is no longer just about cost reduction: 50 per cent of respondents see cloud as a means to adopt new technology, and 34 per cent reported the ability to add new business functionality more quickly as a top benefit. This is a very encouraging sign that banks are seeing the adoption of cloud technology as a means to support the delivery of new products and services.

That is not to say that the long-term cost benefits of cloud services are any less important. In fact, the highest-scoring benefit sought from the cloud, cited by 58 per cent of respondents, is to reduce overall IT costs. This is not at all surprising given the profitability hit banks have taken since the financial crisis; cost savings are an obvious driver of a cloud-based IT strategy.

The top reported barriers to adopting cloud services are concerns over data security (55 per cent) and reliability and availability (47 per cent), which are common challenges for financial institutions used to managing and maintaining their own IT. This highlights the need for cloud providers to do more to demonstrate to the industry the robustness of their security controls and availability metrics; paradoxically, we may find that security and reliability become a benefit rather than a barrier to cloud.

Concern over regulatory compliance is another top factor against cloud banking, cited by 45 per cent of respondents. This is no surprise in such a heavily regulated sector, and there is no quick fix, but when talking to lawyers in this space, the feeling is that with a high level of due diligence on the banks’ part, and a transparent and collaborative approach on the cloud provider’s part, a solution could be found that meets all parties’ needs, including those of the regulator.

In response to this, we see cloud software vendors, their platform partners and industry organisations working closely to address security concerns. Co-ordinated efforts such as the Cloud Security Alliance and its Cloud Controls Matrix have set out security principles for cloud vendors and assist prospective customers in assessing security risk at individual cloud providers. Cloud providers themselves are investing heavily in compliance and security expertise, to the extent that many observers argue that a well-implemented migration to the cloud can result in higher levels of security than an in-house system, as well as access to real-time reporting mechanisms that are often superior, too.

As the industry continues to warm up to cloud banking, we will see the same issues raised and discussed again and again. And rightly so: the only way to support the banking industry in any leap in technology and faith is by addressing issues and challenges openly until all parties are convinced of its viability.

However, while clear challenges remain to more rapid adoption of cloud-based technology in banking, it is clear that change is happening. Already, analysts at Gartner predict that by 2016, more than 60 per cent of global banks will process the majority of their transactions in the cloud. Many are already moving less sensitive functions there and developing strategies to enable them to capture the benefit of cloud-based systems for their core operations.

Written by David Arnott, chief executive of Temenos

Is force of habit defining your hybrid cloud destiny?

Experience breeds habit, which isn’t necessarily the best thing strategically

I’ve been playing somewhat of a game over recent months.  It’s a fun game for all the family and might be called “Guess my job”.  It’s simple to play.  All you need to do is ask someone the question; “What is a hybrid cloud?” then based upon their answer you make your choice.  Having been playing this for a while I’m now pretty good at being able to predict their viewpoint from their job role or vice versa.

And the point of all this? Simply, that people’s viewpoints are constrained by their experiences and what keeps them busy day-to-day, so often they miss an opportunity to do something different. For those people working day-to-day in a traditional IT department, keeping systems up and running, hybrid cloud is all about integrating an existing on-site system with an off-site cloud. This is a nice, easy one to grasp in principle, but the reality is somewhat harder to realise.

The idea of connecting an on-site System of Record to a cloud-based System of Engagement, pulling data from both to generate new insights, is conceptually well understood. That said, organisations making production use of such arrangements are few and far between. One example would be combining historical customer transaction information with real-time geospatial, social and mobile data and then applying analytics to generate new insights which uncover new sales potential. For many organisations, though, the challenge in granting access to the existing enterprise systems is simply too great. Security concerns, the ability to embrace the speed of change that is required and the challenge of extracting the right data in a form that is immediately usable by the analytical tools may simply be a hurdle too high. Indeed, many clients I’ve worked with have stated that they’re simply not going to do this. They understand the benefits, but the pain they see themselves having to go through to get them makes this unattractive to pursue.

So, if this story aligns with your view of hybrid cloud and you’ve already put it in the “too hard” box then what is your way forward?

For most organisations, no single cloud provider is going to provide all of the services they might want to consume. Implicitly then, if they need to bring data from these disparate cloud services together, there is a hybrid cloud use case: linking cloud to cloud. Even in the on-site to off-site hybrid cloud case there are real differences when the relationship is static compared to when you are dynamically bursting in and out of off-site capacity. Many organisations are looking to cloud as a more effective and agile platform for backup and archiving or for disaster recovery. All of these are hybrid cloud use cases too, but if you’ve already written off ‘hybrid’ then you’re likely missing very real opportunities to do what is right for the business.

Regardless of the hybrid cloud use case, you need to keep in mind three key principles:

  1. Portability – the ability to run and consume services and data from wherever it is most appropriate to do so, be that cloud or non-cloud, on-site or off-site.
  2. Security, visibility and control – to be assured that end-to-end, regardless of where the ‘end’ is, you are running services in such a way that they are appropriately secure, well managed and their characteristics are well understood.
  3. Developer productivity – developers should be focused on solving business problems and not be constrained by needing to worry about how or when supporting infrastructure platforms are being deployed.  They should be able to consume and integrate services from many different sources to solve problems rather than having to create everything they need from scratch.

Business applications need to be portable such that they can both run on, and consume other services from, wherever is most appropriate. To do that, your developers need to be unconstrained by the underlying platform(s) so they can develop for any cloud or on-site IT platform. All this needs to be done in a way that allows enterprise controls, visibility and security to be extended to the cloud platforms that are being used.

If you come from that traditional IT department background, you’ll be familiar with the processes that are in place to ensure that systems are well managed, change is controlled and service levels are maintained. These processes may not be compatible with the ways that clouds open up new opportunities. This leads to the need to look at creating a “two-speed” IT organisation to provide the rigour where needed for the Systems of Record whilst enabling rapid change and delivery in the Systems of Engagement space.

Cloud generates innovation and hence diversity.  Economics, regulation and open communities drive standardization and it is this, and in particular open standards, which facilitates integration in all of these hybrid cases.

So, ask yourself: with more than 65 per cent of enterprise IT organisations making commitments on hybrid cloud technologies before 2016, are you ensuring that your definitions – and hence your technology choices – reflect future opportunities rather than past prejudices?

Written by John Easton, IBM distinguished engineer and leading cloud advisor for Europe

Lessons from the Holborn fire: how disaster recovery as a service helps with business continuity

Disaster recovery is creeping up on the priority list for enterprises

The recent fire in Holborn highlighted an important lesson in business continuity and disaster recovery (BC/DR) planning: when a prompt evacuation is necessary ‒ whether because of a fire, flood or other disaster ‒ you need to be able to relocate operations without advance notice.

The fire, which was caused by a ruptured gas main, led to the evacuation of 5,000 people from nearby buildings, and nearly 2,000 customers experienced power outages. Some people lost Internet and mobile connectivity as well.

While firefighters worked to stifle the flames, restaurants and theatres were forced to turn away patrons and cancel performances, with no way to preserve their revenue streams. The numerous legal and financial firms in the area, at least, had the option to relocate their business operations. Some did, relying on cloud-based services to resume their operations remotely. But those who depended on physical resources on-site were, like the restaurants and theatres, forced to bide their time while the fire was extinguished.

These organisations’ disparate experiences reveal the increasing role of cloud-based solutions ‒ particularly disaster recovery as a service (DRaaS) solutions ‒ in BC/DR strategies.

The benefits of DRaaS

Today, an increasing number of businesses are turning to the cloud for disaster recovery. The DRaaS market is expected to experience a compounded annual growth rate of 55.2 per cent from 2013 to 2018, according to global research company MarketsandMarkets.

The appeal of DRaaS solutions is that they provide the ability to recover key IT systems and data quickly, which is crucial to meeting your customers’ expectations for high availability. To meet these demands within the context of a realistic recovery time frame, you should establish two recovery time objectives (RTOs): one for operational issues that are specific to your individual environment (e.g., a server outage) and another for regional disasters (e.g., a fire). RTOs for operational issues are typically the most aggressive (0-4 hours). You have a bit more leeway when dealing with disasters affecting your facility, but RTOs should ideally remain under 24 hours.

DRaaS solutions’ centralised management capabilities allow the provider to assist with restoring not only data but your entire IT environment, including applications, operating systems and systems configurations. Typically systems can be restored to physical hardware, virtual machines or another cloud environment. This service enables faster recovery times and eases the burden on your in-house IT staff by eliminating the need to reconfigure your servers, PCs and other hardware when restoring data and applications. In addition, it allows your employees to resume operations quickly, since you can access the environment from anywhere with a suitable Internet connection.

Scalability is another key benefit of DRaaS solutions. According to a survey by 451 Research, the amount of data storage professionals manage has grown from 215 TB in 2012 to 285 TB in 2014. To accommodate this storage growth, companies storing backups in physical servers have to purchase and configure additional servers. Unfortunately, increasing storage capacity can be hindered by companies’ shrinking storage budgets and, in some cases, lack of available rack space.

DRaaS addresses this issue by allowing you to scale your storage space as needed. For some businesses, the solution is more cost-effective than dedicated on-premise data centres or colocation solutions, because cloud providers typically charge only for the capacity used. Redundant data elimination and compression maximise storage space and further minimise cost.

When data needs to be maintained on-site

Standard DRaaS delivery models are able to help many businesses meet their BC/DR goals, but what if your organisation needs to keep data or applications on-site? Perhaps you have rigorous RTOs for specific data sets, and meeting those recovery time frames requires an on-premise backup solution. Or maybe you have unique applications that are difficult to run in a mixture of physical and virtual environments. In these cases, your business can leverage a hybrid DRaaS strategy which allows you to store critical data in an on-site appliance, offloading data to the cloud as needed.

You might be wondering, though, what happens to the data stored in an appliance in the event that you have to evacuate your facility. The answer depends on the type of service the vendor provides for the appliance. If you’re unable to access the appliance, recovering the data would require you to either access an alternate backup stored at an off-site location or wait until you regain access to your facility, assuming it’s still intact. For this reason, it’s important to carefully evaluate potential hybrid-infrastructure DRaaS providers.

DRaaS as part of a comprehensive BC/DR strategy

In order for DRaaS to be most effective for remote recovery, the solution must be part of a comprehensive BC/DR strategy. After all, what good is restored data if employees don’t have the rest of the tools and information they need to do their jobs? These additional resources could include the following:

•  Alternate workspace arrangements
•  Provisions for backup Internet connectivity
•  Remote network access solutions
•  Guidelines for using personal devices
•  Backup telephony solutions

The Holborn fire was finally extinguished 36 hours after it erupted, but not before landing a blow on the local economy to the tune of £40 million. Businesses using cloud services as part of a larger business continuity strategy, however, were able to maintain continuity of operations and minimise their lost revenue. With the right resources in place, evacuating your building doesn’t have to mean abandoning your business.

By Matt Kingswood, head of managed services, IT Specialists (ITS)

G-Cloud: Much has been achieved, but the programme still needs work

The UK government is ahead of the curve in cloud, but work still needs doing

Thanks to G-Cloud, the once stagnant public sector IT marketplace that was dominated by a small number of large incumbent providers is thriving. More and more SMEs are listing their assured cloud services on the framework, which is driving further competition and forcing down costs for public sector organisations, ultimately benefitting each and every UK taxpayer. But the programme still needs work.

G-Cloud initially aimed to achieve annual savings of more than £120m and to account for at least half of all new central Government spend by this year. The Government Digital Service has already estimated that G-Cloud is yielding efficiencies of at least 50 per cent, comfortably exceeding the initial target set when the Government’s Cloud Strategy was published in 2011.

According to the latest figures, total reported G-Cloud sales to date have now exceeded £591m, with 49 per cent of total sales by value and 58 per cent by volume having been awarded to SMEs. 76 per cent of total sales by value were through central Government and 24 per cent through the wider public sector, so while significant progress has been made, more work is clearly needed to educate local Government organisations on the benefits of G-Cloud and assured cloud services.

To provide an example of the significant savings achieved by a public sector organisation following a move to the cloud, the DVLA’s ‘View driving record’ platform, hosted on GOV.UK, gives the insurance industry secure online access to the driving records of up to 40 million drivers, which it is hoped will help to reduce premiums. Due to innovative approaches including cloud hosting, the DVLA managed to save 66 per cent against the original cost estimate.

Contracts held within the wider public sector with an estimated total value of over £6bn are coming to an end.  Therefore continued focus must be placed on disaggregating large contracts to ensure that all digital and ICT requirements that can be based on the cloud are based on the cloud, and sourced from the transparent and vendor-diverse Government Digital Marketplace.

Suppliers, especially SMEs and new players who don’t have extensive networks within the sector, also need much better visibility of downstream opportunities. Currently, G-Cloud is less transparent than conventional procurements in this respect, where pre-tender market engagements and prior information notices are now commonplace and expected.

However, where spend controls cannot be applied, outreach and education must accelerate, and G-Cloud terms and conditions must also meet the needs of the wider public sector. The G-Cloud two-year contract term is often cited as a reason for not procuring services through the framework, as is the perceived inability for buyers to incorporate local, but mandatory, terms and conditions.

The Public Contracts Regulations 2015 introduced a number of changes to EU procurement regulations, and implemented the Lord Young reforms, which aim to make public procurements more accessible and less onerous for SMEs. These regulations provide new opportunities for further contractual innovation, including (but not limited to) dynamic purchasing systems, clarification of what a material contract change means in practice, and giving buyers the ability to take supplier performance into account when awarding a contract.

The G-Cloud Framework terms and conditions must evolve to meet the needs of the market as a whole, introducing more flexibility to accommodate complex legacy and future requirements, and optimising the opportunities afforded by the new public contract regulations. The introduction of the Experian score as the sole means of determining a supplier’s financial health in the G-Cloud 6 Framework is very SME-unfriendly and does not align with the Crown Commercial Service’s own policy on the evaluation of financial stability. The current drafting needs to be revisited for G-Cloud 7.

As all parts of the public sector are expected to be subject to ongoing fiscal pressure, and because digitising public services will continue to be a focus for the new Conservative Government, wider public sector uptake of G-Cloud services must continue to be a priority. Looking to the future of G-Cloud, the Government will need to put more focus on educating buyers on G-Cloud procurement and the very real opportunities that G-Cloud can bring, underlined by the many success stories to date, and on ensuring the framework terms and conditions are sufficiently flexible to support the needs of the entire buying community. G-Cloud demonstrates what is possible when Government is prepared to be radical and innovative, and in order to build on the significant progress that has been made, we hope that G-Cloud will be made a priority over the next five years.

Written by Nicky Stewart, commercial director at Skyscape Cloud Services