
The flexible working phenomenon – what’s holding us back?

We live in a world where the 9-5 office job is rapidly becoming obsolete. The office worker is no longer chained to a desk, thanks to the rapid rise and swift adoption of technologies which enable work to take place at home, on the move, or anywhere with an internet or mobile connection.

At least, that’s what the world would have you believe. According to the latest research from UC EXPO, many workers still aren’t aware that they have the right to request flexible working from their employers. Even more worryingly, many office-based workers say that not all employees have access to these seemingly universal policies. So what’s going on at an employee level? Is the flexible working revolution really as advanced as it seems?

A flexible revolution – embracing new working ideals

It can’t be denied that the workplace, and attitudes towards the traditional office-based role, are changing. In a sharp increase on previous years, 27% of UK office workers now regularly work outside their base, and just under that (22%) say that they worked at home, remotely, or elsewhere with flexible hours more in 2015 than in previous years.

It’s clear that the option to work flexible hours is seen as a right nowadays, but interestingly, so is remote working. The right to request flexible working became law in 2014, but 74% of the UK’s office-based workforce think that requesting remote working should be a right too.

It’s not just the ability to ‘be your own boss’ which makes flexible working so attractive. 82% of UK workers are much more likely to take a job that offers flexible working benefits than one that doesn’t, which presents an issue for businesses that don’t offer them. Whilst some workers whose job roles demand a fixed 9-5 presence remain excluded, the benefits of flexible working are more widely recognised than a year ago, with a whopping 90% of those surveyed citing flexible working as essential to maintaining a better work/life balance. So much so, in fact, that it is valued more highly than any other benefit, including a season ticket loan and daily free breakfast!

What’s stalling the flexible phenomenon?

Despite the widespread acknowledgment and appreciation of flexible working policies, it seems that total adoption is still a long way away. The concerns of recent years are still prevalent, including questions around BYOD security and the ability to trust employees to actually work when they are out of the office on company time. In fact, 67% of UK office workers believe that productivity levels either increase or stay the same when working remotely.

Although the concerns around productivity and security are decreasing, thanks to the increasingly secure technologies available, a worrying number of UK office workers are still not aware of their right to request flexible working. In 2015, 50% of workers were unaware of this law; in early 2016, around 39% are still unaware. So, despite a decrease, a significant proportion of the workforce is still potentially missing out on adopting the work style that suits them best.

The future of UC

Unified Communications technologies are helping to stimulate the growth of flexible working culture – most of us have used video conferencing at some point, in addition to other cloud-based collaboration tools. This is starting to become more sophisticated, and eventually we will see a much more fluid working policy for the majority of UK businesses. As UC EXPO exhibitor Tim Bishop of Unify comments: “The office as we know it faces an uncertain future. According to our research, 69% of knowledge workers say that having a single office as a physical workplace is less important than it was in the past, and 49% report that their organizations operate through technology and communication (such as virtual teams) rather than through offices and locations”.

Whilst Unify and many others argue that this will be a good thing, until the concerns around security are truly resolved and we have a foolproof method of ensuring productivity and security when employees work remotely, there will always be something holding us back. That said, it’s clear that this is the future of the workforce – time for businesses and technology providers alike to get on board and embrace the change.

Written by Bradley Maule-ffinch, Director of Strategy at UC EXPO


About UC EXPO

UC EXPO is Europe’s largest unified communications & collaboration (UC&C) event, for those looking to find out how the latest unified communications technology can drive and support their business. The event showcases brand-new exclusive content and senior-level insights from across the industry. UC EXPO 2016, together with Unified Communications Insight (www.ucinsight.com) and the world’s largest UC&C LinkedIn group, delivers news, insight and knowledge throughout the year. Attending UC EXPO 2016 will help to ensure business decisions are made based on the latest best practice for improved communications and collaboration, and that organisations are able to continue, or start, their journey in enabling workforce mobility.

 UC EXPO 2016 will take place on 19-20 April 2016, at Olympia, London. 

 For full details of the event, or to register for free, visit www.ucexpo.co.uk or follow UC EXPO on Twitter using the hashtag #UCEXPO.

Head in the clouds: Four key trends affecting enterprises

Cloud is changing the way businesses function and has provided a new and improved level of flexibility and collaboration. Companies worldwide are realising the cloud’s capability to generate new business models and promote sustainable competitive advantage, and the impact of this is becoming very apparent: a Verizon report recently revealed that 69 per cent of businesses that have used the cloud have used it to significantly reengineer one or more of their business processes. It’s easy to see why there’s still so much hype around cloud. We’ve heard so much about cloud computing over the last few years that you could be forgiven for thinking it is now universally adopted, but the reality is that we are only just scratching the surface: cloud is still very much in a period of growth and expansion.

Looking beyond the horizon

At present, the majority of corporate cloud adoption is around Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS) offerings such as AWS, Azure, Office 365 and Salesforce.com. These services offer cheap buy-in and a relatively painless implementation process, which remains separate from the rest of corporate IT. Industry analyst Gartner says IaaS spending is set to grow 38.4 per cent over the course of 2016, while worldwide SaaS spending is set to grow 20.3 per cent over the year, reaching $37.7 billion. However, the real promise of cloud is much more than IaaS, PaaS or SaaS: it’s a transformative technology moving compute power and infrastructure between on-premise resources, private cloud and public cloud.

As enterprises come to realise the true potential of cloud, we’ll enter a period of great opportunity for enterprise IT, but there will be plenty of adoption-related matters to navigate. Here are four big areas enterprises will have to deal with as cloud continues to take the world by storm:

  1. Hybrid cloud will continue to dominate

Hybrid cloud will rocket up the agenda, as businesses and providers alike continue to realise that there is no one-size-fits-all approach to cloud adoption. Being able to mix and match public and private cloud services from a range of different providers enables businesses to build an environment that meets their unique needs more effectively. To date, this has been held back by interoperability challenges between cloud services, but a strong backing for open application programming interfaces (APIs) and multi-cloud orchestration platforms is making it far easier to integrate cloud services and on-premise workloads alike. As a result, we will continue to see hybrid cloud dominate the conversation.

  2. Emergence of iPaaS


The drive towards integration of on-premise applications and cloud is giving rise to Integration Platform as a Service (iPaaS). Cloud integration still remains a daunting task for many organizations, but iPaaS is a cloud-based integration solution that is slowly and steadily gaining traction within enterprises. With iPaaS, users can develop integration flows that connect applications residing in the cloud or on-premise, and deploy them without installing any hardware or software. Although iPaaS is relatively new to the market, categories of iPaaS vendors are beginning to emerge, including ecommerce/B2B integration and cloud integration. With integration challenges still a huge issue for enterprises using cloud, demand for iPaaS is only set to grow over the coming months.
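In code, an iPaaS-style flow boils down to pluggable connectors around a shared pipeline: pull from one system, transform, push to another. The sketch below is a toy illustration of that idea; the `CloudCRM` and `OnPremiseERP` connectors are hypothetical stand-ins, not real products or any particular vendor’s API.

```python
# Toy sketch of an iPaaS-style integration flow: connectors are pluggable,
# so the same flow can bridge a cloud app and an on-premise system.
# CloudCRM and OnPremiseERP are invented stand-ins for real connectors.

def run_flow(source, transform, target):
    """Pull records from source, transform each one, push to target."""
    for record in source.fetch():
        target.push(transform(record))

class CloudCRM:                      # stands in for a SaaS connector
    def fetch(self):
        return [{"name": "Acme", "value": "1200"}]

class OnPremiseERP:                  # stands in for an on-premise connector
    def __init__(self):
        self.received = []
    def push(self, record):
        self.received.append(record)

# A transform step, e.g. coercing types before loading into the ERP
to_erp_format = lambda r: {"customer": r["name"], "amount": int(r["value"])}

erp = OnPremiseERP()
run_flow(CloudCRM(), to_erp_format, erp)
print(erp.received)   # [{'customer': 'Acme', 'amount': 1200}]
```

Because the flow only depends on `fetch`/`push`, swapping either end for a different system is a configuration change rather than a rewrite – which is the appeal iPaaS vendors trade on.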

  3. Containers will become reality

To date, a lot of noise has been made about the possibilities of container technology, but in reality its use has yet to fully kick off. That’s set to change as household-name public clouds such as Amazon, Microsoft and Google are now embracing containers; IBM’s Bluemix offering in particular is set to make waves with its triple-pronged Public, Dedicated and Local delivery model. This builds a wave of momentum for application and OS technology manufacturers to ride, making it increasingly realistic for them to construct support services around container technology. It does present a threat to the traditional virtualization approach, but over time a shift in hypervisors is on the cards and container technology can only improve from this point.

  4. Cloud will be used for Data Resiliency/Recovery services

With cloud storage prices coming down drastically and continuous improvements being made to cloud gateway platforms, the focus is set to shift to cloud-powered backup and disaster recovery services. We are in an age where everything is being offered ‘as a service’; the idea of cloud-powered on-demand usability suits backup and disaster recovery services very well because they do not affect the immediate production data. As such, this should be an area where cloud use will dramatically increase over the next year.

With all emerging technologies, it takes time to fully figure out what they actually mean for enterprises, and these four cloud trends reflect that. In reality, we’re only just getting started with cloud. Now that they understand how it works, the time has come for enterprises to turn the screw and begin driving even more benefit from it.

Written by Kalyan Kumar, Chief Technologist at HCL Technologies.

Digital Transformation: Seven Big Traps to avoid in Implementing Bimodal IT

‘Bimodal IT’ is a term coined by Gartner. It describes one approach for keeping the lights on with mission-critical but stable core IT systems (Mode 1), whilst taking another route (Mode 2) to delivering the innovative new applications required to digitally transform and differentiate the business.

Both streams of IT are critical. Mode 1 requires highly specialised programmers and long, detailed development cycles; control, detailed planning and process adherence take priority. Projects are technical and require little involvement from business teams. Mode 2 requires a high degree of business involvement, fast turnaround, and frequent updates; effectively a quick sprint to rapidly transform business ideas into applications.

According to a recent survey by the analyst group, nearly 40 per cent of CIOs have embraced bimodal IT, with the majority of the remainder planning to follow in the next three years. Those yet to implement bimodal IT were tellingly those who also fared worst in terms of digital strategy performance.

If you’re one of the recently converted, you won’t want to rush blindly into bimodal IT, oblivious to the mistakes made by those who have already ventured down that path.

Based on experience over many customer projects, here are seven mistakes and misconceptions I’ve learned firms need to avoid when implementing bimodal IT:

1. Thinking bimodal IT impacts only IT – In transforming how IT operates, bimodal IT changes the way the business operates too. Mode 2 is about bringing IT and business together to collaboratively bring new ideas to market. This requires the business to be much more actively involved, as well as take different approaches to planning, budgeting and decision making.

2. Lacking strong (business) leadership – Strong IT and business leadership is absolutely critical to implementing bimodal IT. The individual responsible for operationally setting up Mode 2 needs to be a strong leader, and ideally even a business leader. That’s because the goals and KPIs of Mode 2 are so completely different from Mode 1. When Mode 2 is set up by someone with a Mode 1 mind-set, they tend to focus on the wrong things (e.g. upfront planning vs. learning as you go, technical evaluations vs. business value etc.), limiting the team’s chance of success.

3. Confusing Mode 2 with ‘agile’ – One of the biggest misconceptions about bimodal IT is that Mode 2 is synonymous with agile. Don’t get me wrong; iterative development is a key part of it. Because requirements for digital applications are often fuzzy, teams need to work in short, iterative cycles, creating functionality, releasing it, and iterating continually based on user feedback. But the process element extends beyond agile, encompassing DevOps practices (to achieve the deployment agility required for continuous iteration) and new governance models.

4. Not creating dedicated teams for Mode 1/2 – Organisations that have one team serving as both Mode 1 and Mode 2 will inevitably fail. For starters, Mode 1 always takes precedence over Mode 2. When your SAP production instance goes down, your team is going to drop everything to put out the fire, leaving the innovation project on the shelf. Second, Mode 1 and Mode 2 require a different set of people, processes and platforms. By forcing one team to perform double duty, you’re not setting yourself up for success.

5. Overlooking the Matchmaker role – When building your Mode 2 team, it’s important to identify the individual(s) who will help cultivate and prioritise new project ideas through a strong dialogue with the business. These matchmakers have a deep understanding of, and trusted relationship with, the business, which they can leverage to uncover new opportunities that can be exploited with Mode 2. Without them, it’s much harder to identify projects that deliver real business impact.

6. Keeping Mode 1 and 2 completely separate – While we believe Mode 1 and Mode 2 teams should have separate reporting structures, the two teams should never be isolated from each other. In fact, the two should collaborate and work closely together, whether to integrate a Mode 2 digital application with a system of record or to transfer maintenance of a digital application to Mode 1 once it becomes mission critical, requiring stability and security over speed and agility.

7. Ignoring technical debt – Mode 2 is a great way to rapidly bring new applications to market. However, you can’t move fast at the expense of accumulating technical debt along the way. It is important to ensure maintainability, refactoring applications over time as required.

While 75 per cent of IT organisations will have a bimodal capability by 2017, Gartner predicts that half of those will make a mess. Don’t be one of them! Avoid the mistakes above to ensure you implement bimodal IT properly and sustainably, with a focus on the right business outcomes that drive your digital innovation initiatives forward.

Written by Roald Kruit, Co-founder at Mendix

What the buzz is DevOps?

In an industry where there seems to be a constant conveyor belt of buzzwords, you’ll struggle to find one that is currently more widely used than DevOps.

In its simplest form, DevOps is, among other things, a business practice which ensures greater collaboration between the development and operations functions within an organization – the Holy Grail for most businesses! Development often considers operations too regimented, and operations tends to consider developers too wishy-washy. Finding a middle ground can be a tricky task.

But this is where DevOps fits perfectly; a cultural shift which enables collaboration between development and operations. It’s an ideology which strengthens communication, collaboration, integration and automation.

There are various nuances to the definition, but it is more or less the same irrespective of who you are talking to; the use case, however, can vary. Not dramatically, but the output of DevOps can depend on the organization you belong to, and the business case for the cultural change within the organization itself.

What is refreshing is that DevOps seems to be one of few concepts/technologies/ideologies which doesn’t seem to focus on being more cost effective. Almost every use case for DevOps focuses on proactive business benefits, as opposed to simply reducing CAPEX/OPEX.

The business applications for DevOps are potentially limitless, though here we’ll focus on three areas: speed of delivery, improved quality, and greater control/security.

First and foremost, speed. Speed is defining almost every facet of the digital business landscape, as well as consumer expectations. If you’re not working fast enough, your boss will start looking over your shoulder, and if you’re not releasing products fast enough your customers will buy elsewhere. In short, if you’re not fast, you’re not in business.

“DevOps enables IT to move applications from development and into production as quickly as possible,” said Brett Hofer, Global DevOps Practice Lead at Dynatrace.

“DevOps can also ensure testing doesn’t occur too late in the development lifecycle, to maximise its potential value. If you don’t integrate automated testing throughout development, operations teams will have to repeat tests manually every time a configuration change is made, and problems will be found too late to make vital changes,” said Hofer.
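To illustrate Hofer’s point, here is a minimal sketch of the kind of automated check a build pipeline could run on every change, so nobody has to repeat it by hand. The `validate_config` function and its two rules are invented for the example, not taken from any real product.

```python
# Minimal sketch of an automated configuration check. In a real pipeline this
# would run on every commit; validate_config and its rules are hypothetical.

def validate_config(config):
    """Return a list of problems; an empty list means the config is deployable."""
    problems = []
    if config.get("max_connections", 0) <= 0:
        problems.append("max_connections must be positive")
    if config.get("tls") is not True:
        problems.append("TLS must be enabled in production")
    return problems

# Run automatically by the build, not repeated manually after each change
good = {"max_connections": 100, "tls": True}
bad = {"max_connections": 0}

assert validate_config(good) == []
assert len(validate_config(bad)) == 2
print("all configuration checks passed")
```

Because the check runs automatically, a bad configuration fails the build immediately – the “found too late” problem Hofer describes simply cannot happen for anything the check covers.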

The concept of DevOps brings development and operations teams together, ensuring that the teams work in a complementary manner. The collaboration driven by DevOps allows teams to work towards the same objectives, ensuring that product delivery is more efficient.

“If companies align toolsets so teams are able to share insights and cooperate effectively, they can ensure everyone is working toward the same goals and that everyone is measured against the same benchmarks. With a unified view of performance data across teams, DevOps gives employees a unified comprehensive outlook that translates into an overall competitive advantage,” said Hofer.

Speed to market is all well and good, but this does not necessarily guarantee you will have the most effective product. An alternative objective for DevOps is evolution and continuous evaluation.

“As a DevOps user, Salesforce has seen benefits in several areas,” said Pauline Dufour, EMEA Developer Relations team at Salesforce. “The continual iteration, testing and collaboration that DevOps involves means it is much easier to incorporate customer feedback into products and to do this more quickly.”

“This has a big impact on our customers as we really do include much of their feedback into our product design and upgrades,” said Dufour. “The DevOps approach also enables us to be more innovative and nimble – values that are core to our company. Continual collaboration and iteration means that we are able to deliver continual innovation.”

While there are other uses for the concept, Salesforce have seemingly prioritized product relevance, keeping themselves ahead of competitors. Here, DevOps enables the team to update the product offering, building in new features and answering the call of customer feedback, while minimizing downtime and disruption to customers.

“In fact we believe that unless businesses adopt an open, integrated approach they will find themselves displaced by digital disruptors, as we’ve seen with Uber and Hailo in the taxi industry,” said Dufour. “For organisations with a less collaborative and open culture, DevOps may be harder to implement, but I believe it is definitely worth the effort – it can turn your development into a competitive advantage.”

Alongside Salesforce, the Copyright Licensing Agency (CLA) has also utilised this methodology of continuous development to build its new product offering, the Digital Content Store. The offering is currently being trialled by five universities, and will enable CLA’s customers to more effectively manage extracts which are under licence, as well as making the content more widely available to students.

“I’d define DevOps as a culture which enables IT (as a whole, not just Development and Ops) to be more productive and efficient,” said Adam Sewell, IT Director at the CLA. “Which in turn means they can be more reactive to changes in the market, more responsive in terms of delivering solutions to customers (e.g. by taking feedback from customers actually using new products early on in the product lifecycle and being able to develop and release new features faster and with confidence) and ultimately, be more innovative as a business.”

As with every other aspect of the industry, security is another consideration here. While many people would now consider themselves cloud experts, let’s not forget that cloud is only just entering the mass market. Most buyers remain concerned with security, robustness and reliability. DevOps presents a very simple solution.

“In product development, data has to be both accessible and secure,” said Ash Ashutosh, CEO at Actifio. “It’s a tricky balancing act, made all the more difficult by excess physical copy growth. More data copies just increase the ‘attack surface’. So the idea is to create fewer physical copies, decrease the number of security targets, mask sensitive data, create an audit trail and reduce overall risk.

“The control of sensitive data starts with reducing excess physical copies. What’s essential is that the system incorporates all key technical standards and multiple levels of data security that address physical, virtual and hybrid environments. It’s fast, and simple to understand and operate. It supports and helps to reinforce broader enterprise security strategies.”
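As a toy illustration of the data-masking idea Ashutosh mentions – and emphatically not Actifio’s actual implementation – masking can be as simple as replacing all but the last few characters of each sensitive field before a copy leaves production:

```python
# Toy illustration of masking sensitive fields before handing a data copy to
# development teams. A generic sketch, not any vendor's implementation.

SENSITIVE_FIELDS = {"ssn", "card_number"}

def mask_record(record):
    """Replace sensitive values, keeping only the last four characters."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            masked[key] = "*" * (len(value) - 4) + value[-4:]
        else:
            masked[key] = value
    return masked

customer = {"name": "Jane Doe", "card_number": "4111111111111111"}
print(mask_record(customer))
# {'name': 'Jane Doe', 'card_number': '************1111'}
```

Real products layer encryption, audit trails and access control on top, but the principle is the same: the fewer unmasked copies exist, the smaller the attack surface.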

Although the question of cost will always arise, as we can see from the examples above, early adopters of cloud technologies and derived methodologies (including DevOps), can create new business opportunities, launching brands into new markets and attracting new customers. Cloud, DevOps and all the other buzzwords in this space are more than just a means of reducing cost.

Head in the clouds? What to consider when selecting a hybrid cloud partner

The benefits of any cloud solution rely heavily on how well it’s built and how much advance planning goes into the design. Developing any organisation’s hybrid cloud infrastructure is no small feat, as there are many facets at play, from hardware selection to resource allocation. So how do you get the most from your hybrid cloud provider?

Here are six important considerations when designing and building out your hybrid cloud:

  1. Right-sizing workloads

One of the biggest advantages of a hybrid cloud service is the ability to match IT workloads to the environments that best suit them. You can build out hybrid cloud solutions with incredible hardware and impressive infrastructure, but if you don’t tailor your IT infrastructure to the specific demands of your workloads, you may end up with performance snags, improper capacity allocation, poor availability or wasted resources. Dynamic or more volatile workloads are well suited to the hyper-scalability and speedy provisioning of hybrid cloud hosting, as are any cloud-native apps your business relies on. Performance workloads that require higher IOPS (input/output operations per second) and CPU utilisation are typically much better suited to a private cloud infrastructure, particularly if they have elastic qualities or requirements for self-service. More persistent workloads almost always deliver greater value and efficiency with dedicated servers in a managed hosting or co-location environment. Another key benefit of choosing a hybrid cloud configuration is that the organisation only pays for extra compute resources as required.
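The placement logic above amounts to a rule of thumb, which can be sketched as a simple decision function. The categories mirror the paragraph, but reducing them to three boolean traits is an illustrative assumption – real right-sizing weighs many more factors.

```python
# Rule-of-thumb sketch of the workload-placement logic described above.
# The traits and categories are illustrative assumptions, not a standard.

def place_workload(volatile, high_iops, persistent):
    """Suggest an environment for a workload based on its characteristics."""
    if volatile:
        return "public cloud"          # hyper-scalable, speedy provisioning
    if high_iops:
        return "private cloud"         # predictable performance, self-service
    if persistent:
        return "dedicated/colocation"  # steady workloads, best unit economics
    return "review case by case"

print(place_workload(volatile=True, high_iops=False, persistent=False))   # public cloud
print(place_workload(volatile=False, high_iops=True, persistent=False))   # private cloud
print(place_workload(volatile=False, high_iops=False, persistent=True))   # dedicated/colocation
```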

  2. Security and compliance: securing data in a hybrid cloud

Different workloads may also have different security or compliance requirements, which dictate a certain type of IT infrastructure hosting environment. For example, your most confidential data shouldn’t be hosted in a multi-tenant environment, especially if the business is subject to Health Insurance Portability and Accountability Act (HIPAA) or PCI compliance requirements. It might seem obvious, but when right-sizing your workloads, don’t overlook what data must be isolated, and be sure to encrypt any data you opt to host in the cloud. Whilst cloud hosting providers can’t provide your compliance for you, most offer an array of managed IT security solutions. Some even offer a third-party-audited Attestation of Compliance to help you document for auditors how their best practices validate against your organisation’s compliance needs.

  3. Data centre footprint: important considerations

There is a myriad of reasons an organisation may wish to outsource its IT infrastructure: from shrinking its IT footprint and driving greater efficiencies to securing capacity for future growth, or simply streamlining core business functions. The bottom line is that data centres require massive amounts of capital expenditure to build and maintain, and legacy infrastructure does become obsolete over time. This can place a huge upfront capital strain on any mid-to-large-sized business’s expenditure planning.

But data centre consolidation takes discipline, prioritisation and solid growth planning. The ability to migrate workloads to a single, unified platform consisting of a mix of cloud, hosting and datacentre colocation provides your IT Ops with greater flexibility and control, enabling a company to migrate workloads on its own terms and with a central partner answerable for the result.

  4. Hardware needs

For larger workloads, should you host on premises, in a private cloud, or through colocation, and what sort of performance do you need from your hardware? A truly hybrid IT outsourcing solution enables you to deploy the best mix of enterprise-class, brand-name hardware that you either choose to manage yourself or consume fully managed from a cloud hosting service provider. Performance requirements, configuration characteristics, your organisation’s access to specific domain expertise (in storage, networking, virtualisation, etc.) and the state of your current hardware often dictate the infrastructure mix you adopt. It may be the right time to review your inventory and decommission hardware reaching end of life. Document the server decommissioning and migration process thoroughly to ensure no data is lost mid-migration, and follow your lifecycle plan through for decommissioning servers.

  5. Personnel requirements

When designing and building any new IT infrastructure, it’s sometimes easy to get so caught up in the technology that you forget about the people who manage it. With cloud and managed hosting, you benefit from your provider’s expertise and their SLAs – so you don’t have to dedicate your own IT resource to maintaining those particular servers. This frees up valuable staff bandwidth, so your staff can focus on tasks core to business growth, or train for the skills they’ll need to handle the trickier configuration issues you introduce to your IT infrastructure.

  6. When to implement disaster recovery

A recent study by Databarracks found that 73% of UK SMEs have no proper disaster recovery plans in place in the event of data loss, so it’s well worth considering what your business continuity plan is in the event of a sustained outage. Building in redundancy and failover as part of your cloud environment is an essential part of any defined disaster recovery service.

For instance, you might wish to mirror a dedicated server environment on cloud virtual machines – paying a small storage fee to house the redundant environment, but only paying for compute if you actually have to fail over. That’s just one of the ways a truly hybrid solution can work for you. When updating your disaster recovery plans to accommodate your new infrastructure, it’s essential to determine your Recovery Point Objective and Recovery Time Objective (RPO/RTO) on a workload-by-workload basis, and to design your solution with those priorities in mind.
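As a small worked example of that workload-by-workload RPO check (workload names and figures invented for illustration): for a scheduled backup, the worst-case data loss is the interval between backups, so that interval must fit within each workload’s RPO.

```python
# Sketch of checking backup schedules against per-workload RPOs, in minutes.
# Workload names and numbers are made up for illustration.

def meets_rpo(backup_interval_min, rpo_min):
    """Worst-case data loss equals the backup interval; it must fit the RPO."""
    return backup_interval_min <= rpo_min

workloads = {
    "payments db": {"backup_interval_min": 5,    "rpo_min": 15},
    "file shares": {"backup_interval_min": 1440, "rpo_min": 240},
}

for name, w in workloads.items():
    ok = meets_rpo(w["backup_interval_min"], w["rpo_min"])
    print(f"{name}: {'OK' if ok else 'backup too infrequent for RPO'}")
```

Here the nightly (1,440-minute) backup of the file shares fails a 4-hour RPO, flagging exactly the kind of workload that needs a more aggressive schedule or continuous replication.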

Written by Annette Murphy, Commercial Director for Northern Europe at Zayo Group

The economics of disaster recovery

Companies increasingly need constant access to data, and the cost of losing this access – downtime – can be catastrophic. Large organizations can quickly find themselves in the eye of a storm when software glitches strike, resulting in lost revenue, shaken customer loyalty and significant reputational damage.

In August 2013, the NASDAQ electronic exchange went down for 3 hours and 11 minutes, halting trading in the stocks of Apple, Facebook, Google and 3,200 other companies. It resulted in the loss of millions of dollars, paralyzing trading in stocks with a combined value of more than $5.9 trillion. The Royal Bank of Scotland has had five outages in three years, including one on the most popular shopping day of the year. Bloomberg also experienced a global outage in April 2015, resulting in the unavailability of its terminals worldwide. Disaster recovery for these firms is not a luxury but an absolute necessity.

Yet whilst the costs of downtime are significant, it is becoming more and more expensive for companies to manage disaster recovery as they have ever more data to protect: by 2020 the average business will have to manage fifty times more information than it does today. Downtime costs companies on average $5,600 per minute, and yet the costs of disaster recovery systems can be crippling, as companies build redundant storage systems that rarely get used. As a result, disaster recovery has traditionally been a luxury only deep-pocketed organizations could afford, given the investment in equipment, effort and expertise needed to formulate a comprehensive disaster recovery plan.
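A back-of-the-envelope calculation shows how quickly that $5,600-per-minute average adds up; applied to an outage the length of NASDAQ’s (3 hours 11 minutes), it implies a seven-figure cost.

```python
# Back-of-the-envelope downtime cost using the $5,600/minute average cited
# above, applied to an outage the length of NASDAQ's (3h 11m = 191 minutes).

COST_PER_MINUTE = 5600  # USD, average across companies

def downtime_cost(minutes):
    return minutes * COST_PER_MINUTE

outage_minutes = 3 * 60 + 11   # 191
print(f"${downtime_cost(outage_minutes):,}")   # $1,069,600
```

Of course the average hides huge variance – a trading exchange loses far more per minute than a small retailer – but even the average makes the case for investing in recovery.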

Cloud computing is now making disaster recovery available to all by removing the need for a dedicated remote location and hardware altogether. The fast retrieval of files in the cloud allows companies to avoid fines for missing compliance deadlines. Furthermore, the cloud’s pay-for-use model means organizations need only pay for protection when they need it and still have backup and recovery assets standing by. It also means firms can add any amount of data quickly, as well as easily expire and delete data. Compare this to traditional backup methods, where it is easy to miss files, data is only current to the last backup (which is increasingly insufficient as more data is captured via web transactions) and recovery times are longer.

Netflix has now shifted to Amazon Web Services for its streaming service after experiencing an outage in its DVD operation in 2008, when it couldn’t ship to customers for three days because of a major database corruption. Netflix says the cloud allows it to meet increasing demand at a lower price than it would have paid if it still operated its own data centres. It has tested Amazon’s systems robustly with its “Chaos Monkey”, “Simian Army” and “Chaos Kong” failure-testing tools, the last of which simulated an outage affecting an entire Amazon region.

Traditionally it has been difficult for organizations like Netflix to migrate to the cloud for disaster recovery, as they have grappled with how to move petabytes of data that is transactional and hence continually in use. With technology such as WANdisco’s Fusion active replication making it possible to move large volumes of data to the cloud while transactions continue, companies can now migrate critical applications and processes seamlessly. In certain circumstances a move to the cloud even offers a chance to upgrade security, with industry-recognized audits making it more secure than on-site servers.
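The core idea of active replication, stripped to its essentials, is that every write lands on all replicas as it happens, so a migration target stays current while transactions continue. The sketch below is only an illustration of that idea, not WANdisco Fusion's actual consensus-based implementation:

```python
class ReplicatedStore:
    """Toy key-value store that applies every write to all replicas,
    so an on-premises copy and a cloud copy never diverge."""

    def __init__(self, replica_names):
        self.replicas = {name: {} for name in replica_names}

    def write(self, key, value):
        # Apply the write everywhere before acknowledging it.
        for store in self.replicas.values():
            store[key] = value

    def read(self, replica, key):
        return self.replicas[replica][key]

# Hypothetical setup: live system plus a cloud disaster-recovery target.
store = ReplicatedStore(["on_prem", "cloud_dr"])
store.write("order-1001", "shipped")          # transaction lands in both
print(store.read("cloud_dr", "order-1001"))   # → shipped
```

Because the cloud replica is always up to date, cutting over to it requires no bulk copy and no downtime window, which is precisely what makes live migration of transactional data feasible.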

Society’s growing reliance on crucial computer systems means that even short periods of downtime can result in significant financial loss or, in some cases, even put human lives at risk. In spite of this, many companies have been reluctant to allocate funding for disaster recovery, as management often does not fully understand the risks. Time and time again, network computing infrastructure has proven inadequate. Cloud computing offers an opportunity to step up to a higher level of recovery capability at a cost that is palatable to nearly any sized business. The economics of disaster recovery in the cloud are such that businesses today cannot afford not to use it.

Written by David Richards, Co-Founder, President and Chief Executive of WANdisco.

The easiest way to explain the cloud to your boss

Today, approximately 90 per cent of businesses are using at least one cloud application. Yet only 32 per cent of these companies are running more than a fifth of their applications in the cloud. The obvious conclusion is that many company executives haven’t quite grasped what the cloud can do for them, which is why it is time for IT organisations to take an active role in explaining the cloud to the business.

One of the predominant issues preventing enterprises from realising the benefits of the cloud is their limited understanding of the technology. In simple terms, cloud computing can be defined as a computing environment consisting of pooled IT resources that can be consumed on demand. The ultimate benefit of the approach is that applications can be accessed from any device with an Internet connection.

More often, though, executives are interested in hearing business cases for implementing the cloud. So let’s walk through some of the most compelling pro-cloud arguments, with comments from industry experts.

The money argument

“But can we afford it?”

Luckily for you, the numbers are on your side.

As David Goulden, CEO of EMC Infrastructure, explains in a recent interview: “An immediate driver of many implementations is cost reduction. Both McKinsey and EMC analyses have found that enterprises moving to hybrid cloud can reduce their IT operating expense by 24%. That’s a significant number, and in essence can fund the people and process changes that yield the other benefits of hybrid cloud.”

But where do those cost reductions come from? Goulden explains that while lower hardware, software, facilities and telecom costs account for some of the savings, by far the most substantial reductions can be made in OPEX budgets: “The automation of hybrid cloud dramatically reduces the amount of labour needed to deploy new application software, and to monitor, operate, and make adjustments to the infrastructure. Tasks that used to take days are performed in minutes or seconds.”

The agility issue

“But how will it increase our agility?”

When it comes to cloud computing, agility is commonly used to describe the rapid provisioning of computer resources. However, as HyperStratus’ CEO Bernard Golden suggests, the term can be used to refer to two entirely different advantages: IT resource availability and responsiveness to changes in the business.

Furthermore, he argues that although internal IT availability is necessary for success, the ultimate aim of cloud computing efforts should be speeding business innovation to the market: “the ability to surround a physical product or service with supporting applications offers more value to customers and provides competitive advantage to the vendor. And knowing how to take advantage of cloud computing to speed delivery of complementary applications into the marketplace is crucial to win in the future.“

The security concern

“But will our information be safe?”

Short answer: that’s completely up to your cloud. The beauty of a well-designed hybrid cloud is that it allows enterprises to allocate their applications and data between different cloud solutions in a way that brings out the benefits of all and the drawbacks of none.

However, as Tech Republic’s Enterprise Editor Conner Forrest explains in a recent article: “One of the raging debates when it comes to cloud security is the level of security offered by private and public clouds. While a private cloud strategy may initially offer more control over your data and easier compliance to HIPAA standards and PCI, it is not inherently more or less secure. True security has more to do with your overall cloud strategy and how you are using the technology.” Thus, a haphazard mix of public and private doesn’t automatically make a hybrid cloud.

The customer angle

“But how will it benefit our customers?”

More recently, the C-suite has woken up to the reality that cloud applications can help them attract and retain customers. A good example of this comes from the University of North Texas, whose CFO Rama Dhuwaraha explains: “The typical student on campus today has about six different devices that need Internet access for parking services we offer, dining, classroom registration and paying bills online. During enrolment, most of them don’t want to go find a lab and then enrol – they want it at their fingertips. We have to extend those services to them.”

Overall, the value proposition of a customised cloud solution should be pretty clear. However, as Goulden emphasises: “Most companies simply don’t realise how quickly they can implement a hybrid cloud, or how much money and capability they’re leaving on the table until they have one”. Therefore, as IT professionals, it is our responsibility to take this message forward to the business and develop cloud strategies that serve the interest of the enterprise.

 

Written by Rob Bradburn, Senior Web Operations Manager, Digital Insights & Demand, EMC – EMEA Marketing

Harnessing the vertical cloud: why regulatory burdens don’t have to feel like an uphill struggle

As cloud adoption continues to grow, business innovation, scalability and agility are not only becoming realistic goals for the modern business, but mandatory requirements in order to facilitate growth and keep up with the competition. As many companies already have highly virtualised infrastructure in place, their IT strategy is increasingly focused on cloud adoption as a means of driving not just cost efficiencies but also innovation. Increasingly, businesses are looking at ways to ease the burden of meeting regulatory compliance and security requirements by implementing the relevant cloud adoption strategies.

Cloud computing is maturing at a rapid pace, with many “as-a-service” offerings such as infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), desktop-as-a-service (DaaS), disaster recovery-as-a-service (DRaaS) and software-as-a-service (SaaS). These developments have paved the way for the anything-as-a-service (XaaS) model, which can be seen as the foundation for the next stage of cloud development: the “vertical cloud”. The vertical cloud is designed to deliver the core applications, tools and surrounding ecosystem of a specific vertical, allowing organisations to customise cloud services to their specific needs and tastes.

The vertical cloud allows enterprises to pick and choose what to operate in the cloud and what to keep on their own premises, based on the security and compliance requirements that govern their businesses. In industries such as banking, finance and insurance, for example, regulatory compliance is the prime driver when choosing the architecture of the IT infrastructure. With major banking and finance regulations in progress, including Basel III and the EU General Data Protection Regulation (GDPR), regulatory compliance will remain a major area of investment throughout 2016.

However, using the vertical cloud shifts the onus of compliance to the cloud provider, on account of their proven and re-usable governance and security frameworks. Vertical cloud offerings can come pre-packaged with the required regulatory obligations, relieving organisations of the burden of ensuring compliance themselves.

Continued growth in cloud and IT infrastructure spending
Analysts foresee that a significant acceleration in global cloud-related spending will continue. During 2016, global spending on IT services is forecast to reach $3.54 trillion, as companies continue to adopt cloud services, according to Gartner. This trend is no different in the United Kingdom, where adoption continues to grow. For example, the banking and finance sector is predicted to increase spending on IT in 2016, a big part of which is dedicated to cloud services. Recent guidance issued by the Financial Conduct Authority is likely to perpetuate this trend, by advocating the implementation of the cloud by financial services organisations, paving the way for firms in this sector to take advantage of cloud services and the innovation that it can foster.

The main factors driving cloud adoption are industry competition and pace of change brought on by digitisation. However, businesses need to be more nimble and use the cloud to absorb planned and unscheduled changes swiftly and seamlessly. In order to enable companies to deal with market trends and deviations, the cloud value chain takes a holistic approach to the current business in the context of a changing market. Here is a snapshot of a few such phases which, in rapid evolutionary terms, lead to the adoption of the vertical cloud, a concept that encompasses them all.

The cloud service economy is here. The current trend in cloud adoption is to look beyond asset management and the traditional methods used to accomplish business outcomes (e.g. developing, testing and repairing). Instead, the various flexible ‘as-a-Service’ models offered by cloud firms allow businesses to employ technology solutions themselves, freeing IT teams to focus instead on architectural and advisory services. From an IT infrastructure perspective, the expectation in the cloud service economy is that the investment delivers value directly, rather than through traditional and laborious ways of realising it.

Anything-as-a-Service as a prelude to vertical cloud. Cloud thinking is spurring some organisations to explore running their entire IT operations on the Anything-as-a-Service (XaaS) model, with costs that vary with service consumption. Ultimately, however, it is digital disruption across industries that has made traditional in-house handling increasingly complex. Businesses are faced with the need to handle next-generation requirements such as big data analytics, cognitive computing, mobility and smart solutions, the Internet of Things (IoT) and other examples of digitisation.

Security and regulatory compliance are complex and exhaustive, requiring IT infrastructure and applications to be constantly ready for ever-evolving demands. Hence, pursuing a vertical cloud strategy can help businesses not only to advance and accelerate growth and gain competitive advantage, but also to meet security and regulatory compliance requirements.

 

Written by Nachiket Deshpande, Vice President of Infrastructure Services, Cognizant.

Containers: 3 big myths

Joe Schneider is DevOps Engineer at Bunchball, a company that offers gamification as a service to the likes of Applebee’s and Ford Canada.

This February Schneider is appearing at Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA), where he’ll be cutting through the cloudy abstractions to detail Bunchball’s real-world experience with containers. Here, exclusively for Business Cloud News, Schneider explodes three myths surrounding the container hype…

One: ‘Containers are contained.’

If you’re really concerned about security, or if you’re in a really security-conscious environment, you have to take a lot of extra steps. You can’t just throw containers into the mix and leave it at that: a container is not as secure as a VM.

When we adopted containers, at least, the tools weren’t there. Now Docker has made security tools available, but we haven’t transitioned from the stance of ‘OK, Docker is what it is; recognise that’ to a more secure environment. What we have done instead is try to make sure the edges are secure: we put a lot of emphasis on that. At the container level we haven’t done much, because the tools weren’t there.

Two: The myth of the ten thousand container deployment

You’ll see the likes of Mesosphere, or Docker Swarm, say, ‘we can deploy ten thousand containers in like thirty seconds’ – and similar claims. Well, that’s a really synthetic test: these kinds of numbers are 100% hype. In the real world such a capacity is pretty much useless. No one cares about deploying ten thousand little apps that do literally nothing but say ‘hello world.’

The tricky bit with containers is actually linking them together. When you start with static hosts, or even VMs, they don’t change very often, so you don’t realise how much interconnection there is between your different applications. When you destroy and recreate your applications in their entirety via containers, you discover that you actually have to recreate all that plumbing on the fly and automate that and make it more agile. That can catch you by surprise if you don’t know about it ahead of time.
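That "plumbing" problem can be illustrated with a toy service registry: once containers are destroyed and recreated, their addresses change, so links between services have to be resolved dynamically rather than hard-coded. The class and addresses below are purely hypothetical:

```python
class Registry:
    """Minimal service registry: services look up each other's current
    address instead of baking it in at deploy time."""

    def __init__(self):
        self.endpoints = {}

    def register(self, service, address):
        # Re-registering a service simply overwrites the old address.
        self.endpoints[service] = address

    def resolve(self, service):
        return self.endpoints[service]

reg = Registry()
reg.register("db", "10.0.0.5:5432")      # original database container
stale_target = reg.resolve("db")

reg.register("db", "10.0.0.9:5432")      # container recreated at a new address
# An app that cached the old address would now be pointing at nothing:
print(reg.resolve("db") != stale_target)  # → True
```

Real-world equivalents of this pattern include DNS-based discovery, Consul, or the service abstractions built into orchestrators, but the lesson is the same: with containers, the wiring between applications must be rebuilt automatically every time an instance is replaced.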

Three: ‘Deployment is straightforward’

We’ve been running containers in production for a year now. Before that we were playing around a little bit with some internal apps, but now we run everything except one application on containers in production. And that was a bit of a paradigm change for us. The line that Docker gives is that you can take your existing apps and put them in a container and it’s going to work in exactly the same way. Well, that’s not really true. You have to actually think about it a little bit differently, especially with the deployment process.

An example of a real ‘gotcha’ for us was that we presumed systemd and Docker would play nice together, and they don’t. That really hit us in the deployment process – we had to delete the old container and start a new one using systemd, and that was always very flaky. Don’t try to home-grow your own solution; use something that is designed to work with Docker.


Tackling the resource gap in the transition to hybrid IT

Is hybrid IT inevitable? That’s a question we ask customers a lot. From our discussions with CIOs and CEOs there is one overriding response, and that is the need for change. It is very clear that across all sectors, CEOs are challenging their IT departments to innovate – to come up with something different.

Established companies are seeing new threats coming into the market. These new players are lean, hungry and driving innovation through their use of IT solutions. Our view is that more than 70 percent of all CEOs are putting a much bigger ask on their IT departments than they did a few years ago.

There has never been so much focus on the CIO or IT departmental manager from a strategic standpoint. IT directors need to demonstrate how they can drive more uptime, improve the customer experience, or enhance the e-commerce proposition for instance, in a bid to win new business. For them, it is time to step up to the plate. But in reality there’s little or no increase in budget to accommodate these new demands.

We call the difference between what the IT department is being asked to do and what it is able to do the resources gap. With the rate of change in the IT landscape increasing, the demands on CIOs from the business growing, and little or no increase in IT budgets from one year to the next, that gap is only going to get wider.

But by changing their way of working, companies can free up additional resources to go and find their innovative zeal and get closer to meeting their business’ demands. Embracing Hybrid IT as their infrastructure strategy can extend the range of resources available to companies and their ability to meet business demands almost overnight.

Innovate your way to growth

A hybrid IT environment combines a company’s existing on-premises resources with public and private cloud offerings from a third-party hosting company. Hybrid IT has the ability to provide the best of both worlds – sensitive data can still be retained in-house, whilst the cloud, either private or public, provides the resources and computing power needed to scale up (or down) when necessary.

Traditionally, 80 percent of an IT department’s budget is spent just ‘keeping the lights on’: keeping servers working, powering desktop PCs, backing up data and performing general maintenance.

But with the CEO now raising the bar, more innovation in the cloud is required. Companies need to keep their operation running but reapportion the budget so they can become more agile, adaptable and versatile to keep up with today’s modern business needs.

This is where Hybrid IT comes in. Companies can mix and match their needs to any type of solution. That can be their existing in-house capability, or they can share the resources and expertise of a managed services provider. The cloud can be private – servers that are the exclusive preserve of one company – or public, sharing utilities with a number of other companies.

Costs are kept to a minimum because the company only pays for what they use. They can own the computing power, but not the hardware. Crucially, it can be switched on or off according to needs. So, if there is a peak in demand, a busy time of year, a last minute rush, they can turn on this resource to match the demand. And off again.

This is the journey to the Hybrid cloud and the birth of the agile, innovative market-focused company.

Meeting the market needs

Moving to hybrid IT is a journey.  Choosing the right partner to make that journey with is crucial to the success of the business. In the past, businesses could get away with a rigid customer / supplier relationship with their service provider. Now, there needs to be a much greater emphasis on creating a partnership so that the managed services provider can really get to understand the business. Only by truly getting under the skin of a business can the layers be peeled back to reveal a solution to the underlying problem.

The relationship between customer and managed service provider is now also much more strategic and contextual. The end users are looking for outcomes, not just equipment to plug a gap.

As an example, take an airline company operating in a highly competitive environment. They view themselves as being not in the people transportation sector, but as a retailer providing a full shopping service (with a trip across the Atlantic thrown in). They want to use cloud services to take their customer on a digital experience, so the minute a customer buys a ticket is when the journey starts.

When the passenger arrives at the airport, they need to check in, choose the seats they want, do the bag drop and clear security all using on-line booking systems. Once in the lounge, they’ll access the Wi-Fi system, check their Hotmail, browse Facebook, start sharing pictures etc. They may also choose last minute adjustments to their journey like changing their booking or choosing to sit in a different part of the aircraft.

Merely saying “we’re going to do this using the cloud” is likely to lead to the project misfiring. As a good partner, the service provider should have experience of building and running both traditional infrastructure environments and new ones based on innovative cloud solutions, so that it can bring ‘real world’ transformation experience to the partnership. Importantly, it must also have the confidence to demonstrate digital leadership and an understanding of the business and its strategy, to add real value to the customer as it undertakes the journey of digital transformation.

Costs can certainly be rationalised along the way. Ultimately, with a hybrid system you only pay for what you use; at the end of the day, the peak periods will cost the same as, or less than, the off-peak operating expenses. So, with added security, compute power, speed, cost efficiencies and ‘value-added’ services, hybrid IT can provide the agility businesses need.

With these solutions, companies have no need to ‘mind the gap’ between the resources they need and the budget they have. Hybrid IT has the ability to bridge that gap and ensure businesses operate with the agility and speed they need to meet the needs of the competitive modern world.

 

Written by Jonathan Barrett, Vice President of Sales, CenturyLink, EMEA