All posts by monicabrink

Forget about the Internet of Things for a minute – and consider the Security of Things


Opinion The Internet of Things (IoT) is well and truly upon us – and will clearly be even more prevalent in the future. Today, IoT is already branching out into commercial networks as well as enterprise applications. Smart devices are becoming more commonplace in our households, with everyday appliances now able to communicate with the internet to help our lives run more smoothly, and interconnected devices are now essential tools in our working lives as well. This is all fantastic news – right?

While it’s easy to get excited about all the new gadgets the era of the IoT has delivered, it is important to take a step back from the excitement and talk about security.

Millions of people across the globe are connecting with these devices and sharing valuable data. However, the potential misuse of this data remains fairly well hidden, disguised under IoT’s novelty halo effect. Infosecurity experts have long warned that IoT devices are set to be a security nightmare, as they are often deployed with little or no consideration for security. The question is: are enough people aware of this, and are the right security measures being taken – particularly by organisations that need to protect business-critical and sensitive data? Recent distributed denial-of-service (DDoS) attacks, such as the one experienced by the DNS provider Dyn – which made it impossible to access the likes of Twitter, Amazon and Netflix – should be a serious wake-up call.

In its early days, the World Wide Web brought with it little protection from misuse. This, of course, generated consumer distrust, consequently slowing down initial e-commerce efforts. Fast-forward to the present day, however, and e-commerce represents around 15% of all retail sales in the UK, with an expected £5 million to be spent online this Black Friday in the UK alone.

This is no doubt due to the fact that, today, data encryption and other security measures are simply assumed. People no longer fear sending their credit card information over the wire, and as a result security issues are, for the most part, kept in the background. It almost seems as though we are in a cycle in which consumers and organisations blindly trust companies with their valuable data, and it is only when intrusions become known and reported that action is taken and data security is examined.


This, in some respects, also echoes the initial response to the cloud, which saw low user adoption in its first few years due to worries about the security of data stored offsite. Compare that to the beginning of this year, when the UK cloud adoption rate climbed to 84%, according to the Cloud Industry Forum.

Most of the IoT devices hacked to date have had default usernames and passwords, and at no point had the manufacturers prompted users to change these. Increasingly, hackers are able to use malware to scour the web for devices with only basic security and detect their vulnerabilities. This enables the hackers to upload malicious code so that the devices can be used to attack a targeted website.
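
As an illustration of how simple the underlying weakness is, here is a minimal, hedged sketch of the defensive counterpart: an audit that checks devices on your own inventory against a list of factory-default credentials and flags any that still accept them. The device list, credential pairs and login function are hypothetical placeholders, not a real tool.

```python
# Defensive sketch: flag devices on *your own* network that still accept
# factory-default credentials. Device list and credentials are hypothetical.

DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
]

def accepts_login(host: str, user: str, password: str) -> bool:
    """Placeholder: in practice this would attempt a real login (e.g. HTTP
    or Telnet) against a device you are authorised to test."""
    # Simulated inventory of devices still using factory defaults.
    known_weak = {"192.168.1.20": ("admin", "admin")}
    return known_weak.get(host) == (user, password)

def audit(hosts: list[str]) -> list[str]:
    """Return the hosts that still accept any default credential pair."""
    flagged = []
    for host in hosts:
        if any(accepts_login(host, u, p) for u, p in DEFAULT_CREDENTIALS):
            flagged.append(host)
    return flagged

if __name__ == "__main__":
    print(audit(["192.168.1.10", "192.168.1.20"]))  # -> ['192.168.1.20']
```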

What is really worrying is that the owners of these IoT devices are usually unaware of the attack. Once a device has been hijacked it can be impossible to tell, as it often continues to work exactly as normal. Issues then begin to occur behind the scenes when the compromised system is subsequently put on the same network as personal computers, corporate servers and even confidential government data.

The main issue is that, without knowing which devices exchange data within a specific network or with the internet as a whole, there is no way to develop an adequate security strategy. In theory, every single device added to a network needs to be evaluated, but this is just as painstaking as it sounds.

Whether it is the IoT or the cloud, companies need to begin using security technologies and procedures that have already been proven reliable. This means applying on-premise levels of IT security to cloud workloads. For example, two-factor authentication, role-based access control, encryption and vulnerability scanning can form a protective shield for the cloud, scanning all incoming and outgoing data for malicious code regardless of the device being used. The right security technologies embedded into the cloud platform allow companies to gain control of all web-based traffic and actively manage which communications should be permitted and which should be blocked.
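
To make one of those controls concrete, here is a minimal sketch of time-based two-factor authentication using the widely used pyotp library. This is an illustration only, not any particular provider’s implementation; the user name, issuer and code handling are placeholders.

```python
# Minimal TOTP two-factor sketch using pyotp (pip install pyotp).
# Illustrative only: secret storage and user lookup are placeholders.
import pyotp

# In practice the secret is generated once per user at enrolment and stored
# server-side; the user loads it into an authenticator app via the URI below.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning URI for the authenticator app:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCloud"))

# At login, the user submits the 6-digit code from their app.
submitted_code = totp.now()  # simulate a correct code for the demo

# verify() checks the code against the current 30-second window.
print("Second factor accepted:", totp.verify(submitted_code))
```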

Recent high-profile cyber attacks and, increasingly, ransomware threats have spurred a long overdue discussion about the gaps in IoT security. Unless the security side of IoT is sorted out, it could hold back wider adoption of the technology. Early adopters beware; the best advice is to follow the data. Know how the company behind your latest gadgets and interconnected devices handles security, and ensure that any cloud provider can give you the reports and ongoing visibility needed to manage and maintain security settings.

Test, test, then test again: Analysing the latest cloud DR strategies


In September, we hosted a roundtable with fifteen business leaders to discuss and debate the findings from our survey, The State of IT Disaster Recovery Amongst UK Businesses. The debate was chaired by Chris Francis, techUK board member. Customers Wavex and Bluestone also participated in the discussion, as did our partner Zerto and industry influencers Ray Bricknell from Behind Every Cloud and analyst Peter Roe from TechMarketView. The event was lively and thought-provoking.

Outages definitely happen more frequently than we think. We ran through the scale of outages reported in the press in just the last month, involving organisations like British Airways, ING Bank and Glasgow City Council. British Airways lost its check-in facility due to a largely unexplained ‘IT glitch’; ING Bank’s regional data centre went offline due to a fire drill gone wrong (reports suggest that more than one million customers were affected by the downtime); and Glasgow City Council lost its email for three days after a fire system blew in the Council’s data centre.

Our survey backed up the high frequency of outages, showing that 95% of companies surveyed had faced an IT outage in the past 12 months. Interestingly, 87% of those who suffered outages considered them severe enough to trigger a failover. We looked at some of the reasons for those outages, and top of the list were system failure and human error. So it is often not the big headlines we see – environmental threats, storms or even terrorism – that bring our systems down, but more mundane day-to-day issues. The group also suggested that issues often occur at the application level rather than the entire infrastructure being taken down.

We also discussed the importance of managing expectations and how disaster recovery should be baked in rather than seen as an add-on. Most businesses have a complex environment with legacy systems, so they can’t really expect 100% availability all of the time. That said, the term disaster recovery can scare people, so those around the table felt that we should really talk more about resilience and maintaining ‘Business as Usual’. DR isn’t about failing over an entire site anymore; it’s about pre-empting issues – for example, testing and making sure that everything is going to work before you make changes to a system.

The discussion moved on to the impact of downtime. The survey found that every second really does count. When we asked respondents how catastrophic downtime would be, 42% said mere seconds would have a big impact, a figure that rose to nearly 70% when it came to minutes. The group’s advice was that businesses really need to focus on recovery times when looking at a DR solution. We also talked about how much budget is spent on meeting recovery goals. The reality is that you can’t pay enough to compensate for downtime, but for most businesses there will always be some kind of trade-off between budget and downtime.
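
To make that trade-off tangible, here is a deliberately simple back-of-the-envelope sketch; every figure in it is a hypothetical assumption for illustration, not a number from our survey.

```python
# Hypothetical downtime trade-off: all figures are illustrative assumptions.
cost_per_minute = 5_000               # assumed loss per minute of downtime (£)
expected_outages_per_year = 2         # outages are common, per the survey theme
minutes_per_outage_without_dr = 240   # assumed recovery time with no DR plan
minutes_per_outage_with_dr = 15       # assumed recovery time with tested failover
annual_dr_spend = 100_000             # assumed cost of the DR solution (£)

loss_without_dr = cost_per_minute * expected_outages_per_year * minutes_per_outage_without_dr
loss_with_dr = cost_per_minute * expected_outages_per_year * minutes_per_outage_with_dr

print(f"Expected annual loss without DR: £{loss_without_dr:,}")               # £2,400,000
print(f"Expected annual loss with DR:    £{loss_with_dr + annual_dr_spend:,}")  # £250,000
```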

The group discussed whether business decision makers really understand the financial impact of downtime. Is more education needed about recovery times, what can be recovered, and prioritising different systems so the business understands what will happen when outages take place?

We then moved on to look at overconfidence in DR solutions. The survey found that 58% had issues when failing over, despite 40% being confident that their disaster recovery plans would work. Only 32% executed a failover with confidence and had it all work well, while 10% did not fail over but were confident that it would work. We talked to the group about this misplaced confidence: while IT leaders know the importance of having a DR solution and taking measures to implement one, there appears to be a gap between believing your business is protected in a disaster and having that translate to a successful failover.

The bottom line is that DR strategies are prone to failure unless failover systems are thoroughly and robustly tested. Confidence in failover comes down to the frequency with which IT teams actually perform testing, and whether they are testing the aspects that really matter, such as the application level. Equally, are they testing network access, performance, security and so on? We certainly believe that testing needs to be done frequently to build evidence and a proven strategy. If testing only takes place once a year, or once every few years, how confident can organisations be?
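
One lightweight way to make such testing routine is an automated post-failover smoke test. The sketch below, with hypothetical endpoint URLs, simply checks that key application endpoints respond at the recovery site; a real test plan would also cover network access, performance and security.

```python
# Post-failover smoke test sketch: endpoint URLs are hypothetical.
# Run against the recovery site after each failover test.
import urllib.request

ENDPOINTS = [
    "https://dr.example.com/login",
    "https://dr.example.com/api/health",
]

def smoke_test(urls, timeout=5):
    """Return {url: True/False} for whether each endpoint answered HTTP 200."""
    results = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results[url] = resp.status == 200
        except OSError:  # covers URLError, timeouts, connection refusals
            results[url] = False
    return results

if __name__ == "__main__":
    for url, ok in smoke_test(ENDPOINTS).items():
        print(("PASS" if ok else "FAIL"), url)
```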

The group agreed that the complex web of interlocking IT systems is one of the biggest inhibitors to successful testing. While testing may be conducted on one part of a system in isolation, if that part falls over it can trigger a chain of events in other systems that the organisation is unable to control.

The group agreed that there is an intrinsic disconnect between what management wants to hear in terms of DR recovery times and what management wants to spend.

In conclusion, we discussed the need to balance downtime against cost, as no one has an unlimited budget. A lot of the issues raised in the survey are challenges that can be traced directly back to simply not testing enough, or not doing enough high-quality testing. The overall advice that iland draws from the survey is to test, test and test again – and, importantly, to make sure that DR testing can be performed non-intrusively so that production applications are not affected, that it is cost-effective, and that it does not place a large administrative burden on IT teams.

Editor’s note: You can download a copy of the survey results here.

Why security and compliance are still the main blockers to cloud adoption


According to the Cloud Industry Forum, 80% of UK companies are adopting cloud technology as a key part of their overall IT and business strategy. However, one of the perceived barriers to cloud adoption continues to be concerns around security and compliance, which was the key topic of discussion at a recent techUK panel in which iland took part. During this discussion, we talked about the fact that nine out of ten security professionals worry about cloud security.

One of the key findings – something we’ve certainly seen here at iland, and which was borne out in a survey around cloud security that we conducted earlier in the year with independent analyst firm Enterprise Management Associates (EMA) – is that companies now consider cloud security to be superior to on-premise environments, but often expose themselves to risk by blindly relying on a glut of technology they are unable to actively manage.

Our survey found that nearly half (47%) of security personnel admitted to simply trusting their cloud providers to meet security agreements without further verification. This highlighted that transparency continues to be a key issue, as many providers do not offer detailed insights into the cloud environment – or, if they do, these are certainly not up to the levels customers are accustomed to in their own data centre operations. At the same time, we also found that teams tend to throw technology at the problem, yet tech alone will not solve it: the survey showed that 48% more security technologies are deployed in the cloud than on-premise.

Further, security features now top the list of priorities companies consider when selecting a cloud provider, ahead of performance, reliability, management tools and cost. Therefore, our advice to companies is, firstly, to verify your cloud provider’s claims and, secondly, to ensure that you can properly leverage the technology that you are deploying.
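
Verification can start small. As one hedged example (the hostname is a placeholder), the sketch below independently checks a provider endpoint’s TLS certificate expiry rather than taking “we encrypt in transit” on trust:

```python
# Sketch: independently verify a provider's TLS certificate instead of
# trusting claims. The hostname is a hypothetical placeholder.
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()  # also validates the certificate chain
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    print("Days until certificate expiry:", cert_days_remaining("cloud.example.com"))
```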

Interestingly, our survey showed that there appears to be much more alignment between IT and the business. Respondents indicated that IT would rather delay a new application deployment due to security concerns than deploy it in a potentially insecure environment, and the business agreed by an almost 3 to 1 margin. To my mind, this represents a fundamental shift in organisational dynamics, where the business should no longer view security personnel as naysayers, but as allies who are committed to fighting threats alongside the business.

One of the key problems accentuating security issues appears to be skills and staffing shortages. In fact, over two-thirds (68%) of the organisations EMA surveyed admitted that they have staffing shortages and 34% have skills shortages, which directly correlates to the flaws and opposing perceptions uncovered in our study. While IT has made monumental progress in identifying and adopting necessary security technologies, cloud providers must do more to ensure teams can easily validate claims, manage disparate tools, anticipate threats and take action when needed.

Further, we can see there is a lack of understanding of compliance among IT personnel. While 96% of security professionals acknowledge that their organisations have compliance-related workloads in the cloud, only 69% of IT teams identified the same. This gap could lead to exposure for the organisation if IT were to place a compliance-related workload with a non-compliant cloud provider.

And, finally, clearly defined responsibilities are needed, both with your cloud service provider and within your own company, because in the end, where security is concerned, the buck stops with you. There is no point claiming that you thought someone else had it covered. This is where DevSecOps comes in as the next evolution of DevOps: you make security the responsibility of every member of the team, at every step of the way, right from dev through to ops.
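
In practice, DevSecOps often starts with small automated gates in the pipeline. Below is a hedged sketch of one such gate: a CI step that fails the build if pip-audit (a real, separately installed tool) reports known-vulnerable Python dependencies. How it is wired into your pipeline is an assumption, not a prescribed setup.

```python
# CI security gate sketch: fails the build when pip-audit finds known
# vulnerabilities. Assumes pip-audit is installed (pip install pip-audit).
import subprocess
import sys

def dependency_gate() -> int:
    # pip-audit exits non-zero when vulnerabilities are found.
    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print("Known-vulnerable dependencies found: failing the build.",
              file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(dependency_gate())
```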

Right now, and for the foreseeable future, cloud adoption shows no sign of abating. It is therefore critical that we get security and compliance right; otherwise they will continue to be blockers for organisations and could hinder innovation and competitive advantage.

Read more: A warning shot: Why cloud security remains more important than ever 

Reducing threats and management headaches across private clouds: A guide


While public cloud implementations are steadily increasing, private clouds in customers’ own data centres continue to be deployed because of the perceived higher levels of security and control they offer.

But the management of a private cloud can be complex, and many organisations underestimate the scale of the challenge. Security and management in particular continue to be pain points. Many go in assuming that, because IT departments have more control in a private cloud, the environment will either be inherently more secure or it will be easier to implement and maintain security controls. Unfortunately, many organisations subsequently find the management challenge is greater than anticipated and adoption is difficult, which often has more to do with organisational and IT transformation issues than with the technology itself. This puts a whole new dimension on the scale of the challenge, and raises the question of whether the internal IT team is up to it.

In order to secure your private cloud, you will need to manage your entire cloud footprint – including key performance metrics, network and virtual machine configuration, disaster recovery and backup, intrusion detection, access management, patching, antivirus and many more security considerations.
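
Even the first item on that list, key performance metrics, implies ongoing tooling. As a hedged illustration, this sketch uses the psutil library to flag a host breaching simple CPU and disk thresholds; the threshold values are arbitrary assumptions, not recommendations.

```python
# Minimal host-metrics check using psutil (pip install psutil).
# Threshold values are illustrative assumptions.
import psutil

CPU_THRESHOLD = 85.0   # percent
DISK_THRESHOLD = 90.0  # percent of root filesystem used

def check_host() -> list[str]:
    alerts = []
    cpu = psutil.cpu_percent(interval=1)   # sample CPU over one second
    disk = psutil.disk_usage("/").percent
    if cpu > CPU_THRESHOLD:
        alerts.append(f"CPU at {cpu:.0f}% exceeds {CPU_THRESHOLD:.0f}%")
    if disk > DISK_THRESHOLD:
        alerts.append(f"Root disk at {disk:.0f}% exceeds {DISK_THRESHOLD:.0f}%")
    return alerts

if __name__ == "__main__":
    for alert in check_host() or ["All metrics within thresholds."]:
        print(alert)
```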

The sheer volume of security controls needed can be overwhelming, and not many IT departments have the people power or expertise to manage diverse security solutions from multiple vendors. But the real kicker is often not the security technology itself; it’s the internal processes required to implement private cloud security.

The IT team needs to work with the security department and procurement to define who is responsible for what when it comes to maintaining infrastructure. Internal departments won’t necessarily prioritise your request, and complex negotiations and acts of diplomacy often ensue. Evaluation processes alone can take six months or more, and it’s a dance that needs to be done not for one security solution but for many.

Add to this the fact that the threat landscape continues to grow and become ever more complex. Some cloud initiatives are increasingly stalling or being cancelled altogether because the security risks are deemed too high. This creates an uncomfortable situation for IT leaders, as lines of business in their organisations are still demanding the agility, scalability and cost savings that cloud computing can deliver. IT leaders know they can’t abandon cloud altogether – the benefits are too great – and yet they also know whose head will be on the line if an outage, data loss or hacking incident were traced to a cloud workload. It is not surprising that some look back with nostalgia on the simple outsourcing models of the past.

A secure hosted private cloud from a provider like iland can provide an answer for companies with workloads that require isolated infrastructure. A hosted private cloud can deliver all the integrated security, management, support and availability levels you would get in a traditional data centre environment, but with all the convenience and benefits of cloud.

Instead of buying, installing, integrating and maintaining separate security solutions, secure hosted private cloud solutions like iland’s also offer a purpose-built management console that integrates the management and reporting of security and compliance settings, smoothing the path to completing audits. Embedded security features in a hosted cloud platform include role-based access control, two-factor authentication, VM encryption, vulnerability scanning, anti-virus/anti-malware, integrity monitoring, intrusion detection, and detailed firewall and log events. Additionally, all private cloud resources can be managed from a single place, with access to performance metrics, granular billing data, VM management capabilities and DR management. Hosted private cloud customers also benefit from flexible pricing models that deliver predictable and controllable operational expenses in reservation or burst options, while avoiding the capital expenses of on-premise private clouds.

As the security challenge continues for cloud workloads, secure hosted private clouds offer stretched IT departments the best of both worlds. Removing the burden of hardware management, shifting capital expenses to operating costs and benefitting from the latest innovations, this model seamlessly augments on-premise data centres and delivers a robust, enterprise-class secure private cloud with all the features of a public cloud – but without the management overhead.

Why the nirvana of cloud scalability can be easily achievable today


It’s perhaps the most captivating myth of cloud: autoscaling. A major selling point, the promise of autoscaling is that the amount of computing resource automatically scales based on load, handling unexpected traffic spikes with no human intervention. Workloads just know, for example, when your website has been mentioned on a television show or in a major magazine; demand spikes, and the cloud compensates. The cloud becomes a nimble, always-on, semi-sentient member of your IT staff with extraordinary reflexes, reacting accordingly.

That cloud nirvana doesn’t really exist, though – or rather, for most, it is out of reach.

Why? Well, there are dozens of reasons, from the technological to the practical. And unfortunately, it’s this promise that IT leaders are often sold on – the belief that autoscaling is easy, quick to set up and always ensures 100 percent uptime. The truth about autoscaling, as with most technological promises, is that it’s a little more involved.

We often say we want autoscaling, when, in our heart of hearts, we all know that there are so many ways a system could seem to be spiking, but that spike may in fact have nothing to do with demand. It could be an attack. It could be a runaway process. It could be some middling attempt at recursion by a newbie developer. The list goes on.

For this mythical self-scaling automatic magic-cloud to exist, it would not only have to be amazingly responsive, but it would also have to be intelligent enough to triage, in those moments, the reason for the spike and respond – or not – accordingly. Few humans have that skill, let alone distant, application-unaware computing systems.

So autoscaling isn’t just a little more involved; it’s a lot more involved. And it’s also not exactly ‘auto’. It is in fact complex, time-consuming and demands a great deal of technical knowledge and skill. Creating a truly automated, self-healing architecture that scales with little or no human intervention requires custom scripts and templates that can take months for a skilled team to get right, and many organisations have neither the time nor the resources to make it work.

So, what’s the next best thing? Alerting to a potential demand spike so a real live human can assess the situation (major television show or print magazine timing, marketing email blast data, and some knowledge of whether the last patch might be at fault) and make an intelligent scaling decision.
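
As a hedged sketch of that “alert a human” approach, here is a simple moving-average detector that flags when current demand far exceeds recent history, leaving the scaling decision to a person. The window size and threshold factor are assumptions.

```python
# Demand-spike alert sketch: flags unusual load for a human to triage
# rather than autoscaling. Window size and threshold are assumptions.
from collections import deque

class SpikeDetector:
    def __init__(self, window: int = 60, factor: float = 3.0):
        self.history = deque(maxlen=window)  # recent requests-per-minute samples
        self.factor = factor                 # spike = factor x moving average

    def observe(self, requests_per_minute: float) -> bool:
        """Record a sample; return True if it looks like a spike."""
        if len(self.history) == self.history.maxlen:
            avg = sum(self.history) / len(self.history)
            if requests_per_minute > self.factor * avg:
                self.history.append(requests_per_minute)
                return True  # alert a human: demand, attack, or runaway process?
        self.history.append(requests_per_minute)
        return False

detector = SpikeDetector(window=5)
for load in [100, 110, 95, 105, 100, 450]:
    if detector.observe(load):
        print(f"ALERT: load {load} rpm is well above the recent average")
```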

But, as has often been pointed out, people aren’t actually doing that. Instead they are just over-provisioning. Why? Well, in part because resizing instances or VMs in most clouds is a taxing process. Often, it requires downtime and restarts in new, larger instances. And then, when the hullabaloo about your artisanal locally-grown alpaca friendship bracelets passes, you need to reset the whole thing again to a smaller size.

IT people are busy. They don’t have time for this either. Couple that with the fact that they are chastised when systems are under-provisioned or fail, that restarting a system may land it on an unfortunate server filled with noisy neighbours, and that all of this is happening at the scale of dozens or hundreds of servers at a time – and it feels like a great time to just over-provision everything and leave well enough alone.

There is an alternative. Managed clouds – like the iland cloud – are different. iland’s managed cloud doesn’t establish instances; instead, you get an entire resource pool of CPU, RAM and storage – just like your on-premise cluster – and, without turning VMs on or off, you can add and remove resources on the fly or re-assign resources to different VMs as required.
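
To illustrate the resource-pool model, as opposed to fixed instance sizes, here is a hedged sketch of a pool allocator in miniature; the class and figures are invented for illustration and are not iland’s API.

```python
# Resource-pool model in miniature: a fixed pool of CPU/RAM that VMs draw
# from and return to on the fly. Invented for illustration; not a real API.
class ResourcePool:
    def __init__(self, cpu_ghz: float, ram_gb: float):
        self.free = {"cpu_ghz": cpu_ghz, "ram_gb": ram_gb}
        self.vms: dict[str, dict] = {}

    def assign(self, vm: str, cpu_ghz: float, ram_gb: float) -> bool:
        """Grow a VM's share if the pool has headroom; no restart needed."""
        if cpu_ghz > self.free["cpu_ghz"] or ram_gb > self.free["ram_gb"]:
            return False  # pool exhausted: grow the pool, not the instance
        self.free["cpu_ghz"] -= cpu_ghz
        self.free["ram_gb"] -= ram_gb
        alloc = self.vms.setdefault(vm, {"cpu_ghz": 0.0, "ram_gb": 0.0})
        alloc["cpu_ghz"] += cpu_ghz
        alloc["ram_gb"] += ram_gb
        return True

pool = ResourcePool(cpu_ghz=100.0, ram_gb=512.0)
pool.assign("web-01", cpu_ghz=8.0, ram_gb=32.0)
pool.assign("web-01", cpu_ghz=4.0, ram_gb=16.0)   # scale up in place
print(pool.vms["web-01"], pool.free)
```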

A managed cloud is not exactly like having an autonomic self-motivated mega-computer presciently rejiggering your RAM, but most of us also aren’t eager to invite HAL9000 to our staff meetings. Instead, you can achieve a happy middle ground of cost-savings and ultimate control – and that feels like an IT myth whose time has come.

Understanding UK cloud adoption strategies: Where are we currently at?


Last week, as part of London Technology Week, we held a breakfast panel discussion looking at cloud adoption strategies and overcoming cloud security and compliance challenges. Industry experts from Cisco, techUK and Behind Every Cloud, as well as cloud users, joined us to explore adoption strategies for ensuring success. In particular, we wanted to understand what is working in the UK and what is not. What are companies aiming to achieve, and where are they in their cloud adoption journeys?

Simon Herbert from Cisco kicked off the discussion by saying that companies will undoubtedly be at different stages, but what is important is to understand the priorities and desired outcomes for the business. He went on to say that it’s about understanding where you are and where you need to be, and developing a cloud adoption strategy in line with this. In Herbert’s eyes, moving to the cloud is not simply flicking a switch; it demands a more considered approach.

Sue Daley from techUK wholeheartedly agreed, and said that techUK is focused on helping organisations optimise cloud and make the most of their cloud investment. Daley went on to say that here in the UK we have a very vibrant cloud computing market, with lots of opportunities to use cloud and a strong appetite amongst UK businesses.

Sue talked about the six key areas in techUK’s Cloud 20/20 vision aimed at keeping the UK at the forefront of cloud adoption. These are: enabling data portability and system interoperability within the cloud computing ecosystem; building trust in the security of cloud services; making sure that we have the right regulatory environment to support cloud; addressing the culture change that cloud brings; ensuring effective public sector adoption; and, lastly, making sure that here in the UK there is a communications infrastructure ready for mass cloud adoption.

Ray Bricknell from Behind Every Cloud talked about the fact that many of the companies he works with are mid-tier financial services companies who are not ‘born in the cloud’ and therefore not adopting cloud as quickly as you would expect.  He said that often there is a mandate from the CEO to move everything to the cloud but the reality is that many of these organisations are still at early stages.  They are trying to figure out which workloads to prioritise and who they should work with, often driven by a specific project rather than a big bang approach.

Krisztian Kenderesi from Bluestone Financial Services Group, an iland customer, agreed and said that often the cloud decision is not an IT decision but a business one.  He went on to say that Bluestone is using iland’s Disaster Recovery-as-a-Service and again that was driven by a specific need at the time.

As a panel we discussed some of the barriers to adoption and why some companies are not using cloud. Without a doubt, data privacy, data protection, security and compliance are among the main reasons. Bricknell suggested that one of the biggest barriers is a perception issue around cloud. He went on to say that it is an urban myth that cloud is cheaper, but when an organisation weighs up the security, scalability and availability that come with the right cloud solution, the cost of achieving the same in-house becomes prohibitively expensive.

Kenderesi replied that at Bluestone they were motivated by cost saving initiatives as well as a ‘cloud-first’ mentality. He also went on to say that by using a DRaaS provider like iland a company could reduce overall in-house costs by 40% and achieve much better recovery points.

The conversation moved on to managing the cloud. Bricknell said that many of the companies he has worked with often don’t expect the move to the cloud to involve such a big management task in integrating information from multiple cloud solutions. He commented that a lot of vendors don’t want to tell you what is going on under the hood, and stressed how important it is to have transparency with your cloud provider. Daley noted that interoperability and data portability in the cloud ecosystem are growing issues under discussion.

The panel concluded by summing up what IT leaders should look for when transitioning to cloud. Transparency and visibility are key. Shadow IT is prevalent, so interoperability and compliance with other vendors need to be considered. Imminent data protection changes around liability and data controls will also introduce many more data protection requirements. We all agreed that when cloud becomes more strategic you need a much more open and trusted relationship with multiple providers, whereas when it is ad hoc, it matters less if the provider you work with is more proprietary.

Where security is concerned, the online threat environment is constantly evolving and therefore again you need an open and trusted relationship with your cloud service provider in order to constantly adjust to new threats. At the same time providers must tell their customers of any issues or breaches so that they are correctly dealt with at the time.  And finally, it is highly unlikely that an organisation will work with just one provider, most companies will spread their risk across multiple vendors which is why visibility into the service is absolutely key.

The key steps when migrating from a physical environment to the cloud


As companies continue to step up the pace of migration to the cloud, many find themselves having to bridge the gap between their physical and cloud infrastructures, which brings new challenges. IT has applications it regards as critical, but lines of business have different application requirements that they regard as critical. As a result, IT often ends up overloaded with requests, and as it attempts to satisfy demand with more resources, this can lead to over-provisioning of cloud infrastructure and runaway costs.

As more workloads are moved to the cloud, IT management must evolve and adopt a new approach to delivering infrastructure, applications and end user access. Indeed, IT needs to let go of its attachment to physical infrastructure and start to present itself as a service, providing internal customers with access to flexible resources on demand to support digital business initiatives. That, however, is easier said than done, and for many, migrating a physical environment into the cloud is extremely daunting.

In our experience, a good way to begin cloud migration is to use cloud for net new workloads. By running new, and often non-mission-critical, workloads in the cloud, the operational team can grow accustomed to managing the cloud, understand performance metrics and even gain confidence in estimating costs. Moving workloads to the cloud gradually will help the team acclimatise to the new way of working and ensure that IT understands the environment it is working in before migrating major workloads.

We often get asked what type of applications organisations should move to the cloud. This very much depends on the organisation, but our advice is to take an inventory of your data and applications, and then decide which applications are most important to host in the cloud. You need to assess and tier according to business criticality, considering how much of your environment is virtual and how much is physical, and then identify any critical components, such as specific networking requirements or physical systems. A lack of knowledge and visibility of the assets organisations have and use is a common challenge, given the complexity of IT infrastructure, so understanding what you have before deciding what to move is really important.
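
As a hedged sketch of that inventory-and-tier step, the snippet below models applications with a few attributes and sorts them into migration tiers; the criteria and example entries are invented for illustration only.

```python
# Inventory-and-tier sketch: attributes, criteria and entries are invented
# for illustration only.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    business_critical: bool
    virtualised: bool
    special_networking: bool

def migration_tier(app: App) -> int:
    """Lower tier = better early migration candidate."""
    tier = 1
    if app.business_critical:
        tier += 1  # move critical systems after confidence is built
    if not app.virtualised:
        tier += 1  # physical systems need conversion first
    if app.special_networking:
        tier += 1  # specific networking requirements add risk
    return tier

inventory = [
    App("intranet-wiki", False, True, False),
    App("trading-engine", True, False, True),
    App("hr-portal", True, True, False),
]
for app in sorted(inventory, key=migration_tier):
    print(f"tier {migration_tier(app)}: {app.name}")
```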

In many cases, migrating virtual and less mission-critical applications first feels like the swiftest path to initial success. However, understanding the full range of applications you’re ultimately migrating will help you select a provider best positioned to address your needs. Using cloud-based disaster recovery services is also a good way to start out and become comfortable with cloud operations – particularly if your cloud service provider enables self-service DR management and testing.

Physical systems are often the most trying part of migrating to the cloud. They are usually remnants of an earlier era, kept in the IT environment because they are necessary and critical to business operations. There are, however, instances when moving these legacy systems to the cloud is beneficial to the business.

Alongside this, moving workloads to the cloud can mean losing visibility into performance metrics, long-term history and even costs. This can greatly increase the burden of managing your cloud workloads and introduce some fear with respect to billing, costs and performance. While migration to cloud is considered to have low set-up costs, the ongoing management is equally critical to your cloud decision. Working with your cloud provider to maintain transparency and visibility of these systems is important to keep business costs and the IT budget running as normal.

Your cloud service provider should offer support and training to help ease cloud migration. Companies considering cloud need advisory and architecture services to help them through this transition. However, many providers fall short on the basic on-boarding and support processes they offer to customers as part of cloud deployments.

You should ensure that you understand the levels of on-boarding training and support, and the ongoing customer support, for your cloud offering. This goes beyond self-help resources, knowledge bases or message boards. Make sure that you understand the additional costs and the SLAs that you are entering into. Equally, as you look to grow your cloud services, look for a provider that offers support over the phone.

In summary, cloud-based applications offer many benefits, including the ability to scale IT resources when needed, quickly launch new apps and ensure a high level of performance at all times – not to mention, if you have a disaster recovery programme in place, keeping the business running if disaster strikes. Having applications in the cloud provides a stable and scalable infrastructure on demand and gives IT employees the freedom to focus on more strategic initiatives that drive digital transformation across the business.

Read more: Analysing security and regulatory concerns with cloud app migration

A warning shot: Why cloud security remains more important than ever


With all the recent well-publicised hacking and malware attacks, not to mention numerous meteorological events that have affected companies around the globe over the last year, IT leaders are very aware of the need for robust cloud security and compliance.

That said, it is in fact now easier for companies to engage in poor security practices, because users do not have the same control over their cloud infrastructure that they have over their own on-premise infrastructure. Often, organisations using public cloud assume that their cloud provider is taking care of security, and they may even have assurances of that from the provider. Yet usually the customer has no visibility of the public cloud infrastructure they are using and little transparency with regard to security settings. They are therefore placing a lot of trust in the promise that the public cloud provider is addressing security, when that may not actually be the case.

Ultimately, companies are becoming more complacent towards risk, simply because they don’t have visibility into the security of the cloud infrastructure they are using and don’t have a way to monitor that security. But as is often the case, ignorance is not bliss. The reality is that managing and monitoring cloud security is an ongoing task and customers need to work with a provider that is able and willing to proactively provide them with security information, alerts and notifications.

This is becoming even more important as companies use the public cloud for more mission-critical production applications. They need to ensure that the security features usually deployed for on-premise applications are also deployed in the cloud.

One of the big consequences of public cloud security failure is downtime, and again there have been plenty of well-publicised examples of this from large public cloud providers over the last year.  Downtime in the cloud can have a serious impact on a customer’s business as it often means applications and services are not available. Another consequence is that security shortcomings make it difficult for customers to meet industry compliance regulations – particularly for customers in the finance, healthcare and retail industries. And then there is, of course, the potential loss of data – particularly sensitive customer data – that can lead to serious financial and reputation costs for companies.

Enterprise customers need to engage with a cloud provider that is prepared to partner with them around cloud security and compliance. They should demand visibility into native security and compliance functionality as well as support. Equally important, teams need to get precise clarity on who is responsible for each security measure – the vendor or the customer.

Increasingly, IT organisations are looking to cloud providers to deliver security assurance across multiple layers of the application. This is especially true as more teams are structured with IT generalists, rather than traditional security, networking, server and storage specialists. As pressures on IT teams increase, cloud providers must do more to arm customers with intuitive, advanced security functionality that includes alerts to potential threats as well as recommendations for addressing the issues.

Anticipating this demand, we at iland have partnered with industry leaders such as Trend Micro, Hytrust, Tenable and Nimble Storage to build advanced security into our cloud infrastructure and disaster recovery services, including features like VM encryption, vulnerability scanning, anti-virus/anti-malware and intrusion detection. Further, we have invested in providing customers with a single management console that can be used to access detailed security reporting, in addition to every other component of their global cloud resources, such as performance, billing, capacity, backup and disaster recovery.

In summary, cloud is becoming a far less risky proposition for customers, if – and that is a big if – they partner with the right provider. In fact, many of our customers have realised that we have invested in more advanced security technologies than they could in their own on-premise data centres. However, the cloud providers’ stance on cloud security needs to go beyond security technology to also provide security reporting and recommendations to customers. Through our cloud console, customers are able to generate a report which, at any time, shows them how their cloud resources and applications are performing against all of these security parameters – it’s that type of information and security partnership that is needed to ensure ongoing cloud security for customers.

MWC takeaways: Making sure your infrastructure is secure for the connected world


With Mobile World Congress over for another year, I’ve now had a chance to digest everything that was on show last week. This conference, arguably the planet’s best venue for mobile industry networking, looked at the Internet of Things and the connected world – anything and everything from developments such as 5G to newly connected toothbrushes that ensure consumers brush their teeth as the dentist intended.

What all these new technology innovations seem to have in common is the capability to generate obscene – and yet potentially very useful – amounts of data. How organisations manage and use this data – and how they keep it secure – will be a major challenge and one of the key predictors of success across many industries.

With an overwhelming array of new technology producing ever increasing amounts of often sensitive data, now more than ever there is scope for hackers to breach personal and company sensitive data. With reports highlighting the need to safeguard the confidential data on employees’ smartphones and tablets, the security of connected devices is becoming even more problematic and is set to be a big issue in 2016.

This was further highlighted by recent research from analyst firm Gartner, which predicted that half of employers will require employees to supply their own devices for work by 2017 – meaning a lot of sensitive data will be accessible via millions of insecure devices.

This got me thinking: even if you secure the devices on a network, you still need to secure your systems and infrastructure right from the server to the end user – wherever that infrastructure might be, and most of it is likely to be in the cloud. With the growth of IoT, the connected world, mobile devices and cloud being key themes for 2016, companies need to ensure that end-to-end attack surfaces are fully protected.

This is clearly evident from the many infrastructure breaches we have seen recently in the press – from the well-known UK telecoms provider that suffered a well-publicised infrastructure breach at the end of October 2015, to lesser-known small and medium-sized businesses that were completely derailed by cyber-attacks in the final quarter of last year. With more businesses adopting cloud than ever before, the cloud infrastructure that employees work from needs to be just as secure, able to cope with a security breach and protect all of that data.

Making sure your cloud networks, infrastructure, applications and data are as secure as possible is a vital part of leveraging the technological innovations that were presented at Mobile World Congress. Here are three security issues that organisations must consider and address to ensure a fully-secure cloud:

Threat landscape monitoring against attacks: Making sure that you know where the most vulnerable points are in your existing infrastructure means you can work to address and protect them. Having a cloud infrastructure in place that can monitor the threat landscape, scan for vulnerabilities and detect any potential threats can keep your organisation safe from debilitating infrastructure breaches.

Compliance: Many companies have stringent compliance policies to adhere to, including industry regulatory requirements. Having a fully compliant cloud infrastructure that conforms to your country’s regulations and adheres to data sovereignty rules is essential in these highly regulated environments. More important, though, is having the visibility into your cloud infrastructure that enables you to monitor cloud security and prove (to the C-suite or auditors) that your company apps and data are secure and compliant. Cloud transparency and security and compliance reporting will become essential as cloud adoption grows and cloud is used for more mission-critical business workloads.

Encryption of data: Having the ability to encrypt sensitive data is beneficial for a plethora of reasons, including making sure that service providers cannot access this information, deterring hackers and adding an additional layer of security for extra-sensitive data. As companies take on multiple clouds to manage data, it is important to ensure security and flexibility when transferring data between clouds. Alongside this, having the ability to hold your own encryption key provides the power and security that come with placing the highest possible restrictions on who can access sensitive data (a minimal sketch of this follows below).
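
To make the ‘hold your own key’ idea concrete, here is a hedged sketch using the Python cryptography library: data is encrypted client-side before upload, so the provider only ever stores ciphertext. How and where the key is stored is left as an assumption.

```python
# Hold-your-own-key sketch using the cryptography library
# (pip install cryptography). Where you store the key is up to you;
# the point is that the cloud provider never sees it.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # customer-held key: never leaves your side
cipher = Fernet(key)

plaintext = b"extra-sensitive customer record"
ciphertext = cipher.encrypt(plaintext)   # this is all the provider stores

# Later, after downloading the ciphertext back from the cloud:
assert cipher.decrypt(ciphertext) == plaintext
print("Round-trip OK; the provider only ever held ciphertext.")
```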

It is vital to have conversations with your cloud provider to ensure that you are on the same page where security is concerned. Otherwise, your infrastructure may not be fully protected and this can mean your organisation will remain mired in using cloud for the most basic use cases or, worse, expose your company data and apps to unacceptable risk.

There is no doubt that the onslaught of new technology – in some cases technology beyond our wildest dreams as showcased last week at MWC – brings with it additional security risks and threats. With the Internet of Things and the connected world growing exponentially, there will undoubtedly be more infrastructure breaches. In research that we conducted last June with Forrester, which covered the challenges companies face in dealing with their cloud providers, over half of respondents (55%) found that critical data which was available to cloud providers but hidden from users creates challenges with implementing proper controls. In today’s digital world, the consequence of not implementing proper controls around sensitive data is huge.

Our research clearly shows that more needs to be done for companies to feel safe using the cloud and being part of the connected world without feeling at risk of a breach. So, before you go racing off to implement the latest ‘must have’ gadget or new technology, the first step is to ensure that your systems are secure right at the core of the organisation. This includes ensuring that your cloud infrastructure provides the security, insight and security reporting required for your organisation to be a successful part of the connected world and the Internet of Things.

The cloud’s the limit: Disaster recovery, compliance, and other key 2015 trends


We are well into the first month of 2016, and many predictions from the cloud sector have started to reveal the trends we may see over the next 12 months. In 2015 we saw IaaS (infrastructure as a service) and PaaS (platform as a service) grow 51%, with cloud firmly pushed into the limelight. Looking to the future, there are many indications of what to expect in the year to come, including improvements to current-generation cloud environments as well as a resurgence of private cloud amongst early and late cloud adopters.

However, while many are in the midst of starting their New Year’s resolutions and looking towards the year ahead, it’s also a good time to look back at the previous year and take stock of what’s been achieved by both vendors and customers in the cloud industry.


Probably the biggest cloud trend in 2015 was security and compliance. There’s been a growing realisation that no cloud computing strategy is complete without a cloud security and compliance strategy. Spurred on by numerous high profile breaches, security was brought to the forefront of many business plans; the EU also expressed its concern about the levels of cyber security being implemented, and agreed new cyber security rules imposing new network security requirements on businesses providing essential services.

As viruses, hackers and industry regulations from HIPAA to Safe Harbor have started to impact IT departments more, the focus has sharpened on how best to apply on-premise levels of security and compliance in the cloud. iland certainly had a big year in this realm – our investments in cloud security and compliance, which culminated in the release of ECS-AS in the second half of the year, have created a lot of interest and excitement among customers.

Another trend in 2015 was that Disaster-Recovery-as-a-Service continued to claw its way to the top of CTO priority lists. With IT budgets tight and faster adoption of cloud in general, we’ve noticed a growing comfort level and confidence in cloud-based disaster recovery solutions. iland certainly experienced growth in this area in 2015 with customers like Strata adopting DRaaS and reaping the benefits from it. As seen currently in the UK with the floods affecting the north of England, disaster recovery is not an area that should be overlooked; marketing company Pi Industries recently implemented a disaster recovery solution and has urged other businesses to do the same.


2015 also brought a keener focus on cloud costs. Many customers have found that cloud can actually cost more, not less, if the required levels of support and management tools are not available. Although the main driver for cloud continues to be agility, there is still a strong desire for insight into costs and billing, and for the capability to manage cloud costs carefully. As cloud becomes a more integral component of the overall IT strategy, IT leaders are rightly demanding that it comes with the visibility, service and support levels they’re used to receiving from enterprise IT.

These cloud trends from 2015 highlight the need for cloud providers to offer cost transparency and security services that thoroughly protect the customer – and for customers to stay aware of the current threats to their business, be that a natural disaster or a hacker at the end of a computer. So, as we get further into 2016 and IT leaders firm up their plans, companies would be well placed to consider these trends and look for cloud partners that can help in these key areas as their cloud journeys continue.