
Are you sure your data centre is really as secure as you think it is?

(c) iStock.com/bjdlzx

Data centres are only as secure as the connectivity that links them to the wider network. Weak links leave them prone to cyber-attacks and information security breaches, which can have catastrophic consequences for any organisation that transmits and backs up data between its own data centres. Network latency adds to the problem: transmissions to and from data centres, or out to the wider world, can increase the risks associated with these threats.

According to Clive Longbottom – renowned analyst and Client Services Director at Quocirca – latency can lead to lost transactions whenever there is a failure in the connectivity, application or platform. High latency also makes real-time uses of IT, such as voice or video transmission, impossible. Latency, though, is only one part of the equation in his view: there is also a complex mix of network, application and hardware considerations to bear in mind.

“Latency has two effects on security in data centres: the first issue is about how closely you can keep your data centres in synchronicity with each other, and when you are transmitting data you have got to have security keys”, explains David Trossell – CEO of Bridgeworks. So whether you are putting data into the cloud or into a multi-tenanted data centre, it’s crucial to be secure. In other words, no unauthorised person should be able to pry into the data itself.

Encrypt data

Underlying the protection of the data centre from an information security perspective, then, is the need for enterprises to encrypt data whenever it is uploaded to the cloud or transmitted between data centres for back-up and retrieval. The encryption needs to happen while the data is at rest, before it is ever sent across a network. It is also worth noting that encrypted data is at its most secure when only one party holds the keys to it.

“An AES-256 key offers the strongest encryption, but the problem is that a strong security key takes more computing power, and with encrypted data you shouldn’t be able to perform any de-duplication, which looks for repeated patterns in the data”, says Trossell. To transmit data more quickly, most organisations would traditionally opt for a wide area network (WAN) optimisation tool, where encryption occurs while the data is in transit using IPsec.

Encrypting the data at rest makes it more secure. With traditional WAN optimisation, the keys would have to be handed to the WAN optimisation engine so that it could decrypt and de-duplicate the data before applying the IPsec security protocol across the wide area network. At the far end, the WAN optimisation engine would then need to strip off IPsec and re-encrypt the data. This means there are now two security keys held in two different places – and that can be the biggest security risk.

“For the highest levels of security, data should be encrypted before it hits storage”, says Longbottom. He adds: “This requires full-stream, speed-capable encryption and yet this is often not feasible.” The next level, he says, is to store and then encrypt, deleting the unencrypted version afterwards: “This is encryption at rest and on the move, but too many organisations just go for encryption on the move, so if someone can get to the storage media, then all of the information is there for them to access – and what is typically overlooked is key management.”
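
As a rough illustration of what encrypting before the data ever hits the wire can look like, the minimal Python sketch below encrypts a backup file at rest with AES-256-GCM before it is handed to any transfer or replication tool. It assumes the third-party cryptography package is installed; the file names are hypothetical, and a real deployment would keep the key in a proper key management system rather than in a script.

```python
# Minimal sketch: encrypt a backup file at rest before it leaves the site.
# Assumes the third-party "cryptography" package; paths are illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_backup(plain_path: str, cipher_path: str, key: bytes) -> None:
    """Encrypt plain_path with AES-256-GCM and write nonce + ciphertext."""
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                      # unique per encryption
    with open(plain_path, "rb") as f:
        ciphertext = aesgcm.encrypt(nonce, f.read(), None)
    with open(cipher_path, "wb") as f:
        f.write(nonce + ciphertext)             # nonce is not secret

key = AESGCM.generate_key(bit_length=256)       # keep this on site, in a key manager
encrypt_backup("nightly_backup.tar", "nightly_backup.tar.enc", key)
```

Only the encrypted file then crosses the WAN, so the organisation keeps sole custody of the key – the point Longbottom and Trossell make above.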

Mind your keys

“If you lose the keys completely, no-one should be able to get at the information – even yourself; and if they are compromised, you at least have control over them as long as you are aware that you can do something about it”, he explains. However, if the keys are held by a third party, then he says that party becomes a richer target for hackers, “as it will likely hold the keys for a group of companies rather than just one, and the speed of response from noticing the breach to notification to the customer to action being taken could be a lot longer.”

The trouble is that, handled this way, the data is often only secured while it is in transit across the network. “The issue here is that if you have a high-speed WAN link, then this will inhibit the movement of data and you are not fulfilling your WAN optimisation”, comments Trossell. His colleague Claire Buchanan, CCO at Bridgeworks, adds: “You are impacting on your recovery time objective (RTO) and on your recovery point objective (RPO).” The RPO is the most recent point to which the data was backed up; the RTO is how quickly the data can be retrieved and put back to work.
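
To make the two terms concrete, here is a small, purely illustrative Python calculation of the RPO and RTO actually achieved in a single incident; the timestamps are hypothetical.

```python
from datetime import datetime

# Hypothetical timestamps for one incident (illustrative values only).
last_backup  = datetime(2016, 3, 1, 2, 0)    # last successful backup finished
failure_time = datetime(2016, 3, 1, 9, 30)   # outage begins
service_back = datetime(2016, 3, 1, 11, 0)   # data restored, service resumed

rpo_achieved = failure_time - last_backup    # data written in this window is lost
rto_achieved = service_back - failure_time   # how long the service was down

print(f"Achieved RPO: {rpo_achieved}")       # 7:30:00 – up to 7.5 hours of data at risk
print(f"Achieved RTO: {rto_achieved}")       # 1:30:00 – 1.5 hours of downtime
```

If the business has set targets of, say, one hour for each, an incident like this would breach both.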

Gain control

“With encryption at rest the corporate is in full control and it is the sole owner of the key, but normally WAN optimisation tools simply pass the data through with no acceleration and in order to provide some level of security, traditional WAN optimisation tools provide an IPsec layer – but this is not anywhere close to the levels of security that many corporations require”, she explains.

To gain control, she thinks organisations need a new solution. In her view that solution is self-configuring and optimised networks (SCIONs), which provide a high level of security and enable organisations to significantly reduce latency. They use machine intelligence not only to self-configure, but also to self-manage and self-monitor any kind of network – particularly WANs. This makes any transition to cloud-based infrastructure easier to achieve, while providing a secure way for organisations to push utilisation of their infrastructure up to 98%. SCIONs reduce the effects of latency too, going well beyond transactional flow processing and steady-state infrastructures.

Security compliance used to be comparatively light, but a number of new threats have since arisen and the stakes are far higher than they once were. “You have the internal problems, such as the one represented by Snowden, and with more powerful machines the lower encryption of 128-bit is far easier to crack than something with 256-bit encryption, which adds layers of complexity.” Trossell claims that nowadays there are more disgruntled employees than ever – Wikileaks is an example of it – but employees still have to have the keys before they can access the encrypted data.

Longbottom adds that it wasn’t long ago that 40-bit encryption was seen as sufficient: it required little computing resource and in most cases was hard enough to break. Increased resource availability now makes it breakable within a matter of minutes. “Therefore the move has been to 256 bit – AES, 3DES, Blowfish and so on”, he says, before adding that cloud computing gives hackers a means to apply brute force in trying to break the keys.
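
A back-of-the-envelope comparison shows why the move away from 40-bit keys happened. The guess rate below is an arbitrary assumption chosen for illustration, not a measured figure for any real attacker.

```python
# Illustrative only: how key length changes the worst-case brute-force effort.
GUESSES_PER_SECOND = 10**12           # assumed attacker speed (arbitrary)
SECONDS_PER_YEAR = 31_557_600

for bits in (40, 128, 256):
    keyspace = 2 ** bits              # number of possible keys
    years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits:>3}-bit key: {keyspace:.2e} keys, ~{years:.2e} years to exhaust")
```

At that assumed rate a 40-bit keyspace is exhausted in about a second, while a 256-bit keyspace remains utterly out of reach – which is why the brute force that cloud computing makes cheap matters for short keys but not, in practice, for 256-bit ones.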

The solution is to keep the keys on site and to limit the number of people who have access to them. By doing this, the data – and therefore the data centres – remain secure. “Previously organisations have had no choice but to simply move the encrypted data at a slow speed, and with traditional WAN optimisation it simply passes the data along the pipe without any acceleration”, says Buchanan. Many corporations still think that is the only way to go, but it no longer is: encryption is still needed whenever data is transmitted between data centres or to the cloud, but it no longer has to come at the expense of speed.

Speed with security

Buchanan adds that WANrockIT – the market leader in SCION solutions – can help organisations to improve the speed and security of this process: “With WANrockIT your encryption is just another block of data to us, accelerated just like any other data without it being touched – plus, if you are using encrypted data, the software has the ability to put IPsec on top so that you effectively get double encryption.”

One anonymous Bridgeworks customer, for example, tried to transfer a 32GB video file over a 500MB satellite link with 600ms of latency, and it took 20 hours to complete. With WANrockIT, installed in just 11 minutes, the transfer took only 10 minutes. Another customer could only manage incremental back-ups of 50GB rather than full nightly back-ups of 430GB – again the issue was latency, at 86ms. The incrementals took 12 hours over the customer’s OC12 pipes, but once WANrockIT was installed the 50GB back-ups were securely completed within 45 minutes. This allowed the full nightly back-ups to complete, and the organisation could rest in the knowledge that its data was secure.
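
The figures in that first example line up with simple TCP arithmetic: over a long round trip, a single stream can have at most one window of data in flight per round trip, so latency rather than link speed sets the ceiling. The sketch below assumes an effective window of 256KB purely for illustration; real behaviour depends on window scaling, congestion control and packet loss.

```python
# Back-of-the-envelope: why 600ms of latency throttles a single TCP stream.
WINDOW_BYTES = 256 * 1024        # assumed effective TCP window (illustrative)
RTT_SECONDS  = 0.6               # 600 ms round-trip latency, as in the example
FILE_BYTES   = 32 * 10**9        # roughly a 32GB video file

ceiling_bytes_per_s = WINDOW_BYTES / RTT_SECONDS          # one window per round trip
hours = FILE_BYTES / ceiling_bytes_per_s / 3600

print(f"Per-stream ceiling: {ceiling_bytes_per_s * 8 / 1e6:.1f} Mbit/s")  # ~3.5 Mbit/s
print(f"Time for the file:  {hours:.1f} hours")                           # ~20 hours
```

Under those assumptions the transfer lands at roughly 20 hours regardless of how fast the satellite link is rated – consistent with the customer’s experience before the latency was mitigated.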

The security of an organisation’s data centre is therefore as much about protecting its data as it is about preventing hacking and unplanned incidents that could stop it operating. A data centre that cannot back up quickly and securely is inherently insecure, because it will not be able to respond when a disaster occurs.

So if your data centre relies on sending sensitive data across a network without securing it at rest – before it is transmitted to another data centre or to the cloud – then it is potentially putting itself at risk. With data loss and downtime costing the UK £10.5bn a year, according to the EMC Global Data Protection Index, is it worth the risk? To protect your data centre and to speed up data transfers, use a SCION solution such as WANrockIT that does both quickly and securely.

Disaster recovery: Where time matters

(c) iStock.com/ziggymaj

Disasters can strike at any time. They may be caused by human error, cyber-attacks or natural disasters such as earthquakes, fires, floods and hurricanes. Even so, it is tempting to sit back and not worry about the consequences for one’s business – perhaps for cost reasons – but investment in business continuity is like an insurance policy. It is not just about disaster recovery, because the best way to prevent downtime is to stay a step ahead of any potential disaster scenario.

Yet when unforeseen incidents do occur, the organisation’s disaster recovery plan should kick in instantly to ensure that business continuity is maintained with little or no interruption. An e-commerce firm, for example, could lose sales to its competitors if its website goes down, and downtime can also damage the company’s brand reputation. For these reasons alone business continuity can’t wait, yet large volumes of data have traditionally needed a batch window for backup and replication – something that becomes increasingly challenging with the growth of big data.

Avoiding complacency

So are organisations taking business continuity seriously? They are, according to Claire Buchanan, chief commercial officer (CCO) at Bridgeworks: “I think that most businesses take business continuity seriously, but how they handle it is another thing”. In other words, it is how companies manage disaster recovery and business continuity that makes the difference.

These two disciplines are in many respects becoming synonymous. “From what I understand from Gartner, disaster recovery and business continuity are merging to become IT services continuity, and the analyst firm has found that 34% of inbound calls from corporate customers – those asking for analyst help – are about how they improve their business continuity”, she says.

Phil Taylor, Director and Founder of Flex/50 Ltd, concurs with this view, stating that a high percentage of organisations are taking disaster recovery and business continuity seriously. “Businesses these days can’t afford to ignore business continuity, particularly because of our total dependence on IT systems and networks”, he says. The ongoing push for mobile services and media-rich applications will, he says, generate increasing transaction rates and huge data volumes too.

Buchanan nevertheless adds that most businesses think they are ready, but the real problems appear once disaster actually strikes. “So what you’ve got to be able to do is to minimise the impact of unplanned downtime when something disruptive happens, and with social media and everything else the reputational risk of a business not being able to function as it should is huge”, she explains. In her experience the problem is that attention slips as time goes on.

Bryan Foss, a visiting professor at Bristol Business School and Fellow of the British Computer Society, finds: “Operational risks have often failed to get the executive and budgetary attention they deserve as boards may have been falsely assured that the risks fit within their risk appetite.” Another issue is that you can’t plan for when a disaster will happen, but you can plan to prevent it from causing loss of service availability, or financial or reputational damage.

To prevent damaging issues from arising, Buchanan says organisations need to be able to support end-to-end applications and services whose availability is unaffected by disruptive events. When such events do occur, the end user shouldn’t notice what’s going on – it should be transparent, according to Buchanan. “We saw what happened during Hurricane Sandy, and the data centres in New York – they took a massive hit”, she says. The October 2012 storm damaged a number of data centres and took websites offline.

Backup, backup!

Traditionally, backing up is performed overnight, when most users have logged off their organisation’s systems. “Now, in the days where we expect 24×7 usage and the amount of data is ever increasing, the backup window is being squeezed more than ever before, and this has led to solutions being employed that depend on an organisation’s Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO)”, Buchanan explains.

“For some organisations such as financial services institutions, where these are ideally set at zero, synchronous replication is employed, and this suggests that the data copies are in the same data centre or the data centres are located a few miles or kilometres from each other”, she adds. This is the standard way to minimise data retrieval times, and it is what most people have done in the past because they are trying to keep data synchronised. Yet placing data centres in the same circle of disruption can be disastrous whenever a flood, terrorist attack, power outage or similar event occurs.

For other organisations an RTO and RPO of a few milliseconds is acceptable, so their data centres can be placed further apart; but this replication doesn’t negate the need for backups, and modern technologies allow machines to be backed up while they are still operational.
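
The distance constraint on synchronous replication comes straight from the speed of light in fibre. As a rough rule of thumb (ignoring switching and routing overhead, so real figures will be higher), every kilometre of separation adds about ten microseconds to the round trip:

```python
# Rough rule of thumb: minimum round-trip time added by site separation.
# Assumes ~200,000 km/s for light in optical fibre and ignores equipment delay.
FIBRE_KM_PER_SECOND = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fibre between two sites."""
    return (2 * distance_km / FIBRE_KM_PER_SECOND) * 1000

for km in (10, 100, 500, 1000):
    print(f"{km:>5} km apart -> at least {round_trip_ms(km):.2f} ms per synchronous write")
```

That is why sites kept close enough for a near-zero RPO and RTO often end up inside the same circle of disruption, while sites far enough apart to survive a regional disaster must accept asynchronous replication.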

Comparing backups

Her colleague David Trossell, CEO of Bridgeworks, adds that backup-as-a-service (BaaS) can help by reducing infrastructure-related capital investment. “It’s simple to deploy and you only pay for what you use; however, the checks and balances with BaaS needn’t be treated any differently from on-site backups”, he explains. In other words, when backup is installed within a data centre, performance is governed by the capability of the devices employed – such as tape or disk. In contrast, performance with BaaS is governed by the connection to the cloud service provider, and Trossell says this defines the speed at which data can be transferred to the cloud.

“A good and efficient method of moving data to the cloud is essential, but organisations should keep a backup copy of the data on-site as well as off-site and this principle applies to BaaS”, he advises.

Essentially, this means that a cloud service provider should secure the data in another region in which the CSP operates. In some circumstances it might also be cheaper to bring the backup function in-house, while for certain types of sensitive data a hybrid cloud approach might be more suitable.

Time is the ruler

Trossell says time is the ruler of all things, and he’s right. The challenge for organisations, though, is to achieve anything like 95% bandwidth utilisation from their networks, because of the way the TCP/IP network protocol behaves over high-latency links. “Customers are using around 15% of their bandwidth, and some people try to run multiple streams, which you have to be able to run down physical connections from the ingress to the egress in order to attain 95% utilisation”, reveals Buchanan.
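
That 15% figure is roughly what the bandwidth-delay product predicts for long, fat pipes: to keep the link full, the sender needs the whole bandwidth-delay product in flight, and a single TCP stream with a modest window cannot supply it. The link speed, latency and per-stream window below are illustrative assumptions, not figures from the customer example that follows.

```python
# Bandwidth-delay product: how much data must be in flight to fill a long, fat pipe.
LINK_BITS_PER_SECOND = 10 * 10**9     # assumed 10 Gbit/s WAN
RTT_SECONDS          = 0.086          # assumed 86 ms round-trip latency
STREAM_WINDOW_BYTES  = 256 * 1024     # assumed effective window per TCP stream
TARGET_UTILISATION   = 0.95

bdp_bytes = (LINK_BITS_PER_SECOND / 8) * RTT_SECONDS
streams_needed = (bdp_bytes * TARGET_UTILISATION) / STREAM_WINDOW_BYTES

print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.1f} MB in flight")     # ~107.5 MB
print(f"Streams needed for 95% utilisation: {streams_needed:.0f}")        # ~390
```

Hundreds of well-managed parallel streams between the ingress and egress are one way to fill that gap, which is broadly the approach Buchanan describes.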

For example, one Bridgeworks customer needed to back up 70TB of data over a 10GB WAN, a process that took 42 days to complete. “They were looking to replicate their entire environment, which was going to cost up to £2m, and we put in our boxes within half an hour as a proof of concept”, she explains. Bridgeworks’ team restricted the bandwidth on the WAN to 200MB, and the customer was still able to complete an entire backup within just seven days – achieving “80% expansion headroom on the connection and 75% on the number of days they clawed back”, she says. The customer has since been able to increase its data volumes as well.

Providing wider choice

“At the moment, with outdated technology, CEOs and decision-makers haven’t had a choice about the distance between their data centres without having to think about the impact of network latency, but WANrockIT gives the decision-maker the power to make a different choice from the one that has historically been made”, says Trossell. He claims that WANrockIT gives decision-makers freedom, good economics and a high level of frequency, and that it maximises the existing infrastructure so that organisations don’t need to throw anything away.

Phil Taylor nevertheless concludes with some valid advice: “People need to be clear about their requirements and governing criteria because at the lowest level all data should be backed-up…, and business continuity must consider all operations of a business – not just IT systems”.

To ensure that a disaster recovery plan works, it has to be tested regularly. Time is of the essence, and so data backups need to be exercised regularly against continuous-availability requirements, in a way that ensures the maintenance itself doesn’t prove disruptive. Testing will help to iron out any flaws in the process before disaster strikes.