The Distinction Between Data Backup and Business Continuity By @BrandonGarcin | @CloudExpo #Cloud

Believe it or not, the first data backups were made on paper. Dating back as early as the 18th century, the “technology” took the form of punched cards and paper tape used to control automated machinery such as textile looms. The concept of these cards was then further developed by IBM in the early days of data processing, where data input, storage and commands were captured as patterns of punched holes.

In 1956, IBM introduced the 350 disk storage unit – the first ever hard disk drive. It was 60 inches long, stood nearly 70 inches tall and had a capacity of 5 million 6-bit characters – or 3.75 MB of data. And although less than 4 megabytes may not seem like much (a decent cell phone today takes photos with larger file sizes), the 350 unit represents the more modern concept of storage and data backup that has, over time, become a critical element of business planning and strategy.
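As a quick back-of-the-envelope check of that capacity figure (assuming the decimal megabyte, i.e. 1 MB = 1,000,000 bytes):

```python
# Back-of-the-envelope check of the IBM 350's capacity:
# 5 million 6-bit characters, expressed in decimal megabytes.
characters = 5_000_000
bits_per_character = 6

total_bits = characters * bits_per_character  # 30,000,000 bits
total_bytes = total_bits / 8                  # 3,750,000 bytes
total_mb = total_bytes / 1_000_000            # 3.75 MB

print(f"{total_mb} MB")  # -> 3.75 MB
```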


NTT announces five major additions to Enterprise Cloud

Japan’s NTT Communications (NTT Com) has announced five major improvements to its Enterprise Cloud in a bid to become the chosen vehicle for digital transformation. The new enhanced Enterprise Cloud is immediately available in Japan, and will be rolled out in the UK, Singapore, US, Australia, Hong Kong and Germany later this year.

According to NTT Com, enterprises want to make the difficult crossing from traditional IT to the cloud but need greater support, and its new Enterprise Cloud offering is designed to make that journey easier. The enhancements are described by NTT Com as: Hosted Private Cloud for traditional ICT, Multi-Tenant Cloud for cloud-native ICT, a seamless hybrid cloud environment, free and seamless connections between cloud platforms, and a cloud management platform that gives full visibility and governance.

NTT Com said enterprises with cloud ambitions face two major challenges: migrating their traditional systems and, having gone to the cloud, changing their development and operations practices to fit the new ‘cloud native’ application culture.

NTT Com outlined how each of these five Enterprise Cloud ‘enhancements’ will help its target clients. The Hosted Private Cloud for Traditional ICT now consists of dedicated bare-metal servers with options for multi-hypervisor environments, including VMware vSphere and Microsoft Hyper-V. The logic of the service is to make it easier for companies with traditional ICT to migrate to a hosted private cloud.

The Enterprise-class Multi-Tenant Cloud for Cloud-Native ICT is based on OpenStack architecture, giving customers an industry-standard open API to control the Enterprise Cloud. It comes with Platform-as-a-Service (PaaS) software from Cloud Foundry to provide DevOps efficiencies. The open architecture was needed to address customer concerns about vendor lock-in, says NTT Com.
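NTT Com has not published sample code here, but an OpenStack-based platform with a standard open API can generally be driven with the stock openstacksdk client. The sketch below is purely illustrative: the cloud entry name, flavor, image and network identifiers are placeholder assumptions, not real NTT Com values.

```python
# Minimal sketch: driving an OpenStack-based cloud through the standard openstacksdk client.
# 'enterprise-cloud' and the flavor/image/network names are placeholders.
import openstack

conn = openstack.connect(cloud="enterprise-cloud")  # reads credentials from clouds.yaml

# List servers already running in the tenant.
for server in conn.compute.servers():
    print(server.name, server.status)

# Launch a new instance using the same industry-standard API.
image = conn.compute.find_image("ubuntu-20.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("tenant-net")

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print("Created:", server.name, server.status)
```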

The Seamless Hybrid Cloud Environment is created for clients by NTT Com, which configures all the relevant network components (virtual servers, bare-metal servers, firewalls and load balancers) to work alongside complex on-premises environments.

The promised ‘Free and Seamless Connection between Cloud Platforms’ is made by connecting the Enterprise Cloud with a 10Gbps best-effort closed network, free of charge. In addition, connectivity between Enterprise Cloud platforms and data centres is provided at ‘competitive’ prices globally, said NTT Com.

Finally, the new Cloud Management Platform (CMP) promises full visibility and IT governance by unifying the control of both Enterprise Cloud and third-party providers’ clouds, including Amazon Web Services (AWS) and Microsoft Azure.

IBM and SanDisk join forces to create software defined flash storage for cloud

Flash storage maker SanDisk and IBM are working together on a new software defined, all flash storage system for data centres. News of this collaboration comes days after BCN revealed that EMC had introduced a new category of flash storage for the same market.

SanDisk’s new InfiniFlash, a high-capacity, high-performance flash-based software-defined storage system, features IBM’s Spectrum Scale file system. The joint product is the two manufacturers’ answer to the increasing demands faced by data centres, which can never get enough capacity and performance and will need still more flexibility in future. An all-flash system gives the requisite computing power and the IBM-authored software-defined layer provides the agility, according to SanDisk.

Flash is the only technology that can support the many variables of the modern hybrid cloud, according to SanDisk, which listed bi-modal IT, traditional and cloud-native applications, and the increasing workload created by social, mobile and real-time processing as drivers of the need for more powerful storage infrastructure. The InfiniFlash for IBM Spectrum is described as ultra-dense and scalable, meaning that it can be bought in small increments that can easily be snapped together to quickly build a hyperscale infrastructure. SanDisk claimed it offers the lowest price per IOPS/TB on the market and the option for independent storage.

SanDisk claims InfiniFlash has five times the density, fifty times the performance and four times the reliability of traditional hard disks, while using 80% less power. Pricing starts at $1 per gigabyte (GB) for an all-flash system. When used with software stacks designed to reduce data (through de-duplication and other techniques), the cost of storage could fall to around 20 cents per GB, claims SanDisk.
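To put those numbers in context, the effective price per usable gigabyte simply scales with whatever data-reduction ratio the software stack achieves; the 5:1 ratio below is an illustrative assumption that happens to take the quoted $1/GB down to roughly 20 cents.

```python
# Effective cost per usable GB once data reduction (de-duplication, compression) is applied.
# The 5:1 ratio is an illustrative assumption; SanDisk quotes $1/GB raw and ~$0.20/GB reduced.
raw_price_per_gb = 1.00        # USD, all-flash list price quoted by SanDisk
data_reduction_ratio = 5.0     # e.g. 5:1 from de-duplication + compression (assumed)

effective_price_per_gb = raw_price_per_gb / data_reduction_ratio
print(f"${effective_price_per_gb:.2f} per usable GB")  # -> $0.20 per usable GB
```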

IBM Spectrum Scale, meanwhile, uses software-defined storage to create efficiencies through file, object and integrated data analytics designed for technical computing, big data analysis, cognitive computing, the Hadoop Distributed File System, private cloud and content repositories.

Ravi Swaminathan, SanDisk’s general manager of System and Software Solutions, promised the ‘best of both worlds’ to data centres. “Customers can afford to deploy flash at petabyte-scale, which drive business growth through new services and offerings for their end-customers,” said Swaminathan.

MWC takeaways: Making sure your infrastructure is secure for the connected world


With Mobile World Congress over for another year, I’ve now had a chance to digest everything that was on show last week. This conference, which is arguably the planet’s best venue for mobile industry networking, covered the Internet of Things and the connected world: anything and everything from developments such as 5G to newly connected toothbrushes that ensure consumers brush their teeth as the dentist intended.

What all these new technology innovations seem to have in common is the capability to generate obscene – and yet potentially very useful – amounts of data. How organisations manage and use this data – and how they keep it secure – will be a major challenge and one of the key predictors of success across many industries.

With an overwhelming array of new technology producing ever-increasing amounts of often sensitive data, there is now more scope than ever for hackers to breach personal and company-confidential information. With reports highlighting the need to safeguard the confidential data on employees’ smartphones and tablets, the security of connected devices is becoming even more problematic and is set to be a big issue in 2016.

This was further highlighted by recent research from analyst firm Gartner, which predicted that half of employers will require employees to supply their own devices for work by 2017, meaning a great deal of sensitive data will be accessible via millions of unsecured devices.

This got me thinking about the fact that even if you secure devices on a network, you still need to secure your systems and infrastructure right from the server to the end user. This includes wherever that infrastructure might be – most of which is likely to be in the cloud.  With the growth of IoT, the Connected World, mobile devices and cloud being key themes for 2016, companies need to ensure that the end-to-end attack surfaces are all fully protected.

This is clearly evident from the many infrastructure breaches we have seen recently in the press – from the well-known UK telecoms provider that suffered a widely publicised infrastructure breach at the end of October 2015, to lesser-known small and medium-sized businesses that were knocked out entirely by cyber-attacks in the final quarter of last year. With more businesses adopting cloud than ever before, the cloud infrastructure that employees work from needs to be just as secure, able both to withstand a security breach and to protect all of that data.

Making sure your cloud networks, infrastructure, applications and data are as secure as possible is a vital part of leveraging the technological innovations that were presented at Mobile World Congress. Here are three security issues that organisations must consider and address to ensure a fully-secure cloud:

Threat landscape monitoring against attacks: Making sure that you know where the most vulnerable points are in your existing infrastructure means you can work to address and protect them. Having a cloud infrastructure in place that can monitor the threat landscape, scan for vulnerabilities and detect any potential threats can keep your organisation safe from debilitating infrastructure breaches.

Compliance: Many companies have strict compliance policies to adhere to, including industry regulatory requirements. Having a fully compliant cloud infrastructure that fits your country’s regulations and adheres to data sovereignty rules is essential in these highly regulated environments. More important, though, is the need for visibility into your cloud infrastructure that enables you to monitor cloud security and prove (to the C-suite or auditors) that your company apps and data are secure and compliant. Cloud transparency, security and compliance reporting will become essential as cloud adoption grows and the cloud is used for more mission-critical business workloads.

Encryption of data: Having the ability to encrypt sensitive data is beneficial for a plethora of reasons, including making sure that service providers cannot access this information, deterring hackers and adding an additional layer of security for extra-sensitive data. As companies take on multiple clouds to manage data, it is important to ensure security and flexibility when transferring data between clouds. Alongside this, holding your own encryption key gives you the power and security that come with placing the highest possible restrictions on who can access sensitive data (a minimal client-side sketch follows below).
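As a concrete illustration of the hold-your-own-key idea, the sketch below encrypts a record client-side with the widely used Python cryptography package before it ever leaves your environment, so the provider only ever stores ciphertext. It is a minimal sketch, not a description of any particular provider’s key-management service; the upload step is a placeholder.

```python
# Minimal sketch of client-side ("hold your own key") encryption before data reaches a cloud store.
# Requires the 'cryptography' package; the upload call is a placeholder.
from cryptography.fernet import Fernet

# Generate and keep this key yourself (e.g. in an on-premises HSM or secrets vault);
# the cloud provider never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

sensitive_record = b"customer: Jane Doe, card ending 4242"
ciphertext = cipher.encrypt(sensitive_record)

# upload_to_cloud(ciphertext)  # placeholder: only ciphertext crosses the wire

# Data pulled back from the cloud can only be read with the key you hold.
plaintext = cipher.decrypt(ciphertext)
assert plaintext == sensitive_record
```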

It is vital to have conversations with your cloud provider to ensure that you are on the same page where security is concerned. Otherwise, your infrastructure may not be fully protected, which can leave your organisation mired in using cloud only for the most basic use cases or, worse, expose your company data and apps to unacceptable risk.

There is no doubt that the onslaught of new technology – in some cases technology beyond our wildest dreams, as showcased last week at MWC – brings with it additional security risks and threats. With the Internet of Things and the connected world growing exponentially, there will undoubtedly be more infrastructure breaches. In research we conducted last June with Forrester, covering the challenges companies face in dealing with their cloud providers, over half of respondents (55%) said that critical data being visible to cloud providers but hidden from users creates challenges in implementing proper controls. In today’s digital world, the consequences of not implementing proper controls around sensitive data are huge.

Our research clearly shows that more needs to be done for companies to feel safe using the cloud and being part of the connected world without feeling at risk of a breach. So, before you go racing off to implement the latest ‘must have’ gadget or new technology, the first step is to ensure that your systems are secure right at the core of the organisation. That clearly includes ensuring your cloud infrastructure provides the security, as well as the insight and reporting into that security, required for your organisation to be a successful part of the connected world and the Internet of Things.

Azure Site Recovery: 4 Things You Need to Know

Disaster recovery has traditionally been a complex and expensive proposition for many organizations. Many have chosen to rely on backups of data as their method of disaster recovery. This approach is cost-effective; however, it can result in extended downtime during a disaster while new servers are provisioned (the Recovery Time Objective, or RTO) and potentially large loss of data created between the time of the last backup and the time of the failure (the Recovery Point Objective, or RPO). In the worst-case scenario, the backups are not viable at all and there is a total loss. For those who have looked into more advanced disaster recovery models, the complexity and costs of such a system quickly add up. Azure Site Recovery helps bring disaster recovery to all companies in four key ways.
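To make RTO and RPO concrete, the toy calculation below compares the worst-case data loss and downtime implied by a nightly-backup approach with those of continuous replication; every figure is an illustrative assumption, not an Azure number.

```python
# Toy illustration of RPO/RTO for backup-based DR versus continuous replication.
# All figures are assumptions for illustration, not vendor numbers.

backup_interval_hours = 24       # nightly backups
restore_and_provision_hours = 8  # time to stand up new servers and restore data

# Worst case: the failure happens just before the next backup would have run.
rpo_backup_hours = backup_interval_hours        # up to a full day of data lost
rto_backup_hours = restore_and_provision_hours  # downtime while servers are rebuilt

replication_lag_minutes = 5      # near-continuous replication to a standby site
failover_minutes = 30            # orchestrated failover of replicated workloads

print(f"Backup-only DR:       RPO ~ {rpo_backup_hours} h,   RTO ~ {rto_backup_hours} h")
print(f"Replication-based DR: RPO ~ {replication_lag_minutes} min, RTO ~ {failover_minutes} min")
```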

 

Azure Site Recovery makes disaster recovery easy by delivering it as a cloud hosted service

Azure Site Recovery lives within the Microsoft cloud and is controlled and configured through the Azure Management Portal. There is no requirement to patch or maintain servers; it’s disaster recovery orchestration as a service. Using Site Recovery does not require that you use Azure as the destination of replication: it can protect your workloads between two company-owned sites. For example, if you have a branch office and a home office that both run VMware or Hyper-V, you can use Azure Site Recovery to replicate, protect and fail over workloads between your existing sites. It also has the optional ability to replicate data directly to Azure, which can be used to avoid the expense and complexity of building and maintaining a disaster recovery site.

 

Azure Site Recovery is capable of handling almost any source workload and platform

Azure Site Recovery offers an impressive list of platforms and applications it can protect. It can protect any workload running on VMware virtual machines on vSphere or ESXi, Hyper-V VMs with or without System Center Virtual Machine Manager and, yes, even physical workloads can be replicated and failed over to Azure. Microsoft has worked internally with its application teams to make sure Azure Site Recovery works with many of the most popular Microsoft solutions, including Active Directory, DNS, web apps (IIS, SQL), SCOM, SharePoint, Exchange (non-DAG), Remote Desktop/VDI, Dynamics AX, Dynamics CRM, and Windows File Server. Microsoft has also independently tested protecting SAP, Linux (OS and apps) and Oracle workloads.

 

Azure Site Recovery has predictable and affordable pricing

Unlike traditional disaster recovery products that require building and maintaining a warm or hot DR site, Site Recovery allows you to replicate VMs to Azure. Azure Site Recovery offers a simple pricing model that makes it easy to estimate costs. For virtual machines protected between company-owned sites, it is a flat $16/month per protected virtual machine. If you are protecting your workloads to Azure, it is $54/month per protected server. In addition, the first 31 days of protection for any server are free, which allows you to try out and test Azure Site Recovery before you have to pay for it. It is also a way to use Azure Site Recovery to migrate your workloads to Azure for free.
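Using the per-instance prices quoted above, a rough steady-state monthly estimate is simple arithmetic; the workload counts below are placeholders.

```python
# Rough monthly cost estimate from the per-instance prices quoted above.
# Workload counts are placeholders; the first 31 days per protected server are free.
SITE_TO_SITE_PRICE = 16   # USD/month per VM protected between company-owned sites
TO_AZURE_PRICE = 54       # USD/month per server replicated to Azure

vms_between_own_sites = 10
servers_replicated_to_azure = 5

monthly_cost = (vms_between_own_sites * SITE_TO_SITE_PRICE
                + servers_replicated_to_azure * TO_AZURE_PRICE)
print(f"Estimated steady-state cost: ${monthly_cost}/month")  # -> $430/month
```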

 

Azure Site Recovery is secure and reliable

Azure Site Recovery continuously monitors the replication and health of protected workloads from Azure. If data cannot be replicated, you can configure alerts to email you a notification. Protecting the privacy of your data is a top priority in Site Recovery: all communication between your on-premises environment and Azure is sent over SSL-encrypted channels, and all of your data is encrypted both in transit and at rest in Azure. Azure Site Recovery also lets you perform test failovers without impacting your production workloads.

 

For these reasons, companies should be considering adding Azure Site Recovery to their business continuity and disaster recovery toolbox.

 

[If you’re looking for more Microsoft resources, download our recent webinar around strategies for migrating to Office 365]

 

By Justin Gallagher, Enterprise Consultant

What’s Happening in the JavaScript Ecosystem By @YFain | @CloudExpo #Cloud

Lots of things are happening there. As of today it’s the liveliest software ecosystem. The last time I saw such an interesting gathering was 15 years ago, in Java.
Fifteen years ago Java developers looked down on the JavaScript folks. It was assumed that JavaScript was only good for highlighting menus and giving the impression that a website was current by displaying a running clock on the page. Mobile phones still had buttons with one digit and three letters each. There were no app stores. Java promised “write once, run everywhere”, but now we can see that JavaScript actually delivered on that promise.


Red Hat and Eurotech to jointly re-engineer the cloud for better IoT

Red Hat is teaming up with Italy’s Eurotech in a bid to help Internet of Things projects get bigger and more flexible without sacrificing their security.

The companies have pooled their technical powers to combat the scale, latency, reliability and security weaknesses of complex Internet of Things deployments. Their joint ambition is to obviate the need to ship data en masse to the cloud for real-time processing. Instead, they want to set up a more robust alternative that handles the essential data aggregation, transformation, integration and routing closer to the devices.

North Carolina-based open source champion Red Hat and Amaro-based machine-to-machine (M2M) system maker Eurotech say they have two objectives for the IoT: simplify and accelerate. They aim to combine open source cloud software and M2M platforms into a single architecture that bridges the gap between operational and information technology.

All the inherent weaknesses of the IoT – from its lack of scalability to its insecurity – can be tackled by pushing computing power to the network edge, according to the partners. This will help IoT project managers to avoid the risk of shipping masses of data to the cloud for real-time processing. With all the essential data aggregation, data transformation, integration and routing taking place locally, and less exposed to a journey across the cloud, security and performance can be tightened up.
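A minimal sketch of that edge-processing idea: aggregate and filter readings locally on the gateway and forward only small summaries to the cloud. The function names and threshold below are illustrative assumptions, not part of Red Hat’s or Eurotech’s products.

```python
# Illustrative edge-gateway sketch: aggregate sensor data locally and forward only
# summaries to the cloud. Names and thresholds are assumptions, not product APIs.
from statistics import mean
from typing import Iterable

def summarise_window(readings: Iterable[float]) -> dict:
    """Reduce a window of raw readings to the essentials worth shipping upstream."""
    values = list(readings)
    return {
        "count": len(values),
        "mean": mean(values),
        "max": max(values),
        "min": min(values),
    }

def edge_loop(window: list[float], alert_threshold: float) -> dict:
    """Runs locally on the gateway; only flags data for the cloud when it matters."""
    summary = summarise_window(window)
    if summary["max"] > alert_threshold:
        summary["alert"] = True
    # publish_to_cloud(summary)  # placeholder: one small message, not raw telemetry
    return summary

if __name__ == "__main__":
    raw_window = [21.3, 21.4, 21.6, 35.2, 21.5]  # e.g. temperature samples
    print(edge_loop(raw_window, alert_threshold=30.0))
```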

Another productivity dividend will come from placing the processes close to the operational devices. By devolving power away from the centre and allowing remote devices to trigger business rules, the partners aim to automate a greater number of machine processes.

The foundations of this new architecture will be Red Hat’s Enterprise Linux and JBoss Middleware along with Eurotech’s Everyware Software Framework and Everyware Cloud. These are to be integrated to provide the security, management and application support spanning the whole hierarchy of the cloud from device tier to the data centre, according to a Red Hat statement.

“Open Source and Java are important pillars in both our strategies. These factors ensure a good alignment,” said Robert Andres, CMO of Eurotech.

Hung up on hybrid: The rise of cloud 2.0 and what it means for the data centre


By Steve Davis, marketing director, NGD

We’ve seen it many times before: first-generation technology products creating huge untapped marketplaces but eventually being bettered either by their originators or by competitors. Think of VCRs and then CDRs, both usurped by DVRs and streaming, or the first mobile phones becoming the smartphones of today – the list goes on.

Cloud computing is no exception. The original ‘product’ concept remains very much in vogue but the technology and infrastructure holding it together keeps getting faster, more functional, more reliable – put simply, better. Growing user and cloud service provider maturity is seeing to that. After 10 years of cloud, the industry and users have learned valuable lessons about what does and doesn’t work. They still like it and want much more of it, but there’s no longer room for a one-size-fits-all approach.

With this evolution, cloud “1.0” has morphed into “2.0” over the past year or so; while the name has been around for a few years, 451 Research, among others, has recently brought it back to the forefront. The two core varieties, public and private, have ‘cross-pollinated’ and given rise to hybrid, an increasingly ‘virulent’ strain. This is because companies are realising that they need many different types of cloud services in order to meet a growing list of customer needs.

Offering the best of both worlds, hybrid cloud combines a private cloud with public cloud services to create a unified, automated and well-managed computing environment.

Economics and speed are the two greatest issues driving this market change. Look at the numbers. According to RightScale’s 2016 State of the Cloud Report, hybrid cloud adoption rose from 58% in 2015 to 71% thanks to the increased adoption of private cloud computing, which rose to 77%. Synergy Research’s 2015 review of the global cloud market found public IaaS/PaaS services had the highest growth rate at 51%, followed by private and hybrid cloud infrastructure services at 45%.

It’s an incestuous business. Enterprises using public clouds for storing non-sensitive data and for easy access to office applications and productivity tools automatically become hybrid cloud users as soon as they connect any of these elements with private clouds, and vice versa. Many still prefer the peace of mind of retaining private cloud infrastructure for managing core business applications, as well as embracing those still-valuable on-premises legacy systems and equipment which just can’t be virtualised.

Equally, a company might want to use a public cloud development platform that sends data to a private cloud or a data centre-based application, or move data from a number of SaaS (Software as a Service) applications between private cloud or data centre resources. A business process is therefore designed as a service so that it can connect with these environments as though they were a single environment.
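One common way to get that ‘single environment’ feel in practice is a thin abstraction over the different backends, so business logic never cares whether a record lands in the public cloud, the private cloud or an on-premises system. The sketch below is a generic illustration under that assumption, not a description of any vendor’s product.

```python
# Generic sketch of treating public, private and on-premises resources as one environment.
# Class and method names are illustrative; real deployments would wrap actual SDKs.
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class PublicCloudBackend(StorageBackend):
    def put(self, key: str, data: bytes) -> None:
        print(f"[public cloud] stored {key} ({len(data)} bytes)")   # placeholder for an SDK call

class PrivateCloudBackend(StorageBackend):
    def put(self, key: str, data: bytes) -> None:
        print(f"[private cloud] stored {key} ({len(data)} bytes)")  # placeholder for an internal API

class HybridStore:
    """Route each record to the right environment behind a single interface."""
    def __init__(self, sensitive: StorageBackend, general: StorageBackend):
        self.sensitive, self.general = sensitive, general

    def save(self, key: str, data: bytes, is_sensitive: bool) -> None:
        backend = self.sensitive if is_sensitive else self.general
        backend.put(key, data)

store = HybridStore(sensitive=PrivateCloudBackend(), general=PublicCloudBackend())
store.save("payroll-2016.csv", b"...", is_sensitive=True)   # stays in the private cloud
store.save("newsletter.html", b"...", is_sensitive=False)   # fine in the public cloud
```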

Hybrid and the data centre

So where do cloud 2.0 and the rise of hybrid leave the data centre? Clearly, the buck must ultimately stop with the data centre provider, as it is the rock supporting any flavour of cloud – public, private or hybrid. Whether you are a service provider, systems integrator, reseller or the end user, you will want to be sure the data centres involved have strong physical security, sufficient power supply on tap for the high-density racks that allow services to scale at will and, of course, diverse high-speed connectivity for reliable anyplace, anytime access.

But for implementing hybrid environments, the devil is in the detail. Often what isn’t considered is how to connect public and private clouds, and don’t forget that some applications may still remain outside cloud-type infrastructure altogether. There is not only the concern around latency between these three models; the cost of connectivity also needs to be built into the business plan.

The location of the public and private clouds is a primary concern and needs careful consideration. The time taken to cover large geographical distances must be factored in; as a result, the closer the environments can be positioned, the better. The security of the connections and how they are routed also needs to be examined. If the links between the two clouds were impacted, how might this affect your organisation?

Customers who are actively building hybrid solutions increasingly demand that their private clouds be as close to the public cloud as possible. This is because using public internet connections to reach public cloud can expose end users to congestion and latency, while direct connections do not come cheap. Sure, latency between private and public cloud can be reduced, but at a cost: caching can sometimes help, and the use of traffic-optimisation devices is well proven, but each adds more complexity and cost to what should be a relatively straightforward solution. Developers need to be conscious that moving large amounts of data between private and public clouds introduces latency, and applications will sometimes need to be redesigned purely to get around it.
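As a simple example of the caching point, a small time-bounded cache in front of a cross-cloud fetch avoids paying the inter-cloud round trip on every request; the TTL value and fetch function below are illustrative assumptions.

```python
# Minimal TTL cache in front of a cross-cloud fetch, to avoid paying the
# public/private latency penalty on every request. TTL and fetcher are illustrative.
import time
from typing import Callable

class TTLCache:
    def __init__(self, fetch: Callable[[str], bytes], ttl_seconds: float = 60.0):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, bytes]] = {}

    def get(self, key: str) -> bytes:
        now = time.monotonic()
        entry = self._store.get(key)
        if entry and now - entry[0] < self.ttl:
            return entry[1]                      # served locally, no cross-cloud round trip
        data = self.fetch(key)                   # slow path: goes over the inter-cloud link
        self._store[key] = (now, data)
        return data

def fetch_from_public_cloud(key: str) -> bytes:  # placeholder for a real cross-cloud call
    return f"payload for {key}".encode()

cache = TTLCache(fetch_from_public_cloud, ttl_seconds=300)
cache.get("catalogue.json")  # first call crosses the link
cache.get("catalogue.json")  # second call is answered from the private-cloud side
```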

In a perfect world it would be ideal to use a single facility to host both public and private cloud infrastructure and to use the various backup solutions available for data stored in the private and public clouds. This would reduce latency and connectivity costs and provide a far higher level of control for the end user. Obviously the location would have to be a scalable, highly secure data centre with good on-site engineering services available to provide remote hands as necessary. And thanks to the excellent quality of modern monitoring and diagnostics tools, much of the technical support can now be done remotely by the provider or the user.

DevOps, Security and Compliance | @DevOpsSummit #DevOps #Microservices

DevOps bridges the gap between Development and Operations to accelerate software delivery and increase business agility and time-to-market. With its roots in the Agile movement, DevOps fosters collaboration between teams and streamlines processes, with the goal of breaking silos in order to “go fast.”
Information Security (InfoSec) and compliance are critical to businesses across the globe, especially given past examples of data breaches and looming cybersecurity threats. InfoSec has long been thought of as the group that “slows things down” – the wet towel to your DevOps efforts – often requiring a more conservative approach as a means of mitigating risk. Traditionally, DevOps was viewed as a risk to InfoSec, with the increased velocity of software releases seen as a threat to governance and security/regulatory controls (these, by the way, often require the separation of duties rather than the breaking of silos).


FalconStor and Fujitsu Team Up | @CloudExpo @FalconStor #Cloud

FalconStor Software® Inc. and Fujitsu have teamed up to provide a bundled hardware and software SAP HANA storage solution. The SAP-certified bundle consists of the FUJITSU Storage ETERNUS DX S3 system and FalconStor’s NSS IO multi-cluster technology. The joint solution not only meets the high demands of SAP HANA through application-optimized shared storage components but also provides for seamless business continuity with its innovative, intelligent approach. The bundle is available immediately from Fujitsu.
