
How to get the right kind of control over your cloud: A guide

Trust in the cloud hasn’t always been universal. There was a time when security and risk management leaders feared entrusting critical data and infrastructure to a third-party cloud provider. This was understandable, given the history of network management: IT teams were intimately familiar with managing every resource that made up their IT infrastructures, from the buildings they were housed in, to the electricity and cooling supply, through to the servers, all the way down to the storage and networking.

However, this level of familiarity isn’t possible when you delegate responsibility to a cloud provider, and clinging to it can prevent organisations from achieving optimal cloud efficiency and security. Clearly, a shift in mindset is needed.

In their report, “CISO Playbook: How to Retain the Right Kinds of Control in the Cloud”, Gartner makes the analogy that moving to the cloud is a bit like flying somewhere on a plane rather than driving your own car. You relinquish control of the journey to the flight crew, which can cause anxiety. That anxiety isn’t entirely rational, though: whereas you might check the oil, tyres and windscreen washer fluid on your car once in a blue moon, the plane is checked rigorously before every flight. In short, migrating to the cloud requires a new outlook on how you control your data, and a better understanding of what cloud service providers do to ensure security, so that you feel comfortable giving up ownership of the underlying platform.

In today’s context, customers still own their data but share stewardship with cloud providers. The concept of “control” has changed from physical, location-based ownership to control of processes. Information security and risk management leaders therefore need to adopt a new approach of indirect control to achieve efficiency, security and, above all, peace of mind. With this in mind, we will try to define how you can get the right kind of control over your cloud.

Design the right identity and access management strategy

Security teams and developers can find cloud-based control concepts difficult to grasp. But it’s similar to giving up ownership of the fibre and copper in a wide-area network: telecommunications carriers own the physical infrastructure, but the data remains owned and controlled by their customers. It’s all about delineating security responsibility. Once you’ve defined the hand-off point, you’ll know that beyond it your CSP is responsible for security.

Your responsibility lies in designing an identity and access management (IAM) strategy that covers not only the cloud platform but also the applications and services that the platform presents to the outside world. Access should be granted on a “least privilege” basis, rather than giving blanket authority to all users. This improves audit capabilities and reduces the risk of unauthorised changes to the platform.
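
To make the principle concrete, here is a minimal sketch of deny-by-default, least-privilege access control in Python. The role names and permission strings are illustrative assumptions, not drawn from any particular cloud platform:

```python
# A minimal sketch of "least privilege" role-based access control.
# Role names and permission strings are illustrative, not taken from
# any specific cloud platform.

ROLE_PERMISSIONS = {
    "read_only": {"vm:list", "vm:read", "report:read"},
    "operator":  {"vm:list", "vm:read", "vm:start", "vm:stop"},
    "admin":     {"vm:list", "vm:read", "vm:start", "vm:stop",
                  "vm:create", "vm:delete", "network:modify"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: permit an action only if the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("read_only", "vm:read")        # auditors can read...
assert not is_allowed("read_only", "vm:delete")  # ...but never delete
```

Because anything not explicitly granted is denied, auditing access comes down to reviewing the permission sets rather than chasing individual user actions.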

On top of that, you should work with your cloud provider to ensure encryption for higher degrees of logical isolation. Encryption of data at rest and in transit is often seen as another way to secure, segregate and isolate data on a public cloud platform. While it is unlikely that anyone could break into a public cloud data centre and physically steal a disk drive containing your data, encrypting data at rest is still strongly recommended.
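
To illustrate the data-at-rest principle, the sketch below encrypts data client-side before it ever reaches the provider’s storage, using Python’s third-party cryptography package. Key handling is deliberately simplified here; in practice the key would be held in a key management service, never alongside the data:

```python
# A simplified sketch of encrypting data at rest before it is written
# to cloud storage, using the "cryptography" package
# (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: fetched from a KMS/HSM
cipher = Fernet(key)

plaintext = b"customer account records"
ciphertext = cipher.encrypt(plaintext)  # all the provider's disks ever see

assert cipher.decrypt(ciphertext) == plaintext
```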

Increase monitoring and re-orient audit objectives

With the regulatory environment growing in complexity, organisations using the cloud are increasingly asked to demonstrate strong governance. Delegating some control to your CSP means you’ll still have to demonstrate that governance procedures are in place and are being followed.

To do so, you should work with a cloud service provider that offers security and compliance monitoring and reporting, and that holds the compliance attestations needed to ensure your cloud workloads can meet the relevant requirements come audit time.

Compare your security requirements and measure CSP performance against SLAs

Another point to pay close attention to is the contractual terms that bind the CSP with respect to protection of customer data and privacy. Contracts with hyperscale cloud providers tend to overwhelmingly protect those CSPs, but it is possible to work with some CSPs to reach agreement on terms more favourable to customers.

The final recommendation concerns cloud service provider contracts and SLAs. Many CSPs, especially the hyperscale providers, can be extremely rigid with their SLAs and reluctant to change them. It’s important to find out where your CSP stands on different aspects of compliance. Are they able to share their certifications and attestations? How flexible are they with their SLAs on subjects such as availability? Will they pay out service credits if service is not available according to the SLA? These are questions you will need answered before going forward with your CSP. An extra piece of advice I would give: compare your security requirements for externally hosted data to the capabilities of CSPs in the context of your risk appetite.
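
When weighing availability SLAs, it helps to translate the percentages into tolerable downtime. A quick calculation, assuming a 30-day month:

```python
# Convert an availability SLA into permitted downtime per 30-day month.

def allowed_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of downtime permitted per period at a given SLA."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability allows "
          f"{allowed_downtime_minutes(sla):.1f} minutes of downtime/month")
# 99.0%  -> 432.0 minutes
# 99.9%  -> 43.2 minutes
# 99.99% -> 4.3 minutes
```

The gap between “three nines” and “four nines” is nearly 40 minutes a month, which is precisely the sort of difference that service-credit terms should be negotiated around.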

To summarise, with security risks and compliance regulations only increasing alongside the adoption of cloud services, it’s important to understand shared responsibility with regards to cloud security. Striking the right balance between relinquishing and maintaining control in the cloud will enable your business to securely leverage the many benefits of cloud services. Having control of your cloud doesn’t mean managing every aspect of it; instead, make sure you know exactly what you are accountable for, and you will have the right kind of control.

How cloud scalability is seducing the financial sector

The financial sector, while forging ahead in other areas of digital transformation, has been relatively slow to adopt the cloud, and with good reason: banks have to deal with highly sensitive data, and sharing data storage and compute resources with others could not even be envisaged, let alone adopted.

However, just under two years ago, the Financial Conduct Authority (FCA) published new guidance for firms outsourcing to the ‘cloud’ and other third-party IT services, which paved the way for banks, insurers and other financial services companies to take advantage of cloud computing services. In it, the regulator outlined that there was no fundamental reason why cloud services (including public cloud services) couldn’t be implemented, subject to compliance with the specific guidance for financial firms outsourcing to the cloud and other third-party IT services.

According to Celent, financial services firms will progressively abandon private data centres and triple the amount of data they upload to the cloud in the next three years. Because of the huge and increasing amounts of data financial services firms need to manage, the scalability of cloud has become an attractive feature – especially considering the fact that the number of daily transactions can stretch into the millions. On top of that, the volume of transactional data is not always predictable, so financial institutions must be able to scale up quickly on demand.

While scalability is an attractive aspect of the cloud for financial firms, it’s important to evaluate scalability in conjunction with other key elements of cloud services including security, cost-effectiveness and transparency.

Combining scalability with security

Security is a key concern for banks as they deal with highly sensitive data and increasing regulations around data privacy, particularly with the EU General Data Protection Regulation (GDPR) coming into force in May 2018. While they remain fully responsible for clients’ data security, their cloud services provider (CSP) will maintain the security of the cloud infrastructure their apps and data are hosted on. The scalability benefits of cloud must therefore be combined with security features that measure up to on-premises levels of security. The good news is that some cloud providers have significantly improved their security offerings; the best of them have features such as data encryption, vulnerability scanning and intrusion detection baked into the cloud platform, and offer full reporting on security and compliance, which financial services firms increasingly need for auditing purposes.

What used to discourage the financial sector from adopting the cloud is now what’s appealing to it. Banks that take advantage of cloud computing often benefit from stronger security safeguards than they are able to invest in for on-premises IT infrastructure. The cloud is certainly more secure than many legacy platforms, so if financial organisations choose the right cloud services provider, they can experience a higher level of security than they would via legacy solutions.

However, there’s no question that we’re seeing a rise in cloud-based malware and, according to Palo Alto Networks, 70% of cybersecurity professionals working in large organisations in the UK say the rush to the cloud is not taking full account of the security risks. Even more worrying, the survey reveals that only 15% of UK security professionals were able to maintain consistent, enterprise-class cyber security across their cloud networks and endpoints. Add to this the fact that financial services companies need to scale up quickly in an increasingly regulated environment and you’ll understand why financial firms need to pay careful attention to cloud security and compliance credentials. Choosing a cloud services provider with advanced security features is vital to financial institutions and can help them to report on the security of all of their workloads in the cloud to pass compliance audits.

Combining scalability with cost-effectiveness

Another essential factor for the financial sector when adopting cloud is, of course, cost. The annual IT spend for global capital markets keeps increasing and, while cloud computing promises many economic benefits, these can only be realised when there’s a good match between cloud workloads and cloud resource utilisation. Cloud computing has the potential to save the industry billions of pounds, as the volume of transactional data increases and the cost of information security escalates in an increasingly complex threat environment.

Some cloud providers, such as iland, enable customers to scale their reserved cloud resources to exactly the capacity required. Billing is then determined by actual compute usage, so customers only pay for what they use. This ensures that customers always have the cloud capacity available, without having to pay for more than they need. It is far more cost-efficient than provisioning on-premises equipment for maximum workloads and having it lie idle for much of the time.
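
The economics are easy to sketch. The hourly rate and usage profile below are invented purely for illustration and are not iland’s actual pricing:

```python
# Illustrative comparison: provisioning for peak versus usage-based
# billing. The hourly rate and usage profile are invented.

PRICE_PER_GB_HOUR = 0.02                  # hypothetical rate
hourly_usage_gb = [40] * 18 + [160] * 6   # a quiet day with a six-hour peak

provisioned_for_peak = max(hourly_usage_gb) * PRICE_PER_GB_HOUR * 24
pay_for_what_you_use = sum(gb * PRICE_PER_GB_HOUR for gb in hourly_usage_gb)

print(f"provisioned for peak: ${provisioned_for_peak:.2f}/day")  # $76.80
print(f"pay for what you use: ${pay_for_what_you_use:.2f}/day")  # $33.60
```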

Combining scalability with transparency

Financial services firms are also seeking transparency into the policies, processes and operations of their cloud provider. In recognition of the flexible and collaborative nature of cloud service providers, the new EBA guidance launched a few months ago sets out the terms and processes under which chain outsourcing – a cloud provider outsourcing an element of its provision to a third party – is acceptable. As with most aspects of the guidance, strong emphasis is placed on ongoing risk management and transparency between the CSP and the financial organisation.

Throughout all aspects of the EBA guidelines, it is abundantly clear that the relationship between financial organisations and their CSPs needs to be extremely close and transparent, and conducted at a senior level. Verifiable trust through certification is the linchpin of the whole relationship, and the partnership will be dysfunctional (and potentially unviable) without this cornerstone in place. Fortunately, this kind of transparency and commitment to open partnership has been built into the DNA of some cloud providers from the outset. iland, for example, has a dedicated compliance team that focuses on helping customers provide continuous monitoring and evidence of compliant cloud services to regulators.

Overall, financial services firms should not be tentative about stepping into the cloud; the investment of time and budget in building and managing IT infrastructure can be dramatically reduced, and the on-demand scalability benefits are particularly important in this sector. Cloud providers have significantly developed their security capabilities and can offer dedicated, sector-specific support through cloud migration and management. The publication of guidance from regulators at the FCA and EBA, plus the expertise that CSPs have developed for the financial services sector, should give financial services firms more confidence in the cloud and encourage them to fully embrace its possibilities and benefits.

How to pick a cloud your CIO will love: Three questions to ask

It will come as no surprise to those of you already leveraging cloud services that Forrester Research predicted that in 2018, cloud computing would become a must-have business technology. The scalability, agility and cost model allow IT teams to redirect their energy toward accelerating business initiatives without worrying about costly infrastructure investments.

As many have learned, however, it’s important when evaluating cloud providers to look closely at the various elements of their services. Comparing quotes and services from different providers isn’t comparing apples to apples. Make sure you take into account the compatibility, accessibility, visibility, resiliency, security and support associated with any provider’s services – along with any indirect or intangible costs that may result from making the switch.

So, where do you begin? What questions do you ask as you evaluate cloud providers? After more than a decade spent helping customers migrate to the cloud for hosting, backup and disaster recovery use cases, here are some key questions we’ve identified:

What are the migration options to the cloud?

It is important to evaluate how you will migrate into a new environment with the least amount of reconfiguration and change possible. Account for details like the technologies you already have in place, your networking specifications and whether you have any physical workloads to consider. To help make the migration as smooth as possible, ask yourself:

  • What do you move?
  • How will you get it there?
  • What technologies and tools does the provider use to assist customers during the transition?
  • Does the provider offer a managed migration service?
  • How can you identify and prioritise workloads for migration, and will the provider help you with that process?

Does the cloud provider have options to support third-party networking capabilities?

As more and more organisations embrace the hybrid cloud, they look to leverage the networking options they utilise on-premises with their cloud services. This lessens the overhead of mixed technologies and skill sets and simplifies communication between systems. Ask the cloud provider what networking technologies are supported and who manages those systems. Are devices provisioned for you, or is self-service deployment the only option? Is it virtual or physical? Who manages the device once it’s installed, and will support help with connection issues? Connectivity is paramount to continuous business operations from your site to the cloud, and cloud networking can be complex. Choose a provider who can work with your existing environment to lessen the overall impact.

What about compliance and security in the cloud?

It is increasingly important to think about security and compliance when running both on-premises and cloud workloads. When implementing a hybrid cloud environment, make sure you evaluate whether the cloud providers you are considering include built-in security and compliance tools on the cloud platform itself. These tools need to be at least as robust as, if not more robust than, what you currently have in your data centre. Be sure to ask about the visibility and alerting available within the security and compliance tooling, whether the provider will assist with remediation, and how they support the compliance requirements of any regulatory audits your cloud workloads will be governed by.

As data protection laws and regulations come into force, especially with the onset of Brexit and the EU General Data Protection Regulation (GDPR), it’s important to verify that the cloud provider handles your data in accordance with data sovereignty and local data laws. You may be in a situation in which data cannot leave a certain country or geographic region. Make sure you understand how your cloud provider leverages its load balancing technology. You’ll want to know whether this is regulated for you, and whether the provider can guarantee data sovereignty so that your data is not sent somewhere it shouldn’t be.
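
One practical safeguard is a provisioning-time check that rejects any placement outside approved jurisdictions. A minimal sketch, with hypothetical region codes and workloads:

```python
# A minimal data-sovereignty guardrail: refuse to provision or replicate
# a workload outside approved jurisdictions. Region codes and the
# workload list are hypothetical.

ALLOWED_REGIONS = {"uk-london", "eu-frankfurt"}  # e.g. data must stay in the UK/EU

workloads = [
    {"name": "payments-db", "region": "uk-london"},
    {"name": "analytics",   "region": "us-east"},
]

for wl in workloads:
    if wl["region"] not in ALLOWED_REGIONS:
        print(f"BLOCKED: {wl['name']} targets {wl['region']}, "
              "outside the approved data-sovereignty boundary")
    else:
        print(f"OK: {wl['name']} stays in {wl['region']}")
```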

The journey to the cloud is all about removing the overhead of infrastructure and focusing IT resources on delivering value to the business. Choosing a cloud provider is one of the biggest IT challenges out there. The cloud provider needs to be able to meet your existing needs across capacity, services, support, security and compliance and be able to scale and grow for your future goals. Moving to the cloud is a major investment and choosing wisely the first time will avoid the pain of moving providers and the challenges that can bring.

Eight steps for a pain-free cloud migration: Assessment, migration and support

Cloud adoption by UK companies has now neared 90%, according to the Cloud Industry Forum, and it won’t be long before all organisations are benefiting to some degree from the flexibility, efficiency and cost savings of the cloud. Moving past the first wave of adoption, we’re seeing businesses ramp up the complexity of the workloads and applications they’re migrating to the cloud. Perhaps this is why 90% is also the proportion of companies that have reported difficulties with their cloud migration projects. This is frustrating for IT teams when they’re deploying cloud solutions that are supposed to reduce their burden and make life simpler.

With over a decade of experience helping customers adopt cloud services, our iland deployment teams know that a pain-free migration to the cloud is achievable, but that preparation is crucial to project success. Progressing through the following key stages offers a better chance of running a smooth migration with minimum disruption.

Set your goals at the outset

Every organisation has different priorities when it comes to the cloud, and there’s no “one cloud fits all” solution. Selecting the best options for your organisation means first understanding what you want to move, how you’ll get it to the cloud and how you’ll manage it once it’s there. You also need to identify how migrating core data systems to the cloud will impact on your security and compliance programmes. Having a clear handle on these goals at the outset will enable you to properly scope your project.

Before you begin – assess your on-premises environment

Preparing for cloud migration is a valuable opportunity to take stock of your on-premises data and applications and rank them in terms of business-criticality. This helps inform both the structure you’ll want in your cloud environment and also the order in which to migrate applications.

Ask the hard questions: does this application really need to move to the cloud, or can it be decommissioned? In a cloud environment where you pay for the resources you use, it doesn’t make economic sense to migrate legacy applications that no longer serve their purpose.

Once you have a full inventory of your environment and its workloads, you need to flag up the specific networking requirements and physical appliances that may need special care in the cloud. This ranked inventory can then be used to calculate the required cloud resources and associated costs. Importantly, this process can also be used to classify and prioritise workloads, which is invaluable in driving costs down in, for example, cloud-based disaster recovery scenarios where different workloads can be allocated different levels of protection.
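
One simple way to operationalise that ranking is to map each criticality tier to a DR protection level. The tiers, RPO figures and inventory below are illustrative assumptions:

```python
# Rank an application inventory by business criticality and map each
# tier to a DR protection level. Tiers, RPOs and apps are illustrative.

DR_TIERS = {
    1: "continuous replication (RPO in seconds)",
    2: "hourly replication (RPO of one hour)",
    3: "nightly backup only (RPO of 24 hours)",
}

inventory = [
    {"app": "order-processing", "criticality": 1},
    {"app": "reporting",        "criticality": 2},
    {"app": "intranet-wiki",    "criticality": 3},
]

for item in sorted(inventory, key=lambda i: i["criticality"]):
    tier = item["criticality"]
    print(f"{item['app']:<17} -> tier {tier}: {DR_TIERS[tier]}")
```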

Establish tech support during and post-migration

Many organisations take their first steps into the cloud when looking for disaster recovery solutions, enticed by the facility to replicate data continuously to a secondary location with virtually no downtime or lost data. A migration is fundamentally the same process, except that it happens at a planned, convenient time rather than being prompted by an extreme event. This means that once the switch is flipped, the migration should be as smooth as a DR failover. However, most organisations will want to know that there is an expert on hand should anything go wrong, so 24/7 support should be factored into the equation.

Boost what you already have

Look at your on-premises environment and work out how to create synergies with the cloud. For example, VMware users will find there’s much to be said for choosing a VMware-based cloud environment equipped with tools and templates specifically designed for smoothly transitioning initial workloads. It’s an opportunity to refresh the VM environment and build out a new, clean system in the cloud. This doesn’t mean you can’t transition to a cloud that differs from your on-premises environment, but it’s a factor worth taking into consideration.

Migration of physical workloads

Of the 90% of businesses that reported difficulty migrating to the cloud, complexity was the most commonly cited issue, and you can bet that shifting physical systems is at the root of much of that. These systems are often the last vestiges of legacy IT strategies and remain because they underpin business operations. You need to determine whether there is a benefit to moving them to the cloud and, if so, take up one of two options: virtualise the ones that can be virtualised – possibly using software options – or find a cloud provider that can support physical systems within the cloud, either on standard servers or co-located custom systems.

Determine the information transfer approach

The approach to transferring information to the cloud will depend on the size of the dataset. In the age of virtualisation and relatively large network pipes, seeding can often be viewed as a costly, inefficient and error-prone process. However, if datasets are sufficiently large, seeding may be the best option, with your service provider supplying encrypted drives from which they’ll help you manually import data into the cloud. A more innovative approach uses seeding to jumpstart the migration process: by seeding the cloud data centre with a point-in-time copy of your environment, you can then use your standard network connection to sync any changes before cut-over. This minimises downtime and represents the best of both worlds.
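
Whether seeding is worthwhile comes down to arithmetic: how long would the initial sync take over your available pipe? A rough calculator follows; the 70% sustained-utilisation figure is an assumption:

```python
# Rough seed-versus-network calculator: estimate how long the initial
# sync would take over the available connection.

def transfer_days(dataset_tb: float, bandwidth_mbps: float,
                  utilisation: float = 0.7) -> float:
    """Days to move a dataset at a given sustained link utilisation."""
    bits = dataset_tb * 1e12 * 8
    seconds = bits / (bandwidth_mbps * 1e6 * utilisation)
    return seconds / 86400

print(f"~{transfer_days(20, 100):.0f} days")  # 20 TB over 100 Mbps: ~26 days
```

At nearly a month over the wire, a 20 TB dataset is a clear candidate for seeding, with the network connection reserved for syncing the deltas before cut-over.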

Check network connectivity

Your network pipe will be seeing a lot more traffic, and while most organisations will find they have adequate bandwidth, it’s best to check ahead that it will be sufficient. If your mission-critical applications demand live streaming with minimal latency, you may wish to investigate direct connectivity to the cloud via VPN.

Consider post-migration management and support as part of the buying decision

Your migration project is complete; now you have to manage your cloud environment and get accustomed to how it differs from managing on-premises applications. The power and usability of management tools should be part of the selection criteria, so that you are confident you will have ongoing visibility and the facility to monitor security, costs and performance. Furthermore, support is a crucial part of your ongoing relationship with your cloud service provider, and you need to select an option that gives you the support you need, when you need it, at the right price.

As more and more businesses take the plunge and move mission-critical systems to the cloud, we’ll see the skills and experience of in-house teams increase and the ability to handle complex migrations will rise in tandem. Until then, IT teams charged with migration projects shouldn’t be afraid to wring as much support and advice out of cloud service providers as possible so that they can achieve a pain-free migration and start reaping the benefits of the cloud.

Why there is a clearing in the cloud for financial services

The financial services sector is one of the prime markets that stands to gain many benefits from cloud computing but, to date, this has proved rather difficult. This is hardly surprising when one considers the heavy regulation within the industry. On top of this, there is also a lot of fear among organisations about potential security risks, with control of data still very much a primary concern. Barely a day goes by without the announcement of the latest outage, and no one wants to be next in the firing line.

The potential benefits that come from the growth of cloud computing in this sector are vast, allowing for real-time execution of business critical activities such as fraud detection, instant lending decisions and extensive risk calculations. Cloud computing has also been a key driver in helping lenders achieve scalability quickly while helping to lower IT costs. When implemented properly, moving to the cloud can drastically reduce the operations and maintenance cost of IT, whilst ensuring that organisational agility is not slowed down by infrastructure. Many providers of financing solutions operate across multiple regions, so this agility becomes vital when looking to innovate, launch products and structure deals quickly; they cannot afford to be beholden to legacy technology.

The dynamic nature of cloud, however, necessitates security and compliance controls that, granted, can be daunting. Issues around mobility and multi-tenancy, identity and access management, data protection, and incident response and assessment all need to be addressed. And with multiple modes – SaaS, PaaS, IaaS, public, private, hybrid – creating added complexity in how security and compliance are carried out and by whom, I can certainly understand why IT leaders in the financial services sector may initially think twice about leveraging cloud.

ISO 27001 is a widely adopted global security standard and framework that sets out requirements and best practices for a comprehensive approach to managing company and customer information. As all companies, including those in the financial sector, race to combat security threats and address evolving compliance requirements, they often struggle to implement and demonstrate the consistent security management that is core to ISO 27001. Proving IT security practices is also key to satisfying the new European Union General Data Protection Regulation before it goes into effect in 2018.

We can look to one of iland’s own customers in the sector, Bluestone, to exemplify the importance of regulation when trying to implement a cloud computing strategy. The multi-national financial services company leverages iland’s Disaster-Recovery-as-a-Service (DRaaS) with advanced security to ensure IT resiliency as it transitions towards its ‘cloud-first’ strategy.  According to Bluestone’s global head of IT operations: “Having a cloud-based disaster recovery solution that helps Bluestone to meet Financial Conduct Authority regulations and ISO 27001 standard requirements is essential. iland supports us with advanced security and compliance reporting that speeds up and significantly simplifies our compliance processes.”

The fact of the matter is, in today’s world, compliance isn’t just about satisfying regulations – it’s about staying ahead of threats and assuring end-customers that their data is safe. And this is never more important than when individuals’ money is at stake.

As well as simplifying industry compliance, cloud services also provide Bluestone with a number of other tangible business benefits that include:

  • The avoidance of downtime and simplified DR management – The organisation can quickly recover from any IT incident, achieving recovery time objectives measured in minutes and recovery point objectives measured in seconds. iland’s intuitive cloud management console enables Bluestone to execute failovers and view and manage DR resources, delivering all performance, security and costing information within a single interface. Further to this, the Bluestone team can perform full, non-intrusive testing of the DR solution on demand via the console with no impact on operations.
  • The protection of customer data with integrated advanced security – In the event of a failover, Bluestone’s workloads are protected against emerging IT threats with advanced security integrated into iland’s DRaaS offering. Features include antivirus and malware detection, vulnerability scanning, whole disk encryption, SSL-VPN, intrusion detection and prevention, event logging, deep packet inspection and other advanced capabilities. The Bluestone team also accesses on-demand security reports at the touch of a button through the iland cloud console.
  • Reduction in IT resiliency costs by 40 percent – Thanks to iland’s straightforward pricing model, Bluestone only pays for compute resources when it requires a failover. As a result, the company has achieved a reduction of 40 percent in the overall cost model of its DR solution.

Cloud services clearly have the potential to help companies within the financial services sector to protect essential IT systems and innovate in the digital age. However, it is often the case that these companies simply don’t have the resources to be experts in all things IT. The key is in choosing a strong and trusted service provider that will be able to work alongside the organisation to ensure its needs are being met, whether that is in relation to security, compliance, backup or costs, so that it may focus on continuing to delight customers with innovative financial services.


Why APIs are key to successful digital transformation

Digital transformation continues to dominate boardroom discussions as businesses increasingly realise the organisational and cost efficiencies that digitisation can provide.

The concept reflects technology’s role in both shaping and stimulating strategic decision-making, with its ability to automate and simplify business processes, improve customer relationships, enhance productivity, and reap cost savings. In fact, IDC predicts that by the end of 2017, two-thirds of CEOs of Global 2000 enterprises will have digital transformation at the centre of their corporate strategy.

However, it can be a challenge for organisations to implement a digitisation strategy against a background of increasingly complex day-to-day IT operations, which often involve managing both cloud and on-premises IT infrastructure. For many, application programming interfaces (APIs) are an essential component of merging the old and the new IT platforms, capturing vast amounts of data and ultimately achieving their digital transformation strategy.

The most common description of an API is a set of functions and procedures that allow applications to access the features or data of an operating system, application or other service to extend its capability or even create an entirely new feature. APIs are everywhere in our personal lives – whether that’s watching YouTube, posting on Facebook or purchasing something online from Amazon. APIs enable our digital lives.

The potential of APIs to deliver business advantage should not be underestimated, either. APIs are igniting a cultural shift within many organisations, enabling the integration of diverse IT systems, building more collaborative and self-service IT environments, and deriving revenues from existing IT assets.

Analyst firm Gartner claims APIs can minimise the friction often caused by organisations implementing a ‘bimodal’ IT strategy – where legacy applications run alongside more innovative digital solutions. It says APIs are the layer through which ‘Mode 1’ and ‘Mode 2’ can connect, bridging the gap between core data and functionality, and a more experimental, innovative application.

APIs can also help simplify engagement with customers, while at the same time providing them with instant, valuable data insights into their business.

Here at iland we have prioritised the integration of APIs into our secure cloud platform to provide our customers with a simple way to engage with and manage their virtual machines and applications in the iland cloud.

For example, customers can access a wealth of data, reports and services via a single pane of glass. Using VMware’s vCloud Director multitenant platform as a foundation, iland’s custom console taps into more than twenty third-party services, such as the VMware vCenter solution, the VMware NSX Manager solution, Salesforce, and Veeam Software.

Customers can make API calls directly to iland’s console and cloud infrastructure, and access the enormous amount of performance, capacity, security and workload metadata that is stored there. Via a user-friendly interface, customers can quickly provision services, set role-based security, and analyse granular performance and capacity patterns. Through the console’s transparent billing capabilities, they get insight into expenses and ways to reduce costs, and they can also make feature recommendations via the console to improve the solution.

Everything in the interface can be automated through iland’s API or software development kits (SDKs), so customers can automate routine operational tasks without having to invest heavily in automation up-front. Also, because the iland cloud provides SDKs for Java, Python, Erlang, and Golang, most API consumers can use their favourite language to jump right in and get started.
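
The pattern itself is conventional REST: authenticate, then query or act on resources. The sketch below shows the general shape only; the base URL, paths and field names are placeholders rather than iland’s published API:

```python
# Generic REST pattern for cloud automation: authenticate, then pull VM
# inventory and metrics. The URL, paths and field names are
# placeholders, not a real cloud API.
import requests

BASE_URL = "https://api.example-cloud.com/v1"         # placeholder endpoint
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder credential

resp = requests.get(f"{BASE_URL}/vms", headers=HEADERS, timeout=30)
resp.raise_for_status()

for vm in resp.json():
    print(vm["name"], vm["cpu_usage"], vm["storage_gb"])
```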

APIs also enable organisations to automate routine operational cloud management tasks – this is popular with training companies that frequently spin up classroom templates, for example. The API makes it easy to spin up 100 classroom labs for a week, then spin down the labs when the class session has concluded.

In addition, customers can use APIs to integrate with their existing management tools. With our technology, for example, users can access monitoring data and integrate it with their existing monitoring or management tools, thereby extending their pre-existing tools into the iland Cloud.

Additionally, organisations are using APIs to add automation within a disaster recovery (DR) environment. For example, a customer can upload a virtual desktop template to iland and then automate the deployment of hundreds of copies of that desktop when a DR event is declared. Some DR customers are also using APIs to manage their DNS changes, so that their public services are migrated from production to DR in a failover event.

Finally, users can leverage APIs in a development or test environment; they can use scripts to simply power down the environment in the evenings or at weekends, and power back on in the morning, thereby saving money.
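
That last pattern is simple enough to sketch end to end. The power_on and power_off helpers below are hypothetical stand-ins for whatever calls the provider’s API or SDK actually exposes:

```python
# Sketch of the evening/weekend power-down idea for dev/test
# environments. power_on/power_off are hypothetical stand-ins for the
# provider's real API or SDK calls.
from datetime import datetime

DEV_VMS = ["dev-web-01", "dev-db-01", "test-runner-01"]

def power_on(vm: str) -> None:
    print(f"powering on {vm}")   # stand-in for the real API/SDK call

def power_off(vm: str) -> None:
    print(f"powering off {vm}")  # stand-in for the real API/SDK call

def should_be_running(now: datetime) -> bool:
    """Business hours only: weekdays, 07:00 to 19:00."""
    return now.weekday() < 5 and 7 <= now.hour < 19

def enforce_schedule(now: datetime) -> None:
    for vm in DEV_VMS:
        if should_be_running(now):
            power_on(vm)
        else:
            power_off(vm)

enforce_schedule(datetime.now())
```

Run from a scheduler every half hour, a script like this keeps dev and test capacity powered (and billed) only during working hours.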

For cloud service providers, value added resellers (VARs) and system integrators (SIs), it’s becoming clear that true API integration is the new value-add. If companies want to build sustainable, profitable businesses in the new world of digitisation, they should consider developing skills that enable them to leverage the power and flexibility of APIs.

A 2016 report claims 44 percent of IT decision-makers believe building and managing APIs is fundamental to IT’s ability to complete digital transformation projects more quickly, and the same number said API reuse would significantly speed up digital transformation.

As digital transformation projects pick up pace, APIs will play an integral role in managing and optimising hybrid IT environments across both cloud and on-premises infrastructure, which is becoming the new reality in most businesses today.


The key themes underpinning the continued momentum for UK cloud adoption

I recently read with interest the results of the latest Cloud Industry Forum research survey into UK cloud adoption, and a couple of aspects jumped out at me, undoubtedly mirroring the experience iland is having within the UK market.

Overall cloud adoption rates among British businesses now stand at 88%, and the rate of cloud adoption has increased by a whopping 83% since 2010, with an increase of 5% since last year. In fact, today we rarely speak with a customer that isn’t using some type of cloud service, and that’s reflected in these results. The difference now is that, rather than taking an ad hoc approach, companies of all sizes in the UK are taking a more strategic, planned approach to cloud adoption in recognition of its importance to business transformation.

Conducted in February 2017, the research polled 250 IT and business decision-makers in large enterprises, small to medium-sized businesses (SMEs) and public sector organisations. The survey found that 67% of users expect to increase their adoption of cloud services over the coming year – again, this matches our experience – particularly for companies in highly regulated sectors such as financial services and biotech/pharma. These organisations may have been slow to the cloud adoption trend, but adoption rates have now started to speed up. I think that is because there is now a growing realisation that higher levels of IT security and industry compliance can be achieved in the cloud than on-premises, and new cloud use cases are emerging as a result.

Here at iland, what we have seen is a more significant increase in cloud adoption amongst small and public sector organisations – cloud security levels have come a long way and this, combined with the cost-effectiveness of cloud models, has attracted both small companies and public sector organisations to the cloud. The survey results showed a similar pattern with small and public sector organisation adoption rates now standing at 82% for both, up from 54% and 62%, respectively, a year ago.

Cloud-based disaster recovery solutions mean that even small and medium organisations can achieve a business continuity strategy previously only available to large enterprises. A great example of this is the East Thames public housing association, which has kicked off its ‘cloud-first’ strategy with a successful DRaaS project.

The research also highlighted that 70% of respondents are either currently seeing or anticipate seeing their organisation have a competitive advantage from utilising cloud services – we hear this again and again from our customers. The benefits of cloud adoption surpass their expectations and drive competitive advantage. Many iland customers have even discovered unexpected use cases from cloud services such as using DR cloud capacity for development and testing.

So, the survey results are encouraging and the only way is up for the UK cloud industry (just as long as we deal with the Brexit and GDPR issues, which are undoubtedly big concerns for UK organisations). However, I was surprised by one of the results that the survey uncovered about cloud migration. The survey advised that: “On average it took 15 months to migrate applications to the cloud, with 90% experiencing difficulties when migrating to a cloud solution.”

The reasons given for these migration difficulties included complexity of migration (43%) and lack of internal skills/knowledge (32%). Clearly, more planning and expert assistance is needed up front to prepare for cloud migrations.

After all, there are very few companies that can make either the monetary or time investment required to develop new cloud implementation and management skills, so a partnership between the customer and cloud service provider, based on a mutual understanding of responsibilities, needs to fill that gap. Indeed, as UK cloud adoption levels continue to rise, the complexity of cloud projects will only increase, and a high-touch, consultative approach from cloud providers will become much more important.

Assessing biotechnology in the age of cloud computing

To ensure that patient outcomes are constantly being improved upon, the biotechnology sector needs to change at a rapid pace. However, this continued drive for innovation puts immense pressure on IT departments to deliver new technologies at speed, while also making sure they do so cost-effectively.

Add to this the fact that biotech firms are more tightly regulated than most other industries. As a result, IT groups within this industry are often reluctant to introduce more complexity into what is already a very complex environment. To them, expanding a data centre can often feel a whole lot easier than navigating the regulations of the cloud. Despite this, demand for cloud computing in life sciences research and development is escalating due to the benefits it brings to the industry – benefits such as the ability to meet and even exceed regulatory requirements.

At iland, we have worked with many companies in the healthcare, life sciences and biotech industries. We know from experience that the implementation of cloud computing in biotechnology empowers organisations with the control and flexibility needed to lead the way in both the research and business worlds. For example, we recently worked with a US-based biotechnology organisation on its backup and disaster recovery (DR) strategy, and were able to drive global data centre consolidation with host-based replication to the iland cloud. As a result, their DR testing and auditing processes were greatly simplified and streamlined, which drove significant cost savings as well as compliance assurance.

If you still need convincing, here are three key benefits that we believe cloud brings to biotech organisations:

Processing big data

When the Human Genome Project began, it was one of the most extensive research projects in the field to date, costing billions of pounds and lasting over a decade. These days, thanks largely to cloud technology, a whole genome can be sequenced in just 26 hours. Drug R&D, clinical research and a whole host of other areas have benefited just as much from the rapid growth of computational power. The better your technology is at crunching huge sets of data, the quicker you can innovate.

Cloud computing within the biotech sector can take big data analysis to the next level through performance, connectivity, on-demand infrastructure and flexible provisioning. Labs can benefit from immense computing power without the cost and complexity of running big onsite server rooms, and they can scale up at will to make use of new research and ideas almost instantly.

Concerns have been voiced that so-called scientific computing in the cloud may make results less reproducible. One concern is that cloud computing will be a computing ‘black box’ that obscures details needed to accurately interpret the results of computational analyses. In actual fact, by leveraging the application programming interfaces (APIs) in the iland cloud, biotech customers are able to integrate cloud data back into on-premises IT systems to ensure that data analyses done in the cloud can be easily shared and consumed by other applications. Essentially, cloud computing services bring more players to the table to solve the giant puzzle. It’s a win-win situation from an economic and patient standpoint, and several big-name companies are jumping on the biotech cloud bandwagon.

Compliance and access control

Biotech companies need to maintain strong access and authentication controls, while also being able to collaborate easily. For this reason, audit trails and other measures are often required to verify that information has not been improperly altered, and that good experimental and manufacturing procedures have been followed. At the same time, biotechnologists need to be able to access and share data across multiple departments or even multiple companies.

Cloud computing in biotechnology makes this all possible. The iland cloud, for instance, centralises data, ensuring security and data sovereignty while facilitating collaboration. It supports extensive user- and role-based access control, two-factor authentication and integrity monitoring to prevent improper access and changes. Together with data encryption, vulnerability scanning and intrusion detection, these measures facilitate security and compliance without disrupting internal workflows.
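
Integrity monitoring, at its core, is baseline-and-compare. Here is a toy sketch of the idea; real platforms do this continuously with agents, and the temporary file below merely stands in for a watched configuration file:

```python
# Toy sketch of file integrity monitoring: record a baseline digest,
# then flag any change. The temporary file stands in for a real
# watched configuration file.
import hashlib
import tempfile
from pathlib import Path

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    cfg = Path(tmp) / "app.conf"
    cfg.write_text("mode=production\n")
    baseline = digest(cfg)  # recorded at a known-good point in time

    cfg.write_text("mode=production\ndebug=true\n")  # an unauthorised change
    if digest(cfg) != baseline:
        print(f"integrity alert: {cfg.name} has been modified")
```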

Real-time reporting

Complex regulatory requirements and logistics combined with niche markets make efficiency paramount within biotechnology. Even minor mistakes arising from sloppy process management can easily result in major issues. Real-time operational reporting dramatically improves efficiency, quality control and decision making, allowing organisations to react instantly to challenges and opportunities, both internal and external.

As well as enhanced billing visibility and resource management functions, the release of our latest Secure Cloud Services means that the iland cloud now includes on-demand security and compliance reports. This advanced cloud management functionality is designed to foster strategic, self-sufficient control of a cloud environment, optimising overall cloud usage and costs to drive business initiatives and growth.

Without a shadow of a doubt, cloud technology can help biotechnology companies build the future. From research and development to marketing, computing affects everything your organisation does. iland has rich experience in the biotech, healthcare and life sciences sectors, so talk to us today to find out how our cloud hosting services can give you the power to develop at the speed of thought, not the speed of compliance or processing.


2017 is the year when IoT gets serious: Here’s what cloud providers need to know

If you attended Mobile World Congress (MWC) last month, you would have heard a lot of buzz about 5G and the Internet of Things (IoT), with many vendors such as HPE, IBM and VMware talking about their role in 5G and IoT. As iland is a VMware vCloud partner, I was particularly interested in the VMware IoT theatre sessions that ran throughout the show, in which VMware talked about its IoT strategy and how it is helping organisations harness IoT devices.

As you would expect, there was a lot on show: everything from IoT-enabled safety jackets, designed to keep people safe at sea or in the mountains, to connected cars showing the latest innovations in vehicle-to-vehicle communications and autonomous driving. Sierra Wireless demonstrated a live LTE CAT-M network for smart water solutions, while Huawei put the spotlight on its connected drone.

Moving away from all the hype of Mobile World Congress, I believe that 2017 will be the year when the focus turns to real deployments and the monetisation of IoT. It is fair to say that in 2016, IoT was still in its infancy in terms of revenue and deployments. Now, however, I think we will start to see some real live systems that increase productivity, improve customer satisfaction and develop new revenue streams.

At MWC we heard that 5G will be the panacea for IoT. However, 5G is still a number of years from being realised in any meaningful way. In the meantime, telcos will have to deal with new IoT models using the alternative technologies available today. Telecom operators’ strategies and business models for generating revenues from IoT will continue to develop through 2017, and I think we will continue to see the battle between NB-IoT and LTE-M play out.

Regulation and standardisation will also come into focus more in 2017. On the regulatory front, I have to admit we haven’t seen much come through yet, but I expect to see more government interest as IoT becomes more pervasive in smart cities, the public sector and energy.

Smart cities will lead the charge in IoT deployments. The awareness of what ‘smart city’ means is now beginning to capture the attention of residents. They value safety (smart lighting), convenience (transportation, parking) and the potential cost savings that cities can deliver (smart meters, on-demand waste pickup and so on). That said, cities will continue to be strained by the need for money to support the deployment of sensors to gather data and the integration of city-wide systems.

As business investment in cloud technologies continues, IoT moves up the agenda. IoT data tends to be heterogeneous and stored across multiple systems. As such, the market is calling for analytical tools that seamlessly connect to and combine all those cloud-hosted data sources, enabling businesses to explore and visualise any type of data stored anywhere in order to maximise the value of their IoT investment.

Equally, organisations across the world will be deploying flexible business intelligence solutions that allow them to analyse data from multiple sources of varying formats. By joining disparate IoT data together in a single pane of glass, companies can gain a more holistic view of the business, which means they can more easily identify problems and respond quickly. In other words, we need to extract the data from IoT devices and then figure out what it all means. With the ‘last mile’ of IoT data solved, organisations can then start to increase efficiencies and optimise their business offering.

While the possibilities for IoT to improve our world seem endless, concerns over security are very real. As we put more and more critical applications and functions into the realm of IoT, the opportunity for breaches increases. In 2016, the market finally began to take security seriously, largely because of the increase in IoT hacks. Incidents such as the big denial-of-service attack in October, and even the potential of a drone injecting a malicious virus via lights (from outside a building), caused great concern throughout the industry. As a result, we saw some solid announcements, such as the Industrial Internet Consortium releasing its security framework. With all the new vulnerable devices now being put into service, hackers will continue to exploit IoT systems. I think we can certainly expect to see more large-scale breaches as hackers look for newly connected devices, especially in the energy and transportation areas.

Many of iland’s top customers are working in the IoT space, and they are looking to iland to provide the most secure, top-of-the-line environments to host their applications in the cloud. Here at iland we take this very seriously, and we are constantly improving, upgrading and fortifying our platform to meet the growing needs of the IoT market. This means applying on-premises levels of IT security to cloud workloads: for example, two-factor authentication, role-based access control, encryption and vulnerability scanning, which together provide a protective shield for the cloud, scanning all incoming and outgoing data for malicious code, regardless of the device being used. Additionally, we have just introduced our Secure Cloud Services, designed to foster strategic, self-sufficient visibility and control of a cloud environment.

I believe that 2017 will be the ‘Year of IoT’, and at iland we are focused on providing the most secure and compliant environment for our customers so that they can take advantage of the opportunities that IoT presents. Making sure your IT systems are compliant, and having the reporting mechanisms to prove your data is secure, will be a critical success factor in leveraging IoT devices.

Editor’s note: if you are interested in finding out more about the iland Secure Cloud™ or iland Secure Cloud Services please visit here.
