China cloud infrastructure services grew 67% in Q419 says Canalys – as Covid-19 response praised

China’s cloud computing market continues to intrigue because of its potential – and according to a new study from analyst firm Canalys, cloud infrastructure services in China grew 67% in Q419, taking the country’s spend to more than one tenth of the overall market.

Total spend reached $3.3 billion (£2.8bn) in the quarter, with Alibaba Cloud accounting for almost half (46.4%) of outlay, making it the clear market leader. Tencent Cloud increased its share to 18%, while Baidu AI Cloud is the third-ranked vendor with 8.8% market share.

Not surprisingly, business in the most recent three months has been dominated by the Covid-19 outbreak. The Canalys note found that China’s cloud companies were quick to act and make resources available to businesses that needed them – something Western providers have begun doing themselves as the epidemic became a pandemic.

Alibaba Cloud offered credits to organisations, enabling them to buy its Elastic Compute Service and cybersecurity services, alongside making its AI-powered platform available for free to research institutions working on treating and preventing coronavirus. Tencent Cloud did the same, as well as launching a remote working offering, while Baidu AI Cloud made its online doctor consultation platform free for any queries.

Examples of US-headquartered companies following suit include Slack and Box, which earlier this week said researchers working on Covid-19 research, response or mitigation can access their paid plans for free. Dropbox said it would offer free Dropbox Business and HelloSign Enterprise subscriptions for a three-month period to non-profits and NGOs ‘focused on fighting Covid-19.’

“The benefits of cloud computing were demonstrated by the leading cloud service providers in response to the escalating coronavirus crisis,” said Yih Khai Wong, Canalys senior analyst. “They rapidly deployed continuity measures for organisations and established resource-intensive workloads to analyse vast datasets.

“Cloud companies opened their platforms, allowing new and existing customers to use more resources for free to help maintain operations,” Wong added. “This set the precedent for technology companies around the world that offer cloud-based services in their response to helping organisations affected by coronavirus.”

This can be seen as an optimistic analysis of China’s cloud ecosystem. According to Synergy Research figures from September, hyperscaler capex was down 2% year on year, with the Chinese market – which dropped 37% year on year in Q2 – primarily responsible. China’s overall outlook remains poor, with the most recent analysis from the Asia Cloud Computing Association (ACCA) placing the country in second-last position for infrastructure, although noting that, alongside India, the sheer size of its market counted against it.

According to further Synergy figures from May, Amazon Web Services (AWS) remains the cloud market leader across all geographies, but the Asia Pacific (APAC) landscape differs from the AWS-Azure-Google 1-2-3 seen elsewhere. Alibaba is the second player across APAC, with Tencent at #4 and Sinnet at #6.


Perfecting your remote working strategy


Sandra Vogel

20 Mar, 2020

In the current climate it has become incumbent on all of those who can work from home to do so. While some organisations are used to supporting remote working – whether from home or other locations – for many the current situation presents a significant challenge. There is much to consider, and very little time to put measures in place. 

Providing the technological means for remote working is not just about ensuring your people have adequate hardware and software tools. There is much more to it than that. We spoke to a number of organisations where flexible and remote working are already embedded to get their guidance for others who might be embarking on this for the first time, scaling up, or just looking to refine their approach. 

Freedom and boundaries

If the organisation is providing its people with tools such as internet access, computers, and software, there needs to be a measure of understanding around using these for non-work activities. Auth0 is a multinational organisation with offices in six countries and staff working remotely across more than 30 countries. Steven Rees-Pullman, International Vice President, tells IT Pro: “An acceptable use policy will help users and support staff know what is and isn’t ok when using your networks and software. It’s a simple step but clear guidelines will prevent easily avoidable mistakes that put your network at risk.”

A key aspect of getting remote working right is to avoid micromanaging people. You’ll only get the best from people if you trust them. That doesn’t mean giving people a free rein to do whatever they like, of course. There must be boundaries, and these will be set by the nature of your business, collaborative working needs, time-scales, deadlines, and many other parameters.

Still, striking the right balance between freedom, control and trust is crucial. This includes giving people freedom around when and where they want to work. Digital marketing consultancy Croud employs more than 200 permanent staff across three countries. Director of Operations Katherine Sale tells IT Pro: “Some people are most productive at home where there are no office distractions, whereas for others they need to be around people where they can share ideas and get immediate feedback. The more each of us as individuals can become self-aware of our working habits and the more organisations can be flexible and accommodating towards this, businesses are going to be more productive and efficient.”

Software and security

For those being pushed towards remote working for the first time at the moment, there is some good news about the selection of software. With so many strong collaboration tools available off the shelf, there may well be applications that fit your requirements ready and waiting for you. And they are as appropriate for small organisations of a few people as they are for large multinationals.

Sale tells IT Pro: “Keep it simple. There is no need for jazzy, expensive technology. We use Google Drive, which is a really accessible tool for lots of businesses, no matter what size they are. Using Google Drive means we all have access to relevant work and documents anywhere anytime.”

Maintaining adequate levels of system security remains vital regardless of where people are working. Auth0’s Rees-Pullman tells IT Pro: “When you ‘go remote’, your firewall effectively disappears, and you need to secure your remote employees as well as the third party software they use every day.” Getting adequate systems in place is crucial, and in addition Rees-Pullman also advocates having a backup plan in case something goes wrong. 

Pulling it all together

MediaCom is a planning and buying agency with more than 1,000 staff, and is very experienced in supporting remote working. There is sage advice from Elaine Bremner, Chief HR and Talent Officer, who told IT Pro: “It’s important to make sure you have a way to measure output, and this must be done through easy to use, real-time tech.” She is clear that organisations should measure “output, not presenteeism”, and that what’s needed is “a list of tasks with agreed deliverables and deadlines” – much as is the requirement when working in an office, in fact.

Organisations that get remote working right in the short term may find it extends naturally into the longer term, to the benefit of organisation and staff alike. Steven Rees-Pullman told IT Pro: “We believe work is something you do, not a place you go. So while we can absolutely quantify the benefits of remote working in terms of hours saved commuting or dollars in childcare, there’s also the intangible benefit of making work part of life, not a separate thing. Our employees have the freedom to do work on their own terms in a way that allows them to be most productive.”

Amen to that.

Microsoft Teams surpasses 44 million users after remote working surge


Bobby Hellard

20 Mar, 2020

Microsoft Teams gained 12 million users in just one week following a surge of remote working to combat the spread of the COVID-19 virus.

The communications platform reported 44 million users as of 18 March, up from 32 million on 11 March.

The figures came as the conferencing service celebrated its third birthday, but also just a few days after it suffered a two-hour outage – likely linked to the sudden influx of users.

Microsoft’s corporate VP for Microsoft 365, Jared Spataro, said the service had seen an “unprecedented” spike in usage in just seven days. He also suggested that the numbers might not drop once the coronavirus pandemic is under control.

“It’s very clear that enabling remote work is more important than ever, and that it will continue to have lasting value beyond the COVID-19 outbreak,” Spataro wrote in a blog post. “We are committed to building the tools that help organisations, teams, and individuals stay productive and connected even when they need to work apart.”

For its third birthday, Microsoft added some new features to Teams. The first is a function that uses AI to reduce background noise – which will certainly be welcome following the government’s decision to close all schools. There is also a button that allows users to read messages while offline, and a ‘raise hand’ feature to help people get their questions across during busy video meetings.

Although Teams is now far ahead of the competition, a leaked Microsoft partner video suggested the company is keen to thwart rival video conferencing service Zoom. The alleged internal video, posted to Twitter on Thursday, suggests Microsoft sees Zoom as an ‘emerging threat’ to Teams, as it is often used by businesses in tandem with Google’s G Suite and fierce rival Slack.

This week, Slack also unveiled a host of new functions and reported a spike in users due to the coronavirus. The smaller platform saw over 7,000 new users in just 47 days, according to TechCrunch. For context, it recorded only 5,000 new users in its last quarterly report.

Realising the impact of unsecured container deployments: A guide

A recently published report by StackRox on the state of container and Kubernetes security reveals the extent of security concerns in data centres running containerised workloads. Of the 540 IT and security professionals surveyed, 94% had experienced a security incident in the last 12 months, with misconfigurations and human error emerging as the primary causes.

As a result, enterprises that have already deployed containers, or are in the process of deploying them, are affected by weak security around containerised applications. This has a knock-on effect on the adoption of containers within many enterprises’ data centre modernisation strategies.

Impact on deployments

A recent CNCF survey found that security is already one of the top roadblocks in using/deploying containers.

Further, the StackRox survey found that 44% of respondents have slowed down application deployment into production because of container or Kubernetes security concerns. Container adoption and deployments have already been affected, and fresh security issues will only slow progress further.

Investment in security strategies

Security incidents and vulnerabilities found in Kubernetes have made enterprises rethink their container deployment strategies. Previously, enterprises placed less emphasis on security when adopting and implementing containers, which kept capex lower. Now, with the insights from the StackRox and CNCF surveys, the importance of integrating security has been recognised.

Given the wide range of container use cases driving digital innovation, enterprises will take concrete steps to harden containerised workloads. One will be to adopt container or Kubernetes security platforms, or to use managed container solutions and services, which help automate the management of containers and Kubernetes clusters so they stay secure and up to date.

Security skills

Kubernetes and containers are open source and comparatively new technologies that are still evolving. But the huge uptake of containers has exposed security lapses that occur through a lack of the knowledge and skills needed to follow security best practices.

The main highlight of the StackRox report is that most security lapses happen because of misconfiguration. To tackle this, enterprises will look to hire highly skilled engineers, train their existing staff and mandate best practices for container security. Kubernetes is the leading orchestration platform and is increasingly the default way containers are managed, so people with expertise in deploying and managing Kubernetes clusters securely will also be at the top of hiring lists.
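To make those best practices concrete, here is a minimal sketch – not from the StackRox report – that uses the official Kubernetes Python client (pip install kubernetes) to launch a pod with a hardened security context. The namespace, pod name and image are illustrative placeholders.

```python
# Minimal sketch: a pod with a hardened securityContext, created via the
# official Kubernetes Python client. All names below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # authenticate using the current kubectl context

pod = {
    "metadata": {"name": "hardened-demo", "namespace": "default"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "busybox:1.36",
            "command": ["sleep", "3600"],
            "securityContext": {
                "runAsNonRoot": True,
                "runAsUser": 10001,               # arbitrary unprivileged UID
                "allowPrivilegeEscalation": False,
                "readOnlyRootFilesystem": True,   # immutable root filesystem
                "capabilities": {"drop": ["ALL"]},
            },
        }],
    },
}

# The client accepts plain dicts (in Kubernetes wire format) as well as its
# typed model objects
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```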

DevSecOps

Puppet’s 2019 State of DevOps Report shed light on the importance of integrating security into the software delivery lifecycle. The report suggests that organisations adopting DevOps should prioritise security throughout the delivery cycle of software services. It also found that container environments suffer less when security practices are followed while developing and deploying applications, and when tools are integrated to handle testing and security incidents.

As more of the configuration and management of containers is automated, there will be fewer chances of misconfiguration and human error. Enterprises will look to bring DevOps methodologies, security teams and developers together to make sure containers do not suffer security breaches.
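As a toy illustration of that kind of automation – the directory layout and the check itself are illustrative – the following sketch could run in a CI pipeline, using PyYAML to scan Kubernetes manifests and fail the build whenever a container requests privileged mode, catching one common misconfiguration before it ever reaches a cluster.

```python
# Toy CI check: fail the build if any Kubernetes manifest under manifests/
# requests a privileged container. Requires PyYAML (pip install pyyaml).
import glob
import sys

import yaml

def privileged_containers(manifest: dict):
    """Yield names of containers that set securityContext.privileged: true."""
    spec = manifest.get("spec", {})
    # Handle both bare Pods and templated workloads (Deployments, Jobs, ...)
    pod_spec = spec.get("template", {}).get("spec", spec)
    for container in pod_spec.get("containers", []):
        if container.get("securityContext", {}).get("privileged"):
            yield container.get("name", "<unnamed>")

failures = []
for path in glob.glob("manifests/**/*.yaml", recursive=True):
    with open(path) as fh:
        for doc in yaml.safe_load_all(fh):
            if isinstance(doc, dict):
                failures += [f"{path}: {name}" for name in privileged_containers(doc)]

if failures:
    print("Privileged containers found:", *failures, sep="\n  ")
    sys.exit(1)  # a non-zero exit code fails the CI job
print("No privileged containers found.")
```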

Zero Trust in container networks

The authorisation of access for different levels of users is key to securing any data centre environment. For containers, orchestration platforms like Kubernetes offer mechanisms such as Role-Based Access Control (RBAC), PodSecurityPolicy and authentication to strengthen cluster and pod access. Going further, Zero Trust network overlays will begin to be implemented within Kubernetes clusters that host vast numbers of microservices.
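To make the RBAC part concrete, here is a minimal sketch using the official Kubernetes Python client to grant a single service account read-only access to pods in one namespace. Every name in it is a placeholder.

```python
# Minimal RBAC sketch: a namespaced, read-only Role bound to one service
# account, created with the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# A Role that can only read pods and their logs in the "production" namespace
role = {
    "metadata": {"name": "pod-reader", "namespace": "production"},
    "rules": [{
        "apiGroups": [""],                  # "" is the core API group
        "resources": ["pods", "pods/log"],
        "verbs": ["get", "list", "watch"],  # no write or delete verbs
    }],
}
rbac.create_namespaced_role(namespace="production", body=role)

# Bind the Role to a single service account rather than a broad user group
binding = {
    "metadata": {"name": "pod-reader-binding", "namespace": "production"},
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role", "name": "pod-reader"},
    "subjects": [{"kind": "ServiceAccount",
                  "name": "app-sa", "namespace": "production"}],
}
rbac.create_namespaced_role_binding(namespace="production", body=binding)
```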

The use of service mesh technologies such as Istio and Linkerd is one route to a Zero Trust network overlay. Usage of service meshes will increase as a way to gain better visibility, more control over networking, and encryption of data between microservices.

Conclusion

The adoption of containers and Kubernetes has brought agility to digital transformation efforts. Security concerns are a proven roadblock; however, a variety of container and Kubernetes security measures can be implemented with existing mechanisms, best practices and managed solutions.

Editor’s note: Find out more about container security and Kubernetes security best practice here.


Is there still an app for that?


David Howell

19 Mar, 2020

In 2018, Apple CEO Tim Cook announced the App Store had over 20 million developers, with the App Store itself receiving 500 million weekly visits. The app, it seems, continues to be popular: in the third quarter of 2019, there were a total of 29.6 billion app downloads worldwide, according to market research firm Sensor Tower – a 9.7% year-on-year increase overall, with Google Play downloads growing 11.4% to 21.6 billion and App Store downloads growing 5.3% to 8 billion.

Mobile access to the internet is also growing. According to Ericsson, the total number of mobile subscriptions in Q3 2019 was around eight billion, with 61 million subscriptions added during the quarter. By 2025, 90% of subscriptions are projected to be for mobile broadband.

“Smartphone penetration continues to rise,” says Ericsson. “Subscriptions associated with smartphones account for around 70% of all mobile phone subscriptions. It is estimated there will be 5.6 billion smartphone subscriptions by the end of 2019. The number of smartphone subscriptions is forecast to reach 7.4 billion in 2025.”

With a massive install base of devices, is the app still the best way for businesses to reach this vast mobile audience? Mobile retail continues to expand, as consumers embrace m-commerce as they did e-commerce, with many familiar names profiting from this trend, such as Amazon and eBay, which remain the most popular sites for mobile access.

In its most recent annual ‘Global State of Mobile’ report, Comscore concludes: “Looking at a snapshot of the retail category in the US, retail apps reached 87% of the total app audience in 2019: a 16% increase since June 2017. Total audience tends to skew 25-54 and female. Interestingly, we still see almost a quarter of time spent consuming retail content on desktop, which may be due to the larger screen real estate that can facilitate a closer examination of online purchases.”

In-app purchases continue to be highly prevalent in one specific category of apps: gaming. Research from Deloitte showed gamers, in general, will spend an average of £3.59 per month, or £43.05 a year, on in-app purchases, rising to more than £120 among the 25 to 34 age group. In-app purchases outside gaming, though, are also accelerating.

Consumers increasingly want integrated shopping experiences. Expect in-app purchasing, in general, to accelerate as m-commerce expands. However, care should be taken not to frustrate app users. In research carried out by Kantar Media for Ofcom, one male teen, for instance, told researchers: “I would download a free app and the next thing you know it is 69p to get to such and such a level, even though they said it was free in the first place.”

“Currently, when speaking on omnichannel, we often speak to a presence on each one,” says Sean Farrington, AVP of Interactive Development at LiveArea. “We have a native app, or two. We have a desktop app and also a skill on Alexa. Each of these is unique and, sadly, almost entirely independent. Technologies such as PWA (Progressive Web Apps), Service Worker, and Notification APIs, supported by personalised content and marketing, will prompt us to redefine our multichannel strategy.”

He adds: “Rather than focusing on a unique presence on each channel, we will begin to see this as a single application, unified across all the channels. This allows us to be more perceptive in how and when we engage with our customers and offers us a whole new world of touchpoints and possibilities. It will also allow us to provide intelligence when choosing the channel to engage upon.”

App fatigue and originality

With the app now over ten years old, are consumers still using the app as their primary channel for accessing services? According to research from Comscore, the answer is yes. In the UK, 86% of mobile minutes are spent using apps. Social media (88%), lifestyle (85%), and coupons and incentives (79%) are the top categories for mobile app usage.

The sheer number of apps available on the app stores has often been pointed to as an issue with regards to discoverability and why the longevity of an app on a user’s phone can be so short. Ofcom’s research suggests consumers are not ready to give up their apps.

“App users appear to have a strong functional reliance and emotional attachment to apps,” the report states. “As part of the research process, participants were asked to live without apps for a day. The absence of apps during this deprivation exercise left many feeling frustrated. Teens and younger adults, in particular, worried about being excluded from their social circle without access to apps.”

To reduce app fatigue and increase install rates, apps need to be useful initially, but the key to long-term connections is to ensure your business’ apps stay engaging and of practical use. Apps that are not updated regularly, don’t communicate with their users via push notifications, and don’t integrate with other areas of your business will be quickly uninstalled.

Speaking to Cloud Pro, Raj Bawa, operations director at JBi Digital says: “I wouldn’t consider app fatigue to be a major challenge for those using the app channel, as apps are there to make life easier for businesses. Apps go a long way in boosting the productivity of businesses, though it’s true there is a level of fatigue from a consumer perspective. To prevent such fatigue from taking place, developers need to consider providing application programming interfaces (APIs) that allow the software to be installed and maintained much more easily.”

If an app is an appropriate component of your business, making it as relevant and engaging as possible will combat app fatigue. Rob Sandbach, managing director at Manchester-based digital design agency Indiespring, says: “It’s a problem if your app performs the same task as a handful of identical apps and it’s an even bigger issue if your app does the same thing, but worse. App fatigue can be easily avoided if your app adds true value to the users’ day and does it better than any competing options out there. It comes down to user experience – you need to be constantly on the ball to protect that by measuring app performance and user engagement.”

An appy future

Analysis by Adjust shows that, on average, apps are deleted around six days after they have been installed. The highest attrition rate is for entertainment and lifestyle apps, with the best performers being in the e-commerce, travel and health categories. Consumers want apps that not only fulfil an immediate need but also continue to be useful.

This approach is supported by Ashley Friedlein, founder of Econsultancy and Guild, who tells Cloud Pro: “Mobile usage is only increasing, as is app usage. It will be hard to break the monopoly of the ‘mega apps’ that take up the most app usage time. However, the success of Snapchat and now TikTok show that it is still possible to create colossal traction very quickly through apps. Alongside the mega apps there is always room for more focused apps that do a particular thing very well.” 

The accelerating development of 5G and its inherent low latency could open a new era for apps that require a constant and fast connection to the mobile internet. Where many apps in the past have been handicapped by low bandwidth, this should disappear if the promises being made for 5G become a reality for every user.

If your business has yet to invest in app development, or you are about to upgrade your existing apps, the Ofcom report offers some insight that can be used to ensure the new apps your company creates will find their audience. 

“App users stated various criteria that they felt described their ‘best apps’, including being quick and easy to use; being reliable and not crashing; performing the functions described, and having appealing aesthetics,” the report states. “When asked to consider criteria for their ‘best apps’, there was little mention of safety and security. These did not appear to be front-of-mind for participants due to a lack of negative experiences with apps.”

Speaking to Cloud Pro, Alex Froom, chief product officer and founder of Zipabout, explains: “Where do we currently stand with the native or web app debate? It’s horses for courses. Unfortunately, it was web-app capabilities that opened app development to organisations lacking the commercial capability to build good apps. The result: an oversaturation of rubbish apps. Rather than focusing on native versus web, what’s more of an issue is the ease of development and whether you need an app at all.”

LiveArea’s Farrington adds: “I think the recognition of the importance of the mobile channel, particularly within m-commerce, has already set upon us. Over the next year or two, I think the focus will be more around how a business can refine their brand, strategy, and marketing efforts to adapt to, and take advantage of, a wide new array of opportunities the channel presents us. Differentiation, amongst competitors on the mobile channel, will become the primary battleground, and it will be won by companies that put themselves in a position to understand their customer regarding their brand.”

From a business perspective, the ‘mobile-first’ approach is now the norm. No matter which category or market your business trades within, having a mobile strategy is critical. For accessing services from fast food delivery to travel and, of course, social media, the app continues to dominate all other channels.

What SMBs can do now to mitigate the economic outcomes of Covid-19

If you’ve ever watched a disaster movie, you know the basics of staying prepared: stockpile food, avoid large crowds, and make sure to (vigorously) wash your hands. The same advice holds true for the novel coronavirus, Covid-19, which has infected more than 220,000 people at the time of writing. But while everyone is busying themselves with canned foods and sanitary wipes, there’s something else you need to prepare: your business.

The economic costs of the coronavirus are predicted to surpass those of the 2003 SARS epidemic. Already, the virus has caused numerous businesses to close their doors or decrease their output – and as the infection continues to spread, these consequences will only worsen. From event cancellations to productivity loss, the coronavirus means big changes for your company. And the only way to get ahead of these changes is by understanding what you’re up against.

If you’re the leader of an SMB, it’s time to start preparing. The best way to do this is by utilising the right datasets to predict how your company will be affected. In short, the only way to prepare your company is by knowing how it operates, what losses it can handle, and how it can adapt.

Through data analytics, your company will gain the time and tools to batten down, strategise, and overcome. 

Event cancellations

As the coronavirus continues to spread, event cancellations will become the norm. Already, several high-profile conferences have shut down, which means fewer networking and partnership opportunities. Of course, staying safe is more important, but that doesn’t mean your business won’t take a hit. Thus, it’s important to know which conferences are worth the risk and which ones are worth a pass.

Data analytics can provide a better idea of how different conferences have influenced your business. If you’ve gone to a conference annually with no return, it doesn’t make much sense to go again. But if your business has recently seen a spike in demand for a specific product, it might make sense to attend conferences focused on that offering. Ultimately, the decision of whether to attend will fall to you, but data visualisation can help you decide by identifying patterns and weighing the pros and cons.  
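As a hypothetical illustration of that kind of analysis – every figure and name below is invented – a few lines of pandas are enough to compare conference series by return on spend:

```python
# Toy sketch: ranking conference series by average return on spend with
# pandas (pip install pandas). All data here is invented for illustration.
import pandas as pd

events = pd.DataFrame({
    "conference": ["ExpoA 2018", "ExpoA 2019", "SummitB 2019", "SummitB 2020"],
    "cost": [4200, 4500, 2800, 3000],        # attendance cost in GBP
    "leads": [3, 2, 14, 17],                 # qualified leads generated
    "revenue_won": [0, 1500, 22000, 31000],  # revenue attributed to the event
})

events["roi"] = (events["revenue_won"] - events["cost"]) / events["cost"]

summary = (events.assign(series=events["conference"].str.split().str[0])
                 .groupby("series")[["cost", "leads", "revenue_won", "roi"]]
                 .mean()
                 .sort_values("roi", ascending=False))
print(summary)  # a quick view of which conference series actually pays off
```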

Travel and supply chain issues 

Already, the coronavirus has caused dozens of countries to shut their borders and cancel flights. Obviously, these cancellations can cause several issues for businesses that work internationally. But it’s not just international companies that suffer; travel issues also mean supply chain issues, which can create problems for entirely domestic companies.

If your company relies on parts from overseas, you’re probably about to face a shortage. And it’s important that you understand how this shortage will affect your business. Through the power of BI (business intelligence), you’ll clearly see the economic and productivity consequences of forgoing or stalling the manufacturing process. And once you understand these consequences, you can take proactive steps to mitigate them, perhaps by investing in other products or decreasing your current overall expenses.

Is your company built to work remotely?  

Many companies have already told their employees to work from home. While this may not be feasible for your own company, the simple fact is that some employees will have no other option. If an employee gets sick or needs to care for a loved one, you can expect to see a significant rise in remote work requests. And the more remote employees you have, the more communication issues you’ll likely face. 

Analytics tools can help prepare your business for any communication or availability issues. Additionally, you can get a better idea of what to prioritise as you ready your telecommuting workforce. By developing a more informed understanding of your audiovisual and unified communications needs, you’ll have the resources to better inform your spend. And through this spend, you’ll be able to decrease communication issues and swiftly address any problems that may arise. 

Data analytics provides insight into your company. And through this insight, you’ll gain a better understanding of what your business can and can’t handle. You’ll also gain the tools to better prepare your business for whatever happens next—be it a virus, natural disaster, or any other unexpected event.  


Slack simplifies its platform


Bobby Hellard

18 Mar, 2020

Slack has made a host of changes to make its platform easier to navigate, with new customisable functions for sidebars and shortcuts.

The first, most notable change is to the navigation bar, which gains a function to search recent conversations without the need for much input. This comes with easier ways to see information, with mentions, reactions to your messages, files, people and apps all in one place, ready for reference.

Inside the bar is a new compose button, oddly similar to what you would find in an email client. This is a more convenient way to draft messages before choosing whether to send them to the relevant person or channel, according to Slack. If you stop midway through, a draft will be saved for you – which, again, sounds very much like email.

Another change deals with all those Slack channels that can become a bit of a headache. Currently, these are displayed as a single long list, which can become unwieldy for the many users who have a lot of vital channels that need regular check-ins. To simplify this, the platform is adding customisable, collapsible sections.

The feature, which is restricted to paid plans, will allow users to organise those messy channels, direct messages and apps into sections within the sidebar. Channels can be dragged and dropped, ordered into sections and organised any way you like, and the sections can be named with emojis too.

The many tools available on Slack will also be easier to discover and use without the need to switch between windows and tabs, via new shortcuts in the shape of a lightning bolt icon next to the message input field. One example is an app called ‘Simple Poll’. In the coming weeks, more apps will be made available through these shortcuts.

Rollout of the updates begins today and will continue “over the next several weeks”, the company said, with updates to the mobile versions of the app coming at an unspecified future date. 

Google Cloud postpones Next event after initial online-only move

Google Cloud is postponing its Cloud Next event over coronavirus fears, having previously made the move to stage the conference virtually.

The company announced at the start of this month that Next – originally due to take place in San Francisco on April 6-8, with an expected attendance of more than 30,000 – was going online-only.

Now Google is shelving that plan altogether, although it promises the event will still take place ‘when the timing is right.’

“Google Cloud has decided to postpone Google Cloud Next ’20: Digital Connect out of concern for the health and safety of our customers, partners, employees and local communities, and based on recent decisions made by the federal and local governments regarding the coronavirus,” wrote Google Cloud chief marketing officer Alison Wagonfeld in a blog post.

“Right now, the most important thing we can do is focus our attention on supporting our customers, partners, and each other.

“Please know that we are fully committed to bringing Google Cloud Next ’20 to life, but will hold the event when the timing is right,” added Wagonfeld. “We will share the new date when we have a better sense of the evolving situation.”

Google parent Alphabet has already issued guidance to employees over remote working. As reported by CNN, the company is recommending that all workers in North America, Europe, Africa and the Middle East work remotely. Yet, as alleged by Business Insider (paywall), some contract workers appear not to be bound by this commitment, with employees – both full-time and contractors – sending a memo to executives demanding stronger policies.

Whatever would have been in store at Next, Google Cloud has certainly been busy on the news front this year. Partnerships have been struck, such as with Bharti Airtel, customers have been won in the shape of Lloyds Banking Group and Major League Baseball, while various product launches and iterations have come through, announced at events such as security conference RSA and retail gathering NRF.


Google Backup and Sync: That syncing feeling


Andy Webb
K.G. Orphanides

27 Mar, 2020

A capable file-sync tool, but it’s no backup behemoth

Price 
£4

If your business uses G Suite, then good news: you’re already subscribing to a cloud-based backup and synchronisation solution. How much capacity each user gets depends on your G Suite subscription level. The G Suite Basic tier has 30GB by default, with higher tiers providing unlimited storage and various options to individually upgrade a user’s drive to 100GB or 1TB.

Like Microsoft’s OneDrive-based cloud storage and other syncing-oriented services, it can be extremely useful if you or your corporate users want to access their files from multiple computers, or simply have all their work to hand when they move to a new workstation.

Google’s Backup and Sync tool works the same for business and home users, although the G Suite version includes granular admin tools to enable features such as Drive File Stream – the ability to stream, rather than sync, G Suite content – and offline access.

At install time, you’re prompted to choose which folders you want to back up and whether you’d rather have images and videos automatically compressed – for free storage – or left at their original quality. Other options here allow users to set upload and download speed limits, proxy settings and set specific file types to be excluded from their backups.

Note that, since July 2019, photos and videos are no longer made available via Google Drive, but instead can only be accessed through the Google Photos interface, although both Drive and uncompressed Photos content still count towards your storage allowance.

You’re then asked to choose whether you want to sync everything that’s already in your Google Drive, to copy nothing to the local hard disk, or to sync only selected folders. You can also change the default path of your Google Drive folder.

Once installed, users can dig through the advanced settings to have the application automatically back up files from connected cameras, SD cards and USB devices, and to configure it to automatically delete or retain synchronised copies of removed files.

The backup side of the utility is a more recent addition, but once again, it’s online only. You can’t use the Google Backup and Sync utility to back up data to your network or external storage media, even as an additional feature.

Whatever folders you’ve opted to back up – all the user’s files by default – are automatically copied to cloud storage, where their contents can be browsed and accessed online via the ‘Computers’ tab in the Google Drive web interface.

Unlike your main Google Drive, where data is kept in sync between all connected devices, every computer the user adds gets a dedicated entry here. The folder structure is retained and there are no granular features to control whether or not specific file types are included in your backup.

You can’t schedule backups, but any files that change are instantly uploaded if you have an internet connection, or synced in bulk when you’re next online. By default, 100 versions of a file are saved, but you can manually switch particularly important files to unlimited versioning.

Note that the Backup and Sync application won’t let you back up your entire user space, let alone your entire computer. Your OS and software are beyond its scope, even if you have the kind of internet connection that makes fully cloud-based backups a realistic prospect.

File recovery is a matter of re-downloading any lost files or old versions that you might need. You can download an entire system archive and move files from an old system to a newly-installed system’s synced folders, but this isn’t as smooth as the targeted recovery tools you’ll find in many more fully-featured backup utilities.

Google Drive is an excellent syncing tool, and businesses using G Suite will almost certainly want to have their users’ data synced – although those dealing with sensitive customer and financial data should restrict these files and consider using a private storage solution.

But despite having ‘Backup’ in the name, if you want any kind of disaster recovery protection, you’re probably still going to need a more sophisticated backup tool to work alongside Google Drive’s very capable file synchronisation.

How to migrate your SMB network to the cloud


Andy Webb
K.G. Orphanides

26 Mar, 2020

So, the time has finally come. You migrated the office email to the cloud and the Exchange servers now sit forlorn in the corner of the IT room, collecting dust and half-finished cups of coffee. Then you moved the files to the cloud, and listened to the whine of the file server’s disk array fade away for the last time.

But the domain controller is still there in the rack, the last beacon of hum amongst the windowless desolation of loose carpet tiles and unidentifiable cables that is the server room. All that shiny new cloud infrastructure depends on this last server, home to your Active Directory and the photos of the 2008 office party. It’s time to migrate this vestige to the cloud as well and let silence fall in the server room.

Starting point

In this tutorial, we’ll be looking at a small business setup comprising a single server running ESXi, which hosts a single Windows Server 2012 R2 virtual machine acting as domain controller and file server, plus a firewall/router capable of maintaining an IPsec VPN link to the cloud. The ESXi box needs enough spare disk space to install the Azure Site Recovery (ASR) appliance image that handles the migration. We will be using a pfSense firewall, but any decent business firewall/router should be fine, such as Cisco ASA, Check Point, SonicWall and so on.

We’ll be using Microsoft Azure for this tutorial: the specifics of the migration process vary between cloud providers, so to follow along you’ll also need to use Azure. We assume for the purposes of this tutorial that you’ve already created an Azure account, chosen a region and set up a resource group. Whilst this can be done in almost all Azure regions, you need to set everything up in the same region for it to work.

Our starting network is laid out as follows:

Subnet – 10.10.12.0/24

Server – 10.10.12.50

Firewall – 10.10.12.10

DHCP range – 10.10.12.100-200

ESXi server – 10.10.12.5

Domain – smeoffice.local

Step 1: Virtual network setup

The first thing to do is to set up a virtual network for the cloud server to connect to. In the Azure portal, click on “Create a resource” at the top of the left hand menu bar, and then select the Networking category and choose Virtual network from the top of the right hand menu. Choose an address space, and a subnet within it, and select the appropriate resource group from the pull-down menu. We used 10.20.0.0/16 for the address space, and 10.20.1.0/24 for the subnet.
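If you prefer scripting to clicking, the same virtual network can be created with Azure’s Python SDK. The sketch below assumes the azure-identity and azure-mgmt-network packages, an existing resource group, and placeholder names, region and subscription ID. Note that, unlike the portal wizard in step 2, the SDK route needs the GatewaySubnet added explicitly.

```python
# Sketch: creating the step 1 virtual network with Azure's Python SDK
# (pip install azure-identity azure-mgmt-network). The resource group,
# names, region and subscription ID are placeholders for your own values.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
net = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

net.virtual_networks.begin_create_or_update(
    "smeoffice-rg",             # a resource group you created beforehand
    "cloud-vnet",
    {
        "location": "uksouth",  # everything must live in the same region
        "address_space": {"address_prefixes": ["10.20.0.0/16"]},
        "subnets": [
            {"name": "default", "address_prefix": "10.20.1.0/24"},
            # The portal creates this automatically in step 2; via the SDK
            # the gateway subnet must be added explicitly, with this exact name.
            {"name": "GatewaySubnet", "address_prefix": "10.20.255.0/27"},
        ],
    },
).result()  # block until the deployment completes
```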

Step 2: Set up a Virtual Network Gateway for the cloud network

Next, we need to set up a Virtual Network Gateway for the cloud subnet. This will act as the cloud end of a VPN tunnel from the office to the cloud subnet. Go back to the Azure portal home screen using the menu on the left, and click on “Create a resource” again. Type “Virtual network gateway” into the search box and select it from the drop-down menu.

Click on Create, then give the gateway a name. There are several types of VPN gateway available, with different pricing models. We’ll be using the basic, cheapest option. Select this from the drop-down SKU menu, then select the virtual network we just created from the Virtual network dropdown menu. Enter a name for the public IP address and leave everything else on the defaults.

Now click on Review + create at the bottom of the page. If everything looks OK on the next page, hit the Create button at the bottom of the page. This deployment may take a while, so now is a good time to make a cup of coffee or write a novel.
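Continuing the same hypothetical SDK sketch, this step amounts to a public IP plus the Basic-SKU gateway, and the final call really is the slow part:

```python
# Sketch continued: a dynamic public IP and a Basic-SKU virtual network
# gateway. Reuses the `net` client and placeholder names from the previous
# snippet.
public_ip = net.public_ip_addresses.begin_create_or_update(
    "smeoffice-rg", "vpn-gateway-ip",
    {"location": "uksouth", "public_ip_allocation_method": "Dynamic"},
).result()

subnets = {s.name: s for s in net.subnets.list("smeoffice-rg", "cloud-vnet")}

net.virtual_network_gateways.begin_create_or_update(
    "smeoffice-rg", "cloud-vpn-gateway",
    {
        "location": "uksouth",
        "gateway_type": "Vpn",
        "vpn_type": "RouteBased",
        "sku": {"name": "Basic", "tier": "Basic"},  # the cheapest option
        "ip_configurations": [{
            "name": "gw-ipconfig",
            "subnet": {"id": subnets["GatewaySubnet"].id},
            "public_ip_address": {"id": public_ip.id},
        }],
    },
).result()  # the long-running step: expect a wait measured in tens of minutes
```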

Step 3: Create a local network gateway 

Now we need to create a local network gateway in Azure to represent your office firewall or router. Click on Create a resource, type “local network gateway” into the search bar and select it from the menu as before. 

Click on Create, then give it a name and fill in the office subnet details and the public IP address of the firewall. Select the appropriate subscription and resource group, then click on create.
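The SDK equivalent of this step, continuing the sketch, is short; the gateway IP shown is a documentation placeholder for your firewall’s real public address:

```python
# Sketch continued: the local network gateway representing the office
# firewall. 203.0.113.10 is a documentation-range placeholder address.
net.local_network_gateways.begin_create_or_update(
    "smeoffice-rg", "office-gateway",
    {
        "location": "uksouth",
        "gateway_ip_address": "203.0.113.10",  # your firewall's public IP
        "local_network_address_space": {"address_prefixes": ["10.10.12.0/24"]},
    },
).result()
```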

Step 4: Create a site-to-site VPN from the office to the cloud subnet

We can now create the VPN link between the new cloud subnet and the existing office subnet. Select all resources from the left hand menu, then locate your virtual network gateway on the list and select it. Click on connections, then on the add button.

Name your connection, and select site-to-site from the drop-down connection type menu. Select your local gateway from the list, and provide a shared key for the VPN. Now click OK.
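In SDK terms, still following the same sketch, the connection ties the two gateway objects together with the shared key; the key below is a placeholder, and in practice you should use a long random secret:

```python
# Sketch continued: the site-to-site IPsec connection between the two
# gateways created earlier. The shared key is a placeholder.
vng = net.virtual_network_gateways.get("smeoffice-rg", "cloud-vpn-gateway")
lng = net.local_network_gateways.get("smeoffice-rg", "office-gateway")

net.virtual_network_gateway_connections.begin_create_or_update(
    "smeoffice-rg", "office-to-cloud",
    {
        "location": "uksouth",
        "connection_type": "IPsec",
        "virtual_network_gateway1": vng,
        "local_network_gateway2": lng,
        "shared_key": "replace-with-a-long-random-secret",
    },
).result()
```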

Now we just need to configure the office end of this VPN on your office firewall or router. Don’t forget to add appropriate firewall rules to allow traffic across the link between your office and the cloud subnet.

Step 5: Set up a recovery services vault

In the Azure portal, click on create a resource again, then type “recovery” into the search box. Select backup and site recovery from the list, then click on create. Name your recovery services vault and select the appropriate resource group and region, then click on review + create. Check the details and click on create.

Step 6: Configure the vault and deploy the site recovery server image

From the list of resources, select the vault we just created. Click on getting started in the site recovery column on the overview screen. Now click on prepare infrastructure. Answer the protection goal questions. You’ll be prompted to use Azure Migrate instead.

Right now, we still recommend using Azure Site Recovery (ASR) for this process: Azure Migrate v2, released in July 2019, is still relatively new, and a wider range of support and information resources is available for ASR if you encounter any snags or have unusual use cases. Tick the box to bypass Migrate, answer the last question and click on OK.

Next is the deployment planner. Whilst Microsoft recommends running this tool, it’s largely designed to calculate the required bandwidth for regular replication of your on-site systems to Azure. As we’ll be using this for a one-off migration, you can safely skip this step. Select “I will do it later” from the menu and click OK to move on to the next stage.

Click on the add configuration server button, make sure that the server type is for VMware, then download the virtual machine template and import it into your ESXi server. It’s quite large, so the download will take a while, and the virtual machine image requires about 50GB of disk space on the ESXi server to install.

Once the import is complete and the virtual machine has booted, connect to its console and accept the license terms, then on the next screen set a password for the local administrator account. You can allocate a static IP to this server at this point if you wish, but as it will only be required for a short time, it isn’t really necessary. 

Log in to the console of the ASR server using the password you just set. The ASR setup wizard will start, but before proceeding with it, open Server Manager and disable IE Enhanced Security Configuration (IE ESC), as it will cause irritation later in the process.

Step 7: ASR server configuration

Now you can move on to the ASR setup wizard. First, you need to set a name for this server to identify it in your Azure portal, then provide your Azure portal login so that the ASR server can register itself with Azure. 

If you have a multi-tenancy Azure setup, then you should provide an account associated with the target tenant, but if you have only the default tenant set up then you can use your root account login. Once the wizard has completed, the server will register itself with Azure, then reboot. 

Log in again and wait for the browser to open and display the ASR management page. Select the network interfaces for communication with the on-premises systems, and with Azure. As there is only one subnet in our office network, there is only one interface to choose. You can safely ignore the warning about using a dynamic IP, as this server is not going to be around for very long.

Save these settings, then click on continue to move on to the next page. Here, click on the sign-in link for Azure and tick the box to grant it the required permissions, then select the appropriate subscription and resource group, and the recovery services vault we created in step 5. Click continue to move on to the next stage.

Next, we install MySQL. Tick the box to accept the license, then click on the link to download and install MySQL. Once installation is complete, click continue to move on to configuration validation. Once this has completed, click continue. Once again you may safely ignore the warning about providing a static IP address and continue.

Step 8: Set up VMware and office server access

Now we need to provide the ASR server with the address and credentials for our VMware ESXi server, as well as an account with administrator-level access to the servers we wish to migrate to Azure. Click on the button to add the ESXi server and fill in your server details in the pop-up window.

Once you’ve filled those in, click on add, wait for them to be validated, then click on continue.

Click on the link to add VM credentials, then provide an administrator-level account for the domain controller. Click add and wait for validation, then click on continue followed by the last button to finalise the configuration. This will take a few minutes. Once it has completed successfully, you can log out of the ASR server console.

Step 9: Prepare the domain controller for migration

Now we need to prepare the domain controller for migration. Log in to the domain controller and check for updates. Apply any outstanding updates, and reboot as necessary to ensure that there are no updates pending at the next boot.

Make sure remote connections to this server via RDP are enabled, and that they’re allowed through the firewall in all three profiles, otherwise you won’t be able to access the server remotely after the migration.

Step 10: Set up your replication source and destination

Go back to the Azure management portal, and select the vault we configured earlier. Click on getting started from the site recovery menu, then select prepare infrastructure. Check that the answers you provided in step 6 are still selected, then click OK. Skip the deployment planning as before, and move on to configure the source.

Select the configuration server and vSphere host that we set up from the available options. If you only have one configuration server set up and linked only to one vSphere or vCenter server, they will already have been selected. Then click on OK. On the next page select your Azure subscription, choose which deployment model you want, and click OK. We used the default resource manager model.

Lastly, we need to create a replication policy and associate it with the configuration server. Click on create and associate, enter a name for the policy and click on OK. The other options can be left on the defaults as we will be using this primarily for migration, not for regular snapshots or disaster recovery planning.

Wait for the creation processes to complete, then verify that the policy we just created is shown in the selection box, and click OK to close the policy creation window. Then click OK again to close the prepare infrastructure window and return to the vault administration page.

Step 11: Starting replication

Once started, the replication process will use a lot of internet bandwidth to perform the initial replication. You might want to perform this step outside office hours to avoid slowing down internet access for the users during working time.

To begin replication of the virtual machine to Azure, click on replicate application to open the enable replication setup. On the first page, check that the pre-filled values are correct, and select the configuration server we set up in step 7 as the process server. Click OK to move on to page two.

Once again, check the pre-filled answers are correct, then select your resource group, and choose the network and subnet we created in step 1 for the post-failover Azure network. Click OK to continue.

Select the VM to be migrated from the list of virtual machines on the next page. If you are migrating more than one VM, select them all here before clicking OK to continue. Set the type of disk you want for your VMs and choose the domain admin account we associated with the configuration server from the user account drop-down menu.

What type of disk you choose will depend on your usage patterns and on the costs associated with the disks. As everyone’s usage patterns are different, you will need to work this out for yourself – Microsoft provides tools to help estimate potential costs. Once you’ve made your choice, click on create target resources to continue.

Once the resource creation tasks have finished, check that the replication policy we created in the previous step is selected, and click on OK. We don’t need multi-VM consistency for this migration, but if you are migrating two or more servers that depend on each other, such as mirrored database servers, you may want to enable this. Now click on the enable replication button, and wait for the notification that replication has been enabled. This will take some time, as it needs to install the mobility agent on the target virtual machine and then prepare it for replication. For us it took about 12 minutes. Once this is complete, the initial replication will start.

How long this takes will depend on the amount of data to be moved and the speed of your office internet connection. To keep an eye on the progress, go back to the vault configuration page, select replicated items from the protected items section of the menu and click on the name of the VM in question.

Don’t worry if you get a warning that the initial replication has been flow controlled. It is replicating a large amount of data this first time and, unless you have an extremely fast upload speed, your host’s disks will outpace your internet connection, so replication has to be throttled to prevent the configuration server running out of cache space.

Step 12: Failover test

Before running the migration for real, it’s important to do a failover test to make sure that everything behaves as you expect it to. Once the initial replication has completed, select the vault, then click on replicated items in the menu. Select the virtual machine from the list on the right, then verify that its replication health is OK and that there are no warnings or issues that need to be resolved.

Select test failover from the menu bar, check to make sure that the time on the pre-selected recovery point is within the last hour and select the virtual network we created in step 1 from the drop-down menu. Now click OK to start the test failover running. This will take a few minutes. 

Once the test failover has completed, go back to the home page of the Azure portal and select virtual machines from the menu bar across the top. Select the failover test VM from the list. It’ll have the same name as your on-premises VM with “-test” appended to it. Verify that there are no errors showing and note down the private IP address assigned to it.

Now connect to that IP using your preferred RDP client and log in with your domain admin credentials. If you are unable to connect to the Azure VM, check that the VPN you set up in step 4 is connected.

Check that the server is running as you expected, and that all the services are OK. Once you are satisfied that all is well, disconnect from the server and go back to the Azure portal. Return to the vault page and the replicated items list, and select the server again. Click on cleanup test failover, tick the box to complete testing and click OK. This will remove the test VM and tidy everything up. Don’t leave the test VM running as it will incur costs.

Step 13: Migration time

In this step, we will shut down the on-site virtual machine and start up the Azure replica. As this will cause some downtime for your users, it should probably be done out of hours. This has the added benefit of minimising the likelihood of any changes to files or system state since the last replication point.

Go back to the vault screen and the list of replicated items. Select the virtual machine from the list, and verify that the replication is healthy and the current recovery point objective (RPO) is recent. If everything looks OK, click on failover from the top menu bar. Check the recovery point and tick the box to shut down the VM before beginning the failover, then click OK to start the failover task running.

Click on the notification bell towards the top right of the Azure portal and select the failover task. Monitor it to ensure that all the stages are completing successfully. Under some circumstances (generally relating to VMware licensing or version) it may fail to shut down the on-site VM, so you’ll need to do that manually once the replication has completed. This will not affect the failover process, however.

Once the failover process has completed, cleanly shut down the on-site virtual machine if it is still running, then go to the virtual machines section in the Azure portal and select the new VM created by the failover. Make a note of the private IP address that has been assigned to it.

Now we need to set up the DHCP service on the office firewall to take over from the server. Configure it to allocate the same pool of IP addresses as the server used, but set the DNS server option for DHCP clients to the private IP address of the new cloud VM. Remember to also re-create any static DHCP assignments that were present on the server. 

Refresh the DHCP leases on the office PCs (rebooting them is probably the quickest way), then test connectivity to the cloud VM by pinging it from one of the PCs, first by IP address then by name. Check that any mapped drives are working as expected, but be aware they will be slower than they were when the server was local, unless you have an extremely fast internet connection.

If everything looks fine from the client machines, connect to the server via RDP and check that everything is working as it should at that end. Pay special attention to the DNS zones on the server and make sure that the server’s old IP address has been updated to its new private IP in the forward lookup zones. Correct any remaining instances of the old IP address that you find.

Step 14: Final steps and tidying up

Return to the Azure portal, and to the list of replicated items in the recovery services vault. Select the virtual machine from the list. Ignore the replication errors, as those were caused when we shut down the VMware virtual machine in the office. Select complete migration from the menu bar and click OK to continue. This will disable the replication of the office virtual machine and tidy up the Azure site recovery setup. It’ll take a few minutes to complete.

Once that’s finished, return to the vault page and verify that the office VM is no longer listed on the replicated items page. If you have no further use for Azure Site Recovery at this time, you can delete the config and resources associated with it. 

From the vault page, select site recovery infrastructure, then on the next page select replication policies. Select the replication policy that we created in step 10, then click on the dots at the end of the line for the associated configuration server and select dissociate. Repeat this for the failback policy as well.

Return to the list of replication policies, click on the dots at the end of the line for each of the policies we just dissociated, and select delete. Next select configuration servers from the vault menu, and select the config server from the list. Right click on the office ESXi server and hit delete, then hit yes to confirm. Once that task has completed, click on delete on the upper menu bar to remove the ASR server and hit OK to confirm. Lastly, go back to the vault screen and select delete from the top menu bar, then hit yes on the next page to delete the recovery services vault.

Don’t forget to update the configuration for any devices in the office that do not obtain their IP addresses via DHCP to use the new IP address for DNS, and for any other services configured using the server’s IP address rather than its name.

With that done, you should have fully and completely migrated your entire office infrastructure to the cloud. Welcome to the future.