Slack and Google staff told to work from home due to coronavirus


Sabina Weston

6 Mar, 2020

Slack, Facebook, and Microsoft have all advised their employees to work remotely following the spread of the coronavirus to the west coast of the United States.

The request comes after Google sent home most of its 8,000 staff in Dublin after an employee showed flu-like symptoms on Monday. Meanwhile, Amazon has reportedly asked its employees to test their VPNs to ensure the company could cope with remote working on a massive scale.

In a statement to staff, Slack’s senior vice president of people, Robby Kwok, announced that the company would be shutting its San Francisco offices until Monday.

“On March 5, we learned that a San Francisco-based Slack employee was notified by the Centers for Disease Control and Prevention that they were in an area with potential COVID-19 exposure while traveling overseas,” he said.

Although the staff member was not confirmed to be suffering from the coronavirus, the company said it would close the premises for deep cleaning and urged employees to work from home on Friday.

Facebook and Microsoft have also instructed their San Francisco Bay and Seattle area staff to work from home. Microsoft announced in a statement that two of its employees in Puget Sound were diagnosed with COVID-19.

Earlier this week, the UK government revealed in its report that, due to the coronavirus, it expects up to a fifth of the UK workforce to be off on sick leave at the same time.

The shutdown means that the tech industry’s labour force is working from home on an unprecedented scale. While last year’s CIPD Job Quality Index reported that 54% of the UK’s workforce works flexibly, in the face of recent developments, that number is now expected to increase significantly.

Organisations across the world are now having to face the very real prospect of office closures, forcing the entirety of their staff to work remotely. As such, many companies will need to ensure that their staff are able to access systems from home, including ensuring that all relevant employees have virtual private networks installed and that they can access company intranets.

Western Digital hires Cisco’s David Goeckeler as its new CEO


Keumars Afifi-Sabet

6 Mar, 2020

Hard disk manufacturer Western Digital has appointed David Goeckeler to serve as its new CEO after its former chief Steve Milligan announced his intention to retire late last year.

Goeckeler will leave networking giant Cisco to join the firm as its new leader, having served as executive vice president and general manager for its networking and security business. This division has been valued at $34 billion.

One of his last actions with Cisco was overseeing the release of smart troubleshooting functionality for monitoring applications in January, at the firm’s flagship Cisco Live event.

Western Digital’s former CEO announced his retirement in October, having been with the firm since 2002 and having led it since 2013. He also held senior roles at Hitachi’s storage business from 2007, before rejoining Western Digital in 2012 when it acquired that business.

Joining the firm from Monday 9 March, Goeckeler said the entire industry is now at an exciting inflection point. This, he added, involves all organisations deploying infrastructure that’s software-driven, and powered by data and cloud computing, hinting at the possible direction he may take Western Digital in.

“This megatrend has only just now reached an initial stage of adoption and will drive a massive wave of new opportunity,” Goeckeler continued. “In this IT landscape, the explosive growth of connected devices will continue fueling an ever-increasing demand for access to data.

“With large-scale hard disk drive and semiconductor memory franchises, Western Digital is strongly positioned to capitalize on this emerging opportunity and push the boundaries of both software and physical hardware innovation within an extremely important layer of the technology stack.”

Western Digital’s board chairman, Matthew Massengill, added the new appointee boasted an exceptional track record of driving profit at scale while executing innovative business strategies to expand his division into new markets.

“With experience as a software engineer as well as running large semiconductor development projects,” he continued, “his breadth of technology expertise, business acumen and history of building and operating world-class organizations make him the right person to lead Western Digital in a world increasingly driven by applications and data.”

Milligan will continue to serve as an advisor to Western Digital until September this year, as originally planned.

How to build a cloud-based IP PBX telephony system


K.G. Orphanides
Andy Webb

6 Mar, 2020

Migrating office IT infrastructure to the cloud is an increasingly popular choice for SMEs keen to save on in-house hardware. However, the old office phone system is still there, squatting like a toad in a corner of the comms room, or bolted to the office wall like a brutalist homage to greyish-blue plastic.

This tutorial will show you how to get rid of your old PBX and deploy an open source VoIP solution on the cloud platform of your choice, using Ubuntu 18.04, Asterisk 13, and FreePBX 14.

The first thing you need to do is choose a cloud provider, and set up a virtual network there. If you’ve already migrated some of your IT infrastructure to the cloud, you probably already have this in place, but if not, you’ll need to set it up before starting this tutorial. We used Azure, which is reflected in our screenshots. However, these instructions are platform-agnostic.

Access to this virtual network can be either via a site-to-site VPN from your office, using your cloud provider’s VPN service, or directly to the public IP of the new PBX server. Either way, your office internet connection will need a static IP address. In this tutorial, we’ll be connecting directly to the public IP of the PBX; VPN users will need to adjust the SIP NAT settings accordingly.

With a virtual network prepared, we can start building a cloud PBX.

Step 1: Deploy Cloud VM

Deploy an Ubuntu 18.04 virtual machine with a single NIC connected to that virtual network. If you generate new access keys for this deployment, make sure to save them somewhere safe, and document their location and pass phrase, as well as the username needed to log in. Via your cloud provider’s management console, assign it a static public IP address and set up a security group associated with this IP address. The default rules in the security group will block direct access to the server from the internet.
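
If you’re also using Azure, a deployment along these lines can be scripted with the Azure CLI. This is a minimal sketch only – the resource group, VM name, size and username are placeholders, so substitute your own values:

az group create --name pbx-rg --location uksouth
az network nsg create --resource-group pbx-rg --name pbx-nsg
az vm create \
  --resource-group pbx-rg \
  --name cloudpbx \
  --image UbuntuLTS \
  --size Standard_B2s \
  --admin-username pbxadmin \
  --generate-ssh-keys \
  --nsg pbx-nsg \
  --public-ip-address-allocation static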

Even if your office phones will access this PBX via a VPN, it’ll still need a public IP address in order to communicate with the trunk provider. We’ll cover setting up a trunk later in this tutorial. For now, add an access rule to the security group allowing connections from your office IP address on TCP ports 22, 80 and 443.
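
On Azure, for example, that office access rule might look like the following, with 203.0.113.10 standing in for your office’s static IP:

az network nsg rule create \
  --resource-group pbx-rg --nsg-name pbx-nsg \
  --name office-mgmt --priority 100 \
  --access Allow --protocol Tcp \
  --source-address-prefixes 203.0.113.10 \
  --destination-port-ranges 22 80 443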

Step 2: Install dependencies

Log in to your new virtual machine as a normal user – not as root, even if your cloud provider allows it – via ssh, and run the following commands to update it and install some required software and dependencies.

sudo apt update
sudo apt upgrade -y
sudo apt install tasksel -y
sudo tasksel install lamp-server
sudo apt install sox mpg123 ffmpeg lame mongodb -y

FreePBX version 14 and earlier are built for php5.6 and do not support the newer php7 that Ubuntu 18.04 ships with. To get it working, we need to add the php5.6 repository and install php5.6. Run the following commands to add the repository and install and enable php5.6 on the PBX.

sudo add-apt-repository ppa:ondrej/php < /dev/null
sudo apt update
sudo apt upgrade -y
sudo apt install php5.6 php5.6-cgi php5.6-cli php5.6-curl php5.6-fpm php5.6-gd -y
sudo apt install php5.6-mbstring php5.6-mysql php5.6-odbc php-xml -y
sudo apt install php5.6-xml php5.6-bcmath php-pear libapache2-mod-php5.6 -y
sudo a2dismod php7.2
sudo a2enmod proxy_fcgi setenvif php5.6
sudo a2enconf php5.6-fpm
sudo update-alternatives --set php /usr/bin/php5.6
sudo update-alternatives --set phar /usr/bin/phar5.6
sudo update-alternatives --set phar.phar /usr/bin/phar.phar5.6
sudo sed -i 's/www-data/asterisk/' /etc/php/5.6/fpm/pool.d/www.conf

Next, we’ll add the official node.js repository and install node.js from there. While node.js is available from the standard Ubuntu repositories, a more up-to-date version is required for FreePBX.

curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt update
sudo apt install nodejs -y
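
Before moving on, it’s worth a quick sanity check that the expected interpreter versions are now the active defaults:

php -v     # should report PHP 5.6.x
node -v    # should report v10.x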

Step 3: Add a swap file (optional)

Whilst some people would argue that you no longer need a swap file, Linux is generally happier with one and Asterisk will warn you that it’s missing every time you log in.

sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048
sudo chmod 0600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile
echo '/var/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab
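
To confirm the swap space is active (a quick check, assuming the commands above completed without errors):

sudo swapon --show
free -h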

Step 4: Install Asterisk

Install the Asterisk VoIP PBX package. By default this will also install the sound files for English, but if you need other languages you’ll have to install those separately. As an example, we’ll add French as well. To keep things simple, we’re using the version from the default Ubuntu repository, rather than using a third party repo to get a newer version.

sudo apt install asterisk -y
sudo apt install asterisk-core-sounds-fr -y
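
The Ubuntu package starts Asterisk automatically. Before proceeding, you can confirm the service is up and check the installed version:

sudo systemctl status asterisk
sudo asterisk -rx "core show version"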

Step 5: Configure Apache

To configure the PBX, we’ll install the FreePBX web GUI. We’ll be using version 14 of FreePBX. However, before we can do that we need to change a few Apache settings. Run the following commands to set up Apache for FreePBX:

sudo a2enmod rewrite
sudo sed -i 's/\(^upload_max_filesize = \).*/\120M/' /etc/php/5.6/cgi/php.ini
sudo sed -i 's/www-data/asterisk/' /etc/apache2/envvars

Now open /etc/apache2/apache2.conf in your preferred text editor and go to line 170. Remember to use sudo to edit this file as root. We need to change the override permissions for the webserver root directory so that FreePBX can use rewrite statements in htaccess files without causing errors. Find the section that looks like this:

<Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
</Directory>

Change the AllowOverride value from None to All, so that it looks like this:

<Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
</Directory>
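
Alternatively, if you’d rather script the change than edit the file by hand, this sed one-liner performs the same edit, assuming the stock apache2.conf layout:

sudo sed -i '/<Directory \/var\/www\/>/,/<\/Directory>/ s/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf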

Now restart Apache and PHP with the following commands:

sudo service apache2 restart
sudo service php5.6-fpm restart

Step 6: Download and install FreePBX

We’ll be using the latest release of FreePBX version 14. Download and unpack it:

cd ~
wget http://mirror.freepbx.org/modules/packages/freepbx/freepbx-14.0-latest.tgz
tar xvzf freepbx-14.0-latest.tgz

This will place the FreePBX installer into a directory in your home folder called “freepbx”. Go to this directory and install FreePBX by running the following commands. You can safely ignore the PHP warning it generates. The installer will ask you to provide multiple responses relating to install locations and database setup. Just press enter each time to accept the default value.

cd ~/freepbx
sudo ./install
sudo rm /var/www/html/index.html

That last command removes the default index file that was installed with Apache. This is no longer needed, as the FreePBX package provides its own PHP-based index file.
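
If you’re scripting the build and would rather skip the installer’s prompts entirely, the FreePBX installer also accepts a non-interactive flag that takes the defaults for you (run from the same directory):

cd ~/freepbx
sudo ./install -n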

Step 7: Initial setup

For most of the remaining steps, we move away from the command line, and configure the system using the FreePBX web front end. Open your preferred web browser, and go to the public IP address of your new PBX.

You’ll be presented with the initial setup screen. Enter an admin username, password and email address, and make a note of the username and password somewhere secure. It’s also a good idea to set the system identifier to the same value as the hostname you chose for the virtual machine. This avoids any confusion that might arise from the PBX having a different name at the command line than it does in the web interface. Finally, click on the Submit button in the bottom right to close this screen.

Step 8: First login and module configuration

Once you’ve completed the initial setup process, you’ll be taken to the login screen. Click on FreePBX Administration and log in using the admin username and password you chose earlier. As this is your first login, you’ll be asked to set the system locale and timezone. Set them to the correct values, and click on the submit button.

Once this is done, you’ll be taken to the system dashboard. This is the landing page when you log in to the interface, and it presents you with information about the current system state.

Along the top of the page are various drop-down menus for administering the system. Note the Apply Config button at the top right. This is used to commit and activate changes made in the web interface to the PBX. It’s only shown when there are uncommitted changes. Don’t click on it yet.

The first thing to do is to update the FreePBX modules. Mouse over the Admin menu at the top left, then select Module Admin from the menu. Click on Check Online to populate the list and see if there are any new module updates available. Once it’s finished, click on the Upgrade All button on the right, above the module list. This flags all modules with available updates ready for upgrading.

Towards the top left of the module update page is a list of repositories. We will only be using the pre-selected standard repository, not the extended or unsupported ones.

Now click on the process button towards the top right. On the next page, click Confirm and wait for the modules to be updated. Once the process completes, click on the Return button at the bottom right corner of the progress box that popped up when you confirmed the upgrade. This will take you back to the module admin page.

Step 9: Basic module installation

The default installation of FreePBX only comes with the core modules. For a basic office PBX, we’ll need to install a few more to set up some standard office features. From the list of modules available, select the following and click on the Download and Install button.

  • From the Admin section, select the Backup and Restore module
  • From the Applications section, select the Ring Groups and Calendar modules
  • From the Reports section, select the Call Event Logging module

With those four selected, click the Process button, and confirm the install on the next page. Once complete, click on return to close the status window and go back to the module admin page one more time. Click on Check Online again, then install the Time Conditions module from the Applications section in the same way you did the previous modules.

Step 10: Advanced settings menu

Mouse over Settings at the right-hand end of the menu bar, then select Advanced Settings from the drop-down menu. Scroll down to Dialplan and Operational. Locate the Country Indication Tones setting and set it to the correct value for your country. Then scroll down a little way, still within this section, and locate the SIP Channel Driver setting. Set this to “chan_sip”. Scroll down to the bottom of the page and click on Submit.

Next, mouse over the Settings menu again and select Asterisk SIP Settings from the menu. On the General SIP Settings tab, locate the NAT Settings section and click on the Detect Network Settings button to automatically fill in the correct NAT settings.

If you are connecting to the PBX via a VPN from your office to the cloud network, then you will need to add your office IP ranges to the list of local networks here.

Next, click on the Chan SIP Settings tab. At the top of the options, set NAT to yes, then scroll down the page until you reach the Advanced General Settings section. Set the Bind Port value to 5060 and the TLS Bind Port value to 5061. Scroll down to the bottom and click Submit in the bottom right.

Step 11: Adding extensions and a ring group

Next, we need to set up some extensions. We’re going to use SIP, the most common extension type used for both VoIP desk phones and softphones. Mouse over the Applications menu and select Extensions from the drop-down. Click on the Add Extension button and select New Chan_SIP Extension.

Enter a number and display name for the new extension, and make a note of the extension number along with the “Secret” value. You’ll need these, as well as the IP address of the PBX, to configure your phones. You can leave the Outbound CID value blank as, in this case, it’ll be set by the trunk when we configure it later.

Next, click on the Voicemail tab, enable voicemail and set a password. Voicemail is optional, but it’s useful to have at least one extension with it enabled as a failover destination for unanswered incoming calls.

Now go to the Advanced tab, locate the NAT Mode option and set it to “Yes – (force_rport,comedia)”. Finally, click the Submit button at the bottom right. Repeat this process for each new extension number you need to add to your system. We used three digit extension numbers, but you can use any length you wish.

Those of you using a site-to-site VPN to access the PBX will instead want to set the NAT mode for your extensions to "No – (no)".

With that done, click on the Apply Config button at the top right, and wait until the “Reloading” message disappears before continuing below.
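
At this point, you can optionally confirm from the server’s shell that your new extensions have been loaded into chan_sip and that it’s listening on the port set earlier (both commands assume the defaults used in this tutorial):

sudo asterisk -rx "sip show peers"
sudo ss -lnu | grep 5060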

Go back to the Applications menu and select Ring Groups from the drop-down. Click on Add Ring Group. On the next page, the main details you need to fill in are the ring group number, the group description, and the extension list. Extensions can be easily added to the list using the User Quick Select drop-down to the right of the Extension List box.

It’s best to use a different number length for ring groups than for extensions, as this clearly distinguishes them for users. We’ll use four-digit numbers for our ring groups, but you can use however many digits you want.

Lastly, scroll to the bottom of the page, and click on the “Destination if no answer” box. Select Voicemail from the list of options, and choose which extension’s voicemail box unanswered calls should go to.

Step 12: Add a trunk and outbound route

In order to make and receive calls, the PBX will need a trunk. VoIP trunks are available from many different providers. Which one is best for you will depend on your requirements and budget.

Mouse over the Connectivity menu, and select Trunks from the drop-down menu. Click Add Trunk and select the correct type to match your VoIP trunk provider’s offering. In this case we will be using a SIP trunk, as those are the most common type.

Under the General tab of the Add Trunk page, choose a name for your trunk. If you plan on having multiple trunks from different VoIP providers, it helps to name each trunk after its provider. Set the Outbound CallerID and Maximum Channels values as per the values supplied by your trunk provider.

Next, click on the SIP Settings tab and input the details provided by your VoIP trunk provider. For the inbound and outbound trunk names, we used the trunk name we provided on the General tab, with “_in” and “_out” appended.

Next, we need to add an outbound route for making calls outside of the PBX. We’re using the traditional prefix method, in this case prefixing a number with a ‘9’ to get an outside line. You can alternatively program the PBX to determine whether a number is internal or external based on matching the pattern of the number, but that’s more complicated, as you have to know all the possible number patterns that your users might dial.

Mouse over Connectivity on the menu bar and select Outbound Routes from the drop-down menu. Click on Add Outbound Route. On the Route Settings tab, give the route a name and select the trunk you just created from the drop-down list for Trunk Sequence for Matched Routes.

Next, click on the Dial Patterns tab and fill in the dial pattern. Set the prefix to ‘9’ and put a full stop in the “match pattern” box. This will match any number you dial that starts with a ‘9’, and strip that leading ‘9’ off. So to call 0343 222 1234 – an outside line – you would actually dial 90343 222 1234. Finally click on Submit at the bottom right.
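
Under the hood, the route you’ve just defined behaves much like the following Asterisk dialplan fragment. This is purely illustrative – FreePBX generates its own, more elaborate dialplan, and "mytrunk_out" is a placeholder for your trunk’s name:

[outbound-example]
; _9. matches any dialled number beginning with 9
; ${EXTEN:1} strips that leading digit before handing the call to the trunk
exten => _9.,1,Dial(SIP/mytrunk_out/${EXTEN:1})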

Step 13: Add outbound routes to emergency services

All phone systems must be able to make emergency calls and prioritise them over normal calls. To do this, a special emergency outbound route is created.

Once again, mouse over Connectivity on the menu bar, select Outbound Routes from the menu, then click on Add Outbound Route. On the Route Settings tab, give the route a name like “Emergency” or “999”, set the Route CID to your geographic phone number, as provided by your VoIP provider, and set the route type to “Emergency”.

This last step gives calls to emergency services priority over other routes. In the event that all channels are in use when someone places an emergency call, the PBX will drop another call in order to free up a channel for the emergency call. Set the Route Position to first, above your normal outbound route.

Next, click on the Dial Patterns tab and add patterns that recognise calls to 999 and 112 with or without the outside line prefix: match patterns of 999 and 112, plus the same two numbers again with ‘9’ in the prefix box so that the leading 9 is stripped.

Step 14: Adding inbound routes

To receive calls from outside, an inbound route must be defined. This tells the PBX where to route incoming calls based on the number called (referred to as DID) and the caller ID (CID).

Mouse over Connectivity on the menu bar and select Inbound Routes from the menu. Click on Add Inbound Route. On the General tab, enter your main office phone number in the DID Number box, without any spaces. Enter it in the same format that your VoIP provider used when they gave it to you. Add a description for this inbound route in the box provided at the top of the section. At the bottom of the section, set the destination to the ring group we created earlier, then click the Submit button at the bottom right.

Click on the Apply Config button at the top right to commit the new configuration entries to the PBX.

Step 15: Add cloud firewall rules for your phones and trunk provider

In order for the phones and the VoIP trunk provider to communicate with the PBX, we will need to add some access rules to the cloud provider’s firewall settings for this virtual machine.

You need to open ports 5060-5061 and 10000-20000 for both TCP and UDP protocols from the IP addresses of your trunk provider and your office. This rule will allow the phones and the trunk provider to communicate with the PBX properly. Refer to your cloud provider’s documentation for details on how to configure the firewall.
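
On Azure, for example, such a rule might look like the following – here 198.51.100.0/24 stands in for your trunk provider’s published address range and 203.0.113.10 for your office IP, so substitute the real values:

az network nsg rule create \
  --resource-group pbx-rg --nsg-name pbx-nsg \
  --name voip-sip-rtp --priority 110 \
  --access Allow --protocol "*" \
  --source-address-prefixes 198.51.100.0/24 203.0.113.10 \
  --destination-port-ranges 5060-5061 10000-20000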

Capacity and costs

Your VM requirements, and therefore your costs, will depend entirely on the scale of your deployment. A basic PBX for an office with only a few staff won’t need much in the way of resources, and could run happily on a low-power virtual machine with a single core and 2GB RAM.

However, if you have several hundred users making a large number of simultaneous calls, then you’ll need a significantly more powerful virtual machine with much more memory. Likewise, site-to-site VPNs vary in price depending on your requirements.

Don’t choose a larger virtual machine than you actually need, and don’t use site-to-site VPN features unless you really need them. Only provision what you actually require, and you’ll keep your costs to a minimum.

Q&A: UK Cloud Awards judge Andi Mann


Cloud Pro

5 Mar, 2020

Please could you tell us a little bit more about who you are and what you do?

I am a lifelong technologist with a global perspective, having worked for 30 years in multiple roles for enterprise IT teams, as a leading industry analyst, and with several software vendors across Europe, US, and Asia-Pacific.

Currently, I work at Splunk as a strategic advisor, learning from research, customers, thought leaders, so I can advocate and lead innovative product development internally; and advocate and advise customers and others externally at conferences, in journals, and directly with technology and business leaders.

How would you describe the UK Cloud Awards in a nutshell?

Best in show! The UK Cloud Awards offer a revealing look at how IT is driving UK businesses forward and recognises ‘the best of the best’ in excellence and innovation.

What appealed to you about becoming a judge for this year’s UK Cloud Awards?

As a newbie to the UK Cloud Awards, I was particularly excited just to learn from all the entrants, and see the innovation and expertise they are bringing to the industry.

Beyond that, I was also attracted by the opportunity to leverage my own experience and expertise in cloud, digital, automation, and more to help recognise modern leaders who are making a difference to how businesses benefit from technology.

What are you most looking forward to about being involved in this year’s awards?

Without a doubt, I am most looking forward to learning about the amazing developments, innovations, and especially the business impact that this year’s award nominees are bringing to the UK, and to the world.

This year’s awards have had a bit of a makeover, with new categories and some other tweaks. Tell us why people should be getting excited about all of that/the awards?

Today, every business is a technology business – and not just a cloud business, or an IoT business, or a digital business. Today, every technology matters – it may be a new DevOps approach that tips you over the edge to beat your closest competitors; it may be a collaboration project that drives customer satisfaction to record levels; it may be a big data analytics outcome that delivers new sales and builds revenue; and so on.

It is not enough to have excellence in just one area, so this year’s awards recognise that technology differentiation matters across the board, by shining a light on outstanding achievements across many different technologies and methodologies.

Do you have a category/categories you’re most excited about?

I am most excited about the Digital Transformation Project of the Year, the DevOps Project of the Year, and the ML/AI Project of the Year categories. Digital transformation is a buzzword-made-real that is changing not just the IT industry but the world, so I have high expectations of being amazed by entrants in this category. I have been deeply engaged with DevOps for about a decade – almost as long as anyone – but love to keep learning from practitioners, so I expect to learn from this category.

With my work, I am heavily focused on how businesses bring data, analytics, machine learning, and artificial intelligence to everything from IT Ops/App Dev and cyber security to BI to Edge and IoT, so I expect to be fascinated to see the innovative developments in this category.

What are you looking for when you’re reading an entry? How can people make sure theirs stands out?

I will certainly be looking at how innovative or differentiated the entry is, but primarily I will look for the impact that the project, technology, or approach has had on business goals – or for non-profits, on constituent or member outcomes. We can all deliver new technology, but innovation is much more than having a good idea – imagination without execution is merely hallucination!

For me, it is not enough to demonstrate how amazing a technology implementation is per se; entrants must show why it mattered. If they can show – especially in real, measurable ways – how their entry has delivered on specific, important, business-level goals, they will have a much better chance of getting a vote from me.

What would you say to those thinking about entering but haven’t fully decided to do so as yet?

Why wait?! You cannot win if you don’t enter! Even to make the shortlist will be a major source of inspiration, not just for managers, not just for marketing, but for the teams of individual contributors who did the hard work, likely over many months or even years, to make your project a success.

So, if you are not going to enter for the amazing prizes, for the accolades of your peers, for the management recognition, or for the opportunity to market your win to customers, do it to show the hard workers who made your project possible that this is an achievement you are proud of, and which you want to show off to the world.

Do you have a standout cloud moment from 2019?

Not one single moment, but I would cite the number and severity of major breaches of cloud-based data that have thrown some cold water on the raging fire of cloud computing. This continues to be something our industry needs to address.

Security, privacy, compliance, governance – these should be job #1 for any business. Customers are demanding it, but too many cloud providers are not living up to their promises. It is up to the cloud customers to make that change and make security a priority.

Major League Baseball makes Google Cloud official cloud partner as AWS strikes out

Major League Baseball (MLB) has selected Google Cloud as its official cloud partner, appearing to bring to an end the company’s long-standing relationship with Amazon Web Services (AWS).

In what both companies described as a ‘powerful multi-year collaboration’, MLB will migrate its cloud and on-premise systems to Google Cloud, and bring in Google to power its Statcast tracking technology. Machine learning – which was cited when MLB extended its AWS deal in 2018 – is also a factor in the move, with the league also citing application management and video storage capabilities.

MLB will continue to use Google’s Ad Manager and Dynamic Ad Insertion to power its advertising arm for the third season running – a factor noted by the company in this change.

“MLB has enjoyed a strong partnership with Google based on Google Ad Manager’s live ad delivery with MLB.tv as well as YouTube’s strong fan engagement during exclusive live games,” said Jason Gaedtke, MLB chief technology officer in a statement. “We are excited to strengthen this partnership by consolidating MLB infrastructure on Google Cloud and incorporating Google’s world-class machine learning technology to provide personalised and immersive fan experiences.”

Two months into 2020, Google continues to be by far the noisiest of the main cloud providers in terms of updates and announcements. Among the customers it has acquired this year are Lowe’s and Wayfair, announced during the NRF retail extravaganza in January, and decentralised network Hedera Hashgraph.

What makes this move interesting is AWS’ existing expertise in the sporting arena, with several marquee brands on board. Formula 1 and NASCAR are the best known among governing bodies, with the Bundesliga signing up earlier this year. Google Cloud’s best-known sporting customer to date is the Golden State Warriors, in a deal announced this time last year.

CloudTech has reached out to Google Cloud to confirm whether this is a single cloud provider deal, but it is worth noting the MLB logo no longer appears on AWS’ dedicated sports client page. As of February 29, it was still there (screenshot: CloudTech).

You can read the full announcement here.

Photo by Jose Morales on Unsplash


Oracle expected to slash 1,000-plus jobs in Europe


Keumars Afifi-Sabet

5 Mar, 2020

Oracle is preparing to cut more than a thousand jobs across locations in Europe as part of a wider restructuring following several turbulent months.

The software giant may slash up to 1,300 staff in Ireland, as well as possibly Amsterdam and Malaga, with employees invited to reapply for their roles, according to the Irish Times.

The cuts follow inconsistent financial results, with second-quarter revenues for the 2020 fiscal year, which closed on 30 November 2019, falling short of analyst expectations.

Staff based in Ireland were invited to an all-hands meeting with managers on Wednesday afternoon, according to the report, in which they were told about the plans. 

An unnamed spokesperson told the Irish Times that the company would continue to rebalance resources and restructure teams as Oracle’s cloud business grows. IT Pro approached Oracle for confirmation but the company declined to comment.

The company has undergone several staffing fluctuations over the last year or so, with the latest round of layoffs coming almost exactly a year after it cut approximately 350 roles in a bid to hew more closely to AWS’ cloud model.

The wider ambitions meant cutting back staffing in areas such as the Oracle Cloud Infrastructure (OCI) unit, as well as its infrastructure as a service (IaaS) business aimed at compute, storage and network resources.

The latest round of up-to-1,300 job cuts could affect employees working across its sales, business development and solutions engineering units. 

The firm’s second-quarter 2020 financial results saw revenue from its cloud and on-premise licensing business drop by 7% to $1.1 billion, while cloud services and license support revenue rose 3% to $6.8 billion.

The news comes in contrast with the company’s ambitions set out in October last year, with Oracle’s executive vice president for its OCI unit Don Johnson outlining plans to hire 2,000 additional workers.

The hires were expected to feed into the expansion of its cloud computing services as the company attempts to compete more strongly against the likes of Azure and AWS. These jobs would be added in the US and India at the firm’s software development hubs.

A day in the trenches with IT operations: How to create a more seamless practice

Traditionally, IT operators are responsible for ‘keeping the lights on’ in an IT organisation. This sounds simple, but the reality is harsh, with much complexity behind the scenes. Furthermore, digital transformation trends are quickly changing the IT operations responsibility from ‘keeping the lights on’ to ‘keeping the business competitive’.

IT operators are now not only responsible for uptime, but also for the performance and quality of digital services provided by and to the business. To a large extent, maintaining available and high-performing digital services is precisely what it means to be digitally transformed.

I’ve spent my fair share of time as an MSP team lead, and on the operations floor in large IT organisations. The job of an enterprise IT operator is full of uncertainty. Let’s look at a typical day in the life of an IT operator, and how she addresses common challenges like:

  • Segregated monitoring and alerting tools causing confusion and unnecessary delays in troubleshooting
  • Resolving a critical issue quickly through creative investigations that go beyond analysing alert data
  • Legacy processes, such as from ITIL, working against the kind of open collaboration required to fix issues in the DevOps era

Starting the day with a critical application outage

Karen is a senior network analyst (L4 IT operator) who works for a large global financial organisation. She is considered a subject matter expert (SME) in network load balancing, network firewalls, and application delivery. She is driving to the office when she gets a call informing her that a major banking application is down at her company. Every minute of downtime affects the bottom line of the business. She finds parking and rushes to her desk, only to find hundreds of alert emails queued in her inbox. The alerts are coming from an application monitoring tool she can’t access – more on that later.

The L1 operator walks to Karen’s desk in a distressed state. Due to the criticality of the app, the outage caused the various monitoring and logging tools to generate hundreds of incidents, all of which were assigned to Karen. She spends considerable time looking through the incidents with no end in sight. Karen logs on to her designated network connectivity, bandwidth analysis, load balancer and firewall uptime monitoring tools—none of which indicate any issues.

Yet the application is still down, so Karen decides that the best course of action is to ignore the alert flood and the monitoring metrics and tackle the problem head-on. She starts troubleshooting every link in the application chain, confirming that the firewall ports are open and that the load balancer is configured correctly. She crawls through dozens of long log files and finally, five hours later, discovers that the application servers behind the load balancer are unresponsive: bingo, the culprit has been identified.

Root cause found: now more stalls

Next, Karen contacts the application team. The person responsible for the application is out of the office, so the application managers schedule a war room call two hours later. Karen joins the call from home, along with 12 other individuals, most of whom she’s never worked with in her role.

The manager starts the call tackling all angles of the issue. Karen, however, knows that the issue is caused by two application servers. After a 30-minute discussion, Karen shares her screen and proves that the issue was caused by the app servers. After further investigation, the application team discovers that an approved change executed the night before had changed the application’s TCP port: a critical error on the application team’s part.

Later investigations showed that an APM (application performance monitoring) tool had generated a relevant alert and an incident that could have helped solve the issue much more quickly. The alert was missed by the application team and, adding to that misery, the ITOps team didn’t have access to the APM system. Karen had no way of gathering telemetry (or confirming its absence) directly from the APM tool.

A day later, the fix is applied

The application team requested approval for an emergency change so they could fix the application configuration file and restart the servers. The repair took less than 10 minutes, but the application had been down for almost 24 hours.

It is now 10pm on Monday. Karen is exhausted, having worked a 14-hour day with no breaks. How does the business measure the value of the time Karen spent resolving this outage? While her manager applauded her analytical skills, it wasn’t the best use of her specialised skill set and definitely not how she should have spent her day (and night).

Does this sound familiar?

I’m sure the story above resonates with IT operations professionals and it is unfortunate that similar occurrences are common.

Here are some takeaways:

  • The segregated monitoring and alerting tools did not provide operational value. That’s because the alerts and metrics are not centralised for viewing by all the appropriate stakeholders, and aren’t mapped to the business service – in this case, the banking application
  • Just because a tool generates alerts and incidents, it doesn’t necessarily help the user locate the root cause
  • A flood of uncorrelated alerts and incidents makes matters worse. Many operators spend a lot of time looking at irrelevant data, sifting through the noise with their naked eyes. Karen quickly decided to go to the source, the application that was down, but not all ITOps people will do that
  • Legacy processes (such as ITIL) are designed to restrain the user from abrupt changes by implementing a lot of process red tape. On the flipside, this prevents the operators from fixing issues quickly when they arise. Karen did not have access to the application monitoring tool nor was she allowed to communicate directly with the application team.  She needed a manager to schedule a war room call. This hierarchy created costly delays which turned a five-to-10 minute fix into an all-day outage

Creating a better path for IT operators

Too many enterprise IT operations teams are living in the past: disconnected tools and antiquated processes which don’t map well to the pace of change and complexity in modern IT environments. Applications are going to live between on-premises and multi-public cloud for the foreseeable future. Coupled with the growing volume of event data and the rising velocity of deployments, complexity will grow and along with it, increased risks to user productivity and customer experience. 

Here’s an action plan for 2020 to better manage IT performance and enable ITOps teams to be more productive:

  • It’s time to seriously consider machine learning alert and event correlation platforms: It is no longer humanly possible for operators to sift through the flood of alarm data. Machine-learning alert correlation products are maturing and providing tangible value to IT organisations
  • It’s also time to restructure relic processes designed for mostly static infrastructure and applications: Today’s application agility requires training of IT operators so that they intuitively identify business risk and cooperate fluidly to keep digital services in optimal state
  • Finally, it’s time to reconsider the traditional siloed approach for ITOps monitoring and alerting: Having the observable data separated in different buckets does not provide much value unless we can correlate it to the respective business services

In taking these three steps, we can create a new IT operations practice that supports and even enhances the elusive digital transformation that almost every company today would like to achieve.


Q&A: UK Cloud Awards judge Mitchell Feldman


Cloud Pro

4 Mar, 2020

Please could you tell us a little bit more about who you are and what you do?

I have been in the IT industry for 20-plus years with 10-plus years in the cloud space. As part of Hewlett Packard Enterprise (HPE), my role is to promote and amplify the amazing work we do in hybrid cloud.

I am creative by design and I’m never happier than when I am building content that wins the hearts, minds and, where possible, the souls of our audience.

How would you describe the UK Cloud Awards in three words?

Forward-thinking, inspirational, prestigious

What appealed to you about becoming a judge for this year’s UK Cloud Awards?

The UK Cloud Awards have been very kind to me as a previous winner, but more so I love their passion for creating a better industry.

What are you most looking forward to about being involved in this year’s awards?

I can’t lie, I love reading the entries. It’s fascinating to learn about how businesses are challenging the status quo and creating amazing new outcomes by leveraging the cloud. This is digital transformation at its best.

This year’s awards have had a bit of a makeover, with new categories and some other tweaks. Tell us why people should be getting excited about all of that/the awards?

Winning an award at this event will bring more success than the award itself. Winning (and even being a runner up) will showcase your business in front of some of the most important people in the industry. It’s win-win.

Do you have a category/categories you’re most excited about?

The geek in me makes me gravitate to transformational projects so it has to be Internet of Things (IoT) Project of the Year.

What are you looking for when you’re reading an entry? How can people make sure theirs stands out?

Video is king! For me, the more you invest in your entry, the greater chance you have of showing the judges just how good you are. I have seen amazing use cases fail due to a low-quality entry. Invest in this initiative like it’s the best customer you could ever win, it will pay dividends for years to come.

What would you say to those thinking about entering but haven’t fully decided to do so as yet?

Why wouldn’t you want to make your business more famous?

Do you have a standout cloud moment from 2019?

The industry changing its narrative and coming to the realisation that we live in a hybrid cloud world.

What are your top three cloud predictions for 2020?

Containerisation will continue to dominate.

AI use cases will be more pervasive than ever before.

Social media platforms will have more accountability to protect society.

Is there anything else you would like to add?

Third-party endorsement of your business’s success (i.e. winning an award) is one of the most powerful marketing tools you will ever have. Take advantage of this amazing opportunity to raise the profile of your business and become the one that everyone else aspires to be.

Q&A: UK Cloud Awards judge Anthony Hodson


Cloud Pro

3 Mar, 2020

Please could you tell us a little bit more about who you are and what you do?

I am an AWS Solution Architect and Consultant working for Managed Service Provider (MSP), Ensono. Ensono serves mid-tier to large enterprises. I help enterprises understand the opportunities cloud has, and then move workloads into or grow workloads within AWS’ cloud. My background spans traditional managed hosting, DevOps tooling and advisory, and Fintech. I enjoy seeing the progress that cloud can bring.

How would you describe the UK Cloud Awards in three words? 

Commending Cloud Creativity

What appealed to you about becoming a judge for this year’s UK Cloud Awards? 

In addition to the splendid party, I wanted to hear about the innovations and results that were delivered in 2019. I can then use this to inspire those I speak with in my day job.

What are you most looking forward to about being involved in this year’s awards?

The quality of submissions last year was high. I’m looking forward to more stories of innovation and collaboration and I’m hoping for some that don’t just take a bigger share of the market but make the market bigger.

This year’s awards have had a bit of a makeover, with new categories and some other tweaks. Tell us why people should be getting excited about all of that/the awards? 

This year we’re focusing less on products (three categories) and more on successful projects (ten categories) and recognition for outstanding contributions (five categories). This means recognition comes down to what people actually managed to achieve. We want to recognise those who’ve made the biggest and most innovative strides in progress; after all, productisation can only come after a successful pilot.

Do you have a category/categories you’re most excited about? 

The Internet of Things (IoT) is an area where we see technology enter our day-to-day world. This year we have a new category ‘IoT project of the year.’ I hope to learn how these sensors can be used to improve life on the spinning rock we call home.

What are you looking for when you’re reading an entry? How can people make sure theirs stands out? 

When I read an entry I’m looking for a good story: what was the problem, and how did you know there was one? Why should we care? What were the challenges in overcoming it? What roles were played, and who made up the team? Finally, what measurable outcome was there, and what lies ahead in the sequel? Sending in marketing material lifted from your website usually does not achieve this.

What would you say to those thinking about entering but haven’t fully decided to do so as yet?

Putting together an entry that explains what you’ve created, why it was hard, what outcomes were achieved, and for whom, is the basis for many a good sales pitch. Crystallising this into a concise and moving piece will not only offer the chance of industry recognition, but will also arm you internally with more backing for future projects, extending the ‘DevOps ripple of progress’.

Do you have a standout cloud moment from 2019?

Personally, I particularly enjoyed my time with the CIF and the Containers and Functions as a Service webinar. From a technology perspective, I was excited to see AWS release its ‘serverless’ Kubernetes offering, EKS on Fargate – in doing so, taking out more ‘undifferentiated heavy lifting’ and shortening the cycle from idea to delivery.

What are your top 3 cloud predictions for 2020?

1. AWS provides a fully integrated Disaster Recovery as a Service, manifested through a check-box in the console (or the API, of course)
2. Hyperscale providers will continue to hope their customers align with a single hyperscale cloud; the market (driven by compliance and risk mitigation) will move towards multi-hyperscale cloud, with those savvy enough using Kubernetes to do so. Google, as the Kubernetes mothership, takes ground as the secondary site for these differentiated workloads
3. AWS CEO Andy Jassy’s love of vintage rock spills out of the re:Invent keynote into the AWS re:Play party, with the Eagles returning from retirement…

Is there anything else you would like to add?

All too often in technology, I see great work where it’s hard to prove it made a measurable impact. When you start a project (or write about a successful project), find some data which gives a baseline: ideally it’s quantifiable, but it could be qualitative (surveys, even). For advice and inspiration, read (or listen to) Nicole Forsgren, Jez Humble and Gene Kim’s book ‘Accelerate’. Note there are free chapters on Google Books.

Cloud computing accelerating climate change is a misconception, scientists find

Data centre workloads, powered by the rise in cloud computing, may not be the threat to the climate many have feared, according to a new report.

The study, published in the journal Science last week, argued that while global data centre energy has increased over the past decade, this growth is negligible compared with the rise of workloads during that time.

According to the research, global data centre energy usage in 2018 stood at 205 terawatt-hours (TWh), comprising around 1% of global electricity consumption. This represents a 6% uptick compared with 2010 figures, yet global data centre compute instances rose by 550% over that time. Put as energy use per compute instance, the energy intensity of global data centres has decreased by 20% annually since 2010: a 6% rise in energy spread across six and a half times as many instances works out at roughly 1.06/6.5 ≈ 0.16 of the 2010 energy per instance, consistent with a 20% annual decline compounded over eight years.

The paper cites various improvements as key to this change. Greater server virtualisation has meant a sixfold increase in compute instances with only a 25% rise in server energy use, according to the research. More energy-efficient port technologies, the report cites, have enabled a 10-fold increase in data centre IP traffic with only ‘modest’ increases in network device energy usage.

What’s more, the rise of the hyperscalers has helped. The move away from traditional, smaller data centres – which comprised almost four in five compute instances in 2010 – has resulted in better PUE (power usage effectiveness), thanks to power supply efficiencies as well as stronger cooling systems. Hyperscale facilities and other large, energy-efficient cloud data centres made up 89% of compute instances in 2018, the report estimates.

Average PUE per data centre – the ratio of a facility’s total energy draw to the energy used by its IT equipment, where 1.0 is the theoretical ideal – has also improved significantly. When this publication attended the opening of Rackspace’s UK data centre campus in 2015, its PUE of 1.15 was noted as ‘almost unheard of in commercially available multi-tenant data centres.’

Plenty of initiatives are taking place which show how the industry is looking to harness the planet’s natural cooling systems to create a more sustainable future. In September, SIMEC Atlantis Energy announced plans to build an ocean-powered data centre in Caithness, off the Scottish coast. The company, which according to reports is in the process of arranging commercial deals for the site, is following in the footsteps of Microsoft, which experimented with placing a data centre underwater off Orkney in 2018.

The naturally cooler temperatures of islands in the northern hemisphere, most notably in Scandinavia, have long been seen as advantageous. In what was seen as a landmark ruling in 2016, the Swedish government confirmed data centre operators would be subject to a reduction in electricity taxation, putting the industry on a similar footing to manufacturing, among other industries.

In terms of the hyperscale cloud providers, Google touts itself as leading the way, saying as far back as April 2018 that it had become the first public cloud provider to run all its clouds on renewable energy. The company says its PUE across all data centres for 2019 was at 1.1, citing favourably an industry average of 1.67.

Following the release of the Science report, Urs Holzle, SVP for technical infrastructure at Google Cloud, said the findings ‘validated’ the company’s efforts, which included utilising machine learning to automatically optimise cooling, and smart sensors for temperature control. “We’ll continue to deploy new technologies and share the lessons we learn in the process, design the most efficient data centres possible, and disclose data on our progress,” wrote Holzle.

Amazon Web Services (AWS), the leader in cloud infrastructure, says that as of 2018 it exceeded 50% renewable energy usage and has ‘made a lot of progress’ on its commitment to 100% renewable usage. The company has previously received criticism, with a report from Greenpeace this time last year saying AWS ‘appears to have abandoned its commitment to renewable energy’. Last month, Amazon CEO Jeff Bezos said he would commit $10 billion to address climate change.

CloudTech contacted AWS for comment and was pointed in the direction of a 451 Research report from November which found that AWS’ infrastructure was 3.6 times more energy efficient than the median of enterprise data centres surveyed.

One potential future area of concern with regard to computational power is that of Bitcoin. The energy required to mine the cryptocurrency has led to various headlines, with the University of Cambridge arguing that Bitcoin’s energy usage, based on TWh per year, equalled that of Switzerland. Pat Gelsinger, the CEO of VMware, previously said when exploring the concept of ‘green blockchain’ that the energy required to process it was ‘almost criminal.’

Michel Rauchs, who worked on the Cambridge project, is speaking at Blockchain Expo later this month on whether Bitcoin is ‘boiling the oceans’. His argument is that the question is more nuanced than many believe – not helped by the extreme opinions on both sides.

“The way that Bitcoin is being valued for different people right now is completely subjective,” Rauchs tells CloudTech. “For some people it’s really an essential; for other people it’s some sort of gimmick, and it’s definitely not worth the electricity it consumes.

“There is no easy answer,” he adds. “The only thing that we can say today is that Bitcoin right now is at least not directly contributing to climate change, though the level of energy consumption is really high. You need to look at the energy mix – what sources of energy are going into producing that electricity.”

The report concludes that, despite the good news, the IT industry, data centre operators and policy makers cannot ‘rest on their laurels’. If Moore’s Law is anything to go by – albeit a long-standing dictum which may be reaching the end of its natural life itself – demand will continue to proliferate, with the next doubling of global data centre compute instances predicted to occur within the next four years.

You can read the full article here (preview only, client access required).
