Running extreme workloads in the cloud: A guide

As any IT professional knows, running large and resource intensive workloads in the cloud is extremely difficult. The cloud is often billed as a panacea, but the truth is that for most organisations, architecting large workloads in the cloud is a heroic endeavour – one that must be executed with exact precision. There is no margin for error, and one small misstep can result in nightmares for CIOs.

According to the Forrester study, "Cloud Migration: Critical Drivers For Success", 89% of early migrators have experienced performance challenges after migrating their mission-critical applications. Running mission-critical applications in the cloud is difficult. Mitigating risk, limiting business disruption and ensuring the target architecture will satisfy the most stringent SLAs and performance requirements requires extensive experience and a specialist skillset that is rare in the industry.

A few months ago, I was speaking with a prospective customer about his organisation’s current infrastructure situation. They were in a tough spot: to stay competitive, his company needed to push its SAP workloads to the cloud. However, in his view, this wasn’t going to happen anytime soon. With a 50+ terabyte database and over two million transactions daily, shifting to cloud wasn’t a real possibility – not even a remote one.

He told me, “There’s no way you can do that,” and his general thinking was actually correct. Most cloud service providers can’t handle what he was looking for. However, with the right experience, technology and approach, it can be done; I’ve solved my fair share of difficult engineering problems, including some like this. Here are four recommended features to look for in a cloud provider that will enable your organisation to successfully move extreme workloads to the cloud:

Purpose-built hardware

Firstly, it is vitally important that your cloud infrastructure is powered by hardware that is purpose-built to support mission-critical workloads. Purpose-built hardware provides better performance and control as it is specifically designed to suit the needs of your organisation. Your reference architecture should capitalise on enterprise-grade infrastructure to provide the necessary storage, compute, and networking equipment to handle even the most aggressive workload requirements.

Connected infrastructure as a service

One of the most powerful features for mission-critical cloud migration is Connected IaaS, a dedicated kit that supports the most resource intensive workloads. The environment connects to a general, multi-tenanted infrastructure so that organisations are able to run workloads such as the 50TB, multi-million transaction beast mentioned above alongside more general purpose workloads. Additionally, Connected IaaS helps to meet the most stringent compliance and security requirements, and is capable of connecting with other cloud services.

Cloud management platform

A cloud management platform is essential for providing a unified, cloud-agnostic control plane that brings infrastructure orchestration, enterprise application automation, business intelligence (BI) and service delivery together in a single convenient tool. A cloud management platform allows organisations to run mission-critical enterprise applications in the cloud, with the performance and scalability needed to stay competitive.

Cloud resource technology

Another highly valuable feature is MicroVM, a unique cloud resource technology that ensures cost efficiency, no matter how demanding or complicated the workloads. Research has shown the importance of cost efficiency for organisations, with 52% embarking on their cloud migration journeys to achieve cost savings. As part of the xStream Cloud Management Platform, the MicroVM construct dynamically tailors resource allocation to meet exact workload demands and then, like a utility provider, bills only for the resources used.

So, it is clear that migrating and managing mission-critical applications in the cloud can be a challenging process. Choosing an experienced cloud provider that is able to accommodate your mission-critical cloud needs is essential; features such as enterprise-grade infrastructure, Connected IaaS, a cloud management platform and cost-saving resource technology are all key to ensuring a smooth transition.

Most importantly, enterprises should partner with a provider whose expertise lies in running SAP in the cloud – one whose experts have extensive experience working directly with SAP and other mission-critical applications. Whether, like the client I mentioned, you have a 50TB database or hold the data of millions of customers, there are excellent solutions to ensure the performance and security of your business is not compromised. Given the clear advantage to enterprises, it is unsurprising that the migration of mission-critical applications continues to grow. By 2019, 62% of organisations engaged in active cloud projects are expected to migrate, and with the correct cloud provider, your organisation could be one of them.

Orchestration in the cloud: What is it all about?

Can orchestration be considered a better alternative to provisioning and configuration management, especially in the case of cloud-native applications? We can look at this from a variety of angles: comparing against data centre-oriented solutions; differentiating orchestration of infrastructure (in the cloud and out of it) from orchestration of containers (focusing mostly on the cloud); and looking at best practices under different scenarios.

It’s worth noting here that this topic could span not only a plethora of articles but a plethora of books – but as the great Richard Feynman used to say, it is not only about reading or working through problems, but also about discussing ideas, talking about them, and communicating them to others.

I would like to start with my favourite definition of orchestration, found in the Webster dictionary. Orchestration is ‘harmonious organisation’.

Infrastructure or containers?

When discussing orchestration, inevitably, the first question we ask ourselves is: infrastructure orchestration or container orchestration?

These are two separate Goliaths to engage, but undoubtedly we will face them both in the current IT arena. It all depends on the level of abstraction we wish to attain and on how we organise the stack: which layers we want to take care of ourselves, and which we would rather hand off.

If we have decided to manage at the infrastructure level, we will work with virtual machines and/or bare-metal servers – in other words, either multi-tenant or single-tenant servers. Say we consume our cloud in an IaaS fashion; we are then handed resources such as the aforementioned, plus networking resources, storage, load balancers, databases, DNS, and so on. From there, we build our infrastructure as we prefer.

If we have decided to manage at the CaaS (sometimes seen as PaaS) level, we will be managing the lifecycle of containers or, as they are frequently referred to in the literature, workloads. For those unfamiliar with containers, they are a not-so-new way of looking at workloads. Some of the most popular are Docker, Rkt and LXC. Containers are extremely good for defining an immutable architecture and for building microservices – not to mention they are lightweight, easily portable, and can be packaged up to use another day.
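
As a taste of how disposable a containerised workload is, here is a minimal, hedged sketch using the Docker command line; it assumes Docker is installed and uses the public nginx image purely as a stand-in workload.

docker pull nginx                           # fetch a public image
docker run -d --name web -p 8080:80 nginx   # start it in the background, mapping container port 80 to host port 8080
docker ps                                   # list the running containers
docker stop web && docker rm web            # treat it as disposable: throw it away and recreate it whenever needed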

There are pros and cons to each of these – but for now, let us proceed in discussing the orchestration aspect on these two endpoints.

Infrastructure

There are several choices for orchestrating infrastructure; here are two approaches that seem to be among the most popular in companies today.

Provisioning and configuration management: One way of doing this is the solid, old-school combination of PXE and Kickstart files; it is slowly being replaced by more automated solutions, although some companies still stick to it, or to alternatives such as Cobbler. On the other side, there are tools such as Foreman. Foreman has support for BIOS and UEFI across different operating systems, and it integrates with configuration management tools such as Puppet and Chef. Foreman shines in data centre provisioning and leaves us with an easy-to-manage infrastructure, ready to be used or configuration-managed further.

Once provisioning is complete, we move on to configuration management, which allows for management throughout the lifecycle. There are many flavours: Ansible, Chef, Puppet, Salt, even the old and reliable CFEngine. The last two are my favourites, along with Ansible, a Swiss army knife that has helped me many times thanks to its simplicity and masterless way of working.
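
To make the masterless, SSH-based approach concrete, here is a minimal, hedged Ansible sketch; the inventory file name, the [web] group and the host addresses are made-up placeholders, and it assumes Ansible is installed and can already reach the hosts over SSH.

# inventory.ini (assumed to exist) simply lists hosts under a [web] group, one address per line
ansible -i inventory.ini web -m ping                                         # check SSH connectivity to every host in the group
ansible -i inventory.ini web -m apt -a "name=nginx state=present" --become   # ensure a package is installed, using sudo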

Orchestration and optional configuration management: Now, orchestration implies something conceptually different – as mentioned before, harmonious organisation – and the tool most frequently used nowadays is Terraform. On the upside, it lets you orchestrate in a data centre or in the cloud, integrating with different clouds such as AWS, Oracle Cloud, Azure and even AliCloud. Terraform has many providers, and sometimes the flexibility of the resource management lies in the underlying layer. Besides the cloud providers, it is also possible to integrate Terraform with third parties such as PagerDuty and handle all types of resources. From first-hand experience, that sort of integration was smooth and simple, although, granted, sometimes not mature enough.

Not all providers will yield the same flexibility. When I started to work with Terraform on Oracle Cloud, OCI did not yet have the maturity to do auto-scaling; hence the provider would not let Terraform create auto-scaling groups – a capability so vital that I had taken it for granted after working with Terraform and AWS in the past. So another tip is to take a look at the capabilities of the provider, whether it is a cloud or anything else. Sometimes our tools simply do not integrate well with each other, and when designing a proper architecture, that is an aspect which cannot be taken lightly.

Another plus of Terraform is that it can orchestrate any piece of infrastructure, not only compute machines: it covers virtual machines, bare metal and the like, as well as networking and storage resources. Again, this will depend on the cloud and on the Terraform provider and plugins used.

What makes Terraform a new-generation tool is not only the orchestration, but the infrastructure as code (IaC) aspect. The industry has steered towards IaC everywhere, and Terraform is no exception. We can store our resource definitions in files in any version control system – Git, SVN or any other – and that is massive: it gives us versioned infrastructure, teams can interact and everybody stays up to speed, and it is possible to manage branches and define different releases, separating versions of infrastructure and environments such as production, staging, UAT and so on. This is now considered a must: it is not wishful thinking, but the best-practice way of doing it.
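
As a rough illustration of that workflow, here is a minimal, hedged sketch: a tiny Terraform definition kept under version control and applied from the command line. The AWS provider, region, AMI ID and instance type are placeholders, and it assumes Terraform is installed and cloud credentials are already configured.

cat > main.tf <<'EOF'
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"   # placeholder image ID
  instance_type = "t2.micro"
}
EOF

git init && git add main.tf && git commit -m "Version the infrastructure definition"

terraform init     # download the provider plugins
terraform plan     # preview what would change
terraform apply    # create (or converge) the resources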

Once the initial steps with Terraform are done, the provisioning can be completed with something such as Cloud-Init, although any bootstrapping will do. A popular alternative here seems to be Ansible: I have used it and, as stated previously, it is a Swiss army knife for small, simple initial tasks. If we are starting to work in the cloud, Cloud-Init will fit the bill. After that, other configuration management tools can take over.
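
For reference, this is a minimal, hedged sketch of the kind of bootstrap Cloud-Init can run: a plain shell script passed as user data and executed once on first boot. It assumes a Debian or Ubuntu based image, and nginx is only a stand-in for whatever the workload really needs.

#!/bin/bash
# Passed as user data at instance creation; cloud-init runs it once on first boot.
apt-get update -y
apt-get install -y nginx        # placeholder package for the real bootstrap steps
systemctl enable --now nginx    # make sure the service comes back after a reboot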

That being said, I am a proponent of immutable infrastructure, so I limit configuration management to the minimum. My view is that in the future, configuration management tools will not be needed: if and when something fails, it should be destroyed and re-instantiated. System administrators should not need to know the names of resources, and should only SSH into them as a last resort – if ever.

Container orchestration

Containers are not a new thing anymore; they have been around for a few years (or decades, depending on how we look at it), and they are stable and useful enough that we may choose them for our platform.

Although containers in a data centre are fun, containers in the cloud are amazing, especially because most clouds nowadays provide container orchestration for us, and a plethora of solutions exist in case we cannot get enough. Some examples include Amazon's ECS (Elastic Container Service), Azure Container Service (ACS), CoreOS Fleet, Docker Swarm, Google Container Engine (GKE), Kubernetes and others.

Although I have left Kubernetes last, it has taken the spotlight. There are three reasons this tool has a future:

  • It was designed by Google, and that has merit on its own, given the humongous environment in which it was used and able to thrive
  • It is the one selected by the Cloud Native Computing Foundation (CNCF), which means it has a better chance of staying afloat. The CNCF is very important for cloud-native applications and it is supported by many companies (such as Oracle)
  • The architecture is simple and easy to learn, can be deployed rapidly, and scales easily

Kubernetes is a very promising tool that is already delivering results. If you are thinking about container orchestration at scale, starting to delve into something such as Minikube and slowly progressing to easy-to-use tools such as Rancher will significantly help to pave the road ahead.
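
If you want to experiment locally first, here is a minimal, hedged sketch using Minikube and kubectl; it assumes both are installed, and the nginx image and the 'hello' deployment name are just placeholders.

minikube start                                              # spin up a local single-node cluster
kubectl create deployment hello --image=nginx               # run a container as a managed deployment
kubectl scale deployment hello --replicas=3                 # ask the orchestrator to keep three copies running
kubectl expose deployment hello --port=80 --type=NodePort   # make it reachable from outside the cluster
kubectl get pods                                            # watch the desired state being maintained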

Conclusion

There are many solutions, as has been shown, depending on what sort of infrastructure is being managed, as well as where the infrastructure is located, its scale, and how it is currently distributed.

Technologies can also be used jointly. Before Oracle Cloud had OKE (Oracle Kubernetes Engine), the way we implemented Kubernetes in the cloud was through a Terraform plugin that instantiated the necessary infrastructure, and then deployed the Kubernetes cluster on top of it for us to continue configuring, managing, and installing applications such as ElasticSearch on top of it.

The industry is moving towards cloud, and that new paradigm means everything is delivered as a service (XaaS). This in turn means that building distributed architectures that are reliable, performant, scalable and lower in cost will be, and for some companies already is, a huge competitive advantage.

Nonetheless, there are many technologies to choose from. Often, aligning with the industry standard is a smart decision. It means it is proven, used by companies, in current development, and will be maintained for years ahead.

Local government slow to adopt cloud services, research shows


Keumars Afifi-Sabet

24 May, 2018

The majority of local authorities across the UK are yet to embrace cloud services to handle citizen data, citing concerns over data fragmentation and funding for infrastructure.

In fact, 80% of councils are still using on-premise infrastructure, either in isolation or in conjunction with a cloud service, to access and manage citizen data, according to a Freedom of Information (FoI) request.

When the government published a ‘Cloud First’ policy in 2013 calling for the public sector to consider cloud services, it highlighted the public cloud as its preferred model. Despite this, the FoI data collected by virtualisation firm Citrix shows that private cloud is the most favoured model, used by 30% of councils surveyed, followed by a hybrid model used by a quarter of respondents, with only 7.5% using the public cloud.

Citrix sent FoIs to 80 councils and received 40 responses. Asked what approximate proportion of applications and data are stored in the cloud, the majority (77.5%) said they stored up to a quarter of these assets in the cloud, with 5% storing everything on-premise. No council surveyed stored more than 75% of its data and applications in the cloud.

Moreover, data fragmentation remains a major concern, with 70% of IT teams ‘not confident’ that the authority they work for has a ‘single view’ of citizen data – that is, a single database entry per individual, with access to all service history.

“Local councils today are under enormous pressure from both central government and British citizens to deliver better services, at lower costs,” said Darren Fields, Citrix managing director in the UK & Ireland.

“With local authorities facing an overall funding gap of £5.8 billion by the end of the decade, councils are always on the lookout for innovative, cost-effective technology to help deliver efficient citizen services, whilst also improving productivity for staff and reducing costs.”

Despite the majority of local authorities (75%) considering investing in cloud infrastructure in the next 12 months, only 7.5% are planning to downsize their physical IT infrastructure – by getting rid of on-premise servers or physical hardware, for instance. Just over a third (35%) have no plans to do so.

However, an academic paper published last year warned that councils across the UK shouldn’t be rushed into migrating their applications and data. The report found that despite the political pressure on local government to move to the cloud, there were few examples of best practice.

Each of the three councils’ cloud deployments examined in the report was “well implemented and well supported by cloud supplier staff”, the researchers said, and had a string of positives, but all the case studies examined showed a need to “develop an appropriate full and growing cloud strategy aligned to business strategy, together with an internal support programme to manage demand”.

“These factors require planning, managing and monitoring to ensure the best use, value and benefit is obtained from the investment in the technology to help ensure efficient, effective and successful IT,” the report’s authors said.

Fields added: “The cloud has the potential to transform public services, yet many local authorities are held back by legacy IT systems – making it a demanding and challenging exercise to consolidate and transition data and applications to the cloud.”

“However, the cloud will inevitably become integral to service delivery – solutions are typically more cost effective, scalable, secure and flexible – and are likely to become an indispensable asset for local authorities looking to deliver first-class services to residents across the UK.”

EA snaps up GameFly subsidiary’s cloud gaming assets


Clare Hopping

24 May, 2018

EA has bought video game rental service GameFly’s cloud streaming technology and hired some of its employees in a drive to boost its own games streaming service.

Although the two portions of GameFly’s business are a large chunk of its entire set-up, EA won’t own the company’s streaming division, which will continue to operate independently.

It’s not clear exactly how EA will integrate the technology into its platform, but the company has hinted that it hopes to launch its own games streaming service within three years. This would ensure it can keep up with some of its developer competitors, as well as other games streaming platforms like Sony’s PlayStation Now.

“Cloud gaming is an exciting frontier that will help us to give even more players the ability to experience games on any device from anywhere,” said EA chief technology officer Ken Moss. “We’re thrilled to bring this talented team’s expertise into EA as we continue to innovate and expand the future of games and play.”

Neither company commented on how much the deal is worth or when the acquisition is expected to close.

Although games streaming is a natural progression for the cloud, no one has really managed to make it work for consoles yet, because the broadband speeds needed aren’t widely available.

PlayStation Now has had a limited amount of success, but only because Sony was able to buy tech firm OnLive’s patents when the latter decided not to pursue its streaming efforts. OnLive folded in 2015, just five years after launching. Microsoft is also looking to launch a games streaming service for its Xbox console, due by 2020.

Picture: Shutterstock

How to turn a Raspberry Pi into a VPN server


Chris Finnamore

29 May, 2018

With a VPN server plugged into your router, you can create a secure, encrypted connection from anywhere in the world to your home network.

This has several advantages, such as being able to access files on your NAS without any fiddly configuration or, as the connection is encrypted, the option to use your laptop on a public Wi-Fi hotspot without worrying about someone intercepting the data you transmit. Best of all, as you control everything, you can rest assured that your data is completely safe and that you’re not relying on any other service.

You don’t need any special hardware to make the VPN, either: a simple Raspberry Pi will do. This guide will show you how to set up your Raspberry Pi with the OpenVPN server, using the clever PiVPN script.

Installation

The PiVPN OpenVPN installer we’re going to use doesn’t currently support the latest Raspbian Stretch distro, so you’ll need to use Raspbian Jessie, the previous version. Jessie was the current version of Raspbian until July 2017, and will be supported for at least another six months, by which point the PiVPN script should hopefully have been updated to support Raspbian Stretch.

The free No-IP service will make sure you can always connect to your VPN, even if your IP address changes

As we’ll be using the Raspberry Pi in command-line mode, it makes sense to use the Lite version of Raspbian, which doesn’t come with a graphical user interface but is a fifth of the size of Raspbian proper.

Download the 2017-07-05-raspbian-jessie-lite.zip file and extract the .img file inside. Next, download and install Win32 Disk Imager. Finally, plug your microSD card into a card reader, and make sure the card doesn’t have anything on it you need.

Use Win32 Disk Imager to write the Raspbian Jessie Lite image to a microSD card

Run Win32 Disk Imager. If you’re logged in with a standard (rather than administrator) Windows account, you’ll need to enter your administrator password. Click the blue folder icon on the right of the white box at the top, then browse to the Raspbian Jessie Lite .img file you extracted and double-click it.

Bear in mind that the file browser may default to the administrator account’s Download folder instead of the current user’s, so you’ll need to browse to the correct location manually.

Double-check your microSD card is the one listed under Device and click Write. When the process is finished, eject the microSD card, put it in the Raspberry Pi and boot up. Your Raspberry Pi will just need a keyboard and monitor connected, and won’t need to be online for now. Log in with the normal login: username pi, and password raspberry.

As the Raspberry Pi is just going to be a server, it makes sense for it to be hidden in a corner next to your router. You don’t want to have to make room for a monitor, keyboard and mouse, so it’s best to set up the Raspberry Pi to be controlled remotely, in so-called ‘headless’ mode. To do this, you need to enable Secure Shell (SSH). Type sudo raspi-config.

The first job is to change the Raspberry Pi’s password, so that only you will be able to log in over SSH. Select option 1 and enter your password twice to change it. Now go to option 5, Interfacing options. Select P2 SSH, then answer Yes to the question, ‘Would you like the SSH server to be enabled?’. The Raspberry Pi will confirm SSH is on. Go down to Finish.
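
If you prefer to skip the menus, the same two jobs can be done straight from the command line; this is a hedged sketch that assumes a systemd-based Raspbian image such as Jessie, where the SSH service is simply called ssh.

passwd                      # change the default password for the 'pi' user
sudo systemctl enable ssh   # start the SSH server on every boot
sudo systemctl start ssh    # and start it right now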

Shut down your Raspberry Pi by typing sudo shutdown now, unplug your monitor and keyboard, and plug your Raspberry Pi into your router with an Ethernet cable. You don’t need the monitor and keyboard any more, as you’ll be controlling your Raspberry Pi remotely.

Finding Pi

Turn on your Raspberry Pi again, and give it 30 seconds or so to boot up. It’s now time to find your Raspberry Pi on the network. The easiest way to do this is to use the Fing Android app, which will list every device connected to your network and show the Raspberry Pi as Raspberry, along with its IP address. Under Windows, you can use an IP scanner such as Angry IP Scanner (requires Java). Angry IP Scanner will list your Raspberry Pi as raspberrypi.local in the Hostname column.

Angry IP Scanner will show you where your Pi is on your network
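
If you would rather stay on the command line, a couple of hedged alternatives work too; the subnet below is a placeholder you should adjust to match your network, mDNS name resolution depends on your PC (Bonjour on Windows, Avahi on Linux), and nmap must be installed separately.

ping raspberrypi.local      # resolves via mDNS on many home networks
nmap -sn 192.168.1.0/24     # or ping-sweep the subnet and look for the Raspberry Pi's address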

To connect to your Raspberry Pi you need the PuTTY SSH client, so download and install it. Put your Raspberry Pi’s IP address in the Host Name box, make sure the SSH radio button is selected, then click Open. You will receive a security message, so double-check it’s the Raspberry Pi’s IP address in the top-left of the PuTTY window and click Yes to trust the device.

Use the free PuTTY tool to connect to your Raspberry Pi remotely

You can now log in with your Raspberry Pi’s username and password, and you’ll have a command line just as if your Raspberry Pi was sitting in front of you. The first thing to do is to download and install any Linux updates, so use the command sudo apt-get update, followed by sudo apt-get upgrade.

It’s now time to start installing OpenVPN (via the PiVPN installer). Type curl -L https://install.pivpn.io | bash to start the installer. You navigate the installer using your keyboard’s arrow keys, and use tab and shift-tab to switch to the Yes/No, OK/Cancel options at the bottom of the various pages and back again. You’ll also need the space bar to select some options.

Static IP

The first step is to set a static IP address; this is needed so your router always knows where your Raspberry Pi is on the network, so it can send it incoming VPN traffic. You can keep the IP address your Raspberry Pi currently has, but it may be worth setting one high in the IP range, so your router is unlikely to assign the same IP address automatically to a different device joining your network.

To do this, select No when asked if you want to use your current network settings as a static address, then change the last number of the address to, for example, 100, to give you an address such as 192.168.1.100. You can check this address isn’t yet taken on your network using Fing or Angry IP Scanner.

The IPv4 default gateway should already be set to your router’s IP address, so you can leave this as it is. Check the settings are correct and select Yes to continue.

Follow the steps to select which user’s directory will store OpenVPN’s configuration; you’ll only have the default ‘pi’ installed, so select that. The next screen advises you to enable ‘unattended-upgrades’ so that your Raspberry Pi will automatically update itself with security patches; vital for a machine that is always connected to the internet and has a network port open all the time. Select Yes to enable this feature.

You can now select whether to use the TCP or UDP protocols. UDP is generally preferred as it’s far quicker, but TCP has advantages in certain situations. Select UDP and press OK. Leave the port as the default (1194) and press OK. Write the port number down somewhere and confirm the port is correct in the next screen.

Paranoia level

You’ll now be given three levels of encryption. The PiVPN installer recommends 2,048-bit encryption as a good compromise between security and how long it takes to generate the key, but you might want to select 4,096 for the utmost in security. If you select a 4,096-bit key you are given the option of downloading key components from a public key generation service, to cut down on generation time.

2,048-bit encryption will be enough for most people, but 4,096-bit is available for the tinfoil-behatted

However, if you’re truly serious about security (or ‘paranoid’, as the PiVPN installer puts it), you can generate your own keys from scratch. This took less than an hour on our Raspberry Pi 3, so isn’t really a big deal.

Generating encryption keys on a Raspberry Pi can take a while

A word of warning, though. The PiVPN installer doesn’t support going back a stage to change a setting you’ve already made: if you get an IP address wrong, you’ll need to quit the installer and start again, including spending an hour on key generation.

Connecting to your Pi

Now you have the keys to encrypt your VPN connection, it’s time to work out how users are going to connect to your Raspberry Pi. There are two ways to do this: using your external IP address, or using a dynamic DNS service.

The external IP address is the easy way, as this requires less tinkering with your router. The disadvantage of this is that, on most residential broadband packages, the IP address changes periodically, which could leave you unable to connect to your VPN if the IP changes when you’re out of the house.

To avoid this, you need to use a dynamic DNS service. This will give you an address such as pivpn.dynamicdns.com, which will translate to your router’s current external IP address. What’s more, when you’ve entered the details into your router’s settings, the router will update the DNS service automatically when its external IP address changes, so the address you’ve chosen will always translate to your home connection’s external address.

First, find out which dynamic DNS services, if any, your router supports. You’ll most likely need to hunt around in your router’s settings menu. You’ll usually need to sign up on the dynamic DNS service’s web page, but some routers, such as our Netgear D7800, let you sign up straight from the interface.

If your router doesn’t support dynamic DNS, you can get around it by signing up for a No-IP account and using the Windows application. This will send your external IP address to No-IP automatically, but you’ll have to leave your PC on (though not necessarily logged in) for it to work.

Once you’ve signed up, the No-IP dashboard will give you the option of choosing a hostname and a selection of different domains. It will fill in your IP address automatically. If the hostname you want is already taken, just keep trying hostname/domain combinations until you find a free one. You’ll now see your hostname listed in the Hostnames section of the No-IP dashboard, with No Dynamic Update Detected underneath. Click this and the wizard will walk you through setting up DNS updating on your router.

Back in the PiVPN installer, choose Public IP (external IP address) or, if you’re using dynamic DNS, DNS Entry. If you choose DNS Entry you’ll need to put in your dynamic DNS hostname. Double-check it’s correct before confirming.

You’ll now be asked to choose the DNS Provider for those connecting to your VPN. This is because they’ll need a DNS service to browse the web, via your Raspberry Pi’s VPN server. The DNS servers will be able to see which websites those connecting to the VPN have visited, so if you’re very concerned about privacy, you might want to build and use your own DNS server. That is beyond the scope of this article, though: for most people Google’s DNS will be fine.

The final step will tell you to ‘run pivpn add to create the ovpn profiles’, but will prompt you to reboot your Raspberry Pi first. Press OK to reboot. Now is a good time to forward the necessary ports on your router. Whether you decide to use an external IP address to connect to your Raspberry Pi VPN or use dynamic DNS, you’ll need to go into your router’s settings and forward a port to your Raspberry Pi, so your router knows where to send incoming VPN connections.

No-IP has port-forwarding guides, and there is a comprehensive router list at portforward.com. Remember that we’re using the default OpenVPN UDP port 1194, so forward that to your Raspberry Pi’s IP address (making sure you select UDP not TCP).

Access granted

Log back into your Raspberry Pi with PuTTY. Type sudo apt-get update then sudo apt-get upgrade to make sure your Raspberry Pi is up to date with all the latest security patches.

The next step is to add OpenVPN profiles – the users, or clients, who will be connecting to your VPN. Each client will have a username and password, which they will need to use in conjunction with a special file you generate using PiVPN, in order to connect.

Type pivpn add, and then enter the username and password for the first client you want to have access. You’ll see that a .ovpn file will be generated and copied to /home/pi/ovpns. Add any other profiles you need. If you need to remove a profile, type pivpn revoke, followed by the profile name.
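
Putting those commands together, a typical session looks something like the sketch below; the client name is a placeholder, and the exact prompts may vary slightly between PiVPN versions.

pivpn add                  # prompts for the client's name and password, then generates its profile
ls /home/pi/ovpns          # the new .ovpn file is written here
pivpn revoke clientname    # remove that client's access again, using the profile name you created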

To copy the .ovpn files off the Raspberry Pi, it’s easiest to use SFTP. You’ll need an FTP client – our favourite is WinSCP. Download and install WinSCP, then in the screen that appears when you run it, make sure SFTP is selected under File protocol. Enter your Raspberry Pi’s IP address under Host name, make sure the Port number is 22, then enter your username and password and click Login. Click Yes to add the Raspberry Pi’s host key to the cache, to avoid seeing the warning again.

By default you’ll be in /home/pi, so browse to the ovpns folder and copy the .ovpn files to your PC. The combination of the .ovpn file and its matching username and password are what will let you connect to your VPN, but first you’ll need some client software.
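
If you have an OpenSSH client on your PC, a hedged command-line alternative to WinSCP is plain scp; the address below is the example static IP chosen earlier, so substitute your own.

scp "pi@192.168.1.100:/home/pi/ovpns/*.ovpn" .   # copy every generated profile into the current folder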

Download the OpenVPN Installer. Install it, then copy your .ovpn files to C:\Program Files\OpenVPN\config. If you’re going to need to send these files to other people, it’s a good idea to encrypt the files with 7-Zip beforehand, in case they are intercepted in transit. Many email services won’t let you send an encrypted file as an attachment, so you’ll need to host it in a file-sharing service such as Dropbox, then send a link to the Dropbox folder containing the encrypted file.
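
For the encryption step, a hedged sketch using the 7-Zip command-line tool looks like this; it assumes the 7z executable is on your PATH, and client1.ovpn is a placeholder file name.

7z a -p -mhe=on profiles.7z client1.ovpn   # 'a' adds to an archive, -p prompts for a password, -mhe=on also hides the file names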

It’s also a good idea to send the username and password separately from the 7-Zip-encrypted .ovpn file, preferably using an encrypted platform such as WhatsApp.

Load the OpenVPN GUI. This sits in your System Tray, and right-clicking it will give you the list of OpenVPN profiles you have installed. Just click Connect to enjoy your secure connection to your Raspberry Pi.

If you want to check the VPN is working from outside your home network, the easiest way is to connect through a smartphone running in wireless hotspot mode. We found the VPN connection let us browse the contents of our network’s NAS, as well as connect to a game server.

Image: Shutterstock

The top five in-demand cloud skills for 2018

As businesses of every size push forward with cloud projects in 2018, the demand for cloud skills is accelerating. Public cloud adoption is expected to climb significantly, and IDC predicts spending will reach £197 billion in just three years.

But as cutting-edge technologies like machine learning continue to reshape the job market, the skills gap looms large across the industry. With over 350,000 specialists needed to help fill cloud roles, there’s clearly a massive opportunity for professionals who can prove their skills.

Whether you’re taking your first steps into cloud or are aiming to increase your marketability, this is your opportunity to expand your cloud skill set in 2018.

Cloud security

Businesses are comfortable storing their data with public cloud providers. The idea that a company’s data is not secure in the cloud just isn’t true anymore.

Most companies simply cannot provide the same level of security expertise as the leading cloud providers. Microsoft, for example, plans to invest over $1 billion annually in cyber security.

But businesses must still pay close attention to their cloud security. Cloud providers operate under the shared responsibility model, which divides security responsibilities between vendor and business. In short, businesses cannot rely on their vendor alone to ensure the security of their data and services; their own staff must also understand and uphold their side of that responsibility.

That means it’s still crucial for IT professionals to possess an understanding of cloud security – even if the heavy lifting is performed by the cloud providers. To ensure their organisations are protected, professionals must learn how to utilise the security tools offered by the likes of Amazon Web Services (AWS) and Microsoft Azure.

And for professionals aiming to specialise their cloud security skills, there are a number of industry-standard qualifications available. Perhaps the most well-known is (ISC)2’s CCSP (Certified Cloud Security Professional) which builds on the knowledge taught through the popular CISSP certification.

Machine learning and AI

While machine learning, AI and big data may have just been seen as buzzwords for many businesses in the past, they’re now at the heart of an increasing number of IT projects.

Analyst firm IDC predicts explosive growth for machine learning and AI, with spending increasing by 50% over the next three years. As a result, every major cloud vendor is now developing or expanding services that allow organisations to leverage these technologies in their applications.

The two largest cloud platforms, Amazon Web Services (AWS) and Microsoft Azure, both provide Machine Learning tools.

“These tools are easy to set up and there are plenty of tutorials available online. But to get valuable information out of them you’ll need strong data science skills,” says Mike Brown, Lead Cloud Instructor at Firebrand Training.

Microsoft is pushing ahead of the competition in data science training for professionals, creating the Professional Program for Data Science alongside a new certification – the MCSA: Machine Learning – which aligns with the expert-level MCSE: Data Management & Analytics certification.

Serverless architecture

Serverless architecture removes the need for developers to manage underlying infrastructure when they want to run or build an application.

“It’s the way that all new services should be designed,” says Brown. “The idea that applications should be deployed to a server or two is an old way of thinking.”

By adopting serverless architecture, developers can build services that are scalable and easier to patch or upgrade. This is often cheaper than designs that are based on servers.

Businesses were previously concerned about vendor lock-in when adopting serverless architecture. For example, if you’re using one cloud provider to host your serverless components, and they raise their prices, you could be “locked” into their service and forced to pay the higher fee.

Today, major cloud platforms use industry standard technologies and programming languages which means moving serverless applications from one vendor to another is no longer an obstacle.

You can dive into learning serverless application development online, but you’ll need to choose a platform first. If you favour AWS, consider following its Lambda tutorials and webinars to get started.
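
To give a feel for how little infrastructure is involved, here is a minimal, hedged sketch that packages a one-line function and deploys it with the AWS CLI; the function name, runtime version and IAM role ARN are placeholders, and it assumes the CLI is installed and already configured with credentials.

cat > lambda_function.py <<'EOF'
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "Hello from Lambda"}
EOF
zip function.zip lambda_function.py

aws lambda create-function --function-name hello-demo \
  --runtime python3.6 --handler lambda_function.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-basic-execution   # placeholder execution role

aws lambda invoke --function-name hello-demo response.json && cat response.json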

Cloud migration and multi-cloud deployment

As IDC’s report revealed, public cloud migration is accelerating and businesses need professionals knowledgeable in cloud to shift their apps and services.

Businesses that are struggling to scale resources to meet demand or are aiming to save time on menial tasks like database backup or maintenance will benefit from moving to the cloud.

But cloud migration isn’t a fast process and it’s by no means risk-free. Without skilled professionals, businesses risk downtime on critical applications and incorrect implementation could open them up to security vulnerabilities.

In the enterprise, multi-cloud deployments are increasingly common. Enterprises want the flexibility to choose different environments based on performance and cost. Because of this, professionals will want to consider expanding their skills across multiple platforms – particularly Azure, AWS and Google Cloud Platform.

Automation

“For me, automation is key to providing a cloud service for business. Auto-scaling, Infrastructure as code, automated monitoring and reporting all play a part in good cloud design,” says Brown.

“There’s currently a move to 3rd party services that allow us to automate across multiple platforms using the same tool set.”

Jenkins, Terraform and Chef are all popular tools that allow automation across multiple platforms, and professionals aiming to increase their marketability should consider adding these skills to their learning path as soon as possible.

The key to marketability in cloud

The key to employability in today’s cloud jobs market is to gain cross-platform skills. If you’ve already achieved your MCSE Cloud Platform and Infrastructure certification, consider widening your skills to include certifications from AWS and Google Cloud Platform.

By transferring your knowledge between cloud platforms, you’ll diversify your skillset and boost your employability in 2018.

Start New Vision with Internet of Things | @ExpoDX @CaffeineGeeks #IoT #IIoT #SmartCities #DigitalTransformation

The global internet of things market was estimated to be worth US$847.0 Bn in 2016 and is projected to register a CAGR of over 21% in terms of value during the forecast period 2017–2026. The report offers in-depth insights, revenue details and other vital information regarding the global internet of things market, as well as the various trends, drivers, restraints, opportunities and threats in the target market up to 2026. The report includes PEST analysis, Porter’s Five Forces analysis and opportunity map analysis for an in-depth understanding of the market. It offers insightful and detailed information regarding the key players operating in the global internet of things market, covering their financials, supply chain trends, technological innovations, key developments, future strategies, SWOT analyses, acquisitions and mergers, and market footprint. The global internet of things market report has been segmented on the basis of type, sales channel, application and region.

Sipgate Team review


Dave Mitchell

22 May, 2018

Deployment can be lengthy, but Sipgate Team offers inexpensive cloud-hosted IP PBX services with plenty of call features

Price 
From £14.95/month exc VAT

Small businesses that want a simple, low-cost cloud-based IP PBX will find Sipgate Team ticks plenty of boxes. The Light version starts at only £14.95 per month for three users and can be upgraded in small increments so you only pay for the features you need.

The Light version doesn’t cover calls to landlines and mobiles, which will be charged to your account as they are made – so if this is an issue consider the UK or EU call packages. The UK Call Pack bundle, for example, starts at £44.95 per month and includes all VoIP calls to UK landlines and mobiles regardless of their duration.

This latest version shows off a redesigned web portal which we found easy to use. We settled for the numbers already assigned to our account but could request local or international numbers and port over existing landlines with prices for the latter starting at £20.

To avoid unauthorised access, Sipgate posts a start code to the main account holder. We could configure our users and phones, but had to wait two working days for this code to arrive before we could activate our account.

Sipgate doesn’t have an import function so each user account must be created manually by entering their name and email address and assigning a phone number plus extension. On completion, they’ll receive an email with their personal web portal login details.

Hardware and software phones are also configured manually, but plenty of help is at hand. We used Yealink T23G IP phones, and the portal provided screenshots of their web interface showing clearly where the SIP account and proxy details are entered.

Sipgate doesn’t offer its own softphones but supports plenty of third-party products so we chose the popular Zoiper for testing. After downloading the preconfigured Windows version from Sipgate, we added the SIP ID and password as displayed in the user’s web portal.

The Zoiper iOS app is configured by tapping the QR icon at the top of its dialpad screen and scanning the code displayed in your account web page. It took a few more seconds to add our SIP credentials after which our iPad was successfully registered.

Businesses with remote offices will like the Sipgate Location feature. These link different geographical locations together within the same package, each with their own set of users, and all VoIP calls between them are free.

Place selected users in a group with a dedicated phone number and extension, and when it’s called, all their phones will ring. Both groups and users can have custom greeting messages assigned, but the Click2Record feature from previous versions is no longer provided, so we had to record our messages separately as MP3 files and upload them as new announcements.

Group voicemail allows callers to leave a message while call forwarding and hunting rulesets redirect them to other users or phone numbers. These are quite versatile as multiple rulesets for both groups and users can apply a range of actions while schedules determine when they are active.

Call queuing and an IVR (interactive voice response) service are available in the Pro package which costs from £10 per month extra. The IVR service is basic when compared to RingCentral but does allow you to upload MP3 greeting messages that present callers with a choice of up to 10 extension numbers.

The lack of free softphones will increase per-user costs, but Sipgate’s low monthly charges and no minimum contract period can offset this extra expense. For small businesses looking to make the jump to cloud-hosted VoIP, Sipgate Team is a good choice that offers slick call handling features and easy management.

Spec 2018: Slack lets developers build and test apps inside Slack


Adam Shepherd

22 May, 2018

Slack has announced a suite of new developer tools, as part of a range of platform changes designed to foster the growth of its app ecosystem.

The new features will make it much quicker and easier for developers to build and test Slack apps within Slack itself, according to the company’s developer advocacy lead, Bear Douglas.

“Right now if, for example, you want to try out Slack’s API, you have to go create an app token, you have to get the keys you need, you maybe have to spin up a project, spin up a web server, in order just to ping calls to our API,” she told Cloud Pro. “Now, what you’ll be able to do is install an app inside Slack and then directly from Slack test out our API.”
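
To illustrate the kind of friction Douglas is describing, here is a hedged sketch of what 'pinging' the Web API from outside Slack typically involves today; the bot token and channel ID are placeholders, and it assumes you have already created an app and obtained a token.

curl -s https://slack.com/api/chat.postMessage \
  -H "Authorization: Bearer xoxb-your-bot-token" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d '{"channel": "C0123456789", "text": "Hello from the API"}'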

Reference documents are also easier to access. Developers can now search them via a Slack command, and they can then be read within the Slack client, or the longer-form version can be accessed by clicking out to Slack’s website.

The changes were announced as part of Spec 2018, the company’s inaugural annual conference for developers, customers and partners.

Alongside these developer tools, the company is also launching some new features designed to make Slack apps more visible for everyday users. The most prominent is App Actions – a framework for adding third-party app integrations to the three-dot menu that currently houses functions including pinning messages to channels and marking them as unread.

The feature will launch with pre-made integrations for five services: Asana, Bitbucket, HubSpot, Zendesk and Jira. As an example, the feature will allow users to convert a Slack message from a colleague directly into an Asana task.

Power users of Slack will likely be aware that these kinds of integrations already exist, in the form of Slack commands. These already allow users to interact with third-party services in a number of ways, such as adding tasks to project management tools, sharing files from cloud storage platforms, and more.

In fact, Douglas told Cloud Pro that the new App Actions don’t actually add any functionality that wasn’t already offered by Slack commands. However, she emphasised that they provide a much more intuitive and user-friendly way to access these capabilities, pointing out that while developers and techies are inherently familiar with slash-based commands thanks to tools like IRC, most line-of-business users are far more used to contextual menu systems.

“It’s not something that should be undersold, because at the moment, a lot of users … may not be aware of the integrations that are already installed on their team, and so some of the work that we’ve done over the past year and a half or so has been on making integrations more discoverable.”

One App Action that may prove very useful is the ability to view the JSON for specific elements within Slack.

“If you see something beautiful inside Slack,” Douglas said, “and you wonder ‘how can I do that?’, there will be an Action in that overflow menu that lets you inspect the JSON for that message. So if you want to copy something beautiful you saw, or just break it down, you can do that automatically without leaving Slack.”

Making apps more attractive is something that the company is emphasising, and it will soon launch a new toolkit for modifying the UI of Slack Apps, which it is dubbing ‘Block Kit’.

Slack is aiming to build on its initial success as an enterprise collaboration platform, pursuing a partnership-based strategy in which the platform integrates as deeply as possible with the numerous SaaS-based tools that businesses rely on. The strategy is similar to the tactic Facebook has taken with its Workplace platform, which also announced a slew of new integrations earlier in the year.

“They’re absolutely integral. They’re what makes Slack more useful than just a messaging service, and it’s something that we try to make clear to our [third-party] developers: that they are key to our success,” Douglas told Cloud Pro.

“It’s really about lowering the barriers and making it easy to build Slack apps. So we’re building our tooling offering all the time, and we’re trying to be as responsive as possible.”