All posts by Danny Bradbury

Amazon updates development environment powering Alexa


Danny Bradbury

2 Feb, 2021

Amazon has announced an update to Lex, its conversational artificial intelligence (AI) interface service for applications, in order to make it easier to build bots with support for multiple languages.

Lex is the cloud-based service that powers Amazon’s Alexa speech-based virtual assistant. The company also offers it as a service that allows people to build virtual agents, conversational IVR systems, self-service chatbots, or informational bots.

Organizations define conversational flows using a management console that then produces a bot they can attach to various applications, like Facebook Messenger.

The company released its Version 2 enhancements and a collection of updates to the application programming interface (API) used to access the service.

One of the biggest V2 enhancements is additional language support. Developers can now add multiple languages to a single bot, managing them collectively throughout the development and deployment process.
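For example, with the V2 APIs a developer could attach a second locale to an existing bot and then build it. The following AWS CLI sketch is illustrative only; the bot ID, locale, and confidence threshold here are placeholder values:

$ aws lexv2-models create-bot-locale --bot-id ABCD1234 --bot-version DRAFT --locale-id es_ES --nlu-intent-confidence-threshold 0.4

$ aws lexv2-models build-bot-locale --bot-id ABCD1234 --bot-version DRAFT --locale-id es_ES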

According to Martin Beeby, principal advocate for Amazon Web Services, developers can add new languages during development and switch between them to compare conversations.

The updated development tooling also simplifies version control to track different bot versions more easily. Previously, developers had to version a bot’s underlying components individually, but the new feature allows them to version at the bot level.

Lex also comes with new productivity features, including saving partially completed bots and uploading sample utterances in bulk. A new configuration process makes it easier for developers to understand where they are in their bot’s configuration, Beeby added.

Finally, Lex now features a streaming conversation API that can handle interruptions in the conversation flow. It can accommodate typical conversational speed bumps, such as a user pausing to think or asking to hold for a moment while looking up some information.

Cisco’s new SD-WAN routers bring 5G and virtualisation to enterprises


Danny Bradbury

27 Jan, 2021

Cisco has launched four new devices in its Catalyst 8000 Edge series. They include a 5G cellular gateway, two branch-office edge platforms, and a heavy-duty aggregation router, all designed to support SD-WAN networking.

The company debuted its Catalyst 8000 family last October and has now expanded the range with the Catalyst 8500L, the latest addition to the Catalyst 8500 Edge family. This is a collection of what Cisco calls aggregated service routers designed with large enterprises and cloud service providers in mind. They’re an evolution of the company’s ASR 1000 series of routers.

Cisco designed the Catalyst 8500L for SD-WAN use cases, where wide area network (WAN) connectivity is configurable via software. It also supports the emerging secure access service edge (SASE) model that Gartner defined in mid-2019, which builds on SD-WAN to offer zero-trust networking capabilities for secure access.

The 8500L is an SD-WAN capable router that offers WAN connectivity at speeds of up to 10 Gbps. Rackable in a 1RU form factor, it runs either at core sites or colocation sites and features twelve x86 cores and up to 64 GB of memory, which Cisco says will support secure connectivity for thousands of remote sites. It also includes Cisco Trust Anchor technology, a tamper-proof trusted platform module that guarantees the hardware is authentic to protect against supply chain attacks.

The Cisco Catalyst 8200 is a 1 Gbps branch office router with eight CPU cores and 8 GB of RAM, which Cisco says doubles the performance over the existing ISR 4300 series. That is helped by a hardware performance acceleration feature called Intel QuickAssist.

Complementing the Catalyst 8200 is the Cisco Catalyst 8200 uCPE, a branch office customer premises equipment (CPE) device with a small physical footprint. It features eight CPU cores and supports up to 500 Mbps aggregate IPsec performance. It can also take pluggable interface modules (PIMs), which are hardware extensions giving it cellular connectivity options.

Speaking of cellular communications, the new Cisco Catalyst Cellular Gateway 5G plugs into a router to offer on-premises cellular connectivity in sub-6GHz bands. The device uses Power over Ethernet (PoE) to relay cellular communications back to the router.

Admins can manage the devices using software, including vManage, licensed under Cisco’s DNA model, which has three tiers:

  • DNA Essentials offers core SD-WAN, routing, and security features. 
  • DNA Advantage gives subscribers the Essentials package, plus support for advanced routing capabilities including MPLS BGP; better security support with Advanced Malware Protection and SSL proxy features; and access to Cisco’s Cloud OnRamp, which helps with connectivity to multiple cloud service providers.
  • DNA Premier adds centralized security management for all locations.

How to automate your infrastructure with Ansible


Danny Bradbury

2 Dec, 2020

Hands up if you’ve ever encountered this problem: you set up an environment on a server somewhere, and along the way, you made countless web searches to solve a myriad of small problems. By the time you’re done, you’ve already forgotten most of the problems you encountered and what you did to solve them. In six months, you have to set it all up again on another server, repeating each painstaking step and relearning everything as you go.

Traditionally, sysadmins would write bash scripts to handle this stuff. Scripts are often brittle, requiring just the right environment to run in, and it takes extra code to ensure that they account for different edge cases without breaking. Scaling that up to dozens of servers is a daunting task, prone to error.

Ansible solves that problem. It’s an IT automation tool that lets you describe what you want your environment to look like using simple files. The tool then uses those files to go out and make the necessary changes. The files, known as playbooks, support programming steps such as loops and conditionals, giving you lots of control over what happens to your environment. You can reuse these playbooks over time, building up a library of different scenarios.

Ansible is a Red Hat product, and while there are paid versions with additional support and services bolted on, you can install the open-source project for free. It’s a Python-based program that runs on the box you want to administer your infrastructure from, which must be a Unix-like system (typically Linux). It can administer Linux and Windows machines (which we call hosts) without installing anything on them, making it simpler to use at scale. To accomplish this, it uses SSH keys, or remote PowerShell execution on Windows.

We’re going to show you how to create a simple Linux, Apache, MySQL and PHP (LAMP) stack setup in Ansible.

To start with, you’ll need to install Ansible. That’s simple enough; on Ubuntu, put the PPA for Ansible in your sources file and then tell the OS to go and get it:

$ sudo apt update

$ sudo apt install software-properties-common

$ sudo apt-add-repository --yes --update ppa:ansible/ansible

$ sudo apt install ansible

To test it out, you’ll need a server that has Linux running on it, either locally or in the cloud. You must then create an SSH key for that server on your Ansible box and copy the public key up to the server.
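If you haven’t already set up key-based SSH access, a minimal sketch looks like this (the user name and IP address are placeholders for your own host):

$ ssh-keygen -t ed25519

$ ssh-copy-id danny@192.168.1.88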

Now we can get to the fun part. Ansible uses an inventory file called hosts to define many of your infrastructure parameters, including the hosts that you want to administer. Ansible reads information in key-value pairs, and the inventory file uses either the INI or YAML formats. We’ll use INI for our inventory.

Make a list of the hosts that you’re going to manage by putting them in the inventory file. Modify the default hosts file in your /etc/ansible/ folder, making a backup of the default one first. This is our basic inventory file:

# Ansible hosts

[LAN]

db_server ansible_host=192.168.1.88

db_server ansible_become=yes

db_server ansible_become_user=root

The phrase in the square brackets is your label for a group of hosts that you want to control. You can put multiple hosts in a group, and a host can exist in multiple groups. We gave our host an alias of db_server. Replace the IP address here with the address of the host you want to control.

The next two lines enable Ansible to take control of this server for everything using sudo. ansible_become tells it to become a sudo user, while ansible_become_user tells it which sudoer account to use. Note that we haven’t listed a password here.
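Before going any further, it’s worth checking that Ansible can actually reach the host. A quick way is the ping module (the -u user name here is a placeholder; use the account you set up SSH keys for):

$ ansible db_server -m ping -u danny

If everything is wired up correctly, the host replies with “pong”.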

You can use Ansible to run shell commands that influence multiple hosts, but it’s better to use modules. These are native Ansible functions that replicate many Linux commands, such as copy (which replicates cp), user, and service to manage Linux services. Here, we’ll use Ansible’s apt module to install Apache on the host.

ansible db_server -m apt -a 'name=apache2 state=present update_cache=true' -u danny --ask-become-pass

The -m flag tells Ansible we’re running a module (apt), while -a specifies the arguments. update_cache=true tells Ansible to refresh the package cache (the equivalent of apt-get update), which is good practice. -u specifies the user account we’re logging in as, while --ask-become-pass tells Ansible to ask us for the user password when elevating privileges.

state=present is the most interesting flag. It tells us how we want Ansible to leave things when it’s done. In this case, we want the installed package to be present. You could also use absent to ensure it isn’t there, or latest to install and then upgrade to the latest version.

Then, Ansible tells us the result (truncated here to avoid the reams of stdout text).

db_server | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "cache_update_time": 1606575195,
    "cache_updated": true,
    "changed": true,
    "stderr": "",
    "stderr_lines": [],

Run it again, and you’ll see that changed = false. The command copes by itself whether the software is already installed or not. This ability to get the same result no matter how many times you run it is known as idempotence, and it’s a key feature that makes Ansible less brittle than a bunch of bash scripts.

Running ad hoc commands like this is fine, but what if we want to string commands together and reuse them later? This is where playbooks come in. Let’s create a playbook for Apache using the YAML format. We create the following file and save it as /etc/ansible/lampstack.yml:

- hosts: LAN
  gather_facts: yes
  tasks:
  - name: install apache
    apt: pkg=apache2 state=present update_cache=true
  - name: start apache
    service: name=apache2 state=started enabled=yes
    notify:
    - restart apache
  handlers:
    - name: restart apache
      service: name=apache2 state=restarted

hosts tells us which group we’re running this script on. gather_facts tells Ansible to interrogate the host for key facts. This is handy for more complex scripts that might take steps based on these facts.

Playbooks list individual tasks, which you can name as you wish. Here, we have two: one to install Apache, and one to start the Apache service after it’s installed.

notify calls another kind of task known as a handler. This is a task that doesn’t run automatically. Instead, it only runs when another task tells it to. A typical use for a handler is to run only when a change is made on a machine. In this case, we restart Apache if the system calls for it.

Run this using ansible-playbook lampstack.yml --ask-become-pass.

So, that’s a playbook. Let’s take this and expand it a little to install an entire LAMP stack. Update the file to look like this:

- hosts: LAN
  gather_facts: yes
  tasks:
  - name: update apt cache
    apt: update_cache=yes cache_valid_time=3600
  - name: install all of the things
    apt: name={{item}} state=present
    with_items:
      - apache2
      - mysql-server
      - php
      - php-mysql
      - php-gd
      - php-ssh2
      - libapache2-mod-php
      - python3-pip
  - name: install python mysql library
    pip:
      name: pymysql
  - name: start apache
    service: name=apache2 state=started enabled=yes
    notify:
    - restart apache
  handlers:
    - name: restart apache
      service: name=apache2 state=restarted

Note that we’ve moved our apt cache update operation into its own task because we’re going to be installing several things and we don’t need to update the cache each time. Then, we use a loop. The {{item}} variable repeats the apt installation with all the package names indicated in the with_items group. Finally, we use Ansible’s pip module to install a Python connector that enables the language to interact with the MySQL database.

There are plenty of other things we can do with Ansible, including breaking more complex playbooks out into reusable units known as roles. You can then reuse these roles across different Ansible projects.
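As a minimal sketch (the role name apache here is hypothetical), a role’s tasks live in their own directory, such as roles/apache/tasks/main.yml, and a playbook applies the role like this:

- hosts: LAN
  roles:
    - apache

Ansible then picks up the tasks, handlers and variables defined inside the role’s directory and applies them to the chosen hosts.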

When you’re writing Ansible scripts, you’ll probably run into plenty of errors and speed bumps that will send you searching for answers, especially if you’re not a master at it. The same is true of general sysadmin work and bash scripting, but if you capture that research in an Ansible playbook, you’ll have a clear and repeatable recipe for future infrastructure deployments that you can apply at scale.

Getting started with Kubernetes


Danny Bradbury

19 Mar, 2020

Container systems like Docker are a popular way to build ‘cloud native’ applications designed for cloud environments from the beginning. You can have thousands of containers in a typical enterprise deployment, and they’re often even more ephemeral than virtual machines, appearing and disappearing in seconds. The problem with containers is that they’re difficult to manage at scale; load balancing and updating each one in turn via the command line quickly becomes impractical. It’s like trying to herd a bunch of sheep by dealing with each animal individually.

Enter Kubernetes. If containers are sheep, then Kubernetes is your sheepdog. You can use it to handle tasks across lots of containers and keep them in line. Google created Kubernetes in 2014 and then launched the Cloud Native Computing Foundation (CNCF) in partnership with the Linux Foundation to offer it as an open project for the community. Kubernetes can work with different container systems, but the most common is Docker.

One problem that Kubernetes solves is IP address management. Docker manages its own IP addresses when creating containers, independently of the host virtual server’s IP in a cloud environment. Containers on different nodes may even have the same IP address as each other. This makes it difficult for containers on different nodes to communicate with each other, and because containers on the same host share the same host IP address space, they can’t use the same ports. Two containers on the same node can’t each expose a service over port 80, for example.

Understanding Kubernetes pods and clusters

Kubernetes solves problems like this by grouping containers into pods. All the containers in a pod share the same IP address, and they can communicate with each other over localhost. Kubernetes exposes these pods as services (an example might be a database or a web app). Collections of pods and the nodes they run on are known as clusters, and each container in a clustered pod can talk to containers in other pods using Kubernetes’ built-in name resolution.

You can have multiple pods running on a node (a physical or virtual server). Each node runs its own kubelet, which ensures the containers on that node are running as specified, along with a kube-proxy, which handles network communication for the pods. Nodes work together to form a cluster.

Kubernetes manages all this using several components. The first is the network overlay, which handles networking between different pods. You can install a variety with a range of capabilities, including advanced ones like the Istio service mesh.

The second component is etcd, which is a database for all the objects in the cluster, storing their details as a series of key:value pairs. etcd runs on a master node, which is a machine used to administer all the worker nodes in the cluster. The master node contains an API server that acts as an interface for all components in the cluster.

A node controller running on the master node handles when nodes go down, while a service controller manages accounts and access tokens so that pods can authenticate and access each other’s services. A replication controller creates running copies of pods across different nodes that run the same services, sharing workloads and acting as backups.

Installing and running Kubernetes

Installing Kubernetes will be different on each machine. It runs not just on Linux, but also on Windows and macOS. In summary, you’ll install your container system (usually Docker) on your master and worker nodes. You’ll also install Kubernetes on each of these nodes, which means installing these tools: kubeadm for cluster setup, kubectl for cluster control, and kubelet, which registers each node with the Kubernetes API server.

You’ll enable your kubelet service on each of these nodes so that it’s ready to talk to the API. Then initialise your cluster by running kubeadm init on your master node. This will give you a custom kubeadm join command that you can copy and use to join each worker node to the cluster.
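A minimal sketch of that flow looks like this (the pod network range, master IP address, token and hash are placeholders; kubeadm init prints the real join values for your cluster):

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Then, on each worker node:

$ sudo kubeadm join 192.168.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>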

After this, you can create a pod. You define the pod’s characteristics using a configuration file known as a PodSpec. This is often written in YAML (“YAML Ain’t Markup Language”), which is a human- and machine-readable configuration format. Your YAML file will define the namespace that your pod exists in (namespaces partition a single cluster into virtual clusters, so different teams or projects can share the same machines).

The PodSpec also defines the details for each container inside the pod, including the Docker images on which they’re based. The file can also define a volume shared by the containers in the pod so that they can store data on disk and share it. You can then create the pod with a single command, kubectl create, passing it the name of your YAML file.
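A minimal PodSpec sketch might look like the following; the pod name, namespace and image are illustrative, and the namespace would need to exist first (kubectl create namespace demo):

apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: demo
spec:
  containers:
  - name: nginx
    image: nginx:1.19
    ports:
    - containerPort: 80

Saved as pod.yaml, it can be created with kubectl create -f pod.yaml.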

Running copies of a pod for resilience and workload sharing is known as replication, and a collection of replicated pods is called a replica set. While you can handle replica sets directly, you’ll often control them using another kind of Kubernetes object known as a deployment. These are objects in the Kubernetes cluster that you use to create and update replica sets, and clear them away when you’re done with them. Replica sets can contain many pods, and a deployment gives you a strategy to update them all (adding a new version, say).

A YAML-based deployment file also contains a PodSpec. After creating a deployment (and therefore its replica pods) using a simple kubectl create command, you can then update the whole deployment by changing the version of the container image it’s using. Do this using kubectl set image, passing it a new version number for the image. The deployment now updates all the pods with the new specification behind the scenes, taking care to keep a percentage of pods running at all times so that the service keeps working.
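Sticking with the illustrative names above, updating a hypothetical deployment called web from one nginx image version to another might look like this:

$ kubectl set image deployment/web nginx=nginx:1.20

$ kubectl rollout status deployment/web

The second command simply watches the rollout until all the replica pods are running the new image.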

This is all great, but how do we actually talk to and reference these pods? If we have, say, ten web server pods in a replica set, we don’t want to work out which one’s IP to visit. That’s where Kubernetes’ services come in. We define a service that exposes that replica set using a single IP address and a service name like ‘marketing-server’. You can connect to the service’s IP address or the service name (using Kubernetes’ DNS service) and the service interacts with the pods behind the scenes to deliver what you need.
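A quick way to do that for the hypothetical web deployment above is kubectl expose, which creates the service object for you:

$ kubectl expose deployment web --name=marketing-server --port=80

Other pods in the cluster can then reach it by the name marketing-server through Kubernetes’ built-in DNS, and the service load-balances requests across the pods behind it.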

That’s a short introduction to Kubernetes. As you can imagine, there’s plenty more to learn. If you’re hoping to manage cloud-native services in any significant way, you’re going to bump up against it frequently, so it pays to invest the time in grokking this innovative open source technology as much as you can. With Kubernetes now running on both Amazon Web Services and Azure alongside Google’s cloud service, it’s already behind many of the cloud services that people use today.