All posts by Steve Cassidy

What is serverless computing?


Steve Cassidy

14 Jul, 2020

If you’re looking to move away from hybrid cloud and pack up your on-premises servers altogether, but are worried about how your applications will run in the cloud, serverless computing could be the right strategy for you.

Serverless computing? As in running everything on desktops?

Ah, no – serverless computing means building your server functions in the cloud, rather than on a local, physical machine. In this way, they can benefit from demand-driven management, spinning up as required then closing down again when, for example, the entire human race decides to stay at home for several months. Ideally, functions should be fully portable, eschewing platform-specific services and tricks, so they’ll run in any data centre.
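
To make that concrete, here’s a minimal sketch of the kind of stateless function involved, written against an AWS Lambda-style Python handler signature purely for illustration; the field names and return shape are assumptions, and the same idea maps onto other providers’ function services.

```python
import json

# A minimal, stateless serverless function: the platform spins it up on demand,
# runs it, and tears it down again. Nothing persists between invocations, so
# everything the function needs arrives in the event payload.
def handler(event, context):
    order_id = event.get("order_id", "unknown")   # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"processed order {order_id}"}),
    }
```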

So we can go serverless and retire our old servers?

It’s unlikely that you’d be able to do a straightforward lift-and-shift of your old, badly behaved suite of IT resources up into the cloud. Any function that depends on some older technology (say, for example, a Windows dialog box) will have to be rebuilt with modern tools that embrace scalability and movability. Indeed, even once you’ve moved, it might make sense to keep your older servers running in parallel for some time, as a fallback in case of unforeseen hiccups. 

Could we at least streamline our local admin team?

If that’s your plan, make sure they’re ready to come back on a consultancy basis: you’re going to need their knowledge more than ever while the development is in progress, and likely for some time afterwards. Only the very simplest of businesses can make a consequence-free shift, and they’re still likely to need some techie oversight to ensure everything is scaling and behaving like it should.

Surely moving our everyday line-of-business functions off-site is going to slow things down?

If you have a big on-site compute load then it might, but for outward-facing services – that is, ones used by your customers rather than your employees – moving to a scalable architecture could speed things up. What’s more, a serverless approach easily allows for multiple instances so you can, for example, create different versions of your site for different users and markets.

Is it wise to put our critical functions in the hands of a third party?

Part of the beauty of the serverless model is that you’re not tied to any single provider. If there’s a problem with one host, you can just pop a serverless image onto a flash drive and fire it up somewhere else. Running instances here and there might not be cheap, but it’s a much more resilient position than one where yanking out a 13A lead will scuttle your whole operation.

Are there other benefits?

Most popular business apps are now very old: histories stretching back 20 years or more are not uncommon. That means you’re working with two decades of accumulated bug fixes, function changes and bloat. The process of moving to a serverless model gives you a chance to take stock, assess which parts of your code portfolio could work better in the cloud, and to re-engineer any broken or backward functions. 

So when will our everyday apps go serverless?

Basic, network-shared apps aren’t going to magically transform into serverless versions: the cost of moving outweighs the advantages. However, it may be that service providers (like your card payment processor) migrate you to serverless because you’re only using one specific part of their offering, so it makes sense for them to only fire up the code you’re using. That move will probably be entirely invisible to you, though – which is just as it should be. 

Managing cloud lifecycles


Steve Cassidy

20 Feb, 2020

Hardly anybody talks about lifecycles in IT, least of all me. I don’t see the end of use of any device as a special occasion to be marked and celebrated: I still have working PCs from the late 1990s. Even so, I had to stop and pay attention when I heard a senior exec from Arm – the world’s most popular CPU maker no less – mention that major cloud players are now reinvesting in their data centres on a yearly basis.

This is an incredibly short lifecycle, but when it comes to the cloud there are multiple things that might need to be retired, upgraded or otherwise cycled. One is the data centre hardware itself; this might seem like a very fundamental refresh, and it could transform the customer experience, making things either faster or slower. But, in these days of virtual machines and serverless design, it might equally be completely invisible from the outside, except where it leads to a change in tariffs.

Then there are upgrades to the orchestrator or container OS. These tend to happen with little or no notice, excused by the urgency of applying the latest security updates. As a result, any dependencies on old code or deprecated features may only come to light on the day of the switch. As a savvy cloud customer, your best defences against such upheaval are to spread your systems across multiple suppliers, maintain portfolios of containers running different software versions and take a strong DevOps approach to your own estate.

Other scenarios include the sort of big move when a beta site is finally promoted and becomes the main site, and the eventuality of a cloud provider being taken over by another, resulting in a burst of service changes and tariff renegotiation. Remember, lots of high-tech businesses operate with the express intention of being acquired at some point, once they have a good portfolio of customers, a steady revenue stream and hence a high share price. Such a strategy is music to the ears of venture capitalist backers, eager to recoup their investment and profits; I will leave you to consider whether it’s well suited to cloud services, which place a high emphasis on continuous and uninterrupted service. There’s a reason why many cloud company contracts are all about inhibiting customer mobility.

Migration patterns

It’s clear that, when we’re talking about the cloud, “lifecycle” entails a spread of quite different activities, and bringing them all together under one banner doesn’t do you much good: the lessons learnt from going through one of the above events won’t do much to help with others. 

However, the situation doesn’t have to be complicated – at least not if you actually have developers, and aren’t just stuck with a narrow selection of package providers. If you are in this lucky position, and you’ve been listening to at least the tone of your development team’s comments on the various fads and fashions in development, there’s a fair chance that your IT portfolio will have been built with the sorts of tools that produce nice, mobile and tablet-friendly, infinitely resizeable, bandwidth-aware, cloud-scalable websites. If that’s what you’re working with, it can be relatively easy to ride out lifecycle events.

Unfortunately, this is by no means universally the case, especially not for systems that have been around long enough for large parts of the business to have been built on them. If you already have a code base that works, it can be tough to secure the development time and cost commitment to move it from (say) QuickBASIC or COBOL onto Ruby on Rails, Java or PHP. 

Yet this is itself one of the most significant lifecycle events, or at least part of it. It may seem a stretch to refer to code migration as a lifecycle end, but when you first unleash your prototype on a public cloud platform, nobody really knows how it’s going to perform, or how resource-hungry it might be, and your production systems person is not going to want those kinds of unknowns messing up their carefully controlled production space. The requirements for releasing that prototype into the big bad world thus emerge from the development and testing process. 

That output ought to, at least, incorporate a statement about what needs to be done, and after how long, with an eye on three quite distinct systems. First, there’s the prototype in its current state, which at this point is probably still languishing on Amazon or Azure. Then, of course, there’s the predecessor system, which is going to hang around for a couple of quarters at least as your fallback of last resort. Then there’s the finished, deployed product – which, despite your diligent testing, will still have bugs that need finding and fixing. Redevelopment involves managing not one, but three overlapping lifecycles.

If you’re wondering how much of this is specific to the cloud, you have a good point. You would have had very similar concerns as a project manager in 1978, working in MACRO-11 or FORTRAN. Those systems lack the dynamic resource management aspect of a cloud service, but while cloud suppliers may seek to sell the whole idea of the “journey to the cloud”, for most businesses reliability, rather than flexibility, remains the priority. 

The question, indeed, is whether your boringly constant compute loads are actually at the end of their unglamorous lifecycle at all. It’s possible to bring up some very ancient operating systems and app loads entirely in cloud-resident servers, precisely because many corporates have concluded that their code doesn’t need reworking. Rather, they have chosen to lift and shift entire server rooms of hardware into virtual machines, in strategies that can only in the very loosest sense be described as “cloud-centric”.

Fun with the law

Despite the best efforts of man and machine, cloud services go down. And when it happens, it’s remarkable how even grizzled business people think that legally mandated compensation will be an immediate and useful remedy. Yes, of course, you will have confirmed your provider’s refund and compensation policy before signing up, but remember that when they run into a hosting issue, or when their orchestrator software is compromised by an infrastructure attack, they will suddenly be obliged to pay out not just for you, but for everybody on their hosting platform. What’s the effect going to be on their bottom line, and on future charges?

If you’ve been good about developing a serverless platform, hopping from one cloud host to another isn’t going to be a big issue. Even if you’re in the middle of a contract, you may be able to reduce your charges from the cloud provider you’re leaving, simply by winding down whatever you were previously running on their platform. After all, the whole point of elastic cloud compute is that you can turn the demand up and down as needed.

Sometimes you might end up in the opposite situation, where you reach the end of a fixed-term contract and have no option but to move on. This comes up more often than your classic techie or development person imagines, thanks to the provider’s imperative to get the best value out of whatever hardware is currently sitting in the hosting centre. If there’s spare capacity in the short term, it makes sense for the vendor to cut you a time-limited deal, perhaps keeping your cloud portfolio on a hosting platform from a few years ago and thereby not overlapping the reinvestment costs on their newer – possibly less compatible – platform.

Hardware and software changes

For some reason that nobody seems minded to contest, it’s assumed in the cloud industry that customers will be agile enough to handle cloud vendors making root and branch changes to the software platform with effectively no notice. You come into the office with your coffee and doughnuts, to be greeted by a “please wait” or a similarly opaque error, which means that your cloud login and resources are now being managed by something quite new, and apparently untested with at least your password database, if not the content of your various memberships and virtual machines. 

Most people active in IT operations management would not want to characterise this as a lifecycle opportunity. That particular field of business is particularly big on control and forward planning, which are somewhat at odds with the idea of giant cloud suppliers changing environments around without warning. When you and 100 million other users are suddenly switched to an unfamiliar system, the behaviour you have to adopt comes not from the cloud vocabulary, but rather from the British government: we’re talking about cyber-resilience. 

If that sounds like a buzzword, it sort of is. Cyber-resilience is a new philosophy, established more in the UK than the US, which encourages you to anticipate the problem of increasingly unreliable cloud services. It’s not a question of what plan B might look like: it is, rather, what you can say about plan Z. And that’s sound sense, because finding your main cloud supplier has changed its software stack could be as disastrous for your business as a ransomware attack. It can also mark a very sharp lifecycle judgement, because your duty isn’t to meekly follow your provider’s software roadmap: it’s to make sure that a rational spread of cloud services, and a minimalist and functionally driven approach to your own systems designs, gives you the widest possible range of workable, reachable, high-performance assets. 

Don’t panic!

If you’re already invested in cloud infrastructure, this talk might seem fanciful; in reality, few businesses experience the full force of all these different scenarios. The biggest difficulties with the cloud usually involve remembering where you left all your experiments, who has copies of which data sets, and how to identify your data once it skips off to the dark web. The dominant mode here is all about things that live on too long past their rightful end, and that’s slightly more manageable than the abrupt cessations of access or service we’ve been discussing.

Even so, it’s important to carry out the thought experiments, and to recognise that lifecycles can be chaotic things that require a proactive mindset. One could even say that the lifecycle of the “lifecycle” – in the sense of a predictable, manageable process – is coming to an end, as the new era of resilience dawns.

Seven IT upgrades that pay for themselves


Steve Cassidy

8 Oct, 2018

In business, the bottom line is the bottom line. It’s all about making – or saving – more money than you spend.

When it comes to IT projects, however, things always prove a little more complicated. They require you to make a bunch of assumptions or projections about how a system is likely to benefit a business, and balance those against initial setup costs and additional expenses like training and the removal of legacy equipment. As such, it’s not always easy to guarantee that a return on investment will materialise.

Those various complications add up to a total cost of ownership, and that total cost can be a red flag for senior management, particularly if they lean towards an “if it ain’t broke…” attitude.

Of course, some projects are easier wins than others. We’ve identified a handful of IT upgrade projects – some practical and some more strategic – that are almost guaranteed to benefit almost any business. If your organisation hasn’t started these already, it should certainly be considering them.

1. Get virtual

There’s something reassuringly tangible about physical hardware, and, although we’re nearing the end of 2018, there are plenty of hardware resellers out there that will happily supply you with a room full of servers.

If you want something more cost-effective, virtual machines are by far the best option. The great bit is that you don’t have to go all-out to see immediate benefits, as running just two VMs in a single server can help you cut your physical footprint in half.

And of course, that’s the most conservative setup. The big cloud hosts and enterprise IT people routinely wedge in 20 server instances per physical machine, and technologies such as Docker and container virtualisation can take the figure higher still. You don’t need super-powered hardware to do it, either: most individual servers run at under 5% load most of their lives.
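
As a rough illustration of that last point, here’s a hedged sketch (assuming the third-party psutil package is installed) for checking whether a given server really is as idle as that 5% figure suggests before earmarking it for consolidation.

```python
import psutil

# Sample CPU utilisation once a second for a minute and report the average.
samples = [psutil.cpu_percent(interval=1) for _ in range(60)]
average = sum(samples) / len(samples)

print(f"Average CPU load over the last minute: {average:.1f}%")
if average < 5:
    print("Likely a candidate for consolidation onto a VM or container host.")
```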

Yes, there are plenty of people around who have been burned by too-early, too-ambitious attempts at wholesale virtualisation. But in the decade or more that hypervisor technology has been mainstream, things have only been getting better and easier. Most of the annoying bugs that bit early adopters are mere painful memories.

2. Offload into the cloud – where it makes sense

I’ve seen it argued that the cloud gives you the best return on investment in the business; after all, since cloud services are classified as a running cost rather than a capital expenditure, you’re getting a benefit without technically making an investment. However, if your business already owns a rack of perfectly good servers, mothballing them in favour of hosted services doesn’t maximise your return on that investment – it minimises it.

Indeed, if you unthinkingly shift everything into the cloud you’re almost certainly throwing money away. Azure VMs can cost around $200 a month in fairly low-usage situations, for example. That’s a year’s worth of running costs for a small, sensibly specified business server. Cloud providers emphasise that such pricing covers spikes in compute power, for those distressing days when you just don’t have enough of your own. But do you really need to pay distress-load rates for a year-round, non-distress requirement?

It’s also important to be realistic about the value of what you’re replacing. Quite a lot of migrations are based on absurd comparative costings. I recently walked through a bank’s server room and found it filled with 7U servers, each with a single CPU sitting in a four-way motherboard, and a single 9GB hard disk occupying its 12-tray RAID enclosure. Clearly, some rationalisation was overdue; I might have recommended that the racks be consolidated into a single 12-core, 1U server and a storage area network (SAN), offering a more flexible performance envelope and more easily controlled costs going forward.

However, the bank had decided that cloud was the way to go – because it had accounted for each server in each rack as a cost of £250,000. That huge sum had been arrived at by factoring in all sorts of considerations such as premises, insurance, and so on, which of course could not be ditched by moving into the cloud. Once it became apparent that the cost-saving was likely to be minimal, the argument for the cloud seemed far weaker. Then the first few emails announcing tariff shifts from their hosting provider came in.

The lesson is that you should engage with the cloud, but only where it’s cost-effective to do so. Don’t be swayed by TCO comparisons that include irreducible overheads such as pension contributions, plus inflated costs for service and inappropriately specified top-end gear. Most corporate servers are perfectly capable of participating in a well-designed hybrid cloud deployment: this lets you extract the maximum value out of the assets you already own.

3. Outsource IT functions to specialists

This is a superset of the cloud option, but it’s a more multi-faceted idea. The logic, though, is simple: “computer people” are broadly the same no matter what industry they inhabit, so it makes sense to treat them as a commodity. Better career prospects for them, less admin for you: you can focus your budget on people who advance your business.

The catch is the notion that all IT bods are created equal. Back in about 1998, you could get away with that assumption. But today it’s likely that IT is the core of your business: it’s the machinery that delivers your products or services, the way staff communicate and even the way you interact with customers. Think about that and it’s clear that you don’t want all the knowledge of how your processes work to be left in the care of someone who has no particular stake in your business.

Even if everything goes well, there’s the question of continuity when you switch or leave providers. We’ve all heard stories of online systems asking for passwords and credentials nobody has, because the setup was handled by a contractor.

This is another area where the return looks great if you focus only on the balance sheet – but it’s key to weigh the benefits carefully against the potential savings. You may decide to outsource only certain roles, while keeping the engineers in-house.

4. Get on board with Web 2.0

You want your business’ web presence to be accessible to as many people as possible, using as many different devices as possible, for as little as possible. That’s especially true for sites that do business in the browser, such as online retailers. It’s no surprise, then, that the great and grand retail houses run continuous development models; they invest in making it as easy as possible for customers to spend money, without having to think too hard about how to use features or a particular platform.

So if your site doesn’t work like theirs do, why not? Of course, most IT-savvy businesses now have histories of product buying and code cutting and document production. It may seem a challenge to shift from historic architecture over to a modern, adaptive design and presentation that accommodates phones and tablets as well as desktop browsers.

But if you can follow the models of the established online giants, your visitors will immediately know how to interact with you: in effect, they’ll come to you having been organically, socially trained.

Just remember that it’s about more than aping the appearance and feel of a site. It can be more discouraging and frustrating for customers if a site looks familiar, but doesn’t work in the same way as other sites.

5. Don’t just call – hyperconverge!

The name sounds futuristic, but hyperconvergence mostly just means moving telephony onto VoIP and bringing it within the purview of your IT department. This can be a great money-saver, because phone costs are a big fat number on most larger businesses’ balance sheets. Run your phone calls over your network and you cut out a lot of phone-specific line items. No more great big hot 50V power boxes buzzing away in a cupboard, with those odd button-festooned phones running from them: simply give everyone a free desktop app and a headset.

It’s the same persuasive rationale as the original Skype for home users, translated into businesses. And it’s becoming increasingly powerful as most businesses are experiencing lower volumes of calls these days than a decade or two ago. It makes perfect sense to ditch the high-cost telephone infrastructure and replace it with something that can also be used for internal and external videoconferencing.

The only question is whether your company network is up to the job. It’s a good bet that even your senior IT staff don’t know exactly how many Ethernet packets are dropped per year, month, day or hour. That’s because most applications can handle the odd network hiccup without missing a beat – but that’s very hard to do with audio, and stuttering, glitchy phone calls create a very unprofessional impression.

As a result, some companies with business-critical telephony traffic allow their phone provider to put in a whole separate infrastructure for voice; or they might give VoIP top priority across the entire LAN and effectively squeeze the data part out of the equation. Whatever approach you choose, it’s likely to provide an eye-opening insight into the state of your network, and a chance to make your LAN work more for its money.
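
If you want to put an actual number on those dropped packets before committing voice traffic to the LAN, a quick sketch along these lines (again assuming the psutil package is available) will read the per-interface counters the operating system already keeps.

```python
import psutil

# Report dropped packets per network interface as a fraction of total traffic
# (counters are cumulative since boot).
for nic, c in psutil.net_io_counters(pernic=True).items():
    dropped = c.dropin + c.dropout
    total = c.packets_recv + c.packets_sent
    if total:
        print(f"{nic}: {dropped} dropped of {total} packets "
              f"({100 * dropped / total:.4f}%)")
```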

6. Let employees bring their own devices

Like many bright ideas, this one looks like a no-brainer at first glance – just think how much you’ll save if you don’t have to buy and support a fleet of laptops and smartphones! – but it comes with a lot of caveats.

One is simply about how shiny consumer devices can colour the way your employees do their jobs. If your web developers are all using iPhones, that doesn’t promise a great customer experience for Android users. At the least, you need to provide a spread of platforms for testing, including a grubby old Windows PC. That ought to shake up the inexperienced developer who always upgrades to the latest iPad.

There’s also the management overhead to think about. If employees are reading work emails, accessing work servers and writing work code on their own devices, you need clear policies handling issues like privacy and intellectual property, and you’ll also want some sort of MDM (mobile device management) system in place to deal with lost phones and leavers.

You’ll still need to think about segmenting tasks: there are simple everyday jobs that can be left out in BYO land, and some complex, critical ones that can’t. It’s extremely unlikely you’ll be able to completely abdicate responsibility for client systems, so when it comes to that RoI meeting it’s best not to over-promise the potential of BYOD.

7. Embrace open-source

Many businesses rely on a library of bespoke scripts and cobbled-together apps that do specific tasks with a minimum of fuss. Unfortunately, these normally also come with a minimum of documentation, and no guarantee of compatibility with future OS releases, unfamiliar networks and so on.

Even if it seems less efficient in the short run, you’re generally better off using established systems. In particular, if you can build your processes on open-source software, a huge amount of testing, upgrading and support comes for free.

Of course, in practice it’s not always quite that easy. You do need to plan for what happens when the relevant coding team has an internal hissy fit and suddenly bifurcates into two coding teams. Even so, that’s a small price to pay, compared to your average industry-partner quote for a custom solution. And when it comes to taking on new employees, you may well be able to draw on a huge pool of existing open-source expertise.

However, there aren’t that many OSS projects that address the kind of industry-specific IT jobs that tend to attract bespoke solutions in the first place. If you really want to drive RoI then it’s worth exploring to what extent it’s possible to structure your processes around the open-source tools available, rather than vice versa.

What is cloud bursting?


Steve Cassidy

4 Oct, 2018

Cloud bursting is a rather nebulous buzzword that sounds cool but, I’m afraid, has nothing to do with Kate Bush. Instead, it’s the term used to describe a setup where you run your business mostly on your own kit, but also have a set of cloud accounts sitting idle, ready to take on extra “bursts” of work when demand peaks.

Isn’t that already the idea behind hybrid cloud?

Yes and no. Hybrid cloud is an umbrella term for dividing up your computing resources across local and off-premises servers; cloud bursting is a specific way of using those resources.

In practice, a cloud burst setup might use containerised VMs and some form of load orchestration package to shift containers to locations where user sessions can reach them. It will probably require quite a lot of work at the database design level as well, so that this too can be replicated, multi-homed or remotely accessed. In short, cloud bursting isn’t an architecture or a computing philosophy, but a capability of your entire technology estate.


Is it just an agile implementation of hybrid cloud?

That’s a question of semantics. A cloud bursting setup should quickly respond to unforeseen changes in demand, but this isn’t quite what’s conventionally meant by “agile”. Agility is about being able to retool your code quickly to adapt to changing circumstances, whereas cloud bursting requires everything to be in place well before the high-load day comes.

You need to have your cloud accounts in place and paid up, you need to be sure that your code platform will run on the cloud, and you need to make sure that it’s actually capable of meeting the demands you want to place on it. Doing this properly involves a great deal of pre-emptive development and testing. I’d be very wary of a business that went into a cloud bursting project with an “agile” mindset.

Is a cloud bursting setup cheaper than regular cloud hosting?

It might work out that way, but the two models aren’t perfectly comparable. Hybrid cloud tends to imply an IaaS model, whereas cloud bursting attracts most interest from heavy SaaS users.

Cloud bursting also relies on your orchestration software correctly working out when to spin up the offsite services and incur the associated charges – which involves an element of voodoo, as it’s exquisitely difficult to distinguish between blips and booms as they’re happening. A hybrid cloud setup with plenty of slack capacity may or may not work out cheaper, but it’s likely to be more dependable, and have a more predictable cost.
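
To illustrate the blips-versus-booms problem, here’s a hedged sketch of the kind of rule an orchestrator might apply: only burst into the cloud when load has stayed above a threshold for a sustained window, so a momentary spike doesn’t trigger offsite charges. The threshold, the window size and the idea of a spin_up_cloud_instances() call are all assumptions for illustration.

```python
from collections import deque

WINDOW = 10        # number of recent load samples to consider
THRESHOLD = 0.8    # burst once sustained load exceeds 80% of local capacity

recent = deque(maxlen=WINDOW)

def should_burst(load_sample: float) -> bool:
    """Return True only when every sample in the window is above the threshold."""
    recent.append(load_sample)
    return len(recent) == WINDOW and min(recent) >= THRESHOLD

# In practice an orchestrator would call something like spin_up_cloud_instances()
# (a hypothetical placeholder) whenever should_burst() flips to True.
```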

When is cloud bursting the right answer?

There are scenarios where it makes sense, but they’re mostly inside the world of IT itself. For example, if you’re an antivirus developer combatting zero-day exploits, you’ll want the ability to scale your download links out into the cloud on bad virus days. Some classes of simulation can also easily parcel up workloads and hand them off to compute nodes with no regard for where those nodes are hosted. Unfortunately, this model has become controversial, since it’s currently mostly employed by Bitcoin-mining trojans.

What’s the key downside of a cloud bursting approach?

Finance directors probably aren’t going to love cloud burst projects, because (as we’ve noted) the costs are unpredictable by design. What’s more, since the whole point of cloud bursting is that you don’t use it regularly, it’s only when you really need to fire up those cloud servers that you discover that a recent update has unexpectedly broken your meticulously crafted handover routines.

These inherent risks will tend to push most businesses back in the direction of a more traditional hybrid architecture.


Why are vendors pushing cloud bursting as the next big thing?

I suspect that the vendors aren’t trying to get you specifically into cloud bursting. They want to make you think more generally about where your computing resources live.

A little research, and my own anecdotal experience, suggests that very few companies have actually committed to a full-on cloud bursting model – which probably tells you everything you need to know.

How to boost your business Wi-Fi


Steve Cassidy

17 Jul, 2018

There’s a sense in many offices that Wi-Fi represents a great break for freedom – as if your old Ethernet infrastructure was some kind of authoritarian dystopia. There’s something romantic in that idea, but it’s apt to turn sour when the realisation dawns that an overloaded or poorly configured wireless network can be every bit as flaky as a wired one.

Indeed, the experience can be even more disagreeable if you don’t understand what’s going on. I’ve seen one business resort to adding more and more DSL lines and Wi-Fi-enabled routers, to try to resolve an issue where wireless users were intermittently losing internet access. Nothing helped: in the end, it turned out that the wireless network itself was working fine. The problem was the ISP rotating its live DNS servers in some baroque plan to knock out hackers or spammers.

So lesson one is: before you start planning to upgrade your wireless provision, first of all ask yourself what the problem is you’re trying to solve, and then investigate whether it could conceivably be caused by bugs or bottlenecks elsewhere on the network. If that’s the case then a large, expensive Wi-Fi upgrade project may be no help to you at all. You might get better results from simply spending a few quid to replace old trampled patch leads.

1 – Multiple services make for resilient networks

When people talk about “boosting” their Wi-Fi, they’re almost always talking about speed. But there’s no single way to increase the throughput of a wireless network.

It may be that you need a ripout and redesign of your entire setup. Or it might be a case of tracking down a misconfiguration, in which all the machines simply sit showing their busy cursors because of a poor DSL link or a foolishly chosen cloud dependency.

The culprit might not even be connected to your network: it could be a machine like an arc welder that generates RF interference as a by-product of its regular duties, and flattens the wireless connection of any device within a 10m radius. Upgrading your Wi-Fi is rarely just about picking a quicker router.

Speed isn’t the only consideration, either. Do you want to control or log guest accesses – or will you in the future? Should you prioritise internal staff or internal IT people’s allocated bandwidth? Might you even want a honeypot machine to divert and ensnare would-be intruders? These functions are likely to exceed the capabilities of your standard small plastic box with screw-on antenna ears.

If your Wi-Fi is important enough to warrant an upgrade then don’t limit your thinking (or your spend) to a slightly better router. Finally, think about robustness. Investing in multiple DSL lines with multiple providers makes it harder for random outages and blips to knock your business offline. Being able to route internally over an Ethernet programmable router (look for “layer 3 routing and VLANs” in the description) at least gives you some ability to respond on a bad day.

2 – Remember, it’s radio, not X-rays

If you’re ready to upgrade your wireless network – or to set one up for the first time – then you should start by taking a look at your premises. You need to work out how you can achieve reasonably uniform coverage. You can do the basic research by just wandering about the building holding a smartphone loaded with a free signal-strength metering app.

There are much more satisfyingly complex devices than that, of course. These may become useful when you have the problem of a wireless footprint that overlaps with that of your neighbours. The issue might be overcrowded channels, or it might be down to the general weirdness of RF signal propagation, which can mean that you get horrific interference from a next-door network that, by rights, ought to be weak and distant.

Almost never is the solution to boost the transmission power of your APs. Turning the power down on your base stations and installing more of them, in collections that make best use of wired back-links and collective operation, is much more likely to fix dead spots and interference than a single huge, throbbing, white-hot emitter in the corner of your office.

3 – Wi-Fi over a single cable

Once you start shopping for business-grade Wi-Fi gear, you’ll quickly encounter Power over Ethernet (PoE). This can be a convenient solution for devices that don’t draw much power and don’t necessarily want to be situated right next to a mains socket.

However, PoE can also be a dangerous temptation to the rookie network designer. “Look, it just runs off one wire – without the annual testing and safety considerations of a 240V mains connection!”

The catch is that the power still has to come from somewhere – most often a PoE-capable switch. This might be a convenient way to work if you want to run 24 access points from a single wiring cupboard with one (rather hot) Ethernet switch carrying the load. But very few businesses require that kind of density of access points. It’s more likely you’ll have only a few PoE devices.

So for your medium-sized office, you’ll probably end up acquiring and setting up additional PoE switches alongside your main LAN hardware – which is hardly any simpler or cheaper than using mains power. It also brings up the situation of having your wireless estate on one VLAN and everything else on another.

4 – Strength in numbers

Adding more APs is almost always better than trying to increase signal strength, but it does have implications for management.

Businesses taking their first steps beyond a traditional single-line DSL router often have a hard time converting to a setup where access control and data routing are entirely separate jobs from the business of managing radio signals, advertising services and exchanging certificates.

How you handle it depends – at least partly – on what sort of access points you’ve chosen. Some firms opt for sophisticated devices that can do all sorts of things for themselves, while others favour tiny dumb boxes with barely more than an LED and a cable port.

The larger your network grows, the more sense the latter type makes: you don’t want to be setting up a dozen APs individually, you want them all to be slaves to a central management interface. That’s especially so if you need to service a site with peculiar Wi-Fi propagation, handle a highly variable load or deal with a large number of guests wandering in and out of the office.

5 – The temptation of SSO

Single sign-on (SSO) is something of a holy grail in IT. The idea is that users should only have to identify themselves once during a normal working day, no matter how many systems they access.

It’s not too hard to achieve when it comes to Wi-Fi access, but the result isn’t a very slick system, on either the network side or the client side. The part of the Wi-Fi login cache that handles SSO – deciding whether a password saved in a web page can be used to sign in to a particular WLAN – is also the part that hotel Wi-Fi systems sniff in order to tag a single location as “definitely my home” and override all other claimants to that tag: set this attribute on your guest Wi-Fi at your peril.

And while it sounds attractive to have to enter just a single password – after which a portfolio of machines, routers and cloud services will recognise your user as already validated – the reality isn’t as great. For one thing, people are used to typing in passwords these days: it isn’t a scary techie ritual any more. You don’t need to shield them from it.

Then there’s the continual and unresolvable fight between vendors as to who owns the authentication database itself. Nobody with a real job to do could possibly keep up with the in-depth technical mastery required to shift from one authentication mechanism to another – but that doesn’t stop various players from trying to tempt you to take up their system or proprietary architecture. The result is an unwelcome chunk of extra complexity for you to master.

6 – Beware compatibility gotchas

On the subject of proprietary approaches, it’s a fact that many base stations and Wi-Fi enabled devices just don’t work together.

Sometimes the problem is about range, or about contention (how many devices in total you can get into one repeater) or concurrency (how many devices can communicate at the same time). Other times it’s an idiosyncratic firmware issue, or some quirky issue with certificates on one side of the conversation, which renders the other side effectively mute.

I’ve seen plenty of firms run into these problems, and the result tends to be cardboard boxes full of phones, still with months on their contracts but unable to connect to the company WLAN since the last upgrade. It’s not a good look for the IT man in the spotlight: «You’ve broken the Wi-Fi!» is an accusation that always seems to come from the best-connected, least calm member of your company.

The real solution is to acknowledge the reality of compatibility issues, and plan for them. You don’t have to delve into the technical minutiae of your shiny new service, but you do need to work out how, and for how long, you need to keep the old one running in parallel to sidestep any generational problems. Thus, your warehouse barcode readers can keep connecting to the old SSIDs, while new tablets and laptops can take advantage of the new Wi-Fi.

If users are educated about this “sunset management” then hopefully they’ll feel their needs are being respected, and legacy devices can be upgraded at a manageable pace and at a convenient time.

7 – Manage those guests

One pervasive idea about Wi-Fi is that it can and should be “free”. It’s a lovely vision, and it has perhaps helped push the telephone companies to cheapen up roaming data access – but within a business it’s a needless indulgence that makes it difficult to fully secure your IT portfolio. After all, it’s your responsibility not to get hacked, nor to facilitate someone else’s hack; opening up your network to all and sundry, with no questions asked, is hardly a good start.

That doesn’t mean you can’t let visitors use your network at all – but it does mean you should give them managed guest access. Think about how much bandwidth you want guests to have, and what resources you want to let them access. Do you want to treat staff and their personal devices as if they were visitors, or do they get a different level of service?

8 – What about cloud management?

The bigger your network grows – the more users, APs and network resources it embraces – the more important management becomes. And it’s not just about convenience but, again, security.

Our own Jon Honeyball became a fan of Cisco’s cloud-based Meraki management service when it enabled him to see that over 3,000 new devices had tickled his wireless perimeter in a week. It’s a statistic that makes for instant decisions in boardrooms. It’s very unlikely that all of these contacts were malicious. Most were probably just cars driving past with Wi-Fi-enabled phones.

Spotting the difference is where threat-detection systems really start to sort themselves into sheep and goats, and that’s something you can operate in-house: you don’t absolutely have to run all your devices from a vendor’s cloud service layer. Your local resources, like separate DSL lines and routers, already sit behind cloud-aggregated, collectively managed base stations.
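
As a hedged illustration of that sheep-and-goats sorting, the in-house version can be as simple as counting how many separate days each unknown device appears: a MAC address seen once is probably a passing car, while one that returns day after day deserves a closer look. The log format here is an assumption for illustration.

```python
from collections import defaultdict
from datetime import datetime

sightings = defaultdict(set)   # MAC address -> set of days on which it was seen

def record(line: str) -> None:
    # Assumed log line format: "2018-07-10T09:14:02,aa:bb:cc:dd:ee:ff"
    timestamp, mac = line.strip().split(",")
    sightings[mac].add(datetime.fromisoformat(timestamp).date())

def persistent_visitors(min_days: int = 3) -> list:
    # Devices seen on several different days are worth investigating.
    return [mac for mac, days in sightings.items() if len(days) >= min_days]
```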

If you’re in a business that doesn’t touch the Wi-Fi from one year to the next, cloud management may hardly matter at all. And while a cloud-based solution may seem to offer security advantages, it’s still necessary to protect your own network, so it’s not as if you can forget about security. Advanced password management for both users and administrators should be an absolute must for any cloud-managed Wi-Fi campus.


How to get the most out of document management


Steve Cassidy

3 Jul, 2018

Document management isn’t a sexy new idea. It’s at least as old as computing itself: as a child, I remember asking my parents why they needed so many little keys on their office keyrings, and getting the answer that these were for their individual, lockable output trays on the various printers and copiers they used.

The documents they handled were so sensitive that even a casual bit of misfiling could spell disaster – and individual secretaries were, even then, becoming anachronistic. So their employer had invested in a complicated, state-of-the-art copier that could keep track of who was making each copy, and deposit each user’s prints securely into their own personal out tray.

Some years later, I happened to meet a programmer who had worked on the Post Office’s OCR project – a project which transformed the business of letter sorting by electronically reading handwritten postcodes and turning them into those little blue dots you used to see printed on the outside of the envelope. It was a necessary evolution, he explained, as the number of letters being processed was outstripping the availability of workers to route the mail by hand.

Like the locking trays, those blue dots were reflections of the supreme importance of pieces of paper – and the growing difficulty of managing them. From one perspective, you might say that the Rise of the Machines began in the 1970s as a direct expression of the need for document management.

That’s not to say that it’s ancient history. Document management remains crucial in the 21st century. For sure, it’s become something much broader than simply keeping track of sheets of paper: the QR code on your smartphone that lets you board a plane is a document, just as much as a letter from your landlord that lets you move into a new office. But the old issues – too many eyes on one type of document, not enough on another – remain as prevalent as they ever were.

All that’s changed is that the modern equivalent of the lockable-tray copier has to deal with those who carry their sensitive data around on an iPad rather than in a cardboard folder (and who don’t necessarily understand the limits of security when it comes to Wi-Fi printing). Indeed, there’s still much to be said for the lockable paper tray, as a metaphor if not a reality. It may go against the optimistic precepts of certain computing gurus, but it’s a practical solution to an everyday problem – and that’s what document management is all about.

Paper trail

Document management can save you a lot of money – and space

Small companies tend to assume that their workflow is too simple to justify investing in anything more than a filing cabinet or two. But even if you don’t need to do much in the way of actual managing, technology can help. One of my old clients, in the course of a deal, ended up holding some sensitive documents that were (even by the standards of these things) very long. It rarely needed to refer to them, but had to retain them securely – which meant dedicating two full-height office cupboards, in a room with a locked door.

As you can imagine, the mere process of scanning in all this paperwork yielded huge rewards. The data was downsized into a single locked drawer, allowing the company to situate two additional staff in the room that had been freed up. In consequence, it ended up increasing its turnover by about a quarter of a million pounds a year.

That might sound like a special case, but that’s the nature of the beast. Talking about document management in general terms has always been a challenge, because it’s in the narrowest, most specialised roles that the technology most visibly pays for itself. Those who really need a lightning-fast write-once storage subsystem already know it; with the more generic stuff, like a simple scan and store process, it can be harder to point to exactly where the benefits are going to justify the investment.

Indeed, I come across plenty of businesses that are suspicious that costly document-management projects are scams or rip-offs. (Then again, that’s not unique to document management – such accusations come up with IT projects of all kinds.)

Factor in a general perception of document management as simple and old-fashioned, and it’s easy to understand why companies baulk at spending money on something that “ought to be easy”. But even if we accept that some aspects of the technology are simple and old-fashioned, that’s no bad thing. It’s a classic geek mistake to think that every modern problem needs a rarefied, compute-intense solution.

Get the hardware right

You might assume that document management starts with a scanner, but it’s nigh-on impossible to do rational document management if your printers aren’t up to the job. If something starts out as paper, there’s a good chance it’s going to get printed out again at some point; we may want to save the environment, but people are more comfortable clutching a nice physical piece of A4 than referring to a digital representation of it.

Modern, superfast scanners such as these by Xerox and Fujitsu are a valuable addition to a digitising office

Indeed, you should probably proceed on the assumption that people are going to print more than you bargain for – and the same applies to scanning. I’ve had arguments with companies who simply refuse to believe the figures for numbers of pages ingested per day into their document management systems.

In short, the best advice is not to skimp on the hardware, even if the initial cost seems higher than you’d hoped. Depending on your needs, you may be able to save money by investing in a big multifunction office printer with its own ADF, so it can tear through big scan jobs in minutes or seconds. By all means, test your procedures with a slow, clunky £29 inkjet MFP before you roll them out, but realise that thousands if not millions of sheets of paper are likely to pass through them before you next come to review your document management needs.

A few more practical points: if you expect to scan lots of big documents, scanners that move the paper past the head, rather than the other way about, are normally much faster.

(Read the PC Pro buyer’s guide to desktop scanners in issue 278, p92, where you’ll see reviews of Brother, Fujitsu, Plustek and Xerox machines. Or jump to the A-List on p18.) If you need to digitise lots of bound documents, camera mounts can grab high-quality snapshots while you turn the pages by hand.

Keep the goal in sight

The ambitions of document management have expanded a long way beyond those early lock-boxes. One currently fashionable idea is seeking to connect together the many different apparitions of a customer across your diverse products and systems. The benefits are obvious, but if your CRM is in the cloud, your email server is 8,000 miles away, and your document scans are right beside you, it becomes quite a major project. Ask yourself whether it’s worth the investment: you may well find that you’re dealing with Pareto’s 80/20 rule, as 80% of the data you’re storing could well end up sitting dormant for the entirety of its retention cycle.

Indeed, while the benefits of document management may not all be obvious, there’s much to be said for keeping things simple. As I’ve mentioned, it’s an easy mistake to try to push the technology out of its comfort zone, and beyond what’s really advantageous to your company. Keep focused on the practicalities and you won’t go far wrong.


The identity crisis: Password managers and your business


Steve Cassidy

3 Apr, 2018

It used to be the case that when someone said they were having an “identity crisis”, they would go on to tell you about their imaginary friend. However, this is 2018 and issues of identity are all over the news – and of the utmost importance to businesses.

If you’re the go-to person for an organisation of any size or scale, you’ll know that problems with passwords have gone from a quiet, almost academic bit of admin to a headline-grabbing, company-destroying risk. So every business should be asking: what are the potential hazards, and what can we do to protect ourselves?

The ID problem

Nobody can get away from the need for passwords these days. They used to be the preserve of the office network, but now you can’t even avoid them if you’re unemployed: benefit systems want you to log in and prove who you are to access your personalised view, save your data and so on. And as online security has become a growing burden, not just at work but in our personal lives, it’s been no surprise to see password managers gaining popularity all over the app and web service marketplace.

Great, problem solved – no? Well, that was the theory. But cynics such as myself weren’t at all surprised when it emerged that these services had security vulnerabilities of their own. In the summer of 2017 we saw a spate of accusations that one web password manager or another had been hacked or cracked.

Regardless of whether your precious identity data had actually been compromised or not, this was a painful wake-up call for customers. Many had entrusted their passwords to such systems believing this would allow them to stop worrying about security scares; now they found themselves forced to think about questions such as what happens when your password manager gets taken offline and you don’t have paper copies of all the passwords you’ve loaded into it.

And what if you get caught without a Plan B on the day when a hacker (or disgruntled staff member) changes all those passwords and locks you out of your own system?

Five years ago, one aspect of this discussion would have been what makes a good or a bad password. Today, that’s rather a moot point. First, because folklore is the dominant source of advice on the topic for most people, your typical CIO – or, as is often the case, an overstretched support junior – has to cope with all the possible levels of password quality across their whole organisation.

Secondly, it’s a fact of life that most companies are no longer in a position to fully dictate their own password policies, thanks to an increasing reliance on external service providers. Your company procedures may state that all passwords must be deposited in escrow, written in blood on vellum, or changed every leap year: the reality comes down to the cloud operator’s policy.

Security in the cloud

Ah yes, the cloud – the single greatest confounding factor when it comes to password security. At the start of this decade, it was still possible to talk about “single sign-on” and mean nothing more than granting access to the LAN plus Active Directory resources, and perhaps a few HTTP services.

Meanwhile, in 2018, we have to deal with much bigger challenges of scope. Your access security systems have to work inside the company office; in employees’ homes; with the third-party services that your business signs up to; with your smartphone apps, on at least two platforms; with physical tokens for building access; on networks where you are a passing guest; in IPv6 environments… well, that’s enough semicolons for now. You get the picture.

Needless to say, where there’s a technical challenge this confusing, there’s a proliferation of outsourced “solutions” that can help you get on. However, these are almost entirely aimed at larger businesses, where a dedicated individual is available to negotiate between what the business wants to do with identities – the usual staff join/move/leave lifecycle – and the demands made by regulations or relationships with third parties.

And even then, recent trends in larger business IT make things very complicated. Remember, both identity solutions and line-of-business services tend to live in the cloud, and a lot of their appeal to customers is down to their ability to interoperate with other services by way of inter-supplier APIs.

So if, for example, you’re logged into Salesforce and hit a button to switch to another app, it’s not your PC that forwards your credentials to the next host: Salesforce initiates a direct conversation, server to server. We’re very much living in the age of the business-to-business API economy – and good luck managing that.

Then there’s software-defined networking (SDN) – an idea that can deliver a great security boost for your network. SDN takes advantage of the fact that there’s enough computing power floating around now for even a humble network switch to actively isolate, monitor and manage the network traffic generated and received by each individual PC.

This is seriously useful when it comes to infection control: after all, in most company networks, PCs have next to no need to talk directly to each other – only viruses do that. SDN ensures that PCs only talk to the appropriate servers and routers, using rules that relate to the individual, rather than to the floor or department their computer happens to be in.

The thing about SDN is that it requires users to authenticate before they can have any sort of access to the network. No biggie, you might think – users these days have been schooled by Wi-Fi to expect a login prompt. However, if your identity broker is in the cloud, you need a way for users to access that before logging into the SDN-secured network.

From an architectural perspective, the answer is simple: just have a default access policy that lists the identity servers as always available, without credentials. But that’s not quite the same as saying that every cloud-based identity broker recognises the problem. Many businesses undertake big reorganisations in order to escape the “Microsoft Trap” of server-centric networking, only to fall into a maze of incompatible authenticators, each of which is sufficiently new to consider a three-year product lifecycle in this field as perfectly normal.

All of which brings me to another issue: portability.

Moving your users around from service to service

If you’re thinking of engaging a cloud-based password-management service, this is a key question: how easy is it for the administrator to do drastic things with the database of users and passwords? Is it possible to upload bulk lists of users (say, on the day your company takes over another one) and indeed, download and examine such lists, looking for issues such as duplicate passwords?
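
For illustration, the duplicate-password check is the sort of audit that’s trivial to script once you can get at an export. This sketch assumes a CSV dump with username and password_hash columns; the file name and column names are hypothetical.

```python
import csv
from collections import defaultdict

users_by_hash = defaultdict(list)
with open("exported_users.csv", newline="") as f:
    for row in csv.DictReader(f):
        users_by_hash[row["password_hash"]].append(row["username"])

# Any hash shared by more than one account means a shared password.
for password_hash, users in users_by_hash.items():
    if len(users) > 1:
        print(f"Shared password across accounts: {', '.join(users)}")
```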

These aren’t unreasonable things for an IT department to want to do. Yet online password managers, anxious about the potential for abuse, tend to rule such operations out completely. This is an unfortunate side effect of the influence of consumer security policies – everyone gets treated as a separate individual with no security crossovers.

But, if you think about it, that’s the diametric opposite of what most companies actually want. Your firm’s user database is built on groups and policies, not on hundreds of unique individuals.

There is another way. It might sound unfashionable in 2018, but what people are crying out for, in a forest of password-as-a-service cloud apps, is a return to the glory days of Active Directory. The simplest answer to bridging the divide between cloud identity and LAN identity is to focus on the lowest common denominator, namely an old-school Windows Domain environment. Don’t rely on the cloud for everything: use it to grant access to a Windows server, which can take on the traditional role of local service manager and gateway.

It’s an approach with numerous benefits. For a start, nobody in the old-school LAN world is going to hold your company user list to ransom, or make changes to pricing once you’re on board, or restrict your choices of IoT deployment to a limited roster of approved partner manufacturers. Indeed, the idea helps justify the high price of Windows Server licences – they’re steep if you just want file and print services, but if you look at the complexity and cost of managing passwords and user identities, it starts to make a lot of sense.

Crystal balls

Passwords have their benefits, but (as my colleague Davey Winder has frequently noted) a physical token can be a powerful alternative or supplement to a conventional password. Indeed, it remains a great puzzle that business hasn’t really embraced the idea. You can find products that use USB or Bluetooth to provide preset usernames and passwords, but these tend to exist only in specialised niches.

Notably, in the consumer sector, the idea of using a physical key has been superseded by two-factor authentication (2FA), where a login attempt generates a second single-use password that’s sent to the customer’s registered mobile number. This too has its strengths, but there’s an assumption of continuous internet access – or, in some cases, SMS service – that isn’t always realistic. It’s fine if you’re sitting at your desk trying to log into your email, but less so if you’re standing in a snowy car park late at night, trying to get into the office because you’ve been called out to deal with a network outage.
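By way of contrast, the time-based one-time codes produced by authenticator apps are generated locally, so they keep working in that car park with no signal. The sketch below shows the RFC 6238-style arithmetic in a few lines of Python; the Base32 secret is a made-up example:

```python
# RFC 6238-style time-based one-time password, computed entirely offline.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6):
    """Derive the current one-time code from a shared Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical enrolled secret
```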

In fact, if you’re going to rely on any sort of single sign-on system, there’s an almost inevitable requirement for defence in depth – that is, you need the same identity data to be accessible in several different ways, so it can remain available under most plausible scenarios. Again, this is certainly not a new insight when it comes to system design, but it’s one the always-connected generation finds easy to forget.

This doesn’t have to mean investing in layer upon layer of redundant infrastructure. What it might mean, however, is a "fog computing" approach – a model where cloud-based services connect directly to the perimeter of your home network and devices. In this case, you want systems that are reachable from that snowy car park, able to remember the last state of the security database – and just smart enough to let you in.


Your guide to Facebook Workplace


Steve Cassidy

1 Mar, 2018

We’ve tried Facebook before. It was hard to keep coming up with new content to post, and it didn’t seem to benefit us much.

You’re echoing the experience of many organisations who have tried using Facebook as a marketing tool. The fact is, while Facebook’s potential for promotion and relationship-building can be formidable, it’s not right for everyone. "Workplace by Facebook" is something quite different: simply put, it’s a custom version of the Facebook environment for messaging between co-workers.

This sounds like a terrible idea – won’t people be distracted by chit-chat and memes when they’re supposed to be working?

It must be admitted, the Workplace vision of what people get up to at work isn’t universal. I certainly wouldn’t suggest that a company of forestry workers or a brass band try to use Facebook on the job.

Yet, the evangelical slogans about embracing social media aren’t entirely off base. If you trial Workplace and get nothing more from it than a chance to remind your staff to get on with their jobs, that’s still better than souring the working environment with glowering intrusions to check up on what they’re doing online.

It sounds like my employee communications will be running inside someone else’s cloud. What about security and privacy?

At the time of writing, Workplace offers a fairly simple framework providing virtual private meeting places for people who work in different businesses. The idea is to allow discussion of mutual projects without exposing other information and resources.

To be sure, it’s hard to overlook Facebook’s historic habit of eagerly rolling out new features and letting users do the field-testing. But there are good opportunities here. You can create and tear down a collaborative group more or less on a whim. It might exist for only an afternoon, or it might have only one external member, advising a whole internal team (think legal matters, or health and safety). Adapting your mindset beyond the email model is a key part of getting the most from these consumer crossover platforms.

At the end of the day, isn’t this just another online chat system?

Facebook’s communications credentials certainly started with simple chat, but have blossomed to include both audio and video connections. This means you can substitute Workplace for services such as Skype, WhatsApp and dedicated VoIP systems. Yes, there are some notable gaps in the feature set, like the absence of a POTS (analogue phone service) gateway such as Skype Out, or true multi-feed video conferencing for virtual meeting room creation. Still, Facebook brings other advantages – for example, Facebook Live sessions, which are not only streamed but stored for future reference.

What’s more, like it or not, Facebook has tremendous member loyalty. For some people it’s the first place they go in the morning, and the last at night. Harnessing that feel-good factor to foster both collaborative and productive relationships isn’t a silly thing to be doing. If you can get employees to feel more positively about work, you’ve achieved something.

That sounds good, but I’m still concerned about oversight. We have to own our own business-critical systems.

That’s not an issue on Workplace. There are at least two defined classes of super-user, namely administrators and "IT Teams". Administrators can define the entire environment, in terms of how existing Facebook accounts are allowed into Workplace’s separate playpen, and how the Workplace system handles things such as single sign-on with mature Windows networks. What’s more, Facebook provides one-on-one help for admins, so you can always get a guided support session and ask as many questions as you need. In short, whatever arrangement works for you ought to be attainable.

And how do we handle things such as oversight and legal compliance?

This is where that second group comes in. They’re referred to in terms of IT, but they really act as compliance officers: these are the guys who make sure you’re not breaking any laws or conditions of service, and that the required paper trails are kept. Over the years, we’ve seen many collaboration platforms created by brilliant but inexperienced youths, which entirely lack the oversight features a business needs. Consequently, the fact that Workplace by Facebook doesn’t fall into that trap is itself a definite recommendation.
