All posts by Keri Allan

How to avoid corrupting your hybrid work strategy


Keri Allan

11 Jan, 2022

With businesses forced to close their offices for months on end, we’ve witnessed one of the greatest working arrangement shakeups in history. Companies implemented new systems and policies while embarking on digital transformation projects, as workers had to adapt to a completely new way of working.

While many thrived, others struggled, and almost everyone agrees the old ways won’t work any longer. As such, hybrid work has become a central tenet of the marketing campaigns and portfolios of countless vendors, with promises of trust, empowerment and flexibility now commonplace. With concerns mounting, though, especially around application overload and employee monitoring, there’s a risk the reality of hybrid work won’t match these early ambitions.

The evolution of hybrid work

The evolution of hybrid work started before the pandemic, with forward-thinking businesses fundamentally changing how they use space – introducing hot-desking, breakout rooms and collaboration spaces, alongside increased digitisation.

“This was well underway before the pandemic, with progressive organisations embracing a plethora of digital tools from virtual work environments like Slack and Microsoft Teams to shared cloud storage and SharePoint sites,” says Matt Hancocks, senior director at Gartner.  

When the pandemic hit, remote working was becoming more accessible, and many tech companies were quick to jump on this trend, directing their product development to further amplify it. 

“Since many tech companies had been quick to adapt, there’d also been a gravitation towards using the company’s products to support their own hybrid set-ups,” adds Alok Alstrom, founder of the Future of Work Institute. “Instead of developing products for ‘someone else’, they viewed themselves as the first users of their products.”

Most organisations were reluctant to embrace remote working prior to COVID-19. Fully remote employees comprised less than 5% of the global workforce, rising to 10% if you included employees who occasionally worked from home, Gartner figures show. When lockdowns led to approximately 70% of the world’s knowledge workers working remotely, however, 75% of businesses discovered that productivity was the same, if not better.

Hybrid work – help or hindrance?

Now that many organisations have seen hybrid work isn’t a barrier to productivity, they’ve been happy to embrace various models, but the explosion of technologies and systems might actually prove more oppressive than liberating.

When governments mandated working from home, for instance, many organisations implemented new tools to monitor employee productivity, including screen capture, keystroke logging, webcam photography and web monitoring. Many considered this heavy-handed, and unsuitable for roles that revolve around thinking time and creativity.

Maintaining an online presence actually became the main source of stress for employees, according to IDC’s Meike Escherich, associate research director – future of work. Both Escherich and Hancocks agree the solution lies in moving away from measuring productivity by output and towards focusing on business outcomes. Organisations that make this mentality shift will sustain greater benefits from today’s hybrid working world, whereas businesses that focus on monitoring their employees risk alienating workers.

Digital fatigue is another concern, Al Fox, director and head of HR at B2B marketing firm Fox Agency, tells IT Pro. “We’ve always avoided micromanagement and surveillance, but digital fatigue is an issue when working online all day,” he says. “Creatives love working together in an office where they can share or draw ideas on paper or board, and that doesn’t work quite as well virtually.

“For this reason, they try and meet in person when they can. For others, a day filled with Teams or Zoom meetings can be extremely tiring and lack the spontaneity of real-life meetings. The convenience of video meetings is amazing, but there are always downsides; it would be foolish to pretend there aren’t.”

Crafting a hybrid model for 2022

As the world reopens, hybrid work is being driven by employees’ desire to maintain the flexibility and empowerment remote working provided. Autonomy over one’s working day has become more important than remuneration to many, which has led to what’s become known as ‘The Great Resignation’. 

Roughly 65% of employees are prepared to quit and seek employment elsewhere if their company won’t offer a degree of flexibility and remote working, Gartner figures show. With UK vacancies reaching an all-time high, therefore, businesses must consider genuine hybrid working options as a key tool in retaining talent.

Employers are also benefiting from workers realising they’re no longer tied to their location. “People in York or Inverness can now work for a London-based company or even one in San Francisco,” says Fox. “That’s a big change and one that’s worked for us as it’s opened the talent pool right up.”

Going forward, the most successful work strategies will be human-centric, Hancocks says, and organisations should rethink their relationships with employees. This journey is underway for many organisations, with businesses reducing how many days employees must be office-based. Others, meanwhile, are taking a more radical approach. 

“Virgin Money announced a new employee deal, consisting of several initiatives closely co-developed with employees,” he adds. “The main one is a completely remote work offering that allows employees to work remotely anywhere in the UK. It includes enhanced holiday leave and six welfare days. This exemplifies the emergence of the new employee value proposition we’re likely to start seeing from many organisations.”

Dropbox, meanwhile, is making a distinction between synchronous and asynchronous work: when are people required to work together, and when can they work alone? To do this, the firm uses blocks of time in calendars to distinguish between availability for either type of work.

“Examples of work design are even emerging in the quite mundane, such as the PowerPoint presentation,” Hancocks continues. “Using tools like PowerPoint 365, people can record their presentation, upload it to a suitable site and make it available for colleagues to view at a time that suits them.”

There’s no silver bullet to designing the perfect hybrid work strategy. What’s certain, though, is that the businesses set to thrive are those that are agile, adaptive and use technology to empower employees, rather than monitor and control them.

The rise of cloud misconfiguration threats and how to avoid them


Keri Allan

5 Oct, 2021

With cloud adoption accelerating, the growing scale of cloud environments is outpacing the capacity for businesses to keep them secure. This is why many organisations feel vulnerable to data breaches that might arise as a result of cloud configuration errors. 

More than 80% of the 300 cloud engineering and security professionals questioned by Sonatype and Fugue in their latest cloud security report said they felt their organisations were at risk. Factors include teams struggling with an expanding IT ‘surface area’, an increasingly complex threat landscape, and recruitment challenges coupled with a widening skills gap. 

A major security threat 

Misconfiguration is a major problem because cloud environments can be enormously complicated, and mistakes can be very hard to detect and remediate manually. According to Gartner, the vast majority of publicly disclosed cloud-related security breaches are directly caused by preventable misconfiguration mistakes made by users, highlighting how great a security threat they truly are.

“Often companies use default configurations, which are insecure for many use cases, and unfortunately there’s still a significant skills gap,” says Kevin Curran, professor of cyber security at Ulster University. “The cloud industry is relatively new, so there’s a noticeable deficit in knowledgeable cloud architects and engineers.”

He claims there are numerous scanning services constantly seeking out vulnerabilities to exploit and, because flaws can be abused within minutes of creation, this has led to an urgent race between attackers and defenders.

“An attacker can typically detect a cloud misconfiguration vulnerability within ten minutes of deployment, but cloud teams are slower in detecting their own misconfigurations,” he adds. “In fact, only 10% are matching the speed of hackers.”

Misconfiguration can happen for many reasons, such as organisations prioritising legacy apps over cloud security, Ben Matthews, a partner at consultancy firm Altman Solon, points out. “Even with the significant growth in cloud adoption in recent years,” he adds, “the current and likely enduring prevalence of mixed and hybrid environments mean that this problem isn’t going away anytime soon.”

There are several other common causes of cloud misconfiguration, too. Those questioned as part of Sonatype and Fugue’s study cited too many APIs and interfaces to govern; a lack of controls, oversight and policy; and even simple negligence as among the main reasons.

A fifth (20%) noted their businesses haven’t been adequately monitoring their cloud environments for misconfiguration, while 21% reported not checking infrastructure as code (IaC) prior to deployment. IaC is a process for managing and provisioning IT infrastructure through code instead of manual processes. 

It’s a people problem

Experts agree that cloud misconfiguration is, first and foremost, a people problem, with traditional security challenges such as alert fatigue, the complexity of managing applications and workloads, and human error playing a significant role. 

“Laziness, a lack of knowledge or oversight, simple mistakes, cutting corners, rushing a project – all these things play into misconfigurations,” points out Andras Cser, vice president and principal analyst at Forrester. 

Organisations also find the demand for cloud security expertise is outstripping supply, making it harder than ever to retain staff with the knowledge required to guarantee cloud security. Often, there’s also confusion within businesses as to who’s responsible for checking for vulnerabilities, and, if any are found, ensuring they’re removed.

“Secure configuration of cloud resources is the responsibility of cloud users and not the cloud service providers,” clarifies Gartner’s senior director analyst, Tom Croll. “Often, misconfigurations arise due to confusion within organisations about who’s responsible for detecting, preventing and remediating insecure cloud assets. Application teams create workloads, often outside the visibility of security departments, and security teams often lack the resources, cooperation or tools to ensure workloads are protected from misconfiguration mistakes.”

Curran continues by highlighting that different teams are responsible at different stages of any cloud project. For instance, cloud developers using IaC to develop and deploy cloud infrastructure should be aware of the major security parameters included in the software development cycle. The security team, on the other hand, is generally responsible for monitoring and the compliance team for audits. To make things more complicated, Sonatype and Fugue’s report suggests cloud security requires more cross-team collaboration than in the data centre. More than a third (38%) of those surveyed, however, cited friction existing between teams over cloud security roles.

Avoiding cloud configuration errors

Wherever possible, organisations will want to prevent cloud misconfiguration problems from arising in the first place. This can be achieved by using tools such as IaC scanning during the development phase, and the adoption of policy as code (PaC), which, according to Curran, has revolutionised how IT policy is implemented. 

Rather than following written rules and checklists, in PaC, policies are expressed “as code” and can be used to automatically assess the compliance posture of IaC and the cloud environments organisations are actively running. 

“Using PaC for cloud security is significantly more efficient and cost-effective as it’s repeatable, shareable, scalable and consistent,” he explains, adding: “It also greatly reduces security risks due to human error.” Of course, mistakes can be missed and, therefore, continuous 24/7 monitoring should be core to a business’ cloud security operation in order to maximise the chances of finding potential vulnerabilities.
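To make the idea concrete, here’s a minimal policy-as-code sketch in Python. The rule set, the config shape and the function names are illustrative rather than taken from any particular PaC tool:

```python
# Minimal policy-as-code sketch: express rules as functions, then apply
# them to a parsed IaC template (here, a plain dict standing in for
# Terraform/CloudFormation output). Illustrative only.

def bucket_not_public(resource):
    """Fail storage buckets whose ACL grants public access."""
    return resource.get("acl") not in ("public-read", "public-read-write")

def encryption_enabled(resource):
    """Fail resources that don't encrypt data at rest."""
    return resource.get("encrypted", False)

POLICIES = {
    "bucket": [bucket_not_public, encryption_enabled],
}

def evaluate(config):
    """Return a list of (resource_name, failed_policy) violations."""
    violations = []
    for name, resource in config.items():
        for policy in POLICIES.get(resource.get("type"), []):
            if not policy(resource):
                violations.append((name, policy.__name__))
    return violations

if __name__ == "__main__":
    iac = {
        "logs": {"type": "bucket", "acl": "public-read", "encrypted": False},
        "data": {"type": "bucket", "acl": "private", "encrypted": True},
    }
    for name, rule in evaluate(iac):
        print(f"VIOLATION: {name} fails {rule}")
```

Because the policies are just code, they can run in a CI pipeline against every IaC change before anything is deployed, which is what makes the approach repeatable and consistent.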

Experts advise businesses to use automated security services, such as cloud security posture management (CSPM), which are designed to identify misconfiguration issues and compliance risks in the cloud. This particular tool automates the process of finding and fixing threats across all kinds of cloud environments. 

“These allow cloud platform admins to create a good baseline of cloud configuration artefacts, then detect any drifts from it,” Forrester’s Cser continues. “It also takes advantage of best-practice templates that will flag issues around S3 buckets or overprivileged instances, for example. Automated CSPM visibility, detection and remediation should be continuous.”
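The baseline-and-drift idea Cser describes can be sketched in a few lines. In this illustration, hand-written dicts stand in for the configuration state a real CSPM tool would pull from provider APIs:

```python
# Drift detection sketch: diff a saved configuration baseline against a
# current snapshot and report anything that changed. Data shapes are
# illustrative; a real CSPM pulls live state from the cloud provider.

def detect_drift(baseline, current):
    drifts = []
    for key, expected in baseline.items():
        actual = current.get(key)
        if actual != expected:
            drifts.append((key, expected, actual))
    for key in current.keys() - baseline.keys():
        drifts.append((key, None, current[key]))  # unexpected new resource
    return drifts

baseline = {"s3:logs/acl": "private", "sg:web/ingress": "443"}
current = {"s3:logs/acl": "public-read", "sg:web/ingress": "443",
           "sg:debug/ingress": "0.0.0.0/0:22"}

for key, expected, actual in detect_drift(baseline, current):
    print(f"DRIFT: {key}: expected {expected!r}, found {actual!r}")
```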

The role of cloud native at the edge


Keri Allan

12 Nov, 2020

Analyst firm Gartner predicts that, by 2025, three quarters of enterprise-generated data will be created and processed at the edge – meaning outside a traditional data centre or cloud and closer to end users.

The Linux Foundation defines edge computing as the delivery of computing capabilities to the logical extremes of a network in order to improve the performance, security, operating cost and reliability of applications and services. “By shortening the distance between devices and the cloud resources that serve them, edge computing mitigates the latency and bandwidth constraints of today’s internet, ushering in new classes of applications,” the foundation explains in its Open Glossary of Edge Computing.

Edge computing has been riding a wave of hype for several years now, and many consider it “the Wild West”. This is because there’s a high volume of chaotic activity in the area, resulting in duplicated effort as technologists all vie to find the best solutions.

“It’s early doors,” says Brian Partridge, research director at 451 Research. “Vendors and service providers are throwing stuff at the wall to see what sticks. Enterprises are experimenting, investors are making large bets. In short, the market is thrashing, crowded and there’s a lot of confusion.” 

A synergy between cloud native and the edge 

Edge computing opens up many possibilities for organisations looking to scale their infrastructure and support more latency-sensitive applications. As cloud native infrastructures were created to improve flexibility, scalability and reliability, many developers are looking to replicate these benefits close to the data’s source, at the edge. 

“Cloud native can help organisations fully leverage edge computing by providing the same operational consistency at the edge as it does in the cloud,” notes Priyanka Sharma, general manager of the Cloud Native Computing Foundation (CNCF). 

“It offers high levels of interoperability and compatibility through the use of open standards and serves as a launchpad for innovation based on the flexible nature of its container orchestration engine. It also enables remote DevOps teams to work faster and more efficiently,” she points out.

Benefits of using cloud native at the edge

Benefits of using cloud native at the edge include faster rollbacks, meaning edge deployments that break or contain bugs can be rapidly returned to a working state, says William Fellows, co-founder and research director of 451 Research.

“We’re also seeing more granular, layered container support whereby updates are portioned into smaller chunks or targeted at limited environments and thus don’t require an entire container image update. Cloud native microservices provide an immensely flexible way of developing and delivering fine-grain service and control,” he adds.
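To illustrate why fast rollback matters at the edge, the control loop an orchestrator runs looks roughly like this simulated sketch, where deploy() and healthy() are hypothetical stand-ins rather than a real orchestration API:

```python
# Rollback sketch: deploy a new release to an edge node, verify it with a
# health check, and revert to the last known-good release on failure.
# deploy() and healthy() stand in for real orchestrator calls.

def deploy(node, image):
    print(f"{node}: running {image}")
    node_state[node] = image

def healthy(node):
    # Stand-in check: pretend v2 ships with a bug.
    return node_state[node] != "app:v2"

node_state = {}
last_good = "app:v1"

def rollout(node, new_image):
    global last_good
    deploy(node, new_image)
    if healthy(node):
        last_good = new_image
        print(f"{node}: {new_image} is healthy")
    else:
        print(f"{node}: {new_image} failed, rolling back to {last_good}")
        deploy(node, last_good)

rollout("edge-01", "app:v2")  # fails the check and reverts to app:v1
```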

There are also financial benefits to taking the cloud native path. The reduction in bandwidth and the streamlined data handling that cloud native provides can cut costs, making it a highly efficient option for businesses.

“It can also allow a consumption-based pricing approach to edge computing without a large upfront CapEx spend,” notes Andrew Buss, IDC research director for European enterprise infrastructure.

However, it wouldn’t be the “Wild West” out there right now if cloud native was the perfect solution. There are still several challenges to work on, including security concerns.

“Containers are very appealing due to them being lightweight, but they’re actually very bad at ‘containing’,” points out Ildikó Váncsa, ecosystem technical lead at the Open Infrastructure Foundation (formerly the OpenStack Foundation).

“This means they don’t provide the same level of isolation as virtual machines, which can lead to every container running on the same kernel being compromised. That’s unacceptable from a security perspective. We should see this as a challenge that we still need to work on, not a downside to applying cloud native principles to edge computing,” she explains. 

There’s also the complexity of dealing with highly modular systems, so those interested in moving towards cloud native edge computing need to prepare by investing the time and resources necessary to implement it effectively.

What should businesses be thinking about when embarking on cloud native edge computing? 

Cloud native edge solutions are still relatively rare; IDC’s European Enterprise Infrastructure and Multicloud survey from May 2020 showed that the biggest edge investments are still on-premise. 

“However, we expect this to shift in the coming years as cloud native edge solutions become more widely available and mature, and we have more use cases that take advantage of cloud as part of their design,” says Gabriele Roberti, research manager for IDC’s European Vertical Markets, Customer Insights and Analysis team, and lead for IDC’s European Edge Computing Launchpad.

For those businesses eager to take the leap, Partridge recommends starting with the application vision, requirements and expected outcomes. After targeting edge use cases that can support a desired business objective – such as lowering operations costs – you can then turn your attention to the system required. 

Laura Foster, programme manager for tech and innovation at techUK, reiterates that it’s important to build a use case that works for your business needs. 

“There’s an exciting ecosystem of service providers, innovators and collaboration networks that can help build the right path for you, but the journey towards cloud native edge computing also needs to go hand in hand with cultural change,” she points out. 

“Emerging technologies, including edge computing, will pioneer innovation, but only if businesses push for change. Retraining and reskilling workforces is a fundamental part of an innovation journey and can often be the key to getting it right,” she concludes.

Data visibility: The biggest problem with public clouds


Keri Allan

14 May, 2019

Use of public cloud continues to grow. In fact, 84% of businesses had placed additional workloads into the public cloud in 2018, according to a recent report by Dimension Research. A fifth of those (21%) reported that their increase in public cloud workloads was significant.

However, while respondents were almost unanimous (99%) in their belief that cloud visibility is vital to operational control, only 20% of respondents said they were able to access the data they need to monitor public clouds accurately.

“If there’s any part of your business – including your network – which you can’t see, then you can’t determine how it’s performing or if it is exposing your business to risks such as poor user experience or security compromise,” points out Scott Register, vice president, product management at Ixia, the commissioner of the report.

This sounds like a major issue, and yet, surprisingly, it’s nothing new. Tony Lock, distinguished analyst and director of engagement at Freeform Dynamics, has been reporting on visibility issues for over five years, and not just regarding public cloud.

“Believe it or not, despite people having had IT monitoring technology almost since IT began, we still don’t have good visibility in a lot of systems,” he tells us. “Now we’re getting so much more data thrown at us, visibility is even more of a challenge – just trying to work out what’s important through all of the noise.”

He adds that for many years public cloud providers have been slow to improve their services and make it easier for organisations to see what’s happening, largely because the providers handled everything on customers’ behalf.

“To a degree, you can understand why [providers] didn’t focus on monitoring to begin with, as they’ve got their own internal monitoring systems and they were looking after everything. But if a customer is going to use them for years and years then they want to see what’s in there, how it’s being used and if it’s secure.”

The cost of zero visibility

A lack of visibility in the public cloud is a business risk in terms of security, compliance and governance, but it can also affect business costs. For example, companies may be unaware that they’re paying for idle virtual machines unnecessarily.

Then there’s performance. Almost half of those that responded to Ixia’s survey stated that a lack of visibility has led to application performance issues. These blind spots hide clues key to identifying the root cause of a performance issue, and can also lead to inaccurate fixes.

Another issue relates to legal requirements and data protection. With a lack of visibility, some businesses may not be aware that they have customer information in the public cloud, which is a problem when “the local regulations and laws state it should not be stored outside of a company’s domain”, highlights Lock.

Then there are the complexities around data protection and where the liability sits should a data breach occur.

“Often a daisy chain of different companies is involved in cloud storage, with standard terms and conditions of business, which exclude liability,” explains BCS Internet Specialist Group committee member, Peter Lewinton. “This can leave the organisation that collected the data [being] liable for the failure of a key supplier somewhere in the chain – sometimes without understanding that this is the position. This applies to all forms of cloud storage, but there’s less control with the public cloud.”

Understandably, security continues to be a big concern for enterprises. The majority (87%) of those questioned by Ixia said they’re concerned that their lack of visibility obscures security threats, but it’s also worth noting that general security concerns regarding public cloud still apply.

What’s the solution?

Lock believes that things are changing and vendors are beginning to listen to the concerns of customers. Vendors have started to make more APIs available and several third-party vendors are also creating software that can run inside virtualised environments to feed back more information to customers. “This move is partly down to customer pressure and partly down to market maturity,” he notes.

Ixia’s Scott Register recommends either a physical or virtual network tap that effectively mirrors traffic on a network segment or physical interface to a downstream device for monitoring.

“These taps are often interconnected with specialised monitoring gear such as network packet brokers, which can provide advanced processing, such as aggregation, decryption, filtering and granular access controls. Once the relevant packets are available, hundreds of vendors offer specialised tools that use the packet data for application or network performance monitoring as well as security analytics.”
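As a rough illustration of consuming tapped traffic, here’s a minimal sketch using the open source scapy library, assuming the monitoring host receives the mirrored traffic on an interface named mirror0 (an example name, not a standard one):

```python
# Sketch: read traffic from a tap/mirror port with scapy and flag
# connections to well-known cleartext ports. Requires scapy
# (pip install scapy) and sufficient privileges to sniff.

from scapy.all import sniff, TCP

CLEARTEXT_PORTS = {21, 23, 80}  # ftp, telnet, http

def inspect(pkt):
    if pkt.haslayer(TCP) and pkt[TCP].dport in CLEARTEXT_PORTS:
        print(f"cleartext traffic: {pkt.summary()}")

# "mirror0" stands in for whatever interface receives the tapped traffic.
sniff(iface="mirror0", prn=inspect, store=False, count=100)
```

A network packet broker does the same kind of filtering and forwarding at scale, in hardware, before handing the relevant packets to monitoring and security tools.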

Are vendors really to blame?

Although many businesses suffer from poor public cloud visibility, Owain Williams, technical lead at Vouchercloud, believes customers are too quick to blame the provider. He argues that there are many reliable vendors already providing the necessary access tools and that a lack of visibility is often down to the customer.

“This is my experience. As such, it’s often entirely solvable from the business side. The main providers already give you the tools you need. Businesses can log every single byte going in and out if they wish – new folders, permission changes, alerts; all the bells and whistles. If the tools themselves are inefficient, then businesses need to re-evaluate their cloud provider.”

Instead, he believes that many of the visibility problems that businesses encounter can be traced back to those managing infrastructure – employees that may be in need of extra training and support.

“Better education for people – those charged with provisioning the infrastructure – is a strong first port of call,” he argues. “It’s about ensuring the businesses and individuals have the right training and experience to make the most of their public cloud service. The tools already exist to assure visibility is as robust as possible – it’s provided by these large public cloud organisations. Invariably, it’s a case of properly identifying and utilising these tools.”
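Williams’ point is easy to test in practice. On AWS, for instance, a few lines of the boto3 SDK can reveal whether the basic visibility features are even switched on – a hedged sketch, assuming credentials are already configured:

```python
# Sketch: check whether basic AWS visibility features are enabled.
# Assumes boto3 is installed and AWS credentials are configured.

import boto3

# Is any CloudTrail trail recording API activity?
trails = boto3.client("cloudtrail").describe_trails()["trailList"]
print(f"CloudTrail trails configured: {len(trails)}")

# Which S3 buckets have server access logging enabled?
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    logging_conf = s3.get_bucket_logging(Bucket=bucket["Name"])
    enabled = "LoggingEnabled" in logging_conf
    print(f"{bucket['Name']}: access logging {'on' if enabled else 'OFF'}")
```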

2019’s highest-paying IT certifications


Keri Allan

5 Apr, 2019

In a competitive talent market, such as IT, obtaining a certification is a sure way to verify your expertise, demonstrate your knowledge quickly to others, and ultimately make job hunting a far smoother process. Recruiters look for credentials to back up details provided on an applicant’s CV and many companies request certain types of certification in order for an applicant to even be considered for a role.

According to training provider Global Knowledge, 89% of IT professionals worldwide hold at least one certification. It recently published its list of the 15 top-paying IT certifications for 2019, showing that employers are focusing on specific areas, in particular cloud computing, cyber security, networking and project management. In fact, cloud and project management dominated the top five spots.

Global Knowledge 2019 report:

No. Certification Avg. salary (in $)
1. Google Certified Professional Cloud Architect 139,529
2. PMP – Project Management Professional 135,798
3. Certified ScrumMaster 135,441
4. AWS Certified Solutions Architect (Associate) 132,840
5. AWS Certified Developer (Associate) 130,369
6. Microsoft Certified Solutions Expert – Server Infrastructure 121,288
7. ITIL Foundation 120,566
8. Certified Information Security Manager 118,412
9. Certified in Risk and Information Systems Control 117,395
10. Certified Information Systems Security Professional 116,900
11. Certified Ethical Hacker 116,306
12. Citrix Certified Associate – Virtualisation 113,442
13. CompTIA Security+ 110,321
14. CompTIA Network+ 107,143
15. Cisco Certified Network Prof. Routing and Switching 106,957

Although the figures provided represent a look at the US market, we can see that Google’s own Cloud Architect certification is now the best qualification to pursue in terms of average salary, closely followed by qualifications in project management and then development roles for AWS.

“The two leading areas are cyber security and cloud computing, followed by virtualisation, network and wireless LANs,” notes Zane Schweer, Global Knowledge’s director of marketing communications. “Up and coming certifications focus on AI, cognitive computing, machine learning, IoT, mobility and end-point management.”

Cloud comes out on top

“Cloud computing is paramount to every aspect of modern business,” explains Deshini Newman, managing director EMEA of non-profit organisation (ISC)2. “It’s reflective of the highly agile and cost-effective way that businesses need to work now, and so skilled professionals need to demonstrate that they are proficient in the same platforms, methodologies and approaches towards development, maintenance, detection and implementation.”

Jisc, a non-profit which specialises in further and higher education technology solutions, has joined many other organisations in adopting a cloud-first approach to IT, and so relies heavily on services like Amazon AWS and Microsoft Azure.

“Certified training in either or both of these services is important for a variety of roles,” explains Peter Kent, head of IT governance and communications at Jisc, “either to give the detailed technical know-how to operate them or simply to demonstrate an understanding of how they fit into our infrastructure landscape.”

“Accompanying these, related networking and server certifications such as Cisco Certified Network Associate (CCNA) and Microsoft Certified Solutions Expert (MCSE) are important as many cloud infrastructures still need to work with remaining or hybrid on-premise infrastructures,” he notes.

Security certifications are also high on the most-wanted list, but they are required across a variety of different platforms and disciplines. One of the growth areas (ISC)2 has seen is in cybersecurity certifications in relation to the cloud. “This is something that is reflected by the positioning of the cloud within the Global Knowledge top 15,” Newman points out.

Aside from technical training, ITIL is still considered a key certification as a way of benchmarking an individual’s understanding of the infrastructure and process framework that IT teams have in place.

“But with ITIL v4 just around the corner I’d recommend holding off any training until v4 courses are widely available,” advises Kent.

And it’s not just about the accreditation itself – the credibility and support of the issuing body is often part of what makes the most desirable certifications so sought after.

The benefits of certification

Global Knowledge’s report highlighted that businesses believe having certified IT professionals on staff offers a number of benefits – most importantly helping them meet client requirements, close skills gaps and solve technical issues more quickly.

This is great for the company, but what do you gain as an individual? Well, aside from being in higher demand and able to perform a job faster, the main answer is a larger paycheque.

“In North America, it’s roughly a 15% increase in salary, while in EMEA it’s 3%,” says Schweer. “We attribute the difference to cost of living and other circumstances unique to each country,” he notes.

Research by (ISC)2 and recruitment firm Mason Frank International also showed similar results.

“In our latest Salesforce salary survey, 39% of respondents indicated that their salary had increased since becoming certified, and those holding certifications at the rarer end of the spectrum are more likely to benefit from a pay increase,” says director Andy Mason.

“While the exact amount of money an individual can earn will fluctuate from sector to sector, it is clear that certifications in any sector can and do make a big financial difference,” agrees Newman. “That’s on top of setting individuals apart at the top of their profession.”

Does certification create an ‘opportunity shortage’?

However, not everyone regards certifications as the be-all-and-end-all for recruiting the best possible staff. Some, such as Mango Solutions’ head of data engineering, Mark Sellors, actually believe that it can often ‘lock-out’ certain candidates that might be perfect for a role.

“This can be troubling for a number of reasons,” he says. “In many cases certifications are worked out in an individual’s personal time. This means those with significant responsibilities outside of their existing job may not be in a position to do additional study, and that’s not to mention the cost of some of these certs.”

He adds that using certifications as a bar above which one must reach can also further reduce gender diversity within the IT space, as a past study by Hewlett Packard found that women are much less likely than men to apply for a job if they don’t meet all of the listed entry requirements.

It’s Sellors’ belief that the problem facing many hiring managers is not just one of talent, but one of opportunity.

“They’re not giving great candidates the opportunity to excel in these roles as they’ve latched on to the idea that talent can be proven with a certificate,” explains Sellors. “Certifications can be useful in certain circumstances – for example when trying to prove a certain degree of knowledge during a career switch, or moving from one technical area to another. They’re also a great way to quickly ramp up knowledge when your existing role shifts in a new direction.

“More often than not, however, they prove little beyond the candidate’s ability to cram for an exam. Deep technical knowledge comes from experience and there’s sadly no shortcut for that.”

2019 will be the year cloud-native becomes the new norm


Keri Allan

8 Jan, 2019

The vast majority of businesses see cloud as a critical component of their digital transformation strategy – some 68% of businesses already have cloud-based systems in place, or are in the process of implementing them, according to technology consultancy Amido.

But more specifically, businesses are recognising the benefits of cloud-native applications: software designed specifically to run on cloud infrastructure.

In 2018, the Cloud Native Computing Foundation (CNCF), a vendor-neutral home for cloud-native projects, saw its end-user community grow to over 50 members. This includes household names such as Uber, Airbnb, Netflix, Adidas, Spotify, Mastercard and Morgan Stanley.

Cloud native applications offer hyperscale provisioning, resilience, high availability and responsiveness, all of which help businesses operate faster and with greater flexibility. It’s therefore no real surprise that many industry experts believe 2019 is the year cloud-native will become the ‘new normal’.

The benefits of cloud-native technology

“For CIOs, cloud-native is an enabler; a transformative technology,” says Amido’s chief technology officer (CTO) Simon Evan. “They’re using it to do things they can’t do on premise. Driving this are things like AI workloads, which benefit all sectors from finance through to healthcare and retail. You can free up staff from menial tasks, improve customer experience and benefit from predictive analytics,” he highlights.

“Taking a cloud-native approach means businesses can harness the real power of the cloud to their advantage, as it offers them faster responses to the changing needs of the business and the market, ensures their technology portfolio is up to date and driving innovation, and improves the customer experience while increasing ROI,” adds Puja Prabhakar, senior director, Applications and Infrastructure at consultancy firm Avanade UKI.

Cloud-native technologies can often become “boring” compared to emerging apps, according to CNCF CTO/COO Chris Aniszczyk, as the tech stabilises and matures over the years. However, he argues this shouldn’t be seen as a negative.

“Boring means organisations can focus on delivering business value, rather than spending time on making the technology usable,” he explains.

Experts advise businesses to embrace these ‘boring’ technologies in 2019, particularly the installation and configuration of platform-as-a-service and container-as-a-service offerings, such as Docker, OpenShift and Kubernetes.

“I expect more traction for Kubernetes as more organisations use it for distributed applications across hybrid cloud infrastructure that includes public clouds, private clouds, multiple public clouds, public clouds with on-premise environments and combinations of them all,” says Jay Lyman, principal analyst, Cloud Native and DevOps at 451 Research.

He believes that more organisations will leverage containers and microservices for not only new cloud-native applications but also increasingly those built on traditional and legacy infrastructure.

The rise of serverless in 2019

Prabhakar adds that businesses should also consider how they’re designing their full stack and backend application engineering. Specifically, she believes engineering should be focused on creating applications inherently designed for development on the cloud, such as serverless frameworks, microservices frameworks, API integration frameworks, DevOps, data stores, and machine learning.

Other cloud-native technologies set to take a front-row seat in 2019 include commercialised service mesh offerings, which, according to CNCF’s Aniszczyk, are the next frontier in making service-to-service communication safer, faster and more reliable.

“Service meshes like Linkerd are ready to be used in production deployments and can help businesses scale applications without latency or downtime. They can also be used to help secure traffic between services and applications,” he points out.

Following an explosion of interest in 2018, serverless technologies also look set to pick up momentum in 2019.

“Serverless for enterprise is a huge trend,” says Liz Rice, chair of 2018’s CloudNativeCon and KubeCon events. “We’ll see lots of discussions on how and where enterprises can apply architectures based on serverless functions and perhaps a better understanding of the cultural/DevSecOps implications of serverless functions will emerge in 2019.”

“Serverless won’t be appropriate for all classes of application, and will co-exist alongside container architectures for some time to come,” she adds.

Talent and security challenges remain

In December, a flaw allowing easy access into every single machine in a cluster via the Kubernetes API server was quickly caught and resolved, making security another hot topic. The community came together to discuss how to best solve security challenges facing the open source/cloud-native community and a number of security-related initiatives have been announced to help organisations go beyond what is natively provided by the Kubernetes platform. “And as we go into 2019, I expect we’ll continue to see more efforts crop up,” Aniszczyk says.

In response to all these trends, businesses need to invest not just in technology, but also in acquiring new talent and retraining existing staff in cloud-native methodology and technology.

“Adoption of cloud-native technology will only be held back by the lack of skills in the market,” points out Ilja Summala, CTO of Nordcloud Group.

Lyman agrees that the lack of cloud-native expertise and experience is probably the biggest challenge facing the industry. “Few organisations can find large numbers of Kubernetes and other cloud-native experts and even if they could find them, it is an expensive proposition. This is why including and training existing staff in cloud-native initiatives as much as possible will be critical moving forward.”

He also recommends that talent focuses on open source technology.

“End users have never been as participatory and influential as with Kubernetes,” he explains. “There is ample room to get involved with many open source software projects and Kubernetes Special Interest Groups (SIGs), and this is helping the community to focus more directly on the problems that companies are facing and the objectives they are trying to meet.”

However, there’s one other issue that looks set to take longer to resolve: changing the culture of how we work, a big challenge for businesses that’s not going to be fixed overnight.

“The shift from monolithic/waterfall to agile/DevOps is more about process and organisational psychology than it is about which technologies to employ,” points out Mark Collier, COO of the OpenStack Foundation. “This has been talked about for several years and it’s not going away anytime soon. It’s the big problem that enterprises must address and it’s going to take years to get there as it’s a generational shift in philosophy.”

Academics: Full cloud is like Netflix, bursting is just boring old iPlayer


Keri Allan

12 Jul, 2018

It’s easy to see why cloud bursting – where an application is run in a private cloud or data centre and then ‘bursts’ into a public cloud when demand dictates – could appeal to research universities.

It can provide institutions with an escape valve when their in-house resources are fully committed, helping to potentially speed up research and save costs.
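The scheduling logic behind bursting is simple at heart, as this toy sketch shows. Here submit_local() and submit_cloud() are hypothetical stand-ins for a real scheduler’s calls, and the capacity figure is arbitrary:

```python
# Toy scheduler illustrating the cloud-bursting idea: jobs run on the
# local cluster until it is saturated, then overflow to a public cloud.

LOCAL_SLOTS = 2  # free capacity on the in-house cluster (illustrative)

def submit_local(job):
    print(f"{job}: running on local cluster")

def submit_cloud(job):
    print(f"{job}: burst to public cloud")

def schedule(jobs):
    for i, job in enumerate(jobs):
        if i < LOCAL_SLOTS:
            submit_local(job)   # use in-house capacity first
        else:
            submit_cloud(job)   # overflow when demand dictates

schedule(["genome-align", "jet-sim", "ct-scan-batch"])
```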

In recent years adoption of cloud computing has been transforming research and education, and although change within academia can be slow, the latest UK Research & Innovation (UKRI) e-infrastructure report has shown a growing interest in community and public clouds.

“We also see that scientific computing teams at universities and research institutes are starting to look very seriously at virtualising their in-house compute clusters,” says Martin Hamilton, a member of UKRI’s Cloud for Research working group.

Although educational researchers tend to “thrash kit within an inch of its life”, Hamilton says there’s a “growing recognition that having the option of running a virtual machine (VM) image can make it easier for researchers to share and re-use code.”

However, there are divergent opinions within the research community as to how cloud resources are best deployed. While bursting remains a go-to choice for some, others either remain reticent or have avoided the technology entirely in favour of a full-fledged cloud.

Cloud bursting advocates

Two of the world’s biggest champions of the cloud bursting approach are the University of Cambridge and, on the other side of the world, the National University of Singapore (NUS).

“NUS has a wide range of computing requirements, making it impractical for all resources and capacity requirements to be provided in-house,” says Tommy Hor, NUS’ chief information technology officer, speaking to Cloud Pro.

The National University of Singapore deploys cloud bursting to support its research projects

“Our researchers occasionally have ad-hoc service demands that require dedicated computing resources to speed up their work. We have started migrating our in-house pay-per-use service to the cloud, and this will give us greater financial agility and economies of scale.”

The University of Cambridge has gone as far as providing its own cloud bursting capabilities. Its Research Computing Services (RCS) operation has a dedicated private ‘public sector’ cloud designed specifically for scientific and technical computing.

“Researchers from across Cambridge University, plus UK universities and companies, use RCS for cloud bursting,” says Dr Paul Calleja, the university’s director of Research Computing. “Research undertaken includes large-scale genomic analysis for clinical diagnosis and simulations of jet engines.”

Cloud bursting challenges and limitations

But while cloud bursting has potential benefits, there are still problems to be ironed out. These include interoperability issues between environments, pricing models and security.

“We recently saw a number of Docker images laden with malware removed from the public registry, opening backdoors onto users’ machines and running cryptocurrency mining processes,” says UKRI’s Martin Hamilton.

“Things like this take on an even greater significance when we are talking about compute jobs to calculate stresses on airframes, analyse CT images looking for tumours, or model the effect a new drug will have on the human body.”

For the University of Bristol, cloud bursting is seen as a highly restrictive approach to deployment, one that needlessly increases the complexity of a network.

“In my opinion cloud bursting limits the use of the cloud to being just an extension of a local on-premise compute cluster,” says Dr Christopher Woods, leader of the university’s Research Software Engineering group, which is fully in the cloud.

“It also means you get the worst of both worlds – you’re running both a cluster and a cloud, so have twice the complexity.”

He adds that, in his experience, bursting can introduce problems when it comes to moving data between on-premise and the cloud, and that the “up-front-investment ‘batch queue’ way of using a cluster” isn’t always compatible with the on-demand way of paying for cloud computing services.

A stepping-stone to cloud

Cloud providers and organisations like Jisc are looking to address some of these issues by negotiating data egress waivers and special pricing agreements for universities.

However, as Dr Woods notes, universities may struggle with a change of payment model.

“The biggest issue is the money side. Universities are terribly slow at moving money around so it’s difficult to work out how the money would make its way from a researcher’s grant to the provider.

“A big question is how do they go from CAPEX to OPEX? Maybe this is why cloud bursting can be a good stepping-stone, as it lets universities effectively turn cloud into a CAPEX investment that’s been prepaid for.

“It’s a way to dip their toes in the water and get their heads around new contracts and procurement models,” he says.

Woods considers cloud bursting a “sticking plaster solution” that will disappear as more organisations trust their data to cloud providers and the option becomes cheaper than on-premise.

“My feeling is that the cost of cloud will be competitive by 2020 and that most universities will be fully on cloud by the end of 2025,” he says.

The iPlayer of cloud deployment

Woods says that cloud bursting, by definition, only offers a slice of the flexibility that full cloud deployment brings, something he suggests can be compared to TV streaming services.

“You get to run interactive simulations, interactive data analysis and publish interactive papers that can be re-run and re-used by others. The best way to describe the difference is that the cloud is the ‘Netflix of simulation’, while on-premise is like watching the BBC following a TV schedule.

“Cloud bursting is like iPlayer – a hybrid mix of terrestrial TV and on-demand streaming that’s unsatisfactory compared to just binge-watching whatever you want on Netflix on demand.”

The importance of engineers

Research software engineers like Woods at the University of Bristol are a relatively new kind of academic, using their DevOps mindset and technical knowledge to support other researchers.

Hamilton believes that this new mindset is going to be essential for research in the years to come, helping “researchers get to grips with the tools available and develop their scientific computing applications.”

In Woods’ experience, cloud providers frequently only work with institutions that are able to support projects with in-house research software engineers.

“You need to have that skill set within the university to make it work,” says Woods. “Academics want to solve a genome – they have no interest in putting together the supercomputer that will do that. You really need that layer of person to lead the way.

“Those institutions that have people that understand software and hardware – and can bring the two together – will be the ones to prosper and take advantage of everything cloud offers,” he adds.


Businesses ‘should already be on their journey to UCaaS’


Keri Allan

19 Jun, 2018

The unified communication and collaboration (UCC) market is seeing a dramatic shift towards cloud-based solutions, with unified communications as a service (UCaaS) leading the way.

The global user base of cloud UCaaS has now surpassed 43 million, with new users estimated to grow at a compound annual growth rate (CAGR) of 23% from 2016 to 2023, according to analyst firm Frost & Sullivan.

This move is due in part to growing confidence in cloud solutions and a better understanding of the benefits they can offer, but also down to many customer premises equipment (CPE) assets nearing end of life.

Art Schoeller, vice president and principal analyst at Forrester Research, states that more than one in three IT professionals considering UCC will deploy it as a subscription service in their next upgrade cycle.

However, as communication systems have historically had long lifecycles of 10 years or more, with a heavy emphasis on ‘investment protection’, he notes that “this might mean that some [of our respondents] might not move to UCaaS for another five years or so”.

In the meantime, many existing CPE assets are being integrated with cloud during their sunset years, while digital transformation initiatives are also pushing IT departments to look more closely at moving to UCaaS.

All change

Elka Popova, digital transformation vice president at Frost & Sullivan, believes that customer confidence in UCaaS is also growing, thanks to a recent swathe of mergers, acquisitions and restructuring projects by IP telephony and UCC providers.

“In 2016 and 2017 the industry was marked by significant merger and acquisition (M&A) and restructuring activity affecting key providers. Their repositioning and strategy realignment is likely to determine the industry’s evolution and growth trajectory over the next few years.

“M&As, bankruptcy protection, internal reorganisation, international expansion and solution repackaging will aim to improve industry health and boost customer confidence in service and company long-term viability.”

The benefits of UCaaS

A key reason why companies are turning to UCaaS is that it offers a wide variety of communications technologies and collaboration applications and services.

“Organisations are turning to UCaaS to reduce operational costs, expand into new markets or regions, boost creativity and innovation and also improve sales and marketing effectiveness,” points out Rob Arnold, Frost & Sullivan’s connected work industry principal.

Businesses are also looking for something that’s ‘on-demand’, offering greater flexibility over the services they may have been used to in the past.

“Customers like the flexibility of a cloud service, in most cases with little or no upfront cost or hardware investment – depending on the service,” says Cathy Gerosa, head of regulatory affairs at the Federation of Communication Services (FCS).

“The other major benefit for business is the flexibility it gives with today’s workforce, enabling greater collaboration between both internal team members and external parties, regardless of where they’re located.”

Developing a transition plan – things to consider

But as the market shifts towards this subscription service, vendors must support organisations by developing a transition plan that protects their existing investments and offers minimal disruption to the business.

Benefits such as predictable billing, outsourced ownership and the move from CAPEX to OPEX are clear, but understandably, businesses still have concerns – particularly around security, visibility and a potential lack of control.

For many companies, the first step is often a hybrid implementation that spans on-premise and cloud.

“Some organisations still feel that they need to control their own communications, especially from a security and risk perspective,” notes Forrester’s Schoeller.

Plus, there are cons to balance the pros. New endpoint devices may be needed, even if the old system still works well, leading to not only extra spending but changes to the way staff work and how they interact with others.

“People used to maintaining older phone systems have to really shift their mindset, skill set and approach to the job. Plus, the move to UCaaS is very disruptive to the historical distribution channel for communications systems,” Schoeller adds.

The key lies in selecting the right provider and developing a strong partnership in which you can have faith that the technology and implementation are what you need.

“The UCaaS market is relatively new so businesses do need to understand who they are taking their services from,” advises Gerosa. “For example, are they financially stable but at the same time nimble in how services are delivered and developed?”

Vertical is the new black

UCaaS adoption looks set to grow at a steady pace, but analysts believe on-premise solutions will continue to play a role for at least the next decade.

While hybrid may be the first step many organisations take, providers look set to push forward with new technologies, capabilities and offerings, even if change is perhaps slow on the side of customers.

“The next phase in the industry’s evolution will be marked by the emergence of ‘productivity UC’, ‘IoT UC’ and ‘vertical UC’ – vertical is the ‘new black’,” says Popova.

“Tailored services bundles, industry certifications, integration with vertical-specific apps and partnerships with vertical experts will deliver superior value in targeted industries.”

Gerosa believes we’ll soon see an even wider selection of services being offered by providers, with greater emphasis being placed on platform-agnostic applications.

“How businesses consume these services will be interesting with some of the dominant players like Microsoft with their 365 suite developing further,” says Gerosa.

“As we see today with mobile phone apps, the potential in cloud just keeps growing so the ability to integrate these applications regardless of which supplier they are taken from will become more of a requirement.”

“If businesses have not already started their journey into the UCaaS world, we would certainly recommend they start now,” she adds.


Pushing cloud AI closer to the edge


Keri Allan

12 Apr, 2018

Cloud-based AI services continue to grow in popularity, thanks to their low cost, easy-to-use integration and potential to create complex services.

In the words of Daniel Hulme, senior research associate at UCL, “cloud-based solutions are cheaper, more flexible and more secure” than anything else on the market.

By 2020 it’s believed that as many as 60% of personal technology device vendors will be using third-party AI cloud services to enhance the features they offer in their products. However, we’re also likely to see a significant growth of cloud-based AI services in the business sector.

One of the biggest drivers of this has been the proliferation of virtual personal assistants (VPAs) in the consumer space, made popular by the development of smart speakers by the likes of Amazon and Google.

Users have quickly adopted the technology into their everyday lives, and businesses were quick to realise the potential locked away in these devices, particularly when it comes to delivering new products.

Drivers of cloud-based AI services

Amazon’s Alexa was the first personal assistant to achieve mass-market appeal

“It’s a confluence of factors,” says Philip Carnelley, AVP Enterprise Software Group at analyst firm IDC. “There is no doubt the consumer experience of using Alexa, Siri and Google Now has helped familiarise businesses with the power of AI.

“But there is also a lot of publicity around AI achievements, like DeepMind’s game-winning efforts – AlphaGo winning against the Go champion, for example – or Microsoft’s breakthrough efforts in speech recognition.”

He adds that improvements to the underlying platforms, such as the greater availability of infrastructure-as-a-service (IaaS) and new developments in graphical processing units, are making the whole package more cost-effective.

Yet, it’s important to remember that despite there being so much activity in the sector, the technology is still in its infancy.

“AI is still very much a developing market,” says Alan Priestley, research director for technology and service partners at Gartner. “We’re in the very early stages. People are currently building and training AI models, or algorithms, to attempt to do what the human brain does, which is analyse natural content.”

The likes of Google, Amazon and Facebook are leading this early development precisely because they have so much untapped data at their disposal, he adds.

The role of the cloud

Vendors have helped drive AI concepts thanks to open source code

The cloud has become an integral part of this development, primarily because of the vast computing resources at a company’s disposal.

“The hyper-scale vendors have all invested heavily in this and are building application programming interfaces (APIs) to enable themselves – and others – to use services in the cloud that leverage AI capabilities,” says Priestley.

“By virtue of their huge amount of captive compute resource, data and software skill set, [these vendors have been] instrumental in turning some of the AI concepts into reality.”

This includes the development of a host of open source tools that the wider community is using today, including TensorFlow and MXNet, and large vendor services are frequently being utilised when training AI models.

According to IDC, businesses are already seeing the value of deploying these cloud-based AI solutions. Although less than 10% of European companies use AI in operational systems today, three times that amount are currently experimenting with, piloting or planning AI usage – whether that be to improve sales and marketing, planning and scheduling, or general efficiency.

Benefits to business

Chatbots were an early AI hit within many businesses

“Businesses are seeing early implementations that show how AI-driven solutions, like chatbots, can improve the customer experience and thereby grow businesses – so others want to follow suit,” says Carnelley.

“Unsurprisingly, companies offering AI products and services are growing fast,” he points out.

Indeed, chatbots were one of the earliest AI-powered features to break into the enterprise sphere, and interest looks set to continue.

According to a report published this month by IT company Spiceworks, within the next 12 months 40% of large businesses expect to implement one or more intelligent assistants or AI chatbots on company-owned devices. They will be joined by 25% of mid-sized companies and 27% of small businesses.

However, organisations are also looking more widely at the many ways AI solutions could help them.

The insurance industry, in particular, is looking at how AI can be used to help predict credit scores and how someone may respond to a premium.

“This is not just making a decision but interpreting the data,” says Priestley. “A lot of this wasn’t originally in digital form, but completed by hand. This has been scanned and stored but until recently it was impossible for computer systems to utilise this information. Now, with AI, technology can extract this data and use it to inform decisions.”

Another example he highlights is the medical sector, which is deploying AI-powered systems to help improve the process of capturing and analysing patient data.

“At the moment, MRI and CT scans are interpreted by a human, but there’s a lot of work underway to apply AI algorithms that improve the interpretation of these images, and diagnosis (via AI),” says Priestley.

Moving to the edge

Self-driving cars will need latency-free analytics

Given the sheer amount of computational power on hand, the development of AI services is almost exclusively taking place in the cloud but, looking forward, experts believe that many will, at least partially, move to the edge.

The latency associated with the cloud will soon become a problem, especially as more devices require intelligent services that are capable of analysing data and delivering information in real time.

“If I’m in a self-driving car it cannot wait to contact the cloud before making a decision on what to do,” says Priestley. “A lot of inferencing will take place in the cloud, but an increasingly large amount of AI deployment will take place in edge devices.

“They’ll still have a cloud connection, but the workload will be distributed between the two, with much of the initial work done at the edge. When the device itself can’t make a decision, it will connect to the ‘higher authority’ – in the form of the cloud – to look at the information and help it make a decision.”
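The pattern Priestley describes can be sketched in a few lines of Python. Here local_model() and cloud_infer() are hypothetical stand-ins for an on-device model and a cloud inference API, and the confidence threshold is chosen arbitrarily:

```python
# Sketch of the edge/cloud split: infer locally for low latency and only
# defer to the cloud when the device isn't confident enough.

CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off

def local_model(frame):
    """Stand-in for an on-device model; returns (label, confidence)."""
    return ("pedestrian", 0.62)

def cloud_infer(frame):
    """Stand-in for a call to a cloud inference API."""
    return ("cyclist", 0.97)

def classify(frame):
    label, confidence = local_model(frame)   # fast, low-latency path
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    # Low confidence: escalate to the 'higher authority' in the cloud.
    return cloud_infer(frame)[0]

print(classify(frame=None))  # falls back to the cloud in this toy example
```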

Essentially, organisations will use the cloud for what it’s good at – scale, training, developing APIs and storing data. Yet it’s clear the era of cloud-only AI is coming to an end.
