Google Cloud confirms Intel Ice Lake processor support for N2 VMs

Bobby Hellard

30 Sep, 2021

Google Cloud has announced that its Compute Engine N2 virtual machines (VMs) will be available with Intel’s 10 nanometer Ice Lake Xeon processors.

There’s no specific date for the release, but Google now joins a list of companies that includes Amazon, Microsoft, and Oracle, which are set to use the latest generation of Xeon Scalable chips for their cloud services.

Google claims that using the 10nm chips in the N2 VMs will offer a 30% boost in price-performance compared to the previous generation of Xeons. The current version of N2 uses Intel’s 14nm second-generation processors, known as Cascade Lake.

The new N2 VMs will be offered at the same price as the existing Cascade Lake N2, and their usage can be discounted using existing N2 committed use discounts, according to Google.

The news comes just two weeks before Google Cloud Next, the cloud giant’s annual conference, where more details of the announcement will likely be shared. This is somewhat behind the rest of the industry, however, with Amazon, Microsoft, and Oracle all confirming support shortly after Intel officially revealed Ice Lake in April.

Google’s N2 will be available in preview early in the fourth quarter of 2021 in the US, Europe, and Southeast Asia, while availability in additional Google Cloud regions, in line with current N2 machine family regions, is planned for “the coming months”, the tech giant said.

The Ice Lake-N2 VMs have already been used by select customers, such as e-commerce firm Shopify, which used them to increase performance and reduce response times for its applications.

“With Google Cloud’s new N2-Ice Lake VMs, we were able to achieve improvements on all these areas,” said Justin Reid, senior staff engineer at Shopify. “We were able to achieve over 10% performance improvements for one of our compute-intensive workloads by running on the new N2 Ice Lake VMs and also achieve lower request latency for our users as compared to previous generation N2 Cascade Lake VMs.”

The rise of cloud misconfiguration threats and how to avoid them

Keri Allan

5 Oct, 2021

With cloud adoption accelerating, the growing scale of cloud environments is outpacing the capacity for businesses to keep them secure. This is why many organisations feel vulnerable to data breaches that might arise as a result of cloud configuration errors. 

More than 80% of the 300 cloud engineering and security professionals questioned by Sonatype and Fugue in their latest cloud security report said they felt their organisations were at risk. Factors include teams struggling with an expanding IT ‘surface area’, an increasingly complex threat landscape, and recruitment challenges coupled with a widening skills gap. 

A major security threat 

Misconfiguration is a major problem because cloud environments can be enormously complicated, and mistakes can be very hard to detect and manually remediate. According to Gartner, the vast majority of publicly disclosed cloud-related security breaches are directly caused by preventable misconfiguration mistakes made by users, highlighting how great a security threat they truly are.

“Often companies use default configurations, which are insecure for many use cases, and unfortunately there’s still a significant skills gap,” says Kevin Curran, professor of cyber security at Ulster University. “The cloud industry is relatively new, so there’s a noticeable deficit in knowledgeable cloud architects and engineers.”

He claims there are numerous scanning services constantly seeking out vulnerabilities to exploit and, because flaws can be abused within minutes of their creation, this has led to an urgent race between attackers and defenders.

“An attacker can typically detect a cloud misconfiguration vulnerability within ten minutes of deployment, but cloud teams are slower in detecting their own misconfigurations,” he adds. “In fact, only 10% are matching the speed of hackers.”

Misconfiguration can happen for many reasons, such as organisations prioritising legacy apps over cloud security, Ben Matthews, a partner at consultancy firm Altman Solon, points out. “Even with the significant growth in cloud adoption in recent years,” he adds, “the current and likely enduring prevalence of mixed and hybrid environments means that this problem isn’t going away anytime soon.”

There are several other common causes of cloud misconfiguration, too. Those questioned as part of Sonatype and Fugue’s study cited too many APIs and interfaces to govern, a lack of controls, oversight and policy, and even simple negligence as among the main reasons.

A fifth (20%) noted their businesses haven’t been adequately monitoring their cloud environments for misconfiguration, while 21% reported not checking infrastructure as code (IaC) prior to deployment. IaC is a process for managing and provisioning IT infrastructure through code instead of manual processes. 
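
The value of checking IaC before deployment can be illustrated with a toy example. The sketch below is not any real scanner or template schema — the resource dictionaries and rule names are invented — but it shows the basic idea: walk the declared resources and flag risky settings before anything is provisioned.

```python
# Minimal sketch of a pre-deployment IaC check (illustrative only).
# The resource format below is a simplified stand-in, not any real
# Terraform/CloudFormation schema.

def find_misconfigurations(resources):
    """Return a list of (name, issue) pairs for risky settings."""
    issues = []
    for res in resources:
        if res.get("type") == "storage_bucket" and res.get("public_read", False):
            issues.append((res["name"], "bucket allows public read access"))
        if (res.get("type") == "firewall_rule"
                and res.get("source") == "0.0.0.0/0"
                and 22 in res.get("ports", [])):
            issues.append((res["name"], "SSH open to the whole internet"))
    return issues

template = [
    {"type": "storage_bucket", "name": "logs", "public_read": True},
    {"type": "firewall_rule", "name": "admin", "source": "0.0.0.0/0", "ports": [22]},
    {"type": "storage_bucket", "name": "assets", "public_read": False},
]

for name, issue in find_misconfigurations(template):
    print(f"{name}: {issue}")
```

Real scanners apply hundreds of such rules against the actual template language, but the principle — catch the mistake in code review rather than in production — is the same.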

It’s a people problem

Experts agree that cloud misconfiguration is, first and foremost, a people problem, with traditional security challenges such as alert fatigue, the complexity of managing applications and workloads, and human error playing a significant role. 

“Laziness, a lack of knowledge or oversight, simple mistakes, cutting corners, rushing a project – all these things play into misconfigurations,” points out Andras Cser, vice president and principal analyst at Forrester. 

Organisations also find the demand for cloud security expertise is outstripping supply, making it harder than ever to retain staff with the knowledge required to guarantee cloud security. Often, there’s also confusion within businesses as to who’s responsible for checking for vulnerabilities, and, if any are found, ensuring they’re removed.

“Secure configuration of cloud resources is the responsibility of cloud users and not the cloud service providers,” clarifies Gartner’s senior director analyst, Tom Croll. “Often, misconfigurations arise due to confusion within organisations about who’s responsible for detecting, preventing and remediating insecure cloud assets. Application teams create workloads, often outside the visibility of security departments, and security teams often lack the resources, cooperation or tools to ensure workloads are protected from misconfiguration mistakes.”

Curran continues by highlighting that different teams are responsible at different stages of any cloud project. For instance, cloud developers using IaC to develop and deploy cloud infrastructure should be aware of the major security parameters included in the software development cycle. The security team, on the other hand, is generally responsible for monitoring, and the compliance team for audits. To make things more complicated, Sonatype and Fugue’s report suggests cloud security requires more cross-team collaboration than securing a traditional data centre does. More than a third (38%) of those surveyed, however, cited friction between teams over cloud security roles.

Avoiding cloud configuration errors

Wherever possible, organisations will want to prevent cloud misconfiguration problems from arising in the first place. This can be achieved by using tools such as IaC scanning during the development phase, and the adoption of policy as code (PaC), which, according to Curran, has revolutionised how IT policy is implemented. 

Rather than following written rules and checklists, in PaC, policies are expressed “as code” and can be used to automatically assess the compliance posture of IaC and the cloud environments organisations are actively running. 

“Using PaC for cloud security is significantly more efficient and cost-effective as it’s repeatable, shareable, scalable and consistent,” he explains, adding: “It also greatly reduces security risks due to human error.” Of course, mistakes can be missed and, therefore, continuous 24/7 monitoring should be core to a business’ cloud security operation in order to maximise the chances of finding potential vulnerabilities.

Experts advise businesses to use automated security services, such as cloud security posture management (CSPM), which are designed to identify misconfiguration issues and compliance risks in the cloud. This particular tool automates the process of finding and fixing threats across all kinds of cloud environments. 

“These allow cloud platform admins to create a good baseline of cloud configuration artefacts, then detect any drifts from it,” Forrester’s Cser continues. “It also takes advantage of best-practice templates that will flag issues around S3 buckets or overprivileged instances, for example. Automated CSPM visibility, detection and remediation should be continuous.”
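
The drift detection Cser describes can be sketched in a few lines. This is an illustrative toy, not a real CSPM product — actual tools query cloud provider APIs rather than plain dictionaries — but the principle of comparing a known-good baseline against the live configuration is the same.

```python
# Illustrative sketch of configuration-drift detection, the core idea
# behind CSPM tools: compare a known-good baseline against the live
# configuration and report any differences.

def detect_drift(baseline, current):
    """Return settings that differ from (or are missing from) the baseline."""
    drift = {}
    for key, expected in baseline.items():
        actual = current.get(key)
        if actual != expected:
            drift[key] = {"expected": expected, "actual": actual}
    return drift

baseline = {"encryption": "enabled", "public_access": False, "mfa_required": True}
current = {"encryption": "enabled", "public_access": True, "mfa_required": True}

print(detect_drift(current=current, baseline=baseline))
```

In a production CSPM workflow the detected drift would feed an alerting or auto-remediation pipeline rather than a print statement.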

SolarWinds hackers are targeting Microsoft AD servers

Sabina Weston

29 Sep, 2021

Nobelium, the hacking group responsible for last year’s cyber attack on SolarWinds, is now stealing data from Active Directory Federation Services (AD FS) servers.

That’s according to Microsoft’s Threat Intelligence Center (MSTIC), which has issued a warning about Nobelium’s latest actions on its blog.

The Russian state-backed hacking group was found to be using a post-exploitation backdoor dubbed FoggyWeb in order to remotely exfiltrate sensitive data as well as maintain persistence on victims’ networks, warned MSTIC researcher Ramin Nafisi.

In order to steal the data, Nobelium hackers first gain admin privileges to AD FS servers by employing “multiple tactics to pursue credential theft”. Once they manage to compromise the server, they then deploy FoggyWeb “to remotely exfiltrate the configuration database of compromised AD FS servers, decrypted token-signing certificates and token-decryption certificates”, wrote Nafisi.

The “passive and highly targeted” FoggyWeb backdoor “has been observed in the wild as early as April 2021”, he added.

Microsoft stated that it had notified all customers believed to be targeted by Nobelium. However, it didn’t rule out that some organisations might still be at risk. It recommends that potential victims audit their on-premises and cloud infrastructure, “remove user and app access”, strengthen their passwords, as well as “use a hardware security module (HSM) in securing AD FS servers to prevent the exfiltration of secrets by FoggyWeb”.

The tech giant also advised organisations to “harden and secure AD FS deployments” by taking additional measures, including limiting on-network access via host firewall and requiring all cloud admins to use multi-factor authentication.

The warning comes three months after Nobelium was found to have engaged in “password spray and brute-force attacks” on Microsoft’s customers, with around 10% of the targets being based in the UK.

The hackers implanted “information-stealing malware” on a device belonging to a Microsoft customer support agent, through which they obtained “basic account information for a small number of [Microsoft’s] customers”, according to the tech giant.

Prior to this, Nobelium launched a wave of attacks on more than 150 government agencies, think tanks, consultants, and NGOs from 24 countries, targeting an estimated 3,000 email accounts.

A third of businesses set to spend $1 million on AI by 2023

Bobby Hellard

29 Sep, 2021

A third of organisations with plans to adopt artificial intelligence (AI) have said they will invest $1 million or more into the technology over the next two years. 

That’s according to Gartner’s annual Emerging Technology Product Leaders survey, in which the majority of respondents (87%) predicted that industry-wide funding for AI would increase at a “moderate to fast pace” throughout 2022.

The survey was conducted between April and June of this year with 268 respondents from China, Hong Kong, Israel, Japan, Singapore, the UK and the US. Respondents were required to be involved in their organisation’s portfolio decisions when it comes to emerging technology and to work at an organisation in the high-tech industry with enterprise-wide revenue for fiscal year 2020 of $10 million or more.

AI seems to be the priority for most, with an average planned investment of $679,000 in computer vision over the next two years. Compared with other emerging technology areas, such as cloud and IoT, AI technologies had the second-highest reported ‘mean funding’ allocation.

“Rapidly evolving, diverse AI technologies will impact every industry,” said Errol Rasit, managing vice president at Gartner.

“Technology organisations are increasing investments in AI as they recognise its potential to not only assess critical data and improve business efficiency, but also to create new products and services, expand their customer base and generate new revenue. These are serious investments that will help to dispel AI hype.”

Just over half of the respondents reported significant customer adoption of their AI-enabled products and services. A further 41% cited AI emerging technologies as still being in development or at early adoption stages, suggesting there is a wave of potential adoption as new or augmented AI products and services enter general availability.

The report is in contrast to another Gartner report from earlier in September, which highlighted the lack of talent the UK is currently facing and the barriers it could create for businesses adopting emerging technology. 

The perceived lack of talent was cited as the leading factor inhibiting adoption across six technology domains: compute infrastructure and platform services, network, security, digital workplace, IT automation, and storage and database.

Cloudflare takes aim at “exorbitant” AWS fees with R2 storage service

Bobby Hellard

29 Sep, 2021

Internet giant Cloudflare has made a bold pitch for enterprise customers with its new R2 object storage service. 

Cloudflare claims the selling point of R2 is that it comes with no “outrageous” charges for migrating data to external services, pitting it directly against Amazon’s dominant S3 service. 

R2 Storage is designed for the edge, according to Cloudflare, and offers customers the ability to store large amounts of data and extract it for no additional cost.

In order to build websites and applications, developers need to store photos, videos, and graphics in easily accessible places, but that can become an expensive problem over time. AWS S3 is well known for its “egress” charges that can result in hefty bills as usage grows, and Microsoft Azure and Google Cloud also implement similar fees for data migration.
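
To see why egress charges matter, consider a rough back-of-the-envelope calculation. The per-gigabyte rate below is a hypothetical placeholder, not any provider’s actual pricing:

```python
# Back-of-the-envelope illustration of how egress charges accumulate.
# The per-gigabyte rates are hypothetical placeholders, not real
# provider pricing.

def monthly_egress_cost(gb_transferred, rate_per_gb):
    """Cost of serving gb_transferred gigabytes out of storage in a month."""
    return gb_transferred * rate_per_gb

# A site serving 50 TB of stored assets per month:
gb = 50 * 1024
with_egress = monthly_egress_cost(gb, 0.09)   # hypothetical $0.09/GB rate
zero_egress = monthly_egress_cost(gb, 0.0)    # R2's pitch: no egress fee

print(f"with egress fees: ${with_egress:,.2f}")
print(f"zero egress fees: ${zero_egress:,.2f}")
```

Even at modest per-gigabyte rates, the charge scales linearly with traffic, which is why bandwidth-heavy workloads feel it most.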

However, both Azure and Google Cloud offer substantial discounts for their mutual Cloudflare customers, according to a Cloudflare blog from July.

Increasingly egregious bandwidth pricing has made cloud storage an expensive headache for some developers and can eventually lead to vendor lock-in, according to Cloudflare. As such, the company says it is on a mission to help build a better internet by making it faster, safer, and more affordable for everyone.

“Since AWS launched S3, cloud storage has attracted, and then locked in, developers with exorbitant egress fees,” said Matthew Prince, co-founder and CEO of Cloudflare. “We want developers to keep developing, not worrying about their storage bill. 

“Our aim is to make R2 Storage the least expensive, most reliable option for storing data, with no egress charges. I’m constantly amazed by what developers are building on our platform, and look forward to continued innovation as we expand the tools they have access to.”

As well as entering the enterprise storage business, Cloudflare this week also announced its first foray into the email security industry.

How the cloud is supporting retailers in the age of Black Friday

Cloud Pro

30 Sep, 2021

Anyone who’s worked in retail will tell you that it’s often far from easy. That doesn’t just apply to those on the shop floor, though – running a retail business can present a range of challenges for those working behind the scenes, too. It’s an incredibly fast-moving industry, with myriad shifting patterns that can see customers ebb and flow on an almost daily basis.

In such a fluid sector, the demands on IT departments are high. For ecommerce businesses, infrastructure has to be stable, performant and available at all times, and user experience is of the utmost importance. This can be a tough balancing act, particularly for organisations that only have a limited technical workforce, and these pressures have driven many companies into the cloud.

Over the last decade, the world has been increasingly shifting towards online shopping. In fact, according to figures from Statista, global ecommerce sales have grown by more than 200% since 2014 and are projected to surpass $6 trillion by 2024. That growth in demand has driven a vast proliferation of channels, moving from desktop websites to mobile sites, dedicated apps and even social media storefronts – all of which require time, talent and resources to maintain.

Online shopping has also sharpened the effects of seasonal peaks and troughs. Alongside traditionally busy periods like Christmas, summer and Easter, increasing globalisation has added new entries to the retail calendar, including Black Friday and Cyber Monday. As shoppers eagerly flock to their favourite store pages in search of bargains, the infrastructure behind them has to cope with a sudden influx of traffic which can be orders of magnitude higher than the normal average.

As if that wasn’t enough, the pandemic threw further complications into the path of retailers with national lockdowns and stay-at-home orders. Almost overnight, businesses were forced to transition to a fully digital business model, and the legions of people easing the boredom of being stuck at home by shopping online caused problems even for digitally native organisations.

The consequence of all of this is that ecommerce businesses have had to undergo a rapid transformation in order to make sure their infrastructure is flexible, scalable and responsive enough to support these trends. Cloud technologies have been instrumental in this; when traffic to an online storefront spikes, more infrastructure capacity is needed to ensure that visitors aren’t put off by poor performance and long load times. However, in an on-premises environment, adding more capacity means installing and spinning up more physical appliances.

A cloud-based model, on the other hand, allows extra capacity to be quickly and easily added as it becomes necessary, and because it’s charged on a consumption basis, you can turn it off again once the spike passes. Content delivery networks (CDNs) such as G-Core Labs’ can be particularly helpful here, automatically performing load-balancing duties to ensure that traffic is spread out over as many servers as necessary in order to ensure stable performance. This scalability and elasticity make cloud infrastructure like G-Core Labs Cloud significantly more cost-effective for dealing with unexpected surges than traditional servers.

For instance, major Asian online retailer Zalora found that its infrastructure was no longer able to cope with the traffic demands placed on it. Zalora moved its entire infrastructure to the cloud and can now handle site traffic increasing by 300-400% during sales without experiencing any dip in performance.

The growth of online shopping has also opened up new markets for retailers, who can reach customers all over the world. The same is also true for their rivals, though, and a global digital economy means more competition for sales. Retailers need to be smarter about winning and retaining customers, and must rely on more than discounts to entice people to their page.

Accordingly, digital marketing technology has exploded to fill this need. Brands can now engage with their customers across a huge range of channels, including email, social media and instant messaging platforms, with cloud-based tools not just to efficiently automate these communications, but to track interactions with customers across all of an organisation’s channels. This helps retailers form deeper and more meaningful connections with customers, increasing brand loyalty and strengthening the relationship.

Customer-facing technologies like online storefronts and digital marketing aren’t the only tools that have drawn retailers to the cloud, however. There are many back-office and line-of-business roles within retail that have benefited from the recent growth of SaaS applications, including areas like stock control, logistics management, payroll and more. The cloud has even helped modernise physical stores: cloud-based AI systems can now be used to perform complex operations like measuring footfall or stock levels from CCTV footage.

Arguably the most significant change that the cloud has introduced, however, is a focus on data-driven decision-making. All of the tools and techniques we’ve spoken about so far generate information about who shoppers are, what products and services they’re most interested in, when they make purchases, what device they make purchases with, and much more. All of that data can be collected, harnessed and analysed in order to increase your store’s effectiveness.

This can range from something as simple as changing what time your email newsletter goes out to match your customers’ activity patterns, to a more in-depth change like analysing bounce rates to make your store easier to navigate. The shrewdest retailers are taking all of their available customer and sales data and combining it into what’s known as a “single customer view”, representing a near-complete picture of that brand’s customer base. Rather than investing in large data centre deployments to support this, however, many organisations have turned to cloud platforms like G-Core Labs to facilitate these efforts. The G-Core Labs infrastructure is based on Intel solutions, including the latest 3rd Gen Intel Xeon Scalable (Ice Lake) processors, ensuring enterprise-grade performance for AI and data workloads.
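
At its core, a “single customer view” amounts to merging per-channel records about the same customer into one profile. A minimal sketch, with invented field names and a simple last-write-wins merge policy:

```python
# Simplified sketch of building a "single customer view": merge records
# about the same customer from several channel systems into one profile.
# Field names and sources are invented for illustration; real pipelines
# also handle identity resolution, conflicts and data quality.

from collections import defaultdict

def single_customer_view(*sources):
    """Merge per-channel records keyed by customer id; later sources win on conflicts."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            profiles[record["customer_id"]].update(record)
    return dict(profiles)

web = [{"customer_id": 1, "email": "a@example.com", "last_visit": "2021-09-28"}]
app = [{"customer_id": 1, "device": "iOS", "purchases": 3}]

view = single_customer_view(web, app)
print(view[1])
```

The hard part in practice is deciding which source wins when records disagree; the last-write-wins rule here is only the simplest possible choice.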

For example, US department store Macy’s uses big data to create price lists for each of its 800 outlets – and is able to do so in real time using the cloud. It also uses analytics to create personalised offers for its customers, and the number of variations for a single mailing campaign can reach an impressive 500,000.

Of course, all of that data also makes an attractive target for hackers, and retailers in particular need to ensure that their cyber security practices are up to standard. Cloud-based endpoint protection systems can help safeguard back-office staff, while robust monitoring and alerting tools can flag suspicious activity on any public-facing sites. Intel SGX sensitive data protection technology is also integrated into the G-Core Labs cloud.

Distributed denial of service (DDoS) attacks are one common tactic that cyber criminals often use against e-commerce businesses, flooding sites with traffic until they crash, then offering to shut the traffic off in exchange for a payment. The perpetrators of these kinds of attacks deliberately time them around peak shopping times like Black Friday or Christmas, making them potentially one of the most financially damaging situations an e-commerce business can face. Thankfully, modern DDoS mitigation services like G-Core Labs’ DDoS protection have evolved to cope with these kinds of attacks, putting a layer in front of the site that can detect and intercept malicious traffic before the target’s infrastructure can be overwhelmed.

Online health and beauty retailer eVitamins tried multiple solutions to reduce the impact of DDoS attacks – including adopting intrusion prevention systems, blocking suspicious IP addresses and analysing logs – but failed to sufficiently reduce the disruption and cost they caused the business. It finally achieved effective DDoS protection by transferring its infrastructure to a public cloud environment.

The world of retail has changed enormously over the past decade, and it’s not going to stop any time soon. Both high street and ecommerce businesses are in a period of rapid evolution, and cloud services are essential for staying ahead of the curve and on top of your competition. Whether you want to increase customer retention, maximise transactions or simply make life easier for the technical teams keeping your business moving, G-Core Labs’ cloud technology is an essential tool in your arsenal.

Learn more about G-Core Labs’ services

HPE GreenLake takes aim at data protection and analytics

Sabina Weston

28 Sep, 2021

HPE has announced a series of new GreenLake offerings which signal its entrance into the analytics and data protection markets.

HPE GreenLake for analytics is a set of open and unified analytics cloud services that aim to modernise data and applications stored on-premises, at the edge, and in the cloud. It will enable analytics and data science teams to scale up Apache Spark lakehouses and speed up artificial intelligence (AI) and machine learning (ML) workflows, according to HPE. 

HPE GreenLake for data protection offers backup cloud services as well as disaster recovery sourced from HPE’s recent acquisition of Zerto. It provides restore times as short as a few minutes, allowing organisations to recover from ransomware attacks without impacting business operations, regardless of the scenario.

HPE has also announced the Edge-to-Cloud Adoption Framework, which aims to support customers in creating an effective cloud operating model by evaluating it against categories such as Strategy and Governance, People, Operations, Innovation, Applications, DevOps, Data, and Security.

Besides its three main new offerings, HPE also showcased a new addition to its AIOps for infrastructure platform, HPE InfoSight. Known as HPE InfoSight App Insights, the tool is capable of detecting application anomalies, providing recommendations, and preventing disruptions in application workloads. Customers looking to make smarter, data-based IT decisions across edge-to-cloud will also benefit from the new HPE CloudPhysics.

Commenting on the announcements, HPE president and CEO Antonio Neri said that “data is at the heart of every modernisation initiative in every industry”.

He added, however, that many organisations “have been forced to settle for legacy analytics platforms that lack cloud-native capabilities, or force complex migrations to the public cloud that require customers to adapt new processes and risk vendor lock-in.”

According to Neri, the big data and analytics software market, estimated by IDC to be worth $110 billion by 2023, “is ripe for disruption, as customers seek a hybrid solution for enterprise datasets on-premises and at the edge”.

“The new HPE GreenLake cloud services for analytics empower customers to overcome these trade-offs and give them one platform to unify and modernise data everywhere,” he said. “Together with the new HPE GreenLake cloud services for data protection, HPE provides customers with an unparalleled platform to protect, secure, and capitalise on the full value of their data, from edge to cloud.”

Customers can benefit from the HPE Edge-to-Cloud Adoption Framework starting today, while HPE GreenLake for analytics and HPE GreenLake for data protection will become available in the first half of 2022.

Amazon to offer cyber insurance to UK SMBs

Daniel Todd

28 Sep, 2021

Amazon has partnered with Superscript to start offering insurance to small and medium-sized businesses in the UK, with companies soon able to order cyber insurance via the tech giant. 

It’s Amazon’s first push into the country’s insurance market, as it looks to offer businesses an alternative to the traditional players.

According to broker Superscript, members of Amazon’s Business Prime Programme will be able to purchase its various plans, which include cyber insurance, contents insurance, and professional indemnity insurance. 

These will be underwritten by “major UK insurers”, the firm says, and will be discounted by 20% in comparison to current rates.

As reported by Reuters, a recent Capgemini survey of 12,000 people found that more than 50% of customers are already prepared to buy insurance from non-traditional outlets such as big tech or insurance technology companies. 

“The (insurance) industry needs to bridge the divide between insurers and customers by providing a quick, smooth buying process that is customer-centric,” Cameron Shearer, Superscript co-founder and CEO, said in a statement.

Superscript says its cyber insurance is designed to cover the risks that come with storing and handling data when running a business. This includes accidental privacy breaches, cyber crime, as well as hacking, extortion and ransomware. Malware is also covered, as is lost income and restoring data, and denial of service attacks.

Essentially, the broker’s cyber insurance covers compensation businesses may have to pay due to data or security breaches, media content liability such as intellectual property infringement, GDPR defence costs and penalties, credit or debit card breaches, unauthorised usage of systems, and data breach response services.

Superscript also currently offers IT contractors and consultants insurance and software developer insurance via its own website.

Amazon’s foray into UK business insurance with Superscript follows US insurtech firm Next Insurance’s announcement back in March that it would be offering insurance to US small businesses via Amazon Prime.

“As businesses come out of the pandemic and gradually resume normalcy, we want customers to have the best-in-class tools to run their business,” Molly Dobson, country manager for Amazon Business UK & Ireland, said in a statement.

LogMeIn GoToAssist Remote Support 5 review: A great support package

Dave Mitchell

30 Sep, 2021

Uncomplicated cloud-based remote support that’s easy to use, well featured and very affordable

In our new world of remote working, being able to effectively support your employees is more important than ever. LogMeIn’s GoToAssist Remote Support is a well established cloud-hosted support service, and this latest update adds flexibility by allowing technicians to launch remote-access sessions from their personal web portal or a dedicated desktop application.

Whichever method you choose, it’s easy to set up an on-demand connection. With a few clicks you can generate a nine-digit access code and a web link, which can be sent directly to the client via email or SMS. The end user can then either click on the link or manually open the website and enter the code to initiate the session.
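
LogMeIn doesn’t document how the code is produced, but generating a short one-time numeric code is straightforward. A generic sketch using a cryptographically secure random source (not LogMeIn’s actual implementation) might look like this:

```python
# Generic illustration of generating a nine-digit one-time access code
# using Python's secrets module. This is an assumption-based sketch,
# not LogMeIn's actual mechanism.

import secrets

def generate_access_code(digits=9):
    """Return a zero-padded numeric code drawn from a CSPRNG."""
    return str(secrets.randbelow(10 ** digits)).zfill(digits)

code = generate_access_code()
print(code)  # prints a random nine-digit code
```

Using `secrets` rather than `random` matters here: access codes guard a live remote-control session, so they need to be unpredictable to an attacker.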

A few security measures help prevent unauthorised access. Before remote control is granted, the user is shown a web page displaying details of the person requesting the session; they must also manually download the temporary agent and explicitly permit remote access. While the session is running, either party can click the disconnection button to instantly end the session – and afterwards the agent immediately unloads itself, so there’s no opportunity for hackers to probe it.

From the technician’s side, the web portal and desktop app are effectively identical, with an upper menu providing an extensive range of support tools. You’ll find options to reboot and reconnect, send files to the client, browse their storage devices and transfer the session to another technician. Technicians can share their own screens too, via a browser console on the remote system, and for general troubleshooting there’s a handy information button, which shows a snapshot of the client system’s CPU usage, memory and running processes. 

The major difference between the web and app interfaces emerges when multiple support sessions are open together. In the application, a sidebar appears showing thumbnail views, so you can keep an eye on what’s going on and quickly swap between systems. If you’re using the web portal, you can still manage multiple sessions, but each one opens in its own web page.

Unattended access can be set up in two ways. If you’re already inside an on-demand session then you just have to click the “Add Device” button on the toolbar; this pops up a request on the remote system asking permission to install a permanent agent, and the technician’s remote control is temporarily suspended, so only the user can approve this. We tried this process on Windows and macOS and found it seamless on both. Alternatively, you can download the permanent agent from the web console and send it to the client for manual installation. 

Mobile support is mixed. An Android agent is in the works, but it’s currently in beta and officially only works on Samsung and LG mobiles. On iOS, meanwhile, the technician app only supports GoToAssist v4, and any unattended systems set up from v5 won’t show in its console. You can still open the web portal in Safari, however, and use that to create on-demand sessions or access unattended systems.

It’s also worth noting that support for Android and iOS client devices is an add-on, costing £12 per month for each technician. We tested it on an iPad with the GoToAssist Customer v5 app loaded and had no problems remotely viewing its screen from the desktop console and web portal.

Mobile camera sharing is another handy tool, giving users an easy way to show technicians what they can see, and enabling voice conversations too. These connections work similarly to regular support sessions, but use a separate URL and require their own security code.

LogMeIn’s GoToAssist Remote Support 5 is easy to use and affordable, as subscriptions are based on the number of technicians with no limits on simultaneous sessions. The iOS technician app needs an update, but overall this is a great support package that makes a good fit for most SMBs.

Cisco and AMD help modernise defence industry IT infrastructure

Sabina Weston

27 Sep, 2021

Cisco has announced that it is helping defence departments modernise and simplify their IT infrastructure with its AMD-powered rack servers.

The UCS C4200 Series Rack Server Chassis hosts four UCS C125 M5 Rack Server Nodes in two rack units (2RU) with shared power and cooling. The server nodes are powered by AMD EPYC processors, which boast “the highest core density in the industry”.

This has allowed defence departments to cut their cabling by 60%, from 200 individual cables down to just 80.

Cisco has also managed to consolidate 20 racks of gear into a single rack, as well as cut power consumption and licensing costs, making the technology more affordable to run for defence departments, which are typically taxpayer-funded.

The new offerings make it easier to manage servers: in a case study, Cisco detailed that “one defence agency deployed the Cisco UCS C-Series to simplify infrastructure management and scaling”. 

The unnamed defence department no longer has to manually manage servers on an individual basis, and can now use UCS Manager to orchestrate them “collectively using software-defined service profiles”.

Cisco’s UCS Manager simplifies the deployment of service profiles to both rack and blade servers, with defence departments being able to manage as many as 160 nodes at the same time. 

“As a result, the defence agency has streamlined infrastructure management, established greater consistency of server configuration and security, and simplified scaling without the need for downtime,” said Cisco.

The combination of the UCS C4200 Series Rack Server Chassis and UCS C125 M5 Rack Server Nodes is said to benefit “various defence departments in multiple countries”, although Cisco didn’t specify which countries’ departments are customers.

One of the beneficiaries could be the US Department of Defense (DoD), which has strong ties with the American tech giant. In July, Cisco launched Webex for Defense, an all-in-one collaboration platform specifically made for the military department. Authorised to work with the DoD’s national security systems, the new tool integrates with Cisco’s full Webex portfolio of devices, allowing users to connect securely from phone, desktop, or video.