This exploit could give users free Windows 7 updates beyond 2020


Keumars Afifi-Sabet

10 Dec, 2019

Members of an online forum have developed a tool that could be used to bypass eligibility checks for Windows 7 extended support and receive free updates after the OS reaches end-of-life.

Only a select group of Windows 7 users can continue to receive updates from Microsoft after 14 January, via its paid-for Extended Security Updates (ESU) programme, which runs until January 2023.

This scheme was first introduced for enterprise customers in August and later extended to SMB users after Microsoft identified “challenges in today’s economy”.

The ESU programme is not available to all businesses, however. Users on tech support platform My Digital Life have therefore developed a prototype tool that could theoretically allow ineligible businesses to continue to receive free updates beyond 14 January.

Before ESU patches are delivered to eligible machines, Windows 7 performs a check to determine whether or not users can receive these updates. This involves the installation and activation of an ESU licence key. The tool bypasses this eligibility check, which is only performed during installation, so users could, in theory, continue to receive Windows 7 updates through the ESU scheme without paying for a subscription.
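To illustrate why a check performed only at installation is so fragile, here is a deliberately generic sketch of a check-once licensing flow in Python. It is an invented model for illustration only: the names and flow are assumptions and bear no relation to Microsoft's actual implementation or to the forum tool.

```python
# Hypothetical model of a "check once at install, trust thereafter" design.
# Nothing here reflects Microsoft's real ESU implementation; it only shows why
# a single point-in-time eligibility check is weaker than checking per update.

from dataclasses import dataclass

@dataclass
class Device:
    esu_key_activated: bool = False   # set by installing/activating an ESU key
    marked_eligible: bool = False     # flag written once at package install time

def install_eligibility_package(device: Device) -> None:
    """Eligibility is verified a single time, when the package is installed."""
    if device.esu_key_activated:
        device.marked_eligible = True

def deliver_update(device: Device, patch: str) -> bool:
    """Later patches trust the stored flag; no licence re-check at delivery."""
    return device.marked_eligible

# If the flag can be set by any other means, every subsequent patch follows.
laptop = Device()
install_eligibility_package(laptop)
print(deliver_update(laptop, "KB4528069"))   # False: key was never activated
```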

The bypass was tested on the Windows 7 update KB4528069, a dummy update issued to users in November so they could verify whether they were eligible for extended support after 14 January.

Although the tool has worked on the test patch, its creators urged My Digital Life forum members to treat it as a prototype rather than a fully-fledged workaround, as things may change by February 2020.

Microsoft will be keen to ensure there aren’t any ways to undermine the ESU scheme once Windows 7 reaches end-of-life, both because of the sums it’s charging eligible businesses and because of its ultimate desire to shift machines to Windows 10.

The firm is likely to change the way the eligibility check is performed, given how simple it has proven to bypass.

It’s certainly not a tool that Microsoft is likely to condone, but it does demonstrate how popular Windows 7 remains, with users trying to retain uninterrupted access to the legacy OS.

Businesses have just weeks to upgrade their devices running Windows 7 or Windows XP, or face losing access to critical security updates.

Microsoft launches Office 365 phishing campaign tracker


Keumars Afifi-Sabet

10 Dec, 2019

Microsoft has devised a phishing campaign dashboard for its Office 365 Advanced Threat Protection (ATP) module to give customers a broader overview of phishing threats beyond just individual attacks.

The newly-announced ‘campaign views’ tool provides additional context and visibility around phishing campaigns. This aims to give businesses under constant threat from phishing attempts a fuller story of how attackers came to target an organisation, and how well attempts were resisted. 

Security teams with access to the dashboard can see summary details about a broader campaign, including when it started, any activity patterns and a timeline, as well as how far-reaching the campaign was and how many victims it claimed. 

The ‘campaign views’ tool also provides a list of the IP addresses and senders used to orchestrate the attack, as well as the URLs used in it. Moreover, security staff will be able to assess which messages were blocked, delivered to junk or quarantine, or allowed into an inbox.

“It’s no secret that most cyberattacks are initiated over an email. But it’s not just one email – it’s typically a swarm of email designed to maximize the impact of the attack,” said Girish Chander, group program manager for Office 365 security at Microsoft.

“The common pattern or template across these waves of email defines their attack ‘campaign’, and attackers are getting better and better at morphing attacks quickly to evade detection and prevention. 

“Being able to spot the forest for the trees – or in this case the entire email campaign over individual messages – is critical to ensuring comprehensive protection for the organization and users as it allows security teams to spot weaknesses in defenses quicker, identify vulnerable users and take remediation steps faster, and harvest attacker intelligence to track and thwart future attacks.”

Office 365’s ATP tool is an email filtering service that safeguards an organisation against malicious threats posed by email messages, links and collaboration tools.

With the additional information at hand, Microsoft is hoping that security teams within organisations can more effectively help compromised users, and improve the overall security setup by eliminating any configuration flaws. 

Related campaigns to those targeting the organisation can also be investigated, and the teams can help hunt down threats that use the same indicators of compromise.

The ‘campaign views’ dashboards are available to customers with a suite of Office 365 plans including ATP Plan 2, Office 365 E5, Microsoft 365 E5 Security, and Microsoft 365 E5.

These new features have started rolling out in public preview, with Microsoft suggesting they will become more generally available over the coming days and weeks.

Why cybersecurity needs to focus more on customer endpoints going forward

  • Cloud-based endpoint protection platforms (EPP) are proliferating across enterprises today as CIOs and CISOs prioritise greater resiliency in their endpoint security strategies going into 2020
  • Gartner forecasts that global information security and risk management end-user spending will grow at a five-year CAGR of 9.2% to reach $174.5 billion in 2022, with approximately $50bn of that spent on endpoint security (see the short calculation after this list)
  • Endpoint security tools account for 24% of all IT security spending, and global IT security spending will reach $128bn by 2020, according to Morgan Stanley Research
  • 70% of all breaches still originate at endpoints, despite the increased IT spending on this threat surface, according to IDC
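
As a quick sanity check on the Gartner figure above, the compounding is easy to reproduce: a 9.2% five-year CAGR ending at $174.5 billion in 2022 implies a base-year figure of roughly $112 billion. The short Python calculation below is illustrative arithmetic only; the base-year number is derived from the quoted figures rather than taken from Gartner.

```python
# Back-of-envelope check of the CAGR figure quoted above. The implied base-year
# spend is derived, not a quoted Gartner number.

end_value = 174.5       # $bn, forecast 2022 spend
cagr = 0.092            # 9.2% compound annual growth rate
years = 5

implied_base = end_value / (1 + cagr) ** years
print(f"Implied base-year spend: ${implied_base:.1f}bn")    # ~ $112.4bn

# Growing that base forward year by year reproduces the quoted end point.
value = implied_base
for year in range(1, years + 1):
    value *= 1 + cagr
print(f"Value after {years} years: ${value:.1f}bn")          # ~ $174.5bn
```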

There’s a surge of activity happening right now in enterprises that are prioritising greater resiliency in their endpoint security strategies going into 2020. The factors motivating CIOs, CISOs, IT and practice directors to prioritise endpoint resiliency include more effective asset management based on real-time data, and the assurance that every endpoint can heal itself using regenerative software designed in at the BIOS level of every device.

CIOs say the real-time monitoring helps reduce asset management operating expense, a big plus many of them appreciate given their tight budgets. Sean Maxwell, chief commercial officer at Absolute, says: “Trust is at the centre of every endpoint discussion today as CIOs, CISOs and their teams want the assurance every endpoint will be able to heal itself and keep functioning.”

The endpoint market is heating up going into 2020

More than thirty vendors are competing in the endpoint security market right now. Among the most interesting are Absolute Software, Microsoft and Palo Alto Networks, all of which are seeing a surge of activity from enterprises, based on discussions with CIOs and CISOs.

Absolute Software’s Persistence self-healing endpoint security technology is embedded in the firmware of more than 500 million devices and gives CIOs, CISOs and their teams complete visibility and control over devices and data. Absolute positions itself as the leading visibility and control platform, providing enterprises with tamper-proof resilience and protection for all devices, data and applications.

Like Absolute, Microsoft provides endpoint protection built in at the device level, with its core focus being the OS. Windows 10 now has Windows Defender Antivirus integrated at the OS level, the same protection System Center Endpoint Protection delivers on Windows 7 and 8. The Microsoft Defender Advanced Threat Protection (ATP) incident response console aggregates alerts and incident response activities across Microsoft Defender ATP, Office 365 ATP, Azure ATP and Active Directory, in addition to Azure.

Further evidence of how enterprise customers are placing a high priority on endpoint security is the increase in valuations of key providers in this market, including Absolute Software (TSE: ABT). Absolute’s stock price has jumped 13% in the month since its latest earnings announcement on 12 November.

Absolute’s CEO Christy Wyatt commented during the company’s most recent earnings call: “The ability to utilise near real-time data from the endpoint… to deliver actionable insights to IT about where controls are failing, and the ability to apply resilience to self-heal and reinforce those security controls, will become a critical skill for every one of our customers. This is the essence of Absolute’s platform, which adds resiliency to our customers’ operations.” It’s evident from what CIOs and CISOs are saying that resiliency is transforming endpoint security today, and that this will accelerate in 2020.

Key takeaways from conversations with enterprise cybersecurity leaders

The conversations with CIOs, CISOs, and IT Directors provided valuable insights into why resiliency is becoming a high priority for endpoint security strategies today. The following are key takeaways from the conversations:

  • Known humorously as the “fun button”, the ability to brick any device at any time while monitoring its activity in real time is one cybersecurity teams enjoy. One CIO told the story of how their laptops had been handed to a service provider that was supposed to destroy them to stay in compliance with the Health Insurance Portability and Accountability Act (HIPAA), but one had been resold on the black market and ended up in a developing nation. As the hacker attempted to rebuild the machine, the security team watched as each new image was loaded, at which point they would promptly brick the machine. After 19 tries, the hacker gave up and dubbed the image rebuild “brick me”
     
  • IT budgets for 2020 are flat or slightly up, with many CIOs given the goal of reducing asset management operating expenses, making resiliency ideal for better managing device costs. The more effectively assets are managed, the more secure an organisation becomes. That’s another factor motivating enterprises to adopt resiliency as a core part of their endpoint security strategies
     
  • One CIO was adamant they had nine software agents on every endpoint, but Absolute’s Resilience platform found 16, saving the enterprise from potential security gaps. The gold image the enterprise IT team was using had inadvertently captured only a subset of the software agents active across its endpoints. Absolute’s Resilience offering and Persistence technology enabled the CIO to discover gaps in endpoint security the team didn’t know existed
     
  • Endpoints enabled with Resilience have proven their ability to autonomously self-heal, earning the trust of CIOs and CISOs, who are adopting Absolute to alleviate costly network interruptions and potential breaches in the process. 19% of endpoints across a typical IT network require at least one client or patch management repair monthly, according to Absolute’s 2019 Endpoint Security Trends Report. The report also found that increasing security spending on protecting endpoints doesn’t increase an organisation’s safety – and in some instances, reduces it. Having a systematic, designed-in solution to these challenges gives CIOs, CISOs and their teams greater peace of mind and reduces the expensive interruptions and potential breaches that impede their organisations’ growth.


Box Business Plus review: Cloud storage that’s very hard to beat


Dave Mitchell

10 Dec, 2019

Business cloud collaboration at its best, with unlimited storage, tight security and great management features

Price 
£19 per user per month when billed annually

Box is one of the most capable file-sharing services on the market, offering a great range of cloud collaboration features. Those come at a price, mind you: the Business Plus version on review costs £20 per user each month, with a modest 5% discount if you pay yearly.

Still, we can’t complain too much when every user gets a classy selection of file-sharing and syncing services, and unlimited cloud storage. Or, to be precise, there’s no limit on total usage – there’s a 5GB limit on the size of each uploaded item, which is a long way short of Citrix ShareFile’s 100GB cap, but for the average small business that won’t be a problem at all.

Administrators, meanwhile, get a wealth of management tools, with features including user activity tracking and enhanced reporting. If your business has data residency requirements, you can take advantage of the Box Zones add-on, which lets you choose precisely where your files will be stored. Options include AWS in London and Azure in Cardiff, with pricing starting at £4 per month – although you should note that Box Zones is only available to customers with a minimum of ten users.

To set users up on Box, you simply send each one an email invitation from the Box admin portal. Once they have accepted, they can log in to their personal cloud portal, view their cloud folders, create new ones and invite colleagues to share their contents. Box lets you specify exactly what sort of access each collaborator should have, with seven permission levels ranging from view-only to co-owner.
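For teams that want to script this rather than click through the admin portal, Box also exposes collaborations through its public REST API. The sketch below is a minimal Python example using the requests library; the access token and folder ID are placeholders, and error handling is kept to a bare minimum.

```python
# Minimal sketch: invite a collaborator to a Box folder with a specific role
# via Box's collaborations endpoint. Token and folder ID are placeholders.

import requests

ACCESS_TOKEN = "YOUR_DEVELOPER_TOKEN"   # placeholder, not a real token
FOLDER_ID = "123456789"                  # placeholder folder ID

def invite_collaborator(email: str, role: str = "viewer") -> dict:
    """Grant `email` the given role (e.g. 'viewer', 'editor', 'co-owner') on the folder."""
    response = requests.post(
        "https://api.box.com/2.0/collaborations",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={
            "item": {"type": "folder", "id": FOLDER_ID},
            "accessible_by": {"type": "user", "login": email},
            "role": role,
        },
    )
    response.raise_for_status()
    return response.json()

print(invite_collaborator("partner@example.com", role="viewer"))
```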

It’s also possible to securely share files with collaborators outside of the company, by enabling the Share Link option and sending an email. If you want to receive a file from someone else, you can generate a secure link that allows them to upload a file directly to your cloud folder. 

One aspect of Box that’s a bit confusing is the way it presents a choice of two different client apps to download. Box Sync provides standard syncing between a user’s local folder and their cloud repository to ensure all versions are kept up to date, while Box Drive aims to save local hard disk space by keeping all your folders in the cloud – although you can select specific files and folders for offline access. It may not be obvious to a user which one they should install, and the two apps won’t coexist on the same system, so a little support might be necessary to help people pick the right client.

You might also be disappointed to discover that, although you can grant folder access to an unlimited number of collaborators – including those outside of your organisation – each one needs their own Box account to access the share.

On the plus side, Box can do some very clever and useful things. File versioning is included as standard: Starter subscriptions get access to 25 old versions, while Business Plus customers get 50 versions and the Enterprise tier ups the limit to 100. The free Box Tools utility lets you edit documents in the cloud, too – a clever trick, although it’s a bit annoying that Microsoft Edge isn’t currently supported.

Then there’s the free Box Relay Lite automation tool, which lets you create simple workflows that can, for example, move newly uploaded files from one folder to another, or ask another user for approval. On top of all this, the Business Plus subscription supports up to three SaaS integrations with external apps, such as Slack and Salesforce.

The choice of desktop apps may confuse users, but overall Box is an excellent package of cloud file-sharing and collaboration services. The Business Plus plan is expensive, but if you want top-notch security, insight into user activity and the ability to choose where your data resides, it’s very hard to beat.

IDC picks Trend Micro as the top vendor in SDC workload protection

Cybersecurity solutions provider Trend Micro has been named as the number one vendor in Software-Defined Compute (SDC) workload protection in IDC’s latest report.

The company achieved a market share lead of 35.5% in 2018. Steve Quane, Trend Micro’s executive vice president of network defence and hybrid cloud security, said: “We predicted a decade ago that organisations would need multi-layered security to protect their cloud environments and software-defined data centres.”

Frank Dickson, IDC’s security and trust program vice president said: “For years, Trend Micro has steadily built out its SDC workload protection capabilities for virtual, public cloud and container environments, offering tight integration with AWS, Azure and Google Cloud Platform.”

"Although the future has not been written, Trend Micro is the dominant player in this market," he added.

Over this time, Trend Micro has embedded real-time security into running applications and focused on security-as-code and automation to build protection seamlessly into DevOps pipelines, including pre-runtime scanning of container images. This continued with the launch of XDR in August.

XDR plays an important role in correlating data across network, server, email, endpoint and cloud workloads in order to identify malicious activity that might otherwise go unnoticed.

In October, Trend Micro built further on these capabilities with its acquisition of Cloud Conformity, a leader in security posture management. The firm also announced the launch of its cloud security services platform, Trend Micro Cloud One, to address security challenges faced by customers around storage, data centres, IaaS, serverless architectures and containers.


Winning the IT availability war: How to combat costly downtime

Analysts predict global enterprises will spend nearly $2 trillion on digital transformation by 2022. With digital initiatives and technology now ubiquitous in business, one would think that companies would be more than ready for a world where virtually every touchpoint with customers is digital. Unfortunately, the fact that Target, British Airways, Facebook and Twitter all experienced major IT outages in 2019 suggests there is still work left to do to keep services, and an optimal customer experience, up and running smoothly.

To explore precisely what enterprises are doing to detect, mitigate and hopefully prevent outages, LogicMonitor commissioned an IT Outage Impact Study. The independent study surveyed 300 IT decision makers at organisations in the US, Canada, UK, Australia and New Zealand to discover whether IT leadership is concerned about “keeping the lights on” for their businesses. The research revealed a stark reality at odds with today’s omnipresent digitisation: IT teams are concerned about their ability to avoid costly outages, mitigate downtime and reliably provide the 24/7 availability that customers and partners demand.

Are outages inevitable?

IT teams worldwide agree on two things: performance and availability are top priorities for their department. These two mission-critical priorities in fact beat out security and cost, which is surprising considering how much attention security gets in today’s data-breach-heavy environment.

Yet IT’s intense focus on keeping the network up and running at peak performance has not prevented downtime. In fact, 96% of survey respondents report experiencing at least one IT outage in the past three years, which is bad news if performance and availability are considered make or break areas for modern organisations.

Common causes of downtime include network failure, surges in usage, human error, software malfunction and infrastructure failure. What is surprising, however, is that enterprises report that more than half of the downtime they experience could have been prevented.

Worryingly, IT decision makers are pessimistic when it comes to their ability to influence all-important availability. More than half (53%) of the 300 IT professionals surveyed say they expect to experience a brownout or outage so severe that the national media will cover the story, and the same percentage said someone in their organisation will lose his or her job as a result of a severe outage.

This raises the question: if even the most skilled technical experts in IT can’t prevent outages, who (or what) can?

The true costs of downtime

Negative media coverage and career impacts aside, downtime comes with additional costs for organisations. Survey respondents identify lost revenue, lost productivity and compliance-related costs as other factors associated with IT outages and brownouts (periods of dramatically reduced or slowed service). And these costs add up quickly. Organisations with frequent outages and brownouts experience:

  • 16 times higher costs associated with mitigating downtime than organisations with few or zero outages
  • Nearly twice as many team members needed to troubleshoot problems related to downtime
  • Twice as long spent troubleshooting problems related to downtime

How to win the availability war

If more than half of outages and brownouts are avoidable, according to 300 global IT experts, then every organisation should be taking proactive steps to prevent these disruptive events. The best-performing organisations are already working to prevent costly downtime. Consider taking the following actions to do the same:

  • Embrace comprehensive monitoring. In today’s digital world, many companies operate in a hybrid IT environment with infrastructure both on-premises and in the cloud. Trying to spot trends using siloed monitoring tools for each platform is inefficient and prone to error.

    Identify and implement software that comprehensively monitors infrastructures, allowing the team to view IT systems through a single pane of glass. Consider extensibility and scalability during the selection process as well to ensure the platform integrates with all technologies – present and future
     

  • Use a monitoring solution that provides early visibility into trends that could signify trouble ahead (see the sketch after this list). Data forecasting can proactively identify future failures and ultimately prevent an outage before it impacts the business. Teams should also build a high level of redundancy into their monitoring systems as an additional guard against downtime, and focus on eliminating single points of failure that might cause a system to go down
     
  • Don’t wait to create an IT outage response plan. Hopefully it will never be needed, but it’s critical to have a defined process for handling outages, from escalation and remediation to communication and root cause analysis. Set out a plan for who to involve (and when) to ensure IT can respond quickly if an outage does occur
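
To make the forecasting point above concrete, here is a minimal sketch of trend-based early warning: fit a line to recent samples of a monitored metric and estimate when it will cross an alert threshold. The metric, sample values and threshold are illustrative assumptions, not output from any particular monitoring product.

```python
# Illustrative trend forecast for a monitored metric: fit a line to recent
# disk-usage samples and estimate when the 90% alert threshold will be crossed.

import numpy as np

hours = np.arange(12)                                     # last 12 hourly samples
disk_usage_pct = np.array([61, 62, 62, 64, 65, 67, 68, 70, 71, 73, 74, 76])

slope, intercept = np.polyfit(hours, disk_usage_pct, 1)   # linear trend
threshold = 90.0

if slope > 0:
    hours_until_breach = (threshold - disk_usage_pct[-1]) / slope
    print(f"Growing ~{slope:.1f}%/hour; ~{hours_until_breach:.0f}h until {threshold}% threshold")
else:
    print("No upward trend detected")
```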

While LogicMonitor’s 2019 Outage Impact Study revealed that downtime is surprisingly common, it also showed that top-performing organisations are able to banish downtime from their day-to-day operations through advance planning and comprehensive monitoring software. In the end, it is possible to win the IT availability war with the right combination of skilled team members and powerful SaaS monitoring technology. But every minute of downtime is pricey – so there’s no time to waste.

Read more: Most outages can potentially be avoided, argues IT – yet the business side is pessimistic


View from the airport: AWS Re:Invent 2019


Bobby Hellard

6 Dec, 2019

Gather round kids, because AWS Re:Invent is over for another year and this is now a safe space to talk about all those subjects that were off-limits during the cloud conference.

Namely multi-cloud and project JEDI.

Having safely left Las Vegas via McCarran International Airport, I feel I can finally discuss these hot topics. I can say what I want about Microsoft winning the Pentagon’s $10 billion cloud migration project (AWS is appealing that) and also anything about the term for using services from more than one cloud provider.

The multi-cloud discourse had no place at Re:Invent. It was definitely not mentioned by any AWS executives. But it’s so hot right now that it almost burns Amazon because it refuses to acknowledge it.

The very word is banned if recent reports are true. Back in August, the cloud giant released a “co-branding” guide for partners that said it would not approve the term “multi-cloud” or “any other language that implies designing or supporting more than one cloud provider”.

Unfortunately for AWS, that word was brought up in various briefings I attended throughout the event, such as at the end of a Sophos security roundtable, where journalists were invited to ask questions and clearly took pleasure in posing difficult ones like “what’s your beef with multi-cloud?” and “how much did it suck to lose out to Microsoft?”

Ouch.

Many companies have gone ‘all-in’ on AWS, but many more have merely added single services or integrated certain products to offer their customers – regardless of their cloud provider(s). This is the case for Sophos, which had to confess to offering cloud security for multi-cloud environments.

It was a strange end to an otherwise brilliant session. Sophos is a great company with a broad range of security expertise, but with AWS trying to quash talk of multi-cloud, partners are forced to tackle the difficult question.

“Our products do – sorry to my AWS brothers – support multiple clouds…” Sophos director Andy Miller said while the room collectively cringed. “We work in AWS and Azure and Google.”

The Seattle Seahawks, which has integrated AWS machine learning algorithms into various parts of its organisation, found itself in the same position. From player performance to business processes, the technology will run throughout the NFL franchise over the next five years. But as its tech lead, Chip Suttles, told me, it also uses Office 365 and Azure.

Similarly, when I asked Monzo’s platform team lead, Chris Evans, why his company uses AWS, he told me it was partly because Amazon had the technology they needed at the time. If it were starting now, Monzo could just as easily use Azure, Google Cloud and so on… he even used that dirty word ‘multi-cloud’. Evans and co are more than happy with what AWS gives them, and it’s a large part of the fintech company’s success, but nothing suggests that wouldn’t be the same with different providers.

AWS may be the biggest provider of cloud computing, but its massive lead over its rivals is down to it being the first to capitalise on cloud while it was still an emerging technology. The big trend now is using multiple providers, and AWS is strangely taking a legacy-like approach. It wants you and your company to use it exclusively, but no matter how many great tools and functions you provide, you can’t please everyone. In the world of multi-cloud, AWS is fighting a losing battle.

On Thursday, Andy Jassy was pencilled in for a Q&A session. Journalists were asked to submit questions beforehand, presumably so they could be vetted, and the word on the street (or strip) was that JEDI questions had been sent in. Unfortunately, we will never know what these were, as Jassy didn’t take any questions and instead conducted an interview with Roger Goodell, the commissioner of the NFL.

One assumes that after his three-hour keynote and various appearances throughout the week, Jassy saw a list of multi-cloud and JEDI questions and thought, “Nah, I’ll pass”.

How the AWS and Verizon re:Invent partnership shows the way forward for telcos and cloud providers

At the main AWS re:Invent keynote, the biggest headline is usually saved until the end. Last year, it was the announcement of AWS Outposts, with VMware CEO Pat Gelsinger taking to the stage to join AWS chief Andy Jassy.

This time around it was Verizon, whose CEO Hans Vestberg joined Jassy to announce a partnership to deliver cloud and edge computing souped up with 5G connectivity. The move is also a showcase for AWS Wavelength, which is a major edge play: embedding compute and storage services on the edge of operators’ 5G networks, enabling the delivery of ultra-low latency applications.

Vestberg pointed to the ‘eight currencies’ Verizon believes in for 5G: a message first put out at CES at the start of this year, and one which goes far beyond speed and throughput, the primary capabilities of 4G. “The most important [aspect] is when you can slice this and give them to individuals and applications; you have a transformative technology that’s going to transform consumer behaviour, transform businesses, transform society,” he said.

For the ‘builders’ – developers who form such a key part of the re:Invent audience base – this promise of 5G, encapsulating lower latency, mobility and connectivity, is vital for the applications they are creating. Yet the journey for the data being transmitted is arduous: from the device to the cell tower, to the aggregation site, to the internet, and on to the cloud provider, before going all the way back.

As Jassy noted, the most exciting applications to be ushered in, such as autonomous industrial equipment, or applications for smart cities, can’t wait that long. “If you want to have the types of applications that have that last mile connectivity, but actually do something meaningful, those applications need a certain amount of compute and a certain amount of storage,” he said. “What [developers] really want is AWS to be embedded somehow in these 5G edge locations.”

Hence this AWS and Verizon collaboration – which Jassy noted had been in the works for around 18 months. “In placing AWS compute and storage services at the edge of Verizon’s 5G Ultra Wideband network with AWS Wavelength, AWS and Verizon bring processing power and storage physically closer to 5G mobile users and wireless devices, and enable developers to build applications that can deliver enhanced user experiences like near real-time analytics for instant decision making, immersive game streaming, and automated robotic systems in manufacturing facilities,” the companies noted in the press materials.

The move also evidently strengthens the relationship between Verizon and AWS, for whom the lines of business are now clearly demarcated.

As industry watchers will recall, in 2011, when cloud infrastructure was still nascent, Verizon acquired early pioneer Terremark. The company said at the time the move would ‘clear the way for Verizon to lead the rapidly evolving global managed IT infrastructure and cloud services market.’ The telco’s efforts to become a cloud provider in its own right fell flat, and the business was eventually sold off to IBM. As Synergy Research’s John Dinsdale put it to this reporter back in 2016, ‘the speed of cloud market development and the aggressiveness of the leading cloud providers largely left [telcos] behind.’

The thinking has since changed. Roughly 18 months ago – around the time the two companies started work on the edge and 5G partnership – Verizon moved to AWS as its preferred public cloud provider, migrating more than 1,000 of its business-critical applications and backend systems.

Today, the much-derided ‘telco cloud’ is about partnerships and making the most of both sides’ assets. AT&T announced deals with IBM and Microsoft on successive days in July, in a move which raised eyebrows in the industry – and according to Nick McQuire, VP enterprise at CCS Insight, the idea is finally beginning to bear fruit.

“The announcements, above all, are about developers,” said McQuire. “For 5G to meet the enormous hype and expectation surrounding it this year, operators are now desperate to woo developers to the platform to create 5G applications which at the moment are very thin on the ground.

“AWS has the cloud, edge computing and IoT assets – some of the best in the market – and it also has developers, so it’s no surprise it’s pushing into this area and partnering with leading telcos.”

Read more: AWS re:Invent 2019 keynote: ML and quantum moves amid modernisation and transformation message

Picture credit: Amazon Web Services/Screenshot


Google founders Larry Page and Sergey Brin step down from Alphabet management


Connor Jones

4 Dec, 2019

Google founders Larry Page and Sergey Brin are stepping aside from their leadership roles of Google’s parent company Alphabet, bringing to an end an unprecedented reign at the helm of one of the most influential companies in history.

The iconic Silicon Valley duo will remain board members, shareholders and overall “proud parents” of the companies they founded and have led since starting the search engine giant from a California garage in 1998.

Page and Brin said they wanted to simplify the management structure of the tech giant, adding there was no need to have two CEOs and a president of the same company.

Google’s Sundar Pichai, who joined the company in 2004 and was appointed CEO in 2015, will assume the CEO role of both Google and Alphabet.

“I will continue to be very focused on Google and the deep work we’re doing to push the boundaries of computing and build a more helpful Google for everyone,” said Pichai. “At the same time, I’m excited about Alphabet and its long term focus on tackling big challenges through technology.”

“While it has been a tremendous privilege to be deeply involved in the day-to-day management of the company for so long, we believe it’s time to assume the role of proud parents – offering advice and love, but not daily nagging,” said Page and Brin in a joint letter.

Alphabet, the multinational conglomerate holding company that houses Google and a number of other ventures, including DeepMind, was created in 2015 and replaced Google as the publicly traded company.

Shortly after, Pichai replaced Page as CEO of Google, following months of rumours that he would be the next man for the job. He had previously held positions at Google including product chief and head of Android before assuming the top job at the company.

Among other notable successes, Pichai’s reign at Google has seen the company invest heavily in green energy. Google Cloud said in 2018 it runs entirely on green energy and that the company has invested billions in building a variety of green datacentre facilities across the world, including locations in Finland and Denmark.

Google has also been embroiled in controversy throughout 2019. Most recently, the EU announced it plans to launch an investigation into the company’s data collection practices. The UK’s competition watchdog also announced it will be investigating Google’s £2 billion acquisition of data analysis firm Looker.

How smart cybersecurity solutions are increasingly powered by AI and ML

Now that data breaches are more common, ‘digital trust’ is a top priority for the C-level leaders who build and maintain the IT infrastructure for digital transformation. For most organisations, losing digital trust can have a significant impact on brand reputation and the bottom line.

Artificial intelligence (AI) and machine learning (ML) have been adopted for their automation benefits, from predictive outcomes to advanced data analytics. AI-based cybersecurity can augment the capabilities of IT staff and help organisations deflect cyber threats, according to the latest market study by Frost & Sullivan.

AI and ML market development

In particular, AI and ML are now widely used in the cybersecurity industry, by both the hacking and security communities, making the security landscape even more sophisticated. Many organisations, regardless of size, are now facing greater challenges in day-to-day IT security operations.

Many of them indicate that the cost of threat management, particularly threat detection and response, is too high. Meanwhile, AI-driven attacks have increased in number and frequency, requiring security professionals to have more advanced, smart and automated technologies to combat these automated attacks.

With digital transformation a priority for a majority of enterprises today, there is a proliferation of connected devices, offering customers convenience, efficient services and better experiences. However, this connectivity also increases the potential risk of cyberattacks for enterprises and users.

Cybercriminals are also using more sophisticated methods to attack organisations. These include polymorphic malware, AI and other automated techniques. Enterprises are struggling with a lack of trained staff and cybersecurity expertise to counter the more sophisticated attacks.

These increasing challenges in security operations suggest the need for a smarter, more adaptable, automated and predictive security strategy. Security companies are increasingly developing their own AI and ML algorithms to strengthen their competitiveness, empower their security products and augment the capabilities of existing IT and cybersecurity staff in enterprises.

AI and ML are being incorporated into all stages of cybersecurity, enabling enterprises to adopt a smarter, more proactive and automated approach to cyber defence, spanning threat prevention and protection, threat detection and hunting, and threat response, through to predictive security strategies.
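As a concrete, if simplified, illustration of the detection side of that approach, the sketch below trains an unsupervised anomaly detector on “normal” event features and flags outliers for analyst review. It assumes scikit-learn and uses synthetic placeholder data; it is not any vendor’s product.

```python
# Minimal sketch of ML-assisted threat detection: an unsupervised model flags
# anomalous events (e.g. unusual login behaviour) for analysts to triage.
# Feature values are synthetic placeholders, not real telemetry.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per event: [logins_per_hour, bytes_uploaded_MB, distinct_hosts_touched]
normal_events = rng.normal(loc=[5, 20, 3], scale=[2, 5, 1], size=(500, 3))
suspicious = np.array([[60, 900, 40], [45, 650, 25]])    # bursty, exfil-like events

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)

scores = model.predict(np.vstack([normal_events[:3], suspicious]))
print(scores)   # 1 = looks normal, -1 = flagged as anomalous for review
```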

While technology startups have been the most proactive in introducing multiple AI-enabled security offerings into the market, larger IT vendors have also incorporated AI and ML into their existing enterprise security solutions.

Outlook for AI and ML applications growth

"With cybersecurity solutions powered by AI capabilities, vendors can better support enterprises and their cybersecurity teams with less time and manpower investment and higher efficiency to identify the cybersecurity gaps," said Amy Lin, industry analyst at Frost & Sullivan.

Key AI and ML market trends for cybersecurity include:

  • Embracing and incorporating AI-enabled capabilities into existing solutions to intensify competitive advantage
  • Supporting a more holistic cybersecurity framework, from detection to response and on to prediction
  • Assisting cybersecurity teams in operations by lowering false-positive rates and enhancing their ability to react
