All posts by Dale Walker

IT Pro 20/20: Small businesses will lead the new normal


Dale Walker

3 Aug, 2020

Welcome to the seventh issue of IT Pro 20/20, our digital magazine that brings all of the previous month’s most important tech issues into clear view.

Each month, we shine a spotlight on the content we feel every IT professional should be aware of, condensed into a version that can be read on the go, at a time that suits you.

This time we’re turning our attention to a part of the tech industry that has shown remarkable resilience during a time of upheaval. The small business and startup sectors have long been associated with innovation, agility, fresh talent and, let’s face it, quirkiness. The smaller you are, the easier it is to shake things up, whether that’s in the way employees work or the markets you target.

Yet the coronavirus pandemic has forced many larger businesses to go back to basics. Restructuring is now common, and so too is mass remote working. Some business processes no longer work as well as they did, while others have ground to a halt entirely. In many ways, larger firms have had to behave like startups in order to survive, prioritising agility above all else and abandoning longer-tail projects in favour of those that provide immediate value.

It’s this idea of smaller businesses driving the new normal that’s the theme for most of our articles this month, although it’s most prominent in our lead feature, which takes a look at the trends that are likely to define how we work over the next few years. Elsewhere, we’ve got some industry tips on how to begin building your SMB tech stack, a somewhat frank discussion on the hidden dangers of bringing staff back from furlough, and a look at whether it’s wise to launch a new business right now.

DOWNLOAD THE JULY ISSUE OF IT PRO 20/20 HERE

The next IT Pro 20/20 will be available on Friday 28 August. Previous issues can be found here.

We hope you enjoy reading this month’s issue. If you would like to receive each issue in your inbox as they release, you can subscribe to our mailing list here.

Google launches Confidential VMs for sensitive data processing


Dale Walker

14 Jul, 2020

Confidential VMs will be the first product in Google Cloud’s new confidential computing portfolio, the company has revealed, allowing companies to process sensitive data while keeping it encrypted in memory.

The announcement aims to capitalise on growing interest in confidential computing, a field that promises to revolutionise cloud computing by, in effect, keeping data encrypted at all times.

Until now, like many cloud providers, Google offered encryption for data at rest and in transit, meaning data had to be decrypted before it could be processed. With Confidential VMs, Google customers can keep data encrypted even while it is being processed inside a virtual machine.

Google’s new feature is an evolution of its Shielded VMs, a tool launched in 2018 that companies could deploy to strip out most of the potentially vulnerable startup processes that trigger when attempting to create a new environment. This is in addition to a few layers of extra protection against external attacks, and monitoring systems that check for unexpected changes to data.

These added layers of security were required given that data is normally decrypted in order to be processed inside the VM – something that not only creates added risk from external attacks, but also forces companies to deploy strict access controls to ensure only the right employees handle the data.

The Confidential VMs feature, available in beta today, attempts to solve these issues by allowing customers to encrypt their data in memory, meaning encryption is maintained while the data is being used, indexed, queried, or trained on.

This promises to have profound implications for those industries that process highly sensitive or heavily regulated data, such as those in finance and health, or government agencies. Companies in these sectors, which are usually forced to keep most of their data processing in their own private networks, now have a public cloud option, Google claims.

“These companies want to adopt the latest cloud technologies, but strict requirements for data privacy or compliance are often barriers,” said Sunil Potti, general manager and VP of Security at Google Cloud. “Confidential VMs… will help us better serve customers in these industries, so they can securely take advantage of the innovation of the cloud while also simplifying security operations.”

Providing confidential computing is largely a question of hardware, something many vendors have grappled with over the past few years. In this case, Google has turned to AMD and its second-generation EPYC CPUs, which support a feature called Secure Encrypted Virtualisation (SEV) that allows a VM to run with encrypted memory using a unique, non-exportable key.

“Our deep partnership with Google Cloud on its Confidential VMs solution helps ensure that customers can secure their data and achieve performance when adopting this transformational technology,” said Dan McNamara, senior vice president and general manager of AMD’s Server Business Unit.

“Confidential VMs offer high performance for the most demanding computational tasks all while keeping VM memory encrypted with a dedicated per-VM instance key that is generated and managed by our hardware.”

The company has also confirmed that any customers already running workloads in a VM on Google Cloud Platform will be able to shift these over to a Confidential VM using a checkbox.
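For new instances, the same protection can be requested at creation time. What follows is a minimal, illustrative sketch using the google-cloud-compute Python client – the project ID, zone, and instance name are placeholders, and the exact field names are worth checking against Google’s current client library documentation rather than taken as gospel.

```python
# Illustrative sketch only: requests a Confidential VM by setting the
# confidential-compute option on an otherwise ordinary instance request.
# PROJECT_ID and ZONE are placeholders; assumes the google-cloud-compute
# client library is installed and authenticated.
from google.cloud import compute_v1

PROJECT_ID = "my-project"   # placeholder
ZONE = "us-central1-a"      # placeholder

boot_disk = compute_v1.AttachedDisk(
    boot=True,
    auto_delete=True,
    initialize_params=compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-11"
    ),
)

instance = compute_v1.Instance(
    name="confidential-demo",
    # Confidential VMs rely on AMD EPYC hardware, so an N2D machine type is used.
    machine_type=f"zones/{ZONE}/machineTypes/n2d-standard-2",
    disks=[boot_disk],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    # The one confidential-specific setting: enable SEV memory encryption.
    confidential_instance_config=compute_v1.ConfidentialInstanceConfig(
        enable_confidential_compute=True
    ),
    # SEV instances cannot live-migrate, so they must terminate on host maintenance.
    scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
)

operation = compute_v1.InstancesClient().insert(
    project=PROJECT_ID, zone=ZONE, instance_resource=instance
)
operation.result()  # block until the instance is created
```

Everything apart from the confidential flag and the maintenance policy is a standard instance request, which is rather the point of the feature: encryption in use is switched on rather than engineered in.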

Google has also said that VM memory encryption will not interfere with workloads, promising that the performance of Confidential VMs will be on par with that of non-confidential VMs.

BigQuery Omni pulls Google, AWS, and Azure analytics into one UI


Dale Walker

14 Jul, 2020

Google has launched an early version of BigQuery Omni, a new analytics tool that lets users access and view data across Google Cloud and Amazon Web Services without leaving the BigQuery UI.

Powered by Anthos, Google’s vendor-neutral app development platform, the tool lets users query data silos across multiple platforms with SQL and the standard BigQuery APIs, without having to manage the underlying infrastructure.

Although the initial alpha launch of the service is restricted to Google Cloud and AWS, Google has also confirmed that Microsoft Azure will eventually be supported.

The tool has been designed to target those customers who rely on multiple cloud service providers and are forced to juggle and consolidate a number of analytics tools in order to get a view of their data.

This is made possible by the decoupling of storage and compute, according to the firm. The compute side has always been regarded as ‘stateless’ but, until now, BigQuery required data to be stored in Google Cloud – this restriction has now been scrapped, allowing customers to store their data in any supported public cloud.

This single view means that customers can use BigQuery Omni to run SQL queries on clusters in whichever region the data resides. For example, it will be possible to query Google Analytics 360 Ads data stored in Google Cloud while simultaneously querying logs data from any apps stored in AWS S3. This can then be used to build a dashboard to get a complete view of audience behaviour alongside ad spend.

This means customers can avoid any costs associated with moving or copying data between cloud platforms in order to get a full view, Google claims.
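From the client’s side, a cross-cloud query is indistinguishable from any other BigQuery job. The sketch below is a hypothetical example using the standard google-cloud-bigquery Python client; the project, dataset, and table names are invented for illustration, and the mapping of an S3 bucket into a queryable dataset is part of the Omni configuration rather than the query itself.

```python
# Hypothetical sketch: querying a dataset that Omni maps to data held in
# AWS S3 uses the same client and SQL as a native BigQuery table.
# The google-cloud-bigquery client is standard; all names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

sql = """
    SELECT campaign_id, COUNT(*) AS clicks
    FROM `my-project.aws_logs.app_events`  -- hypothetical S3-backed table
    GROUP BY campaign_id
    ORDER BY clicks DESC
    LIMIT 10
"""

# The query runs on BigQuery clusters in the region where the data lives;
# only the results come back to the client.
for row in client.query(sql).result():
    print(row.campaign_id, row.clicks)
```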

“85% of respondents to 451 Research’s Voice of the Enterprise Data & Analytics, Data Platforms 1H20 survey agreed that the ability to run the same database on multiple cloud/data centre environments is an important consideration when selecting a new data platform,” said Matt Aslett, research director, Data, AI and Analytics, 451 Research.

“As hybrid and multi-cloud adoption has become the norm, enterprises are increasingly looking for data products that provide a consistent experience and lower complexity of using multiple clouds, while enabling the ongoing use of existing infrastructure investments,” he added.

The new system is built on Anthos, an app development platform launched last year for customers that wanted a single programming model allowing data to be moved between their various cloud providers without charge or code changes.

The underlying infrastructure, including any communication between cloud providers, is run entirely by Google and surfaced through the familiar BigQuery UI, so there will be little operational change from the customer’s perspective, the company claims.

BigQuery Omni’s engine will run on Anthos clusters inside the BigQuery managed service, sourcing data from the various silos across a customer’s public cloud services, provided the customer has authorised access. In order to run queries, data is temporarily moved from the cloud provider’s storage to the BigQuery cluster running on Anthos.

For now, BigQuery Omni is only available in private alpha, so interested customers will need to apply to Google for access. Support is currently limited to AWS S3, with Azure to follow.

There is currently no general release date available.

Can Microsoft’s new approach to AI erase the memory of Tay?


Dale Walker

28 May, 2020

On 23 March 2016, Microsoft released Tay to the world, a chatbot built through a collaboration between the company’s research and Bing divisions. Unlike other chatbots, this was designed as a proof of concept to show that artificial intelligence could interact with users effectively across very human communication media – in this case, Twitter.

Unfortunately, Microsoft hadn’t quite understood the dangers of releasing a tool that learns from user interaction into the Twittersphere, a place not exactly known for constructive criticism and polite political discourse. Exploiting its “repeat after me” function – it’s still unclear whether this was built in or a learned function – users were able to expose the @TayandYou account to inflammatory statements about some of the internet’s most popular talking points of the time, whether that was Gamergate, feminism, or simply entries from the Urban Dictionary.

Within days, this self-described “AI with zero chill” had been transformed into an accomplished purveyor of hate. The chatbot was supposed to demonstrate how AI can learn from environmental cues and communicate nuanced ideas in a language familiar to those on Twitter. What we got was a bot that echoed some of the more hateful sides of the internet, either through the parroting of genuine messages or users attempting to derail the project.

Microsoft said at the time that it was “deeply sorry for the unintended offensive and hurtful tweets from Tay”, and that it would “look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values”.

Tay was meant to show just how far Microsoft’s research teams had outpaced those of its rivals in the AI arms race. Instead, it was a catastrophic failure, proving that technology capable of learning and making its own decisions based on its environment needed to have guardrails, not just in development but also in its application.

It’s clear the experiment served as a wake-up call for a company that, until that point, had little in the way of internal processes to govern how such technology was developed. Some three months after it pulled the plug on Tay, CEO Satya Nadella penned an article in Slate that tackled the issue head-on, arguing that AI and humans can exist in harmony if strict development safeguards exist – something now regarded as the genesis of Microsoft’s current internal policies.

Heading into the Aether

By 2017, Microsoft had started work on turning Nadella’s ideals into something a little more tangible, deciding that the answer was to involve as many voices as possible in development. To this end, Microsoft created the Aether Committee, an internal ethics board made up of representatives from all the various departments involved in the development and delivery of AI products. Today it includes people from its engineering teams, sales teams, and research departments, as well as a representative from Microsoft’s Office of the Chief Economist.

“One way of accounting for the broad use of the technology across a range of different sectors is to bring a range of perspectives to the actual work,” says Natasha Crampton, head of Microsoft’s Office of Responsible AI. “As much as possible, you want to try and have a diverse group of people actually building the technology. That then helps you try and avoid the situation where your training data is a mismatch for the real-world conditions for which the technology is used.”

Aether, which stands for ‘AI, Ethics and Effects in Engineering and Research’, was the first step in creating greater accountability and communicating what can and can’t be done at a development level among the many teams operating across the globe. The committee was tasked with ensuring that any work on AI being done at the company followed a set of new principles now championed by Microsoft – fairness, reliability and safety, privacy and security, and inclusiveness – all underpinned by the ideas of transparency and accountability.

While Aether proved useful for communicating these new ideas, the committee functioned solely as a non-binding advisory body, promoting fairly vague principles without the powers to create or enforce policy. Its limitations in effecting change soon became clear, and in 2018 Microsoft created the Office of Responsible AI, a separate department responsible for enforcing new development policies and maintaining oversight of all AI development, with the help of the committee.

“You can think of the Office of Responsible AI as the governance complement to the advisory function of the Aether Committee work,” explains Crampton, who has headed up the unit since July 2019. “So together, we have this company-wide approach where we use Aether as this thought leadership body to unravel some of these really thorny questions, and my office sort of defines the company-wide approach and roles.”

Crampton and her team have been tasked with coming up with a set of internal policies to help govern the development of AI at Microsoft – in other words, safeguards to prevent a repeat of Tay. A large part of that is case management, which ensures that new and existing projects are able to effectively translate academic AI papers into something fit for the real world, as well as improving communication between development, sales and public-facing teams – something that was clearly ineffective during the rollout of Tay in 2016.

One way it does this is by assessing projects for ‘sensitive use cases’ – anything that’s likely to infringe on the fundamental human rights of the end user, lead to the denial of services (such as a loan application), involve the public sector, or harm the end user physically, psychologically, or materially. Through this, the Aether Committee and the Office of Responsible AI can triage use cases as Microsoft works with customers, and pass recommendations up to the senior leadership team.

At least, that’s the idea. Most of Microsoft’s AI policies are still in a pilot phase.

“We’ve been piloting the standard for coming up to five months now and we’re starting to get some interesting learnings,” explains Crampton.

“We’re imminently about to bring on more groups as part of our phase 2 – the rollout is accompanied by research and a learning process as we go, because the whole point of the pilot is to make sure that we are refining the standard and improving it as we learn more. Based on what we see now, my expectation is that by the end of the next financial year, so in 2021, we will have it rolled out across the company.”

Owning up to mistakes

Microsoft’s work in this area is as much about righting the wrongs of the past as it is about ensuring AI is built for everyone. Tay was certainly a blemish on Microsoft’s development history, but the incident also shattered any trust the public had placed in AI-based software and served as validation for those who feel the technology is overhyped, or perhaps even dangerous.

“We’ve tried to approach this in a humble way, recognising that we don’t have all the answers yet … these are really tough challenges that we’re confronting,” says Crampton.

“The near term role of it is to really be a champion for developing new practices … to uphold our principles. I really want to be forward leaning on that and really try and do what we did for privacy or security and embed responsible AI considerations into our thinking when we’re developing technologies.”

Crampton believes her soon-to-be company-wide AI policies are a reflection of a maturing Microsoft, a company trying to be more responsible with how it develops software, and how it speaks to its customer base.

“When customers come to us and ask us to help them with solutions, we think about how we can help the customer with a product in a way that upholds our principles,” says Crampton. “And … sometimes we are conscious of how a system may be expanded in the future.

“A number of our AI technologies are building blocks, especially when we think about some of our cognitive services. By design, we make available the building block and the customer adds various components. If we use facial recognition as an example, we provide the [facial] matching technology, but the customer adds their cameras, their database, their people and their processes. Recognising that there are all of these different actors, different people contributing different things to create an overall system and interaction between components is really important, and so is the accountability between different players.”

“We really try hard to communicate the limitations of our technology,” adds Crampton. “In the past, I think Microsoft has really just focused heavily on the capability. We think it’s important to talk about the limitations as well. And so we are trying to communicate those to our customers so that they are educated about the downstream issues that could arise.”

Bottlerocket is Amazon’s new purpose-built OS for running containers


Dale Walker

11 Mar, 2020

Amazon Web Services has unveiled a free open source operating system called Bottlerocket, designed specifically to run containers on bare metal or virtual machines.

Bottlerocket is being pitched as a purpose-built operating system with a single-step update process that makes updates far easier to automate, while cutting out many of the unnecessary elements found in general-purpose operating systems.

Its biggest selling point is its dual-partition setup, with one partition active and one inactive. When an update is issued, it is written to the inactive partition, and the system then swaps the roles of the two partitions to complete the update.

The OS also uses image-based updates, meaning an update can be rolled back in its entirety if necessary, helping to reduce downtime and minimise the risk of a failed update. This is in contrast to most general-purpose operating systems, which take a package-by-package approach.
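To make the mechanics concrete, here is a conceptual sketch of the active/inactive scheme in Python. It is not Bottlerocket’s actual implementation – the real OS is its own open source codebase – but it shows why whole-image A/B updates make rollback trivial.

```python
# Conceptual sketch of an A/B (active/inactive) image-based update scheme.
# Not Bottlerocket's real code; purely an illustration of the idea.

class ABUpdater:
    def __init__(self):
        self.partitions = {"A": "v1.0", "B": "v1.0"}  # image on each partition
        self.active = "A"  # the partition currently booted

    @property
    def inactive(self):
        return "B" if self.active == "A" else "A"

    def apply_update(self, new_image):
        # Write the whole image to the inactive partition; the running
        # system is untouched, so a failed write costs nothing.
        self.partitions[self.inactive] = new_image

    def commit(self):
        # Swap the boot target so the next boot uses the new image.
        self.active = self.inactive

    def rollback(self):
        # The previous image is still intact on the other partition,
        # so rolling back is just swapping the pointer again.
        self.active = self.inactive


updater = ABUpdater()
updater.apply_update("v1.1")
updater.commit()    # next boot runs v1.1
updater.rollback()  # something broke: boot straight back into v1.0
```

A package-by-package update, by contrast, mutates the running system in place, which is why a partial failure can leave it in a state that is hard to reverse.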

As part of its slimmed-down design, Bottlerocket takes a different approach to authentication and secure login from that normally found on general-purpose systems. There’s no SSH server to support secure logins, although users can access admin controls through a separate container.

The new OS also supports all the container tools you might expect, including Docker images and anything conforming to the Open Container Initiative standard. The system is built with some third-party components, including the Linux kernel, a container runtime, and Kubernetes.

The OS is currently in a preview state, and is hosted on GitHub alongside a host of tools and documentation to support its use. Among these is a Bottlerocket Charter, which claims that the OS is open and “not a Kubernetes distro, nor an Amazon distro”, adding that such a platform can only be built with the support of a wider community.

Despite its open nature, the OS is optimised to work best with AWS tools out of the box, specifically Amazon’s Elastic Kubernetes Service (EKS).

The OS is currently available in a free public preview as an Amazon Machine Image (AMI) for Elastic Compute Cloud (EC2). Once released to general availability, Bottlerocket will come with three years of support, incorporated into AWS support plans at no extra cost.

Avast shutters Jumpshot unit in wake of data privacy concerns


Dale Walker

30 Jan, 2020

Avast has announced it will be closing down its Jumpshot data analytics unit, only a day after launching an internal investigation into data-sharing practices following user complaints.

On Tuesday, the company was accused of collecting customer online behaviour data and passing it on to its Jumpshot unit, where it was then sold to third parties, according to an investigation by PC Mag and Motherboard.

Avast has denied the allegations contained in the report and, according to company CEO Ondrej Vlcek, both Avast and Jumpshot have “acted fully within legal bounds” and have “committed themselves to 100% GDPR compliance”.

However, announcing the decision on Thursday, Vlcek said that a review of Avast’s practices revealed that the “data collection business is not in line with our privacy priorities as a company”.

“Protecting people is Avast’s top priority and must be embedded in everything we do in our business and in our products. Anything to the contrary is unacceptable,” said Vlcek.

“I firmly believe it will help Avast focus on and unlock its full potential to deliver on its promise of security and privacy. And I especially thank our users, whose recent feedback accelerated our decision to take quick action,” he added. “As CEO of Avast, I feel personally responsible and I would like to apologize to all concerned.”

Vlcek also said the decision would impact “hundreds” of Jumpshot employees and customers, but that it was “the right thing to do”.

Avast intends to continue paying Jumpshot vendors and suppliers in full during the wind-down process, and will begin notifying customers shortly. It also added that the closure would not affect its 2019 fiscal results.

Having once led the anti-virus market, Avast is now considered the fifth-largest provider of security software behind Symantec, McAfee, ESET and Bitdefender.

Memes and Viking funerals: The internet reacts to the death of Windows 7


Dale Walker

14 Jan, 2020

It’s now official: Microsoft’s extended support for Windows 7 has come to an end, marking the retirement of an operating system that enjoyed enormous success and popularity over its ten-year lifetime.

Organisations across the globe should now be working to ensure Windows 7 is removed from their environments, although, given the nature of software migration, it’s very likely that the operating system will maintain a large market share for many months to come.

The moment is bittersweet. Although Windows 10 has been the most popular desktop operating system globally since December 2017, the decline of Windows 7 has been remarkably slow for software that first hit the market in 2009. An intuitive design, stable performance and a much-loved taskbar made it a breath of fresh air after the widely criticised Windows Vista, and helped maintain its popularity through the disastrous launch of Windows 8.

The death of Windows 7 seems to have hit some harder than others, but fortunately, the measured, stable, and not at all unhinged IT industry has decided to give the software the send-off it deserves.

The outpouring of love on social media ranges from touching to the truly bizarre.

However, the best tribute so far comes courtesy of employees at Databarracks, who decided to express their reverence for the software by holding a traditional Viking funeral, complete with a little boat.

Touching stuff.

As of December 2019, Windows 7 held a market share of just under 27% globally, although this varies considerably depending on the demographic you look at. In the regions of Africa and India, for example, almost 40% of desktops are running Windows 7, while areas such as the US and the UK lead the way with 18% and 16% respectively.

The numbers stand in stark contrast to those of Windows Vista and the much younger Windows 8 – a sign that the ghost of Windows 7 is almost certainly going to linger for many years to come.

Those that have been heavily reliant on Windows 7 should already have started migrating away from the OS. For those looking for tips on how to move operations over, we’ve put together this handy guide – although perhaps it’s time to start looking at Microsoft alternatives.

HPE hybrid IT revenue falls 11%, triggering share slump


Dale Walker

26 Nov, 2019

HPE missed analyst estimates in its fourth-quarter earnings report on Monday, largely driven by slowing demand for core products and wider economic uncertainty.

Shares slumped 4% in after-hours trading, prompted by an 11% revenue drop in the company’s Hybrid IT division, by far HPE’s largest unit, comprising its server, storage and data centre products. Revenue for the unit came in at $5.67 billion, just short of the $5.74 billion expected by analysts.

The company’s Intelligent Edge unit, a field HPE has aggressively targeted, also saw revenue slump 6.5%, falling from $773 million to $723 million year over year.

In September, HPE CEO Antonio Neri warned that turbulent economic factors, including trade tensions, were creating “uneven demand” and would rattle customer confidence for some time to come.

Commenting on this week’s earnings, he said: “We had a very successful fiscal year, marked by strong and consistent performance. Through our disciplined execution, we improved profitability across the company and significantly exceeded our original non-GAAP earnings and cash flow outlook, while sharpening our focus, transforming our culture and delivering differentiated innovation to our customers as they accelerate their digital transformations.

“I am confident in our ability to drive sustainable, profitable growth as we continue to shift our portfolio to higher-value, software-defined solutions and execute our pivot to offering everything as a service by 2022,” Neri continued. “Our strategy to deliver an edge-to-cloud platform-as-a-service is unmatched in the industry.”

There were some positive signs for the company. Quarterly profit was slightly higher than analyst estimates, earning 49 cents per share compared to the 46 cents per share that was anticipated, as reported by Reuters.

HPE also gave a positive outlook for the year ahead, estimating $1.01 to $1.17 per share in profits and $1.78 to $1.94 per share in adjusted profits.

The earnings report brings to an end a year marked by a series of strategic acquisitions that will likely serve to further diversify HPE’s earnings in 2020. In May the company acquired supercomputing giant Cray, a deal that came as something of a surprise but will certainly lead to more developments in its high-performance computing division.

HPE also acquired the intellectual property of big data analytics specialist MapR in August, a deal that included a swathe of AI technology and expertise that HPE said would be put towards its Intelligent Data Platform.

Microsoft Teams surpasses 20 million daily users


Dale Walker

20 Nov, 2019

Microsoft Teams has surpassed 20 million daily active users, an increase of 7 million since July, the company has revealed.

The collaboration suite, available as part of Microsoft’s premium Office 365 package or as a free version, overtook the popular Slack platform as the most used business chat app earlier this year.

In fact, Teams now sits behind only Microsoft’s other chat platform, Skype for Business, in terms of popularity. However, the company has made it clear in the past that Teams will eventually supersede Skype as its primary tool.

Although Slack had grown to 12 million daily users as of October, that comparatively modest growth has been dwarfed by the success of Microsoft Teams. Slack shares fell as much as 10% following Microsoft’s announcement on Tuesday, and have fallen 18% since the company’s stock market debut in June.

Meanwhile, Workplace by Facebook reported in October that it had surpassed 3 million subscribers, up one million since February.

Microsoft announced a batch of updates for Teams at its annual Ignite conference earlier this month, including a simpler sign-in mechanism for frontline workers, new integrations with Yammer and Skype, an expansion of the Advanced Threat Protection suite to cover messages within Teams, and the general availability of private channels.

Microsoft’s Office 365 subscription gives premium access to the Teams app alongside its traditional productivity software. Commercial Cloud, the division Teams falls under, is one of the highest-performing areas of the company, making up over one-third of its overall revenue, according to third-quarter figures.

Slack hasn’t spent the year idle, having overhauled the underlying technology of its desktop app in July to cut RAM usage by 50% and improve loading times by 33%. The company also added new features in August to help admins manage permissions, and in October launched Workflow Builder, which allows users to automate routine functions with custom workflows.

View from the airport: Microsoft Ignite 2019


Dale Walker

12 Nov, 2019

Microsoft’s enormous flagship Ignite conference is now over for another year, but the roadshow has only just started – the company is set to visit a total of 30 different countries between now and mid-April 2020.

It was perhaps the largest conference I have personally attended, and it featured a massive number of updates across the company’s software and cloud portfolio. Major highlights included a new hybrid cloud platform dubbed Azure Arc, a new strategy and look for the Edge browser, and a raft of updates for the Teams suite.

What struck me most, however, was Microsoft’s pragmatic attitude to both its investments and long-term goals. Whether the result of shrewd thinking or simply a realisation that the market isn’t where the company wants it to be, Microsoft is looking to compromise.

Perhaps the most prominent example of this was in its security content. Ignite’s security and compliance keynote – the first of its kind for Microsoft – was packed with updates across Microsoft’s portfolio. Despite all the announcements, however, Microsoft wanted attendees to leave with three simple rules: turn on MFA, stay up to date, and use Secure Score (the company’s built-in compliance tool).

In fact, speaking with executives after the show, I learned that these rules had been baked into every security presentation across the week – some 21 sessions. Every presenter was told to include one last slide reminding users to turn on MFA.

It’s a self-confessed compromise from a company that has spent years trying to pivot its customers towards passwordless security. In the words of Alex Simons, corporate vice president for Microsoft’s Identity division, this is “the new Microsoft, it’s a little bit more empathetic”.

Passwordless remains an incredibly important part of Microsoft’s security strategy, but it’s clear that this goal is still some distance away – at least three years, according to Simons. While the company has moved over 100 million of its users onto things like biometric security, it still has around 700 million to go. Whether that’s down to the complexity of legacy hardware or simply a reluctance to change, customers aren’t quite ready to completely overhaul their approach to security.

It’s clear then why Microsoft is now making its MFA tool free for every user – it’s a reluctant nod to customers, saying: “If you’re going to use passwords, at least do it properly”.

The same can be said for Microsoft’s ambition for its Edge browser. The industry was eager to see what the company would do with its modernised platform, having just moved to the Chromium source code. It was never going to be enough to say that Edge now has parity with Google Chrome over things like compatibility and performance. Microsoft knew that, and from the way Edge is now being sold it seems the company is no longer trying to compete with rivals that are too far ahead in the race.

Instead, the company has taken quite a bold, and potentially innovative, step in trying to fuse the capabilities of a web browser with the data of a company’s intranet. Edge is now pitched as a business companion tool – one that could, quite frankly, make the browser relevant again.

Microsoft has had a difficult year, not least with a Windows platform plagued by bugs and shoddy launches. Those experiences, backlash from customers, and its growing irrelevance in certain markets seem to have humbled the company – and so it’s now time to start meeting customers where they are, rather than telling them where they need to be.