All posts by Keumars Afifi-Sabet

Microsoft launches dedicated host service alongside licensing changes

Keumars Afifi-Sabet

6 Aug, 2019

Microsoft is previewing an ‘Azure Dedicated Host’ service for enterprises looking to run their Linux and Windows virtual machines (VMs) on their own physical servers, alongside a set of changes to licensing costs.

The dedicated host service will target enterprise customers which prioritise the security benefits of physical hosting over shared cloud hosting, as well as the isolation of their sensitive information.

These servers will not be shared with any other customer, and businesses which opt for one will retain full control over how services run on the machine.

The Azure Dedicated Host is available in two iterations. The first type is based on the 2.3GHz Intel Xeon E5-2673 v4 processor and has a maximum of 64 virtual CPUs available. It can be chosen in a 256GiB or 448GiB RAM configuration, priced at $4.055 per hour and $4.492 per hour respectively.

The second version, meanwhile, is based on the Intel Xeon Platinum 8168 processor with 72 virtual CPUs available and is priced at $4.039 per hour in a 144GiB configuration.
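At those rates, the cost of running a host around the clock adds up quickly. A quick back-of-the-envelope comparison, using the hourly prices quoted above and Azure's conventional 730-hour billing month (actual bills, and any reservation discounts, will differ):

```python
# Rough monthly cost of each Azure Dedicated Host SKU at the hourly
# rates quoted in the article, assuming the host runs 24/7. Azure's
# billing conventions and reservation pricing may produce different
# figures in practice.
HOURS_PER_MONTH = 730  # Azure's conventional average month length

skus = {
    "Type 1 (E5-2673 v4, 256 GiB)": 4.055,
    "Type 1 (E5-2673 v4, 448 GiB)": 4.492,
    "Type 2 (Platinum 8168, 144 GiB)": 4.039,
}

def monthly_cost(hourly_rate: float, hours: int = HOURS_PER_MONTH) -> float:
    """Return the cost of running a host continuously for one month."""
    return round(hourly_rate * hours, 2)

for name, rate in skus.items():
    print(f"{name}: ${monthly_cost(rate):,.2f}/month")
```

Even the cheapest configuration works out at close to $3,000 a month when left running continuously, which is why the pay-per-hour model and host groups matter to customers sizing these deployments.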

Moreover, several hosts can be grouped together into larger host groups in a particular region, allowing businesses to build clusters of physical servers.

The dedicated hosts will be subject to automatic maintenance by default, although administrators can defer host maintenance operations and apply them at a time of their choosing within a 35-day window, retaining full control over when server maintenance takes place.

This has been announced in conjunction with a set of key changes to the pricing of software licenses, which sees a separation between on-premise outsourcing services and cloud services. Customers will need an additional ‘software assurance’ to run Microsoft software on public cloud services from 1 October this year.

Businesses using rival cloud providers, like Amazon Web Services (AWS) or Google Cloud Platform (GCP) should, therefore, expect the cost of running Microsoft software to increase.

The introduction of Azure Dedicated Host, on the other hand, has also seen Microsoft roll out an Azure Hybrid Benefit licensing option, which allows customers to use software without the need for a ‘software assurance’.

Both Google and Amazon have launched similar dedicated physical services in recent years, with Azure the latest major cloud provider to follow suit.

Google, for instance, launched sole-tenant nodes in its Compute Engine last June, which allowed businesses to run instances on their own dedicated architecture as opposed to sharing hosting with other customers. These are similar to AWS’ EC2 dedicated hosts.

Elsewhere, Microsoft has increased the bug bounty rewards as part of a big security push that has also seen the launch of the Azure Security Lab.

The highest bounty will be doubled to $40,000, while those with access to the lab can attempt a set of scenario-based challenges with a maximum award of $300,000.

The new lab itself is a set of dedicated cloud hosts that offers security researchers a secure, isolated space in which to test attacks against its Infrastructure as a Service (IaaS) offerings.

Organisations are invited to apply to join the new security-focused community by requesting a Windows or Linux VM, with successful applicants given access to campaigns for targeted scenarios and added incentives.

G Suite now offers enhanced security for high-risk users

Keumars Afifi-Sabet

1 Aug, 2019

Google has extended its advanced security programme to enterprise customers using its G Suite, Google Cloud Platform (GCP) and Cloud Identity products, giving IT administrators the ability to set stronger internal controls.

Organisations can enrol senior executives and employees at high risk of cyber attacks into Google’s Advanced Protection Program (APP), which will bring their level of security up to the standards of Google’s own employees.

Within the next few days, IT administrators can select the members of their organisation who they assess as needing stronger protections, and Google will automatically apply a set of stricter cyber security policies to their activities.

There are several changes to how those enrolled in the programme can access Google’s products, including the enforcement of FIDO keys, automatic blocking of non-trusted third-party apps, and enhanced scanning of incoming emails.

These changes will come alongside making Titan security keys, Google’s own FIDO key, available for purchase in Japan, Canada, France and the UK, as well as using machine learning to improve security alerts for IT administrators.

The use of such FIDO keys will be mandatory for those enrolled in the advanced security programme, meaning access to critical Google apps may be disrupted for users without them. Third-party apps will also be automatically blocked for APP users unless explicitly whitelisted.

The use of machine learning, meanwhile, will be directed towards analysing activity within the G Suite to detect unusual behaviour. In practical terms, IT administrators signed up to the service will receive a stream of anomalous activity alerts on a security dashboard.

This raft of added protections will bolster security across organisations signed up to Google’s enterprise products, both by demanding more of high-risk employees and by adding more robust provisions.

However, the majority of these practices can be seen as essential cyber security hygiene regardless, raising the question of why they haven’t been offered to customers until now. This is especially pertinent given that Google employees have adhered to the APP regime since it launched two years ago.

Google, at the time of launch, restricted the APP to those at elevated risk of attack and who are also “willing to trade off a bit of convenience for more protection”.

There is, however, nothing stopping IT administrators from enrolling their entire organisation in the programme should they deem it the best defence against cyber threats.

VMware strikes public cloud partnership with Google Cloud

Keumars Afifi-Sabet

30 Jul, 2019

Google Cloud Platform (GCP) will support VMware workloads as part of a partnership between the two companies to generate additional options for customers looking to run a hybrid cloud strategy.

Until now, Google’s cloud arm had been the only major public cloud provider not to support VMware. From later this year, however, enterprise customers will be able to run VMware workloads on the platform.

The Google Cloud VMware Solution, as it’s dubbed, will use VMware’s software-defined data centre tools, including vSphere compute, vSAN storage and NSX networking, running on GCP and managed through CloudSimple.

The partnership has not yet been formally announced, a spokesperson told Cloud Pro, but is being widely reported by a host of US titles including Bloomberg.

VMware customers will benefit from the flexibility to move workloads from their own servers to the public cloud while keeping their existing VMware tools, policies and practices, according to Google Cloud CEO Thomas Kurian.

The firm’s customers will also be given access to Google’s artificial intelligence (AI), machine learning and analytics tools, as well as being able to deploy their apps to regions where Google has data centres. Moreover, these enterprises will also be able to run networking tools through GCP, beyond virtualisation software.

The partnership between GCP and VMware is similar in nature to other agreements struck between the virtualisation firm and rival public cloud providers, including Amazon Web Services (AWS).

These two companies, for instance, struck an agreement in late 2017 in which businesses could migrate their processes and apps to the public cloud. This was extended to Europe in March last year.

In April, meanwhile, Microsoft introduced native VMware support for its Azure cloud platform. The announcement meant customers were able to run their workloads in native environments, also through tools like vSphere, vSAN, vCenter and NSX, with workloads ported to Azure with relative ease.

VMware’s latest partnership with GCP points towards its strengthening position in the public cloud arena, as it aims to offer greater flexibility to its enterprise customers.

Sistema Plastics uses Epicor to iron out inventory woes

Keumars Afifi-Sabet

25 Jul, 2019

For manufacturing companies specialising in fast-moving consumer goods (FMCG), the need for reliable enterprise resource planning (ERP) software is paramount. Firms look to these systems to handle many aspects of day-to-day operations, from staying on top of inventory to managing the sales process.

Sistema Plastics runs a single manufacturing site in Auckland, New Zealand, but ships worldwide through a series of third-party retailers, including Amazon and Asda. Indeed, if you peer into any kitchen cabinet you’ll likely find something manufactured by the firm, from microwavable containers to lunchboxes to reusable water bottles.

The company has grown rapidly over the last decade – a single manufacturing run is now in the region of 30,000-40,000 units. As a result of this rapid growth, inventory management had started to spiral out of control, to the point that stock was being stored “almost anywhere” with no real way of tracking it, Sistema’s CTO, Greg Heeley, tells Cloud Pro.

Four years ago, the company brought in Epicor’s flagship ERP platform to handle a major transformation in its manufacturing processes.

The system can only do what you tell it to

The challenge Sistema faced at the time, Heeley says, was finding a product that could support the way inventory was configured, as well as finding and managing stock. This highly pressing issue was born of Sistema’s rapid growth, coupled with severely restricted physical floor space. Problems deepened when Sistema outgrew its first plant and began opening up several smaller sites. Being spread across multiple locations meant workers would regularly move parts needed in one plant from another, and vice versa.

To compound these issues, employees neglected to feed accurate information into Epicor ERP, such as where the stock was kept and whether it had been moved, making it even more difficult to use the software.

“Putting stock somewhere and telling the system is one thing, but if you then move it and don’t tell the system – that’s something else,” Heeley explains. In light of this, the firm devised procedures around how the stock was recorded and got people trained up to follow the new system.

“There are some areas like that we struggled with; more personnel than system-driven,” he adds. “The system can’t do what you don’t tell it – and we sometimes didn’t tell it what to do.”

It was only in 2016 that a single site large enough to handle the scale of manufacturing operations was found and things finally began to click with the software. This step forward involved putting in automatic inventory systems, among other measures, to ensure all inventory problems were consigned to the past.

The skills shortage bites hard

Looking ahead, Sistema is shifting its focus to grow as a company now that the software underpinning its operations has been tamed. But the problems the firm now faces are unique to a company that both manufactures plastic goods and is based in “the middle of technically nowhere”.

For a company that keeps a close eye on its carbon footprint, being based in New Zealand has proved a massive hindrance. Sistema’s environmental ranking scores must take into account shipping materials in and out, as well as the products manufactured. This is all offset against its external energy consumption, which is proving a battle. Plastic itself, meanwhile, has become stigmatised due to the effects of discarded materials on the environment.

Sistema’s need to hire more high-skilled staff, however, is chief among the firm’s concerns, and this isn’t helped by the company’s location either. New Zealand’s economy is based predominantly on agriculture and tourism, not manufacturing or engineering. Attracting people to work in “the middle of technically nowhere”, therefore, is something the firm will have to look at addressing in the coming years.

There’s scope for installing human resources (HR) modules in ERP software to assist in talent acquisition. More often than not, this involves automating processes like payroll and benefits to give hiring managers more time to focus on finding talented workers. ERP can also make a difference to a firm’s sustainability goals, with greater visibility over stock allowing Sistema to gain full control over the products ordered, consumed and re-used.

Looking ahead, Sistema is seeking to further buy into document management, and to automate a host of processes by implementing Epicor’s DocStar enterprise content management platform. While the firm has adopted Electronic Data Interchange (EDI), a digital exchange of business documents in a standardised format, many of its customers haven’t. This means the manufacturer often receives orders that are 50 to 60 times longer than they should be, which must then be manually entered into its systems. Epicor’s software, Heeley claims, can help the company get around this.
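For illustration, the kind of order normalisation that EDI (and document automation tools such as DocStar) takes off workers' hands can be sketched as follows. The order format and product names here are invented for the example; they are not Sistema's or Epicor's actual formats:

```python
# Toy sketch of order normalisation: a sprawling free-text order with
# repeated lines is collapsed into one total per SKU, the step a human
# would otherwise perform by re-keying every line into the ERP system.
# The format and SKU names below are made up for this example.
from collections import defaultdict

raw_order = """\
LUNCHBOX-1L 200
BOTTLE-700ML 150
LUNCHBOX-1L 300
BOTTLE-700ML 50
LUNCHBOX-1L 100
"""

def consolidate(order_text: str) -> dict[str, int]:
    """Collapse repeated order lines into a single total per SKU."""
    totals: dict[str, int] = defaultdict(int)
    for line in order_text.strip().splitlines():
        sku, qty = line.rsplit(maxsplit=1)  # SKU code, then a quantity
        totals[sku] += int(qty)
    return dict(totals)

print(consolidate(raw_order))
```

Five manually typed lines become two clean entries – multiply that by a 50- to 60-fold inflation in order length and the appeal of automating the step is obvious.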

Automating accounts payable (AP) document scanning, as well as document record handling, are also on the horizon. Meanwhile, Sistema has ambitions to digitise standard operating procedures for the shop floor and to index visually-compelling videos that teach employees how to perform tasks. Heeley also touts robotics, predictive analytics, and edge computing as areas he’d be keen to explore, but what he does next will largely depend on what makes the most sense for the company.

“It does become a challenge of we haven’t got infinite resources, and there aren’t infinite resources in the country either,” he continues.

“So we definitely have to be selective about the projects that we take on, and [we need to] know which one’s going to produce the best results in the short term. We will get to all of them eventually, but it’s just about prioritising.”

Microsoft’s $1bn OpenAI partnership underpinned with closer Azure ties

Keumars Afifi-Sabet

23 Jul, 2019

Microsoft has invested $1 billion in artificial intelligence (AI) research organisation OpenAI, as part of a partnership that will harness Azure cloud technology to develop AI supercomputing.

The not-for-profit organisation OpenAI, co-founded by Tesla CEO Elon Musk, is basing its partnership with Microsoft on three key areas, largely focused on how the firm’s Azure cloud platform can integrate with ongoing work.

The two organisations will jointly build “Azure AI supercomputing technologies”, while OpenAI will port its existing services to run on Microsoft’s cloud platform. Moreover, Microsoft will become OpenAI’s preferred partner for commercialising new AI technologies as and when they emerge.

The initiative will also focus on creating artificial general intelligence (AGI). This differs from conventional AI in its broad and multi-functional nature, as opposed to being developed for specific applications.

Microsoft argues generalisation, and “deep mastery of multiple AI technologies”, will help address some of the world’s most pressing issues. These range from global challenges like climate change to delivering more personalised services in areas like healthcare and education.

With its capacity to understand or learn any intellectual task that a human can, AGI is also a popular subject in science-fiction writing, as writers and futurists extrapolate this to machines experiencing consciousness.

“The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” said OpenAI CEO Sam Altman.

“Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI. We believe it’s crucial that AGI is deployed safely and securely and that its economic benefits are widely distributed. We are excited about how deeply Microsoft shares this vision.”

OpenAI was founded in December 2015 as an organisation dedicated to researching next-gen AI technologies and their applications. Its mission centres on developing AI that serves as an extension of individual humans, not a replacement.

It’s a similar AI vision to Microsoft’s, with the industry giant committing to developing AI grounded in an ethical framework. Its foray into automation and machine learning has largely come in the form of voice recognition and work in medical contexts.

It’s a step-change from the culture that led to Microsoft launching, and later shutting down, the infamous Tay bot in 2016. This Twitter-based chatbot was initially designed to emulate a teenage girl but ended up parroting racial slurs and conspiracy theories after it was hijacked by trolls.

Microsoft wraps up multibillion-dollar deal with AT&T for Azure migration

Keumars Afifi-Sabet

18 Jul, 2019

US telecoms giant AT&T Communications has signed what’s thought to be a multi-billion-dollar cloud partnership agreement with Microsoft to aid the firm’s ‘public cloud-first’ strategy.

Microsoft will embark on a non-exclusive multi-year alliance with AT&T Communications to build on progress made in cloud computing, artificial intelligence (AI) and 5G networking. Azure will also be the preferred provider for non-networking applications, the cloud company announced.

The industry giant will also tap into AT&T’s geographically-dispersed 5G network to design and build edge-computing capabilities, as well as Internet of Things (IoT) devices.

The deal will also see AT&T employees migrate to Microsoft’s Office 365 suite of apps and services, with the company also planning to move most of its non-networking workloads to the public cloud by 2024.

This deal has come just a day after AT&T announced a similar partnership with IBM and its recently-acquired subsidiary Red Hat. On the surface, this second multi-billion-dollar agreement is similar to Microsoft’s deal, but centres on its business side.

“Today’s agreement is another major step forward in delivering flexibility to AT&T Business so it can provide IBM and its customers with innovative services at a faster pace than ever before,” said IBM’s senior vice president for cloud and cognitive software Arvind Krishna.

“We are proud to collaborate with AT&T Business, provide the scale and performance of our global footprint of cloud data centers, and deliver a common environment on which they can build once and deploy in any one of the appropriate footprints to be faster and more agile.”

The deal will allow AT&T to host business applications on IBM Cloud, and the networking firm will also use Red Hat’s open-source platform to manage workloads and applications. AT&T Business will have greater access to the Red Hat Enterprise Linux and OpenShift platforms as part of the deal.

IBM will be the main developer and provider for AT&T Business, the telecoms giant’s enterprise arm. Meanwhile, IBM will help to manage the IT infrastructure of the wider organisation both on- and off-premises, as well as across public, private and hybrid cloud.

Just as with AT&T and Microsoft, the two companies will also collaborate on edge computing platforms to allow enterprise clients to take advantage of 5G networking speeds as well as IoT devices. The wider aim is to reduce latency and dramatically improve bandwidth for data transfers between multiple clouds and edge devices.

Europe’s Galileo satellite system crippled by days-long outage

Keumars Afifi-Sabet

15 Jul, 2019

The European Union’s satellite navigation infrastructure, used by businesses and government agencies across the continent, has been offline for more than 100 hours following a network-wide outage.

The European Global Navigation Satellite Systems Agency (GSA), which runs the £8 billion Galileo programme, confirmed this weekend that the satellite system had been struck by a “technical incident related to its ground infrastructure”.

As a result, all 24 satellites in orbit are non-operational.

“Experts are working to restore the situation as soon as possible,” the GSA said in a statement.

“An Anomaly Review Board has been immediately set up to analyse the exact root cause and to implement recovery actions.”

Galileo is used by government agencies, academics and tech companies for a wide range of applications, from smartphone navigation to search-and-rescue missions.

The programme offers several services including a free Open Service for positioning, navigation and timing, and an encrypted Public Regulated Service (PRS) for government-authorised users like customs officers and the police.

Its business applications span multiple sectors: fishing vessels, for example, use it to provide data to fishery authorities, while tractors use it for guided navigation. According to the GSA, 7.5 billion Galileo-friendly apps are expected by the end of 2019.

However, the satellite system, developed so European organisations aren’t entirely reliant on GPS, has been offline since 1am UTC on Thursday 11 July.

The GSA said at the time that users may experience “service degradation” on all Galileo satellites. A further update issued two days later said users would experience a total service outage until further notice. Neither update offered a concrete explanation for the mysterious outage, which persisted at the time of writing.

The root cause, however, may lie with a ground station based in Italy, known as the Precise Timing Facility (PTF), according to Inside GNSS. This facility generates the Galileo System Time, which is beamed up to the satellites to enable user localisation. It is also often used as an accurate time reference.
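The PTF's role is easier to appreciate with a rough calculation: satellite navigation turns signal travel time into distance, so any error in Galileo System Time becomes a ranging error at the speed of light. A back-of-the-envelope sketch (not Galileo's actual error budget):

```python
# Why precise timing is the heart of a GNSS: receivers multiply signal
# travel time by the speed of light to get a range to each satellite,
# so a clock error converts directly into a distance error.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def range_error_m(clock_error_s: float) -> float:
    """Ranging error (metres) caused by a given clock error (seconds)."""
    return SPEED_OF_LIGHT * clock_error_s

# One microsecond of timing error shifts the measured range by ~300 m;
# nanosecond-level synchronisation is needed for metre-level positioning.
print(f"{range_error_m(1e-6):.0f} m per microsecond of clock error")
print(f"{range_error_m(1e-9):.2f} m per nanosecond of clock error")
```

That nanosecond-scale sensitivity is why a fault in a single ground timing facility can render an entire constellation unusable, even with every satellite healthy in orbit.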

In June, GPS services were also hit by a similar outage which affected a host of Middle Eastern countries. According to Israeli media, that outage was linked to state-sponsored attacks from Russia.

The UK government and British businesses have played an integral role in helping to develop Galileo since its pilot launch in 2016. The service is expected to be fully operational by 2020, with 30 satellites in total.

But the UK’s withdrawal from the EU has threatened to fully cut off access by British agencies and companies, should no deal be agreed.

The government has already set aside £92 million to develop an independent satellite system, although it’s unclear how long this would take to implement.

Microsoft Teams now ‘bigger than Slack’

Keumars Afifi-Sabet

12 Jul, 2019

The number of individuals using Microsoft’s flagship workplace hub has soared in the last few months to leave its key competitor – Slack – in the dust, figures released by the firm show.

Two years after Microsoft launched its Teams platform, which is part of the firm’s Office 365 suite of apps and services, the company is boasting the digital workspace has more than 13 million active daily users.

This is 30% more than Slack’s 10 million daily user count, according to the latest figures that company has disclosed. Active weekly users for Microsoft’s service, meanwhile, stand at 19 million.

The feat is all the more striking considering Microsoft’s platform was still lagging behind its rival as recently as April this year, according to a chart the company produced.

Microsoft Teams owes its recent success to auto-inclusion with the 365 suite of apps and services

The pace of growth has been sharp but will not come as a surprise considering the number of organisations that are reliant on Microsoft Office 365, of which Teams is an integral component.

Distributing Microsoft Teams to its pre-existing customer base has likely been a huge factor in its growth since it was first launched two years ago.

The company says Teams now boasts a user base of 500,000 organisations. Slack, meanwhile, has more than 85,000 paying organisations, according to its latest figures, but the total number of businesses signed up to the workplace hub has not been disclosed.

Microsoft has also used this opportunity to introduce a raft of additional features for the workplace app, including priority notifications and read receipts for private chats.

A new announcements feature allows team members to flag important news in a channel, while cross-channel posting saves time on copy-and-pasting the same message to different audiences.

IT administrators are also getting help deploying the Teams client and managing policies for every member of an organisation. Pre-defined policies, in areas like messaging and meetings, can be applied to employees based on the needs of their individual roles.

Slack itself recently announced a number of updates to its functionality and user interface. These span shared channels with customers and vendors, as well as added integration between email and calendars.

Commenting on Slack’s IPO a few weeks ago, vice president and principal analyst with Forrester, Michael Facemire, said Slack’s success will be determined by how well it can penetrate enterprises.

“Can Slack prove to the enterprise buyer that it is more than a chat app, more than a collaboration tool, but instead an enterprise collaboration platform? If Slack can do this, expanding out of a tech-savvy user base and into all parts of the business become much easier, as it starts to do work for everyone.

“The next challenge is selling its service into the enterprise. Many companies have multiple instances of free Slack in use. But this group of users face their first hurdle when these free accounts need enterprise governance (single sign-on, message retention rules, etc).

“Will Slack be able to prove the value of both paying the fee and doing the work to integrate with existing systems? This question will also signal how quickly it can succeed in an enterprise market.”

Cloud Pro approached Slack for comment and an update on its active daily user count, but hadn’t received a response at the time of publication.

Microsoft spruces up Outlook in a bid to catch up with major G Suite upgrades

Keumars Afifi-Sabet

5 Jul, 2019

Outlook is set to get a range of new features this month, including a dark mode, a redesigned email experience and improvements to calendar synchronisation, as part of a major overhaul of the platform.

Users of Microsoft’s Office 365 email service will see a number of improvements to the way messages can be read, categorised and organised, the firm announced. Changes to calendar and meeting functionality, and a series of significant aesthetic tweaks, make up the full complement of changes.

The new Outlook will feature categories that make it easier to tag, find or organise messages, with users able to add multiple categories to a single message.

A favouriting mechanism, in which contacts, groups or entire categories can be highlighted, also offers easier access to certain aspects of any user’s inbox. As with Gmail, meanwhile, users can also draft multiple emails on-the-go using ‘tabs’ that rest on the lower portion of the user interface (UI).

There’s also a snooze function for emails that need to be dealt with later. Snoozing a message removes it temporarily from the inbox, with it reappearing as an unread message at the top of the pile once the snooze period expires.

Among the most eye-catching features, however, is a new dark mode, which lets users personalise the UI for night-time or low-light browsing. The mode is configured in the settings menu, and the lights can be turned back on temporarily when reading or composing a specific email.

The firm’s main rival in this space, Google, has spent the past year or so updating its G Suite productivity suite, including a number of significant changes to Gmail, notably the use of artificial intelligence (AI) for predictive responses and inbox management.

Meanwhile, tweaks are also being made to Outlook’s calendar functionality, including the ability to search across multiple calendars, as well as filters to adjust the parameters when hunting for a person or event.

It’s also now possible to quickly create events and book rooms for meetings from the calendar surface on Outlook, while the ‘week view’ dedicates a larger screen area to today and tomorrow.

The changes will be implemented from late July, with ‘targeted release’ customers no longer seeing an opt-in toggle that switches between the old Outlook and the beta version of the latest iteration.

‘Software glitch’ to blame for global Cloudflare outage

Keumars Afifi-Sabet

3 Jul, 2019

Cloudflare has resolved an issue that caused websites serviced by the networking and internet security firm to show 502 ‘Bad Gateway’ errors en masse for half an hour yesterday.

From 2:42pm BST, the networking giant suffered a massive spike in CPU utilisation across its network, which Cloudflare blamed on a bad software deployment. This affected websites hosted in territories across the entire world.

Ironically, even Downdetector was knocked offline during the outage

Once this faulty deployment was rolled back, its CTO John Graham-Cumming explained, service was returned to normal operation and all domains using Cloudflare returned to normal traffic levels.

“This was not an attack (as some have speculated) and we are incredibly sorry that this incident occurred,” Graham-Cumming said.

“Internal teams are meeting as I write performing a full post-mortem to understand how this occurred and how we prevent this from ever occurring again.”

The incident affected several massive industries, including cryptocurrency markets, with users unable to properly access exchanges like CoinMarketCap and Coinbase.

Cloudflare issued an update last night suggesting the global outage was caused by the deployment of just one misconfigured rule within the Cloudflare Web Application Firewall (WAF) during a routine deployment. The company had aimed to improve the blocking of inline JavaScript used in cyber attacks.

One of the rules it deployed caused CPU to spike to 100% on its machines worldwide, and subsequently led to the 502 errors seen on sites across the world. Web traffic dropped by 82% at the worst point during the outage.

“We were seeing an unprecedented CPU exhaustion event, which was novel for us as we had not experienced global CPU exhaustion before,” Graham-Cumming continued.

“We make software deployments constantly across the network and have automated systems to run test suites and a procedure for deploying progressively to prevent incidents.

“Unfortunately, these WAF rules were deployed globally in one go and caused today’s outage.”
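Cloudflare's fuller post-mortem attributed the CPU exhaustion to a regular expression in the new rule that backtracked catastrophically. As a toy illustration of why such patterns are dangerous: a nested quantifier like `(a+)+`, followed by a character that can never match, forces a backtracking regex engine to try every way of splitting the input between the inner and outer repetitions before giving up, and the number of such splits doubles with every extra character:

```python
# Toy illustration (not Cloudflare's actual rule): against n characters
# that match the inner group, a pattern like (a+)+X must rule out every
# ordered way of splitting those n characters into non-empty runs -- the
# "compositions" of n -- before it can report failure.

def compositions(n: int) -> int:
    """Count ordered splits of n characters into non-empty runs, i.e. the
    candidate matches a backtracking engine tries for (a+)+ before failing."""
    ways = [1] + [0] * n          # ways[0] = 1: one way to split nothing
    for i in range(1, n + 1):
        ways[i] = sum(ways[:i])   # the first run may consume 1..i characters
    return ways[n]

for n in (10, 20, 30):
    print(f"{n} characters -> {compositions(n):,} splits to rule out")
```

The count works out to 2^(n-1), so a modestly sized request payload is enough to pin a core at 100% – and a rule deployed globally in one go pins cores on every machine at once.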

At 3:02pm BST the company realised what was going on and issued a global kill on the WAF Managed Rulesets which dropped CPU back to normal levels and restored traffic, before fixing the issue and re-enabling the Rulesets approximately an hour later.

Many on social media speculated during the outage that the 502 Bad Gateway errors may have been the result of a distributed denial-of-service (DDoS) attack. However, these suggestions were fairly quickly quashed by the firm, which confirmed they were untrue.