Raising the bar on enterprise computing


Cloud Pro

23 May, 2019

With its move to the Xeon Scalable architecture, Intel began a revolution in its enterprise processors that went beyond the normal performance and energy efficiency selling points. Combined with new storage, connectivity and memory technologies, Xeon Scalable was a step change. The new 2nd Gen Intel® Xeon® Scalable processors don’t just continue that work but double down on it, with a raft of improvements and upgrades – some revolutionary – that add up to a significant shift in performance for today’s most crucial workloads. Whether you’re looking to push forward with ambitious modernisation strategies or embrace new technologies around AI, 2nd Gen Intel® Xeon® Scalable processors should be part of your plan. The revised architecture doesn’t just give you a speed boost, but opens up a whole new wave of capabilities.

More cores and higher speeds meet AI acceleration

It’s not that this latest processor doesn’t bring conventional performance improvements. Across the line, from the entry-level Bronze processors to the new, high-end Platinum processors, there are increases in frequency, while models from the Silver family upwards get more cores at roughly the same price point, not to mention more L3 cache. For instance, the new Xeon Silver 4214 has 12 cores running at a base frequency of 2.2GHz with a Turbo frequency of 3.2GHz, plus 16.5MB of L3 cache. That’s a big step up from the old Xeon Silver 4114’s 10 cores running at 2.2GHz and 2.5GHz with just 13.75MB of cache, and it’s a gain that’s replicated as you move up through the line.

At the high end, the improvements stand out even further. The new Platinum 9200 family has processors with up to 56 cores running 112 threads, with a base frequency of 2.6GHz and a Turbo frequency of 3.8GHz. By any yardstick that’s an incredible amount of power. What’s more, these processors have 77MB of L3 cache and support for up to 2,933MHz DDR4 RAM – the fastest ever natively supported by a Xeon processor. Put up to 112 cores to work in a two-socket configuration and you’re looking at unbelievable levels of performance from a single system.

From heavy duty virtualisation scenarios to cutting-edge, high-performance applications, these CPUs are designed to run the most demanding workloads. Intel claims a 33% performance improvement over previous-generation Xeon Scalable processors, or an up to 3.5x improvement over the Xeon E5 processors of five years ago.

Yet Intel’s enhancements run much deeper. The Xeon Scalable architecture introduced the AVX-512 instruction set, with double-width registers and double the number of registers compared with the previous AVX2 instruction set, dramatically accelerating high-performance workloads including AI, cryptography and data protection. The 2nd generation Intel® Xeon® Scalable processor takes that a stage further with AVX-512 Deep Learning Boost (DL Boost) and its Vector Neural Network Instructions (VNNI); new instructions designed specifically to enhance AI performance both in the data centre and at the edge.

Deep learning has two major aspects – training and inference. The algorithm is first trained, learning to assign different ‘weights’ to aspects of the data being input; it is then asked to draw inferences about new data using the weights it learnt during that training. DL Boost and VNNI are designed specifically to accelerate the inference stage by enabling it to work at lower levels of numerical precision, and to do so without any perceptible compromise on accuracy.

By using a single new instruction to do the work of three older ones, VNNI can deliver serious performance gains for deep learning applications such as image recognition, voice recognition and language translation. In internal testing, Intel has seen boosts of up to 30x over previous-generation Xeon Scalable processors. What’s more, these technologies are built to accelerate Intel’s open source MKL-DNN deep learning library, which is used within the Microsoft Cognitive Toolkit, TensorFlow and BigDL frameworks. There’s no need for developers to rebuild everything to use the new instructions, because they work within the libraries and frameworks DL developers already use.
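As a rough illustration of what lower-precision inference means in practice, here is a minimal NumPy sketch that quantises 32-bit floating-point weights and activations down to 8-bit integers and accumulates their dot product in 32-bit integers, the arithmetic pattern VNNI fuses into a single instruction. It is an illustrative simulation of the idea, not Intel’s implementation or the MKL-DNN code path; the scaling scheme and values are assumptions.

```python
import numpy as np

def quantize(x, scale):
    """Map float32 values to int8 using a simple symmetric scale (illustrative)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# Toy activations and weights, standing in for one layer of a trained network
rng = np.random.default_rng(0)
activations = rng.standard_normal(256).astype(np.float32)
weights = rng.standard_normal(256).astype(np.float32)

# Per-tensor scales chosen from the data range
a_scale = np.abs(activations).max() / 127
w_scale = np.abs(weights).max() / 127

a_q = quantize(activations, a_scale)
w_q = quantize(weights, w_scale)

# INT8 multiplies accumulated into a wider integer, the pattern DL Boost accelerates
acc = np.dot(a_q.astype(np.int32), w_q.astype(np.int32))

# De-quantise the accumulator and compare with the full-precision result
approx = acc * a_scale * w_scale
exact = float(np.dot(activations, weights))
print(f"float32 result: {exact:.4f}, int8 result: {approx:.4f}")
```

Run on typical data, the quantised result lands close to the full-precision one, which is why inference can drop to 8-bit integers with little practical loss of accuracy.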

With AVX-512, VNNI and DL Boost, more enterprises have the power to harness the potential of deep learning. Workloads that would have pushed previous processors to their limits, like image analysis or complex modelling and simulation, run at much higher speeds. The end result is a lower barrier to entry for cutting-edge DL applications, while significant research, financial and medical applications could expand to bring in more organisations and run at truly practical speeds.

The next-gen platform

Of course, the processor isn’t all that matters in a server or system, which is why 2nd Gen Intel® Xeon® Scalable processors are designed to work hand-in-hand with some of Intel’s most powerful technologies. Perhaps the most crucial is Intel® Optane™ DC Persistent Memory, which combines Intel’s 3D XPoint memory media with Intel memory and storage controllers to bring you a new kind of memory, with the performance of RAM but the persistence – and lower costs – of NAND storage.

Optane is widely known as an alternative to NAND-based SSD technology, but in its DC Persistent Memory form it can replace standard DDR4 DIMMs, augmenting the available RAM and acting as a persistent memory store. Paired with a 2nd Gen Intel® Xeon® Scalable processor, you can have up to six Optane DC Persistent Memory modules per socket partnered with at least one DDR4 module. With 128GB, 256GB and 512GB modules available, you can have up to 32TB of low-latency, persistent RAM available without the huge costs associated with using conventional DDR4.
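For a sense of how persistent memory can be used as a byte-addressable store rather than a block device, here is a minimal, hedged Python sketch that memory-maps a file on a hypothetical DAX-mounted persistent memory filesystem (the /mnt/pmem path is an assumption). Real deployments would typically go through a library such as Intel’s PMDK; this only illustrates the load/store-and-flush pattern.

```python
import mmap
import os

# Hypothetical path on a DAX-mounted persistent memory filesystem (assumption)
PMEM_FILE = "/mnt/pmem/counter.bin"
SIZE = 4096

# Create and size the backing file on first use
if not os.path.exists(PMEM_FILE):
    with open(PMEM_FILE, "wb") as f:
        f.write(b"\x00" * SIZE)

fd = os.open(PMEM_FILE, os.O_RDWR)
try:
    # Map the file into the address space; reads and writes now go to persistent media
    buf = mmap.mmap(fd, SIZE)
    counter = int.from_bytes(buf[:8], "little")
    print(f"value restored after restart: {counter}")

    # Update in place with ordinary memory writes, then flush to make it durable
    buf[:8] = (counter + 1).to_bytes(8, "little")
    buf.flush()
    buf.close()
finally:
    os.close(fd)
```

The point of the sketch is simply that data written this way survives a restart, which is what lets in-memory databases and caches treat Optane DC Persistent Memory as both RAM and storage.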

The benefits almost speak for themselves. With such lavish quantities of RAM available, there’s scope to run heavier workloads or more virtual machines; Intel testing shows that you can run up to 36% more VMs on 2nd Gen Intel® Xeon® Scalable processors with Intel® Optane™ DC Persistent Memory. What’s more, this same combination opens up powerful but demanding in-memory applications to a much wider range of enterprises, giving more companies the chance to run real-time analytics on near-live data or scour vast datasets for insight. Combine this with the monster AI acceleration of Intel’s new CPUs, and some hugely exciting capabilities hit the mainstream.

Yet there’s still more to these latest Xeon Scalable chips than performance: they form the foundation of a modern computing platform, built for a connected, data-driven business world. Intel QuickAssist technology adds hardware acceleration for network security, routing and storage, boosting performance in the software-defined data centre. There’s also support for Intel Ethernet with scalable iWARP RDMA, giving you up to four 10Gbits/sec Ethernet ports for high data throughput between systems with ultra-low latency. Add Intel’s new Ethernet 800 Series network adapters, and you can take the next step up to 100Gbits/sec connectivity, for incredible levels of scalability and power.

Security, meanwhile, is enhanced by hardware acceleration for the new Intel Security Libraries (SecL-DC) and Intel Threat Detection Technology, providing a real alternative to expensive hardware security modules and protecting the data centre against incoming threats. This makes it tangibly easier to deliver platforms and services based on trust. Finally, Intel’s Infrastructure Management Technologies provide a robust framework for resource management, with platform-level detection, monitoring, reporting and configuration. It’s the key to controlling and managing your compute and storage resources to improve data centre efficiency and utilisation.

The overall effect? A line of processors that covers the needs of every business, and that provides each one with a secure, robust and scalable platform for the big applications of tomorrow. This isn’t just about efficiency or about delivering your existing capabilities faster, but about empowering your business to do more with the best tools available. Don’t let the 2nd generation name fool you. This isn’t just an upgrade; it’s a game-changer.

Discover more about data innovations at Intel.co.uk

Signs that your cloud strategy is in need of a makeover


Sandra Vogel

23 May, 2019

Moving any aspect of an organisation’s work to the cloud is a serious undertaking. It takes time, it costs money, and it needs to be justified both to the big bosses and to the workers at the coalface. Workflows, roles and day-to-day activity will change. It can take many months to get everything organised, in place and working well.

The work does not stop once the job is done, either. Managing cloud, like managing any other aspect of IT, is an ongoing commitment. This is about more than just tinkering around – it’s about ensuring your cloud strategy remains fit for purpose and keeps pace with the needs of your business. In order to do that, your strategy will need the occasional makeover.

But how do you know when that time has come?

Strategy? What strategy?

We’re beginning with the assumption that your organisation has a good cloud strategy in the first place, yet this isn’t always the case.

A strong cloud strategy is built on solid principles. It isn’t about the detail of specific business processes or workflows – it’s about why you use cloud, what you expect it to bring to the business, how you evaluate its delivery on those expectations and how you will ensure cloud implementation meets your data governance and security requirements.

If parts of your organisation’s workload are in the cloud and you don’t have a cloud strategy that does these things, then Step 1 is to revise what you have and get a good strategy in place.

Be wary of technology-led strategies

Assuming there is a sound strategy in place, then «any pragmatic strategy should be able to incorporate additions, modifications or removal of services or business processes,» according to Tony Lock, distinguished analyst at Freeform Dynamics.

One sign that your cloud strategy may be faltering is if it is almost entirely technology driven. A move to the cloud can’t just be dictated by the organisation’s technology, as it will ultimately fail to take key parts of the business into consideration. After months of implementation, you might find you have a superbly well-functioning and well-specified platform, but an enormous hole in your people skills. The longer this goes on, the harder that hole will be to fill.

Leaving the tech too late

Conversely, cloud strategies can flounder if the technology element gets left behind. While a strategy needs to be based on the whole business, including its workflows and business processes, it is, of course, imperative that the technology element is specified, planned and delivered in a timely way.

Jason Stewart-Clark, managing director of Cloud Native Architecture at Accenture, told us that it’s common for an organisation to focus heavily on building out its staff skillsets and embedding the idea of cloud into the business, while failing to build the «technical foundations, perhaps expecting the ‘next big project’ that comes along to build them out». He says «this is normally sub-optimal.»

Worry about replication

A third potential problem arises if an organisation has been too conservative in planning its first workload transition.

«A big mistake is trying to replicate existing datacentre approaches on the cloud,» explains Stewart-Clark. «Not only will this not allow you to realise the business agility and delivery pace benefits of cloud, it will almost always mean you end up spending a lot more money than you need to.»

Review, review, review

It’s important to remember that these signs of a faltering cloud strategy may only surface when an organisation is already deeply committed to delivering its first cloud implementation, or perhaps even when cloud has been in place for a while. The trick is to avoid serious issues like these surfacing. They will take time to put right, cost more money, and could well result in a loss of faith in the cloud concept throughout the organisation.

A strategy review is therefore essential. This isn’t just about regular, scheduled reviews as part of the board’s risk management agenda – something which should happen as a matter of course. This refers to what Accenture’s Stewart-Clark called «regular light-touch reviews», which «take into account changes in each of the domains as well as the interconnects between them».

Your cloud strategy will almost certainly need a makeover if a new business unit is created or acquired, or one is divested; if there is regulatory or compliance change; or if the organisation considers new business alliances or a move into new territories. It’s also worth keeping an eye on developments in cloud technology and service offers, and considering key developments as part of the review process.

‘No action’ is a valid outcome

Importantly, regular, light touch reviews might frequently result in no action being taken at all. If everything is functioning as required and the horizon looks smooth and untroubled, then the appropriate response could well be ‘no action necessary at this time’. This is no different to any other business review that the organisation’s board will undertake as a regular part of its work.

The goal is to avoid serious issues like the three signs of a faltering cloud strategy described above. Regular review with an eye on the horizon is the way to keep a cloud strategy developing smoothly, rather than coming up against the need for a radical makeover.

Cloud providers are under attack – and sabotaged services will freeze operations

Over the next two years, cloud service providers will be systematically sabotaged by attackers aiming to disrupt critical national infrastructure (CNI) or cripple supply chains. Organisations dependent on cloud services will find their operations and supply chains undermined when key cloud services go down for extended periods of time.

Nation states that engage in a digital cold war will aim to disrupt economies and take down CNI by sabotaging cloud infrastructure through traditional physical attacks or by exploiting vulnerabilities across homogeneous technologies. Attacks on cloud providers will become more regular, resulting in significant damage to businesses which share those platforms.

Organisations with a just-in-time supply chain model will be particularly vulnerable to service outages and will struggle to know when services will be restored, as cloud providers scramble to prioritise customer recovery.

Further consolidation of the cloud services market will create a small number of distinct targets that underpin a significant number of business models, government services and critical infrastructure. A single act of sabotage will freeze operations across the globe.

What’s the justification for this threat?

According to Gartner, the cloud services market is expected to grow from $221 billion in 2019 to $303 billion by 2021. The five largest cloud providers account for 66% of the global cloud market, with further consolidation expected. This will create an attractive target for attackers, from nation states aiming to disrupt CNI to organised criminal groups seeking to steal data. These popular cloud providers will become a point of failure, posing significant risk to businesses which are operationally dependent on them or have supply chain partners with similar dependencies.

The two largest cloud providers (Amazon and Microsoft) account for nearly half of all cloud services. Microsoft, Google and Alibaba have all grown their market shares substantially, but this has not been at the expense of Amazon – it is the small-to-medium sized cloud providers who collectively have seen their market shares diminish. This has effectively consolidated the market, allowing attackers to focus on fewer, but richer targets.

The large cloud providers boast a plethora of high-profile customers, including government departments, organisations involved with CNI and a number of information security providers. If a cloud provider was to be systematically targeted via traditional DDoS, physical attacks or other means, there would be significant disruption to its services and dependent organisations. Some organisations also rely upon multiple cloud providers to underpin individual systems, but in doing so create multiple points of failure.

In order to optimise their services, cloud providers use common technologies, such as virtualisation. Vulnerabilities discovered in these homogeneous technologies will have wide-reaching impact across multiple cloud providers. Issues of this kind have been seen previously with the Spectre and Meltdown security vulnerabilities, which affected a significant number of organisations.

Several previous cloud outages have been caused by human errors or natural disasters. In February 2017 one of Amazon’s regions, US-East-1, was taken offline due to human error. This had a direct effect on IoT devices which use Amazon’s cloud services, such as the smart home app Hive. A number of high-profile websites were also taken completely offline, resulting in lost revenue. In July 2018 Google Cloud also experienced an outage, affecting users’ ability to access Snapchat and Spotify. These incidents exemplify the potential impact of cloud outages. Determined attackers are likely to develop skills and resources to deliberately compromise and exploit these cloud services over the coming years.

How can you prepare?

Organisations that are reliant on cloud providers for one or more critical systems or services should prioritise preparation and planning activities to ensure future resilience.



Citrix Synergy 2019: Citrix launches desktop-as-a-service tool with Microsoft Azure


Keumars Afifi-Sabet

22 May, 2019

Virtualisation firm Citrix has built on its partnership with Microsoft to launch a desktop-as-a-service (DaaS) tool that aims to give employees access to a virtual desktop loaded with Windows-based apps.

Citrix Managed Desktops, built jointly with Citrix and Microsoft technology exclusively on the Azure public cloud platform, offers remote Windows sessions managed by Citrix. Moreover, the service aims to reduce the distance between users and their data by tapping into Azure cloud data centres.

«While Citrix provides a broad range of powerful, flexible virtual app and desktop solutions, Citrix Managed Desktops is all about simplicity and speed of delivery,» said Citrix product manager Kireeti Valicherla.

«This cloud-hosted solution is a turnkey service that enables any organization with any level of IT expertise to quickly deliver Windows-based applications and desktops to their workforce.

«Architected as a one-stop, pay-as-you-go service, it includes everything you need to securely deliver desktops and applications to any device from the cloud with simplicity.»

The DaaS platform, announced at the company’s annual Synergy conference, also comes in addition to day-one support for the widely-touted Windows Virtual Desktop (WVD) platform that Microsoft has been teasing for several months.

Citrix’s own DaaS tool will be based on a Windows Server desktop ahead of Microsoft’s WVD release, in order to give customers a seamless transition. WVD is described as the only licence on the market that allows businesses to host Windows 10 desktops on the public cloud.

Microsoft’s corporate vice president for enterprise experiences and management Brad Anderson and Citrix’s chief product officer PJ Hough walked conference delegates through the service in an on-stage demonstration.

They showed audience members how Citrix Managed Desktops allows IT administrators to create a new catalogue, assign users, select an Azure region, pick out a Windows 10 image, add custom applications to it, and then invite users to the newly-created virtual desktops.

Meanwhile, the DaaS platform is being primed to appeal to Citrix’s channel business, with the service built for partners to repackage and deliver to their own customers.

«We know that’s going to be really important,» Hough added. «We’ve also thought about contingent workers, mergers and acquisitions, so we expect enterprises to receive a mixture of traditional apps and desktops and Citrix Managed Desktops.»

How to improve supply chains with machine learning: 10 proven ways

Bottom line: Enterprises are attaining double-digit improvements in forecast error rates, demand planning productivity, cost reductions and on-time shipments using machine learning today, revolutionising supply chain management in the process.

Machine learning algorithms and the models they’re based on excel at finding anomalies, patterns and predictive insights in large data sets. Many supply chain challenges are time, cost and resource constraint-based, making machine learning an ideal technology to solve them.
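As a small, hedged illustration of that anomaly-finding claim, the sketch below uses scikit-learn’s IsolationForest to flag unusual shipments in synthetic transit-time and cost data; the dataset and contamination setting are invented for the example and are not drawn from any of the studies cited in this piece.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Synthetic shipment records: transit time (days) and cost per kg (USD)
normal = np.column_stack([rng.normal(5, 1, 500), rng.normal(2.0, 0.3, 500)])
outliers = np.array([[14.0, 2.1], [5.2, 9.5], [16.0, 8.0]])  # delayed or overpriced
shipments = np.vstack([normal, outliers])

# Fit an isolation forest; 'contamination' is the expected share of anomalies
model = IsolationForest(contamination=0.01, random_state=0).fit(shipments)
labels = model.predict(shipments)  # -1 = anomaly, 1 = normal

print(f"flagged {np.sum(labels == -1)} of {len(shipments)} shipments as anomalous")
print("anomalous rows:", shipments[labels == -1])
```

In a real supply chain the same pattern would run over track-and-trace, telematics or cost data, surfacing the handful of shipments worth a planner’s attention.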

From Amazon’s Kiva robotics relying on machine learning to improve accuracy, speed and scale, to DHL relying on AI and machine learning to power its Predictive Network Management system, which analyses 58 different parameters of internal data to identify the top factors influencing shipment delays, machine learning is defining the next generation of supply chain management. Gartner predicts that by 2020, 95% of Supply Chain Planning (SCP) vendors will be relying on supervised and unsupervised machine learning in their solutions. Gartner also predicts that by 2023, intelligent algorithms and AI techniques will be an embedded or augmented component across 25% of all supply chain technology solutions.

The ten ways that machine learning is revolutionising supply chain management include:

Machine learning-based algorithms are the foundation of the next generation of logistics technologies, with the most significant gains being made with advanced resource scheduling systems

Machine learning and AI-based techniques are the foundation of a broad spectrum of next-generation logistics and supply chain technologies now under development. The most significant gains are being made where machine learning can contribute to solving the complex constraint, cost and delivery problems companies face today. McKinsey predicts machine learning’s most significant contributions will be in providing supply chain operators with greater insight into how supply chain performance can be improved, anticipating anomalies in logistics costs and performance before they occur. Machine learning is also providing insights into where automation can deliver the greatest scale advantages. Source: McKinsey & Company, Automation in logistics: Big opportunity, bigger uncertainty, April 2019, by Ashutosh Dekhne, Greg Hastings, John Murnane and Florian Neuhaus

The wide variation in data sets generated from the Internet of Things (IoT) sensors, telematics, intelligent transport systems, and traffic data have the potential to deliver the most value to improving supply chains by using machine learning

Applying machine learning algorithms and techniques to improve supply chains starts with data sets that have the greatest variety and variability in them. The most challenging issues supply chains face are often found in optimising logistics, so that materials needed to complete a production run arrive on time. Source: KPMG, Supply Chain Big Data Series Part 1

Machine learning shows the potential to reduce logistics costs by finding patterns in track-and-trace data captured using IoT-enabled sensors, contributing to $6M in annual savings

BCG recently looked at how a decentralised supply chain using track-and-trace applications could improve performance and reduce costs. It found that in a 30-node configuration, when blockchain is used to share data in real time across a supplier network and is combined with better analytics insight, cost savings of $6M a year are achievable. Source: Boston Consulting Group, Pairing Blockchain with IoT to Cut Supply Chain Costs, December 18, 2018, by Zia Yusuf, Akash Bhatia, Usama Gill, Maciej Kranz, Michelle Fleury and Anoop Nannra

Reducing forecast errors up to 50% is achievable using machine learning-based techniques

Lost sales due to products not being available are being reduced by up to 65% through the use of machine learning-based planning and optimisation techniques. Inventory reductions of 20 to 50% are also being achieved today when machine learning-based supply chain management systems are used. Source: Digital/McKinsey, Smartening up with Artificial Intelligence (AI) – What’s in it for Germany and its Industrial Sector? (PDF, 52 pp., no opt-in).

DHL Research is finding that machine learning enables logistics and supply chain operations to optimise capacity utilisation, improve customer experience, reduce risk, and create new business models

DHL’s research team continually tracks and evaluates the impact of emerging technologies on logistics and supply chain performance. They’re also predicting that AI will enable back-office automation, predictive operations, intelligent logistics assets, and new customer experience models. Source: DHL Trend Research, Logistics Trend Radar, Version 2018/2019 (PDF, 55 pp., no opt-in)

Detecting and acting on inconsistent supplier quality levels and deliveries using machine learning-based applications is an area manufacturers are investing in today

Based on conversations with North America-based mid-tier manufacturers, the second most significant growth barrier they’re facing today is suppliers’ lack of consistent quality and delivery performance; the greatest is the lack of available skilled labor. Using machine learning and advanced analytics, manufacturers can quickly discover who their best and worst suppliers are, and which production centers are most accurate in catching errors. Manufacturers are using dashboards of this kind to apply machine learning to supplier quality, delivery and consistency challenges. Source: Microsoft, Supplier Quality Analysis sample for Power BI: Take a tour, 2018

Reducing risk and the potential for fraud, while improving the product and process quality based on insights gained from machine learning is forcing inspection’s inflection point across supply chains today

When inspections are automated using mobile technologies and the results are uploaded in real time to a secure cloud-based platform, machine learning algorithms can deliver insights that immediately reduce risk and the potential for fraud. Inspectorio is a machine learning startup to watch in this area: it is tackling the many problems that a lack of inspection and supply chain visibility creates, focusing on how it can solve them immediately for brands and retailers. Source: Forbes, How Machine Learning Improves Manufacturing Inspections, Product Quality & Supply Chain Visibility, January 23, 2019

Machine learning is making rapid gains in end-to-end supply chain visibility possible, providing predictive and prescriptive insights that are helping companies react faster than before

Combining multi-enterprise commerce networks for global trade and supply chain management with AI and machine learning platforms are revolutionising supply chain end-to-end visibility.

One of the early leaders in this area is Infor’s Control Center, which combines data from the Infor GT Nexus Commerce Network, acquired by the company in September 2015, with Infor’s Coleman Artificial Intelligence (AI) platform. Infor chose to name its AI platform after the inspiring physicist and mathematician Katherine Coleman Johnson, whose trail-blazing work helped NASA land on the moon. Be sure to pick up a copy of the book and see the movie Hidden Figures, if you haven’t already, to appreciate her and many other brilliant women mathematicians’ contributions to space exploration. ChainLink Research provides an overview of Control Center in its article, How Infor is Helping to Realise Human Potential.

Machine learning is proving to be foundational for thwarting privileged credential abuse which is the leading cause of security breaches across global supply chains

By taking a least privilege access approach, organisations can minimise attack surfaces, improve audit and compliance visibility, and reduce risk, complexity, and the costs of operating a modern, hybrid enterprise. CIOs are addressing privileged credential abuse in their supply chains by recognising that even if a privileged user has entered the right credentials, a request that arrives with risky context should still require stronger verification before access is granted.

Zero Trust Privilege is emerging as a proven framework for thwarting privileged credential abuse by verifying who is requesting access, the context of the request, and the risk of the access environment.  Centrify is a leader in this area, with globally-recognised suppliers including Cisco, Intel, Microsoft, and Salesforce being current customers.  Source: Forbes, High-Tech’s Greatest Challenge Will Be Securing Supply Chains In 2019, November 28, 2018.
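To make that decision logic concrete, here is a minimal Python sketch of risk-based, step-up verification for a privileged access request. The factors, weights and thresholds are illustrative assumptions, not Centrify’s product logic or any standard’s prescribed scoring.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    credentials_valid: bool
    known_device: bool
    usual_location: bool
    privileged_target: bool  # e.g. production database or network core

def risk_score(req: AccessRequest) -> int:
    """Toy risk score: higher means riskier context (illustrative weights)."""
    score = 0
    if not req.known_device:
        score += 40
    if not req.usual_location:
        score += 30
    if req.privileged_target:
        score += 20
    return score

def decide(req: AccessRequest) -> str:
    """Correct credentials alone are not enough when the context looks risky."""
    if not req.credentials_valid:
        return "deny"
    if risk_score(req) >= 50:
        return "step-up verification (MFA) required"
    return "allow"

# Example: right password, but an unknown device in an unusual location
print(decide(AccessRequest("svc-admin", True, False, False, True)))
```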

Capitalising on machine learning to predict preventative maintenance for freight and logistics machinery based on IoT data is improving asset utilisation and reducing operating costs

McKinsey found that predictive maintenance enhanced by machine learning allows for better prediction and avoidance of machine failure by combining data from advanced Internet of Things (IoT) sensors and maintenance logs, as well as external sources. Asset productivity increases of up to 20% are possible, and overall maintenance costs may be reduced by up to 10%. Source: Digital/McKinsey, Smartening up with Artificial Intelligence (AI) – What’s in it for Germany and its Industrial Sector? (PDF, 52 pp., no opt-in).
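As a hedged illustration of the technique, the sketch below trains a classifier on synthetic sensor readings (vibration, temperature and hours since last service) to flag machines at risk of failure; the features, data and model choice are assumptions made for the example, not McKinsey’s or any vendor’s method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Synthetic telemetry: vibration (mm/s), temperature (degrees C), hours since last service
vibration = rng.normal(3.0, 1.0, n)
temperature = rng.normal(60.0, 8.0, n)
hours_since_service = rng.uniform(0, 2000, n)
X = np.column_stack([vibration, temperature, hours_since_service])

# Synthetic label: failures become more likely as vibration, heat and wear rise
risk = 0.4 * vibration + 0.05 * temperature + 0.002 * hours_since_service
y = (risk + rng.normal(0, 0.5, n) > 6.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
# Flag a machine showing high vibration, high temperature and overdue service
print("failure risk:", model.predict_proba([[5.5, 75.0, 1900.0]])[0][1])
```

In practice the labels would come from maintenance logs rather than a synthetic formula, but the workflow, combining sensor streams with service history to rank assets by failure risk, is the same.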


G Suite passwords stored in plain text for 14 years


Bobby Hellard

22 May, 2019

Google has revealed that some G Suite passwords have been stored in plain text, meaning without being hashed, for 14 years.

The tech giant said it had recently discovered a bug that’s been around since 2005 and has begun resetting any passwords that might be affected, as well as alerting G Suite administrators about the issue.

«We recently notified a subset of our enterprise G Suite customers that some passwords were stored in our encrypted internal systems unhashed,» said Suzanne Frey, VP of Google’s engineering and cloud trust division.

«This is a G Suite issue that affects business users only–no free consumer Google accounts were affected–and we are working with enterprise administrators to ensure that their users reset their passwords.»

Frey added that Google has been conducting a thorough investigation and, so far, hasn’t seen any evidence of improper access or misuse of these affected G Suite credentials.

The blog post goes into great detail about Google’s policy of storing passwords with cryptographic hashes that mask them. Hashing is a one-way process: Google scrambles the user’s password with a hash function so it becomes something like «72i32hedgqw23328», and that value is stored with the relevant user name, encrypted and saved to disk. The next time the user signs in, the password is scrambled in the same way to see if it matches what Google has stored.
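The sketch below shows that one-way pattern in miniature, hashing a password with a random salt using Python’s standard-library scrypt and verifying a later sign-in by hashing the submitted password the same way and comparing the two values. It illustrates the general approach described above, not Google’s actual scheme.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); only the salt and hash are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-hash the submitted password and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```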

But this wasn’t the case back in 2005 for one particular feature. In the enterprise version of G Suite, Google provided domain administrators with tools to set and recover passwords, supposedly because this was highly requested. The tool was located in the admin console and let administrators upload or manually set user passwords.

The idea was to help administrators onboard new users, but the function inadvertently stored a copy of the unhashed password in the admin console. Google stressed that these passwords remained in its secure encrypted infrastructure and that the issue had been fixed, but 2005 was a long time ago.

While that’s bad enough, further password-handling flaws were found by the company as it was troubleshooting new G Suite customer sign-up flows. It discovered that from January 2019 it had inadvertently stored a subset of unhashed passwords in its secure encrypted infrastructure. These passwords were only stored for a maximum of 14 days and, once again, Google said the issue has been fixed.

This is one of a number of incidents reported by tech companies in recent times where password protection has been hampered by a bug or fault. Last year, Twitter warned its users to update their passwords after the company identified a flaw in its systems that could have allowed staff at the company to view them in plaintext form. Twitter sent an email to users explaining that the bug had been fixed and that the resulting internal investigation «showed no indication of a breach or misuse by anyone».

In Google’s defence, despite how long the bug has been in G Suite, its notification has not tried to mask anything, unlike Facebook, which earlier this year notified users that «some» passwords had been stored in plaintext, only explaining much further down its blog post that hundreds of millions of passwords for Facebook, Instagram and Facebook Lite had been stored without encryption.

Citrix Synergy 2019: Citrix ports Workspace to Google Cloud


Keumars Afifi-Sabet

22 May, 2019

Citrix has extended its partnership with Google to bring its flagship Workspace product to customers running their companies’ infrastructure on the Google Cloud Platform (GCP).

Businesses will be encouraged to migrate to the cloud with the promise of further integration between Google’s suite of productivity apps and Citrix’s core platform, manifesting in part as integration with Google Calendar, G Suite and GCP-based authentication tools.

«We’re going to surface appropriately the notifications that come from G Suite’s collaboration tools into the Workspace so that context is carried forward between the collaboration tools and the individual Workspace experience,» said Citrix chief product officer PJ Hough.  

The move, announced by CEO David Henshall at the company’s annual Synergy conference this year, feeds into his company’s drive to expand its cloud business. And it specifically aims to ease migration to the cloud by providing a means to integrate with Google’s ecosystem of apps and devices.

The announcement follows the company’s launch of the Citrix Workspace suite of tools, a work in progress since at least 2014, at last year’s Synergy conference, and its collaboration with Microsoft Azure.

Companies using Google Cloud could previously utilise Citrix Virtual Apps and Desktops with their systems, but this announcement marks a full integration with the Workspace suite.

Moreover, customers can expect a host of integrations with Google’s productivity apps, like the widely-used G-Suite.

Interoperability with Google’s Cloud Identity tool, for example, means company employees can use their Google or G-Suite login credentials to access Citrix Workspace. Meanwhile, Google Calendar integration means workers will automatically receive notifications on Workspace about significant events through tailored feeds.

«In the old days, all the apps people needed to do their jobs were on their laptops,» said Citrix CEO David Henshall. «Now, some are local, some are in corporate datacentres, some are in the cloud.

«In extending Citrix Workspace to Google Cloud, we’re giving companies greater flexibility and choice in how they deploy the SaaS, cloud, and web apps their employees need to be engaged and productive and a simple, efficient way to do it.»

The company says the new functionality on Google Cloud Platform will cut the time employees spend cycling between up to a dozen apps on a day-to-day basis by provisioning them all in one place.

Citrix also hopes the move will help it meet customers’ need for an «always-on» infrastructure, required in this day and age to maximise productivity and keep users engaged.

Citrix Synergy 2019: Citrix revamps Workspace to tackle “disengagement epidemic”


Keumars Afifi-Sabet

22 May, 2019

Citrix has announced a slew of features for its flagship Workspace platform that aims to better engage employees and boost their day-to-day productivity.

By the end of the year businesses should expect to benefit from tools such as a central newsfeed and an AI-powered digital assistant, the virtualisation firm announced at its annual Synergy conference, hosted this year in Atlanta, Georgia.

The new ‘intelligent experience’ package, according to the company’s CEO David Henshall, aims to tackle the epidemic of workplace disengagement, which has been caused by an over-burdening of enterprise tech designed and built for just the ‘1% of power-users’.

«This is truly a worldwide epidemic,» said David Henshall during his keynote address, citing Gallup research that showed 85% of people globally are disengaged with work.

«Imagine if only 15% of your teams are completely aligned and driving your business results,» he told an audience comprising the press, analysts and countless Citrix customers.

«When you couple that with the fact that in most organisations employees are the single largest expense, that means by definition employees are your most valuable asset. But they’re generally not being treated as such.

«Imagine if any other asset in your portfolio was operating at 15% capacity. You guys would be all over that really driving change across the board.»

The main reasons behind this include a saturation of workplace apps and ecosystems that have over-burdened 99% of users who just need simple and functional interfaces to get things done.

Whether employees use an internet page with a series of links or a web portal with different apps, users are spending far too much time cycling between systems, as well as authentication tools. This «takes up human RAM» trying to go back and forth.

Henshall added that the company’s mission is to give employees back one day per week, time he claimed users currently waste retrieving information that could be provided by automated software.

Citrix has pivoted its Workspace platform, launched in 2018, to address these mounting concerns with user interface (UI) upgrades and additional features. These have been heavily inspired by the consumer tech user experience, which the company concedes is pulling well ahead of business-oriented IT.

Features like one-click purchasing, for instance, have been slow to make their way to businesses, according to the company’s chief product officer PJ Hough, and have led to enterprise software «failing» employees.

«There are so many of these things we are familiar with but that haven’t necessarily surfaced inside our work environments,» Hough said.

«We’ve suddenly become so used to having recommendation engines whether it’s in our collections of books or the TV shows we watch or other forms of media that get more tuned to our needs over time.»

He added the digital revolution that has already happened for mobile-based user experience had not yet occurred for enterprise platforms, with Citrix hoping to position itself as a pioneer in this area.

The revamped Workspace will be powered by micro-app integrations that populate a customisable, Facebook-style newsfeed interface, pulling in details from integrated apps from Google, Microsoft and SAP, among others.

Users will also find a newly-developed digital assistant, or chatbot, to help with employee queries in the system, which Hough sees as a manifestation of the company’s major bet on machine learning and artificial intelligence.

Moreover, there will be a mobile device-based platform featuring the newsfeed interface front-and-centre, heavily influenced by mobile social media experiences.

Citrix’s new ‘intelligent experience’ capabilities will be made generally available to businesses in the third quarter of 2019, but are being rolled out to beta users from now.

Box overhauls its Relay workflow tool


Dale Walker

22 May, 2019

Box has launched what it describes as an «all-new» version of its Box Relay workflow management tool, featuring a more powerful workflow engine, a simplified UI, and improved tools for manipulating data.

The company first introduced the platform back in 2016 in a bid to make it easier for multiple departments, both inside and outside an organisation, to collaborate on projects from within the Box app, while automating much of the configuration side. It’s designed to make repeated processes, such as the onboarding of a new employee to the company, easier to automate.

The platform has since received a number of updates and developments, including the launch of an API in July 2018, which allowed Relay to be integrated into other business systems, such as CRM and ERP tools.

The latest version brings improvements to the core engine, which now builds workflows from ‘if this, then that’ (IFTTT) triggers to support processes that require a larger number of intricate steps. The platform also supports routing content based on metadata attributes, for example date, dropdown, multi-select and open text fields.
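As a purely hypothetical illustration of the ‘if this, then that’ pattern and metadata-based routing (not Box’s actual Relay API), the short sketch below applies a rule to a document’s metadata and triggers a follow-up action when it matches.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Document:
    name: str
    metadata: dict = field(default_factory=dict)

@dataclass
class WorkflowRule:
    """IF the condition matches a document's metadata, THEN run the action."""
    condition: Callable[[Document], bool]
    action: Callable[[Document], None]

    def apply(self, doc: Document) -> bool:
        if self.condition(doc):
            self.action(doc)
            return True
        return False

# Hypothetical rule: contracts above a value threshold get routed to legal review
rule = WorkflowRule(
    condition=lambda d: d.metadata.get("type") == "contract" and d.metadata.get("value", 0) > 50_000,
    action=lambda d: print(f"routing '{d.name}' to legal review and requesting approval"),
)

doc = Document("acme-renewal.pdf", {"type": "contract", "value": 120_000})
rule.apply(doc)
```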

More immediately noticeable changes can be found in the updated visuals, including a new UI that’s been redesigned to allow non-IT staff to build their own processes without the need for additional technical support. The main dashboard has also been given a fresh look, which will now display real-time metrics for workflow history, details on who created, updated or deleted workflows, and the option to export the audit history.

«Enterprise workflows built around content like document reviews and approvals and employee on-boarding and off-boarding need to be reimagined,» said Jeetu Patel, chief product officer at Box. «They’re disconnected from the apps teams use every day, locked behind IT, and don’t support external collaboration.»

«The new Box Relay brings powerful automation to improve these critical business processes, whether it’s creating sales proposals and marketing assets, or driving budget sign-offs and contract renewals, and more. Enterprises now have one platform for secure content management, workflow, and collaboration that’s built for how we work today.»

Relay has also been more tightly integrated into the Box portfolio. Specifically, users can call upon all the tools found in Box Cloud Content Management, including the security and compliance features, as well as the same integrations, such as Office 365 and DocuSign.

The new Box Relay is currently in private beta but will become generally available in «late June 2019». The platform will release with both a paid version and a free ‘Lite’ version.

Alongside the Relay update, Box said it is also working on a new single-view UI as part of Box Tasks, designed to make it easier for users to see all their tasks at once and supported with mobile push notifications. This addition is currently in public beta and will be added for all users for free once it launches generally.

HSBC focuses cloud and DevOps vision with $10 million investment

HSBC has been moving towards a cloud-first world – and the bank's latest endeavour has shed light on how it is pushing ahead in the DevOps sphere.

The company is making a $10 million (£7.8m) capital investment in CloudBees, the continuous delivery software provider behind the open source automation server Jenkins.

This is by no means an entirely altruistic act: HSBC has been using CloudBees significantly since 2015 to bolster its software delivery system. The companies had previously gone public about their relationship; HSBC appeared at a CloudBees event in April, as reported by Computerworld UK.

Regular readers of this publication will be aware of the bank's cloudy aspirations, in particular its relationship with Google Cloud. In 2017 Darryl West, HSBC CIO, took to the stage at Google Next in San Francisco to discuss the companies' collaboration. West noted that the total amount of data the company held at the time was more than 100 petabytes, and that, having dipped their toes into the Hadoop ecosystem as far back as 2014, it had been a 'tough road' in some places.

Nevertheless, the DevOps side continues to expand. Only last week the company began to advertise for a big data DevOps engineer role. The job, based at Canary Wharf, requires experience with Google Cloud or another suitable cloud vendor, as well as skills in Java, Scala and Spark on the programming side, alongside SQL, relational database and Elasticsearch expertise.

"We invest in technologies which are strategically important to our business, and which help us serve our customers better," said Dinesh Keswani, chief technology officer for HSBC shared services. "The DevOps market is growing fast, as organisations like us drive automation, intelligence and security into the way we deliver software. CloudBees is already a strategic business partner of HSBC; we are excited by our investment and by the opportunity to be part of the story of continuous delivery."

From CloudBees' perspective, the investment takes the company's overall funding to more than $120 million. The firm's recent bets include the acquisition of Electric Cloud in April, as well as leading the launch of the Continuous Delivery Foundation in March alongside Google and the Linux Foundation. CEO Sacha Labourey said the funding would be used to grow strategic partnerships and accelerate business growth.
