Five key elements to a successful connected enterprise

Consumers are using technology to their advantage. With it, they are less tolerant, less loyal and more promiscuous. They have learned how to exercise control over brands.

In this kind of environment, it’s important to remember that the success of any customer-facing organisation depends on the experience they deliver. Consumers want meaningful, timely, and personalised engagements, and to gain this, businesses need a single view of the customer and a relationship that goes beyond a single transaction.

New technologies, cloud-native applications and ‘always-on’ connectivity provide the core ingredients companies need to diversify their services and become customer-centric. This demands a truly holistic experience that puts people, not processes, at the forefront of decision making and customer communication.

To operate a connected enterprise, you must tick off the following five considerations:

Have a single view of customers

The consumer is the boss – and can walk away at any time. They determine how they want to interact and demand seamless engagement across any channel. To engage customers, organisations must understand them individually, comprehensively and consistently at any point in time. A complete view of the consumer, across the business, is the only way to extract actionable data that leads to customer retention.

However, in our SAP Customer Data Imperative report, based on a survey of 500 client-side marketers, only 42% of respondents felt they had a consolidated view of first-party data across the enterprise, even though 82% thought it was ‘critical’ or ‘important.’

If marketers are to take ownership of customer data, they must ensure they are building the right connections with other business functions to help work towards a single customer view, including both online and offline data.

Front and back office

To achieve a comprehensive view of each individual customer experience, front-to-back office integration is also essential. This requires a unified front office to orchestrate customer journeys, whilst connecting with the back office to gather insights into customer preferences. Knowing what the customer wants, when they want it, and being able to deliver it with in-the-moment insight into inventory will create a holistic experience for customers.

To get there, an integrated technology stack is imperative for companies seeking to collate customer data. Yet in the same Customer Data Imperative report, 41% of respondents cite disparate technology platforms as one of their three biggest challenges. This is why SAP is connecting front-office capabilities with back-office SAP ERP products, providing users with an end-to-end experience.

Optimised for machine learning and IoT

In a separate SAP survey, six out of 10 business leaders have implemented, or are planning to implement, AI in the next year. This is because machine learning can analyse data at speed and make predictions that guide the strategy of human teams.

Through machine learning, and combining Internet of Things (IoT) data with insight from other applications, businesses can build a 360-degree customer view which allows an organisation to tailor experiences to customer needs.

Data from supply and demand

A unified supply and demand overview enables companies to better understand, analyse, manage and respond to variability within their supply chains. To do this, businesses must have in place real-time supply chain planning solutions to take advantage of analytics, what-if simulation, alerts and more. This will also help them better respond to ever-developing market expectations so that customers are never left wanting.

Unified cloud

Because much of today’s buying activity happens before any human interaction takes place, organisations are trying to move CRM data out of its sales silo and use it in a way that keeps customers engaged at any point in time.

As such, the success of any business in the digital economy depends on the experiences they deliver. Businesses must therefore view customers holistically, end-to-end and continuously.

By having one unified suite of cloud solutions, businesses can manage customer experiences based on one trusted customer data model, which integrates all of the above. It provides the ability to deliver customer-centric processes and better outcomes, which will in turn build customer trust and loyalty.

The consumer-driven growth revolution will require all businesses to change. By putting these steps in place to build a connected enterprise, you can stay one step ahead of the competition.

Editor’s note: Find out more about how the connected enterprise can help retain and win more customers at SAP Customer Experience LIVE, from October 10-11 in Barcelona.

Beyond automation: Enterprise AI and machine learning solutions in action

A relatively small group of savvy executives have strategies in place to harness new business process automation technologies and thereby advance their digital transformation agenda. Meanwhile, a much larger group is closely following the market leaders to explore the lessons learned from their pilot projects.

According to the latest worldwide market study by 451 Research, new survey results suggest most organisations are adopting or considering artificial intelligence (AI) and machine learning (ML) for their commercial growth benefits, rather than for their potential to cut jobs.

Despite the technology being relatively new, adoption momentum is building.

Machine learning market development

Almost 50 percent of the survey respondents have deployed, or plan to deploy, machine learning in their organisations within the next 12 months. This paints a more optimistic picture of machine learning adoption than other industry analysts often portray.

"Out of many possible benefits we presented to our survey respondents, 49 percent cited gaining competitive advantage as the most significant benefit they have received from the technology," said Nick Patience, vice president at 451 Research.

Improving the customer experience came a close second, cited by 44 percent of respondents. Despite all the hype around mass job losses, lowering costs was cited by only a quarter of the survey respondents.

He added, "We think this demonstrates that AI and machine learning is an omni-purpose technology that can bring numerous benefits to organisations, beyond just lowering costs through increased automation."

That being said, there are still some major obstacles that inhibit progress. When asked "what is your organisation’s most significant barrier to using machine learning?", the most common answer was a shortage of skilled resources (36 percent).

According to the 451 Research assessment, skilled talent for machine learning projects usually means proven data science skills and experience. And a lack of those capabilities is reinforced further by the finding that data access and preparation is the second biggest barrier cited by survey respondents.

However, 451 Research expects the lack of skills and experience to gradually decline as a barrier when AI tools become easier to use, and the population of users who can leverage machine learning expands.

Outlook for AI and ML application growth

Organisations will need to ensure their machine learning deployment brings the business benefits that matter most to their stakeholders. To learn more about the commercial impact that AI and machine learning might have on your organisation, consider researching the application of 'quick-start solutions' that enable rapid testing and deployment.

Moreover, some forward-thinking vendors are already addressing the bigger challenges that face enterprise developers and data scientists. To help CIOs and CTOs scale projects, smart vendors offer high-performance server platforms and integrated software that will enable organisations to extract better results from their available data and accelerate the reporting of actionable insights.

Interested in finding out more about enterprise AI use cases and how AI and big data will converge? The AI & Big Data Expo World Series is coming to Silicon Valley on November 28-29 2018 – find out more here.

Mark van Rijmenam: On the ‘gestalt shift’ of big data, blockchain and AI convergence

When emerging technologies, such as blockchain, artificial intelligence (AI) and the Internet of Things converge, a ‘gestalt shift’ will occur, according to a new book. “The character of the experience will drastically change,” write Mark van Rijmenam and Dr. Philippa Ryan in Blockchain: Transforming Your Business and Our World. “All of a sudden, we can see the world through a different, more technologically advanced, lens and this opens up a completely new perspective.

“The convergence of multiple disruptive technologies will offer us new possibilities and solutions to improve our lives and create better organisations and societies, as well as build a better world all together.”

Organisations are increasingly taking the approach of exploring these technologies in tandem rather than in silos. Put simply, they all feed into each other. Pat Gelsinger, CEO of VMware, had it nailed down at the recent VMworld event. “Each [technology] is a superpower in [its] own right, but they’re making each other more powerful,” he told attendees. “Cloud enables mobile connectivity; mobile creates more data; more data makes the AI better; AI enables more edge use cases; and more edge requires more cloud to store the data and do the computing.”

For van Rijmenam, already a well-established big data thought leader, it was a natural trend. “The convergence of emerging technologies is the true paradigm shift organisations have to face,” he tells CloudTech. “Big data and blockchain have a lot in common and it will actually make data governance more important – after all, blockchain makes data immutable, verifiable and traceable, but it does not magically turn low-quality data into high-quality data.”

This feeds into the central problem, that of data – what to do with it and how to utilise it best. But ‘twas ever thus. “When initiating your business intelligence project, you’re likely to be surprised at how bad your raw material – data – really is,” wrote Dan Pratte in TechRepublic. “You’ll discover that if you’re going to be serious about business intelligence, you’re going to have to get very serious (their emphasis) about data quality as well.” The article publication date? May 30 2001.

Today, artificial intelligence is redefining business intelligence at a rapid rate. Take the recent analysis from Work-Bench around the future of enterprise technologies. “Expect all modern BI vendors to release an [automated machine learning] product or buy a startup by [the] end of next year,” the report explained.

This will move down to rank-and-file organisations which, ultimately, have to see themselves as data-centric companies going forward. “Organisations need to completely rethink the customer touchpoints and processes to be ready for the convergence of emerging technologies,” says van Rijmenam. “Only those organisations who are capable of seeing themselves as a data company will stand a chance to survive.”

Blockchain: Transforming Your Business and Our World devotes only its final chapter – 14 pages – to convergence. The remaining 180-odd pages explore blockchain’s potential in a variety of scenarios, from poverty, to voting, to climate change. The book describes these throughout as ‘wicked problems.’ Yet the third chapter, on identity, is the ultimate banker.

“We believe that we first and foremost need to solve the identity problem [with blockchain],” says van Rijmenam. “Once we have a self-sovereign identity, it will help make it easier to solve the other issues. That is why we first discussed that problem in our book before discussing the other wicked problems – thus a self-sovereign identity will be the biggest long-term change as it will empower individuals, but also organisations and even connected devices.”

Identity is not the only problem the industry needs to solve before blockchain can make its way truly into the mainstream. While a recent study from Juniper Research found that business leaders’ understanding of the technology is steadily improving, van Rijmenam categorises the issues into three buckets: technological, people, and culture. “Consumers will need to get used to a society where they have to control their own private keys,” he says. “That might be the biggest challenge of them all as it requires a culture shift.”

With this intersection in mind, van Rijmenam is currently working on a new book, focused on ‘the organisation of tomorrow’ and exploring how big data analytics, blockchain and AI will be transformative. “Organisations need to ‘datafy’ their processes, make data available using the cloud, collaborate with industry partners to optimise the supply chain, analyse their data for insights, and automate their business processes using AI,” says van Rijmenam.

Blockchain: Transforming Your Business and Our World is published by Routledge and is available for purchase here.

Main picture credit: https://vanrijmenam.nl/

Ignore multi-cloud today and risk becoming irrelevant in five years, report warns

Multi-cloud initiatives continue to be of great importance to European organisations – and those who aren’t heeding the warning signs today will feel the pinch in five years’ time.

That’s according to a new study from research firm Foresight Factory, alongside application network technology provider F5 Networks. The study, titled ‘The Future of Multi-Cloud’, drew on contributions from Deloitte, CloudSpectator, Ovum and more, and was based on a discussion guide combining publicly available research with Foresight’s proprietary bank of more than 100 trends.

In short – delay multi-cloud adoption and your organisation will become increasingly irrelevant. Yet many organisations will surely be aware of this. Take the study from Virtustream in July, which found that the vast majority of organisations (86%) confirm their cloud strategy is a multi-cloud one. Or take how many of the leading cloud vendors are steering their acquisition and product strategies towards the trend: Cisco acquiring Duo Security, Nutanix buying Netsil, and Juniper Networks rolling out new data centre, campus and branch network products.

Everyone is at it. One of the primary drivers for multi-cloud, as the report notes, is fear surrounding vendor lock-in. But the report makes an interesting point: there is a sense of constant change underpinning these initiatives, with the hyperscale vendors more than willing to outspend rivals to keep their market share.

Take machine learning as an example. According to the RightScale 2018 State of the Cloud report, machine learning is the most popular public cloud service with regards to future interest. AWS, Microsoft and Google are all taking big strides in this area, from Google’s pre-packaged AI services, to the various AWS clients citing the technology as key to their success – Major League Baseball, Formula 1, and more. From Microsoft’s perspective, the report notes that its ML focus has led it to invest in new server technologies, with workloads on the edge also contributing.

Yet there are various issues which still need to be overcome. The report cites the well-known skills gap organisations are facing. With multiple cloud services, containers, APIs and more, visibility and management are vital. Plenty of companies have sprung up to help organisations with this – CloudCheckr, CloudHealth Technologies and so on – but ultimately it’s all about service delivery. Consumers may not be interested in the technical intricacies of the multiverse, but they will care if their service becomes inflexible or goes down.

So what can companies do? Their technological landscape is continually changing, driven from the top by initiatives from the largest cloud vendors, and they have more plates spinning than ever. There are a couple of things which can be done, according to the report. Firstly, organisations should focus more on security. Consumers will eventually only be interested in those who have the most watertight systems built in. What’s more, there needs to be an increased focus on nurturing young IT talent – or ‘tapping into the kaleidoscopic potential of youth and promoting industry diversity’, as the report puts it.

In other words, organisations need multi-cloud. With developments in edge computing and artificial intelligence driving greater insights and quicker decision making, they need to get on that train as soon as possible. But the skills gap won’t be overcome overnight.

“The multi-cloud ramp-up is one of the ultimate wake-up calls in internal IT to get their act together,” said Eric Marks, VP of cloud consulting at CloudSpectator. “One of the biggest transformative changes is the realisation of what a high performing IT organisation is and how it compares to what they have. Most are finding their IT organisations are sadly underperforming.”

How to make Amazon Web Services highly available for SQL Server

Mission-critical database applications are often the most complex use case in the public cloud for a variety of reasons. They need to keep running 24×7 under all possible failure scenarios. As a result, they require full redundancy, which involves provisioning standby server instances and continuously replicating the data. Configurations that work well in a private cloud may not be possible in the public cloud. And providing high availability can incur considerably higher costs to license more advanced software.

There are, of course, ways to give SQL Server mission-critical high availability and disaster recovery protections on Amazon Web Services. But it is also possible (and all too common) to choose configurations that result in failover provisions failing when needed.

AWS offers two basic choices for running SQL Server applications: the Relational Database Service (RDS) and the Elastic Compute Cloud (EC2). RDS is a managed service that is often suitable for basic applications. While RDS offers a choice of six different database engines, its support for SQL Server requires the more expensive Enterprise Edition to overcome some inherent limitations, such as an inability to detect failovers caused by the application software.

For mission-critical SQL Server applications, the substantially greater capabilities available with EC2 make it the preferred choice when HA and DR are of paramount importance. But EC2 also has a few limitations, especially the lack of shared storage used in traditional HA configurations. And as with RDS, Always On availability groups in the Enterprise Edition might be needed to achieve the desired level of protection.

AWS also offers a choice of running SQL Server on either Windows or Linux. Windows Server Failover Clustering is a powerful and proven capability that is integral to Windows. But because WSFC requires shared storage, the data replication needed for HA/DR protection requires the use of separate commercial or custom-developed software to simulate the sharing of storage across server instances.

For Linux, which lacks a feature like WSFC, the need for additional HA/DR provisions is even greater. Using open source software requires integrating multiple capabilities that, at a minimum, must include data replication, server clustering and heartbeat monitoring with failover/failback provisions. But because getting the full HA stack to work well under all possible failure scenarios can be extraordinarily difficult, only very large organizations have the wherewithal needed to even consider taking on the task.
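As a rough illustration of what that integration involves, the sketch below assembles one common open-source combination: DRBD for block-level data replication and Pacemaker/Corosync for clustering and heartbeat monitoring. The node names, resource names and paths are placeholders, the pcs syntax assumes a recent release, and essentials such as fencing, ordering and colocation constraints are omitted – which is exactly why getting the full stack right is so difficult.

# Assumes two Linux nodes (node1, node2) with a DRBD resource r0 already defined
# and SQL Server installed; pcs prompts for the hacluster credentials
$ pcs host auth node1 node2
$ pcs cluster setup sqlcluster node1 node2
$ pcs cluster start --all

# Replicated storage, filesystem, floating IP and the SQL Server service are
# registered as cluster resources so they can fail over together
$ pcs resource create sqldata ocf:linbit:drbd drbd_resource=r0 promotable
$ pcs resource create sqlfs ocf:heartbeat:Filesystem device=/dev/drbd0 directory=/var/opt/mssql fstype=xfs
$ pcs resource create sqlvip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24
$ pcs resource create sqlsvc systemd:mssql-server

# Fencing/STONITH, ordering and colocation constraints, and failure testing
# still need to be added before any of this is production-ready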

Failover clustering – purpose-built for the cloud

The growing popularity of private, public and hybrid clouds has been accompanied by increased use of failover clustering solutions designed specifically for a cloud environment. These HA solutions are implemented entirely in software that creates, as their designation implies, a cluster of servers and storage with automatic failover to assure high availability at the application level.

Most of these solutions provide complete HA/DR protection, combining real-time block-level data replication, continuous application monitoring and configurable failover/failback recovery policies. Some of the more sophisticated solutions also offer advanced capabilities, such as support for Always On failover clustering in the less expensive Standard Edition of SQL Server for both Windows and Linux, WAN optimization to maximize multi-region performance, and manual switchover of primary and secondary server assignments to facilitate planned maintenance, including the ability to perform regular backups without disruption to the application.

Although these purpose-built HA/DR solutions are generally storage-agnostic, enabling them to work with shared storage area networks, shared-nothing SANless failover clustering is usually preferred for its ability to eliminate potential single points of failure. Most SANless failover clusters are also application-agnostic, enabling organizations to have a single, universal HA/DR solution. This same capability also affords protection for the entire SQL Server application, including the database, logons, agent jobs, etc., all in an integrated fashion.

The example EC2 configuration in the diagram shows a typical two-node SANless failover cluster that works with either Windows or Linux. The cluster is configured as a Virtual Private Cloud with the two SQL Server nodes in different availability zones. The use of synchronous block-level replication across the two availability zones assures both high availability and high performance. The file share witness, which is needed to achieve a quorum, is provided by the domain controller in a separate availability zone. Keeping each server instance of the quorum in a different zone eliminates the possibility of losing more than one vote if any zone goes offline.

Above: SANless failover clustering supports multi-zone and multi-region EC2 configurations with either multiple standby server instances or a single standby server instance, as shown here.
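For illustration only, the AWS CLI sketch below lays out the network side of such a configuration: one VPC with a subnet in each of three availability zones, so the two SQL Server nodes and the witness each get a zone of their own. The VPC ID, CIDR ranges and region are placeholders; the cluster software itself is installed and configured separately.

$ aws ec2 create-vpc --cidr-block 10.0.0.0/16

# One subnet per availability zone: two for the SQL Server nodes, one for the
# domain controller / file share witness, so no zone holds two quorum votes
$ aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
$ aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
$ aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.3.0/24 --availability-zone us-east-1c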

HA and DR configurations involving three or more server instances are also possible with most SANless failover clustering solutions. The server instances can be located entirely within the AWS cloud or in a hybrid cloud. One such three-node configuration is a two-node HA cluster located in an enterprise data center with asynchronous data replication to AWS or another cloud service for DR purposes—or vice versa.

In both two- and three-node clusters, failovers are normally configured to occur automatically, and both failovers and failbacks can be controlled manually (with appropriate authorisation, of course). Three-node clusters can also facilitate planned hardware and software maintenance for all three servers while providing continuous high-availability for the application and its data.

With 44 availability zones spread across 16 geographical regions, the AWS global infrastructure affords tremendous opportunity to maximize availability by configuring SANless failover clusters with multiple, geographically-dispersed redundancies. Such a global footprint also enables SQL Server applications and data to be deployed near end-users to deliver satisfactory performance.

How to Free Up Disk Space on your Mac by Upgrading to Parallels Desktop 14

This is part of a series about the new features in Parallels Desktop® 14 for Mac. If you’re upgrading to Parallels Desktop 14 from an earlier version, you’ll save a lot of space. The exact amount depends on a variety of factors, but this blog post will explain all the new features of Parallels Desktop […]


Successful Affiliate Story: How a Hobby Can Become a Business

by Guest Blog Author, Anastasia Barbashina, Affiliate Marketing Manager at Parallels. Parallels Desktop® for Mac is the #1 award-winning virtualization software in the world, enabling users to run Windows, Linux, and other OSes on a Mac® without rebooting. Today, I want to spotlight the Parallels Affiliate Program, which allows anyone to earn extra money by […]


Parallels Mac Management 7.1 to Offer Zero-Day Support for macOS 10.14 Mojave

One of the key reasons IT admins trust and rely on Parallels® Mac Management for Microsoft® SCCM is immediate support for upcoming new versions of macOS. With macOS® 10.14 Mojave coming up quickly on the horizon, Parallels is releasing version 7.1 of Parallels Mac Management on September 26, 2018, alongside the Mojave release. This follows […]


Microsoft makes Azure Data Box generally available for heavy duty data migration

More and more enterprise data is being transferred to the cloud, but sometimes the journey can break the network's back – which calls for less virtual and more physical solutions.

Microsoft has announced the general availability of Azure Data Box, a physical box which organisations can order, fill up, and then return to Redmond for it to be uploaded to an Azure environment. 

Companies and users can store up to 100 TB per standard box, with options either side of that: the newly announced Data Box Heavy can handle up to 1 PB of data, while Data Box Disks go up to 40 TB.

For those who may consider this a decidedly low-tech method of cloudy data transfer, it is worth noting that Amazon Web Services (AWS) has long offered Snowball, a petabyte-scale data migration tool of similar bulk to Azure Data Box. AWS also has the Snowmobile, a 45-foot-long shipping container, for data loads up to 100 PB.

The customers who really benefit from these types of tools are those organisations with reams of offline data from legacy tools, or those collecting data in hard-to-access places. For instance, moving an exabyte of data across a 10 gigabit per second line would take the better part of two and a half decades to complete.
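That figure is easy to sanity-check. Assuming a fully saturated 10 Gbps link with no protocol overhead, retransmissions or downtime (generous assumptions in practice), the arithmetic works out as follows:

# 1 exabyte = 8,000,000,000,000,000,000 bits; 10 Gbps = 10,000,000,000 bits per second
$ echo $(( 8000000000000000000 / 10000000000 ))            # ~800 million seconds
$ echo $(( 8000000000000000000 / 10000000000 / 31536000 )) # ~25 years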

Oceaneering International was one of the first customers of Azure Data Box last year. Its underwater vehicles generate 2TB of data per day, with the vessel itself generating up to 10TB per day. "We're trying to get the data to the decision maker quicker," explained Mark Stevens, director of global data solutions, adding that the company is aiming for a seven-day turnaround from the field anywhere in the world.

The other addition to the product family is Azure Data Box Edge, which combines on-premises storage with AI-enabled edge compute capabilities. With increasing amounts of data being created at the edge, the Edge hardware enables data analysis and filtering at the edge of the network, as well as acting as a storage gateway.

You can find out more about the Azure Data Box family here.

Picture credit: Microsoft

Five Kubernetes role-based access control mistakes to avoid

If you run workloads in Kubernetes, you know how much important data is accessible through the Kubernetes API—from details of deployments to persistent storage configurations to secrets. The Kubernetes community has delivered a number of impactful security features in 2017 and 2018, including Role-Based Access Control (RBAC) for the Kubernetes API.

RBAC is a key security feature that protects your cluster by allowing you to control who can access specific API resources. Because the feature is relatively new, your organization might have configured RBAC in a manner that leaves you unintentionally exposed. To achieve least privilege without leaving unintentional weaknesses, be sure you haven't made any of the following five configuration mistakes.

The most important advice we can give regarding RBAC is: “use it!” Different Kubernetes distributions and platforms have enabled RBAC by default at different times, and newly upgraded older clusters may still not enforce RBAC because the legacy Attribute-Based Access Control (ABAC) controller is still active. If you’re using a cloud provider, this setting is typically visible in the cloud console or using the provider’s command-line tool. For instance, on Google Kubernetes Engine, you can check this setting on all of your clusters using gcloud:

$ gcloud container clusters list --format='table[box](name,legacyAbac.enabled)'
┌───────────┬─────────┐
│    NAME   │ ENABLED │
├───────────┼─────────┤
│ with-rbac │         │
│ with-abac │  True   │
└───────────┴─────────┘

Once you know that RBAC is enabled, you’ll want to check that you haven’t made any of the top five configuration mistakes. But first, let’s go over the main concepts in the Kubernetes RBAC system.

Your cluster’s RBAC configuration controls which subjects can execute which verbs on which resource types in which namespaces. For example, a configuration might grant user alice access to view resources of type pod in the namespace external-api. (Resources are also scoped inside of API groups.)

These access privileges are synthesized from definitions of:

  • Roles, which define lists of rules. Each rule is a combination of verbs, resource types, and namespace selectors. (A related noun, Cluster Role, can be used to refer to resources that aren’t namespace-specific, such as nodes.)
  • Role Bindings, which connect (“bind”) roles to subjects (users, groups, and service accounts). (A related noun, Cluster Role Binding, grants access across all namespaces.)

In Kubernetes 1.9 and later, Cluster Roles can be extended to include new rules using the Aggregated ClusterRoles feature.

This design enables fine-grained access limits, but, as in any powerful system, even knowledgeable and attentive administrators can make mistakes. Our experiences with customers have revealed the following five most common mistakes to look for in your RBAC configuration settings.

Configuration mistake 1: Cluster administrator role granted unnecessarily

The built-in cluster-admin role grants effectively unlimited access to the cluster. During the transition from the legacy ABAC controller to RBAC, some administrators and users may have replicated ABAC’s permissive configuration by granting cluster-admin widely, neglecting the warnings in the relevant documentation. If users or groups are routinely granted cluster-admin, account compromises or mistakes can have dangerously broad effects. Service accounts typically also do not need this type of access. In both cases, a more tailored Role or Cluster Role should be created and granted only to the specific users that need it.
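One way to check for this mistake (a quick sketch rather than a definitive audit) is to list every Cluster Role Binding that references cluster-admin and review its subjects:

# Prints the header plus any binding whose roleRef is cluster-admin
$ kubectl get clusterrolebindings \
    -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name' \
    | awk 'NR==1 || $2=="cluster-admin"'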

Configuration mistake 2: Improper use of role aggregation

In Kubernetes 1.9 and later, Role Aggregation can be used to simplify privilege grants by allowing new privileges to be combined into existing roles. However, if these aggregations are not carefully reviewed, they can change the intended use of a role; for instance, the built-in view role could improperly aggregate rules containing verbs beyond read-only access, violating the intention that subjects granted view can never modify the cluster.
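A quick, hedged way to review this is to see which Cluster Roles are labelled to aggregate into the built-in view role, and then confirm the aggregated result still contains only read-only verbs:

# ClusterRoles that aggregate into "view" via the aggregation label
$ kubectl get clusterroles -l rbac.authorization.k8s.io/aggregate-to-view=true -o name

# Inspect the aggregated role itself; anything beyond get/list/watch deserves a closer look
$ kubectl describe clusterrole view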

Configuration mistake 3: Duplicated role grant

Role definitions may overlap with each other, giving subjects the same access in more than one way. Administrators sometimes intend for this overlap to happen, but this configuration can make it more difficult to understand which subjects are granted which accesses. This situation can also make access revocation more difficult if an administrator does not realize that multiple role bindings grant the same privileges.

Configuration mistake 4: Unused role

Roles that are created but not granted to any subject can increase the complexity of RBAC management. Similarly, roles that are granted only to subjects that do not exist (such as service accounts in deleted namespaces or users who have left the organization) can make it difficult to see the configurations that do matter. Removing these unused or inactive roles is typically safe and will focus attention on the active roles.
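The sketch below (illustrative, not exhaustive) lists namespaced Roles that no Role Binding currently references; Cluster Roles and bindings to non-existent subjects would need similar checks of their own:

# Roles that exist, minus Roles referenced by a RoleBinding in the same namespace
$ comm -23 \
    <(kubectl get roles --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}' | sort -u) \
    <(kubectl get rolebindings --all-namespaces -o jsonpath='{range .items[?(@.roleRef.kind=="Role")]}{.metadata.namespace}{"/"}{.roleRef.name}{"\n"}{end}' | sort -u)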

Configuration mistake 5: Grant of missing roles

Role bindings can reference roles that do not exist. If the same role name is reused for a different purpose in the future, these inactive role bindings can suddenly and unexpectedly grant privileges to subjects other than the ones the new role creator intends.

Summary

Kubernetes RBAC configuration is a critical control for the security of your containerized workloads. Properly configuring your cluster RBAC roles and bindings helps minimize the impact of application compromises, user account takeovers, application bugs, or simple human mistakes.

Check your clusters today—have you made any of these configuration mistakes?