SAP reveals C/4 HANA, its bid to reinvent CRM


Joe Curtis

6 Jun, 2018

SAP has moved to curb the impact Salesforce is having on its own CRM play with its latest product, C/4 HANA, analysts say.

C/4 HANA is SAP’s attempt to tie together its ERP applications with front-office customer-facing tools, aiming to create a better customer experience overall.

“SAP was the last to accept the status quo of CRM and is now the first to change it,” said CEO Bill McDermott.

“The legacy CRM systems are all about sales; SAP C/4 HANA is all about the consumer.”

Industry observers suggest that SAP has made the move to stop the likes of Salesforce eating into its installed base.

C/4 will comprise all of SAP’s CRM acquisitions to date – SAP Marketing Cloud, SAP Commerce Cloud, SAP Service Cloud, SAP Customer Data Cloud (including the acquired Gigya services) and SAP Sales Cloud (including the newly-acquired CallidusCloud services and acquired Hybris business).

SAP is the second largest CRM vendor, owning 8.5% market share in 2017, according to Gartner’s statistics. But while customers of SAP’s ERP products are loyal purchasers of its front office kit in sectors like utilities, chemicals and industrial manufacturing, those in retail and consumer goods are more likely to use Salesforce, Gartner research VP and distinguished analyst Ed Thompson said.

“The battle is between those who want a unified front-office and value it more than those who want an integrated front to back-office,” he told Cloud Pro. “The issue has been whether they’re patient enough to wait for SAP to get their act together or whether they want something more quickly.”

Another factor is the size of the CRM market – it overtook the ERP market in 2015 and is expected to be worth $75 billion by 2022, according to Gartner figures, compared with a $44 billion ERP market.

“So in essence SAP has to do well in CRM; it will one day be bigger for them than ERP,” Thompson said.

But bearing in mind Salesforce’s 19% market share, SAP must reach beyond the traditional users of its CRM products if it is to offset its rival’s dominance.

“It will take more than launching C/4 HANA for SAP to close the gap,” Thompson said. “They’ll need to find a way to appeal to those who are not existing SAP customers for CRM, widen their ecosystem of ISV and consulting partners and make ground in the industries they haven’t traditionally sold CRM to.”

C/4 HANA simplifies the branding of SAP’s range of CRM software by putting it under one roof. One level up from that, SAP is effectively selling S/4 and C/4 together under the ‘intelligent cloud suite’ brand, as a single connected service tying together front- and back-office IT.

As McDermott put it: “When you connect all SAP applications together in an intelligent cloud suite, the demand chain directly fuels the behaviours of the supply chain.”

But Constellation Research analyst Holger Mueller stressed that customers now need to see the new positioning backed up in product announcements.

“SAP has a shot to redefine #CRM with the @CallidusCloud and #S4HANA assets,” he tweeted, “but must show complete processes.”

This means integration via APIs, and fast – something Thompson also highlighted.

“It will require a quick follow up with details on architecture and roadmap to back up the strategy and it doesn’t yet address the issue of how this will encourage a CRM ecosystem of partners and facilitate increased innovation,” he claimed.

Both analysts view it as a move to limit the impact Salesforce is having on SAP’s CRM business, rather than an aggressive push of its own. “Pressure on CRM must be substantial,” Mueller observed.

“This is a defensive [move] in response to Salesforce,” Thompson added. “It will take more to get on the offensive.”

Image: Shutterstock

AWS moves Amazon EKS to general availability in managed Kubernetes push

Amazon Web Services (AWS) has announced the general availability of Amazon EKS, its managed Kubernetes service – putting it alongside rivals Microsoft and Google in this regard.

EKS, which was announced at AWS re:Invent back in November, is now operational in US East and US West regions, with further expansion happening ‘very soon’, according to the company.

The move puts AWS alongside Microsoft and Google in terms of managed Kubernetes. Naturally, given Google originally designed the container orchestration system, there are no prizes for guessing that the latter has various solutions in place. Microsoft announced AKS (Azure Container Service) in October last year as a managed Kubernetes service, building upon the technology’s move to standardisation.

“Prior to Amazon EKS, customers either had to do considerable work to architect a highly fault-tolerant way to run Kubernetes, or just accept a lack of resiliency,” said Deepak Singh, director of AWS Compute Services. “With the launch of Amazon EKS, customers no longer have to live with either of those trade-offs, and they get a highly available, fault-tolerant, managed Kubernetes service. It’s no wonder so many of our customers are excited.”
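
To make the ‘managed’ part concrete, here is a rough sketch of how a cluster can be requested through the EKS API using Python and boto3; the region, IAM role ARN, subnet IDs and security group ID are placeholders for resources you would already have, and AWS provisions and operates the control plane behind the call.

```python
# A minimal sketch, not AWS's official sample: request an EKS cluster and wait
# for the managed control plane to become ACTIVE. The role ARN, subnets and
# security group below are placeholders for resources that must already exist.
import time

import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-service-role",  # placeholder
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholders
        "securityGroupIds": ["sg-cccc3333"],  # placeholder
    },
)

# AWS runs the Kubernetes masters; the caller only polls for readiness.
while eks.describe_cluster(name="demo-cluster")["cluster"]["status"] != "ACTIVE":
    time.sleep(30)

print("Control plane ready; worker nodes and kubectl config are set up separately.")
```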

The wait for general availability does not appear to have left AWS struggling for customers in the meantime, however. A total of 25 companies were noted as EKS adopters in the press materials. One, GoDaddy, will be familiar to readers of this publication; the company specifically cited an active interest in containerised apps when it went all-in on AWS back in March.

Other companies confirmed as customers include Verizon – which last month revealed AWS was its preferred public cloud provider – Snap, and Pearson. Chris Jackson, director of cloud platforms at Pearson, said in a statement that the move to Amazon EKS ensured his team ‘built like a startup even though [it was] part of a major enterprise business.’

In some ways, the move could not have come soon enough. According to data published by the Cloud Native Computing Foundation (CNCF) in December, Amazon EC2 and ECS continue to be the leading container deployment environment, with 69% of survey respondents citing them, ahead of Google GCE/GKE (39%) and Azure (23%). The CNCF, which acts as a guardian for Kubernetes, ‘graduated’ the technology in March. Sarah Conway, CNCF senior director of PR services, said at the time the move signalled Kubernetes was ‘mature and resilient enough to manage containers at scale across any industry in companies of all sizes.’

Journey to the cloud: The most common challenges and how to overcome them

Cloud has firmly secured its place on IT professionals’ agenda, with 95% citing it as one of the top five most important IT strategies today, according to the SolarWinds IT Trends Report 2018: The Intersection of Hype and Performance. However, migrating a database to the cloud is not a straightforward process. There will be challenges along the way. So, what do today’s IT professionals need to consider before embarking on their journey to the cloud?

Prepare for a new era of work

While cloud has many benefits, it also has greater complexity—and that complexity needs to be managed. IT professionals who have been tasked with devising new and creative methods to monitor and manage this infrastructure not only need to put the right solutions in place, but also need to prepare themselves (and the organisations they work for) for continued advancements. Moving databases to the cloud creates a new era of work—one that is more global, interconnected, and flexible than ever.

Mind the IT skills gap

The same report found that 58% of survey respondents said an IT staff skills gap was one of the five biggest challenges they faced, meaning organisations need to focus on improving and cultivating the fundamental skillsets that will carry them into the cloud. This suggests the skillset needed across IT is becoming increasingly blended, requiring versatile tech professionals who can adapt and flex in accordance with a changing landscape. Once you’ve got the right team in place, it’s crucial to give the migration project the attention it deserves; by planning meticulously in advance (including a rollback strategy), you’re setting yourself up for success. The project plan must account for the scope of applications and related objects involved, as well as the number of stakeholders. By engaging the right people in the process, the migration should run more smoothly.

Create a single view  

By looking to their peer communities, IT teams will be able to better understand and adopt various technology adaptations and abstractions like software-defined constructs, containers, microservices, and serverless architecture. Considering the exponential rate of change happening in the industry right now, IT professionals need a management and monitoring toolset that gives a single view across those platforms. They need the ability to consolidate and correlate data to deliver more breadth, depth, and visibility across the data centre, which will allow them to proactively identify issues and resolve problems quickly.

Take your time and prioritise tasks

Many migration projects fail because the team has tried to do too much at once. For example, one team is focused on the migration while another starts deploying code changes at the same time. This often ends in disaster, because it becomes impossible to roll back cleanly or to tell whether a problem stems from the migration or from the new code. It’s vital that the workflow makes sense from the beginning to avoid having to retrofit a new plan of attack later.

Perform validation tests

Once the migration is complete, it’s vital to perform validation tests before handing it over for final testing. These validation checks will vary by industry. For some, they will require weeks of parallel production testing and sign-off on reports matching to the penny. For others, it will be as simple as executing a few queries. The important piece here is that there is agreement and signoff that the migration is complete, and that’s the validation check.
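
As a sketch of the simpler end of that spectrum, the Python below compares per-table row counts and a crude checksum between a source and a migrated database; sqlite3 stands in for whichever engines are actually involved, and the table names and the ‘id’ checksum column are assumptions made purely for illustration.

```python
# A minimal post-migration validation sketch: compare row counts and a simple
# checksum per table between the source and migrated databases.
import sqlite3

TABLES = ["customers", "orders"]  # tables agreed in the migration sign-off (assumed)

def table_fingerprint(conn, table):
    """Return (row count, sum of primary keys) as a cheap consistency check."""
    cur = conn.execute(f"SELECT COUNT(*), COALESCE(SUM(id), 0) FROM {table}")
    return cur.fetchone()

def validate(source_conn, target_conn):
    mismatches = []
    for table in TABLES:
        src = table_fingerprint(source_conn, table)
        tgt = table_fingerprint(target_conn, table)
        if src != tgt:
            mismatches.append((table, src, tgt))
    return mismatches

if __name__ == "__main__":
    # Two in-memory databases stand in for the real source and target systems.
    source = sqlite3.connect(":memory:")
    target = sqlite3.connect(":memory:")
    for conn in (source, target):
        conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
        conn.executemany("INSERT INTO customers (id) VALUES (?)", [(i,) for i in range(100)])
        conn.executemany("INSERT INTO orders (id) VALUES (?)", [(i,) for i in range(250)])
    print(validate(source, target) or "Validation passed: counts and checksums match.")
```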

For such a well-trodden path, migrations to the cloud are rarely simple. However, when planned properly, with a little foresight and flex along the way, you’ll reach the destination in as good a state as possible.

Microsoft drops a clean energy datacentre into the North Sea


Bobby Hellard

6 Jun, 2018

Microsoft has leveraged the technologies of submarines and renewable energy to plunge a 40ft-long datacentre into the sea near Scotland’s Orkney Islands.

The experimental shipping container-sized prototype, called “Project Natick”, is currently in operation on the seafloor next to the European Marine Energy Centre, just off the Northern Isles.

This Davy Jones’ datacentre is the result of a year’s worth of research into environmentally sustainable data storage technology that Redmond hopes could one day be ordered to size, rapidly deployed and left to operate at the bottom of the sea for years.

“That is kind of a crazy set of demands to make,” admitted Peter Lee, corporate vice president of Microsoft AI and research, who leads the NEXT group, a Microsoft research program for pioneering new technologies, which is overseeing the experiment. “Natick is trying to get there.”

Microsoft is pursuing what it calls “relevant moonshots”: projects the company believes have the potential to transform the core of its business and the wider cloud computing industry.

The cylindrical storage container is loaded with 12 racks containing a total of 864 servers and has a self-sustaining cooling system that Microsoft has adapted from the heat-exchange process used in submarines. It works by piping seawater through radiators on the back of each rack to cool them down, before the now-heated water is expelled back into the ocean where it mixes with the surrounding currents.

An underwater cable from the European Marine Energy Centre on the Orkney Islands will power the datacentre. The energy centre is a test site for experimental tidal turbines and wave energy converters that generate electricity from the movement of seawater.

By plunging datacentres into bodies of water near coastal cities, Microsoft could cut the distance data has to travel to reach coastal communities. It’s estimated that half of the world’s population lives within 120 miles of a coast, so off-shore datacentres could lead to faster web browsing and streaming, and a boost for AI-driven technologies.

“For true delivery of AI, we are really cloud-dependent today,” added Lee. “If we can be within one internet hop of everyone, then it not only benefits our products but also the products our customers serve.”

Microsoft hopes to leave the datacentre in place for five years without having to intervene – once submerged, it’s impossible for engineers to gain access to the facility. The company will monitor its performance for the first 12 months to see whether the idea is practical.

Pictures: Microsoft

How DevOps affects IT performance – and why automation is not universal among companies yet

Everyone’s talking a good game when it comes to DevOps strategies – but automation is not as widespread in organisations as one might think.

That’s the key verdict from software management provider Puppet in its latest report. The report, State of DevOps Market Segmentation, aims to ‘reveal additional insights’ from the company’s most recent State of DevOps report, issued last year.

The 2017 analysis polled almost 3,200 technical professionals from organisations worldwide, and found that high performers have 46 times more frequent code deployments, significantly lower change failure rates, and 440 times faster lead times from commit to deploy.
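
For anyone unfamiliar with those metrics, the sketch below shows how deployment frequency and commit-to-deploy lead time can be calculated from deployment records; the timestamps are invented for illustration and have nothing to do with Puppet’s data.

```python
# An illustrative sketch of two of the metrics the report compares: deployment
# frequency and commit-to-deploy lead time, computed from made-up records.
from datetime import datetime
from statistics import median

# Each record is (commit time, deploy time); all values here are invented.
deployments = [
    (datetime(2018, 6, 1, 9, 0),  datetime(2018, 6, 1, 10, 30)),
    (datetime(2018, 6, 1, 13, 0), datetime(2018, 6, 1, 13, 45)),
    (datetime(2018, 6, 2, 11, 0), datetime(2018, 6, 2, 16, 0)),
]

period_days = 7
frequency = len(deployments) / period_days
lead_times = [deploy - commit for commit, deploy in deployments]

print(f"Deployments per day: {frequency:.2f}")
print(f"Median commit-to-deploy lead time: {median(lead_times)}")
```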

Yet while there is an evident difference between the highest and lowest performers, the variation in low performers by industry is startling. Only 23% of media and entertainment respondents fell into the ‘low performer’ category, while financial services (54%), industrial and manufacturing (53%) and insurance (53%) were significantly more bottom-heavy in comparison.

Leadership particularly affects performance, as this publication noted when analysing last year’s report – but even in the best cases, a good leader does not necessarily mean good practices across the board. DevOps success also required ‘suitable architecture, good technical practices, [and] use of lean management principles’, the report said.

The majority of respondents reported high levels of manual work across configuration management, deployment, testing, and change approval processes. Many organisations start their DevOps journeys in the areas where the pain is most acute: version control, continuous integration and infrastructure automation, among others. Puppet says that while automation is ‘a key enabler’ across organisations’ journeys, the process remains ‘inconsistent and spotty’.

Perhaps the most interesting part of the study was assessing how expectations have changed in DevOps initiatives over recent years. “As practices become more widespread, expectations are rising,” the report notes. “What many might have considered ‘great’ IT efforts just a few years ago might appear as fair to middling today.

“That’s an interesting twist that suggests that the gains provided by DevOps – getting departments and teams to work better across an organisation – is no longer just a ‘nice to have’ but a given,” the report adds. “DevOps is simultaneously raising the bar and expectations of what’s possible.”

“Today, every company around the world has the same priority – automation at scale – and they’re achieving this through DevOps,” said Nigel Kersten, report author and chief technical strategist at Puppet in a statement. “While the data shows that companies of all types and sizes are making progress, we still have a long way to go to eliminate manual work that prevents companies from scaling automation success.”

You can find out more about the report from Puppet (email registration required).

Our 5-minute guide to cloud managed networking


Esther Kezia Thorpe

5 Jun, 2018

More than 60% of enterprises expect that at least half their infrastructure will be cloud-based by the end of this year, according to research from IDC.

So it’s no surprise that cloud managed networking, often provided ‘as-a-service’, is rapidly growing in popularity with businesses.


But what is cloud managed networking, and why should businesses be considering it as a way to manage their networking infrastructure? Here, we run through what it is as well as the pros and cons.

What is cloud managed networking?

Cloud managed networking is a way of managing and controlling a business network remotely through resources in the cloud, rather than from onsite network controllers or management software. It uses a SaaS model to make it easy to control and analyse on-premises network devices, such as wireless access points and switches.

This method allows you to manage all network users and devices in a single place, so employees can work flexibly regardless of location – making cloud managed networking especially valuable for businesses whose staff connect from multiple sites.

Unlike most cloud services, which a business licenses as software alone, cloud ‘networking-as-a-service’ pairs the hardware – switches, wireless access points and security gateways – with a management licence. The technology gives you full visibility over deployment, management, monitoring and diagnosis of issues on a network.

Once a device is connected to the network, it can easily load the running configuration from the cloud. This also means that businesses can scale up, from a few key devices to a large deployment under a single platform.

Pros and cons of cloud managed networks

Deployment is just one of the challenges resellers and customers face when implementing both wired and wireless networking equipment. Some form of set-up and configuration is normally required, and this takes time and adds to the cost every time a new device is needed.

Cloud networking makes deployments easier, with devices being provisioned with settings by the cloud provider prior to installation, making it quicker to install and set up. When the device is connected to a network, it securely connects back to a control centre and the configuration is downloaded and initiated automatically, making the device ready to use almost straight away.
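
The flow is easier to picture in code. The heavily simplified Python below imagines a device’s first call home; the controller URL, device serial, token and configuration keys are all hypothetical rather than any vendor’s actual API.

```python
# A rough sketch of the zero-touch flow described above: on first boot a device
# calls back to its cloud controller and applies the configuration prepared for it.
# The controller URL, device identity and config format are hypothetical.
import json
import urllib.request

CONTROLLER_URL = "https://controller.example.com/api/v1/provision"  # hypothetical
DEVICE_SERIAL = "Q2XX-XXXX-XXXX"                                    # hypothetical

def fetch_config():
    request = urllib.request.Request(
        f"{CONTROLLER_URL}?serial={DEVICE_SERIAL}",
        headers={"Authorization": "Bearer <device-token>"},  # placeholder credential
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

def apply_config(config):
    # On a real device this would push VLANs, SSIDs, firmware versions and so on.
    for key, value in config.items():
        print(f"Applying {key} = {value}")

if __name__ == "__main__":
    apply_config(fetch_config())
```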

Cloud managed networking also removes the need for trained IT staff at remote locations, as deployments can be managed from one centralised location.

As well as this, it’s also usually much easier and quicker to identify issues on a cloud managed network. Unplanned downtime, limited resources and network performance issues can be a significant problem in day-to-day business operations, so being able to deal with disruptions efficiently from a centralised management platform is increasingly important.


Most cloud managed networking solutions are offered on an ‘as-a-service’ basis, with regular, predictable payments spread out over time rather than a large upfront cost. Maintenance and support are usually included in those payments, which can deliver long-term savings compared to traditional networking deployments.

Security and access are a concern with any cloud tool, precisely because a key benefit is that users can log in from anywhere. Any business looking to use a cloud managed network provider must ensure the solution supports different levels of IT admin privileges and multi-factor authentication, so that only authorised users are able to log in.

There are also potential connectivity issues that can arise when a system is managed by an external provider. If that provider suffers an outage that affects your business, it can be much more difficult to resolve the problem swiftly and without disruption to core day-to-day business operations.

Picture: Bigstock

Microsoft Azure is set to offer 12TB virtual machines


Bobby Hellard

5 Jun, 2018

Microsoft Azure is soon to offer virtual machines with 12TB of RAM for developers looking to run workloads that require lots of memory.

The company made the announcement alongside the launch of a number of other VM types specifically geared towards high-memory workloads, such as SAP’s HANA in-memory database, which need to process huge chunks of data extremely quickly.

Along with the new 12TB VM, Microsoft now also offers a newer M-series range of VMs with capacities from 192GB to 4TB certified for HANA, as it pushes its cloud infrastructure as the ideal place to run SAP workloads. They are all based on Intel Xeon Scalable (Skylake) processors.
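
As a rough illustration (not Microsoft’s own sample code), the snippet below uses the Azure SDK for Python to list the VM sizes available in a region and flag the high-memory ones; the subscription ID is a placeholder, and which sizes appear depends on your region and subscription.

```python
# A small sketch: list VM sizes in a region and flag anything in the
# high-memory range the article describes (192GB of RAM and up).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for size in compute.virtual_machine_sizes.list(location="westeurope"):
    if size.memory_in_mb >= 192 * 1024:
        print(f"{size.name}: {size.memory_in_mb / 1024:.0f} GB RAM, "
              f"{size.number_of_cores} vCPUs")
```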

Microsoft will reveal more details on the new VM capacities in the coming months.

They come as Microsoft extends its private cloud, Azure Stack, around the world, doubling the number of countries in which its cloud-in-your-datacentre will operate to 92.

Azure Stack initially launched in 46 countries, and Microsoft puts the expansion – which takes in a number of African countries, China and more of Europe – down to customer demand.

“When I talk with many of our customers about their cloud strategy, there is a clear need for choice and flexibility on where to run workloads and applications,” said Corey Sanders, corporate VP of Azure. “Like most customers, you want to be able to bridge your on-premises and cloud investments.”

“The inclusion of Microsoft Azure Stack services into our portfolio enhances our value proposition in a number of ways, from DevOps tools, a true hybrid cloud offering, access for customers to Azure services like business intelligence and AI, to fully managed service for any customer who wants it,” said Tiberiu Croitoru, CEO of BinBox, a Romanian startup telecoms service provider.

“Microsoft Azure Stack will bring us customers who want to exploit public cloud but were holding back due to data location concerns. In fact, our pipeline already includes about 60 customers we couldn’t have targeted pre-Azure Stack.”

Picture: Shutterstock

Dare you move to the cloud using a ‘finger in the air’ calculation?

In 1747 Lord Chesterfield used the immortal line ‘Take care of the pence; for the pounds will take care of themselves’ in a letter to a friend.  It still holds true today. The cost of a cappuccino and avocado on sourdough every morning on the way into the office soon adds up.

But however lax we are with our pennies, none of us would continue to pay the barista for that early morning caffeine hit if we decide to give up coffee. Oddly, though, that is what many large businesses do every day – albeit not for undrunk coffee but for unnecessary and unwieldy technology contracts.

Ghost spending and a spider’s web of contracts

Getting a firm hold on this ghost spending is vital as enterprises look to migrate to the cloud to boost agility and cut costs. If you don’t have full visibility of all IT costs at the start, then any cost savings identified as part of a cloud migration business case are going to be ‘finger in the air’ estimates. Furthermore, it will be hard to determine whether it makes financial sense to move specific workloads to the cloud, or leave them where they are.

Costs are rarely the sole motivation for moving to the cloud; the operating model, agility and scalability are also influential. Despite this, the financial arguments tend to sway the board.

There are various reasons for the lack of a single, holistic view of IT expenditure. Sometimes the people who set up long-term contracts with suppliers have moved to new positions and no one has since thought to check the details. In other cases, individual business units may be merrily spinning up their own marketing apps in the cloud without the knowledge of IT teams, leaving trails of multiple cloud service charges in their wake.

Unravelling this spider’s web of contracts and other unnecessary overheads is made more difficult by the headcount cuts that have lacerated many IT departments over the last 15 or so years.  

During this time, cost optimisation has often taken a back seat but now it is coming back in vogue as organisations realise they need a digital clean up to ensure the commercial structure and cost base are fully aligned ahead of a drive to the cloud.

Digital feather dusters

While it makes financial sense to deploy new applications in the cloud, the decision for existing workloads is often less clear-cut. If you have a Technology Business Management (TBM) function, then the first step in any digital spring clean is to engage with it; if not, consider setting up a TBM function with the help of a specialist third party.

Begin by undertaking a cost transparency exercise to establish a baseline. By breaking down the total IT bill into chunks – for example, payroll services or email services – you can identify the share of those costs across the relevant IT towers, such as networks or compute, making it easier to sniff out areas where savings can be made. Doing this, one global company managed to cut its annual network run costs by more than a third.
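
As a toy illustration of that allocation step, the sketch below splits two service bills across IT towers; every figure and percentage is invented.

```python
# A toy sketch of the cost-transparency exercise: break each service's annual
# bill into shares across IT towers. All figures and allocations are invented.
service_costs = {"payroll services": 1_200_000, "email services": 800_000}  # annual cost

# Assumed allocation of each service across towers (shares sum to 1.0 per service).
allocations = {
    "payroll services": {"compute": 0.5, "network": 0.2, "storage": 0.3},
    "email services":   {"compute": 0.4, "network": 0.4, "storage": 0.2},
}

tower_totals = {}
for service, cost in service_costs.items():
    for tower, share in allocations[service].items():
        tower_totals[tower] = tower_totals.get(tower, 0) + cost * share

for tower, total in sorted(tower_totals.items()):
    print(f"{tower}: £{total:,.0f}")
```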

Another area that would benefit from a pre-cloud spring clean is technology demand. As business priorities evolve over time you will find you have unwanted services that can be ditched and low priority services that can be scaled back. Once this demand optimisation has been achieved the cloud journey can commence in earnest.

All technology expenses need to be reviewed. We find that organisations typically overpay their telecoms provider by up to 15%, often because they are being charged for services that they no longer use or that have been decommissioned. In the cloud environment, the billing tends to be accurate but does not include a single view of resource consumption. Without this, it’s hard to maximise the true economic benefits of the cloud and it’s easy to end up paying for capacity you no longer need.

Modelling the cost savings

Once these baseline costs have been reviewed and adjusted, cost transformation can begin. A ‘before and after’ cost analysis lets organisations model different cloud options (e.g. IaaS, PaaS and SaaS), as well as single versus multi-cloud, and determine the most cost-effective approach.

We typically find that large organisations can cut IT costs by about 15% by migrating to the cloud or outsourcing, alongside the benefits of agility, flexibility and scalability. In one example, a client made a saving of over £10 million a year on their run costs by moving test environments to the cloud. Mindful of the move away from sunk to variable costs, the team were also far more aware of the cost of spinning up every test environment and worked smarter as a result.

Watertight governance

Despite the positive steps taken by this particular test team to regulate usage, it’s essential to put in place watertight governance procedures. Only then can you ensure that everyone in the organisation is aware of the rules surrounding the use of cloud services and prevent money leaks.

Fine-tuning costs

As well as keeping an up-to-date record of all the software and hardware owned, it’s good practice to have a single view of all cloud service contracts. This becomes even more important in a multi-cloud model. With intuitive dashboards you can see which cloud services are being used where, and by whom, at any time.

From here it’s a natural step to fine-tune the costs based on usage. In an IaaS scenario, for example, swapping out a 4-processor server for the same configuration in the cloud will deliver substantial savings as you tend to move from a fixed cost model to a variable cost model. By monitoring the peaks and troughs in resource usage, further cost savings can be identified – and an optimal profile agreed with your cloud provider.
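
A back-of-the-envelope version of that fixed-versus-variable comparison might look like the sketch below; all of the prices and the usage profile are invented for illustration.

```python
# An illustrative comparison: an on-premises server is paid for around the clock,
# while a cloud instance that follows the usage profile is only paid for when it
# runs. Every figure here is invented.
HOURS_PER_MONTH = 730

on_prem_monthly_cost = 900.0   # invented: amortised hardware + power + support
cloud_hourly_rate = 0.60       # invented: rate for a comparable 4-vCPU instance

# Invented usage profile: busy weekday hours only.
hours_actually_needed = 22 * 10  # 22 working days x 10 hours

always_on_cloud_cost = cloud_hourly_rate * HOURS_PER_MONTH
usage_matched_cloud_cost = cloud_hourly_rate * hours_actually_needed

print(f"On-premises (fixed):     £{on_prem_monthly_cost:,.2f}")
print(f"Cloud, always on:        £{always_on_cloud_cost:,.2f}")
print(f"Cloud, matched to usage: £{usage_matched_cloud_cost:,.2f}")
```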

Turning on a dime

There’s no denying the many benefits of moving to the cloud. But it’s impossible to evaluate the true cost implications without first having a single, granular view of all current expenditure. To gain the agility from the cloud that allows your business to turn on a sixpence, you first need to count your pennies.

Read more: Why organisations need 'reality check' on cloud costs

Everything from WWDC 2018: iOS 12, macOS Mojave, & much more!

What we were expecting from WWDC ’18: macOS, iOS, and the name of the new macOS. What we got from WWDC ’18: iOS 12, watchOS 5, tvOS, macOS Mojave, and an answer to the question “Is Apple merging macOS and iOS?” Apple’s answer: no. Tim Cook kicked off the highly anticipated Apple keynote event with a Planet Earth-style […]

This post originally appeared on the Parallels Blog.

Post UKCA Q&A: Apay Obang-Oyway


Cloud Pro

4 Jun, 2018

Can you detail what Ingram does and its history for those not familiar with the company?

Ingram Micro works with thousands of partners in the UK to help them responsibly transform their business through specialism, diversity, and innovation while helping end-users accelerate business outcomes from their technology investments.

We’ve got partnerships with the leading innovative technology vendors in the industry, whose services we offer through the Ingram Micro Ecosystem of cloud, which provides partners with the ideal platform to deliver premium cloud solutions to their end customers.

Describe your role in three words

Vision, motivate, transform.

Who is your tech hero? And why?

Elon Musk for his disruptive approach to business, innovation and breaking boundaries.

What does your current role involve?

Enabling transformation inside and outside of the organisation and embracing disruption. Articulating and amplifying the business opportunity that is cloud, but also motivating my team to deliver excellence every day.

How did you arrive at your current role?

Through good old hard work, fun and natural progression. I believed in and embraced the subscription economy long before it became a reality, so when the opportunity arose to create the UK cloud organisation – one that was all about disruption – I jumped at the chance to lead it.

Did you always want to work in the tech industry?

Yes.

What do you enjoy most about your role?

Seeing how the partner ecosystem is embracing cloud technology to enhance end-user organisations and change society.

And what is your least favourite aspect about your role/the industry?

The pace of partner transformation needs to be faster.

What was your first job in tech?

I started as an associate product manager at one of Europe’s largest VARs.

How do you think the cloud is shaping and changing our working and personal lives?

The fourth industrial revolution is gathering pace and this is the year that we’ll look back on and remember how the quiet revolution gained full momentum.

We’re almost precisely at that tipping point where the physical environment of computing gives way to the virtual world of cloud and its associated enabling technologies – not just in bold initiatives here and ingenious transitions there, not just from pioneers within certain verticals and visionary disruptors, but for every organisation, everywhere, in every industry.