Category Archives: Enterprise IT

Japan Post, IBM and Apple ink cloud, iPad deal

Tim Cook, Apple CEO and Ginni Rometty, IBM CEO, walking the walk and talking the talk

Japan Post, IBM and Apple are partnering to deploy iPads with IBM-developed apps and cloud services to give local seniors access to healthcare and community services.

As part of its Watch Over service for the elderly, Japan Post will deploy custom iOS apps built by IBM Global Business Services, which will provide services like medication reminders, exercise and diet tracking, community activity scheduling and grocery shopping.

“What we’re starting today draws on IBM’s long heritage of innovation at the intersection of technology, business and society,” said Ginni Rometty, president, chairman and chief executive of IBM.

“The potential we see here – as broad as national economics and as specific as the quality of life of individuals and their families – is one example of the potential of mobile-led transformation anywhere in the world where issues of an aging population exist,” Rometty said.

The move will also see Japan Post deploy iPads and IBM cloud services – things like analytics, training services and collaboration services – for its own employees.

“We are joining with two of the world’s most respected leaders in technology to bring our elderly generation into the connected world, expand our businesses by deepening relationships, and discover new ways to strengthen the fabric of our society and economy,” said Taizo Nishimuro, chief executive of Japan Post Group.

Apple chief executive Tim Cook also commented on the deal: “This initiative has potential for global impact, as many countries face the challenge of supporting an aging population, and we are honoured to be involved in supporting Japan’s senior citizens and helping enrich their lives.”

Japan Post Group has been piloting iPads, custom apps and cloud services for the elderly since last year, and the company hopes to reach between four and five million elderly customers by 2020.

Mariinsky Theatre taps IBM cloud to improve broadcasting

Russia’s Mariinsky Theatre is using a hybrid cloud to support live performance broadcasts

Russia’s Mariinsky Theatre is working with IBM to deploy a hybrid cloud solution intended to improve its ability to stream live video of performances to mobile devices globally.

The theatre already has more than 250,000 unique viewers around the world tuned into Mariinsky.tv, where it currently hosts webcasts of live performances and on-demand recordings.

It had previously broadcast performances on the web using its own on-premise platform to edit and stream performances, but the company said it sought a cloud-based platform in a bid to expand its global reach and better withstand peaks in demand.

“To support a growing global community of loyal Theatre audiences, we needed a scalable, hybrid cloud solution that could meet the standards for quality that we and our viewers expect,” said Eugene Barbashin, head of the computer technology department, Mariinsky Theatre.

The company said its digital platform at times needs to scale to support thousands of simultaneous viewers, particularly around very popular orchestra or ballet performances, but that it was struggling to cope with peaks in demand.

“We tried various competitive offerings from Amazon Web Services and Microsoft Azure, but viewers still had buffering issues while streaming performances. We chose IBM Cloud to more fully meet our needs in terms of reliable performance and ease-of-use,” Barbashin added.

Thomas Cook deploys cloud-based workforce management platform

The travel agency is moving its on-premise workforce management platform to the cloud

Thomas Cook has swapped out its on-premise workforce management solution for a cloud-based alternative in a bid to make the company more responsive and competitive, it announced this week.

The travel agency said it has been using Nice Systems’ workforce management platform for a number of years, but it decided to move onto the company’s cloud-based service to help gain a consolidated view of its workforce, which would make things like scheduling and forecasting more efficient.

“Following many years of success with Nice’s workforce management solution, we decided to move our operations to the cloud in order to accommodate our growing business needs, which includes a multi-channel service operation,” said Martin West, head of central operations support, Thomas Cook UK & Ireland.

“This has also given us the opportunity to centralise our customer-facing operations, which will help us achieve greater operational efficiency, better service, and reduced costs,” West said.

Benny Einhorn, president, Nice EMEA said: “With this cloud deployment, the company has a clear, organization-wide view into the forecasting and scheduling of staff, while at the same time retail personnel have ownership over their schedules. We’re proud of our partnership with Thomas Cook which provides an excellent example of how a company can deliver outstanding customer service through employee engagement.”

Cloud democratises retail investor services

Cloud has the potential to democratize investment services

Cloud services are opening up possibilities for the retail investor to create individual customised funds in a way that was previously the preserve of the super-wealthy. Coupled with UK regulation such as the Retail Distribution Review, the effect has been to make new business models possible, according to Michael Newell, chief executive at InvestYourWay.

“Nobody is really talking about how the cloud is fundamental to what they do, but it is,” said Newell. “Where previously it might have taken days or even weeks to get the information to set up a fund, and to change your portfolio and positions completely, and to activate your account, it now takes just a few seconds thanks to Amazon Cloud.”

Newell previously worked at BATS, where he was involved alongside Mark Hemsley in setting up the exchange’s ETF services. For some time, he had been increasingly aware of the kind of services that high net worth investors were getting and began to form an idea that someone could bring that to the common retail investor. The idea was to create a system where each individual person has their own fund. However, Newell soon realised that to make that possible, it would be necessary to service customers investing smaller amounts at significantly lower cost – something that had never really been viable up to that point.

“You’d never get that kind of individual attention unless you were high net worth,” he said. “If you’ve only got £2,000 to invest, it’s not going to be worth a fund manager spending the time with you and charging just a few pounds for their time, which is what they’d need to do to make it viable. It just didn’t work.”

Cloud services changed both the economics of the situation and the practicality of his original idea. Newell found that by obtaining computing power as a service, calculations that would have taken 48 hours on a laptop could now be completed in 30 seconds. A manual Google search process carried out by an individual to work out how best to invest might take days at the least, or more realistically weeks and even months – but on InvestYourWay, it can be done in seconds because the process is automated.

Part of the impetus for the new business was also provided by regulatory change, which began to make it easier to compete in the UK with the established fund managers. Specifically, the Retail Distribution Review which came into effect in January 2013 had the effect of forcing fund managers to unbundle their services, providing transparency into previously opaque business charges. Customers could now see exactly what they were being charged for, and that has had the effect of forcing down prices and changing consumer behaviour.

“It’s amazing that it took so long to bring that to the retail investor,” said Newell. “If you think about it, all of this has been happening in the capital markets for years. The focus on greater transparency and unbundling. The clarity on costs and fees.”

However, the idea still needed visibility and a user-base. This was provided when the platform agreed a deal with broker IG, under which InvestYourWay became a service available as an option on the drop-down menu for IG customers. The platform launched in October 2014, offering investment based on indexes rather than single stocks. This was done in part to keep costs down, and partly for ideological reasons. Newell explains that alternative instruments such as ETFs are popular, but would have involved gradually increasing slippage over time due to the costs of middle men. Focusing on indexes removes that problem.

The platform also claims to be the first to offer non-leveraged contract for difference trading. While around 40% of trading in London is estimated to be accounted for by CFDs, normally these are leveraged such that an investor who puts in £1,000 stands to gain £10,000 (but may also lose on the same scale). But IYW’s contracts are not leveraged.
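The leverage arithmetic above can be sketched in a few lines. This is an illustrative calculation only – the function names are invented and the 10x ratio and price moves are hypothetical figures for demonstration, not IG or InvestYourWay product terms.

```python
# Illustrative sketch of the leverage arithmetic described above.
# The 10x ratio and price moves are hypothetical demonstration figures.

def cfd_exposure(stake: float, leverage: float = 1.0) -> float:
    """Market exposure controlled by a stake at a given leverage ratio."""
    return stake * leverage

def pnl(stake: float, leverage: float, price_move_pct: float) -> float:
    """Profit or loss on the position for a given percentage price move."""
    return cfd_exposure(stake, leverage) * price_move_pct / 100

# A £1,000 stake at 10x leverage controls £10,000 of exposure,
# so a 10% adverse move wipes out the entire stake:
assert cfd_exposure(1000, 10) == 10000
assert pnl(1000, 10, -10) == -1000.0
# A non-leveraged contract moves one-for-one with the stake:
assert pnl(1000, 1, -10) == -100.0
```

The asymmetry is the point of IYW's pitch: without leverage, a retail investor's maximum loss is bounded by the stake itself.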

The interface of the platform has quite a bit in common with the latest personal financial management interfaces. The first page consists of a time slider, a risk slider, and the amount the user wants to invest, as well as a preferred geographical focus – Europe, America or Asia. After that, users get a pie chart breaking down how the service has allocated their investment based on the sliders – for example, into categories such as North American fintech startups, Asian banks and European corporates. Users also get bar charts showing the historical performance of the fund they are designing as they go along. They can also see an Amazon-style recommendation suggesting “People who invested in X also bought Y…”

After that, the user is presented with optional add-ons such as investment in gold, banks, metals, pharmaceuticals, and other areas that may be of special interest. Hovering the mouse over one of these options allows the user to see what percentage of other funds have used that add-on. Choosing one of the add-ons recalibrates the fund that the user is creating to match, for example adding a bit more Switzerland if the user selected banks.
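The slider-driven flow described above can be sketched as follows. InvestYourWay's actual allocation logic is not public, so the `allocate` function, its bucket names and its weighting formula are all invented for illustration.

```python
# Hypothetical sketch of slider-driven fund construction, loosely modelled
# on the interface described above. InvestYourWay's real allocation logic
# is not public: the buckets and weighting formula here are invented.

def allocate(amount: float, risk: float, horizon_years: int,
             region: str = "Europe") -> dict:
    """Split an investment across index buckets from two slider inputs.

    risk ranges from 0.0 (cautious) to 1.0 (aggressive).
    """
    # Higher risk and longer horizons tilt the fund towards equity
    # indexes; the remainder sits in a defensive bucket.
    equity_share = min(1.0, 0.3 + 0.5 * risk + 0.02 * horizon_years)
    return {
        f"{region} large-cap index": round(amount * equity_share * 0.7, 2),
        f"{region} mid-cap index": round(amount * equity_share * 0.3, 2),
        "Defensive bonds/cash": round(amount * (1 - equity_share), 2),
    }

# A £2,000 stake, mid-range risk, ten-year horizon, Asian focus:
portfolio = allocate(2000, risk=0.5, horizon_years=10, region="Asia")
```

Rebalancing in a few clicks, as in the demonstration below, would then amount to calling the function again with a different `region` and pricing the difference between the two portfolios.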

In a demonstration seen by Banking Technology, it was possible to adjust a fund by getting out of Europe and moving the user’s investment to Asia in a few clicks. According to Newell, it would take weeks to do that the traditional way; the process might involve moving money from one fund manager to another or starting an entirely new fund. It was also possible to see the cost of that move – on a £10,000 investment it was £13. Prices are matched to the most recent available end-of-day data.

Sinopec taps Alibaba for cloud, analytics services

Sinopec is working with Aliyun to roll out a series of cloud and big data services

Aliyun, Alibaba’s cloud services division, is working with China Petroleum & Chemical Corporation (Sinopec) to roll out a set of cloud-based services and big data technologies to enable the firm to improve its exploration and production operations.

In a statement to BCN the companies said they will work together to roll out a “shared platform for building-based business systems, big data analytics” and other IT services tailored to the petroleum industry.

“We hope to be able to use Alibaba’s technology and experience in dealing with large-scale system architecture, multi-service data sharing, data applications in the large-scale petrochemical, oil and chemical industry operations,” Sinopec said.

The two companies also plan to explore the role of cloud and big data in connected vehicles.

Just last month Aliyun opened its first overseas datacentre in Silicon Valley, a move the Chinese e-commerce giant said will bolster its appeal to Chinese multinational companies.

The company has already firmed up partnerships with large multinationals including PayPal and Dutch electronics giant Philips. The company has five datacentres in China.

It would seem a number of large oil and gas firms have begun to warm to the cloud of late. Earlier this week Anadarko Petroleum Corporation announced it had signed a five-year deal that will see the firm roll out PetroDE’s cloud-based oil and gas field evaluation analytics service.

The cloud beyond x86: How old architectures are making a comeback

x86 is undeniably the king of datacentre compute architecture, but there’s good reason to believe old architectures are making a comeback

When you ask IT pros to think of cloud, the first thing that often comes to mind is web-delivered, meter-billed virtualised compute (and increasingly storage and networking) environments, which today tend to imply an x86-centric stack built to serve up almost any workload. But anyone watching this space closely will see x86 isn’t the only kid on the block, with SPARC, ARM and Power all vying for a large chunk of the scale-out market as enterprises seek to squeeze more power out of their cloud hardware. What will the cloud stack of tomorrow look like?

Despite the dominance of x86 in the datacentre it is difficult to ignore the noise vendors have been making over the past couple of years around non-x86 architectures like ARM (ARM), SPARC (Oracle) and Power (IBM), but it’s easy to understand why: simply put, the cloud datacentre market is currently the dominant server market, with enterprises looking to consume more software as a service and outsource more of their datacentre operations than ever before.

Sameh Boujelbene, director of server research at Dell’Oro Group, says over 50 per cent of all servers will ship to cloud service providers by 2018, and the size of the market (over $40bn annually by some estimates) creates a massive opportunity for new – and in some cases old – non-x86 vendors aiming to nab a large chunk of it.

The nature and number of workloads is also changing. The number of connected devices sending or requesting data that needs to be stored or analysed, along with the number and nature of workloads processed by datacentres, will more than double in the next five years, Boujelbene explains. This increase in connected devices and workloads will drive the need for more computing capacity and more physical servers, while driving exploration of more performant architectures to support this growing workload heterogeneity.

This article appeared in the March/April edition of BCN Magazine.

But it’s also important to recognise how migration to the cloud is impacting the choice of server form factors, choice of server brand and the choice of CPU architecture from the datacentre or cloud service provider perspective. Needless to say, cloud service providers have to optimise their datacentre efficiency at every turn.

“Generally, they are moving from general purpose servers to workload optimised servers,” Boujelbene explains. “We see cloud accounts going directly to white box servers shipped by ODMs, not only to cut costs but also because ODMs allow customisation; traditional server OEMs such as Dell, HP and IBM simply didn’t want to provide customised servers a few years ago.”

Boujelbene sees big opportunities for alternative architectures to x86 such as ARM, SPARC or Power because they provide better performance to run specific types of workloads, and Intel is reacting to that trend by making customised CPUs available to some large cloud accounts. The company has about 35 customised CPU SKUs, and growing, and late last year won a pretty large contract to supply Amazon Web Services, the largest and most established of the public cloud providers, with custom Intel Xeon E5-2666 v3 (Haswell) processors.

Others in the ecosystem, some long expected to join the fray and others less so, are being enticed to get involved. Mobile chip incumbent Qualcomm announced plans in November last year to enter the server chip market with its own ARM-based offerings at some point over the next two years; the company believes the market represents a $15bn opportunity over the next five years.

And about a month before the Qualcomm announcement HP unveiled what it called the first “enterprise-grade ARM-based server,” its Moonshot range – the first to support ARM’s v8 architecture. Around the same time, Dell’s chief executive officer and founder Michael Dell intimated to a room of journalists his company, a long time Intel partner, would not be opposed to putting ARM chips in its servers.

SPARC and Power are both very compelling options when it comes to high I/O data analytics – where they are notably more performant than commodity x86. ARM’s key selling points have more to do with the ability to effectively balance licensing, design and manufacturing flexibility with power efficiency and physical density, though the company’s director of server programmes Jeff Underhill says other optimisations – being driven by cloud – are making their way to the CPU level.

“Cloud infrastructure by its very nature is network and storage-centric. So it is essential it can handle large numbers of simultaneous interactions efficiently optimising for aggregate throughput rather than just focusing on the outright performance of a single server. Solutions with integrated high performance networking, as well as storage and domain specific accelerators augmenting their general processor capabilities, offer significantly improved throughput versus traditional general purpose approaches,” Underhill says.

Underhill explains that servers are actually becoming more specialised, though there is and will continue to be a need for general-purpose servers and architectures to support them.

“The really interesting thing to look at is the area where networking and server technologies are converging towards a more scalable, flexible and dynamic ‘infrastructure’. Servers are becoming more specialised with advanced networking and storage capabilities mixed with workload specific accelerators,” he says, adding that this is pushing consolidation of an increasing number of systems (particularly networking) onto the SoC.

Hedging Their Bets

Large cloud providers – those with enough resource to write their own software and stand up their own datacentres – are the primary candidates for making an architectural shift in the scale-out market, given the cost-prohibitive nature of such a move for smaller players (and the millions of dollars in potential cost savings if it can be pulled off well).

It’s no coincidence Google, Facebook and Amazon have, with varying degrees of openness, flirted with the idea of shifting their datacentres onto ARM-based or other chips. Google for instance is one of several service providers steering the direction of the OpenPower Foundation (Rackspace is another), a consortium set up by IBM in December 2013 to foster cross-industry open source development of the Power architecture.

Power, which for IBM is the core architecture underlying its high-end servers and mainframes as well as its more recently introduced cognitive computing as a service platform Watson, is being pitched by the more than 80 consortium members as the cloud and big data architecture of choice. Brad McCredie, IBM fellow, vice president of IBM Power Systems Development and president of the OpenPower Foundation, says there is a huge opportunity for the Power architecture to succeed because of barriers in how technology cost and performance at the CPU level is scaling.

“If you go back five or six years, when the base transistor was scaling so well and so fast, all you had to do was go to the next-gen processor to get those cost-to-performance takedowns you were looking for. The best thing you could do, all things considered or remaining equal, is hop onto the next-gen processor. Now, service providers are not getting those cost take-down curves they were hoping for with cloud, and a lot of cloud services are run on massive amounts of older technology platforms.”

The result is that technology providers have to pull on more and more levers – like adding GPU acceleration or enabling GPU virtualisation, or enabling FPGA attachment – to get cost-to-performance to come down; that is driving much of the heterogeneity in the cloud – different types of heterogeneity, not just at the CPU level.

There’s also a classic procurement-related incentive for heterogeneity among providers. The diversity of suppliers means spreading that risk and increasing competitiveness in the cloud, which is another good thing for cost-to-performance too.

While McCredie says that it’s still early days for Power in the cloud, and that Power is well suited to a particular set of data-centric workloads, he acknowledges it’s very hard to stay small and niche on one hand and continue to drive down cost-to-performance. The Foundation is looking to drive at least 20 to 30 per cent of the scale-out market, which – considering x86 has about 95 per cent share of that market locked up – is fairly ambitious.

“We have our market share in our core business, which for IBM is in the enterprise, but we also want share in the scale-out market. To do that you have to activate the open ecosystem,” he says, alluding to the IBM-led consortium.

It’s clear the increasingly prevalent open source mantra in the tech sector is spreading to pretty much every level of the cloud stack. For instance Rackspace, which participates in both OpenStack and the Open Compute Project, open source cloud software and hardware projects respectively, is actively working to port OpenStack over to the Power architecture, with the goal of having OpenStack running on OpenPower / Open Compute Project hardware in production sometime in the next couple of years. It’s that kind of open ecosystem McCredie says is essential in cloud today and, critically, such openness need not come at the cost of loose integration or a consequent performance tax.

SPARC, which has its roots in financial services, retail and manufacturing, is interesting in part because it remains a fairly closed ecosystem and largely ends up in machines finely tuned to very specific database workloads. Yet despite Oracle incurring losses for several years following its acquisition of Sun Microsystems, the architecture’s progenitor, its hardware business mostly bucked that trend (one experienced by most high-end server vendors) throughout 2014 and continues to do so.

The company’s 2015 Q2 saw its hardware systems grow 4 per cent year on year to roughly $717m, with the SPARC-based Exalogic and SuperCluster systems achieving double-digit growth.

“We’ve actually seen a lot of customers that have gone from SPARC to x86 Linux now very strongly come back to SPARC Solaris, in part because the technology has the audit and compliance features built into the architecture, they can do one click reporting, and because the virtualisation overhead with Solaris on SPARC is much lower when compared with other virtualisation platforms,” says Paul Flannery, senior director EMEA product management in Oracle’s server group.

Flannery says openness and heterogeneity don’t necessarily lead to the development of the most performant outcome. “The complexity of having multiple vendors in your stack and then having to worry about the patching, revision labels of each of those platforms is challenging. And in terms of integrating those technologies – the fact we have all of the databases and all of the middleware and the apps – to be able to look at that whole environment.”

Robert Jenkins, chief executive officer of CloudSigma, a cloud service provider that recently worked with Oracle to launch one of the first SPARC-as-a-Service platforms, says that ultimately computing is still very heterogeneous.

“The reality is a lot of people don’t get the quality and performance that they need from public cloud because they’re jammed through this very rigid frame- work, and computing is very heterogeneous –which hasn’t changed with cloud,” he says. “You can deploy simply, but inefficiently, and the reality is that’s not what most people want. As a result we’ve made efforts to go beyond x86.”

He says the company is currently hashing out a deal with a very large bank that wants to use the latest SPARC architecture as a cloud service – so without having to shell out half a million dollars per box, which is roughly what Oracle charges, or migrate off the architecture altogether, which is costly and risky. Besides capex, SPARC is well suited to be offered as a service because the kinds of workloads that run on the architecture tend to be more variable or run in batches.

“The enterprise and corporate world is still focused on SPARC and other older specialised architectures, mainframes for instance, but it’s managing that heterogeneous environment that can be difficult. Infrastructure as a service is still fairly immature, and combined with the fact that companies using older architectures like SPARC tend not to be first movers, you end up in this situation where there’s a gap in the tooling necessary to make resource and service management easier.”

Does It Stack Up For Enterprises?

Whereas datacentre modernisation during the 90s entailed, among other things, a transition away from expensive mainframes running Unix workloads towards lower-cost commodity x86 machines running Linux or Microsoft-based software packages on bare metal, for many large enterprises, much of the 2000s focused on virtualising the underlying hardware platforms in a bid to make them more elastic and more performant. Those hardware platforms were overwhelmingly x86-based.

But many of those same enterprises refused to go “all-in” on virtualisation or x86, maintaining multiple compute architectures to support niche workloads that ultimately weren’t as performant on commodity kit; financial services and the aviation industry are great examples of sectors where one can still find plenty of workloads running on 40-50 year old mainframe technology.

Andrew Butler, research vice president focusing on servers and storage at Gartner and an IT industry veteran says the same trend is showing up in the cloud sector, as well as to some extent the same challenges.

“What is interesting is that you see a lot of enterprises claiming to move wholesale into the cloud, which speaks to this drive towards commoditisation in hardware – x86 in other words – as well as services, features and decision-making more generally. But that’s definitely not to say there isn’t room for SPARC, Power, mainframes or ARM in the datacentre, despite most of those – if you look at the numbers – appearing to have had their day,” Butler says.

“At the end of the day, in order to be able to run the workloads that we can relate to, delivering a given amount of service level quality is the overriding priority – which in the modern datacentre primarily centres on uptime and reliability. But while many enterprises were driven towards embracing what at the time was this newer architecture because of flexibility or cost, performance in many cases still reigns supreme, and there are many pursuing the cloud-enablement of legacy workloads, wrapping some kind of cloud portal access layer around a mainframe application for instance.”

“The challenge then becomes maintaining this bi-modal framework of IT, and dealing with all of the technology and cultural challenges that come along with all of this; in other words, dealing with the implications of bringing things like mainframes into direct contact with things like the software defined datacentre,” he explains.

A senior datacentre architect working at a large American airline who insists on anonymity says the infrastructure management, technology and cultural challenges alluded to above are very real. But they can be overcome, particularly because some of these legacy vendors are trying to foster more open exposure of their APIs for management interfaces (easing the management and tech challenge), and because ops management teams do get refreshed from time to time.

What seems to have a large impact is the need to ensure the architectures don’t become too complex, which can occur when old legacy code takes priority simply because the initial investment was so great. This also makes it more challenging for newer generations of datacentre specialists coming into the fold.

“IT in our sector is changing dramatically but you’d be surprised how much of it still runs on mainframes,” he says. “There’s a common attitude towards tech – and reasonably so – in our industry that ‘if it ain’t broke don’t fix it’, but it can skew your teams towards feeling the need to maintain huge legacy code investments just because.”

As Butler alluded to earlier, this bi-modality isn’t particularly new, though there is a sense among some that the gap between all of the platforms and architectures is growing when it comes to cloud, due to the expectations people have on resilience and uptime but also ease of management, power efficiency, cost, and so forth. He says that with IBM’s attempts to gain mindshare around Power (in addition to developing more cloudy mainframes), ARM’s endeavour to do much the same around its processor architecture and Oracle’s cloud-based SPARC aspirations, things are likely to remain volatile for vendors, service providers and IT’ers for the foreseeable future.

“It’s an incredibly volatile period we’re entering, where this volatility will likely last seven years, possibly up to a decade, before it settles down – if it settles down,” Butler concluded.

Close to 40% of IT DMs find cloud is falling short once implemented, survey finds

Enterprises are struggling to manage the transition to cloud services, an NTT report suggests

Close to four in ten IT decision makers believe cloud as it is implemented in their organisation is falling short of its potential, and nearly the same proportion (41 per cent) say they find managing cloud vendors confusing, according to a recently published report.

A recently published NTT survey of over 1,600 IT decision makers in Europe and the US sheds some light on the challenges enterprises are facing in adopting cloud services.

While nearly four in ten believe cloud as implemented in their organisations is falling short, many are finding that a drive towards cloud services internally is displacing investment in other key areas of IT – and are struggling to manage this bi-modal IT framework.

About 17 per cent of respondents agreed they spend more time developing capabilities for applications hosted in the cloud than they do for those in their datacentres, but many more said they were spending significantly more time maintaining the current performance of both cloud (44 per cent) and corporate datacentre (55 per cent) applications.

Still, about 41 per cent of respondents said migrating their critical apps to the cloud was too challenging to warrant the move.

“Our study shows the reality of cloud in 2015 is potentially as complex as the world it was supposed to replace. ICT decision-makers harbor significant frustrations over cloud, and there are no clear answers over which kinds of applications belong where,” said Len Padilla, vice president of product strategy at NTT Com. “There needs to be a far smoother migration path from the datacentre to the cloud. A different kind of planning approach is required for companies to achieve the large-scale digital transformations business executives are demanding.”

“ICT decision-makers see the cloud as a compelling enabling technology for digital transformation – there’s no better way to take a new app from the sandbox to global production quickly. However, our study suggests focusing on ambitious plans is not the best approach. Focusing on continuous improvement and incremental steps is a far more effective strategy,” he explained.

While close to 90 per cent of respondents said they plan to move some applications to the cloud at some point, the results of the report still raise questions about how enterprises will cope with the challenges of managing the transition.

Rio Tinto moves ERP, IM systems to Accenture cloud

Rio Tinto has announced a partnership with Accenture that will see the global mining firm move the bulk of its application landscape to Accenture’s public cloud service.

Rather than add new systems into the mix, the deal will see Accenture help the British-Australian firm consolidate its ERP and information management (IM) platforms and move them onto Accenture’s cloud infrastructure. As part of the move, Accenture will manage the lifecycle of the applications, which will be hosted in Accenture’s datacentres.

Rio Tinto Group said it moved its application landscape in a bid to save costs and switch to an “as-a-service” IT model that allows it to pay only for the resources it uses.

“Rio Tinto is on an ambitious journey to a world-class IS&T delivery model that is innovative, adaptable and cost-effective, fully supporting our business priorities and group operating model,” said Rio Tinto Group chief information officer Simon Benney.

“We selected Accenture to help us manage this transformation based on its global delivery capabilities, its vision for the intelligent business cloud and its ability to support our digital transformation programme,” Benney said.

Pierre Nanterme, chairman and chief executive of Accenture said: “This solution will allow Rio Tinto to smartly connect its infrastructure, software applications, data and operations capabilities in order to become an agile, intelligent, digital business that can better navigate the commodities cycles.”

Close to half of manufacturers look to cloud for operational efficiency, survey reveals

Manufacturers are flocking to cloud services to reap operational benefits

About half of all large manufacturers globally are using, or plan to use, IT services based on public cloud platforms in a bid to drive operational efficiency, an IDC research survey reveals.

A recently published IDC survey which polled 437 IT decision makers at large manufacturing firms globally suggests manufacturers are looking to cloud services primarily to simplify their operations.

A majority of manufacturers worldwide are currently using public (66 per cent) or private cloud (68 per cent) for more than two applications, and nearly 50 per cent of European manufacturers have adopted or intend to adopt ERP in the public cloud.

But only 30 to 35 per cent of respondents said operations, supply chain and logistics, sales, or engineering functions were likely to benefit from cloud adoption.

“Manufacturers are in the midst of a digital transformation, in which 3rd platform technologies are absolutely essential to the way they do business and in the products and services they provide to their customers.  Consequently, a strategic approach to adopting cloud is absolutely essential,” said Kimberly Knickle, research director, IDC Manufacturing Insights.

“Because of cloud’s tremendous value in making IT resources available to the business based on business terms (speed, cost, and accessibility), manufacturers must ensure that the line of business and IT management work together in defining their requirements,” Knickle said.

The firm said manufacturers are likely to opt for private cloud platforms in the near term as they expand their IT estates to the cloud, but that capacity requirements will likely eventually shift those workloads onto larger public cloud platforms. A big driver will be the Internet of Things, with cloud a key component in allowing manufacturers to more easily make use of the data collected from sensors throughout manufacturing operations.

US Army deploys hybrid cloud for logistics data analysis

The US Army is working with IBM to deploy a hybrid cloud platform to support its logistics system

The US Army is partnering with IBM to deploy a hybrid cloud platform to support data warehousing and data analysis for its Logistics Support Activity (LOGSA) platform, the Army’s logistics support service.

LOGSA provides logistics information capabilities through analytics tools and BI solutions to acquire, manage, equip and sustain the materiel needs of the organisation, and is also the home of the Logistics Information Warehouse (LIW), the Army’s official data system for collecting, storing, organizing and delivering logistics data.

The Army said it is working with IBM to deploy LOGSA, which IBM says is the US federal government’s largest logistics system, on an internal hybrid cloud platform in a bid to improve its ability to connect with other IT systems, broaden the organisation’s analytics capabilities, and cut costs (by up to 50 per cent, the Army reckons).

Anne Altman, General Manager for U.S. Federal at IBM said: “The Army not only recognized a trend in IT that could transform how they deliver services to their logistics personnel around the world, they also implemented a cloud environment quickly and are already experiencing significant benefits. They’re taking advantage of the inherent benefits of hybrid cloud: security and the ability to connect it with an existing IT system. It also gives the Army the flexibility to incorporate new analytics services and mobile capabilities.”