Category archive: Enterprise IT

Cloud migrations driven by bosses, business leaders and board – report

The majority of cloud migrations are driven by the three Bs – bosses, board members and business leaders – as technology experts become marginalised, says a new report. However, the report also indicated that most projects end up being led by a technology-savvy third party.

Hosting vendor Rackspace’s new ‘Anatomy of a Cloud Migration’ study found that CEOs, directors and other business leaders, rather than IT experts, are behind 61% of cloud migrations. Perhaps surprisingly, 37% of these non-technical leaders see their cloud migration projects right through to completion, according to the study.

The report, which compiled feedback from a survey of 500 UK IT and business decision-makers, also revealed what’s in the cloud, why it’s there and how much IT has already been moved there. There was some good news for the technology expert: the report indicates that one of the biggest lessons learned was that cloud migration is not a good experience, and that the majority of companies end up consulting a third-party supplier. In the end, however, nine out of ten organisations reported that their business goals were met, albeit only ‘to some extent’. The report was compiled for Rackspace by Vanson Bourne.

Among the 500 companies quizzed, an average of 43% of the IT estate is now in the cloud. Cost cutting was the main motive in 61% of cases.

Surprisingly, 29% of respondents said they migrated their business-critical applications first, rather than embarking on a painful learning curve with a less important application. The report did not cross-reference this figure with the figures for migrations led by CIOs. However, 69% of respondents said they learned lessons from their migration that will affect future projects, which almost matches the 71% who didn’t make a mission-critical application their pilot migration project.

Other hoped-for outcomes nominated by the survey group were improvements in resilience (in 50% of cases), security (38%), agility (38%) and stabilising platforms and applications (37%).

A move to the cloud is no longer an exclusive function of the IT department, concluded Darren Norfolk, UK MD of Rackspace. “Whether business leaders understand the practicalities of a cloud migration project or not, there appears to be broad acceptance that they can do it,” he said.

Infectious Media CTO on how DevOps is affecting ICT teams

The fourth employee of Infectious Media, Dan de Sybel started his career as an Operations Analyst for Advertising.com, where, during a six-year tenure, he launched the European Technology division, producing bespoke international reporting and workflow platforms, as well as numerous time-saving systems and board-level business intelligence.

Dan grew the EU Tech team to 12 people before moving agency-side to Media Contacts UK, part of the Havas Media Group. At Havas, Dan was responsible for key technology partnerships and for spearheading the agency’s use of the Right Media exchange under its Adnetik trading division.

At Infectious Media, Dan’s Technology division produced one of the first Big Data analysis systems to reveal and visualise for clients the wealth of information that RTB provides. From there, the natural next step was to build the Impression Desk Bidder, which actions the insights gained from the data in real time and thus closes the loop on the programmatic life cycle. Dan’s team continues to enhance its own systems, whilst integrating the technology of other best-in-class suppliers to provide a platform that caters to each of its clients’ needs.

Ahead of his presentation at DevOps World on November 4th in London, Dan shares his insights on how he feels DevOps is affecting ICT teams, the DevOps challenges he is facing, and what he is doing to overcome them.

What does your role involve and how are you involved with DevOps?

Infectious Media runs its own real-time bidding software that takes part in hundreds of thousands of online auctions for online advertising space every second. As CTO, it’s my job to ensure we have the right team, processes and practices in place so that this high-frequency, low-latency system remains functional 24×7 and adapts to the ever-changing marketplace and standards of the online advertising industry.

DevOps practices evolved naturally at Infectious Media due to our small teams, one-week sprint cycles and the growing complexity of our systems. Our heavy use of the cloud meant that we could experiment frequently with different infrastructure setups and adapt code to deliver the best possible value for the investment we were prepared to make. These conditions resulted in far closer collaboration between developers and operations engineers, and we have not looked back since.

How have you seen DevOps affecting IT teams’ work?

Before adopting the DevOps philosophy, we struggled to bring the real-time bidding system to fruition, never sure if problems originated in the code, in the operational configuration of the infrastructure, or in the infrastructure itself. Whilst the cloud brought many benefits, never having complete control of the infrastructure stack led to many latency and performance issues that could not be easily explained. Furthermore, being unable to accurately simulate a real-world environment for testing without spending hundreds of thousands of pounds meant that we had to work out solutions for de-risking the testing of new code in live environments. All of these problems became much easier to deal with once we started following DevOps practices, and as a result we have a far happier and more productive technology team.
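One common way to de-risk testing new code against live traffic is a canary release, where only a small slice of requests exercises the new path and everything else stays on the proven one. The interview does not spell out Infectious Media’s actual mechanism, so the sketch below is purely illustrative, with hypothetical component names (new_bidder, stable_bidder, metrics):

```python
import random

# Minimal sketch of a canary guard: a small, configurable fraction of live
# requests runs through the new code path; any failure falls back to the
# stable path so live traffic is never dropped. All names are hypothetical.
CANARY_FRACTION = 0.01  # 1% of live requests exercise the new code path

def handle_bid_request(request, new_bidder, stable_bidder, metrics):
    if random.random() < CANARY_FRACTION:
        try:
            response = new_bidder(request)
            metrics.increment("bidder.canary.success")
            return response
        except Exception:
            # Record the failing canary, then serve the request with the
            # stable path instead of surfacing the error to live traffic.
            metrics.increment("bidder.canary.error")
    return stable_bidder(request)
```

Comparing the canary’s success and error counters against the stable path gives an early signal on new code without risking more than a small fraction of live auctions.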

What is the biggest challenge you have faced with DevOps, and how have you tried to overcome it?

The biggest challenge was overcoming the initial inertia to switch to a model that was so far unproven and regarded as a bit of a fad. Explaining agile methodologies and the compromises they involve to senior company execs is hard enough, but as soon as you mention multiple daily release cycles necessitating fewer governance processes and testing on live systems, you are bound to raise more than a few eyebrows.

Thankfully, we are a progressive company and the results proved the methodology. Since we adopted DevOps, we’ve had fewer outages; safer, more streamlined deployments; and, crucially, more features released in less time.

Can you share a book, article or movie that you recently read or watched and that inspired you with regard to technology?

The Phoenix Project. Perhaps a bit obvious, but it’s enjoyable to read a novel that covers some of the very real problems IT professionals experience in their day-to-day roles, along with the very solutions that we were experimenting with at the time.

What are you hoping to achieve by attending DevOps World?

Really my goal is to understand, and help with, some of the problems that rolling DevOps practices out across larger companies can present. In many respects, rolling out DevOps in small startups is somewhat easier, as you have far less inertia from tried-and-trusted practices, comparatively less risk and far fewer people to convince that it’s a good idea. I’ll be interested to hear about other people’s experiences and hopefully be able to share some advice based on our own.

3 approaches to a successful enterprise IT platform rollout strategy

Executing a successful enterprise IT platform rollout is as much about earning widespread support as it is about proper pacing. It’s necessary to sell the rollout within the organization, both to win budget approval and to gain general acceptance so that adoption of the new platform goes smoothly.

Each group being asked to change their ways and learn this new platform must have the value of the rollout identified and demonstrated for them. The goal of the rollout process is to see the platform solution become successfully adopted, self-sustaining, efficient in assisting users, and, ultimately, seamlessly embedded into the organization’s way of doing business.

Deploying a new solution for use across an organization boils down to three approaches, each with its advantages and drawbacks: rolling out slowly (to one department at a time), rolling out all at once (across the entire organization), or a cleverly targeted mix of the two.

Vertical Rollouts (taking departments one at a time, slow and steady)

This strategy involves selecting a single department or business function within the organization (e.g. customer support or HR) for an initial targeted rollout, then deploying the new platform in phases to each vertical, one at a time. The benefit here is a greater focus on the specific needs and usage models within the department receiving full attention during its phase of the rollout, yielding advantages in the customization of training and tools to best fit those users.

For example, the tools and interfaces used daily by customer service personnel may be entirely irrelevant to HR staff or to engineers, who will appreciate that their own solutions are being streamlined and their time respected, rather than having to accept a crude one-size-fits-all treatment and work out which components apply to them. It is then more obvious to each vertical audience what the added value is for them personally, which garners support and speeds platform adoption. Because this type of rollout is incremental, it’s ripe for iterative improvements and evolution based on user feedback.

Where vertical, phased rollouts are less effective is in gaining visibility within the organization, and in lacking the rallying cry of an all-in effort. This can make it difficult to win over those in departments that aren’t offered the same immediate advantages, and to achieve the critical mass of adoption necessary to launch a platform into a self-sustaining orbit (even for those tools that could benefit any user regardless of their department).

Horizontal Rollouts (deploying to everyone at the same time)

Delivering components of a new platform across all departments at once comes with the power of an official company decree: “get on board because this is what we’re doing now.” This kind of large-scale rollout makes everyone take notice, and often makes it easier not only to get budget approval (for one large scale project and platform rather than a slew of small ones), but also to fold the effort into an overall company roadmap and present it as part of a cohesive strategy. Similar organizational roles in the company can connect and benefit from each other with a horizontal rollout, pooling their knowledge and best practices for using certain relevant tools and templates.

This strategy of reaching widely with the rollout helps to ensure continuity within the organization. However, big rollouts come with big stakes: the organization only gets one try to get the messaging and the execution correct – there aren’t opportunities to learn from missteps on a smaller scale and work out the kinks. Users in each department won’t receive special attention to ensure that they receive and recognize value from the rollout. In the worst-case scenario, a user may log in to the new platform for the first time, not see anything that speaks to them and their needs in a compelling way, and not return, at least not until the organization wages a costly revitalization campaign to try and win them over properly.  Even in this revitalization effort, a company may find users jaded by the loss of their investment in the previous platform rollout.

The Hybrid Approach to Rollouts

For many, the best rollout strategy will borrow a little from both of the approaches above. An organization can control the horizontal and vertical aspects of a rollout to produce a two-dimensional, targeted deployment, with all the strengths of the approaches detailed above and fewer of the weaknesses. With this approach, each phase of a rollout can engage closely with the specific vertical groups most affected by the tools being deployed, while simultaneously casting a wide horizontal net to increase visibility and convey the rollouts as company initiatives that are key to overall strategy and deserving of attention across departments. Smartly targeting hybrid rollouts to introduce tools valuable across verticals – while focusing on the most valuable use case within each vertical – is essential to their success. In short, hybrid rollouts offer something for many, and a lot specifically for the target user being introduced to the new platform.

In executing a hybrid rollout of your enterprise IT platform, begin with a foundational phase that addresses horizontal use cases, while enticing users with the knowledge that more is coming. Solicit user feedback and put it to work in serving more advanced use cases as the platform iterates and improves. Next, start making the case for why the vertical group with the most horizontally applicable use cases should embrace the platform. With that initial group of supporters won over, you have a staging area from which to approach other verticals with specific hybrid rollouts, putting together the puzzle of how best to approach each while showcasing a wide scope and specific added value for each type of user. Importantly, don’t try to sell the platform as immediately being all things to all people. Instead, define and convey a solid vision for the platform, identify the purpose of the existing release, and let these hybrid rollouts take hold at a natural pace. This allows the separate phases to win over their target constituents and act as segments of a cohesive overall strategy.
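In practice, this kind of phased targeting is often expressed as simple feature-flag or rollout rules: a foundational set of horizontal features for everyone, plus vertical features enabled group by group. The following is a loose illustration only, with invented department and feature names; it is not a Liferay mechanism described in the article:

```python
# Hypothetical rollout plan: horizontal (foundational) features ship to every
# department in phase 1, while vertical features are switched on department by
# department in later phases. All names here are made up for illustration.
ROLLOUT_PLAN = {
    "phase_1": {"departments": ["*"], "features": ["search", "profiles", "document_sharing"]},
    "phase_2": {"departments": ["customer_support"], "features": ["ticket_dashboard"]},
    "phase_3": {"departments": ["hr"], "features": ["onboarding_workflows"]},
}

def enabled_features(department, completed_phases):
    """Return the set of features a user in `department` should see so far."""
    features = set()
    for phase in completed_phases:
        plan = ROLLOUT_PLAN[phase]
        if "*" in plan["departments"] or department in plan["departments"]:
            features.update(plan["features"])
    return features

# After the first two phases, support staff see the horizontal foundation plus
# their vertical tools, while HR still sees only the foundation.
print(sorted(enabled_features("customer_support", ["phase_1", "phase_2"])))
print(sorted(enabled_features("hr", ["phase_1", "phase_2"])))
```

Encoding the plan this way keeps the foundational phase visible to everyone while letting each vertical phase stay narrowly targeted, which mirrors the wide-net, deep-focus intent of the hybrid approach.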

If properly planned and executed, your enterprise IT platform rollout will look not like a patchwork quilt with benefits for some and not others, but rather a rich tapestry of solutions inviting to everyone, and beneficial to the organization as a whole.

 

Written by Roguen Keller, Director of Global Services at Liferay, an enterprise open source portal and collaboration software company.

Veritas warns of ‘databerg’ hidden dangers

Backup specialist Veritas Technologies claims European businesses waste billions of euros on huge stores of useless information, which grow every year. By 2020, it claims, this excess data will cost businesses over half a trillion pounds (£576bn) a year.

According to the Veritas Databerg Report 2015, 59% of data stored and processed by UK organisations is invisible and could contain hidden dangers. From this it has estimated that the average mid-sized UK organisation holding 1000 Terabytes of information spends £435k annually on Redundant, Obsolete or Trivial (ROT) data. According to its estimate just 12% of the cost of data storage is justifiably spent on business-critical intelligence.

The report blames employees and management for the waste. The first group treats corporate IT systems as their own personal infrastructure, while management are too reliant on cloud storage, which leaves them open to compliance violations and a higher risk of data loss.

The survey identified three major causes of Databerg growth: volume, vendor hype and the values of modern users. These root causes lead to IT strategies based on data volumes rather than business value. Vendor hype, in turn, has convinced users to become increasingly reliant on free storage in the cloud, and this consumerisation has led to a growing disregard for corporate data policies, according to the report’s authors.

As a result, big data and cloud computing could lead corporations to hit the databerg and incur massive losses. They could also sink under prosecution for compliance failings, according to the key findings of the Databerg Report 2015.

It’s time to stop the waste, said Matthew Ellard, Senior VP for EMEA at Veritas. “Companies invest a significant amount of resources to maintain data that is totally redundant, obsolete and trivial.” This ‘ROT’ costs a typical midsize UK company, which can expect to hold 500 Terabytes of data, nearly a million pounds a year on photos, personal ID documents, music and videos.

The study was based on a survey answered by 1,475 respondents in 14 countries, including 200 in the UK.

Oracle announces new levels of cloud, mobile and IoT integration in its Cloud Platform

Oracle has announced at OpenWorld a ‘comprehensive’ suite of integration services to help clients connect their cloud, mobile and IoT systems into the Oracle Cloud Platform.

The Oracle Cloud Platform for Integration portfolio now includes Oracle’s IoT Cloud, Integration Cloud, SOA Cloud and API Manager Cloud range of services.

Oracle says its Integration Cloud is ideal for non-technical users, such as citizen integrators, applications staff in IT departments and line-of-business managers, who need to integrate software-as-a-service (SaaS) applications. To this end, it comes with a simple, intuitive web-based, point-and-click user interface.

At the other end of the technical competence spectrum, Oracle’s SOA Cloud was designed for integration developers. It provides a full integration platform, including service virtualisation, process orchestration, B2B integration, managed file transfer and business activity monitoring dashboards. In keeping with the more detailed nature of its typical users’ work, it offers fine-grained control and supports a wide range of use cases.

Oracle’s integration cloud services are fully portable, it claims, so that users can switch their integration workloads between on-premise and the cloud, as business requirements change.

The IT architectures that organisations have relied on for decades are too rigid and inflexible for the digital age, according to Amit Zavery, senior VP of Cloud Platform and Integration products at Oracle. “Organisations need to rethink API management and service integration across cloud, mobile and IoT initiatives,” said Zavery.

Oracle Cloud Platform’s suite of integration services will give organisations the flexibility to adapt, which will boost productivity, slash costs and catalyse innovation, Zavery argued.

Oracle Internet of Things Cloud Service should make it easy to connect any device, whether it generates or analyses data, and to extend business processes within enterprise applications, says Oracle. This, it says, will lead to faster development of IoT applications, with preventive maintenance and asset tracking pre-integrated with other Oracle systems such as Oracle PaaS, Oracle SaaS, Oracle JD Edwards, Oracle E-Business Suite and Oracle Fusion.

Meanwhile, the Oracle API Manager Cloud Service will help developers to create and expose APIs to internal or external consumers quickly but without compromising on security, according to the vendor.

The Oracle Cloud Platform is part of the Oracle Cloud. Oracle says its Cloud offering supports 70 million users and more than 34 billion transactions each day and runs on more than 50,000 devices and more than 800 petabytes of storage in 19 data centres around the world.

HP Helion Public Cloud to end, buyers told to go to Amazon

HP has revealed that the OpenStack-driven HP Helion Public Cloud will close on January 31 2016, as it looks to focus on private and managed cloud offerings, which it says it will now ramp up.

HP announced the news via its blog, in which it also revealed that it would invest more in the Helion OpenStack platform, which, it said, has more realistic prospects for strong customer adoption. Helion OpenStack is the foundation of its private cloud offering.

Bill Hilf, HP Cloud’s general manager, explained the logic behind the decision. “The market for hybrid infrastructure is evolving quickly. Today, our customers are consistently telling us they want a hybrid combination of efficiently managed traditional IT and private cloud,” said Hilf. They only want access to software as a service (SaaS) applications and public cloud capabilities for certain workloads, he added.

With customers pushing for private cloud to be delivered faster than ever before, the company has had to prioritise, he said.

“We will continue to innovate and grow in our areas of strength, we will continue to help our partners and to help develop the broader open cloud ecosystem, and we will continue to listen to our customers to understand how we can help them with their entire end-to-end IT strategies,” said Hilf.

HP will support its new model by expanding its partner base and integrating different public cloud environments, Hilf said. Customers who want public cloud should go to Amazon, Hilf said.

“For customers who want access to existing large-scale public cloud providers, we have already added greater support for Amazon Web Services as part of our hybrid delivery with HP Helion Eucalyptus,” said Hilf.

Dell and Microsoft unveil joint hybrid cloud offering

Dell has expanded its cloud portfolio with a new hybrid cloud offering built on technology jointly developed with Microsoft. The new system is designed to break down the barriers to cloud adoption and offer a simpler yet more secure payment system.

According to Dell’s own research, nine out of ten IT decision makers say a hybrid cloud strategy is important to achieving a Future-Ready Enterprise. The recently unveiled Dell Global Technology Adoption Index revealed that 55% of organisations around the world will use more than one type of cloud. The study also identified cost and security as the biggest barriers to adopting the cloud, with complexity the biggest obstacle associated with hybrid cloud.

The new Dell Hybrid Cloud System for Microsoft promises customers an on-premise private cloud with consistent Azure public cloud access in less than three hours. Clients are promised minimised downtime, with non-disruptive, fully automated system updates that stay out of users’ way when not needed. It also offers workload templates to simplify service provision and governance models. The management of multiple clouds will be simplified by out-of-the-box integration with Dell Cloud Manager (DCM) and Windows Azure Pack (WAP), Dell says.

The Dell Hybrid Cloud System for Microsoft is built around the CPS Standard, which combines optimised Dell modular infrastructure with pre-configured Microsoft CPS software. This will include Microsoft’s software stack and Azure Services for back-up, site recovery and operational insights.

Meanwhile, the Dell Cloud Flex Pay programme gives customers a new, flexible option to buy Dell’s Hybrid Cloud System for Microsoft without making a long-term commitment. Cloud Flex Pay will eliminate the risk of being locked into paying for services that aren’t fully used, says Dell.

“Customers tell us their cloud journey is too complex, the cost-risk is too high and control isn’t transparent,” said Jim Ganthier, vice president and general manager of engineered solutions and cloud at Dell. “With our new Cloud Flex Pay program, cost-risk is all but eliminated.”

EMC, VMware unveil plans for Virtustream hybrid cloud for the enterprise

EMC and VMware are to combine their cloud offerings under a jointly owned 50/50 Virtustream brand, led by Virtustream CEO Rodney Rogers.

The cloud service will be aimed at enterprises with an emphasis on hybrid cloud, which Virtustream’s owners identify as one of the largest markets for IT infrastructure spending. The company will provide managed services for on-premises infrastructure and its enterprise-class Infrastructure-as-a-Service platform. The rationale is to help clients make the transition from on-premise computing to the cloud, migrating their applications to cloud-based IT environments. Since many applications are mission critical, hybrid cloud environments will be instrumental in the conversion process and Virtustream said it will set out to provide a public cloud experience for its Federation Enterprise Hybrid Cloud service.

Nearly one-third of all IT infrastructure spending is going to cloud-related technologies, according to research by The 451 Group, with cloud service buyers now investing in the application stack. Enterprise adoption is increasing, says the researcher, and buyers increasingly favour private and hybrid cloud infrastructure. Enterprise resource planning (ERP) software is increasingly being run on cloud systems, and enterprises will spend a total of $41.2B annually on ERP software by 2020, says The 451 Group.

Virtustream will incorporate the cloud offerings of EMC Information Infrastructure, VCE and VMware into a single business and will offer services using VMware vCloud Air, VCE Cloud Managed Services, Virtustream’s Infrastructure-as-a-Service and EMC’s Storage Managed Services and Object Storage Services offerings. VMware will establish a Cloud Provider Software business unit led by VMware senior VP Ajay Patel. The unit will incorporate existing VMware cloud management offerings and Virtustream’s software assets.

The business will integrate existing on-premises EMC Federation private clouds and extend them into the public cloud, according to Virtustream. The aim is to maintain a common experience for developers, managers, architects and end users. Virtustream’s cloud services will be delivered directly to customers and through partners.

Virtustream addresses the changes in buying patterns and IT cloud operation models that both vendors are encountering now, said EMC CEO Joe Tucci. “Customers consistently tell us they’re on IT journeys to the hybrid cloud. The EMC Federation is now positioned as a complete provider of hybrid cloud offerings.”

Virtustream’s financial results will be consolidated into VMware’s financial statements beginning in Q1 2016.

IBM cloud service revenue up despite 14th quarterly revenue decline

IBM has posted an unexpectedly large drop in revenue and cut its full-year profit forecast, blaming the strong US dollar for dampening demand from China and emerging markets. Though cloud, big data, mobile and other strategic markets are growing, their rise is not enough to arrest a long-term trend of decline.

IBM, which gets more than half its business from overseas, says it has been affected because the dollar is currently up 17% against a basket of currencies compared with this time last year.

Chinese sales were particularly affected, with fewer big deals being registered. As a consequence revenue from China fell 17%, IBM’s chief financial officer Martin Schroeter told analysts. Sales in Brazil, Russia, India and China combined were down 30%.

The company’s total revenue fell 13.9% to $19.28 billion in the quarter, below analysts’ average forecast of $19.62 billion.

It was the 14th quarter in a row that IBM has posted a reduction in revenue. As IBM divests itself of low-margin businesses, it has yet to make up the shortfall through cloud computing, according to analysts.

“This is another example of the massive headwinds that traditional tech stalwarts are seeing in this ever-changing environment, as more customers move to the cloud,” said FBR Capital Markets analyst Daniel Ives.

According to IBM CFO Martin Schroeter, weakness in IBM’s consulting and storage businesses accounts for the revenue shortfall, rather than the performance of its cloud services.

“I would characterize it as the consulting and systems integration business moving away from these large, packaged applications and the storage business moving to flash and to the cloud,” Schroeter told Reuters in an interview.

Revenue from IBM’s ‘strategic imperatives’ – cloud and mobile computing, data analytics, and social and security software – rose 17 per cent in the third quarter, which ended on September 30th.

IBM’s net income from continuing operations fell to $2.96 billion, or $3.02 per share, from $3.46 billion, or $3.46 per share, a year earlier.

At the close of trading yesterday (Monday) IBM’s shares had fallen 7 per cent this year.

Infosys to use IBM’s Bluemix to make next generation of cloud apps

IBM and Infosys have announced a partnership in which Infosys will use IBM’s Bluemix platform to prototype, develop and roll out new cloud apps for its client base in 50 countries.

The partners will launch a Bluemix-powered Innovation Lab in which Infosys and its clients can work together to create applications. Infosys developers will be trained on Bluemix and tutored in cloud app development. Infosys will also get access to IBM Bluemix Dedicated, a library of cognitive computing and analytics systems and services for building client apps.

The Infosys Innovation Lab will be staffed with a dedicated team of designers, ‘extreme agile’ specialists and industry and technology architects. Infosys has 187,000 employees and a turnover of $8.7 billion.

IBM launched Bluemix with a US$1 billion investment in 2014, and it now claims to be the largest Cloud Foundry deployment in the world, with a catalogue of over 120 tools and software services spanning the top open-source, IBM and third-party technologies.

The partnership is all about getting access to these technologies and sharing them with clients, according to Srikantan Moorthy, Head of Application Development and Maintenance at Infosys. “Our goal is to bring these advanced technologies to clients’ application landscape in the most rapid and collaborative way possible,” said Moorthy. “Infosys will also incorporate any Bluemix-related curriculum into its on-boarding and training process.”

The disruptive forces of cognitive computing, analytics and IoT are all delivered through the cloud, and Bluemix will only accelerate these changes, according to Steve Robinson, IBM Cloud’s General Manager. “Developers can accelerate the deployment of these next-generation apps, and this collaboration with Infosys will advance our clients’ journey.”