Traditional software giants are starting to kick it in the cloud — so what’s next?

(c)iStock.com/Maciej Noskowski

For years the benefits of the cloud have been the subject of much debate within the IT industry. In the past, sceptics claimed that the cloud wasn’t reliable or secure, although Gartner sought to dispel that myth; today, those sceptics are far harder to find. Our latest Businesses @ Work report shows that companies are not only enabling their employees to use their own apps and mobile devices at work, but are also using a range of cloud apps to boost connectivity with customers, contractors and partners via cloud-based portals. And 76 per cent of IT decision-makers say digital initiatives will be very important or critical to their businesses in 2016.

And with Microsoft, Adobe, SAP and Oracle now adopting a cloud-first approach, whether through internal development or via acquisition, it’s safe to say that the digital revolution is here. Look no further than Microsoft CEO Satya Nadella’s now-famous first letter to employees, pledging that Microsoft would thrive in a cloud-first world. His big bet on the cloud is paying off: Microsoft Office 365 extended its lead as the most popular cloud app this year, growing its customer base 116 per cent within Okta’s network.

SAP has also invested heavily in innovating the business, allocating $50 billion to driving digital transformation. The company is committing to a cloud-first strategy by allowing its customers to trade in on-premises software licences in favour of new ones in the cloud, a move that 74 per cent of customers say has encouraged them to adopt cloud services. Our report confirms this, revealing that SAP’s cloud application adoption grew 133 per cent in our network in 2015. This has enabled SAP to position itself as an industry leader and an agile partner to its customers.

The driving force behind cloud transformation

Why are the software giants investing in the cloud? Put simply, it’s because cloud is driving businesses forward and Microsoft, SAP and others want to drive their customers’ businesses forward.

Studies clearly indicate that cloud technologies allow companies to be more agile, improve cost efficiency, increase security and promote collaboration internally and externally with customers. The Exact 2015 SME Cloud Barometer report, an independent study of SME leaders across Europe and the US, found that companies embracing digital transformation are likely to grow faster and enjoy twice the profit of their non-cloud-using rivals. Consequently, 47 per cent of SMEs now use at least one cloud business software tool.

Setting cloud security concerns straight

Another driving factor is security. In the past, the idea that cloud was less secure than on-prem was one of the main barriers to cloud adoption. Today, it’s clear that’s not the case. Reports from Ovum suggest that even large enterprises cannot replicate the security provided by the cloud using on-prem software. Attitudes in the industry are changing as well, as shown by a recent Cloud Security Alliance survey, which revealed that 65 per cent of IT leaders think the cloud is as secure as or more secure than on-premises software.

Digital transformation starts with identity

IT leaders certainly aren’t afraid of cloud anymore. Our Businesses @ Work report shows 83 per cent of companies in our network are leveraging at least one off-the-shelf cloud app like Salesforce, Office 365 and Box. And 80 per cent of enterprises are building custom applications on their platforms. We’re now witnessing an unparalleled increase in cloud application usage globally. There are no signs of slowing as businesses continue to make efforts to enable their partners, customers and contractors to connect through cloud-based applications, websites or portals.

Now that organisations are beginning to embrace the cloud, what’s the next big hurdle? Figuring out how to manage identities. When every device and app that connects to the cloud has an individual account attached to it, the need for companies to ensure they can control the flow of data is amplified. Effective identity management can improve customer experience, maintain competitiveness, generate new revenue and strengthen security. As the cloud drives traditional software companies to change their business models and put the needs of customers at the forefront, the next big priority for enterprises in the cloud is making sure they can easily and securely authenticate and manage users.

Samsung acquires container cloud company Joyent

Samsung has agreed to buy San Francisco-based cloud provider Joyent in an effort to diversify its product offering in declining markets, reports Telecoms.com.

Financial terms of the deal have not been disclosed; however, the team stated the acquisition will build Samsung’s capabilities in the mobile and Internet of Things arenas, as well as in cloud-based software and services markets. The company’s traditional means of differentiating its products have been increased marketing efforts and effective distribution channels, though the new expertise will add another string to its bow.

“Samsung evaluated a wide range of potential companies in the public and private cloud infrastructure space with a focus on leading-edge scalable technology and talent,” said Injong Rhee, CTO of the Mobile Communications business at Samsung. “In Joyent, we saw an experienced management team with deep domain expertise and a robust cloud technology validated by some of the largest Fortune 500 customers.”

Joyent itself offers a distinctive proposition in the cloud market, as it runs its platform on containers rather than the traditional VMs on which the majority of other cloud platforms run. The team reckons efficiency is notably improved by using containers, a claim which is generally supported by the industry. A recent poll run on Business Cloud News found 89% of readers found container-run cloud platforms more attractive than those on VMs.
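
As a rough illustration of where that efficiency comes from (this sketch uses Docker as a generic stand-in, not Joyent’s own Triton tooling): a container reuses the host’s already-running kernel, so starting one costs little more than starting a process, whereas a VM has to boot an entire guest operating system first.

# A container shares the host kernel, so startup is near-instant;
# a VM would spend tens of seconds booting a guest OS before this
# command could run.
time docker run --rm alpine echo "container is up"
# 'real' time is typically well under a second on a warm host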

While smartphones would now be considered the norm in western societies, the industry has taken a slight dip in recent months. Estimates based on public announcements and data from analyst firm Strategy Analytics show the number of smartphones shipped in Q1 2016 fell to 334.6 million units, from 345 million during the same period in 2015. The slowdown has been attributed to lucrative markets such as China becoming increasingly mature, as well as a pessimistic consumer outlook on the global economy.

As a means to differentiate the brand and tackle a challenging market, Samsung has been looking to software and services offerings, as creating a unique offering from a hardware or platform perspective has become next to impossible. On the hardware side, the latest release of every smartphone contains pretty much the same features (high-performance camera, lighter than ever, etc.), and on the platform side, the majority of the smartphone market runs on Android. Software and services have become the new battleground for product differentiation.

Last month, the team launched its Artik Cloud Platform, an open data exchange platform designed to connect any data set from any connected device or cloud service. IoT is a market which has been targeted by numerous organizations and is seemingly the focus of a healthy proportion of product announcements. The launch of Artik Cloud puts Samsung in direct competition with the likes of Microsoft Azure and IBM Bluemix, as industry giants jostle for the lead in an IoT race that has yet to produce a clear front-runner. The inclusion of Joyent’s technology and engineers will give Samsung extra weight in the developing contest.

The purchase also offers Samsung the opportunity to scale its own cloud infrastructure. The Samsung team says it’s one of the world’s largest consumers of public cloud data and storage, and the inclusion of Joyent could offer the opportunity to move data in-house, decreasing its dependency on third-party cloud providers such as AWS.

As part of the agreement, CEO Scott Hammond, CTO Bryan Cantrill, and VP of Product Bill Fine, will join Samsung to work on company-wide initiatives. “We are excited to join the Samsung family,” said Hammond. “Samsung brings us the scale we need to grow our cloud and software business, an anchor tenant for our industry leading Triton container-as-a-service platform and Manta object storage technologies, and a partner for innovation in the emerging and fast growing areas of mobile and IoT, including smart homes and connected cars.”

IBM launches weather predictor Deep Thunder for The Weather Company

IBM’s Weather Company has announced the launch of Deep Thunder to help companies predict the actual impact of various weather conditions.

By combining hyper-local, short-term custom forecasts developed by IBM Research with The Weather Company’s global forecast model, the team hopes to improve the accuracy of weather forecasting. Deep Thunder will lean on the capabilities of IBM’s machine learning technologies to aggregate a variety of historical data sets and future forecasts, providing fresh guidance every three hours.

“The Weather Company has relentlessly focused on mapping the atmosphere, while IBM Research has pioneered the development of techniques to capture very small scale features to boost accuracy at the hyper local level for critical decision making,” said Mary Glackin, Head of Science & Forecast Operations for The Weather Company. “The new combined forecasting model we are introducing today will provide an ideal platform to advance our signature services – understanding the impacts of weather and identifying recommended actions for all kinds of businesses and industry applications.”

The platform itself will combine more than 100 terabytes of third-party data daily, as well as data collected from the company’s 195,000 personal weather stations. The offering can be customized to suit the location of various businesses, with IBM execs claiming hyper-local forecasts can be narrowed to a resolution of between 0.2 and 1.2 miles, while also taking into account local factors such as vegetation and soil conditions.

Applications for the new proposition range from agriculture to city planning and maintenance to validating insurance claims. IBM has also stated that consumer influences can be programmed into the platform, meaning retailers could use the insight to manage their supply chains and understand what should be stocked on shelves.

Samsung acquires Joyent in greater cloud push after finding synergies

(c)iStock.com/KreangchaiRungfamai

Samsung has agreed to acquire California-based cloud provider Joyent, adding another cloud platform to the Korean giant’s ever-increasing portfolio of mobile, cloud and IoT services.

The move came after Samsung assessed a “wide range of potential companies in the public and private cloud infrastructure space”, and saw Joyent as the standout with an “experienced management team with deep domain expertise and a robust cloud technology,” according to Samsung mobile communications CTO Injong Rhee.

From Joyent’s perspective, CEO Scott Hammond and CTO Bryan Cantrill both chipped in with their reasons for the deal. “Until today, we lacked one thing,” Hammond wrote in a blog post. “We lacked the scale required to compete effectively in the large, rapidly growing and fiercely competitive cloud computing market. Now that changes.” Naturally, Cantrill looked at the news from more of a technologist’s perspective. “As our engineering teams got to know one another, we found that beneath the exciting vision was a foundation of shared values: we both cared deeply about not only innovation but also robustness – and that we both valued complete understanding when systems misbehaved,” Cantrill wrote.

“The more we got to know one another, the clearer it became that together we could summon a level of scale, agility and innovation that would be greater than the sum of our parts – that together, our technology could create a new titan of container-native computing,” he added.

As with a lot of these deals, the original intent was not acquisition; Samsung had previously examined Manta, Joyent’s object storage system, for implementation but, as Cantrill noted, Joyent had never seen a request at such scale. When Joyent went back to Samsung and explained it did not have sufficient hardware to perform the requested test, Samsung provided it – and the rest, it appears, was history.

“We work closely with startups to bring new software and services into Samsung, and one of the ways we do this is by driving strategic acquisitions,” said David Eun, Samsung global innovation centre president. “Joyent is a great example of a leading and disruptive technology company that will make unique contributions to Samsung while benefitting from Samsung’s global scale and reach.”

Joyent will continue as a standalone brand after the acquisition, while financial terms of the deal were not disclosed.

Why DevOps engineer is the number one hardest tech job to fill

(c)iStock.com/spxChrome

DevOps engineers are notoriously difficult to find. If you needed further proof of this fact, a new study by Indeed.com has revealed that DevOps engineer is the #1 hardest IT job to fill in North America, leading a list that includes software and mobile engineers.

An organisation’s inability to hire – and retain – systems engineers, build automation engineers, and other titles usually grouped under “DevOps” is a major roadblock to digital transformation efforts; in fact, the majority of organisations say the biggest roadblock to cloud migration is finding the right IT talent, not security, cost, or legacy systems.

There is certainly no easy answer, but here are several ways that organisations are attempting to reduce the impact of the shortage.

The power of automation

Most companies hire DevOps engineers to automate releases and enable frequent or continuous deployment. In reality, this means that much of a DevOps engineer’s time is spent deploying and configuring daily builds, troubleshooting failed builds, and communicating with developers and project managers, while the long-term work of automating deployment and configuration tasks falls by the wayside.
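
To make the goal concrete, deployment automation usually aims at something like the minimal sketch below, where a release becomes a single command rather than an afternoon of manual steps. Everything here is a placeholder: the host names, deploy user, build command and service name are all hypothetical.

#!/bin/bash
# Minimal "push button" deploy sketch: build once, ship the artifact to
# each host, unpack it and restart the service. All names are placeholders.
set -euo pipefail

ARTIFACT="app-$(git rev-parse --short HEAD).tar.gz"
HOSTS="web1.example.com web2.example.com"

make build
tar czf "$ARTIFACT" dist/

for host in $HOSTS; do
  scp "$ARTIFACT" "deploy@$host:/opt/app/releases/"
  ssh "deploy@$host" "tar xzf /opt/app/releases/$ARTIFACT -C /opt/app/current && sudo systemctl restart app"
done

A script like this is precisely the asset that never gets written: every manual deploy consumes the time that would otherwise pay for it.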

It is possible that the term “DevOps engineer” itself contributes to this confusion and poor prioritisation; many say there is (or should be) no such thing as a DevOps engineer and they should more properly be called by their exact function in your team, like storage engineer, deployment automation engineer, and so on.

The value of deployment automation and progress towards some variety of “push button” deployment to test environments is obvious; a survey by Puppet found that high-performing IT teams deploy thirty times more frequently than low-performing teams. Infrastructure automation is often lower on the priority list but of equal importance, covering the ability of virtual machines to scale, self-heal, and configure themselves. Anecdotally, our experience is that most organisations do the bare minimum (auto scaling), while the vast majority of infrastructure maintenance tasks remain highly manual.
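
For a sense of what “the bare minimum” looks like in practice, a simple scale-out policy on AWS is a single CLI call like the sketch below (the group name, policy name and the alarm that would trigger it are all hypothetical); the self-healing and self-configuring behaviour described above takes considerably more machinery.

# Hypothetical example: add one instance to an Auto Scaling group when an
# attached CloudWatch alarm fires. Names are placeholders.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-web-asg \
  --policy-name scale-out-on-high-cpu \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1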

The fact that your DevOps engineers — or if you prefer different titles: build automation engineers, Linux administrators, Puppet engineers, etc. — do not have time to automate tasks (that could save them more time in the future) is clearly a problem. Your sluggish progress on deployment automation drains resources every day. But your lack of infrastructure automation can quickly become a punishing business problem when you find that auto scaling fails, or you forgot to update a package, or your SSL cert is not automatically renewed, or your environment is not automated to deal with your cloud provider’s infrequent outages. Slow deployment pipelines are bad, but broken infrastructure is worse.
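
Two of those failure modes, the forgotten package update and the unrenewed SSL certificate, are exactly the kind of routine work that is cheap to automate and punishing to neglect. A minimal cron-based sketch, assuming a Debian-style host with the unattended-upgrades and certbot packages installed (the schedules and the nginx reload are assumptions):

# Hypothetical /etc/cron.d entries.
# Apply pending security updates nightly (Debian/Ubuntu).
0 3 * * * root apt-get update -q && unattended-upgrade
# Attempt certificate renewal daily; certbot only acts when a certificate
# is close to expiry, and the web server is reloaded on success.
30 3 * * * root certbot renew --quiet && systemctl reload nginx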

Such events cause what we will call “reactive automation”, a sudden burst of interest in infrastructure automation that quickly fades when everything goes back to normal. Unfortunately, the templates and configuration management scripts that automate infrastructure buildout and maintenance themselves must be maintained, and if no one is paying attention, another infrastructure failure is bound to happen.

The result is a team of stressed, overworked engineers who wish they could focus on the “cool stuff” but are instead stuck in firefighting mode: exactly the opposite of what you want to happen.

“Hire more DevOps engineers”

When faced with overworked engineers, the natural answer is: let’s hire more. But of course you are already doing that. Most companies have a standing open position for DevOps engineer. Is there another answer?

One answer is training some of your existing systems engineers in new tools and new cultural frameworks. This certainly needs to happen, but will take some time. The other answer is outsourcing. Outsourcing can mean any number of things, but there are two flavours that best complement DevOps teams. The first is outsourcing infrastructure automation. The second is outsourcing day-to-day, boring, repetitive infrastructure maintenance tasks and around-the-clock monitoring.

Infrastructure automation is in many ways the ideal set of tasks to outsource; the line in the sand between your team’s responsibilities (the application) and the outsourced team’s (the infrastructure) is relatively clear for most applications, and there is often little time, initiative, or advanced experience to automate infrastructure in-house. Your in-house engineers keep doing what they are doing — managing daily builds, interfacing with developers — and the outsourced team co-manages the templates and scripts that control scalability, security, and failover. This team should also integrate this automation with your existing deployment pipeline, as in the sketch below.
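
A minimal sketch of that integration, assuming the infrastructure templates live in version control beside the application and are applied as the first step of every deploy (the stack name and template file are hypothetical, using AWS CloudFormation purely as an example):

# Hypothetical pipeline step: apply the infrastructure templates before the
# application itself is deployed; names are placeholders.
aws cloudformation deploy \
  --stack-name my-app-infra \
  --template-file infra.yml

Because the templates travel with the code, the outsourced team’s changes are reviewed and versioned the same way as the application’s.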

This works out even better if the same team manages day-to-day patching, monitoring, alerting, log management, change management, etc., much like a traditional professional services or managed services team. These are items that distract your valuable DevOps engineers from more important tasks, and also wake them up at 3am when something goes wrong. When you outsource, you are still fulfilling the “you build it, you own it” principle, but at least you have a team telling you when things break and helping you fix them faster.

Managed service providers (MSPs) are not what they used to be — in a good way. Among its many positive effects, the cloud has forced MSPs to evolve and provide more value. Now you can use that to your advantage.

The enterprise DevOps team

As DevOps makes its way to the enterprise, the nature and definition of “DevOps team” will change. Enterprises will continue to struggle to attract talent away from big tech. You will likely see more differentiation in what DevOps means, as traditional network engineers become cloud network experts and Puppet engineers become cloud configuration management masters, leading to a complex medley of “traditional” and “cloud” skills.

Adopting DevOps involves adopting a certain amount of risk, and enterprises want to control that risk. They will rely more heavily on outsourced talent to supplement growing internal teams. This will help them achieve higher deployment velocity and automation more quickly, and put guardrails in place to prevent new DevOps teams from making costly mistakes.

DevOps engineers will always be hard to find; great tech talent, and great talent generally, is hard to find. The key is knowing how to protect your business against the drought.

The post DevOps Engineer: #1 Hardest Job to Fill appeared first on Logicworks Gathering Clouds.

macOS Sierra beta in a VM

As a long-time Mac user, I was excited to hear about the next release of OS X, macOS Sierra (version 10.12 beta, for those numerically inclined). As a Mac developer, I had access to the Developer Preview released yesterday. As the product manager for Parallels Desktop, I was looking forward to installing Sierra in a […]

The post macOS Sierra beta in a VM appeared first on Parallels Blog.

Court of Appeals hits back at US telco industry with net neutrality ruling

The District of Columbia Circuit Court of Appeals has hit back at the US telco industry, ruling in favour of government net neutrality regulations, reports Telecoms.com.

Although the decision will be appealed to the US Supreme Court, it marks a victory for the camp of FCC chairman Tom Wheeler within a commission that has been split over the dispute. Republican commissioner Michael O’Rielly championed efforts opposing Wheeler’s Democratic team, though the decision does appear to move US carriers closer to the realm of utilities.

“Today’s ruling is a victory for consumers and innovators who deserve unfettered access to the entire web, and it ensures the internet remains a platform for unparalleled innovation, free expression and economic growth,” said Wheeler in a statement. “After a decade of debate and legal battles, today’s ruling affirms the Commission’s ability to enforce the strongest possible internet protections – both on fixed and mobile networks – that will ensure the internet remains open, now and in the future.”

The decision itself will now ensure US carriers cannot block, degrade or promote internet traffic, an outcome that has been strongly opposed by the telecoms industry and members of the Republican Party. The argument against has centred on the idea of an ‘open internet’ where the free market rules the roost. Texas Senator Ted Cruz once described the move towards net neutrality as “Obamacare for the internet”, believing it is burdensome and would create an environment of over-regulation for the internet.

The ruling also hits back at claims made by industry attorneys that ISPs are like newspaper editors, and thus have the right to edit content which flows over their networks. The DC Court of Appeals struck this down, stating that ISPs should view themselves as ‘conduits for the messages of others’ rather than as arbiters of the opinions viewed on the internet.

While this would be considered a victory for the Wheeler camp inside the FCC, the dispute is likely to continue for some time. AT&T has already announced it will be appealing the decision and Verizon has stated its investments in Verizon Digital Media Services would be at risk without an open Internet.

The dispute on the whole has seen conflicting opinions at every level. The ruling from the DC Court of Appeals also demonstrated similar conflicts, with Senior Circuit Judge Stephen Williams stating “the ultimate irony of the Commission’s unreasoned patchwork is that, refusing to inquire into competitive conditions, it shunts broadband service onto the legal track suited to natural monopolies.”

In terms of opposition within the FCC itself, O’Rielly said in a statement: “If allowed to stand, however, today’s decision will be extremely detrimental to the future of the Internet and all consumers and businesses that use it. More troubling is that the majority opinion fails to apprehend the workings of the Internet, and declines to hold the FCC accountable for an order that ran roughshod over the statute, precedent, and any comments or analyses that did not support the FCC’s quest to deliver a political victory.”

The other Republican commissioner at the FCC, Ajit Pai, stated: “I am deeply disappointed by the D.C. Circuit’s 2-1 decision upholding the FCC’s Internet regulations. The FCC’s regulations are unnecessary and counterproductive.”

The end of this dispute is unlikely to be seen for some time, and there are strong arguments for both camps. On the commercial side, represented by the Republican Party and the telco industry, there has to be a means to commercialise the billions of dollars invested in infrastructure; AT&T, Verizon and the rest are not charities. The net neutrality camp, containing the Democratic Party and the FCC chairman, insists there has to be an element of control: telcos must be held accountable, and invoking the First Amendment right to free speech in this context could have dangerous consequences from both a commercial and a political perspective.

Atos bolsters digital transformation offerings

Atos has announced the launch, through its technology brand Bull, of alien4cloud, a software suite which it claims will accelerate customers’ digital transformation.

Alien4cloud automates the application lifecycle, from development through deployment to production, both on-premises and across all types of cloud, allowing customers to abstract applications from the infrastructure to increase efficiency.

Building on the theme of continuous digital transformation, Atos is aiming to address one of the industry’s biggest current pain points: cloud migration. The team claims two out of three companies will have 50% of their applications in the cloud within three years, yet migration to the cloud can often be a complex, costly and time-consuming process.

“This announcement is another step towards our ambition of supporting clients in their digital transformation,” said Jérôme Sandrini, Vice President, Head of Big Data Software & Services at Atos. “Alien4cloud helps IT departments to rationalize their IT assets and fosters competitiveness with a shorter application lifecycle in line with the evolving business needs. With alien4cloud, self-service business lines will no longer need to use uncontrolled Shadow IT.”

Atos claims that, by applying DevOps practices, alien4cloud provides development teams with a self-service portal to improve collaboration, shorten the entire application lifecycle and optimise ROI. Marketing for the product has focused on a number of areas, including reduced deployment time, increased collaboration throughout the application lifecycle, the flexibility to shift deployment location, use of the TOSCA standard and continuous application provisioning.

Are you being served? Big data and the disrupters of customer service

(c)iStock.com/sorbetto

We often hear the buzzword phrases of customer service, quality of service, customer first and net promoter score, and both the B2C and B2B markets have been constantly changing, driven by more demanding customer expectations and a bar set ever higher by new disrupters. Take the age-old example of Blockbuster: a globally successful and well-known model and brand, quickly disrupted when newcomers set the de facto bar by letting you download a movie anywhere, anytime, to a growing number of devices, cheaper, quicker and easier, with nothing to return to the shop and the film you want always available.

Customers are having that bar set by these new unicorn firms, such as Uber, Airbnb, Netflix, JustEat, eBay, Amazon…the list goes on. This means, of course, that existing legacy firms are now dealing with companies and people that have a natural in-built expectation for things to be easy, clear, customer-friendly and ‘of the new world’.

A Titanic moment on the horizon?

This is not easy to achieve for a legacy business. The phrases ‘digital transformation’ and ‘digitally agile’ abound, as if this is just a button you can click to get there. For existing firms, making this leap can be a big one and a complex, costly and painful journey to undertake. New cloud-born firms design around the new world with processes, systems, people and attitudes aligned to efficiency and customer service – very often self-serve.

Take estate agents as an example. Having engaged with Purple Bricks recently and taken advantage of their online model of estate agency, I can personally see why existing bricks-and-mortar estate agents should be massively worried. This new model costs the customer roughly a quarter of the commission, combining self-serve portals with a human touch for the initial engagement, and a very slick process flow that sells your property effectively both for you as a customer and for them as a cost model. We quickly had eight offers, a good percentage over what traditional estate agents said to sell for, and we had visibility of everything that was happening 24×7.

The only thing holding this back is the nervousness of ‘don’t we need the traditional model to tell us what to do and hold our hands?’. That was catered for as well, and there was nothing as complicated as the traditionalists would want you to think. In meetings with several traditional agents, they were quick to bad-mouth these new merchants: ‘It won’t work’, ‘they won’t affect us’ and, more ridiculously, ‘but they don’t have a high street window to put the photo of your property in!’

My message to all those traditionalists thinking ‘what iceberg?’ is to open your eyes: you are heading straight for it. As those who use the new offerings prove them and tell all their friends, more and more will try them, questioning why the old model is so expensive. The Blockbuster-Netflix change will set in. It is the customer that decides, and those decisions are being made easier as a widening gap in service quality versus price attracts customers away from traditional form factors, delivery models and brands to new ways and new brands. Add to this millennials, generation Z, whatever term you wish to coin: they expect online, self-serve, fast, low-cost, slick customer experiences.

Disrupting the disrupters

What will let new entrants down, of course, is the service quality provided. If this is lacking, customers will move just as fast with their feet, and bad recommendations fire off faster than good ones. Whether coming in as a disrupter or as an existing provider refreshing its market offering, customer service remains key. In this new world, dynamic customer service is not simply the ‘speaking to the customer’ experience; it is far wider and far more expansive.

Take a bricks-and-mortar retail store: its opening hours may be six or seven days a week, 9am to 5pm perhaps, and customer service consists of staff touchpoints, often only at the point of paying. From new world offerings, we expect 24×7 access, from anywhere, from any device, with multiple interaction points: easy self-service and search for answers, shared service from other customers (as provided by GiffGaff and Livechat), as well as email and phone support when required. On top of this, of course, is the delivery quality of that service: its availability and its speed of (online) response, underpinned by the SLAs and processes that make the whole experience seamless and enjoyable.

For the providers and businesses offering these services there is a new dynamic: new business metrics and new approaches to development and support are required in order to be operationally and cost efficient. Offering one of these new experiences is one thing; doing it at a market-competitive price is another. Deliver it at twice the price of a rival and you will find out how fickle the new customer breed is, and how fast you can be displaced and made irrelevant.

The unicorn disrupters first need to change the market dynamic and disrupt the status quo; they then need to maintain and retain this differential, changing and changing again, and being willing and able to do so in technology, process and service. We will see disrupters being disrupted themselves. Take for example JustEat and Hungry House in the UK: did they see UberEats coming? Will they be able to defend against it? Uber is now reshaping itself not as a taxi firm but as a logistics firm, one with a technology platform, processes and brand that it can leverage into other markets.

What’s coming next? After food delivery, why not parcel delivery, flower delivery, and so on? Can you see a time when an Uber driver taxis someone from A to B, gets alerted to pick up a parcel near B that needs to go to C, and when approaching C gets alerted to a food delivery nearby, thus maximising the earnings and utilisation of the third-party driver community and giving customers a single brand experience, with the same lead the firm holds on technology utilisation?

For the customer of new services, there are also challenges. Data privacy is a concern: who is tracking what you do, and what if that data is leaked? And what if the service is unavailable, or you cannot get online: can you still operate, or have you come to rely on it entirely? For the business user of new world services, the issues are more complex, spanning service quality, security of access and data, data sovereignty and governance, supplier and SLA management, and the control and mitigation of shadow IT, an increasing problem as new services are licensed and accessed by employees on work devices, both on network and when mobile. Of course, it’s worth noting that the EU General Data Protection Regulation (GDPR) aims to make some of these issues easier from both the consumer and business side.

What this means for IT

In the corporate environment, the service demands often land on IT to fix things when they go wrong, even if the service is externally cloud-provided, and often even if the department procured it directly without IT’s knowledge. When there’s an issue, IT is called upon to resolve it and help.

Increasingly, internal IT is going to grow into a service broker, mixing in-house skills with those of external cloud, IoT and mobile providers to enable a cohesive service experience for its business users, and hence onward to the customer. This matters because one service quality often begets the other: how many times have you heard on a phone enquiry that the operator’s system is running slow, or has crashed, or that you need to be transferred to another advisor?

More than ever, corporate IT is going to need the processes and tools to manage not only the traditional environment and user requests, but also cloud, IoT and mobile services, along with smarter, more self-serving users who have higher expectations of response times and quality of service. Internally facing IT will have to align with the external quality of service the business delivers onward to customers, choosing adaptable and empowering ITSM and ITAM tools, and ITIL best practices, to support the agility and capability needed to deal with this faster-changing IT environment.

puppet 3.8.3 for SLES11SP3

It’s no secret that I hate SuSE in all its incarnations, openSuSE and SLES alike. Today we’ll look at how to install a recent version of puppet (3.8.3 in this case) in place of the 2.6.18 I’ve seen shipped by default.

To do this, we need to add the following repositories:

zypper addrepo -f --no-gpgcheck http://demeter.uni-regensburg.de/SLES11SP3-x86/DVD1/ "SLES11SP3-x64 DVD1 Online"
zypper addrepo -f --no-gpgcheck http://demeter.uni-regensburg.de/SLE11SP3-SDK-x86/DVD1/ "SUSE-Linux-Enterprise-Software-Development-Kit-11-SP3"
zypper addrepo http://download.opensuse.org/repositories/devel:languages:ruby/SLE_11_SP4/devel:languages:ruby.repo
zypper refresh

We install libyaml as a dependency:

rpm -Uvh http://download.opensuse.org/repositories/devel:/languages:/misc/SLE_11_SP4/i586/libyaml-0-2-0.1.6-15.1.i586.rpm

We install ruby 2.1:

zypper install ruby2.1

We install rubygems from the .tgz tarball:

cd /usr/local/src
wget https://rubygems.org/rubygems/rubygems-2.6.4.tgz --no-check-certificate
tar xzf rubygems-2.6.4.tgz 
cd rubygems-2.6.4/
ruby.ruby2.1 setup.rb 

Before installing puppet, we need to install its dependencies, in this case json:

gem install json

Finally, we install puppet itself:

cd /usr/local/src/
wget https://downloads.puppetlabs.com/puppet/puppet-3.8.3.tar.gz
wget http://downloads.puppetlabs.com/facter/facter-2.4.1.tar.gz
wget https://downloads.puppetlabs.com/hiera/hiera-1.3.4.tar.gz
tar xzf puppet-3.8.3.tar.gz 
tar xzf facter-2.4.1.tar.gz 
tar xzf hiera-1.3.4.tar.gz 
cd facter-2.4.1
ruby.ruby2.1 install.rb 
cd ../hiera-1.3.4
ruby.ruby2.1 install.rb 
cd ../puppet-3.8.3
ruby.ruby2.1 install.rb 

Once this process is complete, we have a decent version of puppet available:

sles11sp3:~ # puppet --version
3.8.3
