AWS and Chef cook up DevOps deal

Chef is moving onto the AWS Marketplace

IT automation specialist Chef and AWS announced a deal this week that will see Chef’s flagship product offered via the AWS Marketplace, a move the companies said would help drive uptake of DevOps in the cloud.

Tools like Chef and Puppet, which use an intermediary service to help automate a company’s infrastructure, have grown increasingly popular with DevOps personnel in recent years – particularly given not just the growth but also the heterogeneity of cloud today. And with DevOps continuing to grow – by 2016 nearly a quarter of the largest enterprises globally will have adopted a DevOps strategy, according to Gartner – it’s clear both AWS and Chef see a huge opportunity to onboard more users to the former’s cloud service.

As one might expect, the companies touted the ability to use Chef to migrate workloads off premise and into AWS without losing the code developed to automate lower-level services.

Though Chef and Puppet can both be deployed on, and automate, AWS cloud resources, the Chef / AWS deal will see Chef gain one-click deployment and more prominent placement in the AWS Marketplace catalogue of available services.
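
To make that portability point concrete, here is a minimal sketch in Python (using boto3) of the kind of thing teams already do outside the Marketplace’s new one-click flow: launch an EC2 instance and bootstrap chef-client through user data so existing cookbooks keep applying. The AMI ID, key pair, Chef server URL and run list are placeholders, and a real bootstrap would also need validation credentials for the Chef server.

```python
# Minimal sketch, not the Marketplace one-click flow: provision an EC2 instance
# with boto3 and bootstrap chef-client via user data so existing cookbooks keep
# applying. The AMI ID, key pair, Chef server URL and run list are placeholders;
# a real bootstrap would also need validation credentials for the Chef server.
import boto3

USER_DATA = """#!/bin/bash
curl -L https://omnitruck.chef.io/install.sh | bash
mkdir -p /etc/chef
cat > /etc/chef/client.rb <<'EOF'
chef_server_url 'https://chef.example.com/organizations/myorg'
node_name 'web-01'
EOF
chef-client --runlist 'recipe[base],recipe[webserver]'
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # placeholder key pair
    UserData=USER_DATA,
)
print(response["Instances"][0]["InstanceId"])
```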

“Chef is one of the leading offerings for DevOps workflows, which engineers and developers depend on to accelerate their businesses,” said Dave McCann, vice president, AWS Marketplace. “Our customers want easy-to-use software like Chef that is available for immediate purchase and deployment in AWS Marketplace. This new partnership demonstrates our focus on offering low-friction DevOps tools to power customers’ businesses.”

Ken Cheney, vice president of business development at Chef said: “AWS’s market leadership in cloud computing, coupled with our expertise in IT automation and DevOps practices, brings a new level of capabilities to our customers. Together, we’re delivering a single source for automation, cloud, and DevOps, so businesses everywhere can spend minimal calories on managing infrastructure and maximise their ability to develop the software driving today’s economy.”

DevOps 101 – Integration By @JoePruitt | @DevOpsSummit #DevOps

The second pillar in the DevOps stack is Integration. DevOps integration targets quality testing, feature development, and product delivery. Integration – or, more specifically, systems integration – is the process of linking together different computing or component systems and software applications, physically or functionally, so that they perform as a single consolidated unit.
Systems integration requires a wide skill set spanning network architecture, protocols, systems, software engineering, documentation and communication. A DevOps engineer needs a unique skill set that combines that of the traditional software developer with that of the IT engineer. Typically, DevOps engineers are either developers who become interested in the deployment process of their applications or system administrators with a passion for scripting and software development.
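
As a rough illustration of the integration pillar in practice, the sketch below strings quality testing and artifact delivery into a single fail-fast pipeline step; the specific commands (pytest, a setup.py build) are assumptions standing in for whatever a given project actually uses.

```python
# Toy integration step: run the test suite, then build a distributable artifact,
# failing fast if either stage breaks. The commands are assumptions standing in
# for whatever a given project actually uses.
import subprocess
import sys

STEPS = [
    ["pytest", "-q"],                 # quality testing
    ["python", "setup.py", "sdist"],  # product delivery artifact
]

def run_pipeline(steps):
    for cmd in steps:
        print("running:", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print("step failed, aborting integration run")
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline(STEPS))
```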

Take the Long View with Digital Transformation

Digital Transformation is the ultimate goal of cloud computing and related initiatives. The phrase is certainly not a precise one, and it is as subject to hand-waving and distortion as any highfalutin terminology in the world of information technology.

Yet it is an excellent choice of words to describe what enterprise IT—and by extension, organizations in general—should be working to achieve.

Digital Transformation means:

handling all the data types being found and created in the organization
understanding that through mobility, data is being generated and analyzed on the edges of the enterprise more than at the center
mixing and matching that data in an ecosystem of specific, loosely connected applications and services
automating processes; analyzing and acting upon data in short timeframes to develop, improve, and enhance products and services directly through IT
having enough well-trained people at all levels of the organization to address the inevitable technology glitches while also maintaining a high-level strategic view of what’s going on.

I recently attended the Cloud Foundry Summit 2015 in Santa Clara, CA, and listened to many stories of Digital Transformation. The catalyst in these cases was, naturally, Cloud Foundry, an open-source PaaS that is used to handle the complex infrastructure that underlies cloud-computing development, provisioning, and operations.

I was struck by the impatience of much of this discussion; many presenters spoke of the old ways of doing things versus the new, slick, automated, loosely coupled, transformative way and how great progress was being made and would continue to be made this year. One could get the idea that Digital Transformation is a product or service itself, and one that needs to be installed right now.

This type of thinking is only natural in a modern business environment that does not reward patience. Things must be accomplished in a few months, rather than years. In an era where servers can be ordered up and deployed “within seconds,” according to many glowing reports, the idea of a true long-term strategy gets lost in the excitement.

But Digital Transformation is something that should be thought of in terms of years and decades. We must retain the ability to look back over a period of 20 years and longer, to see what has truly been accomplished. My Twitter handle of “IoT2040” reflects my viewpoint, as I’m researching, covering, and instigating progress over the next 25 years, whether I’m around to see the end-state of that progress or not.

Taking the long view does not mean throwing a bunch of exotic visions and soothing words in the air. It means setting measurable goals and focusing day-to-day efforts to turn those goals into self-fulfilling prophecies.

I’m inspired by Moore’s Law, which is not an absolute, but rather something that people have assumed is an absolute and have therefore worked their tails off for decades to adhere to. A similar commitment to measurable Digital Transformation would seem to be in order.

I can dovetail this thought into our ongoing research at the Tau Institute, which measures the dynamics of IT environments in the nations of the world, and into my role as Conference Chair of Cloud Expo | @ThingsExpo, which continues to offer a mix of proven use cases and envelope-pushing vision.

In the first role, we can argue that a lack of commitment to IT in Greece is a reflection (or perhaps precipitator) of the country’s perilous economic state, whereas an overheated technology environment has caused (and is causing) societal disruption in several countries throughout the world.

In the second role, I’m pleased to point to some slight yet significant tweaks to the eight core tracks being offered at the next Cloud Expo | @ThingsExpo, to be held November 3-5 in Santa Clara. Cloud APIs now have their own track, as do Containers and Microservices, and our three IoT-focused tracks have been trained more tightly on the latest developments.

Here’s a list of the specific tracks:
Track 1 – Enterprise Cloud Adoption
Track 2 – Mobility | Enterprise Security
Track 3 – Containers & Microservices | PaaS
Track 4 – Cloud APIs
Track 5 – IoT | Big Data & Analytics
Track 6 – IoT | Consumer/Wearables
Track 7 – IoT | Enterprise/Industrial Internet
Track 8 – WebRTC Summit | Hot Topics

We’ll also be holding another DevOps Summit at the same time, with tracks devoted respectively to Development and Operations.

[slides] From Industry to Society By @JMondanaro | @ThingsExpo @MetraTech @Ericsson #IoT #M2M #InternetOfThings

It is one thing to build single industrial IoT applications, but what will it take to build the Smart Cities and truly society-changing applications of the future? The technology won’t be the problem; it will be the number of parties that need to work together and be aligned in their motivation to succeed.
In his session at @ThingsExpo, Jason Mondanaro, Director, Product Management at Metanga, discussed how you can plan to cooperate, partner, and form lasting all-star teams to change the world – and how it all starts with business models and monetization strategies.

Columbia Pipeline links up with IBM in $180m cloud deal

CPG is sending most of its applications to the cloud

Newly independent Columbia Pipeline Group (CPG) signed a $180m deal with IBM this week that will see IBM support the migration of CPG’s application infrastructure from on-premise datacenters into a hybrid cloud environment.

CPG recently split from NiSource to become an independent midstream pipeline and storage business with 15,000 miles of interstate pipeline, gathering and processing assets extending from New York to the Gulf of Mexico.

The company this week announced it has enlisted IBM, a long-time partner of NiSource, to help it migrate its infrastructure and line-of-business applications (finance, human resources, ERP) off NiSource’s datacenters and into a private cloud platform hosted in IBM’s datacenters in Columbus, Ohio.

The wide-ranging deal will also see CPG lean on IBM’s cloud infrastructure for its network services, help desk, end-user services, cybersecurity, mobile device management and operational big data.

“IBM has been a long-time technology partner for NiSource, providing solutions and services that have helped that company become an energy leader in the U.S.,” said Bob Skaggs, chief executive of CPG. “As an independent business, we are counting on IBM to help provide the continued strong enterprise technology support CPG needs.”

Philip Guido, general manager, IBM Global Technology Services, North America said: “As a premier energy company executing on a significant infrastructure investment program, CPG requires an enterprise technology strategy that’s as forward-thinking and progressive as its business strategy. Employing an IT model incorporating advanced cloud, mobile, analytics and security technologies and services from IBM will effectively support that vision.”

Companies that operate such sensitive infrastructure – like oil and gas pipelines – are generally quite conservative about where they host their applications and data, though the IBM deal speaks to an emerging shift in the sector. Earlier this summer Gaia Gallotti, research manager at IDC Energy Insights, told BCN that cloud is edging higher up the agenda of CIOs in the energy and utilities sector, but that they are struggling with a pretty significant skills gap.

For data centres, connectivity and cooling are key – but it’s the people inside who count

(c)iStock.com/4X-image

It’s common for data centre providers to outsource the running of their facilities to outside companies; they provide the building, power and cooling and get other people to run the data centre itself.

These outsourced companies may sign a three, four or five year contract (or even less if they come in halfway through), so there is little incentive for the engineers and technicians to improve the small things that ultimately pay dividends further down the line. Those who implement initiatives that pay off in 10, 20 or 30 years are rarely given the credit they deserve; instead, when the effect kicks in, someone else takes the praise. The ‘payback’ period is therefore relatively short and not conducive to a world-class, incentivised operation.

There is a focus at an industry and government level on improving the efficiency of data centres. When providers outsource the running of their facilities, they risk missing the incremental improvements that ultimately add up and become best practices.

Permanently employed data centre operatives have a high level of personal investment, which should be reassuring to those handing over a certain degree of their IT infrastructure. These permanent staff maintain accountability for their facility and know that they can make a serious difference – it feels like ‘theirs’. If they think of a better way of doing something, they’re empowered to implement process optimisations and drive them into global operations. You can be sure that rightful credit is given and that the person gets the recognition he or she deserves.

The ability to make a real change is a powerful motivator and helps attract the best operatives. Equally, the operations employing permanent staff are incentivised to invest in their people and provide industry-leading training.

There is an important distinction between specialised real estate and a professionally operated data centre, and more often than not that distinction is the people.

Connectivity, cooling and power are (of course) fundamentals but it’s the people inside that are the real differentiators.  They have the power to evolve a facility; they have the power to make a potentially great data centre average or a good data centre great. But it’s not just about the quality of staff; it’s about enabling those talented individuals to constantly improve to the benefit of everyone involved. And when data centre engineering staff can stay with their company for 25 years, even the smallest things are worth doing as they feel the benefit down the line.

We’ve all heard the story of how British cycling coach Dave Brailsford enabled Team GB to completely dominate London 2012’s cycling medal table through marginal gains. The same principle applies to running a data centre: it’s the little things that add up to make a difference.

If a data centre is filled with employees who are motivated and enthusiastic about finding new and more efficient ways of doing things and solving issues instead of just leaving things as they are, there will be a constant stream of innovative forward thinking and strategies.

This attitude will inherently be the catalyst that spurs on constant improvement and advancement in the set-up, and provide customers with peace of mind that their infrastructure is being handled by the best in the business.

Microsoft Azure prices rise for European and Australian customers

(c)iStock.com/cruphoto

Pretty much every cloud software or infrastructure pricing article this publication has ever penned has announced the news of lower prices. Google in particular continues to strenuously follow Moore’s Law, the latest being back in May when Compute Engine instance prices were slashed.

This time, however, it’s different: Microsoft is revising its cloud prices, with customers in Europe and Australia most likely to suffer price hikes. The news originally broke through blogger Aidan Finn, who wrote that effective August 1, local prices for Azure and Azure Marketplace will increase by 13% in the Eurozone, while in Australia that number rises to 26%.

On the surface, this news is not particularly surprising. The Euro in particular has been hit hard in recent months: back in August it was worth $1.36, it hit a low of $1.05 in March, and it is now at $1.11.
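
As a back-of-the-envelope check of those exchange-rate figures (an illustration only, not Microsoft’s stated rationale), the Euro’s fall from $1.36 to $1.11 works out to roughly an 18% decline, and holding dollar revenue flat would in theory require raising Euro prices by around 22%:

```python
# Back-of-the-envelope check of the currency movement quoted above
# (illustrative only, not Microsoft's stated rationale).
peak, now = 1.36, 1.11  # USD per EUR: last August vs. time of writing

euro_decline = (peak - now) / peak   # how far the euro has fallen
offsetting_rise = peak / now - 1     # euro price rise to keep USD revenue flat

print(f"euro decline vs dollar: {euro_decline:.1%}")    # ~18.4%
print(f"rise needed to offset:  {offsetting_rise:.1%}")  # ~22.5%
print("announced Eurozone rise: 13.0%")
```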

Yet Finn argues this may not be the only reason for the increases. “One has to wonder about the motivation of the hikes,” he wrote. “Local costs have not increase[d] by this amount. If anything, local costs have probably reduced.

“This would appear to be a bottom line operation to restore profits in the ledger for shareholders to see,” he added.

According to a statement Microsoft sent to The Register, cloud prices are being revised for contracts billed in the Euro and the Australian dollar, as well as in the currencies of Norway, Sweden, Denmark and Canada. Azure customers can still acquire Microsoft products or renew their contracts at current prices until July 31.

The move towards lower pricing has been a continued rigmarole for the past couple of years, from Amazon Web Services, to Microsoft and to Google. Whether the other two major players follow suit, like they normally do on price cuts, remains to be seen. Finn added a further update on July 7 noting Office 365 prices will also be increasing as of August 1. All stock keeping units are rising 10%, except for the E3 and E4 enterprise plans which rise 8%, while the Enterprise Mobility Suite will rise 26%.

GE, NTT Docomo to form Internet of Things alliance

GE and NTT are jointly developing IoT solutions for industrial applications

GE Energy Japan and Japanese operator NTT Docomo signed a memorandum of understanding (MoU) this week that will see the two companies commit to jointly developing Internet of Things (IoT) solutions for industrial uses.

The companies will combine GE Digital Energy’s MDS Orbit Platform, a wireless router for industrial equipment, and Docomo’s embedded communication module, which will provide remote access and monitoring capabilities.

The solution will be capable of monitoring tightly regulated (and hazardous) infrastructure such as bridges and electricity, water and gas plants for fitness and operational productivity.

The data generated by the embedded monitoring sensors will be sent to Docomo’s Toami cloud platform, designed primarily for M2M use cases, and users will be able to manage and analyse the data using strongly authenticated mobile platforms.
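
As a purely illustrative sketch of that data path – the article does not describe Toami’s actual interface, so the endpoint, token and payload shape below are hypothetical – a field gateway might batch sensor readings and post them to a cloud ingestion API over authenticated HTTPS:

```python
# Illustrative only: a field gateway batching sensor readings and posting them
# to a cloud ingestion endpoint over authenticated HTTPS. The URL, token and
# payload shape are hypothetical; the article does not describe Toami's API.
import json
import time
import urllib.request

INGEST_URL = "https://ingest.example.com/v1/telemetry"  # hypothetical endpoint
API_TOKEN = "replace-me"                                 # hypothetical credential

def post_readings(device_id, readings):
    payload = {
        "device_id": device_id,
        "timestamp": int(time.time()),
        "readings": readings,  # e.g. {"vibration_hz": 4.2, "temp_c": 31.0}
    }
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + API_TOKEN,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    post_readings("bridge-sensor-017", {"vibration_hz": 4.2, "temp_c": 31.0})
```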

GE is an M2M veteran – in aviation and energy it was doing IoT well before the term came into vogue – and NTT Docomo already partners with a number of other technology incumbents in the IoT arena, including Panasonic and Jasper Wireless. Its parent, NTT Group, is also fairly active in other IoT initiatives; the company is working with both ARM and Intel on their respective IoT platforms.

Like many in this space the companies are keen to capture a chunk of growing IoT revenues, with the IoT and M2M communications market in particular forecast to swell from $256bn in 2014 to $947bn in 2019 (an estimated 30 per cent CAGR) according to MarketsandMarkets.

BCN and our sister publication Telecoms.com have put together a report on what the industry perceives to be the top benefits and challenges in consumer and industrial IoT. You can download it for free here.

Dev-focused DigitalOcean raises $83m from Access Industries, Andreessen Horowitz

DigitalOcean raised $83m this week, which it will use to add features to its IaaS platform

DigitalOcean this week announced it has raised $83m in a series B funding round the cloud provider said would help it ramp up global expansion and portfolio development.

The round was led by Access Industries with participation from seasoned tech investment firm Andreessen Horowitz.

DigitalOcean offers infrastructure as a service in a variety of Linux flavours and aims its services primarily at developers, though the company said the latest round of funding, which brings the total it has secured since its founding in 2012 to $173m, will be used to aggressively expand its feature set.

“We are laser­-focused on empowering the developer community,” said Mitch Wainer, co-founder and chief marketing officer at DigitalOcean. “This capital infusion enables us to expand our world­-class engineering team so we can continue to offer the best infrastructure experience in the industry.”

Although the company is fairly young and has just ten datacentres globally, it claims to serve roughly 500,000 individual developers deploying cloud services on its IaaS platform – a respectable figure by any measure. It also added another European datacentre, in Frankfurt, back in April – the company’s third on the continent.

But with bare-bones IaaS competition getting more intense, it will be interesting to see how DigitalOcean evolves; given its emphasis on developers, it is possible the company’s platform could evolve into something more PaaS-like.
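
DigitalOcean’s developer pitch rests on everything being scriptable against a small REST API. As a rough illustration, the sketch below creates a ‘droplet’ via the public v2 API; the access token is a placeholder, and the region, size and image slugs are examples that may not match the current catalogue.

```python
# Rough illustration of DigitalOcean's developer-facing REST API: create a
# droplet with one authenticated POST. The token is a placeholder and the
# region/size/image slugs are examples that may not match the current catalogue.
import json
import urllib.request

API_TOKEN = "replace-me"  # personal access token (placeholder)

def create_droplet(name):
    body = {
        "name": name,
        "region": "fra1",             # example slug: the Frankfurt region noted above
        "size": "512mb",              # example size slug
        "image": "ubuntu-14-04-x64",  # example image slug
    }
    req = urllib.request.Request(
        "https://api.digitalocean.com/v2/droplets",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + API_TOKEN,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(create_droplet("example-droplet")["droplet"]["id"])
```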

“We began with a vision to simplify infrastructure that will change how millions of developers build, deploy and scale web applications,” said Ben Uretsky, chief exec and co-­founder of DigitalOcean. “Our investors share our vision, and they’ll be essential partners in our continued growth.”

Microsoft shifts ever further to cloud as it writes off entire Nokia acquisition

Nadella’s mobile first, cloud first strategy will centre more on software and cloud services than devices

Software giant Microsoft has announced a ‘restructure’ of its phone hardware business that amounts to a write-off of the entire Nokia acquisition, reports Telecoms.com.

7,800 jobs will be lost, mainly in the phone business, and on top of around $800 million in restructuring charges (over $100,000 per head!), Microsoft is recording an impairment charge of $7.6 billion – pretty much what it paid for Nokia less than two years ago. No wonder Stephen Elop was shown the door.

In the light of this final Nokia disposal it’s hard to view Microsoft’s acquisition as anything other than a complete failure, or to derive any positives from Elop’s involvement in the whole sorry saga. The only consolation is that the market had already priced this write-off into Microsoft’s share price, which at the time of writing was unaffected by the announcement.

“We are moving from a strategy to grow a standalone phone business to a strategy to grow and create a vibrant Windows ecosystem including our first-party device family,” said Microsoft CEO Satya Nadella. “In the near-term, we’ll run a more effective and focused phone portfolio while retaining capability for long-term reinvention in mobility.”

The acquisition was always a strange one, as at the time Microsoft was still trying to apply its standard Windows business model to Windows Phone – i.e. get people to pay for the license. The problem was that a superior platform in the form of Android was already available for free, and Microsoft only secured Nokia’s loyalty with generous inducements. To then turn around and acquire its main customer was effectively an admission that the licensing model had failed in this case.

It was then assumed that Microsoft planned to make money from the devices themselves, in spite of the fact that the rest of the smartphone industry with the exception of Apple and Samsung was struggling to break even. Inevitably this was soon revealed to be a forlorn quest and Microsoft started supporting other mobile platforms.

Today Microsoft’s approach to mobile is to try to sell software and services such as Office 365 and Skype to all mobile platforms. At the same time Windows 10 has been designed to be one unified platform regardless of device, but with smartphones seemingly relegated to an afterthought.

Here’s Nadella’s full internal email on the matter, which also touches on recent disposals in other non-core areas such as mapping and advertising:

 

Team,

Over the past few weeks, I’ve shared with you our mission, strategy, structure and culture. Today, I want to discuss our plans to focus our talent and investments in areas where we have differentiation and potential for growth, as well as how we’ll partner to drive better scale and results. In all we do, we will take a long-term view and build deep technical capability that allows us to innovate in the future.

With that context, I want to update you on decisions impacting our phone business and share more on last week’s mapping and display advertising announcements.

We anticipate that these changes, in addition to other headcount alignment changes, will result in the reduction of up to 7,800 positions globally, primarily in our phone business. We expect that the reductions will take place over the next several months.

I don’t take changes in plans like these lightly, given that they affect the lives of people who have made an impact at Microsoft. We are deeply committed to helping our team members through these transitions.

Phones. Today, we announced a fundamental restructuring of our phone business. As a result, the company will take an impairment charge of approximately $7.6 billion related to assets associated with the acquisition of the Nokia Devices and Services business in addition to a restructuring charge of approximately $750 million to $850 million.

I am committed to our first-party devices including phones. However, we need to focus our phone efforts in the near term while driving reinvention. We are moving from a strategy to grow a standalone phone business to a strategy to grow and create a vibrant Windows ecosystem that includes our first-party device family.

In the near term, we will run a more effective phone portfolio, with better products and speed to market given the recently formed Windows and Devices Group. We plan to narrow our focus to three customer segments where we can make unique contributions and where we can differentiate through the combination of our hardware and software. We’ll bring business customers the best management, security and productivity experiences they need; value phone buyers the communications services they want; and Windows fans the flagship devices they’ll love.

In the longer term, Microsoft devices will spark innovation, create new categories and generate opportunity for the Windows ecosystem more broadly. Our reinvention will be centered on creating mobility of experiences across the entire device family including phones.

Mapping. Last week, we announced changes to our mapping business and transferred some of our imagery acquisition operations to Uber. We will continue to source base mapping data and imagery from partners. This allows us to focus our efforts on delivering great map products such as Bing Maps, Maps app for Windows and our Bing Maps for Enterprise APIs.

Advertising. We also announced our decision to sharpen our focus in advertising platform technology and concentrate on search, while we partner with AOL and AppNexus for display. Bing will now power search and search advertising across the AOL portfolio of sites, in addition to the partnerships we already have with Yahoo!, Amazon and Apple. Concentrating on search will help us further accelerate the progress we’ve been making over the past six years. Last year Bing grew to 20 percent query share in the U.S. while growing our search advertising revenue 28 percent over the past 12 months. We view search technology as core to our efforts spanning Bing.com, Cortana, Office 365, Windows 10 and Azure services.

I deeply appreciate all of the ideas and hard work of everyone involved in these businesses, and I want to reiterate my commitment to helping each individual impacted.

I know many of you have questions about these changes. I will host an employee Q&A tomorrow to share more, and I hope you can join me.

Satya