All posts by Business Cloud News

Splunk CEO quits while ahead after 50% revenue growth

As big data pioneer Splunk reported better than expected quarterly revenues and shares surged by 2%, its leader for the last seven years has announced his retirement.

Splunk CEO Godfrey Sullivan, who successfully steered the software company through an initial public offering in 2012, has stepped down to be replaced by Doug Merritt, a senior VP. However, Sullivan will remain at the company as non-executive chairman and has promised a smooth transition.

Sullivan, previously the boss of Hyperion until it was acquired by Oracle, has steered the company since the days when the value of machine-to-machine (M2M) intelligence and big data analytics was relatively unknown. The Splunk IPO in 2012 was a landmark for the industry, being the first time the public were invited to invest in a company specialising in the then-new concept of big data. Splunk’s IPO was acclaimed as one of the most successful tech offerings of the decade, with share prices surging 108% on the first day of trading.

Under Sullivan, Splunk’s customer base grew from 750 to 10,000 and annual revenues from $18 million to $600 million, according to a Splunk statement. His successor, Merritt, a veteran of SAP, Peoplesoft and Cisco, has been with Splunk since 2011 and said he would work with Sullivan on a smooth transition. “We will continue our laser focus on becoming the data fabric for businesses, government agencies, universities and organisations,” he said.

“Doug brings enormous management, sales, product and marketing skills to his new role,” said Splunk’s lead independent director John Connors. “As senior vice president for field operations for Splunk, Doug has consistently delivered outstanding financial results.”

In its results for the third quarter of financial year 2016 Splunk reported total revenues of $174.4 million, up 50% year-over-year and ahead of analyst expectations by $14.4 million.

Google appoints ex-VMware boss to lead enterprise web services business

Google has appointed former VMware CEO and current Google board member Diane Greene to head a new business-oriented cloud service.

Though Google is associated with consumer products and is overshadowed by AWS in enterprise cloud computing, that lead is not unassailable, claimed Google CEO Sundar Pichai in the company’s official blog as the appointment was announced.

“More than 60% of the Fortune 500 are actively using a paid Google for Work product and only a tiny fraction of the world’s data is currently in the cloud,” he said. “Most businesses and applications aren’t cloud-based yet. This is an important and fast-growing area for Google and we’re investing for the future.”

Since all of Google’s own businesses run on its cloud infrastructure, the company has significantly larger data centre capacity than any other public cloud provider, Pichai argued. “That’s what makes it possible for customers to receive the best price and performance for compute and storage services. All of this demonstrates great momentum, but it’s really just the beginning,” he said.

Pichai stated that the new business will bring together product, engineering, marketing and sales, and Greene’s brief will be to integrate them into one cohesive offering. “Diane has a huge amount of operational experience that will continue to help the company,” he said.

In addition, Google is to acquire bebop, a company founded by Greene, to simplify the building and maintenance of enterprise applications. “This will help many more businesses find great applications and reap the benefits of cloud computing,” said Pichai.

Bebop’s resources will be dedicated to building and integrating the entire range of Google’s cloud products, from devices such as Android phones and Chromebooks, through infrastructure and services in the Google Cloud Platform, to developer frameworks for mobile and enterprise users and, finally, end-user applications like Gmail and Docs.

The market for these cloud development tools will be worth $2.3 billion in 2019, up from $803 million this year, according to IDC. The knock-on effect is that more apps will run on the cloud of the service provider that supported their development, and that hosting business will triple to $22.6 billion by 2019, IDC says.

Greene and the bebop staff will join Google once the acquisition has completed. Greene’s division has yet to be named, but it will include divisions such as Google for Work, Cloud Platform and Google Apps, according to Android Central.

AWS launches new wind farm in green drive

Amazon Web Services has contracted green energy specialist EDP Renewables to build and run the 100 megawatt (MW) Amazon Wind Farm US Central in Ohio.

The project is due to complete by May 2017, when it will begin producing enough power to run 29,000 average US homes for a year, AWS claims. The company says the latest addition to its green energy stable will generate 320,000 MWh of electricity a year.

Amazon also claims the energy generated will feed the electrical grid supplying both current and future AWS cloud data centres.

In November 2014 AWS committed to running its infrastructure entirely on renewable energy – in the long term – and claimed that 25% of the electricity running its cloud services was green. By the end of 2016 it aims to have pushed the proportion to 40%.

Earlier this year it announced that the Amazon Wind Farm (Fowler Ridge) in Indiana could generate 500,000 MWh of wind power annually. In April it began a pilot project using Tesla’s energy storage batteries to power data centres at times when wind and solar power are not available. In the same month AWS joined the American Council on Renewable Energy and the US Partnership for Renewable Energy Finance to work with government policy makers on developing more renewable energy options.

In June 2015 it said its new AWS Solar Farm in Virginia could generate 170,000 MWh of solar power annually, and a month later it added another wind farm in North Carolina that could generate 670,000 MWh a year. In total, AWS claims the potential to generate 1.6 million MWh of electricity a year.
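Taken together the numbers are internally consistent. A quick sanity check in Python (the figure of roughly 11 MWh a year for an average US home is our assumption, in line with published US averages, not a number from AWS):

    # Sanity-check the AWS renewable energy figures quoted above.
    # Assumption: an average US home uses roughly 11 MWh of electricity a year.

    ohio = 320_000            # Amazon Wind Farm US Central, Ohio (MWh/year)
    fowler_ridge = 500_000    # Amazon Wind Farm (Fowler Ridge), Indiana
    virginia_solar = 170_000  # AWS Solar Farm, Virginia
    north_carolina = 670_000  # North Carolina wind farm

    total = ohio + fowler_ridge + virginia_solar + north_carolina
    print(total)                         # 1,660,000 MWh: the "1.6 million MWh" claim

    print(round(ohio / 11))              # ~29,091 homes, matching the 29,000 figure

    # Implied capacity factor of the 100 MW Ohio farm (8,760 hours in a year):
    print(f"{ohio / (100 * 8760):.0%}")  # ~37%, plausible for US onshore wind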

“We continue to pursue projects that help to power AWS data centres and bring us closer to achieving total renewable energy,” said Jerry Hunter, VP of Infrastructure at AWS.

Containers aren’t new, but ecosystem growth has driven development

Containers are getting a fair bit of hype at the moment, and February 2016 will see the first dedicated container conference take place in Silicon Valley in the US. Here, Business Cloud News talks to Kyle Anderson, lead developer at Yelp, about the company’s use of containers and whether containers will ultimately live up to all the hype.

Business Cloud News: “What special demands does Yelp’s business put on its internal computing?”

Kyle Anderson: “I wouldn’t say they are very special. In some sense our computing demands are boring. We need standard things like capacity, scaling, and speed. But boring doesn’t quite cut it, and if you can turn your boring compute needs into something that is a cut above the status quo, it can become a business advantage.”

BCN: “And what was the background to building your own container-based PaaS? What was the decision-making process there?”

KA: “Building our own container-based PaaS came from a vision that things could be better if they were in containers and could be scheduled on-demand.

“Ideas started bubbling internally until we decided to “just build it” with manager support. We knew that containers were going to be the future, not VMs. At the same time, we evaluated what was out there and wrote down what it was that we wanted in a PaaS, and saw the gap. The decision-making process there was just internal to the team, as most engineers at Yelp are trusted to make their own technical decisions.”

BCN: “How did you come to make the decision to open-source it?”

KA: “Many engineers have the desire to open-source things, often simply because they are proud of their work and want to share it with their peers.

“At the same time, management likes open-source because it increases brand awareness and serves as a recruiting tool. It was a natural progression for us. I tried to emphasise that it needs to work for Yelp first, and after one and a half years in production, we were confident that it was a good time to announce it.”

BCN: “There’s a lot of hype around containers, with some even suggesting this could be the biggest change in computing since client-server architecture. Where do you stand on its wider significance?”

KA: “Saying it’s the biggest change in computing since client-server architecture is very exaggerated. I am very anti-hype. Containers are not new, they just have enough ecosystem built up around them now, to the point where they become a viable option for the community at large.”

Container World is taking place on 16 – 18 February 2016 at the Santa Clara Convention Center, CA, USA.

Google launches virtual machine customisation facility

Google has announced a new, more tailored way of buying virtual machines (VMs) in the cloud. It claims the extra attention to detail will stamp out forced over-purchasing and save customers money.

With the newly launched beta of Custom Machine Types for Google’s Compute Engine, Google promised it will bring an end to the days when “major cloud providers force you to overbuy”. Under the new system, it says, users can buy the exact amount of processing power and memory that they need for their VM.

The new system, explained in a Google blog, aims to improve the experience of buying a new virtual machine in the cloud. Google says it wants to replace the old system, where users have to choose from a menu of pre-configured CPU and RAM options on machines that are never quite adjusted to fit the user. Since pre-configured VM sizes usually come in powers of two, Google explained, customers frequently have to buy eight CPUs even when they only need six.

The Custom Machine Types system will let users buy virtual CPUs (vCPUs) and RAM in precise units, with memory priced per gibibyte (GiB) rather than gigabyte, and give customers more options to adjust the number of cores and memory as needed. If a customer’s bottom line expands, the cloud can be ‘let out’ accordingly. In another tailoring option, Google has introduced per-minute billing in a bid to meter the customer’s consumption of resources more accurately.

In the US every vCPU hour will cost $0.03492 and every GiB of RAM will cost $0.00468 per hour. Prices for Europe and Asia are slightly higher, at $0.03841 per vCPU hour. Rates will decrease with bulk purchasing, however.
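To illustrate the arithmetic, here is a minimal sketch using the US rates quoted above. The 730-hour month and the two example machine shapes are assumptions for illustration only; actual charges accrue per minute:

    # Illustrative cost comparison using Google's quoted US rates.
    VCPU_HOUR_USD = 0.03492   # price per vCPU hour (US)
    GIB_HOUR_USD = 0.00468    # price per GiB of RAM per hour (US)
    HOURS_PER_MONTH = 730     # assumed average month, running continuously

    def monthly_cost(vcpus: int, ram_gib: float) -> float:
        """On-demand monthly cost of a custom machine type at US rates."""
        hourly = vcpus * VCPU_HOUR_USD + ram_gib * GIB_HOUR_USD
        return hourly * HOURS_PER_MONTH

    oversized = monthly_cost(8, 30)   # rounded up to the next predefined size
    custom = monthly_cost(6, 22.5)    # exactly what the workload needs

    print(f"8 vCPU / 30 GiB:   ${oversized:.2f}/month")     # ~$306.42
    print(f"6 vCPU / 22.5 GiB: ${custom:.2f}/month")        # ~$229.82
    print(f"Monthly saving:    ${oversized - custom:.2f}")  # ~$76.61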

Support is available in Google’s command line tools and through its application programming interface (API), and Google says it will create a graphical interface for its virtual machine shop in its Developer Console. Developers can specify their choice of operating system for their tailored VM, with the current options being CentOS, CoreOS, Debian, OpenSUSE and Ubuntu.
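In the underlying API a custom shape is identified by a machine-type string of the form custom-{vCPUs}-{memory in MB}. The sketch below builds such a string and enforces the constraints described in the launch documentation; treat the exact limits as assumptions rather than a definitive reference:

    # Sketch: build a Compute Engine custom machine type string
    # ("custom-{vcpus}-{memory_mb}"). Constraints as documented at launch:
    # vCPU count must be 1 or an even number, total memory a multiple of
    # 256 MB, and memory per vCPU between roughly 0.9 and 6.5 GB.

    def custom_machine_type(vcpus: int, memory_mb: int) -> str:
        if vcpus != 1 and vcpus % 2 != 0:
            raise ValueError("vCPU count must be 1 or an even number")
        if memory_mb % 256 != 0:
            raise ValueError("total memory must be a multiple of 256 MB")
        per_vcpu_gb = memory_mb / 1024 / vcpus
        if not 0.9 <= per_vcpu_gb <= 6.5:
            raise ValueError("memory per vCPU must be between 0.9 and 6.5 GB")
        return f"custom-{vcpus}-{memory_mb}"

    print(custom_machine_type(6, 23040))  # custom-6-23040: 6 vCPUs, 22.5 GiB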

Meanwhile, elsewhere in the Google organisation, the company is working with content delivery specialist Akamai Technologies to reduce hosting and egress costs and improve performance for Akamai customers taking advantage of Google Cloud Platform.

Microsoft moves Dynamics AX into the cloud

Microsoft says the latest incarnation of Dynamics AX will mark its transformation from a packaged application to a cloud service.

On Thursday the vendor announced the latest release of its flagship enterprise resource planning (ERP) system will be generally available in the first quarter of 2016. The main difference, it said, is that the ERP is now a service designed for the cloud.

A public preview of the new solution for customers and partners will be available in early December. The new name of the release, simply Microsoft Dynamics AX, marks a departure from branding tied to the year or version of the product, a characteristic of packaged software, the company said. From now on the branding will underscore that Dynamics AX is a cloud-based service that will be regularly updated.

Microsoft said it will also implement a new, simple and more transparent subscription pricing model to make it easier for companies to buy the system as they need it. Dynamics AX will offer a new user experience that looks and works like Microsoft Office and shares information between Dynamics AX, Dynamics CRM and Office 365, according to the vendor. It will also combine near-real-time analytics powered by Azure Machine Learning with the ability to visualise data through Power BI embedded in the application, in order to give users more predictive powers.

In response to usability analysis, Dynamics AX will have a browser-based HTML5 client and a new touch-enabled, modern user interface. Now that it’s a cloud system it will adopt the principles of highly visual applications more akin to consumer applications, according to Microsoft.

The classic rigidity of ERP systems has been replaced, according to Scott Guthrie, Microsoft Cloud and Enterprise’s executive VP. “Our ambition to build the intelligent cloud comes to life with apps optimised for modern business. When you combine the hyperscale, enterprise-grade and hybrid-cloud capabilities of Microsoft Azure with the real-time insights and intuitive user experience of Dynamics AX, organisations and individuals are empowered to transform their business operations,” said Guthrie.

WANdisco’s new Fusion system aims to take the fear out of cloud migration

Software vendor WANdisco has announced six new products to make cloud migration easier and less dangerous as companies plan to move away from DIY computing.

The vendor claims its latest Fusion system creates a safety net of continuous availability and streaming back-up. Building on that, the platform offers uninterrupted migration and gives hybrid cloud systems the capacity to expand across both private and public clouds if necessary. These four fundamental capabilities are built on six new software plug-ins designed to make the transition from production systems to live cloud systems smoother, says DevOps specialist WANdisco.

The backbone of Fusion is WANdisco’s replication technology, which ensures that all servers and clusters are fully readable and writeable, always in sync and can recover automatically from each other after planned or unplanned downtime.
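WANdisco’s announcement does not detail the mechanism, but conceptually this kind of active-active replication can be pictured as every cluster applying the same globally agreed order of writes, so that any replica can accept writes yet all converge on identical state. The sketch below is purely illustrative: a single in-process sequencer stands in for the distributed, Paxos-style coordination a real WAN deployment would use.

    # Conceptual sketch of active-active replication: all replicas apply the
    # same totally ordered stream of writes, so each stays readable and
    # writeable while remaining in sync with the others. This is NOT
    # WANdisco's implementation, just the general principle.
    import itertools

    class Sequencer:
        """Stand-in for distributed consensus: assigns a global order to writes."""
        def __init__(self):
            self._counter = itertools.count()
        def order(self, op):
            return next(self._counter), op

    class Replica:
        def __init__(self, name):
            self.name = name
            self.state = {}      # replicated key/value state
            self.applied = -1    # sequence number of the last applied write
        def apply(self, seq, op):
            assert seq == self.applied + 1, "writes must be applied in order"
            key, value = op
            self.state[key] = value
            self.applied = seq

    sequencer = Sequencer()
    clusters = [Replica("on-premise"), Replica("public-cloud")]

    # Writes may originate at either cluster; every replica applies the same
    # agreed global order, so both converge even after downtime, once the
    # missed portion of the log is replayed.
    for op in [("/data/a", 1), ("/data/b", 2), ("/data/a", 3)]:
        seq, agreed = sequencer.order(op)
        for replica in clusters:
            replica.apply(seq, agreed)

    assert clusters[0].state == clusters[1].state  # replicas are in sync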

The plug-ins that address continuous availability, data consistency and disaster recovery are named Active-Active Disaster Recovery, Active-Active Hive and Active-Active Hbase. The first guarantees data consistency with failover and automated recovery over any network, and prevents Hadoop cluster downtime and data loss. The second ensures consistent query results across all clusters and locations. The third, Hbase, aims to provide continuous availability and consistency across all locations.

Three further plug-ins address the threat of heightened exposure created when companies move their systems from behind a company firewall onto a public cloud. These plug-ins are named Active Back-up, Active Migration and Hybrid Cloud. To supplement these offerings WANdisco has also introduced the Fusion Software Development Kit (SDK) so that enterprise IT departments can program their own modifications.

“Ease of use isn’t the first thing that comes to mind when one thinks about Big Data, so WANdisco Fusion sets out to simplify the Hadoop crossing,” said WANdisco CEO David Richards.

New Canonical offering makes it cheaper to run OpenStack on Autopilot

Canonical has launched an OpenStack Autopilot system which it claims will make it so much easier to create and run clouds using open systems that it will ‘dramatically’ cut the cost of ownership. In a statement it promised that the need for staff and consultants will fall as a result of the pre-engineered simplicity built into its OpenStack-based system.

The OpenStack Autopilot is a new feature in Canonical’s Landscape management system, a platform based on Linux. The Autopilot can add hardware to an existing cloud, making it easy to grow a private cloud as storage and compute needs change.

According to Canonical, the biggest challenge for OpenStack operators today is adapting their cloud dynamically to requirements, since the computing demands of customers are invariably both volatile and unpredictable. The cost of doing this manually, which involves re-designing entire swathes of infrastructure, is proving prohibitive for many clients, it said. The Autopilot provides a best-practice cloud architecture and automates that entire process, it claims.

Canonical is the company behind Ubuntu, the most widely used cloud platform and the most popular OpenStack distribution. According to the latest Linux Foundation survey, 65% of large-scale production OpenStack clouds are built on Ubuntu.

The Autopilot presents users with a range of software-defined storage and networking options, studies the available hardware allocated to the cloud, creates an optimised reference architecture for that cluster and installs the cloud from scratch, according to Canonical.

The OpenStack Autopilot is so simple to use that any enterprise can create its own private cloud without hiring specialists, according to Mark Baker, Canonical’s cloud product management leader.

“Over time the Autopilot will manage the cloud, handling upgrades and dealing with operational issues as they occur,” said Baker.

Red Hat launches software-defined storage systems that run on commodity hardware

Open source software vendor Red Hat has launched a portfolio of open, software-defined storage systems which will cut costs by running on commodity hardware. The systems will be sold through a variety of sources across Red Hat’s sales channel.

The logic of selling Red Hat Ceph Storage and Red Hat Gluster Storage system through different channels is to widen the scope of opportunity for Red Hat’s partners, it said. The technology will be made available to any participants in the Red Hat Connect for Business and Red Hat Embedded programmes, as well as all Red Hat Certified Cloud and Service Providers from 2016.

Red Hat Ceph Storage and Red Hat Gluster Storage are open source, software-defined storage systems designed to cater for rapid expansion. They will run on commodity hardware and have durable, programmable architectures, the vendor said.

Each is suited to different types of enterprise workload, and enterprise customers will be able to mix and match the Red Hat partners whose skill sets fit their technical and vertical market conditions.

Red Hat Advanced and Premier partners are authorised to sell Red Hat Storage solutions, but only if they meet the training requirements for their region’s partner programme via the Red Hat Online Partner Enablement Network (OPEN). Once qualified, resellers benefit from competitive and flexible pricing models, with further incentives coming from opportunities to earn additional margin and recurring revenue.

Red Hat Ceph Storage and Red Hat Gluster Storage subscriptions are scheduled to be available to partners through the Red Hat Embedded Program by the end of 2015. Red Hat Ceph Storage and Red Hat Gluster Storage are scheduled to become available to Red Hat Certified Cloud and Service Providers in 2016.

Training and certification, marketing and sales programs, and technical content for Red Hat Storage solutions will be available to certified partners in the Red Hat Connect for Business Partners portal.

“By making Ceph Storage and Gluster Storage enterprise-procurement friendly, Red Hat is positioning itself as a formidable IT storage supplier,” said Ashish Nadkarni, program director at analyst IDC.

Citrix to axe 1000 staff and spin off GoTo amid shareholder pressure

Citrix is to decimate its workforce and spin off its GoTo product into a separate listed company as it seeks $200m savings and a return to its most profitable basics.

The loss of 1000 jobs, thought to be sanctioned under pressure from the activist hedge fund manager Elliott Management, represents 10% of the company’s workforce.

In July BCN reported that Citrix was considering its options for the future of its GoTo range of networking systems, which includes videoconferencing and the popular desktop sharing service GoToMeeting. At the time, Elliott Management, a 7% stakeholder in Citrix, made no secret of its wish to see the company spin off any non-core assets, slim down the product portfolio and cut costs dramatically to yield higher rates of growth.

In an operational review released on November 17th, the company said it would stop investing in certain programmes and products and shut down non-core products. The company said it expects about $200 million in annualized pre-tax cost savings, 75% of which is likely to be realised in 2016. The job cuts do not include the impact of the spinoff, according to Citrix. The cuts will cost between $65 million and $85 million in the fourth quarter of 2015 and fiscal 2016, with most of the restructuring to be done in November and January, Citrix disclosed.

The GoTo business, which is valued at $3.5 billion to $4 billion according to FBR Capital Markets analysis, could be sold as a separate business unit.

In a statement, Citrix said it plans to ‘increase emphasis and resources’ to its core enterprise products, including XenApp, XenDesktop, XenMobile, ShareFile and NetScaler. It has previously been reported that Elliott suggested Citrix explore the sale of NetScaler, a system that speeds up cloud-based applications.

“We are simplifying our product, marketing, sales, operations and development,” said interim Citrix CEO Bob Calderoni. “Focusing on core strengths and simplifying how we work will improve execution, drive higher profit and begin growth in areas in which we provide the greatest value.”

Analyst Clive Longbottom at Quocirca said Citrix has become becalmed and can’t propel itself to the next level it needs to reach.

“It has a decent position as the main virtual desktop vendor, but with a new CEO and pressure from investors, Citrix has to do something,” said Longbottom.

However, GoTo in combination with other technology such as ShareFile could have created the means for a massively scalable information platform, said Longbottom.

“There is one possibility around all this – a possible acquisition by another organisation. If Microsoft acquired a company that it has supported for so long, it would gain a platform for full desktop computing in Azure,” said Longbottom.