OpenStack targets telcos with NFV push

A new report indicates that there could be a boom in network function virtualisation (NFV) projects this year, with NFV the second most popular subject of research after containers, reports Telecoms.com.

According to a report from the OpenStack Foundation, only container technology is under closer scrutiny than NFV by technology buyers and decision makers in the world’s enterprises and service providers.

The paper, Accelerating NFV Delivery with OpenStack, reports on the findings of the foundation’s most recent user survey, in which 76 per cent of those questioned identified an important telecoms function that had to be addressed through virtualisation. Of the OpenStack user base, 12 per cent were traditional telcos and a further 64 per cent were companies that now count telecoms among their services, such as cable TV and ISP companies, telco and networking firms, and data centre/co-location providers.

By comparison, an OpenStack user survey in 2014 suggested its user base of telcos was much smaller, the Foundation says, and only an elite of global telcos, such as NTT and Deutsche Telekom, were investigating NFV use. Since then there has been a surge in interest, it reports, with increasing numbers of telecom-specific NFV features, such as support for multiple IPv6 prefixes, being requested or submitted by OpenStack users.
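As a rough illustration of what a feature like “multiple IPv6 prefixes” looks like in practice, the hedged sketch below attaches two IPv6 subnets, and therefore two prefixes, to a single Neutron network via the openstacksdk Python client; the cloud name, network name and prefixes are assumptions, and exact behaviour varies by OpenStack release.

```python
# Minimal sketch: give one Neutron network two IPv6 prefixes by
# creating two IPv6 subnets on it. Cloud/network names and prefixes
# are placeholders, not values from the article.
import openstack

conn = openstack.connect(cloud="my-nfv-cloud")   # assumes a clouds.yaml entry

network = conn.network.find_network("vnf-data-plane")

for prefix in ("2001:db8:a::/64", "2001:db8:b::/64"):
    conn.network.create_subnet(
        network_id=network.id,
        ip_version=6,
        cidr=prefix,
        ipv6_ra_mode="slaac",        # router advertisements handled by Neutron
        ipv6_address_mode="slaac",   # instances self-configure their addresses
    )
```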

Container technology information is even more sought after than NFV, according to OpenStack, but the two are not mutually exclusive. Sources have speculated that the technologies may be used in tandem, with OpenStack providing the foundation for rationalising the hybrid nature of most telcos’ infrastructure.

According to the paper’s executive summary, OpenStack could provide a cost-effective route to building private clouds without vendor lock-in, at a time when proprietary hardware is becoming associated with NFV.

“While the interoperability between NFV infrastructure platforms that use OpenStack is still a work in progress, the majority of configurations surpass expectations,” concluded the paper co-authored by Kathy Cacciatore, the OpenStack Foundation’s Consulting Marketing Manager.

[session] Storage Analytics Engines in Cloud Environments By @mcrepeat | @CloudExpo @FalconStor #Cloud

Predictive analytics tools monitor, report, and troubleshoot in order to make proactive decisions about the health, performance, and utilization of storage. Most enterprises combine cloud and on-premises storage, resulting in blended environments of physical, virtual, cloud, and other platforms, which justifies more sophisticated storage analytics.
In his session at 18th Cloud Expo, Peter McCallum, Vice President of Datacenter Solutions at FalconStor, will discuss using predictive analytics to monitor and adjust functions like performance, capacity, caching, security, optimization, uptime and service levels; identify trends or patterns to forecast future requirements; detect problems before they result in failures or downtime; and convert insight into actions like changing policies, storage tiers, or DR strategies.
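As a rough illustration of the “forecast future requirements” idea in the abstract above, the sketch below fits a linear trend to capacity usage and estimates when a pool fills; the data, threshold and method are invented for illustration and are not drawn from FalconStor’s products.

```python
# Hedged sketch: forecast when a storage pool will hit capacity by
# fitting a linear trend to daily utilisation samples. The numbers
# below are made up for illustration only.
import numpy as np

days = np.arange(30)                                        # last 30 days
used_tb = 40 + 0.6 * days + np.random.normal(0, 0.5, 30)    # synthetic usage (TB)
capacity_tb = 80.0

slope, intercept = np.polyfit(days, used_tb, 1)             # growth in TB/day
if slope > 0:
    days_to_full = (capacity_tb - used_tb[-1]) / slope
    print(f"Growing ~{slope:.2f} TB/day; pool full in ~{days_to_full:.0f} days")
else:
    print("No growth trend detected; no capacity action needed")
```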


Say Hello to All-Inclusive Pricing with Parallels Desktop Business Edition

Thousands of companies around the world choose Parallels Desktop for Mac Business Edition to support Mac users who need access to Windows applications in a secure, compliant way. The added benefit is that with our all-inclusive subscription licensing, administrators receive software, support and annual upgrades, as well as the Parallels License Management Portal and access to […]

The post Say Hello to All-Inclusive Pricing with Parallels Desktop Business Edition appeared first on Parallels Blog.

AliCloud and NVIDIA to Invest $1 Billion

AliCloud, the public cloud computing arm of Alibaba, has recently joined with NVIDIA to invest a billion dollars in cloud computing research and development. AliCloud says it will hire up to 1,000 data developers over the next three years as it looks to compete with cloud giant Amazon Web Services and develop its big data analytics program. The investment will support AliCloud’s data analysis offerings. The company said in a statement: “These products and services cover all aspects of the so-called data development chain, including processing, analysis, computing engine, machine learning and data application.” The investment is being made in the expectation that demand for storage and processing from organizations and agencies will increase in the coming years.

The big data platform will allow complex information to be analyzed with increased efficiency. Simon Hu, AliCloud’s president, said: “The Big Data Platform fulfills our vision of sharing our vast data troves that will create immense value to our users. AliCloud’s rate of growth is one of the fastest among global peers.”

NVIDIA will help Alibaba transform its AliCloud unit, allowing the cloud to offer machine learning capabilities to businesses. It is also rumored that NVIDIA will help AliCloud with its quantum cloud computing research. Because quantum computing is only beginning to emerge, Alibaba may be planning to become one of the first providers of this advanced form of computing. It has already co-founded a quantum computing lab with the Chinese Academy of Sciences, so it may become the first provider of quantum computing as a service and establish its dominance in the cloud computing industry.

The post AliCloud and NVIDIA to Invest $1 Billion appeared first on Cloud News Daily.

Microsoft Plans to Make Billion Dollar Donation

According to Microsoft CEO Satya Nadella, Microsoft will donate a billion dollars’ worth of cloud computing services over the next three years to 70,000 non-profit groups and researchers. The announcement is part of Microsoft’s initiative to leverage cloud computing services for the public good. Microsoft President Brad Smith commented in a release: “We’re committed to helping nonprofit groups and universities use cloud computing to address fundamental human challenges. One of our ambitions for Microsoft Philanthropies is to partner with these groups to ensure that cloud computing reaches more people and serves the broadest array of societal needs.” The one-billion-dollar figure is not based on the cost of providing these cloud services but on their market price, according to the company.

The initiative is said to consist of three stages. The first is making cloud services like Microsoft Azure more available to non-profits through the donation program. In addition, Microsoft plans to expand the Microsoft Azure for Research program by fifty percent; the program provides free Azure storage and cloud computing resources to support university-level research, and upwards of 600 research projects currently receive free cloud computing through it. Microsoft also plans to support 20 partnerships focused on connectivity and training in 15 countries by the middle of 2017. The donation program will launch in the spring of 2016.


Nadella commented, “Among the questions being asked in Davos are these: If cloud computing is one of the most important transformations of our time, how do we ensure that its benefits are universally accessible? What if only wealthy societies have access to the data, intelligence, analytics and insights that come from the power of mobile and cloud computing? Last fall, world leaders at the United Nations adopted 17 sustainable development goals to tackle some of the toughest global problems by 2030, including poverty, hunger, health and education.”

Some have become concerned that this massive donation could undermine the work of companies specializing in software for nonprofits.

The post Microsoft Plans to Make Billion Dollar Donation appeared first on Cloud News Daily.

How hybrid cloud is the “great enabler” of digital transformation


Almost nine out of 10 (88%) respondents in a survey conducted by tech giant EMC believe hybrid cloud capabilities are ‘important’ or ‘critical’ to organisations that wish to enable digital business transformation.

The study, which polled more than 900 respondents, a third of them in EMEA, found an overwhelming appetite for digital business initiatives: 92% said their company’s strategy called for them, while 90% said digital business would be a “top priority” within three years. Almost two thirds (63%) claim they are already on their way to achieving digital transformation goals.

In particular, hybrid cloud enables increased IT agility, as well as making implementation of digital business initiatives easier, quicker, and less expensive, according to the survey respondents. Improving customer experience was the most popular reason behind business change (87%), ahead of acquiring new customers (86%), increasing innovation (82%) and enabling real-time business decisions (82%).

“Becoming digital is a priority for nearly every business on the planet,” commented Jeremy Burton, EMC president of products and marketing. “But how to get there is not as obvious. This study makes it perfectly clear that hybrid cloud – and the savings and agility it brings with it – is a key enabler to becoming a digital business.”

Naturally EMC, like practically every other cloud provider, has its own hybrid cloud offering. Yet the trend of hybrid cloud adoption is only going one way. According to a recent survey from North Bridge, the allure of business agility through cloud is superseding other factors like accessibility and scalability.

This is not the first study to be pumped out by EMC this week. The company, alongside VMware and VCE, previously posited that 85% of line of business decision makers surveyed in the manufacturing industry were using the public cloud in some capacity.

Culture over technology: The key to DevOps success


Don’t take this the wrong way. As anyone who has been reading my articles can tell, I am all about the technology that enables DevOps, but sometimes the greatest change in the enterprise comes from non-technical places.

To many of you reading this statement, that might be a radical concept – but when it comes to overarching changes such as implementing a DevOps program, culture is much more important than which code repository to use. DevOps in particular relies not only on changes in the technical environment, but even more so on how people work together, interact and develop.

The DevOps Enterprise Summit (DOES) attracts development, operations, and business leaders from companies small and large looking for information on implementing DevOps and fundamentally changing their business. A theme which runs through every keynote speech is “you need to change the culture of your enterprise while changing the technology.” Every DOES speaker that I have heard stresses this message and discusses the cultural challenges that they have gone through.

There are three main areas of cultural change which can enable implementation of DevOps:

Teamwork and communications

First and foremost, a one-team attitude must be adopted. This applies to everyone, whether they come from application development, infrastructure development, architecture or operations, or are business stakeholders. No matter a person’s specific job, satisfying the enterprise goals is everyone’s job. Equally, never say ‘it’s not my job’; although everyone comes to a project with their particular expertise, it is the team’s responsibility to successfully reach the enterprise goals.

Keep your partners close. Partners bring their own unique expertise and capabilities to the enterprise. Involving them in projects early and keeping them involved throughout will provide a new perspective. Make life challenging and exciting; engage those people who have passion and are excited by the prospect of doing something new, then challenge them to go beyond what has already been accomplished.

Leadership also needs to foster a culture where communication is critical. No skunk works allowed here; everyone on the team is kept up to date on progress and – if need be – setbacks.

Leadership and sponsorship

Leadership’s first job is to identify roadblocks and eliminate them. Whether these roadblocks are process based – the three weeks it takes to get purchasing to read the purchase order you sent – or communication based – ‘oh, you sent me an email last week?’ – leadership must work to reduce or eliminate the bottlenecks that are so prevalent in today’s IT world. To use networking terms, it is not just about fostering communication within the team in an east-west manner, but also north-south, between leadership and the team, and between executive management and the rest of the enterprise.

In the traditional model, without executive sponsorship and especially in large organisations, a major rollout will be slowed. When it comes to DevOps, executive sponsorship can be helpful in terms of funding and communicating to the rest of the organisation, but growing the effort at the grass roots level is how DevOps implementations expand. When a DevOps team member sits down to lunch with his or her friend from another development group and talks about how great things are going…well, you get the picture.

Starting up and growing

One team buying in doesn’t guarantee growth, but you have to start somewhere. DevOps in every instance that I have heard of started with one development group and the operations staff that supported them. No grand, big bang implementation of DevOps can work because it requires people of all types to buy in and get used to doing things differently.

Engineers, developers, operation support, techies of all types like to innovate. Technologists see value in doing something new that benefits them and their organisation. A representative of banking firm Capital One, when speaking about the value of DevOps to their engineering staff, was recently quoted as saying “the intrinsic value for engineers is so high, they’ll even willingly deal with lawyers.”

Crucially, DevOps should be fun. Schedule events to get other people involved – not stodgy speeches, but interactive events. Target senior group managers Heather Mickman and Ross Clanton have spoken twice at DOES and have stressed the importance of what they call “DevOps days” – events aimed at growing awareness of, and interest in joining, the DevOps wave at Target.

Conclusion

Ultimately there are a number of key technologies and methodologies that need to be brought to bear in order to enable DevOps in the enterprise. But while we are implementing cloud environments, common code repositories, agile development practices and infrastructure as code, we need to keep in mind that the cultural aspects of DevOps implementation are just as important, if not more so.

Announcing @VAIsoftware to Exhibit at @CloudExpo New York | #Cloud

SYS-CON Events announced today that VAI, a leading ERP software provider, will exhibit at SYS-CON’s 18th International Cloud Expo®, which will take place on June 7-9, 2016, at the Javits Center in New York City, NY.
VAI (Vormittag Associates, Inc.) is a leading independent mid-market ERP software developer renowned for its flexible solutions and ability to automate critical business functions for the distribution, manufacturing, specialty retail and service sectors. An IBM Premier Business Partner, VAI is the 2012 IBM Beacon Award Winner for Outstanding Solutions for Midsize Businesses.


Containers: 3 big myths

Joe Schneider is a DevOps Engineer at Bunchball, a company that offers gamification as a service to the likes of Applebee’s and Ford Canada.

This February Schneider is appearing at Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA), where he’ll be cutting through the cloudy abstractions to detail Bunchball’s real-world experience with containers. Here, exclusively for Business Cloud News, Schneider explodes three myths surrounding the container hype…

One: ‘Containers are contained.’

If you’re really concerned about security, or if you’re in a really security-conscious environment, you have to take a lot of extra steps. You can’t just throw containers into the mix and leave it at that: it’s not as secure as a VM.

When we adopted containers, at least, the tools weren’t there. Now Docker has made security tools available, but we haven’t transitioned from the stance of ‘OK, Docker is what it is and recognise that’ to a more secure environment. What we have done instead is try to make sure the edges are secure: we put a lot of emphasis on that. At the container level we haven’t done much, because the tools weren’t there.
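By way of illustration only, and not a description of Bunchball’s setup, the sketch below shows the sort of extra hardening steps Schneider alludes to, using the Docker SDK for Python; the image, command and options are assumptions.

```python
# Hedged sketch of container hardening of the kind described above,
# using the Docker SDK for Python; image, command and settings are
# illustrative assumptions, not Bunchball's actual configuration.
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine:3.19",                        # assumed example image
    command=["sleep", "300"],
    detach=True,
    user="1000:1000",                     # don't run as root inside the container
    read_only=True,                       # immutable root filesystem
    tmpfs={"/tmp": "rw,size=16m"},        # writable scratch space only where needed
    cap_drop=["ALL"],                     # drop every Linux capability
    security_opt=["no-new-privileges"],   # block privilege escalation via setuid
    mem_limit="64m",                      # basic resource containment
)
print("started hardened container:", container.short_id)
```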

Two: The myth of the ten thousand container deployment

You’ll see the likes of Mesosphere, or Docker Swarm, say, ‘we can deploy ten thousand containers in like thirty seconds’ – and similar claims. Well, that’s a really synthetic test: these kinds of numbers are 100% hype. In the real world such a capacity is pretty much useless. No one cares about deploying ten thousand little apps that do literally nothing, that just go ‘hello world.’

The tricky bit with containers is actually linking them together. When you start with static hosts, or even VMs, they don’t change very often, so you don’t realise how much interconnection there is between your different applications. When you destroy and recreate your applications in their entirety via containers, you discover that you actually have to recreate all that plumbing on the fly and automate that and make it more agile. That can catch you by surprise if you don’t know about it ahead of time.
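To make that “plumbing” point concrete, here is a hedged sketch (container names and images are invented for illustration) of one common way to recreate the wiring on the fly: a user-defined Docker network that lets a rebuilt container find its dependency by name, using the Docker SDK for Python.

```python
# Hedged sketch of recreating inter-container "plumbing" on the fly:
# containers on a user-defined network can find each other by DNS name,
# so a rebuilt container needs no hard-coded addresses. Names and
# images here are illustrative only.
import docker

client = docker.from_env()

net = client.networks.create("app-net", driver="bridge")

cache = client.containers.run("redis:7-alpine", name="cache",
                              network="app-net", detach=True)

# Destroy and recreate the app container as often as you like; it still
# reaches its dependency by the name "cache" on app-net.
app = client.containers.run(
    "python:3.12-alpine",
    command=["python", "-c", "import socket; print(socket.gethostbyname('cache'))"],
    network="app-net",
    detach=True,
)
app.wait()                      # let the one-shot command finish
print(app.logs().decode())      # prints the current IP of "cache"
```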

Three: ‘Deployment is straightforward’

We’ve been running containers in production for a year now. Before that we were playing around a little bit with some internal apps, but now we run everything except one application on containers in production. And that was a bit of a paradigm change for us. The line that Docker gives is that you can take your existing apps and put them in a container and they will work in exactly the same way. Well, that’s not really true. You have to think about it a little differently, especially with the deployment process.

An example of a real ‘gotcha’ for us was that we presumed systemd and Docker would play nice together, and they don’t. That really hit us in the deployment process – we had to delete the old container and start a new one using systemd, and that was always very flaky. Don’t try to home-grow your own deployment tooling; use something that is designed to work with Docker.

Click here to learn more about Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA).

IBM Q4 figures indicate painful cloud transition

Analysts have warned that IBM faces a transformation that could make it a leaner operator – and potentially a meaner one for staff.

IBM’s reported revenue of $22.1 billion for Q4 2015, down 9% on the same quarter a year earlier, indicates that growth in cloud and analytics sales is failing to offset declines in its traditional business. The $4.5 billion of earnings on that revenue, however, was better than Wall Street analysts had expected.

Total cloud revenue for the hybrid IT vendor and cloud service provider was $10.2 billion, while its as-a-service sales were $4.5 billion. According to IBM, it has a run rate of $5.3 billion for cloud delivered as a service, and its analytics revenue was up 7% on the same period in 2014.

With IBM now generating 35% of its sales income from cloud, analytics, mobile, social and security, it is in the middle of a painful turnaround that has led to a prolonged period of underperformance, according to Wall Street analyst Kulbinder Garcha at Credit Suisse. Large parts of IBM’s traditional business are being cannibalised by the cloud, warned Garcha. Sales of hardware, operating systems and non-cloud services are still a significant part of IBM’s vital functions, said the analyst, since they account for more than 40% of its business.

As enterprises move to the cloud, there is a danger they will migrate to one of the big three cloud suppliers while IBM is still in transition, said analyst Clive Longbottom, service director at Quocirca. However, enterprises may prioritise the value of IBM’s consultancy skills over the lower prices of the top three cloud service providers (AWS, Google and Azure), according to Longbottom. “I still believe that IBM will remain a major force in the IT world, it just has to make sure it positions and messages itself effectively to its existing customers and to its prospects,” said Longbottom.

There is still a danger for IBM staff as the company enters a stage of metamorphosis. “IBM’s cost of sale for cloud will be lower than its cost of sale for hardware, operating systems and software in the old world, which is good for the company,” said Longbottom. “However, this will also result in a lot of excess human resource fat in the company. Expect redundancies leading to a far leaner IBM in the future.”