Enterprise SDN – Harnessing ‘Containers as a Service’ | @CloudExpo #SDN #Cloud

A major market for telcos will be the enterprise adoption of SDN. They will be able to harness the SDN/NFV innovations from the telco industry, and apply the technologies within their data centres, as well as using new telco services that they enable.
As SearchDataCenter describes, NFV offers the potential to unify the data centre, and will be driven under the overall umbrella of the SDDC – the Software Defined Data Centre.

VMware explains Why Enterprises are Ready for Network Virtualization, and positions its NSX technology as a platform for the SDDC. Example use cases are being pioneered, such as the Virtual Customer Edge – the ability to virtualize the customer edge through the creation of a virtualized platform on customer premises.


Secrets of the Usable REST API By @JKRiggins | @CloudExpo #API #Cloud

Teowaki founder Javier Ramirez kicked off his Future of Web Apps presentation with a striking headline from a local town newspaper:
“I’ve been posting my letters in the dog poo box for two years.”
In an effort to class up the North Yorkshire town (or perhaps to save on supplies), the town had painted dog waste bins the same elegant red of the public mailboxes.
“Everybody agrees web usability is a good investment,” Ramirez said, but “when I use an API, usability is not that cool.” He then ran through the tedious process of getting information via the Amazon API: “ten simple steps,” he said, tongue in cheek.


[session] Cognitive Computing: How Big Data Becomes Big Insights | @CloudExpo #CognitiveComputing

Eighty percent of a data scientist’s time is spent gathering and cleaning up data, and 80% of all data is unstructured and almost never analyzed. Cognitive computing, in combination with Big Data, is changing the equation by creating data reservoirs and using natural language processing to enable analysis of unstructured data sources. This is impacting every aspect of the analytics profession from how data is mined (and by whom) to how it is delivered. This is not some futuristic vision: it’s happening right now in industries across the globe.


Public cloud service revenue forecast to top $200 billion in 2016

This year the global public cloud services market will grow by 16.5 per cent on last year’s total of $175 billion in sales, according to market analyst Gartner. Total sales of the various cloud services will be worth $204 billion, it forecast.

The most exciting market to be in will be infrastructure services, with its 38.5% rate of expansion making it the fastest growing cloud market. Sales of infrastructure as a service (IaaS) will create $22.4 billion in revenue in 2016, according to Gartner’s forecast.

Cloud advertising is the largest segment of the global cloud services market. Though it is growing at around a third of the rate of IaaS (at 13.6%), its sales in 2016 will reach $90.3 billion. The next biggest segment is predicted to be sales of business processes (BPaaS), which will be worth $42 billion, while cloud application services (SaaS) will create $37.75 billion of revenue in 2016. Surprisingly, cloud management and security will be worth a relatively lowly $6.248 billion, a figure likely to expand as the cloud industry matures.

IaaS is booming because enterprises are abandoning the idea of building their own data centres and moving their infrastructure to the public cloud, according to report author Sid Nag, research director at Gartner. However, Nag had words of warning for vendors in this area. “Certain market leaders have built a significant lead in this segment, so providers should focus on creating differentiation for success,” said Nag.

This year it will be impossible to go wrong in the public cloud as high rates of growth will be enjoyed across all markets. Gartner expects this to continue through 2017, said Nag. “This strong growth reflects a shift away from legacy IT services to cloud-based services, due to increased trend of organisations pursuing a digital business strategy,” said Nag.

VMware layoffs will herald year of mass global IT redundancies, says analyst

Cloud-driven IT industry convergence will result in 330,000 job losses across the globe in 2016, according to one analyst.

The prediction comes from IT market watcher Trip Chowdhry at Delaware-based Global Equities Research, following speculation that VMware is to make 5% of its workforce (around 900 staff) redundant as VMware’s parent company EMC merges with Dell.

The job losses at VMware, according to a report in Fortune magazine, will be a consequence of a restructuring of VMware intended to make the merger deal look more advantageous to investors. VMware is just one of a number of EMC Federation companies, a roster that also includes RSA Security, VCE and Pivotal. Cynics have suggested, according to Fortune, that Dell’s owner Michael Dell was potentially getting VMware at a bargain-basement price, since its stock was being valued on the basis of the parent company even though it has outperformed EMC shares. The redundancies may help give the investors a better deal as the convergence of the IT giants continues, said the report.

According to analyst Chowdhry, this is a pattern that will be repeated throughout 2016, as the boom in cloud computing drives IT industry consolidation. The shift to cloud computing, said Chowdhry, will make much existing IT expertise unnecessary, particularly the staff once needed to support back-end operations. Around 70% of the work done in IT goes on at the back end, Chowdhry told clients in a briefing. As a result, the number of back-end staff across the IT industry who face redundancy in 2016 could hit 330,000, he said.

According to Chowdhry’s figures, the highest percentage of losses will be at HPE, HP, Yahoo and Yelp, all of which can expect to let 30% of their staff go. Losses at the two HP spin-offs would amount to 72,000 and 86,000 redundancies respectively. IBM, facing 25% staff layoffs in 2016, would put 95,000 IT staff back onto the employment market. Even Cisco, Juniper, Oracle and Microsoft would face redundancies, shedding a collective 80,000 staff between them.

The good news, however, is that non-back-end IT jobs, involving other functional and customer domain responsibilities, are set to boom. However, Chowdhry warned, these jobs can’t be filled immediately, as the education system is unable to create the necessary skills in time.

Docker buys Unikernel Systems to make micro containers

US-based container software pioneer Docker has announced the acquisition of Cambridge start-up Unikernel Systems, so it can create even tinier self-contained virtual system instances.

Open source based Docker automates the running of applications in self-contained units of operating system software (containers). It traditionally did this by creating a layer of abstraction via operating-system-level virtualization on Linux. This resource isolation allows multiple independent jobs to run within a single Linux instance, which obviates the need to spin up a new virtual machine. The technology provided by Unikernel Systems, according to Docker, takes the autonomy of individual workloads to a new level, with independent entities running on a virtual server at an even smaller, more granular level.

The new expertise bought by Docker means that it can give every application its own Virtual Machine with a specialized unikernel, according to Docker community marketing manager Adam Herzog.

Unikernel technology takes away the rigid distinction between operating system kernels and the applications that run on top of them, creating more fluidity and exchange between the two. When source code is compiled, a custom operating system is created for each application, which makes for a much more efficient way of working and more effective functions. The key to the efficiency of unikernels is their size and adaptability, according to the Docker blog. Bringing them into the open source stable will make them more readily available to developers, it argued.

Unikernel Systems was founded by veterans of the Xen hypervisor project, including Anil Madhavapeddy, David Scott, Thomas Gazagnaire and Amir Chaudhry. Since unikernels can run on ‘bare metal’ (hardware without any operating system or hypervisor), they take the efficiency of virtual machines further, according to the Docker blog. Unikernels are an important part of the future of the container ecosystem, since they effectively absorb the operating system into the containers, Scott said. Since an application only needs to take on the scraps of operating system code that it requires, unikernels could eventually make the standalone operating system redundant, the blog claimed.

Cloud academy: Rudy Rigot and his new Holberton School

Business Cloud News talks to Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA) keynote speaker Rudy Rigot about his new software college, which opens today.

Business Cloud News: Rudy, first of all – can you introduce yourself and tell us about your new Holberton School?

Rudy Rigot: Sure! I’ve been working in tech for the past 10 years, mostly in web-related stuff. Lately, I’ve worked at Apple as a full-stack software engineer for their localization department, which I left this year to found Holberton School.

Holberton School is a 2-year community-driven and project-oriented school, training software engineers for the real world. No classes, just real-world hands-on projects designed to optimize their learning, in close contact with volunteer mentors who all work for small companies or large ones like Google, Facebook, Apple, … One of the other two co-founders is Julien Barbier, formerly the Head of Community, Marketing and Growth at Docker.

Our first batch of students started last week!

What are some of the challenges you’ve had to anticipate?

Since we’re a project-oriented school, students are mostly graded on the code they turn in, which they push to GitHub. Some of this code is graded automatically, so we needed to be able to run each student’s (or each team’s) code automatically in a fair and equal way.

We needed information on the “what” (what is returned in the console), but also on the “how”: how long does the code take to run? How much resource is consumed? What is the return code? Also, since Holberton students are trained on a wide variety of languages, how do you ensure you can grade a Ruby project, and later a C project, and later a JavaScript project, etc., with the same host while minimizing issues?

Finally, we had to make sure that a student can commit code that is as malicious as they want: we can’t have a human check it before running it, and it should only break their own program, not the whole host.
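A minimal sketch of the kind of harness these requirements imply (hypothetical code, not Holberton’s actual implementation): capture a submission’s console output and return code (the “what”), plus its wall-clock time and whether it hit a timeout (the “how”). In production this would run inside a Docker container for isolation; the sketch below runs directly on the host for brevity.

```python
import subprocess
import sys
import time

def run_submission(cmd, timeout=10):
    """Run a student's program and capture the 'what' (console output,
    return code) and the 'how' (wall-clock time, whether it timed out).
    A real grader would launch this inside a container for isolation."""
    start = time.monotonic()
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout)
        return {"stdout": proc.stdout,
                "returncode": proc.returncode,
                "seconds": time.monotonic() - start,
                "timed_out": False}
    except subprocess.TimeoutExpired:
        # Runaway or malicious code is cut off; it only breaks its own run.
        return {"stdout": "", "returncode": None,
                "seconds": timeout, "timed_out": True}

# A well-behaved "submission": print a greeting and exit cleanly.
result = run_submission([sys.executable, "-c", "print('Hello, Holberton')"])
```

The timeout here addresses only runaway execution; the isolation of genuinely malicious code is the part the containers provide.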

So how on earth do you negotiate all these?

Our project-oriented training concept is new in the United States, but it has been successful for decades in Europe. The European schools, which built their programs before containers became mainstream, typically run the code directly on a host system that has all of the software they need installed, and then simply run a chroot before running the student’s code. This didn’t solve all of the problems, while containers did, in a very elegant way; so we took the container road!

HolbertonCloud is the solution we built to that end. It fetches a student’s code on command, then runs it based on a Dockerfile and a series of tests, and finally returns information about how that went. The information is then used to compute a score.
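The scoring algorithm itself isn’t described in the interview; purely as an illustration, here is one hypothetical way the per-test information could be folded into a score (the function name, result fields, and weighting are all invented):

```python
def score(test_results, max_points=100):
    """Hypothetical scoring rule: the fraction of passing tests,
    scaled to max_points. A test that timed out earns no credit,
    even if it happened to produce the right output."""
    if not test_results:
        return 0
    passed = sum(1 for r in test_results
                 if r["passed"] and not r["timed_out"])
    return round(max_points * passed / len(test_results))

# Four tests: two clean passes, one failure, one pass that ran too long.
results = [
    {"passed": True,  "timed_out": False},
    {"passed": True,  "timed_out": False},
    {"passed": False, "timed_out": False},
    {"passed": True,  "timed_out": True},
]
```

A real grader would presumably also weight the resource measurements (memory, CPU) that the container run records, not just pass/fail.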

What’s amazing about it is that by using Docker, building the infrastructure has been trivial; the hard part has been about writing the tests, the scoring algorithm … basically the things that we actively want to be focused on!

So you’ve made use of containers. How much disruption do you expect their development to engender over the coming years?

Since I’m personally more on the “dev” end of devops, I see how striking it is that containers restore focus on actual development for my peers. So I’m mostly excited by the innovations that software engineers will be focusing on instead of the issues that containers are taking care of for them.

Of course, it will be very hard to measure which of those innovations were able to exist because containers are involved; but it also makes them innovations about virtually every corner of the tech industry, so that’s really exciting!

What effect do you think containers are going to have on the delivery of enterprise IT?

I think one takeaway from the very specific HolbertonCloud use case is that cases where code can be run trivially in production are getting rare, and one needs guarantees that only containers can bring efficiently.

Also, a lot of modern architectures fulfil needs with systems made of more and more micro-services, since we now have enough hindsight to see the positive outcomes for their resilience. Each micro-service may have different requirements and therefore be best built with a different technology, so managing a growing set of different software configurations is becoming increasingly important. Considering the positive outcomes, this trend will only keep growing, making the need for containers keep growing as well.

You’re delivering a keynote at Container World. What’s the main motivation for attending?

I’m tremendously excited by the stellar line-up! We’re all going to get amazing insight from many different and relevant perspectives, that’s going to be very enlightening!

The very existence of Container World is exciting too: it’s remarkable how far containers have come in the span of just a few years.

Click here to learn more about Container World (February 16 – 18, 2016 Santa Clara Convention Center, USA)

Want Your Team in the Super Bowl? Download a Virtualization Optimization Assessment (VOA)

We’re coming to the end of the NFL Playoffs. When I wrote the first draft of this post, there were four teams remaining with a chance to make the Super Bowl in Santa Clara, CA. After yesterday’s action, it’s down to two (being a huge Patriots fan, this is difficult to swallow). There is a very good reason why the Patriots, Broncos, Cardinals and Panthers were still standing heading into yesterday’s conference finals. They built teams with solid defenses and offenses, created the perfect schemes to run, pass, and defend, and drafted or brought in the perfect players to execute those schemes. These winners did their research. They diagnosed their needs and fixed the issues that might have prevented them from being successful. Teams like the Browns, 49ers, or Chargers did not. Sorry if you’re a fan of one of those teams, but the reality is, based on their records, these teams likely didn’t make the correct investments to ensure long-term success and a return on those investments. They didn’t implement a system to help them understand where their shortfalls were so they could correct them. They invested in the wrong players (hello, Johnny Manziel) and are now stuck with underachieving assets.

Does this remind you of your virtualization infrastructure at all? Do you have the tools to find out how your investments are running? If not, it would be a great idea to run a free VMware vSphere Optimization Assessment (VOA) to get a better understanding.

VMware vSphere Optimization Assessment (VOA)

VMware vSphere Optimization Assessment (VOA) is a downloadable tool that essentially lets you install vRealize Operations to monitor your vSphere hosts. It runs in your environment for a month or so and differs from a trial in that it documents how your virtualized environment is running. It will let you know if you’re over-provisioned and provide capacity planning. It’s proactive in identifying issues and will even help fix them. At the end of the assessment, it provides a detailed report of your environment and helps you identify any potential weaknesses in your infrastructure. If you download the tool from GreenPages, we will review the report with you, and if additional software and licenses are required, we will work with you on the logistics. Yes, OK, there’s the rub: we’d like you to purchase more licenses if necessary; however, we’d also tell you if you’re oversaturated with licenses and prevent you from over-purchasing licenses you don’t require.

It’s always a good idea to step back and assess where you’re at when you’ve invested a lot of money and resources over the years in an area of your IT environment. When reviewing monitoring and management toolsets with a broad stroke, it’s easy to say they’re nice to have but not absolutely necessary. Yet if you dive deeper, there are many features and functions that make the investment worthwhile in the long-term growth and planning of your virtualization environment.

What the VOA can help identify

vRealize Operations enables IT to see not just immediate issues but also potential future problems, which can dramatically reduce unplanned outages. With predictive analytics and Smart Alerts, it proactively identifies and remedies system issues, while dynamic thresholds automatically adapt to your environment to produce fewer, more specific alerts, resulting in a 30% decrease in the time to diagnose and resolve performance issues. That’s three hours of your day you get back to work on improving and evolving your environment rather than troubleshooting constant alert noise and notifications.

The old saying “better safe than sorry” speaks volumes in a virtual environment, especially around over-provisioning. Research has shown that nine out of ten virtual machines are over-provisioned. While this may not seem like a bad thing on the surface, it leads to diminishing efficiency and optimization within the virtual infrastructure and, more importantly, increased infrastructure costs. Being able to manage your VMs more closely and effectively with vRealize Operations allows you to finely tune each VM, allocating only the resources that are really necessary and, as a result, saving on potential hardware costs. The solution provides a holistic overview of your virtualized environment and deep insight into the health of your infrastructure that would otherwise be invisible.

Capacity planning is another key feature of the vRealize Operations toolset allowing you to model future resource needs and alert on constraints before those constraints result in unexpected system downtime.

As I said, I wrote the first draft of this post before yesterday’s games. The example I had in here originally involved the Patriots’ Malcolm Butler. Well, guess what? I’m keeping it in. Malcolm Butler was an unknown defensive back out of the University of West Alabama. He was an undrafted free agent whom the Patriots signed in 2014. Butler went on to make the game-saving interception against the Seahawks in Super Bowl XLIX, allowing the Patriots to win their fourth Super Bowl title. It wasn’t sheer luck that the Patriots signed Malcolm Butler. A lot of scouting helped identify key traits that other teams perhaps missed. Once he was on the team, solid coaching and hard work allowed Butler to excel. Because of this, the Patriots had the utmost confidence to put him in during the most crucial defensive play of the season. The Patriots could have easily drafted a much more touted defensive back from a larger school, but they didn’t assume a more recognized player would do the job. They made an investment and didn’t assume the easier model of managing their draft was the better model. They implemented a system to strategically deliver a better result, and the reward was a Super Bowl.

The VOA is a no-cost tool and a no-brainer investment. The long-term results of implementing such a tool could be very rewarding for your environment.

 

Click to start your free assessment

 

By Rob O’Shaughnessy, Director of Software Sales & Renewals

 

Change in Citrix XenApp Architecture Investigation

The Issue

The virtualization revolution has brought BYOD networks and virtual offices into reality. Today, businesses have multiple options to create a virtual infrastructure and remotely deliver desktops and applications. One of those options is Citrix XenApp, which recently released XenApp 7.6. Because XenApp 7.6 uses a different Citrix XenApp architecture than previous versions, there […]

The post Change in Citrix XenApp Architecture Investigation appeared first on Parallels Blog.

If Mac OSes Had Dating Profiles

Let’s be honest, the Mac OSes already seem to have full-fledged personalities, so why not support them in their quest for love? We know all of you Apple aficionados already have an opinion as to which OS is best, so let’s see how that holds up in the world of (fictional) online dating: Cheetah “Even […]

The post If Mac OSes Had Dating Profiles appeared first on Parallels Blog.