Cloud education struggling at US universities – meaning an even wider skills gap

Cloud computing skills are certainly in demand, whether it is microservices, containers, or DevOps. Yet according to a new US study, colleges and universities are struggling to adapt.

A new report from B2B research company Clutch says that, despite LinkedIn naming ‘cloud and distributed computing’ the number one global skill of 2016, the spiralling cost of course resources, limited on-campus expertise and the fast pace of industry innovation are all making it hard for institutions to keep up.

“I do not think any university is able to teach this topic because of its lightning fast evolution,” said Dr. Ken Birman, a professor at Cornell University. “As a purely pragmatic matter, we cannot teach the area until it begins to slow down and hold still for at least a few years at a time.”

This certainly makes sense as a cause for concern. Birman added that a new product might take a few weeks to learn on its own; add the time needed to develop a lecture, set the examination and so forth, and by the time the material is out there it can quickly become obsolete.

Clutch therefore took a look at US establishments which are teaching cloud computing courses to see how they compare. The University of North Florida offers ‘CEN 6086: Cloud Computing’, a graduate course with a heavy research component, which covers the various ‘as a service’ models, public, private and hybrid deployments, and cloud architectures. According to Dr. Sanjay Ahuja, a full professor of computer science at the university, the course is a “hybrid of research-driven content and work with public clouds, specifically Amazon EC2 and Google Compute Engine.”

This is all well and good – but it’s worth noting that Amazon Web Services (AWS) released more than 1,000 updates last year. “In terms of resources, a knowledgeable professor, grad student, or assistant is necessary,” said Ahuja. “Someone who is very educated about cloud computing and architecture, including platforms like Hadoop, and automation tools, like Chef and Puppet.”

This is an area this publication has covered for some time – the latest piece of research on the topic, from Rackspace earlier this week, picked out microservices as the sector set to soar over the coming 12 months, while containers and automation continue to perform strongly in job postings. In the meantime, Ahuja recommends that an industry leader with an interest in teaching could step in to fill the gaps.

Another issue is cost. For students, who are not usually known for being flush with cash, the problem is not the course itself but the bill for using the likes of AWS, Google and Microsoft in the more hands-on elements. Dr. Majid Sakr, a professor of computer science at Carnegie Mellon, tackles this by making budgets part of the grade: if a student goes over budget, they take a 10% penalty on their final grade, and if they spend more than double the budget they automatically get a 0. “What we’ve heard from our industry partners is that it would be wonderful if the students are going to program this very large cloud, that they can also be thoughtful about the budget,” said Sakr.
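To make the rule concrete, here is a minimal sketch of how such a penalty might be applied in code; the function and the proportional interpretation of the 10% penalty are our own illustration, not Carnegie Mellon’s actual grading script.

```python
def apply_budget_penalty(raw_grade: float, spend: float, budget: float) -> float:
    """Hypothetical sketch of the budget rule described above.

    Interprets the 10% penalty as proportional to the grade; a flat deduction
    would be an equally valid reading of the article.
    """
    if spend > 2 * budget:      # more than double the budget: automatic zero
        return 0.0
    if spend > budget:          # over budget: 10% penalty on the final grade
        return raw_grade * 0.9
    return raw_grade

# Example: a student scoring 88 who spent $130 against a $100 budget
print(apply_budget_penalty(88.0, spend=130.0, budget=100.0))  # 79.2
```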

So what is the solution? There are plenty of certifications and courses available from the vendors themselves – this publication examined the better ones in April, including offerings from AWS, Microsoft and Cisco – but these all come at a cost of their own. Riley Panko, a content developer and marketer at Clutch and the author of the report, noted that this does not solve the problem of the dreaded ‘skills gap’.

“Traditional university and college programs are where a large proportion of the workforce learns their career skills,” she told CloudTech. “Non-traditional options need to be purposefully sought out, which students may not do if they have no exposure to cloud computing.

“By having limited cloud computing course options within traditional higher education programs, students potentially won’t gain these skills, despite being qualified,” Panko added. “This does not help alleviate the skills gap present in cloud computing.”

Kevin McDonald, who teaches a cloud computing course at Georgetown University, noted the importance of computer science students learning about cloud. “It’s becoming more important to understand cloud computing simply because… it’s being adopted quite rapidly now,” he said. “Having in-depth experience and knowledge of the cloud is probably a core competency going forward.”

You can read the full Clutch report here.

Microsoft Enters Africa

Africa is catching on to technology, with many countries adopting computers and the cloud in a big way. The widespread work of private organizations, philanthropists and nongovernmental organizations has helped to reduce poverty and illiteracy in many communities, and this new generation of educated people is turning to technology to improve their lives and those of others in their communities.

With such a trend, it’s only natural for major companies to make a beeline for the continent to tap into the new and growing opportunities it presents. Microsoft has announced that it will open two data centers there by next year to serve customers who use the Azure cloud platform. They will be among the largest data centers ever built on the continent, and Microsoft plans to open them in Johannesburg and Cape Town, both in South Africa.

The choice of location is a little disappointing from a technology adoption point of view. South Africa is the most developed country on the African continent, has one of the highest literacy rates, and already has a great deal of economic development under way. Microsoft could have chosen a developing economy to boost its presence and impact on local markets.

In its defense, though, Microsoft would argue that this is the best choice from an economic standpoint. South Africa is a stable nation with a large educated workforce, so the data centers are more likely to be safe and secure there. In addition, cloud growth is happening faster in this country than in other parts of Africa.

According to research firm International Data Corporation (IDC), cloud revenue in South Africa was a mere $243 million last year, but it is expected to grow at an annual rate of 20% through 2021. Considering these numbers, it’s only fair that Microsoft caters to this growing market before expanding to other regions of the continent.
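For context, a quick back-of-the-envelope projection shows where that gets to, assuming the 20% rate compounds annually on the reported $243 million base:

```python
# Back-of-the-envelope projection: a $243m base compounding at 20% a year
# through 2021. Figures are indicative, not the research firm's own forecasts.
revenue_musd = 243.0
for year in range(2017, 2022):
    revenue_musd *= 1.20
    print(f"{year}: ~${revenue_musd:,.0f}m")
# On these assumptions, 2021 lands at roughly $605m.
```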

Like Microsoft, Alphabet and Amazon have been working to capture this potential market, but neither has so far opened a dedicated operational center on the continent. In this sense, Microsoft has a lead over the other two, but we can expect both Alphabet and Amazon to follow suit soon.

Reports show that these three companies together committed $31.54 billion to capital expenses and leases in 2016 alone, a whopping 22% more than they spent in 2015. Though not every penny went on cloud infrastructure or data centers, a substantial part of this investment went into their cloud computing lines of business.

Microsoft’s entry marks a new beginning for Africa, and hopefully it can act as a catalyst for the continent to develop more rapidly. We can hope that these companies will soon move beyond lucrative countries like South Africa and spread their operations to poorer countries too, giving every community a chance at development.

Overall, this is a positive move that could transform Africa, much as similar investment did for many Asian countries grappling with social and developmental issues.

The post Microsoft Enters Africa appeared first on Cloud News Daily.

Is edge computing set to blow away the cloud?

Just about every new piece of technology is considered disruptive to the extent that it is expected to replace older technologies. Sometimes, as with the cloud, old technology is simply rebranded to make it more appealing to customers and thereby create the illusion of a new market. Let’s remember that cloud computing had previously existed in one shape or another: at one stage it was called on-demand computing, and then it became ‘application service provision’.

Now there is edge computing, which some people also call fog computing, and which some industry commentators feel is going to replace the cloud entirely. Yet the question has to be: will it really? The same claim was made when television was invented – its arrival was supposed to be the death of radio, yet people still tune into radio stations in their thousands every day of the year.

Of course, there are some technologies that are truly disruptive in that they change people’s habits and their way of thinking. Once people enjoyed listening to their Sony Walkmans; today most people listen to their favourite tunes on smartphones, thanks to the iPod and to Steve Jobs’ launch of the first iPhone in 2007, which put the internet in our pockets and more besides.

Levine’s prophecy

So why do people think edge computing will blow away the cloud? The claim is made in many online articles. Clint Boulton, for example, wrote about it in his Asia Cloud Forum article, ‘Edge Computing Will Blow Away The Cloud’, in March this year. He cites venture capitalist Peter Levine, a general partner at Andreessen Horowitz, who believes that more computational and data processing resources will move towards “edge devices” – such as driverless cars and drones – which make up at least part of the Internet of Things. Levine prophesies that this will mean the end of the cloud, as data processing moves back towards the edge of the network.

In other words, the trend until now has been to centralise computing within the data centre, whereas in the past it was often decentralised or localised nearer to the point of use. Levine sees the driverless car as, in effect, a data centre on wheels: it has more than 200 CPUs working to keep it on the road and out of accidents. The nature of autonomous vehicles means that their computing capabilities must be self-contained, and to ensure safety they minimise any reliance they might otherwise have on the cloud. Yet they don’t dispense with it.

Complementary models

The two approaches may in fact end up complementing each other. Part of the argument for bringing computation back to the edge comes down to increasing data volumes, which lead to ever more frustratingly slow networks. Latency is the culprit. Data is becoming ever larger, so there is going to be more data per transaction, more video and more sensor data, and virtual and augmented reality will play an increasing part in that growth too. As volumes grow, latency will become more challenging than it was previously. Furthermore, while it might make sense to put data close to a device such as an autonomous vehicle to eliminate latency, a remote way of storing data via the cloud remains critical.

The cloud can still be used to deliver certain services too, such as media and entertainment. It can also be used to back up data and to share data emanating from a vehicle for analysis by a number of disparate stakeholders.

From a data centre perspective, and moving beyond autonomous vehicles to a general operational business scenario, creating a number of smaller data centres or disaster recovery sites may reduce economies of scale and make operations less efficient. Latency might be mitigated, but the data may also be held within the same circle of disruption, with disastrous consequences when disaster strikes; so for the sake of business continuity, some data may still have to be stored or processed elsewhere, away from the edge of the network. In the case of autonomous vehicles, which must operate whether a network connection exists or not, it makes sense for certain types of computation and analysis to be completed by the vehicle itself. However, much of this data is still backed up via a cloud connection whenever one is available. So edge and cloud computing are likely to follow a hybrid approach rather than a standalone one.
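As a purely illustrative sketch of that hybrid pattern – latency-critical work handled on the device, with data queued for the cloud whenever a connection is available – the example below uses made-up function names rather than any vendor’s actual API:

```python
from collections import deque

# Illustrative only: the function names and threshold are hypothetical.
upload_queue = deque()

def analyse_locally(reading: dict) -> str:
    """Stand-in for on-board analysis (e.g. obstacle detection)."""
    return "brake" if reading.get("obstacle_distance_m", 100) < 5 else "continue"

def send_to_cloud(reading: dict) -> None:
    """Stand-in for an upload call (backup, fleet-wide analytics, sharing)."""
    print("uploaded:", reading)

def handle_sensor_reading(reading: dict, cloud_available: bool) -> str:
    decision = analyse_locally(reading)   # decided at the edge, no round trip
    upload_queue.append(reading)          # retained for later cloud backup
    if cloud_available:
        while upload_queue:               # flush the backlog when connected
            send_to_cloud(upload_queue.popleft())
    return decision

# Example: an offline decision now, backlog flushed once connectivity returns.
handle_sensor_reading({"obstacle_distance_m": 3}, cloud_available=False)
handle_sensor_reading({"obstacle_distance_m": 50}, cloud_available=True)
```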

Edge to cloud

Saju Skaria, senior director at consulting firm TCS, offers several examples of where edge computing could prove advantageous in his LinkedIn Pulse article, ‘Edge Computing Vs. Cloud Computing: Where Does the Future Lie?’. He certainly doesn’t think that the cloud is going to blow away.

“Edge computing does not replace cloud computing…in reality, an analytical model or rules might be created in a cloud then pushed out to edge devices… and some [of these] are capable of doing analysis.” He then goes on to talk about fog computing, which involves data processing from the edge to a cloud. He is suggesting that people shouldn’t forget data warehousing too, because it is used for “the massive storage of data and slow analytical queries.”

Eating the cloud

In spite of this argument, Gartner’s Thomas Bittman seems to agree that the ‘Edge Will Eat The Cloud’. “Today, cloud computing is eating enterprise datacentres, as more and more workloads are born in the cloud, and some are transforming and moving to the cloud… but there’s another trend that will shift workloads, data, processing and business value significantly away from the cloud. The edge will eat the cloud… and this is perhaps as important as the cloud computing trend ever was.”

Later on in his blog, Bittman says: “The agility of cloud computing is great – but it simply isn’t enough. Massive centralisation, economies of scale, self-service and full automation get us most of the way there – but it doesn’t overcome physics – the weight of data, the speed of light. As people need to interact with their digitally-assisted realities in real-time, waiting on a data centre miles (or many miles) away isn’t going to work. Latency matters. I’m here right now and I’m gone in seconds. Put up the right advertising before I look away, point out the store that I’ve been looking for as I drive, let me know that a colleague is heading my way, help my self-driving car to avoid other cars through a busy intersection. And do it now.”

Data acceleration

He makes some valid points, but he falls into the argument that has often been used about latency and data centres: that they have to be close together. The truth, however, is that wide area networks will always be the foundation stone of both edge and cloud computing. Secondly, Bittman clearly hasn’t come across data acceleration tools such as PORTrockIT and WANrockIT. While physics is certainly a limiting and challenging factor that will always be at play in networks of all kinds – including WANs – it is possible today to place your datacentres at a distance from each other without suffering an increase in data and network latency. Latency can be mitigated, and its impact can be significantly reduced no matter where the data processing occurs, and no matter where the data resides.

So let’s not see edge computing as a new solution. It is but one solution, and so is the cloud. Together the two technologies can support each other. One commentator says in response to a Quora question about the difference between edge computing and cloud computing that “edge computing is a method of accelerating and improving the performance of cloud computing for mobile users.” So the argument that edge will replace cloud computing is a very foggy one. Cloud computing may at one stage be re-named for marketing reasons – but it’s still here to stay.

How to open .exe files on a Mac

Problem: You need to open an .exe file but you have a Mac®. Solution: You can easily open an .exe from your Mac by using Parallels Desktop® for Mac.   I frequently get questions like this from Mac users: My brother sent me a file named “naib.exe”, but I can’t open it on my Mac. How […]

The post How to open .exe files on a Mac appeared first on Parallels Blog.

[session] Cloud Security by Design By @EmeSec | @CloudExpo #Cloud #DevOps #Compliance

As enterprise cloud becomes the norm, businesses and government programs must address compounded regulatory compliance requirements related to data privacy and information protection. The most recent – Controlled Unclassified Information (CUI) rules and the EU’s GDPR – have board-level implications, and companies still struggle to demonstrate due diligence. Developers and DevOps leaders, as part of the pre-planning process and the associated supply chain, could benefit from updating their code libraries and designs to incorporate these changes.
In her session at 20th Cloud Expo, Maria Horton, CEO of EmeSec, will discuss how addressing the new liabilities of sensitive data tracking for GDPR and CUI through code design can reap huge benefits for organizations and for the developers that do just that, whether in cloud, IoT or traditional environments.


[session] Know Your Adversary: A Live Hack Simulation By @VinnyTroia | @CloudExpo #Cloud #Security

When the NSA’s digital armory was leaked, it was only a matter of time before the code was morphed into a ransom-seeking worm. This talk, designed for C-level attendees, demonstrates a live hack of a virtual environment to show the ease with which any average user can leverage these tools and infiltrate a network environment.
The session will also include an overview of the Shadow Brokers NSA leak.


Why SaaS is a ‘runaway train that’s showing no signs of stopping’ in business

Three quarters of organisations say 80% of their apps will be software as a service (SaaS) by 2020, while 38% of companies are already running ‘almost entirely’ on SaaS, according to a new report from BetterCloud.

The report, which polled more than 1,800 IT professionals, also found that larger organisations are not the laggards many believe them to be. Just over a quarter (27%) of medium to large enterprises run the vast majority of their applications as SaaS today, but this number is expected to rise to 69% by 2020 – compared with 75% for SMBs – and to 86% after 2022.

“Larger enterprises are slower to act, but once they do, they add legitimacy, turning a fringe trend to a mainstream mainstay,” the report notes. “In regards to SaaS adoption, that tipping point has occurred.”

BetterCloud dubs organisations that have moved almost everything away from on-premises software ‘SaaS-powered workplaces’. These organisations use 34 SaaS apps on average – more than twice as many as the average workplace – and the benefits they report are evident, if scattered: compared with the average organisation, they were more likely to say SaaS helps attract better talent (52%), with 31% citing employee satisfaction and only 8% and 7% respectively citing communication and cost benefits.

The management side is also worth noting. According to the research, 88% of very small businesses and 74% of small to medium businesses say their SaaS apps are managed by the IT department. That figure drops to 42% among larger organisations, but the trend points to the SaaS-powered workplace ‘empowering IT to run the world’s best workplaces through technology’, as the report puts it. By contrast, BetterCloud’s 2016 report found corporate IT departments drowning in a deluge of SaaS app requests.

“SaaS is a double-edged sword – while it brings incredible benefits, it also creates formidable challenges that are taking the roles and responsibilities of IT to new extremes,” said David Politis, founder and CEO of BetterCloud.

You can download the full report here.

How virtualisation is a vital stepping stone to the cloud

There has been a lot of talk about virtualisation over the years, and you would be forgiven for thinking that every server and every storage system has already had some kind of virtualisation treatment. Yet there are still companies out there that have yet to virtualise, and so have yet to realise the benefits that both virtualisation and the cloud offer.

So, for those who have not gone down the virtualisation route, let me summarise the benefits from a business and technical perspective and, while we are at it, look at virtualisation in the context of cloud adoption and the benefits it brings in a cloud environment. The need to embrace virtualisation normally comes hand in hand with growing hybrid environments, where some of your IT infrastructure is in the cloud and some on-premises. In fact, virtualisation is often seen as a stepping stone into the cloud.

Why virtualise?

Let’s start by looking at the benefits that virtualisation brings to the business. Virtualisation can increase IT agility, flexibility and scalability while creating significant cost savings. Workloads get deployed faster, performance and availability increase, and operations become automated. The result is an IT environment that is simpler to manage and less costly to own and operate. Other key business benefits include the ability to reduce capital and operating costs, to minimise or even eliminate downtime, to provision applications and resources faster, and to enable business continuity and disaster recovery; virtualisation also helps to simplify data centre management.

The key technical benefits of virtualisation are many and include:

  • Encapsulation – the virtual machine is described by a small number of files, typically its virtual disk drive plus a resource description file that contains its virtual hardware requirements
  • Hardware independence – virtual machines separate the operating system from the physical hardware, so they are no longer tightly locked to particular hardware through device drivers. This makes moving a VM from, say, a Dell server to an HP or Lenovo server much more straightforward
  • Live migration – a virtual machine can be moved from one physical host to another with no downtime for the operating system. This is very useful both for load balancing and for maintenance or upgrades of hosts (a sketch follows below)
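To give a flavour of that last point, the open-source libvirt Python bindings expose live migration for KVM hosts; the snippet below is indicative only – the host URIs and VM name are placeholders, and real deployments involve shared storage and careful flag choices.

```python
import libvirt  # requires the libvirt-python bindings on a KVM/libvirt setup

# Indicative sketch: move a running VM between two hosts with no guest downtime.
# Host URIs and the domain name are placeholders, not a recommended topology.
src = libvirt.open("qemu+ssh://source-host/system")
dst = libvirt.open("qemu+ssh://destination-host/system")

dom = src.lookupByName("example-vm")

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied across.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE)

src.close()
dst.close()
```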

Best practice tools and approaches

Tools are available to virtualise existing physical servers, such as VMware Converter and PlateSpin PowerConvert. However, it is often better to build a virtual machine from scratch and reload the application and data. As organisations upgrade their operating systems and applications, they normally do so from a fresh build of Windows or Linux as a virtual machine. In terms of candidates, all Intel x86 servers are now suitable for virtualisation.

The main challenges around virtualisation are normally the risk associated with the migration process, especially downtime, and the licensing policies of the software being run. For this reason, the main servers still running on physical hardware tend to be large transactional databases such as SQL Server clusters and Oracle.

To get the greatest benefit out of virtualisation, it should sit on a shared storage infrastructure. In the early days this meant fibre-channel SAN or network-attached (NFS) storage. More recently, hyper-converged infrastructures have appeared on the scene from companies such as VMware (vSAN), Nutanix and SimpliVity. This approach takes industry-standard x86 servers with large amounts of CPU and RAM, but uses internal disks rather than a SAN, together with high-speed networking. Technologies such as flash and solid-state disks, as well as compression and deduplication, have made these storage systems cheaper and faster for virtualised workloads. Software ensures that all the data is fully redundant across the cluster, with virtual machines load balanced at the same time.

Virtualising servers and storage in preparation for creating a private cloud

Virtualisation is a key requirement for the cloud, whether migrating or building from new. Once virtualised, virtual machines can be migrated to cloud providers via export/import mechanisms such as the Open Virtualisation Format (OVF). In some cases the format of the virtual machine will need to be changed, depending on the source and target platforms. For example, moving from VMware to Microsoft Azure requires conversion to Hyper-V, while Amazon AWS would require conversion to Xen.
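As an indicative example of the disk-format step, the open-source qemu-img tool can convert a VMware VMDK disk into the fixed-size VHD that Azure generally expects; the file names here are placeholders, and a real migration covers far more than the disk image.

```python
import subprocess

# Illustrative only: converts a VMware VMDK disk image to a fixed-size VHD
# ("vpc" format in qemu-img), the format Azure generally expects. File names
# are placeholders; a real migration also covers drivers, networking, etc.
subprocess.run(
    [
        "qemu-img", "convert",
        "-f", "vmdk",              # source format
        "-O", "vpc",               # target format: VHD
        "-o", "subformat=fixed",   # Azure expects fixed-size VHDs
        "source-disk.vmdk",
        "converted-disk.vhd",
    ],
    check=True,
)
```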

Rather than migrating and converting, it is often better to replicate the data over a period of time using solutions such as Zerto, Veeam or DoubleTake. This results in much shorter downtime when switching over from running on-premises to a cloud provider. As an established cloud service provider, iland helps customers to migrate their physical and virtual servers to the public or private cloud.

If this is all starting to sound too good to be true, let me reassure you: there are very few pitfalls. Sure, in the early days there was a slight performance overhead associated with virtualisation compared with the raw physical server, but these days server and storage technology have all but removed it.

Future developments

Virtualisation technology has matured over the past few years. The new developments are really around automation and the virtualisation of networking, which together allow for what is often termed the Software Defined Data Centre. In this world, everything from servers to storage and networking can be defined and controlled through software. The utopia is one where virtual machines can be moved and managed over common networks wherever they happen to be.

Virtualisation offers a host of benefits and very few challenges as the market matures. That said, if businesses fail to right-size VMs and to track and anticipate resource needs, they could face issues such as VM sprawl and over-consumption, so it does need to be tightly managed. My advice is to plan for the long term and make sure there are enough resources on hand to meet future business demands.

Microservices job postings soar over the past 12 months, argues Rackspace

If you want to get ahead, get tooled up on microservices: vacancies for related job roles have gone up by 133% over the last year, according to the latest report from Rackspace.

Microservices, which break applications into smaller, independently deployable components to make them more agile and keep servers from being overloaded, are becoming increasingly prevalent in a world of data overload. Think of the stat, as Microsoft CEO Satya Nadella put it at BUILD earlier this month, that 90% of the data in the world today has been created over the last two years, or the fact that 300 hours of video are uploaded to YouTube every minute, and you get an idea of their importance.

It’s also forcing change in other areas, Rackspace argues. Job postings for agile software development, a fundamental skill for microservices, went up 16% over the past 12 months, while postings for the more traditional service-oriented architecture (SOA) skills declined by 22%.

Skills around container technologies continued to gain ground in the past year, with demand for engineers versed in Google’s Kubernetes up an astonishing 919% since 2016, while Docker continues to tick over nicely at 83%. Docker-related job postings had risen 341% and 991% in the previous two years, so there is something of a slowdown.

“The demand for different technology skills, and consequently, the rise in new job roles are constantly evolving in the IT sector both in the UK and beyond,” said Darren Norfolk, managing director of Rackspace UK. “In 2015, we saw the number of cloud engineering roles grow dramatically, but now we’re seeing a demand for more nuanced, specific skills emerge.

“This will only become more pronounced as businesses expand and look for expertise to help manage different elements of their business,” he added. “Business leaders must be able to plan ahead and make sure they have skills in the right areas in order to benefit both the business going forward, and the employees who work in it.”

CloudTech’s 2017 examination of the key cloud skills needed for success, as provided by Firebrand Training, includes database and big data skills, application security, enterprise cloud migration, cloud enterprise application development, and containers.

Read more: The top five in-demand cloud skills for 2017

Google’s Next Frontier – a New AI Chip

Artificial intelligence, or AI for short, is fast overtaking conventional software as the preferred technology for many operations. This is not a surprising trend, because AI represents the next generation of machines that can learn and carry out certain behaviours without constant intervention from humans. AI is becoming a reality sooner than we may have guessed, and Google wants to lead the way.

During the company’s annual conference, its CEO, Sundar Pichai, announced that Google will release a new computer chip that can perform advanced machine learning and AI tasks. Called the Cloud Tensor Processing Unit (Cloud TPU), the chip is named after Google’s open-source machine learning framework, TensorFlow.

Machine learning, deep learning, supercomputers and AI technologies have been transforming tech companies and their clients over the last few years, and this announcement from Google makes the trend official. In many ways AI is transforming Google and its own operations too, so it is only natural that the company wants to set the trend and capture the AI market while it is still in its nascent stages.

So, what’s this new chip all about? What’s new in it? Well, lots!

The chip is being dubbed the first that can work at blistering speed not only when executing models, but also when training them. In other words, this chip can be trained incredibly fast, and that could be the vital difference that sets it apart from others in the same category.

Let’s take a simple example. You want a machine to identify a pizza among other foods such as hot dogs, salads and burgers. To do this, you’ll have to feed in hundreds of different images of pizzas along with other foods to help the system learn. The sheer number of calculations needed to train such a model is mind-boggling, so it can take days or even weeks before a system can reliably tell a pizza from other foods.
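To make the pizza example concrete, a minimal training script in a framework such as TensorFlow/Keras might look like the sketch below; the folder layout, model size and epoch count are illustrative assumptions, and the point is simply that the heavy lifting sits in the training step.

```python
import tensorflow as tf

# Illustrative sketch: a small binary classifier that learns to tell pizza
# from other foods, given folders of labelled example images, e.g.
#   food_images/pizza/...   food_images/other/...
# The directory layout, model size and epoch count are all assumptions.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "food_images", label_mode="binary", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # pizza vs. not pizza
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)  # the slow, compute-hungry step that TPUs target
```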

Google wants to simplify this process so that the learning phase is shortened greatly. To do this, the company plans to create machine learning supercomputers called Cloud TPU pods: many Cloud TPUs wired together using high-speed data connections. As a result, the learning is split across the different TPUs and happens in parallel, leading to much shorter training times.
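In today’s TensorFlow 2.x API, spreading that training across the cores of a Cloud TPU follows roughly the pattern below; the TPU name is a placeholder and the API postdates the 2017 announcement, so treat it as indicative rather than a description of the original pods.

```python
import tensorflow as tf

# Indicative sketch of data-parallel training on a Cloud TPU in TF 2.x.
# The TPU address is a placeholder; the original 2017 TPU pods predate this API.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu-name")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # A model built under the strategy scope is replicated across TPU cores,
    # and each training batch is split between them automatically.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(train_ds, epochs=10)  # each step now runs in parallel across cores
```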

According to Pichai, Google wants to create thousands of TPUs and make them available over the Internet so that it can benefit researchers and developers in a big way. To start with, Google announced that it will share 1,000 TPUs with artificial intelligence researchers to help them in their studies.

In addition to this AI chip, Google plans to create algorithms that will fine-tune other machine learning algorithms, and it is developing AI tools for advanced studies such as genome analysis, molecular discovery and more.

In all, this is a great initiative from Google and one that could potentially transform our society in a big way. In return, Google can emerge as the leader of the AI chip market, right from its very beginning.

The post Google’s Next Frontier – a New AI Chip appeared first on Cloud News Daily.