How does Cloud Impact the Job Market?

Gone are the days when people used to work in an office from 9 AM to 5 PM before heading back home. Today, it’s a connected world where you can work at any time and from any place of your choice. Much of this convenience can be attributed to rapid advancements in connectivity and the emergence of the cloud as a platform that brings workers together.

In general, if you need a computer to do your work, then you can do it from anywhere. Unsurprisingly, more people are taking to the idea of remote working because it gives them the flexibility to balance different aspects of their lives. It also reduces the need to take breaks from a career. For example, a young mom can continue working from home while caring for her infant, which means she can keep her career on track without giving up her priorities at home. Such conveniences go a long way in bringing more people into the workforce, thereby generating greater wealth for individuals, companies, and economies at large. In addition, workers are no longer restricted to any specific geographical area to find their dream job; rather, the entire world is open to them.

For companies too, this is a convenient option, as they can cut back on overheads. They no longer need huge, plush offices air-conditioned around the clock, which brings down operating expenses substantially. Further, they are not restricted when it comes to hiring talented people: they can hire anyone located in any part of the world, so in this sense they always have access to the best talent.

Due to such conveniences for both employers and employees, more people are looking at this option. In fact, it is estimated that more than three million Americans already work on cloud-based platforms like Upwork, CrowdFlower, and Amazon Mechanical Turk. It won’t be long before more people take this route: a report by London Business School projects that more than one-third of the workforce will be working remotely by 2020.

Much of this shift to remote working has been made possible by the cloud. Since the technology allows users to store and access their files on virtual servers, rather than on a particular computer’s hard drive, they can get to those files on any device and from any location of their choice. Further, many of the applications they use are hosted on cloud servers, which also gives them the flexibility to access these apps from anywhere. Many cloud tools, like SugarSync, allow real-time collaboration, meaning workers in different parts of the world can work on a document at the same time.

As the cloud becomes more sophisticated, more jobs are likely to become remotely doable. If you work in data entry, programming, content creation, design, or customer service, you may well already be working remotely. Soon, teachers, lawyers, psychologists, counselors, researchers, nurses, paralegals and others will join them.


Containers, NFV and SDN most interesting technologies for OpenStack users


With the OpenStack jamboree in Barcelona only a week away, it’s a good time to note the technologies driving its users forward – and according to the latest survey data, containers lead the way, with NFV (network function virtualisation) and bare metal also seeing an uptick.

The user study from OpenStack, which has been running semi-annually since 2013 and polled almost 400 respondents, found that container technologies continue to lead the way, cited by 78% of respondents and topping the rankings for the third year in a row. SDN and NFV (61%) and bare metal (56%) were a little further behind, ahead of hybrid cloud (46%) and platform as a service (42%). The Internet of Things was cited by 32% of respondents.

In terms of specific container and PaaS technologies, Kubernetes came out on top. Almost half of respondents (47%) are using it in some capacity, with 31% in production compared to 10% in dev/QA and 6% at the proof of concept stage. 17% of respondents are using CloudFoundry in production, compared to 13% for OpenShift and Mesos. Over the past year, the research noted that Kubernetes usage went up 20 percentage points, while CloudFoundry fell 16 percentage points.

The survey also examined reasons why organisations choose OpenStack. For 72%, saving money is the primary business driver, while increased operational efficiency and accelerating an organisation’s ability to compete by deploying applications faster were also cited.

Similarly, the spread was pretty even when it came to OpenStack usage by organisation size; 18% of those polled who logged deployments had between 10 and 99 employees, while 12% had 100,000 employees or more.

Given the survey’s provenance, the results are understandably on the positive side – yet the technological innovations are certainly of interest. Back in March, cloud software firm Talligent noted that while OpenStack adoption was maturing and increasingly being seen as a viable alternative to public clouds, complexity in deployments remains an issue. Writing for this publication in July last year, David Auslander noted that the downside to an open source model is that “lots of developers with lots of ideas breeds complexity”, but argued that with the right planning, OpenStack was “ready for prime time.”

You can read the full OpenStack report here.

SAP aims to succeed in the cloud – but can it be the next IoT giant?


Opinion: Industry leaders in each sector are carving out their share of the IoT market. The latest to stake a claim is SAP, the world’s largest inter-enterprise software company and the world’s fourth-largest independent software supplier overall.

When the company was founded by five former IBM employees in 1972, the original premise was to provide customers with a way of interacting with common corporate databases. Now, the most common application of SAP software is to run internal business operations; both IBM and Microsoft use SAP applications to run their enterprises.

With an established claim on business-level software, it is easy to see why SAP would be working feverishly to ensure that it does not lose ground as demand for IoT business solutions increases. SAP recently acquired PLAT.ONE, an IoT platform and solutions provider, and Fedem Technology, an analytics software company. Both companies are being integrated into SAP HANA in support of the launch of SAP IoT, set to focus on “applying machine learning/advanced analytics to the vast amount of data that IoT devices collect.”

SAP IoT in action

The company is already testing the waters in a few applications. A mining company in Russia uses SAP IoT to monitor the health of mine workers in an effort to reduce safety and health risks on the jobsite. Employees undergo a health screening, performed by a robotic device, prior to each shift. The results of the screening are fed to mining leaders and used to calculate potential health and safety hazards and the long-term impacts of environmental working conditions on employees.

In Japan, a public transit company is using connected sensors in combination with weather monitoring, traffic monitoring and other data to increase the safety of its commuters. Information on driver behavior and biofeedback is delivered to a monitoring center, where alerts are created when a potentially unsafe condition arises. This information allows the monitoring facility to respond with plans that promote the safety and well-being of passengers and drivers.

SAP is even working in the energy sector in Norway, where IoT devices are connected to wind turbines in the field to feed data back to engineers. The teams use the data to power scale models of wind turbines and analyze the potential impact of weather conditions and design changes using real-world data. Meanwhile, other industry giants are staking their own IoT claims.

General Electric supports industry

In the industrial space, GE has become a clear frontrunner. The company’s initiative, dubbed “Power of the 1%”, is predicted to save billions of dollars over the next fifteen years; GE claims that a 1% increase in efficiency will save the oil and gas industry $90 billion, aviation $66 billion, healthcare $63 billion and rail $27 billion. These efficiencies are driven primarily by GE’s Predix platform, a PaaS solution that is billed as laying “the foundation for the world’s first and largest marketplace for industrial applications.” In its mature state, Predix will bring together industrial data from multiple companies and applications to drive better understanding of field data, improve designs and decrease the financial burden associated with maintenance.

Google covers the home

Google has already been incorporated into the daily lives of many users. We use it to communicate in real time with friends and family via Hangouts; Google Calendar integrates with nearly every other calendar platform, making it easy to plan family events and keep tabs on schedules; and through the purchase of Nest, Google is building a connected home experience that will be driven from our Android phones.

IBM is building the backend

Supporting much of the connected technology is IBM’s Bluemix portfolio, a service that gives developers the tools they need to “quickly and easily extend Internet-connected devices to the cloud to not only leverage data from, but also to build an IoT application in just a few minutes.”

This capability is giving companies the power they need to adapt existing technologies to compete in a cloud-based world.
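
As a rough illustration of what that device-to-cloud path can look like in practice, the sketch below uses the open-source paho-mqtt client to publish JSON sensor events to an MQTT endpoint of the kind a managed IoT platform such as Bluemix exposes. The organisation ID, device type, device ID, token, hostname pattern and topic string are placeholders for this example rather than values taken from IBM’s documentation, so check the platform docs before reusing them.

```python
import json
import time

import paho.mqtt.client as mqtt

# Placeholder credentials; a real deployment would use values issued by the platform.
ORG = "myorg"            # hypothetical organisation ID
DEVICE_TYPE = "sensor"   # hypothetical device type
DEVICE_ID = "dev001"     # hypothetical device ID
AUTH_TOKEN = "secret-token"

# Managed IoT brokers typically derive the client ID and hostname from the
# organisation and device identifiers; the patterns below are illustrative only.
client = mqtt.Client(client_id="d:{}:{}:{}".format(ORG, DEVICE_TYPE, DEVICE_ID))
client.username_pw_set("use-token-auth", AUTH_TOKEN)
client.connect("{}.messaging.internetofthings.ibmcloud.com".format(ORG), 1883, keepalive=60)
client.loop_start()

# Publish a small JSON event every few seconds, as a connected device would.
for i in range(3):
    payload = json.dumps({"d": {"temperature": 21.5 + i}})
    client.publish("iot-2/evt/status/fmt/json", payload, qos=1)
    time.sleep(5)

client.loop_stop()
client.disconnect()
```

On the platform side, an application would subscribe to the same event topics to store or analyse the readings.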

While SAP may be the current front-runner in connected business solutions, history has shown us that it does not take much to lose footing in this competitive space. The major players in each sector primarily gain ground by acquiring companies that have perfected specific solutions in their niche and then incorporating the intellectual property into their own products. Maintaining leadership status requires a combination of development activities and intelligent acquisition of best-in-class solutions.

GreenPages in Barcelona for VMworld EU – vBrownBag TechTalks schedule

Our own GreenPages enterprise consultant, Chris Williams, is over in Barcelona right now helping to host the VMworld TechTalks for vBrownBag. He’ll also be doing the Tech Talks for the OpenStack Summit next week. Tons of great technical content will be getting posted over the course of the next two weeks.

If you are up (very) early, the live feed can be found here: https://goo.gl/WZYHAP.

Otherwise, the content will be recorded and posted on http://vbrownbag.com.

Here is the schedule for this week (please note times are in CEST):

Time | Tuesday 18th | Wednesday 19th
11:00 | vExpert Daily with Mike Letschin | vExpert Daily with Mike Letschin
11:30 | Michael White – Veeam – Use the cloud to protect your baby pictures and so much more |
11:45 | Mike Resseler – Veeam – Why a cloud architect’s capabilities differ from a data center architect’s | Alan Renouf – Everything changes with PowerCLI from now
12:00 | | Steve Flanders – Fun with Log Insight APIs
12:15 | Amit Panchal – Career Disruption 101 – Brand is King | Steve Flanders – Let’s Talk about Log Insight Webhooks
12:30 | Ather Beg – Work/Life Balance for the “Elderly” IT Professional | Matt Gillard – So you want to migrate your legacy workloads to the Cloud?
12:45 | Faizan Yousaf – VCP6 |
1:00 | Luc Dekens – The “Community” part in the vSphereDSC resource module |
1:15 | Kyle Grossmiller – Pure Storage – Cisco CVD: Large scale virtual desktops with Horizon 7 | Chris Bradshaw – The Amazing World of IT in Higher Education
1:30 | | Dean Lewis – Documentation doesn’t have to be daunting
1:45 | |
2:00 | Keith Norbie – An overview of the EUClaunchpad.com | Juan Lage – Cisco – ACI Micro Segmentation
2:15 | Cody Hosterman – Pure Storage – Storage as a service using vRealize Automation and Orchestration | Continued
2:30 | David Klee – Performance Perspectives | Joerg Lew – Scaling vRealize Orchestrator (vRO)
2:45 | Tim Hynes – PowerCLI – where to start? |
3:00 | Mark Brookfield – Automating SRM with PowerCLI | In Tech We Trust Podcast
3:15 | Continued |
3:30 | Ron Fuller – Understanding DVS Port Mirror Options |
3:45 | |
4:00 | VMUG IT hosted by Andrea Mauro in Italian |
4:15 | Continued |
4:30 | |
4:45 | |


[whitepaper] New Vistas Of Revenue | @CloudExpo @CalsoftInc #DataCenter

Cloud-based infrastructure deployment is becoming more and more appealing to customers, from Fortune 500 companies to SMEs, due to its pay-as-you-go model. Enterprise storage vendors can reach these customers by integrating with cloud-based deployments; this requires adaptability and interoperability of their products, conforming to cloud standards such as OpenStack, CloudStack, or Azure. Compared with off-the-shelf commodity storage, enterprise storage, with its reliable, highly available, efficient and greener solutions, can offer cloud service providers much lower operational expenses (OPEX). Further, considering the reduced OPEX, voluminous deployment in cloud environments, smaller form factors (i.e. reduced real estate investment) and intrinsic design solutions, capital expenses (CAPEX) become a much more equitable and compelling proposition. The white paper also discusses how Calsoft can resolve the technical challenges of integrating storage in the cloud with various approaches, backed by an understanding of both cloud requirements and the capabilities of a large class of enterprise storage products.


Announcing Cemware to Exhibit at @CloudExpo Silicon Valley | #PaaS #Cloud #APM #Monitoring

SYS-CON Events announced today that Cemware will exhibit at the 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA.
Cemware’s mathfreeon.com lets users run MATLAB-compatible functions simply by visiting the website, as a freely usable online platform service. As of October 2016, 80,000 users from 180 countries are using the platform.


x.ai launches professional version of its AI personal assistant


Artificial intelligence (AI) software provider x.ai has announced the launch of the Professional version of its personal assistant product.

The personal assistant, Amy – or twin brother Andrew – schedules meetings on behalf of users, saving plenty of admin time. All users need to do is connect their calendars to x.ai, and cc Amy or Andrew, who does the rest.

The beta version was launched in June 2014, and since then the New York-based startup has been building up its data set and infrastructure. According to Dennis R. Mortensen, x.ai CEO, the key to launching a professional-grade version was playing the waiting game.

“Teaching a machine to understand natural language, even using the most advanced data science, is by any definition super hard. Add to that the fact that meeting scheduling is a high accuracy setting,” he said. “So it was important for us to wait to roll out a paid product until we felt Amy was smart enough to schedule meetings nearly flawlessly.”

Customers already using the Professional product include Walmart, Salesforce, LinkedIn, and The New York Times, as well as Astronomer, a data science engineering platform. “Once I had access to Amy, I was able to consistently schedule twice as many meetings with a small fraction of the effort,” said Ry Walker, Astronomer CEO. “There’s no question in my mind that x.ai has helped me move faster to do my real job, which is to build a great company.”

The full names of the assistants are Amy and Andrew Ingram. The initials referring to AI are obvious, while the surname, read as ‘N-Gram’, is a reference to a computational linguistics and probability technique.

x.ai uses the MongoDB NoSQL database to power its personal assistants, and writing for sister publication CloudTech in December last year, Kelly Stirman, VP strategy, discussed the importance of the correct infrastructure in powering AI initiatives. “Cloud computing solved the two biggest hurdles for AI: abundant, low cost computing and a way to leverage massive volumes of data,” he wrote. “Small, focused, cloud-based algorithms are going to be the AI that changes our lives over the next decade. It’s better to solve one problem really well than it is to solve 100 problems poorly.”
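
x.ai has not published its data model, so purely as an illustrative sketch of why a document store suits this kind of workload, the snippet below uses the pymongo driver to persist and query hypothetical meeting-request documents; the collection and field names are invented for the example.

```python
from datetime import datetime

from pymongo import MongoClient

# Illustrative connection string and names; not x.ai's actual infrastructure.
client = MongoClient("mongodb://localhost:27017")
meetings = client["assistant_demo"]["meeting_requests"]

# Scheduling requests vary in shape from email to email, which is where a
# schemaless document store is convenient.
meetings.insert_one({
    "requester": "ceo@example.com",
    "participants": ["guest@example.com"],
    "proposed_windows": [datetime(2016, 11, 2, 14, 0), datetime(2016, 11, 2, 16, 0)],
    "status": "pending",
})

# The scheduling logic can later pull every request still awaiting confirmation.
for doc in meetings.find({"status": "pending"}):
    print(doc["requester"], doc["proposed_windows"], doc["status"])
```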

IBM Bags A Cloud Contract from the US Army

IBM was awarded a contract to run a pilot program that could lay the basis for the company to build, own, and operate data centers on behalf of the US Army. This contract, worth $62 million, is called the Army Private Cloud Enterprise, and it is the first step ever taken by the US Army to tap into the expertise of the commercial IT industry to run a large-scale data center on its behalf.

The contract document was not made public, so the scope of the project is not known. But press releases show that IBM will get one base year and four option years to build a data center and manage it for the Army. This new data center would start off as a migration point for all the systems and applications that are currently hosted at different government data centers located at Redstone Arsenal in the city of Huntsville, Alabama. It is also expected that other systems from the Army, spanning all its operations, would be moved to this center within the next five years, provided, of course, there are no challenges during this period.

Though this award had been in the offing for some time, it is still a surprise, as the Army deals with large amounts of classified data, including secret-level data that is hugely sensitive and can have immediate ramifications for national security. Despite this level of confidentiality, the Army has chosen a private company to run data centers on its behalf. Why?

Cloud computing offers many benefits that are hard for any organization to ignore, and the Army is no exception. This award, in many ways, represents the first step towards implementing the Army’s cloud computing strategy, which aims to create an excellent user experience, improve mission command, and reduce IT costs as well as the overall fiscal footprint of the Army.

Also, Redstone Arsenal is considered a safe haven, so it makes an ideal location to try out the idea of a private cloud for the Army within the gates of its own military establishment. In addition, the Army plans to implement the controls necessary to handle data classified at such high levels.

This contract is sure to have a substantial positive impact for the Army, the most significant being the opportunity to reduce the number of inefficient data centers run by different government agencies. Currently, the Army runs anywhere between 200 and 1,200 data centers, most of them operated under the guidance of the Office of Management and Budget (OMB). With this contract in place, it plans to close at least 350 of these data centers over the next two years. At Redstone Arsenal alone, it owns 11 of the 24 data centers that operate there. Over the next couple of years, the Army wants to consolidate all its information and applications within the 11 data centers it owns. Such a move is sure to save a great deal of taxpayer money, which can then be used for beneficial social, welfare, and economic programs.


Microsoft launches cloud services due diligence checklist


Microsoft has launched a cloud services due diligence checklist aimed at providing organisations with more standardised procedures for their potential cloud push.

The checklist is based on the emerging ISO/IEC 19086 standard, which focuses on cloud service level agreements, and gives organisations of all sizes and sectors a structure for identifying their objectives and requirements before comparing the offerings of different cloud service providers.

“Cloud adoption is no longer simply a technology decision,” Microsoft writes in a page explaining the checklist. “Because checklist requirements touch on every aspect of an organisation, they serve to convene all key internal decision makers – the CIO and CISO as well as legal, risk management, procurement, and compliance professionals.

“The checklist promotes a thoroughly vetted move to the cloud, providing a structured guidance and a consistent, repeatable approach to choosing a cloud service provider,” it adds.

It’s worth noting, as Microsoft does in the document, that the checklist is not intended to be, nor should it be considered, a substitute for the 19086 standard – its role is essentially to distil the 37-page standard document into two pages – yet it does cover performance, service, data management and governance checks.

In putting the checklist together, Microsoft also cited a Forrester research study which argued that more than 94% of organisations polled would change some terms in their current cloud agreement. Agreements often miss key considerations due to their complex nature, the research notes, while the topics cloud buyers most regret not having covered in their agreements concern security, privacy, and awareness among key internal stakeholders.

Earlier this month, Microsoft CEO Satya Nadella told delegates at the company’s Transform conference in London how cloud computing, through powering artificial intelligence and machine learning, was putting technology “in the hands of humanity.” Back in August, the company announced it had obtained ISO 27017 compliance, which gives additional controls specifically relating to cloud services.

You can download the full checklist here.

Recovering from disaster: Develop, test, and assess


Disaster recovery (DR) forms a critical part of a comprehensive business continuity plan and can often be the difference between the success and failure of an organisation. After all, disasters do happen — whether that’s a DDoS attack, data breach, network failure, human error, or a natural event like a flood.

While the importance of having such a strategy is well recognised, how many organisations actually have the right plan in place? Not many, according to the 2014 Disaster Recovery Preparedness Benchmarking Survey, which revealed that more than 60% of companies don’t actually have a documented DR strategy. More than that, the survey found that 40% of those companies that do have one said it wasn’t effective during a disaster or DR event.

Taking the above into consideration, what can businesses do to ensure their plans are not only in place, but also work as they should and allow organisations to recover quickly and effectively post disaster?

One aspect to consider is using the cloud to handle your DR requirements, as it is a cost-effective and agile way of keeping your business running during and after a disaster. DR cloud solutions, or disaster recovery as a service (DRaaS), deliver a number of benefits to businesses. These include faster recovery, better flexibility, off-site data backup, real-time replication of data, excellent scalability, and the use of secure infrastructure. In addition, there’s a significant cost saving, as no hardware is required — hardware that would otherwise sit idle while your business functions as normal.

Another aspect is testing. Not only should DR strategies be continuously tested, but they should also be updated and adapted in line with changes in the business environment and wider technology ecosystem, as well as industry or market shifts. Again, this is seen as important but, in practice, isn’t happening as it should. According to the same benchmarking survey, only 6.7% of organisations surveyed test their plans weekly, while 19.2% test annually and 23.3% never test them at all.

The practicalities of implementation can often be challenging — from budgetary issues and buy-in from CIOs to the type of solution itself. DR means different things to different people — from recovery times measured in minutes or weeks, to a scope covering just critical systems or encompassing all of IT.

So where do you start?

Identify and define your needs

The first stage of defining these requirements includes performing a risk assessment, often in conjunction with a business impact analysis. This involves considering the age, volume and criticality of your data, and looking at your organisation’s entire IT estate. DR can be an expensive exercise, and this initial stage of strategy development can help you evaluate the risk versus the cost.

Your data could be hosted on or off site; and for externally hosted solutions this means making sure your hosting provider has the right credentials (for example, ISO 27001) and expertise to supply the infrastructure, connectivity and support needed to guarantee uptime and availability.

It is also during this phase that you should define your recovery time objectives (RTO) — the anticipated time you would need to recover IT and business activities — and your recovery point objectives (RPO) — the point in time to which you recover your backed-up data.
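
To make those two objectives concrete, here is a small illustrative check (not from the article) of whether a simple periodic-backup plan meets a chosen RPO and RTO; the numbers are made up for the example.

```python
def meets_objectives(backup_interval_hours, restore_hours, rpo_hours, rto_hours):
    """Return (meets_rpo, meets_rto) for a simple periodic-backup DR plan.

    In the worst case a disaster strikes just before the next backup runs,
    so the maximum possible data loss equals the backup interval.
    """
    meets_rpo = backup_interval_hours <= rpo_hours
    meets_rto = restore_hours <= rto_hours
    return meets_rpo, meets_rto

# Example: backups every 6 hours and a 3-hour restore, against a 4-hour RPO and RTO.
rpo_ok, rto_ok = meets_objectives(backup_interval_hours=6, restore_hours=3,
                                  rpo_hours=4, rto_hours=4)
print("RPO met:", rpo_ok)  # False: up to 6 hours of data could be lost
print("RTO met:", rto_ok)  # True: systems back within the 4-hour target
```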

Creating your DR plan

A successful DR strategy encompasses a number of components, from data and technology, to people and physical facilities. When developing the actual plan and the steps within it, you need to remember that it affects the entire organisation.

Connectivity plays a critical role here, specifically in how staff will access the recovered environment, e.g. through a dedicated link or VPN. Is additional connectivity needed for the implementation of the strategy to work? And if so, how much will this cost?

Test, assess, test, assess

The final stage is an ongoing one and is all about testing the plan. With traditional DR it is often difficult to do live testing without causing significant system disruption. In addition, testing complex plans comes with its own degree of risk. However, with DRaaS, many solutions on the market include no-impact testing options.

At this point it is also important to assess how the plan performs in the event of an actual disaster. In this way weaknesses or gaps can be identified, driving areas of improvement for future plans.

Conclusion

In today’s business environment it is safe to assume that your organisation will experience a disaster or event of some kind that will affect operations, cause downtime or make certain services unavailable. Having a DR strategy in place — one that works, is regularly tested and addresses all areas of operations — will help mitigate the risk and ensure the organisation can recover quickly without the event having too much of a negative impact on customer experience, the brand or the bottom line.