Azure Data Factory Launch

Microsoft has expanded its big data analytics portfolio with the release of Azure Data Factory, a cloud-based data integration service. Azure Data Factory (ADF) is designed to automate and streamline the movement of data from its sources into an organization's business intelligence and analytics systems, transforming it into a form companies can use.
Joseph Sirosh, corporate vice president of Information Management and Machine Learning at Microsoft, has stated, “With ADF, existing data processing services can be composed into data pipelines that are highly available and managed in the cloud. These data pipelines can be scheduled to ingest, prepare, transform, analyze, and publish data, and ADF will manage and orchestrate all of the complex data and processing dependencies without human intervention. Solutions can be quickly built and deployed in the cloud, connecting a growing number of on-premises and cloud data sources.”
ADF is the connecting thread between services such as Azure HDInsight, Azure ML, and Power BI.
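To make the pipeline concept concrete, the sketch below shows roughly what a pipeline definition looks like, written here as a Python dict for readability (ADF pipelines are authored as JSON documents). The pipeline, dataset and scheduling values are hypothetical, illustrative placeholders, not taken from Microsoft's documentation.
```python
# Rough, illustrative sketch of an ADF-style pipeline definition.
# All names and dates below are hypothetical placeholders.
pipeline = {
    "name": "CopyCustomerLogsPipeline",
    "properties": {
        "description": "Ingest raw logs from blob storage into a SQL table nightly",
        "activities": [
            {
                "name": "CopyLogsActivity",
                "type": "Copy",
                "inputs": [{"name": "CustomerLogsBlob"}],    # source dataset
                "outputs": [{"name": "CustomerLogsTable"}],  # sink dataset
                "scheduler": {"frequency": "Day", "interval": 1},
            }
        ],
        # The service, rather than a human operator, enforces the schedule and
        # the dependency that outputs are produced only once inputs are ready.
        "start": "2015-08-01T00:00:00Z",
        "end": "2015-12-31T00:00:00Z",
    },
}
```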
Sirosh added: “Using ADF, businesses can enjoy the benefits of using a fully managed cloud service without procuring hardware; reduce costs with automatic cloud resource management, efficiently move data using a globally deployed data transfer infrastructure, and easily monitor and manage complex schedule and data dependencies.”
Microsoft also points to the service's cost efficiency. Referring to an early adopter, Sirosh commented, “At the same time, with the new cost efficiencies gained with the on-demand use of cloud resources, they were able to utilize 600 percent more compute hours and double their supported customer base.”


Want to get ahead in the cloud? Docker and DevOps skills are what you need


According to recent IT Jobs Watch figures, job roles involving the container technology Docker have risen 317 places to number two among the 500 most sought-after IT skills. In line with this, recent research from Rackspace shows a similar spike in demand for Docker and DevOps skills in the UK technology industry.

An analysis of job adverts has shown an almost tenfold (991%) increase in posts seeking Docker skills over the past 12 months, while DevOps expertise continues to rise, with permanent roles posted increasing by 57% year on year. Between 2013 and 2014, demand rose by 351%.

This demand for new DevOps roles has so far not translated into salary increases, according to Rackspace. Salaries for DevOps-related skills rose by only 2% last year, compared with 28% for Docker skills. The skill sets for each role, as one would expect, were similar; for Docker roles, the core competencies were Linux (66%), DevOps (60%), Python (60%) and Puppet, while for DevOps engineers it was Linux (86%), Puppet (69%), Chef (57%), Python (53%) and Amazon Web Services (40%).

Naturally, Rackspace, as a managed cloud hosting provider, aims to reassure potential customers it has this skills gap covered, not least through offering support for the likes of Microsoft Azure. Yet Darren Norfolk, Rackspace UK managing director, said: “We know that technology skills and job roles are constantly evolving. What this means for both the workforce and employers alike is a deeper responsibility to stay aware of what are considered to be the best working practices and cutting edge platforms at all times. Last year it was all about DevOps coming to the fore, but the demand for Docker has surged over the last 12 months.

“These roles are fundamental to the cloud industry’s future but the reality is that businesses are struggling to fill them, as there just aren’t enough candidates with the right skill sets out there,” he added.

The importance of AWS skills aligning with DevOps is not a surprise. As was pointed out on these pages back in November 2014, AWS is bundled with offerings to deploy and manage applications and infrastructure as code “with an inherent DevOps bent”, while in July IT automation and DevOps provider Chef announced its availability in the AWS Marketplace.

For vendors, the RightScale 2015 State of the Cloud report, published in February, contained some revealing findings. DevOps usage hit two thirds (66%) in 2015, with Chef (28%) the most popular tool, followed by Puppet (24%) and Docker (13%). While Docker's figure may look low, a significantly larger share of respondents (35%) said they planned to use it in the future.

Disaster recovery and backup – and all that is between them


Everyone should back up important or valuable data, whether that data is family photos for individuals or business documents for companies. Individuals may be happy to buy a local hard drive and put it in the attic, but for companies whose business depends on their emails or file records, a more robust solution makes sense.

Many companies are now backing up their data with cloud vendors. Besides the relative simplicity of this approach, it makes commercial sense to store the backup in a remote location in case the “disaster” strikes not only the main servers but the entire office or area.

An example of a ‘regional’ disaster occurred in May in Holborn, when an underground fire left thousands of employees without power or access to their offices. Backing up from one drive to another in the same vicinity would have been useless.
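
In its simplest form, such an off-site backup is just an archive copied to remote object storage on a schedule. The following is a minimal sketch, assuming the boto3 library, an S3-compatible store and a hypothetical bucket name; the paths and names are illustrative only.
```python
# Minimal off-site backup sketch: archive a folder, copy it to cloud storage.
# Bucket name and source path are hypothetical placeholders.
import datetime
import shutil

import boto3  # assumes an S3-compatible object store


def backup_to_cloud(source_dir: str, bucket: str = "offsite-backups") -> str:
    stamp = datetime.date.today().isoformat()
    # Create backup-YYYY-MM-DD.tar.gz from the source directory.
    archive = shutil.make_archive(f"backup-{stamp}", "gztar", source_dir)
    # Upload to a remote region, well away from the office and its servers.
    boto3.client("s3").upload_file(archive, bucket, f"daily/{archive}")
    return archive


if __name__ == "__main__":
    backup_to_cloud("/srv/file-records")
```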

It is good practice to back up data, but does everyone also need disaster recovery (DR)? Put simply, the difference between the two is that a backup ensures there are safe copies of the data, whereas DR typically refers to a solution whereby not only is the data backed up but the company can be up and running again within a defined time frame. Backup is just the first component of DR; a full DR solution will also require considerable replication of equipment.

Disasters are, hopefully, few and far between. If companies back up their data, to what extent should they worry about how quickly they can be fully operational again if a disaster does happen? The answer obviously depends on the nature of the company, its needs and its budget; but it also depends on the set-up of the company's primary IT solution. The more complex that arrangement is, the more complex and expensive the DR solution will be.

Balancing

I would like to explore these considerations in more depth. Firstly, there is the question of balancing cost against need. No one wants to stop working for a few days, but if the company could survive with patchy IT for a short time while the primary IT solution is being restored, perhaps it is not worth maintaining an expensive DR solution for years in anticipation of a disaster which hopefully will never arrive. If, on the other hand, the company cannot survive without continuous IT, then it probably cannot cut corners when preparing for a disaster.

To compare this to cycling: some riders will go out for a Sunday spin without even a spare inner tube, while those racing in the Tour de France will have a support vehicle with spares for every part – plus mechanics who will get the cyclist back on the road in no time should anything happen to the bike.

Secondly, besides the balance of cost and requirement, there is the question of how easily the DR solution can be put in place. When a company manages its own servers, DR becomes a stand-alone project which typically requires an additional vendor relationship, dedicated communication solutions and a complete reconfiguration of the IT infrastructure. However, when a company has 100% of its primary solution in the cloud, backup and even a variety of DR solutions are simple add-ons.

DaaS and DRaaS

The hosted desktop solution, which provides a holistic cloud service for companies and removes the need for any internal IT management, is sometimes referred to by the acronym DaaS, or Desktop as a Service. The idea is that all of a company's IT needs can be bought for a per-user, per-month fee. Similarly, companies using the hosted desktop can have DRaaS, or disaster recovery as a service. Here, too, the company need not worry about DR or the CAPEX outlay, because it is all provided as part of the service.

Let me try to explain using an analogy from another industry. Imagine that you own a brand-new Mercedes, but rather than service it at the authorised garage, you decide to use the local mechanic; then you try to go to the authorised Mercedes dealer to rent an equivalent replacement vehicle while your car is in service with the mechanic. This is doable, but it would involve more paperwork, potentially more cost and certainly less peace of mind. If your car is precious, which it is, would you not prefer to outsource the maintenance to an authorised professional dealer which provides a warranty, peace of mind and a replacement car as standard?

Similarly, for your business's precious IT, using a top hosted desktop provider offers not only peace of mind for all of your primary use but also backup and even disaster recovery as standard. Using your fleet of cars should be a pleasant, seamless experience. Why shouldn't using your IT be the same, particularly if it is being managed by a company holding ISO 27001 accreditation?

If peace of mind can apply to how a vehicle is serviced and generally looked after by its supplier, the same can apply to IT and the cloud. ISO 27001 is the international gold standard for information security management. Accreditation of, for example, a cloud services provider ensures that the standard runs throughout the provider's services and guarantees that all the steps that should be taken to safeguard a client's information are rigidly enforced. DR and backup, and all that is between them, are covered by the standard.

[slides] Your Private Cloud as a Service By @Bear_field | @CloudExpo #Cloud

Deploying and operating cloud technology is difficult and falls outside the core competencies of most organizations. As a result, many on-premises private cloud installations falter, casting doubt on private cloud as a solution in general. Yet private clouds come with a unique set of benefits that continue to drive widespread interest. There must be a better way.
In his session at 16th Cloud Expo, Andre Bearfield, Senior Director of Product at Blue Box Group, gave participants a clear roadmap of private cloud implementation challenges and offered actionable ideas on how to overcome them.
They also learned how private cloud as a service might work in their organization and for their application workloads.


Put the Power in PowerPoint by Creating Meaningful Connections | @CloudExpo #Cloud

We also believe that great outcomes are achieved not just by talking but by listening. Zeetings creates this genuine connection between presenters and audiences, changing the paradigm from a one-way monologue to a two-way conversation. Unlike a traditional presentation, a “zeeting” is a social and interactive experience accessible via any connected device, coupled with data-driven analytics to help hosts understand what their content community is thinking.


[slides] Embracing Software Defined By @BHIvanov | @CloudExpo #Cloud

As Marc Andreessen says, software is eating the world. Everything is rapidly moving toward being software-defined – from our phones and cars through our washing machines to the datacenter. The challenges grow, however, when implementing software-defined at scale – when building software-defined infrastructure.
In his session at 16th Cloud Expo, Boyan Ivanov, CEO of StorPool, provided some practical insights on the what, how and why of implementing “software-defined” in the datacenter.


Build-Operate-Transfer Model: Creating a Valuable Framework for IT

The build-operate-transfer model takes the concept of a long-term outsourced service, traditional in the managed services space, and approaches it in a way that allows the customer to get value out of the services at the end of the engagement. It is also a way to address the concerns of an IT operations team that feels its services are being replaced by outside providers.

With a build-operate-transfer model, you really need to start with the end game in mind. Where are you going to be in 5 years? 7 years? 10 years? Are the services you're consuming today going to be the same services you need then? How could your future plans be altered (mergers, acquisitions, etc.)? You need a way to transfer those services while still getting value out of what you have been consuming in the previous term. That's what the build-operate-transfer model is all about.

The corporate IT department has evolved. Has yours kept pace?

By Geoff Smith, Director, Managed Services Business Development

[download] Turning the Tide: Surviving (and Thriving in) the Software Developer Shortage | @CloudExpo #Cloud

U.S. companies are desperately trying to recruit and hire skilled software engineers and developers, but there is simply not enough quality talent to go around. Tiempo Development is a nearshore software development company. Our headquarters are in Arizona, but we are a pioneer and leader in outsourcing to Mexico, based on our three software development centers there. We have a proven process and we are experts at providing our customers with powerful solutions. We transform ideas into reality.


Research sheds light on changing role of IT admin through cloud


IT administrators in organisations that have migrated to Google Apps or Office 365 say they spend much less time on mundane activities – but it's not exactly ‘put your feet up’ time either.

That's the key finding from the latest study by BetterCloud on the state of the cloud office. Back in July, the first set of results was revealed, showing a clear trend for larger organisations to move to Office 365, while SMEs favoured Google Apps.

This time around, however, the spotlight focuses on the admins. BetterCloud argues that with cloud offices, tasks such as scheduled and unscheduled maintenance are not as big a factor in the IT admin’s day, with 87% and 86% of respondents respectively agreeing. Less time is also being spent on upgrades (88%), storage and quota management (86%), and data recovery (84%).

So what do admins do instead of these tasks? The report cites a wide variety of strategic and proactive jobs, including improving security, application integrations, and end user training. Not to be forgotten, however, is migrating systems over to the cloud; 67% of Office 365 and 68% of Google Apps admins say their migrations happen in-house.

Equally, there is a clear correlation between the pace of cloud migration and the time admins spend stuck doing routine tasks. For companies currently 100% in the cloud, 94% of admins polled agree they save time, compared with 88% at organisations expecting a complete move by 2020 and 78% at those not expecting one until 2026 or later.

A particular ‘eureka’ moment cited in the report comes from Office 365 admins who tell their users to run Outlook on the web when issues arise with the desktop client. This “confirm[s] to them that if they can just get their users to work in the cloud, helpdesk tickets will begin to decline,” according to the report.

Overall, however, the report argues that cloud IT admins need to prepare themselves for the changes to come, and for the potential expansion of the department's role beyond the “cost centre” and “department of no” mentality which frequently pervades it.

“Every admin has a different agenda depending on the needs of their organisation, but with more time and less busy work, cloud IT admins have an opportunity to become agents of innovation for their organisation and truly capitalise on their emerging role as leaders,” the report concludes.


Enterprises Need a Panic Button for Security Breaches By @CKeene | @CloudExpo #Cloud

Most home security systems have a panic button – if you hear something go bump in the night, you can push it to start the sirens wailing, call the cops and, hopefully, send the bad guys scurrying. As useful as this is for homeowners, enterprises need a security panic button even more.
Security spending is heavily weighted towards keeping the bad guys out, yet media coverage has demonstrated how often they get in anyway. According to the CyberEdge Group, 71% of large enterprises reported at least one successful hacking attack in 2014.
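
What might such a panic button actually do? A plausible first move is automated containment: cut off attacker access before investigating. The sketch below is purely illustrative; the helper functions are hypothetical stubs standing in for real identity, network and logging systems, not any vendor's API.
```python
# Illustrative "panic button" sketch. Every helper below is a hypothetical
# stub standing in for real identity, network and logging systems.
def revoke_all_active_sessions() -> None:
    print("All user sessions revoked; everyone must re-authenticate")


def rotate_privileged_credentials() -> None:
    print("Admin passwords and API keys rotated")


def isolate_affected_hosts(incident_id: str) -> None:
    print(f"Hosts flagged in {incident_id} cut off from the network")


def preserve_forensic_evidence(incident_id: str) -> None:
    print(f"Logs and disk snapshots preserved for {incident_id}")


def panic(incident_id: str) -> None:
    """One call: contain the breach first, investigate afterwards."""
    revoke_all_active_sessions()
    rotate_privileged_credentials()
    isolate_affected_hosts(incident_id)
    preserve_forensic_evidence(incident_id)


if __name__ == "__main__":
    panic("INC-0042")
```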
