A New Digital Training Program from AWS

Amazon Web Services (AWS) is the undisputed leader in today's cloud market, and it is expanding its reach further with a new skills-development program. On Thursday, January 12th, AWS announced re:Start, a program that aims to provide cloud skills training to about 1,000 people in the UK and, on completion, place trainees in jobs within AWS or at partner companies and organizations.

This is an important move by AWS, and it could signal the beginning of a new approach to technology training in general, and cloud in particular. The cloud industry currently faces a significant skills gap: too few people are trained in cloud technology, and this shortage shrinks the talent pool for companies like AWS. It also hampers their expansion plans, since growth cannot be sustained without people well-versed in the technology. As a result, some companies are forced to scale back operations, or even reconsider their growth strategies.

In addition, the cost of labor is high. The economics of the job market dictate that when demand outstrips supply, companies must pay more to retain a knowledgeable workforce, and over time these costs add up. Beyond high salaries, companies also become dependent on a handful of critical people for continued operations. Such dependence runs counter to established management principles and puts projects at risk whenever those people are unavailable.

To overcome these multi-pronged challenges, AWS has decided to start its own training division, through which it plans to train people and later employ them for its own needs. As a first step, it has launched the program in the UK. The choice of location is interesting, considering the fallout of Brexit and the country's widely reported skills gap. Last year, many MPs warned that the UK needs almost 750,000 people with advanced technical skills to meet the economy's growing demands, and that existing government initiatives are not enough to fill the gap.

In a way, AWS has set an example for other companies to follow. Without waiting for government action or programs, it has started its own training division. The obvious advantage of such a program is that AWS can train people in the specific skills it needs, giving it the flexibility to prioritize one technology over another depending on its immediate and future requirements.

Such a move also augurs well for the economy, as it reduces the unemployment rate and increases the productivity and contribution of workers. The program aims to attract young adults, military veterans, reservists, spouses, and other deserving candidates. For now, AWS has entrusted the task of finding the right candidates to an HR outsourcing company called QA Consulting, in partnership with the Ministry of Defence (MoD) and The Prince's Trust.

The post A New Digital Training Program from AWS appeared first on Cloud News Daily.

Hackers versus hurricanes: It’s time to begin planning for real disasters

(c)iStock.com/vchal

The most common objection I come across when discussing disaster recovery (DR) is, “Oh, well we don’t get hurricanes or floods or anything like that, so we don’t need DR”. To which I always reply “That’s all well and good, but do you have humans? What about hardware and power? Do you have HVAC in your data centre?”

A disaster can be any number of things. Whilst nature can certainly be a cause, research that Opinion Matters undertook on behalf of iland in 2016 found that it accounts for only 20% of outages. More often than not, disaster comes in the form of operational failures (53%) or human error (52%); it is these latter two factors that cause the most outages.

When people think of disasters, they tend to think dramatically, of large-scale incidents capable of taking out an entire data centre for long periods of time. In fact, when businesses invoke DR solutions, it tends to be because of much smaller, more isolated issues that have impacted the business. Perhaps only a couple of systems or mission-critical applications have gone down; entire IT estates don't need to implode before one's eyes to warrant a disaster. It is these smaller disasters we really need to focus on, as it is usually a build-up of them that cascades into a bigger impact on the business.

We have many customers that will failover just one or two systems. Maybe a patch went wrong, or a virus occurred and the best thing to do was to keep the system running, fail the affected machine over to keep the application running and fail back when systems are repaired. It doesn’t always have to be on a massive scale. By grouping VMs supporting multi-tier applications into virtual protection groups, you can perform partial failovers within that group, reducing cost and simplifying failback.
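To make the idea concrete, here is a minimal Python sketch of the virtual-protection-group concept described above. The class and method names are hypothetical illustrations of the pattern, not any vendor's actual API:

```python
# Conceptual model of a virtual protection group (VPG): a set of VMs
# protected together, any subset of which can be failed over or back.
class VirtualProtectionGroup:
    def __init__(self, name, vms):
        self.name = name
        # Track each VM's active site: "primary" or "recovery".
        self.sites = {vm: "primary" for vm in vms}

    def failover(self, vms=None):
        """Fail over the named VMs (or all of them) to the recovery site."""
        for vm in vms or list(self.sites):
            self.sites[vm] = "recovery"

    def failback(self, vms=None):
        """Return the named VMs (or all of them) to the primary site."""
        for vm in vms or list(self.sites):
            self.sites[vm] = "primary"

# A three-tier application protected as one group.
vpg = VirtualProtectionGroup("crm-app", ["web-01", "app-01", "db-01"])

# Partial failover: only the broken app server moves; web and db stay put.
vpg.failover(["app-01"])

# After repair, fail just that one VM back.
vpg.failback(["app-01"])
```

The point of the grouping is exactly what the paragraph above describes: the web and database tiers keep running at the primary site while a single damaged VM runs at the recovery site, which keeps failback small and cheap.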

Significant disruptions to IT systems, however, do happen and can take a heavy toll on the business; hence the importance of having a reliable DR plan in place. Of course, there are the obvious implications of a disaster such as interruptions to trade, service and loss in revenue. But, perhaps what you might not have considered is the effect on employee morale and additionally on customer confidence. In order to reduce disruptions such as these, a proactive and strategic approach to disaster recovery is required.

Five or ten years ago, the solution to DR felt heavy and complex and, as a result, many organisations put it in the 'too hard or expensive' basket, and DR was put off to next month, the next quarter, maybe even the next year. In the last few years, however, cloud-based disaster recovery options have given organisations a new and more affordable route to protecting their businesses.

As companies virtualised and started moving to the cloud, we found that other parts of the datacentre weren't keeping up. Efficiency was increasing, but replication was still lagging, still embedded in the hardware layer. This not only made it difficult to work with an external vendor, as matching hardware requirements could not be met; it also meant that everything had to be replicated, rather than just the data companies were interested in. As a result, the investment in digitisation was undermined by wasted resources. iland's DR software, powered by Zerto, performs replication in the hypervisor at the VM level, providing a far more efficient solution.

So what should you look for in a DRaaS solution? In my mind, excellent technical support is key in enabling users to craft a complete DR solution that meets their cost requirements and risk tolerance, whilst also ensuring that the solution is implemented quickly and tailored for success within their organisation. A successful DRaaS solution should also allow users to test their plan themselves to make sure it actually works. Finally, a DRaaS solution should be able to extend the security measures and compliance rules already in place on premises into the cloud.

As I mentioned earlier, we commissioned a survey of 250 IT decision makers with responsibility for DR at medium to large companies in the UK. From outage experiences to achievable recovery times, failover confidence levels, and barriers to adoption of cloud-based DR, the survey revealed a wealth of insights that can serve as benchmarks for developing a successful disaster recovery strategy. If you are interested in the full results of the survey, the report can be found here.

Read more: Why you can’t let disaster recovery slide off your IT budget in 2017


Parallels with Douglas Stewart EDU at Bett Show in London

Together with our partner Douglas Stewart EDU Ltd, we will be at Bett Show in London this year. When? 25-28 January 2017. Where? ExCeL London, Royal Victoria Dock, 1 Western Gateway, London E16 1XL, United Kingdom. Stand? B425. What is Bett Show? The Bett Show (British Educational Training and Technology Show) is an annual trade […]

The post Parallels with Douglas Stewart EDU at Bett Show in London appeared first on Parallels Blog.

[session] Routing Data By @Sematext | @DevOpsSummit #DevOps #Logstash

The idea behind this session is my blog post – 5 Logstash Alternatives – which is unfortunately too short to do the presented log shippers justice.
In his session at @DevOpsSummit at 20th Cloud Expo, Radu Gheorghe, Software Engineer at Sematext Group, will talk more about the things that matter: kinds of buffers, protocols, ways of parsing, correlating and de-duplicating messages, as well as supported inputs and outputs. And of course performance. All this should let you know which log shippers work well for which kinds of use-cases.

read more

2017 Digital Transformation Predictions | @CloudExpo #AI #ML #Blockchain

Once again, we find ourselves at the dawn of a new year.
And many would say, not a moment too soon. With a series of tumultuous elections around the world and an unusual number of celebrity passings, it’s been a rough year.
But there is at least one bright spot from 2016: Intellyx’s digital transformation prognostications were close to spot on!
As is our tradition, each year we review last year’s predictions and make all new fresh ones! This year, it is my turn to review Jason’s 2016 predictions and let you know what I see happening in the coming year.

read more

The Wireless Market in 2017 (Part 3)

In the third and final video of a three-part series, Network & Security Solutions Architect, Dan Allen, discusses wireless solutions and topologies, trends in the market and what to think about when starting your wireless project. To view part 1 of the series, click here. To view part 2 of the series, click here.

If you would like to discuss how to make your next wireless project a success, reach out to us.

Download this free White Paper and get 6 quick tips to avoid common mistakes and to help ensure your wireless infrastructure can support the demanding needs of the business.

By Dan Allen, Network & Security Solutions Architect

How to Use Microsoft Ink in Word on a Mac

Parallels Desktop 12 Update 1 adds even greater support for Microsoft Ink. (You can read an overview of Ink on the Mac here.) In this blog post, I will specifically discuss the uses of Microsoft Ink in Word for Windows 2016 running on a Mac with the use of Parallels Desktop 12. As outlined in […]

The post How to Use Microsoft Ink in Word on a Mac appeared first on Parallels Blog.

Google Releases New Cloud Encryption Key Management Service

Almost every major cloud provider is coming up with innovative products and add-ons that add value for their customers. In this vein, Google has released a new cloud encryption key management service (KMS) to make it easy for organizations to create and use encryption keys to protect their data. The service is currently available in beta in 50 countries, including the US, Canada, Australia, and Denmark.

This service is, in many ways, a necessity considering the number of hacking incidents over the last year, including the widely discussed interference in the US election. It adds an extra layer of protection for confidential data stored in the cloud. A salient feature of the KMS is that you can not only manage keys for encrypting user credentials, but also manage API tokens associated with applications and store them outside Google Cloud.

At the same time, managing encryption keys is easy with this service. Enterprises can create, use, rotate, and destroy millions of AES-256 symmetric keys in any cloud-hosted environment through a simple user interface. Keys can also be rotated automatically at set intervals, so the active key material changes regularly and only authorized users can access the application.
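As an illustration of how an automatic rotation schedule works, here is a small stdlib-only Python sketch. The function name and logic are hypothetical; in Cloud KMS, rotation is configured on the key and performed server-side:

```python
from datetime import datetime, timedelta

def next_rotation(created, rotation_period_days, now):
    """Return the first rotation time after `now`, given a key's creation
    time and its rotation period. Illustrative only: real KMS services
    track and enforce this schedule themselves."""
    period = timedelta(days=rotation_period_days)
    due = created + period
    # Walk forward one period at a time until we pass the current moment.
    while due <= now:
        due += period
    return due

created = datetime(2017, 1, 1)
now = datetime(2017, 3, 15)
print(next_rotation(created, 30, now))  # 2017-04-01 00:00:00
```

The practical effect is the one described above: even if one key version leaks, it is superseded on the next 30-day boundary without anyone having to remember to rotate it.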

In addition, Google KMS integrates well with existing Google services such as Cloud Identity and Access Management and Cloud Audit Logging, giving customers greater control over their encryption keys. The service sits between the default encryption option, which is the most hands-off, and customer-supplied encryption keys (CSEK), which are the most stringent. In other words, it is stringent enough to protect your data strongly while remaining easy to use. That balance can be particularly useful for companies in regulated sectors such as healthcare and finance, as it helps them meet the requirements laid down by various statutory bodies.

These features should drive greater adoption through this year. The pricing is also fairly reasonable and depends on usage: Google plans to charge $0.06 per active key per month, plus $0.03 per 10,000 key operations. This means an organization that stores 500 encryption keys and performs 100,000 operations can expect to pay just over $30 a month. While not trivial, this is fair considering that running key management in-house is itself an expensive process.
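The arithmetic behind that estimate can be sketched as follows, using the published rates above (the helper function itself is just an illustration, not a Google pricing API):

```python
def kms_monthly_cost(active_keys, operations):
    """Estimated monthly Cloud KMS bill: $0.06 per active key per month,
    plus $0.03 per 10,000 key operations."""
    key_cost = active_keys * 0.06
    op_cost = (operations / 10_000) * 0.03
    return round(key_cost + op_cost, 2)

# 500 keys and 100,000 operations: $30.00 in key storage + $0.30 in usage.
print(kms_monthly_cost(500, 100_000))  # 30.3
```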

In short, Google's KMS addresses gaps in cloud security by adding a further layer of protection. It is no surprise that Google has released such a product, as it has long been a strong advocate of end-to-end encryption of data on the Internet. For enterprise customers, the service should address many fears and concerns about hacking, which, unfortunately, has been more prevalent than we'd like. It will hopefully close some of the security loopholes on the Internet.

The post Google Releases New Cloud Encryption Key Management Service appeared first on Cloud News Daily.

Netskope gives another warning to businesses struggling with GDPR compliance

(c)iStock.com/maxkabakov

An overwhelming 94% of cloud apps in enterprises across EMEA are not enterprise-ready, while two thirds of overall cloud services are not up to scratch, according to the latest research from Netskope.

The findings, which appear in the company's latest quarterly cloud report, show that 82.4% of services do not encrypt data at rest, 66.4% of cloud services do not specify in their terms of service that the customer owns the data, and 42% do not allow admins to enforce password controls.

The report arrives amidst the backdrop of the European General Data Protection Regulation (GDPR), which comes into effect in May of next year. Netskope warns that businesses are potentially falling behind in their efforts to become au fait with the legislation; 40% of services analysed back up to a secondary location, the company says, not all of which are GDPR-compliant.

“Until very recently, organisations had to take an all-or-nothing approach to allowing cloud services,” said Sanjay Beri, Netskope founder and CEO. “If they sanctioned a cloud storage service for corporate use, they also needed to accept any additional personal instances of that cloud storage service or block the service entirely.

“As our customers make cloud services a strategic advantage for their businesses, when it comes to governing and securing those services, they are realising granular policies can ensure that sensitive data does not leak from the sanctioned instance of a corporate cloud service to an unsanctioned one,” Beri added.

The report also examined wider cloud trends. More than 90% of Netskope customers use IaaS services, predominantly AWS, Microsoft Azure and Google Cloud Platform, with enterprises using on average four IaaS services, while Office 365 remains the number one app with Slack continuing to rise up the top 20.

In February last year, Netskope reported that more than 4% of enterprises have sanctioned cloud apps laced with malware.