Seven elements for a successful cloud migration plan

When it comes to cloud, the good news is that we’re past the period of fear, uncertainty, and doubt. Everyone now agrees that the cloud is a key part of any firm’s IT investment. The not-so-good news is that there is still confusion about what to move, how to move it, and the best practices needed to protect your investment.

This was the topic of our recent webinar with Dwayne Monroe, a Microsoft Cloud Solutions Architect at McGraw-Hill. While it may be tempting to simply relocate existing infrastructure to the cloud in its current form, Dwayne points out that we must reimagine what is possible rather than automatically reverting to lift and shift. That’s why a cloud migration plan is an essential part of any migration process.

Regardless of which workloads you’re migrating – databases, storage, compute – there is no single perfect formula for moving to the cloud. Instead, Dwayne believes that a well-designed project plan can ensure that teams make the most of cloud investments, as long as it includes these seven essential steps.

Identify the need

To understand what problems you’ll be solving, you need to understand what the people using the existing platform require. Bring platform owners and users together to get a full understanding of their pain points.

Find your champion

Regardless of which platform you’re using, you need an enthusiastic Azure, AWS, or Google Cloud champion on your side. A key part of the champion’s role is to act as the political liaison between management and the technical teams, explaining why the project is important and what the value will be once it is completed.

Listen

As a technologist, the temptation may be to simply start “tech-ing the tech.” If you want your cloud migration to be a success, make an effort to listen to the end users and the teams using the technology to make sure you’re creating the right fit.

Partner with deeply skilled people

While you may have deep knowledge of your existing infrastructure, architecting for public clouds is relatively new and requires a unique set of skills. Despite your expertise, there will be things that you don’t know and opportunities that may not be obvious to you. Partner with individuals who eat, sleep, and breathe your team’s platform of choice to optimize your outcome.

Create a POC environment

Azure makes it possible to create a proof-of-concept environment. This will be a playground for developers and other members of the team to see what the platform can do. (Editor’s note: If you can’t easily spin up sandboxes or POC environments, implement a strategic training platform like Cloud Academy that automates this for you and your staff.)
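
To make this concrete, below is a minimal sketch of carving out an isolated POC playground with the Azure SDK for Python. The subscription ID, resource group name, region, and tags are illustrative assumptions, not prescriptions; the point is that a dedicated resource group keeps experiments contained, auditable, and easy to delete in one operation when the experiment ends.

    # A minimal sketch: a dedicated resource group as a POC sandbox.
    # Assumes the azure-identity and azure-mgmt-resource packages and a
    # subscription you are allowed to create resources in.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    subscription_id = "<your-subscription-id>"  # placeholder
    client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

    # Create (or update) a resource group dedicated to proof-of-concept work.
    client.resource_groups.create_or_update(
        "poc-sandbox-rg",  # illustrative name
        {"location": "eastus", "tags": {"purpose": "poc", "owner": "platform-team"}},
    )

    # Tearing the whole playground down later is a single call:
    # client.resource_groups.begin_delete("poc-sandbox-rg")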

Evolve an approach that meets the need

You are evolving your solution from on-premises into the cloud. Don’t implement technology for technology’s sake. In your early days of cloud adoption, make sure that the workflow elements of your cloud migration are handled intelligently and patiently.

Get comfortable with baby steps and a lot of patience

Do not be excessively aggressive. While you may be tempted to go for the quick win, realize that cutting corners will almost certainly doom your cloud migration project. Taking small, logical steps is vitally important.

Technology challenges won’t be the only obstacles you’ll face. Moving to the cloud will be seen by some as a threat to how things have always been done. Patience, careful planning, and a focus on solutions will be the keys to your success.

Editor’s note: This post is an excerpt from Cloud Academy’s webinar, Microsoft Azure: Moving Your Workloads to the Cloud the Right Way and Securing Your Investment with Dwayne Monroe.

How cloud service providers can halt hackers – with smart security protocols and reporting

When you call yourself "the global leader in secure content collaboration," you can't afford security gaffes.

Huddle, a SaaS tool used throughout the U.K. government, learned that the hard way when a BBC journalist logged into its system and was redirected to the wrong account. Imagine his shock when he realized he had access to confidential KPMG financial data. 

Luckily for Huddle, the journalist left the sensitive information untouched, but he wasn't about to leave the story untold. The world soon knew of Huddle's head-scratching glitch: When two users signed on during a 20-millisecond period, they received identical authentication codes. The first to gain entry could be directed to either user’s account.
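
Huddle has not published its code, so the snippet below is only a generic illustration of the bug class its description suggests: deriving “unique” codes from a coarse clock tick, so that two requests landing in the same tick receive identical codes. The safe variant draws every code from a cryptographically secure source instead.

    import random
    import secrets
    import time

    # Anti-pattern: seeding code generation from a 20 ms clock tick means two
    # users signing on within the same tick get the SAME authentication code.
    def flawed_auth_code() -> str:
        tick = int(time.time() * 50)  # 50 ticks per second = 20 ms resolution
        return f"{random.Random(tick).getrandbits(64):016x}"

    # Safer: each code is independently random, so concurrent requests
    # cannot collide by construction.
    def safe_auth_code() -> str:
        return secrets.token_hex(16)

    a, b = flawed_auth_code(), flawed_auth_code()
    print(a == b)  # almost always True when both calls land in one tick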

Of course, Huddle acted quickly to fix the flaw. But the security mistake left its mark on Huddle's reputation, especially, no doubt, among flagship clients like KPMG.

Security protocols to implement pronto

Although it's easy to point fingers at Huddle, other cloud service providers (CSPs) should take the chance to review their own security operations. Without the following four security processes, they're but one opportunistic hack away from a storm of upset clients, lawsuits, and unflattering media attention:

Multifactor authentication: Password-gated portals are the norm among cloud-based services, but passwords are far too easy to crack or steal. CSPs should therefore require a secondary, and perhaps even tertiary, form of authentication in addition to the password. Be it a phone-based approach or a token device, a multifactor login system is part and parcel of the security responsibilities that infrastructure-as-a-service, platform-as-a-service, and software-as-a-service providers share with their clients.

Patch management: There's a reason your Windows or Mac computer constantly wants to install security updates. Software providers use patches to close security holes before attackers can exploit them to infect other systems. Without patch management systems in place, CSPs are at risk of malware and, more common today, ransomware. The bigger the CSP, the more likely it is to become a hacking target, making patches all the more important.

Credential management: Companies often share login information internally, but that leaves the keys to their kingdom in many hands. Eventually, that information could end up in the wrong person’s pocket. Ensuring each user has his or her own credentials helps CSPs hold users accountable for their behavior. It also helps prevent incidents like the data leaks caused by misconfigured Amazon Web Services S3 buckets. Because IaaS companies manage servers and hardware for downstream PaaS and SaaS providers, they have a particular responsibility to manage credentials carefully.

Key management: Picture a cul-de-sac where every resident knows where the master key that can unlock any house on the street is stored. What happens when one person moves away but the locks aren’t changed? Practically anyone could later use that master key – the one used to decrypt encrypted data – to break in. This is often how CSPs unknowingly manage their security keys. Key management systems are critical and can save an organization in the event of a breach of third-party cloud systems that the organization may not control.
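
One widely used pattern that avoids the shared-master-key trap is envelope encryption: each record gets its own data key, and only those small data keys are wrapped by the master key, which should itself live in a managed KMS or HSM rather than in application memory. The sketch below, using the third-party cryptography package, illustrates the flow only; it is not a production design.

    # Envelope encryption sketch: per-record data keys wrapped by a master key.
    # Rotating the master key then means re-wrapping small data keys rather
    # than re-encrypting every record.
    from cryptography.fernet import Fernet

    master_key = Fernet.generate_key()  # in production: held by a KMS/HSM
    master = Fernet(master_key)

    def encrypt_record(plaintext: bytes) -> tuple[bytes, bytes]:
        data_key = Fernet.generate_key()          # fresh key per record
        ciphertext = Fernet(data_key).encrypt(plaintext)
        wrapped_key = master.encrypt(data_key)    # store only the wrapped key
        return wrapped_key, ciphertext

    def decrypt_record(wrapped_key: bytes, ciphertext: bytes) -> bytes:
        data_key = master.decrypt(wrapped_key)
        return Fernet(data_key).decrypt(ciphertext)

    wrapped, ct = encrypt_record(b"confidential client data")
    print(decrypt_record(wrapped, ct))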

Communicating your security steps

Just as IKEA provides detailed setup and use instructions for its customers, CSPs must share security best practices associated with their systems. This includes explaining their own security protocols to clients and prospects. Not only is transparently communicating security features the ethical thing to do, but it can also boost sales through greater client trust.

To get the word out about your cloud service’s security, start with these three strategies:

Draft a public-facing communications strategy: You already have a website, so use it to educate people on your security measures. You don’t have to give away the recipe to your secret sauce, but do pull together a whitepaper outlining your services and tying them to security best practices. Your sales, marketing, and technology teams may want to create a security toolbox of whitepapers reflecting the security needs of different industries and environments.

Arm your sales force with detailed protocol content: Every salesperson for your company should be able to prove to prospects that your security protocols meet their compliance challenges. Again, consider creating a series of whitepapers that map out your processes for technical personnel, auditors, vendor risk managers, and C-suite parties. Technical jargon won't help most businesspeople, and most technical roles will expect more than surface-level explanations.

Develop third-party audit reports: The best assurance of your company's security comes from a third-party audit. Be sure that your report not only provides external validation of your protocols, but also explains how they apply in the real world. For example, the SOC 2+ report offers enhanced reporting that can address multiple compliance and assurance needs. If your company serves financial clients in the state of New York, such a report should show how you meet the state’s financial cybersecurity standards through features like multifactor authentication. Or if your company deals in medical data, the report should prove that your protocols align with HIPAA standards.

CSPs operate in a world where trust is golden. But like real gold, that trust is easily bent or broken by breaches and other security flaws. Maintaining or mending trust takes a twofold approach: proper protocols to deter cybercrime and smart reporting to ensure clients know they're protected.

Why data science and machine learning jobs are the most in-demand on LinkedIn

  • Machine learning engineers, data scientists, and big data engineers rank among the top emerging jobs on LinkedIn.
  • Data scientist roles have grown over 650% since 2012, yet only 35,000 people in the U.S. have data science skills, while hundreds of companies are hiring for those roles.
  • There are currently 1,829 open machine learning engineering positions on LinkedIn.
  • Job growth in the next decade is expected to outstrip growth during the previous decade, creating 11.5 million jobs by 2026, according to the U.S. Bureau of Labor Statistics.

These and many other insights are from the recently released LinkedIn 2017 U.S. Emerging Jobs Report. LinkedIn provides an overview of the methodology in its post, The Fastest-Growing Jobs in the U.S. Based on LinkedIn Data. “Emerging jobs” are the job titles that saw the largest growth in frequency over the past five years. LinkedIn reports that, based on its analysis, the U.S. job market is brimming with fresh and exciting opportunities for professionals in a range of emerging roles.

Key takeaways from the study include the following:

  • There are 9.8 times more machine learning engineers working today than five years ago based on LinkedIn’s research, with 1,829 open positions listed on the site today. There are 6.5 times more data scientists than five years ago, and 5.5 times more big data developers. Growth has been similarly rapid for full-stack developers, sales development representatives, and customer success managers.

  • Software engineering is a common starting point for professionals in the top five fastest-growing jobs today. The career path to machine learning engineer and big data developer typically begins with a solid software engineering background.

  • The skills most strongly represented across the 20 fastest-growing jobs include management, sales, communication, and marketing. Additional skills represented across these jobs include marketing analytics and marketing automation, start-up experience, Python, software development, analytics, cloud computing, and knowledge of retail systems.
  • LinkedIn interviewed 1,200 hiring managers to determine which soft skills are most in-demand and adaptability came out on top. Additional soft skills include culture fit, collaboration, leadership, growth potential, and prioritization.

Sources:

LinkedIn Blog: The Fastest-Growing Jobs in the U.S. Based on LinkedIn Data

LinkedIn’s 2017 U.S. Emerging Jobs Report

DevOps Predictions for 2018

For many of us laboring in the fields of digital transformation, 2017 was a year of high-intensity work and high-reward achievement. So we’re looking forward to a little breather over the end-of-year holiday season.

But we’ll have to get right back on the Continuous Delivery bullet train in 2018. Markets move too fast, and customer expectations rise too quickly, for businesses to rest on their laurels.

Here’s a DevOps “to-do list” of 2018 priorities for anyone who wants to make sure their organization is running at the front of the digital pack through next year – and beyond.

Tech News Recap for the Week of 01/01/18

Welcome to 2018!

If you had a busy week in the office following the holidays and need to catch up, here’s a tech news recap of articles you may have missed the week of 01/01/2018!

Why 2018 is the year for Kubernetes. The biggest hardware and software milestones of 2017 for Microsoft. Updates and patches for the Meltdown and Spectre vulnerabilities. Eight burning questions for enterprise technology in 2018 and more top news this week you may have missed! Remember, to stay up-to-date on the latest tech news throughout the week, follow @GreenPagesIT on Twitter.

Tech News Recap

Join GreenPages’ Cloud Experts on January 17th for a lively, non-biased discussion at our webinar:

AWS or Azure? How to Move from Analysis Paralysis Toward a Smart Cloud Choice

Click here to register!

IT Operations

Microsoft

  • Microsoft Monday: Foldable smartphone patent, Surface Pro with LTE availability, Andromeda OS hinted
  • Microsoft in 2017: The biggest hardware and software milestones
  • VDI with Citrix on Azure Government
  • Microsoft is already fixing the big chip bug – here are the Windows PCs that will be the most affected
  • Why 2018 could be a big breakout year for Microsoft’s Stream video service

HPE

  • HP’s newest ‘zero’ client offers improved security, VMware and Amazon integrations

VMware

  • Once again, analysts name VMware a leader in cloud management

Security

  • 2017 Threat Trends – The Year in Review
  • One surprising statistic explains why phishing will remain the most common cyber attack for the next few years
  • Emergency Windows Meltdown patch may be incompatible with PC
  • Anonymous no more: Reusing complex passwords gives your identity away

Thanks for checking out our tech news recap!

By Jake Cryan, Digital Marketing Specialist

While you’re here, check out this white paper on how to rethink your IT security, especially when it comes to financial services.

Global cloud computing market revenues reached $180 billion in the past year

The global cloud computing market is now worth $180 billion in vendor revenues with the market still growing by 24% annually, according to the latest note from Synergy Research.

Synergy puts the industry into six different buckets. The first, infrastructure as a service (IaaS) and platform as a service (PaaS), remains the fastest growing sector at 47% annual growth, with – as this publication has long explored – Amazon and Microsoft at the top of the tree. The second fastest growing area was enterprise software as a service (SaaS), at 31%, with Microsoft and Salesforce the leading vendors.

The weakest growing areas were private and public cloud infrastructure, led by Dell EMC/HPE and Cisco/Dell EMC respectively. Unified communications as a service (UCaaS) gained just over 20% annual growth, with RingCentral and Mitel the leading vendors, while hosted private cloud grew almost 30%, led by IBM and Rackspace.

“We tagged 2015 as the year when cloud became mainstream and 2016 as the year when cloud started to dominate many IT market segments. In 2017, cloud was the new normal,” said John Dinsdale, a chief analyst and research director at Synergy.

“Major barriers to cloud adoption are now almost a thing of the past, with previously perceived weaknesses such as security now often seen as strengths,” Dinsdale added.

“Cloud technologies are now generating massive revenues for cloud service providers and technology vendors and we forecast that current market growth rates will decline only slowly over the next five years.”

Synergy has evidently been busy during the end-of-year break, issuing three research notes this week. According to its analysis, the data centre market saw record merger and acquisition activity in 2017 – ahead of 2015 and 2016’s totals combined – while hosted cloud and collaboration revenues remain the quickest growing area of enterprise IT infrastructure.

Continuing in the face of disaster: Assessing disaster recovery in the cloud age

With 73% of businesses having suffered some type of operational interruption in the last five years, business continuity is becoming a concern for many organisations, especially SMEs. Business continuity incorporates pre-emptive measures, such as cyber-defences, to minimise risk; proactive tactics, such as system backups, in case a problem arises; and a reactive strategy – which should include disaster recovery (DR) – ready in case the worst happens.

But in the wake of disaster, how do businesses continue with everyday operations?

Business continuity

Traditional on-premise backup systems use removable media in the form of tapes or disk drives to store backup data. But this often means designated employees are required to manage and shuffle the backup media every day and preferably, take a copy offsite for safekeeping. The relatively high level of manual intervention can lead to errors being made, resulting in failed or incomplete backups. The removable media is typically a consumable and needs to be replaced at regular intervals, which can be costly, especially for larger capacity backups and media.

Beyond simple backups, conventional disaster recovery is a much more complex and costly proposition and typically requires a duplicate set of all the critical systems installed at a remote location, ready to step in if disaster strikes at the primary location. Many businesses have other concerns when it comes to backups and DR so it’s easy to see why organisations would question spending often serious budget on ‘what if’ technology that may never be needed. But what if disaster does strike? 

Cloud-based DR

Cloud technology has drastically reduced storage costs and has made backing up entire systems much more cost-effective and straightforward. All of the leading cloud providers – Microsoft, Amazon and Google – now offer backup as a core service of their cloud offerings, and clients can generally select whichever backup schedule and retention policy they wish to utilise.
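
As a rough illustration of what such a retention policy amounts to, here is a minimal grandfather-father-son-style pruning function in Python. The keep counts are illustrative defaults, not recommendations.

    # Decide which backups to retain: the last N dailies, plus the newest
    # backup in each of the most recent weeks and months.
    from datetime import date, timedelta

    def backups_to_keep(backup_dates, daily=7, weekly=4, monthly=12):
        ordered = sorted(backup_dates, reverse=True)  # newest first
        keep = set(ordered[:daily])                   # most recent dailies
        weeks, months = set(), set()
        for d in ordered:
            week = d.isocalendar()[:2]                # (ISO year, week number)
            if week not in weeks and len(weeks) < weekly:
                weeks.add(week)
                keep.add(d)
            month = (d.year, d.month)
            if month not in months and len(months) < monthly:
                months.add(month)
                keep.add(d)
        return keep

    # Example: 120 days of daily backups pruned down to the retained set.
    history = [date(2018, 1, 5) - timedelta(days=i) for i in range(120)]
    print(len(backups_to_keep(history)))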

Cloud computing also addresses the DR requirement. Major cloud service providers employ large-scale resilience and redundancy to ensure their systems remain operational. In the unlikely event an entire data centre goes down, client systems can operate from a second data centre. Most providers will also back up on-premise systems and store that data in their cloud-based storage with the same freedom to define schedule and retention.

However, the very best systems can also provide a full DR service for on-premise systems by replicating on-premise data into the cloud in near real-time. Then, if disaster strikes, the systems can automatically allocate computing resources (CPUs, RAM, and so on) and “spin up” virtual servers to seamlessly take over until normal service is resumed on-site. Once the disaster has passed, the cloud systems “fail back” to the on-premise systems and synchronise all data that changed during the disaster window. When it comes to defining a DR strategy, businesses now have far more options available, with genuine DR systems a cost-effective possibility for SMEs.
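
The spin-up and fail-back cycle described above can be pictured as a simple watchdog loop. The sketch below is deliberately abstract: every operation is injected as a callable because, in a real deployment, each step maps to a provider-specific replication or orchestration service; none of these names come from an actual API.

    import time
    from typing import Callable, Optional

    def run_dr_watchdog(
        primary_is_healthy: Callable[[], bool],
        provision_standby: Callable[[], object],      # "spin up" cloud replicas
        redirect_traffic: Callable[[object], None],
        replicate_back: Callable[[object], None],     # sync the disaster window
        decommission_standby: Callable[[object], None],
        poll_seconds: float = 30.0,
    ) -> None:
        standby: Optional[object] = None
        while True:
            if not primary_is_healthy() and standby is None:
                # Disaster: bring up replicated virtual servers in the cloud
                # and point users at them.
                standby = provision_standby()
                redirect_traffic(standby)
            elif primary_is_healthy() and standby is not None:
                # Fail-back: synchronise changes made during the outage,
                # then return service to the on-premise systems.
                replicate_back(standby)
                redirect_traffic("primary")
                decommission_standby(standby)
                standby = None
            time.sleep(poll_seconds)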

The SME

SMEs in particular are starting to discover the advantages of cloud-based DR strategies. For businesses that may not have significant budget set aside specifically for IT, cloud-based solutions hold the key to successful adoption. With usage-based pricing, this type of system is ideal for cloud DR: the secondary, replicated IT infrastructure lies in wait until it is required, and businesses only pay for it when – or if – they need it. Without the need for physical storage in data centres, smaller businesses can deploy their own disaster recovery strategy, making DR no longer the preserve of larger enterprises.

So, what now?

Although business continuity should be a priority for every business, organisations in traditionally ‘offline’ industries often see IT decisions as tactical rather than strategic. A business will cease to function at full capacity if disaster strikes and the necessary continuity procedures are not in place, and as a direct result it will face a significant increase in downtime and expenditure.

If it isn’t already, business continuity must become a priority for organisations. It’s now easier than ever to migrate to the cloud and take advantage of the inbuilt backup and disaster recovery options available. With the rate of cyber attacks on businesses of all sizes increasing significantly, no company is immune from the threat of hacking, human error or natural disasters – and there is no longer any excuse not to have these systems and procedures in place.

Why more than half of companies are now making serious investments in big data analytics

  • Big data adoption reached 53% in 2017 for all companies interviewed, up from 17% in 2015, with telecom and financial services leading early adopters.
  • Reporting, dashboards, advanced visualization, end-user “self-service” and data warehousing are the top five technologies and initiatives strategic to business intelligence.
  • Data warehouse optimization remains the top use case for big data, followed by customer/social analysis and predictive maintenance.
  • Among big data distributions, Cloudera is the most popular, followed by Hortonworks, MapR, and Amazon EMR.

These and many other insights are from Dresner Advisory Services’ insightful 2017 Big Data Analytics Market Study (94 pp., PDF, client access required), which is part of their Wisdom of Crowds® series of research. This third annual report examines end-user trends and intentions surrounding big data analytics, defined as systems that enable end-user access to and analysis of data contained and managed within the Hadoop ecosystem. The 2017 Big Data Analytics Market Study represents a cross-section of data that spans geographies, functions, organization sizes, and vertical industries. Please see page 10 of the study for additional details regarding the methodology.

“Across the three years of our comprehensive study of big data analytics, we see a significant increase in uptake and usage and a large drop of those with no plans to adopt,” said Howard Dresner, founder and chief research officer at Dresner Advisory Services. “In 2017, IT has emerged as the most typical adopter of big data, although all departments – including finance – are considering future use. This is an indication that big data is becoming less an experimental endeavor and more of a practical pursuit within organizations.”

Key takeaways include the following:

Reporting, dashboards, advanced visualization, end-user “self-service” and data warehousing are the top five technologies and initiatives strategic to business intelligence

Big data ranks 20th across the 33 key technologies Dresner Advisory Services currently tracks. Big data analytics is of greater strategic importance than the Internet of Things (IoT), natural language analytics, cognitive business intelligence (BI) and location intelligence.

53% of companies are using big data analytics today, up from 17% in 2015, with the telecom and financial services industries fueling the fastest adoption

Telecom and financial services are the most active early adopters, with technology and healthcare the third- and fourth-fastest industries to adopt big data analytics. Education has the lowest adoption as 2017 comes to a close, with the majority of institutions in that vertical saying they are evaluating big data analytics for the future. North America (55%) narrowly leads EMEA (53%) in current levels of big data analytics adoption. Asia-Pacific respondents report 44% current adoption and are the most likely to say they “may use big data in the future.”

Data warehouse optimization is considered the most important big data analytics use case in 2017, followed by customer/social analysis and predictive maintenance

Data warehouse optimization is considered critical or very important by 70% of all respondents. It is interesting, and somewhat ironic, that the Internet of Things (IoT) is among the lowest-priority use cases for big data analytics today.

Big data analytics use cases vary significantly by industry, with data warehouse optimization dominating financial services

Customer/social analysis is the leading use case in technology-based companies. Fraud detection use cases also dominate financial services and telecommunications. Using big data for clickstream analytics is most popular in financial services.

Spark, MapReduce, and YARN are the three most popular software frameworks today

Over 30% of respondents consider Spark critical to their big data analytics strategies, while MapReduce and YARN are critical to more than 20% of respondents.

The big data access methods most preferred by respondents include Spark SQL, Hive, HDFS and Amazon S3

73% of respondents consider Spark SQL critical to their analytics strategies. Over 30% consider Hive and HDFS critical as well, and Amazon S3 is critical to one in five respondents for managing big data access.
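
For readers less familiar with these access methods, the PySpark sketch below shows two of them side by side: reading objects from Amazon S3 and querying the result through Spark SQL. The bucket, path, and column names are placeholders, and the cluster is assumed to be configured with S3 (s3a) credentials.

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("access-methods-demo")
             .getOrCreate())

    # S3 access: Spark reads objects through the s3a:// connector.
    events = spark.read.parquet("s3a://example-bucket/events/2017/")

    # Spark SQL access: register the data and query it declaratively.
    events.createOrReplaceTempView("events")
    spark.sql("""
        SELECT user_id, COUNT(*) AS n
        FROM events
        GROUP BY user_id
        ORDER BY n DESC
        LIMIT 10
    """).show()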

Machine learning continues to gain industry support and investment, with Spark Machine Learning Library (MLlib) adoption projected to grow by 60% in the next 12 months

According to the survey results, MLlib will dominate machine learning over the next 24 months. MLlib is accessible from the sparklyr R package, among many other interfaces, which continues to fuel its growth.
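
To give a sense of what working with MLlib looks like, here is a minimal PySpark example that trains a logistic regression model; the column names and toy data are purely illustrative.

    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.feature import VectorAssembler
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("mllib-demo").getOrCreate()

    # Toy dataset: a label column plus two numeric features.
    df = spark.createDataFrame(
        [(0.0, 1.2, 0.7), (1.0, 3.1, 2.2), (0.0, 0.4, 1.0), (1.0, 2.8, 3.3)],
        ["label", "f1", "f2"],
    )

    # MLlib estimators expect the features packed into a single vector column.
    assembled = VectorAssembler(inputCols=["f1", "f2"],
                                outputCol="features").transform(df)

    model = LogisticRegression(maxIter=10).fit(assembled)
    model.transform(assembled).select("label", "prediction").show()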

AWS, Microsoft, Google and more respond on chip vulnerability issue

Leading cloud providers have said they are aware of and working on securing systems after the disclosure of two major chip-level security vulnerabilities earlier this week.

As first reported by The Register, a ‘fundamental’ design flaw in Intel’s processor chips, dubbed Meltdown, was followed by another flaw, called Spectre, found in chips from Intel, AMD and ARM. The latter was confirmed by Google researchers in a blog post published yesterday.

The key to the vulnerabilities is a processor technique called ‘speculative execution’: modern processors guess which instructions are likely to be needed next and execute them in advance, completing the work much sooner when the guess is right. As the Google blog notes, malicious actors ‘could take advantage of speculative execution to read system memory that should have been inaccessible’, such as passwords or encryption keys.
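
The bounds-check-bypass variant hinges on an extremely common code shape, transliterated into Python below purely for illustration. Python itself is not the attack vector; the equivalent compiled code is, because the processor may run the array accesses speculatively before the bounds check has resolved, leaving a secret-dependent footprint in the cache.

    # The vulnerable shape behind the bounds-check-bypass attack (illustrative).
    array1 = bytes(16)              # attacker-reachable data
    array2 = bytearray(256 * 512)   # probe array read via cache timing

    def victim(x: int) -> None:
        if x < len(array1):          # (1) bounds check
            value = array1[x]        # (2) may run speculatively, even for
                                     #     out-of-bounds x on affected CPUs
            _ = array2[value * 512]  # (3) caches a line chosen by the
                                     #     secret byte's value

    # An attacker first calls victim() with valid x to train the branch
    # predictor, then with a malicious x. Architecturally nothing leaks, but
    # the cache state betrays memory contents one byte at a time.
    victim(0)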

So how does this affect cloud providers? A blogger going under the name of Python Sweetness asserted on January 1 that the vulnerability will affect major cloud providers. “There are hints the attack impacts common virtualisation environments including Amazon EC2 and Google Compute Engine,” the post reads.

In a security bulletin, Amazon Web Services (AWS) said ‘all but a small single-digit percentage of instances across the Amazon EC2 fleet’ were already protected. Microsoft said in a statement that it was “in the process of deploying mitigations to cloud services”, as well as releasing security updates. Google issued a bulletin for its cloud products with Compute Engine, Kubernetes Engine, Cloud Dataflow and Cloud Dataproc requiring updates, while a statement from Josh Feinblum, chief security officer at DigitalOcean, recommended server reboots for users and promised urgent maintenance if this was unsuccessful.  

A statement from Intel issued yesterday said the company was committed to product and customer security and was working with AMD, ARM, and others ‘to develop an industry-wide approach to resolve this issue promptly and constructively.’

“Intel has begun providing software and firmware updates to mitigate these exploits,” the statement added. “Contrary to some reports, any performance impacts are workload-dependent, and, for the average computer user, should not be significant and will be mitigated over time.”

AMD also issued an update, stressing that the research was performed under controlled lab conditions and that the threat had not been seen in the public domain.

The enterprise IT infrastructure market: Microsoft leads cloud collaboration, Cisco leads overall

New figures from Synergy Research around the state of the enterprise IT infrastructure market show that hosted and cloud collaboration revenues continue to grow quickly – with Microsoft at the top of the tree.

The overall market, however – including data centre servers, switches and routers, on-premise collaboration, network security and WLAN – has Cisco at its front with HPE behind. Aside from data centre servers and cloud collaboration, where it sits second behind Microsoft, Cisco leads the other segments and holds a 26% overall market share, according to Synergy. HPE has an 11% market share across the six segments.

Not surprisingly, hosted and cloud collaboration remains the fastest growing segment of enterprise IT infrastructure, with a growth rate of more than 12% year on year. WLAN, switches and routers, and network security also grew above the average rate, while the data centre server market flatlined and on-premise collaboration went backwards.

Other cited vendors include Dell EMC, in second position in data centre servers, Huawei for switches and routers, and Check Point for network security.

“Despite a burgeoning public cloud market, enterprise IT infrastructure spending was still on the rise in 2017 and will be for the next five years,” said Jeremy Duke, Synergy founder and chief analyst in a statement. “The focus of that spending is changing, however, with a growing emphasis on hosted solutions, subscription-based business models and emerging technologies.

“Those changes will continue to present challenges for incumbent vendors and opportunities for new market entrants.”

Figures issued by Synergy earlier this week focused on the data centre market, with M&A deals for 2017 outpacing 2015 and 2016 combined.