Cisco cybersecurity report shows how cloud use rises – and cyberattacks rise with it

Cloud computing usage continues to grow, but at the expense of security as attackers take advantage, according to the latest Cisco annual cybersecurity report.

The study, now in its 11th iteration, compiles threat intelligence and cybersecurity trends from researchers and six technology partners, and found that more than a quarter (27%) of security professionals polled use off-premises private clouds, up from 20% at the same time last year.

Of that number, more than half (57%) say they host networks in the cloud because of better data security, compared with 48% who cite scalability and 46% who cite ease of use.

“While cloud offers better data security, attackers are taking advantage of the fact that security teams are having difficulty defending evolving and expanding cloud environments,” the company notes. “The combination of best practices, advanced security techniques like machine learning, and first-line-of-defence tools like cloud security platforms can help protect this environment.”

So what can be done? According to the 2018 Cisco Security Capabilities Benchmark report, a proportion of chief information security officers (CISOs) said they were eager to add tools such as machine learning and artificial intelligence (AI) to their technology mix, but were frustrated by issues such as false positives.

The company added that machine learning and AI technologies will over time mature and learn what is ‘normal’ in the network environments they are monitoring.
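
To make that "learning what is normal" idea concrete, here is a minimal, purely illustrative sketch of the kind of baselining such tools automate: build a statistical baseline of normal activity and flag large deviations. The data, threshold and metric are invented for illustration and are not Cisco's method.

```python
# Illustrative sketch only: a toy baseline-and-deviation check of the kind
# ML-based monitoring tools build automatically. Data and threshold are made up.
from statistics import mean, stdev

# Hypothetical hourly outbound connection counts observed during "normal" operation
baseline = [112, 98, 105, 120, 101, 99, 117, 108, 95, 110]

def is_anomalous(observation, history, z_threshold=3.0):
    """Flag an observation that deviates strongly from the learned baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > z_threshold

print(is_anomalous(104, baseline))   # False: within the normal range
print(is_anomalous(560, baseline))   # True: a spike worth investigating
```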

Elsewhere, the report revealed that while security management is becoming more complex, the scope of breaches is expanding. Last year, a quarter of security professionals said they used products from 11 to 20 vendors, up from 18% in 2016, while a third (32%) of breaches in 2017 affected more than half of an organisation's systems.


Disaster recovery: The importance of choosing the right provider for you

I wrote an article recently which centred on Gartner’s prediction that the disaster recovery as a service (DRaaS) market would grow from $2.01bn in 2017 to $3.7bn by 2021.

In my opinion, one of the main drivers for this rapid growth is the fact that it is ‘as a service’, rather than the complex and expensive ‘create your own’ environment it used to be. This has made DRaaS much more accessible to the SMB market as well as to enterprise customers. But as the list of DRaaS solutions grows along with adoption rates, it is important for customers to consider carefully how their choice of cloud provider should be influenced by their existing infrastructure. This will help them avoid technical challenges down the road.

The concept of disaster recovery

Before I delve into the key considerations for customers when choosing a DR solution, I should, for the sake of the uninitiated, explain what DR is. It literally means recovering from a disaster, and so encompasses the time and labour required to be up and running again after data loss or downtime. How well a business recovers depends on the solution chosen to protect it against data loss. DR is not simply about the time during which systems and employees cannot work; it is also about the amount of data lost when falling back on a previous version of that data. These two dimensions are usually expressed as the recovery time objective (RTO) and the recovery point objective (RPO). Businesses should always ask themselves: “how much would an hour of downtime cost?” And, moreover, “is it possible to remember and reproduce the work that employees or systems did in the last few hours?”
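
To make those two questions concrete, here is a small worked example (every figure is hypothetical): the cost of an incident can be approximated as downtime (RTO) multiplied by the hourly cost of being down, plus the hours of lost work back to the last recovery point (RPO) multiplied by the cost of redoing it.

```python
# Hypothetical worked example: the cost of a disaster is driven by both
# how long you are down (RTO) and how much recent work you lose (RPO).
def outage_cost(rto_hours, rpo_hours, revenue_per_hour, rework_cost_per_hour):
    downtime_cost = rto_hours * revenue_per_hour       # business stopped
    data_loss_cost = rpo_hours * rework_cost_per_hour  # work to redo, or gone for good
    return downtime_cost + data_loss_cost

# A legacy restore-from-backup approach vs a continuous-replication target
print(outage_cost(rto_hours=24, rpo_hours=24, revenue_per_hour=5_000, rework_cost_per_hour=2_000))    # 168000
print(outage_cost(rto_hours=1, rpo_hours=0.25, revenue_per_hour=5_000, rework_cost_per_hour=2_000))   # 5500
```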

When choosing a DR solution, what are the considerations?

In the past, customers would usually have resorted to building out a secondary data centre, complete with a suitably sized stack of infrastructure to support their key production servers in the event of a disaster. They could either build with new infrastructure or eke out a few more years from older servers and networking equipment. Often, they would even buy similar storage technology at both sites so that it could support replication.

More recently, software-based replication technologies have enabled a more heterogeneous set-up, but they still require a significant investment in the secondary data centre. Add the power and cooling the secondary DC consumes, plus the ongoing maintenance of the hardware, and the overall cost and management burden of the DR strategy rises considerably.

Even recent announcements such as VMware Cloud on AWS are effectively managed co-location offerings, involving a large financial commitment to physical servers and storage that will be running 24/7.

So, should customers be looking to develop their own DR solutions, or would it be easier and more cost-effective to buy a service offering?

Enter DRaaS. Now customers need only pay for the storage associated with the virtual machines being replicated and protected, and pay for CPU and RAM only when there is a DR test or a real failover.
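
To see why that pricing model matters, here is a rough, hypothetical comparison between running a warm secondary data centre year-round and paying only for replicated storage plus occasional test failovers. Every price below is invented purely for illustration; real costs vary widely by provider and region.

```python
# Rough, hypothetical cost comparison (every price below is invented for illustration).
# DIY secondary DC: infrastructure runs 24/7 whether or not you ever fail over.
# DRaaS: replicated storage is paid all year; CPU/RAM only during tests or failovers.

diy_annual = (
    3_000 * 12        # colocation space, power and cooling per month
    + 40_000 / 3      # hardware refresh amortised over three years
    + 1_500 * 12      # maintenance contracts and admin time per month
)

draas_annual = (
    0.05 * 5_000 * 12   # 5 TB of replicated storage at $0.05 per GB-month
    + 0.20 * 200 * 48   # 200 vCPUs for two 24-hour DR tests at $0.20 per vCPU-hour
)

print(f"DIY secondary DC:  ${diy_annual:,.0f} per year")    # ~$67,333
print(f"DRaaS consumption: ${draas_annual:,.0f} per year")  # ~$4,920
```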

Choosing the right DR provider for you

When determining the right DR provider for you, I would always recommend working through a disaster recovery requirements checklist, regardless of whether you are choosing an in-house or a DRaaS solution. This checklist should include the following points:

Performance

  • Does the DR solution offer continuous replication?
  • Which RTO and RPO does the solution offer?
  • DRaaS – Does the Cloud Service Provider offer a reliable and fast networking solution, and does the DRaaS solution offer networking efficiencies like compression?

Support of your systems

  • Is the DR solution storage agnostic?
  • How scalable is the solution (up and also down in a DRaaS environment)?
  • DRaaS – Does it offer securely isolated data streams for business critical applications and compliance?

Functionality

  • Is it a complete off-site protection solution, offering both DR and archival (backup) storage?
  • Is it suited for both hardware and logical failures?
  • Does it offer sufficient failover and failback functionality?

Compliance

  • Can it be tested easily and are testing reports available?
  • DRaaS – Are there any licence issues or other investments upfront?
  • DRaaS – Where is the data being kept? Does the service provider comply with EU regulations?

Let’s take VMware customers as an example. What are the benefits for VMware on-premises customers of working with a VMware-based DRaaS service provider?

Clearly, one of the main benefits is that the VMs will not need to be converted to a different hypervisor platform such as Hyper-V, KVM or Xen. Conversion can cause problems: VMware Tools must be removed (deleting any drivers they provide) and the equivalent tools installed for the new hypervisor, while virtual network interface controllers (NICs) are deleted and new ones must be configured. This results in significantly longer on-boarding times as well as ongoing DR management challenges, both of which increase the overall TCO of the DRaaS solution.

In the case of the hyperscale cloud providers, there is also the need to align the VM configuration to the nearest instance size of CPU, RAM and storage that those providers support. If a VM has several virtual disks, this may mean buying more CPU and RAM simply to be allowed more disks (the number of attachable disks is usually a function of the number of CPU cores). Again, this can significantly drive up the cost of your DRaaS solution.
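
A hedged sketch of that sizing problem: given a VM's vCPU, RAM and disk-count requirements, pick the cheapest catalogue entry that satisfies all three constraints. The instance catalogue below is entirely hypothetical; real hyperscaler instance families and their disk limits differ.

```python
# Illustrative sketch: mapping a VMware VM onto the nearest-fitting cloud instance size.
# The catalogue below is hypothetical; real providers publish their own limits,
# including how many data disks each instance size may attach.
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    vcpus: int
    ram_gb: int
    max_disks: int
    hourly_usd: float

CATALOGUE = [
    InstanceType("small",   2,  8,  4, 0.10),
    InstanceType("medium",  4, 16,  8, 0.20),
    InstanceType("large",   8, 32, 16, 0.40),
    InstanceType("xlarge", 16, 64, 32, 0.80),
]

def nearest_instance(vcpus, ram_gb, disks):
    """Return the cheapest catalogue entry that satisfies every requirement."""
    candidates = [i for i in CATALOGUE
                  if i.vcpus >= vcpus and i.ram_gb >= ram_gb and i.max_disks >= disks]
    if not candidates:
        raise ValueError("No instance type fits this VM")
    return min(candidates, key=lambda i: i.hourly_usd)

# A 2-vCPU / 8 GB VM with 6 virtual disks is forced up to "medium" purely because
# of its disk count - exactly the cost inflation described above.
print(nearest_instance(vcpus=2, ram_gb=8, disks=6).name)   # medium
```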

In some hyperscale cloud providers, the performance of the virtual disks is limited to a certain number of IOPS. For typical VMware VM implementations, with a C: drive and a data disk or two, this can result in very slow performance.

Over the past few years, iland has developed a highly functional web-based console that gives DRaaS customers the same VMware functionality they are used to on-premises. This allows them to launch remote consoles, reconfigure VMs, see detailed performance data, take snapshots while running in DR and, importantly, perform test failovers, among other functions.

For VMware customers, leveraging a VMware-based cloud provider for Disaster Recovery as a Service delivers rapid on-boarding, cost-effectiveness, ease of ongoing management and a more flexible and reliable solution to protect your business.

HubSpot adopts Google Cloud to expand international cloud infrastructure

Inbound marketing software provider HubSpot is adopting Google Cloud to expand its international cloud infrastructure.

HubSpot will use the Google Cloud Platform Frankfurt region to provide outage protection and data protection, as well as to support local hosting of customer data.

The company has long been a customer of Google, particularly of G Suite, with its integrations for Google Calendar, Gmail, AdWords, Docs, and Drive among the most popular with HubSpot users.

Kerry Munz, director of engineering and platform infrastructure at HubSpot, said the ‘speed and coverage of Google’s global network and strong capabilities in emerging technologies’ were also a factor in HubSpot’s decision.

Google has made various moves this year, not only in emerging technologies but also in emerging geographical areas. In January, the company expanded its infrastructure plans, announcing five new regions opening this year – in the Netherlands, Montreal, Los Angeles, Finland, and Hong Kong – alongside three subsea cables commissioned across three continents.

“Our choice to use Google Cloud for our international cloud infrastructure is an investment in best in class technologies to support the rich systems that make up HubSpot’s software,” said Brad Coffey, chief strategy officer at HubSpot in a statement. “Given our existing successful integrations with AdWords and G Suite, this move is another meaningful step forward in our strategic partnership with Google Cloud.

“We look forward to working more closely with the Google team to better serve our customers,” added Coffey.

In July, Google said it had tripled the number of its big cloud deals – worth $500,000 or more – year over year, with CEO Sundar Pichai telling analysts the company’s cloud arm was experiencing “impressive growth across products, sectors and geographies.”

You can read the full HubSpot blog post here.

Alternatives to Parallels Toolbox Are Difficult and Costly

New versions of Parallels® Toolbox have just been released (Parallels Toolbox for Mac 2.5 and Parallels Toolbox for Windows 1.5), and they contain both new tools and new functionality of existing tools. I’ve been a Parallels Toolbox user from its first release, and I use one or more of the tools every day. (Remember that […]


The cloud goes critical in 2018: Deep learning, smart cloud infrastructure, and more

With cut-throat competition, eyebrow-raising co-opetition, and major advances in cloud-based machine learning, 2017 was a pivotal – and productive – year for the cloud, setting the stage for what looks likely to be the most exciting year yet.

The market swing is already in full force. Thanks to a full-fledged embrace by the enterprise, the cloud is undergoing dramatic transformation as vendors rush to meet the infrastructure and business needs of today's top companies. According to Gartner, the overall market likely grew by close to 20 percent in 2017, and IaaS in particular saw close to 40 percent growth. With digital transformation at the top of every executive's mind, it's likely that this trend will only accelerate. In fact, by 2020, Gartner estimates that the overall market will reach a whopping $411 billion, and IaaS $72 billion, 87 percent and 185 percent increases respectively on 2016.
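
As a quick sanity check of those figures, the stated increases imply 2016 baselines of roughly $220 billion for the overall market and about $25 billion for IaaS; the short calculation below shows the arithmetic (an approximation derived from the quoted percentages, not a separate Gartner figure).

```python
# Back-of-the-envelope check of the Gartner figures quoted above:
# a 2020 value and a stated percentage increase imply the 2016 baseline.
def implied_2016_baseline(value_2020_bn, increase_pct):
    return value_2020_bn / (1 + increase_pct / 100)

print(round(implied_2016_baseline(411, 87), 1))   # ~219.8 (overall market, $bn)
print(round(implied_2016_baseline(72, 185), 1))   # ~25.3  (IaaS, $bn)
```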

What we considered "crazy" 10 years ago is now a reality, and the leapfrogging will continue. Based on our collective experience, interactions with customers and conversations with colleagues, here are four key trends we see unfolding in 2018:

Deep learning 

There's no doubt that 2017 was the year of machine learning. The year culminated in the blockbuster announcement of Gluon, a brand new cloud-based open source machine learning platform, and with that and other technological advances, ML is finally set to become a real part of enterprise business strategy.

While advances in ML will continue in 2018, expect them to lead to major breakthroughs in deep learning as well. According to a new survey from Vanson Bourne, 80 percent of enterprises already have some form of AI in production today, and 30 percent are planning to expand their capabilities over the next three years. Cloud providers are already anticipating this need, as some already allow enterprises to leverage GPUs (the key piece of technology for deep learning) for massively parallel computational power. Expect to see an explosion of deep learning as the cost of the service drops thanks to commoditisation and more cloud providers offer it.

Smart cloud infrastructure 

As major advances in automation and machine learning continue to gather pace, expect to see the beginnings of smarter, more automated cloud infrastructures – ones that go beyond traditional automation and can actually make seemingly human-like decisions about important issues such as authorisation, security, vMotion, dynamic resource scheduling, load balancing, and self-healing environments. This will change the way IT departments approach technology, with the same impact that virtualisation had when it was introduced to the market in 2003.

The "instant" private cloud 

Public cloud may have quite a few disadvantages for enterprises, but there is one main draw that keeps people coming back: the simplicity of setup. Many companies continue to leverage public cloud for this reason alone, albeit with an eye on providers focused on mission-critical workloads.

In the coming year, expect mission-critical cloud providers to bring the public cloud experience on-premises. Private cloud vendors will provide their own "one-click" setups for customers that will go well beyond just having a server that is instantly up and running. Instead, you'll get one that is pre-configured to the specific needs of your enterprise before you even turn it on. 

Continued co-opetition among cloud vendors 

Although 2017 had its fair share of drama, it was also very much the year of market maturation. For example, VMware teamed up with AWS, and Pivotal announced a partnership with Google Cloud – all for the good of customers.

Cloud providers have three options: they can go it alone (the expensive route), spend hundreds of millions of dollars on acquisitions, or partner with their competitors to build ecosystems that meet the unique needs of their customer base.

This year, expect to see more interesting (and unexpected) partnerships develop, as cloud providers compete to meet the needs of customers. It certainly looks as if 2018 will be the year of the specialised cloud.

Hackers ran crypto mining scripts on Tesla’s cloud, research reveals

Hackers have been running crypto mining scripts on unsecured Kubernetes instances owned by Tesla, according to new research from security monitoring provider RedLock.

According to the study, which analysed public cloud environments monitored by RedLock – more than 12 million resources processing petabytes of network traffic – an unsecured Kubernetes pod exposed access credentials for Tesla’s Amazon Web Services (AWS) environment. Within that environment was an AWS S3 bucket which held sensitive data, such as telemetry.

While the issue was quickly closed off – it was immediately reported to Tesla by RedLock and rectified before it became public – the more interesting aspect is the cryptojacking itself, whereby spare CPU resources on unwitting users’ machines are hijacked to help mine cryptocurrencies.

A blog post from the company explained how the operation was carried out. Instead of using a well-known public ‘mining pool’ – where processing power is shared over a network and the reward split according to how much work each participant contributes – the hackers installed their own mining pool software and configured the malicious script to connect to an unlisted endpoint. The real IP address was also hidden behind CloudFlare, while the hackers had ‘most likely’ purposely configured the mining software to keep CPU usage low.

All told, the measures meant IP address-based detection of the crypto mining activity was far more difficult. RedLock added that monitoring configurations, user behaviour and network traffic, and correlating the latter with configuration data, could help in tracking similar issues.
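
As a toy illustration of that correlation idea, the sketch below joins hypothetical configuration findings with flow records and flags hosts that are both risky by configuration and talking to endpoints outside an allow-list. All records, field names and endpoints are invented; this is not RedLock's tooling.

```python
# Toy illustration of correlating configuration findings with network traffic.
# All records, field names and endpoints below are invented for illustration.
risky_configs = {
    "k8s-admin-01": "dashboard exposed without authentication",
}

flow_records = [
    {"host": "k8s-admin-01", "dest": "203.0.113.50:8080", "bytes_out": 1_200_000},
    {"host": "web-frontend", "dest": "storage.example.com:443", "bytes_out": 90_000},
]

known_good_destinations = {"storage.example.com:443"}

def suspicious_hosts(configs, flows, allow_list):
    """Flag hosts that are misconfigured AND sending traffic to unknown endpoints."""
    findings = []
    for flow in flows:
        if flow["host"] in configs and flow["dest"] not in allow_list:
            findings.append((flow["host"], configs[flow["host"]], flow["dest"]))
    return findings

for host, issue, dest in suspicious_hosts(risky_configs, flow_records, known_good_destinations):
    print(f"{host}: {issue}; unexpected outbound traffic to {dest}")
```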

While there are some examples of crypto mining that are transparent – US news website Salon, which asks visitors with an ad blocker installed to opt in to mining instead, being a case in point – many are much more sinister. “The skyrocketing value of cryptocurrencies is prompting hackers to shift their focus from stealing data to stealing compute power in organisations’ public cloud environments,” the RedLock blog explained. “The nefarious network activity is going completely unnoticed.”

Cloud security best practices

On a wider theme, however, the report once again underlines the importance of the shared responsibility model in cloud computing. Almost three quarters (73%) of organisations analysed use their public cloud root user account to perform activities.

This creates a serious risk of credentials getting into the wrong hands – and indeed, AWS strongly advises against such activity. Think of the AWS account key as you would a credit card number and protect it as such, the company says in its best practice guide. As this publication has reported on several occasions, a provider such as AWS is responsible for security ‘of’ the cloud – data centre, hypervisor, routers and so on – while the organisation is responsible for security ‘in’ the cloud.
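
A hedged sketch of the standard guidance that article alludes to: create a scoped IAM user for day-to-day work instead of acting as root. It uses boto3; the user name is a placeholder and the attached policy is just one example, and a real deployment would also enable MFA and rotate keys regularly.

```python
# Sketch of the standard guidance: stop using the root account for day-to-day work.
# Requires boto3 and credentials with IAM permissions; user name and policy are examples.
import boto3

iam = boto3.client("iam")

# Create a dedicated, least-privilege user instead of sharing root credentials.
iam.create_user(UserName="telemetry-reader")  # example user name
iam.attach_user_policy(
    UserName="telemetry-reader",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",  # AWS managed read-only policy
)

# Issue an access key for this user only; treat it like a credit card number,
# never commit it to source control, and rotate it regularly.
key = iam.create_access_key(UserName="telemetry-reader")
print(key["AccessKey"]["AccessKeyId"])
```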

A couple of stories that broke in the past week shed light on this. Last week, the BBC reported on a service called Buckhacker, which allowed users to trawl S3 buckets for unsecured sensitive data, while yesterday another story revealed that security researchers had posted ‘friendly warnings’ to companies whose private content had been made public.

At the time of the Buckhacker release, Mark Hickman, chief operating officer at WinMagic, said organisations ‘must fulfil their part of the shared responsibility deal’ with regards to cloud security. “Customers should encrypt all data before it is placed in the cloud,” he said. “It is the last line of defence if a hacker gains access to their cloud services.

“Equally important is that encryption is employed where the keys are centrally managed and remain under the customer’s constant control, and the keys never stored on a public cloud service, or servers that could be exposed to a hack,” Hickman added.
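
A minimal sketch of the "encrypt before it leaves your control" approach Hickman describes, using the Python cryptography library and boto3. The bucket name, file name and key handling are illustrative only; in practice the key would live in a key-management system under the customer's control, never on the uploading host or in the cloud.

```python
# Minimal client-side encryption sketch: data is encrypted before it ever reaches S3,
# so the cloud provider only stores ciphertext. Bucket and file names are placeholders.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: fetch from a customer-controlled KMS/HSM
cipher = Fernet(key)

plaintext = open("customer-records.csv", "rb").read()
ciphertext = cipher.encrypt(plaintext)

s3 = boto3.client("s3")
s3.put_object(Bucket="example-backups", Key="customer-records.csv.enc", Body=ciphertext)

# Restoring later requires the same key - the "last line of defence" Hickman describes.
restored = cipher.decrypt(ciphertext)
assert restored == plaintext
```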

The RedLock report shows such precautions are far from common practice – and it is a concern the company shares.

“In our analysis, cloud service providers such as Amazon, Microsoft and Google are trying to do their part, and none of the major breaches in 2017 was caused by their negligence. However, security is a shared responsibility,” said Gaurav Kumar, CTO of RedLock.

“Organisations of every stripe are fundamentally obliged to monitor their infrastructures for risky configurations, anomalous user activities, suspicious network traffic, and host vulnerabilities,” Kumar added. “Without that, anything the providers do will never be enough.”

You can read the full RedLock report here (email required).

Screenshot Page: A New Tool in Parallels Toolbox for Mac 2.5

One of the new utilities in Parallels® Toolbox for Mac 2.5 is the Screenshot Page tool. This creates screenshots of web pages – even very long ones. In this blog post, I’ll show you this tool in action. Without a tool like this, creating a screenshot of a long web page can be quite tedious. You must […]


Buy Parallels Desktop and get 8 Mac apps for FREE!

Parallels Desktop® for Mac enables users to run Windows, Linux, and other popular OSes without rebooting their Mac®. Parallels has stood tall as the #1 solution for desktop virtualization for millions of users for over 11 years. Start 2018 with extreme savings with the Parallels Desktop Premium Mac App Bundle. We’ve made saving money as easy as […]


Just Released: Parallels Toolbox for Mac 2.5 and Parallels Toolbox for Windows 1.5!

When Parallels® Toolbox for Mac was first released in August 2016, it had 20 tools, and we promised a steady stream of new tools in later releases. We’ve kept that promise:

  • Version 1.0 – 20 tools
  • Version 1.3 – 25 tools
  • Version 1.5 – 25 tools
  • Version 1.7 – 29 tools
  • Version 2.0 – 32 […]


Global ICT investment will hit $4 trillion in 2018 – with cloud and hybrid IT infrastructure driving it

Information and communications technology (ICT) is an enabler of economic progress and a driving force of the Global Networked Economy. Organizations that have mastered the application of next-generation technologies are making waves of market disruption everywhere. With that in mind, expect more of the same, at an accelerated pace, in the future.

Worldwide spending on ICT will be nearly $4 trillion in 2018, according to the latest global market study by International Data Corporation (IDC). Ongoing growth will be driven by enterprise investment on cloud services, software and hybrid IT infrastructure.

Global ICT market development

The consumer market will account for more than $1.5 trillion in ICT spending in 2018 and will deliver more than one third of all worldwide spending throughout the forecast period. Consumer spending will also experience the slowest growth over the forecast period with a CAGR of 1.2 percent. Roughly 80 percent of consumer spending will go to devices and mobile telecom services.

Banking, discrete manufacturing, telecommunications, and professional services will be the four largest industries for ICT spending in 2018 at more than $900 billion combined. While all four industries will invest heavily in applications, infrastructure, outsourcing, and telecom services, spending levels will vary depending on industry needs.

Banking will invest the most in IT outsourcing and project-oriented outsourcing ($115 billion combined), while telecommunications spending will be led by infrastructure purchases ($85 billion). Professional services and banking will experience the fastest growth in ICT spending, with five-year CAGRs of 5.9 percent and 5.2 percent, respectively.

The United States will see $1.3 trillion in ICT spending in 2018 making it the largest geographic market this year and throughout the forecast, with spending expected to grow at a CAGR of 3.6 percent. China will be the second largest market for ICT spending at $499 billion this year with solid growth (5.2 percent CAGR) forecast through 2021.

Japan, the UK, and Germany will round out the top five countries for ICT spending in 2018. The countries that will experience the fastest ICT spending growth over the 2016-2021 forecast period are the Philippines (7.5 percent CAGR), India (7 percent CAGR), and Peru (6.7 percent CAGR).
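
For readers who want to see how a compound annual growth rate (CAGR) figure translates into a projection, the short illustration below applies the 5.2 percent China figure quoted above to its 2018 spending level. The result is an approximation derived from the quoted numbers, not a separate IDC forecast.

```python
# Worked example of applying a CAGR, using the China figure quoted above:
# $499bn in 2018 growing at a 5.2 percent CAGR through 2021.
def project(value, cagr_pct, years):
    return value * (1 + cagr_pct / 100) ** years

print(round(project(499, 5.2, 3), 1))   # ~581.0 ($bn in 2021, illustrative only)
```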

"The growth of technology spending in the U.S. professional services industry is propelled by the tech-savvy firms that comprise it. Meanwhile, banks and retailers share the common desire to deliver a delightful, cohesive, channel-agnostic customer experience. These initiatives are enabled by technology investments to help organizations unite their physical and online worlds," said Jessica Goepfert, program director at IDC.

In terms of company size, the small office category will account for 7 percent of all ICT spending throughout the forecast period. Most of this spending (around $100 billion per year) will go toward fixed and mobile telecom services, while devices will also be a significant spending category.

On the other end of the spectrum, very large businesses will account for more than 50 percent of all ICT spending throughout the forecast. These businesses will focus the majority of their spending on IT outsourcing, project-oriented outsourcing, applications, and infrastructure as they pursue their digital transformation strategy.

The spending patterns for small businesses will closely resemble those of the small office category with slightly more spending going toward applications and outsourcing. Medium and large businesses will experience more balanced spending across all technology categories.

Outlook for global IT investments

Spending on information technology (IT) will reach $2.16 trillion this year, led by business and consumer spending on devices, applications, IT outsourcing, and project-oriented outsourcing — including application development and system and network implementation.

In addition, more than $300 billion will be spent on business process outsourcing and business consulting services this year. Telecommunications spending is forecast to be $1.5 trillion this year, with 95 percent of the total going to fixed and mobile telecom services.

Mobile phones will be the largest segment of technology spending at nearly $500 billion in 2018, followed by mobile data and mobile voice at more than $400 billion each.