Apple’s Event 2018: iPhones, iPhones, & one more iPhone.

Grab your black turtlenecks, hotspots, and cold brew because I’m covering the latest Apple event. Our setting is the gorgeous Steve Jobs Theater, named after Apple’s co-founder and former CEO and located on the Apple Park campus in Cupertino, California. It is an underground, 1,000-seat auditorium intended for Apple product launches and […]


Meet Parallels at the Insight Technology Conference

Guest blog post by Ian Appleby, Northern Europe Territory Manager, Cross Platform Solutions at Parallels Over the many years I’ve worked in and around IT in the UK and Europe, I’ve attended a huge number of events of varying quality. Quite rightly, many of these events are now consigned to history. You know the ones I […]


Data centre infrastructure figures continue to rise – driven by public cloud and enterprise servers

As cloud usage continues to skyrocket, getting prime data centre real estate is a bigger priority than ever. According to the latest figures from analyst firm Synergy Research, over the past two years quarterly spend on data centre hardware and software has grown by 28%.

Total data centre infrastructure equipment revenues, taking into account cloud, non-cloud, hardware and software, hit $38 billion in the second quarter of 2018. Public cloud has gone up 54%, with private cloud going up 45% and the traditional non-cloud base declining 3%.

Original design manufacturers (ODMs) lead the way in the public cloud space, which may not come as much of a surprise. As this publication – and indeed, Synergy – has frequently reported, capital expenditure of the hyperscalers in public cloud continues to rise, building out their data centre empires and speculating to keep accumulating. Aside from the ODMs, Dell EMC leads Cisco and HPE in the public cloud market.

For private cloud, Dell EMC is again on top – the company leads in both server and storage revenues – ahead of Microsoft, HPE and Cisco, while Microsoft leads the declining non-cloud market, ahead of Dell EMC, HPE and Cisco in that order.

“We are seeing cloud service revenues continuing to grow by 50% per year, enterprise SaaS revenues growing by over 30%, search [and] social networking revenues growing by over 25%, and eCommerce revenues growing by over 40%, all of which are driving big increases in spending on public cloud infrastructure,” said John Dinsdale, a chief analyst at Synergy. “That is not a new phenomenon.

“But what has been different over the last three quarters is that enterprise spending on data centre infrastructure has really jumped, driven primarily by hybrid cloud requirements, increased server functionality and higher component costs.”

Microsoft digs down on Azure outage, explores data loss and failover question

Microsoft has put together a post-mortem on what it described as an 'unprecedented' Azure outage – exploring an interesting question of data loss and failover capability.

The outage, which affected customers of the VSTS – or Azure DevOps – service in the South Central US region, took more than 21 hours before all facilities were recovered, with an additional incident involving a database that went offline taking another two hours to resolve.

As the status page – which originally went down with the rest of the service – noted at the time, the outage was blamed on severe storms in the Texas area. Amid the resulting power swells, the data centres were able to maintain temperature through a thermal buffer – but once that buffer was depleted, temperatures exceeded safe levels and an automated shutdown took place.

At the time, users queried Microsoft's claims that South Central US was the only region affected – but as the company explained, customers globally were affected due to cross-service dependencies.

Writing in a blog post, Buck Hodges, director of engineering for Azure DevOps, apologised to customers and said the company was exploring the feasibility of asynchronous replication. With asynchronous replication, any data that has not yet been copied across the network to the second server is lost if the first server fails. As Hodges explained: "If the asynchronous copy is fast, then under normal conditions, the effect is essentially the same as synchronous replication." Synchronous replication avoids that data loss, Hodges added, but it carries its own problems across regions: every write must wait for the remote copy to complete, which drags down performance, particularly for mission-critical applications.
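
To make the trade-off concrete, here is a minimal sketch – not Azure's implementation, just a toy Python model of an asynchronous replica that is allowed to lag the primary by a couple of writes. Whatever is still in flight when the primary dies is exactly the data loss Hodges describes:

    import collections

    class Secondary:
        def __init__(self):
            self.data = []

        def apply(self, record):
            self.data.append(record)

    class Primary:
        """Acknowledges writes immediately; replicates to the secondary with a lag."""
        def __init__(self, secondary, lag=2):
            self.secondary = secondary
            self.lag = lag                        # max writes allowed in flight
            self.data = []                        # writes acknowledged to clients
            self.pending = collections.deque()    # writes not yet shipped

        def write(self, record):
            self.data.append(record)              # acknowledged before replication completes
            self.pending.append(record)
            while len(self.pending) > self.lag:
                self.secondary.apply(self.pending.popleft())

    secondary = Secondary()
    primary = Primary(secondary, lag=2)
    for i in range(5):
        primary.write(f"commit-{i}")

    # Primary fails here: anything still in `pending` never reached the secondary.
    print("secondary has:", secondary.data)            # ['commit-0', 'commit-1', 'commit-2']
    print("lost on failover:", list(primary.pending))  # ['commit-3', 'commit-4']

A synchronous variant would make write() block until the secondary acknowledged each record, eliminating the pending queue entirely but adding a cross-region round trip to every write – the performance cost Hodges describes.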

For customers, there is no single right answer. Hodges said some would happily accept a certain amount of data loss if it meant getting a large team up and running again quickly, while others would prefer to wait for a full recovery, however long it took.

"The only way to satisfy both is to provide customers the ability to choose to fail over their organisations in the event of a region being unavailable," Hodges wrote. "We've started to explore how we might be able to give customers that choice, including an indication of whether the secondary is up to date and possibly provide manual reconciliation once the primary data centre recovers.

"This is really the key to whether or not we should implement asynchronous cross-region fail over," Hodges added. "Since it's something we've only begun to look into, it's too early to know if it will be feasible."

Whatever the cause, natural or otherwise, outages create real problems and frustration for users – but it is interesting to see such an introspective exploration from Microsoft here.

Why healthcare providers need Zero Trust Security to boost their digital initiatives

  • 58% of breach attempts on healthcare systems involve inside actors, making healthcare the leading industry for insider threats today.
  • Ransomware leads all malicious code categories, responsible for 70% of breach attempt incidents.
  • Common breach strategies include stealing laptops from medical professionals’ cars to obtain privileged access credentials, then using those credentials to install malware on healthcare networks, exfiltrate valuable data or sabotage systems and applications.

These and many other fascinating insights are from Verizon’s 2018 Protected Health Information Data Breach Report (PHIDBR). A copy of the study is available for download here (PDF, 20 pp., no opt-in).  The study is based on 1,368 incidents across 27 countries. Healthcare medical records were the focus of breaches, and the data victims were patients and their medical histories, treatment plans, and identities. The data comprising the report is a subset of Verizon’s Annual Data Breach Investigations Report (DBIR) and spans 2016 and 2017.

Why healthcare needs Zero Trust Security to grow

One of the most compelling insights from the Verizon PHIDBR study is how quickly healthcare is becoming a digitally driven business with strong growth potential. What’s holding its growth back, however, is how porous healthcare digital security is. Two-thirds (66%) of misuse-based breach attempts involve internal and external actors abusing privileged access credentials to access databases and exfiltrate proprietary information, and 58% of breach attempts involve internal actors.

Solving the security challenges healthcare providers face is going to fuel faster growth. Digitally-enabled healthcare providers and fast-growing digital businesses in other industries are standardizing on Zero Trust Security (ZTS), which aims to protect every internal and external endpoint and attack surface. ZTS is based on four pillars: verifying the identity of every user, validating every device, limiting access and privilege, and learning and adapting, using machine learning to analyze user behavior and gain greater insights from analytics.

Identities need to be every healthcare provider’s new security perimeter

ZTS starts by defining a digital business’ security perimeter as the identity of every employee and patient, regardless of location. Every login attempt, resource request, device operating system, and many other variables are analyzed using machine learning algorithms in real time to produce a risk score, which is used to empower Next-Gen Access (NGA).

The higher the risk score, the more authentication is required before access is granted. Multi-Factor Authentication (MFA) is requested first, and if a login attempt still doesn’t pass, additional screening follows, up to and including shutting off the account’s access.
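
As a rough illustration of how such a policy behaves – the signals, weights, and thresholds below are invented for the sketch, not any vendor's actual model – a risk-based step-up flow might look like this:

    # Toy risk scoring: a real NGA product would use machine learning over
    # far more signals; these weights and thresholds are invented.
    def risk_score(attempt):
        score = 0
        if attempt["location"] not in attempt["usual_locations"]:
            score += 40   # unfamiliar location
        if not attempt["known_device"]:
            score += 30   # unrecognised device
        if attempt["hour"] < 6 or attempt["hour"] > 22:
            score += 20   # login outside normal working hours
        return score

    def authenticate(attempt):
        score = risk_score(attempt)
        if score < 30:
            return "allow"                    # low risk: password alone suffices
        if score < 60:
            return "require_mfa"              # medium risk: step up to MFA
        if score < 90:
            return "require_mfa_and_review"   # high risk: MFA plus manual screening
        return "block_account"                # highest risk: shut off access

    attempt = {"location": "Madrid", "usual_locations": {"Boston"},
               "known_device": False, "hour": 10}
    print(authenticate(attempt))              # require_mfa_and_review (score 70)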

Combining Identity-as-a-Service (IDaaS), Enterprise Mobility Management (EMM) and Privileged Access Management (PAM), NGA is proving to be an effective strategy for stopping stolen and resold privileged access credentials from being used to gain access to healthcare providers’ networks and systems. Centrify is one of the leaders in this field, with expertise in the healthcare industry.

NGA can also help ensure healthcare providers’ privileged access credentials don’t make the best-seller list on the Dark Web. Another recent study from Accenture, “Losing the Cyber Culture War in Healthcare: Accenture 2018 Healthcare Workforce Survey on Cybersecurity”, found that 18% of healthcare employees are willing to sell confidential data to unauthorized parties for as little as $500 to $1,000, and 24% of employees know of someone who has sold privileged credentials to outsiders. By verifying every login attempt from any location, NGA can thwart the use of the many privileged access credentials for sale on the Dark Web.

The following are the key takeaways from Verizon’s 2018 Protected Health Information Data Breach Report (PHIDBR):

58% of healthcare security breach attempts involve inside actors, which makes it the leading industry for insider threats today

External actors are attempting 42% of healthcare breaches. The majority of the time, inside actors rely on their own privileged access credentials, or steal them from fellow employees, to launch breaches. With NGA, healthcare providers can get this epidemic of internal security breaches under control by forcing verification of every access request, from anywhere, around the clock.

Most healthcare breaches are motivated by financial gain, with healthcare workers most often using patient data to commit tax return and credit fraud

Verizon found 876 breach incidents initiated by healthcare insiders in 2017, leading all categories; external actors initiated 523 breach incidents, and partners 109. Across internal, external and partner actors, 496 of the breach incidents were motivated by financial gain. Internal actors are also known to attempt breaches for fun or out of curiosity, drawn to the health histories of celebrities accessible from the systems they use daily. When internal actors are collaborating with external actors and partners for financial gain and accessing patients’ confidential health records, it’s time for healthcare providers to take a more aggressive stance on securing patient records with a Zero Trust approach.

Abusing privileged access credentials (66%) and abusing credentials and physical access points (17%) to gain unauthorized access comprise 82.9% of all misuse-based breach attempts and incidents

Verizon’s study accentuates that misuse of credentials and the breaching of physical access points with little or no security is intentional, deliberate and, the majority of the time, driven by financial gain. Internal, external and partner actors, acting alone or in collaboration with each other, know the easiest attack surface to exploit is access credentials, with database access being the goal half of the time. When there’s little to no protection on web application and payment card access points to a network, breaches happen. Shutting down privilege abuse starts with a solid ZTS strategy based on NGA, where every login attempt is verified before access is granted and anomalies trigger MFA and further user validation.

70.2% of all hacking attempts are based on stolen privileged access credentials (49.3%) combined with brute force to obtain credentials from POS terminals and controllers (20.9%)

Hackers devise ingenious ways of stealing privileged access credentials, even resorting to hacking POS terminals or controllers to get them. Healthcare insiders also steal credentials to gain access to mainframes, servers, databases and internal systems. Verizon’s findings are supported by Accenture’s research showing that 18% of healthcare employees are willing to sell privileged access credentials and confidential data to unauthorized parties for as little as $500 to $1,000.

Hospitals are most often targeted for breaches using privileged access credentials, followed by ambulatory health care services, the latter of which is seen as the most penetrable business via hacking and brute-force credential acquisition

Verizon compared breach incidents by North American Industry Classification System (NAICS) code and found privileged credential misuse flourishing in hospitals, where inside and outside actors seek to access databases and web applications. Internal, external and partner actors concentrate on hospitals because of the massive scale of sensitive data they can attain with stolen privileged access credentials, then quickly sell or otherwise profit from through fraudulent means. Verizon also says a favorite hacking strategy is to use USB drives to exfiltrate proprietary information and sell it to health professionals intent on launching competing clinics and practices.

Conclusion

With the same intensity they invest in returning patients to health, healthcare providers need to strengthen their digital security, and Zero Trust Security is the best place to start. ZTS begins with Next-Gen Access: trusting no device, login attempt, or privileged access credential on any protected attack surface. Every device’s login attempt, resource request, and access credentials are verified through NGA, thwarting the rampant misuse and hacking based on compromised privileged access credentials. The bottom line: it’s time for healthcare providers to get in better security shape by adopting a Zero Trust approach.

Why You’re Finding More and More Mac Devices in Companies

There are many reasons why Mac® devices are proliferating in organizations. Many attribute this to bring your own device (BYOD) practices, with users bringing their private MacBook® computers into the office. Additionally, some think that employees asked for Apple® devices because it was in line with their personal preference or fitted in […]


ParkMyCloud and CloudHealth team up for greater multi-cloud optimisation tools

It’s certainly a sign that the cloud industry has seriously matured when we’re not just talking about multiple clouds, but multiple cloud management providers.

ParkMyCloud and CloudHealth Technologies, two companies in the cloud optimisation and management space, have announced an extension of their partnership with multi-cloud in mind.

The integrated product aims to offer the best of both companies’ offerings. SmartParking™, the ParkMyCloud feature that recommends how to optimise the ‘on’ and ‘off’ times of resources, is now manageable through the CloudHealth platform, alongside the latter’s recommendations for optimising public and private cloud resources.

The partnership was first announced at the start of this year, with automation the name of the game in terms of ParkMyCloud’s contribution. One early customer successfully utilising both was Connotate, an AI startup that automates web data collection and monitoring, which automatically reduced costs by up to 65% and set up automated AWS, Azure, and Google Cloud Platform scheduling in 15 minutes.

Writing exclusively for this publication in July, Jay Chapel, co-founder and CEO of ParkMyCloud, cited on-demand instances and VMs, relational databases, load balancers, and containers as the four cloud resources most likely to squeeze budgets without due care and attention.

“Most non-production resources can be parked about 65% of the time – that is, parked 12 hours per day and all day on weekends,” wrote Chapel. “Many of the companies I talk to are paying their cloud providers an average list price of $220 per month for their instances. If you’re currently paying $220 per month for an instance and leaving it running all the time, that means you’re wasting $143 per instance per month.

“Maybe that doesn’t sound like much – but if that’s the case for 10 instances, you’re wasting $1,430 per month,” added Chapel. “One hundred instances? You’re up to a bill of $14,300 for time you’re not using.

“That’s just a simple micro example – at a macro level, that’s literally billions of dollars in wasted cloud spend.”
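
Chapel's arithmetic is easy to verify. Using his figures of a $220 monthly list price and a roughly 65% parkable share, a few lines of Python reproduce the per-instance and fleet-level waste:

    # Reproducing Chapel's arithmetic: his figures are the $220 average list
    # price and the ~65% of the time a non-production instance can be parked.
    list_price_per_month = 220
    parkable_share = 0.65

    waste_per_instance = list_price_per_month * parkable_share
    for fleet in (1, 10, 100):
        print(f"{fleet:>3} instances left running: ${waste_per_instance * fleet:,.0f} wasted per month")
    #   1 instances left running: $143 wasted per month
    #  10 instances left running: $1,430 wasted per month
    # 100 instances left running: $14,300 wasted per month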

The move also marks the first piece of business CloudHealth has announced since it was acquired by VMware at the end of last month.

Why Google needs to make machine learning its growth fuel

  • In 2017 Google outspent Microsoft, Apple, and Facebook on R&D, with the majority going to AI and machine learning.
  • Google needs new AI- and machine learning-driven businesses that have lower traffic acquisition costs (TAC) to offset the rising acquisition costs of its ad and search businesses.
  • One of the company’s initial forays into AI and machine learning was its $600M acquisition of AI startup DeepMind in January 2014.
  • Google has launched two funds dedicated solely to AI: Gradient Ventures and the Google Assistant Investment Program, both of which are accepting pitches from AI and machine learning startups today.
  • On its Q4’17 earnings call, the company announced that its cloud business is now bringing in $1B per quarter. The number of cloud deals worth $1M+ that Google has sold more than tripled between 2016 and 2017.
  • Google’s M&A strategy is concentrating on strengthening its cloud business to better compete against Amazon AWS and Microsoft Azure.

These and many other fascinating insights are from CB Insights’ report, Google Strategy Teardown (PDF, 49 pp., opt-in). The report explores how Alphabet, Google’s parent company, is relying on Artificial Intelligence (AI) and machine learning to capture new streams of revenue in enterprise cloud computing and services. Also, the report looks at how Alphabet can combine search, AI, and machine learning to revolutionise logistics, healthcare, and transportation. It’s a thorough teardown of Google’s potential acquisitions, strategic investments, and the partnerships needed to maintain search dominance while driving revenue from new markets.

Key takeaways from the report include the following:

Google needs new AI- and machine learning-driven businesses that have lower traffic acquisition costs (TAC) to offset the rising acquisition costs of its ad and search businesses

CB Insights found Google is experiencing rising TAC in its core ad and search businesses. With the strategic shift to mobile, Google will see TAC escalate even further. Its greatest potential for growth lies in infusing greater contextual intelligence and knowledge across the entire family of companies that comprise Alphabet.

Google has launched two funds dedicated solely to AI: Gradient Ventures and the Google Assistant Investment Program, both of which are accepting pitches from AI and machine learning startups today

Gradient Ventures is an ROI fund focused on supporting the most talented founders building AI-powered companies, and is led by former tech founders who assist in turning ideas into companies.

In 2017 Google outspent Microsoft, Apple, and Facebook on R&D, with the majority going to AI and machine learning

Amazon dominated R&D spending among the top five tech companies in 2017, investing $22.6B. Facebook led in percentage of total sales invested in R&D, at 19.1%.

Google AI led the development of Google’s highly popular open source machine learning library and framework TensorFlow and is home to the Google Brain team

Google’s approach to primary research in the fields of AI, machine learning, and deep learning is leading to a prolific amount of research being produced and published. Here’s the search engine for their publication database, which includes many fascinating studies for review. Part of Google Brain’s role is to work with other Alphabet subsidiaries to support and lead their AI and machine learning product initiatives. One example CB Insights mentions in the report is how Google Brain collaborated with autonomous driving division Waymo, where it has helped apply deep neural nets to vehicles’ pedestrian detection. The team has also been successful in increasing the number of AI and machine learning patents, as CB Insights’ analysis shows.

Mentions of AI and machine learning are soaring on Google quarterly earnings calls, signalling that senior management is prioritising these areas as growth fuel

CB Insights has an Insights Trends tool designed to analyse unstructured text and find linguistics-based associations, models and statistical insights within it. Analysing the transcripts of Google’s earnings calls shows mentions of AI and machine learning soaring over the most recent calls.

Google’s M&A strategy is concentrating on strengthening its cloud business to better compete against Amazon AWS and Microsoft Azure

Google acquired Xively in Q1 of this year, followed by Cask Data and Velostrata in Q2. Google needs to continue acquiring cloud-based companies that can accelerate customer wins in the enterprise and mid-tier, two areas where Amazon AWS and Microsoft Azure have strong momentum today.

Gary Arora Joins @CloudEXPO NY Faculty | @AroraGary @DeloitteUS #CloudNative #Serverless #DevOps #DigitalTransformation

92% of enterprises are using the public cloud today. As a result, simply being in the cloud is no longer enough to remain competitive. The benefit of reduced costs has normalized while market forces are demanding more innovation at faster release cycles. Enter Cloud Native! Cloud Native enables a microservices-driven architecture. The shift from monolithic to microservices yields a lot of benefits – but if not done right, the costs can quickly outweigh them. The effort required for monitoring, tracing, circuit breakers, routing, load balancing, and so on across thousands of microservices can become overwhelming. This talk will address strategies to run and manage microservices from 0 to 60 using Istio and other tools in a cloud native world.
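
Of the per-service concerns the session abstract lists, the circuit breaker is perhaps the simplest to illustrate. In Istio this behaviour is configured declaratively rather than hand-coded, but a minimal hand-rolled sketch in Python shows the underlying idea: fail fast once a downstream service has failed repeatedly, then probe it again after a cool-down:

    import time

    class CircuitBreaker:
        """Open after `max_failures` consecutive failures, fail fast while
        open, then allow one trial call once `reset_after` seconds pass."""
        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, fn, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None      # half-open: let one trial call through
            try:
                result = fn(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()
                raise
            self.failures = 0              # any success closes the circuit
            return result

    breaker = CircuitBreaker(max_failures=3, reset_after=30.0)
    # Each downstream call would then be wrapped, e.g.:
    # breaker.call(fetch_inventory, item_id)   # fetch_inventory is hypothetical

Multiply this by tracing, retries, routing, and load balancing for thousands of services, and the appeal of pushing such concerns into a service mesh like Istio becomes clear.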


How hybrid industrial cloud computing is gaining momentum

Why do most internet of things (IoT) analytics operations occur in the cloud? The public cloud offers a centralised location for large amounts of affordable storage and computing power. But there are many instances in which it makes more sense to perform analytics closer to the thing or activity that is generating or collecting data – such as equipment deployed at customer sites.

This is particularly true in industrial and manufacturing environments, which are familiar with the challenges of managing massive amounts of unstructured data, but may lag when it comes to the virtualisation of IT infrastructure.

Industrial cloud market development

Advances in intelligent process manufacturing, factory automation, artificial intelligence and machine learning models all benefit from edge analytics implementations, yet will likely become islands of automation without a cohesive industrial cloud computing platform.

The industrial cloud covers everything from the factory floor to the industrial campus, and it is unifying the supply chain as companies employ a combination of digital business, product, manufacturing, asset, and logistics planning to streamline operations across both internal and external processes.

Industrial cloud applications make it easier to optimise asset and process allocations by modelling the physical world, and to use data and the subsequent insights to enable new services or improve control over environmental, health, and safety issues.

The virtualisation of business-critical infrastructure is transforming the production and distribution of goods and services throughout the supply chain, as industrial organisations shift focus to hybrid cloud computing deployments that connect and integrate on-premise IT resources with public cloud resources.

According to the latest worldwide market study by ABI Research, hybrid industrial cloud adoption will more than double over the next five years at a respectable 21.1 percent CAGR.
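
That claim is easy to sanity-check: compounding 21.1% annually over five years gives roughly a 2.6x increase, which is indeed more than double.

    # Quick check of ABI Research's figure: 21.1% CAGR compounded over five years.
    cagr = 0.211
    growth = (1 + cagr) ** 5
    print(f"{growth:.2f}x over five years")   # prints 2.60x, i.e. more than double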

Initial IoT deployments in industrial markets reflect the sector's machine to machine (M2M) heritage: private cloud infrastructure as a service (IaaS). The private IaaS model served as a solid starting point for many organisations that wanted the benefit of cloud scale, but with minimal interruption to normal IT operations.

The industrial cloud platform as a service (PaaS) model extended the functional capabilities of on-premise IaaS solutions by shifting commodity tasks – such as capacity planning, software maintenance and patching – to public cloud service providers. Software as a service (SaaS) took it a step further, in the form of managed services.

"Manufacturing and industrial organisations were not born from the same digital core as the people they employ or the products they produce," said Ryan Martin, principal analyst at ABI Research. "But they also harness some of the greatest potential thanks to massive amounts of untapped plant and process log data. Harvested with the right analytical tools and guidance, these data streams can deliver value greater than the sum of their parts."

The factory floor’s historical predisposition toward on-premise solutions has been supplanted by a campus-led approach underscored by a more recent push to connect HMI, SCADA, and control networks to higher-level enterprise systems, as well as the public cloud.

However, getting to the point where all these moving pieces come together in a real-world, production environment can be messy. Many operational technology (OT) devices come up short in key areas such as interoperability and security due to the prevalence of proprietary protocols in the legacy M2M market.

Outlook for industrial cloud app development

"Most OT systems depend on infrastructure with lifetimes measured in decades, while IT systems can be upgraded frequently at little or no cost," concludes Martin.

As a result, industrial and manufacturing markets typically employ a staged technology integration strategy that favours suppliers whose hardware, software, and services can be acquired incrementally, with minimal disruption to existing operations. Hybrid IT infrastructure models fit very well in this operational environment.