Oracle opts for UK AI expansion due to ‘wealth of local talent’


Connor Jones

15 Nov, 2018

Oracle has announced an expansion to its AI programme in the UK, pledging to double the size of its development team at its site in Reading.

The company said it will take advantage of the wealth of local talent available and hire a new generation of data scientists and architects.

Oracle’s industry-leading technology has benefited many businesses, including the NHS Business Services Authority (NHSBSA), which used Oracle’s AI tools to help the NHS and other clients make better-informed treatment decisions for patients.

Using its DALL, built with Oracle’s AI tools, the NHSBSA was able to recover £581 million in savings for the NHS to reinvest in patient care.

“Our expansion in the UK reflects the region’s strong technology talent,” said Oracle CEO Safra Catz. “The global AI development hub in Reading accelerates innovation and helps customers take advantage of these critical emerging technologies by making them pervasive throughout our cloud offerings.”

It’s the latest endorsement of the UK’s surging AI industry. In April, the UK Government announced its AI Sector Deal – a £1 billion funding package backed by 50 leading businesses and organisations. Japanese venture capital firm Global Brain has also announced plans to open its European HQ in the UK and invest £35 million in British AI start-ups, while Vancouver-based Chrysalix has pledged to plough £110 million into AI and robotics enterprises throughout the country.

Professional services firms Accenture and PwC agree that AI could make the country’s GDP 10.3% higher by 2030, so it’s no surprise the country is attracting a surge of investment.

“We are already Europe’s leading tech hub, with global firms and thriving startups choosing the UK as the place to grow their business and create high-skilled jobs,” said UK Government Digital Secretary Jeremy Wright, in response to Oracle’s decision. “We are a world leader in artificial intelligence and our modern Industrial Strategy puts pioneering technologies at the heart of our plans to build a Britain fit for the future.”

However, recent reports suggest the UK faces a brain drain when it comes to technology students, particularly given the lure of highly paid jobs in the US. A report in September found that only one in seven postgraduate students or research graduates joins a UK tech startup after their studies, and around a third of leading machine learning and AI specialists have left the UK to work at Silicon Valley tech companies.

The UK government is hoping to stem this talent drain with its AI Sector Deal, which promises to invest £17 million in AI development at British universities.

«The UK is one of the world’s leading technology nations and is recognised as a place where ingenuity and entrepreneurship can flourish,» said Sam Gyimah, the Science Minister, speaking on the brain drain issue.

«We are a beacon for global talent, and as part of our modern industrial strategy, through our £1 billion AI sector deal, we are capitalising on the UK’s global advantage in artificial intelligence.»

Majority of businesses now ditching public cloud for hybrid


Clare Hopping

15 Nov, 2018

The majority of businesses think the hybrid cloud is the best option when it comes to transforming their operations, even though only a fifth are using a combined public and private cloud set-up.

According to Nutanix’s Enterprise Cloud Index, 91% of businesses think a hybrid cloud approach is the ideal model to adopt, with 88% believing that application mobility across cloud platforms has the potential to “solve a lot of problems”.

However, despite this overwhelming agreement, the report found that only 19% of those surveyed said they had already deployed such a model.

Findings also showed that organisations are becoming increasingly jaded about public cloud deployments, with 35% of respondents admitting that overreliance on public cloud led to them frequently overspending their annual budgets.

Nutanix’s survey of 2,300 IT decision makers worldwide, released on Thursday, found that interoperability between cloud types and the ability to move applications between cloud environments were more important than cost or security.

“As enterprises demand stronger application mobility and interoperability, they are increasingly choosing hybrid cloud infrastructure,” said Ben Gibson, chief marketing officer for Nutanix. “While the advent of public cloud has increased IT efficiency in certain areas, hybrid cloud capabilities are the next step in providing the freedom to dynamically provision and manage applications based on business needs.”

The report also highlighted that finding IT talent specialising in hybrid IT is a challenge, with more than half of respondents saying they are struggling to retain staff with hybrid IT skills in their organisation.

This is likely to become a growing concern for businesses in the next 12-24 months and a problem that needs to be addressed if firms are to make the most of hybrid cloud infrastructure.

Netskope secures $168m series F funding to further accelerate enterprise cloud security

Cloud access security broker (CASB) Netskope has announced the close of a series F funding round of $168.7 million (£130.5m) to ‘further cement [its] position as the leader in accelerating security transformation throughout the enterprise’, in the company’s words.

The funding round was led by existing investor Lightspeed Venture Partners – whose interests in the cloud security space have included Datrium and Zscaler – alongside investment from Accel, Base Partners, Geodesic Capital, Iconiq Capital, Sapphire Ventures and Social Capital.

CEO Sanjay Beri argued in a blog post following the announcement that, without security transformation, digital transformation will fail – and that Netskope is best placed to fulfil organisations’ needs.

“Legacy cyber security vendors have reacted to this [cloud] shift by acquiring companies and cobbling together disparate architectures and products in an attempt to present customers with a ‘unified’ solution, but in reality these products are unified in name only and they were designed for an environment where data and applications were placed in centralised data centres and IT teams were primarily the ones responsible for selecting and deploying applications,” wrote Beri.

“Unlike legacy vendors, Netskope was born in the cloud. We empower our customers to achieve the security transformation they need in order to make the shift to a digital-first model.”

Netskope’s vision is around protecting all assets with a unified SaaS, IaaS and web security platform. Having previously been purely cloud-focused, the company made a strategic decision last year to increase its visibility to the entire web. The company was named a leader in Gartner’s CASB Magic Quadrant earlier this month; Gartner praised Netskope’s comprehensive risk database and access control policies in its analysis, but noted a caution around a ‘minor’ increase in inquiries about installation challenges and service performance.

Speaking to this publication on the occasion of Netskope’s series E funding last June, Beri noted how even traditional cloud laggards, such as healthcare and finance, were key customers. In some respects, it was an ideal opportunity: these companies did not wish to fall behind in their technology roadmaps, but they were scared stiff about the data security element.

“You think [financial and insurance would] be the laggards but some of the largest financial institutions are Netskope customers, and they’re leveraging cloud not only because their end users want [it], but because it’s a corporate strategy now,” said Beri. “It’s a competitive advantage, if you can leverage these properly.”

Total funding for Netskope now stands at just over $400 million.


Nigerian firm holds hands up for misrouted Google traffic


Bobby Hellard

14 Nov, 2018

A Nigerian internet service provider (ISP) has taken responsibility for a glitch that caused some Google traffic to be misrouted through Russia and China.

A misconfigured border gateway protocol (BGP) filter, used to route traffic across the internet, inadvertently sent Google traffic through Russia and China, raising concerns of intentional hijacking.

But the Main One Cable Co, or MainOne, a small firm in Lagos, Nigeria, said it was due to a “technical glitch” during a planned upgrade.

“In the early hours of Tuesday morning, MainOne experienced a technical glitch during a planned network upgrade and access to some Google services was impacted,” the company said in a statement. “We promptly corrected the situation at our end and are doing all that is necessary to ensure it doesn’t happen again.

“The error was accidental on our part; we are not aware that any Google services were compromised as a result. MainOne is a major internet service provider in West Africa and has direct reachability with over 100 leading networks globally.”

Two of those leading global networks were TransTelekom in Russia and China Telecom, the latter being a partner of MainOne. China Telecom is said to have leaked the routing information out to the rest of the world, where TransTelekom picked it up.

Google is said to have lost control of several million IP addresses for more than an hour on Monday, causing problems for its cloud service and a number of other sites such as YouTube and Spotify. But it said it had no reason to believe it was a malicious act.

“We’re aware that a portion of internet traffic was affected by incorrect routing of IP addresses, and access to some Google services was impacted,” said a Google spokesperson. “The root cause of the issue was external to Google and there was no compromise of Google services.”

Adding to suspicions of hijacking, some Cloudflare-owned IP addresses were also sent through China Telecom. But again, the cloud company said this was due to the Nigerian ISP inadvertently leaking the routing information to China Telecom, which in turn leaked it out to the rest of the world.

“Route leaks like this are relatively common and typically just the result of a mistaken configuration of a router,” said John Graham-Cumming, Cloudflare CTO. “The global routing system, which is based on BGP, is entirely trust-based. As a result, if a major network wrongly claims that they are the rightful destination for certain traffic then it can cause a disruption.

“The impact on us was minimal. Cloudflare’s systems automatically noticed the leak and changed our routing to mitigate the effects.”

Graham-Cumming added that if something nefarious had been afoot, there would have been far more direct – and potentially less disruptive and less detectable – ways to reroute traffic.
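
To see why such a leak diverts traffic at all, consider how routers choose between overlapping announcements: the most specific prefix wins. The sketch below, using Python’s standard ipaddress module, illustrates that longest-prefix matching with hypothetical documentation prefixes and labels – it is not the actual routing data involved in this incident.

```python
# Illustration of longest-prefix matching: a mistakenly leaked, more-specific
# prefix beats the legitimate broader announcement. Prefixes are hypothetical
# documentation ranges, not the routes involved in the MainOne incident.
import ipaddress

routes = [
    (ipaddress.ip_network("198.51.100.0/22"), "legitimate origin"),
    (ipaddress.ip_network("198.51.100.0/24"), "leaked via misconfigured peer"),
]

def best_route(destination, table):
    """Return the most specific prefix covering the destination address."""
    dest = ipaddress.ip_address(destination)
    candidates = [(net, origin) for net, origin in table if dest in net]
    return max(candidates, key=lambda r: r[0].prefixlen) if candidates else None

print(best_route("198.51.100.10", routes))
# -> (IPv4Network('198.51.100.0/24'), 'leaked via misconfigured peer')
# Until the bogus /24 is withdrawn or filtered, traffic follows the leak.
```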

Google becomes the first cloud provider to use Nvidia’s Turing T4 tech


Clare Hopping

14 Nov, 2018

Google has become the first cloud provider to offer customers access to Nvidia’s T4 GPU, helping them test out a cheaper alternative to its high-performance computing (HPC)-focused V100.

Although Google is the first cloud provider to offer the GPU, it’s already seen widespread adoption across physical servers, such as those offered by Dell EMC, Hewlett Packard Enterprise, IBM, Lenovo and Supermicro. In total, it’s available in 57 separate server designs, Nvidia said, making it the most used server GPU it’s ever developed.

“We have never before seen such rapid adoption of a datacenter processor,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “Just 60 days after the T4’s launch, it’s now available in the cloud and is supported by a worldwide network of server makers. The T4 gives today’s public and private clouds the performance and efficiency needed for compute-intensive workloads at scale.”

The T4 GPU is popular because it’s so powerful: AI, machine learning and even routine data processing tasks are carried out at such a scale in the modern datacentre that a powerful GPU is required to handle them. Nvidia’s newest GPU features Turing Tensor Cores and new RT Cores that significantly reduce latency and speed up processing when used alongside accelerated, containerised software stacks.

“Real-time visualization and online inference workloads need low latency for their end users. We are delighted to partner with NVIDIA to offer T4 GPU support for Google Cloud customers,” said Damion Heredia, senior director of Product Management at Google Cloud.

“NVIDIA T4 GPUs for Google Cloud offer a highly scalable, cost-effective, low-latency platform for our ML and visualization customers. Google Cloud’s network capabilities together with the T4 offering enable customers to innovate in new ways, speeding up applications while reducing costs.”

How should CIOs manage data at the edge?

The ubiquity of popular buzzwords or phrases in the technology community brings a certain kind of pressure. If everyone seems to be talking about the importance and transformative potential of an exciting new technology, then as a CIO it’s only natural to want to dive straight in and explore its potential use cases as soon as possible.

This is particularly true of businesses and edge computing. After all, edge implementation can deliver new experiences to the customer, help develop new products and open up new lines of revenue – why wouldn’t you want to get started as quickly as possible?

However, while no-one wants to stand in the way of transformative new technologies, CIOs have to bring clarity and common sense to the conversation – particularly on how edge implementation will affect a business’s security portfolio. The CIO must take a unified approach to data management when it comes to storage, security, and accessibility, ensuring that security is part of the conversation from the start and is given the same amount of attention from the business as all the exciting edge use cases.

Security centrality

Edge creates a whole new set of security challenges for CIOs, who are used to just dealing with the data centre. Under an edge setup, data is processed closer to the source, away from the centralised data centre that is more physically secure.

Because of this, it is imperative that CIOs build comprehensive security into any edge implementation proposal from the start. If security is bolted on after the business goals and ambitions of edge have been set, there will undoubtedly be trouble ahead.

The need for processing at the edge comes from the sheer amount of data generated as our connected world expands over the coming years – according to DataAge 2025, a report sponsored by Seagate and conducted by IDC, 90% of the data created in 2025 will require security protection. More data, of course, means more vulnerability – which is why security, with intelligent data storage and data-at-rest encryption at its foundation, has to be at the heart of any business’s edge computing plans. Couple this with the increased physical concerns – more locations means that there are more sites to keep secure – and it’s clear that this is a complex challenge that must be managed methodically. 
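
As a minimal sketch of the data-at-rest encryption mentioned above, the snippet below encrypts a sensor record before it is written to local storage on an edge node. It assumes Python with the third-party cryptography package, and the key handling is deliberately simplified – a production deployment would provision keys from a key management service or hardware security module rather than generating them in place.

```python
# Minimal sketch: encrypting edge data at rest before it touches local disk.
# Assumes the third-party 'cryptography' package (pip install cryptography).
# Key handling is simplified for illustration; real deployments would source
# keys from a KMS or hardware security module.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: provisioned per site/device
cipher = Fernet(key)

reading = {"device_id": "edge-sensor-07", "temp_c": 41.3, "vibration_mm_s": 2.8}

# Encrypt the record before writing it to the edge node's storage.
with open("reading.enc", "wb") as fh:
    fh.write(cipher.encrypt(json.dumps(reading).encode("utf-8")))

# An authorised process holding the key can later recover the record.
with open("reading.enc", "rb") as fh:
    print(json.loads(cipher.decrypt(fh.read())))
```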

Plan for expansion

Implementing edge is all about driving business growth – the new customer experiences and revenue streams that come with it will mean that your business expands and becomes more complex. For CIOs, therefore, the task is to build a data management plan that will future-proof against edge expansion. This plan must allow the business to scale up quickly while maintaining security and cost-efficiency.

While driven by the particular needs of his or her organisation, a CIO’s edge computing strategy will also need to consider internal politics and agendas. A CIO might find themselves, for example, caught between the business’s OT teams – who will be looking for edge computing spend to help them bring powerful real-time analytics to the factory floor or their supply chain technology – and IT teams, whose best practices favour a more cautious approach. CIOs will need to build a strategy that is flexible enough to handle expansion and can deal with the different priorities of different parts of the business, all while keeping a laser focus on security.

Embrace the benefits

However, the architecture inherent in mobile edge applications can make maintaining a tight security set-up easier. For one thing, storing data at the edge can help enterprises better manage their data from a compliance point of view.

Because edge allows you to avoid much of the data transfer between devices and the cloud, it is possible to filter sensitive information on the device and only transfer the essential data-model building information to the central data centre.

This makes it easier for enterprises to build the security and compliance framework to meet audits and new regulations.
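
A minimal sketch of that device-side filtering, assuming Python and entirely hypothetical field names: sensitive identifiers never leave the device, and only anonymised aggregates useful for model building are forwarded to the central data centre.

```python
# Minimal sketch: filter at the edge so only model-building features, not raw
# or sensitive fields, are forwarded centrally. Field names are hypothetical.
from statistics import mean

SENSITIVE_FIELDS = {"customer_id", "gps_trace"}   # never leaves the device

def to_features(raw_records):
    """Reduce raw edge records to anonymised aggregates for central training."""
    cleaned = [{k: v for k, v in rec.items() if k not in SENSITIVE_FIELDS}
               for rec in raw_records]
    return {
        "record_count": len(cleaned),
        "avg_temp_c": mean(rec["temp_c"] for rec in cleaned),
        "fault_rate": sum(rec["fault"] for rec in cleaned) / len(cleaned),
    }

raw = [
    {"customer_id": "c-102", "gps_trace": "(redacted)", "temp_c": 40.1, "fault": 0},
    {"customer_id": "c-219", "gps_trace": "(redacted)", "temp_c": 44.7, "fault": 1},
]

payload = to_features(raw)   # only this summary is sent to the data centre
print(payload)
```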

The edge strategist

When it comes to edge implementation, CIOs need to be at the heart of the planning and rollout. Ultimately, they are responsible for the state and security of all the organisation’s data, and, as detailed in DataAge 2025, the data we all generate will increase exponentially in the coming years. As edge disperses that data across a wider area than ever before, the scope of the CIO role is only going to expand.

As we know, the role of the CIO is always evolving as technology changes and opens up new possibilities. While CIOs must tweak the way they work slightly and adapt to a world where data is more distributed, they have the opportunity to be at the heart of the growth of their business and make even more of an impact on its future direction.


Virgin Media Business unveils SD-WAN services aimed at businesses using cloud tech


Bobby Hellard

13 Nov, 2018

Virgin Media has launched a secure software-defined wide area network (SD-WAN) service designed for enterprise customers that want to rapidly scale their networks to meet operational demands.

The SD-WAN is a “cloud-ready” service, meaning it will operate with the next-generation networking infrastructures and the cloud data, apps and services many companies are now putting to use as they pursue digital transformation strategies.

It comes with optimisation tools, for example, that will enable large organisations to access cloud data, applications and services such as Office 365 from multiple sites.

Virgin said its service will provide insight, analytics and visibility of network traffic so that key applications can be prioritised and traffic optimised to improve performance. The service is end-to-end IPsec encrypted and comes with a secure stateful firewall as standard.

Peter Kelly, managing director of Virgin Media Business, said that the service will give its users greater control.

“SD-WAN as a service transforms legacy infrastructure into an agile, responsive and secure digital platform with the cloud at its heart to help enterprises transform the way they work,” he said.

“By giving customers greater flexibility and control, SD-WAN helps businesses to evolve and tailor their networks quickly and easily.”

For security, Virgin Media is partnering with award-winning Versa Networks, which is a provider of next-generation software-based networking and security services.

The company said the launch follows thorough integration and penetration testing to ensure the service is secure.

Get off of your cloud: AWS CEO claims move away from Oracle is imminent

The Oracle and Amazon mud-slinging shows no signs of slowing down. Amazon Web Services (AWS) has claimed its consumer businesses are now fully off Oracle’s data warehouse, with the vast majority of critical system databases running on its own solution by the end of the year.

The source comes right from the top; writing on Twitter, CEO Andy Jassy said Amazon’s consumer arm turned off Oracle on November 1 and moved to Redshift, Amazon’s equivalent data warehousing tool.

Oracle, and in particular co-founder and CTO Larry Ellison, has been taking sideswipes at all things AWS for almost as long as they have been in the cloud game. As this publication put it last month for the Oracle OpenWorld keynote: “Oracle’s autonomous database is certainly Ellison’s favourite topic right now – but bashing the biggest player in cloud infrastructure must rank a close second.”

Increasingly, Ellison had been telling customers, analysts, and anyone else who would listen of the irony that AWS was still a major Oracle customer. Back in December, following the company’s Q2 2018 results, questions turned to customers moving off the platform. Ellison said: “Let me tell you who’s not moving off of Oracle – a company you’ve heard of that gave us another $50 million this quarter. That company is Amazon.

“Our competitors, who have no reason to like us very much, continue to invest in and run their entire business on Oracle.”

Similarly, Oracle’s much-vaunted autonomous database technology makes Amazon look distinctly second best, according to Ellison. At OpenWorld, the Oracle CTO called Amazon’s upcoming offering ‘semi-autonomous’ and described it as akin to a ‘semi-self-driving car.’ “You get in, you drive, and you die,” he told attendees.

Amazon CTO Werner Vogels added to Jassy’s remarks. Writing again on Twitter, Vogels said the company had “moved on to newer, faster, more reliable, more agile, more versatile technology at lower cost and higher scale.”

Vogels had previously taken to Twitter to argue that a CNBC article alleging Amazon’s move off Oracle had caused a debilitating Prime Day outage was inaccurate, claiming the internal document on which the story was based concerned an unrelated issue. “Clickbait won,” he wrote.

According to the most recent figures from industry watcher Synergy Research, AWS continues to dominate the market, holding more than a third (34%) of total share. Microsoft Azure remains a clear second with 14% of the market, with Google (7%), IBM (7%) and Alibaba (4%) rounding off the top five. The analyst firm noted that while growth rates were starting to plateau, continuing to grow at 100% was never an option once these companies reached massive scale. More importantly, they are all still growing and all still increasing market share.

For now, two events later this month will make for interesting viewing from AWS’ side. Black Friday will give Amazon a chance to show off how good its data warehouse is – with hopefully no Prime Day-esque outages to fuel the fire – while AWS re:Invent kicks off in two weeks’ time. Jassy tweeted that his keynote would ‘squeeze four hours of content into less than three.’ Expect a couple of barbed comments about a certain Redwood Shores-based company to still make the cut, though.


How manufacturers need to move from products to services with the rise of IoT

  • 91% of manufacturers are investing in predictive analytics in the next 12 months, and 50% consider artificial intelligence (AI) a major planned investment for 2019 to support their subscription-based business models.
  • New subscription business models and smart, connected products are freeing manufacturers up from competing for one-time transaction revenues to recurring revenues based on subscriptions.
  • By 2020, manufacturers are predicting 67% of their product portfolios will be smart, connected products according to an excellent study by Capgemini.
  • 71% of manufacturers are using automated sensors for real-time monitoring and data capture of a product’s condition and performance, yet just 25% have the infrastructure in place to analyse it and maximise product uptime.

Manufacturers need to break their dependence on one-off product sales and shift to selling services if they’re going to grow. Smart, connected products with embedded IoT sensors are the future of subscription business models and a key foundation of the subscription economy.

Product reliability and uptime help create subscription economies

In a subscription economy world, whoever excels at product reliability and uptime grows faster than competitors and defines the market. Airlines with the highest on-time ratings have designed reliability and uptime into their company’s identity; their DNA is based on these goals. Worldwide Business Research (WBR), in collaboration with Syncron – a global provider of cloud-based after-sales service solutions focused on helping the world’s leading manufacturers maximise product uptime and deliver exceptional customer experiences – recently surveyed manufacturers to see how they are addressing the reliability and uptime challenges so critical to growing a subscription business.

The research study, Maximised Product Uptime: The Emerging Industry Standard, provides insights into how manufacturers can improve their after-sales service solutions. A copy of the study can be downloaded here (PDF, 23 pp., opt-in); see pages 20 – 23 for additional details on the report’s methodology. WBR and Syncron designed the survey to gain a deep understanding of manufacturers’ ability to deliver on their customers’ increasing demand for maximised product uptime, surveying 200 original equipment manufacturers (OEMs) – with respondents evenly split between the U.S. and European markets – as well as 100 equipment end-users.

Key insights from the study include the following:

34% of manufacturers are ready to compete in a subscription economy and have created a service strategy based on maximised product uptime

39% are planning to have one within two years, and 22% predict it will be 2020 or later before they have one in place. Capgemini found that manufacturers’ plans for smart, connected products extend beyond these projections, making it a challenge for manufacturers to realise the new subscription revenue they’re counting on in the future.

71% of manufacturers are using automated sensors including IoT for real-time monitoring and data capture of a product’s condition and performance, yet just 25% have the infrastructure in place to analyse it and maximise product uptime

51% of manufacturers have systems in place for analysing the inbound data generated by sensors, yet report they still have more work to do to make them operational. The 25% of manufacturers with systems in place and at scale will have at least an 18-month jump on competitors who are only now planning how to make use of the real-time data streams IoT sensors provide.

Predicting part failures before they occur (83%), optimising product functionality based on usage (67%), and using stronger analytics to evaluate product performance (61%) matter most to manufacturers pursuing subscription models

Autonomous product operation (56%) and implementing stronger analytics on ROI (50%) are also extremely important. These findings further underscore how manufacturers need to design in reliability and uptime if they are going to succeed with subscription-based business models.

91% of manufacturers are investing in predictive analytics in the next 12 months, and 50% consider artificial intelligence (AI) a major planned investment for 2019

Creating meaningful data models from the massive amount of manufacturing data being captured using automated sensors and IoT devices is making predictive analytics, AI and machine learning extremely important to manufacturers’ IT planning budgets for 2019 and beyond. Combining predictive analytics, AI and machine learning to gain greater insights into pre-emptive maintenance on each production asset, installed product or device is the goal. Knowing when a machine or product will most likely fail is invaluable in ensuring the highest uptime and service reliability levels possible.
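
As an illustration of the kind of predictive-maintenance analytics described here, the sketch below trains a simple failure classifier on sensor features, assuming Python with numpy and scikit-learn. The features, thresholds and data are entirely synthetic and are not drawn from the report.

```python
# Minimal predictive-maintenance sketch: flag assets likely to fail based on
# sensor features. Data is synthetic and the feature set is hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
temperature = rng.normal(60, 8, n)            # degrees C
vibration = rng.normal(3.0, 1.2, n)           # mm/s
hours_since_service = rng.uniform(0, 5000, n)

# Synthetic ground truth: failure more likely when hot, vibrating and overdue.
risk = (0.05 * (temperature - 60) + 0.4 * (vibration - 3.0)
        + 0.0004 * hours_since_service)
failed = (risk + rng.normal(0, 0.5, n) > 1.0).astype(int)

X = np.column_stack([temperature, vibration, hours_since_service])
X_train, X_test, y_train, y_test = train_test_split(X, failed, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")

# Score a newly observed asset and schedule service if failure risk is high.
prob_fail = model.predict_proba([[72.0, 5.1, 4200.0]])[0, 1]
print(f"predicted failure probability: {prob_fail:.2f}")
```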

77% of manufacturers say having an after-sales service model is critical to their customers’ success today

Customers are ready to move beyond the legacy transactional, break-fix model of the past and want a more Amazon-like experience when it comes to uptime and reliability of every device they own as consumers and use at work. Speed, scale and simplicity are the foundational elements of a subscription business model, and the majority of manufacturers surveyed say their customers are leading them into a value-added after-sales service model.

Cloudflare brings its 1.1.1.1 privacy-centric DNS service to mobile


Clare Hopping

13 Nov, 2018

Cloudflare has brought its 1.1.1.1 privacy service to mobile devices, so anyone can use its free DNS protection service from any device.

The app, which has been developed for both iOS and Android, makes it much easier for users to control their privacy on their mobile devices.

Cloudflare’s 1.1.1.1 privacy service funnels DNS lookups through the company’s own resolver, making it much harder for intermediaries and connected services to uncover which sites and services a user is accessing.
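
For developers, the same resolver can also be queried programmatically. The sketch below, assuming Python with the requests package, uses Cloudflare’s published DNS-over-HTTPS JSON endpoint at cloudflare-dns.com to resolve a name through the 1.1.1.1 service.

```python
# Minimal sketch: resolving a name via Cloudflare's DNS-over-HTTPS JSON API.
# Assumes the third-party 'requests' package (pip install requests).
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()

# Each answer record carries the name, record type code and resolved data.
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["type"], answer["data"])
```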

Another benefit of Cloudflare’s service is that it can speed up web connections if you’re in an area with slow internet. One user in Vietnam said a web page that normally takes 5-7 seconds to load takes only 3 seconds using the 1.1.1.1 app.

Although it was previously possible to do this by tweaking your browser’s settings manually, it’s not always the easiest task to do and can result in you being unable to access any website at all. Most mobile browsers don’t even allow you to change the DNS settings.

And although the generic 1.1.1.1 service has always been accessible through a browser on an iPad, iPhone or Android device, the mobile-specific app means you just need to tap the screen once to activate it across your device.

“We launched 1.1.1.1 to offer consumers everywhere a better choice for fast and private Internet browsing,” Matthew Prince, Cloudflare chief executive said. “The 1.1.1.1 app makes it even easier for users to unlock fast and encrypted DNS on their phones.”

Cloudflare’s 1.1.1.1 Faster & Safer Internet app is now available to download from Google Play and the App Store for free.