Digital Realty to add direct access to Oracle Cloud Infrastructure across the US

Digital Realty has announced it will offer dedicated and private access to Oracle Cloud in 14 major metropolitan areas, boosting its relationship and connectivity with the Redwood giant.

According to the press materials, access to Oracle Cloud Infrastructure FastConnect – a product launched by Oracle in 2016 to help customers connect their data centre to the cloud more easily – will be made available through Digital Realty’s Service Exchange in Ashburn, Atlanta, Boston, Chicago, Dallas, London, Los Angeles, Miami, New York, Phoenix, Portland, San Francisco, Seattle, and Silicon Valley.

In total, 59 Digital Realty data centres support private connections to Oracle’s infrastructure as a service, the company added.

“Customers require seamless connectivity from their data centres and networks to Oracle Cloud for their most demanding workloads and applications,” said Don Johnson, Oracle Cloud Infrastructure senior vice president for product development. “With Oracle’s FastConnect service via Digital Realty, customers can provision the dedicated and private connections they need today and easily scale with their growing business demands.”

“Our direct connections to Oracle Cloud Infrastructure build upon our commitment to ensure that our customers have interconnected access to the critical IT resources they need to drive business success,” said Chris Sharp, Digital Realty CTO. “The rapid growth of Oracle Cloud is a testament to its strength in the marketplace, and we are extremely pleased to be working closely with Oracle to accelerate its momentum.”

According to figures from Synergy Research in June, Digital Realty and Equinix remain the primary players in the colocation market, extending their lead over the competition thanks to – in the former’s case – merging with DuPont Fabros. As for Oracle, the company posted strong financial results in mid-December, and boosted its Australian operations by announcing the acquisition of Aconex in the same week.

Why cloud storage, DRaaS, multi-cloud and data security will all be key cloud drivers in 2018

It's that time of year when industry commentators are weighing in with their predictions and projections for the year ahead.

Cloud computing is a big topic in itself, and probably the most pressing subject hitting the headlines in 2018 is the increasing regulation around GDPR. However, there are a few other cloud-related topics I would like to put the spotlight on as we look at the anticipated growth areas for cloud service providers in the year ahead. In particular, I’d like to focus on cloud storage, DRaaS, multi-cloud, and data security.

Growth of cloud storage

Cisco estimates that the total cloud storage market will increase from 370EB in 2017 to 1.1ZB in 2018, which reinforces that this will be a particular growth area for cloud service providers. Increased regulation has driven requirements for several copies of backup data – on-premises, off-site or in the cloud – and in certain industries legislation requires longer-term retention of data, often up to 10 years.

According to a 2017 Gartner survey, 42% of respondents said they would be looking to implement cloud backup in the next year, while 13% said they were already doing so. Increased availability of high-speed fibre broadband, as well as FTTP and MPLS circuits, means backup to the cloud has become much more accessible for small to medium sized businesses.

Over the last year, we have seen massive growth in the take-up of cloud backup offerings. Cloud backup is probably one of the easiest cloud services to test and adopt. For example, it takes only a few clicks within the Veeam Backup & Replication console to add iland as a service provider and start sending backup or copy jobs to the cloud.

Growth of disaster recovery as a service

Statistics from Gartner indicate that the DRaaS market is set to grow from $2.01B in 2017 to $3.7B by 2021. The fact that 2017 saw a great many natural disasters around the world, from hurricanes and floods to wildfires, has accelerated this. As a result, we have seen customers rushing to buy DRaaS services, and existing customers invoking their DRaaS for real. One organisation in Florida was able to go from having no disaster recovery to having a fully replicated and tested solution within five days as Hurricane Irma swept in.

Aside from natural disasters, the rise of ransomware has been another important driver for DRaaS. The very low RPOs that DRaaS offers often make it a better solution than simply backing up and recovering data on a daily basis. As with cloud backup, the increased availability of high-speed fibre broadband has made DRaaS replication across the internet much more achievable for most customers.

Multi-cloud strategies are taking off

It’s hard to deny the massive shift that is taking place among businesses in favour of multiple cloud environments, including public and private clouds, as well as on-premises infrastructure. As businesses deploy new applications and move critical workloads to save money and boost agility, it's safe to say that the trend of mixing and matching cloud environments will only accelerate.

According to Gartner, the IaaS market grew 31.4% in 2016. While the hyper-scale providers accounted for the lion’s share of this figure, others in the market saw 13.2% growth. 451 Research predicts that IaaS will continue to grow from an estimated $16B in 2017 to $30B in 2021.

Cloud lock-in is seen as an issue with many hyper-scale cloud service providers. There is concern that many businesses lack contingency plans should they wish to switch from one provider to another; likewise, they may want to spread the risk by using more than one cloud provider. In heavily regulated industries, for example, organisations are strongly advised not to put all of their eggs in one basket.

GDPR compliance and security

For many years security was seen as a hindrance to cloud adoption. Now, in most cases, security is covered by the cloud provider and their vendor partners.

GDPR has created increased requirements for security and compliance around data ownership, access, and deletion, and, importantly, who is responsible for the data.

From the outset, the iland secure cloud has been built to provide all the aspects of security and compliance that an enterprise customer would require. This includes Trend Micro Deep Security to protect the virtual machines running in the customer's virtual data centres, as well as Tenable Nessus to monitor and protect VMs exposed to the internet.

From a compliance perspective, iland has a dedicated team of professionals to ensure that we are at the forefront of compliance initiatives such as ISO 27001, CSA STAR, SOC, HIPAA, PCI and G-Cloud.

GDPR will bring in a whole set of new requirements around data privacy, and iland is constantly improving processes and procedures, as well as offering services to enable customers to understand their commitments around data protection.

As a cloud service provider, we continue to invest in our DRaaS offering to help businesses prepare for natural disasters, ransomware attacks, and other potential threats to data. We have also seen increased usage of our cloud backup offering, based on Veeam Cloud Connect. Understanding that a multi-cloud solution is something that businesses will increasingly seek, we aim to help our customers diversify their cloud strategy in the year ahead.

451 Research posits ‘new IT world order’ with many enterprises moving off-prem by 2019

Three in five enterprises will move the majority of their IT away from enterprise data centres and onto public cloud infrastructure and software as a service (SaaS), according to a new report from 451 Research.

The study, the analyst firm’s inaugural Voice of the Enterprise Digital Pulse survey, polled more than 1,000 IT professionals worldwide, finding the largest spending increase for IT teams this year is for ‘as a service’ delivery.

Naturally, providers such as Microsoft and Amazon Web Services (AWS) are emerging as likely strategic technology suppliers for enterprises. One in three enterprises already consider Microsoft in this role, with the number expected to rise to 35% by 2019, while 17% will opt for AWS in 2019 compared with 7% today.

Business intelligence and analytics is the main IT priority for 2018, according to 45% of respondents, ahead of machine learning and artificial intelligence (29%), big data (28%), and software-defined networking (25%). The figures are notable when considering all of the emerging technologies catching the attention of CIOs and CTOs alike. Machine learning and AI polled well, but interest in blockchain – cited by 12% of respondents – and fog and edge computing (7%) was not as significant.

Ultimately, the research does suggest a trend: that of data-centric technologies. “The survey suggests that many – but certainly not all – organisations are finally reaching the point where they can focus on endeavours that help differentiate the business, instead of merely keeping the lights on,” said Melanie Posey, research vice president at 451 Research. “In 2018 we expect to see much of this effort focused around a new set of approaches to data optimisation and analysis.”

Back in July, the analyst firm said that ‘everything as a service’ was rising towards the mainstream, thanks to the increased demand for managed security, disaster recovery, and networking.

You can find out more about the report here (subscription required).

10 charts that will change your perspective on artificial intelligence’s growth

  • There has been a 14X increase in the number of active AI startups since 2000.
  • Investment into AI startups by venture capitalists has increased 6X since 2000.
  • The share of jobs requiring AI skills has grown 4.5X since 2013.

These and many other fascinating insights are from Stanford University’s inaugural AI Index (PDF, no opt-in, 101 pp.). Stanford has undertaken a One Hundred Year Study on Artificial Intelligence (AI100) looking at the effects of AI on people’s lives, basing the inaugural report and index on the initial findings. The study finds that “we’re essentially ‘flying blind’ in our conversations and decision-making related to Artificial Intelligence.” The AI Index is focused on tracking activity and progress on AI initiatives, and on facilitating informed conversations grounded in reliable, verifiable data. All data used to produce the AI Index and report is available at aiindex.org. Please see the AI Index for additional details regarding the methodology used to create each of the following graphs.

The following 10 charts from the AI Index report provide insights into AI’s rapid growth:

The number of computer science academic papers and studies has soared by more than 9X since 1996

Academic studies and research are often the precursors to new intellectual property and patents. The entire Scopus database contains over 200,000 (200,237) papers in the field of Computer Science that have been indexed with the key term “Artificial Intelligence.” The Scopus database contains almost 5 million (4,868,421) papers in the subject area “Computer Science.”

There has been a 6X increase in the annual investment levels by venture capital (VC) investors into US-based AI startups since 2000

Crunchbase, VentureSource, and Sand Hill Econometrics were used to determine the amount of funding invested each year by venture capitalists into startups where AI plays an important role in some key function of the business. The following graphic illustrates the amount of annual funding by VCs into US AI startups across all funding stages.

There has been a 14X increase in the number of active AI startups since 2000

Crunchbase, VentureSource, and Sand Hill Econometrics were also used to complete this analysis, with AI startups in Crunchbase cross-referenced against venture-backed companies in the VentureSource database. Any venture-backed companies from the Crunchbase list that were identified in the VentureSource database were included.

The share of jobs requiring AI skills has grown 4.5X since 2013

The growth of the share of US jobs requiring AI skills on the Indeed.com platform was calculated by first identifying AI-related jobs using titles and keywords in descriptions. Job growth is calculated as a multiple of the share of jobs on the Indeed platform that required AI skills in the US, starting in January 2013. The study also calculated the growth of the share of jobs requiring AI skills on the Indeed.com platform by country. Despite the rapid growth of the Canadian and UK AI job markets, Indeed.com reports they are still only 5% and 27% respectively of the absolute size of the US AI job market.

Machine learning, deep learning and natural language processing (NLP) are the three most in-demand skills on Monster.com

Just two years ago NLP had been predicted to be the most in-demand skill for application developers creating new AI apps. Alongside the skills needed to create AI apps, machine learning techniques, Python, Java, C++, experience with open source development environments, Spark, MATLAB, and Hadoop are the most in-demand skills. Based on an analysis of Monster.com entries as of today, the median US salary for Data Scientists, Senior Data Scientists, Artificial Intelligence Consultants and Machine Learning Managers is $127,000.

Error rates for image labeling have fallen from 28.5% to below 2.5% since 2010

AI’s inflection point for the Object Detection task of the Large Scale Visual Recognition Challenge (LSVRC) competition occurred in 2014. On this specific test, AI is now more accurate than humans. These findings are from the competition data on the leaderboards for each LSVRC competition hosted on the ImageNet website.

Internationally, robot imports have risen from around 100,000 in 2000 to around 250,000 in 2015

The data displayed is the number of industrial robots imported each year into North America and internationally. Industrial robots are defined by the ISO 8373:2012 standard. International Data Corporation (IDC) expects robotics spending to accelerate over the five-year forecast period, reaching $230.7B in 2021, attaining a compound annual growth rate (CAGR) of 22.8%.

Global revenues from AI for enterprise applications are projected to grow from $1.62bn in 2018 to $31.2bn in 2025, attaining a 52.59% CAGR over the forecast period

Image recognition and tagging, patient data processing, localization and mapping, predictive maintenance, use of algorithms and machine learning to predict and thwart security threats, intelligent recruitment, and HR systems are a few of the many enterprise application use cases predicted to fuel the projected rapid growth of AI in the enterprise. Source: Statista.

84% of enterprises believe investing in AI will lead to greater competitive advantages

75% believe that AI will open up new businesses while also providing competitors new ways to gain access to their markets. 63% believe the pressure to reduce costs will require the use of AI. Source: Statista.

87% of current AI adopters said they were using or considering using AI for sales forecasting and for improving email marketing

61% of all respondents said that they currently used or were planning to use AI for sales forecasting. The following graphic compares adoption rates of current AI adopters versus all respondents. Source: Statista.

Tech News Recap for the Week of 01/15/18

If you had a busy week in the office and need to catch up, here’s our tech news recap of articles you may have missed the week of 01/15/2018!

Cisco augments its IoT platform with software and analytics. Ransomware and DoS attacks will continue to grow in 2018. 10 ways AI will impact the enterprise in 2018. What WANs are and where they’re headed. All this and more top news this week you may have missed! Remember, to stay up-to-date on the latest tech news throughout the week, follow @GreenPagesIT on Twitter.

Tech News Recap

Join GreenPages’ Cloud Experts for a lively, non-biased discussion at our webinar:

AWS or Azure? How to Move from Analysis Paralysis Toward a Smart Cloud Choice

Click here to register!

IT Operations

Microsoft

  • Linux Foundation shares some love back for Microsoft Azure
  • Here’s one way Microsoft’s Amazon Alexa rival could win
  • Learn how to run Linux on Microsoft’s Azure cloud
  • Microsoft’s newest Windows Server test build adds new storage, failover clustering updates

VMware

  • VMware VVOLs adoption now poised to grow after slow start
  • VMware vSphere 6.5 enhancements and why you should upgrade

Citrix

  • Citrix to unify product suite, kill off today’s product names in May

Thanks for checking out our tech news recap!

By Jake Cryan, Digital Marketing Specialist

IBM ends revenue decline, says it has strengthened cloud and blockchain leadership

IBM has finally stopped its revenue slide after more than five and a half years – with the company saying it has strengthened its position as the leading enterprise cloud provider and the blockchain leader for business.

The Armonk giant posted revenues of $22.5 billion (£16.2bn) for the fourth quarter of 2017, up 3.5% on this time last year, when revenues were $21.7bn. Of this figure, 40% came from the technology services and cloud platforms bucket, at $9.2bn, while 24% came from cognitive solutions and 18% was derived from global business services.

In prepared remarks, Martin Schroeter, senior vice president for IBM global markets, told analysts that cloud revenue was up 27%, with revenue for the year totalling $17 billion, up from $7bn three years ago. “We play an important role in running our clients’ most critical processes,” said Schroeter, as transcribed by Seeking Alpha. “And now with the IBM Cloud, which is built for the enterprise, each of the 10 largest global banks, nine of the top 10 retailers and eight of the top 10 airlines are cloud-as-a-service clients.

“We also continue to make progress in emerging areas like blockchain,” added Schroeter. “Remember that for us, blockchain is a set of technologies that allow our clients to simplify complex, end-to-end processes in a way that couldn’t have been done before. It requires the attributes of immutability, permissioning and scalability, and we’re already performing thousands of transactions per second.”

“Our strategic imperatives revenue again grew at a double-digit rate and now represents 46% of our total revenue, and we are pleased with our overall revenue growth in the quarter,” said IBM chief executive Ginni Rometty in a statement. “During 2017, we strengthened our position as the leading enterprise cloud provider and established IBM as the blockchain leader for business. Looking ahead, we are uniquely positioned to help clients use data and AI to build smarter businesses.”

Among the quarter’s highlights for IBM were the launch of IBM Cloud Private and, on the branding side, removing reference to Bluemix and moving it forward as IBM Cloud. Cloud Private aims to ‘extend cloud-native tools across public and private clouds’ and is compatible with a range of systems manufacturers, such as Cisco, Dell EMC and Lenovo.

IBM has also announced it is extending its partnership with Salesforce, with the latter naming IBM a preferred cloud services provider and the former naming Salesforce as a preferred customer engagement platform for sales and service. The two companies already have a complementary relationship; Salesforce CEO Marc Benioff took to the stage at IBM’s InterConnect event back in March to discuss with Rometty the rise of artificial intelligence.

You can read the full IBM financial results and statement here (pdf).

IDC forecasts global spending on public cloud services to hit $160bn in 2018

Global spending on public cloud services is forecast to hit $160 billion in 2018, an increase of 23.2% over the previous year, according to figures from IDC.

Software as a service (SaaS) will remain the largest cloud category with almost two thirds of all public cloud spending this year, followed by infrastructure as a service (IaaS) and platform as a service (PaaS). Of the SaaS spend, applications purchases will dominate the segment, with CRM and enterprise resource management (ERM) apps seeing the most spending.

IaaS spending will be ‘fairly balanced’ through the coming year with servers just ahead of storage, IDC added, while PaaS spending will be led by data management software purchases, ahead of app platforms, integration and orchestration middleware, and data access, analysis and delivery applications.

Not surprisingly, the US will comprise the largest market for public cloud services, with $97 billion – more than 60% of the overall total – with the UK ($7.9bn) just ahead of Germany ($7.4bn) for Western Europe. Japan and China lead the way for Asia and complete the top five, with spending of $5.8bn and $5.4bn respectively.

For the US, the industries that will spend the most on public cloud services this year are discrete manufacturing, professional services, and banking. According to Eileen Smith, program director of customer insights and analysis, this makes for an interesting comparison. “The industries that are spending the most are the ones that have come to recognise the tremendous benefits that can be gained from public cloud services,” said Smith.

“Organisations within these industries are leveraging public cloud services to quickly develop and launch third platform solutions, such as big data and analytics and the Internet of Things, that will enhance and optimise the customer’s journey and lower operational costs.”

According to figures issued by Synergy Research earlier this month, the global cloud computing market is at $180bn in vendor revenues, with IaaS and PaaS – put into the same bucket for the purposes of the analysis – the fastest-growing sector.

Consuming public cloud services on-premise: A guide

As the public cloud enters the second decade of its existence, its role is changing. Infrastructure as a service altered the way that virtualised resources are consumed, but what has emerged is far more powerful than allocating compute, storage, and networking on demand.

The derivative services that the public cloud providers now offer include speech-to-text, sentiment analysis, and machine learning functionality that are constantly being improved. While it is often prudent to run an application on virtual machines or a container cluster on-premises for cost, security, or data gravity reasons, this new breed of public cloud services can often be used in a stateless manner that enables them to be utilised no matter where the business logic for an application resides.

How are on-prem applications utilising these services today and how can that usage evolve over time to work at scale?

Common usage today

Today, application code has to be bound to a specific instance of a public cloud service in order for the interaction between the code and the service to work correctly. Typically, that binding involves standing up an instance of the service using the public cloud console, granting access to a particular user with a particular set of security authorisations, and making access keys for that user available to the developer, who then has to embed references to both the access keys and the service instance those keys grant access to.

Here’s an example of that from the developer perspective, using a Kubernetes-based application on-prem to connect to the Google Natural Language API. First, consider the deployment.yaml file that describes how the front-end component of our application should be deployed.
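
A minimal, purely illustrative sketch of such a manifest might look like the following; the image name, secret paths and project ID are hypothetical placeholders, and the shape assumes a standard Kubernetes Deployment:

```yaml
# Hypothetical manifest: image, paths and project ID are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: gcr.io/my-gcp-project-id/frontend:latest
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /secrets/google/key.json   # location of the mounted access keys
            - name: GOOGLE_PROJECT_ID
              value: my-gcp-project-id          # points at the correct service instance
          volumeMounts:
            - name: google-cloud-key
              mountPath: /secrets/google
              readOnly: true
      volumes:
        - name: google-cloud-key
          hostPath:
            path: /var/secrets/google           # local disk that holds the access keys
```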

The key portion for this discussion is at the bottom, where a volume is mounted so that the launched containers can access the local disk that contains the access keys, and where both the access key location (GOOGLE_APPLICATION_CREDENTIALS) and the project ID pointing to the correct instance of the service (GOOGLE_PROJECT_ID) are injected into the container as environment variables.

In the front-end Python code packaged into this container, the first step is to create an instance of the natural language client that is part of the Google Python client library.
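
A minimal sketch of that step, assuming the current google-cloud-language client library (import paths and class names have varied between versions), might look like this:

```python
# Sketch only: assumes the google-cloud-language client library
import os

from google.cloud import language_v1

# The project ID is injected by the deployment as an environment variable;
# the client library locates the key file itself via GOOGLE_APPLICATION_CREDENTIALS.
project_id = os.environ["GOOGLE_PROJECT_ID"]

client = language_v1.LanguageServiceClient()
```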

Here, a specific reference is made to that project ID, and the client library is smart enough to look for the access key location in the aforementioned environment variable. At this point, the client library can be used to do things like measure the sentiment of an input string.
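
Continuing the same hypothetical sketch, a sentiment call might look like this:

```python
def measure_sentiment(text: str) -> float:
    """Return the document-level sentiment score (-1.0 to 1.0) for a string."""
    document = language_v1.Document(
        content=text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score

# e.g. measure_sentiment("The new statement layout is excellent") -> a positive score
```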

Needless to say, this process is both cumbersome and fragile. What if the volume breaks and the code cannot get to the access keys? How about typos of the project IDs? What happens if you want to change either one?

This is complicated in aggregate across an application portfolio, which would otherwise have to repeat this process individually for every public cloud service. Hard-coding project IDs is subject to human error, and rotating access keys – to ensure better security of public cloud service consumption – forces a new deployment. Usage metrics are locked inside the individual accounts from which the project IDs are generated, making it difficult for anyone to get a real sense of public cloud service usage across multiple applications.

A better future

What is a better way to tackle this problem so that developers can create applications that get deployed on-prem, but can still take advantage of public cloud services that would be difficult to replicate? Catalog and brokering tools are emerging that remove many of the steps described above by consolidating public cloud service access into a single interface that is orthogonal to the developer view of the world. Instead of a developer baking in key access and project IDs into the deployment process, the IT ops staff is able to provide a container cluster environment that injects the necessary information. This simplifies deployments for the developer and provides a single place to collect aggregate metrics.

For example, in one catalog tool an IT ops admin can first create an instance of a pub/sub service, then create a binding for that service to be used by an individual application.

The code required to complete the binding is simpler than the previous example, shown here in Node.js.
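
As a hedged sketch – assuming a Google Cloud Pub/Sub instance provisioned through the catalog, with the binding’s credentials injected as environment variables whose names are hypothetical here – the application code reduces to something like:

```javascript
// Sketch only: the broker/binding supplies the project ID and key file;
// nothing is hard-coded in the application itself.
const {PubSub} = require('@google-cloud/pubsub');

const pubsub = new PubSub({
  projectId: process.env.PUBSUB_PROJECT_ID,     // injected by the binding
  keyFilename: process.env.PUBSUB_CREDENTIALS,  // injected by the binding
});

async function publish(topicName, message) {
  // Publish a message to the bound pub/sub service
  await pubsub.topic(topicName).publishMessage({data: Buffer.from(message)});
}
```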

By removing the need to inject binding-necessary information during the deployment process and instead having it handled by the environment itself, public cloud services can be reused by providing multiple application bindings to the same service. Access keys can be rotated in-memory so that security can be improved without forcing a deployment. Usage flows through a single point, making metrics collection much easier.

In summary

Certain public cloud services, especially those involving large AI datasets like natural language processing or image analysis, are difficult if not impossible to replicate on-prem. Increasingly, though, users expect applications to contain features based on these services. The trick for any developer or enterprise is to find a way to streamline access to these services across an application portfolio in a way that makes the individual applications more secure and more resilient, while providing more useful usage metrics.

Current techniques for binding applications to these public cloud services prevent this – but a set of catalog and brokering tools is emerging that makes it far easier to deliver on the promises customers demand.

Analysing the next generation of machine learning tools for financial services

“Machine learning is so tantalizing for most everyday developers and scientists. Still, there are a lot of constraints for builders… How do we turn machine learning from a capability of the few, into something that many more people can take advantage of?” – Andy Jassy, Keynote from AWS re:Invent 2017

The fintech industry has been hyped about the potential of machine learning technology for years. Despite all the noise, it’s still very early for most companies. Expert machine learning practitioners are rare, and even if you manage to find one, it usually takes more than a year to launch a machine learning app in production.

But all that’s set to change in 2018. First and most importantly, SaaS-based machine learning platforms are maturing and ready for use by fintech companies. Equally exciting are the tools made available by Amazon Web Services (AWS) — the platform most fintech companies are already running on — to make the process of building your own machine learning algorithms much easier.

SaaS machine learning platforms for fintech

Creating a machine learning model isn’t easy. First you have to get your data in one place, then choose an algorithm, train your model, tune your model, deploy the model, and fine-tune it over time. Given the pace of change in the industry, algorithms need to be tuned constantly. But data analysis power is not enough. The tougher job is understanding how to communicate the insights of machine learning to consumers.

Given all of these challenges, finance companies usually begin by searching for a SaaS-based machine learning platform that solves an existing challenge, rather than building their own tool. For finance companies that ingest large amounts of financial data, machine learning means using data from thousands of consumers to pinpoint investment opportunities, uncover fraud, or underwrite a loan.

Here are some of the popular SaaS machine learning apps and APIs for finance:

User logins and facial recognition: User login is changing, and usernames and passwords to access your bank account might not be around forever. Technology for facial recognition and biometrics is finally reaching the mainstream: Facebook facial recognition finds photos you’re untagged in; facial recognition cameras have been installed in apartment complexes in China to provide keyless entry; and facial scanning pilot programs are currently in use in six American airports. In late 2017, Amazon released DeepLens, “the world’s first deep learning enabled video camera”, which will likely spur further innovation in facial recognition.

  • Kairos – a “human analytics” platform for face detection, identification, emotion detection, and more. It’s already used by companies such as Carnival, PepsiCo, and IPG.
  • Luxand FaceSDK – a system that detects faces and facial features, used for building authentication, video search, and even augmented reality. Used by large enterprises such as Universal, Samsung, and Ford.
  • IBM Watson Visual Recognition API – an API that allows you to tag and classify visual content, including faces.

Portfolio management: Companies like Betterment, Mint and others have proven that millennial customers don’t need to speak with a human advisor in order to feel comfortable investing. Instead, they trust algorithms that change their investments according to market changes. These complex, machine-learning led services are taking significant market share from more traditional advisory channels.

  • ai – a platform used by private wealth managers and institutions to provide clients with a digital experience to track investments, plus automated recommendations. Also provides analytics to the wealth managers across their client base.
  • BlackRock Aladdin Platform – an end-to-end investment platform that combines risk analytics with portfolio management and trading tools.
  • Clinc – a conversational AI platform for personal banking. Clinc can provide wealth managers’ clients with personalized insight into spending patterns, notify customers of unusual transactions, and recommend new financial products.

If you’re interested in learning how to build a machine learning portfolio management platform in-house, read this fascinating article about Man Group, which built its own AI tool and even has its own Institute at Oxford to experiment with different AI-built trading systems.

Fraud detection: According to the Association of Certified Fraud Examiners, the money lost by businesses to fraud is over $3.5 trillion every year. Machine learning-based platforms help warn companies of potential fraudsters or phishing attacks in real time.

  • Kount – a platform that allows you to identify fraud in real time. Kount AI Services combines their core platform with custom machine learning rules developed by their data science team.
  • IBM Trusteer – IBM’s Pinpoint Detect is a cloud-based platform that correlates a wide range of fraud indicators to detect phishing attacks, malware, and advanced evasion methods. It also learns each customer’s behavior across multiple sessions to help identify when fraudsters assume that customer’s identity.

Still want to build your own? Machine learning on AWS

Finance companies that want to build proprietary machine learning algorithms will not be satisfied with a one-size-fits-all SaaS tool. If you want to build your own machine learning app, AWS can significantly reduce the amount of time it takes to train, tune, and deploy your model.

AWS has always been at the forefront of machine learning; think of Amazon’s recommendation engine that displays products that customers like you have purchased, or Amazon Echo, the popular voice-controlled smart home hub. They’ve released a series of machine learning tools over the past 3 years for their AWS customers, including the technology behind Echo’s Alexa.

At re:Invent 2017, Amazon released a service that packages together many of their previously-announced machine learning capabilities into an easy-to-use, fully-managed service: AWS SageMaker.

SageMaker is designed to empower any developer to use machine learning, making it easy to build and train models and deploy them to production. It automates all the time-consuming training techniques and has built-in machine learning algorithms so you can get up and running quickly. Essentially, it’s one-click machine learning for developers; you provide the data set, and it’ll give you some interesting outputs. This is a big deal for smaller companies that want to build machine learning applications but don’t have a fleet of data scientists. Granted, developers still have to understand what they’re doing and apply the model in a useful way to their customers.
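
As a rough, hypothetical illustration using the SageMaker Python SDK – the S3 bucket, IAM role and hyperparameters below are placeholders, and the SDK surface has evolved since the service launched – training and deploying one of the built-in algorithms takes only a handful of calls:

```python
# Hypothetical sketch with the SageMaker Python SDK; bucket, role and
# hyperparameters are placeholders, not a production configuration.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()

# Use one of SageMaker's built-in algorithms (XGBoost) rather than custom code
container = image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

estimator = Estimator(
    image_uri=container,
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",                          # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

# Training data previously uploaded to S3; SageMaker runs and manages the job
estimator.fit({"train": "s3://my-bucket/train/"})

# Deploy the trained model behind a managed HTTPS endpoint
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```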

Machine learning will continue to be a huge force in finance in 2018. As the market matures, expect more SaaS products and more platforms like AWS SageMaker that ease adoption of machine learning.


Intel admits data centre performance slowdown after Meltdown and Spectre updates

Good and bad news from Intel regarding the Meltdown and Spectre vulnerabilities; firmware updates are being pushed through for the vast majority of CPUs issued by the company in the past five years, but patches are impacting data centre performance.

According to an update published yesterday, performance impacts have ranged from 0% to 2% on industry-standard measures, including integer and floating point throughput and server-side Java – in other words, common workloads for enterprise and cloud customers. For a benchmark simulating different types of I/O loads, however, testing that stressed the CPU in a 100% write case saw an 18% decrease in throughput performance.

Navin Shenoy, Intel EVP and general manager of the Data Center Group, stressed the importance of these being guidelines, with customer-specific workloads likely to differ.

“As expected, our testing results to date show performance impact that ranges depending on specific workloads and configurations,” wrote Shenoy. “Generally speaking, the workloads that incorporate a larger number of user/kernel privilege changes and spend a significant amount of time in privileged mode will be more adversely impacted.”

Intel added that it was “working hard” with partners and customers in the more serious cases of performance degradation. The company recommends Retpoline, a project headed by Google, as a potential mitigation. Retpoline – a portmanteau of ‘return’ and ‘trampoline’ – counters the exploitation of speculative execution by ‘bouncing’ speculation endlessly within a safe loop, in the process allowing indirect branches to be isolated. The company also recommends options that can be found in a more detailed whitepaper.

According to a report from Bridgeway earlier this week, only 4% of enterprise mobile devices have been protected against Meltdown and Spectre vulnerabilities. The company added that at least 72% of the 100,000 mobile devices analysed were still exposed to the threats.